Tropical anurans mature early and die young: Evidence from eight Afromontane Hyperolius species and a meta-analysis
Age- and size-related life-history traits of anuran amphibians are thought to vary systematically with latitude and altitude. Because the available database is strongly biased towards temperate-zone species, we provide new estimates on eight afrotropical Reed Frog species. A meta-analysis of the demographic traits in 44 tropical anuran species aims to test for the predicted clinal variation and to contrast the results with the variation detected in temperate-zone species. The small-sized reed frogs reach sexual maturity during the first or second year of life, but longevity does not exceed three to four years. Latitudinal effects on demographic life-history traits are not detectable in tropical anurans, and altitudinal effects are limited to a slight size reduction at higher elevations. Common features of anuran life history in the tropics are early sexual maturation at small size and low longevity, resulting in low lifetime fecundity. This pattern contrasts with that found in temperate-zone anurans, which mature later at larger size and grow considerably older, yielding greater lifetime fecundity than in the tropics. Latitudinal and altitudinal contraction of the yearly activity period shapes the evolution of life-history traits in the temperate region, while trait variation in the tropics seems to be driven by distinct, not yet identified selective forces.
Introduction
Demographic life-history traits of amphibians are thought to vary systematically with latitude and altitude among species and also among conspecific populations [1]. Available evidence on interspecific variation suggests that average age and longevity in fact increase from equatorial regions towards the poles and also, or exclusively, with increasing altitude [2][3][4]. At the intraspecific level, populations of the European anurans Epidalea (Bufo) calamita and Rana temporaria showed similar trends in the latitudinal and altitudinal variation of age at maturity and longevity, whereas age-adjusted size was insensitive to altitudinal effects and weakly affected by latitude [4][5][6]. Unfortunately, the evidence currently available for the analysis of demographic trends is considerably biased towards temperate-zone species (>23.44˚N or S), rendering inferences on tropical amphibians inhabiting the equatorial belt from 23.44˚N to 23.44˚S tentative [7]. Detectable latitudinal variation of traits in tropical species is improbable because the location of the intertropical convergence zone varies in annual cycles and has moved southwards over the past 600 years [8].
The major source of information on the age of amphibians without a previous recapture history is retrospective estimation by skeletochronology [4,9,10]. Lines of arrested growth (LAG) interrupting round bone growth are the result of a genetically based, circannual rhythm synchronised with seasonal cycles, usually hibernation in temperate-zone amphibians [11]. LAGs are also formed in tropical habitats in which seasonality is mainly based on the precipitation regime [12]. Skeletochronological studies on the demography of tropical anurans currently focus on Asia [13][14][15], South America/Caribbean [7,16], and Madagascar [17,18], whereas knowledge on afrotropical species is currently limited to three species (S1 Table) [19].
We aim to narrow the apparent gap of knowledge on tropical amphibians, and specifically on African frog species, by providing demographic data on eight of the eleven currently known Hyperolius species (Anura, Hyperoliidae) inhabiting Rwanda [20]. The genus Hyperolius is among the most diverse sub-Saharan anuran genera (141 species) [21], but to the best of our knowledge this is the first report of estimates on demographic key traits such as age at maturity, longevity and age-adjusted size. Moreover, the populations sampled at latitudes of 1.6-2.6˚S are closer to the equator than those of any other anuran species studied skeletochronologically so far, and cover an altitudinal range of 1,643-2,379m asl. Specific aims of our demographic analysis of Afromontane frog species are (1) to describe the post-metamorphic life history represented by age at maturity, median age and longevity, and (2) to identify sex-specific differences in traits and growth patterns in those species represented by larger samples. We complement the new evidence on eight afromontane species with a meta-analysis of published evidence on a total of 37 anuran species inhabiting the tropical belt all over the world (S1 Table). We selected those studies reporting age and size data derived from at least five individuals per gender to test for among-species variation of age and snout-vent length at maturity, and of longevity and maximum size, and their association with latitude and altitude as proxies for environmental variation in the tropics. This first comprehensive and quantitative analysis of demographic traits emphasizes that trait evolution in tropical anurans differs in several respects from that in temperate-zone anurans.
Bone sampling and skeletochronological processing
Each individual was sexed and snout-vent length (SVL, distance between snout tip and cloaca) measured to the nearest 0.1mm using a calliper. With the exception of nine H. castaneus (kept in captivity), all specimens were sacrificed immediately after collection in accordance with the accepted standards of veterinary medicine by exposure to an overdose (buffered 1% solution for five minutes) of the anaesthetic MS-222 (deep anaesthesia within 10-25 seconds). Collection, sacrifice and export of the specimens were approved by the Rwandan Development Board (RDB, national agency of nature conservation). Specimens were stored individually in 70% ethanol at room temperature for future molecular and morphological examination. For skeletochronological age determination the 3rd or 4th digit of a forelimb was toe-clipped, and in some individuals a humerus or femur as well. Laboratory protocols followed the standard methods of skeletochronology [4,25]. The samples were embedded in Historesin™ (JUNG) and stained with 0.5% cresyl violet [26]. The diaphysis was cross-sectioned at 12μm using a JUNG RM2055 rotation microtome. Cross sections were examined under a light microscope (OLYMPUS BX 50) at 400x magnification for the presence of growth marks. We distinguished strongly stained lines of arrested growth (LAGs) in the periosteal bone, separated by faintly stained broad growth zones, and the line of metamorphosis (LM), separating larval from postmetamorphic bone [4,27]. We selected diaphysis sections in which the size of the medullar cavity was at its minimum and that of the periosteal bone at its maximum. The number of LAGs was assessed independently by the authors to estimate age.
Meta-analysis of demographic life-history traits of tropical anurans
The tropics are geographically limited to the region between the northern and southern latitudes of 23.44˚. We considered all published skeletochronological studies on anuran species inhabiting this equatorial belt and reporting the age of at least five individuals per gender.

Demography of tropical anurans

Including our own data, we obtained data on a total of 44 species, specifically on males of 43 species and on females of 29 species (S1 Table). If a study did not explicitly associate age data with gender, we assumed that the majority of data were obtained from males due to the general capture bias towards males (see our Hyperolius data as an example). Plasticity of male and female life history was analysed in four traits: (1) minimum age at maturity (n LAGs) = age of the youngest adult of the sample; (2) minimum SVL (mm) of adults = size of the smallest adult irrespective of age; (3) longevity = maximum age detected within a sample (n LAGs); (4) maximum SVL (mm) of the adults sampled, irrespective of age. As a proxy for the climate at the sampling localities we used latitude and altitude above sea level. If these environmental variables were not explicitly given in the corresponding skeletochronological study, we estimated latitude and altitude by locating the sampling sites in electronic topographical maps. If anuran sampling occurred along an altitudinal transect, we calculated the average altitude to represent the transect climate.
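As a minimal illustration (not the authors' code), the four traits can be extracted from per-individual records as follows; the record field names are assumptions:

```python
def demographic_traits(records):
    """Compute the four meta-analysis traits from the adult records of one
    species and sex. Each record holds 'lags' (age in LAGs) and 'svl' (mm)."""
    ages = [r["lags"] for r in records]
    svls = [r["svl"] for r in records]
    return {
        "min_age_at_maturity": min(ages),  # (1) youngest adult in the sample
        "min_adult_svl": min(svls),        # (2) smallest adult, any age
        "longevity": max(ages),            # (3) oldest individual sampled
        "max_adult_svl": max(svls),        # (4) largest adult, any age
    }
```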
Statistical analysis
All variables were first tested for normality. As the size and age distributions of Hyperolius spp. were significantly skewed, descriptive statistics included the median, minimum and maximum. Statistical comparisons between genders and between taxa were based on the non-parametric Mann-Whitney-Wilcoxon W-test. Growth following metamorphosis was estimated using the von Bertalanffy equation [28]:

SVL_t = SVL_max − (SVL_max − SVL_met) · e^(−k·t)

where SVL_t = average body length at age t; SVL_max = asymptotic body length; SVL_met = body length at metamorphosis; t = number of growing seasons experienced (n LAGs), and k = growth coefficient (i.e. shape of the growth curve). SVL_met was assessed for each species from tadpoles of Gosner stages 40-43 collected in the field. The von Bertalanffy growth model was fitted to the average growth curve using the least squares procedure (nonlinear regression). Estimates of SVL_max and k are given with the corresponding 95% confidence interval. Sexual size dimorphism was tested for based on SVL_max. The absence of overlap between confidence intervals was considered a significant deviation at P<0.05.
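The fitting step can be sketched as follows; the size-at-age values are invented, and a crude grid search stands in for the nonlinear least-squares solver used in the paper:

```python
import numpy as np

SVL_MET = 12.0  # hypothetical size at metamorphosis (mm)
ages = np.array([1.0, 2.0, 3.0, 4.0])      # age in LAGs
svl = np.array([24.0, 28.5, 30.2, 30.8])   # invented median SVL per age (mm)

def von_bertalanffy(t, svl_max, k):
    # SVL_t = SVL_max - (SVL_max - SVL_met) * exp(-k * t)
    return svl_max - (svl_max - SVL_MET) * np.exp(-k * t)

# least-squares grid search over (SVL_max, k) in place of a nonlinear solver
candidates = ((float(np.sum((von_bertalanffy(ages, m, kk) - svl) ** 2)), m, kk)
              for m in np.linspace(25.0, 40.0, 301)
              for kk in np.linspace(0.1, 3.0, 291))
sse, svl_max_hat, k_hat = min(candidates)
```

With these toy data the fit converges to an asymptotic size of about 31mm and a growth coefficient near 1, i.e. most growth is completed within the first two growing seasons, matching the early maturation described in the text.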
The meta-analysis of size- and age-related life-history traits was performed separately on males and females because pronounced sexual dimorphism is present in many tropical anurans [7,18]. The association among life-history traits, latitude and altitude was tested by applying a factor analysis. Variables were standardized by dividing the difference between value and arithmetic mean by the standard deviation. The extraction criterion for principal components was an eigenvalue >1. Extracted principal components were submitted to an orthogonal VARIMAX rotation to yield factor loadings of the original variables close to 1 (strong association) or 0 (no association). Identified associations between a life-history trait and latitude and/or altitude were modelled using regression analyses (selection criterion: maximum R^2). The significance level was set at alpha = 0.05. All calculations were performed using the procedures of the program package STATGRAPHICS Centurion, version XVI (STATPOINT Inc.).
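The standardization and component-extraction steps can be sketched in a few lines; the species-level values below are invented, and the VARIMAX rotation is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical species-level matrix: columns = age at maturity, min SVL,
# longevity, max SVL, latitude, altitude (all values invented)
X = rng.normal(size=(40, 6))
X[:, 3] = 0.9 * X[:, 1] + rng.normal(scale=0.3, size=40)  # correlate the sizes

# standardize each variable: (value - mean) / standard deviation
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# principal components of the correlation matrix; the Kaiser criterion keeps
# only components with eigenvalue > 1
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_keep = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep])  # factor loadings
```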
Histological features of round bone diaphysis sections in Hyperolius spp.
In all species examined, discernible growth marks were present in the stained diaphysis cross sections of humeri, femora, and first and second phalanges (e.g. H. castaneus, Fig 2). The periosteal bone produced during the larval period stained darker than that produced during the terrestrial stage and was separated from it by a faint line of metamorphosis (LM). Lines of arrested growth (LAGs) were easily distinguishable from the less-stained growth zones in the post-metamorphic periosteal bone (Fig 2). The number of LAGs was the same in the phalanx and the femur or humerus of randomly chosen preserved individuals (two per species), demonstrating that non-lethal phalange sampling allows for precise LAG estimation.
The position of the last LAG and the corresponding collection date of the individual allowed for an estimation of the season in which bone growth was arrested. The periphery of bones collected between March and May showed narrow to broad growth zones without a terminal LAG, whereas the bones of individuals collected in September or October showed a peripheral LAG with no or only a very small terminal growth zone. We conclude that growth was arrested during the dry season between June and September (Fig 1). This growth pattern was evident in all species examined. As double lines, i.e. multiple LAGs, were never observed, there was no indication of more than one period of arrested growth per year.
LAG formation was also observed in 7 out of 9 H. castaneus which were captured in October 2010 and kept in captivity in terraria until March 2012 (Fig 3A and 3B). The LAG formed in captivity was located at the periphery of the bone, with very little additional bone growth. Two individuals did not show any periosteal bone growth during captivity.

Age distribution and longevity in Hyperolius spp

The Hyperolius species studied were generally short-lived, with longevity ranging from one year (= survived one dry season) in H. lateralis to four years in H. rwandae (Table 1, Fig 4). In the samples including a significant number of females, median age did not differ significantly between the sexes (H. castaneus: Mann-Whitney-Wilcoxon W-test, W = 379.5, P = 0.187; H. glandicolor: W = 74.0, P = 0.179; Table 1). Males were considered reproductively mature if their throats were coloured yellow; mature females were identified by egg masses visible through the transparent parts of the ventral abdominal skin. Sexual maturation often occurred in the year of metamorphosis, and at the latest following the first dry season (Table 1). Small-sized species had a greater life expectancy than the large ones (Fig 5).
Growth pattern in Hyperolius spp
Using the von Bertalanffy growth model on the age-size data of amphibians requires knowledge of the snout-vent length at metamorphosis and of the duration of the growth period between metamorphosis and the first arrest of growth. Size at metamorphosis was available for seven of the eight species and ranged from 8.3mm in H. cf. cinnamomeoventris to 14.0mm in H. viridiflavus (Table 1). Metamorphs and tadpoles of all stages were found at the beginning and end of the rainy period, suggesting continuous reproductive activity and consequently continuous recruitment of metamorphs in H. castaneus, H. discodactylus, H. kivuensis, H. lateralis and H. viridiflavus. Age class 0-LAG individuals of these species indeed had SVLs ranging from the size at metamorphosis to the size of 1-LAG-old mature specimens (Fig 6). As the duration of the growth period of the larger age class 0-LAG individuals was an unknown fraction of a year, they were excluded from growth model estimation. Asymptotic maximum size SVL_max as estimated by the von Bertalanffy model was significantly female-biased in H. castaneus and in H. glandicolor (assumed size at metamorphosis 14mm, as in the closely related H. viridiflavus), whereas the growth coefficient k did not differ significantly (Table 2; comparison of CI, P<0.05). There were also significant differences among species with respect to the SVL_max of males: H. lateralis < H. castaneus = H. glandicolor < H. kivuensis = H. viridiflavus (Table 2; comparison of CI, P<0.05).
The analogous analysis of the data on females, pertaining to 24 species, yielded similar results to that of the males. Two principal components (eigenvalue > 1) explained 76.3% of total variation (Table 3B). Factor loading by the original variables of the data set was as described for males. Multiple regression analysis demonstrated that only altitude explained a significant portion of variation in the size variables. Again, there were significant correlations between SVL at maturity and maximum SVL, respectively, and altitude (R^2 = 0.255, F1,24 = 9.23, P = 0.0059; R^2 = 0.284, F1,24 = 10.5, P = 0.0036), described by multiplicative regression models (SVL_maturity = e^(4.550 − 0.156·ln(altitude)); SVL_max = e^(5.231 − 0.213·ln(altitude)); Fig 7).

(Fig 6 caption: H. castaneus (A), H. kivuensis (B), H. lateralis (C) and H. viridiflavus (D); see Table 1 and text. All 1-LAG and a few 0-LAG individuals were sexually mature (usually males, see Table 1). The bars represent 2mm size classes of the individuals.)
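To make the multiplicative models concrete: SVL = e^(a + b·ln(altitude)) is a power law, linear on a log-log scale, so its coefficients can be recovered by ordinary least squares. The sketch below uses the reported female coefficients for SVL at maturity (a = 4.550, b = −0.156); the altitudes are illustrative:

```python
import numpy as np

a, b = 4.550, -0.156   # reported coefficients for female SVL at maturity

def svl_at_maturity(altitude_m):
    # SVL_maturity = e^(a + b * ln(altitude))
    return np.exp(a + b * np.log(altitude_m))

altitudes = np.array([500.0, 1000.0, 2000.0])
svls = svl_at_maturity(altitudes)

# a log-log linear fit recovers the power-law coefficients
b_hat, a_hat = np.polyfit(np.log(altitudes), np.log(svls), 1)
```

The negative exponent encodes the reported pattern: predicted SVL at maturity shrinks as altitude increases (roughly 32mm at 1,000m under these coefficients).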
Is skeletochronology reliable for aging tropical anurans?
Afromontane anurans showed regular alternations between periods of periosteal growth and growth arrest analogous to those observed in temperate-zone amphibians. Bone growth was observed in all individuals collected during the rainy season (e.g. Fig 2). Our data suggest that reinforcement of the genetically based circannual growth rhythm is mediated by the seasonal variation of precipitation rather than that of temperature. Seven out of nine H. castaneus specimens held in captivity (equivalent to a prolonged dry period) had formed an additional LAG as predicted (77.8%), whereas the bones of the two deviant individuals did not grow during the whole period of captivity (presumably quality and quantity of food were suboptimal). Similarly, 11 out of 21 captive-held L. fallax showed the predicted number of additional LAGs (52%); the others showed one supernumerary LAG, or one or two fewer than predicted [7], supporting the circannual periodicity of LAG formation even in the zoo environment [32]. In the natural habitat, LAG formation during the dry period was also observed in the Asian frog Sylvirana nigrovittata [12], emphasising that low water availability combined with optimal temperatures may act as an external zeitgeber for the circannual clock in the same way as hot temperatures and dryness in arid regions and cold temperatures in temperate climate zones [4].

Table 3. Factorial analyses of four demographic life-history traits of tropical anurans and corresponding latitude and altitude of collection sites. Matrix of factorial loads following VARIMAX rotation of principal components in (A) males (n = 41 species) and (B) females (n = 28 species). Details on the species involved are listed in S1 Table.
There is no indication that skeletochronological age estimation is generally unreliable in tropical amphibians because LAG formation is less pronounced than in the temperate zone ([7]; but see [2] for a failure of LAG detection in perennial Litoria lesueuri). The rate of correct skeletochronological age estimation is about 86% in temperate-zone amphibians younger than eight years [4], and the available evidence for tropical anurans suggests that the rate is similar. We conclude that aging tropical frogs by counting LAGs yields a conservative estimate of longevity because, in cases of proven deviation from the actual lifespan, longevity tends to be underestimated ([2,7]; this study). Unlike adults of temperate-zone anurans, many reproductive adults of tropical species do not show visible LAGs because sexual maturity is often reached before the end of the first year of life ([13,14,33]; this study).
Does variation of demographic life-history traits differ among tropical and temperate anurans?
Age at maturity. Tropical anurans mature on average one year earlier than temperate-zone species (estimate based on compiled published data of 124 species). Temperature and precipitation regimes in the tropics usually allow for activity during most of the year, so that the majority of species mature within their first or second year of life (e.g. Table 1), while the delayed maturity of temperate-zone anurans can be attributed to the shorter annual growth period. Morrison and Hero [1] predict that age at maturity generally increases along latitudinal and altitudinal clines. Case studies on species inhabiting wide geographical ranges provide support for this prediction in temperate-zone anurans [5,6]. In contrast, we did not find any evidence that age at maturity is affected by latitudinal or altitudinal variation in tropical species. Yet, the age variation detectable by skeletochronology has a resolution of one year, whereas a resolution of months would possibly have been required to identify potential latitudinal and altitudinal effects. We cannot completely rule out clinal geographical influence on the age at maturity in tropical anurans, but we expect adaptive delay of maturation to be rare because of the generally favourable environmental conditions.
Minimum size at maturity. The tendency of females to be larger than males at attaining maturity was not statistically significant in the complete data set on tropical species, but well established in two Hyperolius species (see Table 2). Comparing the median SVL at maturity of tropical anuran species (26.9mm) with that of temperate ones (41.8mm; estimate based on compiled published data of 18 species), the size threshold of maturity seems to be considerably lower in the tropics. Since female size is positively related to clutch size in most anuran species [34,35] and the longevity of tropical species is low ([17,30], this study), lifetime fecundity appears to be much smaller than in temperate-zone species, paralleling trait evolution in birds [36]. At the same time, the diversity of reproductive modes including parental care is far greater in Amazonian amphibians [37] than in those of temperate regions [38], suggesting a potential evolutionary advantage of K-strategists with small clutches in the Neotropics. Yet, the eight Hyperolius spp. analysed in this study and all other Rwandan anuran species are short-lived and unspecialized pond or stream breeders [20,23], demonstrating that the coupling between low lifetime fecundity and parental care in the Neotropics does not prevail in the Afrotropics. It is intriguing that the size of recently matured tropical anurans decreases with increasing altitude, a factor explaining about a third of the observed variance. Since the surface-to-volume ratio is unfavourable for small individuals with respect to evaporative water loss and thermal relations [39], this tendency may indicate a trade-off between early maturation and size.
Longevity. The maximum lifespan of tropical anurans is about 2-3 years lower than that of temperate anurans (estimate based on compiled published data of 140 species). In long-lived species the discrepancy is even greater: 13 LAGs in the tropical E. hexadactylus [31] compared with 17 LAGs in the temperate E. calamita [40] and 18 LAGs in R. temporaria [41]. This pattern seems to indicate that the risk of dying during the winter inactivity period is probably lower than that of being predated during the season of activity. Again, longevity is predicted to increase along latitudinal and altitudinal clines [1]. There is ample support for this prediction in temperate-zone amphibians [3,5,6,42,43]. In contrast, the longevity of tropical species did not co-vary significantly with latitude or altitude, suggesting that local constraints such as predator impact and parasite load are more important than macroclimatic gradients.
Maximum size. Short lifespan and small maximum size are associated in most tropical species (e.g. Fig 5 for Hyperolius spp.). Maximum SVL (median: 33.2mm) is only about half of that of temperate-zone species (median: 65.0mm; estimate based on compiled published data of 80 species), but there are remarkable exceptions, especially in aquatic tropical species (e.g. SVL up to 320mm in Conraua goliath [44] and 170.3mm in Telmatobius macrostomus [45]). The maximum SVL of European and North American anuran species increases with latitude [46], whereas the intraspecific pattern is more complex and SVL decreases with altitude [5,47]. The latter tendency is present in tropical frogs as well, whereas latitudinal effects seem to be absent. Female size variation in temperate-zone species has been suggested to be the evolutionary by-product of the optimization of lifetime fecundity [5,47], while that in tropical species remains enigmatic, requiring further investigation.
Conclusions
Variation of demographic life-history traits in afromontane Hyperolius spp. is in line with that in anuran species inhabiting the tropical regions of South America, Madagascar and Asia. Common features are early sexual maturation at small size and low longevity, resulting in low lifetime fecundity. This pattern contrasts with that found in temperate-zone anurans, which mature later at larger size and grow considerably older, experiencing greater lifetime fecundity. Macroclimatic constraints mediated by latitude and altitude account for a large portion of age- and size-related life-history traits in temperate-zone species, whereas in tropical species only size-related traits co-vary slightly with altitude and not at all with latitude. We conclude that the contraction of the activity period at increasing latitudes and altitudes shapes demographic life-history traits of anurans in the temperate region. In the tropical belt, however, climate does not constrain activity in the lowland at any latitude and only to a minor extent in the highlands, indicating that timing of sexual maturation, short lifespan and size limitation respond to different evolutionary forces than those in the temperate zone.
Supporting information
S1 Table.
An Improved SIFT Algorithm for Monocular Vision Positioning
In view of the high real-time and accuracy requirements of the monocular hand-eye vision system in the positioning process, and given that existing image matching algorithms cannot meet both requirements well at the same time, this paper improves the SIFT feature matching algorithm based on local features. First, the corner points determined by the Harris operator are used instead of the key points determined by the SIFT algorithm as feature points in the template image and the image to be matched. Then, a 32-dimensional feature description vector is constructed for each of the selected feature points through a Gaussian circular window. In the registration phase, the Euclidean distance is used as the measure function to match the 32-dimensional feature descriptors. Finally, 100 template images acquired on the monocular hand-eye vision experimental platform are used to test the matching effect of the improved SIFT algorithm, which shows that the improved algorithm outperforms the original algorithm in both matching time and registration accuracy. It is applicable to image registration for monocular vision positioning in industrial practice.
Introduction
In industrial production, vision is an important source of information for industrial robots to perceive the external environment [1]. Vision-guided positioning technology has advantages such as non-contact operation, high efficiency and fast dynamic response, which greatly improve the flexibility of industrial robots [2][3]. Binocular stereo vision acquires the three-dimensional geometric information of the target point based on the principle of parallax, and its measurement accuracy is high [4], but the hardware system is relatively complicated [5]. The monocular eye-in-hand system has a simple structure, ensures that the field of view of the industrial robot is unobstructed during operation, and allows the detection area to change with the movement of the robot. It is widely used in grasping and placement, peg-hole alignment and high-precision self-assembly systems [6][7][8].
Target detection is one of the core problems of monocular vision positioning [9]. Image matching is the key technology for target detection, and the matching algorithm directly affects the final effect of visual positioning [10]. The SIFT (Scale Invariant Feature Transform) algorithm based on local features [11,12] is invariant to scale, rotation and translation, and can suppress the influence of noise and viewing-angle change to a certain extent. It is a well-known image matching algorithm. However, the computational complexity of the SIFT algorithm is high, and in most cases the real-time requirements of industrial robots cannot be met. To this end, many scholars have worked on improving the SIFT algorithm. The literature [13,14] uses principal component analysis (PCA) to represent high-dimensional data in low-dimensional subspaces and compress vector dimensions; this method effectively exploits the advantages of PCA, but increases the workload of training the projection matrix. In [15], a 60-dimensional square neighborhood descriptor is used to reduce the dimension of the descriptor, which enlarges the statistical range of the neighborhood pixels and enhances the distinctiveness of the feature descriptor. That algorithm improves real-time performance, but it places high requirements on the image acquisition environment and is not suitable for cases where working-condition disturbances in industrial production are uncertain. In [16], a circular window is used as the descriptor and each feature point is represented by a 12-dimensional feature vector, which achieves a large dimensionality reduction, but the matching accuracy is not high in complex scenes. The literature [17][18][19] notes that Harris corner points are simple to compute and insensitive to illumination, rotation and noise; when improving the SIFT algorithm, the Harris algorithm is combined with it to achieve fast and robust image registration.
However, when the unknown image rotation changes greatly, the mismatch rate increases.
Fully considering the advantages and disadvantages of the SIFT algorithm, the corner points determined by the Harris operator are used to replace the key points determined by the SIFT algorithm in the template image and the image to be matched, so that these feature points combine the Harris operator's robustness to illumination change with the rotation invariance of SIFT. A 32-dimensional feature descriptor is then constructed for these corner points through a Gaussian circular window, reducing the dimension of the feature description vector of the original SIFT algorithm. Finally, the RANSAC algorithm is used to eliminate mismatched points. The experimental results show that the improved SIFT image matching algorithm can meet the practicality and real-time requirements of monocular vision positioning.
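As a toy illustration of RANSAC-style mismatch elimination (the paper fits a full geometric model between images; here the motion is simplified to a pure translation, and all values are invented):

```python
import numpy as np

def ransac_translation(pts_a, pts_b, iters=200, tol=2.0, seed=0):
    """Keep only correspondences consistent with the dominant translation.
    pts_a, pts_b: (n, 2) arrays of matched point coordinates."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                      # hypothesis from one pair
        err = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = err < tol                          # pairs agreeing with t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Correspondences flagged as outliers (mismatches) are simply dropped before the final registration step.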
SIFT algorithm
The SIFT algorithm remains invariant to scale scaling, rotation, and even perspective changes, and is among the most stable of the feature-based image matching algorithms. The main flow of the algorithm is shown in Figure 1 [20].
Improved SIFT algorithm
In the process of feature point extraction, the SIFT algorithm must convolve the image with the Gaussian kernel function multiple times, so the computation is heavy and time-consuming. Moreover, the matching precision of the algorithm is insufficient to some extent, and mismatches may occur during feature point matching. Therefore, an improved SIFT algorithm is proposed in this paper. In the extraction of feature points, the key points determined by the SIFT algorithm are not used; instead, the corner points determined by the Harris algorithm are employed, and a 32-dimensional feature description vector is constructed for these corner points through a Gaussian circular window to reduce the dimension of the feature description vector of the original SIFT algorithm. At the same time, the RANSAC algorithm is used to remove mismatched points, which improves the matching precision of the algorithm.
Basic Steps of the Improved SIFT Algorithm

(1) The Harris operator is used to extract the corner points of the template image and the image to be matched, respectively, and a set of corner points is established.
(2) For all the corner points determined in (1), the Gaussian circular window is used to construct a 32-dimensional feature descriptor for each of them.
(3) The Euclidean distance ratio is employed to match the feature descriptors determined in the template image and the image to be matched.
(4) The RANSAC [21] algorithm is used to remove the mismatched points.
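Step (3) can be sketched as a nearest/second-nearest Euclidean distance-ratio test on the 32-dimensional descriptors; the ratio threshold here is an assumption, not taken from the paper:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.7):
    """desc_a: (m, 32) descriptors from the template image;
    desc_b: (n, 32) descriptors from the image to be matched.
    Returns a list of (i, j) index pairs passing the ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        j, j2 = np.argsort(dist)[:2]                # nearest, second nearest
        if dist[j] < ratio * dist[j2]:              # accept unambiguous matches
            matches.append((i, j))
    return matches
```

A match is kept only if its nearest neighbor is clearly closer than the second nearest, which discards ambiguous correspondences before the RANSAC stage.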
Harris Operator for Corner Extraction
The corner extraction of the Harris operator is determined by equations (1) and (2): the response R = det M - k (trace M)^2 (1), where det M = a * b and trace M = a + b (2). Here a and b are the eigenvalues of M: det M, the determinant of M, equals their product, and trace M, the trace of M, equals their sum. k is a constant (typically 0.04-0.06). When the value of R is greater than a certain threshold and is a local extremum within a certain neighborhood, the point is marked as a corner point.
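The response computation above can be sketched in a few lines of NumPy. This is a minimal sketch: central-difference gradients and a uniform 3x3 window are simplifying assumptions standing in for the derivative filters and Gaussian weighting a production detector would use.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel.

    img: 2-D float array. k is the constant from the text (0.04-0.06).
    """
    Ix = np.gradient(img, axis=1)  # horizontal image gradient
    Iy = np.gradient(img, axis=0)  # vertical image gradient

    def box(a):
        # Mean over a 3x3 window (uniform stand-in for Gaussian weighting)
        p = np.pad(a, 1, mode="edge")
        return sum(p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0

    # Second-moment matrix entries, smoothed over the window
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2   # product of eigenvalues a * b
    trace = Sxx + Syy            # sum of eigenvalues a + b
    return det - k * trace ** 2
```

At a corner both eigenvalues are large, so R is positive; along an edge one eigenvalue dominates and R goes negative; in flat regions R is near zero.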
Establishment of a 32-Dimensional Feature Descriptor
In this paper, a Gaussian circular window is used to establish a 32-dimensional feature description vector for the selected feature points, which reduces the dimension of the feature descriptors of the original SIFT algorithm. Figure 2 is a schematic diagram of the feature point neighborhood. In the algorithm, the feature point is taken as the origin and the polar angle as the angular coordinate of a two-dimensional coordinate system; the feature point neighborhood is divided into 32 sub-regions, and a feature vector component is computed for each. In order to make the feature descriptor invariant to small rotations, a principal orientation, also known as a reference orientation, must be determined for each feature point according to the local image features of its neighborhood. For a neighborhood pixel (x, y), the polar angle is calculated as theta = arctan((y - c_y) / (x - c_x)), where c_x and c_y are the coordinates of the feature point.
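A minimal sketch of how such a 32-dimensional polar descriptor might be assembled. The 4-ring by 8-sector partition and the Gaussian sigma are assumptions for illustration; the paper specifies only that the circular neighborhood is split into 32 sub-regions by the polar angle.

```python
import numpy as np

def polar_descriptor_32(mag, cx, cy, radius=8.0, n_rings=4, n_sectors=8):
    """Sketch of a 32-dimensional polar descriptor around (cx, cy).

    mag: 2-D array of gradient magnitudes. Each pixel inside the circular
    neighborhood is assigned to one of n_rings * n_sectors sub-regions by
    its radius and polar angle, weighted by a Gaussian circular window.
    """
    h, w = mag.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) % (2 * np.pi)  # polar angle of each pixel
    # Gaussian circular window centred on the feature point
    weight = np.exp(-(r ** 2) / (2 * (radius / 2) ** 2))
    desc = np.zeros(n_rings * n_sectors)
    inside = r < radius
    ring = np.minimum((r / radius * n_rings).astype(int), n_rings - 1)
    sector = (theta / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    bins = ring * n_sectors + sector
    np.add.at(desc, bins[inside], (mag * weight)[inside])
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc  # normalise for illumination robustness
```

In a full pipeline the angular origin would first be rotated to the feature point's principal orientation so the descriptor is rotation invariant.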
Experimental results and analysis
The experimental environment is an Intel(R) Core(TM) i5-2450M CPU @ 2.50 GHz with 4.00 GB memory; the simulation platform is Matlab 2010b on Windows 7. 100 template images, including rotation, scaling, and blur transformations, were collected from the monocular hand-eye vision experimental platform (Figure 3: monocular hand-eye experiment platform). The collected template images were used for simulation, and the performance of the proposed algorithm was evaluated in terms of total running time and matching accuracy. The experimental results on the 100 collected template images reveal that the matching time of the improved algorithm is about 20% to 40% of that of the original SIFT algorithm (see Figure 4), and its registration accuracy rate (see Figure 5) is significantly higher than that of the original algorithm.
Conclusion
The improved SIFT algorithm uses the corner points determined by the Harris operator as feature points, avoiding the large amount of convolution required by the original SIFT algorithm and reducing the computational complexity. At the same time, the 32-dimensional feature descriptor constructed through the Gaussian circular window reduces the dimension of the original SIFT descriptor. Experiments show that the matching time of the improved SIFT algorithm is 20% to 40% of that of the original algorithm.
"year": 2019,
"sha1": "b27fee2463cc4b544931b6f7733d5a1389f445bf",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/612/3/032124",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "282fe624fda7720510ade199d4a0a7d31728371f",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
Innovative solutions implemented in design of Iset Tower
A significant increase in the construction of high-rise buildings in Russia has been observed in the last decades. Ekaterinburg takes the second place in Russia after Moscow as regards annual construction volumes. The first high-rise buildings in the Urals region were built as early as in the 19th century, and the height of these buildings reached approximately 75 meters. Nowadays, two of the northernmost skyscrapers in the world are located in Ekaterinburg, one of which is part of the business district "Ekaterinburg-City". The height of these skyscrapers is above 150 meters. The incompleteness of the Russian regulatory basis for designing high-rise buildings makes it necessary to carry out a large amount of additional design and construction work. Therefore, despite the experience of previous projects, designers have to create individual innovative design solutions for every new high-rise building. This article describes the design features of a high-rise building, the Iset Tower, located in Ekaterinburg. Basic design conditions are described, and architectural, planning, and structural features of the building are reviewed. The effects of harmful factors acting on the building frame, including the wind loads, are analyzed. Some distinctive features of the analysis of the bearing structures are given. The conclusion summarizes the specifics of high-rise building design and construction in the Urals.
Introduction
A significant increase in the construction of high-rise buildings in Russia has been observed in the last decades. The development of high-rise construction is directly linked to the growth of major cities. While at the beginning of the last century there were only a few score high-rise buildings in Russia, today their number exceeds several hundred.
The first high-rise buildings in the Urals region were built as early as in the 19th century, and the height of these buildings reached approximately 75 meters. For example, a 77-meter-high church named "Bolshoy Zlatoust" was built in Ekaterinburg (the capital of the region) in 1876. The church was designed by the Russian architect Vasiliy E. Morgan in the Russian-Byzantine style and was the highest building in the Urals at that time [1]. In 1982, upon an initiative of Boris N. Yeltsin, an 89-meter-high building for the Sverdlovsk Regional Committee of the Communist Party of the Soviet Union was built. This building, also known as the "White House", was designed in the constructivist style. For more than 20 years (till the 2000s) this building was the highest structure in Ekaterinburg [1].
Since the year 2000, more than sixty buildings higher than 75 meters have been built in Ekaterinburg. Of this number, more than 10 buildings are higher than 100 meters and two buildings are higher than 150 meters.
Since the year 2006 a project of development of a business district "Ekaterinburg-City" located within the right-bank part of the city historical center is being actively implemented. The project includes four skyscrapers (named respectively: "Iset", "Yekaterina", "Tatischev", and "de Hennin") and some high-rise buildings.
The "Iset" tower is the highest building in the Urals region and one of the northernmost skyscrapers in the world [2]. The tower was built in 2016 as the first building of "Ekaterinburg-City" business complex. The height of the tower is 209 meters including 52 underground floors. Total area of the building is 70 600 square meters. The tower is a multipurpose building housing residential apartments at the upper floors, and shops and offices, restaurants, health care facilities, and an underground parking in the low-rise part of the building.
The incompleteness of the Russian regulatory basis for designing high-rise buildings makes it necessary to carry out a large amount of additional design and construction work. Local specialists were able to solve all the complex problems that arose in the course of designing this high-rise building. In the process of the design and construction of the tower, some innovative solutions were implemented that may be useful for designing other high-rise buildings in the future.
Innovative solutions implemented in the design of Iset Tower
First of all, the architectural appearance of the tower should be mentioned. It is commonly known that the architecture of high-rise buildings shares a number of common features; at present, these mainly consist of modern streamlined shapes which are not very expressive. On the other hand, the architectural appearance of every skyscraper bears not only an aesthetic but also an emotional function: it reflects the image of a district, a city, or a region and is in one way or another connected to the historical meaning of the construction site [3][4][5][6]. The architectural appearance of the "Iset" tower (Figure 1) is full of meaning for the citizens of Ekaterinburg. Innovative architectural solutions in the tower appearance include the serrated shape of the walling elements and an all-glass facade. Also, the horizontal cross-section of the tower resembles a gear, referring to the main industrial sector of the Urals region, the machine-building industry (Figure 2). In the course of developing the architectural appearance of the tower, great attention was paid to the energy efficiency of the building. Saving energy is one of the ways to reduce the heating and air conditioning costs of a high-rise complex. Numerous energy-saving methods are used in the design of high-rises around the world; the most effective are those that provide for the energy efficiency of the whole building as early as the concept development stage [8][9].
When designing the "Iset" tower architects aimed to create not just an energy efficient building but a "green" building. The term "green" means a building that is energy efficient and environmentally friendly at the same time, a building capable of ensuring highly-comfortable human environment. The most distinctive feature of the "green" buildings is the focus on the microclimate of human environment [10][11][12][13][14]. Design solutions adopted by the architects of the building included, first of all, positioning the tower relative the cardinal directions considering the existing buildings near the construction site.
The next important task was the selection of the external enclosure of the tower, i.e. the facade materials. It was decided to use silver-spray-coated glass to ensure high energy efficiency and insolation resistance. The internal double-glass panels are made of shatterproof glass, and their chambers are filled with argon in order to reduce energy losses in accommodation areas [7]. One of the state-of-the-art architectural solutions in the building design was to place ventilation windows on the end faces of the serrated walls in order to reduce air conditioning costs. The design of the ventilation windows allows opening them even at heights of up to 200 meters with high wind loads (Figure 3). Also, a special air conditioning system, including air cooling and heating facilities, was provided for maintaining the required air parameters in the accommodation rooms and office areas. Air is supplied into the building ventilation system through special cleaning and decontaminating filters and heating facilities [11]. Besides that, all rooms of the tower are equipped with a "smart house" system controlling room temperature, floor heating, and air conditioning, making it possible to create a comfortable microclimate in every room.
Aside from solving the microclimate issues the architects and designers of "Iset" tower worked out a design solution for the interior layout of the building. A braced frame design was chosen in order to ensure rational use of internal area of the building. The design includes a central core formed by the walls of stairs and lift shaft, a cylindrical closed diaphragm around the shaft, solid reinforced concrete columns and beamless floors. This design allowed the architects to separate corridors, utility and lift rooms from accommodation areas without reducing living space [5].
Upon the completion of the architectural design of the tower, the development of detailed documentation for the bearing structures started. The design of the bearing structures included several analyses, the most important of which were the wind load calculations. The wind loading acting on a high-rise building was determined using two methods: numerical modeling and aerodynamic tunnel tests. When designing the Iset tower, a 1/380-scale model of the building was tested in an aerodynamic tunnel (Germany). Due to the small scale of the model, it became necessary to verify the distribution of wind loading on the external surface of the tower using a mathematical model [16][17]. Numerical modeling was performed by the specialists of the Civil Engineering and Architecture Institute of the Urals Federal University named after the first president of Russia B. N. Yeltsyn using the ANSYS software package [15]. Finite element modeling included the following stages:
• Selecting the mathematical model and turbulence model;
• Creating the domain;
• Building the finite element mesh;
• Specifying the boundary conditions.
The numerical simulation of the Iset tower was created using a numerical model of incompressible air flow on the basis of the Reynolds-averaged equation (see below).
ρ ∂V/∂t + ρ(V·∇)V = −∇p + ∇·(µΣ S), where ρ is the density, V the velocity, p the pressure, µΣ = µ + µt (µ is the molecular viscosity coefficient, µt the turbulent viscosity coefficient), and S the strain-rate (velocity) tensor. As the turbulence model, the SST (shear stress transport) model was chosen. The model effectively combines the stability and accuracy of the standard k-ω model in the near-wall regions with the effectiveness of the k-ε model away from the walls, with a smooth transition between them via blending functions [18][19][20][21][22][23].
Then, the building and its surroundings were placed in a domain functionally similar to an aerodynamic tunnel (Figure 4). Besides the innovations in design, there were also some innovative solutions involved in the construction of the building. First of all, the concrete mix composition was developed especially for this project. The development was carried out by the specialists of UMMC (Ural Mining and Metallurgical Company) together with a St. Petersburg university and consultants from Germany. A batching plant and a laboratory were built at the site due to the colossal quantities of concrete required and the stringent requirements on concrete quality: it was necessary to pour approx. 10 000 m³ in three days in low-temperature conditions (down to -20°C). A concrete mix of grade B60 with low cement content was developed especially for the project and became the first concrete of this type ever used in Yekaterinburg. The setting temperature of the new mix was reduced due to the low cement content, the addition of fly ash, and a complex of additives including a plasticizer, a decelerator, and a highly-active suspension of a mineral component. The second innovation was using a self-climbing system for the construction of the building core: six separate platforms with independent hydraulic systems for each lift shaft. The use of self-lifting forms and a self-climbing crane allowed reducing the construction time as well as achieving prominent vertical accuracy of the building.
Conclusion
The construction of high-rise buildings in Ekaterinburg shows a tendency towards multifunctional complexes housing accommodation, retail, and office areas instead of separate residential buildings. One of such complexes in the Urals region is the "Iset" tower. In the process of the design and construction of the tower, some innovative solutions were implemented, and these solutions, no doubt, will be used in future projects. The design process included the analysis of the effects of harmful factors acting on the building frame, including the wind loads. The analysis consisted of a computer simulation performed using the ANSYS software package and experimental testing in a wind tunnel. The results of the analysis were used in the design of the bearing and enveloping structures as well as the natural ventilation system of the building.
"year": 2018,
"sha1": "9dd241e16f08433b9eb4f49ab0de18162c93024e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/451/1/012049",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "12eb7b6ebd50a3ed009f3ba424f3e61b5f76c347",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Physics"
]
} |
Inflammatory and anti-inflammatory states of adipose tissue in transgenic mice bearing a single TCR
A skewed TCR repertoire does not directly trigger inflammation in adipose tissue
Introduction
The population suffering from obesity keeps increasing in modern society because of rapid and extreme changes in lifestyle, particularly eating habits. Obesity causes insulin resistance and consequent type 2 diabetes as well as multiple metabolic and cardiovascular diseases. It is well known that obesity-associated insulin resistance is associated with chronic inflammation in visceral adipose tissue (1)(2)(3)(4)(5), which is mainly dependent on the innate immune system through overproduction of inflammatory mediators, including TNFα, IL-6, IL-1β and MCP-1, by both infiltrating inflammatory macrophages (classically activated inflammatory macrophages; known as M1 macrophages) and adipocytes themselves (6)(7)(8). Conversely, lean adipose tissue contains a resident population of alternative activated macrophages (also known as M2 macrophages), which can suppress adipose tissue inflammation via the production of inflammatory regulators such as IL-10 (9)(10)(11)(12). Thus, at the cellular level, it is believed that macrophages are a key mediator of obesity-associated adipose tissue inflammation and the consequent metabolic disorders.
Other than macrophages, the accumulation of other types of immune cells has been documented in obese adipose tissue. Of these, the important roles of T cells in the regulation of adipose tissue inflammation were highlighted recently. Mathis's group reported that lean visceral fat is enriched with a unique population of Foxp3 + CD4 + T reg cells harboring a distinct TCR repertoire and transcriptome, which suppress adipose tissue inflammation and, thus, insulin resistance (13). They also showed that such T reg cells are strikingly and specifically reduced with the progression of obesity, leading to an acceleration of adipose tissue inflammation. More recently, they reported that these adipose tissue T reg cells are derived from the thymus at the neonatal stage and accumulate in response to specific antigen recognition by their TCRs and soluble mediators, notably IL-33 (14,15). Conversely, Winer et al. demonstrated that CD4 T cells with specific Vα repertoires accumulate in obese visceral adipose tissue and may expand in response to antigen recognition (16). Furthermore, they proposed that obesity-associated chronic inflammation and insulin resistance are under the control of specific T h 1 and T h 2 T cells, by showing that the transfer of CD4 T h 2 cells or depleting predominantly T h 1 cells by anti-CD3 antibody treatment reversed diet-induced insulin resistance in mice. In addition, Nishimura et al. found that a preceding infiltration of CD8 T cells and their activation in obese visceral adipose tissue are indispensable for macrophage recruitment and adipose tissue inflammation (17). All of these observations implicate essential roles for different types of T cells (i.e. T h 1/T h 2 CD4, CD8 and T reg ) in controlling macrophage-dependent pathological inflammation in visceral adipose tissue and consequent local and systemic metabolic disorders.
One worthy strategy to further characterize specific adipose tissue T cells and assess their roles is the identification of the TCRs of T cells that accumulate in obese adipose tissue. In the current study, we isolated a specific TCR that was biased in visceral adipose tissue in wild-type mice fed a high-fat diet (HFD) for 9 weeks. Furthermore, we generated transgenic mice in which all T cells expressed the isolated TCR on a TCRα-null background. In the TCR transgenic mice, we analyzed adipose tissue, focusing on the development of an anti-inflammatory environment with ageing under lean conditions, as well as the state of inflammation originating from M1 macrophage infiltration in response to a HFD. Our findings may suggest another view of T cell involvement in obesity-associated chronic inflammation.
Mice
C57BL/6 (B6) mice and the HFD (HFD32, fat kcal: 60%) were purchased from Crea Inc. (Japan). TCRα −/− mice were purchased from JAX® MICE (The Jackson Laboratory, USA). Generation of TCR transgenic mice using B6 mice was carried out by pronuclear microinjection of DNA fragments into fertilized eggs from B6 at the Center for Animal Resources and Development, Kumamoto University, and the mice were transferred to the University of Tokyo and used for experiments. All mice were maintained under specific pathogen-free conditions. All animal experiments were carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Committee on the Ethics of Animal Experiments of the University of Tokyo (Permit Number: P10-143). All surgeries were performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering.
Isolation of stromal-vascular cell fraction cells
Stromal vascular fraction (SVF) isolation was performed as previously described (18) with some modifications. Briefly, mice were killed after anesthesia and systemic heparinization by infusion of PBS containing heparin (1 U ml −1 ). Epididymal fat pads were collected, minced into small pieces, washed in PBS containing heparin (1 U ml −1 ) for 1 min to remove blood cells and then centrifuged at 1000 × g for 10 min. Floating fat pieces were collected and incubated for 40 min in type 2 collagenase solution (2 mg ml −1 ; type 2 collagenase was purchased from Calbiochem Inc.). Thereafter, the digested tissue was filtered using a 100 µm cell strainer and centrifuged at 1400 rpm for 8 min, and the resultant pellet containing enriched SVF was washed twice in PBS and FACS wash buffer [PBS supplemented with 2.5% fetal bovine serum (FBS)].
Single-cell sorting and RT-PCR
The SVF cells from epididymal fat were collected as described above and stained for CD4 and CD8. Single CD4 + or CD8 + T cells were sorted using an Aria™ II cell sorter (BD) into a MicroAmp® Fast Optical 96-well Reaction Plate (Life Technologies Inc.), in which each well contains 20 µl of 1× RNA extraction buffer. After a 15-min incubation at room temperature, RT-PCR was performed using ReverTra Ace® (TOYOBO, Japan) following the manufacturer's protocol. Nested PCR was then performed using KOD Plus-Ver. 2 Taq polymerase (TOYOBO) with degenerate primers for TCRα (Table 1) and TCRβ (Table 2).
Lentivirus construction and transfection
To produce recombinant lentiviruses, 293FT cells were cotransfected with pLenti vector and ViraPower Packaging Mix (Invitrogen) using X-treameGENE9 (Roche). Cells were cultured for 4-5 days in DMEM culture medium containing 10% FBS and 500 µg ml −1 G418 (Invitrogen). The culture supernatant containing lentiviral particles was collected and concentrated using Lenti-X Concentrator (Clontech). Afterwards, 1 × 10 5 of 58α − β − cells (19) were suspended in 10 ml of DMEM medium without antibiotics, and 1 ml of supernatant containing lentiviral particles and 10 µg ml −1 Polybrene (MILLIPORE) were added. After an overnight culture, the culture medium was removed, followed by additional culture for 10-12 days in new DMEM medium containing 10 µg ml −1 Blasticidin (Life Technologies) to enrich transfected cells. Cells were harvested and analyzed for the TCRα/β expression by flow cytometer.
In vitro T-cell activation assay
To stimulate T cells with concanavalin A (Con A), lymph node cells were plated at 1 × 10 6 cells per well in a 96-well flat-bottomed culture plate. Then, 1 µg ml −1 of Con A (SIGMA-ALDRICH) was added, and cells were cultured for 6 or 16 h in DMEM culture medium supplemented with 10% FBS. Thereafter, cells were harvested and analyzed for CD25 expression levels using a flow cytometer. To stimulate the TCR, 96-well flat-bottomed culture plates were coated with the αCD3 (clone 145-2C11) antibody (0.1 µg per well) at 4°C overnight. Plates were washed twice with PBS, and then lymph node cells were plated at 1 × 10 6 cells per well. Cells were cultured for 6 or 16 h in DMEM culture medium supplemented with 10% FBS. Thereafter, cells were harvested and analyzed for CD25 expression levels using a flow cytometer.
Histology
Epididymal fat tissues were fixed in 4% paraformaldehyde in PBS for 24 h, and frozen sections were made at 7-10 µm thickness. For immunohistochemistry, frozen sections were treated with G-block (Genostaff Inc.) to block non-specific background, and then incubated with a primary antibody at 4°C overnight, followed by incubation with a fluorescently conjugated secondary antibody at room temperature for 1 h. Nuclei were additionally stained with Hoechst33342, and then, the slides were mounted with Prolong Gold anti-fade reagent (Molecular Probes). The specimens were analyzed using confocal microscopy (FV10i, Olympus).
Insulin tolerance test
Mice were injected intra-peritoneally with 0.75 U kg −1 bodyweight of insulin under the same fasting condition. Thereafter, at indicated time points, blood glucose levels were measured.
qPCR assay
The quantitative evaluation of mRNA was performed by the ∆∆CT method using a QuantStudio 3 Real-Time PCR system (Thermo Fisher Scientific). Sequences of the oligonucleotides used are shown in Table 3.
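The ∆∆CT quantification mentioned above reduces to a short formula: relative expression (fold change) = 2^(−∆∆Ct), where ∆Ct = Ct(target) − Ct(reference gene) is computed for both the sample of interest and the control. A minimal sketch; the Ct values in the usage note are illustrative, not measurements from this study.

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method: 2 ** (-ddCt).

    dCt normalises the target gene's Ct to a reference (housekeeping)
    gene; ddCt is the difference between the sample and control dCt.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)
```

For example, with hypothetical values fold_change(24, 20, 26, 20) gives ∆∆Ct = −2, i.e. a four-fold higher expression in the sample than in the control.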
Statistical analysis
Paired results were assessed using parametric tests such as Student's t-test. The significance code is added in each figure legend.
Specific Vα5/Vβ8.2 T cells accumulate in adipose tissue of obese mice
Table 3. PCR primers for qPCR
We examined the increase of T cells in adipose tissue along with the progression of obesity. To this end, the visceral epididymal fat tissue of wild-type C57BL/6 (B6) mice fed a HFD for 0, 4, 9 or 12 weeks was digested by collagenase (20), and the non-adipocyte SVF was isolated. SVF cells were assessed for the proportion of CD4 and CD8 T cells using a flow cytometer. As shown in Fig. 1(A), a significant increase of both CD4 and CD8 T cells was observed until 9 weeks. The proportion of CD4 T cells was larger than that of CD8 T cells (~2:1), and this was not changed by HFD challenge (Fig. 1A). Thus, we confirmed the recruitment of T cells to adipose tissue with the progression of obesity.
We then investigated the TCR repertoire of T cells that accumulated in visceral adipose tissue at various time points in mice fed a HFD. As the available antibodies for flow cytometry do not sufficiently cover the whole repertoire of the TCR V region, in particular for Vα, we amplified the TCR V-(D)-J region by degenerate RT-PCR using total RNA from adipose tissue SVF cells. The PCR fragments were sub-cloned, and 50 independent clones per mouse were sequenced for Vα and Vβ usage at 0, 4, 9 and 12 weeks of HFD challenge. The percentage of different Vα (TRAV) and Vβ (TRBV) in the 50 clones analyzed is presented in Fig. 1(B). No specific skew in Vα was observed at 0 (lean) and 4-week HFD. In contrast, a marked accumulation of T cells with TRAV3 (Vα5) was observed at 9-week HFD (Fig. 1B, upper panel). In one mouse, 49 out of 50 clones showed Vα5. Interestingly, in the spleen of mice at 9-week HFD, there was no apparent accumulation of Vα5 (Supplementary Figure 1, available at International Immunology Online), suggesting that the accumulation of Vα5-bearing T cells was due to the preferential recruitment of these cells to adipose tissue, and not due to their systemic amplification. Winer et al. similarly reported Vα5 as one of several TCR Vα repertoires identified in obese adipose tissue (16). Interestingly, at 12-week HFD, the dominance of TRAV3 (Vα5) clones was no longer observed (Fig. 1B, upper panel). The progression of tissue injury following the prominent recruitment of specific TRAV3 (Vα5) T cells might cause the exposure of multiple tissue antigens, leading to the subsequent infiltration of various T cells. In contrast to Vα5, however, the Vβ repertoire was varied throughout the HFD challenge; no specific skew was observed at 9 weeks (Fig. 1B, lower panel).
Next, we isolated CD4 and CD8 T cells from 9-week HFD adipose tissue and sequenced the whole V-(D)-J sequence in single cells. Twelve of the 15 CD4 cells analyzed possessed an identical Vα5-J sequence. To our surprise, this Vα5-J mRNA was also expressed in all 12 CD8 cells analyzed. In addition, although no specific skew was observed in the Vβ repertoire at 9-week HFD by bulk analysis of RT-PCR products, the Vβ-D-J sequence was identical and contained TRBV13-2 (Vβ8.2) in these CD4 and CD8 T cells carrying the specific Vα5-J sequence. The entire amino acid sequences of Vα5-J and Vβ8.2-D-J (plus Cα or Cβ, respectively) are presented in Fig. 1(C). Thus, CD4 and CD8 T cells bearing an identical TCR accumulated in visceral adipose tissue in mice fed a HFD for 9 weeks.
To test whether the isolated TCRα (Vα5) and TCRβ (Vβ8.2) chains develop a TCR complex, we generated a lentivirus vector carrying the TCRα and TCRβ coding sequences linked by the 2A peptide sequence. The vector also expressed GFP as a marker protein. A mouse T-cell clone lacking TCRα and β, 58α − β − cells (19) was transfected with the lentivirus vector, and TCR expression on the cell surface was assessed by flow cytometry. The GFP + population, corresponding to successfully transfected cells, was positive for both the TCRβ chain (H57 antibody) and CD3ɛ (145-2C11 antibody), indicating that the isolated TCRα and β chains formed a TCR complex (Fig. 1D).
Generation of transgenic mice expressing the Vα5/ Vβ8.2 TCR
We generated transgenic mice expressing the Vα5/Vβ8.2 TCR in T cells under the control of the human CD2 promoter (Fig. 2A). The transgene contained the cDNA of the TCRα or β chain and the 5.5 kb locus control region sequence to exclude the possible influence of the insertional locus of the transgene in the genome (21,22). After co-injection of DNA fragments for the TCRα and β transgenes into fertilized eggs from B6 mice, 13 independent transgenic mouse lines that carried both α and β transgenes were obtained, and in three of them (lines #3, #11 and #12), the majority of CD4 and CD8 peripheral T cells expressed only the Vβ8.2 transgenic TCRβ chain, indicating entire allelic exclusion of the endogenous β chain (Supplementary Figure 2, available at International Immunology Online). These three founder mice were bred to TCRα-deficient (TCRα −/− ) mice to eliminate endogenous TCRα protein.
In the periphery, the TCR level in the T cells of the resulting TCR transgenic mice on a TCRα −/− background [Adipose tissue T-cell Transgenic mice (ATT) mice] was comparable to that in wild-type mice in all three independent lines, when lymph node and splenic T cells were analyzed using an anti-H57 antibody (Fig. 2B, histograms). Thus, we employed one mouse line (line #3) for the following analyses. Intriguingly, the proportion of CD8 T cells was larger than that of CD4 T cells in ATT mice, unlike wild-type mice (Fig. 2B, CD4/CD8 profiles).
We addressed whether the transgenic T cells responded to activating stimulation. Lymph node cells from wild-type and ATT mice were stimulated using an anti-CD3ɛ antibody or Con A, and the cell activation state was assessed by CD25 expression. As shown in Fig. 2(C), CD25 levels on CD4 and CD8 T cells increased in response to CD3 stimulation in both types of mice, suggesting that the transgenic T cells possessed normal responsiveness to TCR stimulation. Parallel results were obtained when the cells were stimulated with Con A (Fig. 2C). Thus, the transgenic T cells harbor normal responsiveness to TCR-dependent and -independent stimulation.
We then assessed the thymic development of the transgenic T cells. The total number of thymocytes was similar in wildtype and ATT mice (1.66 × 10 8 ± 4.33 × 10 7 in wild-type mice versus 1.52 × 10 8 ± 1.30 × 10 7 in ATT mice; mean ± SD, n = 4 each). Both CD4 and CD8 T cells were positively selected and developed to single positive (SP) thymocytes in ATT mice (Fig. 2D). As Vα5/Vβ8.2 thymocytes were not negatively selected, the specific antigen that is recognized by Vα5/Vβ8.2 T cells may not be expressed highly in the thymus. The number of CD4 SP cells was smaller in ATT thymus than in wild-type thymus, indicating the less efficient positive selection of CD4 cells in ATT mice. Although the number of CD8 SP cells was much larger than that of CD4 SP cells in ATT mice (Fig. 2D, left panels), it was comparable in CD4 and CD8 cells when gated on cells that bear high levels of TCR, namely mature SP cells (Fig. 2D, right panels), indicating a large proportion of CD8 SP cells in ATT thymus were immature single positive (ISP) cells. Moreover, we assessed the positive selection of CD4 and CD8 cells by staining thymocytes for CD69 and TCR (H57 antibody). Both CD69 and TCR expression levels are upregulated in thymocytes after positive selection, and thereafter, CD69 levels decrease, whereas TCR levels remain high in mature SP cells (23). As demonstrated in Fig. 2(E), the number of CD69 − TCR low CD4 cells before positive selection was markedly larger in ATT mice than in wild-type mice (Fig. 2E), supporting the less efficient positive selection of CD4 T cells in ATT mice. The majority of CD8 SP cells belonged to the CD69 − TCR low population in ATT mice (Fig. 2E), consistent with the conclusion that the majority of CD8 SP cells were ISP cells in ATT thymus. TCR levels in CD4 and CD8 cells post-positive selection (in the CD69 low-negative TCR high population) were similar in ATT and wild-type mice (Fig. 2E).
ATT and wild-type mice were fed a HFD, and the accumulation of T cells in adipose tissue and their activation state were analyzed. Interestingly, the proportion of CD4 T cells was larger than that of CD8 T cells in adipose tissue of ATT mice, in contrast to the CD8-dominant profile in the spleen (Fig. 2F). Among Foxp3− non-T reg CD4 T cells, the proportion of activated cells was profoundly larger in ATT than in wild-type adipose tissue, though it was comparable in the spleens of both mice (Fig. 2G). In contrast, most CD8 T cells were CD25-negative in obese ATT adipose tissue, indicating that they were not activated (Fig. 2G). These results suggest that the Vα5/Vβ8.2 TCR recognizes certain adipose tissue antigen(s) predominantly in a class II MHC-dependent fashion, and that CD4 transgenic T cells were specifically activated in obese adipose tissue.
Abrogated anti-inflammatory states in adipose tissue in lean ATT mice
Mathis's group reported that thymus-derived T reg cells are amplified in response to a specific antigen in adipose tissue, thereby suppressing adipose tissue inflammation under lean conditions, and that such T reg cells decrease with the progression of obesity, contributing to the acceleration of inflammation (13-15). Therefore, we first assessed adipose tissue T reg cells in wild-type and ATT mice fed a normal chow diet. The proportion of Foxp3+ T reg cells among all CD4+ cells in adipose tissue was similar in young (8 weeks of age) wild-type and ATT mice. In 30-week-old wild-type mice, the proportion of T reg cells was substantially larger than in young mice; in some mice, it increased up to 40% of CD4 T cells (Fig. 3A, left panel). In contrast, in ATT mice, the T reg proportion was actually decreased in aged mice compared with young mice (Fig. 3A, left panel). In the spleen, such a profound increase in T reg cells was not observed in aged wild-type mice, and a similar T reg cell population size was found in wild-type and ATT mice (Fig. 3A, right panel). These results support the presence of specific adipose tissue T reg cells that migrate directly from the thymus, and they were not our Vα5/Vβ8.2 cells. Intriguingly, the T reg proportion varied among aged wild-type mice, and in some mice it was almost comparable to that in young mice (Fig. 3A). Since the level of Il33 mRNA in adipose tissue was similar in all aged wild-type mice and comparable with that in ATT mice (Fig. 3B), the insufficient increase of T reg cells in some wild-type mice might be due to inadequate exposure in adipose tissue of the specific antigen recognized by the T reg TCR. Consistent with a previous report, the proportion of Foxp3+ cells among CD4+ cells after a 12-week HFD was reduced compared with that in aged, lean mice, and it was smaller in ATT mice than in wild-type mice (Fig. 3A, left panel).
In addition to T reg cells, anti-inflammatory M2 macrophages increased in aged, lean wild-type mice. As shown in Fig. 3(C), the mRNA levels of the M2 macrophage marker genes Cd163 and Mrc1 [encoding mannose receptor (MR)] increased significantly in adipose tissue at 30 weeks of age compared with 8 weeks of age, as assessed by qPCR using RNA from SVF cells. In sharp contrast, no significant increase in the levels of M2 macrophage marker genes was observed in aged, lean ATT mice (Fig. 3C). Thus, the anti-inflammatory state in adipose tissue induced by the accumulation of T reg cells and M2 macrophages was abrogated in ATT mice. Although the link between the expansion of T reg cells and the increase in M2 macrophages is not clear, specific T cells (but not our Vα5/Vβ8.2 cells) appear to be involved in the development of anti-inflammatory states in aged, lean adipose tissue.
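The relative expression values used throughout this study (qPCR readings normalized to GAPDH and expressed relative to lean wild-type mice) are conventionally computed with the 2^(−ΔΔCt) method. The sketch below is illustrative only: the Ct values are hypothetical, not data from this study, and the paper does not state which quantification formula its qPCR software applied.

```python
def relative_expression(ct_target_sample, ct_gapdh_sample,
                        ct_target_ref, ct_gapdh_ref):
    """Relative expression by the 2^(-ddCt) method:
    normalize the target gene to GAPDH within each sample, then
    express the sample relative to the reference condition
    (here, lean wild-type adipose tissue)."""
    d_ct_sample = ct_target_sample - ct_gapdh_sample
    d_ct_ref = ct_target_ref - ct_gapdh_ref
    return 2.0 ** -(d_ct_sample - d_ct_ref)

# Hypothetical Ct values for an M2 marker (e.g. Mrc1) in aged vs.
# young SVF RNA: ddCt = (24-18) - (26-18) = -2, i.e. a 4-fold increase.
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
```

With equal ΔCt in sample and reference the function returns 1.0, i.e. no change relative to the lean wild-type baseline.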
Comparable inflammatory states in adipose tissue in wild-type and ATT mice
We wondered whether adipose tissue inflammation, including M1 macrophage recruitment, was promoted in lean, aged ATT mice, as no accumulation of T reg cells and M2 macrophages was observed in their adipose tissue. Histologically, however, F4/80+ macrophages formed no obvious clusters or crown-like structures (CLSs) (24) in either wild-type or ATT mice (Fig. 4A). qPCR analysis showed that the mRNA levels of inflammatory/M1 genes, including Tnfα, Ccl2 (encoding MCP-1), Nos2 (encoding iNOS) and Il1b (encoding IL-1β), increased mildly at 30 weeks of age compared with 8 weeks of age in both wild-type and ATT mice (Fig. 4B). The proportion of CD25+-activated Foxp3− non-T reg CD4+ cells in adipose tissue did not differ significantly between 30-week-old wild-type and ATT mice (Fig. 4C). Thus, despite the abrogated anti-inflammatory environment, no profound inflammation occurred in the adipose tissue of aged, lean ATT mice in terms of M1 macrophage infiltration and/or rigorous accumulation of effector T cells.
Conversely, a large number of macrophages forming CLSs were observed within visceral fat tissue in both wild-type and ATT mice after a 12-week HFD (Fig. 4D). The mRNA levels of Adgre1 (encoding F4/80) increased comparably in both mice (Fig. 4E). Thus, the obesity-associated recruitment of M1 macrophages to adipose tissue was induced similarly in wild-type and ATT mice. The mRNA levels of the inflammatory/M1 (Tnfα, Ccl2, Nos2 and Il1b) and regulatory/M2 (Cd163 and Mrc1) genes expressed in macrophages were also essentially similar in obese wild-type and ATT mice (Fig. 4E), indicating no essential difference in macrophage M1/M2 polarity or inflammatory state in adipose tissue between obese wild-type and ATT mice. In addition, whole-body insulin resistance was observed at comparable levels in wild-type and ATT mice fed a HFD for 12 weeks, based on the results of an intra-peritoneal insulin tolerance test (Fig. 4F). Accordingly, serum glucose and insulin levels increased similarly in both mice after the 12-week HFD (Fig. 4G). Thus, the induction of obesity-associated M1 macrophage recruitment and the ensuing inflammation, as well as insulin resistance, were not influenced under the condition where all T cells expressed a single TCR. Together with the observations in aged ATT mice under a normal chow diet, it is likely that removal of the anti-inflammatory state is not by itself sufficient; obesity progression (e.g. by a HFD) is also required to induce the rigorous recruitment of M1 macrophages to adipose tissue, resulting in subclinical inflammation.
Discussion
In the current study, we isolated a Vα5/Vβ8.2 T cell that accumulated in adipose tissue at a specific period during the progression of obesity. In addition, we generated transgenic mice expressing this TCR (ATT mice). Various important lessons were obtained from the analysis of ATT mice, which provided new insights into the role of T cells in obesity-associated adipose tissue inflammation. It was curious that both CD4+ and CD8+ T cells harbored the same Vα5/Vβ8.2 TCR in obese adipose tissue. One might argue that this finding was brought about by cross-contamination during the single-cell RT-PCR. However, we believe this is unlikely, since both CD4+ and CD8+ T cells were positively selected and appeared in the periphery in ATT mice. It is not clear whether both T-cell types were positively selected by only class I or class II MHC, or by both. Nevertheless, the Vα5/Vβ8.2 TCR should recognize a certain adipose tissue antigen presented by class II MHC molecules, as CD4 T cells predominantly infiltrated and were activated in adipose tissue in ATT mice. These data confirm the accumulation of Vα5/Vβ8.2 T cells in the adipose tissue of obese wild-type mice presented in Fig. 1 and also implicate their participation in the obesity-associated chronic inflammatory events of adipose tissue, which lead to insulin resistance.
However, intriguingly, there was no difference in the inflammatory state, including macrophage CLS formation and inflammatory cytokine levels in adipose tissue, as well as in the consequent insulin resistance, between obese wild-type and ATT mice fed a HFD. This is in contrast to the marked disease acceleration often observed in various transgenic mouse models of autoimmune disease, which express a TCR isolated from T cells recognizing pathogenic antigens. For instance, Katz et al. identified a set of TCRs from T cells infiltrating the Langerhans islets in non-obese diabetic (NOD) mice and produced TCR transgenic mice on a NOD background; they observed prominently accelerated insulitis and consequent diabetes in these mice (25). Thus, it appeared that T cells might not be involved in the adipose tissue chronic inflammation causing insulin resistance. One might argue that although the Vα5/Vβ8.2 T cells did accumulate and were activated in obese adipose tissue, they might, by chance, not be the T cells responsible for inflammatory events. If so, however, adipose tissue inflammation should have been ameliorated in ATT mice, as no T cells other than Vα5/Vβ8.2 cells were present in these mice. It might also be possible that multiple T-cell repertoires are required for obesity-associated adipose tissue inflammation. If so, the involvement of Th1 T cells in the progression of inflammation and the requirement of CD8 T cells for the initiation of macrophage infiltration demonstrated in previous reports (16,17) might occur in TCR-independent fashions. Certainly, these possibilities need to be evaluated further, such as by analyzing other TCR transgenic mouse lines fed a HFD. In addition, the role of Vα5/Vβ8.2 T cells in inflammatory events in adipose tissue needs to be clarified by pursuing additional experiments, including identification of the antigen recognized by this TCR.

Fig. 4. (B) mRNA levels were assessed by qPCR using RNA isolated from epididymal fat obtained from wild-type and ATT mice maintained under a NCD at 8 and 30 weeks of age; n = 3-5 for each group. Values were normalized to those of GAPDH and presented as expression relative to that of lean wild-type mice. Error bar: SEM. (C) The proportion of CD25+ cells in CD4+Foxp3− T cells in the epididymal white adipose tissue of wild-type and ATT mice maintained under a NCD for 30 weeks. Each dot corresponds to the result from an individual mouse; averages are indicated by bars. (D) Specimens of the epididymal white adipose tissue (WAT) from wild-type and ATT mice fed a HFD for 12 weeks were stained for F4/80 (pan-macrophage marker; green) and Hoechst (blue), or by H&E. Scale bar: 100 µm. (E) qPCR analysis of the mRNA levels for Adgre1 (F4/80), Tnfα, Ccl2 (Mcp1), Nos2 (iNos), Il1b (M1 markers), and Cd163 and Mrc1 (MR) (M2 markers) using RNA isolated from epididymal fat obtained from wild-type and ATT mice fed a HFD for 0 (Pre) and 12 weeks; n = 4-6 for each group. Values were normalized to those of GAPDH and presented as expression relative to that of lean wild-type mice. Error bar: SEM. (F) Insulin tolerance test (ITT) performed on wild-type and ATT mice fed a HFD for 0 (Pre) or 12 weeks; n = 3-5 for each group. (G) Insulin levels in mice maintained under a NCD or fed a HFD for 12 weeks (n = 4-5 for each genotype), and fasting blood glucose levels in mice fed a HFD for 0 (Pre) or 12 weeks (n = 4 for each genotype).
In addition, the aged, lean ATT mice (under a NCD) provided important information about the role of T cells in developing an anti-inflammatory state in adipose tissue. We confirmed the increase of adipose tissue T reg cells in wild-type mice with ageing (although there was large variation among individuals). In contrast, the number of adipose tissue T reg cells was decreased in aged, lean ATT mice. This result re-confirmed the scenario proposing the presence of specific T reg cells that increase in adipose tissue with ageing (13-15), although our Vα5/Vβ8.2 T cells are not involved in this process. Ideally, however, the generation and analysis of a new transgenic mouse line expressing the TCR of the thymus-derived adipose-tissue-specific T reg cells would further test Mathis's hypotheses: whether such T cells accumulate only in adipose tissue and not in lymphoid tissue, whether they decrease in adipose tissue in obese mice, and whether adipose tissue inflammation and subsequent insulin resistance are prevented in obese mice.
In addition to T reg cells, we observed that M2 macrophages increased in adipose tissue in aged wild-type mice, but not in ATT mice. However, whether the increase of M2 macrophages was brought about by T reg cells or by other non-T reg cells (certainly different from Vα5/Vβ8.2 T cells) is still unclear. The transgenic mice expressing the TCR from adipose tissue T reg cells might also provide solid information about this issue. Finally, it may be worth re-emphasizing that aged, lean ATT mice, in which the anti-inflammatory environment consisting of T reg and M2 macrophage cells was abrogated, exhibited no spontaneous M1 macrophage infiltration and related inflammation in adipose tissue. Thus, it is likely that the decrease of T reg cells (as well as M2 macrophages) is not solely sufficient for the initiation of M1 macrophage recruitment. Further studies are necessary to clarify the link between these two events precisely.
Supplementary data
Supplementary data are available at International Immunology online.
Funding
This work was supported by AMED-CREST, AMED and a research grant by ONSENDO Co., Ltd. (to T.M.).
Gelfoam Embolization Technique to Prevent Bone Cement Leakage during Percutaneous Vertebroplasty: Comparative Study of Gelfoam only vs. Gelfoam with Venography
Objective Percutaneous vertebroplasty (VP) has been used for the safe treatment of osteoporotic compression fractures. However, cement leakage is the most common complication. To reduce the leakage of bone cement, we performed gelfoam embolization during VP. The purpose of this study is to compare the safety and feasibility of two different gelfoam embolization techniques during VP. Methods A total of 127 patients (146 levels) who had thoracolumbar osteoporotic compression fractures were enrolled. Group A was treated with the gelfoam-only technique, and group B was treated with the gelfoam with venography technique. We compared the incidence of bone cement leakage between the two groups using postoperative computed tomography scans and X-rays. Results Seventy-four patients (81 levels) were treated with the gelfoam-only technique (A), and 53 patients (65 levels) were treated with the gelfoam with venography technique (B). There were 22 leakages in group A and 19 leakages in group B; this difference was not statistically significant (chi-square test, p-value=0.958). The incidence of leakage to the spinal canal was 11 levels in group A and 3 levels in group B, a statistically significant difference (Fisher's exact test, p-value=0.027). Conclusion Complications induced by bone cement leakage are the main concern during VP. Gelfoam embolization with venography is a very easy and safe method, and the gelfoam with venography technique can lower the incidence of cement leakage to the spinal canal.
INTRODUCTION
Percutaneous vertebroplasty (VP) has been widely used for pain relief and for strengthening weakened vertebral bodies in osteoporotic compression fracture. 6) However, VP carries the potential risk of serious complications such as infection, new fractures of adjacent vertebral bodies and cardiopulmonary complications. In particular, leakage of cement after VP is one of the most serious complications, and its incidence has been reported as between 38% and 75%. 3-6,12,16) To prevent this complication, a technique using gelfoam during VP has been reported. Although the gelfoam technique can reduce cement leakage, leakage still remains one of the major problems of vertebroplasty. Techniques studied to reduce these complications more effectively include intraosseous venography during VP. 2,13,14) We developed a technique of performing venography before VP by mixing contrast and gelfoam, to avoid cement leakage via intravertebral venous flow or the fracture line. There have been no reports comparing the gelfoam-only technique and the gelfoam with venography technique. The purpose of this study was to determine the safety and feasibility of routine pre-injection of gelfoam with venography during VP.
MATERIALS AND METHODS
Patients who underwent VP for painful osteoporotic thoracolumbar compression fractures performed by a single surgeon from 2011 to 2015 were retrospectively reviewed. A total of 127 patients (146 levels) were enrolled in this study.
Fractured levels ranged from the 9th thoracic to the 5th lumbar vertebra. All patients had suffered severe back pain and tenderness for more than 2 weeks that did not respond to conservative treatment. The level of VP was selected on the basis of clinical symptoms, magnetic resonance imaging, and radioisotope bone scan. We divided the patients into two groups: (A) the gelfoam-only technique group and (B) the gelfoam with venography group. From 2011 to 2013, we used the gelfoam-only technique during VP. We then developed the gelfoam with venography technique, which was used from 2013 to 2015.
All procedures were performed as elective cases under local anesthesia by a single experienced spinal neurosurgeon. After placing the patient in the prone position on a radiolucent table, the back was prepared and draped. Under biplane C-arm guidance, the Jamshidi needle was introduced through the pedicle and advanced to the anterior third of the vertebral body to prevent fenestration of the anterior cortex. All procedures were performed via a bipedicular approach.
In group A, the gelfoam sponge was cut into regular pieces (5×5 mm). The gelfoam pieces were mixed with 10 mL of normal saline in a 10-mL syringe. Pre-procedural gelfoam embolization was then performed without contrast, using 5 mL of the suspension on each side. After 1 minute, 1.5 mL of bone cement was injected per Jamshidi needle (total 3 mL per level).
In group B, the pieces of gelfoam sponge were mixed with 3 mL of normal saline and 7 mL of contrast (Visipaque-320) in a 10-mL syringe, and 5 mL of pre-procedural gelfoam embolization with contrast was performed on each side (FIGURE 1). Venography was done to identify the basivertebral plexus and any other large vessels or fracture lines through which the cement might leak. If there was active venous flow or contrast leakage, we advanced the Jamshidi needle a little further (FIGURE 2). After 1 minute, 1.5 mL of vertebroplasty cement was injected per Jamshidi needle (total 3 mL per level).
During bone cement injection, the C-arm was used to check for any cement leakage. If any sign of cement leakage was suspected, the injection was stopped immediately and resumed after waiting 1 minute. If cement leakage along a vein shown on venography was suspected, we advanced the Jamshidi needle a little and reinjected the bone cement after waiting 1 minute. After the procedure, cement leakage was assessed using postoperative plain radiography and computed tomography (CT) scans. Based on the obtained images, we classified cement leakage into 4 patterns: type 1, to the paravertebral muscle and soft tissue; type 2, to the paravertebral vein; type 3, to the disc space; type 4, to the spinal canal (FIGURE 3). 14) All statistical analyses were performed using SPSS ver. 14.0 (SPSS Inc., Chicago, IL, USA). The t-test, chi-square test, and Fisher's exact test were used for statistical analysis. Statistical significance was accepted at p<0.05.
RESULTS
In this study, we classified the procedures into two groups: group A (gelfoam-only technique), 81 levels, and group B (gelfoam with venography technique), 65 levels. Group A consisted of 74 patients (54 females and 20 males); the average age was 71 years (40-86). All patients had severe osteoporosis (average bone mineral density [BMD] −3.1). Group B consisted of 53 patients (42 females and 11 males); the average age was 73 years (46-95); these patients also had severe osteoporosis (average BMD −3.2). There was no statistically significant difference between the two groups in patient demographics (TABLE 1). In the comparison of cement leakage incidence, there were 22 leakages in group A and 19 leakages in group B; there was no statistically significant difference between the two groups in overall leakage incidence (chi-square test, p-value=0.958) (TABLE 2). In the comparison of leakage patterns, group A had 1 type 1, 11 type 2, 1 type 3, and 11 type 4 levels; group B had 1 type 1, 14 type 2, 5 type 3, and 3 type 4 levels. The incidence of leakage to the spinal canal (type 4) differed significantly between the groups (Fisher's exact test, p-value=0.027).
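The two group comparisons above can be reproduced in outline from the published counts. The pure-Python sketch below is illustrative, not the authors' SPSS workflow: it computes a Pearson chi-square statistic for the overall leakage table and a two-sided Fisher's exact p-value for the type 4 table. Exact p-values depend on choices such as continuity correction, so the outputs are not expected to match the reported values digit for digit.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables (with the same
    margins) that are no more probable than the observed table."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, col1)

    def weight(x):  # integer numerator for a table with x in cell (1,1)
        return comb(row1, x) * comb(row2, col1 - x)

    observed = weight(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    tail = 0
    for x in range(lo, hi + 1):
        w = weight(x)
        if w <= observed:
            tail += w
    return tail / total

def chi_square_stat(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table (no correction)."""
    n = a + b + c + d
    stat = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Overall leakage: 22 of 81 levels (group A) vs. 19 of 65 (group B).
chi2 = chi_square_stat(22, 59, 19, 46)
# Type 4 (spinal canal) leakage: 11 of 81 vs. 3 of 65.
p_type4 = fisher_exact_two_sided(11, 70, 3, 62)
```

A chi-square statistic below the 3.84 critical value (df = 1, α = 0.05) is consistent with the reported non-significant overall comparison.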
DISCUSSION

The leakage can lead to severe neurological or pulmonary complications. 9,10) It has been reported that the gelfoam embolization technique can reduce cement leakage. 1) Since our hospital started pre-procedural gelfoam embolization during VP, leakage has been markedly reduced. However, concerns about the risk of potential bone cement leakage still remain for young spine neurosurgeons.
Although controversy exists, venography before VP is known as an effective method. If the surgeon knows the venography pattern in advance, it is helpful for preventing cement leakage complications by adjusting the needle tip position or the cement amount. In addition, the agreement rate between the venography pattern and the cement leakage pattern has been reported to be 83%. 13) We studied the incidence and pattern of bone cement leakage between groups in which the gelfoam technique was performed during VP with or without venography.
The leakage patterns of bone cement classified by many authors are useful for decreasing complications. 11) In this study, we classified the cement leakage pattern into four types, defined by the direction of leakage and the related structures: type 1 is leakage to the paravertebral muscle and soft tissue; type 2, leakage to the paravertebral vessels; types 3 and 4, leakage to the disc space and the spinal canal, respectively. All patterns were determined using postoperative CT and X-ray by the neurosurgeon who performed the operation.
The overall incidence of cement leakage showed no significant difference between the two groups. However, the incidence of cement leakage to the spinal canal (type 4) was lower in the gelfoam with venography group than in the gelfoam-only group. One of the most catastrophic complications of VP is induced by type 4 leakage (migration to the spinal canal). As leakage to the spinal canal can cause serious complications such as paraplegia or radiculopathy, the gelfoam with venography technique could help young spine neurosurgeons avoid leakage to the spinal canal. Gelfoam slows the injected cement and the venous flow, which reduces cement leakage to the spinal canal. Furthermore, leakage to the spinal canal can be reduced still further by confirming through venography whether there is a contrast leak.
Our study has some limitations. First, all VPs were performed by a single surgeon who had experienced more than 500 VP cases. Because VP is not a difficult surgery, a surgeon sufficiently familiar with VP has enough experience to reduce bone cement leakage, so the overall leakage incidence may show no significant difference. If the operations had been performed by a beginner, the results might have been different. In general, experienced spine neurosurgeons have a VP complication rate below 2%. 7,8) Second, although this study confirmed that the risk of cement leakage to the spinal canal can be reduced by performing venography during VP, cement leakage that cannot be predicted by venography also exists. Intraoperative venography can reveal not only large venous flow in the vertebral body, but also large fracture lines that the contrast reaches. However, fracture lines that the contrast medium does not reach cannot be identified by venography alone; they can nevertheless be recognized and avoided using the preoperative CT scan.
Third, the total number of VP procedures was small. We limited the study to the T9-L5 spine levels and a single neurosurgeon for a high degree of accuracy. Further study may be needed.
CONCLUSION
In our study, there was no statistically significant difference in overall leakage between the gelfoam groups with or without venography. However, gelfoam with venography lowered the incidence of cement leakage to the spinal canal. This study has the limitation that all procedures were performed by a single skillful neurosurgeon; bone cement leakage during VP will be higher with inexperienced operators. Therefore, the gelfoam with venography technique could be helpful for beginners.
"year": 2020,
"sha1": "2ab430a2610cc936c83ec0f18e997fb71cab6969",
"oa_license": "CCBYNC",
"oa_url": "http://kjnt.org/Synapse/Data/PDFData/0203KJN/kjn-16-e42.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "088e0f2b7b8620e8be71fba9883e3957909993aa",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Research on the Use of Copper Core Yarns in Electromagnetic Shielding Application
In this work, possibilities for producing textile materials with copper core cotton yarns for electromagnetic shielding were studied. Cu/Co core yarns were produced using copper filaments of two different diameters (0.05 and 0.07 mm) as the core material and cotton as the sheath material. All core yarns were produced in Ne 8 yarn count. The conductive Cu/Co yarns were integrated into a 3/1 twill woven fabric structure in the weft direction with 5 different weft densities (8, 13, 18, 23 and 28 per cm). According to the TS EN 50147-1 standard, the shielding effectiveness of these fabrics was measured in the 1-6 GHz frequency range. The results have shown that the EMSE of the fabrics can be tailored by changing the weft density, core filament fineness and wave frequency parameters.
Introduction
With technological progress, electrical and electronic devices have reached an important position in our daily life. These devices make our life easier, but at the same time they cause significant environmental pollution, since they emit electromagnetic waves to their surroundings while working. Electromagnetic waves have harmful effects both on sensitive electronic equipment and on human health. Electrical devices interfere with other electrical and electronic devices through the electromagnetic fields they generate. This interference may cause malfunctioning of other sensitive electronic devices when a specific limit is exceeded [1].
If an electromagnetic wave enters an organism, it vibrates molecules and generates heat. Likewise, when an electromagnetic wave enters the human body, it can hinder a cell's regeneration of DNA and RNA, and it can cause abnormal chemical activity that produces cancer cells [2]. For these reasons, protection from electromagnetic waves is very important. Shielding is one of the most efficient solutions for protection from electromagnetic waves. Electromagnetic shielding can be defined as the prevention of electromagnetic radiation transmission by a material [1].
There are some previous studies on this topic. Su and Chern (2004) used stainless steel as a conductive filler to produce core, cover and plied yarns for different types of woven fabrics. The electromagnetic shielding effectiveness (EMSE) of these fabrics was measured by coaxial transmission equipment at frequencies ranging from 9 kHz to 3 GHz. The experimental results showed that a denser fabric structure had a higher EMSE. With respect to the influence of yarn structure, a fabric made from core yarn has a higher EMSE than fabrics made from cover or plied yarns [2]. Duran and Kadoglu (2015) studied woven fabrics produced with two different types of conductive yarns, namely silver-containing (Ag/PA,Co) core yarns and silver-containing (Ag/PA-Co) blended yarns, and investigated the effect of various yarn and fabric properties on EMSE. They found that the shielding effectiveness can be tailored by changing the yarn and fabric parameters, and that there are significant differences between the electromagnetic shielding characteristics and performance of fabrics produced with different types of yarns. Such fabrics are convenient for both daily and professional use, since they combine high EMSE performance with good comfort properties [5].
Production
Yarn production was carried out on a ring spinning machine. Copper core cotton yarns (Cu/Co) were produced using copper filaments as the core material and cotton as the sheath material. A core yarn apparatus was mounted on the machine to feed the conductive filament into the centre of the cotton sheath before twisting [1]. Copper wires of 0.05 mm and 0.07 mm were used as the core material. All copper core cotton yarns were produced in Ne 8 yarn count [6].
After yarn production, the copper core cotton yarns were integrated into the 3/1 twill woven fabric structure in the weft direction, with five different weft densities, namely 8, 13, 18, 23 and 28 wefts/cm, for the evaluation of their electromagnetic shielding effectiveness (EMSE) [1]. Ne 20/2, 100% cotton yarns were used as warp yarns for all the fabrics. The fabric samples were coded according to their wire diameter and weft density [7]. For instance, sample 5.28 has a 0.05 mm copper core and a weft density of 28 wefts/cm.
Tests
The electromagnetic shielding properties of the woven fabrics were tested using an anechoic chamber test system according to the EN 50147-1 standard in the 1-6 GHz frequency range. The results were obtained in decibels (dB) [5].
The system consists of a signal generator, an RF power amplifier which amplifies the signals before they are sent to the sample, two antennas in two adjacent shielded rooms (the first antenna connected to the signal generator and the second to the spectrum analyzer as the signal receiver), and a spectrum analyzer which analyzes the signals obtained from the receiver antenna [1].
The fabric was placed in the gap between the two rooms (two antennas) during the measurements. Signals were generated, amplified and then sent onto the fabric by the transmitting antenna. The signals transmitted through the fabric were detected by the receiver antenna situated on the other side of the fabric [1], and the results were read from the spectrum analyzer. The EMSE of the copper core conductive fabrics was measured at frequencies ranging from 1 GHz to 6 GHz [8]. Each fabric sample was measured three times. To obtain the actual shielding effectiveness, blank measurements (without fabric/shielding material) were taken into account at all frequencies [7]. The results were analyzed using Excel tables, diagrams and the SPSS software program.
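With readings taken in dBm on the spectrum analyzer, accounting for the blank measurement reduces to subtracting the sample reading from the blank reading; equivalently, SE is 10·log10 of the ratio of transmitted powers. A minimal sketch follows, with hypothetical readings, not data from this study:

```python
import math

def emse_db(p_blank_mw, p_sample_mw):
    """Shielding effectiveness in dB from the power transmitted
    without (blank) and with the fabric: SE = 10*log10(P_blank/P_sample)."""
    return 10.0 * math.log10(p_blank_mw / p_sample_mw)

def emse_from_dbm(blank_dbm, sample_dbm):
    """With spectrum-analyzer readings already in dBm, the blank
    correction is just a difference of readings."""
    return blank_dbm - sample_dbm

# A fabric that lets through 1/1000 of the incident power shields 30 dB.
se = emse_db(1.0, 0.001)
```

Both forms are equivalent because dBm is itself 10·log10 of power referenced to 1 mW, so the reference cancels in the subtraction.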
Results and Discussion
The effects of wave frequency, fineness of the core filament and weft density on the electromagnetic shielding effectiveness of fabrics were investigated.
Effects of Wave Frequency
According to the ANOVA and SNK tests, wave frequency had a statistically significant effect on the shielding effectiveness [1,5].
The change of EMSE of the fabric samples with wave frequency is given in Figure 1.
Effects of Core Fineness
According to the Paired Samples t-test, core fineness has a statistically significant influence on the shielding effectiveness of the fabrics [3]. With an increase in wire diameter (i.e. a thicker wire), a decrease in shielding effectiveness was observed [3]. The reason is that as the copper filament becomes thicker, its rigidity increases.
The increased rigidity of the copper filament makes it more difficult to bend and conform to the fabric, which can cause apertures in the fabric structure [1,8]. Therefore, the EMSE of the fabrics decreases. As shown in Figure 2, fabrics with 0.05 mm core fineness exhibit higher EMSE than those with 0.07 mm at all frequencies.
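The paired comparison can be illustrated with a hand-rolled paired-samples t statistic on EMSE readings matched by frequency. The values below are invented for the sketch and are not the paper's measurements:

```python
# Sketch: paired-samples t-test comparing EMSE of the two core finenesses at
# matched frequencies. All EMSE values are illustrative, not measured data.
import math
import statistics as st

emse_005mm = [31.9, 30.5, 29.8, 28.7, 27.9, 27.1]  # dB at 1..6 GHz
emse_007mm = [28.4, 27.2, 26.9, 25.8, 25.1, 24.6]

diffs = [a - b for a, b in zip(emse_005mm, emse_007mm)]
n = len(diffs)
t_stat = st.mean(diffs) / (st.stdev(diffs) / math.sqrt(n))

T_CRIT = 2.571  # two-tailed critical t, df = 5, alpha = 0.05
print(t_stat > T_CRIT)  # True -> the two finenesses differ significantly
```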
Effects of Weft Density
In accordance with the results of the ANOVA and SNK tests, weft density had a statistically significant effect on the shielding effectiveness.
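A one-way ANOVA across the five weft densities can be sketched as below; the per-density EMSE replicates are invented for illustration only:

```python
# Sketch: one-way ANOVA F statistic for EMSE grouped by weft density
# (8, 13, 18, 23, 28 wefts/cm). All values are illustrative.
import statistics as st

groups = {
    8:  [18.2, 17.9, 18.5],
    13: [21.4, 21.0, 21.7],
    18: [24.9, 24.5, 25.2],
    23: [28.1, 27.8, 28.6],
    28: [31.8, 32.1, 32.0],
}

all_vals = [v for g in groups.values() for v in g]
grand = st.mean(all_vals)
k, n = len(groups), len(all_vals)

ss_between = sum(len(g) * (st.mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((v - st.mean(g)) ** 2 for g in groups.values() for v in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
F_CRIT = 3.48  # critical F, df = (4, 10), alpha = 0.05
print(f_stat > F_CRIT)  # True -> weft density effect is significant
```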
Conclusion
In this work, textile materials were developed for protection against electromagnetic waves and their application areas were investigated. For this purpose, Cu/Co core yarns were produced on a ring spinning frame using copper filaments as the core material and cotton as the sheath material. Copper monofilaments of 0.05 mm and 0.07 mm diameter were used as core materials, and all core yarns were produced in Ne 8 yarn count. The conductive Cu/Co yarns were integrated into a 3/1 twill woven fabric structure in the weft direction at five different weft densities (8, 13, 18, 23 and 28 wefts per cm).
The effects of wave frequency, core filament fineness and weft density on the EMSE were investigated. The shielding effectiveness of these fabrics was measured between 1 GHz and 6 GHz according to the TS EN 50147-1 standard [10].
The results showed significant effects of core filament fineness and weft density on the EMSE. The fabrics produced with the 0.05 mm core filament exhibited higher EMSE than those with the 0.07 mm core. EMSE values of the fabrics increased with increasing weft density [11], since the fabric then comprises more conductive material.
The highest results were obtained from the fabric produced with the 0.05 mm core filament and 28 wefts/cm weft density (sample 5.28) at 2 GHz. The average EMSE value of sample 5.28 is 31.98 dB (Figure 2).
In conclusion, since satisfactory results were obtained, the fabrics produced in this research can be used for electromagnetic shielding, especially against devices that emit between 1 and 6 GHz, such as cell phones, computers, tablets, radios and household electrical appliances. For this purpose, the produced fabrics could be used in various applications in both professional and daily life, such as curtains, mosquito netting, aprons, tents, awnings and protective clothing.
Analysis of Core Employee Satisfaction for Port Project: A Case Study of Dalian Port
This paper conducts an in-depth analysis of the satisfaction of core employees at Dalian Port regarding various port projects. It aims to understand employees' expectations of port projects, assess whether these projects meet those expectations, and explore the impact of motivational and healthcare factors on employee satisfaction. The study addresses key questions related to employee satisfaction, motivational factors and healthcare factors, hypothesizing that these factors significantly influence satisfaction, with variations based on employee characteristics and project types. The findings provide practical insights and theoretical contributions that can guide Dalian Port's port enterprises in enhancing employee satisfaction.
Introduction
Dalian Port, as one of Northeast China's largest ports, holds a pivotal role in international trade and logistics. The efficiency and service quality of the port are significantly influenced by its core employees, who actively participate in various port projects. The satisfaction of these employees is integral to the operational performance and development of Dalian Port. Given the challenges faced by the port's workforce, such as work intensity, project pressures and evolving job requirements, understanding their satisfaction with different port projects becomes imperative. Consequently, examining factors influencing employee satisfaction and their perspectives on various port projects holds both practical and theoretical significance.
Employee Satisfaction in the Port Industry
Studying employee satisfaction in the port industry is a topic of great interest. Many studies have explored the level of employee satisfaction in different ports and the factors that influence it. For example, Kim and Ra (2022) [1] found in their study that employees' satisfaction with wages and work environment is closely related to their work motivation. This is closely related to our research, as we will also be looking at employee satisfaction with pay and the working environment.
Definition and measurement of employee satisfaction
Employee satisfaction is often defined as the employee's feelings and emotional state regarding their work and work environment (Park I J, 2021) [2]. Employee satisfaction is often measured using standardized survey instruments, such as those of Won and Pan (2023) [3]. In our research, we will draw on these tools to measure employee satisfaction at the Port of Dalian.
Theoretical framework of motivational and health factors
When studying employee satisfaction, the theoretical framework of motivational factors and health factors is important. Herzberg's two-factor theory (Alrawahi S, 2020) [4] states that motivational factors such as job content and responsibilities can increase employee satisfaction, while health factors such as pay and working conditions can prevent employee dissatisfaction. This theory will be the theoretical basis for our research.
Related Studies
There is limited research on employee satisfaction at the Port of Dalian, but some relevant literature can provide useful references. For example, Le D N (2020) [5] studied the influencing factors of port staff satisfaction in Vietnam and provided some insights, and Hsu et al. (2023) [6] used the SERVQUAL service quality scale to analyze the basic characteristics and service quality of smart ports in the post-pandemic era, although their research scope did not include the Port of Dalian. We will consider these findings and work to fill the research gaps in this area.
Data Collection
Quantitative research through employee surveys will be employed. A structured questionnaire covering satisfaction, motivational and healthcare factors will be designed based on prior studies and literature.
Sample Selection
Core employees at Dalian Port will be randomly sampled, ensuring representativeness. Cooperation with the port management will provide a list for random selection, with the sample size determined for statistical reliability.
Data Analysis Methods
Statistical software will be used to analyse the collected data, employing descriptive statistics, correlation analysis and regression analysis to explore satisfaction levels, correlations between factors, and the impact of motivational and healthcare factors on satisfaction.
Questionnaire Design and Variable Definitions
The questionnaire will cover employee demographics, port project satisfaction, evaluations of motivational factors, and evaluations of healthcare factors. Variables will be defined based on the nature of the questions.
Employee Satisfaction Evaluation
Results from the survey of ten Dalian Port employees indicate high overall satisfaction with port projects, with an average rating of 5 ("very satisfied").Descriptive statistics are provided in Table 1.
Evaluation of Motivational Factors
Motivational factors are a key component of employee satisfaction. According to the survey results, employees rated their satisfaction with the job content at 4 points ("satisfied"). This indicates that employees are satisfied with what they do, but there may be room for improvement. The score of 3 for workload is classified as "moderate", which means that some employees may feel that the workload is high; this may require more in-depth research and improvement. However, the score of 5 points for job responsibilities indicates that employees are very satisfied with their job responsibilities, which is essential for employees' self-motivation and performance improvement.
Evaluation of Healthcare Factors
Health factors include salary, working environment and project policies. According to the survey results, employees gave a satisfaction rating of 4 for salary, indicating that they are satisfied with the salary level. The score of 5 for the work environment is a very positive result and indicates that employees are very satisfied with it. However, project policy is rated at a "moderate" level of 3, indicating relatively low staff satisfaction with project policy; this is an area of concern and may require further research and improvement.
Summary of Results
According to the results of this survey, employees at the Port of Dalian as a whole show a high level of satisfaction with the port's projects and motivational factors. They are more satisfied with the job content, job responsibilities, salary and working environment. However, there is room for improvement in terms of workload and project policies. The workload can be studied further to ensure that it remains within acceptable limits, and attention needs to be paid to project policy in order to improve employee satisfaction.
These results provide valuable insights for Dalian Port management that can be used to develop improvement strategies to enhance the work experience and improve the performance of employees. In the following research, we will further analyse the data to identify specific factors that influence employee satisfaction and make more specific recommendations to meet employee expectations.
Conclusion
In this study, we conducted a comprehensive evaluation of employee satisfaction at Dalian Port and analysed, from the employees' perspective, satisfaction with the various port projects and the motivational and health factors that affect it. The key findings of the study are: 1) The overall level of satisfaction among employees at the Port of Dalian is high, which is a positive sign reflecting the port's performance.
2) Employees are more satisfied with job content and job responsibilities, which emphasizes the critical role of motivators in employee satisfaction.
3) Although employee satisfaction with pay and the working environment is high, satisfaction with project policies is low, suggesting that these policies may need further improvement to meet employee expectations.
Suggestions
Based on the conclusions of this study, here are some recommendations to help Dalian Port further improve employee satisfaction and performance.
Employee engagement helps build a trusting relationship between employees and management and increases employee loyalty to the organization.
Summary
By continuing to optimize employee satisfaction and focusing on motivational factors, health factors and improvements in project policies, the Port of Dalian can improve the work experience of employees and enhance employee performance to achieve organizational success. This study provides the Port of Dalian with important insights into employee satisfaction, which can be used to develop future management strategies and improvement plans. Employee satisfaction has a significant impact not only on individual employee happiness and performance, but also on the success and reputation of the entire organization, so it is critical to continuously focus on and improve it.
Table 1: Descriptive statistics on employee satisfaction scoring
ANTIDIABETIC AND CYTOTOXICITY SCREENING OF FIVE MEDICINAL PLANTS USED BY TRADITIONAL AFRICAN HEALTH PRACTITIONERS IN THE NELSON MANDELA METROPOLE, SOUTH AFRICA
Diabetes mellitus is a growing problem in South Africa and of concern to traditional African health practitioners in the Nelson Mandela Metropole, because they experience a high incidence of diabetic cases in their practices. A collaborative research project with these practitioners focused on the screening of Bulbine frutescens, Ornithogalum longibracteatum, Ruta graveolens, Tarchonanthus camphoratus and Tulbaghia violacea for antidiabetic and cytotoxic potential. In vitro glucose utilisation assays with Chang liver cells and C2C12 muscle cells, and growth inhibition assays with Chang liver cells, were conducted. The aqueous extracts of Bulbine frutescens (143.5%), Ornithogalum longibracteatum (131.9%) and Tarchonanthus camphoratus (131.5%) showed significantly increased glucose utilisation activity in Chang liver cells. The ethanol extracts of Ruta graveolens (136.9%) and Tulbaghia violacea (140.5%) produced the highest increase in glucose utilisation in C2C12 muscle cells. The ethanol extract of Bulbine frutescens produced the most pronounced growth inhibition (33.3%) of Chang liver cells. These findings highlight the potential for the use of traditional remedies in the future management of diabetes, and it is recommended that combinations of these plants be tested in future.
Introduction
South African epidemiological data on diabetes mellitus show an increased incidence of diabetes in urban areas as compared to rural areas and interestingly report a strong genetic component for the development of diabetes in Xhosa-speaking people in the Eastern Cape (Molleutze and Levitt, 2005). The focus of this article is on the potential of medicinal plants used by traditional health practitioners in the Nelson Mandela Metropole to influence glucose utilisation in cells. These experiments were done in an interactive research setting which included collaborating practitioners in the investigative process (van Huyssteen et al., 2004).
Medicinal plants were selected for the study according to their favourable sustainable cultivation profiles established through a jointly developed medicinal garden and included Bulbine frutescens (L.) Willd. (Asphodelaceae), Ornithogalum longibracteatum Jacq. (Ornithogaloideae: Hyacinthaceae), Ruta graveolens L. (Rutaceae), Tarchonanthus camphoratus L. (Asteraceae) and Tulbaghia violacea Harv. (Alliaceae). In addition, ethnobotanical data on R. graveolens reported its use in the management of diabetes mellitus in the Bredasdorp/Elim area, Southern Cape, South Africa (Thring and Weitz, 2006). Furthermore, Allium sativum L. has been used for diabetes in traditional Indian medicine (Mukherjee et al., 2006), which might indicate some potential for T. violacea, because the plants belong to the same family and possess a number of similar active ingredients (Cox and Ballick, 1994; Motsei et al., 2003). Interestingly, practitioners told us that they used T. camphoratus in remedies for diabetes, and this was also documented among communities living in the George/Knysna area (Southern Cape, South Africa) by Yvette van Wijk (personal communication). Water and ethanol were used as extraction solvents because collaborating practitioners used water as the vehicle for most of their remedies, which is supported by numerous literature citations (Eloff, 1998; Grierson and Afolayan, 1999; Inngjerdingen et al., 2004; Kelmanson et al., 2004; Shale et al., 1999). The second extraction solvent was ethanol, because it is relatively inexpensive and freely available to practitioners (Louw et al., 2002).
The objectives of the study were to test aqueous and ethanol extracts of these plants for in vitro glucose utilisation activity into C2C12 muscle and Chang liver cells and cytotoxic activity in Chang liver cells. The focus point of which was to share the results with participating practitioners in order to develop and promote transparent research within the collaboration boundaries.
Plant material collection and extraction procedures
All the plants were collected from the Nelson Mandela Metropolitan (NMM) area. Plant material was collected in the early morning, kept in closed plastic bags and extracted fresh as soon as possible after harvesting (< 2 hours after collection). Freshly harvested plant material was macerated in either deionised water or 99% ethanol at room temperature in just enough solvent to cover it. The solvent was replaced every 24 hrs for three days. Extracts were then vacuum-filtered through Whatman No. 1 filters. Ethanol extracts were concentrated in a rotary evaporator at a temperature of ≤ 67°C for a maximum of three hours. Concentrated ethanol extracts that were not yet dry after three hours, as well as the aqueous extracts, were freeze-dried. Dried extracts were stored in 50 ml polypropylene tubes in the dark at 4°C in a desiccator. Table 1 summarises the plant parts used, month of collection, yield of dried extract in the case of aqueous extracts, and authentication of the plant. In the case of ethanol extracts, yields were not calculated because the dried extracts stuck to the round-bottom flasks in which they were evaporated and could not be removed accurately. For this reason only the starting weights of the plant material were recorded for the ethanol extracts (Table 1).
Routine maintenance of cell cultures
Chang liver cells and C2C12 muscle cells were maintained in 10 cm culture dishes and incubated at 37°C in a 5% CO2 environment. Growth medium consisted of RPMI-1640 (BioWhittaker, Walkerville, USA) supplemented with 10% fetal bovine serum (fbs; Delta Bioproducts, Johannesburg, South Africa). Growth medium was changed every 48 to 72 hrs. When about 70% confluence was reached, cells were detached by washing with phosphate-buffered saline-A (PBSA) and incubating with trypsin 0.25% (v/v) in PBSA (Roche Diagnostics, Mannheim, Germany). Cells were routinely divided at a split ratio of one in six.
Glucose utilisation in C2C12 muscle cells
C2C12 muscle cells were seeded into flat-bottom 96-well culture plates (NUNC, Roskilde, Denmark) at a density of 5 000 cells/well in a volume of 200 µl/well growth medium. The plates were incubated for three and a half days at 37°C without changing the medium. On the day of the assay all reagents were made up with incubation buffer (RPMI-1640 with an adjusted glucose concentration of 8 mM using phosphate-buffered saline (PBS) plus 0.1% (m/v) BSA). The spent growth medium was aspirated and 50 µl of incubation buffer with or without insulin (1 µM; human, recombinant; Roche, Penzberg, Germany) or test samples (0.5 and 50 µg/ml) were added to each well. The plates were incubated for one hour at 37°C. After incubation, 10 µl aliquots were taken from each well and transferred to an empty 96-well microtiter plate. Two hundred microliters of the glucose oxidase reagent (Sera-pak Plus, Hong Kong) was added to each 10 µl aliquot and developed for 15 minutes at 37°C. The plate was read after development at 492 nm in a multiplate reader (Multiscan MS® version 4.0, Labsystem® type 352).
Glucose standards were run with each experiment. The standards were made up with the incubation buffer (containing 0.1% (m/v) BSA) at concentrations of 2, 4, 6 and 7 mM diluted with PBS. PBS was also used as the blank. The glucose concentration (± 8 mM) in the incubation buffer given to the cells at zero time was also determined with the same method. Results were expressed as percentage glucose uptake as compared to control cells, for which the glucose uptake was assigned as 100%.
% GU = [(Avg T0 − TS) / (Avg T0 − Avg NC)] × 100%

Key: Avg = average; GU = glucose uptake or utilisation; NC = negative control measurement; T0 = zero time measurement; TS = test sample measurement

Data were recalculated in some cases to allow for the variation in insulin response between experiments. These variations were attributed to slight differences in seeding densities, time of cell growth and differentiation of the cells. In these cases, the data were expressed as a percentage of the insulin response (0%):

% GU as a function of IR = [(% GU of TS − Avg % GU of insulin) / Avg (% GU of insulin)] × 100%

Key: Avg = average; GU = glucose uptake or utilisation; IR = insulin response
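A minimal sketch of these percent-utilisation calculations (all values are illustrative, not measured data):

```python
# Sketch of the two glucose-utilisation formulas above (illustrative numbers).

def pct_gu(t0_avg, ts, nc_avg):
    """Glucose utilisation of a test sample relative to the negative control."""
    return (t0_avg - ts) / (t0_avg - nc_avg) * 100.0

def pct_gu_vs_insulin(gu_ts, gu_insulin_avg):
    """Utilisation expressed relative to the insulin response (insulin = 0%)."""
    return (gu_ts - gu_insulin_avg) / gu_insulin_avg * 100.0

# Hypothetical glucose readings (mM remaining in the medium after incubation):
t0_avg, nc_avg, ts = 8.0, 7.0, 6.7
print(round(pct_gu(t0_avg, ts, nc_avg), 1))       # 130.0 (% of control uptake)
print(round(pct_gu_vs_insulin(130.0, 125.0), 1))  # 4.0 (% above insulin)
```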
Glucose utilisation assay in Chang liver cells
Chang liver cells were seeded into flat-bottom 96-well culture plates (NUNC, Roskilde, Denmark) at a density of 6 000 cells/well in a volume of 200 µl/well growth medium. After about 72 hours, 10 µl of fresh growth medium, containing either no additive, metformin (1 µM; Helm AG, Hamburg, Germany) or test sample (0.125 or 12.5 µg/ml), was added to the 200 µl medium already in the well (ethanol extracts contained a maximum of 1.25% (v/v) DMSO, which was used for the initial dissolution of these extracts). The final metformin concentration in the medium was 1 µM and the extract concentrations were 0.125 and 12.5 µg/ml. The cells were exposed to these extracts or metformin for 48 hours before the glucose utilisation assay was done.
On the day of the glucose utilisation assay, incubation buffer (RPMI-1640 diluted with phosphate-buffered saline (PBS) to 8 mM glucose with 0.1% (m/v) bovine serum albumin (BSA; Roche Diagnostics, Germany)) was prepared with and without metformin (1 µM) or test sample (0.5 or 50 µg/ml). The spent growth medium was aspirated from the wells and 50 µl of each solution were added per well. The cells were incubated at 37ºC for a further 3 hours for glucose utilisation to occur.
After three hours of incubation at 37°C, glucose utilisation was determined as described above for C2C12 muscle cells. Concurrently with the glucose assay, a viability assay utilising MTT (Sigma, Germany) was also done in separate wells to detect any possible liver cell toxicity of the test samples during the 48-hour exposure period (Mosmann, 1983). One hundred microliters of 0.5 mg/ml MTT in growth medium was added per well and incubated at 37°C for 3 hours. At the end of the incubation time, this solution was aspirated and 100 µl DMSO was added to each well to dissolve the formazan crystals formed in the cells. The plate was shaken for 60 seconds to dissolve the formazan salts and read at 540 nm with a multiplate reader (Multiscan MS® version 4.0, Labsystem® type 352). Results were expressed as percentage glucose uptake as compared to control cells, for which the glucose uptake was assigned as 100%. Results from the MTT viability assay were taken into account to normalise the data (i.e. to compensate for differences in cell numbers with different extracts).
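One plausible form of the viability correction mentioned above is to divide uptake by relative viability; the exact normalisation used in the paper is not stated, so the formula and numbers below are assumptions for illustration:

```python
# Sketch (assumed formula): normalising percent glucose uptake by MTT
# viability so wells with fewer viable cells are not scored as low uptake
# per cell. All numbers are illustrative.

def normalise_gu(gu_percent, mtt_sample, mtt_control):
    """Scale uptake by relative viability (sample A540 / control A540)."""
    viability = mtt_sample / mtt_control
    return gu_percent / viability

# Hypothetical: a well shows 120% uptake but only 80% of control viability.
print(round(normalise_gu(120.0, 0.40, 0.50), 1))  # 150.0
```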
Cytotoxicity assay on Chang liver cells
Chang liver cells were seeded into flat-bottom 96-well culture plates (NUNC, Roskilde, Denmark) at a density of 6 000 cells/well in a volume of 200 µl/well growth medium. The next morning, the medium was replaced with 200 µl fresh growth medium or growth medium containing different concentrations of crude extracts (125 and 62.5 µg/ml). Additionally, ethanol extracts contained 0.31 and 0.63% (v/v) DMSO (Associated Chemical Enterprises, South Africa) to dissolve the extracts and corresponding negative controls also contained these amounts of DMSO. The cells were exposed to the various buffers for 48 hrs at 37ºC after which the MTT assay was performed as described above.
Results are expressed as percentage growth inhibition, taking the growth of the corresponding negative control to be 100%.
Statistical analysis
Results were statistically analysed using GraphPad Prism® 4 (GraphPad Software, 2003). Means and standard error of the mean (SEM) values were calculated and used in statistical tests to determine significance. The unpaired t-test was used to compare the negative control values with sample values, and results were considered statistically significant if p < 0.05. Glucose utilisation responses were further compared to the response of the relevant positive control to ascertain whether the response produced by the extract was significant or not. Table 2 provides a summary of the average percentage glucose uptake achieved in C2C12 muscle cells by aqueous and ethanol extracts (0.5 and 50 µg/ml). Aqueous extracts that increased glucose utilisation more than insulin included both concentrations of Bulbine frutescens (130.1%; 121.3%) and the 0.5 µg/ml concentration of Ornithogalum longibracteatum (129.2%), Ruta graveolens (124.7%) and Tarchonanthus camphoratus (128.4%). Significant concentration-independent responses were observed for B. frutescens (p < 0.05), O. longibracteatum (p < 0.005) and T. camphoratus (p < 0.05). The largest increase in glucose utilisation for ethanol extracts was produced by Tulbaghia violacea (0.5 µg/ml; 140.5%), which showed a significant concentration-independent response (p < 0.05). The two extracts of O. longibracteatum were the only ethanol extracts that produced a concentration-dependent increase in glucose utilisation (p < 0.05). Averages and p-values were calculated with the two-tailed unpaired t-test from three to six experiments per sample and with 12 replicates per sample for each experiment (* p < 0.05; ** p < 0.005; *** p < 0.0005).
Results
Extracts were also tested in combination with insulin to see whether the combination was more effective in increasing glucose utilisation in C2C12 muscle cells than either treatment alone. All glucose utilisation values were recalculated to obtain the percentage response compared to an insulin response of 0%. The 50 µg/ml aqueous extracts of R. graveolens (p < 0.05) and T. camphoratus (p < 0.005) produced statistically significantly increased combined responses compared to the extracts alone (Figure 1). Additionally, the glucose utilisation activity of the 50 µg/ml concentration of the ethanol extracts of B. frutescens (p < 0.05), R. graveolens (p < 0.05), T. camphoratus (p < 0.05) and T. violacea (p < 0.005) was significantly potentiated when they were combined with insulin, which may point to an additive or synergistic effect with insulin (Figure 2). However, glucose utilisation activity significantly decreased when insulin was added to the 0.5 µg/ml ethanol extracts of T. camphoratus (p < 0.05) and T. violacea (p < 0.0005) in comparison to the response with these extracts alone. Table 3 provides a summary of the average percentage glucose uptake achieved in Chang liver cells by aqueous and ethanol extracts tested at 0.5 and 50 µg/ml. The glucose uptake values were normalised according to cell viability variations (determined by the MTT assay) that might have occurred during the 48-hour exposure period to the different extracts. Aqueous extracts that significantly increased glucose uptake into Chang liver cells were B. frutescens (0.5 µg/ml; 143.5%), O. longibracteatum (0.5 µg/ml; 131.9%), T. camphoratus (50 µg/ml; 131.5%) and T. violacea (50 µg/ml; 124.5%). Significant concentration-independent responses were recorded for the aqueous extracts of B. frutescens (p < 0.0001) and O. longibracteatum (p < 0.0005), and significant concentration-dependent responses for the aqueous extracts of T. camphoratus (p < 0.05) and T. violacea (p < 0.005). The ethanol extract of T. camphoratus (0.5 µg/ml; 126.9%) produced the greatest glucose utilisation response of the ethanol extracts on Chang liver cells. In summary, the aqueous extracts produced better glucose uptake into Chang liver cells than the ethanol extracts. Interestingly, the aqueous extracts of B. frutescens (0.125 µg/ml) and T. violacea (12.5 µg/ml) were the only ones to produce significant growth inhibition, although very mild.
Discussion
These scientific findings were communicated to participating traditional health practitioners through feedback seminars. The feedback seminars were based on the interactive workshops initially used in the collaboration to exchange information regarding disease conditions (van Huyssteen et al., 2004). The seminars described the methodology employed in the screenings and the subsequent findings as accurately as possible with the assistance of interpreters, demonstrations, photos and translated notes. The interactive nature of the seminars encouraged two-way discussions, and practitioners shared that they used Bulbine frutescens and Ornithogalum longibracteatum in their diabetic remedies.
B. frutescens and O. longibracteatum produced similar glucose utilisation profiles, with B. frutescens producing slightly more potent glucose utilisation activity than O. longibracteatum. The extracts of B. frutescens increased glucose utilisation (except for the ethanol extracts in Chang liver cells) in both cell lines in a concentration-independent manner. Similar dose-independent responses have been observed for knipholone, one of the active ingredients contained in B. frutescens, in a leukotriene biosynthesis assay (Wube et al., 2006). In addition, glucose utilisation was significantly enhanced for the 50 µg/ml ethanol extract when combined with insulin as compared to the response of the extract alone. However, the ethanol extract of B. frutescens was found to have toxic effects on the growth of Chang liver cells at concentrations of 62.5 and 125 µg/ml. Apart from the glucose utilisation activity, the aqueous extract of O. longibracteatum produced significant, but minor, growth inhibitory effects on Chang liver cells (62.5 µg/ml; 18.14 ± 4.14%). Concern about the potential toxicity of O. longibracteatum has been raised (Van Wyk et al., 2002) and toxicity evaluations have provided unequivocal results (Mulholland et al., 2004). The bulb that was extracted for this study was approximately three years old and had flowered during this time. The low toxicity observed in this study seems to support observations in the literature which suggest that pre-flowering young plants, as well as the fruits (Watt and Breyer-Brandwijk, 1962) and leaves (Verschaeve et al., 2004) of the plant, are more toxic.
The ethanol extract of Ruta graveolens produced a significant increase in glucose utilisation activity in C2C12 muscle cells. This response might have been caused by the presence of the hypoglycaemic flavonoid quercetin (Mukherjee et al., 2006) in the R. graveolens extracts. In addition, the presence of insulin (1 µM) significantly potentiated the glucose utilisation responses of both the aqueous and ethanol extracts (50 µg/ml), by 4 and 14% respectively, as compared to the activity of the extracts alone. Interestingly, the antioxidant action of quercetin (Wube et al., 2006) has been shown to increase insulin sensitivity in muscle cells (Laight et al., 2000). R. graveolens produced no toxicity in any of the screens, except for growth stimulation produced by the aqueous extract on Chang liver cells. However, the continued use of R. graveolens has been disputed by many scientists due to potential toxicity and serious side-effects induced by this plant (Van Wyk et al., 2002).
Tarchonanthus camphoratus contains many compounds that may be potentially hypoglycaemic, including saponins (Mukherjee et al., 2006), flavanones and tannins (Scott and Springfield, 2005). It was thus interesting to see that the aqueous extracts were more effective at increasing glucose utilisation in Chang liver cells and the ethanol extracts in C2C12 muscle cells, both following concentration-independent trends. The glucose utilisation responses in C2C12 muscle cells were significantly influenced by the presence of insulin, and this phenomenon warrants further investigation. The cytotoxicity trend of the ethanol extracts in Chang liver cells suggests increased toxicity with increasing concentration. Participating practitioners also noted that they always used T. camphoratus in combination with other plants to "stop the strongness", despite the fact that they used water in most of their remedies. However, the literature notes reassuringly that the leaves of T. camphoratus are regularly browsed by a variety of wild and domestic animals (Venter and Venter, 2002; Watt and Breyer-Brandwijk, 1962), especially during summer (Parker et al., 2003), which was when the leaves for this study were collected.
The green parts and flowers of Tulbaghia violacea have long been consumed as vegetables (Marshalkar, 2003; Roberts, 1990) and, more recently, the plant showed an absence of genotoxicity in the Ames and VITOTOX® tests (Elgorashi et al., 2003). However, cytotoxicity was observed at 62.5 and 125 µg/ml for the ethanol extract on Chang liver cells. At the same time, T. violacea contains several ingredients, such as steroidal saponins, quercetin, kaempferol, sugars, and/or sulfur-containing compounds (Duncan et al., 1999), that might be considered glucose lowering. Similar sulfur-containing compounds in garlic have been shown to be hypoglycaemic in diabetic animals, an effect ascribed to their anti-oxidant activity and the interaction of these compounds with thiol-containing proteins (Mukherjee et al., 2006). Accordingly, the aqueous extract of T. violacea (50 µg/ml) showed significantly increased glucose uptake activity in Chang liver cells (124.5%), and the ethanol extract (0.5 and 50 µg/ml) in C2C12 muscle cells (140.5% and 117.7%, respectively). Similar to the ethanol extracts of T. camphoratus, the presence of insulin significantly influenced the glucose utilisation response in muscle cells as compared to the response of the extracts alone.
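The percentage values quoted above (e.g. 124.5% glucose uptake for the aqueous T. violacea extract in Chang liver cells) express the response of a treated well relative to an untreated control set at 100%. A minimal sketch of that normalization, using hypothetical replicate readings rather than data from this study:

```python
def percent_of_control(sample_readings, control_readings):
    """Mean glucose utilisation of a treated well expressed as a
    percentage of the untreated control mean (control = 100%)."""
    sample_mean = sum(sample_readings) / len(sample_readings)
    control_mean = sum(control_readings) / len(control_readings)
    return 100.0 * sample_mean / control_mean

# Hypothetical replicate readings (arbitrary units of glucose consumed)
control = [10.0, 9.8, 10.2]
treated = [12.4, 12.5, 12.6]
print(percent_of_control(treated, control))  # prints 125.0
```

Values above 100% indicate enhanced utilisation relative to the control; significance would still require a statistical test across replicates.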
Returning to the ethnobotanical context, not all the plants in a diabetic remedy may necessarily have antihyperglycaemic activity (Alarcon-Aguilara et al., 1998). Practitioners in this collaboration explained that additional plants used in the remedies were added either to enhance the activity of the active plants or to stop the toxic side-effects of another plant. This is in accordance with the literature, which states that cytotoxic concerns are often addressed by indigenous cultures through using combinations of very small amounts of different plants in diabetic remedies. Recent evidence on the pathology of diabetes suggests an increasing number of targets that might delay the onset of type 2 diabetes mellitus (T2DM), i.e., treating causes of insulin resistance, inflammation, the procoagulative state, obesity, etc. In light of this, natural products or crude extracts of natural products may exhibit more than one mechanism that may potentially delay the development, improve the treatment and/or prevent the complications of T2DM (Raghav et al., 2006). In conclusion, the favourable glucose utilisation profiles of most of the plants tested in the study highlight the potential effectiveness of traditional remedies in the management of diabetes mellitus.
However, preliminary toxicity results reinforce the need to determine optimal and safe dosing regimens if these remedies are to be used.
Crystal structure of the Yersinia enterocolitica type III secretion chaperone SycT.
Several Gram-negative pathogens deploy type III secretion systems (TTSSs) as molecular syringes to inject effector proteins into host cells. Prior to secretion, some of these effectors are accompanied by specific type III secretion chaperones. The Yersinia enterocolitica TTSS chaperone SycT escorts the effector YopT, a cysteine protease that inactivates the small GTPase RhoA of targeted host cells. We solved the crystal structure of SycT at 2.5 Å resolution. Despite limited sequence similarity among TTSS chaperones, the SycT structure revealed a global fold similar to that exhibited by other structurally solved TTSS chaperones. The dimerization domain of SycT, however, differed from that of all other known TTSS chaperone structures. Thus, the dimerization domain of TTSS chaperones does not likely serve as a general recognition pattern for downstream processing of effector/chaperone complexes. Yersinia Yop effectors are bound to their specific Syc chaperones close to the Yop N termini, distinct from their catalytic domains. Here, we showed that the catalytically inactive YopT(C139S) is reduced in its ability to bind SycT, suggesting an ancillary interaction between YopT and SycT. This interaction could keep the protease inactive prior to secretion or could influence the secretion competence and folding of YopT.
Type three secretion systems (TTSSs)1 are utilized by various Gram-negative bacteria, pathogens of animals and plants, to inject effector proteins into host cells, where they manipulate cellular functions (1, 2). TTSSs of pathogens share a common ancestor with the most widespread TTSS, the bacterial flagellum (3). The core components of TTSS transport machineries are highly conserved, and some are functionally exchangeable among TTSSs (4, 5). In contrast, TTSS effectors, some of them accompanied by dedicated chaperones, are relatively species-specific because of the particularities of every host-pathogen relationship. TTSS chaperones are characterized as small, acidic proteins that lack ATPase activity and associate with one or several TTSS secretion substrates within bacterial cells (6-8). Several functions have been attributed to TTSS chaperones. Phenotypically, these chaperones are required for efficient translocation of their assisted effectors. However, their mode of action remains elusive. Some effectors are poorly soluble in the absence of their chaperones and are apparently stabilized upon their binding. Further, the prevention of unproductive interactions and the maintenance of secretion-competent folding states are discussed. In addition, it was proposed that effector/chaperone complexes could constitute three-dimensional secretion signals (9) and that chaperones could govern a hierarchical order of effector secretion (9-11). A recent study suggests the guidance of effectors by their chaperones toward the TTSS ATPase (12).
TTSS chaperones were recently categorized into three classes (8). The class I chaperones specific to a single effector belong to subgroup IA. Class IB comprises the promiscuous chaperones assisting more than one effector. Chaperones of class II assist the TTSS translocators, and the flagellar TTSS chaperones constitute class III according to this classification. Crystal structures of representatives of class IA (Salmonella SicP (13) and SigE (14); Yersinia SycE (15), SycH (16), and SycN/YscB (17); Escherichia coli CesT (14); Pseudomonas syringae AvrPphF ORF1 (18)), class IB (Shigella flexneri Spa15 (19)), and class III (Aquifex aeolicus FliS (20)) support this classification. The class IA chaperones SicP, SigE, SycE, CesT, and AvrPphF ORF1, although not similar on a sequence level, all form homodimers and share a common fold. The SycN/YscB complex is an exception in that SycN and YscB form a heterodimer (17) with a fold similar to that of the typical homodimers. The SycH fold also resembles that of the aforementioned chaperones (16); however, its biologically relevant oligomerization state is not unambiguously clear (21). The fold of the class IB chaperone Spa15 is very similar to that of class IA chaperones; dimer formation, however, is different, with the subunits rotated relative to each other (19). The crystal structure of the flagellar secretion chaperone FliS (class III) reveals a fold distinct from that of pathogenicity-related TTSS secretion chaperones (20). This suggests that in contrast to the conserved transport machineries of flagella and the TTSSs of pathogens, their accessory chaperones have different evolutionary origins. Structures of class II chaperones have not been solved so far, but the identification of a tetratricopeptide-like repeat motif in Yersinia SycD/LcrH (22) supports the view of a fold different from that of class I representatives.
Recently, another distinct class of TTSS-related chaperones was disclosed with the crystallization of the EspA filament protein complexed to its antipolymerization chaperone CesA of enteropathogenic E. coli (23). CesA exhibits a highly extended three-helix hairpin forming extensive coiled-coil interactions with EspA.
The effector domains bound by class I chaperones typically encompass about 50-100 amino acid residues that are localized at the N terminus directly following the putative secretion signal. Co-crystallization of TTSS chaperones together with effector fragments shed light on the binding mode of these chaperones (9, 13, 16). The effector domain is wrapped around the chaperone dimer in an expanded form retaining secondary structure elements. Work from several groups further suggests that the C-terminally located effector domains are folded and catalytically active in the presence of their respective chaperones (9, 14, 21, 24). Only recently, the structure of the secretable regulatory TTSS component YopN from Yersinia in complex with the heterodimeric chaperone SycN/YscB was presented (17). This structure unambiguously confirms that the influence of chaperone binding on folding of the secretion substrate is confined to the chaperone-binding site and does not extend globally.
The TTSS of pathogenic Yersinia species is used to paralyze host cells such as macrophages by injection of effectors, called Yops (2). One of these effectors, YopT, is the representative of a novel family of cysteine proteases with a catalytic triad formed by residues Cys-139, His-258, and Asp-274 (25,26). In the Yersinia cytosol, YopT is accompanied by the specific Yop chaperone SycT (27). Here, we report on the Yersinia enterocolitica SycT structure, a representative of the class IA TTSS chaperones. The only close homologue of SycT (56% identity) is found in the entomopathogenic Photorhabdus luminescens (28). SycT is 20% identical to Yersinia SycE, which has been structurally solved (15,29,30).
The SycT-binding site of YopT was located within the first 124 amino acid residues (27). We demonstrated that a C139S mutation of YopT reduces its SycT binding efficiency, suggesting an accessory interaction between SycT and the catalytic domain of YopT.
MATERIALS AND METHODS
Recombinant DNA Techniques-The cloning of sycT was as follows. The SycT coding sequence was amplified by PCR using primers 5′-CATATGCAGACAACCTTCACAGAACTTATGCA-3′ and 5′-GTCGACTCAGATGAATAATATAGGTGATGTCG-3′, thereby introducing flanking NdeI and SalI restriction sites (underlined). Bacterial lysate from Y. enterocolitica strain WA-314 served as template. The PCR product was subcloned into the TOPO TA cloning vector (Invitrogen), and the sycT fragment was excised with NdeI and SalI and inserted into NdeI-SalI-cleaved pWS. pWS is a derivative of pMS470Δ8 (32) generated by cleavage with NdeI and HindIII and insertion of a linker hybridized from 5′-phosphorylated oligonucleotides 5′-TATGAAGCTTAGATCTGTCGACGGATC-3′ and 5′-AGCTGATCCGTCGACAGATCTAAGCTTCA-3′.
Expression of GST-fused proteins in E. coli (BL21) was induced with 0.4 mM isopropyl-1-thio-β-D-galactopyranoside for 3 h in cells growing at 37°C when A600 nm reached 0.5-0.7. Cells were lysed in phosphate-buffered saline lysis buffer supplemented with 1 mM DTT, 100 µM phenylmethylsulfonyl fluoride, and lysozyme (200 µg/ml final concentration). The GST-fused proteins were purified from bacterial lysates by affinity chromatography using glutathione-coupled Sepharose beads (Amersham Biosciences). Recombinant proteins were eluted off the glutathione beads with 20 mM glutathione in 100 mM Tris-HCl (pH 7.4) and 50 mM NaCl.
Analysis of SycT Binding to GST-YopT Fusion Variants-Glutathione-Sepharose resin was loaded with equal amounts of GST-YopT/SycT, GST-YopT(C139S)/SycT, or GST-YopT(Δ1-22)/SycT. Loaded beads were incubated overnight (gently rotating) in 1 or 50 ml of phosphate-buffered saline. Supernatant was removed, and beads were eluted in SDS sample buffer. The eluted samples were resolved on SDS-PAGE and analyzed by Western blotting with antibodies raised against YopT and SycT.
Crystallization and Structure Determination-Crystals of SycT were grown at 20°C within 4 days to their final size of 500 × 100 × 50 µm³ using the hanging drop vapor diffusion method. The drops contained equal volumes of protein (40 mg/ml) and reservoir solution (1.2 M DL-malic acid, pH 7.0, and 100 mM Bis-Tris propane, pH 7.0). Before exposure to X-rays, crystals were soaked for 30 s in a solution of 1.2 M DL-malic acid, pH 7.0, 100 mM Bis-Tris propane, pH 7.0, and 25% glycerol, and subsequently frozen in a stream of cold nitrogen gas at 100 K (Oxford Cryosystems). Multiple anomalous dispersion methods were performed using synchrotron radiation at the BW6 beamline at the Deutsches Elektronen Synchrotron (DESY) in Hamburg, Germany. However, the crystals suffered severely during exposure to synchrotron radiation, which resulted in increasing Rmerge values with exposure time. Furthermore, SycT crystals showed high anisotropy, in particular at high resolution, which caused tremendous difficulties in obtaining suitable data sets. Molecular replacement using the structure of the monomer or the dimer of SycE (Protein Data Bank ID: 1JYA) as a poly-Ala model failed, most likely because: 1) the SycT crystals have high anisotropy; 2) the structure of SycT shows significant differences in its dimerization domain as compared with other Syc proteins; and 3) the primary sequences of both molecules show only 20% sequence identity.
Testing a broad variety of SycT crystals incubated with various heavy metal atom solutions, it turned out that crystals treated with K2PtCl6 for 6 h were most suitable for structure elucidation. After a successful wavelength scan for the platinum absorption edge, data sets were collected on a MAR CCD detector at the peak (1.0719 Å), inflection point (1.0723 Å), and remote (0.98 Å) wavelengths. Images in frames of 1° were recorded over a range of 360°, resulting in complete anomalous data sets. The space group of the SycT crystals was C2 with unit cell dimensions of a = 91 Å, b = 46 Å, c = 34 Å, and β = 105°. The images were processed with DENZO and SCALEPACK (34) and scaled further with TRUNCATE, CAD, and SCALEIT of the CCP4 software package (35). The Pt4+ position in the asymmetric unit cell was localized by a combination of direct and difference Patterson search methods using ShelXD (36). Initial phase angles were calculated with MLPHARE, and the electron density was calculated by Fourier transformation and improved by solvent flattening (35). In this way, the hand ambiguity could be resolved. However, the quality of the multiple anomalous dispersion electron density was rather poor, most likely because of the radiation damage the crystals sustained during exposure. Calculating phase angles at the single Pt4+ peak wavelength (single wavelength anomalous dispersion) and using the program SHARP (37) resulted in an electron density that allowed interpretation of the secondary structure elements of SycT. Conceivable β-strands and helices were traced as polyalanine residues. Subsequent phase combination using the preliminary model and the experimental Pt4+ phase angles was performed with SHARP (37). Using these improved phase angles, we looked for further derivatives among our measured data sets.
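The three MAD wavelengths quoted above can be sanity-checked by converting them to photon energy with E = hc/λ (hc ≈ 12.398 keV·Å); the peak and inflection wavelengths land near the Pt L-III absorption edge at roughly 11.56 keV. A small illustrative conversion, not part of the original analysis:

```python
def wavelength_to_kev(wavelength_angstrom):
    """Convert an X-ray wavelength in Å to photon energy in keV
    via E = hc/lambda, with hc ~ 12.3984 keV*Å."""
    return 12.3984 / wavelength_angstrom

# The three MAD data-collection wavelengths reported in the text
for name, lam in [("peak", 1.0719), ("inflection", 1.0723), ("remote", 0.98)]:
    print(f"{name}: {wavelength_to_kev(lam):.3f} keV")
```

The remote wavelength sits well above the edge, as expected for a dispersive reference data set.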
We were able to identify a second derivative by difference Fourier analysis and could locate two Pb+ heavy metal atom-binding sites in a data set in which crystals had been soaked for 6 h with Pb(CH3)3Cl. However, the quality of the single wavelength anomalous dispersion peak Pb+ data set was rather weak as compared with the Pt4+ data set, revealing a figure of merit of 0.62/0.5 (centric/acentric) and a phasing power of 1.4; the Pt4+ data set showed a figure of merit of 0.77/0.68 and a phasing power of 3.4 (Table I). Although the phasing power of the Pb+ derivative was quite low (mainly caused by the low occupancy of the heavy metal atoms), it could successfully be used to enhance the signal-to-noise ratio by averaging the Pt4+ and Pb+ electron densities, which allowed us to interpret and complete the SycT model (both data sets, Pt4+ and Pb+, showed acceptable isomorphous differences). The calculated electron density allowed identification of characteristic side chain residues, thus completing the structure. Continuous model building and refinement were performed with the interactive three-dimensional graphics program MAIN (38) and REFMAC5 (39).
There was no electron density visible for the 2 N-terminal residues and 18 C-terminal residues, whereas residues 3-112 could be built into well-defined electron density (see Fig. 2D). The protein model was refined with REFMAC5 using TLS (TLS parameters describe anisotropic motion; an anisotropic U factor is derived for each atom in the TLS group) (40), conventional crystallographic rigid body, and positional and anisotropic temperature factor refinements (39), resulting in the current crystallographic values of Rcryst = 24.3% and Rfree = 25.9%, with root mean square deviations of bond lengths = 0.007 Å and bond angles = 1.25° (41). The slightly increased R-values for this resolution are due to the anisotropic crystalline order causing deterioration in data quality. The current SycT model comprises 910 non-hydrogen atoms and 22 water molecules per asymmetric unit cell. Data for SycT have been deposited in the RCSB Protein Data Bank with Protein Data Bank ID code 2bho.
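The Rcryst and Rfree values above follow the standard crystallographic residual R = Σhkl ||Fobs| − |Fcalc|| / Σhkl |Fobs|, with Rfree computed on reflections withheld from refinement. A toy illustration with made-up amplitudes, not data from this structure:

```python
def r_factor(f_obs, f_calc):
    """Crystallographic residual R = sum(||Fobs| - |Fcalc||) / sum(|Fobs|),
    summed over reflections; multiply by 100 to report as a percentage."""
    numerator = sum(abs(abs(o) - abs(c)) for o, c in zip(f_obs, f_calc))
    denominator = sum(abs(o) for o in f_obs)
    return numerator / denominator

# Made-up structure factor amplitudes for three reflections
f_obs = [100.0, 50.0, 25.0]
f_calc = [90.0, 55.0, 20.0]
print(f"R = {100 * r_factor(f_obs, f_calc):.1f}%")  # prints R = 11.4%
```

A small gap between Rwork and Rfree (here 24.3% vs 25.9%) is the usual indicator that the model is not overfitted to the working reflections.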
Co-expression and Purification of YopT/SycT Complexes-Work from several groups suggests that the influence of class I TTSS chaperones on the folding of their substrates is restricted to the binding site, generally encompassing about 50-100 amino acid residues (9, 14, 17, 21, 24). To understand the functioning of these chaperones, high resolution structural data on complete type III effectors in complex with their respective chaperones are required. We established a co-expression and co-purification protocol to facilitate crystallization trials with YopT/SycT complexes. The bicistronic organization of yopT and sycT on the Yersinia virulence plasmid was utilized to construct a plasmid for translational fusion of YopT to GST, allowing the simultaneous production of SycT. The yield was ~15 mg of YopT/SycT per liter of bacterial culture. We did not succeed in growing any YopT/SycT crystals. However, we observed some degree of instability of YopT after cleaving off GST, which could interfere with crystallization. Assuming autoproteolytic cleavage, we introduced a C139S mutation into YopT, which renders the protease inactive (25). Further, an N-terminally truncated form of YopT was produced with the first 22 amino acid residues deleted. Expression and purification of both variant YopT/SycT complexes proved feasible and resulted in homogeneous material. However, up to now, we have not succeeded in yielding crystals from this material.
[Residual footnotes from Table I: e, Cullis R-factor = lack of closure residual/isomorphous difference (acentric/centric). f, phasing power = r.m.s. FH/lack of closure, where FH is the calculated heavy atom contribution (anomalous acentric). g, figure of merit = ⟨Σα P(α)e^(iα)/Σα P(α)⟩ after density modification (35), where α is the phase and P(α) is the phase probability distribution (acentric/centric). h, R = Σhkl ||Fobs| − |Fcalc||/Σhkl |Fobs|, where Rfree (39) is calculated without a sigma cutoff for a randomly chosen 10% of reflections not used for structure refinement, and Rwork is calculated for the remaining reflections. i, deviations from ideal bond lengths/angles. j, number of residues in favored/allowed/outlier regions.]
Catalytically Inactive YopT(C139S) Is Reduced in Its Ability to Bind SycT-When we accomplished the YopT(C139S)/SycT purification, we observed that the amount of co-purified SycT was reduced as compared with YopT/SycT or YopT(Δ1-22)/SycT preparations. The effect was rather subtle but could be intensified by prolonged incubation of the GST-YopT(C139S)/SycT complexes in dilute solutions followed by affinity purification on glutathione-Sepharose resin. The phenomenon is illustrated in Fig. 1. Glutathione-Sepharose resin was loaded with equal amounts of purified GST-YopT/SycT, GST-YopT(C139S)/SycT, and GST-YopT(Δ1-22)/SycT. Loaded beads were incubated overnight in 1 (A) or 50 ml (B) of phosphate-buffered saline. Beads were centrifuged, eluted with SDS loading buffer, and subjected to SDS-PAGE. The immunoblot was developed with antisera specific to YopT and SycT. The intensified dissociation of SycT from GST-YopT(C139S) upon dilution can be observed (B). In contrast, deletion of the YopT N terminus (Δ1-22) did not interfere with SycT binding. We thus conclude that the catalytic center of YopT also interacts with SycT. This finding suggested SycT properties undescribed for TTSS chaperones and motivated us to recombinantly express SycT in E. coli to crystallize it.
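The observation that dilution intensifies dissociation of SycT from GST-YopT(C139S) is what simple binding equilibria predict: for a 1:1 complex with dissociation constant Kd, the bound fraction falls as the total concentration drops toward Kd. A sketch of this mass-action behaviour with an arbitrary, hypothetical Kd (not a measured value for the SycT/YopT pair):

```python
import math

def fraction_bound(total, kd):
    """Equilibrium fraction of partners in complex for A + B <-> AB,
    with both partners at the same total concentration `total` and
    dissociation constant `kd` (same units). Solves the quadratic
    [AB]^2 - (2*total + kd)*[AB] + total^2 = 0 for its physical root."""
    b = 2 * total + kd
    ab = (b - math.sqrt(b * b - 4 * total * total)) / 2
    return ab / total

kd = 1.0  # hypothetical Kd in µM
for total in (10.0, 0.2):  # concentrated vs ~50-fold diluted
    print(f"{total} µM each: {fraction_bound(total, kd):.2f} bound")
```

At 10 µM each, about 73% of the partners remain complexed; at 0.2 µM, only about 15%, mirroring the dilution-dependent loss of co-purified SycT seen on the immunoblot.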
Crystal Structure of SycT-SycT was heterologously produced in E. coli in its outright form without an affinity tag and purified to near homogeneity. We managed to grow SycT crystals and solved the structure at 2.5 Å resolution by multiple wavelength anomalous dispersion methods using a platinum derivative. Our model includes residues 3-112 and lacks 18 disordered residues at the C terminus. SycT has an α-(β)6-α fold and appears as a dimer, although the asymmetric unit cell comprises only one subunit (Fig. 2, A and B). Dimer formation of SycT was also in accordance with analytical size exclusion chromatography.2 Basically, the SycT fold is very similar to that of all other class I chaperones crystallized so far (13-19). However, in SycT, α-helix H2, which mediates dimerization of all class I TTSS chaperones structurally analyzed so far, is replaced by an outstretched loop-like structure. Among the structurally solved TTSS chaperones, SycE is the one closest to SycT (20% sequence identity). A superimposition of the SycT and SycE structural models (Fig. 2C) illustrates the characteristic similarities as well as the differences. The N- and C-terminal α-helices of both chaperones (H1 and H3) are almost identically oriented in the monomer. An antiparallel β-sheet formed of five β-strands in SycE is found in a very similar orientation in the SycT structure. Strands S1-S5 of SycE match well with strands S1-S5 of SycT (Fig. 2, C and D). An interesting difference between SycT and SycE concerns the orientation of the loop connecting α-helix H1 and β-strand S1. In the SycT structure, an additional small β-strand, designated S0, intercalates into this connecting loop. The major difference between the structures of SycT and SycE, however, is found at the dimerization interface. The dimerization of SycT is mainly brought about by hydrophobic interactions (Fig. 3, A-C).
In particular, Trp-47 of one subunit is in close contact with Trp-69 and Pro-70 of the second subunit, and Phe-65 makes contact with Gln-49 and Trp-84. Trp-69 is in contact with Tyr-43, His-44, Trp-47, Gln-49, and Gln-86. Finally, Trp-84 interacts with Phe-65 and Ala-71. The dimer interface of SycE, also mainly stabilized via hydrophobic contacts (15), is arranged differently, as depicted in Fig. 3D. The differences in the organization of the dimer contacts of SycT and SycE result in a different tilting of the respective subunits (Figs. 2C and 3, C and D; note the limited congruence of the left-hand subunits).
DISCUSSION
The Three-dimensional Secretion Signal-Birtalan et al. (9) have recently suggested that chaperone/effector complexes may function as three-dimensional secretion signals targeting the secretion substrates toward the type III transport machinery. This hypothesis is supported by the impressive conservation of folds among class I chaperones of TTSSs given their low sequence similarity. Further, it is supported by the structural conservation of the mode of effector/chaperone interaction, which is not based on a high degree of sequence similarity (9, 13, 16, 17). This functional conservation of chaperone activity is underscored by the interchangeability of effector/chaperone pairs among TTSSs of different species (5, 42). Recently, Gauthier and Finlay (12) could show that the ATPase of the TTSS from enteropathogenic E. coli, EscN, binds to the chaperone CesT directly and independently of the cognate effector Tir. This suggests that it is the chaperone that directs the effector toward the ATPase. This finding is in line with our recently presented model of type III secretion, predicting that TTSS ATPases act as unfoldases using the TTSS chaperones to encounter the secretion substrates (21, 24, 43). After displacement of the chaperone by the ATPase, the chaperone-binding site of the effector, lacking tertiary structure, is distinguished as an ideal starting point for unraveling the effector. Which part of the chaperone structure could represent the common recognition pattern? In view of the SycT structure, we can now exclude α-helix H2 and the dimerization domain as part of this recognition pattern. Further, when comparing all available structures, it seems unlikely that the dimer as a whole is recognized. Rather, the most conserved parts of the monomer should be considered. A superimposition of SycE, SycH, SycN/YscB, and SycT (Fig. 2C) (16, 17) shows that α-helices H1 and H3 as well as β-strands S1-S5 colocalize very nicely, but do so in only one subunit because of considerable variation in subunit tilting. The latter phenomenon is most pronounced in the structure of S. flexneri Spa15 (19). The Spa15 monomer, however, exhibits the same overall fold as class IA chaperones. Further, it has been shown that Yersinia, Salmonella, and Shigella TTSSs are functionally conserved (5). Taken together, we conclude that the recognition pattern is represented by structural features of the chaperone monomer. Furthermore, the model of a recognition pattern as part of the monomer may provide a clue to understanding the N-terminal secretion signal. Why should there be an N-terminal secretion signal if there is a three-dimensional recognition pattern? We suggest that the N terminus may serve as the starting point for chaperone displacement followed by substrate unfolding and that the ATPase therefore recognizes the one subunit of the chaperone that accommodates the N terminus. This model could also help to explain the conflicting data on the nature of the N-terminal secretion signal (see Refs. 44 and 45 for reviews). If the function of the N terminus is not only to target the secretion substrate but also to serve as an initiation site for chaperone displacement and substrate unfolding, the characteristics of the N terminus would follow completely different constraints than previously thought. The structures of SycE, SycH, SycN/YscB, and SycT should now provide a basis for detailed mutational analyses of these chaperones to reveal the residues and structural elements critical for recognition.
The YopT/SycT Interaction-In the absence of the structure of a YopT/SycT complex, we can use the YopE23-78/SycE model for comparison. Basically, the YopE23-78 peptide cannot simply be accommodated onto SycT; the peptide clashes with SycT at several positions (not shown). In Fig. 2D, four residues of SycE that contribute significantly to binding of the YopE peptide are indicated. Two of these residues are not conserved in SycT; one shows conservation, and only one is identical. Further, SycT possesses an additional small β-strand, designated S0, which protrudes into the space occupied by the YopE peptide in the YopE23-78/SycE model. Therefore, β-strand S0 might be a putative interaction site for YopT. Finally, the different relative orientation of the subunits in the SycT and SycE dimers contributes to clashes when modeling YopE23-78 binding to SycT. It would be interesting to test whether it is possible, given the conserved fold of the chaperones, to mutate one of the chaperones in such a way that it is able to bind a non-destined effector and mediate its type III-dependent transport.
A Novel Form of Chaperone/Effector Interaction-Work from several groups suggests that the influence of TTSS chaperones on the conformation of their substrates is restricted to the binding sites identified close to the N termini (9, 14, 17, 21, 24). Here, for the first time, an interaction between the catalytic domain of an effector, YopT, and its specific chaperone, SycT, was revealed. We demonstrated that catalytically inactive YopT(C139S) is reduced in its ability to bind SycT. It is not very likely that this conservative substitution causes dramatic conformational changes. Rather, this finding suggests that a direct interaction between the catalytic pocket and the chaperone is disturbed. We can imagine two plausible functions for such an interaction. Since YopT is a cysteine protease, the ancillary interaction with SycT could serve to prevent proteolytic activity inside the bacteria. Alternatively, the interaction could contribute to the secretion competence of YopT. However, we have scrutinized the latter hypothesis and could not demonstrate any difference with respect to transport efficiency between YopT and YopT(C139S).2 Which part of SycT is likely to interact with the catalytic domain of YopT? Assuming that YopT is accommodated by SycT in a manner comparable with the YopE/SycE interaction, the obviously flexible and thus unresolved C-terminal end is the most likely candidate for this interaction. Class I chaperones differ considerably with respect to the C-terminal extension following α-helix H3. Several chaperones exhibit very small extensions of 1-5 residues (e.g. Spa15, SigE, AvrPphF ORF1), whereas others, such as the Yersinia Syc chaperones, have 12-18-residue extensions that are generally undefined in the structures and to which no function has been assigned so far. Truncated versions of these chaperones now have to be analyzed in vitro and in vivo to learn about the role of these extensions.
[Figure legend residue. Fig. 2 legend (truncated): a, crystal packing. Fig. 3 legend: a, stereo view of SycT dimerization; one subunit is drawn as a surface representation and the other as a ribbon plot (blue), with residues contributing to dimerization drawn as balls-and-sticks (carbon in green; oxygen and nitrogen in red and blue, respectively). b, electron density map (2Fo − Fc, contoured at 1σ) of the dimerization domain, calculated with experimental phases from the platinum and lead heavy metal atom derivatives; intersubunit interactions are formed mainly by hydrophobic side chains, and temperature factors of residues in this domain lie below the average for the whole molecule, indicating reduced flexibility in this region. c, stereo view of the SycT dimerization domain, oriented as in a. d, stereo view of the SycE dimerization domain in a comparable orientation, based on the structural superposition.]
"year": 2005,
"sha1": "0bb390c0cac7b8ff82d95dbe7e1f048ecb1e45f8",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/35/31149.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "73f8088e56f2ee176655c26650aafad577625c9f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
252546945 | pes2o/s2orc | v3-fos-license | Evaluation of the pulmonary vein ostia during the cardiac cycle using electrocardiography-gated cardiac computed tomography in cats
Several studies in humans have provided detailed descriptions of the anatomy of the pulmonary veins (PVs) and their ostia for the implementation of thoracic interventions, such as radiofrequency ablation, for patients with atrial fibrillation. These studies have shown that electrocardiography (ECG)-gated multidetector computed tomography (MDCT) can evaluate the dimensional variations in the PVs or ostium according to the cardiac cycle. However, few studies have examined the PVs or ostia using MDCT in veterinary medicine. Therefore, this study investigated the variation in the diameter of the PV ostium in cats during the cardiac cycle using ECG-gated MDCT and determined the correlation between the size of the heart or left atrium (LA) and diameter of the PV ostium. This study included six cats, including five normal animals and one cat with hypertrophic cardiomyopathy. The PVs were found to drain into the LA via three ostia, i.e., the right cranial ostium, left cranial ostium, and caudodorsal ostium. Moreover, a diametric variation was observed in all PV ostia according to the cardiac cycle phase on ECG-gated MDCT: the maximal diameter was observed at the end of ventricular systole, and the minimal diameter was observed at the end of ventricular diastole for each PV ostium. There were no significant correlations between the heart or LA size and maximal or minimal diameter of each of the three PV ostia (p > 0.05); however, the enlargement of each PV ostium at the end of ventricular systole differed significantly from that at the end of ventricular diastole. This study suggested the clinical feasibility of ECG-gated MDCT in providing more detailed anatomical information about the PVs, including the dimensional changes during the cardiac cycle in cats. Based on this study, knowledge of the variations in the PV ostium offers interesting avenues for research into the effect of PV function.
Furthermore, ECG-gated MDCT could allow for greater clinical application of interventional procedures in animals with various cardiac diseases.
KEYWORDS caudodorsal ostium, feline, hypertrophic cardiomyopathy, left atrium to aorta ratio, left cranial ostium, right cranial ostium
Introduction
The pulmonary veins (PVs) and their ostia, which are important sources of ectopic atrial activity, have been implicated in chronic and paroxysmal atrial fibrillation (1-4). Radiofrequency ablation of the PV is performed in patients with atrial fibrillation to disconnect the PV electrically from the left atrium (LA) in humans; thus, detailed anatomical information about the PV is important for catheter size selection during the ablation procedure (1-3, 5). The PV also constitutes an essential aspect of thoracic interventions, such as lung transplantation and pneumonectomy; moreover, pulmonary venous congestion is a clinically important indicator of elevated pulmonary venous pressure, a cause of pulmonary edema (6, 7). Therefore, detailed anatomical knowledge of the PV is clinically important in both human and veterinary medicine.
Technological advancement and electrocardiographic (ECG) gating have enabled detailed visualization of the anatomical features of cardiovascular structures using multidetector computed tomography (MDCT) (1,2). Several studies in humans have reported detailed anatomical information on the PV and ostium, and ECG-gated MDCT was used to evaluate the dimensional variations of the PV or ostium according to the cardiac cycle (1,2,6,8,9). These studies in human medicine have demonstrated significant dimensional differences in the PV and ostia between the ventricular end-systole and ventricular end-diastole, which apparently become less significant further from the LA (2, 9). Moreover, patients with chronic atrial fibrillation and left atrial enlargement may have larger PVs and ostia than those with paroxysmal atrial fibrillation and a normal-sized atrium (1,4). These results suggest the clinical significance of ECG-gated MDCT in evaluating the PV and ostium in human medicine.
However, few studies have evaluated the PV or ostium using MDCT in veterinary medicine. Most studies of pulmonary vessels using MDCT have focused primarily on the pulmonary arteries, and studies related to the PVs have reported only on pulmonary venous drainage patterns and the number of PV ostia in dogs and cats (10-15). To the best of our knowledge, no study has reported the diametric variation in the PV or ostium during the cardiac cycle using ECG-gated MDCT in veterinary medicine. Additionally, although the PV to pulmonary artery (PA) ratio has been evaluated as a predictive factor for congestive heart failure using echocardiography, evaluation of the PV or ostium using MDCT may be clinically useful in the future, considering the limitations of echocardiographic examination arising from its great dependence on the operator's scan techniques and/or the patient's respiration.
Therefore, this study aimed to investigate the variation in the diameter of the PV ostium in cats during the cardiac cycle using ECG-gated MDCT, as well as the correlation between the size of the heart or LA and diameter of the PV ostium.
Materials and methods

Population
This study is a retrospective analysis of a subset of an original, prospective study. Data from five clinically normal cats and one cat with hypertrophic cardiomyopathy (HCM) that underwent ECG-gated MDCT were analyzed retrospectively. The study design and care, as well as animal maintenance, followed protocols approved by the institutional animal care and use committee of Seoul National University in February 2022 (approval number: SNU-220113-4). Medical history and informed consent were obtained from the owners of all client-owned cats prior to all procedures. Six domestic short-haired cats with no owner-reported clinical signs were included. Before the MDCT examination, all cats underwent basic health tests, including physical examination, complete blood counts, serum biochemistry, and electrolyte tests. Cardiac evaluation was performed using N-terminal pro-B-type natriuretic peptide (NT-proBNP) testing, thoracic radiography, and transthoracic echocardiography (Aplio 500, Toshiba, Canon Medical Systems Co., Otawara, Japan). Two-dimensional, M-mode, and Doppler echocardiography was performed for all cats. The time interval between the basic health tests and the MDCT examination was <5 days for each cat.
Anesthesia
An intravenous 24-G catheter was placed in the right cephalic vein for premedication and injection of the contrast agent during MDCT. General anesthesia was induced as follows: premedication with butorphanol (0.2 mg/kg intravenously; 1 mg/mL, Butophan®; Myungmoon Pharm Co., Ltd., Seoul, Republic of Korea), induction with propofol (6 mg/kg intravenously; 10 mg/mL, Provive® 1%, Myungmoon Pharm Co., Ltd.), and maintenance with isoflurane (Isotroy® 100, Troikaa Pharm Ltd., Gujarat, India) in a gaseous mixture of 100% oxygen in air via an endotracheal tube. End-tidal carbon dioxide levels were maintained between 35 and 45 mmHg using a mechanical ventilator. During anesthesia, the heart rate, oxygen saturation, and end-tidal carbon dioxide were monitored continuously using ECG and pulse oximetry. Data acquisition was initiated within 5-10 min after anesthesia induction to ensure stable anesthetic conditions. For individual scans, apnea was induced by breath-holding at inspiration immediately before the scan. All cats were monitored until recovery from anesthesia.
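The premedication and induction doses above are weight-based; as a quick illustration of the underlying arithmetic, here is a minimal sketch. The drug concentrations come from the text; the 4.8 kg body weight is an invented example close to the study's mean of 4.82 kg.

```python
# Weight-based dose-volume arithmetic for the agents listed above.
# Concentrations (1 mg/mL butorphanol, 10 mg/mL propofol) are from the text;
# the 4.8 kg body weight is an invented example.

def dose_volume_ml(dose_mg_per_kg, weight_kg, conc_mg_per_ml):
    """Volume (mL) to draw up for a weight-based dose."""
    return dose_mg_per_kg * weight_kg / conc_mg_per_ml

bw_kg = 4.8
butorphanol_ml = dose_volume_ml(0.2, bw_kg, 1.0)   # ~0.96 mL
propofol_ml = dose_volume_ml(6.0, bw_kg, 10.0)     # ~2.88 mL
print(round(butorphanol_ml, 2), round(propofol_ml, 2))
```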
ECG-gated MDCT
All MDCT examinations were performed using an 80-row, 160-slice CT system (Aquilion Lightning, Canon Medical Systems Co., Otawara, Japan). During the examination, the patients were positioned in sternal recumbency on a CT table with the neck extended and the forelimbs placed caudally. ECG leads were attached to the paws, and ECG data were recorded simultaneously during the spiral MDCT examination. The scan protocol was as follows: tube voltage, 120 kVp (kilovoltage peak); gantry speed, 0.5 s/rotation; slice collimation, 0.5 mm × 80; tube current, 150 mA; slice thickness, 0.5 mm; and pitch factor, 0.813. All patients underwent a non-ECG-gated scan, followed by a retrospective ECG-gated scan after a short interval (5 min) to allow washout of the contrast medium from the heart. All cats underwent a pre-contrast MDCT scan of the full thorax from the thoracic inlet to the most caudal border of the lungs before the post-contrast studies. For non-ECG-gated scans, a biphasic injection was delivered into the cephalic vein using a dual power injector (OptiVantage™ DH, Mallinckrodt, Dublin, Ireland) at a rate of 1.5 mL/s: 1.5 mL/kg of a non-ionic contrast medium (300 mgI/mL, Omnipaque; GE Healthcare, Seoul, Republic of Korea), followed by a saline flush of 1.5 mL/kg. After 8 s of contrast injection, five sequential scans were performed over the cardiac silhouette from the cranial to the caudal side at 5 s intervals. The delay time for retrospective ECG-gated scans was 14 s in all cats; this was determined based on non-ECG-gated sequential scan images without bolus tracking, to reduce radiation exposure and anesthesia time. The contrast medium administration for ECG-gated MDCT was conducted in the same manner as that for the non-ECG-gated scan. For data post-processing, images were reconstructed in multiple datasets by increasing the temporal reconstruction window in 10% increments within the cardiac cycle, centered over the 0-90% R-R interval.
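The retrospective reconstruction scheme above (windows at 0-90% of the R-R interval in 10% increments) amounts to simple timing arithmetic. A minimal sketch, assuming a steady heart rate so that a single R-R interval applies; real gating software tracks each beat's R peaks:

```python
# Sketch: centre times of the ten retrospective reconstruction windows,
# expressed as a percentage of the R-R interval after a given R peak.
# Assumes a steady rhythm, which real gating software does not require.

def phase_centres(r_peak_s, rr_interval_s, step_pct=10, last_pct=90):
    """Return (phase %, absolute time in s) for each reconstruction centre."""
    return [(pct, r_peak_s + rr_interval_s * pct / 100.0)
            for pct in range(0, last_pct + step_pct, step_pct)]

# Example: 120 bpm (the low end of the heart rates observed in this study)
# gives an R-R interval of 60 / 120 = 0.5 s.
centres = phase_centres(r_peak_s=2.0, rr_interval_s=0.5)
print(len(centres), centres[0], centres[-1][0])
```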
All images were reviewed by three veterinary diagnostic imaging experts on a dedicated viewing station using specialized software (Vitrea 7.12, Vital Images, Minnetonka, MN, USA), which depicted maximum intensity projection (MIP), three-dimensional volume rendering, and multiplanar reconstructions to optimize visualization of the PV and ostium.
Evaluation of the PV ostium in cats
Number of the PV ostia within the LA and classification of the pulmonary venous drainage system just before opening into the PV ostium

The pulmonary venous drainage system was classified according to a previous study (10): (a) separate, when the PVs drained independently into the LA; (b) short common trunk, when two or more PVs fused by forming a "short neck" just before opening into the LA; or (c) long common trunk, when two or more PVs fused by forming a "long neck" just before opening into the LA.
Variation in the PV ostial diameter during the cardiac cycle on ECG-gated MDCT

In 10 sets of phase-reconstruction data, ranging from 0 to 90% of the R-R interval in 10% increments on the ECG in all cats, the diameter of each PV ostium entering the LA was measured at each phase of the cardiac cycle and compared through the same cross-section on MIP oblique transverse or coronal planes, by dropping a perpendicular to the long axis of the PV. The ostium was defined as the point of inflection between the PV and LA walls, and the ostial diameter was measured from PV wall to PV wall (Figure 1). Each PV ostial diameter at each phase of the cardiac cycle was measured three times by each of three veterinary diagnostic imaging experts (J Kim, K Kim, and D Oh), and the average of the three experts' mean values was taken as the final measurement.

FIGURE 1. Measurement of the pulmonary vein (PV) ostial diameter. First, the maximum intensity projection (MIP) oblique transverse or coronal planes giving the best visualization of each PV ostium were selected using multiplanar reconstruction, and each PV ostial diameter during the cardiac cycle was then measured and compared at the same cross-section. Each ostium is defined as the point of inflection between the PV wall and the LA wall, and the ostial diameter was measured from the PV wall to the PV wall.

Correlation between the PV ostial diameter and sizes of the heart or LA

In all cats, the maximal or minimal values of the PV ostial diameters were compared with the vertebral heart score (VHS) on thoracic radiography or the left atrium to aorta (LA/AO) ratio on echocardiography.
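The averaging scheme described above (three repeat measurements per observer, three observers, final value as the mean of the observer means) can be written out as follows; the diameters are invented illustrative numbers, not study data:

```python
# Mean-of-observer-means, as in the measurement protocol above.
# The nine diameters (mm) below are invented for illustration.

def final_diameter_mm(measurements_by_observer):
    """measurements_by_observer: one list of repeat measurements per observer."""
    observer_means = [sum(m) / len(m) for m in measurements_by_observer]
    return sum(observer_means) / len(observer_means)

cdo_measurements = [[8.1, 8.3, 8.2],   # observer 1
                    [8.0, 8.2, 8.1],   # observer 2
                    [8.2, 8.4, 8.3]]   # observer 3
print(round(final_diameter_mm(cdo_measurements), 2))
```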
Statistical analyses
All data were expressed as the mean ± standard deviation. Statistical analyses were performed using IBM SPSS 26.0 (IBM Corp., Armonk, NY, USA). As the data were not normally distributed owing to the small sample size, we used non-parametric tests, except for the comparison of the PV ostial diameter between the end-systolic and end-diastolic phases. Spearman's rho test was used to identify the statistical correlations between age, body weight (BW), VHS or LA/AO ratio, and maximal or minimal diameter of each PV ostium. The Kruskal-Wallis test was used to compare all values with respect to sex. An independent t-test was used to compare the mean value of each PV ostium at the ventricular end-systolic and ventricular end-diastolic phases. A one-sample t-test was used to compare the statistically significant differences among the five clinically normal cats and one cat with HCM for the BW, VHS, LA/AO ratio, and maximal/minimal diameter of each PV ostium. A p-value < 0.05 was considered statistically significant.
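As an illustration of the non-parametric correlation used above, here is a plain-Python sketch of Spearman's rho for the no-ties case (SPSS additionally handles ties with average ranks). The six data points are invented, not the study's measurements:

```python
# Spearman's rho: Pearson correlation of the ranks (no-ties case).

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

la_ao   = [1.10, 1.20, 1.25, 1.30, 1.35, 1.56]  # LA/AO ratios (invented)
max_cdo = [7.8, 8.4, 8.0, 8.6, 8.2, 8.9]        # max CDO diameters, mm (invented)
print(spearman_rho(la_ao, max_cdo))
```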
Results
The six domestic short-haired cats in this study included three spayed females and three males (one intact, two neutered). The mean age was 4.85 ± 3.74 years, the mean BW was 4.82 ± 1.0 kg, the mean VHS was 7.03 ± 0.72 vertebrae, and the mean LA/AO ratio was 1.27 ± 0.22 (Table 1). Of the six cats, five were normal in all basic health tests, whereas the HCM cat (case 6) showed regional thickening (0.66 cm) of the interventricular septum and a positive result in the NT-proBNP test (Table 1). This HCM patient had a systemic blood pressure of 150 mmHg and a normal serum thyroid hormone concentration of 2.1 µg/dL (reference range, 0.6-3.9 µg/dL).
ECG-gated MDCT scan allowed the measurement of the ostial diameter of each PV entering the LA during the cardiac cycle in all cats (Figure 2). The total time from induction of anesthesia to the end of MDCT examination ranged between 25 and 35 min, with an average of 30 min per animal. The heart rate during the MDCT scan ranged from 120 to 150 bpm in all cats. No complication associated with the anesthetic protocol was documented during the procedure. The PVs drained into the LA via three ostia, i.e., the right cranial ostium (RO), left cranial ostium (LO), and caudodorsal ostium (CDO), which were identified in all cats (Figure 3) (10). The RO, draining from the PVs of the right cranial and middle lung lobes, formed a long common trunk before opening into the right cranial part of the LA. The LO, draining from the PVs of the cranial and caudal parts of the left cranial lung lobe, also formed a long common trunk before opening into the left cranial part of the LA. The CDO, draining from the PVs of the bilateral caudal and accessory lung lobes, formed a short common trunk before opening into the caudodorsal part of the LA. These findings were similar to those of a previous study (10).
In the PV diameter measurements made with the specialized software, the average values of the three veterinary diagnostic imaging experts were consistent. In this study, the CDO was found to be the largest and the LO the smallest, and all ostia showed dimensional variation during the cardiac cycle (Table 2). The maximal diameter was observed at end-systole (30-40% R-R interval), except in two cases (50% R-R interval), and the minimal diameter at end-diastole (0% and 70-90% R-R interval), except in one case (50% R-R interval) (Table 2). In the six cats, the average diameter of each PV ostium over the cardiac cycle was maximal at end-systole and minimal at end-diastole (Figure 4). There was a significant difference in the mean diameter of each PV ostium between end-systole and end-diastole (p < 0.05) (Table 3). There were no statistical correlations between the maximal or minimal diameter of any PV ostium and age, BW, sex, VHS, or LA/AO ratio (p > 0.05).
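The phase assignment above (maximal diameter at end-systole, minimal at end-diastole) comes down to picking the extreme values across the ten reconstructed phases; a sketch with invented diameters shaped like the reported trend:

```python
# Pick the R-R phases with the largest and smallest ostial diameter.
# The ten diameters (mm) are invented, not taken from the study's tables.

diam_by_phase_mm = {0: 6.9, 10: 7.1, 20: 7.6, 30: 8.3, 40: 8.2,
                    50: 7.8, 60: 7.3, 70: 7.0, 80: 6.8, 90: 6.9}

max_phase = max(diam_by_phase_mm, key=diam_by_phase_mm.get)
min_phase = min(diam_by_phase_mm, key=diam_by_phase_mm.get)
print(max_phase, min_phase)  # 30 80 (end-systole vs. end-diastole)
```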
The HCM cat showed a significant enlargement in the VHS and LA/AO ratio compared with the other five normal cats (p < 0.05), but there were no significant differences in any PV ostial diameter between the two groups (p > 0.05).
Discussion
This study showed that the diameter variation of all PV ostia according to the cardiac cycle in cats could be identified on ECG-gated MDCT. As in humans, the maximal diameter of each PV ostium corresponded to the end of ventricular systole, and the minimal diameter was measured at the end of ventricular diastole. Furthermore, this study showed that each of the three PV ostia was significantly enlarged at the end of ventricular systole compared with the end of ventricular diastole. In humans, the right superior PV ostium was found to be the largest and the right inferior PV ostium the smallest, with greater dimensional change in the superior PVs than in the inferior PVs, and with the left superior PV exhibiting the greatest change (2). However, in our feline study, the diameter of the CDO was the largest and that of the LO was the smallest. In addition, the diameter change at the RO and CDO was larger than that at the LO. These anatomical differences suggest that further extensive studies are needed to determine the differences between humans and cats in the effect of PV function as well as in the pathophysiology of atrial fibrillation, associated with the extent and degree of the myocardial sleeve at each PV ostium, and in pulmonary congestion or edema.
Pulmonary venous blood flow is typically biphasic: the first phase of flow occurs during ventricular systole, and the second phase occurs during ventricular diastole (2, 9). The PV orifice area also changes considerably during the cardiac cycle (2). During ventricular systole, blood flows from the PVs into the LA upon closure of the mitral valve. This is driven by left ventricular long-axis shortening, which lengthens the LA, thus increasing the pressure gradient between the PV and LA and, in effect, "sucking" blood into the LA (2). During early diastole, while blood is flowing into the left ventricular chamber, there is a drop in LA pressure, and blood is passively pulled into the LA as it moves through the mitral valve into the left ventricular chamber. By mid to end ventricular diastole, the pulmonary venous pressure equalizes with the ventricular diastolic pressure, because of which antegrade pulmonary venous flow begins to cease. In small-animal clinics, clinicians routinely evaluate the relative size of the PV and PA on radiographs by comparing them with the size of the ribs (16). However, radiographic examination may vary depending on the breed, age, obesity, and underlying thoracic disease. In addition, the PV to PA ratio measured by echocardiography has been suggested as a predictive factor for discriminating healthy or subclinical patients with cardiomyopathy from patients with congestive heart failure in dogs and cats (7, 16, 17). In a feline study, healthy and subclinical cats did not differ in the echocardiographic PV to PA ratio, whereas cats with congestive heart failure had a larger ratio than healthy and subclinical cats (17).
In accordance with the previous study, our data also showed that the PV ostial diameter in one subclinical HCM cat did not differ from that in the five normal cats, although the maximal diameters of the CDO and LO of the HCM cat were slightly larger than those of the five normal cats. However, considering the several limitations of echocardiographic examination, which is highly dependent on the operator's technique, obesity, and the patient's heart rate or respiration, the evaluation of all PV ostia using ECG-gated MDCT could be valuable and useful in veterinary clinics in the future. An additional limitation is that echocardiography can image only the RO region, not all three PV ostia.
In this study, the pulmonary venous drainage patterns of the six cats showed three PV ostia, in agreement with a previous feline study (10), unlike in humans, who have four PV ostia. In humans, many studies have shown that the superior veins have longer myocardial sleeves than the inferior veins, with the left superior PV having the longest sleeve and the right inferior PV the shortest (2). In this study, in all cats, the RO and LO had a long common trunk, and the CDO had a short common trunk.
Although further studies with histological examination are needed, we suggest the possibility that the myocardial sleeves of the RO and LO are longer than that of the CDO and could be relevant to the pathophysiology of atrial fibrillation.
Although studies in humans have reported that patients with an enlarged LA may have larger PVs than those with a normal-sized LA, and that the diameter of the left superior PV is significantly larger in men than in women, there was no significant correlation between the heart or LA size, sex, BW, or age and the PV ostial diameter in this feline study (1, 4). This may be attributable to the LA/AO ratio (1.56) of the one HCM patient being only slightly larger than those of the five normal cats, and to all six cats showing normal mitral E flow on echocardiography. Therefore, although no statistical significance was found here, further extensive research may be necessary in larger populations with various degrees of LA/AO ratio or high mitral E flow. With such future studies, it may become possible to establish new criteria, such as a cut-off value for the early prediction of pulmonary congestion or pulmonary edema, in patients in the early stages of various heart diseases.
As mentioned above, one limitation of our study was the small population. Other limitations include the possibility of inaccurate PV ostial diameter measurement on MDCT, as well as the lack of confirmation by biopsy or necropsy. Although our data should be sufficiently reliable in that we compared measurements over the same cross-section and found significant differences in all PV ostia between ventricular end-systole and ventricular end-diastole, further research is needed to measure the cross-sectional area using three-dimensional imaging or vessel tracking in all PV ostia, because two-dimensional measurement of each PV ostial diameter is more susceptible to inaccuracies from shifts of the imaging plane and from the choice of window width and level settings used to display the CT angiographic data, both of which can affect the measured vein diameter and cross-sectional area. In addition, volume measurement of each PV common trunk may be useful as an additional predictive factor for pulmonary congestion in the future. Finally, the risk of anesthesia can be significant, especially in animals with heart disease; thus, difficulties in the practical application of MDCT imaging should also be considered.
Despite some limitations and the necessity of further research, this study suggests the clinical feasibility of using ECG-gated MDCT to provide more detailed anatomical information on the PV, including the dimensional changes during the cardiac cycle in cats, similar to that in humans. ECG-gated MDCT of the PVs may also be useful because its non-invasive, easily reproducible nature and its ability to demonstrate three-dimensional anatomy are worthwhile advantages over other techniques. Based on this study, knowledge of variation in the PV ostium offers interesting avenues for potential research into the effect of PV function in cats with atrial fibrillation and into the early detection or variable distribution of pulmonary congestion or edema. Furthermore, ECG-gated MDCT could allow for greater clinical application of interventional procedures for various cardiac diseases, including radiofrequency ablation for the treatment of atrial fibrillation, in veterinary clinics.
Data availability statement
The original contributions presented in the study are included in the article/supplementary files; further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Seoul National University (approval number: SNU-220113-4); the study design, animal care, and animal maintenance followed the approved protocols. Written informed consent was obtained from the owners for the participation of their animals in this study.
Author contributions
JK: setting the direction and design for the overall study and writing the main paper. D-HK: writing the thesis, analyzing and interpreting data, and reviewing the patients' conditions. KK: collecting and analyzing the patients' data and contributing to the direction of this study. DO: analyzing data and considering clinical aspects of the study. JC: additional data analysis and interpretation, revising paper, and great advice and assistance in the final approval of this study. JY: coordinating the overall flow and direction of the study and writing and editing the paper. All authors contributed to the article and approved the submitted version. | 2022-09-28T14:03:36.713Z | 2022-09-27T00:00:00.000 | {
"year": 2022,
"sha1": "70f8144b93c55e4f139462a07960e157bb02f3bc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "70f8144b93c55e4f139462a07960e157bb02f3bc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225739876 | pes2o/s2orc | v3-fos-license | Image Quality of Coronary CT Angiography (CCTA) using 640-slice Scanner: Qualitative and Quantitative Assessments of Coronary Arteries Visibility
The purpose of this study was to evaluate the image quality and diagnostic accuracy of coronary computed tomography angiography (CCTA) using a 640-slice scanner. Advances in multidetector computed tomography (MDCT) technology, with higher spatial and temporal resolution and larger detector arrays, have improved the image quality and diagnostic accuracy of CCTA. A total of 25 patients (12 men and 13 women) who underwent CCTA examination were included, and data were acquired with a 640-slice scanner. All 16 segments of the coronary arteries were evaluated by two reviewers using a 4-point Likert scale for the qualitative assessment. In the quantitative assessment, the 4 main coronary arteries were analysed in terms of signal intensity (SI), image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). All 25 patients, with a mean age of 52.88 ± 14.75 years and a mean body mass index (BMI) of 24.24 ± 3.28 kg/m2, were analysed. In the qualitative assessment, 379 of the 400 segments (95%) had diagnostic value, while 21 segments did not, meaning that artefacts were detected in 5%. In the quantitative assessment, there were no statistical differences by gender, race, or BMI (p > 0.05). The overall evaluation showed the highest SI at the left main artery (LM), at 393.7 ± 47.19. Image noise was highest at the right coronary artery (RCA), at 39.01 ± 13.97. SNR was highest at the left anterior descending artery (LAD), at 12.73 ± 5.17, and CNR was highest at the LM, at 9.14 ± 4.2. In conclusion, this study indicates that 640-slice MDCT has high diagnostic value in CCTA examination, with 95% vessel visibility and 5% artefact detection.
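The quantitative metrics above are conventionally defined as SNR = mean signal intensity / image noise and CNR = (vessel SI - background SI) / image noise. A minimal sketch of those definitions follows; the ROI placements and background tissue the authors used are not specified in this excerpt, so the numbers below are only illustrative:

```python
# Conventional SNR/CNR definitions for CT angiography ROI statistics.
# HU values below are illustrative, not taken from the study's tables.

def snr(signal_mean_hu, noise_sd_hu):
    """Signal-to-noise ratio: mean attenuation over noise (SD in HU)."""
    return signal_mean_hu / noise_sd_hu

def cnr(vessel_hu, background_hu, noise_sd_hu):
    """Contrast-to-noise ratio: attenuation difference over noise."""
    return (vessel_hu - background_hu) / noise_sd_hu

print(snr(393.7, 35.0))        # LM-like signal with an assumed noise SD
print(cnr(393.7, 80.0, 35.0))  # assumed perivascular background of 80 HU
```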
INTRODUCTION
Coronary artery disease (CAD) is the leading cause of morbidity and mortality in Malaysia. CAD can be defined as coronary artery stenosis with a lumen diameter reduction of at least 50% (Raff et al. 2005). Early detection of CAD is important to reduce the morbidity and mortality rate. In the detection of CAD, invasive coronary angiography is known as the gold-standard procedure, with high sensitivity and specificity (Tan et al. 2016). However, this procedure is complicated, since it involves admission to hospital, carries risk owing to its invasive nature, and is costly. Therefore, computed tomography (CT) has been used as an alternative to provide the diagnosis of CAD. Coronary computed tomography angiography (CCTA) was introduced to overcome those issues and is widely in demand, since it is a non-invasive anatomical assessment tool with high sensitivity (93%) and specificity (94%) in detecting CAD (Klass et al. 2009; Raff et al. 2005; Tan et al. 2016). CT technology has evolved rapidly since its introduction at the beginning of the 1980s, with electron beam CT (EBCT) in 1987 and multidetector CT (MDCT) in 1999 (Bijl et al. 2011; Budoff et al. 2006). Nowadays, MDCT technology continues to develop, from 4-slice to the recent 640-slice, in order to achieve better spatial and temporal resolution so that more coronary segments are available for evaluation (Nandurkar et al. 2009; Tan et al. 2016). The challenge of CCTA arises in the visualization of complex anatomy involving small coronary arteries, tortuous coronary branches, and small vessel diameters (Jean Patrick 2014). Artefact production is also one of the challenges in CCTA, as it can degrade image quality (Tan et al. 2016). There are two kinds of motion artefact that usually influence the quality of coronary artery images: step and blurring artefacts.
To solve this, MDCT with higher spatial and temporal resolution, as well as a wider detector array, was developed, so that the entire heart can be acquired in a single rotation by volumetric scanning (Qin et al. 2012). The volumetric scan provides improved image quality, temporal uniformity, and artefact reduction (Rybicki et al. 2008). This advanced development of MDCT technology allows CCTA to become a useful tool in the non-invasive anatomic assessment of CAD with high diagnostic accuracy (Bijl et al. 2011; Klass et al. 2009; Raff et al. 2005), because it produces images with high spatial and temporal resolution (Takx et al. 2015). When new features are introduced, the most important aspect to be assessed is image quality. To the best of our knowledge, a study of the image quality of CCTA using a 640-slice scanner with CT volumetric scanning combining qualitative and quantitative analyses has yet to be reported. Thus, the purpose of this study was to evaluate the image quality of CCTA using a 640-slice scanner with volumetric scanning in terms of vessel visibility, diagnostic accuracy, and artefact detection.
STUDY DESIGN AND POPULATION
This retrospective study was conducted at Hospital Canselor Tuanku Muhriz, Universiti Kebangsaan Malaysia Medical Centre (UKMMC), and data were collected from January to December 2018, the period since its operation started. Ethical approval was obtained from the institutional ethical committee (UKM PPI/111/8/JEP-2019-207). A total of 25 patients who underwent CCTA were selected as subjects. Patients with a high calcium score, previous coronary artery bypass graft (CABG), a permanent pacemaker (PPM), or post percutaneous coronary intervention (PCI) status were excluded from this study.
CORONARY CT ANGIOGRAPHY PROTOCOLS
The CCTA datasets were acquired using a 640-slice scanner (Aquilion ONE, Toshiba Medical Systems, Otawara, Japan), which is built from a detector array of 320 × 0.5 mm elements and provides 16 cm of coverage in the z-direction. Images were acquired using prospective electrocardiogram (ECG) gating with the following parameters: tube voltage 120 kVp, automatic exposure control (AEC) with a maximum of 600 mA, 350 ms gantry rotation time, 0.5 mm acquisition slices at 0.5 mm intervals, and ECG gating set at 70-80% of the R-R interval. No pitch was used in this study.
IMAGE ANALYSIS
The images were analysed through qualitative and quantitative assessments. Both assessments were performed using an offline workstation (Vitrea 6.4, Vital Images, A Toshiba Medical Systems Group, Minnetonka, MN). Images were presented in axial views and reconstructed at 0.5 mm slice thickness. All images were reformatted into oblique multiplanar reformation (MPR), curved multiplanar reformation (cMPR), and maximum intensity projection (MIP) views for evaluation. The image evaluation followed the standard 16-segment coronary artery model, consisting of the left main (LM), proximal left circumflex (pLCX), mid left circumflex (mLCX), distal left circumflex (dLCX), proximal left anterior descending (pLAD), mid left anterior descending (mLAD), distal left anterior descending (dLAD), proximal right coronary (pRCA), mid right coronary (mRCA), and distal right coronary (dRCA) arteries for the main segments. The obtuse marginal 1 (OM1), obtuse marginal 2 (OM2), diagonal 1 (D1), diagonal 2 (D2), right posterior descending artery (R-PDA), and right posterolateral branch (R-PLA) were used for the branch segments, in line with the guidelines provided by the American Heart Association (Chian et al. 2017; Zhang et al. 2011).
QUALITATIVE ANALYSIS
The qualitative assessment of image quality was performed by two independent reviewers (cardiologists) with certified experience in CCTA (more than 5 years). Both reviewers were blinded to all clinical information in the dataset. The subjective assessment of image quality was expressed using a 4-point Likert scale: score 1 indicated excellent quality without artefacts, score 2 good quality with mild artefacts, score 3 acceptable quality with moderate artefacts, and score 4 unevaluable quality with severe artefacts (Qin et al. 2012).
QUANTITATIVE ANALYSIS
The quantitative assessment was performed for the 4 main coronary arteries, namely the LM, LCX, LAD, and RCA. Image quality was determined by measuring 4 parameters: signal intensity (SI), image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) (Kojima et al. 2017). The measurements were performed on axial images of the coronary arteries by one reviewer (radiographer). A fixed round region of interest (ROI) of 5 mm was placed in the cardiac wall (A) and the coronary arteries (B) to measure the Hounsfield units (HU), as presented in Figure 1. SI was derived from the mean HU values of the two ROIs. Image noise was defined as the mean standard deviation of the SI in HU. The SNR and CNR were calculated using the following formulae:

SNR = SI_CA / noise

CNR = (SI_CA − SI_CW) / noise

where SI_CA is the SI at the coronary artery and SI_CW is the SI at the cardiac wall (Feuchtner et al. 2010; Jean Patrick 2014).
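As a worked example of these formulae, the sketch below computes SI, noise, SNR, and CNR from two ROI pixel samples. The function name and the exact noise definition (mean of the two ROI standard deviations) are our assumptions for illustration, not taken from the study's software.

```python
import numpy as np

def image_quality_metrics(roi_ca, roi_cw):
    """Compute SI, noise, SNR and CNR from two ROI pixel arrays (HU values).

    roi_ca: pixels sampled inside the coronary artery ROI
    roi_cw: pixels sampled inside the cardiac wall ROI
    """
    si_ca = np.mean(roi_ca)                         # signal intensity, coronary artery
    si_cw = np.mean(roi_cw)                         # signal intensity, cardiac wall
    si = (si_ca + si_cw) / 2                        # SI: mean HU of the two ROIs
    noise = (np.std(roi_ca) + np.std(roi_cw)) / 2   # mean SD of the two ROIs
    snr = si_ca / noise                             # signal-to-noise ratio
    cnr = (si_ca - si_cw) / noise                   # contrast-to-noise ratio
    return si, noise, snr, cnr
```

With such a helper, the per-vessel figures in Table 3 correspond to these quantities averaged over patients.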
STATISTICAL ANALYSIS
Statistical analysis was performed using SPSS version 22.0 for Windows (SPSS Inc., USA). For the qualitative assessment, data were expressed as frequencies or percentages, and inter-observer agreement between the two reviewers was analysed using Cohen's kappa statistic. For the quantitative assessment, data were expressed as mean and standard deviation, and a p-value < 0.05 was considered statistically significant. The comparison of image quality parameters between genders, ethnicities, and BMI classes was performed using ANOVA.
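Cohen's kappa, used here for inter-observer agreement, can be computed directly from the two reviewers' score lists. The following is a minimal illustration of the statistic, not the SPSS implementation:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa between two reviewers' categorical score lists."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n    # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)
```

Values above roughly 0.8 are conventionally read as excellent agreement, which is the interpretation applied to the k = 0.94 reported below.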
RESULTS

A total of 400 segments were evaluated by the two reviewers, of which 250 segments (62.5%) were main vessels and 150 segments (37.5%) were branches. The results showed that 236 segments (59%) were graded as excellent (score 1), 112 segments (28%) as good (score 2), 31 segments (7.6%) as acceptable (score 3), and 21 segments (5.3%) as unevaluable (score 4). Images with scores of 1, 2, or 3 were considered to have diagnostic value, while a score of 4 indicated no diagnostic value. The total evaluation showed that 378.5 segments had diagnostic value, corresponding to 95% vessel visibility, and only 21 segments had no diagnostic value, meaning artefacts were detected in 5%. The detailed evaluation is presented in Table 1 and the overall evaluation scores for both reviewers in Table 2. The kappa value shows excellent inter-reviewer agreement (k = 0.94) between the 2 reviewers. For reviewer 1, 280 images (95%) were graded as having diagnostic value and 20 images (5%) as having no diagnostic value. For reviewer 2, 278 images (94.5%) were graded as having diagnostic value and 22 (5.5%) as having no diagnostic value.
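The 95% vessel-visibility figure follows from the pooled score counts (scores 1-3 carry diagnostic value); the paper rounds 94.75% to 95%:

```python
# Segment counts per Likert score, pooled over both reviewers
scores = {1: 236, 2: 112, 3: 31, 4: 21}
total = sum(scores.values())                          # 400 segments evaluated
diagnostic = sum(n for s, n in scores.items() if s <= 3)
visibility = 100 * diagnostic / total                 # % with diagnostic value
```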
Overall, the objective assessment is shown in Table 3. From the analysis, the LM showed the highest SI (393.7 ± 47.19) and the LCX the lowest SI (358.72 ± 50.83). For the noise parameter, the lowest noise was found in the LAD (34.87 ± 16.14), while the RCA showed the highest noise (39.01 ± 13.97). The highest SNR was found in the LAD (12.73 ± 5.17) and the highest CNR in the LM (9.14 ± 4.2). Meanwhile, the RCA showed the lowest SNR and CNR, at 10.32 ± 4.03 and 7.61 ± 3.58, respectively.
The comparisons showed no significant differences between genders, ethnicities, or BMI classes (all p > 0.05). The comparison of quantitative image quality parameters between genders is presented in Table 4. The objective assessment results for the comparison between the three major ethnicities are shown in Table 5, while the assessment across the BMI classes (underweight, normal, overweight, and obese) is tabulated in Table 6.
DISCUSSION
This study highlighted four important findings in CCTA evaluation using a 640-slice scanner. Firstly, the 640-slice scanner provides further improved image quality in terms of vessel visibility. Secondly, a reduction of artefacts can be achieved using 640-slice technology. Thirdly, this study produced better image quality in terms of the SNR and CNR parameters. Finally, no significant difference in image quality was found among patients' gender, ethnicity, or BMI.

Firstly, the 640-slice scanner provides further improved image quality in terms of vessel visibility. The results show 95% visualization of the coronary artery vessels. This study showed that the 640-slice scanner increases CCTA diagnostic accuracy compared to previous studies with lower-slice scanners, where visualization of the coronary arteries was 87.3% (Pannu et al. 2006), 88% (Raff et al. 2005), 83% (Hoffmann et al. 2005), and 74.7% (Kuettner et al. 2005). The main reason for the improvement in coronary artery visualization is the higher spatial and temporal resolution provided by the 640-slice scanner compared to previous generations of CT technology (Qin et al. 2012). The 640-slice scanner also uses a wide-area detector that allows acquisition of the entire heart within a single gantry rotation and a single heartbeat (Fleur R. de Graaf et al. 2013). Despite this, the 640-slice scanner shows a higher ability to evaluate the main segments than the branch segments. This was evidenced by the excellent visualization scores and good image quality being achieved mainly in the main segments (62.5% of all segments), while the 5% unevaluable image quality scores were contributed by the branch segments.
In line with the study by Jean Patrick (2014), the diagnostic accuracy of CCTA is lower than that of other diagnostic tools when it is used for the evaluation of secondary segments. This finding was also supported by Nandurkar et al. (2009), who stated that MDCT has a higher ability to evaluate main vessels than the secondary segments of the coronary arteries. The reason for the higher diagnostic score of the main vessels is their larger size, while the unevaluable score is higher in the branch segments due to the small size of these arteries. To increase the visualization of secondary branches, it is suggested to use pre-medication such as nitrates.

Secondly, we found that the 640-slice scanner could reduce the production of artefacts in the coronary arteries. Generally, image artefacts are related to limitations of temporal resolution (Sabarudin & Sun 2013). The most common cause of image quality degradation is motion artefact, with 21% of segments degraded by motion artefact in a previous report (Tan et al. 2016). Only 5% of segments showed artefacts in this study, compared to 7.1% (Kuettner et al. 2005) and 12% (Fleur R. de Graaf et al. 2013) in previous studies. The factor that lowers artefact production in this study is the recent technology in the 640-slice scanner, with 16 cm volumetric data acquisition allowing the entire heart to be scanned within a single gantry rotation using prospective ECG gating (Fleur R. de Graaf et al. 2013; Dewey et al. 2006). This scanning mode eliminates helical acquisition artefacts, pitch artefacts, stair-step artefacts, misregistration artefacts, and motion artefacts, so fewer artefacts are produced.
Helical acquisition and pitch artefacts were eliminated because acquisition in a single gantry rotation covers the entire heart in one heartbeat without table movement (Rybicki et al. 2008; van der Wall et al. 2012). Meanwhile, stair-step and misregistration artefacts were eliminated by the sequential scanning, or prospective ECG gating, used in this study (Nandurkar et al. 2009). On the other hand, motion artefact, which causes blurring of the image, arises from the involuntary movement of the coronary arteries; higher temporal resolution is therefore beneficial for eliminating it. The 640-slice scanner (350 ms per gantry rotation) has slightly lower temporal resolution than 64-MDCT (330 ms per gantry rotation) and dual-source CT (DSCT; 83 ms). However, the 640-slice scanner uses half-scan reconstruction (175 ms effective temporal resolution) in a single gantry rotation with wide-area coverage, unlike 64-MDCT and DSCT, which have small area coverage and require multiple gantry rotations to cover the entire heart (F.R. de Graaf et al. 2010; van der Wall et al. 2012). Owing to this, the 640-slice scanner offers faster scanning times and the ability to freeze cardiac motion. To minimize artefact production, it is suggested to control the heart rate in line with the principle of the volumetric scan: the recommended heart rate is regular and below 65 bpm, so that the entire heart can be acquired in a single heartbeat without degradation of image quality.
Thirdly, to increase the diagnostic accuracy of coronary artery assessment, a quantitative assessment of image quality was performed in this study, evaluating SI, image noise, SNR, and CNR as image quality parameters. Only the 4 main vessels were evaluated, without the branch segments, because ROI measurements are difficult and potentially less accurate for small branches of the coronary arteries (Karaca et al. 2007). The ROI size of 5 mm² was large enough for adequate pixel sampling yet small enough not to be affected by non-uniformity of the CT numbers; for this reason, small branches of the coronary arteries were not included in the objective assessment (Chian et al. 2017). Generally, a lower kVp setting produces higher image noise and increased low-contrast resolution (Feuchtner et al. 2010). However, this study used the same setting of 120 kVp throughout, and consequently no significant difference in image noise between segments was obtained. Hausleiter et al. (2006) reported that increased SI increases image noise; this was not observed in the present study, where increasing SI did not increase image noise. One factor producing less image noise is the iterative reconstruction algorithm used in this system, known as Adaptive Iterative Dose Reduction (AIDR). AIDR reduces image noise while maintaining signal intensity, thereby increasing SNR and CNR (Achenbach et al. 2017). Heyer et al. (2007) stated that the best objective criteria for image quality are the SNR and CNR. An SNR value of more than 10 is classified as good image quality (Abada et al. 2006), while Karaca et al. (2007) graded CNR as follows: CNR > 8 indicates high image quality, 4-8 moderate image quality, and < 4 poor image quality.
This study shows that the LM, LCX, and LAD have the best image quality, while the RCA shows moderate image quality. This is due to the anatomical position of the coronary arteries: the LM, LCX, and LAD are more proximal than the RCA, which is why the RCA shows only moderate image quality compared to the other main vessels. A previous study by Yang et al. (2010) also stated that proximal anatomical segments have higher CNR than distal segments of the coronary arteries. The findings of this study are in line with previous work in which the RCA produced more motion artefact and was more affected at higher heart rates. In contrast with previous work, this study found lower SNR and CNR than the studies by Lee et al. (2019) and Yang et al. (2010). Possible explanations include inadequate ROI placement for bolus tracking, inadequate breath-holds, poor iterative reconstruction, and patients with higher BMI, all of which can result in low SNR and CNR and poorly visualized coronary arteries (Yang et al. 2010).
The comparison of the 4 main vessels across all quantitative parameters between male and female patients showed no significant difference. However, this study found that image noise was higher in females than in males. Previously, Yoshimura et al. (2006) reported that image noise did not correlate significantly with BMI in males, whereas it showed excellent correlation with BMI in females. This is due to differences in chest size and body composition between females and males. The comparison of image quality between the 3 major ethnicities showed no significant difference, because all groups had broadly similar body habitus. No previous studies have reported differences in image quality across ethnicities, although body habitus among ethnic groups has been studied. Because of body habitus, this study used tube current modulation to allow automatic selection of the most appropriate tube potential and mAs setting according to the patient's body habitus. However, the results produced were not consistent, with variable measurements of SI, image noise, SNR, and CNR. This study therefore agrees with the earlier statement by Wang et al. (2018) that the tube current modulation method in coronary CTA is not able to achieve consistent objective image quality across the entire patient population. The last comparison of the objective assessment found no significant difference between the BMI classes. A previous study found that image noise correlated with biometric data, namely BMI (Yoshimura et al. 2006): greater BMI is associated with higher image noise, thus reducing SNR and CNR and negatively affecting CCTA quality (Yang et al. 2010). However, the results of this study challenge that statement, showing variable and uneven readings. The discrepancy may be due to the implementation of ECG-based tube current modulation, automatic tube potential selection, and iterative reconstruction (J. Lee et al. 2019).
In addition, the placement of the ROI also affects the image quality results: as mentioned in a previous study, ROI placement is important for obtaining accurate measurements (Chian et al. 2017).
There were several limitations to this study. Firstly, the number of patients recruited was small because the study was conducted at a single institution; a larger sample size, preferably from multi-centre sites, is desirable for future studies. Secondly, the accuracy of the ROI measurements in the objective analysis is questionable, since the measurements were performed by a single person, even though each ROI measurement was taken twice to reduce uncertainty.
CONCLUSION
In conclusion, this study shows that the 640-slice scanner with a wide-area detector produces higher CCTA image quality in terms of both qualitative and quantitative assessments.
"year": 2020,
"sha1": "eb2b69565eb5cf64f77f5e52a4198e8fee17c4d0",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.17576/jskm-2020-1802-06",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1db97f1137839797fc6511b7f40f63979c8fcbb6",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Attention-Based Fully Gated CNN-BGRU for Russian Handwritten Text
This article considers the task of handwritten text recognition using attention-based encoder–decoder networks trained on the Kazakh and Russian languages. We have developed a novel deep neural network model based on a fully gated CNN, supported by multiple bidirectional gated recurrent unit (BGRU) layers and attention mechanisms to manipulate sophisticated features, that achieves 0.045 Character Error Rate (CER), 0.192 Word Error Rate (WER), and 0.253 Sequence Error Rate (SER) on the first test dataset and 0.064 CER, 0.24 WER, and 0.361 SER on the second test dataset. Our proposed model is the first work to handle handwriting recognition in the Kazakh and Russian languages. Our results confirm the importance of our proposed Attention-Gated-CNN-BGRU approach for training handwriting text recognition and indicate that it can lead to statistically significant improvements (p-value < 0.05) in sensitivity (recall) over the test datasets. The proposed method's performance was evaluated using handwritten text databases of three languages: English, Russian, and Kazakh. It demonstrates better results on the Handwritten Kazakh and Russian (HKR) dataset than the other well-known models.
Introduction
Today, handwriting recognition is a crucial task. Providing solutions to this problem will facilitate business process automation for many companies. A clear example is a postal company, where the task of sorting a large volume of parcels is a complicated issue.
Handwriting recognition (HWR) or Handwritten Text Recognition (HTR) is a machine's capacity to obtain and interpret intelligible handwriting information from such sources as paper documents, images, touchscreens, and other tools. Offline HTR is the task of converting images of letters or words into digital text: the input is a variable two-dimensional image, and the output is a sequence of characters. It provides excellent human-machine interaction and can support the automated processing of handwritten documents. It is also considered a sub-task of Optical Character Recognition (OCR), mainly focusing on extracting text from scanned documents and natural scene images. The recognition of Russian handwriting poses specific challenges and advantages and has been addressed more recently than recognizing texts in other languages.
The main differences here are, first, that the set of words for recognition is limited to words that can occur in addresses and, second, that handwritten texts are written on a monotonous background, which significantly facilitates the segmentation process. However, other aspects of text recognition are general and can be considered regardless of their specific application. For example, an image of an envelope, like an image of any other handwritten document, cannot be directly used for address recognition, since the system must first determine where the text is located; separate it from the background; segment the text into words; and normalize the words so that they are free of spatial transformation. Only after these procedures can the data be used to build descriptors, which are the input data for the recognition model.
The main developments in the field of HTR for postal correspondence were studied. They are mainly aimed at solving the problems of determining the area of interest, text segmentation, and removal of background noise that interferes with the text, such as lost or unclear fragments, spots on paper, and skew, as well as training artificial intelligence to recognize written text in the language used. The most frequently used recognition models in this context are analyzed, namely models based on HMMs, hybrid Markov models (hybrid HMMs), convolutional neural networks (CNN), and recurrent neural networks (RNN).
Previous approaches to offline HTR use Hidden Markov Models (HMM) for transcription tasks [1], extracting features from images using a sliding window and then predicting character labels with an HMM [2], which is the prevalent automatic speech recognition approach [3]. HMMs' key benefits are their probabilistic nature, their suitability for noise-corrupted signals like speech or handwriting, and their computational foundations, with efficient algorithms to adjust the model parameters automatically and iteratively. The success of HMMs led many researchers to extend them to handwriting recognition, describing each word image as a series of observations. Two approaches can be differentiated according to how this representation is performed: implicit segmentation [4,5], which leads the system to search the image for components that match classes in its alphabet, also termed recognition-based segmentation, and explicit segmentation, which involves a segmentation algorithm that divides words into simple units such as letters [6,7]. Despite the mentioned benefits of the HMM approach, there are some limitations [8,9] compared to the newer models that use an encoder-decoder network, which combines a convolutional neural network (CNN) with a bidirectional recurrent neural network and a Connectionist Temporal Classification (CTC) output layer [10,11]. Inspired by the latest advances in machine translation [12,13], automated question answering [14], image captioning [15], sentiment analysis [16], and speech recognition [17], we believe that encoder-decoder models with attention mechanisms [18,19] will become the new state-of-the-art for HTR tasks.
Attention-based methods have been used to help networks learn the correct features, focus on the right features, and determine an alignment between image pixels and target characters [20]. Attention increases the network's capacity to collect the most critical information for every part of the output sequence. Furthermore, attention networks can model language structures in the output sequence instead of merely mapping the input to the correct output [17]. We propose extensions to attention-based recurrent networks that make them applicable to handwritten recognition, which can be considered as learning to generate a sequence. In our research, attention-based models are tested on a variety of datasets, such as HKR and IAM. The attention mechanism assigns weights, over all possible features in the input sequence, to the sequences provided by the trained feature extraction mechanism; the weighted feature vector then helps to generate the next output in the sequence.
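As an illustration of this weighting, a single dot-product attention step can be sketched as follows. This is a generic sketch of the mechanism, not our exact architecture; the function and variable names are our own:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(encoder_feats, decoder_state):
    """Weigh each time step of the encoder features by its relevance
    to the current decoder state and return the context vector."""
    scores = encoder_feats @ decoder_state   # (T,) alignment scores
    weights = softmax(scores)                # attention distribution over T steps
    context = weights @ encoder_feats        # (D,) weighted sum of features
    return context, weights
```

The decoder consumes the context vector at each step, so different output characters can attend to different regions of the input image.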
In this research, our contribution is to present a novel attention-based fully gated convolutional recurrent neural network, tested on the Kazakh and Russian dataset [21]. The main results of our work are summarized as follows: (i) for the first time, the use of a gated convolutional neural network for the HTR task is studied; (ii) an effective attention mechanism and bidirectional gated recurrent neural network for the HTR task are developed; (iii) the problem of achieving results comparable with other models while keeping the number of parameters small for Attention-Gated-CNN-BGRU is addressed; (iv) to verify the efficiency of our method, comprehensive ablation and comparative studies are carried out. Our proposed HTR system achieves state-of-the-art performance on the public HKR dataset [21].
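The CER and WER figures used throughout are normalized Levenshtein (edit) distances between the reference and hypothesis transcriptions. A minimal reference implementation:

```python
def levenshtein(a, b):
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    """Character Error Rate: character edits divided by reference length."""
    return levenshtein(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word Error Rate: word edits divided by reference word count."""
    return levenshtein(ref.split(), hyp.split()) / len(ref.split())
```

SER is simpler still: the fraction of test lines whose transcription is not exactly correct.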
The related work on offline handwriting text recognition is considered in Section 2. Section 3 presents the attention-based, fully gated, convolutional recurrent neural network. Sections 4 and 5 present the experimental results and analysis on the Kazakh and Russian dataset; conclusions and remarks are given in Section 6.
Related Work
Handwritten text recognition approaches can be divided into specific categories: HMM-based approaches and RNN-based approaches. We discuss the strategies for each of these significant classes.
In offline HTR, the input features are extracted and selected from images, and then an artificial neural network (ANN) or HMM is used to predict the probabilities and decode them into the final text. The main disadvantage of HMMs is that they cannot model long sequences of data. HMMs have been commonly used for offline HTR because they have achieved good results in automatic speech recognition [22]. The basic idea is that handwriting can be perceived as a series of ink signals from left to right, similar to acoustic signals in a voice sequence. The inspiration for the hybrid HMM research models came from their success in automatic speech recognition.
On the other hand, RNNs such as the gated recurrent unit (GRU) [27] and long short-term memory (LSTM) [28] can solve this problem. RNN models have shown remarkable abilities in sequence-to-sequence learning tasks such as speech recognition [29], machine translation [30], video summarization [31], automated question answering [14], and others.
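For reference, a single GRU step can be sketched in NumPy as follows. This is a generic illustration of the gating equations (biases omitted for brevity), not our trained model:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: x is the input vector, h the previous hidden state."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # gated interpolation
```

A bidirectional GRU (BGRU) simply runs two such recurrences over the sequence, one left-to-right and one right-to-left, and concatenates their hidden states.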
For offline HTR, the two-dimensional image must be transformed into a feature sequence and forwarded through an encoder and decoder. GRU and LSTM networks handle this task by aggregating information and features from multiple directions. These handwriting feature sequences are fed into RNN networks. Due to the use of Connectionist Temporal Classification (CTC) [32], the input features require no segmentation. One of the CTC algorithm's key benefits is that it does not need any pre-segmented labeled data: the CTC algorithm aligns the input with the output automatically.
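CTC's simplest decoding rule (best-path decoding) collapses repeated labels and then removes the blank symbol; a minimal sketch, with the blank index assumed to be 0:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Best-path CTC decoding: collapse consecutive repeats, then drop blanks."""
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out
```

Note how an intervening blank separates genuine repeated characters (e.g. the double "l" in "hello") from mere frame-level repetition of the same character.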
Anshul Gupta et al. [33] analyzed various feature-based classification strategies for recognizing offline handwritten characters and, after experimentation, proposed an Optical Character Recognition technique. The approach involves the segmentation of a handwritten word using heuristics and artificial intelligence. Three Fourier descriptor combinations are used in parallel as feature vectors, and a support vector machine (SVM) is used as a classifier. Post-processing uses a lexicon to verify the validity of the predicted word. The results obtained using the proposed character recognition system are found to be satisfactory. An SVM classifier on the same feature set achieved 98% classification accuracy on the training data set and 62.93% on the test data set.
Bianne Bernard et al. [34] created an effective word recognition system resulting from the combination of three handwriting recognizers. The key component of the hybrid framework is an HMM-based recognizer that takes complex and contextual knowledge into account for better modeling of the writing process. A state-tying method based on decision tree clustering was implemented for modeling the contextual units; decision trees are built according to a collection of expert questions about how characters are formed. On the IAM test database, the HMM-based context-independent system achieves a 64.6% recognition rate, the HMM-based context-dependent system 67.3%, and the proposed HMM-based combination 78.1%.
Theodore Bluche et al. [18] presented an attentive model for end-to-end handwriting recognition. The model was inspired by recently introduced differentiable models of attention for speech recognition, image captioning, and translation. The key difference from a multidimensional LSTM network is the implementation of covert and overt attention. Their main contribution to handwriting recognition is automatic transcription without prior line segmentation, which was imperative in previous approaches. The model can also learn the reading order and can handle bidirectional scripts such as Arabic. They performed tests on the popular IAM database and reported promising results for full paragraph transcription. The proposed work achieved a Character Error Rate (CER) of 12.6% for one word, 9.4% for two words, 8.2% for three words, and 7.8% for four words.
Theodore Bluche et al. [35] proposed a new neural network architecture for state-of-the-art handwriting recognition as an alternative to multidimensional long short-term memory (MD-LSTM) recurrent neural networks. A CNN encoder and a bidirectional LSTM decoder were used to predict sequences of characters. The research aims to generate generic, multilingual, and reusable features with the convolutional encoder, leveraging more data through transfer learning. The architecture is also motivated by the need for fast GPU training and quick decoding on CPUs. The results are competitive with the previous version of A2iA Text-Reader [36], giving 15.4%, 17.2%, 17.6%, and 19.5% for the accurate, fast, fast-small, and faster-small models, respectively.
Joan Puigcerver et al. [37] achieved state-of-the-art offline handwritten text recognition, a field that until then relied mainly on multidimensional long short-term memory networks. The long-term two-dimensional dependencies, theoretically captured by multidimensional recurrent layers, may not be necessary to achieve good recognition accuracy, at least in the lower layers of the architecture. In this study, an alternative model relying only on convolutional and one-dimensional recurrent layers achieves results better than or comparable to those of the then state-of-the-art architecture and runs much faster. Furthermore, it was found that random distortions used as synthetic data significantly improve model precision. The 1D-LSTM model achieves 5.1% CER on validation and 5.7% CER on the test set; the 2D-LSTM achieves 8.2% CER on validation and 8.3% CER on the test set.
Zi-Rui Wang et al. [38] proposed a novel WCNN-PHMM architecture for offline handwritten Chinese text recognition. It addresses two key issues: the extensive vocabulary of Chinese characters and the diversity of writing styles. By combining a parsimonious HMM based on state tying with unsupervised learning of a writer code, the approach demonstrates its superiority over other state-of-the-art methods in both experimental results and analysis, achieving a relative 16.6% CER reduction over the conventional CNN-HMM without considering language modeling.

Nam Tuan Ly et al. [39] presented an attention-based convolutional sequence-to-sequence (ACseq2seq) model for recognizing an input image of multiple text lines from Japanese historical documents without explicit line segmentation. The recognition system has three main parts: a feature extractor using a convolutional neural network (CNN) to extract a sequence of features from the input image; an encoder using bidirectional Long Short-Term Memory (BLSTM) to encode the feature sequence; and a decoder using a unidirectional LSTM with an attention mechanism to generate the final target text based on the relevant attended features. On the test sets for level 2 and level 3, the proposed ACseq2seq model achieved character error rates of 4.42% and 12.98%, respectively.
Lei Kang et al. [40] implemented Convolve, Attend and Spell, an attention-based seq2seq model for handwritten word recognition. The proposed architecture has three key components: an encoder consisting of a CNN architecture such as VGG-19-BN and a bidirectional GRU, an attention mechanism devoted to the relevant characteristics, and a decoder formed by a one-directional GRU capable of spelling the corresponding word character by character. The proposed model achieves a 6.88% character error rate and a 17.45% word error rate on the IAM word-level dataset.
Arindam Chowdhury et al. [41] presented a novel approach that combines a deep convolutional network with a recurrent encoder-decoder network to map an image to a sequence of characters corresponding to the text present in the image. The entire model is trained end-to-end using Focal Loss, an improvement over the traditional Cross-Entropy loss that addresses the class imbalance inherent to text recognition. The Beam Search algorithm is used to boost the decoding capacity of the model; it searches for the best sequence from a set of hypotheses based on the joint distribution of individual characters. The proposed model achieves an 8.1% character error rate and a 16.7% word error rate on the IAM word-level dataset, and a 3.5% character error rate and a 9.6% word error rate on the RIMES dataset. Zelun Wang et al. [42] proposed a deep neural network model with an encoder-decoder architecture that converts math formula images into their LaTeX markup sequences. The encoder is a convolutional neural network that converts images into a group of feature maps. Before being unfolded into a vector, the feature maps are augmented with 2D positional encoding to better capture the spatial relationships of math symbols. The decoder is a stacked bidirectional long short-term memory model integrated with a soft attention mechanism that acts as a language model to convert the encoder output into a sequence of LaTeX tokens. The present work is the first to use the HKR dataset [21], which was proposed by the same authors and is the first publicly available Russian and Kazakh handwritten dataset; until now, no dataset in these languages has been available to researchers.
Proposed Model
Attention-Gated-CNN-BGRU recognizes handwritten Cyrillic symbols. The input is a cropped word image containing a one-dimensional sequence of characters or symbols. We propose a model based on the Attention-Gated-CNN-BGRU architecture with approximately 885,337 parameters; it achieves a high recognition rate, is more compact and faster, and has a lower error rate than the other models. The algorithm consists of six stages:
1. Preprocessing: resizing with padding (1024 × 128), illumination compensation, and deslanting of cursive images. The raw data are then converted into hierarchical data format (HDF5) files, which enables fast loading of the data.
2. Feature extraction using the CNN layers.
3. A Bahdanau attention mechanism that makes the model pay attention to the relevant inputs and relate them to the output.
4. Mapping of the feature sequence by the BGRU.
5. Decoding of the output into text format.
6. Post-processing to improve the final text.
Image Preprocessing
To minimize lighting irregularities in the image, such as light peaks and excessive shadows, the first approach, brightness and contrast correction, is applied. Other techniques, such as illumination compensation, are also useful here, since they remove both background and foreground noise caused by uneven lighting. Contrast adjustment remaps image intensity values to the full display range of the data type; an image with good contrast has sharp differences between black and white.
The second approach, slant correction or deslanting, normalizes the vertical tendency of the characters. The process identifies the slant angle of the writing relative to the vertical axis, then applies a geometric image transformation to correct the observed angle. In HTR applications, slant correction is an important part of the normalization task; the slant is the deviation of the vertical strokes of words from the vertical direction. The deslanting process extends to many applications: it may assist the segmentation and identification of digits in natural scenes, license plates, zip codes, italic machine-printed characters, and handwritten texts.
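As an illustration of the geometric correction step, below is a minimal numpy sketch of shear-based deslanting. It assumes the slant angle has already been estimated (the estimation itself, e.g., by maximizing a verticality criterion, is omitted), and the function name and row/column conventions are our own, not taken from the paper.

```python
import numpy as np

def deslant(img, slant_deg):
    """Shear the image horizontally to remove an estimated slant.
    Each row is shifted in proportion to its distance from the
    bottom of the image, straightening strokes that lean by
    `slant_deg` degrees to the right."""
    h, w = img.shape
    shear = np.tan(np.radians(slant_deg))
    out = np.zeros_like(img)
    for y in range(h):
        # Rows near the top are shifted the most.
        shift = int(round(shear * (h - 1 - y)))
        for x in range(w):
            src = x + shift
            if 0 <= src < w:
                out[y, x] = img[y, src]
    return out
```

In practice a library affine transform (e.g., from OpenCV or scipy.ndimage) would be used instead of the explicit loops; the loops are kept here only to make the index arithmetic visible.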
The last approach, standardization (or Z-score normalization), rescales the image pixels so that the resulting image has zero mean and unit variance.
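The Z-score standardization described above can be written in a couple of lines (a sketch; the small epsilon, our own addition, guards against division by zero on blank images):

```python
import numpy as np

def standardize(img):
    """Z-score normalization: zero mean, unit variance per image."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)
```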
Model
The model is fed with the image through the gated CNN, processed using Bahdanau attention with a bidirectional GRU. Finally, the GRU's output matrix is passed to the Connectionist Temporal Classification (CTC [32]) layer to calculate the loss value and decode the output matrix into the final text. The model architecture, which has four primary parts (an encoder, an attention block, a decoder, and CTC), is shown in Figure 1. The encoder converts an input image into a series of constant feature vectors. The attention block is a mechanism that provides a richer encoding of the source sequence, facilitating the construction of a context vector that the decoder can use. The decoder processes the feature sequences to predict sequences of characters. The CTC output layer for the RNN is used for the sequence labeling problem.
Convolutional Blocks
The encoder receives the input and generates the feature vectors, which hold the information and the characteristics that represent the input. The encoder network consists of 6 convolutional blocks trained to extract relevant features from the images. Each block consists of a convolution operation, which applies a filter kernel of size (3,3) in the first, second, fourth, and sixth blocks and (2,4) in the third and fifth blocks. A Parametric Rectified Linear Unit (PReLU) activation and Batch Normalization are then applied. To reduce overfitting, we also use Dropout at some of the convolutional layers [43] (with dropout probability equal to 0.2).
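As a rough sanity check on model size, the convolutional parameter count of such an encoder can be tallied as below. The per-block filter counts here are illustrative assumptions (the paper specifies only the kernel sizes and a total of roughly 885k parameters for the whole model); PReLU and Batch Normalization parameters are excluded.

```python
def conv_params(kh, kw, cin, cout):
    """Weights (kh*kw*cin per filter) plus one bias per filter."""
    return (kh * kw * cin + 1) * cout

# Hypothetical filter progression for the six blocks; kernel sizes follow
# the text: (3,3) for blocks 1, 2, 4, 6 and (2,4) for blocks 3 and 5.
blocks = [(3, 3, 1, 16), (3, 3, 16, 32), (2, 4, 32, 40),
          (3, 3, 40, 48), (2, 4, 48, 56), (3, 3, 56, 64)]
total = sum(conv_params(*b) for b in blocks)
```

With these assumed widths, the convolutional stack accounts for only a fraction of the parameter budget, leaving most of the capacity to the recurrent decoder.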
Gated Convolutional Layer
The idea of gate controls is to propagate a feature vector to the next layer. The gate layer looks at the vector feature's value at the given position and the adjacent values and determines if it should be held or discarded at that position. It allows generic features to be computed across the entire image and filtered when the features are appropriate, depending on the context. The gate (g) layer is implemented as a convolutional layer with the Tanh activation layer. It is added to the input function maps (x). The output of the gate mechanism is the pointwise multiplication of the inputs and outputs of the gate.
y = x ⊙ tanh(g(x)) (1)

Figure 2 shows the feature maps of a real example before and after the gated layer. This example shows that the gated layer allows the features to be more effective and excitatory.
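The gating operation described above reduces to an element-wise product. In the toy sketch below the gate pre-activation is supplied directly rather than produced by a convolutional layer; the tanh activation follows the text, although other gated CNNs (e.g., Dauphin et al.) use a sigmoid gate instead.

```python
import numpy as np

def gate(x, g_preact):
    """Pointwise multiplication of the input feature map with the
    tanh-activated gate output: features where the gate saturates
    near zero are suppressed, features near +/-1 pass through."""
    return x * np.tanh(g_preact)
```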
Decoder
The decoder is a bidirectional Gated Recurrent Unit (GRU) [20] RNN that processes feature sequences to predict sequences of characters. The feature vector contains 256 features per time-step, and the recurrent neural network propagates the information through the sequence. The GRU is employed as a gating mechanism in recurrent neural networks, much like an LSTM unit without an output gate, and it mitigates the vanishing gradient problem [44].
The GRU addresses the vanishing gradient problem by using an update gate and a reset gate. These two gates are vectors that determine which information is passed on to the output: the update gate controls how much past information flows into the memory, while the reset gate controls how much past information is forgotten. The two vectors can thus be trained to retain past knowledge or discard information unrelated to the prediction. The GRU is similar to an LSTM with a forget gate, but it contains fewer parameters because it lacks an output gate. The output sequence of the RNN layer is a matrix of size 128 × 96.
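For reference, the standard GRU update (Cho et al.) can be written as follows, with σ the logistic sigmoid, ⊙ the element-wise product, and bias terms omitted; this is the textbook formulation, not notation taken from the paper:

```latex
z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad
r_t = \sigma(W_r x_t + U_r h_{t-1}),
\tilde{h}_t = \tanh\!\left(W_h x_t + U_h (r_t \odot h_{t-1})\right), \qquad
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
```

When z_t is near zero the unit essentially copies its previous state, which is what lets gradients flow across many time-steps.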
Attention Mechanism
The attention block is a mechanism that provides a richer encoding of the source sequence (h_1, ..., h_S), facilitating the construction of a context vector (c_t) that the decoder can use.
The attention block allows the model to pay attention to the most relevant information in the source. The encoder's hidden state is obtained for each input time-step, rather than only for the final time-step. The attention weights α_ts are calculated from the encoder hidden states h_s and the sequence of decoder hidden states h_t. The context vector c_t is computed from the current attention weights α_ts and the sequence of encoded hidden states h_s as follows:

score(h_t, h_s) = v_a^T tanh(W_c [h_t; h_s]), α_ts = exp(score(h_t, h_s)) / Σ_{s'=1}^{S} exp(score(h_t, h_s')), c_t = Σ_{s=1}^{S} α_ts h_s

In the target sequence, a context vector c_t is constructed explicitly for every output word. First, every encoder hidden state is scored using a small neural network and then normalized to give a probability distribution over the encoder's hidden states. Finally, these probabilities are used to compute a weighted sum of the encoder's hidden states, providing the context vector to be used in the decoder. The attention layer produces outputs of dimension 128 × 256.
where v_a and W_c are weight matrices to be learned in the alignment model, S is the source sequence length, h_s is an encoder hidden state, and h_t is a decoder hidden state.
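To make the computation concrete, here is a small numpy sketch of this concat-style scoring and context computation; the dimensions are toy values and the variable names follow the symbols above.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())  # subtract max for numerical stability
    return e / e.sum()

def bahdanau_context(h_t, H_s, W_c, v_a):
    """score(h_t, h_s) = v_a^T tanh(W_c [h_t; h_s]);
    alpha = softmax over all source positions; context = weighted sum."""
    scores = np.array([v_a @ np.tanh(W_c @ np.concatenate([h_t, h_s]))
                       for h_s in H_s])
    alpha = softmax(scores)
    return alpha, (alpha[:, None] * H_s).sum(axis=0)

# Toy dimensions: S=4 source steps, hidden size 3, alignment size 5.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
alpha, ctx = bahdanau_context(rng.normal(size=3), H,
                              rng.normal(size=(5, 6)), rng.normal(size=5))
```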
Connectionist Temporal Classification(CTC)
The Connectionist Temporal Classification (CTC) output layer for RNNs is used for sequence labeling problems where no alignment between the inputs and the target labels is given. Without CTC, a neural network needs a separate training target for each segment or time-step of the input sequence.

Such framewise training has two significant drawbacks. First, the training data must be pre-segmented to provide these targets. Second, as the network generates only local classifications, the global aspects of the sequence (such as the probability of two labels occurring consecutively) must be modeled externally; indeed, the final label sequence cannot be inferred reliably without some post-processing. CTC addresses both issues by allowing the network to make label predictions at any point in the input sequence, provided that the overall label sequence is correct. CTC thereby eliminates the need for pre-segmented data, because it is no longer essential to align the labels with the input, and it directly outputs the probabilities of complete label sequences, ensuring that no additional post-processing is required to use the network as a temporal classifier.
While training the neural network, the CTC layer is given the RNN output matrix and the ground-truth text, and it computes the loss value. During inference, the CTC layer is given only the matrix and decodes it into the final text. Both the ground-truth text and the recognized text can be at most 96 characters long.
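At inference time, the simplest CTC decoding is best-path (greedy) decoding: take the most likely label per time-step, collapse consecutive repeats, then drop blanks. The pure-Python sketch below illustrates only this collapsing rule; the actual model may use a more elaborate search, and the loss itself is computed by the library's CTC implementation.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Best-path CTC decoding: collapse consecutive repeated labels,
    then remove blank symbols. `frame_labels` is the per-time-step
    argmax over the RNN output matrix."""
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out
```

Note that a blank between two identical labels keeps them distinct (e.g., "ll" in a word), which is why the blank symbol is needed at all.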
Data
The handwritten Kazakh and Russian database [21] can serve as a basis for research on handwriting recognition. It contains Russian words (areas, cities, villages, settlements, streets) written by a hundred different writers, and it also incorporates the most popular words in the Republic of Kazakhstan. A few preprocessing and segmentation procedures have been developed together with the database. The forms contain unconstrained handwriting in different styles. The database is prepared to provide training and testing sets for Kazakh and Russian word-recognition research. It consists of more than 1400 filled forms, with approximately 63,000 sentences, more than 715,699 symbols, and approximately 106,718 words.
Due to the scarcity of public data for the Kazakh and Russian languages, the HKR dataset [21] is used in this work; it contains 64,943 text lines, divided as shown in Figure 3. The basis of this work's dataset was made up of different words (or short sentences) written in the Russian and Kazakh languages (approximately 95% Russian and 5% Kazakh words/sentences, respectively). These languages use Cyrillic and share the same 33 characters; besides these, the Kazakh alphabet also contains nine additional specific characters. The dataset of distinct words/sentences was boosted by applying various handwriting styles (approximately 50-100 different persons) to each of the words. These procedures resulted in a final dataset with a large number of handwritten words/sentences. The final dataset was then split into three parts: Training (70%), Validation (15%), and Testing (15%). The test dataset was equally split into two sub-datasets (7.5% each): the first, named TEST1, consisted of words that did not exist in the Training and Validation datasets; the second, named TEST2, was made up of words that exist in the Training dataset but with totally different handwriting styles. The primary purpose of splitting the test dataset into TEST1 and TEST2 was to compare the accuracy of recognizing unseen words against words seen in the training phase but written in unseen handwriting styles. After the training, validation, and testing datasets were prepared, the models were trained, and a series of comparative evaluation experiments were conducted. Figure 4 shows examples of images in the dataset.
Training
Attention-Gated-CNN-BGRU is trained to minimize the validation loss of the CTC function. Optimization is performed with stochastic gradient descent using the RMSProp method [45], with a base learning rate of 0.001 and mini-batches of 32. Early stopping with patience 20 is also applied: the validation loss is monitored at each epoch, and when it does not improve for 20 epochs, training is interrupted.
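The early-stopping rule can be paraphrased in a few lines. This is a sketch that approximates Keras-style patience semantics; the library implementation may differ in details such as tie-breaking or weight restoration.

```python
def early_stopping(val_losses, patience=20):
    """Return the 0-based epoch at which training stops: the first epoch
    at which the validation loss has failed to improve on the best value
    for `patience` epochs, or the last epoch if that never happens."""
    best, best_epoch = float('inf'), -1
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1
```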
Experiments
The proposed and tested models have all been implemented using the Tensorflow library [46] for Python, which allows transparent use of highly optimized mathematical operations on GPU through Python. A computational graph is defined in the Python script to specify all operations needed for the computations. The tensors are then evaluated, and Tensorflow runs the necessary part of the computational graph, implemented in C code, on the CPU, or on the GPU if one is made available to the script; the operations have supported GPU implementations. Although Tensorflow supports the use of multiple GPUs, in our project we use only one GPU for each test run to keep the processing simple. The experiments were run on a machine with 2× "Intel(R) Xeon(R) E-5-2680" CPUs and 4× "NVIDIA Tesla k20x" GPUs.
Comparison with State-of-the-Art on HKR Dataset
This section presents the results of applying the research model and compares its performance with other published models that used different datasets, to obtain a state-of-the-art scientific comparison (Bluche and Puigcerver) [35,37]. The dataset is divided into four parts: training, validation, Test1, and Test2. Attention-Gated-CNN-BGRU and the other models are evaluated on both test datasets, as shown in Table 1. The table shows a considerable difference in character error rate (CER) between Attention-Gated-CNN-BGRU and the Puigcerver model, because the Puigcerver model has many parameters (around 9.6 million) and overfits after 30-50 epochs.
Our network was trained from scratch, with no pre-trained model or transfer from another dataset. The standard performance measures are used for all the results presented: the character error rate (CER) and word error rate (WER) [47]. The CER is based on the Levenshtein distance, i.e., the number of character substitutions (S), insertions (I), and deletions (D) required to turn one string into the other, divided by the total number of characters in the ground-truth word (N).
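The CER computation reduces to an edit-distance calculation; a compact dynamic-programming sketch is shown below. The same `levenshtein` function works on word lists, so WER follows by passing tokenized sentences instead of character strings.

```python
def levenshtein(a, b):
    """Minimum number of substitutions, insertions, and deletions
    turning sequence a into sequence b (classic DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cer(hyp, ref):
    """CER = (S + I + D) / N, with N the ground-truth length."""
    return levenshtein(hyp, ref) / len(ref)
```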
Similarly, the WER is calculated as the sum of the numbers of word substitutions (S_w), insertions (I_w), and deletions (D_w) necessary to transform one string into another, divided by the total number of ground-truth words (N_w). Attention-Gated-CNN-BGRU was trained to minimize the validation loss of the CTC function; training and validation loss are shown in Figure 5. Table 2 presents randomly selected samples obtained on the HKR dataset, with impressive results. In the first example, only one character, the letter "щ", was not detected. In the second example, also only one character, "Ь", was not detected; the letters "б" and "Б" are the same except that one is capital and the other lowercase. The third example has many character errors because the writing style is connected, and some Russian characters look alike when connected, for example "иц" and "щ". The remaining examples are correct. Attention-Gated-CNN-BGRU not only detects characters but can also detect symbols such as '.', '!', '?', etc. Table 3 shows the effect of the attention mechanism: we trained the model with attention, without attention, and finally without BGRU, and we see that attention helps to reduce the error rates when it is applied with a decoder. Neural networks use randomness by design to ensure that the function approximating the problem is effectively learned; randomness often yields better performance for a machine learning algorithm than deterministic alternatives. Random initialization of the network weights is the most common type of randomness used in neural networks, but randomness can also be used in other areas.
Here are some examples:
• Initialization randomness, such as weights.
• Regularization randomness, such as dropout.
• Randomness in layers, such as word embeddings.
• Optimization randomness, such as stochastic optimization.
Experimental Evaluation on HKR dataset
To choose the appropriate statistical test, we examine whether the differences between WER and CER results are normally distributed. For this, the Quantile-Quantile (Q-Q) plot [48], the Shapiro-Wilk test [49], and D'Agostino and Pearson's test [50] are used. On visual inspection of the Q-Q plot, a sample is considered consistent with a normal distribution if the sample and theoretical quantiles fall close to the line representing the theoretical distribution; this decision is supported by checking whether the points fall inside the envelope of 95% pointwise confidence intervals [51]. We also inspect the p-value yielded by the Shapiro-Wilk test: a significant result (p-value < 0.05) shows that the data deviate significantly from a normal distribution, so we look for a larger p-value (i.e., p-value > 0.05) to verify that the underlying sample distribution is normal. Figures 6-8 present the distributions of the differences between Attention-Gated-CNN-BGRU and the other competitors, computed on the two test datasets. In these figures, the red-bordered image represents a non-normal distribution, where one of the points crossed the boundary of the 95% pointwise confidence envelope and the p-value is less than 0.05. All other distributions are normally distributed, since the points fall inside the envelope and the p-value > 0.05. Thus, as the distributions of the underlying differences are normal, we favor the paired t-test with 95% confidence (p-value < 0.05) to establish the statistical significance of the results. On the HKR dataset, we performed ten training executions of each model for statistical testing and used a t-test with 5% significance.
As the null hypothesis we considered H0: µ1 ≥ µ2, and as the alternative hypothesis H1: µ1 < µ2. We analyzed the hypotheses for the CER, WER, and SER scenarios, where µ1 is the average error of the proposed model and µ2 is the average error of the model being compared. This means that the p-value must be lower than α = 0.05 to conclude that the proposed model offers a significantly lower error rate.
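The test statistic for this paired, one-sided comparison is straightforward to compute; the p-value would then come from the Student t distribution with n−1 degrees of freedom (e.g., via scipy.stats.ttest_rel), which is omitted from this sketch.

```python
from math import sqrt

def paired_t_statistic(errs_a, errs_b):
    """t = mean(d) / (sd(d) / sqrt(n)) over paired differences d = a - b.
    A strongly negative t supports H1: mu_a < mu_b (model a has lower
    error). Requires at least two non-identical differences."""
    d = [x - y for x, y in zip(errs_a, errs_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / sqrt(var / n)
```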
On both the Test1 and Test2 datasets, the computed p-values of the t-tests for CER, WER, and SER were lower than 0.001. This is below the standard threshold of 0.05, so we can conclude that the proposed model, based on Attention-Gated-CNN-BGRU, has significantly lower CER, WER, and SER on the Test1 and Test2 datasets. The p-values for the Test1 and Test2 datasets are shown in Table 4.
We also used the Wilcoxon signed-rank test [52] to confirm that our proposed model has significantly lower CER, WER, and SER. The Wilcoxon test is a nonparametric test designed to evaluate the difference between two treatments or conditions where the samples are correlated; it is particularly suitable for data from a repeated-measures design when the prerequisites of a dependent-samples t-test are not met. We calculated both a W-value and a z-value. If the sample size N is at least 20, the distribution of the Wilcoxon W statistic tends toward a normal distribution, so the z-value can be used to evaluate the hypothesis; if N is small, particularly below 10, the W-value should be used instead. The value of z is −2.8031, and the p-value is 0.00512; the result is significant at p < 0.05.

Table 4. A comparison of the recall (i.e., sensitivity) of the CER, WER, and SER for Test1 and Test2. To verify whether our proposed model achieves statistically significant improvements in the sensitivity of reducing the errors, the paired t-test is conducted with 95% confidence (p-value < 0.05).
Comparison with Other Models on HKR Dataset Using Character Accuracy Rates
We evaluated the results of Attention-Gated-CNN-BGRU and the other models using another measure, the Character Accuracy Rate (CAR) [53,54], which is used to calculate the accuracy of symbols on the Test1 and Test2 datasets.
The LineHTR model [57] is an extension of the earlier SimpleHTR model, developed to enable the model to process images containing a full text line (not a single word only) and thus to further increase the model's accuracy. The architecture of the LineHTR model is quite similar to that of SimpleHTR, with a few differences in the number of CNN and RNN layers and the size of those layers' inputs: it has 7 CNN layers and 2 Bidirectional LSTM (BLSTM) RNN layers.
According to the authors of the "Nomeroff Net" automatic number plate recognition system [58], its OCR architecture was originally taken from [59]. Although the Nomeroff Net OCR architecture was designed to recognize machine-typed car numbers, it is also worth checking the model's performance on handwritten text recognition tasks.
The first experiment was conducted with the SimpleHTR model, which showed average Character Accuracy Rates (CAR) of 38.28% and 90.29% on the TEST1 and TEST2 datasets, respectively (Figure 9). This considerable difference in CAR shows that the SimpleHTR model overfitted to words seen in the training stage and demonstrated a lower level of generalization.
The next experiment was carried out with the LineHTR model, which was trained on the data for 100 epochs. This model demonstrated average CARs of 29.86% and 86.71% on the TEST1 and TEST2 datasets, respectively (Figure 9). A similar tendency of overfitting to the training data can be observed here as well.
The same experiments were conducted with the NomeroffNet HTR model. Unlike the previous models examined, this model showed a lower average CAR (70.87%) even on the TEST2 dataset (Figure 10); its CAR on the TEST1 dataset was 23.9%. As can be observed from the figure, the NomeroffNet model also suffers from overfitting. The last experiment covered the Attention-Gated-CNN-BGRU, Bluche, and Puigcerver models. The Bluche model achieved 38.71% and 54.34% on the TEST1 and TEST2 datasets (Figure 11), and the Puigcerver model achieved 7.22% and 16.56% (Figure 11). The CAR of Attention-Gated-CNN-BGRU on the TEST1 and TEST2 datasets was 56.23% and 67.06%, respectively (Figure 10). As shown in the figure, the Attention-based Fully Gated CNN-BGRU model resulted in higher CAR and generalization rates overall.
Comparison with Other Datasets
The results of our research model are demonstrated and compared on other public datasets, namely IAM [60], Saint Gall [61], Bentham [62], and Washington [63]. The IAM Handwriting Database 3.0 includes 1539 scanned text pages, 5685 isolated and labeled sentences, handwriting samples contributed by 657 authors, 13,353 isolated and labeled text lines, and 115,320 isolated and labeled words. This dataset includes training, validation, and test splits, where no author contributing to the training set appears in the validation or test splits.
The Saint Gall database contains a handwritten historical manuscript with the following characteristics: 9th century, Latin language, a single writer, Carolingian script, and ink on parchment. It comprises 60 pages, 1410 text lines, 11,597 words, 4890 word labels, 5436 word spellings, and 49 letters. The Washington database was developed from the George Washington Papers at the Library of Congress and has the following characteristics: 18th century, English language, two writers, longhand script, and ink on paper. It includes 20 pages, 656 text lines, 4894 word instances, 1471 word classes, and 82 letters.
Bentham's writings comprise many articles written by the renowned British philosopher and reformist Jeremy Bentham (1748-1832). This collection is being transcribed by amateur volunteers involved in the award-winning crowd-sourced initiative Transcribe Bentham; more than 6000 documents have been transcribed through this public online site. The Bentham dataset is a subset of documents transcribed using TranScriptorium. This dataset is free and available for research purposes in two parts: the images and the ground truth (GT). The GT provides information on each image's layout and line-level transcription in page format. Both parts must be downloaded separately, and each includes a comprehensive explanation of how the dataset is structured. We obtained state-of-the-art results on the IAM, Saint Gall, Bentham, and Washington databases, as presented in Table 5.
Data Augmentation and Batch Normalization
To artificially supplement the training samples and minimize overfitting, we applied random distortions to the input images. We also used the data augmentation schemes of Perez [70], such as affine transformations, flips, scaling, gray-scale erosion, and dilation. Table 6 describes the implementation details of each strategy (for example, strategy 6 applies gray-scale erosion with 5% probability and dilation with 5% probability). These strategies are applied randomly while training the model. Table 7 shows the impact of data augmentation and batch normalization. When random distortions are added to the input images, batch normalization decreases the error rates as follows: (1) For Attention-Gated-CNN-BGRU on IAM, the CER on the test set decreases from 5.79% to 3.23%, and the WER decreases from 15.85% to 9.21%. For the Bluche model on IAM, the CER on the test set decreases from 6.96% to 4.94%, and the WER decreases from 18.89% to 11.21%. Finally, for the Puigcerver model on IAM, the CER on the test set decreases from 8.20% to 5.97%, and the WER decreases from 25.0% to 13.73%.
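Gray-scale erosion and dilation are simply min- and max-filters over a small neighborhood. Below is a naive numpy sketch of dilation (erosion is identical with `min` instead of `max`); in practice a library routine such as scipy.ndimage.grey_dilation would be used, and the kernel size here is an illustrative choice.

```python
import numpy as np

def grey_dilate(img, k=3):
    """Gray-scale dilation with a k x k structuring element: each pixel
    becomes the maximum over its neighborhood, which thickens strokes
    in white-on-black handwriting images."""
    h, w = img.shape
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].max()
    return out
```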
(2) In the Attention-Gated-CNN-BGRU on Saint Gall, the CER on the test set decreases from 7.25% to 4.47%, and WER decreases from 23.0% to 19
Experimental Evaluation on Other Datasets
On each dataset (IAM, Saint Gall, Washington, and Bentham), we performed ten training executions of each model for statistical testing and used a t-test with 5% significance. As the null hypothesis we considered H0: µ1 ≥ µ2, and as the alternative hypothesis H1: µ1 < µ2. We analyzed the hypotheses for the CER and WER scenarios, where µ1 is the average error of the proposed model and µ2 is the average error of the model being compared. This means that the p-value must be lower than α = 0.05 to conclude that the proposed model offers a significantly lower error rate.
The computed p-values of the t-tests for CER and WER were lower than 0.0001 on the test datasets. This is below the standard threshold of 0.05, so we can conclude that the proposed model, based on Attention-Gated-CNN-BGRU, has significantly lower CER, WER, and SER on the test datasets. The p-values for the test datasets are shown in Table 8. Table 8. A comparison of the recall (i.e., sensitivity) of the CER, WER, and SER on the test datasets (IAM, Saint Gall, Washington, and Bentham). To verify whether our proposed model achieves statistically significant improvements in the sensitivity of reducing the errors, the paired t-test is conducted with 95% confidence (p-value < 0.05).
Discussion
This research's primary goal was to study and quantitatively compare state-of-the-art RNN models in order to choose the best one for the task of handwritten Cyrillic postal-address recognition; this goal also incorporates all efforts put into improving the best-performing RNN model. According to the experimental results, Attention-Gated-CNN-BGRU demonstrated comparatively better results in terms of generalization and overall accuracy (see Table 1). As the dataset includes a small number of Kazakh-language handwritings, the Kazakh-specific characters have lower frequencies (distribution in the dataset) compared to other Cyrillic letters. Consequently, the models mentioned above struggle to recognize these characters, which leads to higher recognition error rates and lowers the overall average CAR. The dataset also includes non-alphabetic characters (such as ".", ",", "!", and so on) with small distributions. The Puigcerver model seemed prone to overfitting while being trained on Cyrillic handwriting. We expect that enriching the dataset with various Kazakh and Russian words will solve this problem.
In the proposed architecture, the "image-level" model consisting of convolutions and the "language-level" model of recurrent layers have been conceptually separated. Attention-Gated-CNN-BGRU was trained more than ten times, and the results of each training, evaluation, and testing experiment under different random seeds are recorded in Table 9. By training the encoder on a large volume of Russian and Kazakh data from various collections, we plan in the future to use Attention-Gated-CNN-BGRU for other applications, including speech recognition, image tagging, video captioning, sign language translation, music composition, and genome sequencing, which may benefit from our approach. For example, the DeepSpeech speech recognition technique uses a recurrent neural network to transform raw voice into character streams; such character streams likewise use CTC to form logical words in text streams.
Conclusions
In this paper, we have considered the use of an encoder-decoder neural network architecture to achieve state-of-the-art results for Kazakh and Russian handwriting recognition. It consists of a fully gated convolutional encoder that extracts generic features of handwritten text, an attention mechanism, a BGRU decoder, and a CTC model to predict the sequence of characters. An important block is the attention mechanism, which increases the network's capacity to collect the essential information for every part of the output sequence, together with the gated layers implemented in the encoder, which can select essential features and inhibit the others. Our method achieves a high recognition rate with a small number of parameters. The experimental results empirically evaluate the performance of the proposed method, which achieves state-of-the-art results on the English-based IAM, Saint Gall, Bentham, and Washington datasets and on the Russian-Kazakh dataset (HKR). Future work includes experimentation with larger contextual filters, the addition of sentence-level features such as sent2vec, the introduction of hierarchical processing to achieve high recognition rates at the symbol, word, sentence, and paragraph levels, and the use of fully convolutional networks, in order to apply these methods to more languages, resource-constrained languages, and other recurrent cases.

Funding: This work was done with the support of grant funding for scientific projects of the MES RK No AR05135175 "Development and implementation of a system for recognizing handwritten addresses of written correspondence JSC "KazPost" using machine learning".
Conflicts of Interest:
The authors declare no conflict of interest.
Utilization of waste wool fibers for fabrication of wool powders and keratin: a review
Wool fiber contains approximately 95% keratinous proteins, making it one of the most abundant sources of structural protein. However, a large amount of wool waste is underutilized. Developing appropriate approaches to recycle wool waste and produce value-added products is vital for sustainable development and for reducing the environmental burden. Thus, this paper reviews the mechanical methods of fabricating wool powder, including pan milling, combined wet and air-jet milling, steam explosion, freeze milling, and three-stage milling. The factors influencing powder shape, size, structure, and properties are highlighted to give an overview of possible control methods. This review then summarizes various chemical methods for the extraction of wool keratin, underlining the dissolution efficiency and the structure of the extracted keratin. Furthermore, applications of reused wool particles in textiles, biosorbents, and biomaterials are also reported. Finally, several perspectives on future research into the fabrication and application of wool particles are highlighted.
Introduction
Wool is a biodegradable and biocompatible natural fiber that has attracted considerable attention for use in textiles and various technological fields due to its outstanding properties, such as its specific structure, high moisture regain, excellent resiliency, good elasticity, good insulation capacity, low heat conductivity, and excellent affinity for dyestuffs [1]. More than 2.5 million tons of wool are produced annually worldwide [2]. Waste wool mainly comprises by-products of wool fiber subjected to textile processing, poor-quality raw wool that is not fit for spinning, and other secondary waste that is generally obtained in the textile industry. However, large amounts of wool waste are dumped in landfills or burnt, which may contribute to the pollution of the environment. Rising concern for the environment and growing demand for safe and sustainable bio-based materials are prompting the search for improved methods of recycling wool waste.
Several reviews on fabrication methods and applications of wool particles have been published in recent years. Patil et al. [3] reviewed the direct and indirect routes to fabricate ultrafine particles from protein fibers and presented the applications of protein fiber particles. Donato et al. [4] introduced keratin and keratin-based biomaterials. Karthikeyan et al. [5] reported the industrial applications of keratins extracted from feathers, hair, wool, and horn. Gosh et al. [6] summarized keratin-based materials for the adsorption of toxic pollutants. Shavandi et al. [7] discussed the advantages and limitations of the major methods for extracting keratin, including reduction, oxidation, microwave irradiation, alkali extraction, steam explosion, sulfitolysis, and ionic liquids. Furthermore, they reviewed various modification methods for wool fibers and recent fabrication methods for keratin-based thermoplastic biocomposites, including intermixed blending and melt processing [8,9].
In this review, we focus on environmentally friendly methods to recycle wool fiber. First, the structures and properties of wool powders fabricated using different methods, including pan milling, combined wet and air-jet milling, steam explosion, freeze milling, and three-stage milling, are compared. Second, the traditional methods for extracting wool keratin are summarized. The limitations of traditional chemical methods for extracting wool keratin have inspired researchers to develop simple and eco-friendly processing methods. The extraction efficiency of wool keratin using a series of green ionic liquids (ILs) and deep eutectic solvents (DES) is highlighted. In addition, promising applications in textiles, biosorbents, cosmetics, and biomaterials are reviewed. Finally, the problems and challenges of the fabrication methods, as well as the applications of wool particles, are discussed.
Preparation methods of wool particle
Recovered wool waste is considered a "rich" material owing to its composition and properties [10]. Ideally, wool particles should be fabricated via a technology that does not produce hazardous waste. To this end, various mechanical and chemical methods have been developed. This section presents a detailed overview of the utilization of wool waste.
Mechanical method
Mechanical methods are effective for fabricating wool powders of various sizes and shapes. Generally, these methods are characterized by short processing times, low cost, and suitability for mass production. Moreover, they can reuse all the wool waste generated at different stages of the textile lifecycle. Micro- and nanoparticles produced by mechanical methods can maintain the natural microstructure and high crystallinity of the wool fiber. Among the various mechanical methods, pan milling, combined wet and air-jet milling, steam explosion, freeze milling, and three-stage milling have been commonly used to fabricate wool powders. Figure 1 shows the corresponding preparation processes.
Pan milling
According to Xu et al. [11,12], needle-like wool powder with an average diameter of 2 μm can be obtained after grinding wool fiber for only 3 h using a home-made machine with two special milling pans [13], one of which has a concave surface, and the other, a convex surface. The two milling pans exhibited low heat generation and high anti-abrasion properties, preventing heat accumulation in the grinding zone. This approach has been used for the fabrication of other fiber powders, including fibroin powder, down powder, leather powder, and cellulose powder at room temperature without additional cooling processes [14][15][16][17]. The shape and size of the wool powders are directly determined by the grinding time. The wool powder prepared using a grinding time of 5 min exhibited the original outline of the wool fiber. After grinding for 0.5 h, the amorphous region was first destroyed, leading to the formation of wool powder with a small diameter. As the grinding time was increased, certain crystals of the wool fiber were also destroyed, slightly decreasing the crystallinity. Additionally, Xu et al. [12] compared the influence of the grinding time on thermal properties obtained using this technique. They found that as the grinding time increased, the wool powder's affinity to water increased. Furthermore, the temperature for the crystal cleavage and the destruction of the crosslinks also increased. The fine wool powder, with improved thermal stability, had advantages in polymer-based applications.
From an industrial perspective, the fabrication of wool powder by pan-milling is convenient, simple, and economical since it allows the rapid production of powders at room temperature. This method solves the issue of uneven particle size distribution, commonly observed when using mechanical methods. This effective fabrication strategy exhibits potential for the fabrication of fine powder. The approach, however, requires a special machine.
Combined wet milling and air-jet milling
Rajkhowa et al. [18,19] obtained ultrafine wool powder using a combination of two milling methods, i.e., wet and air-jet milling. The typical wool powder fabrication process is as follows. First, wool fibers are chopped to a length of 1 mm. The snippets are immersed in water for 6 h and then wet-milled, with circulating cooling water (18°C) used to keep the product temperature low during milling. After wet milling, the wet powders are spray-dried at 130°C. Finally, the resulting dried wool particles are ground using an air-jet milling machine at a grinding air pressure of 110 psi.
The wet milling time and air-jet milling process can significantly affect the size and morphology of the wool powders, resulting in different wool powder morphologies, as shown in Fig. 2. SEM images show that the particle size of the wool powder is reduced by increasing the wet milling time. After wet milling for 5 h, the wool powder appears aggregated. The air-jet milling process is then used to improve the separation of particles through the application of external high pressure. The final wool powder shows a considerable reduction in median size (from 4.0 μm to 1.5 μm) after air-jet milling, as shown in Fig. 2d. Brunauer-Emmett-Teller (BET) analysis demonstrated that the surface area of the ultrafine wool particles is 700 times higher than that of the wool fiber. Therefore, it is necessary to separate the wool particle aggregates by the air-jet milling method.
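A rough geometric check shows why finer particles expose more surface: for an ideal sphere, the specific surface area is SSA = 6/(ρd), so it scales inversely with diameter. The density of 1.3 g/cm³ below is an assumed typical value for keratin, and real milled particles are irregular and porous, which is why the measured BET increase (700×) far exceeds this idealized geometric estimate.

```python
def specific_surface_area(d_um, density_g_cm3=1.3):
    """Specific surface area (m^2/g) of an ideal sphere: SSA = 6 / (rho * d)."""
    d_m = d_um * 1e-6                     # diameter, micrometres -> metres
    rho_kg_m3 = density_g_cm3 * 1e3       # density, g/cm^3 -> kg/m^3
    return 6.0 / (rho_kg_m3 * d_m) / 1e3  # convert m^2/kg -> m^2/g

ssa_before = specific_surface_area(4.0)  # median size before air-jet milling (um)
ssa_after = specific_surface_area(1.5)   # median size after air-jet milling (um)
# SSA is inversely proportional to diameter, so 4.0 um -> 1.5 um gives ~2.7x
```

The geometric gain from 4.0 μm to 1.5 μm is only about 2.7×; the much larger BET value reflects porosity and surface roughness created during milling.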
Faruque et al. [20] fabricated alpaca powders using this method. The average diameter of the as-prepared particles was 2.5 μm, without the use of any chemicals in the pretreatment and powder fabrication processes. X-ray diffraction (XRD) analysis demonstrated that the crystallinity of the prepared alpaca powders decreased with increasing milling time. Moreover, differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) tests revealed that the thermal properties of the powders were similar to those of the alpaca fibers.
Using a similar method, Patil et al. [21] compared the morphology and surface properties of cashmere guard hair powders, with and without acid hydrolyzed guard pretreatment. They found that the average diameter of the as-prepared powder reduced from 2.32 μm to 0.46 μm when the cashmere guard hair was subjected to acid hydrolysis pretreatment. The particle size and shape of the powder were greatly affected by acid hydrolysis during dry air jet milling but not during wet attritor milling. Additionally, the hydrolysis induced a more porous structure, leading to an increased BET surface.
For the preparation of micron-scale powder, combined wet and air-jet milling does not require chemical pretreatment and thus avoids chemical degradation of the protein structure, which is a major advantage. The air-jet milling process can fabricate well-separated fine particles, but the machine requires high operating pressure.
Steam explosion
Tonin et al. [22] investigated a rapidly developing technique for the fabrication of wool powder. The wet wool fibers were soaked with saturated steam at 220°C for 10 min. The slurry was then filtered, and two phases (wet solid-state and liquid phase) were obtained. The wet solid was dried at 105°C and then ground into short wool fragments or shapeless aggregates. Additionally, sediment in the liquid phase was separated using a centrifugal force. The wool powder with spherical morphology (with a diameter in the range 0.5-3.0 μm) was observed in the dried sediment, which was attributed to the external thermal-induced shrinkage of protein. The obtained wool powders exhibited more β-sheet and disordered structures.
This approach demonstrated the conversion of the wool fiber under strong physical conditions, confirming the cleavage of the disulfide bonds in the wool fiber without the addition of harmful chemical agents. The main advantage of this method is the rapid and simple production process without the need for additional toxic chemical agents. However, the solid fraction, accounting for 62.36 wt%, consists of particles that are considered too large. Thus, this green process is considered suitable for the rapid pretreatment of wool fiber.
Freeze milling
The high temperature generated during milling easily oxidizes the powder. Therefore, Hassabo et al. [23,24] presented a systematic study on the fabrication of wool powder using a freezer/mill machine cooled with liquid nitrogen. The freezing and crushing times under liquid nitrogen during the milling process were optimized. The size of the wool powder depended significantly on the crushing time. The results showed that the fabrication of wool powder in the size range 60-80 μm is optimal with a freezing time of 5 min, a crushing time of 3 min, and 12 cycles. Under liquid nitrogen, oxidation is hindered, resulting in white wool powder. Raman spectroscopy showed that the freeze milling process does not affect the chemical structure of the wool, and the wool powder retained the thermal properties of the wool fibers. The freeze milling method can preserve the structure of the wool fiber; further, this method is considered suitable for the production of high value-added products, or as a pretreatment method, owing to the safe use of liquid nitrogen in the grinding process.
Three-stage milling
As expected, soft viscoelastic fibers are difficult to mill into nano-powder. Thus, pretreating the wool fiber is an effective way to produce nanoscale powder. Cheng et al. [25] developed a pretreatment approach for wool fibers using a hydrogen peroxide solution. The method includes three main stages, as indicated by its name.
First stage
After pretreatment, the disulfide bond of the wool fiber was gradually oxidized. First, the wool was milled into a combination of rod-like particles of around 300 μm, and then into superfine powder, smaller than 10 μm.
Second stage
Large wool particles were pulverized with an ultrasonic crusher to obtain wool powder with a diameter of 0.1-7 μm.
Third stage
The fine wool powder was crushed into nanoscale spherical powder (with a diameter of less than 100 nm) using a nanocolliding machine. The spherical nanoscale wool powder produced by this method exhibited decreased crystallinity and an increased amount of secondary amine groups. Additionally, no evident changes were observed in the chemical structure of the wool particles.
As described above, small-sized wool powder particles, with spherical morphologies, are obtained using the three-stage milling method. This method of fabricating wool nanopowder combines the pretreatment and design of the milling parameters effectively.
Chemical methods

Oxidative and reductive methods
Wool fiber is a fibrous protein that consists of approximately 95% pure keratin [10]. The presence of a high degree of disulfide cross-linkages, together with ionic, hydrogen, and other bonds, constrains the polypeptide backbone of wool keratin. Traditional keratin extraction methods for wool fiber, including reduction, oxidation, and sulfitolysis, utilize the different properties of oxidative and reductive agents to cleave the disulfide bonds. The extraction mechanisms are shown in Fig. 3. For decades, oxidation methods have been used to extract keratin. The commonly used oxidative agents are peracetic acid and performic acid. Pakkaner et al. [26] reported the synthesis of wool keratin through peracetic acid oxidation. In a typical process, wool was dissolved in peracetic acid at 2% w/v and 37°C. The oxidized fibers were then treated with a 100 mM tris-base solution, at pH 10.5, for 2 h. Finally, the keratin solution was centrifuged, dialyzed, and freeze-dried. Spherical nanoparticles, with sizes ranging from 15 to 100 nm, were obtained at a low protein concentration (5 mg/ml) in solution. The molecular weights (MWs) of the water-soluble oxidized keratin proteins are between 23 and 33 kDa. Buchanan et al. [27] used performic acid to fabricate keratin containing cysteine and cysteic acid with low MWs.
The main advantage of the oxidation methods is that the protein can be separated into α, β, and γ keratose, based on their different solubility at different pH [28].
Furthermore, the obtained keratose exhibited a high content of cysteine-S-sulfonated residues. The disadvantages of this method are the low extraction yield and the long processing time.
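As a back-of-the-envelope reading of the MW range quoted above (23-33 kDa for the water-soluble oxidized keratins), the approximate chain length can be estimated with the generic rule of thumb of ~110 Da per amino-acid residue. The 110 Da figure is an assumption, since wool keratin's particular residue composition shifts this value somewhat.

```python
def approx_residue_count(mw_kda, avg_residue_da=110.0):
    """Rough chain length for a given molecular weight, assuming an
    average residue mass of ~110 Da (a generic rule of thumb, not a
    measured value for wool keratin)."""
    return mw_kda * 1000.0 / avg_residue_da

low = approx_residue_count(23.0)   # lower end of the reported MW range
high = approx_residue_count(33.0)  # upper end of the reported MW range
# i.e., polypeptides on the order of a few hundred residues
```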
As shown in Fig. 3, the disulfide bonds can be disrupted using thiols and sulfites. The commonly used thiols are thioglycolic acid and mercaptoethanol (MEC). Yamanaka et al. [29] extracted soluble wool keratin using 0.2 M thioglycolic acid at 30°C and a pH in the range 11-13. They reported that the yield increased with increasing pH and reaction time (pH < 13). The wool keratin maintains the chemical structure of wool.
Recently, Fitz-Binder et al. [30] were the first to obtain wool keratin using a combination of a calcium chloride (CaCl2)-water-ethanol solvent and thioglycolic acid. The effects of pH, reaction time, and temperature on the dissolution of the wool fiber were investigated. The results highlighted that dissolution was optimal at pH = 7.0, time = 2 h, and temperature = 60°C. Up to 70% of the wool keratin was dissolved in the solvent using this method. The advantage of the mixed solution used in this approach is that it can reduce the risk of hydrolytic degradation of the wool keratin. After the formation of the wool keratin, the keratin solution was directly solidified under a heat press, enabling the formation of a keratin composite.
Yamauchi et al. [31] obtained wool keratin using MEC. The typical thiol-reduction preparation process is as follows: the cleaned wool was mixed with 7 M urea, sodium dodecyl sulfate (SDS), and MEC at 50°C for 12 h. The aqueous solution was maintained at neutral pH. Next, the solution was dialyzed to obtain regenerated wool keratin. They found that the surfactant accelerated the wool keratin extraction, preventing the aggregation of keratin polypeptide chains during dialysis. The MWs of the wool keratin were in the range 52-69 kDa. The preparation of wool keratin using thiols is highly efficient at low temperatures, and the regenerated keratin can maintain the original structure of wool with a high yield. However, thiols are hazardous to handle. Sodium sulfite, a green chemical that can break down disulfide bonds, has been used to replace thiols; the mechanism of keratin dissolution by sulfitolysis is shown in Fig. 3. Shavandi et al. [32] dissolved wool using 8 M urea and sodium metabisulfite at 60°C for 10 h. The wool keratin powder was then obtained by filtration, dialysis, and freeze-drying. The keratin extraction yield of the sulfitolysis method (41%) was lower than that of the thiol method, while the physicochemical properties of the wool keratin were similar to those of keratin obtained by thiol reduction. The sulfitolysis method, adopting cheap and less harmful chemical agents, has a significant industrial impact on wool processing.
Ionic liquids
Ionic liquids (ILs) with low melting temperatures have been widely used to dissolve various kinds of biomass due to their outstanding properties, including nonvolatility, nonflammability, thermal stability, easy recycling, and tunable structure [7]. Thus, a simple and eco-friendly method to extract wool keratin can be developed. Several research groups have attempted to fabricate wool keratin with different ILs. The extraction of wool keratin is generally based on a three-step process. The cleaned wool fibers are first added to the ILs under magnetic stirring, in a nitrogen (N2) or air atmosphere. The dissolved wool keratin is then washed to remove any ionic liquid. Finally, keratin powders are obtained by oven-drying or freeze-drying. The effect of different anions on the solubility of wool fibers has been investigated. The results showed that the Cl− anion gave better solubility for wool fibers than Br−, BF4−, and PF6−. The obtained wool keratin exhibited a β-sheet structure with no α-helix structure. Keratin regenerated using ILs exhibits higher thermal stability than natural wool. It was also demonstrated that [BMIM]Cl has the ability to significantly disrupt hydrogen bonds in wool keratin/cellulose blended materials.
Idris et al. [34] added a reducing agent to the ILs to facilitate the dissolution of wool fibers. The results demonstrated that the reducing agent could increase the dissolution of the wool by 50-100 mg/g. The authors attributed the improved solubility to the cleavage of the inter- and intra-molecular disulfide bonds by the reducing agent. The MWs of the regenerated keratin ranged from 15 kDa to more than 120 kDa, confirming that the ILs had cleaved the protein into smaller polypeptide chains. The regenerated wool keratin, with decreased crystallinity, retained the protein backbone.
Zheng et al. [35] synthesized various ILs to investigate the influence of the structures of cations and anions on the dissolution capability for wool fiber. As listed in Table 1, the dissolution time is shortened to 10 min when using [EMIM]OAc. However, the structural and thermal stability of the regenerated wool keratin were damaged. Although [EMIM]DMP needed 1.5 h to dissolve the wool fibers, the proportion of α-helical structure in the regenerated keratin was as high as 78%. While ILs satisfy the demand for good solubility of wool fiber, some of them could not guarantee the strength of regenerated keratin materials. Zhang et al. [37] studied the variation of disulfide bonds and microstructure of regenerated wool keratin during the dissolution process. Moreover, the imidazolium-based ILs showed a higher ability to cleave disulfide bonds than quaternary ammonium and phosphonium ILs. The authors claimed that wool fibers are considered to be well dissolved when over 65% of the disulfide bonds are cleaved in the ILs. Meanwhile, optimal ILs with a disulfide-bond-cleaving ability of 70-80% can avoid severe breakdown of the keratin microstructure, making them suitable for fabricating keratin materials with good mechanical properties. Generally, the extraction of wool keratin using ILs needs to be performed in an N2 atmosphere due to the hygroscopic nature of ILs, which requires expensive equipment. Another drawback is the limited reusability of ILs, which increases the cost of keratin extraction. According to the above discussion [35][36][37], properly designed ILs have proved able to dissolve wool fibers efficiently with high protein yield in an air atmosphere. The disruption of hydrogen and disulfide bonds is significantly related to the polarity of the ILs. Therefore, the structures of the anions and cations should be considered in the design of ILs, rather than the side chains of the cations.
Deep eutectic solvent and other methods
Deep eutectic solvents (DES) have been used as alternative solvents to ILs in some cases owing to their similar physical properties, particularly their potential as tunable solvents that can be customized for a particular type of chemical reagent. Wang et al. [38] extracted wool keratin in a mixture of choline chloride and oxalic acid (a DES solvent). The optimized dissolution conditions were as follows: choline chloride/oxalic acid molar ratio of 1:2, wool-DES loading of 5 wt.%, 110-125°C, and a 2 h reaction time. X-ray diffraction analysis highlighted that the wool fibers were decrystallized by the DES solvent. The as-prepared keratin had MWs ranging from 3.3 to 7.8 kDa, further confirming the good dissolution ability of DES solvents. Furthermore, the wool keratin fabricated using the DES method has an amino acid composition similar to that of the wool fiber. Jiang et al. [39] replaced oxalic acid with urea as a co-solvent in the DES. They highlighted that the dissolution ability for wool fiber improved to 35.1 mg/g using the DES under optimum conditions, with regenerated keratin MWs in the range 43-67 kDa. The choline chloride/urea mixed solvent was found to have a higher solubility for high-tyrosine and low-sulfur keratins than for high-sulfur keratin. Most of the α-helix crystal structure changed to a β-sheet or disordered structure during the extraction process.
Compared with traditional extraction methods, the advantages of the DES method are that the solvent is low-cost, biocompatible, and environmentally friendly, and, specifically, that the process has a low extraction temperature.
Lyu et al. [40] prepared size- and morphology-controlled wool nanoparticles using the neutralization method and addressed the mechanisms regulating the shape and size of the wool nanoparticles. Well-dispersed and stable wool nanoparticles with a size of 50 nm were obtained by the addition of 3% sodium hydroxide solution. XRD analysis demonstrated that the obtained wool nanoparticles maintained the chemical structure of the wool protein.
Wang et al. [41] extracted wool keratin using L-cysteine as a reducing agent to better dissolve the keratin. L-cysteine exhibited an excellent dissolution yield of 72%, greater than that of other methods. The MWs of the obtained keratin were in the range 40-55 kDa. The results demonstrated that the regenerated keratin exhibited an increased β-sheet structure with a decreased α-helix structure. A mechanism for the formation of wool keratin was proposed: the disulfide link is cleaved by oxidation or can, in part, re-form into a new disulfide bridge.
He et al. [42] reported the use of ethanol as a co-solvent to weaken the salt bonds and hydrogen bonds in wool fiber. Wool fiber dissolved in a solution of L-cysteine hydrochloride, sodium sulfite (Na2SO3), and ethanol yields up to 67% keratin, and this approach is more efficient for extracting large-molecular-weight keratin (130 kDa). The enhancement was attributed to the incorporation of ethanol, which facilitated the recombination of intermolecular -SH groups. This new avenue for the extraction of wool keratin is advantageous owing to its simplicity, stability, and high yield.
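For quick comparison, the dissolution/extraction yields quoted in this section can be collected in one place. Note that these figures come from different studies under different conditions, so they are indicative rather than directly comparable.

```python
# Wool keratin extraction yields as quoted in this section; values come
# from the cited studies and were obtained under different conditions.
yields_percent = {
    "CaCl2-water-ethanol + thioglycolic acid [30]": 70,
    "sulfitolysis (urea + sodium metabisulfite) [32]": 41,
    "L-cysteine reduction [41]": 72,
    "L-cysteine-HCl + Na2SO3 + ethanol [42]": 67,
}

# Highest quoted yield among the methods collected here
best = max(yields_percent, key=yields_percent.get)
```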
Current applications of wool particles
Wool particles, composed of fibrous protein, exhibit higher stability than most proteins. In this section, we focus on the applications of wool particles. At present, wool particles can be used as textile materials, filtration adsorbents, cosmetic materials, and biomaterials, as shown in Fig. 5.
Wool particle-based textiles
Wool particles exhibit good capacity for moisture absorption, retention, and stability, and can be used as coating and modifying agents in textile processing.
Wool particle-based materials have received attention due to their ability to improve dyeing properties upon application as surface layers, fillers, or dyeing media for natural and synthetic polymers. Kantouch et al. [43] fabricated a keratin-coated wool fabric using epichlorohydrin as the cross-linking agent. They observed that the as-fabricated wool fabric exhibited enhanced dyeability with respect to acid and reactive dyes. Ke et al. [44] produced wool powder/chitosan composite membranes and reported that wool powders were necessary for improving the water resistance and dyeing properties of chitosan. Incorporating wool powder into synthetic polymers could also improve the moisture absorption and dyeing properties of viscose and PP [45][46][47]. In addition, wool keratin hydrolysates were for the first time used as a foaming agent in the dyeing of wool and cotton fabrics [48]. The role of hydrolyzed keratin is to convey the dye molecules during the dyeing process, as illustrated in Fig. 6. Results showed that the foam-dyed cotton fabric yielded a K/S value similar to those observed for conventional padding samples, while the foam-dyed wool fabric yielded a higher K/S value than that of the traditional pad-steam process. The utilization of hydrolyzed keratin as a foaming agent is a promising dyeing method owing to its favorable qualities of low cost, good foam stability, energy-saving capacity, and environmental friendliness. Wool particles are commonly considered by researchers as fillers for the attainment of composites with good moisture permeability. Wool powder/PU composite films were found to significantly improve water vapor permeability and moisture regain [49].
In addition, wool powder could improve the ability of PVA composites to absorb moisture and formaldehyde [24].
Wool particles are also used as reinforcing fillers to obtain composites with high mechanical properties. The 2 wt% wool powder/PPC composites obtained through solvent evaporation and hot compression exhibited improvements in mechanical properties, glass transition behavior, and thermal decomposition temperatures [50].
The use of surface treatments to improve the anti-felting behavior of wool has received particular attention. Keratin has been deposited as a coating on wool fabric through a combination of L-cysteine pretreatment and wool keratin cross-link fixation [51]. The anti-felting finishing process is illustrated in Fig. 7. The authors demonstrated that the wool fabric possessed good anti-felting capacity even when using keratin recycled for the tenth time. The surface of the processed wool fabric was found to be similar to that of the original wool fabric. Moreover, the keratin-treated wool fabric has the advantages of improved hydrophilicity, whiteness, softness, and dyeability compared to the untreated wool fabric. The main advantage of this method is the environmental friendliness of keratin and its capacity to be reused as the finishing agent at least 10 times. Jia et al. [52] developed a novel anti-shrink finishing agent using keratin; the anti-pilling grade of keratin-treated wool fabric reaches up to 4.5.
Wool keratin hydrolysate can be used to improve exhaustion in the leather tanning process. Keratin has the ability to react with chromium and further enhance the uptake of chromium by leather. In addition, keratin can be used with retanning agents to improve the grain smoothness and softness of leathers [5]. All these studies suggest that keratin-based materials exhibit a wide range of applicability as a functional finishing agent.
Wool particle-based biosorbent
Regenerated wool particles have been demonstrated to be suitable potential sorbents due to their high surface activity, large surface area, and low density. Wool particle-based materials are commonly used to adsorb dye effluents, heavy metals, and toxic gases.
Dyes are an essential requirement in various significant industries, such as the tannery, paper, and textile industries, because of their color-giving properties. Owing to its low cost and eco-friendliness, physical adsorption is one of the most important methods of removing dyes from wastewater. To address the issue of dye pollution, Wen et al. [53] fabricated wool powder with a particle diameter of 4.5 μm for the purpose of absorbing acid dyes and methylene blue. The methylene blue sorption capacity of the wool powder was 142.9 mg/g, while that of the activated charcoal was 0.024 mg/g, indicating good dye absorption. Due to the electrostatic force between acid dyes and wool powders, the C.I. Acid Red 88 sorption capacity of the wool powder was 555.6 mg/g. Wool powder thus showed the potential to be a good dye sorbent, with excellent acid dye absorption capacity at room temperature. Furthermore, Glafar et al. [54] demonstrated that the use of wool powders modified with citric acid monohydrate improved the efficiency of methylene blue removal from 75 to 86%. On the other hand, Aluigi et al. [55] fabricated highly porous keratin nanofibrous membranes through electrospinning. The free-standing and flexible keratin membranes demonstrated good adsorption of methylene blue, and the authors proposed their potential application as dye adsorption filters. The presence of heavy metal ions in wastewater is another serious problem that can cause biological hazards. The metal ion adsorption of wool proteins is closely related to their morphology. The most crucial aspect of an adsorbent is its adsorption capacity; the metal-ion adsorption capacities of various keratin-based materials are compared in Table 2. El-Sayed et al. [56] reported that wool powder exhibited significantly higher Cu (II) and Zn (II) adsorption capacities than wool fiber (approximately three-fold).
Wool powder showed a notably better ability to remove Cu (II), which was attributed to the strong interaction between Cu (II) and wool. The authors found that wool powder retained its ability to adsorb Cu (II) and Zn (II) after four adsorption/desorption cycles. Naik et al. [57] also found that the Cu (II) uptake rate of wool powder was significantly higher than that of wool fiber (approximately 42-fold). They demonstrated that, after pretreatment with 4% sodium salt, wool powder (approximately 4.6 μm) exhibited a seven-fold increase in Co (II) adsorption compared with wool snippets (approximately 500 μm), because Co (II) binds only weakly to intact wool fiber. Among keratin-based materials, keratin solution displayed the lowest percentage of ion removal [58]. Compared with the keratin nanofiber membranes fabricated by Aluigi et al. [59,61,62], keratin/polymer composites showed enhanced adsorption of metal ions. Aluigi et al. revealed that keratin/PVA nanofibers exhibited a high capacity to adsorb Cu (II), and this capacity increased with the surface area of the nanofiber mats. Jin et al. [63] proposed that the removal of Cr (VI) was mainly a result of electrostatic adsorption by amino acids and the redox reaction of the disulfide bonds in cystine oxide. The wool keratin/PET composite may therefore have significant potential for application in Cr (VI) removal.
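The sorption capacities quoted above (e.g., 142.9 mg/g for methylene blue) follow from the standard mass balance for a batch adsorption experiment, q_e = (C0 − Ce)·V/m. A minimal sketch of that calculation; the concentrations, solution volume, and adsorbent mass below are hypothetical illustrations, not the conditions used in the cited studies:

```python
def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Equilibrium uptake q_e = (C0 - Ce) * V / m,
    in mg of adsorbate per g of adsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

def removal_efficiency(c0_mg_l, ce_mg_l):
    """Percentage of solute removed from solution."""
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l

# Hypothetical batch run: 0.05 g wool powder in 0.1 L of 100 mg/L dye,
# with the residual concentration measured at 28.6 mg/L.
q = adsorption_capacity(100.0, 28.6, 0.1, 0.05)  # -> 142.8 mg/g
r = removal_efficiency(100.0, 28.6)              # -> 71.4 %
```

The same two quantities are what Table 2 compares across keratin-based materials: capacity normalizes uptake per gram of sorbent, while removal efficiency depends on the dose used.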
Li et al. [64] developed an eco-friendly air purification filter using keratin/PEO nanofibers. The filtration efficiency of the keratin/PEO nanofiber-coated nonwoven polypropylene (PP) fabric reached up to 88%, far higher than that of the two-layer nonwoven PP fabric alone (only 2.5%). In addition, Shen et al. [65] found that a keratin/polyamide-6 composite nanofiber membrane improved both air filtration efficiency and water-vapor transmission. These studies indicate that wool keratin could be a promising material for the fabrication of novel filters.
Wool particle-based cosmetic materials
Wool keratin-based cosmetics have been used in hair- and skin-care products owing to their many merits, including smoothness, luster, softness, elasticity, and protective efficacy. Barba et al. [66] found that wool keratin could improve the moisture absorption and desorption capacity of bleached hair. They also evaluated the water barrier function of the skin after treatment with wool keratin peptides [67]: the skin exhibited improved water-holding capacity and elasticity. Bayramoglu et al. [68] reported that keratin/Calendula extract emulsions were entirely absorbed by the skin and left no residue on the skin surface. As these examples show, wool keratin is versatile for the fabrication of cosmetic materials.
Wool particle-based biomaterials
Wool keratin-based composites integrate the physicochemical and biological properties of both component materials, rendering them attractive for tissue engineering, wound dressings, biomedical applications, drug-delivery systems, bio-inks, and bioplastics, as shown in Fig. 8. The structure of wool keratin is similar to that of collagen, making it a candidate for tissue repair applications. As the body's first line of defense, skin is prone to damage in fires and traffic accidents and has limited self-healing ability. A moist environment around the wound, good ventilation to stimulate cell growth, and a low bacterial load are conducive to proper wound recovery. Open wounds, however, readily promote microbial growth. To address this issue, Aluigi et al. [69] first fabricated a methylene blue-doped wool keratin film using a solvent casting method. Up to 99.9% of Staphylococcus aureus (S. aureus) at a concentration of 10^8 cfu/mL were killed by the wool keratin-based film under visible light. In another effort to prevent microbial growth, keratin/ionic liquid/PAN nanofibrous membranes exhibited good water transport capacity and effective antibacterial activity against Escherichia coli (E. coli) and S. aureus [70]. The proliferation of cells in the wound area can promote rapid repair of injured skin. Introducing gelatin and sodium alginate into keratin via the casting method yielded a composite film with good thermal, mechanical, cell-proliferation, swelling, and antibacterial properties, thereby promoting faster cell growth in the wound area [74]. In addition, keratin-based materials were found to support hBM-MSCs, mouse fibroblasts, PC12 cells, HOS cells, MEFs, and adult mammalian skin cells [71,72,75-77]. Moreover, wool keratin is a candidate for drug release and delivery systems for personalized wound or tissue repair.
Ornidazole has strong inhibitory and bactericidal effects against most anaerobes. In particular, a 1% ornidazole/PLGA/keratin membrane exhibited sustained drug release and biodegradability in vitro, and promoted the growth and proliferation of hPDLFs, supporting the treatment of periodontal disease and the repair of periodontal defects [78]. Giuri et al. [79] used non-woven keratin/hydrotalcites to fabricate drug-delivery systems and scaffolds for fibroblast growth; the material exhibited good biocompatibility and controlled diclofenac release, thereby facilitating the growth of fibroblast cells.
The thermoplasticity and biocompatibility of keratin make it well suited to producing the tissue-like structures used in bio-ink printing; these properties are expected to facilitate tissue construction and host-tissue integration, such as vascularization and cell infiltration [28]. This provides a novel route to printing and weaving tissue structures from biomaterials. Furthermore, wool particles have also been used to prepare environment-friendly bioplastics. Wang et al. [80] fabricated a hot-pressed wool powder film plasticized with glycerol; the resulting film was ductile and soft, and can be used as a bioplastic for packaging applications. Arlas et al. [73] used glycerin and SDS to plasticize keratin films, obtaining a bioplastic with good transparency, a strong UV-barrier capacity, excellent thermal stability, and decent mechanical properties, with potential applications in regenerative medicine, coatings, and packaging.
Conclusions and outlook
In this review, we have presented a comprehensive overview of the utilization of wool waste, focusing on mechanical and chemical recycling methods, their associated formation mechanisms, and the structure, properties, and promising applications of the resulting materials. Although remarkable developments in wool particles and wool particle-based composites have been reviewed here, several issues and challenges still need to be addressed before large-scale production is possible.
As a rapid, simple, and efficient route to wool powder, the mechanical method preserves the protein structure. More importantly, its yield is high, in some cases reaching 100%. Pretreatment, temperature, pressure, and milling time are the key parameters controlling the shape and size of the wool powders. As discussed above, pretreating wool fibers with steam or chemical reagents can further reduce particle size. However, producing nano-scale wool powder with a regular shape, particularly spherical particles, by mechanical means remains challenging. At present, the main goal is to combine milling with other pretreatments to overcome these limitations. Additionally, the interfacial interaction between wool powder and the polymer matrix needs to be improved to enhance the mechanical properties of wool powder-based composites.
Wool keratin is a fibrous protein that can be exploited for the design of advanced biomaterials owing to its physical properties, biocompatibility, and biodegradability. A critical challenge for wool keratin is establishing a reliable and environmentally friendly extraction method. Another challenge is controlling the molecular weights (MWs) and yield of wool keratin in mass production so as to obtain wool particle-based composites with high mechanical performance.

(Figure caption: Keratin bioplastic films and the weight of degraded bioplastic after 5 days under composting conditions for the four representative materials [69-73].)
Genome-wide association study for feed efficiency and growth traits in U.S. beef cattle
Single nucleotide polymorphism (SNP) arrays for domestic cattle have catalyzed the identification of genetic markers associated with complex traits for inclusion in modern breeding and selection programs. Using actual and imputed Illumina 778K genotypes for 3887 U.S. beef cattle from 3 populations (Angus, Hereford, SimAngus), we performed genome-wide association analyses for feed efficiency and growth traits including average daily gain (ADG), dry matter intake (DMI), mid-test metabolic weight (MMWT), and residual feed intake (RFI), with marker-based heritability estimates produced for all traits and populations. Moderate and/or large-effect QTL were detected for all traits in all populations, as jointly defined by the estimated proportion of variance explained (PVE) by marker effects (PVE ≥ 1.0%) and a nominal P-value threshold (P ≤ 5e-05). Lead SNPs with PVE ≥ 2.0% were considered putative evidence of large-effect QTL (n = 52), whereas those with PVE ≥ 1.0% but < 2.0% were considered putative evidence for moderate-effect QTL (n = 35). Identical or proximal lead SNPs associated with ADG, DMI, MMWT, and RFI collectively supported the potential for either pleiotropic QTL, or independent but proximal causal mutations for multiple traits within and between the analyzed populations. Marker-based heritability estimates for all investigated traits ranged from 0.18 to 0.60 using 778K genotypes, or from 0.17 to 0.57 using 50K genotypes (reduced from Illumina 778K HD to Illumina Bovine SNP50). An investigation to determine if QTL detected by 778K analysis could also be detected using 50K genotypes produced variable results, suggesting that 50K analyses were generally insufficient for QTL detection in these populations, and that relevant breeding or selection programs should be based on higher density analyses (imputed or directly ascertained). 
Fourteen moderate to large-effect QTL regions, which ranged from being physically proximal (lead SNPs ≤ 3 Mb apart) to fully overlapping for RFI, DMI, ADG, and MMWT, were detected within and between populations, and included evidence for pleiotropy, proximal but independent causal mutations, and multi-breed QTL. Bovine positional candidate genes for these traits were functionally conserved across vertebrate species.
Background
Feed efficiency analyses for beef cattle and other important food animals generally seek to relate measures of feed intake with measures of animal productivity (i.e., agricultural input versus output). Relevant to beef cattle production, and particularly among feedlot operations, the most costly component of the production process is feed, with feed-based expenditures during certain stages of production representing more than 80% of the total costs [1]. The overall economic importance of feed efficiency in beef cattle was initially recognized more than 40 years ago [2,3], with modern improvements in the efficiency of feed utilization expected to yield positive economic returns across many facets of the beef industry [1,4]. Moreover, an estimated annual cost savings exceeding $1 billion U.S. dollars could likely be achieved by increasing the efficiency of feed utilization by 10% in U.S. beef cattle alone [5].
Traditional measures of feed efficiency, such as the ratio of feed consumed to observed body weight gain (i.e., the feed conversion ratio), are likely to be very successful in selecting for increased growth rates in beef cattle (i.e., increased producer income), but selection for enhanced weaning or yearling weights may also lead to increased costs associated with maintaining larger mature cows (i.e., increased nutrient requirements and feed costs, and increased calving difficulty due to larger birth weights) [4,6,7]. An alternative measure that has gained popularity among livestock species is residual feed intake (RFI), often defined as the difference between an animal's observed and expected feed intake in relation to the animal's body weight and growth rate during a specified feeding period (for review see [2,4,8-14]). The primary advantages of using RFI to estimate or measure feed efficiency include its phenotypic independence from daily gain as well as from the traits used to calculate RFI, with previous heritability estimates among cattle populations ranging from 0.08 to 0.49 [8-10, 12, 13], thereby making RFI a preferred measure for dissecting the underlying biology of feed efficiency, and for enabling genomic selection [13-15]. Given the economic importance of genomic selection for feed efficiency and growth traits in beef cattle, genome-wide association studies (GWAS) and/or linkage studies have now been performed (for review see [11,16-20]), with the advent of the Illumina BovineSNP50 Assay [21] and, thereafter, the Illumina BovineHD Assay [22] directly enabling most of these studies. However, a need remains to thoroughly investigate quantitative trait loci (QTL) associated with RFI as well as other feed intake and growth traits among U.S. beef cattle populations.
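The RFI definition above (observed minus expected intake, with the expectation taken from a linear regression of intake on metabolic body weight and gain) can be sketched as follows. The regression form and the synthetic data are illustrative assumptions, not the exact model or data of any study cited here.

```python
import numpy as np

def residual_feed_intake(dmi, mmwt, adg):
    """RFI = observed dry matter intake minus the intake predicted by an
    ordinary least-squares regression on mid-test metabolic weight (MMWT)
    and average daily gain (ADG)."""
    X = np.column_stack([np.ones_like(dmi), mmwt, adg])  # intercept + regressors
    beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)       # fitted coefficients
    return dmi - X @ beta                                # residuals = RFI

# Synthetic feeding-trial data (illustrative values only)
rng = np.random.default_rng(0)
mmwt = rng.normal(100.0, 8.0, 200)                          # lb^0.75
adg = rng.normal(3.5, 0.4, 200)                             # lb/d
dmi = 0.10 * mmwt + 1.5 * adg + rng.normal(0.0, 0.5, 200)   # lb/d

rfi = residual_feed_intake(dmi, mmwt, adg)
# By construction, RFI is phenotypically independent of the regressors:
# its mean is ~0 and its correlation with MMWT and ADG is ~0.
```

A negative RFI identifies an animal that ate less than expected for its size and gain, i.e., a more feed-efficient animal; this built-in independence from the growth traits is exactly the advantage cited above.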
Therefore, we used a single marker approach with relationship matrix and variance component analysis to map QTL associated with feedlot RFI, average daily dry matter intake (DMI; lb/d), average daily gain on feed (ADG; lb/d), and mid-test metabolic body weight (MMWT; lb^0.75) in U.S. Angus, Hereford, and SimAngus (Simmental × Angus) beef cattle [13], with corresponding heritability estimates produced for all investigated traits. Thereafter, we investigated whether QTL discovered using the Illumina BovineHD markers (hereafter 778K) were also detectable using the Illumina BovineSNP50 Assay (hereafter 50K), which is important for modern genomic selection programs predicated on historic 50K analyses, and also compared the marker-based heritability estimates produced by the two arrays. Finally, we evaluated our 778K results within the context of the established GWAS literature, and found some positional candidate genes underlying bovine feed efficiency and/or growth related QTL to be conserved across divergent food-animal species, with a tangible proportion of our bovine GWAS results also overlapping with specific loci implicated in studies of obesity or metabolic syndrome, diabetes, and insulin resistance among humans and/or mice. The results of this study are immediately useful for enabling genomic selection in beef cattle, but also suggest that domestic cattle may be relevant models for some human biomedical studies.
Results and discussion
Heritability estimates for RFI, DMI, ADG, and MMWT Using a genomic relationship matrix [23] that was normalized using Gower's centering approach [24,25], to yield a sampling variance of 1.0, we estimated the pseudoheritability [24,25] [(i. e., h a 2 = σ a 2 /(σ a 2 + σ e 2 ); also represented as: h 2 = V A /(V A + V E )] for all investigated feed efficiency and growth traits (RFI, DMI, ADG and MMWT) in U.S. Angus, Hereford, and SimAngus cattle, thereby representing a major segment of the commercial U.S. beef industry ( Table 1). The pseudo-heritability, as previously defined [24,25], is the proportion of phenotypic variance that is explained by the marker-based genomic relationship matrix [23]. We used both the 50K and the 778K marker sets to construct relationship matrices [23]. Pseudo-heritability estimates obtained using the 778K markers ranged from 0.18 to 0.60 across the investigated traits for the three beef cattle populations, with individual ranges of: RFI = 0.20-0.49; DMI = 0.18-0.46; ADG = 0.21-0.37; MMWT = 0.47-0.60 (Table 1). These estimates are similar to those previously reported [13,16,17,26], and are strongly correlated (Angus, all traits, r = 0.81; Hereford, all traits, r = 0.99; SimAngus, all traits, r = 0.77) with previously obtained heritability estimates for the same populations [13]. An in silico reduction of the Illumina marker density from 778K to 50K (see Methods) for all populations yielded similar pseudo-heritability estimates across all traits (r > 0.99, all traits; see Table 2), suggesting that either SNP platform was suitable for marker-based heritability estimation. Across all populations (i.e., Angus, Hereford, and SimAngus), pseudoheritability estimates for feed efficiency and growth traits were highest among the Hereford beef cattle (Tables 1 and 2), which is likely due to a small number of contemporary groups and a uniform feeding regiment (see Methods) [13]. 
In contrast, both the Angus and SimAngus populations were used in nutritional trials, with each population possessing large numbers of contemporary groups, as previously described [13]. Nevertheless, the moderate to high pseudo-heritability estimates obtained for feed efficiency and growth traits, particularly RFI and MMWT, further support the expectation of positive economic gains from the implementation of genomic selection for feed efficiency traits in U.S. beef cattle, as evaluated using both multi-marker Bayesian models [13] and the single-marker approaches [24,25] applied in this study.
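The pseudo-heritability defined above reduces to a simple ratio of variance components. A minimal sketch; the variance components below are hypothetical, chosen only to reproduce a value within the reported 0.18-0.60 range:

```python
def pseudo_heritability(var_additive: float, var_residual: float) -> float:
    """h_a^2 = sigma_a^2 / (sigma_a^2 + sigma_e^2): the share of phenotypic
    variance captured by the marker-based genomic relationship matrix."""
    return var_additive / (var_additive + var_residual)

# Hypothetical REML variance components for a trait such as MMWT
h2 = pseudo_heritability(0.30, 0.20)  # -> 0.6, the upper bound reported here
```

In practice the two components would come from a REML fit of the mixed model with the (Gower-centered) genomic relationship matrix as the covariance of the random additive term; the ratio itself is the only arithmetic step.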
EMMAX GWAS for RFI in U.S. Beef Cattle
The results of our Illumina 778K single-marker mixed model analyses for RFI are shown in Fig. 1. Statistical evidence for moderate or large-effect QTL was observed for all populations [13,24,25]. Lead SNPs (i.e., the most strongly associated SNPs within a QTL region) with an estimated proportion of phenotypic variance explained (PVE) [24,25] ≥ 2.0% were considered putative evidence of large-effect QTL, whereas lead SNPs with PVE ≥ 1.0% but < 2.0% were considered putative evidence for moderate-effect QTL. Evaluation of all markers meeting the minimum Wellcome Trust significance threshold (P < 5e-05) [27] across all three populations revealed markers with PVE that ranged from 1.4% to 3.3% (Additional file 1).
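The joint decision rule described above can be sketched as a small helper; the category labels are our shorthand for the paper's moderate/large-effect definitions, not terms from the original analysis pipeline.

```python
def classify_lead_snp(pve_percent: float, p_value: float,
                      p_threshold: float = 5e-5) -> str:
    """Bin a lead SNP using the study's joint criteria: a nominal
    significance threshold (P <= 5e-05) plus the estimated proportion
    of phenotypic variance explained (PVE) by the marker."""
    if p_value > p_threshold:
        return "not significant"
    if pve_percent >= 2.0:
        return "large-effect QTL"
    if pve_percent >= 1.0:
        return "moderate-effect QTL"
    return "significant, small effect"

# A marker explaining 3.3% of phenotypic variance at P = 1e-7
label = classify_lead_snp(3.3, 1e-7)  # -> "large-effect QTL"
```

Note that both conditions must hold: a marker with large PVE but P > 5e-05 is not counted, which is why PVE and the P-value are reported jointly throughout.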
A summary of the largest effect QTL detected for RFI is provided in Table 3.
Seven large-effect QTL (PVE > 2.0%) distributed across seven autosomes were found in Angus using the 778K data (Table 3). Most of the positional candidate genes either underlying or flanking the detected QTL (XIRP2, HSPB8, TOX/TRNAC-GCA, DDB1, DAK, ADPRHL1, CDC16; see Table 3) have previously been associated with feed efficiency and growth in other livestock species (i.e., Angus cattle, broilers, domestic fowl, pigs), with components of obesity in humans, and are involved in the resumption of the human cell cycle following the S-phase checkpoint [28-37]. Moreover, one interesting positional candidate gene (lead SNP located within an intron of DAK, 29_40 Mb, Table 3) produces the only known enzymatic source of riboflavin 4′,5′-phosphate (cyclic flavin mononucleotide or FMN) [32], which acts as an electron acceptor in the oxidative metabolism of carbohydrates, amino acids, and fatty acids, but can also donate electrons to the electron transport chain [38,39]. This is important because riboflavin is known to be essential for energy production (i.e., via oxidative phosphorylation), growth, and development in a variety of species, including humans and livestock [32-35, 38, 39]. Additionally, because riboflavin depletion interferes with the normal progression of the cell cycle [38], the proximity of DDB1 to DAK (29_40 Mb; Table 3) is also quite interesting considering that DDB1 contributes to the checkpoint recovery process, and the resumption of the human cell cycle [31].
Table 1 Variance component analysis with pseudo-heritability estimates (h_a^2 = σ_a^2/(σ_a^2 + σ_e^2); see references [24,25]) representing the proportion of phenotypic variance explained by the 778K marker-based relationship matrix [23]

Table 2 Variance component analysis with pseudo-heritability estimates (h_a^2 = σ_a^2/(σ_a^2 + σ_e^2); see references [24,25]) representing the proportion of phenotypic variance explained by the 50K marker-based relationship matrix [23]

Among the seven large-effect RFI QTL detected in Angus, four (i.e., 2_30 Mb, 11_80 Mb, 14_27 Mb, 29_40 Mb; Table 3) possessed minor alleles at lower frequencies (0.01 < MAF < 0.06).
A difference between the present study and a Bayesian analysis (i.e., multi-marker 1 Mb windows) recently reported for the same Angus population [13] was our use of only 706 Angus cattle (See Methods) for the investigation of RFI QTL. We limited our sample size for the RFI GWAS because many of the available Angus cattle were missing information about their age (i.e., birth dates), which negatively impacted our RFI variance component analysis using EMMAX [24,25] (See Methods). Additionally, the 706 Angus cattle used to investigate RFI QTL in this study were all fed a uniform diet (See Methods). To determine whether proximally similar genomic regions (i.e., within 1 Mb) would be detected using both of the Illumina SNP assays routinely used for QTL discovery (i.e., 50K and 778K), we conducted a second Angus RFI GWAS by rerunning EMMAX [24,25] after reducing the marker density from 778K to 50K. Subsequently, we observed that only five of the seven largest-effect Angus RFI QTL detected in the 778K analysis (2_30 Mb; 8_13 Mb; 14_27 Mb; 29_40 Mb; 12_91 Mb; Table 3) were proximally replicated (i.e., within ≤ 1 Mb) among the top 100 ranked 50K markers (i.e., with P ≤ 5e-05 and/or PVE ≥ 1.0%). Moreover, the 50K lead SNPs (i.e., 8_13 Mb, 14_27 Mb, 29_40 Mb) were physically proximal to the positions of the 778K lead SNPs, supporting the prioritization of the same positional candidate genes as found in the 778K analysis (Table 3).
Fig. 1 Residual feed intake (RFI) QTL. The top pane of each composite panel reflects a Manhattan plot with EMMAX −log10 P-values for Illumina 778K markers, whereas the bottom pane depicts the estimated proportion of variance explained (PVE) by marker effects. Lead and supporting SNPs for moderate (1.0% < PVE < 2.0%) or large-effect QTL (PVE ≥ 2.0%) with P ≤ 5e-05 and MAF ≥ 0.01 are shown at or above the red line for U.S. Angus (a; n = 706), Hereford (b; n = 846), and SimAngus (c; n = 1217) beef cattle. The pseudo-autosomal region of BTAX is not depicted. A summary of all markers meeting the nominal significance level and MAF cutoff is presented in Additional file 1. Bovine 778K QTL criteria are described in Methods.

Among U.S. Hereford cattle, analysis of RFI using the 778K genotypes revealed evidence for four large-effect QTL (PVE range: 2.0%-3.1%; Table 3) distributed across four autosomes (6_113 Mb; 19_54 Mb; 3_29 Mb; 1_72 Mb; see Table 3). Evaluation of the positional candidate genes implicated by our 778K analysis (DNAH17, RAB28, DLG1) revealed established associations with human obesity traits [40], adipogenic differentiation [41], type 1 diabetes and rheumatoid arthritis [27], and glucose uptake in mouse cells [42] (see Table 3). Notably, our […] (Table 3), thereby supporting the prioritization of the same positional candidate genes.

For U.S. SimAngus cattle, our investigation of RFI using the 778K genotypes produced statistical evidence for one large-effect (PVE ≥ 2.0%) and five moderate-effect (1.0% < PVE < 2.0%) QTL distributed across five autosomes (Table 3). Evaluation of the top six SimAngus RFI QTL signals again revealed positional candidate genes that had previously been implicated in the modulation of traits related to obesity, diabetes, growth, and feed efficiency (Table 3; Additional file 1).
The lead SNP for the largest effect QTL (i.e., rs135481840; 15_84 Mb, Table 3) lies within an intergenic region between STX3, GIF and two adjacent genes (i.e., ncRNA LOC101907671, TCN1). Notably, STX3 is known to underlie a mouse QTL for serum glucose levels [43] and human studies have shown STX3 to be upregulated in obese subjects following physical exercise [44]. Moreover, both GIF and TCN1 are vitamin B 12 (i.e., cobalamin) binding proteins and the mean body-weight gains for calves treated with vitamin B 12 have previously been observed to be superior to those that were untreated during the first 30 weeks of a feeding trial [45]. As was also found for our Hereford RFI QTL scan, a mitochondrial ribosomal protein gene (MRPL16) lies within 1 Mb of the largest-effect SimAngus RFI QTL (Table 3), which may be biologically relevant in view of the proposed relationship between mitochondrial function and feed efficiency in livestock [46]. Other notable positional candidates (Table 3; Additional file 1) including LUZP2 (29_20 Mb), TMEM72 (28_45 Mb), and TMEM40 (22_57 Mb) have been associated with serum magnesium levels in humans [47] and feed efficiency and growth traits in cattle [11,48,49]. However, despite the biological support for the role of TMEM72 in RFI [48], only a single marker met the significance threshold for this QTL [27], suggesting caution in further considering a role for this putative QTL in selection or genetic evaluation models.
Interestingly, olfactory transduction pathways modulated by olfactory receptors are known to be associated with RFI in pigs [50], which is conceptually concordant with our detection of the positional candidate gene OR9Q2 (15_83 Mb; Table 3). We also observed that the largest-effect QTL region detected for RFI in SimAngus (15_84 Mb, Table 3) was flanked by eight olfactory receptor or receptor-like genes (≤0.52 Mb from the lead SNP rs135481840). An evaluation of all of the genomic regions that met the minimum significance threshold [27] (PVE Range = 1.4% -2.0%) in the analysis of the 778K genotypes revealed that multiple markers supported the six largest-effect SimAngus RFI QTL ( Table 3), but that several other biologically relevant positional candidate genes were supported by only a single significant marker (Additional file 1). Positional candidate genes corresponding to several of these putative moderate-effect QTL (PVE < 2.0%) have previously been associated with lipid metabolism and intramuscular fat deposition in chickens (YWHAH) [51], obesity traits in mice and humans (CST7, ADAM10) [52,53] and RFI in cattle (CST7) [54] (Additional file 1). Analysis of the 50K genotype set for RFI in SimAngus resulted in only two markers meeting the minimum significance threshold [27], and identified only one (22_57 Mb) of the six QTL that were detected in the 778K analysis (Table 3). However, among the 100 top-ranked 50K markers, we found evidence supporting three of the largest-effect SimAngus RFI QTL that were detected in the 778K analysis (i.e., ≤ 1 Mb from 22_57 Mb, 8_97 Mb, 15_83 Mb; Table 3).
EMMAX GWAS for DMI in U.S. Beef Cattle
Results of the single-marker mixed model analyses using 778K genotypes for DMI are shown in Fig. 2. Statistical evidence for moderate or large-effect DMI QTL was again observed for all of the investigated populations [13,24,25]. An evaluation of all markers meeting the minimum significance threshold (P < 5e-05) [27] across all three populations revealed estimates of PVE that ranged from 1.4% − 3.9% (Additional file 1). Summary data for the largest effect QTL detected for DMI are provided in Table 4.
For example, TNC (8_107 Mb; Table 4) has previously been reported to be differentially expressed between low- and high-RFI Angus cattle [55], and LMX1B (11_97 Mb; Table 4) has recently been implicated in human age-related obesity, with obese subjects displaying decreased methylation with age [56]. Across the eight largest-effect DMI QTL revealed by the 778K markers that met the minimum significance threshold [27] (PVE ≥ 2.0%; Table 4), those located on BTA2 (2_63 Mb), BTA1 (1_106 Mb), BTA11 (11_7 Mb), and BTA9 (9_90 Mb) were individually supported by the greatest numbers of markers (Additional file 1). Moreover, among the largest-effect Angus DMI QTL (Table 4), three positional candidate genes (CUX2, TMEM163, ESR1) have been associated with diabetes in humans [57-59, 61], while four (TNC, SLC12A2, IL1R2, ESR1) have been associated with aspects of feed efficiency and growth in cattle [11,55] or adiposity in pigs [60]. Further investigation of all Angus DMI QTL regions detected in the 778K analysis revealed several additional candidate genes of interest, including MGAT5 (2_63 Mb). Notably, Mgat5-null mice were previously shown to experience diminished glycemic response to exogenous glucagon, and increased insulin sensitivity [62].

Fig. 2 Dry matter intake (DMI) QTL. The top pane of each composite panel reflects a Manhattan plot with EMMAX −log10 P-values for Illumina 778K markers, whereas the bottom pane depicts the estimated proportion of variance explained (PVE) by marker effects. Lead and supporting SNPs for moderate (1.0% < PVE < 2.0%) or large-effect QTL (PVE ≥ 2.0%) with P ≤ 5e-05 and MAF ≥ 0.01 are shown at or above the red line for U.S. Angus (a; n = 706), Hereford (b; n = 846), and SimAngus (c; n = 1218) beef cattle. The pseudo-autosomal region of BTAX is not depicted. A summary of all markers meeting the nominal significance level and MAF cutoff is presented in Additional file 1. Bovine 778K QTL criteria are described in Methods.

Moreover, positional candidate genes associated with feeding behavior and growth in pigs (GATA3, GLRX3) [15,63,64] as well as obesity traits in mice (IL17A) [65,66] were also located within the Angus DMI QTL regions. Similar to our RFI analyses, differences between the results presented here and those previously reported for the same Angus population [13] primarily stem from our use of 706 Angus cattle (n = 706 with age data and a uniform diet; see Methods) for the investigation of DMI QTL. The MAFs for all lead SNPs defining the eight largest-effect DMI QTL in Angus ranged from 0.01 (BTA8_107 Mb) to 0.49 (BTA9_90 Mb), with five of the eight QTL having lead SNPs with MAFs ≥ 0.11. We next analyzed DMI using 50K genotypes and found evidence for three of the eight largest-effect QTL detected by 778K analysis (11_97 Mb; 17_57 Mb; 9_90 Mb; Table 4). Moreover, among the 100 top-ranked 50K markers, we found evidence for the replication of two additional Angus DMI QTL detected in the 778K analysis (2_63 Mb; 7_27 Mb; Table 4). Evaluation of the largest-effect DMI QTL detected for Hereford in the 778K analysis revealed at least five biologically relevant positional candidate genes (TYW3/CRYZ, MGAT5B, RAB12, RAB37) that may harbor genetic variation influencing aspects of feed intake, including appetite (Table 4). The genomic region harboring TYW3 and CRYZ (3_70 Mb; Table 4) has previously been associated with insulin resistance (resistin) in humans [67], while Mgat5-null mice exhibit increased insulin sensitivity [62]. The evolutionarily related gene family member and positional candidate MGAT5B (MGAT5 isozyme B or GnT-VB; 19_55 Mb) underlies the second largest-effect QTL detected for DMI in Hereford (Table 4).
MGAT5 and MGAT5B are both known to be involved in the biosynthesis of N-glycans [62,68], and both genes have been implicated as positional candidates for large-effect DMI QTL in Angus and Hereford (Table 4). Moreover, the positional candidate genes RAB12 (24_41 Mb), NEGR1 (3_73 Mb), RAB37 (19_57 Mb), and CA12/LACTB (10_47 Mb) have also been associated with autophagy related to cellular nutrient sensing in mice [69], feed intake and/or obesity traits in humans and rats [70][71][72], insulin release in mice [73], hyponatremia in humans with loss of appetite and poor weight gain [74], and adiposity in humans [75] (Table 4). Importantly, the biochemical roles and pathway relationships between insulin and glucagon with respect to food intake, satiety, and body weight have been reported and reviewed, with both insulin and glucagon normally acting to reduce meal size [76,77]. The large-effect Hereford DMI QTL detected on BTA3 (3_70 Mb) is generally compatible with a recent Bayesian analysis of these data which utilized 1 Mb windows [13], but herein, we used single markers to identify specific positional candidate genes that were most proximal to the putative QTL peak(s), as defined by the lead SNP (Table 4). Investigation of all Illumina 778K markers meeting the minimum significance threshold [27] (PVE Range = 2.0% - 2.6%) provided additional support for TYW3/CRYZ, MGAT5B, RAB12, NEGR1, and RAB37, but also revealed several other biologically relevant positional candidate genes with less marker-based support (i.e., only one SNP meeting the significance threshold; Additional file 1). For example, CORIN (BTA6_68 Mb) has previously been associated with body weight and obesity traits in mice [78,79], while DNAH17 (BTA19_54 Mb), ANXA10 (BTA8_1 Mb) and AADAT (8_2 Mb) have all been associated with aspects of feed efficiency and growth in either beef cattle or chickens [13,80,81] (Additional file 1).
The MAFs for all lead SNPs defining the six large-effect DMI QTL in Hereford ranged from 0.15 (BTA24_41 Mb; BTA3_73 Mb; 19_57 Mb; 10_47 Mb) to 0.32 (BTA3_70 Mb). In the analysis of the 50K genotype set, three markers met the minimum significance threshold [27], and two (19_57 Mb, RAB37; 3_73 Mb, NEGR1) supported large-effect QTL detected in the 778K analysis (i.e., within 1 Mb; Table 4). Moreover, SNPs proximal to all of the largest-effect Hereford DMI QTL detected in the 778K analysis (Table 4) were observed among the 100 top-ranked 50K markers.
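The replication criterion applied in these 50K-versus-778K comparisons (a significant 50K marker supports a 778K QTL when it lies within 1 Mb of the lead SNP on the same chromosome) is simple to express; positions below are hypothetical:

```python
def supports_qtl(lead_snp_bp, marker_bp, window_bp=1_000_000):
    """True if a marker from the lower-density panel falls within
    `window_bp` of a QTL lead SNP on the same chromosome."""
    return abs(lead_snp_bp - marker_bp) <= window_bp

# Hypothetical: a 778K lead SNP at 57.0 Mb and two significant 50K markers.
print(supports_qtl(57_000_000, 57_400_000))  # True: 0.4 Mb away
print(supports_qtl(57_000_000, 58_600_000))  # False: 1.6 Mb away
```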
Analysis of DMI for the SimAngus cattle using the 778K genotypes revealed seven moderate-effect QTL (1.0% < PVE < 2.0%) with at least ten biologically relevant positional candidate genes (NDUFB9, RNF139, MTSS1, SV2B, FMO5, IDE, CYP26A1, CAPN5, MYO7A, CXCL1, LYPD6; see Table 4). The largest-effect DMI QTL region (14_17 Mb; Table 4) included genes that have previously been associated with several aspects of feed efficiency (NDUFB9, RNF139) in beef cattle [28,82] and Type 2 diabetes (MTSS1) in humans [83]. Moreover, all of the positional candidate genes near the largest-effect DMI QTL (14_17 Mb; Table 4) were also immediately flanked by SQLE (14_16.7 Mb), a cholesterol biosynthesis enzyme previously implicated as a primary positional candidate for a large-effect obesity QTL in mice [84,85]. Proximal to the second largest-effect SimAngus DMI QTL, we detected SV2B (21_16 Mb; Table 4), which may be involved in aspects of regulated insulin secretion [86], and has been associated with human weight-loss across a diverse array of dietary regimes [87]. It should also be noted that previous studies also support the involvement of FMO5, IDE, CYP26A1, CAPN5, MYO7A, and CXCL1 (Table 4) in the manifestation of obesity traits in multiple vertebrate species (human, mice, poultry), in the onset of diabetes in humans, and/or in aspects of beef palatability [88][89][90][91][92][93][94]. Interestingly, while the DMI positional candidate gene MYO7A (15_57 Mb) has previously been associated with abdominal fat deposition in broilers, another proximal positional candidate gene (CAPN5, 15_57 Mb) has been associated with beef tenderness in Nelore cattle [92,93]. Additionally, the lead SNP defining a DMI QTL on BTA2 (2_47 Mb; Table 4) was proximal to the transcriptional start site of LYPD6, which has previously been found to be differentially expressed between low- and high-RFI cattle [55].
An investigation of all 778K markers that met the minimum significance threshold [27] (PVE Range = 1.4% - 1.9%) revealed additional support for the seven moderate-effect DMI QTL (Additional file 1). Mixed model analysis of DMI in SimAngus using the 50K marker set revealed four markers that met the minimum significance threshold [27], and these corresponded to three of the seven moderate-effect QTL detected in the 778K analysis (i.e., were ≤ 1 Mb from 14_17 Mb, 21_16 Mb, and 15_57 Mb; Table 4). Further investigation of the 100 top-ranked 50K markers revealed evidence for the replication of two additional QTL detected in the 778K analysis (i.e., were ≤ 1 Mb from 26_14 Mb and 3_22 Mb; Table 4).
EMMAX GWAS for ADG in U.S. Beef Cattle
Results of the 778K single-marker mixed model analyses for ADG are presented in Fig. 3 and Table 5. Statistical evidence for moderate or large-effect QTL was observed for all of the investigated populations [13,24,25], with PVE ranging from 1.1% to 3.2% (Additional file 1).
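The effect-size vocabulary used here and throughout follows fixed PVE cutoffs (large: PVE ≥ 2.0%; moderate: 1.0% < PVE < 2.0%). As a small sketch of that classification:

```python
def classify_qtl(pve_percent):
    """Classify a QTL by the PVE thresholds used in this study
    (PVE expressed as a percentage of phenotypic variance)."""
    if pve_percent >= 2.0:
        return "large"
    if pve_percent > 1.0:
        return "moderate"
    return "below threshold"

print(classify_qtl(3.2))  # large
print(classify_qtl(1.1))  # moderate
```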
An investigation of the largest-effect ADG QTL detected in Angus in the 778K analysis revealed at least nine biologically relevant positional candidate genes (NOX3, OR4X1, SKAP2, ADAMTS19, SLC27A6, SLC12A2, PGM1, PITX2, EGF; Table 5) previously associated with livestock feed efficiency, insulin resistance, Type 1 diabetes, and adipogenesis in humans [11,50,95-102]. The largest-effect ADG QTL was detected on BTA9 (9_93 Mb) upstream of NOX3, which has previously been shown to modulate palmitate-induced insulin resistance in human hepatic cells [95]. We also detected an ADG QTL on BTA15 (15_79 Mb; OR4X1; Table 5), which was located within an olfactory receptor and receptor-like gene cluster. Olfactory receptors have previously been proposed as underlying feed efficiency QTL in pigs [50], and this QTL is proximal to the RFI QTL detected in SimAngus (OR9Q2; Table 3) [13]. A moderate-effect ADG QTL detected on BTA4 (4_69 Mb) suggested only a single positional candidate gene (SKAP2) that has previously been confirmed to be associated with Type 1 diabetes in humans [97], and additional positional candidate genes underlying Angus ADG QTL on BTA7 (7_26 Mb; SLC12A2) and BTA3 (3_82 Mb; PGM1) have also been associated with feed efficiency in cattle and pigs, respectively [11,100]. We identified two positional candidate genes (PITX2, EGF; Table 5) for the ADG QTL detected on BTA6 (6_16 Mb). While PITX2 is known to be associated with stem cell commitment to adipogenesis in humans [101], EGF has been shown to regulate the absorption of nutrients and electrolytes from the small intestine of rabbits [102]. Evaluation of all genes proximal to the largest-effect ADG QTL detected in Angus revealed at least two additional positional candidate genes on BTA7 (7_26 Mb; ADAMTS19, SLC27A6; Table 5).
Significant synergistic interactions have been reported between human genetic variation in ADAMTS19 and IGFR2 [98], whereas genetic variation in SLC27A6 has been associated with fatty acid composition of bovine milk [99]. The MAFs for all lead SNPs defining the six moderate-effect ADG QTL in Angus ranged from 0.24 (BTA6_16 Mb) to 0.45 (BTA7_26 Mb), with five of the six QTL having lead SNPs with MAFs ≥ 0.25 (Additional file 1). The 50K analysis produced seven markers that met the minimum significance threshold [27], which corresponded to three (15_79 Mb; 3_82 Mb; 6_16 Mb) of the six largest-effect QTL detected by the 778K analysis (Table 5). Evidence for the ADG QTL on BTA9 (9_93 Mb) was found among the nine top-ranked markers from the 50K analysis. Notably, the Angus ADG QTL detected on BTA4 (4_69 Mb) and BTA7 (7_26 Mb) in the 778K analysis were not supported by the locations of the top 100 ranked 50K markers.
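MAF summaries like those reported above can be computed directly from 0/1/2-coded array genotypes; a minimal sketch with made-up genotypes (not data from this study):

```python
def minor_allele_frequency(genotypes):
    """MAF from diploid genotypes coded as alternate-allele counts (0/1/2),
    as delivered by SNP arrays; no-calls are ignored for simplicity."""
    n_alleles = 2 * len(genotypes)
    p = sum(genotypes) / n_alleles
    return min(p, 1.0 - p)

# Ten hypothetical animals:
genos = [0, 1, 2, 0, 1, 0, 0, 1, 0, 0]
maf = minor_allele_frequency(genos)
print(maf)  # 0.25
assert maf >= 0.01  # passes the MAF cutoff applied in all QTL analyses
```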
Analysis of ADG in Hereford cattle using the 778K genotypes revealed at least eight biologically relevant positional candidate genes corresponding to seven large-effect QTL (PVE Range = 2.1% - 3.1%; Table 5). Positional candidate genes proximal to the largest-effect QTL on BTA5 (5_106 Mb; FGF6, FGF23, CCND2; Table 5) have previously been associated with carcass muscle mass in Charolais cattle, blood-based markers for insulin resistance and obesity in humans, and insulin resistance versus sensitivity in human adipose tissue, respectively [103][104][105]. A large-effect QTL was also detected on BTA7 (7_93 Mb), for which we found one underlying positional candidate gene (ARRDC3) that has previously been associated with obesity in humans and mice [106]. Four additional large-effect ADG QTL signals were also detected on BTA8 (8_1 Mb; 8_4 Mb; 8_2 Mb) and BTA1 (1_144 Mb; Table 5). Further evaluations of these QTL revealed positional candidate genes which have previously been associated with human adipocyte size (PALLD) [107] as well as feed efficiency and growth traits in cattle (GALNTL6) [108][109][110], with a recent investigation providing some evidence for bovine copy number variants proximal to GALNTL6 [110]. Overlapping ADG and DMI QTL were detected on BTA8 (8_1 Mb; 8_2 Mb) in Hereford, with PALLD and AADAT being positional candidate genes for both traits. Notably, while AADAT has previously been associated with feed efficiency and growth traits in both poultry and seabass [81,111] (Table 5), the BTA8 QTL signals (8_1 Mb; 8_2 Mb) for DMI in Hereford were limited to a single lead SNP underlying each QTL that met the minimum significance threshold [27] (Additional file 1). An investigation of the QTL on BTA1 revealed at least three biologically relevant positional candidate genes (ABCG1, TFF3, UBASH3A; Table 5).
The ATP-binding cassette transporter protein encoded by ABCG1 has unambiguously been shown to possess a major role in adiposity and fat mass growth in humans as well as mice [112], while increased levels of the intestinal protein encoded by TFF3 improved glucose tolerance in a diet-induced mouse obesity model [113]. Along the same lines, the positional candidate UBASH3A has previously been associated with marbling score in pigs [114], and is differentially methylated among peripheral blood leukocytes derived from lean and obese human adolescents [115]. Finally, we also detected a large-effect QTL on BTA2 (2_23 Mb; Table 5) with positional candidates that have been proposed to underlie feed efficiency QTL in pigs [50], and OLA1, which has been associated with the incidence of infectious bovine keratoconjunctivitis in Angus cattle [116]. The results of our 778K analysis are generally concordant with a recent Bayesian analysis that utilized 1 Mb windows to investigate ADG QTL in U.S. Hereford cattle [13], with our analysis further refining the positional candidate genes for large-effect QTL detected by both analyses on BTA5, BTA7, and BTA8. Moreover, we also provide evidence for additional ADG QTL on BTA8 (8_2 Mb; 8_4 Mb), BTA1 (1_144 Mb), and BTA2 (2_23 Mb) that were not previously detected [13].

Fig. 3 Average daily gain (ADG) QTL. The top pane of each composite panel reflects a Manhattan plot with EMMAX -log10 P-values for Illumina 778K markers, whereas the bottom pane depicts the estimated proportion of variance explained (PVE) by marker effects. Lead and supporting SNPs for moderate (1.0% < PVE < 2.0%) or large-effect QTL (PVE ≥ 2.0%) with P ≤ 5e-05 and MAF ≥ 0.01 are shown at or above the red line for U.S. Angus (a; n = 1572), Hereford (b; n = 849), and SimAngus (c; n = 1237) beef cattle. The pseudo-autosomal region of BTAX is not depicted. A summary of all markers meeting the nominal significance level and MAF cutoff is presented in Additional file 1. Bovine 778K QTL criteria are described in Methods.
Several of the ADG QTL detected in SimAngus (Table 5) were predicted to be proximal to, or overlap with, QTL detected for other traits or populations. For example, the SimAngus ADG QTL detected on BTA3 (3_22 Mb; Table 5) was also detected in the SimAngus DMI analysis (Table 4), suggesting either pleiotropy or that independent but proximal causal mutations influence ADG and DMI. Identical lead SNPs were found for both traits in SimAngus, which is concordant with a pleiotropic QTL. Further, the SimAngus ADG QTL detected on BTA7 (7_26 Mb; Table 5) was also detected in the analysis of ADG in Angus (Table 5), and was proximal (≤ 1 Mb) to an Angus DMI QTL (7_27 Mb; Table 4). These results were relatively unsurprising considering the Angus influence within this SimAngus population. While the lead SNPs defining the ADG and DMI QTL (7_26 Mb; 7_27 Mb) were not concordant in Angus, the directions of the SNP effects were concordant (Tables 4 and 5), suggesting either proximal but independent causal mutations, or the inability to accurately estimate a pleiotropic QTL position based on the Angus sample size for DMI (n = 706). Finally, the SimAngus ADG QTL detected on BTA19 (19_54 Mb) overlaps with one of the largest-effect QTL detected for RFI in Hereford (Table 3), and was also proximal (≤ 1.71 Mb; 19_55 Mb) to a Hereford DMI QTL (Table 4). Therefore, our analyses of these data suggest the existence of pleiotropic feed efficiency and growth QTL in U.S. beef populations that have not previously been reported [13]. Moreover, the overlap also suggests that positional candidate genes on BTA3 (CHD1L, FMO5; 3_22 Mb), BTA7 (ADAMTS19, SLC27A6; 7_26 Mb), and BTA19 (DNAH17, RBFOX3; 19_54 Mb) may potentially hold biological value beyond selection for ADG in SimAngus cattle (Table 5), as overlapping or proximal genomic regions and corresponding positional candidate genes were identified during our analyses of RFI, DMI, and ADG for all three populations (see Tables 3-5).
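The proximity reasoning above (flagging QTL pairs whose lead SNPs fall within about 1 Mb of each other across traits or populations) is mechanical to apply; the following is a hedged sketch with positions that mirror, but are not taken verbatim from, the BTA7 example:

```python
from itertools import combinations

def proximal_qtl(leads, max_mb=1.0):
    """Flag QTL pairs (across traits or populations) whose lead SNPs lie on
    the same chromosome within `max_mb` of one another, as candidates for
    pleiotropy or proximal independent causal mutations. `leads` maps a
    label to a (chromosome, lead SNP position in Mb) tuple."""
    pairs = []
    for (a, (ca, pa)), (b, (cb, pb)) in combinations(leads.items(), 2):
        if ca == cb and abs(pa - pb) <= max_mb:
            pairs.append((a, b))
    return pairs

# Hypothetical lead positions:
leads = {
    "Angus_ADG": ("BTA7", 26.0),
    "Angus_DMI": ("BTA7", 27.0),
    "SimAngus_ADG": ("BTA19", 54.0),
}
print(proximal_qtl(leads))  # [('Angus_ADG', 'Angus_DMI')]
```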
Beyond the pleiotropic and multi-breed QTL described above, moderate-effect QTL for ADG were also detected on BTA27 (27_26 Mb), BTA13 (13_53 Mb), BTA14 (14_7 Mb), and BTA20 (20_39 Mb) in SimAngus (Table 5). The QTL explaining the largest proportion of variance in ADG (BTA27_26 Mb) colocalized with the positional candidate gene GTF2E2, which has previously been associated with a metabolite (X-11787; hydroxy-leucine or hydroxyisoleucine) that is predictive of heart failure in African Americans [117]. Additionally, at least three positional candidate genes were identified (STK35, PDYN, SIRPA) for the ADG QTL on BTA13 (13_53 Mb). In particular, STK35, which is a nuclear Serine/Threonine kinase, is differentially expressed in the peripheral blood monocytes of Type 1 versus Type 2 diabetes patients [118], while PDYN (prodynorphin) is a prohormone that is differentially expressed in the livers of cattle exposed to differing nutritional statuses [119]. PDYN knockout affects feeding behavior, fasting weight loss, fat mass, and bone mineral content in mice [120], whereas SIRPA expression in human platelets was found to interact with obesity traits such as adiposity in a study of inflammatory proteins and atherosclerosis [121]. With regard to the overlapping SimAngus ADG and DMI QTL (3_22 Mb), FMO5 (BTA3_22 Mb) has been associated with obesity and diabetes in mice [88] and CHD1L is known to influence cellular proliferation [122,123]. Evaluation of the SimAngus ADG QTL on BTA14 (14_7 Mb) revealed an annotated positional candidate gene (KHDRBS3) that is associated with intramuscular fat deposition in cattle [124,125]. A positional candidate gene (RAI14) for the ADG QTL on BTA20 (20_39 Mb) is associated with obesity in mice [126,127].
Similarly, while the positional candidate gene DNAH17 (19_54 Mb) has also been associated with aspects of human adipogenesis (adipogenic differentiation) [40], a flanking positional candidate gene for this QTL (RBFOX3; Table 5) has been associated with serum urate levels in relation to human BMI [128]. The MAFs for all lead SNPs defining the seven moderate-effect ADG QTL in SimAngus ranged from 0.03 (BTA14_7 Mb) to nearly 0.50 (BTA3_22 Mb, MAF = 0.496), with five of the seven QTL having lead SNPs with MAFs ≥ 0.12 (Additional file 1). After reducing the marker density from 778K to 50K and reanalyzing ADG, five markers met the minimum significance threshold [27], including evidence for QTL replication on BTA20 (20_39 Mb). Among the top 100 ranked 50K markers, we only found evidence for the replication of two additional SimAngus ADG QTL detected in the 778K analysis (27_26 Mb; 7_26 Mb), again underscoring an inability to detect ADG QTL using the reduced marker panel.
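One simple way to reduce a set of significant markers to lead and supporting SNPs, in the spirit of (though not necessarily identical to) the grouping used in these analyses, is a greedy pass over the markers on each chromosome that have already survived the P ≤ 5e-05 and MAF ≥ 0.01 filters:

```python
def call_qtl(markers, window_bp=1_000_000):
    """Greedy QTL calling sketch: repeatedly take the most significant
    remaining SNP as a lead and absorb markers within `window_bp` of it
    as supporting SNPs. `markers` is a list of (position_bp, pvalue)
    tuples from a single chromosome."""
    remaining = sorted(markers, key=lambda m: m[1])  # most significant first
    qtl = []
    while remaining:
        lead = remaining.pop(0)
        support = [m for m in remaining if abs(m[0] - lead[0]) <= window_bp]
        remaining = [m for m in remaining if m not in support]
        qtl.append({"lead": lead, "supporting": support})
    return qtl

# Hypothetical significant markers on one chromosome:
markers = [(63_100_000, 1e-7), (63_400_000, 2e-6), (90_000_000, 3e-5)]
result = call_qtl(markers)
print(len(result))           # 2 QTL
print(result[0]["lead"][0])  # 63100000 (lead SNP of the first QTL)
```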
EMMAX GWAS for MMWT in U.S. Beef Cattle
Results of the 778K single-marker mixed model analyses for MMWT are presented in Fig. 4 and Table 6. Evidence for moderate or large-effect QTL was observed for all of the populations, with PVE ranging from 1.1% to 4.6% (Additional file 1).
Investigation of MMWT in Angus using the 778K genotype set revealed evidence for eight large-effect QTL (PVE ≥ 2.0%) distributed across five autosomes ( Table 6). The lead SNP defining the largest-effect QTL on BTA7 (7_24 Mb) explained 4.6% of the variance in MMWT, with the positional candidate gene ACSL6 directly underlying the QTL signal. This result is concordant with the previous Bayesian study employing 1 Mb windows, in which ACSL6 was implicated for QTL underlying both DMI and MMWT [13]. In contrast, we place the Angus DMI QTL signal approximately 3 Mb away from ACSL6, and approximately 2 Mb from an ADG QTL (Tables 4, 5 and 6). It should also be noted that the largest-effect QTL detected for MMWT in Angus was also proximal to a SimAngus ADG QTL (7_26 Mb; Table 5). While ACSL6 has been identified as a strong positional candidate gene for feed efficiency and growth in Angus [13], it has also been implicated in diabetic myocardial metabolism in rodents, with evidence for ACSL6 being an insulin-regulated gene [129]. Among the eight large-effect MMWT QTL detected in Angus, three were located on BTA1 (1_98 Mb; 1_108 Mb; 1_133 Mb). Positional candidate genes (MYNN, ACTRT3; Table 6) which were proximal to the lead SNP at 98.5 Mb on BTA1 have been implicated in obesity and diabetes traits in humans and mice, respectively [130,131]. Moreover, two proximally relevant positional candidate genes were also suggested by the locations of the lead SNPs for Angus MMWT QTL detected near 108 Mb (PPM1L) and 133 Mb (NCK1) on BTA1, with PPM1L previously being associated with mouse obesity, metabolic syndrome, and growth [132], and NCK1 associated with glucose tolerance and insulin resistance in mice [133]. Other large-effect MMWT QTL were detected on BTA7 (7_0 Mb), BTA20 (20_72 Mb), BTA21 (21_13 Mb), and BTA18 (18_18 Mb; Table 6). 
Several positional candidate genes were proximal to the lead SNP defining a second BTA7 QTL (7_0 Mb; Table 6) for MMWT, including CNOT6, which together with a related paralog, CNOT6L, has been associated with the regulation of human cell growth and survival [134,135]. Additionally, at least four unannotated genes are also in close physical proximity to the lead SNP on BTA7, including an olfactory receptor-like gene, two loci with similarity to CLEC7A (a C-type lectin domain superfamily member), one uncharacterized locus, and FLT4, which encodes a tyrosine kinase receptor for vascular endothelial growth factors C and D (Table 6).
Three positional candidate genes were identified for the Angus MMWT QTL detected on BTA20 (20_72 Mb) and BTA21 (21_13 Mb) (Table 6). On BTA20, the lead SNP (71.9 Mb) was proximal to PDCD6 and SLC9A3, which have been associated with muscle cell membrane repair in mice, and diet-induced changes of the caprine rumen, respectively [136,137]. The lead SNP for the Angus MMWT QTL on BTA21 (21_13 Mb; Table 6) was adjacent to MCTP2 (13.24 Mb), which is known to be associated with human adiposity and obesity traits [138]. Additionally, two positional candidate genes for the MMWT QTL on BTA18 (18_18 Mb) have been associated with appetite regulation in the rat (CBLN1), and feed efficiency and growth in cattle (ZNF423) [139,140]. A final investigation of all Illumina 778K markers that met or exceeded the Wellcome Trust significance threshold [27] revealed additional marker-based support for seven of the eight large-effect, and at least eight moderate-effect (1.0% < PVE < 2.0%) MMWT QTL (i.e., 7_68 Mb; 8_80 Mb; 9_91 Mb; 10_44 Mb; 14_68 Mb; 17_35 Mb; 17_41 Mb; 19_42 Mb; Additional file 1). The directions of the marker effects were also consistently positive for all eight large-effect Angus MMWT QTL (Table 6), which was a unique observation among the three populations and traits analyzed in this study. The MAFs for all lead SNPs defining the eight large-effect MMWT QTL in Angus ranged from 0.06 (BTA18_18 Mb) to 0.20 (BTA7_24 Mb), with six of the eight QTL having lead SNPs possessing MAFs ≥ 0.11. Reduction of the marker set from 778K to 50K revealed four markers that met or exceeded the Wellcome Trust significance threshold [27], but none of the markers identified the genomic regions harboring the eight largest-effect QTL signals detected in the 778K analysis (Table 6). Among the top 100 ranked 50K markers, there was no evidence to support the eight largest-effect Angus MMWT QTL.
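Sign-consistency observations like the one above (all eight Angus MMWT lead-SNP effects positive) reduce to comparing the signs of the estimated effects; a small sketch with invented effect estimates:

```python
def effects_concordant(effects):
    """True when all non-zero marker effect estimates share the same sign."""
    signs = {effect > 0 for effect in effects if effect != 0}
    return len(signs) <= 1

print(effects_concordant([0.8, 1.2, 0.3]))  # True: all positive
print(effects_concordant([0.8, -0.5]))      # False: mixed signs
```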
Analysis of MMWT in Hereford using the 778K genotypes revealed evidence for nine large-effect QTL (PVE ≥ 2.0%) distributed across six autosomes and BTAX (Table 6). The two largest-effect QTL were detected on BTAX (X_113 Mb; X_105 Mb), with lead SNPs (112.90 Mb; 105.46 Mb) that were estimated to explain approximately 4.6% and 3.2% of the variance in MMWT. The location of the lead SNP near 113 Mb was upstream of the transcriptional start site of MAGEB16, which is differentially expressed under high and low protein diets, and is associated with pathways related to cancer [141]. The primary positional candidate genes underlying the lead SNP on BTAX near 105 Mb were MAOA and MAOB, which are associated with human obesity [142]. Moreover, the proposed mechanism of action by which MAOA and MAOB genotypes influence obesity relates to dopamine bioavailability, which is implicated in appetite regulation [142]. We also detected a MMWT QTL on BTA20 (20_5 Mb; lead SNP 4.90 Mb), which ranked third in our genome-wide analysis (PVE = 2.8%), and for which we identified several biologically important positional candidate genes including STC2 and LOC101903982 (SYNPO2-like; Table 6). Notably, STC2 is an endoplasmic reticulum stress response gene that is associated with adiposity and obesity in nondiabetic humans [143], while SYNPO2 plays a role in early skeletal muscle development (myofibrillogenesis), and because of its association with myopathy-related proteins, SYNPO2 is considered a candidate gene for muscle disease [144]. Beyond STC2, several additional genes encoding endoplasmic reticulum-related proteins (http://www.genecards.org) were also noted proximal to the MMWT QTL on BTA20 (20_5 Mb), including ERGIC1, CREBRF, and BNIP1 (Table 6).
Interestingly, most of the primary positional candidate genes for a large-effect QTL on BTA14 (14_6 Mb; lead SNP 6.02 Mb; PVE = 2.5%) have been associated with aspects of feed efficiency and growth or obesity (Table 6) across three vertebrate species [11,145,146]. Specifically, FAM135B has been associated with residual average daily gain (RADG) in SimAngus [11], and LRP12 is differentially expressed in the adipose tissue of divergently selected (lean versus fat) broiler lines [145] (Table 6). The biological significance of the positional candidate gene DPYS for the BTA14 (14_6 Mb) QTL stems from the fact that a deficiency in the enzyme encoded by this gene (dihydropyrimidinase) has been proposed to modulate growth retardation, failure to thrive, and other disadvantageous phenotypes in humans [146]. An evaluation of positional candidate genes colocalizing to the Hereford MMWT QTL on BTA19 (PVE = 2.4%; lead SNP 55.67 Mb) suggested that pleiotropic QTL influence both DMI and MMWT in Hereford (Tables 4 and 6), or that independent but proximal causal mutations influence the traits. While the lead SNPs identifying these QTL were not identical, they were proximally located (≈ 141 kb), implicating MGAT5B as a positional candidate gene for both DMI and MMWT (Tables 4 and 6). While MGAT5B is involved in the biosynthesis of N-linked and O-mannosyl-linked glycosylation [62,68], at least one prior investigation of this locus was catalyzed by the supposition that inactivation of the glycosyltransferases comprising the O-mannosyl processing pathway leads to human congenital disorders characterized by muscular atrophy with neuronal defects [68]. Additional positional candidate genes flanking the MMWT QTL on BTA19 include METTL23 and JMJD6 (Table 6). The protein encoded by METTL23 has methyltransferase activity and regulates a pathway underlying human cognition as well as GABPA function [147].
GABPA, which encodes a mitochondrial biogenesis and maintenance protein, was recently shown to be differentially expressed in the brown adipose tissue of mice subject to exercise, dietary restriction, and ephedrine treatment, with all such treatments resulting in weight loss as compared to controls [148]. Moreover, JMJD6, which encodes a nuclear protein involved in histone modification, transcription, RNA processing, and tissue development, was recently shown to have multiple roles in promoting adipogenic differentiation in mouse cells [149]. A large-effect MMWT QTL supported by 17 SNPs was also detected on BTA22 (lead SNP 11.03 Mb; PVE = 2.3%), with ITGA9 implicated as a primary positional candidate gene. Notably, ITGA9 has recently been associated with RFI in dairy cattle [54].
For U.S. Hereford cattle, our analysis of MMWT again revealed the potential for proximal but independent causal mutations, or a pleiotropic QTL on BTA8 (8_2 Mb) that influences ADG and MMWT (Tables 5 and 6), but that may also influence DMI (Additional file 1). While the lead SNP defining the Hereford MMWT QTL differed among the three traits, all were located upstream of the AADAT transcriptional start site, with the lead SNP for the MMWT QTL being most proximal to this positional candidate gene (BTA8 at 1.90 Mb). While we did not detect large-effect MMWT QTL for Hereford on BTA7 (7_93 Mb) or BTA18 (18_63 Mb) as previously described [13], we did replicate the QTL on BTA20 [13], the only difference being that we defined QTL position (20_5 Mb) by rounding the position of the lead SNP (4.90 Mb) to the nearest Mb. One plausible explanation for the differences is the use of non-overlapping 1 Mb windows in the previous Bayesian analysis [13] (Table 6). The two primary positional candidate genes GOLGA7B and CRTAC1 implicated by the lead and supporting SNPs on BTA26 (26_19 Mb) have been associated with dairy production traits in buffaloes, and lateral olfactory tract formation in mice [150,151].

Fig. 4 Mid-test metabolic weight (MMWT) QTL. The top pane of each composite panel reflects a Manhattan plot with EMMAX -log10 P-values for Illumina 778K markers, whereas the bottom pane depicts the estimated proportion of variance explained (PVE) by marker effects. Lead and supporting SNPs for moderate (1.0% < PVE < 2.0%) or large-effect QTL (PVE ≥ 2.0%) with P ≤ 5e-05 and MAF ≥ 0.01 are shown at or above the red line for U.S. Angus (a; n = 1572), Hereford (b; n = 849), and SimAngus (c; n = 1238) beef cattle. The pseudo-autosomal region of BTAX is not depicted. A summary of all markers meeting the nominal significance level and MAF cutoff is presented in Additional file 1. Bovine 778K QTL criteria are described in Methods.
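The QTL naming convention stated above (QTL position = lead SNP position rounded to the nearest Mb) can be made explicit; the BTA20 example from the text is reproduced below:

```python
def qtl_label(chromosome, lead_snp_bp):
    """Label a QTL by rounding the lead SNP position to the nearest Mb,
    matching the naming used in the text (e.g., '20_5 Mb')."""
    return f"{chromosome}_{round(lead_snp_bp / 1_000_000)} Mb"

print(qtl_label(20, 4_900_000))     # 20_5 Mb
print(qtl_label("X", 112_900_000))  # X_113 Mb
```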
Investigation of the lead and supporting SNPs (Additional file 1) defining a third large-effect QTL detected on BTAX near 145 Mb identified ANOS1 (i.e., previously KAL1) as the primary positional candidate gene (Table 6). ANOS1 has been associated with puberty timing in humans and serum testosterone levels in men, with this gene encoding an extracellular matrix protein implicated in the embryonic migration of gonadotrophin releasing hormone and olfactory-related neurons [152,153]. The MAFs for all lead SNPs defining the nine large-effect MMWT QTL in Hereford ranged from 0.01 (BTAX_113 Mb) to 0.39 (BTA20_5 Mb), with seven of the nine QTL having lead SNPs possessing MAFs > 0.05 (Additional file 1). Reduction of the marker content from 778K to 50K revealed 11 markers that met or exceeded the Wellcome Trust significance threshold [27], and included four of the nine largest-effect QTL that were detected in the 778K analysis (≤ 1 Mb from X_113 Mb; X_105 Mb; 20_5 Mb; 22_11 Mb; Table 6). Further investigation of the 100 top-ranked 50K markers revealed evidence supporting one additional large-effect QTL detected in the 778K analysis (≤ 1 Mb from 19_56 Mb; Table 6).

Analysis of the 778K genotypes for U.S. SimAngus revealed evidence for two large-effect (PVE ≥ 2.0%) and 10 moderate-effect (1.0% < PVE < 2.0%) MMWT QTL (Table 6). The lead SNPs identifying the two large-effect QTL on BTA14 (14_25 Mb, PVE = 2.3%; 14_24 Mb, PVE = 2.2%) were ≤ 590 kb apart (at 24.9 Mb and 24.3 Mb). However, both QTL were supported by additional SNPs, including some that were located ≥ 1 Mb apart (Additional file 1), suggesting that independent causal mutations influence MMWT in this region of BTA14 in SimAngus. At least eight biologically relevant positional candidate genes were identified within the QTL intervals spanning 14_25 Mb and 14_24 Mb (Table 6), including LYN, RPS20, MOS, PLAG1, CHCHD7, SDR16C5, SDR16C6 and PENK.
The genomic region harboring these genes is of particular interest considering previous associations with birth weight in Nellore cattle (LYN, RPS20, MOS, PLAG1, SDR16C5, SDR16C6, PENK), variation in adult human height (TGS1, LYN, RPS20, MOS, PLAG1, CHCHD7, SDR16C5, SDR16C6, PENK), and variation in body stature (hip-height) among beef and dairy cattle (PLAG1, CHCHD7, SDR16C5, SDR16C6) [154][155][156]. A moderate-effect MMWT QTL detected on BTA17 (17_18 Mb; lead SNP 18.04 Mb; PVE = 1.7%) suggests a positional candidate gene (MAML3) that was associated with the degree of obesity (obesity index) in pigs during an expression QTL analysis [157]. As in Hereford, we also detected a MMWT QTL in SimAngus on BTA20 (20_5 Mb; lead SNP 4.91 Mb; PVE = 1.6%; Table 6). The genomic location of this QTL physically overlaps between the two populations in this study, including some concordant supporting SNPs between the SimAngus MMWT QTL and the Hereford MMWT QTL (Additional file 1) whereby STC2 was implicated as the primary positional candidate gene (Table 6). This result is only partially concordant with the results of the Bayesian study employing non-overlapping 1 Mb windows, in which STC2 was found to underlie a large-effect QTL for MMWT and RFI in Hereford, but was not implicated in MMWT in SimAngus [13]. While the previous study reported a SimAngus MMWT QTL on BTA20 (20_6 Mb), the identified 1 Mb window was not concordant with the genomic positions of the lead and supporting SNPs from our analysis (lead SNP position = 4.91 Mb; range of supporting SNP positions = 4.87 Mb to 4.91 Mb, n = 7 supporting SNPs).
One moderate-effect MMWT QTL was detected on BTAX (X_148 Mb; lead SNP 147.54 Mb; PVE = 1.6%) in SimAngus, with NLGN4X directly underlying both the lead and the supporting SNPs (Table 6). NLGN4X encodes a neuronal cell surface protein that is a member of the type-B carboxylesterase/lipase protein family, and has been observed to exhibit sex-biased exon usage across multiple developmental time points in humans, but has no known association with feed efficiency or growth [158]. However, NLGN4X knockdown in human neural stem cells significantly altered the expression of genes comprising several biologically relevant ontology groups, including organ development (GO:0048513). Additional moderate-effect MMWT QTL were also detected in SimAngus (Table 6). On BTA28, the protein coding genes TAF5L and URB2 were most proximal to both the lead and the supporting SNPs defining the QTL detected on that chromosome. Human genetic variation in TAF5L, which is involved in myogenic transcription and differentiation, has been associated with risk for type-1 diabetes, while rare coding variants in URB2, a ribosome biogenesis gene, have been associated with fasting insulin levels in humans [161,162]. Relevant to feed efficiency and growth QTL recently detected and described [13], LOC101904320, LCORL, and NCAPG were proximal to the MMWT QTL on BTA6 (6_39 Mb; Table 6). Both the lead and all supporting SNPs were located in a noncoding intergenic region flanking all three positional candidate genes. This result is interesting because LCORL and NCAPG have recently been reported to underlie a MMWT QTL in Angus that spanned two contiguous 1 Mb windows identified by a Bayesian analysis (6_38 Mb; 6_39 Mb), and this genomic region is also known to harbor feed efficiency, growth, and carcass QTL (http://www.animalgenome.org/cgi-bin/QTLdb/BT/index) [13,163,164].
Four relevant positional candidate genes (FAM110B, UBXN2B, NSMAF, TOX) were identified for the MMWT QTL on BTA14 (14_26 Mb), with the lead SNP located in an intron of UBXN2B, a protein coding gene involved in endoplasmic reticulum biogenesis (Table 6). However, it should also be noted that FAM110B, NSMAF, and TOX have all previously been associated with puberty related traits in Brahman cattle, including age at formation of the first corpus luteum, and age at which scrotal circumference was ≥ 26 cm [165].
For the SimAngus MMWT QTL on BTA13 near 50 Mb, the most proximal protein coding gene with reference annotation was BMP2, while ITIH5 was determined to directly underlie the QTL near 16 Mb. We also noted that the transcriptional start site of SFMBT2 was proximal to both the lead and supporting SNPs that defined this QTL (13_16 Mb). Relevant to feed efficiency and growth, BMP2 has previously been associated with loin muscle area, body size, and structural traits in pigs, while ITIH5 and SFMBT2 have been associated with fasting insulin levels and BMI in humans, respectively [166][167][168]. Finally, the lead and supporting SNPs underlying the QTL on BTA27 (27_22 Mb) were found within an intron of SGCZ (lead SNP 21.70 Mb), which encodes a protein that bridges the inner cytoskeleton and the extracellular matrix, and has been associated with obesity-related traits in humans [41]. The MAFs for all lead SNPs defining the MMWT QTL detected in SimAngus ranged from 0.04 (4_10 Mb) to nearly 0.50 (13_50 Mb, MAF = 0.497), with 10 of the 12 QTL having lead SNPs possessing MAFs ≥ 0.15 (Additional file 1). Following the reduction in marker content from 778K to 50K, four markers met the Wellcome Trust significance threshold [27].
Conclusions
We present evidence for both large (PVE ≥ 2.0%) and moderate-effect QTL (1.0% ≤ PVE ≤ 2.0%) that influence RFI, DMI, ADG, and MMWT in U.S. Angus, Hereford, and SimAngus beef cattle. Collectively, the positional candidate genes implicated by the QTL analyses for these populations suggest that feed efficiency and growth-associated loci are likely to be conserved across vertebrate species. Moreover, among the detected feed efficiency and growth QTL, we frequently observed positional candidate genes that had previously been associated with obesity-related traits or metabolic syndrome, biological aspects of adiposity, diabetes-related traits, and feed efficiency and growth traits across a variety of vertebrate species (humans, mice, rats, pigs, chickens, fish), which suggests both a relationship among phenotypes (i.e., feed efficiency, metabolic syndrome and adiposity) and a conserved biological system underlying feed intake and efficiency. Furthermore, we detected 14 QTL regions within and between populations that ranged from being physically proximal (≤ 3 Mb) to fully overlapping for RFI, DMI, ADG, and MMWT, suggesting the existence of pleiotropy, of proximal but independent causal mutations influencing one or more of these traits, and of some multibreed QTL (Additional file 1). For example, one such 3 Mb QTL interval (19_54 Mb to 19_57 Mb) was implicated in every analyzed trait (i.e., Hereford: RFI, DMI, MMWT; SimAngus: ADG). Finally, our comparison of the detection resolution limits for the 50K versus the 778K assays is important, particularly since many previous analyses have been performed using 50K genotypes, with the results used to catalyze genetic progress in U.S. beef and dairy cattle. Herein, we demonstrate that while the 50K and the 778K assays produce very similar heritability estimates for RFI, DMI, ADG, and MMWT in our populations, some large and moderate-effect QTL go undetected in the 50K analysis, potentially reducing the opportunities for causal variant discovery.
Therefore, additional QTL of moderate to large-effect will undoubtedly be discovered in historic data sets by imputing the data from 50K to 778K or beyond, and repeating previously performed analyses, which are also likely to produce further evidence for pleiotropy or multi-population segregation.
Methods
All feed efficiency and growth data were either collected by commercial producers or under the approval from the University of Missouri (ACUC Protocol 7505) or the University of Illinois at Champaign-Urbana (IACUC Protocols 06091 and 09078) Animal Care and Use Committees.
Cattle populations, phenotypes, and genotypes
All study animals, genotyping, and methods related to the ascertainment of phenotypes of interest such as ADG, daily average DMI, and MMWT, measured in Angus, Hereford, and SimAngus cohorts, were recently described [13]. Likewise, residual feed intake (RFI) was estimated by including partial linear regressions on ADG and MMWT in the model used to analyze DMI [13]. Using the available BovineHD genotypes previously described for the Angus, Hereford, and SimAngus cohorts, we conducted sample filtering by call rate (< 0.90), SNP filtering by call rate (< 0.85), and MAF filtering (< 0.001) as described by Saatchi and colleagues [13]. Details regarding the cattle included in the present study were as follows: (A) 1572 U.S. Angus steers were available for the analysis of ADG and MMWT, whereas 706 purebred Angus steers were used to analyze DMI and RFI (see Statistics for further details); (B) 850 U.S. Hereford cattle (826 steers, 24 heifers) were available for the analysis of ADG, DMI, MMWT, and RFI; (C) 1465 U.S. SimAngus steers were available for the analysis of ADG, DMI, MMWT, and RFI. For the 778K analyses, there were 722,716 SNPs for Angus, 659,688 SNPs for Hereford and 653,132 SNPs for SimAngus. All 50K analyses were performed following a population-specific reduction in marker content from the filtered 778K to 50K density, resulting in 47,582, 48,728, and 45,926 available markers for Angus, Hereford, and SimAngus, respectively.
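As a concrete illustration, the sample, SNP, and MAF filters quoted above can be sketched as follows. This is a minimal NumPy sketch, not the authors' pipeline: the samples-by-SNPs matrix layout, the NaN encoding of failed calls, and the function name are illustrative assumptions; only the three thresholds come from the text.

```python
import numpy as np

def filter_genotypes(geno, sample_cr=0.90, snp_cr=0.85, maf_min=0.001):
    """geno: samples x SNPs matrix of minor-allele counts (0/1/2),
    with np.nan marking a failed call. Returns the filtered matrix
    plus boolean masks for the kept samples and SNPs."""
    # 1) drop samples with call rate < 0.90
    sample_cr_obs = 1.0 - np.isnan(geno).mean(axis=1)
    keep_samples = sample_cr_obs >= sample_cr
    geno = geno[keep_samples]

    # 2) drop SNPs with call rate < 0.85 (recomputed on the kept samples)
    snp_cr_obs = 1.0 - np.isnan(geno).mean(axis=0)
    keep_snps = snp_cr_obs >= snp_cr

    # 3) drop SNPs with minor allele frequency < 0.001
    freq = np.nanmean(geno, axis=0) / 2.0
    maf = np.minimum(freq, 1.0 - freq)
    keep_snps &= maf >= maf_min
    return geno[:, keep_snps], keep_samples, keep_snps

# toy input: the last sample has a low call rate, the last SNP is monomorphic
demo = np.array([[0.0, 1, 2, 0],
                 [1,   1, 0, 0],
                 [2,   0, 0, 0],
                 [np.nan, np.nan, 0, 0]])
kept, keep_samples, keep_snps = filter_genotypes(demo)
print(kept.shape)  # (3, 3): one sample and one SNP removed
```

Recomputing the SNP call rates after the sample filter (step 2) matters: a SNP that fails only because of low-quality samples is rescued once those samples are gone.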
Statistics
Genome-wide association analyses were performed using a mixed linear model, implemented in EMMAX [24, 169-171], and were executed in Python as well as in the SVS environment (Golden Helix, Version 7.7.6), as previously described [169, 170]. The mixed model used in this study can be generally specified as: y = Xβ + Zu + ε, where y is an n × 1 vector of the observed phenotypes, X is an n × q matrix of fixed effects, β is a q × 1 vector representing the coefficients of the fixed effects, and Z is an n × t matrix relating instances of the random effect to the specified phenotypes [169, 170] (http://doc.goldenhelix.com/SVS/8.2.1/mixed_models.html). Moreover, we assume that Var(u) = σg²K and Var(ε) = σe²I, such that Var(y) = σg²ZKZ′ + σe²I; in the present study, Z is simply the identity matrix I, and K is a kinship matrix among all genotyped samples. To solve the mixed model equations using a generalized least squares approach, the variance components (σg² and σe²) must first be estimated, as previously described [24, 169, 171]. We used the REML-based EMMA approach to estimate the variance components [24, 171], with stratification accounted for and controlled using the genomic kinship matrix [23] computed from either the filtered Illumina 778K or 50K genotypes.
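The whitening trick behind this kind of EMMAX-style scan can be sketched on simulated data. In the sketch below the variance components are taken as known rather than REML-estimated (EMMAX estimates them once before the per-marker scan), and the toy dimensions, effect sizes, and variable names are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 500                      # animals, markers (toy scale)

# genotypes recoded 0/1/2 as minor-allele counts, as in the paper
G = rng.integers(0, 3, size=(n, m)).astype(float)
Zg = (G - G.mean(axis=0)) / G.std(axis=0)   # column-standardized genotypes
K = Zg @ Zg.T / m                           # genomic kinship matrix

# simulate a phenotype: one causal marker (index 0) + polygenic term + noise
g_poly = rng.multivariate_normal(np.zeros(n), 0.5 * K)
y = 0.8 * Zg[:, 0] + g_poly + rng.normal(0.0, np.sqrt(0.5), n)

# variance components assumed known here; with Z = I, Var(y) = sg2*K + se2*I
sg2, se2 = 0.5, 0.5
V = sg2 * K + se2 * np.eye(n)

# whiten with V^(-1/2) so ordinary least squares becomes GLS
w, Q = np.linalg.eigh(V)
W = Q @ np.diag(w ** -0.5) @ Q.T
y_w = W @ y

# single-marker scan: intercept + marker, |t| statistic per marker
tstats = np.empty(m)
for j in range(m):
    X = W @ np.column_stack([np.ones(n), Zg[:, j]])
    beta, *_ = np.linalg.lstsq(X, y_w, rcond=None)
    resid = y_w - X @ beta
    sigma2 = resid @ resid / (n - 2)
    se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    tstats[j] = abs(beta[1] / se_b)

print(int(np.argmax(tstats)))               # index of the strongest association
```

Whitening the phenotype and each design matrix by V^(-1/2) absorbs the kinship covariance, which is what lets a fast per-marker least-squares loop stand in for a full mixed-model fit at every SNP.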
Models were evaluated using phenotypes and available data previously described [13], with final models parameterized to include the following: (A) Angus RFI included DMI as the dependent variable, with date of birth (DOB), contemporary group (CG), days on feed (DOF), ADG, and MMWT as covariates (n = 706 observations); Angus DMI included DOB, CG, and DOF as covariates (n = 706 observations); Angus ADG included DOB (mean for missing values), percent Angus (i.e., not all were purebred), CG, and DOF as covariates (n = 1572 observations); Angus MMWT included DOB (mean for missing values), percent Angus (i.e., not all purebred), CG, and DOF as covariates (n = 1572 observations); Briefly, 706 purebred Angus were used for final RFI and DMI analyses because they possessed complete information regarding DOB, and were all fed a uniform diet; (B) Hereford RFI included DMI as the dependent variable, with DOB, breed composition (i.e., not all were purebred), CG, sex, ADG, and MMWT as covariates (n = 846 observations); Hereford DMI included DOB, breed composition, CG, and sex as covariates (n = 846 observations). 
Hereford ADG included DOB, breed composition, CG, and sex as covariates (n = 849 observations); Hereford MMWT included DOB, breed composition, CG, and sex as covariates (n = 849 observations); (C) SimAngus RFI included DMI as the dependent variable, with DOB, breed composition, ranch, pen, experimental year, nutritional treatments (diet), DOF, slaughter group (SG), ADG, and MMWT as covariates (n = 1217 observations); SimAngus DMI included DOB, breed composition, ranch, pen, experimental year, nutritional treatments (diet), DOF, and SG as covariates (n = 1218 observations); SimAngus ADG included DOB, breed composition, ranch, pen, experimental year, nutritional treatments (diet), DOF, and SG as covariates (n = 1237 observations); SimAngus MMWT included DOB, breed composition, ranch, pen, experimental year, nutritional treatments (diet), DOF, and SG as covariates (n = 1238 observations). All covariates were specified and treated as previously described [24]. Prior to analysis, all array genotypes were recoded as 0, 1, or 2, based on the count of the minor allele for that animal at each SNP marker. For 778K and 50K analyses, markers were ranked by P-value and PVE. Bovine 778K QTL were defined by ≥ 2 markers with MAF ≥ 0.01 (i.e., a lead SNP plus at least one supporting SNP within 1 Mb) which also met both a nominal significance threshold (P ≤ 5e-05) [27] and a PVE threshold (PVE ≥ 1.0%) [24, 25, 169]; those QTL were evaluated, reported and discussed. However, a few putative QTL signals were presented for which only the lead SNP met all reporting criteria, with one or more supporting SNPs falling below the nominal significance threshold (e.g., 28_45 Mb SimAngus RFI − TMEM72, P < 1e-04). For the 50K analyses, the following criteria were used to evaluate whether a 778K QTL was replicated: among the top 100 ranked 50K SNPs, we required ≥ 1 SNP to be within 1 Mb of the 778K lead SNP.
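The QTL reporting rule described above (a lead SNP plus at least one supporting SNP within 1 Mb, all passing the MAF, P-value, and PVE thresholds) can be sketched as follows. The Snp record, the greedy most-significant-first grouping, and the function names are illustrative simplifications, not the actual pipeline; only the thresholds come from the text.

```python
from dataclasses import dataclass

@dataclass
class Snp:
    chrom: str
    pos_bp: int
    maf: float
    p: float
    pve: float

def call_qtl(snps, p_max=5e-5, pve_min=0.01, maf_min=0.01, window_bp=1_000_000):
    """Return (lead, supporters) pairs: a lead SNP plus >= 1 supporting SNP
    within 1 Mb, all passing the MAF, P-value, and PVE filters."""
    passing = [s for s in snps
               if s.maf >= maf_min and s.p <= p_max and s.pve >= pve_min]
    by_chrom = {}
    for s in passing:
        by_chrom.setdefault(s.chrom, []).append(s)

    qtl = []
    for group in by_chrom.values():
        group.sort(key=lambda s: s.p)        # most significant first -> lead
        used = set()
        for lead in group:
            if id(lead) in used:
                continue
            support = [s for s in group
                       if s is not lead and id(s) not in used
                       and abs(s.pos_bp - lead.pos_bp) <= window_bp]
            if support:                       # require >= 1 supporting SNP
                qtl.append((lead, support))
                used.update({id(lead), *(id(s) for s in support)})
    return qtl

# toy example: one reportable QTL on BTA6; the lone BTA20 SNP lacks support,
# and the 39.5 Mb SNP fails the MAF filter
snps = [
    Snp("6", 39_000_000, 0.25, 1e-8, 0.021),
    Snp("6", 39_400_000, 0.20, 2e-6, 0.012),
    Snp("20", 4_900_000, 0.30, 1e-7, 0.015),
    Snp("6", 39_500_000, 0.005, 1e-9, 0.030),
]
lead, support = call_qtl(snps)[0]
print(lead.pos_bp, len(support))  # 39000000 1
```

Note how the support requirement silently drops isolated significant markers, which is exactly why the text flags the few signals reported with only a lead SNP as exceptions to the rule.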
Additional file
Additional file 1: Summary data for all lead and supporting SNPs from the 778K analyses for RFI, DMI, ADG, and MMWT, including dbSNP identifiers and QTL coordinates. (XLSX 710 kb)
A BRIEF OVERVIEW ON THE ROLE OF THE INSTITUTE OF ANATOMY AT THE UNIVERSITY OF TARTU (THE FORMER IMPERIAL UNIVERSITY OF DORPAT/YURYEV) IN THE DEVELOPMENT OF ANTHROPOLOGY IN 1876–1918
During its history of nearly 390 years, the development of the University of Tartu has been discontinuous and complicated; at times it has even changed its location, but it has always included the Faculty of Medicine. For the longest period, from 1802 to 1918, the university operated as the Imperial University of Dorpat/Yuryev within the Russian Empire. Even today, new details can be added to the biographies of some professors and graduates concerning the years they spent in Tartu (Dorpat/Yuryev) in that period. Thus, the role of the famous professors of anatomy Christian Hermann Ludwig Stieda (1837–1918) and August Antonius Rauber (1841–1917) in the development of anthropology at the Institute of Anatomy, which they headed from 1876 to 1911, and in the following years until the Russian university in Tartu closed down in May 1918, has so far been studied only modestly. To fill this gap, we present a brief overview of the role of the Institute of Anatomy at the University of Tartu in the development of anthropology in 1876–1918.
INTRODUCTION
The Faculty of Medicine at the University of Tartu (Dorpat), reopened in April 1802, developed successfully for several decades. Doctoral theses on Estonians', Livonians', Latvians', Jews' and Lithuanians' anthropology were defended, and two students' prize essays were written [13].
The post of Ordinary Professor of Anatomy, which had remained vacant after Prof. C. H. L. Stieda's departure, was taken over by August Antonius Rauber who had been invited from Germany and was employed in February 1886 [11]. He also became Director of the auxiliary academic institution, the Institute of Anatomy, founded by his predecessor ten years earlier, in 1876. The staff of the institute consisted of the director and the prosector and sometimes an extraordinary assistant to the prosector [24].
After Stieda's departure, the duties of Director of the Institute of Anatomy were fulfilled for a short time by Ordinary Professor of Public Health Bernhard Eduard Otto Körber. He officially handed the property of the institute, including a rich collection of anatomical specimens (882 designations and 1170 items), over to its new director [2]. Meanwhile, anatomy had been taught by Adam Bruno Wikszemski (1847-?), who had worked as prosector from 1876 [25].
AUGUST ANTONIUS RAUBER
August Rauber, who arrived in Tartu at the age of 45, was born in the town of Obermoschel in Bavaria, in the large family of a court bailiff, on 9 March 1841. After completing the gymnasium with excellent results in 1859, young Rauber entered, as his father had recommended, the Faculty of Law at the University of Munich. Along with courses in law, he also attended lectures at the Faculty of Medicine. At the end of the first academic year, he passed exams at both faculties. Thereafter, he transferred to the Faculty of Medicine but also continued his studies of philosophy and law. A. Rauber graduated from the University of Munich in 1865. In the same year, he defended his doctoral thesis on Vater bodies and their relations to muscle sensitivity, which attracted broad attention.
As in his student years A. Rauber had worked under T. L. W. Bischoff and N. Rüdinger, well-known morphologists of the period, he continued working with them after graduation too. In 1866, Rauber left for Vienna for two years; having returned, he habilitated in Munich in 1869. Thereafter, on 10 August 1870, A. Rauber was conscripted into the army and served through the entire Franco-Prussian War as a battalion physician. There he acquired rich experience in field surgery and a deeper understanding of surgical anatomy. He was released from military service in March 1871. He resumed academic work in Munich, but without a permanent position as an assistant. In 1873, A. Rauber became Prof. W. His' prosector in Basel. In the same year, they moved to Leipzig together. There, Rauber was awarded the title of Extraordinary Professor. In 1875, he left because of fundamental disagreements with Prof. His, setting his scientific interests higher than personal material interests.
Thereafter, A. Rauber lived in Germany for 11 years as a freelance scholar entirely devoted to research. He made his living by delivering lectures and courses (including on anthropology) to students, although there were vacant positions at departments of anatomy in Germany.
More than half of his research papers (on macro- and microscopic anatomy, anthropology, embryology, teratology, and general morphology) were written in this period, including the majority of his most significant works [9,11].
Even before coming to Dorpat, A. Rauber had become the co-author of the third revised and supplemented edition of the two-volume textbook of anatomy by C. E. E. Hoffmann, former Professor of Anatomy at the University of Basel, which was published in Erlangen in 1886 [7]. This laid the foundation for his world-famous textbook of human anatomy, which went through many editions and was translated into several languages. The Russian edition came out in 1904 under the author's personal supervision [11].
Prof. A. Rauber's versatile educational and research activity continued in Dorpat/Yuryev (Tartu) during the following 25 years. Although the Russified university required that all lecturers should lecture and examine in Russian, the administration of the University of Yuryev made an exception for Rauber. He could continue teaching in German, his mother tongue and the main language of research at that time. After Professor of Surgery W. Koch left in 1906, A. Rauber was the only one at the Faculty of Medicine who continued teaching in German [16].
Prof. A. Rauber's lectures on human anatomy, microscopic anatomy and topographical anatomy were richly illustrated with specimens and models. To improve students' training in their speciality, he had begun organising an educational museum of anatomy immediately after his arrival in Dorpat. Rauber systematised the existing specimens and added a few to the collection. In 1890, he opened an educational museum of anatomy at the Institute of Anatomy. Students could use it for independent work in the hall of studies. By 1 January, the total number of specimens at the Institute of Anatomy was 921 designations and 1209 items [11]. All this became possible as the Institute of Anatomy acquired additional rooms in the Old Anatomical Theatre when the Institute of Pathological Anatomy and Physiology was transferred to the New Anatomical Theatre that had just been completed [21].
Materials on the staff members and research papers of the Institute of Anatomy headed by Prof. Rauber can be found in several issues of the journals Anatomischer Anzeiger and Russky Antropologichesky Zhurnal [6,8,9,19]. A closer look at them reveals that the Institute of Anatomy carried out extensive anthropological research. The doctoral theses defended under Prof. A. Rauber's supervision were C. H. von Samson's study on the sigmoid colon (1890), R. Weinberg's anatomical-anthropological study on Estonians' cerebral gyri (1894), J. E. Jürgensohn's craniological study on the palatal torus (1896), E. Hugo's study on frontal suture ossification (1910), N. Goryainov's comparative study on Insula Reili (1912) [9].
ANTHROPOLOGY AT THE INSTITUTE OF ANATOMY
It should be specified that, contrary to what some researchers have written [19], at that time there was officially no separate structural unit such as a laboratory of anthropology at the Institute of Anatomy of the University of Yuryev.
Still, it is known that, to improve the teaching of anthropology, Prof. A. Rauber applied as early as 1901 for the establishment of an independent professorship of anthropology at the university. To improve research and practical work in anthropology, he even submitted a project in 1909, according to which the Old Anatomical Theatre (one of the first buildings of the Faculty of Medicine, built in 1805, with annexes added in 1827 and 1860) should, after reconstruction, be given to the existing Institute of Anatomy and the future Institute of Anthropology. For the other departments and institutes located in the Old Anatomical Theatre, he envisaged constructing two new buildings in its immediate vicinity. His application was not approved, and the project did not materialise [4].
Along with that, it is known that a few months after the reopening of the university in Dorpat, in the autumn semester of 1802, D. G. Balk (1764-1826, who had studied in Königsberg and Berlin), Ordinary Professor of Pathology, Semiotics, Therapy and Clinic, started lecturing on anthropology on his own initiative. The lectures were delivered to the students of the Faculty of Medicine until 1808, four times a week, an hour at a time. He based his lectures on the book Medizinisch-Philosophische Anthropologie für Aerzte und Nichtaerzte by Johann Daniel Metzger (1739-1805), physician in ordinary to the Prussian king, privy councillor and professor at Königsberg University. The book, published in Leipzig in 1790, had been recommended for academic lectures. The 208-page book consisted of an introduction and six chapters. It began with an overview of the descent of humans, which was relatively thorough for that time.
This was followed by chapters on medical psychology, physiology, dietetics, pathology and medicines [10].
Prof. Balk's private anatomical-pathological collection, which he used to illustrate his lectures, possibly also those on anthropology, laid the foundation for the collections of specimens on normal and pathological human anatomy at our university [10].
This could be compared to the beginning of teaching of anthropology at the other older universities in Tsarist Russia. In 1805, in his speech at the festive ceremony of the 50th anniversary of Moscow University (the oldest university in Tsarist Russia), I. Vensovich, professor of anatomy, physiology and forensic medicine, proposed for the first time that anthropology should be taught at the University of Moscow and substantiated his stance.
In Tartu, anthropology had been taught for three years by that time. The year 1805 could also be considered the peak of the teaching of anthropology at the University of Dorpat (Tartu), as four professors of the Faculty of Medicine simultaneously delivered five different courses of lectures concerning anthropology and its elements [10]. The other older universities of Tsarist Russia were restored or founded after the reopening of the University of Dorpat (Tartu) in 1802 [16]. Thus, the anthropology lectures begun by Professor of Pathology, Semiotics, Therapy and Clinic D. G. Balk in 1802 can be said to be the first at any university in Tsarist Russia.
At Prof. Balk's suggestion, K. E. von Baer, a future natural scientist of world renown and a graduate of the Imperial University of Dorpat, wrote his doctoral thesis On Estonians' Endemic Diseases in 1814 [10]. This marks the beginning of Estonians' anthropological research [17].
Based on the programme of lectures of the university, it can be said that Professor of Anatomy A. Rauber never delivered lectures on anthropology in Dorpat/Yuryev (Tartu), but two of his students did.
RICHARD JACOB WEINBERG AND ABRAM EBER LANDAU
Prof. A. Rauber's student Richard Jacob Weinberg (who until 1895 used the first names Jacob Salomon) was born in Talsi, in the governorate of Courland (now in Latvia), on 30 December 1867. He studied in Mitau (Jelgava) and at the Riga Governorate Gymnasium; thereafter, at the department of natural science at Moscow University for two semesters in 1886-1887. On 10 February 1888, he became a student of the Faculty of Medicine at the University of Dorpat. He completed his studies in 1893 with the qualification of a physician, after which he became supernumerary assistant to the prosector at the Institute of Anatomy. In the following year, he was awarded the degree of Doctor of Medicine. As Weinberg had received a state scholarship for three years as a student, he had to work in the civil service for four and a half years. According to the notice of the Department of Medicine at the Ministry of Internal Affairs (of 24 July 1895), he was appointed doctor of Kolyma district in Yakutsk region. He accepted the money for travelling expenses (1854 roubles and 47 copecks in the currency of that time) but, for some reason, did not start working in the Far East of Russia. Therefore, he had to pay the money back in small monthly instalments over many years [1].
In 1897, R. Weinberg submitted an application to the St Petersburg Academy of Sciences to receive the K. E. von Baer Prize for four papers published in print. One of them was a revised variant of his doctoral thesis; the second was a comparative anatomical study of Latvians' brains; the third included the measurement data of Latvians' skulls at the Institute of Anatomy of the university; and the fourth paper dealt with the anatomy of newborns [1]. Thus, the St Petersburg Academy of Sciences awarded R. J. Weinberg the K. E. von Baer Prize (5000 roubles) for his research in 1897, at the very beginning of his scientific career.
R. J. Weinberg received the rights of Privatdozent in normal human anatomy in January 1903 [1] and delivered his opening lecture on an anthropological theme, Slavs and their physical evolution [12]. He began lecturing on anthropology in the autumn of the same year and continued over the following five semesters. By semester, his lectures were: anthropology, part 1 (2 hours per week); anthropology, part 2 (2 hours); homeland anthropology, for students of all faculties (1 hour); anthropological methods (1 hour); and anthropology, part 1, general physical anthropology (1 hour).
In the spring semester of 1906, the university was closed in the aftermath of the revolutionary unrest of the previous year [22]. In August of the same year, Privatdozent R. J. Weinberg, supernumerary assistant to the prosector, became an extraordinary professor at the Department of Anatomy at St Petersburg Medical Institute for Women [1,19].
The years-long dream of Professor of Anatomy A. Rauber of a separate Institute of Anthropology, with his student R. J. Weinberg as its director and professor of anthropology, did not come true. In his review of A. E. Landau's application, Prof. Rauber expresses the opinion that it would be advisable for lectures on anthropology to be delivered at the Institute of Anatomy as early as the following semester. This would help students acquaint themselves with the broad field of contemporary anthropology. Prof. A. Rauber assesses all the submitted research papers highly. At the end of the review, he notes that Doctor Landau has a great desire to join the university teaching staff as Privatdozent of anatomy and anthropology. The reviewer considered this wish fully substantiated and supported it; he also called on the faculty to show a benevolent attitude towards Landau's application.
Prof. V. A. Afanasyev assessed in his review only the doctoral thesis he had supervised, resubmitting the copy of the review he had written for its defence. In conclusion, he finds that Dr. Landau can be allowed to deliver lectures as Privatdozent.
Prof. P. Polyakov begins his review by saying that he does not agree with the two previous reviewers. Landau wants to obtain the post of Privatdozent in anatomy and anthropology, but the ten papers submitted belong to five different disciplines. Four of the five papers on histology concern the adrenal glands and add nothing new to science; one paper deals with fixation of tissues by boiling, a very old method known since the 17th century. The paper on embryology was written in cooperation with the experienced researcher Krasusska, with Landau merely in the role of a student. The first paper on anthropology is about a device for measuring the internal volume of the skull. Prof. Polyakov does not understand why such a complicated apparatus should be used for a simple procedure. Why should the opinion of Prof. Waldeyer's anonymous assistant about this device influence Russian professors, as if they were unable to orientate themselves in simple matters? In the reviewer's opinion, the device does not give a correct result when used, as the measuring material does not reach everywhere in the cranial cavity because the device does not shake the skull. About the other anthropological paper, Prof. Polyakov finds that Landau has given an overview of Livonians' history based on the works of other authors. It also turns out that Landau did not measure Livonians in their actual places of residence, in villages, but in a house in Windau (now Ventspils, Latvia). Of all the work he did, he has presented only scanty material: two pages on 14 measured persons. A more modest author would not have published it yet. The paper on comparative anatomy concerns the domestic cat's cerebral convolutions; here the author has discovered a new convolution. Finally, the reviewer asks why a chapter on the histology of the adrenal glands has been artificially added to a doctoral thesis on experimental pathology. Then, Prof. 
Polyakov discusses why Landau is applying for the post of Privatdozent at the Department of Anatomy and Anthropology, as there is no such department at the University of Yuryev or in the whole Russian Empire. Does it make sense to burden students with additional lectures on anthropology when more than half of the students of the Faculty of Medicine were not transferred to the third year? In conclusion, Prof. Polyakov thinks that Landau could become a Privatdozent in anatomy only [3].
A. E. Landau received the rights of Privatdozent of Anatomy in January 1909 and delivered his introductory lecture on Votians' anthropology [6]. In the autumn of the same year, he began teaching anthropology to students. By semester, his lectures and practical classes were: racial variations of humankind (1 hour per week); methods of anthropological research (1 hour); in the following semester, methods of anthropological research again (1 hour) and Lamarck's and Darwin's teaching (1 hour); practical classes in anthropology (2 hours); racial variations of humankind (1 hour); practical classes in anthropology (2 hours); course of anthropology (2 hours) [21]. For those interested in somatic anthropology, Privatdozent A. E. Landau MD published a small handbook of anthropology in Russian in Yuryev (Tartu) in 1912 [23]. It was based on the teaching materials of this course and consisted of an introduction, seven chapters and a terminological glossary, 78 pages in total.
The practical classes in anthropology planned for the spring semester of 1913, 2 hours per week, were cancelled. On 13 January 1913, the university administration released A. E. Landau from the post of prosector's supernumerary assistant at his own request. A. E. Landau became extraordinary professor of anatomy and somatic anthropology in Bern [24].
THE MUSEUM OF ANTHROPOLOGY
In the final year when Prof. A. Rauber was Director of the Institute of Anatomy, more precisely on 6 November 1910, Privatdozent A. E. Landau, supernumerary assistant to the prosector, submitted an application to the Faculty of Medicine, in which he asked, if possible, for one or two rooms in one of the university buildings to be allocated for his anthropological collection. In his view, these rooms could be found in the first student dormitory of the university in Hetzel Street (completed in 1904, now J. Liivi Street 4) near the Old Anatomical Theatre. He had set up a private collection to illustrate the anthropology lectures he had delivered since the autumn of the previous year. The number of listeners had been about 40-50; many of them had taken a deeper interest in anthropology. A. E. Landau's collection was housed in two rooms of a private flat for which he had paid from his own pocket. In his opinion, his anthropological collection was still in its infancy. While teaching, he had also used, by kind permission of Prof. A. Rauber, some instruments and materials from the Institute of Anatomy. To develop the rudimentary anthropological collection into a real museum of anthropology, he asked the Faculty of Medicine to support his application. At the end of his application, he requested 200-300 roubles per year for supplementing and arranging the anthropological collection at the Imperial University of Yuryev. Right below Privatdozent A. E. Landau's signature, Prof. A. Rauber has written in his own hand that he supports the application and expects a favourable solution from the faculty.
The ambiguous request for financing at the end of Landau's application gave reason for asking for an additional written explanation on this question. The explanation, written on 15 November 1910, shows that he asked for that sum only to pay for the private flat where his anthropological collection was kept, if no room could be found for it on the university premises. On the same day, it was also mentioned that, without using university finances, he had procured the instruments necessary for successful teaching as well as plaster copies of excavated human bones. He had purchased a set of up-to-date anthropological instruments from Zürich, selected with the personal help of the famous Swiss Professor of Anthropology Rudolf Martin. As the collection was also used by the professor of zoology from the department of natural sciences, the greatest disadvantage was found to be that the collection was situated in Landau's private flat near the railway station.
Thereafter, attention was drawn to the fact that the general and special problems of anthropology had been dealt with at the university since the distant past. This is proved by the research papers of professors of anatomy and their numerous students; K. E. von Baer, L. Stieda, F. Waldhauer, I. Brennsohn, A. Rauber, R. Weinberg, E. Hugo and others were mentioned. As interest in anthropological knowledge had grown considerably among criminalists and psychiatrists over the preceding thirty years, the majority of the signatories found that this subject was of particular importance for future physicians who, working in different regions and conducting studies on local inhabitants, could support the development of this branch of science. It seemed to them that an obstacle to the teaching of anthropology was that all the study and research aids were kept in Landau's private flat at a distance from the university, which was inconvenient for both students and lecturers. They also mentioned that failure to find rooms for the anthropological collection could have a negative influence in the future, as the university would lose its significance as a research centre of Baltic anthropology. Therefore, they decided to request the University Council to satisfy Privatdozent A. E. Landau's application either by allocating one or two rooms in one of the university buildings or, if this proved impossible, by allocating 200-300 roubles annually for paying the rent for the private flat in two rooms of which the collection was kept.
Pyotr Polyakov, Ordinary Professor of Comparative Anatomy, Embryology and Histology, submitted his dissenting opinion on 20 November 1910 on one handwritten page. He did not agree with the opinion of the majority of the Faculty of Medicine that Dr. Landau should be given finances for rooms and for arranging and supplementing his private anthropological collection. Prof. Polyakov found that there was no need to open a new auxiliary educational institution, as it was not stipulated by the University Statutes of 1884, and even the existing auxiliary educational institutions did not have enough finances. He also noted that, in the universities of Russia, the teaching of anthropology and the anthropology museum had always been part of the department of human anatomy. Next, he posed the question of why the university should allocate rooms and money for a private collection. He saw this as a dangerous precedent -in the future, each Privatdozent who, for some reason, was not content with his department and auxiliary educational institution could start applying for special rooms and money for their furnishing. In conclusion, Prof. P. Polyakov recommended accommodating the collection at the Institute of Anatomy, for which the rooms of the Institute of Anatomy should be enlarged and the money allocated to the director of the Institute, not to Dr. Landau's private collection.
On 30 November 1910, the University Council discussed the application of the Faculty of Medicine Council concerning Privatdozent Landau's collection and decided to postpone its solution. The same application was discussed again on 10 December, and it was decided to forward it to the university administration for taking a stance. On 21 December, the university administration sent a letter to the student hostel manager Yermolai Gravit and asked about the availability of rooms there. On 29 December, the hostel manager replied that the hostel had no vacant rooms, but considering that all university institutions must contribute to the main task of the university, teaching, it was possible to allocate to the anthropological collection one two-person room in the corner of the second floor. This room never remained vacant, but was not used willingly, as it was colder than the others in winter. The room was located near the hall and auditorium of the hostel and was therefore noisier than the other rooms. The university administration decided on 30 December not to place any obstacles to using this room, on the condition that the collection be fully donated to the university. The decision was forwarded to the University Council. On 28 January 1911, the University Council decided to allow the allocation of a room in the hostel for the collection on the same condition, to ask Privatdozent Landau to submit a list of items in the collection, and to ask the curator of the Riga educational district to confirm this decision at least temporarily, until some new university buildings were completed.
On 1 February 1911, Privatdozent A. E. Landau submitted to the Rector of the Imperial University of Yuryev a list on two pages, written by his own hand, of the items that he would fully transfer to the anthropology collection to be created at the university. He had divided the donatable items into five groups. The first included plaster copies of excavated human bones. The second group consisted of two skeletons, the third of instruments and apparatuses, the fourth of charts and photos, the fifth was a small library. A. E. Landau also promised to replenish the collection according to opportunities.
In his letter of 17 February 1911, the curator of the educational district informed the rector of the university that the decision of the University Council to allocate a room for Privatdozent Landau's anthropological collection had been confirmed on the condition that the owner of the collection fully donates it to the university [3].
At its 25 February 1911 meeting, after receiving the list of items of the anthropological collection, the University Council thanked Privatdozent A. E. Landau for his donation and asked him to take over the management of the anthropological collection donated by him. Thus, in February 1911, the university founded a new, 36th auxiliary educational institution. Before that, the list of such institutions included the university library, three museums, two observatories, the botanical garden, the drawing school, two collections, seven institutes, ten study rooms, six hospitals, the polyclinic and the outpatients' clinic [16].
The report of the collection manager Landau submitted on 31 January next year shows that there were 41 units of plaster copies under 12 titles, 15 units of anthropological instruments and apparatuses, two skeletons, 12 charts and maps, 19 titles of books in 20 volumes, and two cabinets. He had added a note written by his own hand that all the items had been acquired by himself at his personal expense.
On 20 June 1912, Privatdozent A. E. Landau was appointed director of the museum of anthropology. The university administration, in accord with the curator of the Riga educational district doubled the area the museum used at the student hostel by adding another two-person room to the existing one.
The report of the museum of anthropology drawn up by Privatdozent A. E. Landau shows that during the previous year one chart, three models, two museum tables, nine models about representatives of different races, and an atlas with the photos of Estonians' brains had been acquired. At the end of the report, there was also a note that he had acquired all the items mentioned at his personal expense. Landau drew up this report when he had already left the service of the university (as of 12 January 1913).
The anthropology museum he had left behind languished and, during World War I, ceased to exist, although Extraordinary Professor of Anatomy H. E. Adolphi had been appointed to take care of it as an acting manager. The assets of the museum of anthropology were not replenished in 1913-1914. The reports on these years do not show who had compiled them and on which date. The reports of 1915 and 1916 show that no research or practical work was conducted at the museum of anthropology. There were no revenues or expenses. No research papers were published; students did not use the museum. There was no extension of rooms. These reports were drawn up by Prof. Adolphi, acting director of the museum on 6 February 1916 and 28 January 1917 respectively.
In the report of the auxiliary institutions of the university for 1917, the sheets meant for the report of the anthropology museum were not filled in. For that year, 38 university subdivisions had to submit their reports to the office of the University Council by 30 January 1918, but only 27 auxiliary institutions submitted the required reports on time. On 7 March 1918, quick submission of the missing reports was demanded, and six auxiliary institutions submitted them, but the museum of anthropology was not asked to submit the last report [3]. Of the further destiny of the museum of anthropology collection it can be said that at least part of it reached the University of Tartu History Museum thanks to the attentive staff of the present Institute of Anatomy.

RETIREMENT OF PROFESSOR RAUBER AND HIS SUCCESSORS

The University Council invited and elected the famous anatomist V. P. Vorobyov (1876-1937) to be Prof. A. Rauber's successor [24], but the ministry did not confirm his election to this post. Neither had he been confirmed as Professor of Anatomy at the universities of Kharkov and Warsaw, as the young scientist did not conceal his sympathy with the students' revolutionary expectations [21]. Thereafter, Privatdozent Hermann Ernst Adolphi became Director of the Institute of Anatomy.

Hermann Ernst Adolphi MD (1863-1919, born in Wenden in the Livonian governorate, now Cēsis in Latvia), Prof. A. Rauber's student and long-time colleague, had worked as his prosector for 20 years. He completed the Riga Governorate Gymnasium in 1882, graduated from the Faculty of Medicine in Dorpat as a doctoral student in 1888 and defended his doctoral thesis there on 29 November of the following year.
From 1894, he had constantly taught various special courses on anatomy; in later years, he also taught anatomy in Yuryev Private University. He published research papers on the variants of spinal nerves and the vertebral column of amphibians and mammals (including humans). In 1912, he became Extraordinary Professor of Anatomy and filled this post until 1917, being simultaneously Director of the Institute of Anatomy [14].
JURGIS ŽILINSKAS AND JĒKABS PRIMANIS
During the quarter of a century when Prof. A. Rauber was Director of the Institute of Anatomy, the arrangement of studies and research there could have given several students a boost to take up anatomy and anthropology in the future.
From 1908 to 1913, Jurgis Žilinskas (1885-1957), born in Kaunas governorate, was a student of the Faculty of Medicine. During his studies, he could listen to Professor A. Rauber's anatomy lectures and do practical work in this subject under the supervision of prosector H. E. Adolphi and his supernumerary assistant A. E. Landau. From the latter, he could also get his initial knowledge of anthropology. After graduation, he upgraded his education under Prof. R. Martin at the University of Munich. In 1922, J. Žilinskas became one of the organisers of the Faculty of Medicine at the Lithuanian University in Kaunas.
Surgical Treatment for Congenital Lung Parenchyma and Non Lung Parenchyma Disorder: Center Experience
Copyright: © 2015 Kasem E, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
Congenital disorders of the thoracic contents are a rare entity, with an estimated annual incidence of 30 to 40 cases per 100,000 population [1]. The spectrum of these disorders includes congenital lobar emphysema (CLE), congenital cystic adenomatoid malformation (CCAM), pulmonary sequestration, anomalies of the diaphragm, and tracheo-esophageal fistula [2][3][4]. The pathology of these disorders arises from failure of the primitive intestine and of its differentiation into the respiratory system [4]. Their presentation ranges from respiratory failure to incidental imaging findings in asymptomatic adult patients [5][6][7]. Prenatal diagnosis is helpful for early management [8]. Chest CT is the main diagnostic tool for the pathology and allows proper surgical planning [8]. Early surgery is the principal treatment for the various forms of parenchymal and non-parenchymal lung malformations. Pulmonary resection, lobectomy being the most common procedure [3,5,8,9], should be as conservative as possible. All pulmonary resections were performed through thoracotomy. The other anomalies were repaired either through thoracotomy or through an abdominal incision [10].
The objective of our study is to present our experience with surgery for congenital disorders of the lung and other thoracic contents.
Methods
We retrospectively analyzed 101 cases operated on for congenital disorders of the lung and other thoracic contents in the cardiac surgery department and the pediatric surgery unit of Zagazig University Hospital from August 2008 to December 2014. In total, 107 cases were operated on during this period. Data were analyzed regarding age, sex, presenting symptoms, imaging tests, side and location of the lesion, and the surgical procedures used. Postoperative data included ICU stay and prolonged ventilation.
Abstract
Objective: Non-parenchymal lung disorders are a rare entity with potentially life-threatening outcomes. Early surgical intervention is the key to saving life and avoiding life-threatening complications.
Methods: From August 2008 to December 2014, 101 cases were operated on in Zagazig University Hospital for congenital non-parenchymal disorders. Data were collected regarding preoperative, intraoperative and postoperative results.
Surgical technique
All patients were operated on through a thoracotomy on the affected side. Under general anaesthesia, patients were positioned in the lateral position. Emergency precautions were taken for cases of congenital lobar emphysema. A posterior thoracotomy was performed in the 4th or 5th intercostal space. Surgery was done according to the congenital lesion: pulmonary resection was performed for cases of congenital lobar emphysema; reduction of the abdominal contents and closure of the defect with a mesh was used for congenital diaphragmatic hernia; plication of the diaphragm was used for congenital eventration; and, for tracheo-esophageal fistula, the fistula was ligated. Data were analyzed; mean and median age, mean length of hospital stay, and the percentage and location of each disease were calculated.
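The descriptive statistics mentioned above can be computed in a few lines. As a minimal sketch (the age list below is a synthetic placeholder, since the study's patient-level data are not given; only the 61/40 sex split comes from the paper):

```python
import statistics

# Placeholder ages in months -- illustrative only, NOT the study's data.
ages_months = [0.25, 1, 2, 3, 4, 6, 9, 12, 24, 120]

mean_age = statistics.mean(ages_months)      # arithmetic mean
median_age = statistics.median(ages_months)  # middle value (robust to the 120-month outlier)

# Sex distribution reported in the study: 61 females, 40 males.
females, males = 61, 40
female_pct = 100 * females / (females + males)

print(mean_age, median_age, round(female_pct, 1))
```

Note how the median is far below the mean for such skewed age distributions, which is why both are usually reported.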
Results
The mean age was 4 months (1 week - 120 months). There was a predominance of females, with a female-to-male ratio of 61/40.
The pathology and side of the congenital anomalies are described in Tables 1 and 2. All patients had a routine chest x-ray and a confirmatory chest CT. Eleven cases required preoperative rigid bronchoscopy for the treatment of pneumonia (Figures 1 and 2). Two of the 37 cases of CLE were operated on while on mechanical ventilation; one case recovered, while the other required prolonged mechanical ventilation and died from severe respiratory infection. One of the 24 cases of TOF was intubated and operated on while on mechanical ventilation; he survived after surgery.
Discussion
Congenital anomalies of the lung parenchyma and thoracic contents are a rare entity that requires surgical intervention in the majority of cases to manage symptoms and save the remaining lung tissue [1,3,5].
The estimated annual incidence of thoracic anomalies is 30-40 cases per 100,000 population [3], with a female-to-male predominance of 2 to 1 [4]. We observed the same in our study: females predominated, making up 60% of our cases.
Respiratory disorder is the main presentation [5][6][7]. However, one case in our study was discovered during a routine x-ray after a traffic accident; she had diaphragmatic eventration (case no. 91). Recently, many cases have been diagnosed during prenatal care, so the decision can be taken early or after close observation.
Chest imaging has a great role in diagnosis. A routine chest x-ray raises high suspicion, and chest CT is the confirmatory tool for the pathology and rules out differential diagnoses [6,7].
Surgery is the curative tool for diagnosed cases, with the surgical strategy depending on the diagnosis. In cases of CLE, surgery follows the symptoms: cases with mild symptoms require follow-up, while symptomatic cases are planned for surgery; lobectomy is the main operation, depending on the affected lobe. In cases of CCAM, asymptomatic cases require close observation and follow-up to manage symptoms and avoid complications, in particular to avoid malignancy (bronchioloalveolar carcinoma, rhabdomyosarcoma) [11,12]. No prophylactic surgery was carried out in our cases; surgery was indicated on symptoms.
For congenital cystic adenomatoid malformation (CCAM), surgery is indicated for symptoms or to avoid malignancy, and takes the form of lobectomy [17,18].
For cases of diaphragmatic eventration, plication of the diaphragm from the thoracic side was done [19,20]; surgery was done once the diagnosis was confirmed. In cases of tracheo-esophageal fistula, surgery was done through a right thoracotomy in 19 cases and through a neck incision for H-type fistulas [21][22][23].
Three cases of TOF had a right aortic arch [24]. In our study we operated through a thoracotomy on the affected side, using the 4th or 5th intercostal space according to the affected lobe, although in many centers video-assisted thoracoscopy has been widely used for pulmonary resection. Upper lobectomy is the most common type of resection for cases of CLE, while for cases of CCAM lower lobectomy is used, as in many studies [11][12][13][15][17].
We reported 5 cases with combined anomalies: three cases of right aortic arch in cases of TOF, one case of a right coronary artery fistula to the right atrium, and one case of a chest wall deformity (pectus excavatum). Some studies have reported a higher incidence of mixed pulmonary malformations, which was not seen in our series; they also reported a high incidence of cardiac, esophageal and chest wall malformations, but the incidence in our study was lower than reported.
The reported postoperative complications were mainly pleural, in the form of pneumothorax, and parenchymal, in the form of atelectasis requiring bronchoscopy for management. This was also reported in several series [13][14][15], which described a high incidence of pleural complications after pulmonary resection.
Conclusion
Surgery is the key for congenital parenchymal and non-parenchymal lung disorders. Surgery applied as soon as the diagnosis is confirmed carries the advantages of saving lung parenchyma and avoiding life-threatening complications or malignant conversion in some anomalies such as CCAM.
Influence of the surrounding medium on the luminescence-based thermometric properties of single Yb3+/Er3+ codoped yttria nanocrystals
While temperature measurements with nanometric spatial resolution can provide valuable information in several fields, most of the current literature using rare-earth based nanothermometers report ensemble-averaged data. Neglecting individual characteristics of each nanocrystal (NC) may lead to important inaccuracies in the temperature measurements. In this work, individual Yb3+/Er3+ codoped yttria NCs are characterized as nanothermometers when embedded in different environments (air, water and ethylene glycol) using the same 5 NCs in all measurements, applying the luminescence intensity ratio technique. The obtained results show that the nanothermometric behavior of each NC in water is equivalent to that in air, up to an overall brightness reduction related to a decrease in collected light. Also, it was observed that the thermometric parameters from each NC can be much more precisely determined than those from the “ensemble” equivalent to the set of 5 single NCs. The “ensemble” parameters have increased uncertainties mainly due to NC size-related variations, which we associate to differences in the surface/volume ratio. Besides the reduced parameter uncertainty, it was also noticed that the single-NC thermometric parameters are directly correlated to the NC brightness, with a dependence that is consistent with the expected variation in the surface/volume ratio. The relevance of surface effects also became evident when the NCs were embedded in ethylene glycol, for which a molecular vibrational mode can resonantly interact with the Er3+ ions electronic excited states used in the present experiments. The methods discussed herein are suitable for contactless on-site calibration of the NCs thermometric response. Therefore, this work can also be useful in the development of measurement and calibration protocols for several lanthanide-based nanothermometric systems.
: a) Transmission Electron Microscopy (TEM) image of multiple Y 2 O 3 :Yb 3+ /Er 3+ NCs b) Size distribution of the Yb 3+ /Er 3+ codoped NCs. The particles can be found in sizes ranging from ≈ 70 nm to ≈ 150 nm with average value of 120 ± 20 nm. c) Diffractogram of the codoped Y 2 O 3 :Yb 3+ /Er 3+ and pristine Y 2 O 3 confirming the body centered cubic phase structure of the NCs.
SI1. Morphological characterization of Y 2 O 3 NCs
The sample preparation protocol leading to sparse single NCs on a glass coverslip has already been described by Galvão et al. in reference [1]. Their work contains SEM and TEM images of single NCs before and after the spin coating technique used for preparing the samples (see subsection 2.1 and Figure 1c of reference [1], and Figure 4a of reference [2] for another sample, Nd3+ doped yttria NCs, following the same sample preparation protocol).
Their results show that the spin coating technique herein used leads to a majority of single particle sites spread all over the glass coverslip surface. Table 1.
SI3. Thermal resolution of individual nanocrystals (NCs)
The thermal resolution of a thermometer (δT) is defined as the minimum temperature change that the system is able to measure confidently [3]; thus it is a function of the LIR (R).

One can expand δT in a Taylor series and truncate to the first non-vanishing term, which results in

δT = δR / (∂R/∂T),

where δR is the uncertainty in the determination of the LIR. According to Brites et al. [3], δR can be calculated either by measuring a set of LIR values under the same experimental conditions, making a histogram and calculating the standard deviation, or by propagating the uncertainties from the signal-to-noise ratio of the detection system. Even though the above-mentioned procedures are well established, they are not suitable for individual calibration in our system.
A different approach which can also be used to gain more physical intuition about the system is discussed below.
As reported in the main text, the calibration of the thermometer is made by fitting the LIR vs. temperature data and obtaining two parameters, α and β (equation 2 in the main manuscript). The fitting is performed by standard linear regression, from which it is also possible to calculate the variance-covariance matrix, defined by [4]

Σ = [[Var(α), Cov(α, β)], [Cov(α, β), Var(β)]],

where Var(γ) = E[(γ − E[γ])²] is the variance of the variable γ = α or β, E[γ] is its expected value, and Cov(α, β) = E[(α − E[α])(β − E[β])] is the covariance of the two variables α and β.
If the two variables are independent random variables, the covariance between them must vanish, and the variance-covariance matrix becomes diagonal. Thus each element of the diagonal completely characterizes the statistical properties of its corresponding parameter.
In the opposite situation, where the variables are not independent, the Σ matrix is not diagonal, but still symmetric.
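As a concrete numerical sketch of how Σ is obtained in practice (all data below are synthetic; equation 2 of the main manuscript is not reproduced here, so a common Boltzmann form R = β exp(−α/T), linear in X = 1/T after taking the logarithm, is assumed):

```python
import numpy as np

# Synthetic calibration data -- illustrative only, NOT the paper's measurements.
# Assumed model: R = beta * exp(-alpha / T), so ln R = ln(beta) - alpha * X
# with X = 1/T is linear in X.
rng = np.random.default_rng(0)
T = np.linspace(300.0, 340.0, 9)          # temperatures in K
alpha_true, beta_true = 1100.0, 8.0       # hypothetical "true" parameters
X = 1.0 / T
lnR = np.log(beta_true) - alpha_true * X + rng.normal(0.0, 0.01, T.size)

# polyfit with cov=True also returns the variance-covariance matrix Sigma
# of the fitted coefficients (slope first, then intercept).
coef, Sigma = np.polyfit(X, lnR, 1, cov=True)
alpha_fit = -coef[0]          # slope = -alpha
beta_fit = np.exp(coef[1])    # intercept = ln(beta)

# Sigma is symmetric; its off-diagonal element (the covariance of the two
# fitted parameters) is generally non-zero, i.e. they are correlated.
print(alpha_fit, beta_fit)
print(Sigma)
```

Because the regressor X = 1/T has a non-zero mean, the slope and intercept estimates are necessarily correlated, which is the generic reason the off-diagonal terms of Σ do not vanish.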
In the present work, however, by performing the linear regression via the least-squares algorithm, it was observed that the covariance matrices for all thermometer fittings are not diagonal, which means that α and β are correlated. The physical reason for this correlation lies in the choice of the Boltzmann law to model the photophysical dynamics of the system. As discussed in a very recent work [5], whose authors used the same codoped system in a NaYF4 matrix, the dynamics of non-radiative absorption and decay of the Er3+ ions leads to a deviation of the intensity ratio from the Boltzmann law for the same thermally coupled levels used in this work.
Naturally, the β parameter depends on the radiative decay rates (equation 5 in the main manuscript), but the authors showed that the ∆E parameter evaluated by the Boltzmann Law is actually not the spectroscopic value, but an apparent value that depends also on the radiative decay rates. Therefore, α and β must be correlated in experiments. An important consequence is that the error propagation to obtain derived quantities as the thermal resolution must consider the off-diagonal terms in Σ.
In order to determine δR, one can fix a temperature T0, set X0 = 1/T0, and regard R as a function R(α, β). Thus δR can be written in matrix form [4] as

(δR)² = J Σ J^T, where J = (∂R/∂α, ∂R/∂β) is the vector of partial derivatives evaluated at X0.

The subsequent calculation of all related quantities presented in Table 1 follows from standard error propagation.
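The propagation above can be sketched numerically. All parameter and covariance values below are hypothetical (not from the paper), and the Boltzmann form R(α, β) = β exp(−α X0) is again an assumption:

```python
import numpy as np

# Illustrative fitted parameters and covariance matrix for (alpha, beta)
# -- hypothetical values chosen to be positive-definite, NOT the paper's.
alpha, beta = 1100.0, 8.0
Sigma = np.array([[8.0,    -0.025],
                  [-0.025,  8.0e-5]])

T0 = 320.0            # fixed temperature (K)
X0 = 1.0 / T0

# Assumed Boltzmann form: R(alpha, beta) = beta * exp(-alpha * X0)
R = beta * np.exp(-alpha * X0)

# Jacobian of R with respect to (alpha, beta)
J = np.array([-beta * X0 * np.exp(-alpha * X0),   # dR/dalpha
              np.exp(-alpha * X0)])               # dR/dbeta

# Full propagation, keeping the off-diagonal covariance terms
dR_full = np.sqrt(J @ Sigma @ J)

# Propagation that (incorrectly) drops the correlation of alpha and beta
dR_diag = np.sqrt(J @ np.diag(np.diag(Sigma)) @ J)

# Thermal resolution via the first-order expansion dT = dR / |dR/dT|,
# with dR/dT = R * alpha / T0**2 for this functional form.
dT = dR_full / (R * alpha / T0**2)
print(dR_full, dR_diag, dT)
```

Comparing `dR_full` with `dR_diag` makes the point of this section explicit: ignoring the off-diagonal terms of Σ changes the estimated uncertainty and hence the quoted thermal resolution.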
SI4. "Ensemble" averages and thermal resolution
The "ensemble" values presented in Table 1 for ∆E , β and S R were extracted from the mean of the values for the five selected NCs. The error bars for those values were defined as the standard deviation from the mean for each parameter.
The thermal resolution for the ensemble (δT (e) ) was obtained according to references [1,5], being applicable for systems with multiple micro/nano-thermometers at a fixed temperature (T0); it depends on the standard deviation (σ) of the LIR values for the studied NCs through

δT (e) = σ / (R (e) S R (e) ),

where the superscript (e) denotes the ensemble mean values of R and S R. The values for δT (e) presented in Table 1 are higher than the thermal resolutions obtained for the individual NCs because the dispersion of the LIR values for the set of nanothermometers leads to a high standard deviation.
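A minimal numeric sketch of the ensemble formula (the five LIR values and the sensitivity below are made up for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical LIR readings from five single NCs at the same fixed T0
# (synthetic numbers, NOT the measured values of the paper).
lir = np.array([0.250, 0.262, 0.244, 0.271, 0.255])

R_e = lir.mean()           # ensemble mean LIR, R(e)
sigma = lir.std(ddof=1)    # standard deviation of the LIR values

# Assumed ensemble relative sensitivity S_R(e) = (1/R)|dR/dT|;
# ~1 %/K is a typical order of magnitude for Er3+ thermometers (hypothetical).
S_R_e = 0.011              # K^-1

# Ensemble thermal resolution: dT(e) = sigma / (R(e) * S_R(e))
dT_e = sigma / (R_e * S_R_e)
print(dT_e)
```

Even a few-percent spread between individual NCs pushes δT (e) to several kelvin, which is exactly why the single-NC calibration discussed in this work resolves temperature far better than the "ensemble" treatment.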
A 29-Kilodalton Golgi Soluble N-Ethylmaleimide-sensitive Factor Attachment Protein Receptor (Vti1-rp2) Implicated in Protein Trafficking in the Secretory Pathway*
Expressed sequence tags coding for a potential SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) were revealed during data base searches. The deduced amino acid sequence of the complete coding region predicts a 217-residue protein with a COOH-terminal hydrophobic membrane anchor. Affinity-purified antibodies raised against the cytoplasmic region of this protein specifically detect a 29-kilodalton integral membrane protein enriched in the Golgi membrane. Indirect immunofluorescence microscopy reveals that this protein is mainly associated with the Golgi apparatus. When detergent extracts of the Golgi membrane are incubated with immobilized glutathione S-transferase α soluble N-ethylmaleimide-sensitive factor attachment protein (GST-α-SNAP), this protein was specifically retained. This protein has been independently identified and termed Vti1-rp2, and it is homologous to Vti1p, a yeast Golgi SNARE. We further show that Vti1-rp2 can be qualitatively coimmunoprecipitated with Golgi syntaxin 5 and syntaxin 6, suggesting that Vti1-rp2 exists in at least two distinct Golgi SNARE complexes. In cells microinjected with antibodies against Vti1-rp2, transport of the envelope protein (G-protein) of vesicular stomatitis virus from the endoplasmic reticulum to the plasma membrane was specifically arrested at the Golgi apparatus, providing further evidence for functional importance of Vti1-rp2 in protein trafficking in the secretory pathway.
Participation of NSF 1 and soluble NSF attachment proteins (SNAP) in diverse transport events in the secretory and endocytotic pathways is in conjunction with a superfamily of membrane proteins termed SNAP receptors (SNAREs) (1)(2)(3)(4)(5). The SNARE hypothesis suggests that vesicles derived from a donor compartment harbor a set of vesicle-associated SNAREs (v-SNAREs) that will interact specifically with those associated with those on the target acceptor membrane (t-SNAREs) (6 -11). This v-/t-SNARE pairing is a key event in the docking and fusion of the vesicle with its specific target membrane (6 -11). Vesicle-associated membrane proteins (VAMPs) or synaptobrevins are v-SNAREs associated with the synaptic vesicles, whereas syntaxin 1 and SNAP-25 (synaptosome-associated protein of 25 kDa) are t-SNAREs associated with the presynaptic membrane. The specific interaction of VAMPs/synaptobrevins with the syntaxin 1-SNAP-25 complex plays a fundamental role in the docking/fusion of synaptic vesicles with the presynaptic membrane (9 -11).
Because of the central role of SNAREs in diverse vesicular transport steps, molecular identification, biochemical characterization, and subcellular localization of novel SNAREs constitute fundamentally important aspects of study in the field of vesicular transport. The Golgi apparatus plays a major role in the secretory pathway (1)(2)(3)(4)12). Currently, five distinct SNAREs have been shown to be associated with the Golgi apparatus in mammalian cells. These include syntaxin 5 (13)(14)(15), GS15 (16), GS27 (also termed membrin) (17)(18), GS28 (also named GOS-28) (19 -20), and syntaxin 6 (21)(22). Syntaxin 5 and GS28 have both been shown to be involved in the endoplasmic reticulum (ER) to Golgi transport. GS28 has also been implicated in transport from the cis-to the medial-Golgi (15, 19 -20). GS27 was shown to be involved in transport from the cis/medial-to trans-Golgi/trans Golgi network (18). The functional aspects of GS15 and syntaxin 6 remain to be established (16,(21)(22). In this report, we describe the molecular, biochemical, and cell biological characterizations of Vti1-rp2, a novel 29-kDa SNARE associated with the Golgi apparatus. Vti1-rp2 is structurally homologous to Vti1p, a recently described yeast Golgi SNARE (23). We further show that Vti1-rp2 exists in distinct syntaxin 5-and syntaxin 6-containing SNARE complexes and is functionally important for protein trafficking in the secretory pathway.
EXPERIMENTAL PROCEDURES
Materials-Mouse EST clones (accession numbers AA016379 and W13616) were generated by the Washington University-Merck expressed sequence tag (EST) project and made available by the IMAGE consortium via Research Genetics Inc. (Huntsville, Alabama). The mouse mRNA multiple tissue Northern blot was obtained from CLONTECH (Palo Alto, CA). Mouse monoclonal antibody against Golgi mannosidase II was from Babco (Berkeley, CA). Fluorescein isothiocyanate-conjugated goat anti-mouse IgG and rhodamine-conjugated goat anti-rabbit IgG were purchased from Boehringer Mannheim. Brefeldin A was from Epicentre Technologies.
cDNA Cloning and Sequencing-Mouse EST clones were fully sequenced by the dideoxy chain termination method using a kit from U. S. Biochemical Corp. The complete coding region was assembled using the DNA Strider 1 program.
Northern Blot Analysis-A mouse multiple tissue blot of poly(A) ϩ mRNA was probed with the insert of the EST clone AA016379 followed by actin probe as described previously (24).
Expression of Recombinant Proteins in Bacteria-GST fusion proteins.

* This work was funded by the Institute of Molecular and Cell Biology, National University of Singapore (to W. H.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
The nucleotide sequence(s) reported in this paper has been submitted to the GenBank TM /EBI Data Bank with accession number(s) AF035823.
Differential Extraction of Golgi Membranes and Immunoblot Analysis-These were performed as described previously (16,30).
In Vitro GST-α-SNAP Binding Assay and Immunoprecipitation-These were performed as described previously (16, 26).
Microinjection and in Vivo Transport of Vesicular Stomatitis Virus G-protein-Vero cells grown on coverslips were infected with the vesicular stomatitis virus ts045 strain at 31°C for 45 min and then shifted to 40°C for 1 h. Cells were then transferred to 4°C in Dulbecco's modified Eagle's medium without fetal bovine serum, and microinjection was performed under a Zeiss Axiophot microscope using an Eppendorf micromanipulation system. Cells were transferred back to 40°C immediately after injection and incubated at 40°C for 2 h to accumulate the G-protein in the ER. Transport of G-protein was initiated by incubating cells at 31°C for 45 min in the presence of cycloheximide (to prevent new synthesis of G-protein). Cells were then fixed and processed for indirect immunofluorescence double-labeling to detect microinjected antibodies and the G-protein.
RESULTS

Vti1-rp2, a Mammalian Protein Homologous to Yeast Vti1p-
Searching the EST data bases using the amino acid sequence of a novel Golgi SNARE characterized in the lab 2 led to the identification of mouse EST clones (accession numbers AA016379 and W13616) that encode a putative SNARE. The EST clone W13616 was fully sequenced, and the nucleotide and the deduced amino acid sequences are shown in Fig. 1A. This protein was independently identified in three other laboratories and has been referred to as Vti1-rp2 (31), Vti1b (32), and Vti1a (33), respectively. To avoid further confusion in nomenclature, we have adopted the name Vti1-rp2 for this protein. Vti1-rp2 is a protein of 217 residues. Although the predicted molecular weight of Vti1-rp2 is 24,971 daltons, its apparent size as revealed by SDS-polyacrylamide gel electrophoresis is 29 kDa (see below). There exists a 22-residue carboxyl-terminal hydrophobic region that may function as a membrane anchor, a characteristic of the majority of known SNAREs (6-11, 16). Preceding the carboxyl-terminal hydrophobic tail are four regions (residues 32-62, 69-93, 112-134, 146-179) that have the potential to form coiled-coil structures as predicted by the COILS 2.1 program (Fig. 1B). A search of motifs with the ScanProsite program revealed the existence of an ATP/GTP binding motif A (P-loop) in the sequence (residues 170-177, ADANLGKS). Vti1-rp2 is homologous to Vti1p, a yeast Golgi SNARE implicated in at least two trafficking events (23). The homology between Vti1-rp2 and Vti1p occurs throughout the entire polypeptide with an overall amino acid identity of 28% and similarity of 45% (Fig. 1C). Since Vti1p does not contain the consensus ATP/GTP binding site motif A, it is not clear if the ATP/GTP binding site motif A observed in Vti1-rp2 is functionally important.
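The ScanProsite hit described above can be reproduced with a plain regular expression: the pattern below encodes the published PROSITE consensus for the ATP/GTP-binding site motif A (P-loop), [AG]-x(4)-G-K-[ST] (PS00017), and the test string is the Vti1-rp2 stretch quoted in the text. This is a quick illustrative check, not the authors' tooling; the function name is invented.

```python
import re

# PROSITE PS00017, ATP/GTP-binding site motif A (P-loop): [AG]-x(4)-G-K-[ST].
# The Vti1-rp2 stretch reported in the text (residues 170-177) is "ADANLGKS".
P_LOOP = re.compile(r"[AG].{4}GK[ST]")

def find_p_loop(seq: str):
    """Return (start_offset, matched_window) for the first P-loop hit, or None."""
    m = P_LOOP.search(seq)
    return (m.start(), m.group()) if m else None

# "ADANLGKS": A in [AG], DANL as x(4), then G, K, and S in [ST] -> a match.
print(find_p_loop("ADANLGKS"))
```

Scanning the full 217-residue sequence the same way should, per the text, report this window at residues 170-177.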
Vti1-rp2 mRNA Is Widely Expressed-To examine whether Vti1-rp2 is involved in a general cellular process or its function is restricted to certain tissues, Northern blot analysis was performed to examine the levels of Vti1-rp2 mRNA in various mouse tissues (Fig. 2A). A major mRNA species of about 2.6 kilobases was detected at varying levels in all the tissues examined, consistent with the notion that Vti1-rp2 may participate in an event common to all cell types. Interestingly, the Vti1-rp2 mRNA detected in the testis has a smaller size (about 1.5 kilobases). The basis and significance of this different mRNA size in the testis are currently unknown.

Vti1-rp2 Is a 29-kDa Integral Membrane Protein Associated with the Golgi Apparatus-The cytoplasmic domain (residues 1-185) of Vti1-rp2 was expressed as a recombinant fusion protein with GST (GST-Vti1-rp2). The purified GST-Vti1-rp2 was used to raise polyclonal antibodies against Vti1-rp2 in rabbits. Although the predicted molecular weight of Vti1-rp2 is 24,971, its apparent size revealed by SDS-polyacrylamide gel electrophoresis is about 29 kDa (Fig. 3A), because it migrates (lane 1) between the 30-kDa marker (the marker lane) and GS28 (lane 2), a Golgi SNARE with an apparent size of 28 kDa (20, 30). Another mammalian protein homologous to Vti1p has also been identified and has been referred to as Vti1-rp1 (31), Vti1 (32), and Vti1b (33), respectively. To avoid further confusion, we have adopted the name Vti1-rp1 for this other mammalian homolog of yeast Vti1p. Since Vti1-rp2 displays significant amino acid sequence identity (about 30%) with Vti1-rp1, it is essential to establish that our affinity-purified antibodies do not cross-react with Vti1-rp1. As shown (Fig.
3A), the detection of the 29-kDa protein in immunoblot was selectively abolished by preincubation of antibodies with the recombinant cytoplasmic domain of Vti1-rp2 (lane 4) but not with the cytoplasmic domain of Vti1-rp1 (lane 3), establishing that our antibodies are specific for Vti1-rp2. Immunoblot analysis revealed that Vti1-rp2 is enriched in Golgi membranes (Fig. 3B, lanes 4-6) as compared with total membranes and microsomal membranes. The enrichment of Vti1-rp2 in Golgi membranes is comparable with that of Golgi α2,6-sialyltransferase (lanes 1-3). When Golgi-enriched membranes were subjected to different extraction conditions (Fig. 3C), Vti1-rp2 was effectively extracted by detergents but not by phosphate-buffered saline, 2.5 M urea, 0.1 M sodium bicarbonate (pH 12), or 2 M KCl. Vti1-rp2 is thus an integral membrane protein enriched in Golgi fractions.
Indirect immunofluorescence microscopy was used to examine the exact subcellular localization of Vti1-rp2 (Fig. 4). Affinity-purified antibodies against Vti1-rp2 specifically labeled perinuclear structures (Fig. 4A, panel a) characteristic of the Golgi apparatus (34), and this labeling colocalized well with that of Golgi mannosidase II (panel b) (35). When the Golgi apparatus was fragmented by nocodazole treatment (panels c and d), Vti1-rp2 and mannosidase II colocalized well in the fragmented Golgi apparatus. Similar to mannosidase II and other Golgi proteins (36), Vti1-rp2 was redistributed to the ER when cells were treated with brefeldin A (panels e and f). These results firmly establish that Vti1-rp2 is an integral membrane protein associated preferentially with the Golgi apparatus.
Vti1-rp2 Is a Novel Golgi SNARE-To investigate whether Vti1-rp2 indeed functions as a novel SNARE of the Golgi apparatus, we examined the potential interaction of Vti1-rp2 with α-SNAP. As shown in Fig. 5A (upper panel), Vti1-rp2 in the Golgi detergent extract was specifically retained by immobilized GST-α-SNAP in a dose-dependent manner. Under identical conditions, Vti1-rp2 was not retained by immobilized GST or several other control GST fusion proteins (data not shown). Furthermore, other Golgi proteins, including α2,6-sialyltransferase, were not retained by immobilized GST-α-SNAP (Fig. 5A, lower panel). The interaction of Vti1-rp2 with α-SNAP was further investigated (Fig. 5B). Proteins in the Golgi extract were incubated with GST (lanes 1 and 4), GST-α-SNAP (lanes 2 and 5), and GST-γ-SNAP (lanes 3 and 6); after extensive washing, the beads (lanes 1-3) and 1/10 of the supernatants (lanes 4-6) were analyzed by immunoblot to detect Vti1-rp2 (upper row) as well as GS28 (lower row), which serves as a positive control. Vti1-rp2 was retained by GST-α-SNAP as efficiently as GS28 (lanes 2 and 5). Neither Vti1-rp2 nor GS28 was retained by GST (lanes 1 and 4). To lesser extents, Vti1-rp2 and GS28 were also significantly retained by GST-γ-SNAP. These results establish that the interaction of Vti1-rp2 with GST-α-SNAP is specific and occurs with efficiencies comparable with those of known Golgi SNAREs such as GS28. Furthermore, interaction of Vti1-rp2 with immobilized GST-α-SNAP could be abolished by NSF under conditions that promote dissociation of SNARE complexes (Fig. 5C). When Golgi extract was incubated with immobilized GST-α-SNAP in the presence of increasing amounts of NSF under conditions (assembly buffer) that promote formation of SNARE complexes (lanes 1-6), comparable amounts of Vti1-rp2 were retained.
However, retention of Vti1-rp2 by immobilized GST-α-SNAP was readily abolished by NSF under conditions (lanes 7-12) that promote ATP hydrolysis by NSF and disassembly of SNARE complexes. These results not only further confirm that the interaction of Vti1-rp2 with α-SNAP is specific but also reveal that the interaction of Vti1-rp2 with α-SNAP occurs in the context of Vti1-rp2-containing SNARE complexes.
FIG. 4. Vti1-rp2 is associated preferentially with the Golgi apparatus.
Control normal rat kidney cells (a and b), normal rat kidney cells treated with 10 μg/ml nocodazole for 1 h at 37°C (c and d), and normal rat kidney cells treated with 10 μg/ml brefeldin A for 1 h at 37°C (e and f) were double-labeled with rabbit polyclonal antibodies against Vti1-rp2 (a, c, and e) and a monoclonal antibody against Golgi mannosidase II (Man II) (b, d, and f). Bar, 10 μm.

Significant amounts of Vti1-rp2 were coimmunoprecipitated by antibodies against syntaxin 6. In contrast, syntaxin 5 was not coimmunoprecipitated by anti-syntaxin 6 antibodies (lanes 5 and 7). These results suggest that significant amounts of Vti1-rp2 exist in at least two distinct SNARE complexes, one containing syntaxin 5 and the other containing syntaxin 6. This conclusion was further substantiated by our observation that significant amounts of syntaxin 5 and syntaxin 6 were coimmunoprecipitated by antibodies against Vti1-rp2 (data not shown).
A Role of Vti1-rp2 in Protein Transport in the Secretory Pathway-The association of Vti1-rp2 with the Golgi apparatus and its establishment as a SNARE suggest that it may participate in protein trafficking in the secretory pathway. To investigate this, Vero cells grown on coverslips were first infected with vesicular stomatitis virus ts045 and then microinjected with affinity-purified antibodies against Vti1-rp2. Transport of G-protein along the secretory pathway was monitored by indirect immunofluorescence microscopy. Since microinjection of antibodies against the EAGE epitope of β-COP was shown previously to inhibit G-protein transport (29), cells microinjected with β-COP antibodies serve as the positive control. We have shown recently that syntaxin 7 is in the endosomal compartment (28), and syntaxin 7 is thus not expected to function in the secretory pathway. Cells microinjected with syntaxin 7 antibodies thus serve as a negative control. As shown in Fig. 7, in cells microinjected with antibodies against Vti1-rp2 (C, arrows), surface labeling of G-protein was dramatically reduced, resulting in accumulation of G-protein in perinuclear structures characteristic of the Golgi apparatus (D, arrows). This inhibitory effect is comparable with that seen in cells microinjected with antibodies against β-COP (A and B, arrows). In marked contrast, transport of G-protein to the cell surface was unaffected (E and F, arrows) in cells microinjected with syntaxin 7 antibodies.
These results suggest that transport of G-protein from the ER to the plasma membrane is specifically inhibited in cells microinjected with antibodies against Vti1-rp2, and the site of inhibition seems to be at the level of the Golgi apparatus, because G-protein was seen to accumulate in structures characteristic of the Golgi apparatus and the arrested G-protein colocalized well with markers of the Golgi apparatus such as 12-(N-methyl-N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl))-ceramide and binding sites for the lectin Lens culinaris agglutinin (data not shown). Although more detailed future experiments are needed to address the mechanistic aspects of Vti1-rp2 involvement in protein transport, these results clearly reveal a role of Vti1-rp2 in protein transport in the secretory pathway.

FIG. 5 (legend, in part). B, Golgi extract was incubated with GST (lanes 1 and 4), GST-α-SNAP (lanes 2 and 5), or GST-γ-SNAP (lanes 3 and 6) immobilized onto the glutathione beads. After extensive washing, the beads (lanes 1-3) and 1/10 of the supernatants (lanes 4-6) were analyzed by immunoblot to detect Vti1-rp2 (upper row) or GS28 (lower row). C, interaction of Vti1-rp2 with immobilized GST-α-SNAP could be abolished by NSF under conditions that promote disassembly of SNARE complexes. 200 μg of Golgi extract was incubated with 2 μg of immobilized GST-α-SNAP in the presence of the indicated amounts of NSF in either assembly (lanes 1-6) or disassembly buffer (lanes 7-12). The amounts of Vti1-rp2 bound onto the beads were then determined by immunoblot.

FIG. 6. Vti1-rp2 exists in distinct syntaxin 5- and syntaxin 6-containing Golgi SNARE complexes. The Golgi detergent extract was immunoprecipitated (IP) with antibodies against syntaxin 5 (lanes 1 and 3), antibodies against syntaxin 6 (lanes 5 and 7), or control rabbit IgG (lanes 2, 4, 6, and 8). The immunoprecipitates (lanes 1, 2, 5, and 6) and 1/10 of the supernatants (lanes 3, 4, 7, and 8) were analyzed by immunoblot to detect the indicated proteins. Note that antibodies against syntaxin 5 detect two bands, as described previously (16, 17). P, beads; S, unbound fraction.

DISCUSSION

We have identified a novel 29-kDa mammalian protein (Vti1-rp2) that has the characteristics of a SNARE, based on the presence of a COOH-terminal hydrophobic membrane anchor and several regions that can potentially form coiled-coil structures (6-11, 16). Three observations establish that Vti1-rp2 is indeed a SNARE. First, Vti1-rp2 in Golgi detergent extract can interact with immobilized GST-α-SNAP in a specific and dose-dependent manner. Interaction of Vti1-rp2 with immobilized GST-α-SNAP occurs with efficiencies comparable with those of known Golgi SNAREs such as GS28 (20). The second line of evidence is that association of Vti1-rp2 with GST-α-SNAP could be abolished by NSF, specifically under conditions that promote NSF ATPase activity and dissociation of SNARE complexes. This suggests that interaction of Vti1-rp2 with immobilized α-SNAP occurs through Vti1-rp2-containing SNARE complex(es) in the Golgi extract. The demonstration of the existence of Vti1-rp2 in at least two distinct SNARE complexes (one containing syntaxin 5 and the other containing syntaxin 6)
provides the third line of evidence that Vti1-rp2 is a novel SNARE. The subcellular localization of Vti1-rp2 was established by two independent results. First, Vti1-rp2 is highly enriched in a membrane fraction that is also enriched for the Golgi apparatus. Furthermore, Vti1-rp2 colocalized well with the Golgi marker mannosidase II in both the control and the nocodazole-fragmented Golgi apparatus. Like mannosidase II, Vti1-rp2 could be redistributed into ER-like structures by brefeldin A. It is thus firmly established that Vti1-rp2 is a novel SNARE of the Golgi apparatus.
Data base searches with the Vti1-rp2 sequence revealed that Vti1-rp2 is most homologous to Vti1p. Vti1p is a recently identified v-SNARE of the yeast Golgi and has been implicated in two independent vesicular transport events (23). By interacting with the early Golgi t-SNARE Sed5p (the yeast counterpart of syntaxin 5) (37, 38), Vti1p has been suggested to function as a v-SNARE for vesicles involved in retrograde intra-Golgi transport. Furthermore, Vti1p has also been shown to be involved in transport from the late Golgi to the vacuole (equivalent to the mammalian lysosome) by interacting with Pep12p (39), a syntaxin-like t-SNARE of the pre-vacuolar compartment (equivalent to the mammalian late endosome). Whether Vti1-rp2 represents the mammalian counterpart of Vti1p remains to be further investigated, although another mammalian protein (Vti1-rp1) homologous to Vti1p could functionally substitute for the yeast Vti1p (32). Besides its sequence homology with Vti1p, another property of Vti1-rp2 that is similar to Vti1p is that a significant amount of Vti1-rp2 exists in a syntaxin 5-containing Golgi SNARE complex. In addition, a significant amount of Vti1-rp2 was also shown to be present in a syntaxin 6-containing SNARE complex. Since coimmunoprecipitation of syntaxin 5 and syntaxin 6 was not observed, these results suggest that Vti1-rp2 exists in distinct syntaxin 5- and syntaxin 6-containing SNARE complexes. Although the functional aspects remain to be established, syntaxin 6 has been shown recently to be enriched in the trans-Golgi network (22).
The presence of Vti1-rp2 in at least two distinct Golgi SNARE complexes indicates that it may function as a SNARE for at least two types of vesicle-mediated transport events. One will dock and fuse with the cis-Golgi by interaction with syntaxin 5, whereas the other will dock and fuse with the trans-Golgi network via interaction with syntaxin 6. The interaction of Vti1-rp2 with at least two syntaxin-like t-SNAREs is consistent with a recent study showing that yeast Vti1p could interact with at least five distinct syntaxin-like t-SNAREs (40). Since Vti1p participates in two distinct transport events (one associated with the secretory pathway and the other with the endosomal pathway), the existence of two distinct mammalian proteins homologous to Vti1p indicates that the two equivalent transport events in mammalian cells may be mediated by two distinct proteins. The preferential association of Vti1-rp2 with the Golgi apparatus indicates that Vti1-rp2 may participate in a transport event in the secretory pathway. Consistent with this, we have shown that microinjection of antibodies against Vti1-rp2 specifically inhibited transport of G-protein to the cell surface at the level of the Golgi apparatus. The extents of inhibition of G-protein transport seen in cells microinjected with Vti1-rp2 antibodies are comparable with those seen in cells microinjected with antibodies against β-coat protein. Serving as a negative control, G-protein transport to the plasma membrane was unaffected in cells microinjected with antibodies against endosomal syntaxin 7. Vti1-rp2 thus plays a role in protein transport in the secretory pathway, and the role of yeast Vti1p in the secretory pathway is most likely mediated by Vti1-rp2 in mammalian cells.
Furthermore, our preliminary studies with Vti1-rp1 suggest that it is preferentially associated with the trans-Golgi network and/or the endosomal compartment, 3 indicating that the endosomal role of yeast Vti1p is most likely mediated by Vti1-rp1 in mammalian cells.
Reformulation of Theories of Kinematic Synthesis for Planar Dyads and Triads
Abstract: Methods for solving planar dyads and triads in kinematic synthesis are scattered throughout the literature. A review of and a new compilation of the complex number synthesis method for planar dyads and triads is presented. The motivation of this paper is to formulate uniform solution procedures, pointing out the commonalities of various approaches and emphasizing a consistent method for synthesizing mechanisms defined by specified precision positions. Particular emphasis is given to the solution method using compatibility linkages. The textbook Advanced Mechanism Design Vol. II by Erdman and Sandor (1984) only includes a small portion of the available information on this method, and several researchers have added to the basic knowledge in the years since. In some cases, the approach and nomenclature were not consistent, yielding a need to describe and chart a generic formulation and solution procedure for dyads/triads using compatibility linkages and solution structures. The present method offers benefits for solving for exact dyad/triad solutions for complex multiloop mechanisms and could be a promising tool for reducing the computational load of finding complex mechanisms, and for visualizing properties of the solution space.
Introduction
The goal of synthesizing linkages and mechanisms to perform a particular task is a centuries-old practice. One famous example, called the South Pointing Chariot, was purportedly created by Chinese engineer Ma Jun (c. 200-265). As its name implies, a clever gear system driven by a chariot's wheels forces a statue on the back of the chariot to continually point south. This was true no matter how many turns the chariot took, provided both of its wheels rolled without slipping. This was a valuable navigational tool that significantly pre-dated the invention of the conventional magnetic compass [1,2]. However, for many centuries to follow, no formal or systematic methodology for synthesizing new mechanisms was developed.

Professor Robert Willis articulated this problem in the preface of his 1841 text "Principles of Mechanism" when he said, "By some strange chance, very few have attempted to give a scientific form to the ... results of mechanism; for it cannot be said that the few and simple ... examples in books of mechanics, are to be regarded as even forming a foundation ... that will enable us either to reduce the movements and actions of a complex engine to system or to give answers to the questions that naturally arise upon considering such engines" [3].

In the remainder of the text, Willis laid a foundation for later work and a challenge for mathematicians and engineers to create mathematical synthesis techniques. This call was taken up by the likes of Franz Reuleaux, James Watt, Ludwig Burmester, and Ferdinand Freudenstein. Each of them made a unique contribution to the field, such that by the early-to-mid 1900s, a mathematical basis for solving mechanism problems had been established.

The generations of kinematicians that followed more thoroughly fleshed out the techniques formulated by these early researchers, and developed more methods, such as complex number and continuation methods. As the field has continued to expand, few centralized solution methodologies have arisen, but rather a collection of largely unique approaches that are mostly specific to the type of linkage topology.
There are many distinct ways to define problems and, consequently, many distinct methods to solve them. Some define precision positions (x, y coordinates and relative angle) that a coupling link must pass through, some a path a single point must pass through, and others seek to generate vast sets of possible mechanism solutions through continuation methods [4][5][6][7][8]. Recent studies in kinematic synthesis have primarily emphasized algorithmic approaches.

For example, Purwar and Deshpande investigated a machine-learning approach to kinematic synthesis, with the intent of mitigating the solution mechanisms' sensitivity to the initial conditions [9]. In another paper, Baskar and Bandyopadhyay discuss an algorithm aimed at reducing the computational load of calculating the finite roots of large systems of polynomial equations, a problem that arose in kinematic synthesis as the mathematical method of polynomial continuation was implemented [10]. Ref. [11] demonstrates a procedure for synthesizing RR, PR or RP dyads, but using a blend of exact and approximate positions.

While evidently valuable, this paper leans away from these algorithm-driven approaches in favor of more classical synthesis approaches that focus on directly solving the kinematic equations. Countless complex planar mechanisms can be formed by a combination of dyads and triads, which can be viewed as kinematic building blocks. As two examples, consider the multilink mechanisms shown in Figures 1 and 2. The first deploys the footrest of a chair, while the second moves the leading-edge flap of a wing into its working position. Both mechanisms are composed of multiple dyad and triad chains. Rather than attempting to develop a custom kinematic synthesis process for every complex linkage, a uniform strategy is preferred.
Owing to some inherent properties of mechanisms and machines formed by links and joints, kinematic synthesis methods found in the literature share certain underlying mathematical principles that make finding one or more solutions possible. There exist a
few analytical approaches to solving triad synthesis problems, some of which are analyzed in Reference [13], including a unique approach coined the "relative precision position approach for triad synthesis" (p. 433). Here, emphasis is placed on the solution method called the "compatibility linkage" for problems defined by precision positions. This method was first introduced by Sandor and Freudenstein [14] and summarized in Hartenberg and Denavit [15] and later in Erdman and Sandor [4]. Further contributions building on the foundation established by Sandor and Freudenstein were made by Chuen-Sen Lin and other authors [16][17][18][19].
Precision Position Solution Methods
The starting point for using the complex number method for solving kinematic synthesis problems (defined by precision positions) is modelling the linkage mechanism using a number of "standard form equations" [4]. The equations are slightly different for a dyad and a triad. A dyad represents two links in the mechanism and has the form:

W(e^{iβ_j} − 1) + Z(e^{iα_j} − 1) = δ_j (1)

A triad represents three links in relative motion and has the form:

W(e^{iα_j} − 1) + V(e^{iβ_j} − 1) + Z(e^{iγ_j} − 1) = δ_j − h_j (2)

Note that W, Z, V, δ_j and h are vectors defined with complex numbers. Links in the mechanism that are not binary may be defined by more than one dyad or triad loop. These equations are illustrated in Figure 3a,b.

Each of the terms on the left-hand side of the standard form equations represents a link (W, V, or Z) in an assembled mechanism. They are multiplied by rotational operators which represent the rotations of the link from the starting position to each prescribed position. The term δ_j represents the vector between the precision point in the i-th and j-th position (i.e., δ_2 goes between P1 and P2). In most cases, the terms δ_j and α_j are prescribed in the problem, and β_j is taken as a free choice. As the number of precision positions increases, the number of free choices that can be made decreases until there are no free choices left. As seen in Table 1, for problems involving a dyad in two or three positions, the number of free choices is such that the standard form equations can be solved with a linear solution, either directly or by Cramer's rule for a dyad in three positions. However, in the four-precision position case, there are three vector equations (six scalar equations) which must be solved simultaneously, but seven unknowns. As a result, a nonlinear solution method must be used. This is where the method of compatibility linkages is so useful.
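For the three-precision-position dyad, Equation (1) written for j = 2, 3 is a 2×2 complex linear system in W and Z that Cramer's rule solves directly, as noted above. The sketch below uses Python's built-in complex type; the function name and the round-trip numbers are invented for illustration, not code from the paper:

```python
import cmath

def solve_dyad_3pp(betas, alphas, deltas):
    """Three-precision-position dyad synthesis by Cramer's rule.

    Solves W(e^{i*beta_j} - 1) + Z(e^{i*alpha_j} - 1) = delta_j for j = 2, 3,
    where betas = (beta2, beta3) are the free-choice link rotations,
    alphas = (alpha2, alpha3) the prescribed rotations, and
    deltas = (delta2, delta3) the prescribed displacement vectors (complex).
    """
    a11 = cmath.exp(1j * betas[0]) - 1
    a12 = cmath.exp(1j * alphas[0]) - 1
    a21 = cmath.exp(1j * betas[1]) - 1
    a22 = cmath.exp(1j * alphas[1]) - 1
    det = a11 * a22 - a12 * a21  # a (near-)zero det signals a degenerate angle choice
    W = (deltas[0] * a22 - a12 * deltas[1]) / det
    Z = (a11 * deltas[1] - deltas[0] * a21) / det
    return W, Z

# Round-trip check with an invented dyad: build deltas from known W, Z,
# then recover them from the prescribed data.
W0, Z0 = 1 + 2j, -0.5 + 1j
betas, alphas = (0.3, 0.7), (0.2, 0.9)
deltas = tuple(W0 * (cmath.exp(1j * b) - 1) + Z0 * (cmath.exp(1j * a) - 1)
               for b, a in zip(betas, alphas))
W, Z = solve_dyad_3pp(betas, alphas, deltas)
```

Sweeping the free choices β_2, β_3 through this solver traces out the familiar Burmester-style family of exact dyad solutions.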
Table 1. Maximum number of solutions for an unknown dyad/triad when δ_j and α_j are prescribed in the equation W(e^{iβ_j} − 1) + Z(e^{iα_j} − 1) = δ_j for dyads, and W(e^{iα_j} − 1) + V(e^{iβ_j} − 1) + Z(e^{iγ_j} − 1) = δ_j − h_j for triads.

Unlike the standard form dyad equation, the triad equation also includes the vector term h. This term adjusts the tail end of the vector chain, allowing the solution method to apply even in completely ungrounded triad cases. The dyad equation can be modified to include the term h if required.
Figure 4 illustrates how a linkage system can be viewed as combinations of dyads and triads. Even though this six-bar is far less complex than the mechanisms shown in Figures 1 and 2, the following process can be applied to mechanisms with more loops and links in a similar way. For example, although a Stephenson III six-bar is shown, the other six-bar chains can be placed in the dyad-triad standard form as reported by Lonn [22].
The six-bar shown in Figure 4 has three loops, one dyad and two triads. They are defined by Equations (3)-(5) [4]. The first loop equation describes a dyad, while the next two describe triad loops.
Z6(e^{iθ_j} − 1) + Z7(e^{iβ_j} − 1) − Z3(e^{iγ_j} − 1) = δ_j (5)

Using free choices of link vectors, this mechanism can be solved with three dyads [4] (p. 113). Ref. [4] has other examples of assigning dyad and triad standard form modelling to multiloop mechanisms, including an eight-bar with four triads and geared mechanisms. Once a linkage system is modelled with combinations of dyad and triad standard form equations, the compatibility linkage solution process is used to reveal the potential solution space.
A practical mechanism which reveals these loops can be seen in Figure 5a, with the loops shown in Figure 5b. The mechanism guides a rotor on a drone from a vertical position into a horizontal position, allowing the same rotor to produce vertical or forward thrust. This setup would allow the drone to take off vertically but fly in a typical "fixed-wing" configuration once in the air, improving its efficiency. This particular example is very challenging due to significant constraints on both the ground and moving pivots. The ground pivots must both be within the nacelle, and the moving pivots are very close to the link holding the propellor. In addition, the mechanism must deploy smoothly without exhibiting poor transmission angles. The dimensional synthesis resulted in the Z values shown in Table 2. A proof-of-concept prototype was assembled, seen in Figure 6.
Compatibility Linkage Solution Procedure, Dyad for 4 Precision Points
The purpose of the compatibility linkage is to find "compatible values" of several unknown variables in a set of nonlinear synthesis equations that are consistent with the known or specified variables. This method results in a closed-form solution to these equations. The compatibility linkage technique, which was introduced by Sandor and Freudenstein [14], takes advantage of insights gained by graphical and analytical precision position methods; both approaches provide keys to generating solutions for triads and dyads.
As with other precision position methods, it is assumed that the designer has either determined or measured the required x and y locations, and angles, of the precision point in each position, meaning that δ_{i-j} and α_{i-j} are known. Depending on the number of positions being considered for a particular problem, the designer may have additional free choices to make, but the change in position coordinates and angle between positions should always be known.
The compatibility linkage general solution procedure will be emphasized and illustrated with a planar dyad. The first step is to translate the known information into the standard form vector equations [4,12] (see Equations (1) and (2)). As seen in Figure 3, each vector W, Z (and, for triads, V) represents a link in a dyad or triad. Note that the two links Z and Z' shown in Figure 3a do not represent two unique links, but rather two vectors embedded in the same link.
The number of standard-form equations should be one less than the number of precision positions selected in the problem. The only terms that change in each of these equations are the angles β_j, α_j, γ_j, and the vector δ_j for each additional position j. These equations are then translated into a matrix form, which for a dyad in four positions looks like this:

[ e^{iβ2} − 1   e^{iα2} − 1 ] [ W ]   [ δ2 ]
[ e^{iβ3} − 1   e^{iα3} − 1 ] [ Z ] = [ δ3 ]
[ e^{iβ4} − 1   e^{iα4} − 1 ]         [ δ4 ]

[4] (p. 180). This equation will look roughly the same for a triad, except a column is added for V in the first matrix, and V is added to the column vector of unknowns. An augmented matrix can be formed by appending the column vector δ_{2-4} to the matrix on the left-hand side. A known property of this type of system is that a solution only exists if the rank of the augmented matrix is two (for a dyad in four positions), with rank referring to the number of linearly independent rows in the matrix. The rank can be most easily checked by finding the determinant of the matrix. For square matrices such as the augmented matrix under consideration, if the determinant equals zero, the rank of the matrix is given by the largest order for which a non-zero cofactor (also called a minor) exists [25]. The following expressions are derived from these properties.
Setting the determinant of the augmented matrix to zero,

| e^{iβ2} − 1   e^{iα2} − 1   δ2 |
| e^{iβ3} − 1   e^{iα3} − 1   δ3 | = 0
| e^{iβ4} − 1   e^{iα4} − 1   δ4 |

[4] (p. 181). This determinant can be expanded along its first column into the following expression, known as the compatibility equation:

∆2 e^{iβ2} + ∆3 e^{iβ3} + ∆4 e^{iβ4} + ∆1 = 0

where each vector ∆_{2-4} is the cofactor associated with the corresponding β value in the augmented matrix. The cofactors are found by eliminating the row and column containing each value of β (e.g., for ∆2, the first row and first column are struck out, and the 2 × 2 determinant of what remains is the cofactor). For a dyad in four positions, the cofactor matrices are:

∆2 = | e^{iα3} − 1   δ3 |
     | e^{iα4} − 1   δ4 |

∆3 = −| e^{iα2} − 1   δ2 |
      | e^{iα4} − 1   δ4 |

∆4 = | e^{iα2} − 1   δ2 |
     | e^{iα3} − 1   δ3 |

[4] (p. 181). Note that for dyads, the cofactor matrices will always be 2 × 2, while for triads, the cofactor matrices will be 3 × 3. Additionally, each of the terms ∆_{2-4} is written with vertical lines rather than conventional matrix brackets. This is mathematical shorthand for a determinant, meaning that each of these terms (once the determinant is evaluated) is a vector with a magnitude and direction. The term ∆1 is unique from the others, defined by the following expression:

∆1 = −(∆2 + ∆3 + ∆4) (13)

[4] (p. 181).
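These cofactor properties are easy to check numerically. The sketch below (Python; the four-position data are generated from an assumed dyad, so the rank condition holds by construction and all numbers are illustrative) forms the ∆ links and verifies both the loop closure and the compatibility equation:

```python
import cmath

def rot(t):
    return cmath.exp(1j * t)

def det2(a, b, c, d):
    """Determinant | a b ; c d |."""
    return a * d - b * c

# Four-position data generated from an assumed dyad, so the rank condition
# (zero determinant of the augmented matrix) holds by construction.
W, Z = 2 + 1j, 1 - 0.5j
beta  = {2: 0.40, 3: 0.95, 4: 1.50}
alpha = {2: 0.25, 3: 0.60, 4: 1.10}
delta = {j: W * (rot(beta[j]) - 1) + Z * (rot(alpha[j]) - 1) for j in (2, 3, 4)}
a = {j: rot(alpha[j]) - 1 for j in (2, 3, 4)}   # second matrix column

# Cofactors of the first column of the 3x3 augmented matrix (standard signs)
D2 =  det2(a[3], delta[3], a[4], delta[4])
D3 = -det2(a[2], delta[2], a[4], delta[4])
D4 =  det2(a[2], delta[2], a[3], delta[3])
D1 = -(D2 + D3 + D4)

closure = D1 + D2 + D3 + D4                                   # four-bar closes
compat  = D2 * rot(beta[2]) + D3 * rot(beta[3]) + D4 * rot(beta[4]) + D1
```

Both `closure` and `compat` vanish (to roundoff): the first is the starting-position loop closure, and the second is the expanded determinant evaluated at the true rotation angles.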
As brilliantly noted in [14], this equation can be viewed as a four-link mechanism in its starting position, thus named the compatibility linkage. Equation (13) is the equation of closure, where ∆1 is the fixed link and the rest of the vectors close the loop by connecting the chain's head to its tail. Plotting each of the four above vectors without applying any of the beta rotation angles will resemble Figure 7 (for a dyad in four positions; see Table 3 for more configurations).

Table 3. Summary of the solution process for compatibility linkages for dyads and triads.
Eleven-bar structure, ten-bar compatibility linkage; 9; ∆1, ∆2, ∆3, ∆4, ∆5, … Repeat steps 1-6 for the dyad in five positions. In the example at right, ∆3 is used as the ground pivot. There are two parallelogram loops to form in step four: one about ∆4, forming BCEF, and one about ∆1, forming GIJK. Remove a link from either parallelogram to convert to the compatibility linkage. [21] (p. 113)
Triad, 5 positions; five-bar 2-DOF compatibility linkage; 5; ∆1, ∆2, ∆3, ∆4, ∆5. Repeat steps 1-4 of the dyad in 4PP. The difference is that the loop has five links instead of four, so the designer will need to set two free choices, typically the angles of ∆2 and ∆5. [21] (p. 117)

One frequent point of confusion is the idea that the compatibility linkage is related to the actual solution dyad. This is not the case. Rather, the constructed compatibility linkage is only the tool that allows the user to find compatible solutions for the unknown angles in the standard form equations.
Once the linkage is assembled, the user applies a rotation of β2 to link ∆2. Consequently, the links ∆3 and ∆4 need to translate and rotate by some amount to keep the loop closed, as ∆1 is considered ground and does not move. Once solved, the displacement angles of links ∆3 and ∆4 represent the solution values of β3 and β4. These values are then plugged into the original standard form dyad equations. With β3 and β4 identified, solving for the vectors W and Z using standard linear algebra techniques is possible.
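The whole four-position procedure can be condensed into a short numerical sketch. The Python fragment below is illustrative only: the task data are generated from an assumed "true" dyad so the recovered W and Z can be verified, and the loop closure is solved with the law of cosines rather than by graphical construction:

```python
import cmath, math

def rot(t):
    return cmath.exp(1j * t)

def det2(a, b, c, d):
    return a * d - b * c

# Prescribed task, generated from an assumed dyad purely so the answer is
# checkable; in practice only alpha_j and delta_j would be given.
W_true, Z_true = 2 + 1j, 1 - 0.5j
beta_true = {2: 0.40, 3: 0.95, 4: 1.50}
alpha = {2: 0.25, 3: 0.60, 4: 1.10}
delta = {j: W_true * (rot(beta_true[j]) - 1) + Z_true * (rot(alpha[j]) - 1)
         for j in (2, 3, 4)}

# Compatibility-linkage links (cofactors of the augmented matrix)
a = {j: rot(alpha[j]) - 1 for j in (2, 3, 4)}
D2 =  det2(a[3], delta[3], a[4], delta[4])
D3 = -det2(a[2], delta[2], a[4], delta[4])
D4 =  det2(a[2], delta[2], a[3], delta[3])
D1 = -(D2 + D3 + D4)

# Free choice beta2 rotates link D2; close the remaining two-link chain
# D3 e^{i b3} + D4 e^{i b4} = R, an "elbow" with two branches.
b2 = beta_true[2]                       # any feasible free choice works
R = -D1 - D2 * rot(b2)
cos_phi = (abs(R)**2 + abs(D3)**2 - abs(D4)**2) / (2 * abs(R) * abs(D3))
phi = math.acos(max(-1.0, min(1.0, cos_phi)))

solutions = []
for sgn in (+1, -1):                    # the two closures of the loop
    b3 = cmath.phase(R) + sgn * phi - cmath.phase(D3)
    b4 = cmath.phase(R - D3 * rot(b3)) - cmath.phase(D4)
    # Linear solve for W, Z from positions 2 and 3 (Cramer's rule)
    m11, m12 = rot(b2) - 1, a[2]
    m21, m22 = rot(b3) - 1, a[3]
    det = m11 * m22 - m12 * m21
    Ws = (delta[2] * m22 - m12 * delta[3]) / det
    Zs = (m11 * delta[3] - delta[2] * m21) / det
    # Keep branches whose dyad also satisfies the fourth-position equation
    if abs(Ws * (rot(b4) - 1) + Zs * a[4] - delta[4]) < 1e-6:
        solutions.append((Ws, Zs))
```

The two branches of the elbow correspond to the two assembly configurations of the compatibility linkage; only the branch consistent with all prescribed positions survives the final residual check.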
While the problems may be a bit more complex, increasing the number of precision positions or transitioning from a dyad to a triad changes very little about the underlying methodology for compatibility linkages. While this paper will not emphasize quadriads, it is even possible to apply the method of compatibility linkages to solving four-link chains [21]! Here, each higher-order case up to seven precision positions will be briefly examined, highlighting the key differences of each from the dyad in four positions explained above. See Table 3 for a summary of these cases and see Appendix A for a detailed solution procedure of the triad in six and seven positions.
Dyad in 5 Precision Positions
Moving from four to five positions is likely the biggest single jump in complexity for solving problems using the method of compatibility equations. This is because there is no longer a single compatibility equation, but rather two. The compatibility equations for a dyad with five prescribed positions (no free choices) are, in their simplified form:

∆2 e^{iβ2} + ∆3 e^{iβ3} + ∆4 e^{iβ4} + ∆1 = 0 (16)

and

∆′2 e^{iβ2} + ∆′3 e^{iβ3} + ∆′4 e^{iβ5} + ∆′1 = 0 (17)

[4] (p. 201); [21] (p. 107). The ∆′ terms are formed in the same way as the ∆ terms (cofactors of the augmented matrix), but they are taken from the second matrix, built from positions 2, 3, and 5 rather than 2, 3, and 4. These equations must be fulfilled simultaneously to find a valid solution for W and Z. Previously, finding the solution to these compatibility equations would have required using a technique known as Sylvester's dialytic eliminant. While this method worked, the process is computationally involved and mathematically rigorous. Using the method of compatibility linkages described below allows the designer to avoid this complexity while being able to visualize the solution process.
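The two simultaneous conditions can be checked numerically. The sketch below (Python; all task data generated from an assumed dyad, so both equations must vanish at the true rotation angles; numbers are illustrative) forms the ∆ and ∆′ cofactors from the two 3 × 3 augmented matrices:

```python
import cmath

def rot(t):
    return cmath.exp(1j * t)

def det2(a, b, c, d):
    return a * d - b * c

# Five prescribed positions built from an assumed dyad (illustrative values)
W, Z = 2 + 1j, 1 - 0.5j
beta  = {2: 0.40, 3: 0.95, 4: 1.50, 5: 2.00}
alpha = {2: 0.25, 3: 0.60, 4: 1.10, 5: 1.55}
delta = {j: W * (rot(beta[j]) - 1) + Z * (rot(alpha[j]) - 1) for j in beta}
a = {j: rot(alpha[j]) - 1 for j in beta}

def cofactors(j3, j4):
    """Cofactor links of the loop built from positions (2, j3, j4)."""
    C2 =  det2(a[j3], delta[j3], a[j4], delta[j4])
    C3 = -det2(a[2],  delta[2],  a[j4], delta[j4])
    C4 =  det2(a[2],  delta[2],  a[j3], delta[j3])
    return C2, C3, C4, -(C2 + C3 + C4)

D2, D3, D4, D1 = cofactors(3, 4)   # Delta terms,  positions 2, 3, 4
E2, E3, E4, E1 = cofactors(3, 5)   # Delta' terms, positions 2, 3, 5

# Both compatibility equations vanish at the true rotation angles
eq1 = D2 * rot(beta[2]) + D3 * rot(beta[3]) + D4 * rot(beta[4]) + D1
eq2 = E2 * rot(beta[2]) + E3 * rot(beta[3]) + E4 * rot(beta[5]) + E1
```

A side observation the code makes visible: ∆′4 is computed from exactly the same 2 × 2 minor as ∆4 (both strike out their loop's last row), so the two loops share one link vector.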
To form a solution structure, identify each of the two independent four-bar loops formed by the ∆ terms. The first loop includes ∆1 through ∆4, while the second is formed from ∆′1, ∆′2, ∆′3, and ∆′4 (∆4 used twice because ∆′4 is identical to it; either notation is acceptable). Using a consistent scale, line up these two four-bars such that the tails of ∆2 and ∆′2 coincide. The final key step shown in Figure 8 is adding point D. This point is found by creating the parallelogram BCDE, turning link CF (corresponding to ∆3) into a ternary link. The completed mechanism is known as the solution structure. Once identified, the link DE may be removed, but point D will remain as a reference. After removing link DE, the mechanism transforms from a zero degree-of-freedom structure to a one degree-of-freedom linkage; this is the final compatibility linkage.
To find solutions, rotate the (now ternary) link ∆2. The rotation of ∆2 is the only input required to fully define the system, so every other link is determined once the angle of ∆2 is set. As this mechanism moves, at any position where the links CD and BE are parallel, the linkage represents a compatible solution to the original problem. The exception is the first position, as CD and BE will always be parallel initially, by definition.
For each unique parallel position, the displacement angles of the links correspond to the angles β2-5. Specifically, ∠∆3 = β3, ∠∆4 (outer loop) = β4, and ∠∆4 (inner, ∆′ loop) = β5. These compatible angles are then inserted back into the standard form dyad equations. With four vector equations and two vector unknowns, the equation can now be solved for W and Z via a linear solution. The number of geometric inversions of the compatibility linkage corresponds to the number of solution sets to the compatibility equations. The term geometric inversion refers to the number of unique mechanisms that can be created by changing which link is fixed, meaning distinct inversions do not have unique angular displacements, just different grounded links. In this case, that means there are six sets of unique combinations of β2-5 which fulfill the original compatibility equations. However, two of these solutions correspond to the slider and concurrency special points. As a result, only up to four dyad solutions exist; that is, there are zero, two, or four viable solutions for each choice of independent variable x.
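The parallel-links test has a simple numerical analogue: close the outer ∆ loop for β3 at a trial β2, then measure how far the inner ∆′ loop is from closing. A hedged Python sketch (task data again generated from an assumed dyad; all names and numbers hypothetical):

```python
import cmath, math

def rot(t):
    return cmath.exp(1j * t)

def det2(a, b, c, d):
    return a * d - b * c

# Illustrative five-position data from an assumed dyad
W, Z = 2 + 1j, 1 - 0.5j
beta  = {2: 0.40, 3: 0.95, 4: 1.50, 5: 2.00}
alpha = {2: 0.25, 3: 0.60, 4: 1.10, 5: 1.55}
delta = {j: W * (rot(beta[j]) - 1) + Z * (rot(alpha[j]) - 1) for j in beta}
a = {j: rot(alpha[j]) - 1 for j in beta}

def cof(j3, j4):
    C2 =  det2(a[j3], delta[j3], a[j4], delta[j4])
    C3 = -det2(a[2],  delta[2],  a[j4], delta[j4])
    C4 =  det2(a[2],  delta[2],  a[j3], delta[j3])
    return C2, C3, C4, -(C2 + C3 + C4)

D2, D3, D4, D1 = cof(3, 4)   # outer loop: carries beta2, beta3, beta4
E2, E3, E4, E1 = cof(3, 5)   # inner loop: carries beta2, beta3, beta5

def residual(b2, branch):
    """Close the outer loop for beta3 (two branches), then measure how far
    the inner loop is from closing; zero marks a 'parallel' position."""
    R = -D1 - D2 * rot(b2)
    c = (abs(R)**2 + abs(D3)**2 - abs(D4)**2) / (2 * abs(R) * abs(D3))
    if abs(c) > 1:
        return None              # outer loop cannot close at this beta2
    b3 = cmath.phase(R) + branch * math.acos(c) - cmath.phase(D3)
    S = -E1 - E2 * rot(b2) - E3 * rot(b3)   # must equal E4 e^{i beta5}
    return abs(S) - abs(E4)

# At a compatible beta2 one branch closes both loops simultaneously;
# scanning beta2 and locating zeros of this residual finds all solutions.
r_true = min(abs(residual(beta[2], s)) for s in (+1, -1)
             if residual(beta[2], s) is not None)
```

Scanning β2 over its feasible range and bracketing sign changes of `residual` on each branch is the numerical counterpart of watching for the CD/BE parallel positions.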
Triad in 5 Precision Positions
As a designer transitions from synthesizing a dyad to a triad, the underlying solution procedure will remain the same, but a few key steps will either change or be added. First, the standard triad equation has an additional term, (e^{iγ_j} − 1), associated with the Z vector. See Figure 3b depicting the vector form of a triad to see where this term fits in the vector chain. Similarly, an intermediate vector V has been added, increasing the number of links in the chain from two to three. The meanings of some of the angles have changed as well. The angle α no longer describes the angle of the coupler link, but rather the angle of the vector W. β is now assigned to the intermediate link V and will continue to be selected as the free choice for these problems. The new angle, γ, replaces α as the angle describing the coupler's rotation. It is important to note that using triads instead of dyads for five precision positions does not increase the number of loops in the compatibility linkage (there is still only a single loop), but the triad does increase the number of terms that must be identified. All values of γ must also be prescribed along with all the information that was prescribed for a dyad in five positions. This volume of free choices enables a designer to make many decisions about their desired mechanism, but this can also be overwhelming due to the vast potential solution space. For the triad in five positions, there are four simultaneous vector equations and one compatibility equation:

∆2 e^{iβ2} + ∆3 e^{iβ3} + ∆4 e^{iβ4} + ∆5 e^{iβ5} + ∆1 = 0

[21] (p. 111). As with the compatibility linkage for a dyad in four positions, only one compatibility equation exists for a triad in five positions. As a result, the compatibility linkage only has a single loop. However, one significant difference between the two is the additional link in the five-bar compatibility linkage. This results in a solution structure with two degrees of freedom rather than one. This challenge can be avoided by giving the designer a second free choice. Typically,
these free choices are chosen as β2 and β5, though any other combination of two angles is also valid. Once these free choices are made, the solution procedure is the same as for the dyad in four positions, as all that is left is a geometrically determinate triangle. The remaining link positions and angles can be solved using the law of cosines. Table 3 and Figure 9 represent summaries of the dyad and triad solution procedures using the compatibility linkage approach. The similarities across these cases are indicated, perhaps suggesting a future software kinematic synthesis package. One example in this direction was achieved by Chase [26], although there was limited use of this software at that time.
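Under stated assumptions (illustrative angles, task data generated from an assumed triad), the five-bar closure and the two-free-choice triangle reduction can be checked numerically:

```python
import cmath, math

def rot(t):
    return cmath.exp(1j * t)

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Assumed triad and prescribed rotations (illustrative, self-consistent data)
W, V, Z = 2 + 1j, 1.5 - 1j, 1 - 0.5j
alpha = {2: 0.20, 3: 0.55, 4: 1.00, 5: 1.40}   # rotations of W
beta  = {2: 0.40, 3: 0.95, 4: 1.50, 5: 2.00}   # rotations of V (free choices)
gamma = {2: 0.10, 3: 0.35, 4: 0.70, 5: 1.05}   # coupler rotations
A = {j: rot(alpha[j]) - 1 for j in alpha}
B = {j: rot(beta[j]) - 1 for j in alpha}
C = {j: rot(gamma[j]) - 1 for j in alpha}
rhs = {j: W * A[j] + V * B[j] + Z * C[j] for j in alpha}   # = delta_j - h_j

def minor(skip):
    """3x3 minor over columns (A, C, rhs), striking the row of position j."""
    rows = [j for j in (2, 3, 4, 5) if j != skip]
    return det3([[A[j], C[j], rhs[j]] for j in rows])

# Cofactors of the beta column of the 4x4 augmented matrix (standard signs)
D = {2: -minor(2), 3: +minor(3), 4: -minor(4), 5: +minor(5)}
D1 = -(D[2] + D[3] + D[4] + D[5])

compat = sum(D[j] * rot(beta[j]) for j in (2, 3, 4, 5)) + D1   # five-bar loop

# Two free choices (beta2, beta5) reduce the five-bar to a triangle in D3, D4
R = -D1 - D[2] * rot(beta[2]) - D[5] * rot(beta[5])
c = (abs(R)**2 + abs(D[3])**2 - abs(D[4])**2) / (2 * abs(R) * abs(D[3]))
phi = math.acos(max(-1.0, min(1.0, c)))
branches = [cmath.phase(R) + s * phi - cmath.phase(D[3]) for s in (+1, -1)]
```

With the two free choices fixed at consistent values, the triangle has exactly two closures, one of which reproduces the true β3.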
A full explanation of the solution procedure using the compatibility linkage for a triad in six and seven precision positions can be found in Appendix A. While kinematic chains above dyads and triads will not be discussed in detail here, Lin demonstrated the general solution procedure for the compatibility linkage of a quadriad [21]. Theoretically, even higher-order chains ought to be solvable by compatibility linkages. However, it becomes increasingly difficult to fathom a sufficiently complex yet practical mechanism synthesis problem that would justify their use. Even so, chains incorporating five or more vectors/links, and their potential applications, remain a possible area for further study.
General Solution Procedure
A flow chart is provided in Figure 9 depicting the general solution procedure using the method of compatibility linkages. Inspiration for the chart comes from [21] (pp. 140-143).
To solidify the general solution procedure, the authors find it prudent to provide the following numerical example, a dyad in five precision positions. The problem is defined by the precision positions stated in Table 4. Using Equations (7)-(13) and (19), the vectors representing the links in the compatibility linkage are found, shown in Table 5. These vectors form two four-bar loops, shown in Figure 10. The end of vector ∆1 is chosen as the shared point between the two loops. At this point, the links ∆2 and ∆′2 are chosen as the input which will drive the compatibility linkage. As a result, all of the ∆′ links are rotated about the head of vector ∆1 to align ∆2 and ∆′2 so that they are colinear, a rotation of −13.15 degrees (CW). Additionally, a parallelogram is formed by drawing a vector from the end of ∆3 in the direction of ∆2. This vector, the difference of the two input-link vectors, is 1.9307 − 7.2528i. After applying these changes, the compatibility linkage takes the form shown in Figure 11. From these circle intersections, it is possible to calculate an upper and lower bound of β2 as 116 degrees above the initial position (CCW), and 27 degrees below the initial position (CW). The mechanism is rotated over this range, and any position where links CD and BE are parallel to each other is recorded. In this problem, there are two such positions, shown in Figure 13a,b. In each of these two compatibility linkage positions, the links are measured to determine their angular displacement relative to the initial position. From these displacements, two dyads are found, corresponding to two solution positions, by plugging the values back into the standard form equations and finding a linear solution. These two dyads are plotted using the software Lincages in Figure 14 [27,28].
Special Cases
As with other synthesis methods, there are several special cases when solving problems using the compatibility linkage. A few of the most common will be emphasized here, and Figure A4 shows a more complete table of special cases.
The first set of special cases occurs when the free choice angle β2 is equal to α2, γ2, or 0. This results in Equation (2) taking the form shown in Equation (22) (for β2 = α2). Each of these cases is resolved by the solution containing a slider.

W(e^{iα_j} − 1) + V(p_j e^{iα_j} − 1) + Z(e^{iγ_j} − 1) = δ_j − h_j (22)

In this solution case, all angle variables are known prior to performing any calculations. The only scalar term which is not defined is p_j. p_j is called the stretch factor, and p_2 is the free choice for this problem.
The second set of special cases is also caused by other angular similarities. They are: one link with no angular displacement (i.e., α_j = 0), two links with the same angular displacement (i.e., α_j = γ_j), and multiple links with no angular displacement (i.e., α_j = γ_j = 0). Each of these special cases is resolved through some combination of sliders, with the exception of γ_j = 0, for which no solutions exist. For a full depiction of the special cases for triads, see Figure A4.
Advantages of the Compatibility Linkage Method
By analyzing the range of rotation of the "input link" of the compatibility linkage, some interesting properties of possible solution mechanisms are revealed. Frequently, the link ∆2 will have a finite rotational range, meaning that only values of β2 falling in the acceptable range can produce solutions. This is quite useful, as previously, the range of acceptable β2 (free choice) values would have been found through an exhaustive search. Through the method of compatibility linkages, the designer can clearly identify the upper and lower bounds of β2 based on how far link ∆2 in the compatibility linkage will rotate in either direction from its starting position. For example, a crank-rocker type compatibility linkage will give β2 a range that allows any value to be used as a free choice. In contrast, a double-rocker compatibility linkage will restrict the range of β2 [17-19]. In the latter case, one can expect solutions only for a limited range of β2, clockwise or counterclockwise; thus larger values of ±β2 are rare. This is a quite useful design rule.
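This check is easy to automate. The sketch below uses hypothetical ∆ vectors, with ∆1 chosen so the loop closes in its starting position (the closure condition, Equation (13)); the feasible β2 arc follows from the reach condition of the ∆3-∆4 chain:

```python
import cmath, math

# Hypothetical compatibility-linkage link vectors; D1 is chosen so the loop
# closes in the starting position (D1 + D2 + D3 + D4 = 0), as closure requires.
D2, D3, D4 = -1.0 + 1.2j, 2.2 - 0.8j, -1.5 - 1.9j
D1 = -(D2 + D3 + D4)

lengths = sorted(abs(d) for d in (D1, D2, D3, D4))
s, p, q, l = lengths
grashof = s + l < p + q        # Grashof condition on the four link lengths

# beta2 values admitting a closed loop: the two-link chain D3, D4 must reach
# R(beta2) = -D1 - D2 e^{i beta2}, i.e. ||D3|-|D4|| <= |R| <= |D3|+|D4|
def feasible(b2):
    r = abs(-D1 - D2 * cmath.exp(1j * b2))
    return abs(abs(D3) - abs(D4)) <= r <= abs(D3) + abs(D4)

arc = [-math.pi + k * 2 * math.pi / 720 for k in range(720)]
feasible_arc = [b2 for b2 in arc if feasible(b2)]
full_rotation = len(feasible_arc) == len(arc)  # True means any beta2 works
```

For these particular (made-up) vectors the linkage is non-Grashof, so the scan reports a restricted β2 range, mirroring the double-rocker behavior described above.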
Applying the Grashof theory to a compatibility linkage reveals some interesting behavior. Depending on the type of mechanism formed by the compatibility equations (Grashof vs. non-Grashof, crank-rocker, double-rocker, etc.), solution regions may emerge. If a compatibility linkage has more than one branch (i.e., more than one unique configuration, such that reaching the second configuration requires temporarily removing at least one pin joint), then there will correspondingly be multiple sets of β2 values that produce viable solutions. This can be seen in reference [18] (p. 4), depicting the center-point Burmester curve for a double-rocker compatibility linkage.
A non-Grashof triple-rocker mechanism, on the other hand, has a single circuit. As a result, it will have continuous solutions throughout the full potential range of motion of the input angle. However, this will still not cover a full 360 degrees, as rockers are inherently limited in this regard. J.A. Schaaf and J.A. Lammers furthered this research, identifying fourteen specific classes of compatibility linkages and their corresponding center-point curve shapes [19]. These fourteen groups are divided into three categories: Grashof, non-Grashof, and changepoint mechanisms. Within each of these groups, depending on which of the links ∆1-4 is the shortest, the general shape of the center-point curve can be determined. See their paper for a full list of these categories [19]. While this theory has presently only been applied to the compatibility linkage of a dyad in four precision positions, there is reason to believe that the same line of analysis may yield similar findings for the triad in five precision positions, and perhaps even higher numbers of precision positions.
Applying the Grashof criteria to the compatibility linkage is not the same as applying the same criteria to the finished solution. Its use for the compatibility linkage reveals interesting information about potential solution regions in which, for any value of β2, a solution exists, or regions where no dyad/triad solutions exist. However, mechanisms produced from the compatibility linkage approach may still be subject to circuit, branch, and order defects. Additionally, they may have poor force properties or low transmission angles.
Defects
The compatibility linkage is useful in that it reveals numerous prospective solutions, but the designer will still need to determine whether a candidate mechanism found by this method meets their requirements and does not exhibit defects, such as the combination of dyads selected not reaching all design positions on one circuit of the mechanism. Chase and Mirth detailed an effective procedure for identifying and addressing these defects [28]. Whether there is a relationship between the Grashof type of a compatibility linkage and any or all of these properties remains a possible area for research. Similarly, applying the Grashof criteria to the higher-order compatibility linkages could be further investigated. Investigating the circuit defects of a compatibility linkage will reveal unique solution regions, as there are gaps where no solutions exist for particular values of β2.
Eight or More Precision Positions
Cases that would require more than seven precision positions are less common in industry, as usually a less complex solution method can produce a satisfactory mechanism design. However, a few options are available if a designer wishes to move beyond this limit. First, Chuen-Sen Lin derived compatibility linkage solutions for quadriads in up to nine positions. The solution structures produced for these mechanisms are quite complex but are solved in largely the same way as the dyads and triads. See his work from the University of Minnesota [21] or the subsequent work he and his students completed at the University of Alaska Fairbanks [29,30].
Connections to General Burmester Theory
Burmester theory underpins many of the precision position synthesis techniques in the field of mechanisms. The theory largely revolves around the positions of the poles for a particular moving plane. They are found by identifying the intersection of the perpendicular bisectors between two positions for two arbitrary points on the moving plane. In four positions, the center-point curve passes through the six standard poles, while the circle-point curve passes through the poles P12, P13, P14 and the image poles P23, P24, and P34.
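In complex-number form, the pole of the displacement from position 1 to position j has a closed-form expression equivalent to the perpendicular-bisector construction: it is the fixed point of the rigid-body map. A small sketch with hypothetical numbers:

```python
import cmath

# Displacement of a moving plane: the precision point moves by delta2 and the
# plane rotates by alpha2 about it (illustrative values only).
P1 = 1 + 2j              # precision point in position 1
alpha2 = 0.8             # rotation from position 1 to position 2 (rad)
delta2 = 1.5 - 0.4j      # translation of the precision point

# Any point x maps to x' = e^{i a}(x - P1) + P1 + delta. The pole P12 is the
# fixed point of this map, obtained by solving x' = x:
pole = P1 + delta2 / (1 - cmath.exp(1j * alpha2))

def displace(x):
    """Rigid-body displacement taking position 1 to position 2."""
    return cmath.exp(1j * alpha2) * (x - P1) + P1 + delta2

moved = displace(pole)    # the pole maps onto itself
```

The same formula, applied pairwise to the prescribed (α_j, δ_j) data, yields all the poles P1j that anchor the Burmester curves discussed below.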
In addition to the poles, points called "opposite pole quadrilaterals", or "Π-points", are found by identifying the intersection of lines passing through each pair of non-adjacent poles. There are twelve of these points in four positions. The first six are shown in Equation (23), each of which intersects the center-point curve, much like the natural poles. (23) [31,32] (pp. 25-26).
The next six are formed from some combination of the image poles and are shown in Equation (24). The circle-point curve passes through each of these points in addition to the poles listed above. (24) [31,32] (pp. 25-26).
Using the full collection of these points, an initial depiction of both Burmester curves can be drawn. This visual tool may lend exceptional value. A sample plot of the Burmester curves for a dyad can be seen in Figure 15. Similarly, the Burmester curves for a triad are shown in Figure 16. Each point on the Burmester curve is part of a 'Burmester point pair', or a Burmester point set in the case of the triad. These pairs represent a corresponding moving pivot location and ground pivot location. The method of compatibility linkages is an extremely effective tool for finding these curves: for each valid value of the free choice, a new pair of points on the curves is generated.
Closing Thoughts
The potential applications of kinematic synthesis through the compatibility linkage are intriguing, and there are still opportunities for further investigation. Throughout this paper β 2 has been used as the free choice, and the compatibility linkage has been applied to find the values of the remaining unknown angles. For dyads, this paper assumes the designer wants to solve the problem for motion generation. However, by forming new cofactor matrices about α rather than β, the compatibility linkage could be used for path with prescribed timing problems as well. The LINCAGES software package utilized this realization [33][34][35]. Similarly for triads, it should be possible to use the compatibility linkage to solve the standard form equation for any of the angles β, α or γ, with the only procedural change being rewriting the cofactor matrices. This would allow the designer to solve triad synthesis problems defined for motion generation or for path with prescribed timing. There are typically two path generators for each motion generator due to cognates [4]. As a result, further investigation into the unique properties of compatibility linkages and their cognates is warranted.
Schaaf and Lammers investigated the compatibility linkage of a dyad in four positions and found that its Grashof type played a distinctive role in determining the shape of the Burmester curves [19]. Inspired by their findings, we speculate that the compatibility linkage of the triad for five precision positions will exhibit similar properties. The triad has a five-bar compatibility linkage with two degrees of freedom, but it only has a single loop. Additionally, after setting the angle of one of the free choices (e.g., β 5 ), the rest of the linkage behaves like a four-bar, and the second free choice can be rotated through all its values (e.g., β 2 ). This likely means that the findings of Schaaf and Lammers are applicable to the triad, and that for each free choice of β 5 a new center-point curve could be generated which resembles those of the corresponding class of dyads.
In addition to the applications for multiple prescribed position synthesis, we speculate that the compatibility linkage can be utilized for mixed position-velocity synthesis as well. Using the standard form equations, mixed position-velocity synthesis is already possible. In a two-precision-position problem, for example, a designer may choose to include a third equation describing the velocity of the precision point in the first position. The standard form equation can be rewritten as

W (i β̇ j e^{iβ j }) + Z (i α̇ j e^{iα j }) = V j .

Here β̇ j and α̇ j represent the angular velocities of their respective links, and V j is the velocity vector of the precision point. To evaluate this expression using a compatibility linkage, the cofactors would need to be rewritten, but after making that change the general solution procedure should flow in exactly the same way [4,36,37].
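As a concrete illustration of how the standard form equation reduces to a linear solve once the rotation angles are fixed, here is a small sketch (the numeric angles and vectors are arbitrary assumptions of mine, not values from the paper). It solves the classical dyad standard-form equation W(e^{iβ j } − 1) + Z(e^{iα j } − 1) = δ j for the unknown vectors W and Z in three prescribed positions, using Cramer's rule on the resulting 2×2 complex system — the same cofactor structure that the compatibility linkage encodes.

```python
import cmath

def solve_dyad_three_positions(betas, alphas, deltas):
    # Standard-form dyad equations for j = 2, 3 (position 1 is the reference):
    #   W (e^{i beta_j} - 1) + Z (e^{i alpha_j} - 1) = delta_j
    # With the angles fixed, this is a 2x2 complex linear system in W and Z,
    # solved here by Cramer's rule (ratios of cofactor determinants).
    a11 = cmath.exp(1j * betas[0]) - 1
    a12 = cmath.exp(1j * alphas[0]) - 1
    a21 = cmath.exp(1j * betas[1]) - 1
    a22 = cmath.exp(1j * alphas[1]) - 1
    det = a11 * a22 - a12 * a21
    W = (deltas[0] * a22 - a12 * deltas[1]) / det
    Z = (a11 * deltas[1] - deltas[0] * a21) / det
    return W, Z

# Round-trip check: build displacement vectors from known W, Z, then recover them.
W_true, Z_true = 1.0 + 2.0j, 3.0 - 1.0j
betas, alphas = [0.4, 0.9], [0.2, 0.5]
deltas = [W_true * (cmath.exp(1j * b) - 1) + Z_true * (cmath.exp(1j * a) - 1)
          for b, a in zip(betas, alphas)]
W, Z = solve_dyad_three_positions(betas, alphas, deltas)
```

Rewriting the cofactor matrices about α instead of β, or inserting derivative terms, changes only how `a11`-`a22` and the right-hand side are formed, which is why the solution procedure carries over to the other problem types discussed above.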
Multiple researchers have demonstrated using multiply separated positions to synthesize a path generation mechanism by using derivative equations [4] (pp. 239-245), [38]. The resultant tracer point curve closely resembles a prescribed function for a significant range of that function. However, to achieve this result for a problem defined by a single position and its four derivatives, Sylvester's dialytic eliminant was employed. In the same way as before, we speculate that this method can be avoided by employing the compatibility linkage method. This would only require rewriting the cofactor matrices with appropriate derivatives.

The parallelogram BCEF was formed about links ∆ 4 and ∆ 4 ′ (∆ 4 is extended to form the ternary link CDE), while the parallelogram GIJK was formed about links ∆ 1 and ∆ 1 ′ (∆ 1 is extended to form the ternary link HIJ). Finally, a ternary link AJK is formed incorporating ∆ 2 and ∆ 2 ′ (connecting the points J and K). A critical observation to make regarding these parallelogram loops is that each was made possible by existing relationships between the ∆ and ∆′ terms. ∆ 2 and ∆ 2 ′ share the same angular displacement, even prior to being connected. The same is true of ∆ 3 and ∆ 3 ′, and of ∆ 4 and ∆ 4 ′. This means that combining these links is an unnecessary but helpful simplification of the compatibility linkage, as this new form requires only a single input to fully determine the rest of the mechanism. This is a significant advantage, as each independent five-bar loop had two degrees of freedom, making the problem more complex.

The compatibility linkage of the triad in six positions can be used to find solutions in the form described above. However, the linkage is unique in that it has an additional layer of possible simplification that the designer can take advantage of. As was discussed earlier, the number of unique geometric inversions of the compatibility linkage corresponds to the number of unique solutions to the original problem. In this case, though, once the free-choice angle β 2 is selected and implemented, the relative positions of links AJK and HIJ remain consistent regardless of which geometric inversion is considered. As a result, several links can be eliminated. The pivots A, G, and H can be considered as a single ternary link, reducing the ten-bar linkage to a seven-bar zero-DOF structure [17] (p.36). After implementing each of these steps, the new structure looks like this (Figure A2):

The final positions of each of the points in this compatibility linkage are shown in Table A1. To find solutions, remove the link EF, creating a Watt-II solution linkage. For positions (apart from the starting position) in which links CE and BF are parallel, the mechanism represents a solution to the original problem.
Table A1. Final positions of each point in the simplified compatibility linkage (columns: Point, Position in Plane; the first entry places A at (0,0); the remaining rows are not reproduced here).

The solution values are then taken from the angular displacements of links AH, BF, HD, and GF, which correspond to the values of -β 3 , β 4 -β 3 , β 5 -β 3 , and β 6 -β 3 , respectively. These relationships are shown in Table A2. While making these simplifications does take more time initially, the payoff is substantial. Analysis of a six-bar mechanism is easier than that of a ten-bar linkage, not to mention that the Watt-type mechanisms are much more thoroughly covered in the literature. A designer working through this process will find the following relations:

Table A2. Selected links in the simplified compatibility linkage and their corresponding beta values (columns: Link, Angular Displacement):
AH: -β 3
BF: β 4 - β 3
HD: β 5 - β 3
GF: β 6 - β 3

The attentive reader may note that the angle β 2 is neglected in Tables A1 and A2; it no longer plays a role as a free choice or as a solution angular displacement. Fortunately, once the beta values have been calculated, not all of them must be used. Only three of the beta values need to be incorporated to calculate the value of each unique solution for the triad. Chuen-Sen Lin shows an example, here taken as Equation (A4), using β 2 , β 3 , and β 4 . The solutions to this reduced system of standard form equations are the final solutions to the precision position problem.
Triad in seven precision positions: A triad in seven precision positions is by far the most mathematically complex of the compatibility linkage types listed here. As with the transition from four to five positions for a dyad, the transition from six to seven positions for a triad reduces the number of potential solutions from an infinite number (based on the infinite number of potential free choice values) to a finite one. Solutions come in sets of 0, 2, 4, or 6 depending on the intersections of the Burmester curves. As a result, most authors recommend refraining

After rotating the links to create alignment and creating the appropriate parallelograms between each layer of the loop, the linkage takes the form shown in Table 2. Links IJ, IH, MP, and PQ have all been created to form parallelogram loops. These parallel motion structures are created around links ∆ 1 , ∆ 2 , ∆ 3 , and ∆ 4 , which were each of the links that had identical angular displacement relationships. As with the lower-order structures, the designer may use this linkage by removing any of these created links, changing the chain from a 15-bar zero-degree-of-freedom structure to a 14-bar one-degree-of-freedom linkage. From here, the designer must identify the mechanism positions (outside of the starting position) in which the sides of the parallelogram from which the link was removed are parallel to each other. At these positions, the angular displacements of links CD, DE, EH, HM, KN, and LO from their starting positions correspond to β 2 -β 7 , respectively [21] (p.115). Further simplifications of the compatibility linkage for a triad in seven precision positions remain an area for further study.
Figure 1 .
Figure 1.Patent figure of a chair with a deployable footrest [12].
Figure 2 .
Figure 2. Patent figure of a leading edge flap of a wing [20].
Figure 3 .
Figure 3. (a) The standard form depiction of a four-bar comprised of two dyads. (b) The standard form depiction of a triad is shown in two positions. Reprinted with permission from ref. [21]. Copyright 1987 Chuen-Sen Lin.
Figure 4 .
Figure 4. (a) A multiloop mechanism shown in an assembled form. (b) The same multiloop mechanism decomposed into three components, two triads and one dyad.
Figure 5 .
Figure 5. (a) The synthesized Watt 1 multiloop mechanism for guiding a drone rotor out of the nacelle and into a vertical position [23,24].(b) The breakdown of the vector loops comprising the Watt 1 mechanism [23].
Figure 6 .
Figure 6.(a) A 3D printed prototype in the initial "fixed-wing" position [23].(b) The prototype in the open, vertical liftoff configuration.
Figure 7 .
Figure 7. The compatibility linkage for a dyad in four precision positions. The resultant position of each link is shown after applying a rotation of β 2 . There are two combinations of the links ∆ 3 and ∆ 4 which close the linkage. Both represent a viable solution to the original problem for the given value of β 2 .
Summary of the solution process for compatibility linkages for dyads and triads (Table 3):

Dyad, 4 positions — four-bar compatibility linkage (links ∆ 1 , ∆ 2 , ∆ 3 , ∆ 4 ):
1. Write standard form equations, put them in an augmented matrix form, write the cofactor matrices.
2. Create ∆ 1 as the sum of ∆ 2 -∆ 4 , then draw the linkage.
3. Rotate ∆ 2 by β 2 , rotate ∆ 3 and ∆ 4 to close the loop.
4. Read off the angular displacement of ∆ 3 and ∆ 4 to get β 3 and β 4 .

Dyad, 5 positions — seven-bar structure, six-bar compatibility linkage (7 links: ∆ 1 -∆ 4 , ∆ 1 ′-∆ 3 ′) [21] (p.109):
1. Write standard form equations, put them in an augmented matrix form, write the cofactor matrices. There are two sets, ∆ and ∆′.
2. Using a consistent scale, plot both complete loops, with the base of the ∆ 1 links at the same x, y position.
3. Rotate either loop until ∆ 2 and ∆ 2 ′ are colinear.
4. Form a parallelogram about the ∆ 3 links, creating point D.
5. Remove the newly formed link DE to find the final compatibility linkage.
6. Rotate ∆ 2 by β 2 , adjusting other links accordingly. Positions where BE and CD are parallel represent solutions.

Triad, 5 positions — five-bar 2-DOF compatibility linkage (5 links: ∆ 1 -∆ 5 ) [21] (p.113): Repeat steps 1-4 of the dyad in 4PP. The difference is that the loop has five links instead of four, so the designer will need to set two free choices, typically the angles of ∆ 2 and ∆ 5 .

Triad, 6 positions — eleven-bar structure, ten-bar compatibility linkage (9 links: ∆ 1 -∆ 5 , ∆ 1 ′-∆ 4 ′) [21] (p.35): Repeat steps 1-6 for the dyad in five positions. In the example, ∆ 3 is used as the ground pivot. There are two parallelogram loops to form in step four: one about ∆ 4 , forming BCEF, and one about ∆ 1 , forming GIJK. Remove a link from either parallelogram to convert to the compatibility linkage.

Triad, 7 positions — fifteen-bar structure, fourteen-bar compatibility linkage (13 links: ∆ 1 -∆ 5 , ∆ 1 ′-∆ 4 ′, ∆ 1 ″-∆ 4 ″) [21] (p.117): Repeat steps 1-6 for the dyad in five positions. There are several parallelogram loops to form in step four; loops should be formed between each layer of the linkage. Here, they are formed about ∆ 4 and ∆ 1 , and the layers of link ∆ 2 are fused to form a single link. Remove a link from any parallelogram to convert to the compatibility linkage *.

* Additional information regarding the solution procedure for a triad in six and seven precision positions is shown in Appendix A, including a simplified form of the six-precision-position compatibility linkage.
from ∆ 1 ′, ∆ 2 ′, ∆ 3 ′, and ∆ 4 (∆ 4 used twice because ∆ 4 ′ is identical to it; either notation is acceptable). Using a consistent scale, line up these two four-bars such that the tails of ∆ 2 and ∆ 2 ′ share the same x, y coordinate, and rotate either four-bar linkage (keeping all internal angles the same) such that ∆ 2 and ∆ 2 ′ have the same angular direction. The result should now resemble Figure 8, with ∆ 1 (A-G-H) and ∆ 2 (A-B-C) appearing as ternary links. There are two distinct four-bar chains, or loops, between them.
Figure 8 .
Figure 8.The compatibility linkage for a dyad in five precision positions.Linkage positions in which the lines CD and BE are parallel represent solutions to the synthesis problem [21] (p.109).
Figure 10 .
Figure 10.The unmodified plot of the Δ vectors for a dyad in five precision positions.
Figure 11 .
Figure 11. The solution structure for a dyad in five precision positions.

Figure 11 is the solution structure representing this problem. Removing the link DE forms the compatibility linkage. To use it, the designer can directly begin rotating the linkage to try to find solutions. However, a useful intermediate step is to find the range of acceptable β 2 values for which the compatibility linkage closes. This range is found by drawing a circle with radius ∆ 2 around the tip of ∆ 1 , as well as a circle of radius |∆ 2 | + |∆ 3 | + |∆ 4 | around the tail of ∆ 1 . Repeat this process for the ∆′ loop. Here the range of the ∆ loop is more limiting. The range of ∆ 2 is shown in Figure 12, with the circle's intersections denoting the limits of ∆ 2 .

Figure 12 .
Figure 12. Procedure for finding the range of β 2 . Here the two intersections of the circles represent the upper and lower bound of the angle. From these circle intersections, it is possible to calculate an upper and lower bound of β 2 as 116 degrees above the initial position (CCW) and 27 degrees below the initial position.
Figure 13 .
Figure 13. (a) Compatibility linkage in solution position one. (b) Compatibility linkage in solution position two.

Figure 14 .
Figure 14. Final solution linkage visualized in the Lincages software. The triangles represent the ground pivots of this mechanism.
Figure 15 .
Figure 15. An example of the center-point and circle-point Burmester curves for a dyad in four positions [33].

Figure 16 .
Figure 16. The three Burmester curves for a triad in six precision positions. There are two circle-point curves (corresponding to the two moving pivots) and one center-point curve [21] (p.86).
Figure A1 .
Figure A1. (a) The two loops of the triad in six positions shown without modification; (b) The compatibility linkage of a triad in six positions after aligning ∆ 3 and forming parallelograms [21] (pp.31-35).
Figure A2 .
Figure A2.The simplified form of the compatibility linkage for a triad in six positions [21] (p.38).
Figure A3 .
Figure A3. The three loops of the triad compatibility linkage for seven precision positions prior to modification. The end of link ∆ 1 is selected as the common point [21] (p.116).
Table 3 .
Summary of the solution process for compatibility linkages for dyads and triads.
Table 4 .
Precision positions and alpha angles.
Table 5 .
Direction and magnitude of each ∆ vector.
I briefly report on some unexpected results that I obtained when optimizing the model parameters of the Lasso. In simulations with varying observations-to-variables ratio n=p, I typically observe a strong peak in the test error curve at the transition point n/p = 1. This peaking phenomenon is well-documented in scenarios that involve the inversion of the sample covariance matrix, and as I illustrate in this note, it is also the source of the peak for the Lasso. The key problem is the parametrization of the Lasso penalty (as e.g. in the current R package lars) and I present a solution in terms of a normalized Lasso parameter.
Introduction
In regression and classification, an omnipresent challenge is correct prediction in the presence of a large number p of variables based on a small number n of observations, and for any regularized method, one typically expects the performance to increase with increasing observations-to-variables ratio n/p. While this is true in the regions n > p and n < p, some estimators exhibit a peaking behavior for n = p, leading to particularly low performance. As documented in the literature (Raudys and Duin, 1998), this affects all methods that use the (Moore-Penrose) inverse of the sample covariance matrix (see Section 3 for more details). This leads e.g. to the peculiar effect that for Linear Discriminant Analysis, the performance improves in the n = p case if a set of uninformative variables is added to the model 1 . In this note, I show that this peaking phenomenon can also occur in scenarios where the Moore-Penrose inverse is not directly used for computing the model, but in cases where least-squares estimates are used for model selection. One particularly popular method is the Lasso (Tibshirani, 1996) and its current implementation in the software R. As illustrated in Section 2, its parameterization of the penalty term in terms of a ratio of the ℓ 1 -norm of the Lasso solution and the least-squares solution leads to problems when using cross-validation for model selection. I present a solution in terms of a normalized penalty term.
Simulation Setting and Peaking Phenomenon
For a p-dimensional linear regression model y = Xβ + ε, the task is to estimate β based on n observations {(x 1 , y 1 ), . . . , (x n , y n )} ⊂ R p × R. As usual, the centered and scaled observations are pooled into X = (x 1 , . . . , x n )ᵀ ∈ R n×p and y = (y 1 , . . . , y n )ᵀ ∈ R n .
In this note, I study the performance of the Lasso (Tibshirani, 1996)

β̂ lasso = arg min β ‖y − Xβ‖ 2 + λ ‖β‖ 1 , λ ≥ 0,

for a fixed dimensionality p and for a varying number n of observations. Common sense tells us that the test error is approximately a decreasing function of the observations-to-variables ratio n/p. However, in several empirical studies, I observe particularly poor results for the Lasso in the transition case n/p = 1, leading to a prominent peak in the test error curve at n = p.
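To make the estimator above concrete, here is a minimal sketch of the Lasso objective solved by cyclic coordinate descent with soft-thresholding. This is my own illustrative implementation, not the lars algorithm used in the experiments; the objective matches the unscaled form ‖y − Xβ‖² + λ‖β‖₁ given above.

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding operator: S(z, t) = sign(z) * max(|z| - t, 0).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for  ||y - X b||^2 + lam * ||b||_1.
    # Minimizing over b_j alone gives b_j = S(x_j' r_j, lam/2) / (x_j' x_j),
    # where r_j is the partial residual excluding coordinate j.
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]
            b[j] = soft(X[:, j] @ r_j, lam / 2.0) / (X[:, j] @ X[:, j])
    return b
```

For an orthonormal design the update reduces to soft-thresholding the OLS coefficients after a single pass, which makes for a convenient correctness check.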
In the remainder of this section, I illustrate this unexpected behavior on a synthetic data set. I would like to stress that the peaking behavior is not due to particular choices in the simulation setup, but only depends on the ratio n/p. I generate n total = 5000 observations x i ∈ R 90 , where x i is drawn from a multivariate normal distribution with no collinearity. Out of the p = 90 true regression coefficients β, a random subset of size 20 is non-zero and drawn from a uniform distribution on [−4, +4]. The error term ε is normally distributed with variance such that the signal-to-noise ratio is equal to 4. For the simulation, I sub-sample training sets of sizes n = 10, 20, . . . , 190, 200. The sub-sampling is repeated 10 times. On the training set of size n, the optimal amount of penalization is chosen via 10-fold cross-validation. The Lasso solution is then computed on the whole training set of size n, and the performance is evaluated by computing the mean squared error on an additional test set of size 500.
I use the cv.lars function of the R package lars version 0.9-7 (Hastie and Efron, 2007) to perform the experiments. The mean test errors over the 10 runs are displayed in the left panel of Figure 1. As expected, the test error decreases with the number of observations. For n = p however, there is a striking peak in the test error (marked by the letter X), and the performance is much worse compared to the seemingly more complex scenario of n ≪ p. We also observe the peaking behavior in the case where n = p in the cross-validation split (marked by the letter O). The right panel of Figure 1 displays the cross-validated penalty term of the Lasso as a function of n. Note that in the cv.lars function, the amount of penalization is not parameterized by λ ∈ [0, ∞[ but by the more convenient quantity

s = ‖β̂ lasso ‖ 1 / ‖β̂ ols ‖ 1 . (1)

Values of s close to 0 correspond to a high value of λ, and hence to a large amount of penalization. The right panel of Figure 1 shows that the peaking behavior also occurs for the amount of penalization, measured by s. Interestingly, the peak does not occur for n = p, but in the case where the number of observations equals the number of variables in the cross-validation loops. This peculiar behavior is explained in the two following sections, and I also present a normalization procedure that solves this problem.
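The parameterization by s is easy to illustrate in a toy case. Assume, purely for illustration, an identity design X = I: then the OLS fit is y itself and the Lasso fit is the soft-threshold of y, so s can be computed in a few lines.

```python
# Toy illustration (assumption: identity design X = I, objective ||y - b||^2 + lam*||b||_1).
# In this case the OLS estimate equals y, and the Lasso estimate is soft(y, lam/2).
y = [3.0, -0.5, 1.0]
lam = 1.0

def soft_threshold(v, t):
    # S(v, t) = sign(v) * max(|v| - t, 0)
    return (1.0 if v > 0 else -1.0) * max(abs(v) - t, 0.0)

b_lasso = [soft_threshold(v, lam / 2.0) for v in y]
l1_lasso = sum(abs(v) for v in b_lasso)   # ||b_lasso||_1 = 2.5 + 0 + 0.5 = 3.0
l1_ols = sum(abs(v) for v in y)           # ||b_ols||_1   = 3 + 0.5 + 1   = 4.5
s = l1_lasso / l1_ols                     # s = 3.0 / 4.5 = 2/3
```

The point of the next sections is that the denominator ‖β̂ ols ‖₁ is itself unstable around n = p, which distorts s even when the numerator is well-behaved.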
The Pseudo-Inverse of the Covariance Matrix
It has been reported in the literature (Raudys and Duin, 1998; Tresp, 2002; Opper, 2001) that the pseudo-inverse of the covariance matrix is a particularly bad estimate of the true precision matrix Σ⁻¹ in the case p = n. The rationale behind this effect is as follows. If the empirical covariance matrix has the eigendecomposition Σ̂ = Σᵢ λᵢ vᵢ vᵢᵀ, its Moore-Penrose inverse is Σ̂⁺ = Σ_{λᵢ>0} λᵢ⁻¹ vᵢ vᵢᵀ: the non-zero eigenvalues are inverted and the zero eigenvalues are left at 0. In particular, in the small sample case n < p, the smallest p − n eigenvalues of the Moore-Penrose inverse are set to 0. This corresponds to cutting off directions with high frequency. While this introduces an additional bias, it avoids the huge variance that is due to the inversion of small but non-zero eigenvalues. In the transition case n/p = 1, all eigenvalues are non-zero (with some of them very small), and the MSE is most pronounced in this situation. The striking peaking behavior for n = p is illustrated in, e.g., Schäfer and Strimmer (2005). As a consequence, any statistical method that uses the pseudo-inverse of the covariance matrix suffers from the peaking phenomenon.
Consequently, the peaking behavior also occurs in ordinary least squares regression, as the OLS estimator uses the pseudo-inverse, β_ols = (XᵀX)⁺ Xᵀ y. For n = p, the ℓ1-norm of this estimate is particularly high (Figure 2). Note furthermore that, except for n = p, the curve is rather smooth: small changes in the number of observations only lead to small changes in the ℓ1-norm of the estimate.
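As a quick numerical check (a sketch under assumed standard Gaussian data, not the paper's experiment), one can watch the ℓ1-norm of the pseudo-inverse-based OLS estimate spike at n = p:

```python
# Illustration: the l1-norm of beta_ols = pinv(X) @ y peaks near n = p,
# because at n = p all eigenvalues of X^T X are non-zero but some are tiny.
import numpy as np

def ols_l1_norm(n, p=90, seed=0):
    """l1-norm of the pseudo-inverse OLS estimate on Gaussian data."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    y = X @ np.ones(p) + rng.standard_normal(n)  # arbitrary true beta = 1
    return np.abs(np.linalg.pinv(X) @ y).sum()

# Average over a few draws; the norm at n = p = 90 dwarfs the norms at
# n = p/2 (rank deficient, small eigenvalues cut off) and at n = 2p.
avg = {n: np.mean([ols_l1_norm(n, seed=s) for s in range(5)])
       for n in (45, 90, 180)}
```

Away from n = p the curve is smooth, matching the observation above.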
This observation is the key to understanding the peaking behavior of the Lasso. While for the estimation of the Lasso coefficients itself, the pseudo-inverse of the covariance matrix does not occur, it is used for model selection, via the regularization parameter s defined in Equation (1). I elaborate on this in the next section.
Normalization of the Lasso Penalty
Let me denote by n_cv the number of observations in the k cross-validation splits, and by s_cv the optimal parameter chosen via cross-validation. As n ≈ n_cv, one expects the MSE-optimal coefficients β_lasso,n computed on a set of size n and the MSE-optimal coefficients β_lasso,n_cv based on a set of size n_cv to be similar, i.e.,

n ≈ n_cv ⇒ ‖β_lasso,n‖₁ ≈ ‖β_lasso,n_cv‖₁.

Now, if n_cv = p, then, in each of the k cross-validation splits, the number of observations equals the number of dimensions. As the least squares estimate is prone to the peaking behavior (recall Figure 2), we observe

‖β_ols,n‖₁ ≪ ‖β_ols,n_cv‖₁.
This implies that even though the ℓ1-norms of the Lasso regression coefficients are almost the same, their corresponding values of s differ drastically. To put it the other way around: the optimal s found on the cross-validation splits (where n_cv = p) is far too small, and it dramatically overestimates the amount of penalization. This explains the high test error in the case n_cv = p that is indicated by the letter O in Figure 1. For n = p, the same argument applies. The optimal s_cv found on the cross-validation splits (where n_cv < p) underestimates the amount of penalization needed in the n = p case, which leads to the peak indicated by the letter X in Figure 1.
To illustrate that the peaking problem is indeed due to the parametrization (1), I normalize the scaling parameter s in the following way. Let me denote by ℓ₁,olscv the average over the k different ℓ1-norms of the least squares estimates obtained on the k cross-validation splits. Furthermore, ℓ₁,ols is the ℓ1-norm of the least squares estimate on the complete training data of size n. The normalized regularization parameter is

s̃ = (ℓ₁,olscv / ℓ₁,ols) · s_cv. (2)

Note that the function lars returns the least squares solution, hence there are no additional computational costs.
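A minimal sketch of the correction (2) (function and variable names are mine; the reference implementation is mylars in the R package parcor):

```python
# Equation (2): rescale the cross-validated fraction s_cv by the ratio of
# the average OLS l1-norm on the CV splits to the OLS l1-norm on the
# full training data.
import numpy as np

def normalized_s(s_cv, l1_ols_cv_splits, l1_ols_full):
    """Return s_tilde = mean(l1-norms of OLS on CV splits) / l1-norm of
    OLS on the full training set * s_cv."""
    return float(np.mean(l1_ols_cv_splits)) / l1_ols_full * s_cv

# Example: if the full-data OLS norm explodes (n = p), the normalized
# fraction shrinks accordingly, restoring a comparable l1 budget.
s_tilde = normalized_s(s_cv=0.4,
                       l1_ols_cv_splits=[50.0, 55.0, 45.0],
                       l1_ols_full=500.0)
```

In the example, the CV splits have an average OLS norm of 50 while the full-data norm is 500, so the fraction is shrunk tenfold.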
To illustrate the effectiveness of the normalization, I re-run the simulation experiments with cross-validation based on the normalized penalty parameter (2). This function, called mylars, is implemented in the R package parcor version 0.1. The results for the un-normalized parameter (1) and the normalized parameter (2) are displayed in Figure 3.
Conclusion
The peaking phenomenon is well-documented in the literature, and it affects every estimator that uses the pseudo-inverse of the sample covariance matrix. As I illustrate in this note, this defect at the transition point n/p = 1 can also occur in more subtle ways. For the Lasso, the particular parameterization of the penalty term uses least-squares estimates, and this leads to difficulties in model selection. One can expect similar problems if, for example, one measures the fit of a model in terms of the total variance that it explains, and the total variance is estimated using least squares. In such cases, a normalization as proposed above is advisable.
Novel Azetidine-Containing TZT-1027 Analogues as Antitumor Agents
A conformational restriction strategy was used to design and synthesize nine TZT-1027 analogues, in which a 3-aryl-azetidine moiety replaces the phenylethyl group of TZT-1027 at the C-terminus. These analogues exhibited moderate to excellent antiproliferative activities, and the most potent compound, 1a, showed IC50 values of 2.2 nM and 2.1 nM against A549 and HCT116 cell lines, respectively. However, 1a could not achieve effective inhibition at any dose level in the A549 xenograft model (up to 5 mg/kg, injection, once a day), reaching only 16%–35% inhibition at the end of the experiment.
Introduction
Dolastatin 10 and its natural analogues are highly cytotoxic peptides isolated from the sea hare Dolabella auricularia from the Indian Ocean [1]. These compounds have been demonstrated to be effective against a broad spectrum of cancer cells [2]. The extraordinary cytotoxicity is caused by their ability to inhibit microtubule assembly and tubulin-dependent guanosine triphosphate (GTP) hydrolysis, which results in cell cycle arrest and apoptosis [3]. A large number of synthetic analogues of dolastatin 10 have been reported [4][5][6]. Some of them, such as TZT-1027, auristatin E, and auristatin PHE, were advanced into clinical trials (Figure 1). However, significant side effects were observed in clinical trials at dose levels that were not sufficient to attain clinical efficacy [7,8]. MMAE, a monomethyl analogue of auristatin E, was conjugated to monoclonal antibodies, leading to the discovery of the FDA-approved ADC brentuximab vedotin (ADCETRIS) for the treatment of relapsed Hodgkin lymphoma and systemic anaplastic large cell lymphoma [9].
A conformational study of dolastatin 10 analogues bound to tubulin revealed a compact structure folded around the central Val-Dil bond in its cis form, whereas the flexible C-terminus does not interact directly with any amino acid residue, indicating that its main role might be to arrange the molecule's overall orientation [10,11]. Here we introduced an azetidine moiety into the C-terminus of TZT-1027 to explore the effect of conformational restriction on potency (Figure 2) [12]. Thus, nine conformationally restricted analogues were synthesized and evaluated for their inhibitory effects.
Chemistry
The synthetic route is outlined in Scheme 1. 3-Aryl-azetidines 5a-i were prepared according to a known procedure [13]. Removal of the Boc group with trifluoroacetic acid (TFA) yielded the TFA salts 6a-i, which were coupled with N-Boc-(2R,3R,4S)-dolaproine (Dap) in the presence of HATU to give compounds 7a-i. Removal of the Boc group of 7a-i with TFA yielded the TFA salts 8a-i, which were coupled with Dov-Val-Dil·TFA (9) in the presence of HATU to provide the title compounds [5].
In Vitro Antiproliferative Assay
As shown in Table 1, these analogues demonstrated moderate to excellent antiproliferative activities. Among them, compound 1a was the most potent, with IC50 values of 2.2 nM against A549 and 2.1 nM against HCT116 cell lines. The structure-activity relationship could not be well illustrated due to the limited set of compounds. Basically, different substitutions on the phenyl group, such as ortho-fluoro (1b), meta-fluoro (1c), para-chloro (1e), para-tert-butyl (1f), and para-phenyl (1g), could not improve the antiproliferative activities; these compounds suffered an approximately 20–30-fold loss of potency against A549 cells. When a bulky isopropyl group was introduced at the ortho-position of the phenyl group, inhibitory activity was reduced by about 60-fold (1d). All the target compounds showed weaker activity than TZT-1027, indicating that conformational restriction at the C-terminus may not be beneficial to activity. Membrane permeability can be a limiting factor for potency. The permeability of the synthesized compounds was not measured, but we hypothesized that different substituents at the C-terminus could influence permeability and hence the antiproliferative activities. In addition, all the compounds showed better activity against HCT116 cells than against A549 cells, demonstrating cell selectivity. This is likely because HCT116 cells proliferate more rapidly than A549 cells, and cytotoxic cancer drugs are believed to gain selectivity by targeting rapidly proliferating cells.
Inhibitory Activity of Compound 1a in A549 Xenograft Model
The in vivo antitumor activity of 1a was further evaluated in A549 xenograft models in mice via tail vein intravenous injection for 22 days. It has been reported that a dose of 4 mg/kg of TZT-1027 appears to be toxic [14,15]. Considering this, the maximum dose of 1a was chosen as 5 mg/kg. After administration of 1a at 1, 2, and 5 mg/kg/day, no overt toxicity or weight loss was observed. However, compound 1a could not achieve effective inhibition at any of the dose levels (Figure 3b). TZT-1027 (2 mg/kg/day) inhibited tumor growth by 61% over the 22-day administration schedule, whereas 1a inhibited tumor growth by only 16%–35% across the different doses (Supplementary Materials, Tables S1–S3). No time- or dosage-dependent inhibition was observed. Higher doses of 1a were not explored due to its poor solubility (Supplementary Materials, Table S4). A pharmacokinetic (PK) study was not conducted because, in a mouse liver microsome metabolic stability study, compound 1a demonstrated a T1/2 of less than 2 min (Supplementary Materials, Table S5). The synthesis of analogues suitable for formulation is of considerable interest, and this work will be reported in due course.
General
All starting materials, reagents, and solvents were commercially available. All reactions were monitored by thin-layer chromatography on silica gel plates (GF-254) and visualized with UV light.
All melting points were determined on a micro melting-point apparatus, and the thermometer was uncorrected. 1H-NMR and 13C-NMR spectra were recorded in acetone-d6 or CDCl3 on a 400 or 600 MHz Bruker NMR spectrometer with tetramethylsilane (TMS) as an internal reference. All chemical shifts are reported in parts per million (ppm). High-resolution exact mass measurements were performed using electrospray ionization (positive mode) on a quadrupole time-of-flight (QTOF) mass spectrometer (Maxis Q-TOF, Bruker Inc., Billerica, MA, USA).
General Synthesis for 3-Aryl-Azetidines 5a-i
To a solution of sulfonyl chloride (1.0 equiv) in THF (0.2 M) at 0 °C was added hydrazine hydrate (2.5 equiv) dropwise. The reaction mixture was stirred at 0 °C until complete conversion was observed by thin-layer chromatography. The mixture was diluted with EtOAc, washed with brine, and dried over Na2SO4, and the solvents were removed in vacuo to give the sulfonylhydrazides. To a solution of sulfonylhydrazide (1.0 equiv) in MeOH (0.5 M) was added ketone (1.0 equiv). The reaction mixture was stirred at room temperature until complete conversion was observed by TLC. Solvents were removed in vacuo to give the sulfonylhydrazones. Sulfonylhydrazone (0.5 mmol, 1.0 equiv), boronic acid (0.75 mmol, 1.5 equiv), and cesium carbonate (0.75 mmol, 1.5 equiv) were placed in an oven-dried tube in vacuo for 30 min. The tube was backfilled with argon, followed by the addition of dry degassed 1,4-dioxane (2 mL, 0.25 M). The tube was sealed and heated to 110 °C for 18 h before being cooled to room temperature, quenched with NaHCO3 (2 mL of a saturated aqueous solution), and extracted with CH2Cl2 (3 × 5 mL). The organic phase was dried over MgSO4, and the solvents were removed in vacuo to give a residue, which was purified by flash column chromatography (10%–30% EtOAc/hexane) to give the title compounds.
General Synthesis for 7a-i
Compounds 5a-i (1 equiv.) were dissolved in 2 mL CH2Cl2/TFA (1:1, v/v) at 0 °C, and the mixture was stirred for 1 h at room temperature. The reaction mixture was then concentrated in vacuo, followed by azeotroping with dichloromethane three times to obtain the trifluoroacetate salts. To a stirred solution of Dap in dry dichloromethane at 0 °C was added HATU (1.5 equiv.). After 10 min, the previously prepared trifluoroacetate salts dissolved in dichloromethane were added to the reaction mixture, followed by the addition of DIPEA (3 equiv.). After stirring for 12 h at room temperature, the reaction mixture was diluted with EtOAc/CH2Cl2, washed with 1 M HCl, saturated NaHCO3 solution, water, and brine, dried, filtered, and concentrated in vacuo. Purification by silica gel column chromatography (EtOAc/petroleum ether, 1/1) afforded compounds 7a-i.
Cancer Cell Proliferation Inhibition Assay
The following cell lines, obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA), were used for the screening stage: HCT116 human colon cancer cells and A549 human lung carcinoma cells, cultured in RPMI 1640 medium supplemented with 10% FBS. Cell cultures were maintained in a humidified atmosphere of 5% CO2 at 37 °C. Cells were seeded at 2000 cells per well in 96-well plates in a volume of 200 µL per well. The test compounds were dissolved in DMSO and diluted with culture medium to different concentrations. After seeding for 24 h, the medium was removed, the test compound solution was added in duplicate, and incubation continued for 72 h at 37 °C in a humidified atmosphere containing 5% CO2. Control cells were treated with vehicle alone. During the last 4 h of incubation, the cells were exposed to tetrazolium dye (MTT) solution (5 mg/mL, 20 µL per well). The generated formazan crystals were dissolved in 100 µL of dimethyl sulfoxide (DMSO), and the absorbance was read spectrophotometrically at 570 nm using an enzyme-linked immunosorbent assay plate reader. The data were analyzed using GraphPad Prism version 5.0 (GraphPad Software, La Jolla, CA, USA). IC50 values were fitted using a non-linear regression model with a sigmoidal dose response.
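For illustration, a sigmoidal dose-response fit of this kind can be sketched with SciPy (the actual analysis here used GraphPad Prism; the viability data below are hypothetical):

```python
# Four-parameter logistic fit of viability vs. log concentration,
# a common model behind "sigmoidal dose response" IC50 fitting.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, bottom, top, log_ic50, hill):
    """Viability as a function of log10 concentration (4PL model)."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_c - log_ic50) * hill))

# Hypothetical viability (fraction of vehicle control) over a dilution series.
log_conc = np.log10([0.1, 0.3, 1, 3, 10, 30, 100])   # concentrations in nM
viability = np.array([0.98, 0.95, 0.80, 0.52, 0.22, 0.08, 0.04])

params, _ = curve_fit(four_pl, log_conc, viability, p0=[0.0, 1.0, 0.5, 1.0])
ic50_nM = 10 ** params[2]  # IC50 is the concentration at the curve midpoint
```

The fitted `log_ic50` is the inflection point of the curve; back-transforming gives the IC50 in the original concentration units.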
In Vivo Efficacy Study
Pathogen-free, 4–6 week-old female BALB/c athymic mice (Shanghai SCXK Laboratory Animal Technology Co. Ltd., Shanghai, China) were housed under sterile conditions. Human A549 xenografts were established in the right flanks of athymic mice according to the protocol of the National Cancer Institute. When the tumors reached a volume of 100 mm³, the mice were randomly assigned to control (n = 6) and treatment (n = 6 per group) groups. The control group was given lactate buffer, and the treatment groups were administered the tested compounds intravenously. Tumor size was measured individually on the indicated days. Tumor volume (V) was calculated as V = (length × width²)/2. The individual relative tumor volume (RTV) was calculated as RTV = Vt/V0, where Vt represents the last tumor size measurement and V0 the pre-dosing tumor size measurement. The animal experimental protocols were approved by the Animal Ethics Committee of the School of Pharmacy, Fudan University, and the mice were treated in accordance with international animal ethics guidelines.
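The volume formulas in this paragraph, written out directly (a trivial restatement for clarity, not the authors' code):

```python
# Tumor volume and relative tumor volume as defined in this section.
def tumor_volume(length_mm, width_mm):
    """V = (length * width^2) / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2

def relative_tumor_volume(v_t, v_0):
    """RTV = V_t / V_0: last measurement over pre-dosing measurement."""
    return v_t / v_0

v0 = tumor_volume(10, 8)  # a 10 x 8 mm tumor
```

Randomization at V ≈ 100 mm³ and RTV-based comparison normalize for differences in starting tumor size between animals.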
Conclusions
In summary, we described the design and synthesis of nine TZT-1027 analogues based on a conformational restriction strategy, in which 3-aryl-azetidines replace the phenylethyl group at the C-terminus. Two human cancer cell lines (A549, HCT116) were used to evaluate the potency of the synthesized compounds. Compound 1a showed the strongest cytotoxic activities against A549 and HCT116 cells (IC50 values of 2.2 nM and 2.1 nM, respectively). However, compound 1a could not achieve effective inhibition in A549 xenograft models at the dose levels tested, and its poor solubility limited further exploration of in vivo activity at higher doses. Our objective was to discover potent antitumor agents with a novel scaffold. In this study, compound 1a did not show any severe toxicity up to 5 mg/kg, suggesting a better safety potential than TZT-1027 (for which a dose of 4 mg/kg was reported to be toxic).
Accumulation of multiple neurodegenerative disease-related proteins in familial frontotemporal lobar degeneration associated with granulin mutation
In 2006, mutations in the granulin gene were identified in patients with familial Frontotemporal Lobar Degeneration. Granulin transcript haploinsufficiency has been proposed as a disease mechanism that leads to the loss of functional progranulin protein. Granulin mutations were initially found in tau-negative patients, though recent findings indicate that these mutations are associated with other neurodegenerative disorders with tau pathology, including Alzheimer’s disease and corticobasal degeneration. Moreover, a reduction in progranulin in tau transgenic mice is associated with increasing tau accumulation. To investigate the influence of a decline in progranulin protein on other forms of neurodegenerative-related protein accumulation, human granulin mutation cases were investigated by histochemical and biochemical analyses. Results showed a neuronal and glial tau accumulation in granulin mutation cases. Tau staining revealed neuronal pretangle forms and glial tau in both astrocytes and oligodendrocytes. Furthermore, phosphorylated α-synuclein-positive structures were also found in oligodendrocytes and the neuropil. Immunoblot analysis of fresh frozen brain tissues revealed that tau was present in the sarkosyl-insoluble fraction, and composed of three- and four-repeat tau isoforms, resembling Alzheimer’s disease. Our data suggest that progranulin reduction might be the cause of multiple proteinopathies due to the accelerating accumulation of abnormal proteins including TDP-43 proteinopathy, tauopathy and α-synucleinopathy.
Interestingly, loss-of-function GRN mutations have been identified in patients clinically diagnosed with Alzheimer's disease (AD) [12-19]. For example, p.Gly35Arg (c.103G > A) [19] and a single base pair deletion (c.154delA) were found in AD, the latter causing a frame shift (p.Thr52HisfsX2) that creates a premature stop codon [20]. The rs5848 (3′ UTR + 78C > T) variant was also found in AD [21] and is associated with an increased risk of this disease [22]. In addition, GRN mutations have been found in patients with corticobasal syndrome [10,23-26]. Furthermore, tau pathology, in addition to TAR DNA-binding protein of 43 kDa (TDP-43) pathology, was found in most members of two families harboring a GRN mutation [27]. These findings, together with the observation that tau pathology forms through activation of cyclin-dependent kinases in a P301L tau/GRN+/− mouse model [28], suggest that a decline in, or dysfunction of, PGRN may cause tau abnormalities. To explore these issues, we performed immunohistochemical staining and biochemical analyses on human familial GRN mutation cases and examined whether GRN reduction accelerates the accumulation of neurodegenerative-related proteins other than TDP-43.
In this study, using a novel, highly sensitive immunohistochemical method employing free-floating sections, we noted massive phosphorylated-tau-positive staining in some familial GRN mutation cases. Notably, in these same cases, we also observed significant phosphorylated α-synuclein positive staining. Additionally, detergent-insoluble tau and α-synuclein proteins were detected by immunoblot analysis. Similar tau pathology was not seen in other GRN mutation cases when employing standard immunohistochemistry based on paraffin-embedded sections. Our results suggest that at least some cases with GRN mutations may show a hitherto unrecognized accelerated pathological accumulation of tau and α-synuclein.
Materials and Methods
Ethics Statement. All patients, or in some cases in which the patient had died, next of kin, provided written consent for autopsy and postmortem analyses for research purposes. Written informed consent was obtained from all patients. This study was approved by the Ethics Committee of the Tokyo Metropolitan Institute of Medical Science (permission No. 15-1 and 15-5(1)), the Banner Sun Health Research Institute and University of Manchester. The study was performed in accordance with the ethical standards laid down in the 1964 declaration of Helsinki and its later amendments.
Cases. The brain tissues used in Study A from four patients with GRN and three controls were from the Banner Sun Health Research Institute (Sun City, AZ), Brain and Body Donation Program 29,30 . The additional nine GRN mutation cases in Study B were from the Manchester Brain Bank (UK). Ten control cases in Study B were registered in the autopsy archives of Dementia Research Project, Tokyo Metropolitan Institute of Medical Science. Case details are presented in Table 1. Seven different GRN mutations were recorded. Briefly, Case 1 had a c.1252C > T mutation resulting in p.Arg418X. Cases 2, 4, 9 and 12 had a c.1477C > T mutation resulting in p.Arg493X. A point mutation in a translation initiation codon (c.1A > C) predicted reduced mRNA levels in case 3. Three patients (cases 8, 14 and 16) shared c.1355delG mutation resulting in p.V452WfsX38. Case 10 had a c.1402C > T mutation resulting in p.Q468X and case 13 had a c.90_91insCTGC mutation resulting in p.C31LfsX34. Two patients (cases 11 and 15) shared c.388_391delCAGT mutation resulting in p.Q130SfsX124.
Study B: As we could not obtain sections in cases 8-26 that had been fixed and preserved under the same conditions as cases 1-7, we used formalin-fixed, paraffin-embedded sections instead. Therefore, sections from cases 8-26 were cut at 10 μm thickness, deparaffinized, incubated with 1% H 2 O 2 for 30 min to eliminate endogenous peroxidase activity in the tissue, then pretreated for 10 min in 10 mM sodium citrate buffer, pH6.0 at 110 °C. Sections were then treated with formic acid for 10 min (for α-synuclein staining) or 30 min (for tau staining). For tau immunostaining, sections were incubated in 10 μg/ml of trypsin (Sigma-Aldrich) at 37 °C for 10 min. They were also incubated with AT8 and anti-phosphorylated α-synuclein antibody (1175), overnight, as in Study A. Antibody labeling was performed by incubation with goat anti-rabbit IgG (1:1,000, Vector Laboratories, Burlingame, CA, USA) or horse anti-mouse IgG (1:1,000, Vector Laboratories) for 3 hours. The antibody labeling was visualized by incubation with avidin-biotinylated horseradish peroxidase complex (ABC Elite, Vector Laboratories, 1:1,000) for 3 hours, followed by incubation with a solution containing 0.01% 3,3′-diaminobenzidine, 1% nickel ammonium sulfate, 0.05 M imidazole and 0.00015% H 2 O 2 in 0.05 M Tris-HCl buffer, pH 7.6. Counter nuclear staining was performed with Kernechtrot stain solution (Merck, Darmstadt, Germany) or hematoxylin (Muto Pure Chemicals, Tokyo, Japan). The sections were then rinsed with distilled water, mounted on glass slides, treated with xylene, and coverslipped with Entellan (Merck). Tissue sections (cases 1-4) were also stained using a modified Gallyas-Braak method. Photographs were taken with a BX53 microscope (Olympus, Tokyo, Japan).
Histopathological assessments. Age-related plaque scores were determined using the Braak staging 31 .
For the purposes of this protocol, the letters correspond to the following assessments: 0 = no Aβ deposits; A = initial Aβ deposits found in basal portions of the isocortex; B = Aβ deposits found in virtually all isocortical association areas; C = Aβ deposits seen in all areas of the isocortex, including the sensory and motor core fields. For the evaluation of neurofibrillary changes (tau deposition), Braak staging was applied 31. In this protocol, the number corresponds to the following assessment of the area of tau deposition: Stage I = transentorhinal cortex, Stage II = entorhinal cortex, Stage III = hippocampus-subiculum, Stage IV = temporal cortex, Stage V = parietal cortex, and Stage VI = occipital cortex. The degree of accumulation of tau and α-synuclein was also evaluated qualitatively, and a score ranging from - (negative) to +++ (severe) was assigned.
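The staging protocol above can be restated as simple lookup tables (an illustration only; labels are abbreviated from the text, and the four-level semiquantitative scale is my reading of "- to +++"):

```python
# Braak amyloid (letter) and neurofibrillary (stage number) scores used
# in this section, encoded as dictionaries for reference.
ABETA_STAGE = {
    "0": "no Abeta deposits",
    "A": "initial deposits in basal portions of the isocortex",
    "B": "deposits in virtually all isocortical association areas",
    "C": "deposits in all isocortical areas, incl. sensory and motor fields",
}
TAU_STAGE = {
    1: "transentorhinal cortex",
    2: "entorhinal cortex",
    3: "hippocampus-subiculum",
    4: "temporal cortex",
    5: "parietal cortex",
    6: "occipital cortex",
}
SEMIQUANT = ["-", "+", "++", "+++"]  # negative ... severe
```

These tables mirror how the case scores in Tables 2 and 3 are reported.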
Results
GRN mutation cases used in the present study. The age, gender, clinical and pathological diagnoses and genetic information on the familial GRN mutation cases and control cases used in this study is summarized in Table 1. None of the GRN mutation cases examined in this study had MAPT mutation, and the MAPT haplotype was determined to be H1/H1 in cases 3, 4, 10 and 11, H1/H2 in cases 1 and 16, H2/H2 in cases 8, 12 and 14 ( Table 1). The MAPT haplotype of cases 2, 9, 13 and 15 were unknown.
Tau accumulation in GRN mutation cases. We observed a considerable number of tau-positive neurons, astrocytes and oligodendrocytes in all four Study A GRN mutation cases by either AT8 immunostaining (Fig. 2 and Table 2) or Gallyas silver staining (data not shown). In particular, case 2 showed massive AT8-positive structures in the entorhinal cortex, hippocampus (Fig. 2A), amygdala (Fig. 2B), temporal cortex (Fig. 2C) and insula. In the temporal lobe, the majority of tau-positive neuronal cytoplasmic staining appeared as pretangle-like forms (Fig. 2A,F,G,H). In the neuropil, fine tau-positive granules were abundant (Fig. 2C). Most of these granules appeared smaller than the tau-positive grains observed in argyrophilic grain disease (AGD) brains, and they were negative for Gallyas silver staining (data not shown). Furthermore, tau-positive astrocytic structures, resembling the "bush-like" astrocytes previously reported in AGD 33, were found in the cortex in all cases in Study A. Their morphology was clearly different from the tufted astrocytes of progressive supranuclear palsy (PSP), the astrocytic plaques of corticobasal degeneration (CBD) and the ramified astrocytes of Pick's disease (Fig. 2D). In the white matter, tau-positive oligodendroglial coiled bodies were observed (Fig. 2E).
Gallyas silver staining also revealed structures similar to those stained with AT8, including neurofibrillary tangles (NFTs), threads, and astrocytic and oligodendrocytic structures (data not shown). In the hippocampal region, many NFTs were found in cases 2 and 4 by both AT8 immunostaining and Gallyas silver staining (data not shown). The control cases of Study A showed mild to moderate AT8-positive structures and less glial tau deposition compared to GRN mutation cases ( Table 2). Case 1, 2, 4 and 5 exhibited tau deposition that corresponded to Braak stage IV, Case 3 corresponded to Braak stage V and Case 6 and 7 corresponded to Braak stage I, respectively ( Table 2). The degree of accumulation of tau was evaluated qualitatively and a score ranging from -(negative) to +++ (severe) was assigned (Supplementary Figure 1 and Table 2).
In Study B, eight of nine GRN mutation cases exhibited some AT8 immunoreactivity (Supplementary Figure 2 and Table 3), but the levels of phosphorylated tau deposition reached only up to Braak stage II, except for case 9 (Table 3), dissimilar to what was seen in the GRN mutation cases of Study A. No tau deposition, or only Braak stage I-II pathology, was observed in the control cases of Study B (Table 3). The tau pathology in the GRN mutation cases (Study A) was also detected by 3-repeat (3R)-tau (RD3) and 4-repeat (4R)-tau (anti-4R) specific antibodies, indicating that both 3R and 4R tau accumulation was present (data not shown).
α-synuclein accumulation in GRN mutation cases. Immunohistochemistry using an antibody to phosphorylated α-synuclein, revealed small round or dot-like structures and short thread-like structures in the temporal lobe (Fig. 3A-F), and oligodendroglial coiled body-like structures in the temporal white matter (data not shown) in cases 1-4 of Study A. The degree of accumulation of α-synuclein was evaluated qualitatively and a score ranging from -(negative) to +++ (severe) was assigned. In particular, case 4 exhibited atypical α-synuclein deposition in the temporal cortex (Fig. 3A,B and Table 2). Phosphorylated α-synuclein positive structures were not found in Study B using paraffin-embedded sections of GRN mutation cases (cases 8-16, Table 3) and control cases (cases 17-26, Table 3).
Amyloid β deposition in GRN mutation cases. Aβ deposition was found in the temporal lobe in three of four GRN mutation cases in Study A (Fig. 4 and Table 2). In cases 1 and 4, Aβ pathology was present mostly as diffuse plaques, corresponding to Braak stage A (Fig. 4A and D). In case 2, there was no Aβ pathology (Fig. 4B), but case 3 corresponded to Braak stage C (Fig. 4C). Of the control cases in Study A, three were similar to Braak stage A, but one (case 5) corresponded to Braak stage B. In Study B, Aβ accumulation in almost all cases corresponded to Braak stage 0, with the others showing Braak stage A (Table 3).
Immunoblot analyses. Biochemical features of accumulated tau in cases of GRN mutation were compared with those of CBD, PSP and AD (lane 5) by immunoblot analysis of the sarkosyl-insoluble fraction using C-terminal tau antibody (T46) (Fig. 5). The major tau band pattern in GRN mutation cases was a triplet of 68, 64 and 60 kDa, similar to that in AD, but different from that in CBD and PSP (Fig. 5). Tau in GRN mutation cases was also detected by 3R-tau (RD3) and 4R-tau (anti-4R) specific antibodies, indicating both 3R and 4R tau accumulation (Supplementary Figure 3). GRN mutation cases 1-4 were also studied with anti-phosphorylated α-synuclein antibodies (1175 and pSyn#64); very faint bands of phosphorylated α-synuclein were observed at 16 kDa (Supplementary Figure 4).
Fluorescence immunohistochemistry. Fluorescent double-staining of the temporal lobe of the GRN mutation case 4 was performed to examine whether TDP-43/tau (Fig. 6A), TDP-43/α-synuclein (Fig. 6B) or tau/α-synuclein (Fig. 6C) were co-localized in the abnormal structures. Colocalization of these proteins was very infrequent in most abnormal structures.
Discussion
The results of the present study show that GRN mutations causing PGRN reduction may accelerate the intracellular accumulation of not only TDP-43 but also tau and α-synuclein in the brains of familial FTD patients with GRN mutations. This suggests that GRN mutations causing PGRN reduction may be causative or represent risk factors for multiple proteinopathies (TDP-43 proteinopathy, tauopathy or α-synucleinopathy).
Immunohistochemical analyses of phosphorylated TDP-43 revealed a considerable number of neuronal cytoplasmic inclusions and dystrophic neurites in all GRN mutation cases (Fig. 1, Tables 2 and 3). In FTLD-TDP, TDP-43 pathology falls within four histological subtypes (types A-D) based on the predominant type of TDP-43-positive structures exhibited 32 . Type A is characterized by numerous short dystrophic neurites and crescentic or oval neuronal cytoplasmic inclusions. Cases of FTLD-TDP with a GRN mutation invariably display type A pathology [34][35][36] , and present observations were in accordance with this.
The very high sensitivity staining method that we used for Study A (cases 1-7) revealed that case 2 showed an atypical tauopathy with massive tau deposition in neurons, astrocytes and oligodendrocytes without Aβ deposition. Case 4 showed an atypical synucleinopathy, with diffuse α-synuclein-positive structures observed mainly in the neocortex. Case 3 exhibited massive Aβ deposition corresponding to Braak stage C and tau deposition corresponding to Braak stage V, consistent with AD pathology. Case 1 exhibited tau deposition that corresponded to Braak stage IV. It is possible that case 3 represents an incidental complication of AD, because the patient's age was in the late 70s. The pathology in the other two cases (cases 2 and 4), however, is very rarely observed in the normally aging brain in the mid-50s. The control cases in Study A also had levels of tau deposition corresponding to Braak stages I to IV (Table 2), but their average age was higher than that of the GRN mutation cases. When we compared abnormal tau deposition between GRN mutation and control cases using paraffin-embedded tissues, there were significant differences (Study B, Table 3), although the differences in Study B were less obvious than in Study A. Braak et al. reported that the proportion of Braak NFT stage III-IV is less than 10% at ages in the 50s to 60s 37 , so the GRN mutation cases in Study A showed tau accumulation atypical for normal aging.
It has been widely accepted for the past decade that there is no tau deposition in GRN mutation brains. However, using high-sensitivity immunohistochemical staining, we have found that hyper-accumulated tau and α-synuclein can occur in younger GRN mutation cases. Study B of the present work, using paraffin-embedded tissues, however, showed only mild tau deposition, as has been previously reported 8,9 . Hence, GRN mutation may accelerate deposition of tau and α-synuclein, but the level of abnormal protein deposition seen in routine paraffin-embedded sections from GRN cases might not be as strong as that seen in free-floating sections and may therefore go unrecognized. Re-analysis of other GRN mutation cases using this high-sensitivity immunohistochemical staining method, or immunoblot analyses of frozen brain tissue, might be necessary to gain a fuller appreciation of the level of tau pathology present in such cases.
Our previous report noted that a GRN mutation affected phosphorylated tau deposition in P301L tau transgenic mice 28 . The results of the present study support those observations in mice. It has been reported that PGRN deficiency causes lysosomal dysfunction 38 . We hypothesize that lysosomal dysfunction might reduce protein degradation in brain cells, allowing aggregation-prone, neurodegenerative disease-related proteins to be deposited more easily.
The features of tau pathology in the GRN mutation cases in this study are predominantly neuronal pretangles, abundant fine granules in the neuropil, and astrocytic and oligodendroglial pathology. It is interesting that fine tau-positive granules were also reported in the striatum of a brain with a GRN c.709-2A > G mutation 27 . Among the tauopathies, the tau pathology most similar to our cases might be that of AGD. However, the fine granules in our cases seemed smaller than the grains in AGD and were negative for Gallyas silver staining (data not shown). Although the form of the tau-positive astrocytes in our cases was similar to the "bush-like" astrocytes in AGD, they were Gallyas-positive, in contrast to the Gallyas-negative status of the "bush-like" AGD astrocytes 33 . No FUS accumulation was found in any GRN mutation case in Study A (data not shown); thus, there might be little or no relationship between GRN mutation and FUS deposition.
Immunoblot analysis of our cases using a C-terminal tau antibody revealed that the banding pattern of full-length tau in the sarkosyl-insoluble fraction appeared essentially the same as that seen in AD (Fig. 5). Staining with three- and four-repeat tau specific antibodies revealed that tau in the sarkosyl-insoluble fraction consists of both forms of tau (Supplementary Figure 2). These results suggest that accumulated tau in GRN mutation cases contains all six tau isoforms, just as in AD. However, the distribution of fine granular tau and the absence of, or only light, Aβ accumulation (Fig. 4) are different from AD pathology. Tau isoforms in GRN mutation cases were biochemically different from those in CBD and PSP (Fig. 5). GRN mutation cases may therefore represent a tauopathy different from AD, CBD, PSP and AGD.
In addition to neuronal and glial tau accumulation, the present study also revealed α-synuclein-positive structures, including small round, dot-like or thread-like structures in the cortex and oligodendroglial coiled body-like structures in the white matter, in the GRN mutation cases in Study A (cases 1-4, Fig. 3). Case 4 showed particularly striking phosphorylated α-synuclein pathology. However, the nine paraffin-embedded GRN mutation cases showed no α-synuclein-positive structures. This discrepancy might be caused by differences in fixation or preservation methods. Leverenz and colleagues reported that α-synuclein pathology was observed in two of seven brains with a familial GRN mutation; one case showed brainstem α-synuclein pathology, while the other showed cortical pathology 27 .
Accumulations of phosphorylated tau, α-synuclein and TDP-43 have been reported in the brains of Guam/Kii amyotrophic lateral sclerosis (ALS)-parkinsonism-dementia complex (PDC) patients 39 . The triplet tau band pattern (68, 64 and 60 kDa) on immunoblot analysis of the sarkosyl-insoluble fraction of GRN mutation cases (Fig. 5) appeared essentially the same among GRN mutation, AD and Guam/Kii ALS-PDC cases. Fine tau-positive granules have also been reported in the cerebral white matter of Guam-PDC cases, and their morphology seemed to resemble that of our cases. Hazy astrocytes were observed in Guam-PDC cases, but their morphology seemed to differ from that of our cases. To date, no GRN mutations have been reported in Guam/Kii ALS-PDC cases. Very recently, we reported that GRN mutation leads to lysosomal dysfunction 40 . We speculate that there could be common pathway(s) involving lysosomal function, or that these diseases share common features of protein aggregation, given the similarities (TDP-43, tau and α-synuclein deposition, and 3R/4R tau isoform aggregation).
In conclusion, using a highly sensitive free-floating immunohistochemical technique combined with western blotting, we have shown widespread pathological tau and α-synuclein deposition in neurons and glial cells in familial GRN mutation cases that is not apparent with standard immunohistochemical methods based on routine paraffin-embedded sections. Although the number of samples in this study was small and we recognize its limitations, our findings suggest that the pathologies seen in GRN mutation cases might better be termed "neuronoglial multiple proteinopathies".
Comprehensive investigation of the prevalence and risk factors of viral hepatitis B and C in PERSIAN Guilan Cohort Study
Abstract
Background Hepatitis B virus (HBV) and hepatitis C virus (HCV) cause two serious infectious diseases with high global health impact. The aim of this study was to evaluate the prevalence of HBV and HCV in the Prospective Epidemiological Research Studies of the Iranian Adults (PERSIAN) Guilan Cohort Study through immunological and molecular methods.
Methods Blood samples were obtained from 10520 enrolled participants. Complete biochemical and hematological assessments plus urine analysis were done. The presence of HBsAg, anti-HBs, anti-HBc and anti-HCV antibodies was evaluated for all participants, and HBeAg and anti-HBe antibody for HBV positive patients. HBV genomic DNA and HCV genomic RNA were extracted from positive serum samples. Real-time PCR assay was done to quantify HBV and HCV genomes. HCV genotyping was also performed.
Conclusion Our detected HBV and HCV prevalence were lower than other cities/provinces of Iran, which may be due to the lifestyle or other unknown reasons.
Background
The hepatitis B virus (HBV) is a viral agent whose target tissue is the liver; it can cause both acute and chronic illness [1]. According to 2016 World Health Organization (WHO) statistics, 240 million people who have remained HBsAg positive for at least 6 months are reported as HBV positive individuals [2]. Notably, more than 686,000 people die each year from the consequences of the virus, including cirrhosis and liver cancer [3]. The highest rates of hepatitis B are found in Africa and East Asia [4][5][6][7].
The hepatitis C virus (HCV) is a main cause of chronic liver disease and can lead to hepatocellular carcinoma, with a high economic burden [8][9][10][11]. It has a silent epidemiology and is at the same time a major blood-borne infection worldwide [12].
According to the latest global health statistics, 130-150 million people are infected with HCV [13] and 700,000 people die each year [14]. HCV has seven genotypes and 70 subtypes. HCV RNA assays are the most sensitive tests for HCV infection and are the gold standard for confirming hepatitis C infection [15]. Although certain population subgroups, such as hemophilia and hemodialysis patients, are more susceptible to HBV and HCV [20-25], evaluating the prevalence of these two viruses in the general population is also very important. Considering the importance of hepatitis B and C, this study discusses the prevalence of these viruses among participants of the Guilan province cohort.
Participants
This study is nested in the Guilan center of the Prospective Epidemiological Research Studies of the Iranian Adults (PERSIAN) cohort study [26,27], named the PERSIAN Guilan Cohort Study (PGCS). The PGCS started in September 2014 in Some'e Sara (GPS coordinates: latitude 37.308003, longitude 49.315022), Guilan, northern Iran. All residents between 35 and 70 years of age were enrolled. These 10520 people will be followed for at least 10 years to detect new diseases and identify underlying genetic susceptibility factors for chronic diseases. Moreover, participants were followed up yearly by telephone or other media and were encouraged to continue participating in the study. People who were unable to attend the clinic for examination or who refused to participate were excluded [28].
Sampling and biochemical assessments
Aseptic blood samples were collected from the cubital vein into vacutainers. Total numbers of WBCs, RBCs, platelets, lymphocytes, monocytes and granulocytes were counted.
The serum was harvested and stored at -20 °C until used for complete biochemical assessment. The Hb concentration and the levels of Hct, MCV, MCH, MCHC, RDW-CV, RDW-SD, plateletcrit, MPV and PDW were also evaluated. Urine samples were collected and used immediately for measuring specific gravity (SG), pH and creatinine level.
Virological assessments
The presence of HBsAg, anti-HBs, anti-HBc and anti-HCV antibodies was determined by electrochemiluminescence (Cobas e 411, Roche, Germany). For positive patients, these four tests plus the presence of HBeAg and anti-HBe antibody were measured again. HBV genomic DNA was extracted from positive serum samples with a viral DNA extraction kit (QIAGEN, Germany). HCV genomic RNA was extracted from positive serum samples with a viral RNA extraction kit (Roche, Germany). To quantify HBV and HCV genomes, qPCR was carried out using a commercially available TaqMan-based kit (QIAGEN, Germany) according to the manufacturer's instructions. HCV genotyping was done using the HCV Genotype Plus Real-TM kit (Sacace Biotechnologies, Italy).
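Quantitative kits such as the one above derive copy numbers from a standard curve relating Ct values to log10 template copies. The sketch below illustrates only that general calculation; the slope and intercept are placeholder values assumed for illustration, not the parameters of the kit used in this study.

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Estimate template copies from a qPCR Ct value via the
    log-linear standard curve Ct = slope * log10(copies) + intercept.
    A slope of -3.32 corresponds to ~100% amplification efficiency.
    Both parameters here are illustrative assumptions."""
    return 10 ** ((ct - intercept) / slope)

def efficiency(slope=-3.32):
    """Amplification efficiency implied by a standard-curve slope."""
    return 10 ** (-1 / slope) - 1
```

With these assumed parameters, each 3.32-cycle decrease in Ct corresponds to a tenfold increase in template, which is why viral loads are usually reported on a log scale (as in the copies/mL categories of the Results).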
Ethical consideration
This study was approved by the Ethics Committee of Guilan University of Medical Sciences (Ethics code: IR.GUMS.REC. 1396.254).
Statistical analysis
Qualitative data were expressed as frequency and percentage, and their association with HBV and HCV status was analyzed using the Chi-square test. Quantitative data were presented as mean ± SD, and differences between HCV/HBV positive and negative groups were analyzed using the two independent samples t test. All statistical analyses were performed using SPSS version 23. A P value less than 0.05 was considered statistically significant.
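The analyses above were run in SPSS. As a self-contained illustration, the 2x2 Pearson chi-square test (df = 1) used for associations such as infection status versus residence can be sketched in plain Python; the table values in any example call are hypothetical.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test for a 2x2 table [[a, b], [c, d]] with
    df = 1. Returns (statistic, two-sided p value). For df = 1 the
    survival function is P(X > x) = erfc(sqrt(x / 2))."""
    n = a + b + c + d
    margins = (a + b) * (c + d) * (a + c) * (b + d)
    if margins == 0:
        raise ValueError("all row and column totals must be non-zero")
    chi2 = n * (a * d - b * c) ** 2 / margins
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical table: rows = HBV positive/negative, columns = rural/urban
stat, p = chi2_2x2(10, 20, 30, 40)
```

For the 0.05 threshold used in the study, the corresponding critical value of the chi-square statistic at df = 1 is about 3.84.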
Results
Demographic characteristics of the participants and their HBV and HCV infection statuses are presented in Table 1. Most of our participants were female (53.5%), rural (56.1%), married (97.2%), with primary education (<12 years) (72.1%), and without smoking (75.2%) or alcohol consumption (85.3%). In addition, most of them had a history of surgery (63.3%) and hospitalization (80.6%), and had no history of transfusion (89.5%) or genital aphthae (98.8%). The HBV prevalence was 0.24% (95% CI, 0.16% to 0.36%) and the HCV prevalence was 0.11% (95% CI, 0.06% to 0.19%). The geographic distributions of HBV positive and HCV positive patients by gender are presented in Figure 1. Rural participants were significantly more often HBV positive than urban participants (P=0.045), while males were significantly more often HCV positive than females (P=0.013). No other significant associations were detected between the other evaluated demographic variables and HBV or HCV prevalence.
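The confidence intervals above can be approximately reproduced with a Wilson score interval. In the sketch below, the positive counts (25 HBV, 12 HCV) are back-calculated from the reported percentages and are therefore assumptions; the paper does not state which interval method was used, so the last digit may differ (an exact Clopper-Pearson interval, for instance, gives a slightly wider upper bound).

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Counts inferred from the reported percentages (not stated in the text):
# 0.24% of 10520 ~ 25 HBV positives, 0.11% of 10520 ~ 12 HCV positives.
hbv_lo, hbv_hi = wilson_ci(25, 10520)
hcv_lo, hcv_hi = wilson_ci(12, 10520)
```

Under these assumed counts the HBV interval comes out close to the reported 0.16%-0.36%, which is a useful sanity check on the inferred number of positives.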
The complete blood and urine analyses of our participants are presented in Tables 2 and 3.
Most HBV positive patients (52%) had fewer than 300 copies of HBV DNA per mL, while most HCV positive patients (58.4%) had 10⁵-10⁶ copies of HCV RNA per mL. The most frequently detected HCV genotype was 2a (Figure 2). First-degree relatives of all HCV positive patients were also checked for HCV infection by qPCR. Only the child of one patient had HCV infection, with the same genotype (1a) as her mother.
Discussion
In the present study, the prevalence of HBV and HCV in the Guilan site of the PERSIAN cohort was reported. We found prevalences of 0.2% and 0.1% for HBV and HCV, respectively. positive, as we found in this study, also reported previously [62]. It can be said that both HBV and HCV affect the liver tissue, and the observed changes in biochemical and hematological parameters can be related to these changes in hepatic function.
Conclusions
In summary, we found lower HBV and HCV prevalence compared to other regions of Iran.
Also, compared with previous reports from our province, Guilan, the HBV and HCV prevalence has decreased. This may be due to preventive strategies or an increase in people's medical knowledge, which must be evaluated in further studies.
Availability of data and materials
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
Ethical approval of the study was obtained from Guilan University of Medical Sciences.
(Grant number: 1397.163). Written consent was obtained after informing each participant of the purpose and importance of the study. To ensure confidentiality of participants' information, codes were used, whereby the name of the participant and any other identifier was not written on the questionnaire.
Consent for publication
Not applicable.
New vegetable varieties of Brassica rapa and Brassica napus with modified glucosinolate content obtained by mass selection approach
Background Glucosinolates (GSLs) constitute a characteristic group of secondary metabolites present in the Brassica genus. These compounds confer resistance to pests and diseases. Moreover, they show allelopathic and anticarcinogenic effects. All those effects depend on the chemical structure of the GSL. Modifying the content of specific GSLs would allow obtaining varieties with enhanced resistance and/or improved health benefits. Moreover, obtaining varieties with the same genetic background but divergent GSL concentrations will prompt studies on their biological effects. Objective and Methods The objective of this study was to evaluate the efficacy of two divergent mass selection programs to modify GSL content in the leaves of two Brassica species: nabicol (Brassica napus L.), selected for glucobrassicanapin (GBN), and nabiza (Brassica rapa L.), selected for gluconapin (GNA), through several selection cycles using chromatographic analysis. Results The response to selection fitted a linear regression model with no signs of variability depletion for GSL modification in either direction, but with higher efficiency in reducing the selected GSL than in increasing it. The selection was also effective in other parts of the plant, suggesting that there is GSL translocation within the plant or a modification of the synthesis pathway that is not organ-specific. There was an indirect response to selection in other GSLs; this information should therefore be considered when designing breeding programs. Finally, the populations obtained by selection have the same or even better agronomic performance than the original population. Conclusion Therefore, mass selection seems to be a good method to modify the content of specific GSLs in Brassica crops.
Introduction
The Brassica genus belongs to the Brassicaceae family and is the most economically important genus within this family as Brassica crops represent the basis of world supplies along with cereals (1). The world production of Brassica genus vegetables has increased considerably throughout recent decades. Cultivated area has grown from 1.6 million ha to 2.4 million ha from 1990 to 2020; while production has increased from 39.3 million tons to 70.9 million tons during the same time frame (2). These crops exhibit an engaging nutritive profile since they provide considerably high amounts of fiber and protein when compared to other horticultural crops (3), besides having high antioxidant activity. This is related to phenolic compounds, especially flavonoids, and vitamin C contents (4)(5)(6)(7) although it is also related to the carotenoid (8) and vitamin E (5) content. However, what differentiates Brassica crops from other horticultural crops is the presence of the secondary metabolites called glucosinolates (GSL).
GSL are a major class of secondary metabolites derived from amino acids. Their defining core structure is derived from select amino acids and comprises a β-thioglucosyl N-hydroxysulphate, containing a side chain and a β-glucopyranose moiety. GSL are grouped into three chemical classes based on the precursor amino acid: they are aromatic if derived from phenylalanine or tyrosine; aliphatic if derived from methionine, alanine, valine, leucine or isoleucine; and indolic if derived from tryptophan (9). Biosynthesis of GSL proceeds in three stages: (1) sidechain elongation of amino-acids, (2) development of the core structure, and (3) secondary side-chain modifications. Both extensive GSL sidechain modification and amino acid elongation are responsible for the more than 120 reported structures that show the chemical diversity of these compounds (10). When cellular damage occurs, GSL come into contact with plant myrosinase, a β-thioglucosidase that activates the generation of an unstable intermediate leading to the formation of biologically active chemicals, including nitriles, epithionitriles, thiocyanates, oxazolidine-2-thiones and/or isothiocyanates (11,12).
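The amino-acid-based classification above can be captured in a small lookup. This is purely an illustrative encoding of the rule stated in the text, not part of any analysis pipeline used in the study.

```python
# Mapping of precursor amino acids to GSL chemical class, following
# the classification given in the text (aromatic / aliphatic / indolic).
GSL_CLASS_BY_PRECURSOR = {
    "phenylalanine": "aromatic", "tyrosine": "aromatic",
    "methionine": "aliphatic", "alanine": "aliphatic",
    "valine": "aliphatic", "leucine": "aliphatic",
    "isoleucine": "aliphatic",
    "tryptophan": "indolic",
}

def gsl_class(precursor_amino_acid):
    """Return the GSL chemical class for a given precursor amino acid."""
    return GSL_CLASS_BY_PRECURSOR[precursor_amino_acid.lower()]
```

For example, the two GSLs targeted in this study, gluconapin and glucobrassicanapin, are methionine-derived and therefore fall in the aliphatic class under this rule.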
GSL derived chemicals are recognized for both their role in plant defense and human health. After tissue breakdown caused by pests and necrotrophic pathogens, GSLs are hydrolyzed into toxic products. Toxicity is caused by changes in the permeability of cell membranes, the stability of DNA, the regulation of the cell cycle and mitochondrial function of plant pests and pathogens. Moreover, the system GSLsmyrosinase is involved in the stability of the cuticle from plant surfaces, the stomatal opening, and on the signaling in the plant's innate immune response to microbial pathogens (12). Hydrolytic products from GSL have a chemo protective effect in humans. Two major groups of breakdown products, named isothiocyanates and indoles, have reported activity against many types of cancer in different stages of development by inducing detoxification enzymes (phase II) and inhibiting the activation of phase I enzymes. They also regulate cancer cell development by blocking the cell cycle and promoting apoptosis (13).
The increase in GSL content with health benefits and the reduction in the content of the harmful ones has been one of the main objectives in the improvement of Brassica crops. In this way, new varieties of rapeseed with no progoitrin and broccoli varieties with increased content of glucoraphanin have been produced. Progoitrin has a goitrogenic effect that has been proven in animals; whereas, glucoraphanin intake is related to the reduction of cancer risk or the maintenance of cardiovascular health (14). Different methods have been employed to modify the content of GSL in plants, such as genetic manipulation or crossbreeding (15)(16)(17)(18). Obtaining new Brassica varieties with modified GSL profiles is interesting, not only to have new products with benefits in human nutrition, but also to obtain valuable starting material for genetic studies on the role of GSL in the defense against pests and/or pathogens.
Vegetable crops of the species Brassica rapa L. and Brassica napus L. are broadly cultivated and consumed in the northwest of Spain. B. napus is cultivated for its leaves, receiving the local name of "nabicol". B. rapa is also cultivated for its leaves, receiving the local name of "nabiza". Crops of both species are employed for human consumption in soups and stews during the winter period. At the end of the vegetative period and before flowering, tops of nabicol and nabiza are also employed for human consumption. Tops consist of fructiferous stems with flower buds and surrounding leaves, which are typically consumed while still green and before flower opening (19).
We carried out two mass selection programs to modify the leaf concentration of the GSL named glucobrassicanapin (GBN) in nabicol and gluconapin (GNA) in nabiza, to obtain new varieties with the same genetic background but different frequencies of those alleles that determine the character of interest.
Leaves of both crops are consumed as horticultural products in northwestern Spain, hence the importance of making the selection for each specific GSL in this particular plant organ. Although previous studies have used mass selection as a method to modify GSL content in other Brassica species (17, 20), this is the first study that analyses the effects of mass selection in B. napus and B. rapa. Therefore, the main objective of this paper was to establish the effectiveness of a divergent selection program to modify the content of GBN in B. napus and GNA in B. rapa leaves. Although B. napus and B. rapa are consumed mainly for their leaves, consumption of the tops of the plant is also a common practice in the northwest of Spain. Therefore, it is interesting to measure the indirect effect of selection in this part of the plant and in the seed, which can potentially be employed to obtain oil or for fodder. Based on previous research, modification of a specific GSL has indirect effects on the concentration of other GSLs of the plant and may also affect the agronomic performance of the crop. Therefore, a second objective was to measure indirect effects of the selection to modify GSL concentration (17).
Divergent selection program evaluation and experimental design
Two landraces from northwestern Spain (Pontevedra) of the Brassica genus, kept at the Germplasm Bank of the Misión Biológica de Galicia (MBG), were used in two independent and divergent selection programs. MBG-BRS0163 is a nabiza variety (B. rapa), and MBG-BRS0063 is a nabicol variety (B. napus). The divergent selection program in B. rapa started in 2008, while the one in B. napus started in 2011. The schedule followed in both species was similar. Approximately 250 plants of cycle 0 were transplanted into two different isolation cages. In each one, the leaf GSL content of all the plants was assessed 120 days after sowing by Ultra-High-Performance Liquid Chromatography (UHPLC). After analysis, the 20% of plants with the desired characteristic (high or low GSL content) were selected, and the rest of the plants were removed before flowering. Cross-pollination among the selected plants in each cage was performed by bumblebees (Bombus terrestris). The selected plants were recombined with each other, and their seed mixture gave rise to the next cycle or generation. Following this schedule, six selection cycles were obtained in B. rapa and four in B. napus, in both directions of selection for the target GSL, to increase (HGNA, HGBN) or decrease (LGNA, LGBN) its content. To study the direct and indirect effects of selection, we evaluated cycles C0, C1, C3 and C6 of B. rapa and cycles C0, C2 and C4 of B. napus (Table 1).
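The cycle described above is truncation selection followed by recombination. A toy simulation of that loop, assuming a simple breeder's-equation model with made-up heritability and variance parameters (h2 = 0.5, unit phenotypic SD), illustrates the roughly linear per-cycle response that this kind of program is expected to produce.

```python
import random

def select_and_recombine(population, fraction=0.2, direction="high",
                         h2=0.5, sd_env=1.0, rng=random):
    """One cycle of divergent mass selection (sketch): keep the top or
    bottom `fraction` of plants by measured GSL content, then draw an
    offspring generation whose mean shifts by h2 * selection
    differential (breeder's equation, R = h2 * S). The h2 and sd_env
    values are illustrative assumptions, not estimates from the study."""
    ranked = sorted(population, reverse=(direction == "high"))
    k = max(1, int(len(population) * fraction))
    selected = ranked[:k]
    mean_pop = sum(population) / len(population)
    mean_sel = sum(selected) / len(selected)
    new_mean = mean_pop + h2 * (mean_sel - mean_pop)
    return [rng.gauss(new_mean, sd_env) for _ in range(len(population))]

rng = random.Random(1)
pop = [rng.gauss(10.0, 1.0) for _ in range(250)]  # cycle C0, ~250 plants
for _ in range(4):                                # four cycles, as in B. napus
    pop = select_and_recombine(pop, direction="high", rng=rng)
```

With a fixed selected fraction and constant variance, the per-cycle response is approximately constant, which is consistent with the linear regression of GSL content on cycle number reported in the abstract.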
Seeds from the selection cycles were first sown in seedbeds under greenhouse conditions. Approximately 6 weeks afterwards, all varieties were transplanted to the experimental station of MBG at Pontevedra (Salcedo, NW Spain, 42° 24′N, 8° 38′W). Transplanting was carried out in October, when the plants had reached the 5-6 leaf stage (Table 1). Each experimental plot consisted of two rows spaced 0.6 m apart, with plants in each row spaced 0.8 m apart. Plots were arranged in a randomized block design with three repetitions. Each variety was randomly assigned within each repetition. Due to the internal variability that characterizes local varieties, C0 plants were planted in two different plots in each repetition to increase sample size and minimize the experimental error due to such variability. The divergent selections of B. rapa and B. napus were evaluated independently in parallel trials.
In each plot, three bulks of 5 plants each were made from leaf and top samples collected at the optimal time of consumption. Leaves were taken 4 months after transplanting. Tops were taken before flowering time, between 4 and 6 months after transplanting. Samples were frozen in situ in liquid N2, immediately transferred to the laboratory and stored at -80 °C. All samples were freeze-dried (BETA 2-8 LD plus, Christ) for 72-96 h. The dried material was powdered using an IKA A10 mill (IKA-Werke GmbH & Co. KG), and the fine powder was used for GSL extraction. Seeds from the same varieties of both species were also analyzed, using three replications each; they were likewise powdered for GSL extraction.
GSL extraction
GSL were purified following the Sephadex/sulphatase protocol described by Kliebenstein (21), with some modifications. Ten mg (±1 mg) of each lyophilized powder sample was weighed, and two replicates were placed in 1.2 mL tubes held in 96-tube racks.
For the extraction of GSL, 400 μL of 100% (v/v) methanol preheated to 70°C, 10 μL of 0.3 M lead acetate, 120 μL of MilliQ water preheated to 70°C, and 10 μL of glucotropaeolin (GTP) as internal standard were used. Samples were mixed in a shaker (Retsch MM30) for 130 min at 25 s−1 and subsequently incubated in the dark on a rotary shaker (Orbital Midi, Ovan) at 250 rpm for 1 h. Tissues and proteins were pelleted by centrifugation at 3700 rpm for 12 min, and the resulting supernatant was used for the chromatographic analysis. Ninety-six-well MultiScreen filter plates (45 μm pore size) were loaded with Sephadex A-25 resin using a Millipore MultiScreen column loader. Three hundred μL of MilliQ water was added, and the resin was allowed to equilibrate for 2 to 4 h. After drying the columns with a vacuum pump (Millipore), 150 μL of supernatant was transferred to the 96-well plates, and the liquid was removed again by vacuum; this was repeated once more to load a total of 300 μL of plant extract. Columns were washed 4 times with 100 μL of 60% (v/v) methanol and another 4 times with 100 μL of MilliQ water, applying the vacuum pump between washes.
To desulfate the GSL, 10 μL of MilliQ water and 10 μL of sulfatase solution were added to each column, and the plates were incubated overnight at room temperature (22). Desulfoglucosinolates were eluted by placing a 96-well collection plate under the column plate, aligning both plates, and collecting the material with a vacuum pump. Columns were washed twice with 100 μL of 60% (v/v) ethanol and twice with 100 μL of MilliQ water, and the resulting samples were kept at −20°C until analysis by ultra-high-performance liquid chromatography (UHPLC).
GSL identification and quantification
To identify and quantify GSL, 3 μL of plant extract was used.
Chromatographic analyses were carried out on an Acquity UPLC® H-Class system coupled to a diode array detector (DAD) (Waters, MA, United States). The column was an Acquity UPLC® BEH C18 (50 × 2.1 mm i.d., 130 Å, 1.7 μm particle size) (Waters, MA, United States). The oven temperature was set at 35°C, and compounds were detected at 227 nm. GSL were separated with 25% acetonitrile (A) in water (B) at a flow rate of 0.6 mL × min−1 using the following gradient: initial conditions of 5% A (v/v), increasing to 10% A at 0.63 min, 30% A at 1.25 min, 70% A at 2.08 min, and 100% A at 2.71 min, held until 3.54 min; at 3.75 min the initial conditions were restored and maintained until 4.58 min. The total run time per sample was ≈5 min. Data were recorded with Empower 3 software (Waters). The type and amount of each GSL were estimated by comparing retention times with those of standards, using GTP as internal standard, and quantification was expressed in μmol/g DW.
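Quantification against the GTP internal standard follows the usual peak-area ratio calculation; the sketch below is an illustration only, and the internal-standard amount and response factor are hypothetical parameters not given in the text:

```python
def gsl_umol_per_g(peak_area, is_peak_area, is_nmol, response_factor=1.0,
                   sample_dw_mg=10.0):
    """Estimate a GSL concentration (umol/g dry weight) from UHPLC peak areas.

    peak_area       : integrated area of the desulfo-GSL peak
    is_peak_area    : area of the glucotropaeolin (GTP) internal-standard peak
    is_nmol         : nmol of GTP added to the extraction (assumed known)
    response_factor : relative UV response of the analyte vs. GTP at 227 nm
    sample_dw_mg    : dry weight extracted (the protocol weighs ~10 mg)
    """
    analyte_nmol = (peak_area / is_peak_area) * is_nmol * response_factor
    return analyte_nmol / sample_dw_mg  # nmol/mg is numerically umol/g
```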
Agronomic parameters
Morphological and agronomic traits were recorded in each plot throughout the growing cycle of both species. Early vigor was scored 1 month after transplanting on a subjective visual scale from 1 (very poor) to 5 (excellent). For each species, time to flowering was recorded, and the number of tops was counted approximately every 4 days to determine the date on which 50% sprouting was reached in each plot and thus set the precise time of top collection. At the end of the vegetative period, when plants were at the optimum harvest stage (approximately 5 months after sowing), the number of leaves and the height of 10 plants per plot were recorded for both B. napus and B. rapa. Thirty leaves from different plants in each plot were also collected and their fresh weight recorded. To estimate leaf moisture, 5 fresh leaves per plot were weighed (fresh weight) and dried for a week in an oven at ≈70°C; after this time, the leaves were weighed again (DW) to obtain a moisture value for each plot. The same procedure was carried out on the tops to obtain their moisture content per plot. Once the flowering period was over, the secondary tops of 10 plants from each plot were counted, and the tops from each plot were weighed together to obtain the average weight per top.
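The moisture estimate described above reduces to a fresh-versus-dry weight ratio; a one-line sketch (the function name is ours):

```python
def moisture_percent(fresh_weight_g, dry_weight_g):
    """Moisture content (%) of a sample from its fresh and oven-dry weights."""
    return (fresh_weight_g - dry_weight_g) / fresh_weight_g * 100
```

For example, a bulk of leaves weighing 50 g fresh and 6 g after oven drying has a moisture content of 88%.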
Statistical analysis
The GSL quantification results were statistically analyzed with the SAS program (SAS, 2011). To evaluate differences between selection cycles, an analysis of variance was carried out for the selection cycles of each species, studying both the content of individual GSL and the content of aliphatic, indolic, and total GSL in the three parts of the plant (leaves, tops, and seeds), using PROC GLM of SAS. Cycles were considered fixed factors, and replications were random factors. The means of the selection cycles were compared using Fisher's Least Significant Difference test (LSD, p ≤ 0.05). To analyze the selection response for GBN and GNA in the three organs, simple linear regression analyses were performed with PROC REG of SAS (SAS, 2011), with the GSL under selection as the dependent variable and selection cycle as the independent variable. The indirect selection response on other GSL was evaluated in leaves, tops, and seeds, using the selected GSL (GBN in B. napus and GNA in B. rapa) as the independent variable and the other GSL (individual GSL and the sums of aliphatic, indolic, and total GSL) as dependent variables.
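The per-organ selection-response fit is an ordinary least-squares regression of GSL content on cycle number. As a generic sketch (not the authors' SAS code), the slope ("a" in the results) and the determination coefficient R2 can be computed as:

```python
def simple_regression(x, y):
    """Ordinary least-squares fit of y on x; returns (slope, intercept, r_squared)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot
```

It would be called, for example, with x = [0, 1, 3, 6] (the evaluated B. rapa cycles) and y = the mean GNA content per cycle.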
The concentration of total GSL at cycle 0 was highest in seeds, followed by tops and leaves, in B. napus; whereas in B. rapa the highest GSL content was found in tops, followed by seeds and leaves (Supplementary Table S1). The higher concentration of GSL in seeds and tops compared to leaves coincides with previous reports in B. napus (23-25). Accumulation of GSL in the different organs of the plant is genetically regulated; for example, the gene BnaC02.GTR2 is a positive regulator of GSL accumulation in seeds but has a negative impact on vegetative tissue (26). Probably, when the plant is in the vegetative stage, it synthesizes GSL that are later mobilized first to the flowers and afterwards to the seeds. In this way, seeds carry a high concentration of these secondary metabolites, needed for defense in the first stages of germination before the seedlings are able to synthesize GSL themselves.
Regarding the chemical classes, aliphatic GSL predominated in all tissues of both species and were present in higher percentages in B. rapa than in B. napus (Supplementary Tables S1, S2). Indolic GSL were more abundant in B. napus than in B. rapa and, in both cases, predominated in leaves, followed by tops and seeds. GNT was the only aromatic GSL detected, and it was found in higher proportion in all parts of B. napus than in those of B. rapa. In both species, tops were the part of the plant with the highest GNT content, followed by leaves and seeds.
The profile of B. napus leaves was dominated by GBN, PRO, and GNA, in agreement with previous reports in the same species (27-30). However, the profile changed in other parts of the plant: in tops and seeds, the predominant GSL was PRO, followed by GBN and GNA in tops and by GNA and GBN in seeds (Supplementary Table S1). GNA was the main GSL in the three organs of B. rapa, followed by GBN and MeOHGBS in leaves, GBN and GNT in tops, and OHGBS and GBN in seeds. The profile of B. rapa agrees with those found by Kim (31), Padilla (32), and Francisco (19), who reported GSL proportions similar to our results. The leaf GSL profile is thus very similar to that found by other authors, and the changes in total concentration and profile in other parts of the plant also agree with previous reports in Brassica species such as B. oleracea (17,24,29,33,34) and B. napus (23,33).
Direct response to divergent mass selection
The regression coefficients of the target GSL on selection cycles were positive and significant in leaves, both for GBN in B. napus (R2 = 0.852; a = 1.16839; p = 0.0162; Figure 1) and for GNA in B. rapa (R2 = 0.6979; a = 3.30873; p = 0.0119; Figure 1). It can therefore be concluded that mass selection is an effective method to increase or decrease the concentration of individual GSL in these species. Moreover, our results suggest that the concentration of these compounds is a trait with a high heritability coefficient, although other authors have found a substantial contribution of non-additive gene effects as well (35). This study coincides with previous works in which the efficacy of this method to modify the concentration of other GSL was tested in B. rapa (20) and B. oleracea (17). However, the selection response was asymmetric, being more effective at decreasing the content of the target GSL than at increasing it, in agreement with Sotelo (17) and Korkovelos and Goulas (36). One possible explanation of this effect is the depletion of the variability available to increase GSL concentration. Evolutionarily, as a defensive mechanism (37-39), GSL accumulation may have provided certain advantages over plants with reduced GSL content (40). Possibly, a trend toward higher GSL content has occurred over generations, leaving a greater margin of genetic variability for GSL reduction.
Although selection was carried out on leaves in this work, we also analyzed the indirect response in other parts of the plant. The results showed a positive and significant linear regression coefficient for GBN concentration through selection cycles in B. napus tops (R2 = 0.7195; p = 0.0439), whereas the regression of GBN in seeds was non-significant (Table 3; Figure 1). In B. rapa, positive and significant linear regression coefficients were obtained for GNA content through cycles in tops (R2 = 0.7976; p = 0.0042) and seeds (R2 = 0.7346; p = 0.0085; Table 4; Figure 1). Therefore, modification of the concentration of the target GSL in leaves leads to modification in other plant organs. As observed in leaves, the response in other parts of the plant was also asymmetric. This indirect response may be due to selection on genes operating in different parts of the plant or to transport between them (15,16).

FIGURE 1
Linear regressions of GBN on selection cycles in Brassica napus (A) and GNA on selection cycles in Brassica rapa (B) in the three parts of the plant studied.
Indirect response to divergent mass selection in other GSL
A regression analysis using leaf GBN content as the independent variable and the remaining GSL as dependent variables was performed for the three parts of the plant (Table 3). We found significant positive regression coefficients of GBN with GAL in leaves (p = 0.0136) and seeds (p = 0.0326; Table 3; Figure 2), while no significant regressions were found in tops. GAL is the precursor of GBN in the biosynthetic pathway of aliphatic GSL; therefore, we are probably selecting one or several genes acting upstream of the GAL synthesis step.
In the B. rapa selection, we found positive and significant regression coefficients of GNA with aliphatic and total GSL (Table 4; Figure 3) in the three parts of the plant analyzed. Negative regression coefficients were found with GBS (p = 0.0068), OHGBS (p = 0.0031), and indolic GSL (p = 0.0021) in leaves, and with OHGBS (p = 0.0041) and indolic GSL (p = 0.017) in tops. Therefore, when the concentration of GNA was modified, the concentrations of aliphatic and total GSL changed in the same direction as GNA, whereas indolic GSL were modified in the opposite direction. These results make sense considering that GNA is aliphatic and the major GSL in this species. Aliphatic and indolic GSL are synthesized through two different pathways with independent regulation; the indirect effect on indolic GSL may respond to cross-talk between the two pathways (41) and to the need to save resources for defense. The concentrations of other individual GSL were also modified in the GNA divergent selection. In tops, we found positive and significant regression coefficients for GBN (p = 0.0497) and GNT (p = 0.0125), while in seeds we found a positive indirect response in the concentrations of OHGBS (p = 0.0410) and GBN (p = 0.0445), together with a negative regression coefficient for GNL (p = 0.0094; Table 4; Figure 3). The indirect response of selection on other GSL has important implications for plant breeding: improving the concentration of a beneficial GSL may have undesired effects on other GSL, so this information should be considered when designing breeding programs.
Indirect response to divergent mass selection in agronomic parameters
The B. napus and B. rapa local varieties used to start the divergent selection cycles had shown good agronomic performance as crops in previous evaluations (13,42,43), as well as intra-varietal variability for GSL content. Besides their role in plant defense, GSL can also interfere with other processes in the plant; some evidence suggests cross-talk of the GSL biosynthetic pathway with hormone metabolism, stomatal aperture, biomass, and flowering (44-47). Therefore, modifying GSL concentration may have indirect effects on morphological and agronomic characters of the cultivated varieties, which may affect final crop production or other quality-related traits. In previous field evaluations, some authors found negative correlations between GSL content and leaf colour in B. juncea (48), showing that the lighter the leaf colour, the higher the GSL content. In the same species, Merah (49) found that both total GSL and GNA content were positively related to seed yield, measured as seed filling duration and thousand-seed weight. However, the relationship between GSL content and agronomic and/or production traits had not been studied in other Brassica crops. In our experiment, an increment in the number of tops per plant (R2 = 0.4952; a = 0.04730; p = 0.0469) and a significant reduction in top moisture (R2 = 0.5778; a = −0.08632; p = 0.0289) were associated with an increase in leaf GNA content in B. rapa. Therefore, in the B. rapa selection, populations with increased GNA content also have a high number of tops and low PRO content, making them interesting from an agronomic and human-health point of view. We did not find any indirect effect of selection on agronomic parameters in the B. napus selection. In conclusion, the populations obtained by mass selection have the same agronomic performance as, or even better than, the original populations.
Conclusion
Divergent mass selection was an effective method to quantitatively modify the leaf content of GBN in B. napus and GNA in B. rapa, suggesting that there is high genetic variability and heritability for GSL content within the studied species. An asymmetrical response to selection was observed in both species, as it was more effective to decrease the selected GSL than to increase it. The selection was also effective in other parts of the plant, suggesting that there is GSL translocation within the plant or a modification of a synthesis pathway that is not organ-specific. Both divergent selection programs on aliphatic GSL had an indirect response on other aliphatic GSL, but only the B. rapa selection showed an indolic GSL response, which was mostly negative. The indirect response of selection on other GSL has important implications for plant breeding: selection to increase the content of a specific GSL may have undesired effects on other GSL, so this information should be considered when designing breeding programs. Finally, the populations obtained by selection have the same agronomic performance as, or even better than, the original populations. Therefore, mass selection seems to be a good method to modify the content of specific GSL in Brassica crops. The populations obtained in this study represent valuable material to broaden understanding of the biological effects of GSL in these species. In addition, we have obtained new Brassica varieties enriched in GSL content and with good agronomic performance that can be released onto the market.

FIGURE 2
Linear regressions of other GSL with an indirect response to GBN selection cycles in Brassica napus in leaves (A) and seeds (B).
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Status of mental and social activities of young and middle-aged patients after papillary thyroid cancer surgery
Background Papillary thyroid cancer (PTC) is prevalent among younger populations and has a favorable survival rate. However, a significant number of patients experience psychosocial stress and a reduced quality of life (QoL) after surgical treatment. Therefore, comprehensive evaluations of the patients are essential to improve their recovery. Methods The present study enrolled 512 young and middle-aged patients diagnosed with PTC who underwent surgery at our institution between September 2020 and August 2021. Each participant completed a series of questionnaires: Generalized Anxiety Disorder 7 (GAD-7), European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30), Thyroid Cancer-Specific Quality of Life Questionnaire (THYCA-QoL), and Readiness to Return-to-Work Scale (RRTW). Results GAD-7 data showed that almost half of the study subjects were experiencing anxiety. Regarding health-related quality of life (HRQoL), participants reported the highest levels of fatigue, insomnia, voice problems, and scarring, with patients in anxious states reporting worse symptoms. Based on RRTW, more than half of the subjects had returned to work and had better HRQoL compared to the others who were evaluating a possible return to work. Age, gender, BMI, education, diet, residence, health insurance, months since surgery, monthly income, and caregiver status were significantly correlated with return to work. Additionally, having a caregiver, higher monthly income, more time since surgery, and living in a city or village were positively associated with return to work. Conclusion Young and middle-aged patients with PTC commonly experience a range of health-related issues and disease-specific symptoms following surgery, accompanied by inferior psychological well-being, HRQoL, and work readiness.
It is crucial to prioritize timely interventions targeting postoperative psychological support, HRQoL improvement, and the restoration of working ability in PTC patients.
Introduction
Thyroid cancer (TC) has become the most prevalent malignancy of the endocrine system worldwide in the last few decades (1). Females make up the majority of TC patients (2,3), and papillary thyroid cancer (PTC) is the most predominant type of TC, accounting for 95% of differentiated thyroid cancers (4,5). In China, the incidence of TC has increased dramatically in the last 30 years; the National Cancer Center of China reported that approximately 220,000 new cases of TC were diagnosed in 2022, with the disease tending to develop at a younger age (6,7). Surgery is the most common treatment for TC, and the 10-year overall survival rate for patients can reach 90% or higher (8,9). However, a significant proportion of patients suffer from postoperative sequelae that have serious repercussions for their quality of life (QoL), such as anxiety and depression, voice changes, and scarring (10,11), which can negatively impact both their mental and physical recovery (12,13). Therefore, given the high postoperative survival rate for patients with PTC, it is essential to consider health-related quality of life (HRQoL) and social function when assessing the recovery quality of patients after surgical procedures.
HRQoL is now considered an essential measure to assess the outcomes of clinical treatments and interventions (14,15). Although it is widely acknowledged that PTC has a better clinical cure rate and lower death rate than other malignant diseases (16-18), the HRQoL status of postoperative patients with PTC is similar to or even lower than that of patients with other malignant tumors (19-21). Therefore, timely targeted intervention after surgical treatment in PTC patients has become an increasingly important goal (22), and improving social activity function and HRQoL (23) is crucial in achieving it. However, few studies have assessed the relationship between HRQoL and social function in postoperative PTC patients, or their potential influencing factors.
In addition, there remains a lack of comprehensive studies on the postoperative psychological status, social activity ability, and QoL of these patients. A study conducted by Wang et al. in Hangzhou, China examined HRQoL in community-based survivors of thyroid cancer, with a large sample size and an exploration of multiple factors associated with QoL (24). However, that study did not investigate the psychological status and social activities of the patients, and its clinical information was collected from individuals who had experienced thyroid cancer many years earlier, leading to potential recall bias. Another recent study investigated the psychological status of PTC patients who underwent surgery but similarly lacked a comprehensive examination of other postoperative aspects (25). Likewise, several other studies concentrate primarily on evaluating either postoperative QoL (26-29) or the postoperative psychological well-being of patients (30-33), while neglecting other important dimensions. Therefore, comprehensive studies encompassing multiple dimensions and their related factors in PTC patients after surgery are still necessary.
This study aims to comprehensively evaluate the psychosocial performance of patients with PTC after surgery, including their psychological state, QoL, readiness to return to work, and related factors. The findings may provide clinical guidance for individualized postoperative interventions in patients with PTC, with the ultimate goal of improving their QoL after surgery.
Setting and population
This study is a population-based cross-sectional survey conducted at the Second Affiliated Hospital of Air Force Medical University, Xi'an, China, from September 1st, 2020, to August 31st, 2021. Patients with pathologically diagnosed PTC according to the American Joint Committee on Cancer (AJCC) 8th edition guidelines (9) were recruited. The Ethics Committee for Clinical Research and Animal Testing of the Second Affiliated Hospital of Air Force Medical University approved this cross-sectional study (ethical approval number: K202205-20), and all patients provided written informed consent. No financial compensation was provided to the patients.
The inclusion criteria for participants were as follows: (a) aged between 18 and 59 years (defined as young and middle-aged); (b) had undergone surgical procedures and been diagnosed with PTC through postoperative pathology; (c) were conscious, had normal hearing, and had normal cognitive comprehension and expression abilities without any psychiatric disorders; and (d) agreed to participate in the study and signed an informed consent form.
The exclusion criteria for this study were patients who (a) were pregnant; (b) experienced serious postoperative complications (e.g., nerve injury and iatrogenic hypoparathyroidism); (c) had a history of anxiety, depression, or organic psychosis; (d) had undergone surgery for tumors in the past; (e) had received radioactive iodine treatment; or (f) had primary hyperthyroidism or hypothyroidism before surgery.
Data collection
The relevant information was collected 1-6 months after surgery. Basic information, including sociodemographic and clinical characteristics, was collected for all eligible patients through self-report and hospital case records. Specifically, the data included age, gender, ethnicity, educational background, smoking and drinking habits, dietary status, residence, marital status, health insurance, time after surgery, occupation, monthly income, caregiver status, type and scope of surgery, intake of levothyroxine, volume of physical activity per week, and daily intake of fruits and vegetables. Notably, participants' weekly physical activity was measured as the number of days per week on which they performed moderate-intensity exercise for 30 minutes (e.g., brisk walking and housework) (34), with responses scored on a four-point scale: 1 = 0 days; 2 = 1 to 2 days; 3 = 3 to 4 days; 4 = at least 5 days. Moreover, an intake of 100 g of raw fruits and vegetables was considered one portion, and participants reported their intake accordingly (24,35).
Importantly, all eligible patients were asked to complete four standard questionnaires that have been widely used and have demonstrated robust reliability and validity in Chinese: the Generalized Anxiety Disorder 7 (GAD-7) (36,37), the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30) (38-40), the Thyroid Cancer-Specific Quality of Life Questionnaire (THYCA-QoL) (8,41), and the Readiness to Return-to-Work Scale (RRTW) (42).
The GAD-7 questionnaire was used to evaluate potential postoperative anxiety. The main statistical indicator of the GAD-7 is the total score, the sum of the item scores, which ranges from 0 to 21. The severity of anxiety can be assessed from the GAD-7 score: 0-4 indicates no anxiety, 5-9 mild anxiety, 10-14 moderate anxiety, and 15-21 severe anxiety. Notably, the GAD-7 is also used as a diagnostic tool for anxiety symptoms, with a cutoff value of 10 or higher (43).
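The GAD-7 scoring rule described above is straightforward to encode; the severity bands and the ≥10 cutoff follow the text, while the function name is ours:

```python
def gad7_score(item_scores):
    """Score a completed GAD-7: seven items, each rated 0-3.

    Returns (total, severity_band, screens_positive), where screens_positive
    applies the diagnostic cutoff of total >= 10 used in the study.
    """
    if len(item_scores) != 7 or any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("GAD-7 requires seven item scores in the range 0-3")
    total = sum(item_scores)
    if total <= 4:
        band = "no anxiety"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    else:
        band = "severe"
    return total, band, total >= 10
```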
Additionally, the EORTC QLQ-C30 was used to investigate the general profile of QoL. It contains 30 items, including a global QoL scale, five functional scales (physical, role, cognitive, emotional, and social), three symptom scales (fatigue, pain, and nausea/vomiting), and several single items assessing common symptoms (dyspnea, loss of appetite, insomnia, constipation, diarrhea, and financial difficulties). The questions refer to the previous week, and each item is scored on a four-point response scale ranging from 1 = "not at all" to 4 = "very much," except for the global QoL items, which are scored on a seven-point modified linear analog scale ranging from 1 = "very poor" to 7 = "excellent." After linear transformation, all scales and individual items are expressed on a 0-100 range. Higher scores on the functional and QoL scales indicate better functioning and HRQoL, while higher scores on the symptom scales indicate more complaints.
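The linear transformation mentioned above follows the standard EORTC scoring approach: the raw score is the mean of a scale's items, and functional scales are reversed so that 100 is best. A sketch, assuming those standard formulas:

```python
def qlq_c30_scale_score(item_scores, scale_type):
    """Transform EORTC QLQ-C30 item scores into a 0-100 scale score.

    scale_type: "functional", "symptom", or "global" (the global QoL items
    use a 1-7 response scale; all other items use 1-4).
    """
    raw_score = sum(item_scores) / len(item_scores)
    item_range = 6 if scale_type == "global" else 3
    if scale_type == "functional":
        # Reversed so that higher scores mean better functioning
        return (1 - (raw_score - 1) / item_range) * 100
    # Symptom scales and global QoL: higher raw scores map to higher scores
    return ((raw_score - 1) / item_range) * 100
```

The THYCA-QoL symptom scales, scored "in the same way as for the EORTC QLQ-C30" per the text, would reuse the symptom branch.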
In addition to the EORTC QLQ-C30, the THYCA-QoL was used to evaluate the QoL of PTC patients postoperatively. The THYCA-QoL is a thyroid cancer-specific QoL scale designed to measure issues specific to patients with TC; it is used along with the Chinese version of the EORTC QLQ-C30 to assess the overall impact of TC and its treatment on patients' QoL. The scale consists of 24 items divided into seven symptom scales (neuromuscular, voice, concentration, sympathetic, throat/mouth, psychological, and sensory) and six single items (scar, chilly, tingling, weight gain, headache, and sex). All items refer to the past week, except for the sexual interest item, which refers to the past 4 weeks. All items are rated on a 4-point scale (not at all, somewhat, fairly, and very much) and scored from 1 to 4. The scores for each scale are calculated in the same way as for the EORTC QLQ-C30.
Of note, the RRTW scale was used to examine the employment status of patients with PTC after surgery. The RRTW is a 22-item questionnaire with separate components for working (9 items) and nonworking individuals (13 items). For working individuals, there are two stages: uncertain maintenance (UM), which explores the worker's struggle to remain employed, and proactive maintenance (PM), which examines coping mechanisms and strategies to manage work in difficult situations that could lead to a setback. For the nonworking population, there are four stages. The precontemplation (PC) stage documents the lack of desire or plan to return to work. The contemplation (C) stage refers to when the individual begins to consider returning to work (RTW). The preparation for action-self-evaluative (PA-S) stage measures the degree of readiness for RTW, strategies to make work manageable, and the making of an actual plan for RTW (e.g., determining an actual date). Finally, the preparation for action-behavioral (PA-B) stage measures the degree of mental readiness and the individual's active involvement in building strength for RTW. The RRTW scale has demonstrated satisfactory construct validity for most stages of vocational rehabilitation and has shown an association with actual work outcomes.
Statistical analysis
Descriptive statistics were used to summarize the demographic data of patients with PTC after surgery, with categorical variables presented as percentages and continuous variables presented as means and standard deviations (SD). Categorical variables were compared using either the chi-square test or Fisher's exact test. A two-tailed p-value of <0.05 was considered statistically significant. Univariate logistic regression analysis was performed to identify variables associated with anxiety and return-to-work status. Variables with a p-value of <0.2 were entered into the multivariable regression model as independent predictive factors associated with psychological status and RRTW after surgery in young and middle-aged PTC patients (44). Backward stepwise selection was applied using the likelihood ratio test. All statistical analyses were performed using SPSS version 26.0 (IBM Corporation, Armonk, NY).
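The two-step variable-selection strategy (univariate screening at p < 0.2, then multivariable modeling) can be illustrated as below; the p-values are placeholders and the helper is an illustration, not the SPSS procedure itself:

```python
def screen_for_multivariable(univariate_p_values, threshold=0.2):
    """Keep candidate predictors whose univariate p-value is below threshold.

    univariate_p_values: mapping of variable name -> univariate p-value.
    Returns the names to carry into the multivariable logistic model.
    """
    return [name for name, p in univariate_p_values.items() if p < threshold]
```

For example, `screen_for_multivariable({"age": 0.04, "gender": 0.31, "income": 0.19})` keeps "age" and "income"; backward stepwise elimination would then prune this set using the likelihood ratio test.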
Characteristics of the study population
In this study, 569 patients were diagnosed with PTC and underwent thyroid surgery during the research period. Ultimately, 512 patients were included in the study based on the inclusion and exclusion criteria (Figure 1). As shown in Supplementary Table S1 and Supplementary Figure S1, the PTC patients who participated in the research were predominantly female (81.3% of all patients). The age distribution was mainly between 30 and 50 years (59.4%). Over one-third (175/512, 34.2%) of the patients had a bachelor's degree or higher. The majority of the patients were nonsmokers (91%) and did not drink alcohol (87.5%). Additionally, the patients mostly resided in cities (68.8%), with the numbers living in rural areas and villages being almost equal (17.6% and 13.7%, respectively). Most of the patients were married (90%), and only 1.8% and 1.4% of the participants were divorced or widowed, respectively. Notably, medical health insurance covered the majority of the study population, with 502 (98.0%) patients insured. More than half (292/512, 57.0%) of the patients were employees or full-time students, and almost half (46.1%) received a monthly income in the range of ¥2000-5000. In terms of surgical procedure, 330 patients (64.5%) preferred traditional open surgery, and a total of 260 (50.8%) had undergone thyroidectomy with lymphatic dissection, which increased both the extent and the complexity of the surgery. During postoperative recovery, the majority of patients were cared for by their relatives (358/512, 69.9%); the other 154 patients lacked personal caregivers after surgery. Furthermore, the vast majority of patients had high postoperative compliance and took levothyroxine regularly (504, 98.4%). Additionally, more than one-third of the patients (196/512, 38.3%) performed 1-2 days of exercise per week during postoperative recovery. However, the overall fruit and vegetable intake of the surveyed population was very low, with only 60 patients (11.7%) consuming more than 2 portions of fruit and vegetables per day.
Evaluation of the anxiety status and its related factors of the study population after surgery
All patients with PTC were given a final standard score that allowed an assessment of their anxiety level according to the GAD-7. As shown in Table 1, no patient had a normal score: 249 subjects experienced mild anxiety (mean GAD-7 score of 7.49 ± 0.73), 179 experienced moderate anxiety (mean GAD-7 score of 12.20 ± 1.40), and 84 suffered from severe anxiety (mean GAD-7 score of 17.94 ± 2.48). The cutoff value for anxiety when using the GAD-7 as an auxiliary diagnosis is 10 (i.e., a GAD-7 score ≥10 is considered clinically significant anxiety). Based on this cutoff, the study population was divided into two groups: nonanxiety (249 individuals, 48.63%) and anxiety (263 individuals, 51.37%).
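The severity banding and the ≥10 anxiety cutoff used to split the cohort can be sketched as follows. This is a minimal illustration: the band boundaries 5/10/15 follow the standard GAD-7 convention, and the example scores below are hypothetical, not data from the study.

```python
def gad7_band(score: int) -> str:
    """Map a GAD-7 total score (0-21) to a severity band."""
    if score < 5:
        return "minimal"
    if score < 10:
        return "mild"
    if score < 15:
        return "moderate"
    return "severe"

def is_anxious(score: int) -> bool:
    """Auxiliary-diagnosis cutoff used in the study: GAD-7 >= 10."""
    return score >= 10

# Hypothetical example scores, for illustration only.
scores = [7, 12, 18, 4, 10]
bands = [gad7_band(s) for s in scores]
groups = ["anxiety" if is_anxious(s) else "nonanxiety" for s in scores]
```

With this rule, the 249 mild cases fall in the nonanxiety group, while the moderate and severe cases (179 + 84 = 263) form the anxiety group, matching the 48.63%/51.37% split reported above.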
To explore the factors associated with anxiety, we conducted a comparative analysis of various characteristics between patients with PTC who were and were not experiencing anxiety. The data demonstrated a significant difference in age between the two groups (P = 0.042), indicating that patients with PTC aged between 31 and 40 were more likely to experience anxiety after thyroid surgery than other age groups. Moreover, there was a statistically significant difference in anxiety status between males and females; specifically, anxiety symptoms were more prevalent in females than in males (85.9% vs. 14.1%, P = 0.005). Additionally, the daily intake of vegetables and fruits emerged as a potential indicator of anxiety among patients with PTC: patients who consumed either too few or too many fruits and vegetables per day were more likely to experience anxiety (P = 0.041). In summary, our data revealed that age, gender, and daily fruit and vegetable intake were significant factors associated with anxiety status in the study subjects (Supplementary Table S2).

Figure 1. Diagram of the data collection process. This study included young and middle-aged patients diagnosed with PTC admitted to our department from Sep. 2020 to Aug. 2021; a total of 569 patients were enrolled initially. Then, according to our inclusion and exclusion criteria, 57 patients were excluded as follows: 3 patients suffered from other cancers; 21 patients received radioiodine therapy; 17 patients had a history of psychiatric diagnosis; 5 patients suffered from primary hyper- or hypothyroidism; and 11 patients were lost to follow-up. Finally, a total of 512 PTC patients fulfilled the criteria and were included in this study.
Evaluation of the RRTW and its related factors of the study population after surgery
In the study, 63 participants were retired or unemployed before the diagnosis of PTC. Accordingly, we included 449 patients in the investigation of RRTW. Our data showed that 151 patients did not return to work after surgery, while 298 patients returned to work, approximately twice the number of those who did not.
The items on the RRTW scale ranged from 1 to 5. Patients who did not return to work had higher average scores on item 11 ("They wished they had more ideas about how to return to work") and item 12 ("They desired advice on how to go back to work") (Table 2). In contrast, patients who had returned to work had higher average scores on item 14 ("They were doing everything they could to stay at work") and item 21 ("They had returned to work and were doing well"), indicating a positive attitude toward their work (Table 3).
Next, we assessed the factors related to RTW in the study population. Our findings revealed that age, gender, education level, alcohol consumption, diet, place of residence, health insurance, postoperative time, type of occupation, monthly income, and postoperative primary caregiver all influenced the decision to RTW among patients with PTC. Specifically, being female, residing in rural areas, lacking health insurance, having a shorter time after surgery, and having a lower income were associated with a higher likelihood of not returning to work (Supplementary Table S3).
Detailed assessment of HRQoL status of the study population
To obtain more accurate data on the postoperative HRQoL of patients with PTC, two questionnaires were used together in this study: the EORTC QLQ-C30 and the THYCA-QoL. According to the EORTC QLQ-C30 functioning scale, among the 512 subjects, social functioning scored the highest, while emotional functioning scored the lowest, suggesting that emotional function might have a negative impact on QoL. On the symptom scale, fatigue scored the highest, whereas nausea and vomiting scored the lowest, indicating that patients were more frequently bothered by fatigue than by other symptoms, potentially leading to a worse QoL. Among the individual items, insomnia scored the highest, suggesting that sleep disturbance had a significant effect on patients (Table 5). Furthermore, on the THYCA-QoL symptom scales, voice scored the highest, while concentration scored the lowest. Among the single items, chills and scars ranked first and second, respectively, while tingling in the arms and legs scored the lowest. Overall, these data indicate that patients with PTC were significantly affected by voice problems, chills, and scars (Table 6).
Considering the close relationship between psychological state and QoL, we further explored the impact of anxiety on the HRQoL of PTC patients after surgery. First, according to the EORTC QLQ-C30 and as expected, anxious patients had poorer QoL than nonanxious patients on each of the five functional scales, three symptom scales, and six single items (Table 7, Supplementary Figure S2A). On the functional scale, both groups had their lowest scores for emotional functioning (81.22 ± 13.29 in nonanxious and 57.98 ± 17.11 in anxious patients), indicating poor emotional well-being in both groups. Patients without anxiety had the highest score for social functioning (91.97 ± 14.57), while patients with anxiety had the highest score for physical functioning (78.43 ± 14.24). Regarding the symptom scale, fatigue was the most prominent symptom, scoring highest in both groups (20.53 ± 15.65 in nonanxious and 37.98 ± 19.23 in anxious patients). According to the six single items, the two groups differed: the anxious patients were most bothered by insomnia (34.22 ± 29.55), while the nonanxious group was most bothered by constipation (18.21 ± 21.72). Then, according to the THYCA-QoL, anxious patients had lower HRQoL than those without anxiety on all seven symptom scales and six single items. On the symptom scales, both groups scored the highest on voice (25.64 ± 27.46 in nonanxious and 37.77 ± 28.80 in anxious patients; Table 8, Supplementary Figure S2B), while among the six single items, the nonanxious group had the highest score for sexual interest.
Furthermore, we investigated the influence of RTW status on HRQoL in PTC patients after surgery (Tables 9 and 10, Supplementary Figures S2C, D). On the functional scale of the EORTC QLQ-C30, the non-RTW patients had the highest score for physical functioning (79.51 ± 14.21; Table 9), while the RTW patients had the highest score for social functioning (86.13 ± 17.37). The lowest score in both groups was for emotional functioning (67.00 ± 21.16 and 69.13 ± 18.04). On the symptom scale, fatigue scored highest in both groups (32.30 ± 18.90 and 27.37 ± 19.59), and nausea/vomiting scored lowest (10.38 ± 15.43 and 6.99 ± 12.76). Among the single items, the highest score in the non-RTW group was for financial difficulties (27.15 ± 29.32), while in the RTW group it was for insomnia (21.92 ± 26.83). In addition, the THYCA-QoL results showed that RTW patients had lower symptom scores than non-RTW patients on all seven symptom scales (Table 10), with voice-related symptoms the most prominent in both groups.
Multivariate logistic regression model to predict anxiety occurrence in PTC patients after surgery
To assist in the clinical diagnosis and early intervention of anxiety, we established a multivariate logistic regression model to predict the occurrence of anxiety in PTC patients after surgery. Our data revealed that gender was an independent risk factor for anxiety, with female patients more likely to experience anxiety symptoms than males [P = 0.002, OR (95% CI) = 2.396 (1.389-4.135); Table 11]. Additionally, indicators such as nonsmoking, higher daily intake of fruits and vegetables, and higher BMI were identified as potential risk factors for anxiety (Table 11). The anxiety-risk estimation nomogram was developed using these independent predictors, as depicted in Figure 2. The nomogram is based on the coefficients from our logistic regression analysis, providing a user-friendly method for clinicians to estimate a patient's risk of anxiety.
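The link between a fitted logistic coefficient, the reported odds ratio, and a predicted probability can be sketched as follows. This is a hedged illustration: the coefficient 0.874 is simply ln(2.396), back-derived from the odds ratio reported above, while the intercept and the remaining coefficients are invented for demonstration and are not the fitted values from Table 11.

```python
import math

def odds_ratio(beta: float) -> float:
    """Odds ratio for a one-unit increase in a predictor."""
    return math.exp(beta)

def predict_prob(intercept: float, betas, xs) -> float:
    """Predicted probability from a logistic regression model."""
    z = intercept + sum(b * x for b, x in zip(betas, xs))
    return 1.0 / (1.0 + math.exp(-z))

# ln(2.396) ~= 0.874: the coefficient implied by the reported OR for gender.
beta_female = 0.874

# Invented values for the remaining predictors (illustration only):
# order is [female, nonsmoker, daily fruit/veg portions, BMI].
betas = [beta_female, 0.30, 0.10, 0.05]
p = predict_prob(-1.2, betas, [1, 1, 2, 24])
```

Exponentiating a coefficient recovers the odds ratio reported in the table, which is why a single fitted model supports both the OR column and the nomogram's probability scale.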
Multivariate logistic regression model to predict RRTW in PTC patients after surgery
Similarly, to facilitate an early return to normal work, we constructed a multivariate logistic regression model to predict the level of readiness. The data revealed that being female, having a higher BMI, lacking health insurance, and older age were independent risk factors for not returning to work. On the other hand, having a caregiver, a higher monthly income, a longer time after surgery, and residing in urban areas were positive factors for early RTW (Table 12). These independent predictors were used to form the RRTW estimation nomogram, as shown in Figure 3. To use the nomogram in a clinical setting, the points for each factor are added, and the total score of all predictors is applied to determine the probability of RRTW.
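The point-summing procedure described above can be sketched as follows. All numbers here are placeholders: the actual point scales and the calibration of total points to probability must be read off Figure 3, so this only illustrates the mechanics of a nomogram, not the fitted model.

```python
import math

# Hypothetical point assignments (illustration only; see Figure 3 for the
# actual scales used in the study).
POINTS = {"female": 40, "male": 0,
          "no_caregiver": 25, "has_caregiver": 0,
          "rural": 30, "urban": 0}

def rrtw_probability(total_points: float,
                     slope: float = 0.03, offset: float = -2.5) -> float:
    """Map a nomogram total score to a probability via a logistic curve.

    The slope/offset values stand in for the calibration encoded in the
    figure's bottom axis and are not taken from the paper.
    """
    return 1.0 / (1.0 + math.exp(-(slope * total_points + offset)))

# Example: female patient, no caregiver, rural residence.
total = POINTS["female"] + POINTS["no_caregiver"] + POINTS["rural"]
prob = rrtw_probability(total)
```

Reading a nomogram by hand performs exactly this computation: each variable axis converts a covariate value to points, and the total-point axis converts the sum back to a probability.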
Discussion
The incidence of PTC in young adults has increased over the years (45), and the 10-year survival rate of PTC patients after surgical treatment is over 90% (46). There is therefore a significant need to enhance HRQoL in this population (47). In the present study, a total of 512 young PTC patients who underwent surgery at our center were included retrospectively, and based on the assessment of physical and psychological symptoms, the ability to return to work was evaluated systematically. Our study comprehensively assessed the postoperative QoL of PTC patients and the related influencing factors across multiple dimensions, including thyroid-cancer-specific symptoms, general post-cancer symptoms, psychological anxiety status, and readiness to return to work, using four follow-up questionnaires and a 'step-by-step' evaluation strategy.
As reported in recent studies, voice problems and scarring are the most prominent specific symptoms associated with thyroid surgery (26, 48). Whether traditional open surgery or minimally invasive thyroidectomy is used, it is difficult to completely avoid disruption of the recurrent laryngeal nerve (49-51, 56). Whichever approach is taken, the prevention of scarring and adhesions remains an area of concern at the current stage. It has been reported that social functioning, fatigue, and insomnia are important factors affecting postoperative HRQoL among patients who have undergone cancer operations (24, 57, 58). Similarly, in this study, PTC patients also exhibited these three problems. Insomnia in particular is prevalent among PTC patients, which may be related to psychological factors, as a considerable portion of this group experiences psychological disorders such as anxiety. More importantly, it cannot be excluded that suppressive levothyroxine therapy exerts an impact on postoperative HRQoL (59). Subclinical hyperthyroidism, characterized by suppressed thyroid-stimulating hormone (TSH) levels, may directly affect the sleep quality of patients with PTC (60). Additionally, the accompanying side effects, such as increased heart rate, may also contribute to sleep disturbances (61). Therefore, precise and individualized TSH suppression treatment is important to improve postoperative symptoms in PTC patients. Furthermore, our study indicated that PTC patients experienced significant psychological anxiety alongside subjective symptoms and physical discomfort. In our study population, more than 50% of PTC patients experienced varying degrees of anxiety, which may be related to the significant psychological burden of the disease, partly due to misconceptions about PTC (62). Furthermore, our research indicates that anxiety is more likely to occur in female patients, possibly due to their higher psychological sensitivity and susceptibility to social functions, vocational issues, and lifestyle factors (16, 28, 63, 64). Therefore, more clinical attention should be devoted to postoperative psychological assessment and management for female patients with PTC. Collectively, we recommend focusing on mental health as a key aspect of postoperative management to prevent the impact of anxiety on patients' work and daily life, especially for female patients (28). The study by Chen et al. mentioned that sexual interest was impaired in patients after PTC treatment (65). Interestingly, there was no significant impact on sexual interest in our study population, which could be because our population was mainly composed of young and middle-aged adults, who remained more sexually active.
In current social life, protecting the work ability of young and middle-aged PTC patients is of great significance, especially given the widespread potential overdiagnosis and overtreatment of PTC (45, 66). In our study, the majority of PTC patients expressed a strong willingness to return to work after undergoing surgical treatment. However, it is important to note that among those who returned to work, the numbers of individuals in PM and UM were similar. This may be related to symptoms such as fatigue experienced by postoperative patients. Despite the low invasiveness of PTC, these symptoms further exacerbated patients' concerns about the disease itself, thereby inducing anxiety about their current occupational responsibilities (67). In contrast, compared with those who did not return to work, those who did reported better QoL in terms of social interactions, family life, and economic burden. Therefore, appropriate encouragement to guide patients back to work is beneficial, helping them gain more social support and satisfaction and thereby improving their QoL. Specifically, regarding young and middle-aged patients, the vocational capacity of this group holds significant practical value in contemporary society, necessitating further attention to the impact of PTC-related reductions in quality of life on their reintegration into the workforce.
Previous studies have primarily focused on evaluating either postoperative QoL (26-29) or the postoperative psychological status of patients (30-33), while neglecting other important dimensions. Therefore, a comprehensive study containing analyses across multiple dimensions and their related factors in PTC patients after surgery is still necessary. In this study, we conducted detailed investigations of three aspects, namely psychological state, RRTW, and QoL after surgery, using four different scales simultaneously. To provide a more comprehensive evaluation of patients' QoL after surgery, we employed two scales concurrently: the EORTC QLQ-C30 and the THYCA-QoL. The former is widely used for assessing basic QoL in the general population, while the latter is specifically designed for TC patients. By combining these two scales, we gained a better understanding of the study population's QoL and facilitated the development of a more rational postoperative management strategy for PTC. Additionally, to guide timely postoperative intervention and enhance clinical capabilities in managing PTC patients after surgery, we investigated factors influencing postoperative psychological anxiety and return to work. Appropriate statistical methods were applied in this analysis: univariate analysis of influencing factors was conducted first, followed by multivariate logistic regression to evaluate the simultaneous impact of multiple variables on the outcomes. More importantly, to visualize our results, we constructed nomograms for clinical application, which could help clinicians develop personalized and reasonable postoperative management measures. In summary, our study provides valuable insights that were lacking in previous similar studies (16, 17, 24, 27, 65) and contributes to the enhancement of clinical guidelines for promoting comprehensive recovery among post-thyroidectomy patients. Despite the intriguing findings of this study, several limitations should be acknowledged. First, the study only included patients with PTC, and patients with other types of TC should be investigated in future research. Second, the follow-up duration was relatively short, and longer-term follow-up is needed to assess long-term outcomes. Third, this is a cross-sectional study, so causal relationships between the outcome variables and HRQoL cannot be inferred; a prospective design would be required in subsequent research. Indeed, we are currently conducting a multicenter longitudinal study with more types of TC samples and a more extended follow-up period, expecting to provide more personalized therapeutic intervention strategies for the postoperative rehabilitation of a larger population of TC patients.

Figure 2. Nomogram to predict anxiety occurrence in PTC patients after surgery. The nomogram is based on the four predictive factors identified above. To use the nomogram, the value of each factor is placed on its variable axis and a line is drawn upward to determine the number of points received for each value. The sum of these values is located on the total-point axis, and a line is drawn down to the bottom axis to determine the probability of anxiety.
Conclusion
In conclusion, this study has systematically assessed the psychological status, HRQoL, and RTW state, as well as their associated factors, in more than 500 young and middle-aged PTC patients. Our findings demonstrate that early and comprehensive assessments, combined with the development of personalized and efficacious intervention and treatment strategies, would contribute to enhancing patients' HRQoL and restoring their vocational capacity. This study has the potential to provide valuable insights for timely interventions in postoperative PTC patients and offer clinical guidance to promote their recovery from surgery.

Figure 3. Nomogram to predict RRTW in PTC patients after surgery. Each predictive variable, such as age, gender, BMI, caregiver, health insurance, monthly income, time after surgery, and residence, is assigned a point value based on its coefficient in the logistic regression model. To use the nomogram, the value of each factor is placed on its variable axis and a line is drawn upward to determine the number of points received for each value. The total points accrued from all predictive variables are used to determine the probability of RTW, as indicated on the scale at the bottom of the nomogram.
TABLE 1
Investigation of the anxiety status according to GAD-7 in PTC survivors (N = 512).
TABLE 2
RRTW scores of PTC survivors who were not back at work (N = 151).
1. You do not think you will ever be able to go back to work. RRTW, Readiness to Return-to-Work Scale.
TABLE 3
RRTW scores of PTC survivors who had been back to work (N = 298).
RRTW, Readiness to Return-to-Work Scale.
TABLE 4
Dimension of RRTW in PTC survivors who were back (N = 298) or not back (N = 151) at work.

Both groups scored the highest on voice (39.96 ± 31.76 in the non-RTW group and 26.85 ± 26.56 in the RTW group). Among the six single items, the non-RTW group had the highest score for feeling chilly (27.81 ± 26.14), while the RTW group had the highest score for sexual interest (25.28 ± 22.40).
a Mean score of the items contained in each dimension.
a A higher score indicates more symptoms. b A higher score indicates better functioning. THYCA-QoL, Thyroid Cancer-Specific Quality of Life Questionnaire.
a A higher score indicates better functioning. b A higher score indicates more symptoms. EORTC QLQ-C30, European Organization for Research and Treatment of Cancer Quality of Life Questionnaire.
TABLE 7
EORTC QLQ-C30 scores of all dimensions in patients with and without anxiety (N = 512).
a A higher score indicates better functioning. b A higher score indicates more symptoms. EORTC QLQ-C30, European Organization for Research and Treatment of Cancer Quality of Life Questionnaire.
TABLE 8
THYCA-QoL scores of all dimensions in patients with and without anxiety (N = 512).

Disruption of the recurrent laryngeal nerve can lead to transient voice changes of varying degrees in postoperative patients. Therefore, surgeons still need to attend to the protection of the recurrent laryngeal nerve and promote rapid recovery of the patient's voice after surgery. Additionally, scarring is an inevitable issue for almost all PTC patients, especially for female patients, who have a greater concern for neck scarring (52, 53). Although various surgical methods are available, traditional open surgery remains one of the primary approaches. In recent years, an increasing number of patients have preferred minimally invasive thyroidectomy, including endoscopic and robotic thyroidectomy (54). However, it is important to note that postoperative QoL for PTC patients undergoing minimally invasive thyroidectomy has not been shown to improve significantly compared with traditional open surgery (55). Postoperative adhesions caused by endoscopic and robotic thyroidectomy, including internal and external scarring, can still lead to upper gastrointestinal and respiratory symptoms, affecting patients' QoL (56).
TABLE 9
EORTC QLQ-C30 scores of all dimensions in patients according to whether or not they returned to work (N = 449).
a A higher score indicates better functioning. b A higher score indicates more symptoms. EORTC QLQ-C30, European Organization for Research and Treatment of Cancer Quality of Life Questionnaire.
TABLE 10
THYCA-QoL scores of all dimensions in patients who were or were not back at work (N = 449).
a A higher score indicates more symptoms. b A higher score indicates better functioning. THYCA-QoL, Thyroid Cancer-Specific Quality of Life Questionnaire.
TABLE 11
Multivariate logistic regression model to predict anxiety in PTC patients after surgery (N = 512).
TABLE 12
Multivariate logistic regression model to predict RRTW in PTC patients after surgery (N = 449).
Multi‐subject stochastic blockmodels with mixed effects for adaptive analysis of individual differences in human brain network cluster structure
Recently, there has been a renewed interest in the class of stochastic blockmodels (SBM) and their applications to multi‐subject brain networks. In our most recent work, we have considered an extension of the classical SBM, termed heterogeneous SBM (Het‐SBM), which models subject variability in the cluster‐connectivity profiles through the addition of a logistic regression model with subject‐specific covariates on the level of each block. Although this model has proved useful in both the clustering and inference aspects of multi‐subject brain network data, including fleshing out differences in connectivity between patients and controls, it does not account for dependencies that may exist within subjects. To overcome this limitation, we propose an extension of Het‐SBM, termed Het‐Mixed‐SBM, in which we model the within‐subject dependencies by adding subject‐ and block‐level random intercepts in the embedded logistic regression model. Using synthetic data, we investigate the accuracy of the partitions estimated by our proposed model as well as the validity of inference procedures based on the Wald and permutation tests. Finally, we illustrate the model by analyzing the resting‐state fMRI networks of 99 healthy volunteers from the Human Connectome Project (HCP), using covariates like age, gender, and IQ to explain the clustering patterns observed in the data.
INTRODUCTION
Network analysis and clustering methods have emerged as powerful platforms for investigating how some units of interest (i.e., nodes) mutually interact to facilitate the functioning of a system as a whole. As illustrated in Figure 1, data from resting-state fMRI can be abstracted to network-like objects in which anatomically defined regions of interest (ROIs) represent the n nodes and the binary-valued links between them represent edges (with {1, 0} denoting the presence and absence of an edge, respectively).
The underlying topology of a brain network, which we refer to as the cluster structure, is revealed through its decomposition into Q clusters, which flesh out the most prominent connectivity profiles. Generally, this topology is estimated using one of the many existing clustering methods, the majority of which are based on the optimization of an objective function called modularity (e.g., the Newman spectral and fast Louvain algorithms; Achard, Salvador, Whitcher, Suckling, & Bullmore, 2006; Bassett & Bullmore, 2009; Bullmore & Sporns, 2009; Hilgetag, Burns, O'Neill, Scannell, & Young, 2000; Pan, Chatterjee, & Sinha, 2010; Rubinov & Sporns, 2010; Sporns, Chialvo, Kaiser, & Hilgetag, 2004). Nevertheless, one limitation of modularity-based methods is that they are confined to identifying a limited range of connectivity patterns, which typically fall into the framework of "modular organization" (in which the within-cluster connectivities are much higher than the between-cluster connectivities). In an extensive set of simulations (Pavlovic et al., 2020), we benchmarked these methods on modular topologies but also on topologies with a core-modular structure, which diverge from a modular architecture by combining a subset of clusters with a modular-like organization and a subset of strongly interconnected clusters. In these simulations, the modular algorithms showed a very high degree of inaccuracy in retrieving the correct cluster labels when the structure was heterogeneous modular, and especially when the topology was nonmodular. A more accurate alternative is found in the class of probabilistic network clustering methods called stochastic blockmodels (SBM), developed in the work of Snijders and Nowicki (Nowicki & Snijders, 2001; Snijders & Nowicki, 1997). The SBMs use a framework of mixture models to explain variations in the distribution of edges in a single network.
Depending on the edge values, one would select a unique family of probability densities (e.g., the Bernoulli density), which would parametrize the connection probabilities within each cluster and between each cluster pair. In the Bernoulli example, these parameters represent the mean values in the cluster structure, given as a Q × Q matrix of probabilities. The participation of each Bernoulli cluster in the overall mixture of densities is controlled by a mixing-proportion parameter, which is a vector of length Q. By parameterizing each element of the cluster structure, the SBM captures a wide range of topologies and is not limited to the stringent assumptions of modular organizations. Our SBM application on the C. elegans network (Pavlovic, Vértes, Bullmore, Schafer, & Nichols, 2014) captured its locomotion circuit and delineated clusters of sensory neurons maintaining forward and backward locomotion, suggesting that the model can be used to identify clusters of functionally similar nodes.

FIGURE 1 Illustration of the main steps involved in the derivation of multi-subject binary networks from resting-state fMRI data. The nodes, labeled V_i and V_j, represent n regions of interest, usually based on predefined brain parcellations or anatomical atlases. The time series in each region is the mean time series across its constituent voxels. There are typically four ways to generate binary networks from these time series. (i) The first is to compute Pearson's correlations between the nodal time courses, transform them using Fisher's z scores, and finally binarize them based on the series of tests {H_0: r_ijk = 0; H_1: r_ijk > 0}, in which significant edges take value one and nonsignificant edges take value zero. To carry out these tests, we used the "xDF" method recently proposed by Afyouni, Smith, and Nichols (2019), which accounts for the autocorrelation in regional time series, including the instantaneous and lagged cross-correlations, and computes more accurate variances for each edge. (ii) The second is to threshold the correlation matrices directly at some predefined correlation value (e.g., 0.8), treating only the strongest correlations as edges. (iii) The third is to decompose the nodal time series into discrete wavelets and compute the correlations between the wavelet coefficients within a low-frequency scale at which fMRI dynamics of neuronal origin are most prominent (e.g., < 0.1 Hz). (iv) The fourth is to use partial correlation, which takes each pair of nodal time series and regresses out the other n − 2 nodal time courses. At the end of any of the four pipelines, each subject is awarded a binary symmetric adjacency matrix x_k, and subject-specific covariates are collected alongside the resting-state fMRI data.
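Pipeline (i) above can be sketched in a few lines. This is a simplified illustration: it uses the classical 1/(T − 3) variance of Fisher's z for independent samples and a fixed one-sided critical value, rather than the autocorrelation-aware variances of the xDF method used in the actual pipeline.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def fisher_z(r):
    """Fisher z transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def binarize(ts, z_crit=1.645):
    """Build a binary adjacency matrix: edge (i, j) is 1 when the one-sided
    test of H0: r_ij = 0 against H1: r_ij > 0 is significant."""
    n, T = len(ts), len(ts[0])
    se = 1.0 / math.sqrt(T - 3)   # SE of Fisher's z under iid sampling
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            z = fisher_z(pearson(ts[i], ts[j])) / se
            A[i][j] = A[j][i] = 1 if z > z_crit else 0
    return A
```

Two strongly coupled time courses yield an edge while an unrelated pair does not; the result is the binary symmetric adjacency matrix x_k of the legend.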
Among this literature, we especially highlight the work of Mariadassou et al. (2010), who developed generalized SBMs for the analysis of a single network. This work extends the variational fitting procedure of SBMs to the exponential family of densities and uses generalized linear models with edge-based covariate values to explain the network cluster structure. The interplay between the cluster structure and covariates can be such that the effects of covariates are the same across the entire cluster structure (i.e., homogeneous effects) or affect each element of the cluster structure with different intensity (i.e., heterogeneous effects). Building upon the work of Mariadassou et al. (2010), we have proposed three multi-subject SBMs (MS-SBMs), among which we especially highlight the heterogeneous-effects SBM (Het-SBM; Pavlovic et al., 2020), which serves as the basis for the new model developed in this work.
Het-SBM assumes that the cluster labels are the same for all subjects but uses subject-specific covariates like age or gender in a logistic regression model to explain between-subject variations in the cluster structure. To intuitively understand this model, in Figure 2, we showcase a straightforward example in which the age effect of three subjects acts differently on the connectivity in each block. For example, there is a decrease of connectivity in Block (1, 1) with increasing age, and the opposite directional effect is observed in Block (1, 2). By tying each covariate effect to a block cell, the model can capture a wide range of cluster structures. For example, the first subject has a modular topology as the within-block connections are higher than those between-block connections. By contrast, the third subject has a topology that resembles a disassortative mixing as the bulk of connections in the profile of Block 1 goes to maintaining ties with Blocks 2 and 3. The estimation of model parameters related to the logistic regression is based on Firth-type estimators (Firth, 1993) which belong to a class of bias preventative procedures. At first glance, the assumption of common cluster labels across subjects may appear too strong and potentially discount the use of covariates in the estimation of cluster labels. However, since the effects of the covariates are tied to each element of the cluster structure, it is possible to encounter examples in which the effects of covariates iron out the differences in the connectivity profiles across the clusters and this information can be used toward a more accurate estimation of cluster labels. Thus, it is not advisable to separate the cluster label estimation from the logistic regression. In addition to its clustering abilities, the Het-SBM model also provides inference tools to detect group differences in the connectivity rates or, more generally, effects of covariates.
However, although the Het-SBM model is useful for the analysis of independent multi-subject networks, it may not be appropriate in cases where some form of dependence exists in the data. This may generally be attributed to two sources. First, if the covariates do not fully explain the intersubject differences in connectivity rates, there may still be some dependence within the individual elements of the block structure. For example, the prevalence of edges in one subject's block (e.g., the block (q, l)) may be consistently overestimated, so that this lack of fit is shared by the edges of that block rather than being randomly distributed over subjects. Second, the data may comprise more than one network per subject (i.e., visits), inducing repeated-measures correlations. The latter type of data typically occurs in studies with multiple visits of subjects, who are either scanned at different time points or after being exposed to different experimental conditions, such as different drug effects (e.g., a crossover design). Other examples of such data may include the combined outcomes of different imaging modalities, or data that combine different classes of connectivity (e.g., structural and functional), where the goal would be to answer questions regarding the similarities of their networks' organizations.

FIGURE 2  Illustration of heterogeneous SBM (Het-SBM). Simulated data are available for three subjects aged 20, 40, and 90 years, and each subject's network is represented as a reorganized adjacency matrix comprised of three blocks, labeled numerically (1-3). In Het-SBM, the subject-level covariates can affect the connectivity rates specifically within each block or between each pair of blocks. The effect of age is seen as a locally heterogeneous increase or decrease of connectivity in each block. For example, the intrablock connectivity in Block (1, 1) decreases as a function of age, whereas the interblock connectivity in Block (1, 2) increases with age.
In the present work, we continue the characterization of the human brain network topology by extending previous works in several ways. First, we derive the Het-Mixed-SBM model (see Pavlovic, 2015, chapter 5), which uses the heterogeneous fixed effects from the Het-SBM model and adds block- and subject-level random effects. Second, we outline its estimation strategy and consider the parametric Wald test and a permutation test based on the Wald statistics. Third, we validate the model on a range of simulation scenarios in which we benchmark it against Het-SBM and showcase instances where Het-Mixed-SBM provides more accurate clustering. In another set of simulations, we show the control of false positives and, finally, we consider an application to a resting-state fMRI study obtained from the Human Connectome Project (Van Essen et al., 2012) with 99 healthy subjects and 114 nodes. This data set has only one visit per subject and, among the wealth of information available for these subjects, we focus on covariates such as age, gender, and IQ score, which are often linked to between-subject variations.
METHOD
Hereafter, we will be using the classical convention of roman capital letters to denote random variables and lower case letters to denote their observed realizations. Scalar and nonscalar values will be denoted by light and boldface fonts, respectively.
Heterogeneous mixed-effects stochastic blockmodel (Het-Mixed-SBM)
Let the set of nodes, labeled {V_1, …, V_n}, be divided into Q unknown (latent) blocks or clusters. For a fixed number of blocks Q, the indices q, l ∈ {1, …, Q} are used to label the individual blocks. The block membership of a particular node V_i is indicated by a random vector Z_i = (Z_i1, …, Z_iQ), whose elements Z_iq take the value 1 if V_i belongs to the qth group and 0 otherwise. For example, fixing Q = 3, the estimate ẑ_i = (0, 1, 0) indicates that the node V_i is in the second cluster. Pooling this information across nodes, the n × Q random matrix Z can be defined such that the vectors Z_i are mutually independent and each follows a categorical density with Q possible outcomes and parameter α, where α = (α_1, …, α_Q) is a 1 × Q vector of success probabilities, a specific α_q being the probability that a randomly selected node falls into the qth block, and Σ_q α_q = 1. Here, it is important to note that the choice of the categorical (or, equivalently, single-trial multinomial) density implies that the fitted blocks form a partition of all nodes, in which each node belongs to exactly one block (i.e., disjoint blocks). This is formally noted as Σ_{q=1}^{Q} z_iq = 1. Writing Z = ((Z_iq))_{1≤i≤n, 1≤q≤Q} for the n × Q matrix, the probability mass function is given as

f(z; α) = ∏_{i=1}^{n} ∏_{q=1}^{Q} α_q^{z_iq}.    (2)

The random variable X_ijkt represents the possibility of an edge between the nodes V_i and V_j in the kth subject and its tth measurement. Hence, x_ijkt denotes a binary realization of X_ijkt, with 1 being an edge and 0 no edge. Therefore, for the kth subject and tth measurement, X_kt = ((X_ijkt))_{1≤i≠j≤n} denotes an n × n random, symmetric adjacency matrix with elements X_ijkt and, for a total of K subjects and T measurements, X denotes the set of random matrices X = {X_kt : k ∈ {1, …, K}, t ∈ {1, …, T}}. The individual matrices x_kt = ((x_ijkt))_{1≤i≠j≤n} are assumed to be undirected, without self-connected nodes and without multiple edges between nodes.
Hence, they are binary and symmetric matrices with 0s on the principal diagonal and a total of n(n − 1)/2 data points. Conditional on the cluster labels, the edges follow a Bernoulli density,

X_ijkt | {Z_iq Z_jl = 1} ~ Bernoulli(π_qlkt),  with  logit(π_qlkt) = R_qlk + d_kt^T β_ql  and  R_qlk ~ N(0, σ²_ql),

where R_qlk represents a subject-specific random intercept for the block (q, l), d_kt is a P × 1 vector of covariates associated with the kth subject and its tth measurement, and β_ql is a P × 1 vector of regression coefficients for the block (q, l). The total of Q(Q + 1)/2 individual regression vectors β_ql is collectively noted as β. We note that σ²_ql is the random-effect variance of each block or block-to-block regression, and it is collectively noted as σ² = ((σ²_ql))_{1≤q,l≤Q}. In particular, integrating over the random intercepts, the probability mass function of x given z can be written as

f(x | z; β, σ²) = ∏_{q≤l} ∏_{k=1}^{K} ∫ [ ∏_{t=1}^{T} ∏_{i<j} π_qlkt(r)^{z_iq z_jl x_ijkt} (1 − π_qlkt(r))^{z_iq z_jl (1 − x_ijkt)} ] φ(r; 0, σ²_ql) dr,

where φ(·; 0, σ²_ql) denotes the normal density of the random intercept and logit π_qlkt(r) = r + d_kt^T β_ql.
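As a sanity check on the generative model just described, the following sketch simulates one subject-visit adjacency matrix: categorical labels Z_i, Gaussian random intercepts R_qlk, and Bernoulli edges with a logistic link. The parameter values (n, Q, α, β, σ², d_kt) are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

n, Q, P = 30, 2, 2                      # nodes, blocks, covariates (incl. intercept)
alpha = np.array([0.5, 0.5])            # block membership probabilities
# beta[q, l] is the P-vector of fixed effects for block (q, l); chosen symmetric
beta = np.array([[[ 2.0, -0.5],
                  [-2.0,  0.3]],
                 [[-2.0,  0.3],
                  [ 2.0,  0.1]]])
sigma2 = 0.5                            # random-intercept variance (constant here)
d_kt = np.array([1.0, 0.8])             # intercept + one standardized covariate

# Step 1: draw latent labels Z_i ~ Categorical(alpha)
z = rng.choice(Q, size=n, p=alpha)

# Step 2: draw subject-specific random intercepts R_qlk ~ N(0, sigma2)
R = rng.normal(0.0, np.sqrt(sigma2), size=(Q, Q))
R = (R + R.T) / 2                       # keep R_ql = R_lq for an undirected graph

# Step 3: draw edges X_ijkt ~ Bernoulli(logit^{-1}(R_ql + d_kt' beta_ql))
x = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        eta = R[z[i], z[j]] + d_kt @ beta[z[i], z[j]]
        p = 1.0 / (1.0 + np.exp(-eta))
        x[i, j] = x[j, i] = rng.binomial(1, p)
```

The resulting matrix is binary, symmetric, and has a zero diagonal, matching the assumptions on x_kt stated above.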
Estimation
The model parameters are estimated by optimizing the variational bound 𝓛(f*(z; τ); α, β, σ²), defined as the expectation, under f*(z; τ), of the complete-data log-likelihood log f(x, z; α, β, σ²) minus the entropy term E_{f*}[log f*(z; τ)] (Equation (7)), where f*(z; τ) is the variational density with parameter τ (an n × Q matrix of cluster probabilities); it denotes the member of the parametric family that is closest (in the Kullback–Leibler sense) to f(z | x; α, β, σ²). The evaluation of Equation (7) requires taking the expectation of Z with respect to its variational density f*(z; τ), which is taken to be a product of the individual densities of the Z_i. Each density is categorical with block-specific probabilities that are independent across nodes,

f*(z; τ) = ∏_{i=1}^{n} ∏_{q=1}^{Q} τ_iq^{z_iq}.

In particular, τ_iq is the strength of evidence that a node V_i is a member of block q having observed the data. Let us suppose an example in which the node V_i has a vector τ̂_i = (0.1, 0.01, 0.89), expressing the likelihood of its membership in three clusters. Since this node has the strongest affiliation to the third block, its ẑ_i = (0, 0, 1). For simplicity of notation, we take advantage of the fact that β_ql = β_lq and define π_ijql as the Bernoulli success probability of an edge between nodes V_i and V_j given the block assignment (q, l). With this notation, the variational bound in Equation (7) takes the explicit form given in Equation (10). For a fixed Q, we want to maximize the variational bound defined by Equation (10) with respect to the variational parameter τ as well as the parameters α, β, and σ². The optimal variational parameter satisfies the fixed-point relations given in Equations (11) and (12), where we use the notation I_qlk1 and I_iqlk to denote integral expressions over the random effects (Equation (13)). The estimate of each element of α is given as

α̂_q = (1/n) Σ_{i=1}^{n} τ̂_iq.    (14)

We next turn our attention to the optimization of the variational bound for β and σ². The estimates of β_ql and σ²_ql can be found with the Newton–Raphson formula, so that the (m)th step updates (β_ql, σ²_ql) using the score and the Fisher information of the variational bound (Equation (15)). The derivation details of the Fisher information matrix and the score can be found in Appendix A1.
For some initial value τ⁰, we iteratively update the model parameters according to a two-step procedure.
In the first step, α is updated according to Equation (14), while β and σ² are updated according to the Newton–Raphson method outlined in Equation (15). In the second step, τ is updated according to Equation (12). The algorithm cycles through these two steps until convergence is obtained. Convergence is measured by the relative changes of the parameter estimates and the improvement of the variational bound. The Newton–Raphson algorithm uses a naïve estimate π̂_ql to provide starting values for the intercept β_INT,ql (i.e., β̂_INT,ql = log(π̂_ql/(1 − π̂_ql))). In particular, for a given block element (i.e., block (q, l)), π̂_ql is the ratio of the sum of its observed edges across the K subjects and T measurements to the total number of possible edges. The starting values for σ² are based on a strategy outlined in Demidenko (2004), whose objective is to estimate the random effects r_qlk and their sample variance, from which we finally obtain the initial estimate of σ². The full details of this procedure can be found in Appendix B1, and further implementation details in Appendix C1.
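The naïve intercept start just described can be sketched in a few lines. The edge counts here are hypothetical, purely to show the logit transform of the pooled edge proportion.

```python
import numpy as np

# hypothetical counts for one block (q, l), pooled over K subjects and T visits
edges_observed = 137            # sum of observed edges in the block
edges_possible = 400            # total number of possible edges in the block

p_hat = edges_observed / edges_possible             # pooled edge proportion
beta_int_start = np.log(p_hat / (1.0 - p_hat))      # starting intercept = logit(p_hat)
```

Applying the inverse logit to the starting value recovers p̂_ql exactly, which is the sanity check one would expect of this initialization.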
Model selection
The integrated classification likelihood (ICL) criterion (Biernacki et al., 1998) is one of the most common procedures for determining the optimal number of clusters in mixture models whose main goal is to cluster network data (Daudin et al., 2008; Mariadassou et al., 2010; Pavlovic et al., 2014). The benefit of this approach is that it favors models with well-separated clusters, and it penalizes model complexity, which ensures parsimony. Using m_Q to denote a candidate model with Q clusters, we derive the ICL criterion from log f(x, z | m_Q), which can be viewed as the sum of log f(x | z, m_Q) and log f(z | m_Q). The components log f(x | z, m_Q) and log f(z | m_Q) are each approximated with the Schwarz criterion (Schwarz, 1978). In the first component, log f(x | z, m_Q), the total number of parameters is Q(Q + 1)/2 · P in β plus Q(Q + 1)/2 in σ², and the total number of data points in x is n(n − 1)/2 · KT. In the second component, log f(z | m_Q), the total number of parameters in α is Q − 1, while the total number of data points is n. With this, the ICL criterion of Het-Mixed-SBM is defined as

ICL(m_Q) = log f(x | ẑ; β̂, σ̂²) − (1/2) · [Q(Q + 1)/2](P + 1) · log[n(n − 1)KT/2] + log f(ẑ; α̂) − [(Q − 1)/2] · log n.
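As a concrete illustration of the parameter and data counts above, the following sketch evaluates the two Schwarz-type penalty terms for hypothetical dimensions. The two log-likelihood values are stand-ins, not fitted quantities.

```python
import numpy as np

n, Q, P, K, T = 114, 10, 4, 99, 1       # nodes, blocks, covariates, subjects, visits

loglik_x_given_z = -50000.0             # stand-in for log f(x | z_hat, m_Q)
loglik_z = -250.0                       # stand-in for log f(z_hat | m_Q)

# first component: Q(Q+1)/2 * P parameters in beta plus Q(Q+1)/2 in sigma^2,
# against n(n-1)/2 * K * T edge observations
n_params_edges = Q * (Q + 1) // 2 * (P + 1)
n_data_edges = n * (n - 1) // 2 * K * T

# second component: Q - 1 parameters in alpha, against n node observations
n_params_z = Q - 1
n_data_z = n

icl = (loglik_x_given_z - 0.5 * n_params_edges * np.log(n_data_edges)
       + loglik_z - 0.5 * n_params_z * np.log(n_data_z))
```

Both penalties grow with Q, so the criterion trades off fit against the number of block-level parameters, as described above.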
Inference
Het-Mixed-SBM offers the possibility to estimate a common cluster structure across subjects that can serve as a common ground for making comparisons between them. Since it places a mixed-effects logistic regression model on each element of the block structure, its methodological framework can be used to estimate differences between groups of subjects or effects of covariates on the connectivity rate of each block. More precisely, we can statistically test whether these quantities are different from 0 or, more generally, whether linear combinations of these quantities are different from a specified constant. In the remainder of this section, we consider the Wald test to carry out this kind of hypothesis testing and assume it to be conditional on ẑ.
Wald test.
The null hypothesis is given as H_0: L_ql β_ql = b_ql0, and the Wald statistic can then take the form

W_ql = (L_ql β̂_ql − b_ql0)^T [L_ql Var(β̂_ql) L_ql^T]^{−1} (L_ql β̂_ql − b_ql0),

where L_ql is a matrix (or a vector) defining the combination of the parameters (contrast) tested and c_ql is the rank of L_ql. Asymptotically, W_ql follows a χ²_{c_ql} distribution. If L_ql is a vector, then the signed square root of W_ql asymptotically follows a standard normal distribution. The standard errors of the model parameters are estimated with the Fisher information matrix (see Appendix A1, Equation (A9)).
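A minimal numerical sketch of this test for one block (q, l) follows. The coefficient estimates and their covariance matrix are invented numbers; in the model, the covariance comes from the inverse Fisher information (Appendix A1).

```python
import numpy as np
from math import erfc, sqrt

beta_hat = np.array([1.2, 0.4, -0.1])        # intercept, age, gender (illustrative)
cov_beta = np.diag([0.04, 0.01, 0.02])       # Var(beta_hat), illustrative

L = np.array([[0.0, 1.0, 0.0]])              # contrast selecting the age effect
b0 = np.array([0.0])                         # H0: L beta_ql = b0

diff = L @ beta_hat - b0
W = float(diff @ np.linalg.inv(L @ cov_beta @ L.T) @ diff)   # Wald statistic

# For a rank-1 contrast, W ~ chi-square(1) under H0, whose survival function
# is erfc(sqrt(W / 2)); equivalently, sqrt(W) with the sign of the estimate
# is referred to a standard normal.
p_value = erfc(sqrt(W / 2.0))
```

With n_T = Q(Q + 1)/2 such block-level tests, p_value would then be compared against α_0/n_T under a Bonferroni correction.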
Multiple testing.
Inference procedures for the block-level parameters comprise a multiple testing problem, as we are effectively making Q(Q + 1)/2 individual tests. To control the familywise error rate (FWE), defined as the probability of making at least one Type I error, we use the Bonferroni correction (Holm, 1979). This correction is valid for any dependence structure and is easy to apply: instead of the nominal significance level α_0 (e.g., 0.05), α_0/n_T is used, where n_T is the number of tests (here, n_T = Q(Q + 1)/2).
Permutation testing.
In addition to the Wald test, which depends on asymptotic sampling distributions, permutation tests are also considered (Good, 2000). Permutation tests are based on the premise that, under the null hypothesis, the data can be exchanged without altering its distribution. This implies that the null distribution of any test statistic can be found empirically through repeated evaluations of randomly rearranged (or permuted) data. In the context of Het-Mixed-SBM, we use the permutation strategy proposed by Potter (2005). In this approach, the covariate of interest is regressed on the remaining nuisance covariates (using a linear regression model), and the residuals from this model are used in place of the original covariate. These residuals are then permuted M times and, for each permutation b = 1, …, M, the logistic mixed-effect model is refitted, yielding a permuted Wald test statistic w_qlb. The p-value of the original Wald test statistic w_ql0 (recomputed by refitting the logistic mixed-effect model with the residual covariate) is then computed as

p_ql = [1 + Σ_{b=1}^{M} I(w_qlb ≥ w_ql0)] / (1 + M),

where I(⋅) is the indicator function.
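The residual-permutation recipe above can be sketched end to end on toy data. The expensive mixed-model refit is stubbed out by a simple Wald-like statistic (a scaled squared correlation) on synthetic null data; every name and number here is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 40, 999                                 # subjects, permutations

nuisance = np.column_stack([np.ones(K), rng.normal(size=K)])  # intercept + nuisance
age = rng.normal(size=K)                       # covariate of interest
y = rng.normal(size=K)                         # toy outcome with no age effect (null)

# Step 1: regress the covariate on the nuisance terms; keep the residuals
coef, *_ = np.linalg.lstsq(nuisance, age, rcond=None)
age_res = age - nuisance @ coef

def toy_wald(cov, outcome):
    # stand-in for "refit the logistic mixed model, return the Wald statistic"
    r = np.corrcoef(cov, outcome)[0, 1]
    return K * r ** 2

# Step 2: permute the residualized covariate M times and recompute the statistic
w0 = toy_wald(age_res, y)
perm = np.array([toy_wald(rng.permutation(age_res), y) for _ in range(M)])
p_perm = (1 + np.sum(perm >= w0)) / (1 + M)
```

The 1/(1 + M) offset keeps the p-value strictly positive, matching the formula above.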
Improved multiple testing procedures with permutation.
The Bonferroni correction for controlling the FWE is known to be conservative in the presence of dependence across tests. Conveniently, the use of permutation can provide a less conservative control of the FWE by comparing the original statistics to the null distribution of the maximum statistic across blocks (Westfall & Young, 1993). For the original Wald statistic w_ql0, the FWE-corrected p-value based on this procedure is given as

p_ql^FWE = [1 + Σ_{b=1}^{M} I(max_{q′,l′} w_{q′l′b} ≥ w_ql0)] / (1 + M).
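Given a matrix of permuted statistics, the max-statistic correction reduces to a few array operations. Here w0 holds hypothetical original Wald statistics for n_T = Q(Q + 1)/2 blocks, and w_perm holds simulated null permutation statistics; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_T, M = 6, 999                                # number of block-level tests, permutations

w0 = rng.chisquare(1, size=n_T)                # "observed" Wald statistics
w0[0] = 25.0                                   # one block with a strong effect
w_perm = rng.chisquare(1, size=(n_T, M))       # null permutation statistics

# null distribution of the maximum statistic across blocks
max_perm = w_perm.max(axis=0)

# FWE-corrected p-value: how often the permutation maximum reaches each w0
p_fwe = (1 + (max_perm[None, :] >= w0[:, None]).sum(axis=1)) / (1 + M)
```

Because every block is compared against the same maximum distribution, a block only survives correction if its statistic is extreme relative to the most extreme null behavior anywhere in the structure.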
Heterogeneous SBM (Het-SBM)
Het-SBM can be seen as Het-Mixed-SBM without random effects. Formally, for this model, we have

logit(π_qlk) = d_k^T β_ql,

where π_qlk is the connectivity of block (q, l) associated with subject k, d_k^T is the 1 × P vector of covariates associated with subject k (typically the first element will be 1, representing the intercept), and β_ql is a P × 1 vector of regression parameters. Its estimation details and inference strategies can be found in Pavlovic et al. (2020).
SIMULATION STUDY
This section first details the methodology of each simulation setting, labeled Simulation I and Simulation II, and then it proceeds to describe their results. Given that there is an overlap in the methodology of Simulation I and Simulation II, they are discussed jointly (see Section 3.1), while their results are treated separately (see Sections 3.2 and 3.3). Figure 3 illustrates a bird's eye view of each simulation pipeline. Each simulation pipeline is broken down into four components: goals, input parameters, models, and evaluation strategies.
Goals.
The goal of Simulation I is to investigate the accuracy of Het-SBM and Het-Mixed-SBM in estimating the correct cluster labels in the presence of random effects in the data, and to gauge how the random effects influence this accuracy. The goal of Simulation II is to investigate, for both Het-SBM and Het-Mixed-SBM, the ability of the parametric Wald test and the nonparametric test (the permutation test based on the Wald statistics) to control the Type I error rate, and to give some recommendations for real-data applications.
Input parameters.
The input parameters in both simulations are the cluster structure, the fixed-effects sizes, the random-effects variances, the proportion design (i.e., the cluster sizes), the network size, the number of subjects, and the number of visits. In Simulation I, the cluster structures considered are the heterogeneous modular (Het-Modular), core-modular, and weak structures shown in Figure 3. The heterogeneous modular structure is similar to the commonly assumed perfectly modular structure in the sense that the within-cluster connectivities are larger than the between-cluster connectivities. However, it allows the connectivities to vary within and between clusters, while the perfectly modular structure imposes them to be constant. The core-modular structure is a hybrid structure where some parts of the network are modular, while the remaining parts contain clusters with high mutual connectivity patterns that comprise, therefore, a densely integrated core of clusters. For example, Blocks 1 and 2 are core clusters because they are densely connected, while Blocks 2 and 3 are modular because their within-cluster connectivities are much higher than their between-cluster connectivities. These two structures were chosen because of the extensive evidence that brain networks demonstrate at least some degree of modularity (Meunier, Lambiotte, Fornito, Ersche, & Bullmore, 2009) across many different species, such as the C. elegans, macaque, and human brain networks (Achard et al., 2006; Bassett & Bullmore, 2009; Bullmore & Sporns, 2009; Hilgetag et al., 2000; Pan et al., 2010; Rubinov & Sporns, 2010; Sporns et al., 2004). However, there is also strong evidence that the cluster structure of a brain consists of a core of rich-club nodes. For example, our previous work on the stochastic blockmodel analysis of the C. elegans brain network (Pavlovic et al., 2014) revealed a hybrid or "core-on-modules" structure, which deviated from a purely modular organization by the inclusion of densely inter- and intraconnected core blocks, which governed the worm's forward and backward motion. This kind of structure is hypothetically represented in the simulation by the core-modular topology. Finally, the weak cluster structure is an example of a topology where, on average, there is weak evidence for a three-cluster solution, as the connection profiles of Blocks 1 and 2 are identical. Such pathological cases may occur in noisy data sets. In Simulation II, however, we consider only the nonambiguous cluster structures with strong clustering evidence, namely the Het-Modular and core-modular structures, so that there can be a fair comparison of the hypothesis testing procedures between Het-SBM and Het-Mixed-SBM. The rest of the simulation input parameters are self-explanatory in Figure 3. For example, in both simulations, the effect of the age covariate is set to zero, and the actual values of the covariates are randomly sampled. The random-effect variances vary according to three values (σ² ∈ {0.5, 1, 2}), which are assumed to be constant across the entire cluster structure. The proportion designs specify the cluster sizes, and, among these, we consider three cases: balanced (clusters with similar sizes), mildly unbalanced (M. Unbalanced; clusters with a mild decrease in sizes), and unbalanced (clusters with a strong decrease in sizes).

FIGURE 3  Simulation pipelines: goals, input parameters, models, and evaluation methods. The input parameters consist of a cluster structure (capturing the underlying network topology), covariate effects (how a cluster structure varies across subjects as a function of their covariates), random effects (introducing dependencies within blocks and within subjects), a network size (i.e., the total number of nodes n), a proportion design (i.e., the individual cluster sizes), a total number of visits (T), and a total number of subjects (K).
The exact number of nodes in each cluster for each proportion design is given in Table 1. The network sizes are n ∈ {50, 100}, the total number of subjects is K ∈ {20, 40, 80}, and the total number of visits is T ∈ {1, 3} (with the exception of Simulation II where T ∈ {1}). For each such simulation scenario, we generate 100 Monte Carlo samples.
Models.

For both simulations, we fit Het-SBM and Het-Mixed-SBM. In Simulation I, the models are fitted over a range of cluster numbers Q ∈ {2, 3, 4} using the same initializations while, in Simulation II, only the fitted parameters are used to benchmark the accuracy of the tests. For the data sets with multiple visits, we submit the visits to Het-SBM as if they were independent subjects.
Evaluation strategies.
The evaluation of the results in Simulation I is based on the accurate estimation of the optimal number of clusters and of the cluster labels. The comparison between the estimated clustering and the actual partition is carried out using the Adjusted Rand Index (ARI; Hubert & Arabie, 1985), which indicates the overall agreement between the selected best fits and the real clustering (with 1 denoting perfect agreement). In Simulation II, we evaluate how well the parametric (Wald) and nonparametric (permutation) inference procedures control the false positive rate (FPR) at a level of significance of 5%. The p-values of the permutation test are obtained by computing 1,000 permutations of the age covariate for each Monte Carlo realization.
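For completeness, the ARI used in this evaluation can be computed directly from the contingency table of the two partitions. The implementation below is hand-rolled rather than taken from a library, and the label vectors are illustrative.

```python
import numpy as np
from math import comb

def ari(labels_a, labels_b):
    # contingency table of the two partitions
    a_vals, b_vals = np.unique(labels_a), np.unique(labels_b)
    ct = np.array([[np.sum((labels_a == a) & (labels_b == b)) for b in b_vals]
                   for a in a_vals])
    sum_cells = sum(comb(int(x), 2) for x in ct.ravel())
    sum_rows = sum(comb(int(x), 2) for x in ct.sum(axis=1))
    sum_cols = sum(comb(int(x), 2) for x in ct.sum(axis=0))
    n_pairs = comb(len(labels_a), 2)
    expected = sum_rows * sum_cols / n_pairs       # expected index under chance
    max_index = (sum_rows + sum_cols) / 2
    return (sum_cells - expected) / (max_index - expected)

true_z = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
est_z = np.array([1, 1, 1, 0, 0, 0, 2, 2, 2])      # same partition, labels permuted
score = ari(true_z, est_z)                         # identical partitions give 1.0
```

Note that the ARI is invariant to a relabeling of the clusters, which matters here because the fitted block labels carry no intrinsic order.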
Simulation I: Results
For the data sets with the Het-Modular and core-modular cluster structures, Het-SBM and Het-Mixed-SBM both prove to be very accurate in the cluster label estimation and in selecting the optimal number of clusters (ARI = 1). This trend is noted for all the network sizes, subject totals, and random effect sizes (see Supplementary Information Figures S1-S8). This seems to indicate that, even in the presence of random effects, when the cluster profiles are well delineated, the choice between the fixed-effect and mixed-effect versions of the model is inconsequential in terms of clustering. However, as noted in Figure 4, this is not the case in the weak cluster structure data sets, where the Het-SBM fits tend to be less accurate, especially in the scenarios with small random-effect variances (i.e., σ² = 0.5). For such values, Het-SBM struggles to pick up on the between-subject variations in the connectivity profiles of Blocks 1 and 2, and it tends to underestimate the optimal number of clusters by one. It is interesting to point out that such effects are less pronounced when the network size is doubled, that is, when the individual clusters contain more nodes (see Figure 5). It is also worth pointing out that, in the cases of larger random-effect variances, Het-SBM seems to be able to pick up more accurately on the between-subject variations and to retrieve the correct clustering. By contrast, Het-Mixed-SBM is accurate in all situations. This is because the model is able to use the block-level random-effect variance to guide the clustering.

FIGURE 4  Box plots of ARI scores between the estimated and true cluster labels for the weak cluster structure, 50 nodes, and one visit. The x-axis shows two fitted models, Het-SBM and Het-Mixed-SBM, while the y-axis shows the distribution of their ARI scores. The rows show the subject totals, while the columns indicate the proportion designs and random effect variances.

FIGURE 5  Box plots of ARI scores between the estimated and true cluster labels for the weak cluster structure, 100 nodes, and one visit. The x-axis shows two fitted models, Het-SBM and Het-Mixed-SBM, while the y-axis shows the distribution of their ARI scores. The rows show the subject totals, while the columns indicate the proportion designs and random effect variances.
Simulation II: Results
The purpose of simulating the age effect under the null setting (i.e., H_0: β_AGE,ql = 0) is to evaluate the effectiveness of the Wald test and the permutation test based on the Wald statistics in controlling the Type I error rate at 5% significance. For convenience, we will hereafter refer to these procedures as the Wald test and the permutation test, and we will base our discussion only on the simulation settings with random-effect variance σ² = 1, as the trends observed are consistent with those observed in the other settings (whose results are given in Supplementary Information Appendix B). Figures 6 and 7 show the observed false positive rates (FPRs) at a level of significance of 5% obtained from the tests on each block-specific age coefficient, and how these results contrast across Het-SBM and Het-Mixed-SBM. Both figures display the results in the settings with 50 nodes, one visit, and random-effect variance σ² = 1 but with different cluster structures, namely, the Het-Modular cluster structure in Figure 6 and the core-modular cluster structure in Figure 7. In the case of the Het-Modular structure, we may expect a moderate bias in the age coefficients, as some cluster-connection probabilities may be very close to zero or one. The Firth-type correction used in Het-SBM accounts for this type of bias but, for Het-Mixed-SBM, as this kind of bias-prevention correction was not utilized, we may expect some degree of liberal behavior of the parametric test. As noted in Figure 6, the Wald test for Het-Mixed-SBM in Block (1, 2) displays exactly that behavior, which is not surprising as the input connection probability for that block was 0.01. Coupling such a small probability with a moderate number of subjects (K = 20) and a moderate number of edges produces samples dominated by an excessive number of zero edges. With larger pools of data (more edges and subjects), such effects seem to be ameliorated, resulting in better control of the FPR. For the core-modular structure in Figure 7, such effects seem to be absent and the Wald test appears to be valid in all scenarios. Similar trends are noted for other simulation settings (see Supplementary Information Figures S11-S14). In stark contrast to the results of Het-Mixed-SBM, the observed FPRs for the Wald test of Het-SBM are extremely inflated, suggesting complete unreliability of the test in the presence of random effects. One potential cause of this behavior lies in the assumption of edge independence within a block, which is no longer valid in the presence of random effects. In this particular setting, the fixed effects fail to fully explain the between-subject variation in the cluster-connectivity rates, and they miss the dependence within the individual elements of the block structure introduced by the random effects.

FIGURE 6  Observed false positive rates (FPRs) at a level of significance of 5% in the Het-Modular network with 50 nodes, one visit, and random effect variance σ² = 1. The columns of the plot show three proportion designs, while its rows show three different subject totals (K ∈ {20, 40, 80}). The x-axis shows the fitted cluster-level regression coefficients submitted to hypothesis testing for the effect of age (i.e., H_0: β_AGE,ql = 0) and the y-axis shows their corresponding observed FPR. The observed FPRs are computed for the Wald test (solid line) and the permutation test based on the Wald statistics (dashed line). The results are shown for Het-SBM (blue line with square symbols) and Het-Mixed-SBM (red line with triangle symbols). The shaded green band represents the 95% confidence interval based on a normal approximation.
Notably, this trend is prevalent in all simulation settings and seems to decrease moderately only with a smaller variance of the random effects (see Supplementary Information Figures S11 and S13 for σ² = 0.5). Finally, the permutation test appears accurate in all simulation settings and for both models.

FIGURE 7  Observed false positive rates (FPRs) at a level of significance of 5% in the core-modular network with 50 nodes, one visit, and random effect variance σ² = 1. The columns of the plot show three proportion designs, while its rows show three different subject totals (K ∈ {20, 40, 80}). The x-axis shows the fitted cluster-level regression coefficients submitted to hypothesis testing for the effect of age (i.e., H_0: β_AGE,ql = 0) and the y-axis shows their corresponding observed FPR. The observed FPRs are computed for the Wald test (solid line) and the permutation test based on the Wald statistics (dashed line). The results are shown for Het-SBM (blue line with square symbols) and Het-Mixed-SBM (red line with triangle symbols). The shaded green band represents the 95% confidence interval based on a normal approximation.
Data
In this work, we consider a resting-state functional magnetic resonance imaging (fMRI) data set obtained from the Human Connectome Project (HCP), containing 100 unrelated and healthy subjects. Each subject's fMRI scan has been fully preprocessed to account for potential confounding factors such as head motion and slice-timing errors (Smith et al., 2013). Due to excessive head motion and loss of signal in the frontal lobe, one subject has been removed from the study using the exclusion criterion recommended by Afyouni and Nichols (2018), thus yielding 99 subjects (K = 99). Next, each subject's image has been parcellated into 114 nonoverlapping regions of interest (i.e., n = 114) using the methodological framework outlined in Thomas Yeo et al. (2011). Given that each region of interest contains several voxel-level time series, we use their mean time series to summarize the functional activity in each node. However, even at this stage, the signal can still be contaminated by respiratory and cardiac confounds, which are also typically contained in the grand-average time series (i.e., the global brain signal). Following the recommendations of Murphy and Fox (2017) and Li et al. (2019), the global mean signal is regressed out of each of the nodal time series. As noted in Figure 1, a functional connectivity matrix is obtained by correlating the nodal data in a pairwise fashion, and a binary network is derived from it by testing whether each sample correlation differs from zero (i.e., H_0: r_ijkt = 0 vs. H_1: r_ijkt > 0). The negative correlation values bear very little neuroscientific information and, as such, are not considered in the analysis (Fornito, Zalesky, & Bullmore, 2016).
The weakness of this approach, however, is that fMRI time series are strongly autocorrelated (Woolrich, Ripley, Brady, & Smith, 2001), which produces a very poor estimate of the variance of the Fisher Z scores used for the hypothesis tests, yielding strongly inflated false positives. In order to account for autocorrelation and cross-correlation across pairs of time series, the approach termed "xDF" has been utilized (Afyouni et al., 2019). As shown in Afyouni et al. (2019), the xDF method produces much more accurate estimates than the naive approach. Using the xDF method, adjusted Z scores have been obtained, and the resulting p-values have been corrected for multiple comparisons using a false discovery rate procedure (Benjamini & Yekutieli, 2001). This part of the analysis has been completed using the Matlab xDF Toolbox (https://github.com/asoroosh/xDF, accessed on September 15, 2019; Afyouni et al., 2019). Finally, to obtain the binary adjacency matrix, any edge with an adjusted p-value lower than the nominal level (5%) has been set to one and otherwise to zero. For this data set, we had only one visit per subject, so T = 1, and none of the subject covariates were time-varying. Each subject is linked to the following covariates: age (29.20 ± 3.62), gender (+1 for a female, with 54 subjects, and −1 for a male, with 45 subjects), and fluid intelligence score (IQ) (16.21 ± 4.62), given in terms of their means and standard deviations. Fluid intelligence has been evaluated using the Penn Progressive Matrices (PMAT) test, in which the subjects are tasked with predicting a correct pattern given a prior set of sequential patterns. The PMAT test consists of 24 items, with the problems arranged in order of increasing difficulty. The design matrix has been set up to contain a global intercept and the three covariates mentioned above (with age and IQ standardized).
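The edge-defining pipeline above can be sketched in simplified form: pairwise correlations, one-sided Fisher Z tests, an FDR correction in the Benjamini-Yekutieli style, and thresholding into a binary adjacency matrix. The naive variance 1/(T_s − 3) is used here purely for illustration; the paper replaces it with xDF-adjusted variances to handle autocorrelated fMRI series. The toy time series are synthetic.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)
n_nodes, T_s = 10, 200
ts = rng.normal(size=(T_s, n_nodes))           # toy nodal time series
r = np.corrcoef(ts, rowvar=False)

iu = np.triu_indices(n_nodes, k=1)
z = np.arctanh(r[iu]) * np.sqrt(T_s - 3)       # Fisher Z with naive variance
# one-sided p-values for H1: r > 0, via the normal survival function
p = np.array([0.5 * (1.0 - erf(zi / np.sqrt(2.0))) for zi in z])

# Benjamini-Yekutieli adjusted p-values
m = p.size
order = np.argsort(p)
c_m = np.sum(1.0 / np.arange(1, m + 1))        # BY harmonic correction factor
adj = p[order] * m * c_m / np.arange(1, m + 1)
adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
p_adj = np.empty(m)
p_adj[order] = np.minimum(adj, 1.0)

# threshold into a symmetric binary network with an empty diagonal
adj_matrix = np.zeros((n_nodes, n_nodes), dtype=int)
adj_matrix[iu] = (p_adj < 0.05).astype(int)
adj_matrix += adj_matrix.T
```

Since the toy series are independent noise, few (if any) edges should survive the corrected 5% threshold; on real fMRI data, the xDF adjustment is what keeps this step from being anticonservative.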
The three covariates have been chosen based on the fact that primary subject covariates like age and gender are known to have high explanatory power, especially for capturing differences between subjects (Dosenbach et al., 2010). Similar evidence has also been given in support of fluid intelligence (Dubois, Galdi, Paul, & Adolphs, 2018). It is worth mentioning, however, that even though the HCP data contain a wealth of behavioral scores, we have decided to select only the most standard covariates, since there is very little empirical evidence that the available behavioral measures are informative of specific functional processes in the brain. Even less evidence exists about how informative they are for clustering. While we do not discount the potential research value of linking behavioral data to functional connectivity data, we view this question as secondary and beyond the scope of this article. Another possibility would be to use task-based fMRI data as a subject-specific covariate and to see how these relate to the clustering. This would allow a more direct study of the link between specific tasks and resting-state data. However, given that this model is one of the first steps toward linking covariates to clustering, more sophisticated analyses seem beyond the scope of this work.
Clustering results
To these data, we fit Het-Mixed-SBM and Het-SBM using initializations based on the k-means clustering from the R package amap (Lucas, 2014) and the hierarchical clustering from the R package stats (R Core Team, 2017). For the k-means clustering, we performed 1,000 initializations for each subject and for the grand average matrix, while for the hierarchical clustering we performed only one initialization per subject and for the grand average, since this method is deterministic. For demonstration purposes, the analysis only considers 10 clusters, and the ICL criterion is used in both models to select the best fit across initializations. The best fits for both models converge to an identical clustering solution, suggesting that the data do not contain noise at levels that would iron out the connectivity profiles in some of the clusters, as we observed in our simulations. Therefore, the choice between fixed-effect and mixed-effect modeling does not seem to impact the selected clustering fit. Although we do not observe differences in the clustering between Het-Mixed-SBM and Het-SBM in our real-data analysis, this does not mean that this situation generalizes to all data sets. For example, it is possible to have similar mean connectivity profiles in some clusters but very different random effects, and this would obscure the clustering with fixed effects (Het-SBM).
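The run-many-initializations-and-keep-the-best-fit strategy can be sketched as follows. A minimal Lloyd's k-means stands in for the amap/stats routines, and within-cluster inertia stands in for the ICL criterion; the data and cluster count are invented for illustration.

```python
import numpy as np

def kmeans(X, k, seed, n_iter=50):
    """Plain Lloyd's algorithm from one random initialization."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    inertia = ((X - centers[labels]) ** 2).sum()     # stand-in for the ICL score
    return labels, inertia

rng = np.random.default_rng(1)
# Two well-separated groups of "connectivity profiles".
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])

# Many initializations; keep the fit with the best criterion value.
labels, best_inertia = min((kmeans(X, 2, seed=s) for s in range(100)),
                           key=lambda fit: fit[1])
print(len(set(labels[:20].tolist())), len(set(labels[20:].tolist())))
```

Because the criterion is compared across all initializations, a poor random start cannot dominate the final fit, which is the same rationale as selecting across the 1,000 k-means initializations above.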
To understand the fit, in Figure 8, we list the individual nodes in each of the ten fitted blocks. The label of each node is sourced from the 17-network parcellation of Thomas Yeo et al. (2011), which depicts typical neural activations while a subject is at rest. These labels are the central visual, control (A, B, and C), default (A, B, and C), dorsal attention (A and B), limbic (A and B), peripheral visual, salience/ventral attention (A and B), somatomotor (A and B), and temporal-parietal networks. The central visual network is engaged in conscious processing of visual stimuli, supporting depth of visual perception and conscious awareness of visual stimuli. The network comprises nodes from the striate and extrastriate cortex. The control network supports cognitive processes related to working memory, attention, and decision making. In the parcellation, this network is split into three components labeled A, B, and C; the control A component consists of nodes in the temporal cortex, intraparietal sulcus, dorsal, lateral, and lateral-ventral prefrontal cortex, and midcingulate cortex; the control B component consists of nodes in the temporal cortex, inferior parietal lobule, and the dorsal, lateral, lateral-ventral, medial-posterior, and lateral-dorsal prefrontal cortex; the control C component consists of nodes in the precuneus and posterior cingulate cortex. The default mode network is active during tasks such as daydreaming, future planning, accessing memories, and thinking about others or oneself.
The network is split into three components: A, B, and C. The default A component consists of nodes in the inferior parietal lobule, dorsal prefrontal cortex, precuneus, posterior cingulate cortex, medial prefrontal cortex, and temporal cortex; the default B component consists of nodes in the temporal cortex, anterior temporal lobe, inferior parietal lobule, and the dorsal, lateral, and ventral prefrontal cortex; the default C component consists of nodes in the precuneus, inferior parietal lobule, retrosplenial cortex, and parahippocampal cortex. The dorsal attention network is linked to voluntary orienting of attention to external tasks. It is divided into two components: A and B. Component A consists of nodes in the temporal and parietal occipital cortex and the superior parietal lobule; component B consists of nodes in the temporal occipital cortex, postcentral cortex, and frontal cortex. The peripheral visual network supports peripheral vision. It consists of nodes in the striate cortex and the extrastriate inferior and extrastriate superior cortex. The salience/ventral attention network is linked to attention, such as redirecting attention to sudden stimuli and deciding which such stimuli are worthy of attention. It is divided into components A and B. Component A consists of nodes in the parietal operculum, frontal operculum, insula, parietal medial cortex, frontal medial cortex, and precentral cortex; component B consists of nodes in the cingulate anterior cortex, inferior parietal lobule, lateral prefrontal cortex, medial posterior prefrontal cortex, and dorsal prefrontal cortex. The somatomotor network is linked to the coordination of motor tasks. It consists of components A and B. Component A consists of nodes in the primary somatosensory cortex; component B consists of nodes in the insula, central cortex, secondary somatosensory cortex, and auditory cortex. The temporal-parietal network contains regions in the temporal parietal cortex.
Next, we summarize each block according to its functional specification (see Figure 9). Block 1 contains regions from five networks: the central visual, somatomotor B, salience/ventral attention B, and control A and C networks. Given the diversity of its network associations, it is not easy to attach a unique functional label. Nevertheless, each region in this block seems to be very weakly connected with the other regions in this block and with the regions in the other blocks, which can explain why they have been clustered together. For this reason, we refer to this block as "weakly connected." By contrast, the remaining blocks are more straightforward to classify functionally. Indeed, Block 2 can be categorized as "visual" as it contains elements from the central and peripheral visual networks, Block 3 as "somatomotor and salience/ventral attention," Block 4 as "dorsal attention," Block 5 as "salience/ventral attention B," Block 6 as "limbic and temporal-parietal," Block 7 as "default mode A-B," Block 8 as "control A," Block 9 as "control B," and Block 10 as "default mode C."

FIGURE 9. Average connectivity matrix (for mean age, gender, and IQ) thresholded at a 10% connectivity rate. The clusters are plotted along a circle, and their relative sizes correspond to their total number of nodes. Within-cluster connections are given in percent, and between-cluster links are drawn as edges whose thicknesses indicate their relative strength of association. Each block is also given an approximate functional label.

Figure 10 shows the average connectivity matrix for mean age, gender, and IQ. The within-cluster connectivity of Block 1 is fragile and weaker than several of its connections with other blocks (e.g., the connectivity between Block 1 and Block 5), suggesting a so-called "disassortative mixing" pattern.
The nodes in this block appear to be classified together based on their substantial dissimilarity in connectivity profiles to the nodes in the rest of the networks. This departure from the main clustering patterns may potentially explain its functional heterogeneity, which was identified earlier. It is also interesting to point out that Blocks 2 (visual), 3 (somatomotor and salience/ventral attention), and 4 (dorsal attention) exhibit relatively high within-cluster connectivities but also relatively strong connections between one another. The high within-cluster connectivity is generally expected since regions in close anatomical proximity are involved in local processes. The relatively high connectivity between these clusters suggests an interplay between visual, somatomotor, and attention tasks. Although the components of the salience/ventral attention network are split between Blocks 3 and 5 due to their differing connectivity profiles (e.g., Block (3,9) and Block (5,9)), these blocks maintain relatively high mutual connections, which are expected within the same functional network. Blocks 6 (limbic and temporal-parietal) and 7 (default mode A-B) also exhibit strong mutual connections, while component C of the default mode network in Block 10 is not as connected to Block 6 as Block 7 is. Finally, components A and B of the control network are split into Blocks 8 and 9, respectively.

FIGURE 10. Average connectivity matrix for mean age, gender, and IQ. The emerging cluster structure seems to diverge from a classical modular organization and shows evidence of high connectivity patterns between some blocks (e.g., between Blocks 2, 3, and 4, between Blocks 6 and 7, and between Blocks 8 and 10). The blocks are defined according to their connectivity profile, and even though Blocks 3 and 4 appear to have similar within- and between-cluster probabilities, they are split into separate blocks on the grounds of their different connections to Blocks 5 and 8. An exception from the overall pattern in the cluster structure is Block 1, which has more connections to the other blocks in the network than within its own cluster. This pattern is generally known as disassortative mixing.
Inference results
In Figure 11, we show the uncorrected p-values (in −log10 scale) thresholded at 5% significance for the null hypothesis of no effect for each covariate (age, gender, and IQ). The results are shown for four testing procedures based on the combination of the two models (Het-SBM and Het-Mixed-SBM) and two tests (Wald and permutation). The Wald test for Het-SBM exhibits a much higher number of significant blocks than the three other testing procedures. This behavior is consistent with the results of Simulation II (see Section 3.3), where we observed strongly liberal control of the FPR for the Wald test of Het-SBM. In general, the results of the three other testing procedures seem relatively similar.
In Figure 12, we show the familywise error rate corrected p-values (in −log10 scale) thresholded at 5% significance for the null hypothesis of no effect for each covariate (age, gender, and IQ) and each testing procedure. As with the uncorrected p-values, the Wald test for Het-SBM exhibits a much higher number of significant blocks than the three other testing procedures. The permutation test for Het-SBM declared two significant results, which occur for age and gender in Block (2,3) (between the visual and the somatomotor and salience/ventral attention blocks). In contrast, the Wald and permutation tests for Het-Mixed-SBM both consistently declared a single significant result, which occurs for gender in Block (1,5) (between the weakly connected and salience/ventral attention blocks). As shown in Nichols and Hayasaka (2003), one potential explanation for this discrepancy between the two models is that the permutation test based on the maximum statistic is sensitive to situations where the statistics are not pivotal. As the statistics in Het-SBM do not account for random effects, their null distributions are likely to vary across blocks, especially if the variance of the random effects changes across blocks. In such cases, the null distribution of the maximum statistic is likely to be driven by the blocks that bear the most familywise error risk. As a consequence, some blocks will be more penalized than others. On the contrary, as Het-Mixed-SBM accounts for some random effects, we may expect the statistics to be more pivotal. This means that the contribution of each block to the null distribution of the maximum statistic is likely to be more homogeneous, penalizing blocks more equally. It is worth mentioning that, based on this explanation, both results are valid even if they do not exhibit the same significant blocks, because both control the familywise error rate.

FIGURE 11. Uncorrected significant p-values for the null hypothesis of no effect for each covariate (age, gender, and IQ). The results are given for Het-SBM and Het-Mixed-SBM and in terms of the Wald and the permutation tests. The p-values are stated as −log10(p-values) (e.g., a legend score of 36 corresponds to a p-value of 10−36), and the nonsignificant p-values at 5% significance are given in gray.

FIGURE 12. Familywise error rate corrected significant p-values for the null hypothesis of no effect for each covariate (age, gender, and IQ). The results are given for Het-SBM and Het-Mixed-SBM and in terms of the Wald and the permutation tests. The p-values are stated as −log10(p-values) (e.g., a legend score of 35 corresponds to a p-value of 10−35), and the nonsignificant p-values at 5% significance are given in gray. For the Wald test, the familywise error rate corrected p-values were obtained using a Bonferroni correction. For the permutation test, they were obtained from the null distribution of the maximum statistics.
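The maximum-statistic permutation approach described here can be sketched as follows, with a simple correlation statistic standing in for the models' Wald statistics; the subject count, block count, and planted effect are invented for illustration.

```python
import numpy as np

def maxstat_fwe(y, x, n_perm=2000, seed=0):
    """FWE-corrected p-values via the permutation distribution of the max statistic.

    y : (subjects x blocks) per-block summaries; x : (subjects,) covariate.
    Each block's observed |correlation| is compared against the null
    distribution of the MAXIMUM |correlation| across all blocks.
    """
    r = np.random.default_rng(seed)
    x = x - x.mean()
    y = y - y.mean(0)
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum(0))   # invariant to permuting x
    obs = np.abs(x @ y) / denom
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        max_null[i] = (np.abs(r.permutation(x) @ y) / denom).max()
    return (1 + (max_null[:, None] >= obs[None, :]).sum(0)) / (n_perm + 1)

rng = np.random.default_rng(2)
n_sub, n_block = 60, 8
x = rng.standard_normal(n_sub)
y = rng.standard_normal((n_sub, n_block))
y[:, 3] += 0.9 * x                        # a genuine covariate effect in block 3
p_fwe = maxstat_fwe(y, x)
print(p_fwe[3] < 0.05)
```

Because every block is referred to the same max-statistic null, the correction controls the familywise error rate; as noted above, it penalizes blocks equally only when the per-block statistics are roughly pivotal.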
DISCUSSION
In this article, we proposed a novel multi-subject model, termed Het-Mixed-SBM, which can be used for the analysis of multi-subject binary network data, allowing for random effects per subject and block.
In Simulation I, we validated the clustering ability of the model on a range of different cluster structures and compared its results with the Het-SBM fits. In instances of clearly delineated cluster structures in which the blocks show starkly segregated connectivity profiles (i.e., the Het-Modular and Core-Modular structures), both models converged to the correct clustering solutions. Nevertheless, in cluster structures with merged connectivity profiles, such as Blocks 1 and 2 in the weak topology, the variance of the random effects can be used by Het-Mixed-SBM to correctly separate Blocks 1 and 2. Interestingly, for small variances of the random effects, it seemed hard for Het-SBM to split Blocks 1 and 2 across the subjects, and the model tended to underestimate the optimal number of clusters. However, when the random effect variances were larger, Het-SBM yielded an accurate clustering. The overall conclusion of Simulation I is that, for cluster structures with strong clustering evidence, we can expect similar performance between the two models, but for cluster structures with weaker evidence, the fixed-effects model (Het-SBM) can produce erroneous solutions. In combination with the conclusions of Pavlovic et al. (2020), where it was shown that fixed-effect covariates can be informative of the clustering, it appears that random effects can also be useful in guiding the model toward the correct clustering solution in multi-subject networks. As such, it may not be appropriate to perform the clustering separately from the logistic or logistic-mixed regression.
Having established in Simulation I the influence of random effects on the clustering, in Simulation II we focused on validating the inference procedures based on the parametric (Wald) and nonparametric (permutation) tests in Het-SBM and Het-Mixed-SBM. As anticipated, the Wald test in Het-SBM showed highly inflated FPRs in the presence of random effects in the data, and these results improved with permutation testing. By contrast, the Wald and permutation tests in Het-Mixed-SBM showed more consistent results. The only departure from this trend appears in the particular block of the Het-Modular structure whose low connectivity rate introduced samples with hugely disproportionate ratios of zeros to ones, which are known to yield highly biased estimates and inflated FPRs for the Wald test. To remove this type of bias, one solution would be to use Firth-type estimators as in Het-SBM. Nevertheless, this is not straightforward due to the presence of random effects in the model. From a purely practical viewpoint, in such circumstances it may be advisable to resort to a simpler alternative like permutation testing, whereas for probability rates away from the zero/one boundaries it may be simpler to use the Wald test.
In the analysis of the HCP data set, we obtained identical clustering fits for Het-SBM and Het-Mixed-SBM. The clustering fit highlighted functionally meaningful blocks. The inference results based on the Wald and permutation testing in Het-SBM showed widely different results. These results echo the conclusions of Simulation II, which suggested a liberal behavior of the Wald test with Het-SBM. By contrast, there seems to be a high degree of agreement between the Wald and the permutation testing in Het-Mixed-SBM. These results seem to reinforce the notion that it is important to account for random effects to better model the data. A shortcoming of our real-data analysis is that we fixed the total number of clusters to 10 to ensure that the blocks are reasonably sized for realistic inferences and to avoid potential small sample issues. Further analyses using higher numbers of clusters would be useful to uncover finer-grained fits that might be more informative.
One shortcoming of Het-Mixed-SBM is that it only considers a random intercept per block as random effects. While this is an improvement upon the classic Het-SBM, which does not model any form of within-subject dependencies, it assumes that, for a given block, the within-subject dependencies are the same between all the within-subject connections. This assumption may be too restrictive for repeated-measures data, as we may expect the dependencies to be more complicated. For example, in the case of longitudinal networks, we may expect the dependencies to decrease with time, which would not be modeled with our current version of Het-Mixed-SBM. One potential remedy for modeling more complex types of within-subject dependencies is to include additional random effects in the model. For example, in the case of longitudinal networks, adding a random effect of time would allow us to model dependencies that also vary between time points. Similar considerations might be given to data that combine different imaging modalities (e.g., a combination of diffusion tensor imaging and resting-state fMRI).
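A generative sketch may clarify the random-intercept assumption: every within-subject edge of a block shares that subject's intercept, which induces within-subject dependence (overdispersion relative to independent Bernoulli edges). All parameter values and names below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# One block (q, l): subject i has covariate x_i (fixed effect) and a random
# intercept b_i ~ N(0, sigma2) shared by all of that subject's edges.
n_sub, n_edges = 200, 50
beta0, beta1, sigma2 = -1.0, 0.8, 1.5
x = rng.standard_normal(n_sub)
b = rng.normal(0.0, np.sqrt(sigma2), n_sub)
eta = beta0 + beta1 * x[:, None] + b[:, None]          # per-subject logit
edges = rng.binomial(1, sigmoid(eta), (n_sub, n_edges))

# The shared intercept makes a subject's edges co-vary: the variance of
# per-subject edge rates far exceeds the pure-binomial expectation.
subj_rates = edges.mean(1)
binomial_var = np.mean(subj_rates * (1 - subj_rates)) / n_edges
ratio = np.var(subj_rates) / binomial_var
print(ratio > 2)
```

A longitudinal extension of the kind discussed above would add, say, a random slope on time to `eta`, letting the induced dependence vary across visits.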
Finally, as part of our future work, we will also consider a potentially useful simplification of our model that may speed up computation while accounting for subject-level randomness. Specifically, we intend to consider a model that combines "homogeneous random effects" (per-subject random intercepts common to the entire network) with "heterogeneous fixed effects" regression for each block element. We expect this model to have a similar clustering potential as the Het-Mixed-SBM but with a much reduced computational burden.
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of this article.
APPENDIX A. PARAMETER ESTIMATION IN HET-MIXED-SBM
For clarity of the subsequent discussion and for easy referencing, we provide a list of integrals I_qlk1-I_qlk6 that will be used in the estimating equations of β and σ². Taking the partial derivatives of the variational bound L(f*(z; τ); α, β, σ²) (Equation (10)) with respect to β_ql and σ²_ql, for an individual block (q, l), we can form a score vector U(β_ql, σ²_ql) as a (P + 1) × 1 vector of first derivatives, U(β_ql, σ²_ql) = (U(β_ql)ᵀ, U(σ²_ql)ᵀ)ᵀ.
APPENDIX C. REPARAMETRIZATION OF THE INTEGRALS IN THE HET-MIXED-SBM
At each step of the Newton-Raphson algorithm, we require a numerical approximation of the six integrals (Equations (A1)-(A6)). In practice, such integrals can be reasonably well estimated with the adaptive Gauss-Hermite quadrature approximation (Lesaffre & Spiessens, 2001; Liu & Pierce, 1994), whose implementation in R is available via the function integrate (Piessens, Doncker-Kapenga, Überhuber, & Kahaner, 1983; R Core Team, 2015). Although this computational strategy offers relatively quick and accurate approximations, in a few examples we encountered numerical instabilities. First, some integrals were evaluated as zeros, which caused the variational bound to diverge to −∞. The main reason was that the maximal value attained by the function h_qlk was very small (e.g., −1,000), making the integrand numerically equal to zero over the whole domain of integration. To treat this numerical instability, we can add an offset value c_qlk to the exponent h_qlk. In particular, this offset can be chosen as the negative maximum of h_qlk, taken with respect to r_qlk. Using this, we can write the integral as ∫_{−∞}^{+∞} e^{h_qlk} dr_qlk = e^{−c_qlk} ∫_{−∞}^{+∞} e^{h_qlk + c_qlk} dr_qlk, and the edge-based component of the variational bound is given in Equation (C3).
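The offset trick can be checked numerically. The function h below is a toy stand-in for h_qlk, chosen so that its maximum is −1,000 and the raw integrand underflows to zero in double precision, exactly the failure mode described above.

```python
import numpy as np
from scipy.integrate import quad

def log_integral(h, r_star):
    """log of the integral of exp(h(r)) dr, stabilized with the offset c = -max h."""
    c = -h(r_star)                               # r_star maximizes h
    val, _ = quad(lambda r: np.exp(h(r) + c), -np.inf, np.inf)
    return -c + np.log(val)

h = lambda r: -0.5 * r ** 2 - 1000.0             # maximum at r = 0, value -1000

naive, _ = quad(lambda r: np.exp(h(r)), -np.inf, np.inf)
stable = log_integral(h, r_star=0.0)

# Analytically, log of the integral is log(sqrt(2*pi)) - 1000, about -999.081.
print(naive, round(stable, 3))
```

The naive quadrature returns exactly zero (the integrand underflows everywhere), while the offset version recovers the correct log-integral.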
"year": 2020,
"sha1": "87e66621c067698fffef74c518380de34183ad3b",
"oa_license": "CCBY",
"oa_url": "https://ora.ox.ac.uk/objects/uuid:3248a7ba-1e5c-4fe2-a636-5a80b1096a03/download_file?file_format=pdf&safe_filename=Pavlovic_et_al_2020_multi_subject_stochastic_blockmodels_with_mixed_e%2525EF%2525AC%252580ects.pdf&type_of_work=Journal+article",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "95c20ef113f99f7bbbba3aaf7fa81c1b9e251ab7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Global knowledge, attitude, and practice towards COVID-19 among pregnant women: a systematic review and meta-analysis
Background
Pregnant women form a specially vulnerable group due to unique changes in pregnancy, leading to a higher risk of severe infection. As severe COVID-19 increases the risk of preeclampsia, preterm delivery, gestational diabetes, and low birth weight in pregnancy, there is a need to enhance pregnant women's knowledge, attitudes, and practices to prevent these complications. This systematic review and meta-analysis aimed to determine their levels of knowledge, attitudes, and practice (KAP) regarding COVID-19 at the global level.

Methods
The systematic literature search was conducted in English in Google Scholar, Scopus, PubMed/MEDLINE, Science Direct, Web of Science, EMBASE, Springer, and ProQuest, from the onset of the pandemic until September 2022. We used the Newcastle-Ottawa scale checklist for cross-sectional studies to evaluate the risk of bias in the studies. Data were extracted into a Microsoft Excel spreadsheet and analyzed with STATA software version 14. We employed Cochran Q statistics to assess the heterogeneity of studies and used inverse-variance random-effects models to estimate the pooled level of pregnant women's KAP towards COVID-19 infection prevention.

Results
Based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the inclusion criteria, 53 qualified studies were acquired from several countries. In total, 51 articles (17,319 participants) for knowledge, 15 articles (6,509 participants) for attitudes, and 24 articles (11,032 participants) for practice were included in this meta-analysis. The pooled levels of good knowledge, positive attitude, and appropriate practice in pregnant women were estimated at 59% (95% CI: 52–66%), 57% (95% CI: 42–72%), and 53% (95% CI: 41–65%), respectively. According to subgroup analysis, the levels of knowledge, attitude, and practice were 61% (95% CI: 49–72%), 52% (95% CI: 30–74%), and 50% (95% CI: 39–60%), respectively, in Africa, and 58.8% (95% CI: 49.2–68.4%), 60% (95% CI: 41–80%), and 60% (95% CI: 41–78%), respectively, in Asia.

Conclusion
The knowledge, attitude, and practice towards COVID-19 infection prevention in pregnant women were low. It is suggested that health education programs and empowerment of communities, especially pregnant women, about COVID-19 continue with better planning. For future studies, we propose investigating the KAP of COVID-19 in pregnant women in countries of other continents and geographical regions.

Supplementary Information
The online version contains supplementary material available at 10.1186/s12884-023-05560-2.
Background
The WHO declared the outbreak caused by COVID-19 a public health emergency of international concern in January 2020 [1]. As of 02 October 2022, it had resulted in 623,268,353 confirmed cases of COVID-19 and 6,549,980 deaths globally [2]. Over time, new aspects of the effect of this virus on different body organs were identified and reported. Studies showed its impact on the digestive system, nervous system, skin, smell, cardiovascular system, liver, kidney, and eyes [3][4][5][6]. In addition to physical symptoms, the psychological burden on COVID-19 patients was heavy and persistent, so the ongoing psychological trauma of COVID-19 survivors was highlighted in health care [7]. As of March 2021, there had been 80 reported maternal deaths due to COVID-19 in the United States, and as of October 6, 2021, 1,637 COVID-19 infections and 15 deaths had been reported in Mississippi [8].
On the other hand, pregnant women are more vulnerable, especially in the case of emerging infections, due to physiological and immunological changes [9,10]. They are at risk of contracting the disease because of weakened immune function and their presence in the general community [11]. Changes caused by disasters and crises harm women's health [12]. Moreover, levels of anxiety and stress during the COVID-19 pandemic were high, so women worried about their babies becoming infected and about seeking prenatal care [13,14]. The most common complications in pregnancy include acute respiratory distress, disseminated intravascular coagulation, renal failure, bacterial infection, sepsis, the need for mechanical ventilation, fetal death, and preterm delivery [15,16]. The type of delivery in affected pregnant women depends on the condition of the fetus, mother, and cervix; thus, infection with COVID-19 alone does not determine the type of delivery [17]. Furthermore, COVID-19 can also affect children and cause systemic disease with involvement of several internal organs [18].
In a systematic review, Turan et al. showed that increasing age, obesity, diabetes, D-dimer levels, and interleukin-6 were predictive of pregnancy outcomes during COVID-19, which led to a rise in premature birth and cesarean section; vertical transmission may also be possible, although it has not been proven [19]. In another study, Simsek et al. reported that COVID-19 has a harmful effect on pregnancy [20]. The association of severe COVID-19 during pregnancy with preeclampsia, premature birth, gestational diabetes, and low birth weight has also been reported [21].
Considering the vulnerability of pregnant women, the lack of fully effective vaccines for preventing infection, and the absence of definitive treatment, prevention is possible by increasing society's knowledge so that correct health principles and physical distancing are applied to limit the spread of the disease. According to a study in Ethiopia, maternal age, educational level, husband's educational level, underlying disease, and sociocultural and demographic features influenced the KAP of COVID-19 in pregnant women [22]. Although there are numerous studies on the KAP of pregnant women regarding the prevention of COVID-19, their findings are not always consistent with each other. Therefore, an overall understanding of KAP on COVID-19 prevention behaviors in pregnant women is essential for health system policymakers and stakeholders to design prevention programs. As a result, this study aims to determine the level of knowledge, attitudes, and preventive actions of pregnant women regarding COVID-19 at the global level.
Methods
This study was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [23,24]. In addition, its protocol was registered in the International Prospective Register of Systematic Reviews (PROSPERO) under code CRD42022351552 (https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=351552).
Search strategy
We searched for articles published in English in Google Scholar, Scopus, PubMed/MEDLINE, Science Direct, Web of Science, EMBASE, Springer, and ProQuest, from the onset of the pandemic until September 2022.
The search was performed using MeSH terms, combined or used separately with the Boolean operators "AND" and "OR" (Supplementary Table 1). The references of the retrieved articles were also examined to increase sensitivity. The processes of searching for and selecting related articles are shown in the PRISMA flowchart (Fig. 1).
Eligibility criteria
Databases were searched based on the strategy described above. Then, the collected articles were carefully reviewed against the desired epidemiological parameters and the inclusion criteria:

1. All cross-sectional studies that reported data on COVID-19 knowledge, attitudes, and practices, including studies on KAP towards COVID-19 in pregnant women.
2. All articles published in English from the onset of the pandemic until September 2022.
3. All articles whose full text was accessible.
4. Articles in which the subjects were selected based on random sampling or a census.

The exclusion criteria were:

1. Articles whose population was other than pregnant women (such as the general population, health care workers, and students).
2. Articles published in languages other than English.
3. Studies other than observational studies, such as reviews, case series, and short communications.
Quality assessment (Risk of bias)
In this study, we used the modified Newcastle-Ottawa scale (NOS) checklist for cross-sectional studies to evaluate the risk of bias (internal validity) of the studies. The NOS is an ongoing collaboration between the University of Newcastle, Australia, and the University of Ottawa, Canada. It was developed as a simple and convenient tool for evaluating the quality of non-randomized studies, with its design, content, and ease of use directed at incorporating quality assessments into the interpretation of meta-analytic results. On this scale, studies are evaluated and graded from three perspectives, each of which includes subsections: (a) selection of study groups (representativeness of the sample, sample size, ascertainment of exposure, and non-respondents); (b) comparability of groups (whether subjects in different outcome groups are comparable, based on the study design or analysis, and control of confounding factors); and (c) ascertainment of the exposure or outcome of interest (assessment of the outcome and the statistical test). The journal title and author names were visible to the reviewers during the quality assessment of the included studies. First, the full text of each article was read carefully by the first reviewer, and the quality assessment checklist was completed and scored. The same steps were performed independently by the second reviewer, and disagreements were discussed in a group session. Scores range from 0 to 10, calculated from the checklist for each study, and articles were divided into three risk-of-bias categories: low risk (8-10), medium risk (5-7), and high risk (0-4) [25].
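For illustration, the score-to-category mapping described above can be written as a small (hypothetical) helper, using the non-overlapping bands of low 8-10, medium 5-7, and high 0-4:

```python
def nos_risk(score):
    """Map a Newcastle-Ottawa score (0-10) to a risk-of-bias category."""
    if not 0 <= score <= 10:
        raise ValueError("NOS score must lie between 0 and 10")
    if score >= 8:
        return "low risk"
    if score >= 5:
        return "medium risk"
    return "high risk"

print(nos_risk(9), nos_risk(6), nos_risk(3))
```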
Data extraction
At first, all selected articles were entered into EndNote X8 software (Thomson Reuters, New York, USA), and duplicate articles were removed. Then, two team members (MJ and VR) reviewed the selected titles and abstracts and excluded irrelevant articles from the study.
We selected articles related to the research topic and consistent with descriptive, cross-sectional designs. After choosing appropriate articles according to the study objectives, the final selection was made through group discussion. The articles then entered the next stages for qualitative evaluation and data extraction.
The data extracted from each article included the name of the author(s), year of study, type of study, sample size, geographical region of the study, and the levels of good knowledge, positive attitude, and appropriate practice towards COVID-19.
In this study, knowledge, attitude, and practice regarding COVID-19 were defined as follows:
Knowledge
Items covering disease symptoms, routes of transmission, the incubation and isolation periods, and ways to prevent COVID-19 were used to assess knowledge. A good level of knowledge was defined as an above-average score.
Attitude
Attitude included the individual's agreement with or willingness to participate in the fight against the COVID-19 epidemic, as well as trust in the government and fellow citizens to win the battle against the pandemic. A score above the average level was considered a good attitude towards the control and management of COVID-19.
Practice
Practice was defined as preventing infection and implementing prevention recommendations, such as maintaining physical distance, hand hygiene, wearing a mask, avoiding crowded places or social events, and isolation and quarantine to prevent the spread of COVID-19. Respondents scoring at or above the average were considered to have appropriate practice.
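The mean-score cutoff shared by the three KAP definitions can be illustrated with a minimal sketch (hypothetical scores; the actual questionnaires differ across the included studies). Note that the text uses a strictly above-average rule for knowledge and attitude but "average or higher" for practice:

```python
def proportion_good(scores, inclusive=False):
    """Share of respondents whose score exceeds the sample mean.

    With inclusive=True, scores equal to the mean also count, matching
    the "average or higher" rule applied to practice.
    """
    mean = sum(scores) / len(scores)
    if inclusive:
        good = [s for s in scores if s >= mean]
    else:
        good = [s for s in scores if s > mean]
    return len(good) / len(scores)
```

With scores [1, 2, 2, 3] (mean 2), the strict rule gives 0.25 while the inclusive rule gives 0.75, which is why the two cutoffs are worth distinguishing.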
Statistical analysis
In this meta-analysis, statistical analyses were performed with STATA software (version 14). Inverse-variance weighting and Cochran's Q statistic were used to evaluate heterogeneity among studies, and heterogeneity was classified with the I² statistic: values < 50%, 50%-80%, and > 80% were defined as low, moderate, and high heterogeneity, respectively [26]. Because of the observed heterogeneity, the DerSimonian and Laird random-effects model was used in the current paper [27].
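The heterogeneity statistics and the DerSimonian and Laird random-effects pooling can be sketched in a few lines of pure Python (illustrative only; the authors used STATA's built-in routines):

```python
def dersimonian_laird(effects, variances):
    """Cochran's Q, I^2 (%), and the DerSimonian-Laird pooled estimate.

    effects: per-study effect estimates; variances: their within-study
    variances. Returns (pooled_effect, tau2, i2).
    """
    w = [1 / v for v in variances]  # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Method-of-moments between-study variance (tau^2)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2, i2
```

The returned I² value maps directly onto the low/moderate/high bands quoted above.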
To evaluate the source of heterogeneity, univariate and multivariable meta-regression methods were used, as well as subgroup analysis [24]. In the analysis of the subgroups, the level of appropriate knowledge, positive attitude, and appropriate practice regarding preventive behaviors toward COVID-19 were estimated based on geographical areas.
We used the Funnel plots and Egger's regression test to check the existence of publication bias. On a condition of confirmation of publication bias, the trim-and-fill method was used to estimate the number of censored studies and correct the final estimate [28].
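Egger's test regresses the standardized effect on precision; the intercept is the "bias" figure reported in the results. A minimal sketch of the intercept computation (function name is ours; the full test also requires the intercept's standard error and a t-test, omitted here):

```python
def egger_intercept(effects, std_errors):
    """Intercept of Egger's regression: standardized effect vs. precision.

    An intercept far from zero suggests small-study (publication) bias.
    """
    x = [1 / se for se in std_errors]                    # precision
    y = [e / se for e, se in zip(effects, std_errors)]   # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # intercept = bias estimate
```

With a common true effect and no bias, the standardized effects lie exactly on the precision line and the intercept is zero.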
In addition, we used ArcGIS 10.3 software to visualize the geographic distribution of appropriate knowledge, positive attitudes, and appropriate practice by continent and country.
Search results and eligibility studies
A total of 1,502 articles were retrieved from the seven databases mentioned above based on the inclusion criteria. In the next step, 605 articles were excluded as duplicates and 732 articles for not meeting the inclusion criteria in the title and abstract. A further 112 studies were excluded based on the exclusion criteria, such as study type, a non-pregnant target group, or lack of access to the full text. Finally, 53 studies, including 52 studies for knowledge, 15 studies for attitude [32, 36, 46-48, 50, 51, 55, 57, 66, 71, 73, 74, 77, 79], and 24 studies for practice [32, 40, 46, 48, 50, 52, 55, 57, 58, 62, 64, 65, 67-69, 71, 73-79, 81], were included in this systematic review and meta-analysis (Fig. 1).
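The screening arithmetic above (1,502 identified; 605, 732, and 112 excluded at successive stages, leaving 53) can be checked mechanically; a trivial sketch:

```python
def prisma_remaining(identified, *excluded_at_each_stage):
    """Records left after successive PRISMA-style exclusion stages."""
    remaining = identified
    for n in excluded_at_each_stage:
        remaining -= n
    return remaining
```

Using the counts from the text, `prisma_remaining(1502, 605, 732, 112)` returns 53, matching the number of included studies.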
Characteristics of the eligible studies
The eligible studies comprised 53 journal articles. In the quality assessment, 44 studies were scored as having a low risk of bias and nine as having a moderate risk based on the NOS; none fell into the high-risk category (Table 1). By continent, 30 studies were conducted in Asia, 21 in Africa, and one in North America (Tables 1 and 2). All studies used a cross-sectional design (Table 1).
Since the heterogeneity between studies was high, univariate and multivariable meta-regression were employed to investigate its cause and source. Univariate meta-regression indicated that country might be a source, with a coefficient of 0.02281, meaning that the percentage of good knowledge of COVID-19 can increase by 0.02281 per unit change in the country variable, as demonstrated in Table 3.
Pooled good attitudes toward COVID-19
A total of 15 studies with 6509 people, including nine studies in Asia (4150 people) and six studies in Africa (2359 people), were included for attitude analysis.
The results of univariate and multivariable meta-regression analysis showed that none of the variables (continent, country, study quality, year of study, and sample size) was a likely cause of heterogeneity (p > 0.05), as shown in Table 3.
Pooled appropriate practice toward COVID-19
A total of 24 studies contained 11,032 pregnant women, including 16 studies in Africa (7010 people) and eight studies in Asia (4022 people).
Univariate and multivariable meta-regression analysis was performed to find the source of heterogeneity. Table 3 shows that none of the variables of the continent, country, quality of studies, year of study, and sample size are possible causes of heterogeneity (p > 0.05).
Publication bias
We employed Egger's regression test and funnel plots to check publication bias. On a condition of confirmation of publication bias, the trim-and-fill method was used to estimate the number of censored studies and finally to correct the overall estimate of the meta-analysis.
The funnel plot and Egger's test showed significant publication bias for the level of knowledge (bias = -15.8941, 95% CI: -21.322 to -10.466, P < 0.001), as depicted in Fig. 8A. Based on the trim-and-fill results, two studies were estimated to be censored; the corrected estimate of the good knowledge level was 58.5% (95% CI: 49.5-67.5%).
For appropriate practice, the funnel plot was asymmetric and Egger's test was significant (bias = -25.4246, 95% CI: -34.458 to -16.39, P < 0.001), as shown in Fig. 8C. Based on the trim-and-fill non-parametric method, the expected values of two censored studies were calculated, and the corrected overall estimate of appropriate practice among pregnant women under the random-effects model was 49.4% (95% CI: 33.5-65.3%).
Discussion
This comprehensive systematic review and meta-analysis assessed the overall good knowledge, positive attitude, and appropriate practice towards COVID-19 among pregnant women. The study demonstrated that these parameters for COVID-19 infection prevention in pregnant women were low. According to the results, good knowledge was 59%, in line with the results of several studies [82][83][84]. In the present paper, information about knowledge was extracted from 30 studies in Asia (7,852 people), 21 studies in Africa (9,353 people), and one study in America (114 people). Given that the majority of the study population came from developing countries, the current study's estimate of the level of knowledge is probably lower than the true global average, or at least these results cannot be generalized to developed countries.
A closer look at the individual articles reveals the highest knowledge among pregnant women about COVID-19 in African countries (61%), followed by Asian countries (58.8%), and the lowest in the United States of America (35.9%) [29]. Interestingly, both the highest national estimate (Ghana, 85.6%) [68] and the lowest (Uganda, 32.8%) [75] came from African countries. Asian countries such as India (79.5%) [53][54][55] and Lebanon (81.5%) [45] also showed a high level of knowledge.
In addition, maternal age, educational level, husband's educational level, underlying disease, and socio-cultural and demographic features were associated with the KAP of COVID-19 in pregnant women [22]. Furthermore, a study conducted among pregnant women in China found that a good level of knowledge of COVID-19 prevention was related to higher education, exposure to information through the media (especially at the beginning of the epidemic), previous experience with other coronavirus epidemics, and the strict infection-control restrictions imposed by the local government immediately after the outbreak began [31].
The other results of this study demonstrated that the overall positive attitude among pregnant women toward COVID-19 was 57%, in line with the results of several studies [82,83,87]. A systematic review and meta-analysis estimated the attitude towards COVID-19 infection prevention among pregnant women in Ethiopia at 62.46% [86]. In our study, good knowledge of COVID-19 was more prevalent than a positive attitude; generally, people's knowledge is expected to be at a higher level than their attitude. Asia (60%) also fared better than Africa (52%). However, the highest positive-attitude value was found in Egypt (95%) [57], which has a better socioeconomic status and literacy level than other African countries.
The majority of the studies were online surveys, and age and education level affect participation in and responses to online surveys [88], since information sources, the Internet, and social networks play an important role in shaping knowledge and attitude [89]. Furthermore, differences in the stage of the epidemic curve during the study period, as well as trust in the local government to manage the epidemic, especially experience in controlling and managing previous epidemics, affected community attitudes [22]. Our findings also showed that appropriate practice towards COVID-19 was 53%, in line with previous studies [83,85,87] and lower than a review conducted around the globe [90]. A systematic review and meta-analysis estimated the practice among pregnant women in Ethiopia at 52.29% [86]. That study revealed that pregnant women residing in urban areas were 2.23 times more likely to have good preventive practices for COVID-19 infection than those residing in rural areas. One possible reason is that urban pregnant women have better access to basic healthcare services and media, and can read material related to COVID-19 in newspapers or on social media. Moreover, pregnant women with a secondary education performed preventive behaviors against COVID-19 3.36 times more often than those with no formal education [86].
On the other hand, it should be noted that the present study focused on pregnant women, while the global study included all members of society. The appropriate practice towards COVID-19 among pregnant women was better in Asia (60%) than in Africa (50%), which seems logical. The level of appropriate practice was lower than that of knowledge and attitude; achieving appropriate practice requires improving knowledge and attitude, yet their improvement does not lead to appropriate practice in all cases [91]. In a meta-analysis, Mose et al. showed that pregnant women with good knowledge were 2.73 times more likely to have good preventive practices for COVID-19 than those with poor knowledge [86]. The community's perception of the risk of infection; cultural norms such as shaking hands and participating in family, social, and religious gatherings; the continuity of water supplies and ease of hand washing; access to healthcare facilities; and living conditions may all influence the performance of COVID-19 prevention behaviors [90,92].
Furthermore, the harm caused by the pandemic may be different in the uninfected pregnant population. In this regard, Zheng et al., in a systematic review and qualitative meta-synthesis study, reported that the COVID-19 pandemic disrupted the conceiving plan and the routine care of pregnant women. Since the availability and quality of maternal care have played a decisive role in maternal and fetal outcomes, it is suggested that the government or healthcare providers balance the restrictions and access to maternity care during future pandemics [93].
Strengths and limitations
One limitation of the current paper was the scarcity of studies on the KAP components of preventive behaviors against COVID-19 in pregnant women, especially in developed countries, which limited the global estimate of the KAP rate and hampered comparisons between countries and continents. In addition, despite meta-regression analysis to find the source of heterogeneity and subgroup analysis to reduce its impact on the estimates, heterogeneity between studies remained high. This is probably due to variables not investigated in this study, such as differences in the tools and questionnaires used to measure the KAP components, and differences between the studied societies in basic demographic variables such as age, literacy level, socioeconomic status, culture, ethnicity, type of health system, and the policies adopted by governments to deal with the COVID-19 pandemic in each region and country. The small sample size of many of the included studies, which probably cannot be generalized to the populations of those countries, is also worth considering. Finally, publication bias among the included studies was significant; despite its correction with statistical methods and the estimation of the number of censored studies, it may still influence the estimates of this study.
However, considering the global estimation of the level of KAP components in pregnant women for COVID-19, we believe that in this study, all the available and accessible information and the appropriate statistical methods have been used for the most appropriate estimation of the KAP components at the global level. Also, by creating scientific evidence, its findings can be used in health policies and prevention programs, especially for possible future epidemics.
Conclusion
Our results showed that knowledge, attitude, and practice toward COVID-19 infection prevention in pregnant women were low. Considering that several years have passed since the beginning of this pandemic, and taking into account its global health, social, economic, and political effects, the knowledge, attitude, and practice of pregnant women, one of the high-risk groups for this disease, were expected to be in a better state. It is proposed that health education programs and community empowerment regarding COVID-19, especially for pregnant women, continue with better planning. For future studies, it is suggested to investigate the KAP of COVID-19 in pregnant women in countries of other continents and geographical regions.

Fig. 7 The percentage of appropriate practice towards COVID-19 among pregnant women based on the continent

Fig. 8 Funnel plot with pseudo 95% confidence limits for detection of publication bias among included studies
KAP Knowledge, attitudes, and practice
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
WHO World Health Organization
PROSPERO International Prospective Register of Systematic Reviews
MeSH Medical Subject Headings
NOS Newcastle-Ottawa Scale
d.f. Degree of freedom
Variation in life-history traits among Daphnia and its relationship to species-level responses to phosphorus limitation
Currently, organisms are experiencing changes in their environment at an unprecedented rate. The study of contributions to, and responses in, traits linked to fitness is therefore crucial, as these have direct consequences for a population's success in persisting under such change. Daphnia is used as a model organism because the genus contains keystone primary consumers in aquatic food webs. A life-history table experiment (LHTE) using four species of Daphnia was conducted to compare variation in life-history traits among species across two environmental conditions (high and low phosphorus availability). Results indicate that the food-quality environment had the greatest impact on life-history traits, while genetic contributions to traits were higher at the species level than at the clonal level. Higher trait variation and species-level responses to P-limitation were more evident in reproductive traits, while growth traits were less affected by food quality and showed less variation. Exploring trait variation and potential plasticity in organisms is increasingly important as a potential mechanism for population persistence, given the fluctuations in environmental stressors we are currently experiencing.
In view of the criticisms of the reviewers, the manuscript has been rejected in its current form. However, a new manuscript may be submitted which takes into consideration these comments.
Please note that resubmitting your manuscript does not guarantee eventual acceptance, and that your resubmission will be subject to peer review before a decision is made.
You will be unable to make your revisions on the originally submitted version of your manuscript. Instead, revise your manuscript and upload the files via your author centre.
Once you have revised your manuscript, go to https://mc.manuscriptcentral.com/rsos and login to your Author Center. Click on "Manuscripts with Decisions," and then click on "Create a Resubmission" located next to the manuscript number. Then, follow the steps for resubmitting your manuscript.
Your resubmitted manuscript should be submitted by 30-May-2019. If you are unable to submit by this date please contact the Editorial Office.
Please note that Royal Society Open Science will introduce article processing charges for all new submissions received from 1 January 2018. Charges will also apply to papers transferred to Royal Society Open Science from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (http://rsos.royalsocietypublishing.org/chemistry). If your manuscript is submitted and accepted for publication after 1 Jan 2018, you will be asked to pay the article processing charge, unless you request a waiver and this is approved by Royal Society Publishing. You can find out more about the charges at http://rsos.royalsocietypublishing.org/page/charges. Should you have any queries, please contact openscience@royalsociety.org.
We look forward to receiving your resubmission.
Kind regards,
Royal Society Open Science Editorial Office
Royal Society Open Science
openscience@royalsociety.org
on behalf of Dr Michael Tobler (Associate Editor) and Professor Kevin Padian (Subject Editor)

Associate Editor Comments to Author (Dr Michael Tobler):
We have received the feedback from two reviewers. Both see merit in the study and agreed that the experimental approach was sound, but they also found significant issues particularly associated with the statistical analyses. Accordingly, the manuscript will need significant revision and cannot be accepted for publication in its present form. However, if the author can address the reviewers' concerns and revise the manuscript accordingly, the paper should be suitable for publication in RSOS.
Pg 9, line 5: 'Visual inspection of a stem and leaf plot' is not sufficient detail for how outliers were identified. What are the criteria for the stem and leaf plots used to determine outliers? Usually, it's that outliers are two SDs from the mean value.
Pg 9, line 37: Change 'within' to 'in'.

Pg 9, line 25: Change 'looked at' to 'examined' or something similar. Check throughout the manuscript that this sort of casual language is corrected (e.g., line 29: 'In order to look at variance, we looked at…').
Pg 10, line 5-8: What you call P-sensitivity is essentially a reaction norm (i.e., trait change across phosphorus treatments). This, then, could very reasonably called plasticity as well. In fact, in many cases, there is more grounding to call this plasticity than the COV and centroid differences. This introduces a crucial area of potential misunderstanding in this manuscript that, indeed, underlies the entire work (even the title!).
Pg 10, line 28-30: I don't think the inclusion of PC analyses in this paper adds significantly. In fact, it's quite redundant with the MANOVA results. Visual inspection of a PC plot is enough to give you some idea that groups (here, species) differ in the variables that you're interested in. But, I would like to see more rigorous statistical investigation of the questions at play. Visual inspection of the PC plot, for example shows separation along PC1 (size variables) for D. magna and along PC2 (reproductive variables) for pulex/obstusa. But what of statistical significance? That's why you've done the MANOVA.
The other thing that the PC analyses give you (that the MANOVA doesn't) is the ability to calculate centroids for groups. I have serious concerns, however, about using centroid separation as a measure of plasticity. Centroids, much like point estimates, don't capture the variance within groups. For example, two groups might have separated centroids (so, by your methods, you'd conclude plasticity), but may, in fact, overlap quite heavily in the (e.g.,) minimum convex polygon that surrounds each group in PC space (due to shifting of the density of points).
Pg 11, line 36: delete the parenthetical '(i.e., were more constrained)'. Make sure that when you talk about there being less variance elsewhere in the paper that you don't jump to the conclusion that phenotypes are more constrained. 'Constrained' represents a very specific biological statement about the limits of plasticity that may or may not apply in this case.
Pg 13, line 12: Just to point this out… regarding a previous comment re: constraints, here, the discussion of constraints is merited (that starts with 'Allometric constraints…') since it's discussion/speculation and appropriate language (e.g., 'may') is used.
Pg 14, Line 25-28: The distinction between sensitivity and trait variation are hard to understand throughout this paper and thus need to be better defined. On one hand, if sensitivity is measured as changes in traits from low-to high-P environments (as the equations in the methods define it), then one would expect that, by definition, sensitivity is trait variation (across environments), at least at the clonal level. If the author means to say that sensitivity (at the clonal level) is one thing and trait variation (at the species level) is another, then they should be careful to make this distinction (and remind the reader of this distinction) throughout the MS. Again, since this distinction seems central to the message of the MS, much more careful effort towards both defining and quantifying these must be taken.
Pg 5, Lines 31-34: Alternatively, mean and variance is often positively correlated in biological data. Part of this is a 'floor effect', for example. I would suggest discussing the alternative hypothesis (that mean and variance are positively correlated) here.
Pg 9, Line 26: change 'was run' to 'was conducted' - also change this in future sentences, only because 'was run' sounds a bit awkward.

Pg 10, Line 54: Please make explicit the direction of effect here - larger species have greater degrees of variation?
Pg 11, Lines 36-38: I suggest changing this heading so that it reads ' […] and species identity explained greater trait variation than genotype', or something along those lines.
Review form: Reviewer 3
Is the manuscript scientifically sound in its present form? Yes
Recommendation?
Accept with minor revision (please list in comments)
Comments to the Author(s)
I have a few minor comments that should be addressed for more transparency.

Introduction: P5 L16 "therefore clonal variation may significantly add to genetic contributions". The meaning of this sentence is cryptic, can you try to reformulate?

Methods: P6 L40 I think one word is missing in the sentence "finnish pond from a ephemeral with dessication".

"incidental ambient lighting": I suppose not only the stock culture but also the experiment were in these conditions. When was the experiment conducted?

Please provide more details about spectrophotometer measurements: wavelength, and perhaps a reference to the literature on this already established procedure?

Please provide details about the experimental animals. Especially in smaller species, a single clutch contains less than 20 neonates, so to reach the indicated number clutches must have been pooled. Were all experimental animals born on the same day from different mothers, or were they all from the same mother but born on different days? I am aware this is a logistical issue but also important when interpreting results, because it influences the observed variance.

P9: word missing in "mortality accounts for less than five percent…"

P13 (middle): replace "we hypothesize" with "I hypothesize".

P14: The superflea Tessier reference is incomplete, I suppose it was a formatting error.

There is no R code available in supp mat, please provide it.

Figure 2 is hard to read. Please consider using a white background, as grey on grey is not easy. Is RSOS having a limitation on color figures? If not I would consider using colors in this figure, because even if "only" 4 shades of grey aren't much, it is a bit hard. I also wish the symbols were larger, and I am wondering why the symbols for mean clutch size look different in shape?
03-Jul-2019
Dear Dr Hartnett,

On behalf of the Editor, I am pleased to inform you that your manuscript RSOS-191024 entitled "VARIATION IN LIFE-HISTORY TRAITS AMONG DAPHNIA AND ITS RELATIONSHIP TO SPECIES-LEVEL RESPONSES TO PHOSPHORUS LIMITATION" has been accepted for publication in Royal Society Open Science subject to minor revision in accordance with the referee suggestions. Please find the referees' comments at the end of this email.
The reviewers and Subject Editor have recommended publication, but also suggest some minor revisions to your manuscript. Therefore, I invite you to respond to the comments and revise your manuscript.
• Ethics statement
If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork.

• Data accessibility
It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible, of where other relevant research materials such as statistical tools, protocols, software etc. can be accessed. If the data have been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that have been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list.

If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to Dryad, please use the following link: http://datadryad.org/submit?journalID=RSOS&manu=RSOS-191024

• Competing interests
Please declare any financial or non-financial competing interests, or state that you have no competing interests.

• Authors' contributions
All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of authors should meet all of the following criteria: 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published.
All contributors who do not meet all of these criteria should be included in the acknowledgements.
We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.
• Acknowledgements
Please acknowledge anyone who contributed to the study but did not meet the authorship criteria.

• Funding statement
Please list the source of funding for each author.
Please note that we cannot publish your manuscript without these end statements included. We have included a screenshot example of the end statements for reference. If you feel that a given heading is not relevant to your paper, please nevertheless include the heading and explicitly state that it is not relevant to your work.
Because the schedule for publication is very tight, it is a condition of publication that you submit the revised version of your manuscript before 12-Jul-2019. Please note that the revision deadline will expire at 00.00am on this date. If you do not think you will be able to meet this date please let me know immediately.
To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions". Under "Actions," click on "Create a Revision." You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript and upload a new version through your Author Centre.
When submitting your revised manuscript, you will be able to respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 -File Upload". You can use this to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the referees.
When uploading your revised files please make sure that you have:
1) A text file of the manuscript (tex, txt, rtf, docx or doc), references, tables (including captions) and figure captions. Do not upload a PDF as your "Main Document".
2) A separate electronic file of each figure (EPS or print-quality PDF preferred (either format should be produced directly from original creation package), or original software format).
3) Included a 100 word media summary of your paper when requested at submission. Please ensure you have entered correct contact details (email, institution and telephone) in your user account.
4) Included the raw data to support the claims made in your paper. You can either include your data as electronic supplementary material or upload to a repository and include the relevant doi within your manuscript.
5) All supplementary materials accompanying an accepted article will be treated as in their final form. Note that the Royal Society will neither edit nor typeset supplementary material and it will be hosted as provided. Please ensure that the supplementary material includes the paper details where possible (authors, article title, journal name).
Supplementary files will be published alongside the paper on the journal website and posted on the online figshare repository (https://figshare.com). The heading and legend provided for each supplementary file during the submission process will be used to create the figshare page, so please ensure these are accurate and informative so that your files can be found in searches. Files on figshare will be made available approximately one week before the accompanying article so that the supplementary material can be attributed a unique DOI.
Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch.

The manuscript was re-assessed by two reviewers and both generally agree that the author has adequately revised the manuscript. They provide some additional feedback that should help to improve the manuscript. I would particularly like to echo two reviewer comments, one pertaining to the clarification of why a negative correlation between mean and variance is expected, and one pertaining to the removal of the gray background in the figures. In the context of the latter, I should add that color figures are free, and using color may help to make figure 2 more clear.
Reviewer comments to Author:

Reviewer: 1
Comments to the Author(s)
The authors present a revised version of their manuscript in which they explore interspecific variation in life history traits in response to nutritional environments. They have done well to eliminate the PC analyses from the manuscript and focus instead on MANOVA analyses, which strengthen the overall statistical approach. The findings are generally expected, but interesting. The overall comparative approach is a strength.
Most of my critiques at this point are minor (see below). I would, however, like to see some effort spent in clarifying the expectation that mean and variance in responses should be inversely correlated. Indeed, my expectation for many biological traits is the opposite: that mean and variance in responses are generally positively correlated. This ends up bearing somewhat heavily on the manuscript, since this prediction is introduced in the introduction and its implications are discussed in the discussion.
Line-by-line edits
Pg 5, Line 3: delete 'large'
Pg 5, Lines 31-34: Alternatively, mean and variance are often positively correlated in biological data. Part of this is a 'floor effect', for example. I would suggest discussing the alternative hypothesis (that mean and variance are positively correlated) here.
Pg 9, Line 26: change 'was run' to 'was conducted' - also change this in future sentences, only because 'was run' sounds a bit awkward.
Pg 10, Line 54: Please make explicit the direction of effect here - do larger species have greater degrees of variation?
Pg 11, Lines 36-38: I suggest changing this heading so that it reads '[…] and species identity explained greater trait variation than genotype', or something along those lines.
Pg 11, Line 43: Change 'imposes' to 'is imposed'
Pg 14, Line 12: Missing date for reference

Reviewer: 2
Comments to the Author(s)
I have a few minor comments that should be addressed for more transparency.

Introduction: P5 L16 "therefore clonal variation may significantly add to genetic contributions". The meaning of this sentence is cryptic, can you try to reformulate?

Methods: P6 L40 I think one word is missing in the sentence "finnish pond from a ephemeral with dessication".
"Incidental ambient lighting": I suppose not only the stock culture but also the experiment were kept in these conditions. When was the experiment conducted? Please provide more details about the spectrophotometer measurements: wavelength, and perhaps a reference to the literature on this already established procedure? Please provide details about the experimental animals. Especially in smaller species, a single clutch contains fewer than 20 neonates, so to reach the indicated number clutches must have been pooled. Were all experimental animals born on the same day from different mothers, or were they all from the same mother but born on different days? I am aware this is a logistical issue, but it is also important when interpreting results, because it influences the observed variance.

P9: word missing in "mortality accounts for less than five percent…"
P13 (middle): replace "we hypothesize" with "I hypothesize"
P14: The superflea Tessier reference is incomplete; I suppose it was a formatting error.
There is no R code available in the supplementary material, please provide it.
Figure 2 is hard to read. Please consider using a white background, as grey on grey is not easy. Does RSOS have a limitation on color figures? If not, I would consider using colors in this figure, because even if "only" 4 shades of grey aren't much, it is a bit hard. I also wish the symbols were larger, and I am wondering why the symbols for mean clutch size look different in shape?
Author's Response to Decision Letter for RSOS-191024

You can expect to receive a proof of your article in the near future. Please contact the editorial office (openscience_proofs@royalsociety.org and openscience@royalsociety.org) to let us know if you are likely to be away from e-mail contact. Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication.
Royal Society Open Science operates under a continuous publication model (http://bit.ly/cpFAQ). Your article will be published straight into the next open issue and this will be the final version of the paper. As such, it can be cited immediately by other researchers. As the issue version of your paper will be the only version to be published I would advise you to check your proofs thoroughly as changes cannot be made once the paper is published.

Reviewer: 2
Comments to the Author(s)
This study explores life-history trait variation within/among species and proposes to examine the effects of genetics/environment, body size, and plasticity/sensitivity on daphnid trait expression. Overall, the experimental methods seem to be mostly solid, and the importance of variation in reproductive traits with nutrient limitation is a key finding that has been relatively overlooked in the stoichiometric literature. This study could make an important scientific contribution; however, I found myself getting lost at certain points and questioning the rationale and interpretation behind some of the predictions and results. Therefore, I think that the manuscript might benefit by addressing some major and minor points.
Major Points

Structure
1. The study questions are clearly laid out at the end of the introduction, which I appreciate. But I believe that they can be better set up in the introduction and discussed in the later parts of the manuscript. There are introductory paragraphs about food quality, clones and plasticity, which set up 2 of the 3 study questions, but nothing about body size in the introduction. Please consider adding a body size paragraph to justify the rationale for why organisms of different body size might be expected to grow differently. Also, while you do talk about plasticity you don't say how it might be related to sensitivity until the final intro paragraph, which left me thoroughly confused. You hint at a mechanism in the discussion, but it would be easier for the reader if you lay this out in the plasticity paragraph.
2. I might've missed it, but I only found one mention of the first study aim in the results and not really much discussion of it in the discussion section. This might make for an interesting discussion paragraph, along with talking about the interactions between genotype and the environment. These are also tested but not thoroughly discussed in the paper.
Stats
3. There was some death in these experiments, but as survival data was not presented, I don't have a feel for how this might've influenced your results. I understand if survival wasn't part of your story, but please report the proportion of individuals that died for each species so that we may assess the robustness of the results. Similarly, please explain the rationale for the 5 day survival cutoff reported on Page 7 Line 25. It seems to me that there would still be a significant difference between body sizes of day 7 and day 20 animals, which could affect your variance estimates.
4. As the PCA is central to your story, the reader could use some extra information to help interpret it. Please state whether your variables were centered and scaled so that we can be certain that individual variables didn't have a greater influence on the ordination. Also, you report in the MANOVA calculation that certain variables were skewed. How was this handled in the PCA ordination? Finally, please report the correlations between each variable and axis so that we can confirm that each PC axis was related to "growth" and reproduction as you say.
5. I'm still really confused about how the COV-sensitivity analysis was conducted. The COVs seem to be calculated with diet treatments for each species and for each individual trait, whereas in Appendix A the sensitivity analysis is calculated on multivariate data for each species. My question is, if you are including both food treatment and trait variation (individual on the x-axis and multivariate on the y) on both axes, can we actually consider these two measurements to be independent? To me it seems self-evident that univariate and multivariate trait variation should be positively correlated. Please provide further description and justification for this analysis.
Results Interpretation
6. Since growth wasn't calculated using the common metric mass-specific growth rate, I'm not sure that growth is the most accurate way to describe this axis. Specifically, since you don't take into account different starting masses, there's no way to directly compare growth rates across species. SAM is also generally considered to be a reproductively related trait, as it is assumed to be under selection from size-selective predators and is inherently related to reproduction. I would be more comfortable if you referred to this axis as relating to body size.
7. I found myself questioning some of the results interpretation in the discussion section.
A) In the first two paragraphs, you talk about the buffering effects of clonal variation and how this might help species persistence in different or variable environments. However, phenotypic variation among clones was the weakest of any variable in your study and, as you say, these clones weren't from the same populations, so it's not clear to me how genotypic variation that does not lead to phenotypic differences among clones might be adaptive in natural environments.
B) In the 3rd paragraph, I'm not sure that your results show that larger-bodied daphnia are more affected by food quality than smaller-bodied daphnia. Table 3 shows that this pattern is trait specific, Figure 2 shows that D. obtusa appears to be more variable than D. magna, and Figure 4 seems to show that obtusa is the most variable of the 3 species.
Minor Points
P3 L31: It is debatable whether P is the ultimate limiting nutrient in freshwater systems. This supposition is not supported by your reference to Sterner 2008 and certainly not by the meta-analyses of Elser et al. 2007 or any of the multitude of studies examining N & P co-limitation conducted in the last decade or so. It certainly is an important limiting element, but not necessarily the most important.
P3 L30: It's true that daphnids could be more susceptible to P-limitation, but as most environments are experiencing P-loading rather than P reductions, it is unclear to me how they might be used as an indicator organism. Wouldn't it make more sense to use an organism sensitive to excess P?
P4 L12: "Clonal variation can be considered as important as species identity" Please explain what this importance is referring to.
P5 L5: Are these predictions here? Do they all refer to the final study question 3? If not, you might consider moving them up with their corresponding question or spelling out some of these connections specifically in respect to question 3 earlier in the paragraph, because this information could be confusing out of context.
P6 L3: Another caveat that you might want to add is that these animals have been adapted to lab conditions for long periods of time, which might influence the magnitude of their plastic responses compared to animals in the wild.
P6 L47: So the total amount fed to these animals would be 0.5 mg C per day? Can you show that this level of food is not limiting to the larger magna and pulex species, given that there were also likely neonates feeding in these tubes later on in the experiment? Could this account for some of the trait variance in these larger body sized animals?
P7 L38-45: These sentences might be better off in the results section.
P9 L12: Do you have quantitative statistics to support this? I only see descriptive statistics for individual traits in the tables and figures.
P9 L38: This shift might be moved to the discussion where it can be discussed further.
P12 L34: It's not clear how these phylogenetic differences could've been related to plasticity. The work by Seidendorf 2010 doesn't support this, but if you believe that this might be the case, would it be worth including it as a covariate in your analyses?
P12 L51: You might consider changing this to "partially matches the prediction". There might be an individual magna and pulex clone that is more sensitive, but on average obtusa seems to be the most sensitive species.
General comments to the editor and reviewers:
I very much appreciated the care and thought put into the reviews of this manuscript. Both reviewers had major concerns over the robustness of my analysis and therefore the conclusions/interpretations that I drew from the study. I have therefore completely redone my analyses and taken special care not to base any conclusions/discussion points on qualitative assessment alone. I have relegated the PC analysis, used for visual interpretation, to the supplemental materials, and focused on the MANOVA for interpretation as suggested by reviewer 1. In addition, I ran two-sample t-tests to quantitatively test the claims I made on daphniid size and its effects on responsiveness and variation (previously termed sensitivity and plasticity; see next paragraph for changes in terms). I also ran a correlation analysis to determine quantitatively the association between species-level responses and intraspecific variation. Because some of these new analyses changed my results, the discussion has been substantially changed.
In addition, another major concern was the way that I had (mis)defined plasticity and sensitivity. I have clarified my manuscript to investigate intraspecific trait variation (COVs) and species-level responses (mean trait in HP minus mean trait in LP; once defined as sensitivity when it could be plasticity) to remove any misunderstanding of what I am trying to address. This has been reflected throughout the manuscript, and should adequately remove this major concern of confounding sensitivity and plasticity.
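The two quantities as redefined here can be made concrete with a short sketch (Python rather than the manuscript's R; the trait values and treatment labels below are hypothetical, chosen only to illustrate the calculations):

```python
import statistics

# Hypothetical clutch-size measurements for one species under
# high-P (HP) and low-P (LP) food treatments
hp = [12.0, 14.0, 13.0, 15.0]
lp = [8.0, 7.0, 9.0, 8.5]

def cov(values):
    """Intraspecific trait variation: coefficient of variation
    (sample s.d. divided by the mean) within one treatment."""
    return statistics.stdev(values) / statistics.mean(values)

def species_level_response(hp_values, lp_values):
    """Species-level response: mean trait in HP minus mean trait in LP."""
    return statistics.mean(hp_values) - statistics.mean(lp_values)

print(species_level_response(hp, lp))  # 5.375
print(round(cov(hp), 3))               # 0.096
```

Because the COV is unitless, it allows variation to be compared across traits measured on different scales, which is what makes the species-level correlation between responses and COVs feasible.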
Other major revisions:
I revised the introduction to focus less on the stress aspect of the environmental treatment and more on the responses Daphnia exhibit under differing food quality, as well as ensuring that I adequately discuss each of the study's aims.
Effect size calculations, and an additional two-way ANOVA, were moved to the supplementary materials: they did not add anything to the main message of the manuscript after re-doing the other analyses, and the results of the ANOVA were not significant.
Minor revisions:
I have revised the manuscript with grammatical suggestions by both reviewers. See line-by-line comments/revisions below for more information.
Reviewer 1
Comments to the Author(s)
General comments: This project investigates the clonal- and species-level variation in life history traits in response to changing nutrient levels. While the methods are sound, the statistical analyses and interpretations are fraught. First, some major conclusions rest on simply visual interpretation of PC plots, rather than rigorous statistical tests (though MANOVAs were used in places). Second, definitions of plasticity (in terms of the methods used to quantify plasticity) miss the mark. Using centroid values from PC plots to quantify plasticity (while ignoring the variance within groups in PC space) is likely to misrepresent the degree of plasticity. Furthermore (and perhaps most importantly), there seems to be a confounding of 'sensitivity' and 'plasticity' throughout the manuscript (see multiple comments below). I suggest a very thorough consideration of how to calculate plasticity (e.g., reaction norm approach), and a more careful conveyance of the precise definitions of (adaptive or not) plasticity and sensitivity in the MS.
Please see general comments above for a description of how these major concerns were addressed.

Pg 7, lines 3-4: Please expand on the 'confounding issue' (do you mean to say that the confounding issue is due to inbreeding?) with clones MA1 and MA2.
Rephrased to clarify that yes, I did mean that inbreeding could be a potential confounding factor for the amount of variation/response seen.

Pg 7, line 8: Please report the densities in the housing containers. About how many animals were kept in the 900mL jars and 60mL jars?
I rephrased to indicate that maternal lines were raised with only one individual per 60ml container. The best information I have for the housing containers is the minimum and maximum number of adults per jar, so I added an approximate range of densities based on that.

Pg 7, line 21: Please provide the methods (could just be a sentence) for determining the amount of algae that were provided to the maternal lines. Also, some of this information is redundant with the next paragraph - please rectify.
I added a sentence to indicate that we used a spectrophotometer to correlate chlorophyll to carbon. I can understand the confusion over the redundancy: I took adults from stock cultures and raised them individually in 60ml jars, and these adults produced the experimental animals that I took measurements for. I have done my best to clarify that process.

Pg 7, line 49-54: Please provide more detail on how the LoP and HiP treatments were prepared (e.g., more detail than just the C:P ratios).
Added as suggested.

Pg 8, line 46: Change '…to look at the significance of' to 'examine the effects of'.
Edited as suggested.

Pg 8, line 51: Did you use non-parametric statistics to look at these outcome variables then? Without a statistical test for these variables or a satisfactory explanation for the omission of these tests, these analyses are incomplete.
I had not. I included the variables that were not parametric and checked the post-hoc residuals to make sure that multivariate normality was still met.

Pg 9, line 5: 'Visual inspection of a stem and leaf plot' is not sufficient detail for how outliers were identified. What are the criteria for the stem and leaf plots used to determine outliers? Usually, it's that outliers are two SDs from the mean value.
I changed the criterion for removing outliers to +/- 2 s.d. as suggested.

Pg 9, line 37: Change 'within' to 'in'.
Edited as suggested.

Pg 9, line 25: Change 'looked at' to 'examined' or something similar. Check throughout the manuscript that this sort of casual language is corrected (e.g., line 29: 'In order to look at variance, we looked at…').
Edited as suggested.

Pg 10, line 5-8: What you call P-sensitivity is essentially a reaction norm (i.e., trait change across phosphorus treatments). This, then, could very reasonably be called plasticity as well. In fact, in many cases, there is more grounding to call this plasticity than the COV and centroid differences. This introduces a crucial area of potential misunderstanding in this manuscript that, indeed, underlies the entire work (even the title!).
See general comments to the editor. I changed the approach of my analysis to address this comment.

Pg 10, line 28-30: I don't think the inclusion of PC analyses in this paper adds significantly. In fact, it's quite redundant with the MANOVA results. Visual inspection of a PC plot is enough to give you some idea that groups (here, species) differ in the variables that you're interested in. But, I would like to see more rigorous statistical investigation of the questions at play. Visual inspection of the PC plot, for example, shows separation along PC1 (size variables) for D. magna and along PC2 (reproductive variables) for pulex/obtusa. But what of statistical significance? That's why you've done the MANOVA.
The other thing that the PC analyses give you (that the MANOVA doesn't) is the ability to calculate centroids for groups. I have serious concerns, however, about using centroid separation as a measure of plasticity. Centroids, much like point estimates, don't capture the variance within groups. For example, two groups might have separated centroids (so, by your methods, you'd conclude plasticity), but may, in fact, overlap quite heavily in the (e.g.) minimum convex polygon that surrounds each group in PC space (due to shifting of the density of points).
See general comments to the editor. I changed the approach of my analysis and interpretation to address this comment.

Pg 11, line 36: delete the parenthetical '(i.e., were more constrained)'. Make sure that when you talk about there being less variance elsewhere in the paper that you don't jump to the conclusion that phenotypes are more constrained. 'Constrained' represents a very specific biological statement about the limits of plasticity that may or may not apply in this case.
Edited as suggested.

Pg 13, line 12: Just to point this out… regarding a previous comment re: constraints, here, the discussion of constraints is merited (that starts with 'Allometric constraints…') since it's discussion/speculation and appropriate language (e.g., 'may') is used.
A welcome clarification, but no edit indicated here.

Pg 14, Line 25-28: The distinction between sensitivity and trait variation is hard to understand throughout this paper and thus needs to be better defined. On one hand, if sensitivity is measured as changes in traits from low- to high-P environments (as the equations in the methods define it), then one would expect that, by definition, sensitivity is trait variation (across environments), at least at the clonal level.
If the author means to say that sensitivity (at the clonal level) is one thing and trait variation (at the species level) is another, then they should be careful to make this distinction (and remind the reader of this distinction) throughout the MS. Again, since this distinction seems central to the message of the MS, much more careful effort towards both defining and quantifying these must be taken.
See general comments to the editor. I agree that I may not fully understand the distinction between these terms. Having a reader restate my definitions really showed me how unclear they were. I admit that I could not understand how the reviewer was interpreting my terms. I have taken much more care to think about how to define and quantify these calculations. I have decided to reframe the manuscript's terms altogether to avoid misunderstanding. I have classified changes in traits from low to high as species-level responses and made sure my trait variation was calculated at the species level, giving the amount of variation within each species (intraspecific variation) and did not make any claims to compare variation within or between clones.
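The two-s.d. outlier criterion adopted in response to the 'Pg 9, line 5' comment above amounts to the following (a stdlib Python sketch rather than the manuscript's R; the measurements are invented for illustration):

```python
import statistics

def drop_outliers(values, k=2.0):
    """Keep only observations within k sample s.d. of the mean."""
    m, s = statistics.mean(values), statistics.stdev(values)
    return [x for x in values if abs(x - m) <= k * s]

# Hypothetical body sizes (mm) with one implausible measurement
sizes = [2.1, 2.2, 2.0, 2.3, 2.1, 5.9]
print(drop_outliers(sizes))  # [2.1, 2.2, 2.0, 2.3, 2.1]
```

Note that this is a single pass: removing an outlier changes the mean and s.d., so in practice the screen may be iterated or replaced with a robust criterion (e.g., median-based).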
Reviewer 2

This study explores life-history trait variation within/among species and proposes to examine the effects of genetics/environment, body size, and plasticity/sensitivity on daphnid trait expression. Overall, the experimental methods seem to be mostly solid, and the importance of variation in reproductive traits with nutrient limitation is a key finding that has been relatively overlooked in the stoichiometric literature. This study could make an important scientific contribution; however, I found myself getting lost at certain points and questioning the rationale and interpretation behind some of the predictions and results. Therefore, I think that the manuscript might benefit by addressing some major and minor points.
Major Points

Structure
1. The study questions are clearly laid out at the end of the introduction, which I appreciate. But I believe that they can be better set up in the introduction and discussed in the later parts of the manuscript. There are introductory paragraphs about food quality, clones and plasticity, which set up 2 of the 3 study questions, but nothing about body size in the introduction. Please consider adding a body size paragraph to justify the rationale for why organisms of different body size might be expected to grow differently. Also, while you do talk about plasticity you don't say how it might be related to sensitivity until the final intro paragraph, which left me thoroughly confused. You hint at a mechanism in the discussion, but it would be easier for the reader if you lay this out in the plasticity paragraph.
A paragraph was added to explain the rationale behind why smaller-bodied Daphnia are expected to respond differently than large-bodied Daphnia.
2. I might've missed it, but I only found one mention of the first study aim in the results and not really much discussion of it in the discussion section. This might make for an interesting discussion paragraph along with talking about the interactions between genotype and the environment. These are also tested but not thoroughly discussed in the paper.
The introduction was re-structured in order to provide more background on the first study aim. In addition, the discussion was expanded to spend more time on the results between genetic and environmental contributions to life-history traits.
Stats
3. There was some death in these experiments, but as survival data was not presented, I don't have a feel for how this might've influenced your results. I understand if survival wasn't part of your story, but please report the proportion of individuals that died for each species so that we may assess the robustness of the results. Similarly, please explain the rationale for the 5 day survival cutoff reported on Page 7 Line 25. It seems to me that there would still be a significant difference between body sizes of day 7 and day 20 animals, which could affect your variance estimates.
If the animal did not make it to 5 days, it was considered an unexpected death in the study and was removed, because it would bias the amount of size information available compared to reproductive data (it did not reproduce because it was dead, not because of food stress). Survivorship information has been added to the manuscript for transparency. While I do not feel that survivorship data needs to be reported directly, as deaths accounted for less than 5% of the study animals, I did include a chi-square contingency analysis to ensure that survivorship was independent of species identity and treatment.
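The logic of such a chi-square contingency analysis can be sketched as follows (stdlib Python rather than the manuscript's R; the counts below are invented for illustration and are not the study's data):

```python
# Hypothetical survived/died counts per species (not the study's data)
observed = {
    "D. magna":  {"survived": 58, "died": 2},
    "D. pulex":  {"survived": 57, "died": 3},
    "D. obtusa": {"survived": 59, "died": 1},
}

def chi2_statistic(table):
    """Pearson chi-square statistic for a species x outcome table.
    Under independence it follows a chi-square distribution with
    (rows - 1) * (cols - 1) degrees of freedom."""
    species = list(table)
    outcomes = ["survived", "died"]
    row_tot = {s: sum(table[s][o] for o in outcomes) for s in species}
    col_tot = {o: sum(table[s][o] for s in species) for o in outcomes}
    n = sum(row_tot.values())
    chi2 = 0.0
    for s in species:
        for o in outcomes:
            expected = row_tot[s] * col_tot[o] / n
            chi2 += (table[s][o] - expected) ** 2 / expected
    return chi2

print(round(chi2_statistic(observed), 3))  # 1.034
```

With expected death counts this low, the usual chi-square approximation is strained, and an exact test (e.g., Fisher's) may be preferable; the sketch only illustrates the independence test being reported.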
4. As the PCA is central to your story, the reader could use some extra information to help interpret it. Please state whether your variables were centered and scaled so that we can be certain that individual variables didn't have a greater influence on the ordination. Also, you report in the MANOVA calculation that certain variables were skewed. How was this handled in the PCA ordination? Finally, please report the correlations between each variable and axis so that we can confirm that each PC axis was related to "growth" and reproduction as you say.

See general comments to the editor. I changed the approach of my analysis, as another reviewer pointed out that visual inspection of the PC plots is qualitative assessment rather than quantitative. I have moved the plots to the supplements for visual purposes only; for visual interpretation only, it is acceptable to ignore assumptions of normality for variables (Tabachnick and Fidell). I did add the loadings for the PC analysis.
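The centering and scaling the reviewer asks about is simply z-scoring each trait column before the ordination, so that a trait measured on a large scale (e.g., clutch size) cannot dominate one measured on a small scale (e.g., body length). A minimal sketch, with hypothetical values:

```python
import statistics

def zscore(column):
    """Center a trait column to mean 0 and scale to unit sample s.d."""
    m, s = statistics.mean(column), statistics.stdev(column)
    return [(x - m) / s for x in column]

# Hypothetical trait columns on very different measurement scales
size_mm = [2.1, 2.4, 2.2, 2.6]       # body length
clutch  = [12.0, 30.0, 18.0, 44.0]   # clutch size

for col in (size_mm, clutch):
    z = zscore(col)
    # After scaling, every column contributes on the same footing
    assert abs(statistics.mean(z)) < 1e-9
    assert abs(statistics.stdev(z) - 1.0) < 1e-9
```

When a PCA is run on columns standardized this way (i.e., on the correlation matrix), the loadings the reviewer requests correspond directly to the associations between each trait and each PC axis.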
5. I'm still really confused about how the COV-sensitivity analysis was conducted. The COVs seem to be calculated with diet treatments for each species and for each individual trait, whereas the sensitivity analysis is calculated on multivariate data for each species. My question is, if you are including both food treatment and trait variation (individual on the x-axis and multivariate on the y) on both axes, can we actually consider these two measurements to be independent? To me it seems self-evident that univariate and multivariate trait variation should be positively correlated. Please provide further description and justification for this analysis.
I corrected this analysis to ensure that COV and sensitivity (now labeled "species-level responses" in the manuscript) were both calculated for each species and for each trait. Therefore the correlation comparison is multivariate overall and post-hoc univariate correlations are also reported.
Results Interpretation
6. Since growth wasn't calculated using the common metric mass-specific growth rate, I'm not sure that growth is the most accurate way to describe this axis. Specifically, since you don't take into account different starting masses, there's no way to directly compare growth rates across species. SAM is also generally considered to be a reproductively related trait, as it is assumed to be under selection from size-selective predators and is inherently related to reproduction. I would be more comfortable if you referred to this axis as relating to body size.
I agree with this criticism. I have changed the axis label to PC and the interpretation from growth to size.
7. I found myself questioning some of the results interpretation in the discussion section.
A) In the first two paragraphs, you talk about the buffering effects of clonal variation and how this might help species persistence in different or variable environments. However, phenotypic variation among clones was the weakest of any variable in your study and, as you say, these clones weren't from the same populations, so it's not clear to me how genotypic variation that does not lead to phenotypic differences among clones might be adaptive in natural environments.
B) In the 3rd paragraph, I'm not sure that your results show that larger-bodied daphnia are more affected by food quality than smaller-bodied daphnia. Table 3 shows that this pattern is trait specific, Figure 2 shows that D. obtusa appears to be more variable than D. magna, and Figure 4 seems to show that obtusa is the most variable of the 3 species.
The reviewer is correct on both fronts. Clonal variation and size did not affect responses to food stress as I had claimed. Re-doing the analyses with quantitative evidence made that clear, and I have removed those claims and significantly changed the discussion so as not to make interpretations that are unsupported by the data.
Minor Points
P3 L31: It is debatable whether P is the ultimate limiting nutrient in freshwater systems. This supposition is not supported by your reference to Sterner 2008 and certainly not by the meta-analyses of Elser et al. 2007 or any of the multitude of studies examining N & P co-limitation conducted in the last decade or so. It certainly is an important limiting element, but not necessarily the most important.
I have edited this line to reflect that P is one of the important limiting nutrients.
P3 L30: It's true that daphnids could be more susceptible to P-limitation, but as most environments are experiencing P-loading rather than P reductions it is unclear to me how they might be used as an indicator organism. Wouldn't it make more sense to use an organism sensitive to excess P?
Yes, I have removed the information about daphnia being an indicator organism as they indicate for other stressors, not P-limitation.
P4 L12: "Clonal variation can be considered as important as species identity" Please explain what this importance is referring to.
I have clarified this part of the introduction to give more information about why clonal variation may contribute a significant amount to variation as well (or potentially instead of) species variation.
P5 L5: Are these predictions here? Do they all refer to the final study question 3? If not, you might consider moving them up with their corresponding question or spelling out some of these connections specifically in respect to question 3 earlier in the paragraph bc this information could be confusing out of context.
Those were predictions. I have moved this information so that they would not be out of context.
P6 L3: Another caveat that you might want to add is that these animals have been adapted to lab conditions for long periods of time which might influence the magnitude of their plastic responses compared to animals in the wild.
Added as suggested.
P6 L47: So the total amount fed to these animals would be 0.5 mg C day? Can you show that this level of food is not limiting to the larger magna and pulex species given that there were also likely neonates feeding in these tubes later on in the experiment? Could this account for some of the trait variance in these larger body sized animals?
The total amount fed would be 1.0 mg C, as they were individually housed in jars. I have included a citation to show that this amount of food should not be limiting in quantity (Sterner and Robinson 1994).
P7 L38-45: These sentences might be better off in the results section.
Edited as suggested (see supplemental material)
P9 L12: Do you have quantitative statistics to support this? I only see descriptive statistics for individual traits in the tables and figures.
I have added more statistical tests in order to provide more backing for conclusions. See general comments to the editor.
P9 L38: This shift might be moved to the discussion where it can be discussed further.
This information was moved to the supplementary material.
P12 L34: It's not clear how these phylogenetic differences could've been related to plasticity. The work by Seidendorf 2010 doesn't support this, but if you believe that this might be the case, would it be worth including it as a covariate in your analyses?
Thank you for this comment so that I can correct this mistake. I have removed any claims to potential phylogenetic influences on plasticity. The quantitative analysis that I have done does not indicate that there is support for this.
P12 L51: You might consider changing this to "partially matches the prediction". There might be an individual magna and pulex clone that is more sensitive, but on average obtusa seems to be the most sensitive species.
This has also been removed due to subsequent analyses.
P13 L19-21: It would be really helpful to develop this idea in the introduction.
Added as suggested. There is more information in the introduction about a tradeoff between trait variation and responses to environmental change.
P13 L38: Hood and Sterner 2014 didn't measure clutch frequency or size.
Thank you for the correction. I had referenced the paper for its treatment of flexibility in resource use. The claim for differences in clutch size and frequency was from my own data. That has been edited (actually removed altogether).
P13 L43: I really like this idea, so if you get a chance you could expand upon it in future versions of this manuscript or if not in future work.
Thank you. I am not able to draw any further conclusions based on what I have done, but I will keep it in mind for future work.
Tables and Figures: It might help if you rearrange these to put large-and small-bodied animals together.
The legends for Figure 3 could also do with a bit of reformatting.
Background
With increasing environmental stress, many suites of organismal traits are expected to experience strong selection, with life-history traits potentially being among the most impacted (Bradshaw and Holzapfel 2008, Reed et al. 2011). Life-history traits have a direct link to fitness, as an organism's success is built upon an ability to grow to reproductive age, the timing of reproduction events, as well as cumulative reproductive output. Therefore, life-history theory has established direct associations between a population's environment and life-history trait evolution (Stearns 1992, Agrawal et al. 2013).
Food stress has been shown to create a variety of life-history trait effects in organisms, including longer developmental time, decreases in body size, and lowered fecundity (Ellers & Van Alphen 1997; Nylin & Gotthard 1998). Food stress can be experimentally manipulated by decreasing a limiting resource. In many freshwater lentic systems, phosphorus (P) is one of the limiting nutrients (Wetzel 1983, Sterner 2008), with anthropogenic inputs of P in aquatic systems forcing rapid change in zooplankton populations (Frisch et al. 2014). P-limitation (i.e., low food quality) has effects on Daphnia life-history traits such as growth, reproduction, and senescence (e.g., Dudycha 2003, Jeyasingh and Weider 2005). Members of the genus Daphnia (Cladocera: Anomopoda) have one of the highest P contents amongst zooplankton, so they are predicted to be more responsive to P-limitation compared to other zooplankton taxa (Sterner and Schulz 1998); however, P content alone is not sufficient to predict species-level responses in growth (DeMott and Pape 2004).
Daphnia have a cyclically parthenogenetic life-cycle, which includes bouts of asexual reproduction under good growing conditions, and sexual reproduction during times of food stress, changes in photoperiod, and crowding cues (Kleiven et al. 1992). This creates the unique advantage of establishing multiple clonal lineages in a population, leading to the maintenance of high genetic variation in many natural Daphnia populations (Innes et al. 1986; Spitze et al. 1991; Weider et al. 1999). In addition, researchers have found strong clonal responses to predator cues (Spitze 1992, Weider and Pijanowska 1993), nutrient limitation (Lynch 1989, Weider et al. 2004), habitat selection (de Meester 1994), and toxins (Baird et al. 1990, Walls 1997). Intraspecific genetic variation has been shown to have effects on population-level processes like colonization (Crutsinger et al. 2008, Crawford and Whitney 2010), coexistence (Lankau et al. 2009), and predation (Post et al. 2008). For Daphnia, clonal diversity is better maintained under P-limitation (Weider et al. 2008); therefore, clonal variation may contribute significantly to overall genetic variation among Daphnia species.
Under the hypothesis of environmental buffering, influential life-history traits should have minimal trait variation, as fitness would be heavily dependent on minimal change within important vital rate constraints (Pfister 1998). Over multiple life-history tables, Pfister (1998) showed that traits with the highest variation had the least response to temporal change. This demonstrates a potential tradeoff between trait variation and species-level responses to environmental change, where traits that are more vital to fitness would show less trait variation.
However, life-history traits could vary in the amount of variation seen under food stress; P-limitation has been shown to differentially affect somatic growth rates and reproductive investment, such as egg size and abortion rates, within Daphnia due to differing nutritional requirements (Hood and Sterner 2014). Increased intraspecific variation could allow for flexibility at the species level, and potentially mediate environmental effects on life-history traits. Alternatively, trait variation and species-level responses to environmental change could be positively correlated, as traits with low variation may have a lower range of measurable trait values in species-level responses (i.e., a floor effect), and it has been documented that more intraspecific trait variation also leads to higher variation in intraspecific responses across environments (e.g., Siefert).

Another mechanism to mitigate environmental effects is an organism's body size. Daphnia's physiology allows them to alter filtering rates under different food-quality environments (Sahuquillo et al. 2007), although there are phylogenetic constraints. Jeyasingh (2007) suggested that evolution should favor more responsive physiologies in smaller organisms in order to counter frequent shifts in nutrient limitation. Smaller aquatic organisms use phosphorus at higher rates (Johannes 1964), which means that they would experience P-limitation more frequently. This could lead to smaller-bodied species being more plastic than larger-bodied species.
This present study aims to address the following: 1) How much do the environment versus genetic (taxonomic) identity contribute to life-history trait variation? Do clonal lines contribute more to trait variation than species identity? 2) Do smaller species exhibit more plastic physiologies? And 3) what is the potential relationship between intraspecific variation and species-level responses to food stress?
Study organism
Daphnia is a cosmopolitan genus (Sarma et al. 2005, Lampert 2011). Three clonal lineages from each of four different Daphnia species (D. magna, D. mendotae, D. obtusa, and D. pulex) were collected from a variety of laboratory stocks (see Table 1). These clonal lineages span the three subgenera of Daphnia, range across North America and Europe, and come from various aquatic habitats (Table 1). D. magna clones used in this study originated from South Dakota, Finland, and Germany, from a spectrum of habitats. The South Dakotan clone (MA3) came from a permanent lake, a shallow (< 2 m) prairie pot-hole (Weider et al. 2004). MA2 and MA1 are both inbred lines from an original genetic cross between a Finnish clone and a German clone. MA2 was inbred for three generations and MA1 was inbred for one generation (Dieter Ebert, Switzerland, personal communication). The environments of the parental clones include a Finnish clone from an ephemeral pond with desiccation in spring/summer and freezing during autumn/winter, and a clone from a German semi-permanent pond, with freezing in the winter (Roulin et al. 2013). In addition, D. pulex and D. obtusa clones came from temporary ponds in the U.S. Midwest, while D. mendotae came from permanent lakes in the U.S. Midwest. One D. mendotae clone (ME3) experienced high levels of mortality early in the experiment and was subsequently dropped from the analyses. These contrasting environments have created very different evolutionary trajectories for these species. However, a couple of caveats should be noted: these lineages have been adapted to lab conditions and may differ in variation and response from animals collected directly from the field, and a potential confounding issue due to inbreeding could affect the variation and response of two of the three D. magna clones (MA1 and MA2).
Experimental design
Clonal lineages were maintained as separate populations in 900 mL jars, with regular and plentiful feeding using the chemostatically-cultured green alga, Scenedesmus acutus, at a constant 20°C in COMBO media (Kilham et al. 1998). These stock cultures maintained an approximate density of 12-30 adults L-1. A small amount of cetyl alcohol (~10 mg) was added to act as a surfactant to prevent animals from being trapped at the air-water interface. Stock cultures and experimental animals received equal amounts of 24-hour incidental ambient lighting. Maternal lines for experimental animals were raised individually in 60 mL jars with 50 mL of COMBO and fed daily with 1 mg C L-1 of S. acutus that was grown in nutrient-rich conditions (i.e., C:P ~100:1). The volume of S. acutus added each day to deliver 1 mg C L-1 was calculated by measuring the absorbance of chlorophyll at wavelengths 660 nm and 740 nm using a spectrophotometer (Spectronic®20 Genesis, Madison, WI) and using a calibrated chlorophyll-carbon curve to convert the value to an amount of carbon (Sterner 1993). Females were monitored every 24 hours, and first and second clutches were removed. Experimental animals (N = 20 per clone) were taken from third or later clutches of these individually raised maternal lines within 24 hours to reduce maternal effects (Ebert 1991). Experimental animals needed to be pooled from multiple mothers in order to reach the appropriate sample size.
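The daily feeding calculation described above can be sketched as follows. The authors worked from a calibrated chlorophyll-carbon curve (Sterner 1993) whose actual coefficients are not given here, so the slope and intercept below are hypothetical placeholders; the structure (turbidity-corrected absorbance, linear calibration, then a volume calculation targeting 1 mg C L-1 in a 50 mL jar) follows the text.

```python
# Sketch of the absorbance-to-carbon feeding calculation.
# The calibration slope/intercept are hypothetical placeholders; the
# real chlorophyll-carbon curve is calibrated per Sterner (1993).

def algal_carbon_mg_per_l(a660, a740, slope=2500.0, intercept=0.0):
    """Estimate algal carbon (mg C/L) from chlorophyll absorbance.

    A740 is subtracted from A660 as a turbidity correction before
    applying the calibrated linear chlorophyll-carbon relationship.
    """
    corrected = a660 - a740
    return slope * corrected + intercept

def feed_volume_ml(stock_mg_c_per_l, jar_volume_ml=50.0, target_mg_c_per_l=1.0):
    """Volume of algal stock needed so a 50 mL jar receives 1 mg C/L."""
    target_mg_c = target_mg_c_per_l * jar_volume_ml / 1000.0  # mg C per jar
    return target_mg_c / stock_mg_c_per_l * 1000.0            # mL of stock

stock = algal_carbon_mg_per_l(0.040, 0.005)   # 87.5 mg C/L with these coefficients
print(round(feed_volume_ml(stock), 3))        # -> 0.571 (mL of stock)
```

With these placeholder values the daily feed volume lands near 0.6 mL, consistent with the roughly 0.5-1.2 mL range the author reports elsewhere in this response letter.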
Starting on August 17th, 2013, an initial body-length measurement (i.e., start length) of each experimental animal was taken using a MOTICAM 2300 digital camera and software system (Motic®, S-05165) mounted on an Olympus BX51 compound microscope. Length measurements were taken from the top of the eyespot to the base of the core body, right above the top of the tail-spine. The tail-spine is known to be morphologically plastic depending on environmental conditions, and was not measured with core length to avoid confounding the length measurements. Then, experimental animals were placed individually into 60 mL glass jars with 50 mL of COMBO at 20°C, and were divided into two environmental conditions, high and low phosphorus (N = 10 per clonal line for each environmental treatment). Nutrient media were prepared using a modification of COMBO (Kilham et al. 1998), with additions of sodium nitrate and potassium phosphate. Animals under a high-phosphorus (HiP) feeding regime were fed daily with 1 mg C L-1 of S. acutus that was grown in nutrient-rich conditions (i.e., 85.01 mg L-1 sodium nitrate and 8.71 mg L-1 potassium phosphate; C:P ~100:1). A low-phosphorus (LoP) feeding regime consisted of daily 1 mg C L-1 feeding of S. acutus grown in nutrient-poor conditions (i.e., 42.5 mg L-1 sodium nitrate and 0.87 mg L-1 potassium phosphate; C:P ~750:1). These food conditions should not be limiting in quantity (Sterner and Robinson 1994). Experimental animals were transferred every two days to fresh jars in order to avoid carbon (detrital) accumulation that could differentially affect resource availability based on inter-/intra-specific variation in filtering rates. Experimental animals were monitored daily, and size was measured again at maturation, when first egg development was seen (i.e., age at maturation and length at maturation).
Clutch size was recorded daily, as well as images for neonate body-lengths (N ≤ 5 neonates per clutch in order to reduce small-clutch bias). Number of clutches, clutch size, and mean neonate length (termed mean clutch length) were calculated from these daily recorded measurements. Dead experimental animals were measured on the day of death. The experiment ran for 28 days, and at the end of this period, experimental animals were measured (i.e., end length), as described above.
Statistical Analyses
All analyses were conducted using R (R Core Team, 2018) unless specified otherwise in the methods. Required packages for this analysis included: dplyr (Wickham et al. 2019), ggplot2 (Wickham 2016), ggpubr (Kassambara 2018), and Hmisc (Harrell et al. 2019). Individuals (replicates) were dropped from the analysis if they died within 5 days of the start of the experiment to prevent bias from missing data in reproductive measurements. Data were screened for outliers, and cases were removed if they were ± 2 standard deviations from a mean value. Mortality accounted for less than five percent of the study, and a chi-square contingency analysis was conducted between species and treatment to ensure that survivorship was independent of species identity or treatment. A MANOVA was conducted to measure the effects of genetic (species, clonal) and environmental (phosphorus treatment) contributions on life-history traits (length at maturation, age at maturation, end length, mean clutch length, mean clutch size, and number of clutches). The start length of individuals was used as a covariate, as was maternal line; maternal effects are common among daphniid studies (Lampert 1993), so maternal line was used as a potential confounding variable. Multivariate normality was checked for the dataset using post-hoc residuals from the MANOVA, and the proportion of variance explained for each main effect was calculated from Wilks' lambda as η² = 1 − Λ^(1/s), where Λ is Wilks' lambda for the main effect and s = min(6, df_effect) (Tabachnick & Fidell 2013). The MANOVA was conducted using SPSS (Version 20, IBM).
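The multivariate effect size used here, η² = 1 − Λ^(1/s) with s = min(number of dependent variables, df_effect) (Tabachnick & Fidell 2013), is simple to compute once SPSS reports Wilks' Λ. A minimal sketch (in Python rather than the authors' R/SPSS workflow):

```python
# Multivariate effect size from Wilks' lambda (Tabachnick & Fidell 2013):
# eta^2 = 1 - Lambda**(1/s), where s = min(number of DVs, df_effect).

def partial_eta_squared(wilks_lambda, df_effect, n_dvs=6):
    """Proportion of multivariate variance explained by one main effect."""
    s = min(n_dvs, df_effect)
    return 1.0 - wilks_lambda ** (1.0 / s)

# With df_effect = 1 (a two-level treatment), s = 1, so eta^2 reduces to
# 1 - Lambda; e.g. Lambda = 0.15 gives eta^2 = 0.85.
print(round(partial_eta_squared(0.15, df_effect=1), 2))  # -> 0.85
```

Note that a food-treatment Λ of 0.15 would reproduce the η² = 0.85 reported in the Results, since s = 1 for that single-df effect.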
In order to identify differences in species-level responses to food stress, multiple linear regressions were conducted for each trait, with food treatment and species as factors. The significance level was adjusted using the Bonferroni correction (alpha = 0.008). Intraspecific variation was measured for each species by coefficients of variation (COVs), calculated as COV = s/γ, where s = standard deviation and γ = mean of the particular life-history trait for that species. Thus, COVs measure the amount of variation within the species for each trait.
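The coefficient-of-variation calculation is straightforward; a sketch follows, using sample values that are purely illustrative. Whether the authors used the sample (n−1) or population (n) standard deviation is not stated, so the sample form below is an assumption.

```python
import statistics

def coefficient_of_variation(values):
    """COV = s / mean: intraspecific variation in one trait for one species.

    Uses the sample standard deviation (n-1 denominator); the authors do
    not state which denominator they used.
    """
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical clutch sizes measured across individuals of one species:
clutch_sizes = [8, 10, 12, 10, 15]
print(round(coefficient_of_variation(clutch_sizes), 3))  # -> 0.241
```

Because the COV is dimensionless, it lets traits measured in different units (lengths, counts, days) be compared on a single variation scale, which is what Figure 2 relies on.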
Species-level responses to food stress were quantified by using the differences in log-transformed values between phosphorus treatments (Seidendorf et al. 2010).
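A sketch of that response metric is below. The text does not specify the log base or whether an absolute difference was taken, so natural logs and the absolute value are my assumptions; the input means are hypothetical.

```python
import math

def species_level_response(mean_hip, mean_lop):
    """Difference in log-transformed trait means between P treatments
    (after Seidendorf et al. 2010). Log base and use of the absolute
    value are assumptions; larger values = stronger response."""
    return abs(math.log(mean_hip) - math.log(mean_lop))

# Hypothetical trait means under high- and low-P food:
print(round(species_level_response(12.0, 8.0), 3))  # -> 0.405
```

Working on the log scale makes the metric a relative (fold-change) response, so traits with very different absolute magnitudes remain comparable.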
Results
Mortality in this study was independent of species identity and food treatment (chi-square contingency, df = 3, χ2 = 0.1361, p = 0.9872). Descriptively, under low-phosphorus (LoP) conditions, all clonal lines of all species showed smaller sizes both at first reproduction and at the end of the experiment. Similarly, under LoP, clones exhibited delayed onset of reproduction and had smaller clutch sizes. The number of clutches varied per clone, as did their mean clutch length (see S1). Visual inspection of trait-variation mapping using a PCA indicated that species showed more change in reproductive traits than in size traits between food treatments (see S2).
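The contingency test above can be reproduced with a plain Pearson chi-square on a 4 × 2 table (four species by two outcomes), which yields the reported df = 3. The counts below are hypothetical, not the study's data; `scipy.stats.chi2_contingency` would give the same statistic, but a stdlib sketch keeps the arithmetic visible.

```python
def chi_square_contingency(table):
    """Pearson chi-square test of independence for an r x c count table.
    Returns (chi2 statistic, degrees of freedom)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = sum((obs - r * c / total) ** 2 / (r * c / total)
               for row, r in zip(table, row_totals)
               for obs, c in zip(row, col_totals))
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypothetical (died, survived) counts for the four species:
counts = [[1, 19], [1, 19], [2, 18], [1, 19]]
chi2, df = chi_square_contingency(counts)
print(df)  # -> 3, matching the df reported in the text
```

A tiny χ2 relative to df = 3, as in the reported 0.1361, indicates survivorship is effectively independent of species identity.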
The MANOVA showed that both genetic factors (species, clone) and the environmental factor (food treatment) significantly affected life-history traits (Table 2). The length of individuals at the start, the blocking effect of time, and maternal effects did not significantly affect life-history traits (Table 2). The proportion of variance explained can be used as a proxy for effect size (Tabachnick & Fidell 2013). Using the η² statistic as a relative effect size metric, food quality environment explained the greatest proportion of variance (η² = 0.85, df_effect = 1), while genetic factors explained less; species identity (η² = 0.60, df_effect = 3) explained more than the clonal level (η² = 0.23, df_effect = 7) (Table 2). Therefore, the environment had a stronger main effect than either genetic component, with species identity having a stronger effect than clonal identity.
The size of the species did not show significant differences in species-level responses (two-sample t-test, t = 0.75, df = 22, p > 0.1). However, they did show significant differences in the amount of intraspecific variation among traits, with larger species showing more variation than smaller species (two-sample t-test, t = 8.41, df = 22, p < 0.01).
All life-history traits were impacted by both species identity and the food quality level (length at maturation, adjusted R2 = 0.921; age at maturation, adjusted R2 = 0.670; end length, adjusted R2 = 0.752; mean clutch length, adjusted R2 = 0.146; mean clutch size, adjusted R2 = 0.708; number of clutches, adjusted R2 = 0.464; all adjusted p < 0.01). Effects of food quality were detected across all species for some reproductive traits (length at maturation and clutch size).
Discussion
The environment contributed most to life-history trait variation, and species identity explained greater trait variation than genotype

A keystone of evolutionary theory is that trait variation can be separated into genetic and environmental components; when a selection pressure is imposed on a trait, the environmental variance will dominate over genetic variance due to reduction in genetic variance during the selection process (Spitze 1991). The food quality available in the environment had the largest effect on life-history traits, accounting for 85% of the variance (Table 2). Food quality affected current reproduction in all species via clutch size, while it only had significant effects on mean clutch length, which could have implications for the fitness of future generations, in the larger-bodied D. magna and D. pulex. Food quality is known to play an important role in growth rate; many studies have used the growth rate hypothesis to predict relationships between growth and P-limited diets or environments (e.g., Jeyasingh 2007, Hood and Sterner 2014). P-limitation in particular has been well studied in Daphnia; P-intensive processes, including organismal growth, rRNA levels, and therefore protein production, require high availability of phosphorus (Weider et al. 2004).
Depending on intra-and inter-specific pressures, evolution will favor more or less specialized individuals within a generalist population (Araújo et al. 2011). Within these populations, clonal variation could lead to a wider environmental range in which the population can maintain fitness under environmental change (Nunney 2015). Clonal lineages did not have a large contribution to life-history traits, as predicted. This potentially could be due to the relatively small number of clones per species. In addition, these experimental clonal assemblages are somewhat of an artificial construct and raised in the lab, which may not reflect field populations.
It is intriguing that species identity clustered strongly in terms of composite life-history traits (i.e., PC axes) under high-quality (high P) food conditions, but there was no strong species-specific clustering among three of the species under poor nutrient conditions (S2).
Trait variation in body size is constrained while there is flexibility in reproductive traits
The food quality levels from this study did not support the notion that smaller-bodied Daphnia respond more to changes in food quality than larger-bodied Daphnia. However, larger Daphnia did show more variation in life-history traits. Body size traits were associated with lower amounts of intraspecific variation and species-level responses to food quality, while other reproductive traits were associated with higher amounts of variation and responses to food quality ( Figure 2).
This indicates that Daphnia have size-based phenotypes that are somewhat genetically constrained. Allometric constraints may be one possible explanation for conserved morphological traits. It has been shown that regardless of body-size, daphniids all follow a similar pattern of resource allocation to growth and reproduction under different levels of food (carbon) quantity (Dudycha & Lynch 2005).
Body size has been implicated in determining sensitivity to food quality, with larger individuals being affected by low food quality more so than smaller individuals (Peter and Lampert 1989). However, there was no evidence in this study that the body size of the species played a role in species-level responses or the amount of intraspecific variation. Previous work has indicated that Daphnia may face a competitive tradeoff between maximizing growth when food quality is high and minimizing negative effects of poor food quality (Seidendorf et al. 2010, Hood and Sterner 2014). However, stoichiometric flexibility may allow for changing the C:P ratio of new growth under P-limited conditions in order to avoid consequences of P-limited diets. D. mendotae is relatively inflexible compared to the other species in this study in its response to changes in the C:P of its diet, showing strong homeostasis between diets (Hood and Sterner 2014). This matches this study's results, where D. mendotae had overall less intraspecific variation across traits, and smaller species-level responses (Figure 2).
I hypothesized a tradeoff between intraspecific trait variation and species-level responses to food stress. However, in this present study, results were contrary to expectations: species showed more change between food quality environments with increasing trait variation. This is most likely due to reproductive traits being very P-intensive and very responsive to changes in food quality. In Daphnia, the relationships between P and reproductive traits have not been as well studied as somatic growth rate (SGR), a well-known proxy of fitness (Lampert & Trubetskova 1996). However, shifts toward lower reproduction have been seen for low levels of nitrogen and phosphorus (Sterner et al. 1992) and for low food concentrations (Lynch 1989); under toxin-enriched environments, however, Daphnia have been shown to maintain reproductive output (Forbes et al. 2016). Plasticity in reproductive traits is generally considered less important in changing population growth rates based on previous modeling of growth and reproductive schedules (Pfister 1998). These results suggest that environmental buffering from P-limitation has potentially canalized the highly vital growth traits over time, while leaving reproductive rates variable and responsive to environmental change. However, another potential explanation could be that a tradeoff would not be expected in this case because these lineages have been raised in the lab for many generations with bountiful resources, thereby creating a so-called "superflea" (Tessier #; Reznick et al. 2000). This study should be repeated using individuals taken from natural habitats.
Conclusions
This present study provides evidence that species identity is important in determining life-history traits, but this may not translate into size-structured populations due to variation in reproductive traits across environments that vary in overall food quality. In particular, the flexibility in reproductive traits may play an important role for population persistence in the face of environmental change. Phenotypic plasticity is the ability of an organism to change its phenotype in response to environmental change. Daphnia have shown a great capacity for phenotypic plasticity in predator avoidance (e.g., Spitze 1992; Weider & Pijanowska 1993), nutrient uptake/use efficiency (e.g., Lampert 1994), and other life-history traits (e.g., Lampert 1993). This study, in which a changing environment may select for more responsiveness in reproductive traits, suggests that more consideration should be given to the evolution of phenotypic plasticity and population persistence through life-history traits (Chevin et al. 2010). Gathering information about the potential for phenotypically plastic traits via trait variation has been, and will continue to be, a goal toward predicting a species' ability to respond to continued environmental stress. However, there are costs and limits involved in maintaining plastic traits, including genetic and/or developmental constraints, competitive exclusion by a more optimal (and less plastic) trait during a stable period, or geographical limits (Whitlock 1996, Pigliucci 2005). Species that are flexible in their use of phosphorus may compensate for P-limitation by being more plastic in reproductive life-history traits.

Associate Editor Comments to Author (Dr Michael Tobler):
The manuscript was re-assessed by two reviewers and both generally agree that the author has adequately revised the manuscript. They provide some additional feedback that should help to improve the manuscript. I would particularly like to echo two reviewer comments, one pertaining to the clarification of why a negative correlation between mean and variance is expected, and one pertaining to the removal of the gray background in the figures. In the context of the latter, I should add that color figures are free, and using color may help to make Figure 2 more clear.

Reviewer comments to Author:
Reviewer: 1
Comments to the Author(s)
The authors present a revised version of their manuscript in which they explore interspecific variation in life history traits in response to nutritional environments. They have done well to eliminate the PC analyses from the manuscript and focus instead on MANOVA analyses, which strengthen the overall statistical approach. The findings are generally expected, but interesting. The overall comparative approach is a strength.
Most of my critiques at this point are minor (see below). I would, however, like to see some effort spent in clarifying the expectation that mean and variance in responses should be inversely correlated. Indeed, my expectation for many biological traits is the opposite: that mean and variance in responses are generally positively correlated. This ends up bearing somewhat heavily on the manuscript, since this prediction is introduced in the introduction and its implications are discussed in the discussion.
Thank you for reviewing this manuscript again, I really appreciate the time you put into this. Please see my general comments to the editor for how I addressed your specific comment on the correlation between species-level mean responses and intraspecific trait variation.
Line-by-line edits
Pg 5, Line 3: delete 'large'
Edited as suggested.
Appendix D RSOS-191024 Minor revisions
Pg 5, Lines 31-34: Alternatively, mean and variance is often positively correlated in biological data. Part of this is a 'floor effect', for example. I would suggest discussing the alternative hypothesis (that mean and variance are positively correlated) here.
Thank you for pointing out that I should have an alternative hypothesis, especially as I find evidence for an alternative. I was not able to find a reference that directly has both species-level responses and intraspecific trait variation in Daphnia, but I provide a reference that would reflect the positive correlation between intraspecific trait variation and shifts in mean trait values.
Pg 7, Line 26: what volume of algae (at 1mg C L^-1) was fed to the zooplankton?
The volume varied (between ~500-800 microliters for HiP and ~500-1,200 microliters for LoP) depending on the concentration of algae that day. The methods explaining how to calculate this volume give repeatability to the study. The volume was calculated each day using a spectrophotometric method that converted absorbance of chlorophyll to a concentration of carbon. I modified the methods to make this more clear.
Added "than" for clarification: "…less than five percent…"
Pg 9, Line 26: change 'was run' to 'was conducted'; also change this in future sentences, only because 'was run' sounds a bit awkward
Edited as suggested.
Pg 10, Line 54: Please make explicit the direction of effect herelarger species have greater degrees of variation?
Added as suggested: "with larger species showing more variation than smaller species"
Pg 11, Lines 36-38: I suggest changing this heading so that it reads '[…] and species identity explained greater trait variation than genotype', or something along those lines.
Edited as suggested.
Pg 11, Line 43: Change 'imposes' to 'is imposed'
Edited as suggested.
Pg 14, Line 12: Missing date for reference
Thank you for pointing this out. I was missing the correct citation in this reference. This has been updated.
Reviewer: 3
Comments to the Author(s)
I have a few minor comments that should be addressed for more transparency.
Introduction: P5 L16 "therefore clonal variation may significantly add to genetic contributions". The meaning of this sentence is cryptic, can you try to reformulate?
Edited as suggested.
Methods
P6 L40 I think one word is missing in the sentence "finnish pond from a ephemeral with dessication"
Edited as suggested (pond was added after ephemeral).
"incidental ambient lighting": I suppose not only the stock culture but also the experiment were in these conditions.
Added that experimental animals also received ambient lighting for clarification.
When was the experiment conducted?
The starting date for the experiment was added for clarity.
Please provide more details about spectrophotometer measurements: wavelength, and perhaps a reference to the literature on this already established procedure?
Added reference as suggested and also added spectrophotometer details and which wavelengths were measured.
Please provide details about the experimental animals. Especially in smaller species, a single clutch contains less than 20 neonates, so to reach the indicated number clutches must have been pooled. Were all experimental animals born on the same day from different mothers, or were they all from the same mother but born on different days? I am aware this is a logistical issue but also important when interpreting results, because it influences the observed variance.
Animals were pooled from multiple mothers, and this was accounted for in the MANOVA. I have added a line in the methods to specify that animals were pooled from different mothers.
P9, word missing: "mortality accounts for less than five percent…"
Edited as suggested.
P14: The superflea Tessier reference is incomplete; I suppose it was a formatting error.
Thank you for pointing this out. I was missing the correct citation in this reference. This has been updated.
There is no R code available in supp Mat, please provide it.
Thank you for pointing this out. Code has been added to the zip file in the datadryad submission associated with this manuscript (with updates to figure modification).
Figure 2 is hard to read. Please consider using a white background, as grey on grey is not easy. Does RSOS have a limitation on color figures? If not, I would consider using colors in this figure, because even if "only" 4 shades of grey aren't much, it is a bit hard. I also wish the symbols were larger, and I am wondering why the symbols for mean clutch size look different in shape?
Edited as suggested. Figure 2 now
Bowel Function in Acute Stroke Patients
Objective To investigate factors related to bowel function and colon motility in acute stroke patients. Method Fifty-one stroke patients (29 males, mean age 63.4±13.6 years, onset 13.4±4.8 days) were recruited and divided into two groups: constipation (n=25) and non-constipation (n=26) groups. We evaluated the amount of intake, voiding function, concomitant swallowing problems and colon transit time (CTT) using radio-opaque markers for the ascending, descending and rectosigmoid colons. The Adapted Patient Evaluation Conference System (APEC), Korean version of the Modified Barthel Index (K-MBI) and Motricity Index (MI) were evaluated. Results The constipation group showed significantly prolonged CTT of the ascending, descending and entire colon (p<0.05) and more severe swallowing problems (p=0.048). The APEC scale (2.65±1.44 vs 1.52±0.92, p=0.001), K-MBI scores (59.4±14.4 vs 28.0±24.3, p<0.001) and MI scores (69.1±22.3 vs 46.8±25.9, p=0.001) of the constipation group were significantly lower compared to the non-constipation group. Conclusion Our study demonstrated that bowel function in acute stroke patients was associated with functional status and swallowing function, indicating the need for intensive functional training in post-stroke constipation patients.
INTRODUCTION
The prevalence of post-stroke constipation in stroke patients is 30-60%. [1][2][3] It is difficult to assess constipation objectively, because constipation results in various degrees of subjective symptoms; therefore, few studies have evaluated it. Constipation not only leads to a low quality of life, but also interferes with rehabilitation treatment because of problems in bowel movement control. 4 Post-stroke constipation is caused by inactivity, lethargy, insufficient water or nutrition intake, depression, lack of exercise capabilities, cognitive impairment, reduced consciousness and drug intake. Depending on the changes in the central and peripheral nervous systems, transit time through the small and large intestines can be delayed, resulting in incomplete bowel movements. 5 In 1989, Wrenn et al. 6 reported that fecal impaction is often caused by taking placebo and inactivity in stroke patients with neurological impairments. In 2007, Bracci et al. 7 suggested that nitrate and anticoagulants trigger chronic constipation.
Our study aimed to investigate factors affecting bowel movement in acute stroke patients, and measure the correlation between bowel movement and functional recovery.
Participants
Patients who met the following criteria participated in the study: 1) suffered their first acute stroke within a month, 2) were admitted to Asan Medical Center, Department of Rehabilitation Medicine, from December 2008 to October 2009, 3) scored 24 points or more on the Korean version of the Mini-Mental State Examination (K-MMSE), indicating that the patient was able to report bowel dysfunction.
Patients with the following characteristics were not eligible for our study: 1) had abdominal surgery in the past, or diseases that could have decreased colonic motility, such as diabetes and hypothyroidism, 2) suffered from gastrointestinal tract disorders in the past, 3) had hernia, congenital large intestine or anal deformity, or colostomy. Patients who met at least two of the following standard Rome II criteria 8 were included in the constipation group: 1) fewer than three bowel movements a week, 2) straining during at least one of four bowel movements, 3) hard stools in at least one of four bowel movements, 4) incomplete bowel movement at least once, 5) a feeling of fullness in the rectum or anus during at least one of four bowel movements, 6) the need to induce bowel movement by hand at least one of four times.
Methods
Participants' records were used to assess age, sex, parts affected by hemiplegia, National Institutes of Health Stroke Scale (NIHSS), amount of food and water intake, urination volume, accompanying voiding dysfunction or swallowing dysfunction, Adapted Patient Evaluation Conference System (APECS), 9 Korean version of the Modified Barthel Index (K-MBI), Motricity Index (MI) 7 and drug intake. 10,11 The amount of food intake was recorded every 8 hours for 10 days by the patient or care-givers, and the average amount was then assessed. Patients taking drugs for urinary incontinence, urinary frequency and ischuria were considered as patients with voiding dysfunction. Patients with swallowing disorders were observed for presence of aspiration and/or penetration, using video fluoroscopy, and these patients' food intake was limited. The APECS scale was divided into 8 levels (0 to 7), and the responsible therapist and doctor assessed patient walking function using the average level (Table 1). MI was used to measure motor function following stroke, with 100 points referring to normal. In cases of hemiplegia, the average points for the upper and lower extremities of the paralyzed side were measured, while in cases of quadriparesis, the average points for all upper and lower extremities were measured. Colon transit time was measured using Konsyl Pharmaceuticals' SITZMARKS® radio-opaque markers, and the procedure by Metcalf et al. 12 was used. After the patients were admitted to Asan Medical Center, Department of Rehabilitation Medicine, they stopped taking bowel function enhancement drugs, as well as relaxants and enema. A week later, capsules containing 24 radio-opaque markers were given every morning at 9 am for three days. After four days, plain supine abdominal X-rays were taken. To analyze the X-rays, the pictures were divided into three segments: the ascending, descending and rectosigmoid colons.
The location of the descending colon was defined as the right side of the point where the line connecting the spinous processes and the line connecting the L5 and pelvis outlet cross. The location of the ascending colon was defined as the left side of the line connecting the L5 and the anterior superior iliac spine. The location of the rectosigmoid colon was defined as below the line connecting L5 and the pelvis outlet, and below the anterior superior iliac spine, and the number of radio-opaque markers in each segment was calculated. 13 The average transit time was calculated by multiplying the number of radio-opaque markers left in the colon by 1.0 (Fig. 1). 14 Both physical therapy and occupational therapy were performed for at least one hour every day, six times a week. After four weeks of rehabilitation, patients were checked for presence of constipation, and the K-MBI was re-evaluated to check for improvements in the activities of daily living function.
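The marker arithmetic above (24 markers ingested per day for three days, each retained marker contributing 1.0 hour of transit time) can be sketched as a small calculation. The per-segment counts below are hypothetical, not taken from any patient in the study.

```python
# Minimal sketch of the segmental colon transit time (CTT) estimate
# described above: each retained radio-opaque marker contributes
# 1.0 hour (24 h / 24 markers ingested per day, per Metcalf's method).

HOURS_PER_MARKER = 24 / 24  # 1.0 hour per retained marker

def segmental_ctt(marker_counts):
    """marker_counts: dict of segment -> markers counted on the day-4
    X-ray. Returns per-segment CTT (hours) plus the total."""
    ctt = {seg: n * HOURS_PER_MARKER for seg, n in marker_counts.items()}
    ctt["total"] = sum(n * HOURS_PER_MARKER for n in marker_counts.values())
    return ctt

# Hypothetical patient: counts per segment from one abdominal film
result = segmental_ctt({"ascending": 18, "descending": 19, "rectosigmoid": 12})
```

A higher marker count in a segment directly translates into a proportionally longer estimated transit time for that segment.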
Statistical analysis
SPSS version 12.0 (Chicago, IL, USA) was used for statistical analysis. Mann-Whitney U tests were used to compare walking function, activities of daily living function, motor function, level of food and water intake, and colon transit time between the constipation and non-constipation groups. Chi-square tests were used to assess the presence of constipation, urination disorder and swallowing dysfunction, as well as the effect of the drugs taken. After rehabilitation, Mann-Whitney U tests were also used to compare activities of daily living function in the constipation group and the group of patients with improved conditions.
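The two tests named above can be illustrated with SciPy; the study itself used SPSS, and all numbers below are synthetic (the 2x2 table loosely mirrors the dysphagia counts reported later, but the continuous scores are made up).

```python
# Illustrative re-creation of the two test types described above,
# using SciPy on synthetic data (the study used SPSS 12.0).
from scipy import stats

# Mann-Whitney U: compare a continuous measure (e.g., K-MBI scores)
# between hypothetical non-constipation and constipation groups.
group_a = [55, 60, 62, 58, 70, 65, 59]   # fabricated scores
group_b = [25, 30, 28, 35, 22, 31, 27]   # fabricated scores
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Chi-square: association between group membership and a categorical
# trait (e.g., swallowing dysfunction), arranged as a 2x2 table.
table = [[14, 11],   # constipation group: dysphagia yes / no
         [7, 19]]    # non-constipation group: dysphagia yes / no
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
```

The Mann-Whitney U test is rank-based, matching the non-parametric treatment of the skewed functional scores, and `chi2_contingency` handles the categorical comparisons.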
Participant characteristics
Fifty-one patients, 29 males and 22 females, with a mean age of 63.4±13.6 years participated in the study. The patients were divided into two groups: constipation (n=25) and non-constipation (n=26). There were no significant differences between the two groups in sex and onset time of stroke. In addition, there were no significant differences in the patients' National Institutes of Health Stroke Scale (NIHSS), parts affected by hemiplegia, and whether they received surgery (Table 2).
Effect of constipation on walking function, daily activities, and motor function
There was a significant difference in walking function when assessed using the APEC scale: the non-constipation group scored 2.65±1.44, while the constipation group scored 1.52±0.92 (p=0.001). The K-MBI score also showed that the non-constipation group scored 59.4±14.4, while the constipation group scored 28.0±24.3 (p<0.001) (Table 3). The MI score used to assess motor function showed a significant difference as well, in which the non-constipation group scored 69.1±22.3 points and the constipation group scored 46.8±25.9 (p=0.001) (Table 3).
Amount of food and water intake, and constipation
There was no significant difference in the amount of food consumed by the two groups; the non-constipation group consumed 2,106.9±459.9 cc (Table 3).

Fig. 1. The colon transit time (CTT) of the twenty-six patients was measured using radio-opaque markers for the ascending (aCTT), descending (dCTT), and rectosigmoid (rsCTT) colons, as well as for the entire colon (tCTT). After 4 days, the spinous processes and imaginary lines from the 5th lumbar vertebra to the left iliac crest and pelvic outlet served as landmarks.
Effect of voiding function on constipation
Among the participants, 8 patients had voiding dysfunction, 4 each in the constipation and non-constipation groups; thus, the presence of voiding dysfunction did not have a significant effect on constipation (Table 3). The average amount of urine produced in the two groups showed no significant difference: the non-constipation group produced 1,522.4±440.9 cc, while the constipation group produced 1,383.4±311.7 cc (Table 3).
Effect of swallowing dysfunction on constipation
Twenty-one patients had swallowing dysfunction: 14 of them were in the constipation group, and 7 in the non-constipation group (p=0.048) (Table 3).
Effect of drug intake on constipation
Use of the following drugs had no effect on the presence of constipation (Table 3): analgesics, antidepressants, antiepileptics, diuretics, antacids, anticholinergics, anticoagulants, nitrates, angiotensin-converting enzyme inhibitors and calcium channel blockers.
Colon transit time
Transit time for the ascending colon in the constipation group was 18.6±15.8 hours, versus 5.3±7.3 hours in the non-constipation group (p=0.032). Transit time for the descending colon in the constipation group was 19.3±13.7 hours, versus 5.9±5.5 hours in the non-constipation group (p=0.029). Transit time for the rectosigmoid colon was 12.3±11.5 hours in the constipation group and 10.9±10.3 hours in the non-constipation group, while the total transit times were 50.3±18.0 hours and 22.2±16.4 hours, respectively. The difference in colon transit time between the constipation and non-constipation groups was statistically significant, with the constipation group taking longer (p<0.05) (Fig. 2).
Effect of rehabilitation on improvement in constipation and capacity for day-to-day activities
After four weeks of rehabilitation at Asan Medical Center, Department of Rehabilitation Medicine, 13 out of 51 patients (25.5%) had constipation. Among the 25 patients in the constipation group, 12 patients' symptoms improved, and none of the patients from the non-constipation group developed constipation symptoms. The mean K-MBI score of patients with improved symptoms was 17.1±7.9, and the mean K-MBI score of the patients with constipation after four weeks of rehabilitation was 4.8±2.5. The increase in activities of daily living function was statistically significant in patients with improved symptoms (p=0.002).
DISCUSSION
Constipation is a common complication of stroke, and several studies have been conducted to investigate it. However, compared to voiding dysfunction, not enough research on constipation has been conducted in Korea. Stroke patients often suffer from bowel dysfunction. Bowel problems are used as an indicator of functional recovery limit, and are closely related to patients' and care-givers' quality of life. 15 In 2009, Su et al. 11 reported that constipation in acute stroke patients can be used as an indicator of symptoms to follow 12 weeks after stroke. Constipation in acute stroke patients, therefore, has clinical significance.
In 1975, Lehmann et al. 16 reported that compared to damage in the left cerebral hemisphere, damage to the right hemisphere caused slower recovery. However, in 1986, Jongbloed 17 suggested that the damaged part of the brain was irrelevant to the difference in functional recovery. In our study, no significant difference in functional recovery was observed according to the affected brain region. In addition, patients were checked for history of surgery, and for the presence of constipation according to the affected brain regions, but the results were insignificant. These findings suggest that lesion sites and the patient's history of surgery have no effect on the onset of constipation.
In 1982, Skilbeck et al. 18 reported that most patients' walking and bowel function improved within the first three months after stroke, and that walking problems were closely related to bowel functions. Our study also showed that walking function and capacity for day-to-day activities were better in the non-constipation group, and we could assume that this is because physical activities affect colon motility. 19 Min et al. 20 reported in 2000 that a sufficient amount of food and water intake increased the frequency of voiding. However, our study showed no significant relationship between the amount of food and water intake and the presence of constipation. A possible explanation is that because only patients admitted to Asan Medical Center participated in our study, all participants' meals were planned, and when patients were consuming insufficient amounts, appropriate measures were taken immediately. Also, voiding dysfunction and the amount of urine did not have significant effects on defecation function in either group, and this result coincides with what Min et al. 20 reported in 2000.
In 2007, Bracci et al. 7 reported that the use of antithrombotics and nitrate in stroke patients did not cause constipation. In 2009, Su et al. 11 reported that the onset of constipation among stroke patients using analgesics and diuretics was statistically significant. However, in this study, drug use did not cause constipation.
Although Nino-Murcia et al. 21 in 1990, and Lim et al. 22 in 2001, measured colon transit time in patients with spinal cord injury, no previous studies had assessed colon transit time in stroke patients. In a study of patients with neurogenic bowel after spinal cord injury, transit time was longer in the descending colon than the ascending colon, while in the present study of stroke patients, transit time was considerably longer in the ascending and descending colons compared to the rectosigmoid colon.
In 1999, Del Giudice et al. 23 reported that in cerebral palsy patients with prolonged colon transit time, most delay was seen in the ascending colon, with 52% of the delay observed in the ascending colon, 36% in the descending colon and 12% in the rectosigmoid colon. In 2004, Park et al. 24 showed that transit time was delayed in the proximal colon in cerebral palsy patients with constipation. This study also showed that patients in the constipation group experienced prolonged transit time in the proximal colon, which could have been caused by neurological problems following lesions.
Our study suggests that the onset of constipation in acute stroke patients and the decline in recovery of function are closely related, since the MBI scores improved significantly in the non-constipation group after four weeks of rehabilitation.
Our study did not include all stroke patients, because patients needed to swallow SITZMARKS® radio-opaque markers to determine colon transit time. Therefore, patients receiving tube feeding were not able to enroll in the study. In addition, only patients who scored 24 points or higher on the K-MMSE were able to report bowel dysfunction and participate in the study. As a result, our study did not fully explain the cause of constipation nor clearly determine the pattern of colon transit time in all stroke patients. The K-MBI scores significantly improved in the non-constipation group, and this could be closely related to the improvement in motor function. However, we did not assess walking function and the MI scores after four weeks of rehabilitation; therefore, whether the improvement of motor function enhanced the K-MBI scores remains unclear.
CONCLUSION
Fifty-one acute stroke patients admitted to Asan Medical Center, Department of Rehabilitation Medicine, participated in our study, and the conclusions were as follows: 1) Compared to the non-constipation group, the constipation group had poorer walking function and activities of daily living function, as well as weaker upper and lower limb muscle strength.
2) More patients in the constipation group suffered from dysphagia, compared to the non-constipation group.
3) Ascending and descending colon transit time was significantly longer in the constipation group. 4) A large number of patients in the constipation group no longer suffered from constipation after rehabilitation, and the group with improved symptoms had better activities of daily living function. When treating acute stroke patients, more attention is needed on bowel dysfunction, and in order to relieve bowel problems, well-planned, comprehensive rehabilitation programs are needed. Further treatment in the constipation group is necessary, since the patients in the group with improved symptoms also had better activities of daily living function.
A Research on Online Teaching Behavior of Chinese Local University Teachers Based on Cluster Analysis
COVID-19 boosted online teaching and yielded a significant amount of valuable data, yet utilizing it for education is a challenge. This study employed the K-means clustering method to analyze the online teaching behavior data of 1147 courses from a local university in East China. As a result, five types of courses with distinct teaching behaviors were identified: resource preparation (4.1%), online classroom interaction (3.6%), task evaluation (9.2%), active interaction (15.5%), and inactive interaction (67.6%). By examining the relationship between these course types and academic performance, the authors discovered no significant difference in the academic performance of students in the three course groups (i.e., resource preparation, online classroom interaction, and task evaluation) and students in the inactive interaction course group. However, there was a significant disparity in academic performance between students in active interaction courses and students in inactive interaction courses. These findings can assist teachers in planning online teaching activities more effectively and improving teaching outcomes.
INTRODUCTION
The onset of the COVID-19 pandemic in 2020 had far-reaching consequences on public health and the well-being of millions of individuals worldwide. Governments implemented various measures to mitigate the spread of the virus, leading to significant disruptions in various sectors, including education. In many countries, mandatory lockdowns rendered face-to-face instruction in educational institutions impossible, promoting distance learning as a practical substitute for traditional classroom education, particularly in universities.
In 2019, the Department of Higher Education of the Ministry of Education of the People's Republic of China approved the implementation of the Double Ten Thousand Plan, or the Gold Course Construction Plan, in all colleges and universities across the country. This initiative aimed to develop 10,000 national first-class courses and 10,000 provincial first-class courses, with 3,000 online "gold courses" and 7,000 blended "gold courses" (offline and online) (Ministry of Education of the People's Republic of China, 2019). The creation of the third batch of national first-class courses has already begun. The COVID-19 pandemic affected the regular opening of classrooms and traditional in-person teaching in colleges and universities. To address this, the Ministry of Education recommended that institutions make use of massive open online courses (MOOCs) and high-quality online course teaching resources at the provincial and school levels. With the aid of experimental resource platforms and various online course platforms at all levels, as well as on-campus online learning spaces, online learning and teaching had to be actively carried out to ensure teaching progress and quality during the epidemic prevention and control period. This helped achieve the objective of suspending classes without stopping teaching and learning. Additionally, the Ministry of Education of the People's Republic of China (2020) recommended 22 online course platforms that could support online teaching services in colleges and universities during the epidemic prevention and control period, such as Icourse Network, Wisdom Tree, and Superstar Learning. Driven by the Ministry of Education, online courses in higher education have rapidly developed. Different colleges and universities have established online courses with unique features on network platforms, leading to an increase in online curriculum teaching.
In April 2022, EDUCAUSE, the U.S. higher education informatization association, released the 2022 Horizon Report: Teaching and Learning Edition (Pelletier et al., 2022), which identifies hybrid and online learning, learning analytics, and big data as the future of higher education. According to Long and Siemens (2011), big data and its analytics have become the most significant factors influencing the future of higher education. The rapid development of online curriculum teaching in Chinese colleges and universities after the epidemic has also provided an opportunity for big data research on users' education on major online platforms. Statistics show that, in China's colleges and universities at all levels, the number of students has reached tens of millions, and online teaching on these platforms has left a vast amount of teaching data. In the network teaching environment, mining the big data of teachers' and students' teaching and learning on the platform will help to find the characteristics of teachers' and students' behaviors and their changing rules, providing theoretical support for education administrators in colleges and universities to improve their teaching decisions, optimize resource allocation, and change higher education teaching. The majority of extant studies on online teaching behavior are focused on students and their various online learning behaviors and activities. However, there has been limited research on teaching behavior from the perspective of instructors, specifically through the collection and analysis of substantial amounts of online teaching data (Zhang et al., 2021). In this study, the authors investigated the teaching behavior of teachers in a university in East China, analyzed the data of teachers' generative teaching on the online course teaching platform, and discussed the characteristics of teachers' online teaching behavior, in order to improve the teaching quality of higher education and promote the reform of the higher education model.
Teachers' Online Teaching Behaviors
Behavior refers to externally observable activities that are influenced by an individual's thoughts and intentions. Human behavior is shaped by a range of factors, including one's ideological beliefs, values, and living environment (Ding, 2007). When it comes to teaching, the term "teaching behavior" encompasses all actions teachers take to stimulate, maintain, and support their students' learning. They include activities such as providing guidance and services to students (Li, 2005). The advent of online education has produced a growing interest in how teachers' teaching behaviors operate in digital learning environments, as opposed to traditional classroom settings.
The Basic Dimensions of Teachers' Online Teaching Behavior
Teachers' behavior is a dynamic and complex phenomenon, and the areas of concern are extremely diverse. Merely describing and summarizing this behavior for research purposes is not sufficient. Therefore, it is necessary to divide teaching behavior into dimensions to refine the research direction, develop clear research ideas, and establish a logical order. The fundamental dimension of teachers' online teaching behavior serves as the perspective and focal point for studying teaching behavior. Scholars have divided teachers' online teaching behavior into different dimensions from different perspectives. Shi and Cui (1999) and Yao et al. (2022) divided online teaching behavior into three categories based on the teachers' roles: teacher behavior, teaching assistant behavior, and managerial behavior. Teaching behavior can be categorized as effective or ineffective, depending on the teaching effect (Yao et al., 2022). Additionally, teachers' online teaching behavior can be divided into three types, based on the level of support provided: autonomous support, cognitive support, and emotional support (Liu et al., 2017). Furthermore, teachers' teaching behavior in a network-assisted environment can be categorized into teaching preparation behavior, teaching interaction behavior, teaching evaluation and feedback behavior, and teaching reflection behavior, based on the teaching process (Hu & Wang, 2011). Ma (2020) categorized teachers' teaching behavior in a mixed teaching environment by considering the instructional design process. The resulting categories include designing and organizing teaching, direct guidance, promoting participation, feedback evaluation, providing resources, and emotional support.
In this study, the authors aimed to explore the division of teachers' online teaching behavior dimensions based on the teaching process and teachers' practical experience.
The Effectiveness of Teachers' Online Teaching Behavior
The term "effectiveness" is used to describe the degree to which a desired outcome is achieved.In the context of teaching, the effectiveness of teaching behavior refers to teachers' ability to exhibit reasonable and flexible behavior during teaching activities, resulting in higher-than-expected teaching outcomes within a reasonable time frame (Lian, 2000).Early research on teaching behavior focused primarily on classroom teaching.However, with the advancement of the Internet and information technology, research on teaching behavior analysis has rapidly grown, particularly in two areas.First, there is a focus on the study of effective teaching itself, including the definition of effective teaching behavior (Wei & Ren, 2021), measuring and evaluating effective teaching behavior (Yuan et al., 2021), and analyzing the current state of effective teaching.Second, there is a focus on the relationship between teachers' teaching behavior and factors affecting the learning experience from the perspective of psychology, such as satisfaction, participation, and learning outcomes.For instance, Liu et al. 
(2021) conducted a survey on college teachers' online teaching behavior and their students' satisfaction with online learning. The study found that satisfaction with teaching evaluation and guidance behavior was relatively low, particularly regarding artistic skills demonstration, practical experiments, paper writing training, and other operational skills. In another study, Li and Li (2022) investigated the relationship between the online teaching behavior of college teachers and students' participation in learning. The research revealed that teachers' provision of good resources, designing and organizing teaching, and promoting participation positively impacted students' learning participation. Additionally, Chen (2021) analyzed data from a provincial normal college's small private online courses (SPOC) platform and found that pre-class homework, discussion, and mutual evaluation homework were factors that positively influenced test scores. Conversely, teachers' correction of homework and response to the forum for help had no significant impact on student performance.
Analysis Method of Teachers' Online Teaching Behavior
In the early days, research on the analysis of teachers' online teaching behavior mostly relied on questionnaire surveys or interviews to collect data and conduct statistical analysis. However, in recent years, the emergence of "Internet+Education" and the development of educational data mining technology have expanded the statistical analysis methods for data, and provided new insights for researching teachers' online teaching behavior in a blended environment. Compared to traditional classroom teaching, online teaching data comprise behavior logs at the operational level, as well as interactive texts reflecting teachers' cognitive behavior. For these large-scale sample data, researchers can use data mining to reveal valuable information behind behavior data, such as behavior patterns, laws, and habits. Ultimately, this promotes teachers' analysis, understanding, and optimization of the teaching process and environment. Depending on the educational problems studied, researchers need to use different data mining methods, such as regression analysis, association rules, decision trees, and cluster analysis (Zhang et al., 2021). For instance, when analyzing the factors that influence academic performance, regression analysis can be used to infer or verify the possible performance of dependent variables (grades) based on independent variables, such as teaching behavior elements in the dataset (Zong et al., 2016). When identifying teachers' online teaching behavior tendencies, the behavior characteristics of teachers' online activities are analyzed using technologies such as association rules and decision trees (Liu, 2018).
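The regression use case described above (inferring grades from teaching-behavior features) can be sketched with ordinary least squares. The feature names and all numbers below are fabricated for illustration; they are not the study's data.

```python
# Hedged sketch of the regression-analysis use case described above:
# predicting a course outcome (mean grade) from teaching-behavior
# features. All values are fabricated for illustration.
import numpy as np

# Rows = courses; columns = [resources uploaded, interactions, tasks graded]
X = np.array([[10, 5, 2],
              [40, 30, 12],
              [25, 18, 8],
              [5, 2, 1],
              [60, 45, 20]], dtype=float)
y = np.array([62.0, 80.0, 74.0, 58.0, 88.0])  # hypothetical mean grades

# Ordinary least squares with an intercept column
X1 = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Goodness of fit: R^2 compares model error against a mean-only baseline
pred = X1 @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

In practice the feature matrix would be assembled from platform logs, and significance tests on the coefficients would show which behaviors actually relate to performance.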
Cluster Analysis
Cluster analysis plays an important role in the classification of subject behavior. It is a multivariate statistical method that quantitatively classifies research objects based on their characteristics. The purpose of cluster analysis is to group target objects with similar measurement indicators as closely as possible while keeping the attribute values of objects in different groups as dissimilar as possible (Gao, 2019). This approach achieves the common goal of "like attracts like" and "birds of a feather flock together." Scholars have applied this technique to identify patterns and characteristics of learners' behavior and their relationship with academic performance. By analyzing data on learners' characteristics, online learning behavior, and learning paths, cluster analysis allows researchers to divide online learners into several categories based on similar labels.
In studying behavior patterns and characteristics, researchers use cluster analysis to identify rules or patterns of learners' online learning behavior. Araya et al. (2014) employed this technique to classify learners' online math game behaviors and identify the behavioral rules of learner groups during the collaboration process. Khalil and Ebner (2017) used cluster analysis to describe MOOC learners' characteristics (e.g., video browsing, course participation, resource acquisition, motivation, and topics of interest) and provide a comprehensive interpretation.
Scholars' second area of study supported by cluster analysis explores the relationship between online learning behavior and learning effect, the behavior characteristics that affect learning effect, and the prediction of learning effect from behavior characteristics. For instance, Vaessen et al. (2014) used cluster analysis and regression algorithms to explore how different types of help-seeking strategies of learners in intelligent learning systems predict academic performance.
A few scholars have employed cluster analysis to study teachers' online teaching behavior. For instance, Tao (2019) collected and analyzed the usage behaviors of 64 college teachers stored in a learning management system, using a two-stage cluster analysis to identify four types of teachers: low-usage, administration-oriented, communication-oriented, and assessment-oriented. Similarly, Park et al. (2016) used a clustering approach to extract common activity features of 612 courses from a large private university in South Korea, using online behavior data tracked from a learning management system and the institution's course database. They identified four unique subtypes: Type I (inactive or immature), Type C (communication or collaboration), Type D (delivery or discussion), and Type S (sharing or submission). Wang et al. (2020) proposed a frequent sequence mining algorithm and cluster analysis to examine the teaching mode of wisdom classroom teachers. They analyzed the teaching behavior of a group of outstanding teachers in a specific discipline through teaching video cases and then carried out a cluster analysis; the resulting cluster of excellent teachers was firmly consistent with the excellent teacher group in actual teaching. However, despite these studies, there is a lack of research using clustering methods to analyze teachers' online teaching behavior from the perspective of big data (Zhao & Yao, 2019). Thus, more research is needed to further explore this area.
Based on the preceding analysis, in this study the authors conducted a big data analysis of teachers' online teaching behavior in online course groups established by a local university in China. They investigated whether teachers' teaching behaviors in different types of course groups exhibit distinctive features and whether such behaviors are associated with students' academic performance. Specifically, the study addressed the following research questions:
1. Can online courses be categorized into different types through cluster analysis of teachers' online teaching behavior data on the online course platform?
2. What are the characteristics of online teaching behaviors in different course groups?
3. Is there a correlation between online course types and students' academic performance?
Participants
The authors selected the sample data for this study from the SPOCs offered by a local university in East China on the Superstar Erya network teaching platform. They collected data from all the teachers who offered courses on this platform from March 2022 to June 2022, the second semester of the 2021-2022 academic year. In total, 1589 courses and 894 teachers participated in the study. After excluding courses with zero or extreme data, the researchers obtained a final set of 1147 courses for analysis.
Online Teaching Platform
The online teaching environment comprised the Superstar Erya teaching platform (PC terminal) and the Superstar Learning platform (mobile terminal). Developed by China Superstar Corporation, the platforms are widely used in nearly 800 universities across the country. They are divided into two modules: course management and class activity. The PC terminal houses the course teaching resources, including micro-class videos, teaching courseware, work tasks, cases, and exercises. The mobile terminal, in turn, hosts the course teaching activities, such as sign-in, asking questions, publishing tasks, and discussion. The online teaching platform effectively connects the SPOC Internet platform with the traditional classroom through the course management module and the class activity module (Figure 1), creating an online teaching environment that is not limited by time and space (Tian & Zhang, 2019).
General Online Teaching Behavior Model
Based on activity theory and design-based research, and informed by the functional characteristics of both the Superstar Erya and Superstar Learning platforms, the authors developed a general model of online teaching behavior for all disciplines under blended teaching environments (Figure 2). The online teaching process typically includes the following steps:
1. During the pre-class stage, the teacher uploads a list of learning tasks and resources for the week, such as multimedia materials, course materials, case studies, exercises, and test questions. Students review the list of learning tasks, understand the requirements, and use the provided resources to complete the assigned tasks. If they encounter any issues, they can record them and provide feedback to the teacher.
2. During the in-class stage, the teacher carries out a series of online teaching activities, such as organizing sign-ins, voting, surveys, live broadcasts, questioning, exercises, and summaries. Meanwhile, students participate in sign-ins, provide feedback, listen, question, discuss, display, evaluate, and engage in other interactive activities.
3. During the after-class stage, the teacher organizes Q&A discussions, releases homework and tests, and grades assignments and exams. Students complete the relevant homework and learning tasks, discuss and communicate their work, conduct learning summaries and reflection, enhance their knowledge system, and formulate next-step learning plans.
Data Collection
Teaching Behavior

In this study, the authors utilized the relevant actions teachers performed on the teaching platform as a measurement index. This index encompasses three categories of actions: pre-class, in-class, and after-class. Pre-class actions include uploading media resources, creating chapters, and releasing test questions and test papers. In-class actions encompass conducting class activities and video teaching. Lastly, after-class actions include posting and replying to posts, and publishing and grading homework and exams. Each teacher's behavior corresponds to multiple data collection points (Table 1).
Students' Academic Performance
Academic performance comprises two primary components: formative evaluation and summative evaluation within the course. These evaluations are typically expressed on a 100-point scale and each accounts for 50% of the final grade (Table 2).
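As a concrete illustration of this weighting scheme, the final grade can be computed as an equally weighted average of the two components. The helper below is a sketch under the stated 50/50 assumption; the function name and the sample scores are hypothetical, not taken from the platform.

```python
def final_grade(formative: float, summative: float, w_formative: float = 0.5) -> float:
    """Combine formative and summative scores (0-100 scale) into a final grade.

    The platform described here weights the two components equally,
    which corresponds to the default w_formative = 0.5.
    """
    return w_formative * formative + (1 - w_formative) * summative

# Example: a student scoring 80 in formative evaluation and 90 in summative evaluation
print(final_grade(80, 90))  # 85.0
```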
Data Analysis
The software SPSS offers several methods for cluster analysis; the two primary ones are k-means clustering and hierarchical clustering. In this study, the authors adopted k-means clustering in SPSS 26.0, a statistical method that uses the sample mean of a cluster to represent the whole cluster, because the analyzed data set includes more than 1000 samples. The aim is to divide online courses into groups based on the characteristics of the behavior data and to explore the differences between teachers' online teaching behaviors in different course groups.
Based on the teaching process of SPOC and teachers' practical teaching experience, some courses show a different emphasis on teachers' teaching behaviors across the three stages: before class, in class, and after class. In other words, some courses have more teaching behaviors in one or two stages than in the others, showing a certain preference, while other courses have relatively similar numbers of teaching behaviors across the three stages, showing no obvious preference. Therefore, the authors defined three independent variables, counted the total number of teachers' teaching activities before, during, and after class in each course, and used the k-means clustering method to cluster on these three stages. They tested different values in the "Number of clusters" SPSS window to explore the classification patterns of teachers' teaching behaviors in online course groups. To enhance the discrimination and interpretability of the clustering results, the researchers standardized the data, as the numbers of teaching behaviors varied substantially across the three teaching stages.
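The pipeline described above (count behaviors per stage, standardize each variable, then run k-means) can be sketched outside SPSS as well. The following pure-Python illustration is not the authors' actual SPSS procedure, and the toy per-course counts are invented for demonstration.

```python
import random
import statistics

def zscore(rows):
    """Standardize each column (teaching stage) to zero mean and unit variance."""
    cols = list(zip(*rows))
    means = [statistics.mean(c) for c in cols]
    sds = [statistics.pstdev(c) or 1.0 for c in cols]  # guard constant columns
    return [[(v - m) / s for v, m, s in zip(row, means, sds)] for row in rows]

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: each cluster is represented by the mean of its members."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        new_centroids = [[sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, clusters

# Toy data: per-course totals of (pre-class, in-class, after-class) behaviors.
courses = [[300, 80, 150], [310, 90, 140], [40, 10, 20], [50, 12, 25]]
centroids, clusters = kmeans(zscore(courses), k=2)
```

With k = 5 and the real 1147-course data set, this mirrors the grouping step; standardization matters here because pre-class counts are far larger in magnitude than in-class counts.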
Table 1. Teaching behaviors and data collection points

Teaching stage | Teacher behavior | Data collection point
Pre-class | Uploading media resources | Number of pictures, videos, audio, documents, and other formats
Pre-class | Creating chapters | Number of chapters
Pre-class | Releasing test questions and test papers | Number of test questions and test papers
In-class | Carrying out class activities | Number of sign-ins, questionnaires, questions asked, votes, thematic discussions, notices, and in-class exercises
In-class | Video teaching | Number of live broadcasts and simultaneous classes
After-class | Posting and replying | Number of posts and replies
After-class | Publishing and grading homework | Number of homework assignments published and graded
After-class | Publishing and grading exams | Number of exams published and graded
The authors used descriptive statistics to analyze the characteristics of teachers' online teaching behaviors in each cluster course group. Furthermore, they used one-way ANOVA to explore the relationship between teachers' teaching behaviors and students' academic performance across the different course groups.
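The comparison the authors run in SPSS is a standard one-way ANOVA. As an illustration of the underlying computation (not the authors' code), the F statistic is the ratio of between-group to within-group mean squares; the group scores below are made up.

```python
import statistics

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of score groups (e.g., course types)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical academic-performance samples for two course types:
print(one_way_anova_f([[80, 82, 84], [86, 88, 90]]))  # 13.5
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that at least one group mean differs significantly, which is then followed up with post hoc comparisons such as Scheffe's test.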
Classification of the Online Course
To classify the teachers' teaching behaviors across the three teaching stages (i.e., before, during, and after class) in all online courses, the authors imported the corresponding data files into SPSS. They used the "Analysis | Classification | K-Means Cluster" menu command to cluster the imported data. After several attempts, they determined that setting the value to 5 in the "Number of clusters" window of SPSS proved effective in identifying the categories of online courses. Ultimately, the authors completed the cluster analysis after eight iterations; Table 3 presents the corresponding data.
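One common heuristic for choosing the number of clusters, a value the authors settled on by trial, is to compare the within-cluster sum of squared errors (the quantity k-means minimizes) across candidate k values and look for an "elbow" where further increases in k stop paying off. The helper below is an illustrative sketch with invented points, not part of the SPSS workflow.

```python
def inertia(points, centroids):
    """Within-cluster sum of squared distances: lower means tighter clusters.

    Evaluated for several candidate numbers of clusters, a sharp drop followed
    by a plateau (the 'elbow') suggests a reasonable choice of k.
    """
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
               for p in points)

pts = [[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]]
print(inertia(pts, [[0.5, 0.0], [10.0, 0.0]]))  # 0.5
```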
The k-means clustering method produced five course groups with distinct values for each category. As Tables 4 and 5 show, cluster 1 has the highest average number of teaching behaviors before class; the standardized values of this category are much higher than those of the other two teaching stages. In cluster 4, the average numbers of teaching behaviors in the three teaching stages are intermediate, while in cluster 5 the average numbers in all three stages are low. Based on the characteristics of teachers' teaching behaviors in each group of courses, the authors define clusters 1 to 5 as five course types: resource preparation, online classroom interaction, task evaluation, active interaction, and inactive interaction. The resource preparation course group comprises 47 courses (4.1% of the total), the online classroom interaction group 41 courses (3.6%), the task evaluation group 106 courses (9.2%), the active interaction group 178 courses (15.5%), and the inactive interaction group 775 courses (67.6%). Table 5 shows the number of members in each cluster.

Table 2. Evaluation methods for academic performance

Evaluation method | Illustration
Formative (50%) | Formative evaluation assesses the learning behavior of students participating in teaching activities organized by teachers, measured by factors such as attendance, participation in discussions, answering questions, posting and replying to messages, and homework performance, expressed as a percentage score. These measures are subjective and depend on the teacher's assessment.
Summative (50%) | Summative evaluation is the weighted average of scores from all exams in a given course, including unit tests, midterm exams, and final exams, typically expressed as a percentage. In the platform's default setting, every online test or exam can be attempted twice, with the highest score recorded. Most of these exams consist of objective questions, so the scores are not influenced by the tutor's marking.
After multiple comparisons of the course groups formed by clustering, the authors found significant differences in teachers' online teaching behaviors among the groups (p < 0.01). This indicates that the clusters do not overlap and that the partition between them is clear, so the clustering can effectively identify the categories of teachers' online teaching behaviors (Tables 6 and 7).
Teaching Behaviors of Resource Preparation Course Group
High-quality network teaching resources can effectively connect teachers' instruction with student learning and promote students' internalization of knowledge. In the resource preparation course group, teachers' preparatory activities before class significantly outweigh those during and after class. According to the statistical analysis, the resources teachers uploaded prior to class include not only multimedia resources, such as pictures, videos, and documents, but also instructional materials, such as examination questions, test papers, and chapter outlines. As Figure 3 shows, test questions comprise the majority of uploaded resources, accounting for 80.1% of the total. Among the multimedia materials, video resources are the most frequently uploaded, constituting 13.2% of the total, which exceeds the combined percentage of the other multimedia resources. According to Figure 4, the majority of the uploaded test questions are objective questions, with single-choice questions having the highest proportion (13366 items, 35.3%), followed by blank-filling questions (7658 items, 20.2%). In contrast, topic discussions and computational problems make up a smaller proportion, accounting for 1.2% and 0.9%, respectively. In total, objective questions such as multiple-choice, blank-filling, true-false, short-answer, and noun-explanation questions comprise more than 95% of the test questions. Overall, the resource preparation group's teachers demonstrate considerable enthusiasm in creating online learning materials, especially in uploading test questions to establish a test bank and prepare for future exams. However, there is a lack of online teaching during and after class.
Teaching Behaviors of Online Classroom Interaction Course Group
The online classroom interaction course group consists of 41 courses. Figure 5 illustrates that teachers predominantly engage in asking questions as part of their online classroom teaching behaviors, with a total of 4269 instances, accounting for 47.1% of the total. Additionally, live broadcasting and synchronous classroom activities occur 582 and 526 times, respectively, making up 6.4% and 5.8% of the total. On average, each course has 14.2 live broadcasts and 12.8 synchronous classroom sessions. The least utilized activity is the questionnaire, with only 263 instances, about 2.9% of the total.
In summary, teachers in the online classroom interaction course group demonstrate strong teaching interaction behaviors in class, with a particular preference for asking questions. However, this course group shows a lack of pre-class resources and chapter outlines, task assignments, grading of homework, and post-class posting and replying.
Teaching Behaviors of Task Evaluation Course Group
The task evaluation course group consists of 106 courses, and Figure 6 illustrates that teachers' teaching behaviors primarily center on grading homework after class. This activity occurred 29312 times, representing 53.2% of the total, followed by grading exams, which occurred 17969 times, accounting for 32.6%. Compared to traditional homework grading, online grading is more efficient and faster, significantly reducing teachers' workload, which has led to its widespread adoption. Teachers' posts and replies after class occurred 2291 and 895 times, respectively. The average number of posts and replies per course was 21.6 and 8.4, respectively, indicating that communication between teachers and students after class needs improvement.
In conclusion, task-based course group teachers primarily engage in evaluating after-class tasks, focusing on homework grading, with limited teaching activities before and in class.These teachers tend to exploit the advantages of the platform by assigning homework and tests after class and actively grading them, but the frequency of communication between teachers and students after class is low.
Teaching Behaviors of Active Interaction Course Group and Inactive Interaction Course Group
According to the data, the active interaction course group consists of 178 courses, and the average teaching behavior counts in the three teaching stages are 311.4 (pre-class), 86.6 (in-class), and 153.1 (after-class). These findings suggest that these teachers are actively engaged in teaching activities across multiple teaching stages.

Conversely, the inactive interaction course group comprises 775 courses, where the average teaching behavior counts in the three stages are significantly low, standing at 47.1 (pre-class), 10.3 (in-class), and 22 (after-class), respectively. These results indicate that such teachers exhibit minimal participation in online teaching activities, and students mainly use the learning materials on the teaching platform to study by themselves.
Finally, Table 8 summarizes the characteristics of teachers' online teaching behaviors in each cluster course group.
Comparison of Academic Performance in Different Course Groups
Based on the findings of the preceding cluster analysis, the authors investigated the relationship between academic performance and course type by treating academic performance as the dependent variable and course type as the independent variable. Specifically, the authors compared students' academic performance in different course groups using analysis of variance. Given the varying sample sizes across multiple comparisons, they selected the "Scheffe" option; Table 9 and Table 10 present the corresponding results.
The results indicate that the average academic performance for the inactive interaction, task evaluation, online classroom interaction, and resource preparation course groups is 80.74, 82.74, 81.76, and 80.92, respectively. There is no significant difference in academic performance between these four groups (p > 0.05). However, students in the active interaction course group achieved an average academic performance of 87.92, higher than the other four course groups, and this difference is statistically significant (p = 0.000 < 0.05).
Online Teaching Behaviors in Each Cluster Course Group
According to Jurado et al. (2014), when given the choice of tools to use in an online learning management platform, teachers tend to favor those that allow them to upload, store, and distribute learning content or materials. In this study, the authors found that teachers in the resource preparation course group had a keen interest in uploading test questions, teachers in the task evaluation course group were eager to distribute and grade homework, and teachers in the online classroom interaction course group preferred to ask questions. Other teaching behaviors, such as uploading multimedia resources, leading in-class exercises, facilitating topic discussions, and posting and replying to student comments, are relatively less common. These findings suggest that, while these teachers are active in their teaching activities, they tend to focus on specific activities during certain stages of instruction, resulting in a relatively limited range of online teaching behaviors. Some teachers use only a few functions of the online learning platform (e.g., establishing a question bank, generating papers randomly, checking attendance, and correcting homework) as tools to assist teaching, but have not fundamentally changed their traditional teaching methods.
The inactive interaction course group includes 775 courses, accounting for 67.6% of the total, which indicates that most online courses involve very little teaching interaction from teachers; most of the time, students use the resources on the learning platform to learn independently. This finding aligns with previous research, which revealed that, although many colleges and universities have established numerous online courses, they only provide free course materials during the teaching process, and about 50% of the courses show inactive use or immature stages of blended learning implementation (Park et al., 2016). According to Tao (2019), low teacher participation accounted for 67.2% of online course teaching, leading to a problem of "emphasizing construction and neglecting use". Given that universities have invested substantial resources in developing online courses, it is crucial to promote the effectiveness of online teaching. However, many teachers remain unfamiliar with the functions and operations of online teaching platforms, and their online teaching abilities need to be further enhanced. Therefore, it is essential to organize training sessions for teachers to improve their online teaching abilities and promote their familiarity with the online teaching platform.
The test questions uploaded by teachers of the resource preparation course group primarily consist of objective questions, such as blank-filling, multiple-choice, true/false, and short-answer questions, accounting for more than 95% of the total. These questions mostly assess students' memory, application, and understanding of the knowledge they have acquired, which are considered low-level cognitive skills in Bloom's taxonomy of teaching objectives in the cognitive domain. However, course inspection and evaluation should not be limited to this level; they should also target students' higher-level cognitive abilities, such as analyzing, evaluating, and creating, to develop their innovative capacity (Anderson, 2007). Therefore, teachers should strengthen the comprehensiveness of course teaching evaluation. First, the test questions or homework uploaded by the teacher should not consist only of objective questions (e.g., multiple-choice, blank-filling, and true/false), but should also include higher-quality question types (e.g., essay questions, analysis questions, and calculation questions) and more effective testing tools to enhance students' higher-order thinking abilities such as analysis, evaluation, and innovation. Further, teachers must establish evaluation standards or rubrics for teaching activities such as examinations, assignments, and discussions. Finally, teachers should provide timely evaluation and effective feedback, and promptly communicate any problems identified during evaluation to students to guide their progress and facilitate teachers' teaching reflection.
The Relationship Between Course Types and Academic Performance
In this study, three types of course groups (resource preparation, online classroom interaction, and task evaluation) show a high frequency of a particular teaching behavior in a particular teaching stage. However, the average academic performance of students in these course groups differs little from that of the inactive interaction course group, with no significant difference between them. These findings are consistent with previous research showing that teachers' frequent grading of homework and frequent responses to forum requests for help had no significant effect on student performance (Chen, 2021). Although the frequency with which teachers update resources is significantly correlated with students' test scores (Shen & Wu, 2020), an interaction frequency that is too high or too low has no significant impact on learners' academic performance (Dang, 2019).
There is a significant difference in the academic performance of students between the active interaction course group and the inactive interaction course group. This indicates that, in the teaching process, maintaining a moderate interaction frequency across the three teaching stages, rather than an excessive or minimal one, can promote students' academic performance (Zhang, 2017). Teachers' teaching interaction needs to run through the entire teaching process, before, during, and after class, in order to have a significant positive impact on the realization of course objectives and students' learning outcomes (Liu & Wang, 2019). Therefore, in a blended teaching environment, teachers should make full use of the advantages of the teaching platform and arrange teaching activities from a systematic perspective instead of solely favoring or frequently engaging in specific teaching behaviors. In other words, teachers should release information and upload teaching content and media resources based on students' learning needs and cognitive characteristics before class. During class, teachers should reasonably arrange online teaching activities such as sign-ins, asking questions, and discussing topics based on the needs of classroom teaching. After class, teachers should post assignments, exams, and topic posts on the teaching platform, and review and reply to posts promptly. Such rational arrangement of teaching activities can enhance students' participation in learning, improve teaching outcomes, and foster better academic performance (Liu et al., 2014).
LIMITATIONS AND FUTURE RESEARCH
The limitations of this study are as follows. First, this study refers to only one university as an example. Compared with the thousands of universities across the country, the data sample is small and the conclusions drawn are not universal; they may not reflect the online behavior characteristics of all university teachers in online course teaching. Second, this study used only teaching data as a sample to mine teachers' teaching behavior, while ignoring the influence of teachers' teaching behavior on students' online learning behavior.
In future research, it is important to study the characteristics of teachers' online teaching behavior in blended teaching in greater depth by expanding the scope of the sample and collecting big data from universities with different majors, levels, and teaching platforms. This can be achieved by increasing the number of participating universities at different levels and in different regions across the country. Additionally, it is important to conduct cross-cultural analysis by comparing the online teaching behavior of university teachers in different countries. Furthermore, it is crucial to combine teachers' online teaching behavior with students' learning behavior to improve the validity of the research and deepen the understanding of teachers' online teaching behavior. Finally, since students' academic performance in the active interaction course group is significantly higher than in the other course groups, it is necessary to further analyze the characteristics of teachers' teaching behaviors in this group, which is of great significance for improving the teaching effect.
CONCLUSION
In conclusion, in this study the authors investigated teachers' online teaching behaviors in 1147 courses from a local university in East China using k-means clustering. The analysis identified five course types with distinct teaching behaviors: resource preparation (4.1%), online classroom interaction (3.6%), task evaluation (9.2%), active interaction (15.5%), and inactive interaction (67.6%). Most courses fell into the inactive interaction course group, indicating a lack of teachers' online teaching behaviors. Although some teachers were active in their teaching activities, they focused on specific activities during certain stages of instruction, resulting in a limited range of online teaching behaviors. The authors also found that academic performance did not differ significantly between students in the first three course types and the inactive interaction course group, but there was a significant disparity between students in the active interaction and inactive interaction course groups. These results show that single, frequent teaching behaviors cannot improve the teaching effect. Online course teachers should not only diversify their teaching behavior but also maintain a moderate amount of teaching interaction throughout the teaching process.
These findings may be particularly relevant to the period during and after the COVID-19 pandemic. Many educators have had to adapt to online teaching and learning due to school closures and social distancing measures. By identifying common challenges and potential strategies to address them, this research provides valuable insights and guidance to teachers new to online teaching or seeking to improve the quality of their online courses.
Figure 1. Main functional modules of the network teaching platform
Figure 2. General online teaching behavior model
Figure 4. Distribution of main types of uploaded test questions
Figure 5. Overview of teachers' teaching activities in class
Table 9. Multiple comparisons of academic performance in different course groups
Note. *The mean difference is significant at the 0.05 level. Academic performance is the dependent variable.
Table 10. Comparisons of academic performance (Scheffe a,b); number of observed values in each cluster
Means for groups in homogeneous subsets are displayed. a. Uses harmonic mean sample size = 80.507. b. The group sizes are unequal. The authors used the harmonic mean of the group sizes. Type I error levels are not guaranteed.
The effect of side chains on polymers in organic electrochemical transistors for bioelectronics
Bioelectronics focuses on the establishment of the connection between the ion-driven biosystems and readable electronic signals. Organic electrochemical transistors (OECTs) offer a viable solution for this task. Organic mixed ionic/electronic conductors (OMIECs) rest at the heart of OECTs. The balance between the ionic and electronic conductivities of OMIECs is closely connected to the OECT device performance. While modification of the OMIECs’ electronic properties is largely related to the development of conjugated scaffolds, properties such as ion permeability, solubility, flexibility, morphology, and sensitivity can be altered by side chain moieties. In this review, we uncover the influence of side chain molecular design on the properties and performance of OECTs. We summarise current understanding of OECT performance and focus specifically on the knowledge of ionic–electronic coupling, shedding light on the significance of side chain development of OMIECs. We show how the versatile synthetic toolbox of side chains can be successfully employed to tune OECT parameters via controlling the material properties. As the field continues to mature, more detailed investigations into the crucial role side chain engineering plays on the resultant OMIEC properties will allow for side chain alternatives to be developed and will ultimately lead to further enhancements within the field of OECT channel materials.
Introduction
It is difficult to imagine any underlying physiological process in living organisms without considering the role of ions. Ionic solutions in water and bodily fluids are major players in the regulation of essential biological and metabolic processes, such as osmosis, pH monitoring, and hydration. 1 Furthermore, ions are responsible for the stimulation and modulation of a plethora of crucial mechanisms in both animal (neural impulse, muscle function) and plant (turgor, photosynthesis) worlds. 2 Any form of life is tightly interconnected with ionic behaviour. 3,10-13 To establish the origin of a complex biological condition or treat a disease, a responsive system capable of interacting with biological substrates and translating their characteristics into distinguishable electronic signals is necessary. 14 Establishing the link between these biosystems and readable electronic output is a major focus of bioelectronics. Creating this connection is associated with a handful of difficulties, related to the fundamental differences in the operational modes and material features of human-made and nature-created structures. 15 For instance, while biosystems tend to use ionic and molecular forms for information transfer, electrons and holes serve that role in artificial electronic systems. Typically, hydrophobic electronic devices are composed of rigid counterparts, while water-friendly biological systems are known to be flexible and soft. Diversity in energy sources and operational conditions concludes the list of differences. To address these divergences and merge them in an efficient bioelectronic device, the development of new materials, state-of-the-art device architectures, and appropriate power sources is essential. 15 The result of this merging is a bioelectronic interface, capable of bidirectional recognition of biological signals (e.g., cells, organs, tissues) induced by the change in electronic or ionic charge transport.
16 Many applications have arisen as a result of the development of new bioelectronic interfaces: cell culture, 17 biomedical diagnosis, 18 and electrophysiological stimulation, 19 to name but a few (Fig. 1). 20 Inspired by the progress in other areas of organic electronics, namely organic field-effect transistors (OFETs), organic solar cells (OSCs) and organic light-emitting diodes (OLEDs), the field of bioelectronic devices has blossomed over the last two decades. 21 Comparable to Galvani's famous pioneering animal electricity experiment, 22 advances in new bioelectronic materials lead to device miniaturisation and sensitivity improvement. 23 The attributes of secure and efficacious bioelectronic interfaces include biocompatibility, operational stability, compatibility with living matter, sensitivity, and detection speed. 14 All of these conditions can be met by an organic electrochemical transistor (OECT). 25,26 It is the ability of the polymer channel material to take up ions and other metabolites from the interfacing electrolyte and transport electronic charge carriers (holes and/or electrons), resulting in mixed ionic and electronic conductance, that underpins the superior performance of OECTs in bioelectronic applications. 27 Mixed conductance, permeability, and conformability, essential for OECT operation, can be achieved via the utilisation of organic mixed ionic-electronic conductors (OMIECs). 28 As opposed to conventional rigid inorganic electronics components, OMIECs possess the merits of facile low-temperature processability and solubility in various organic solvents, which makes them suitable for mass-production printing techniques. 29 These advantages stem from the distinctive molecular design of OMIECs, generally combining a highly conductive conjugated polymer (CP) backbone and side chains capable of ion uptake (Fig. 2).
This journal is © The Royal Society of Chemistry 2022
30 To fabricate a highly efficient OECT channel material, the condition of facile ion penetration 31 through the CP network upon voltage application has to be met. 32 This provides significant transconductance values, which translate into adequate sensitivity of the target devices. 33 CPs' sensitivity to both ionic and electronic charge carriers makes them suitable for a wide range of applications using ion-to-electron signal conversion. 16,34 Moreover, mechanical softness on par with that of biological tissues has to be achieved to render CPs suitable for biomedical applications. The tuneability of CPs underpins the versatility of OECT-based devices, and enables the wealth of applications. 16,35 While modification of electronic properties is primarily dictated by the development of novel conjugated scaffolds, properties such as ion permeability, solubility, flexibility, morphology, and sensitivity can be tuned through side chain architectures. 36,37 Even though the effects of side chains on overall device performance have been studied extensively in other fields of organic electronics (e.g., OFETs, OSCs, OLEDs), 38 well-structured reviews on structure-performance trends directed by side chains in OECTs are lacking. Therefore, the motivation behind this work is uncovering the influence of the side chain molecular design of channel materials on the properties and performance of the resultant OECT devices. Firstly, the fundamental concepts of OECT physics and commonly used OMIEC materials will be discussed in Section 2, to summarise the current understanding of OECT performance. Additionally, the concepts of ionic-electronic coupling, sensitivity and selectivity will be introduced, and their connection with the side chain engineering approach will be uncovered. Further on, Section 3 will focus on a detailed discussion of OECT material side chains, showing how the versatile synthetic toolbox can be employed to tune various OECT parameters. Finally, an overview and perspective for future side chain development will be presented in the conclusion section.
OECT physics
The performance of an OECT is governed both by its device structure and the features of the materials involved. In parallel with other transistors, such as conventional OFETs and electrolyte-gated OFETs (EGOFETs), OECTs are miniature thin-film devices, comprised of source, drain and gate electrodes, and a layer of channel material sandwiched between them (Fig. 3(a)). 21 OECTs are known to operate in two modes, namely depletion and accumulation, depending on the nature of the channel material. 39 A benchmark p-type channel material, namely poly(3,4-ethylenedioxythiophene) doped with polystyrene sulfonate (PEDOT:PSS), operates in the depletion mode (Fig. 3(b), top). Whilst the doped initial state of a channel material is a key characteristic of depletion-mode OECTs, unbiased undoped CPs (p-type in this case) serve as the foundation for accumulation-mode OECTs (Fig. 3(b), bottom). As the device switches on due to hole build-up, both high hole mobility and superior neutral/oxidised state stability represent crucial requirements for accumulation-mode p-type OECTs. 40 Fig. 3(b) details the processes of polymer doping in the cases of initially doped channel materials (PEDOT:PSS) and undoped p-type CPs. Electron-mobility-governed n-type CPs enable both depletion and accumulation modes of OECT operation. Needless to say, the stability requirement is equally applicable to n-type channel materials. 21 The efficacy of the OECT performance can be described using transconductance (g m ), which essentially defines the signal transduction by the transistor and dictates the OECT sensitivity. The scale of g m largely depends on the OECT geometry and such material-specific characteristics as charge carrier mobility and volumetric capacitance.
39,41 The relationship between these parameters is outlined in eqn (1):

gm = ∂ID/∂VG = (Wd/L) m C* (VTH − VG)    (1)

where gm is the transconductance, ID the drain current, VG the gate voltage, m the electronic charge carrier mobility, C* the volumetric capacitance, W the channel width, d the channel depth, L the channel length, and VTH the threshold voltage. Notably, OECT transconductance can exceed that of OFETs, reaching values as high as 800 S m−1. 42,43 In bioelectronic applications, transconductance generally serves as a function of the parameter of interest (e.g., target ion or metabolite concentration). 44 Consequently, the high sensitivity of OECTs stems from enhanced resolution at reduced detection limits, which is accounted for by the gate voltage/channel current interconnection. 45 The combination of low operational voltage and high transconductance creates beneficial conditions for the precise examination of biological events. 37
(Figure caption: The dotted lines correspond to the best fits of transconductance gm = aWd/L, with a being a proportionality constant. 42 Reproduced from ref. 44.)
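As a quick numerical illustration of eqn (1), the sketch below evaluates gm for a hypothetical accumulation-mode p-type device. All parameter values are assumptions chosen only to show the separate geometric (Wd/L) and material (m, C*) contributions; they are not taken from any cited study.

```python
# Hedged sketch of eqn (1): OECT transconductance in saturation.
# All device parameters are illustrative assumptions.

def transconductance(mu_cm2_Vs, C_star_F_cm3, W_cm, d_cm, L_cm, V_th, V_g):
    """gm = (W*d/L) * mu * C* * (V_th - V_g), returned in siemens."""
    return (W_cm * d_cm / L_cm) * mu_cm2_Vs * C_star_F_cm3 * (V_th - V_g)

# Hypothetical p-type accumulation-mode device:
gm = transconductance(
    mu_cm2_Vs=0.1,       # hole mobility, cm^2 V^-1 s^-1
    C_star_F_cm3=200.0,  # volumetric capacitance, F cm^-3
    W_cm=100e-4,         # 100 um channel width
    d_cm=100e-7,         # 100 nm film thickness
    L_cm=10e-4,          # 10 um channel length
    V_th=0.0,
    V_g=-0.5,            # gate bias driving accumulation for p-type
)
print(f"gm = {gm * 1e3:.3f} mS")  # prints: gm = 1.000 mS
```

Doubling W (or d, or halving L) doubles gm, which is exactly the geometric scaling gm = aWd/L discussed in the text.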
Given the significance of OECT transconductance in biological applications introduced above, enormous efforts have been devoted to the development of OECT channel materials, in which the signal amplification essentially occurs. The upcoming Section 2.2 presents an overview of currently well-studied OECT channel materials and introduces side chain engineering as a systematic approach to tuning material properties. The latter is further connected to the fundamental concept of ionic-electronic coupling through the discussion of morphology-related effects (Section 2.2.1). The concepts of sensitivity and selectivity will be discussed in detail in Section 2.2.2, followed by detailed examples of the types of applied side chains in Section 3.
OECT materials and side chain engineering
The mode of operation of an OECT device is governed by the choice of channel material. Thus, an intrinsically doped material is expected to enable the depletion operational mode, while a CP requiring additional doping would warrant the accumulation mode. OECT channel materials can be classified according to the type of organic semiconductor involved: p- or n-type. 46 Additionally, OMIECs for use in OECTs can be subdivided according to their chemical composition. Homogeneous materials support bidirectional electronic and ionic charge transport either in a single material or in a materials blend, whilst the heterogeneous type refers to segregated regions of exclusively ionic or electronic transport. 15 The work-horse OECT material, which has found numerous bioelectronic applications, is PEDOT:PSS. 49 However, being a heterogeneous system, PEDOT:PSS does not provide much freedom for synthetic modifications. Hence significant efforts were dedicated to the design of accumulation-mode homogeneous CPs as potential improvements. 50 P-type CPs are mainly represented by thiophene-based materials. Polythiophenes are particularly attractive due to their overall (thermal, chemical, environmental) stability in both doped and undoped states. Even though unsubstituted polythiophene suffers from poor solubility, synthetic incorporation of side chains resolves this issue. For instance, decoration of the polythiophene scaffold with long alkyl chains and the development of facile synthetic methods (e.g., Kumada catalyst transfer polymerisation) has led to the introduction of the most archetypal polythiophene derivative, namely regioregular poly(3-hexylthiophene-2,5-diyl) (P3HT).
As electron mobility represents an obvious challenge for the p-type thiophene-based materials, a new class of n-type CPs began to emerge. Efficient n-type OMIECs have been prepared utilising a donor-acceptor molecular skeleton bearing a strongly electron-accepting naphthalene-1,4,5,8-tetracarboxylic acid diimide (NDI) fragment. Compared to PEDOT:PSS, significantly higher sensitivity and signal amplification could be achieved for n-type OMIECs. 40 The combination of the NDI chromophore with a functionalised thiophene moiety resulted in low reduction/oxidation potentials of the OMIEC copolymer and enabled ambipolar p- and n-type OECT performance.
In addition to the synthetic progress in designing polymer backbones, efforts in side chain engineering offer another key aspect in developing future OMIECs. Side chain engineering allows further fine-tuning of material properties, which makes it easier to set up model studies in the lab and ultimately helps establish our understanding of the structure-property relationships of OECTs, e.g., the concept and determining factors of ionic-electronic coupling. In the next section, the impact of side chains on this electrochemical event will be addressed, which further demonstrates how side chain engineering can substantially influence the sensitivity of OECTs. More importantly, the selectivity of OECTs with respect to biological analytes also benefits from the incorporation of functional side chains, which will be discussed subsequently in Section 2.2.2.
2.2.1. Side chain engineering effect on ionic-electronic coupling. In the context of OECTs, not only do the optoelectronic properties of the materials matter, but also doping-related characteristics and consequent variations in ionic-electronic coupling. As a result, OECT mobility and transconductance values can be greatly affected by side chain modification. OECT sensitivity and selectivity can be directed by the choice and incorporation of appropriate side chains. 51 Ionic-electronic coupling refers to the balance of electronic and ionic conduction within a mixed conducting material. Electronic conduction is governed by the equilibrium of charge concentration and charge mobility, which consequently depends on structural and morphological features. In terms of molecular design, efficient p-orbital overlap is crucial to enable electronic transport. Such an overlap can occur both within the chain of a conjugated polymer (intramolecularly) and between polymer chains via through-space p-p interaction (intermolecularly). The combination of simultaneous inter- and intramolecular p-orbital overlap allows for efficient electronic transport along and between the polymer chains. Using rigid molecular fragments and assembling planarised scaffolds has proven to be a successful approach towards efficient conjugation, bandgap minimisation, and consequently enhanced electron conduction. 52,53 Importantly, intermolecular interchain p-p interactions have a significant impact on polymer morphology via the emergence of crystalline regions, responsible for higher electronic transport (Fig. 5(a), left panel). 50,54,55 Side chain engineering offers a powerful tool to tailor electronic conduction within conjugated polymers by means of fine-tuning the bandgap and affecting the p-p stacking of the polymer backbone.
38 For example, controlling the side chain length and the degree of branching can significantly alter the p-p stacking distance and thus control the intermolecular charge hopping, as exploited in OFETs. 28,56,57 This general rule is applicable to OECT channel materials in terms of improving electronic conductance, and has been translated from the well-established alkyl side chain system to the widespread ethylene glycol-based side chain system, detailed later.
While the above observations apply to dry polymer films, the situation changes with volumetric electrolyte uptake (Fig. 5(a), right panel). 50 Upon exposure to the electrolyte solution, the CP film is subject to swelling. Swelling in turn promotes infiltration of ions and water molecules into the bulk of the polymer film. As a result, initial polymer swelling constitutes an important condition of ionic conductivity. 52 Ions are not only subject to hopping, but also follow the Grotthuss mechanism of solvated ion transport, resulting in amplified ion conduction. 58,59 Since most of the currently studied CP backbones are hydrophobic, side chain engineering serves another important role by introducing hydrophilicity into the system to promote effective ionic-electronic coupling. 51,60 The extent of the ionic-electronic coupling greatly depends on the side chain segmental mobility, as well as the side chain interaction with a dopant. 62-65 The finding of heterogeneous swelling demonstrates that the degree of water uptake within the polymer, upon OECT operation, is reliant on the polymer's crystallinity and microstructure, which can be tuned by the different side chain engineering strategies reviewed later. Computational work from Dong and co-workers further suggests that a seemingly negligible change in the side chain design can significantly influence the morphology of the mixed conducting polymer by affecting the conformational order of the side chains in the amorphous domain, hence modifying the conductivity values. 66 Owing to their ionic-electronic coupling, mixed conductors in OECTs are capable of substantial current amplification in the presence of the analyte in question, which gives rise to satisfactory OECT sensitivity at a low operational voltage in the aqueous environment.
4,67 2.2.2. Side chain engineering effect on the selectivity of mixed conductors. On par with ionic-electronic coupling and the related sensitivity parameter discussed above, selectivity represents another crucial parameter of OECT bioelectronic device performance. 68 Selective detection of certain biologically hazardous molecules in the presence of other analytes is important in healthcare applications. 69 While the selectivity of an OECT is largely associated with the utilisation of ion-selective membranes 3 and catalytic enzymes, 68 CP side chains undeniably play a significant role in facilitating selectivity by providing chemical linkage to various ligands and enzymes.
Post-functionalisation of the channel materials has been the prevailing strategy to immobilise probing molecules on the material surface, owing to the smaller synthetic barrier compared to the pre-functionalisation strategy. However, the potential damage to the engineered molecules and device performance after common processing techniques, e.g., plasma treatment, thermal annealing, and solvent erosion, hampers efficient device fabrication. 70 Strategies that avoid the above harsh post-processing conditions have to rely on weak intermolecular forces, which limits the long-term functionality of the device, as well as the grafting efficiency of biomolecules. 71 Applying side chains as chemical linkages to complicated biomolecules is currently an effective solution to this issue. The small size of the chemical linkage, incorporated as a monomer side chain, allows ease of polymerisation of the backbone, while offering strong connecting sites for the grafting of biomolecules, which are later covalently bonded to the CP channel. In 2018, Hai et al., presented a functionalised PEDOT:PSS derivative for human influenza virus sensing. 72 More specifically, the authors covalently grafted 2,6-sialyllactose (an influenza virus receptor) onto an oxylamine moiety which was tethered to an EDOT-based backbone (Fig.
6). The OECT device was utilised as an effective signal transducer, whereby the binding interaction between 2,6-sialyllactose and hemagglutinin led to recordable fluctuations in the drain current. Moreover, the overall negative charge of the influenza virus nanoparticle incurred an anionic doping effect within the active channel, subsequently altering the drain current output of the OECT. Compared to common immunochromatographic tests, the poly(EDOTOA-co-EDOT)/PEDOT:PSS composite-based OECT devices demonstrated a two-order-of-magnitude decrease in the limit of detection. In addition, the devices are a low-power-consumption alternative, offering facile processing via printing technologies for mass production. Similarly, Galán et al., report a selective sensor of Hepatitis C virus using DNA sequence-functionalised PEDOT, which was first engineered with azide side chains to serve as linkage to the virus probe. 71 These studies highlight the versatility of side chain engineering in combination with the signal-amplifying and transduction potential of an OECT device. The ability to bind specific biomolecules paves the way for wearable sensors and point-of-care evaluation of biological substances or events of interest. The ability to synthetically tune the linker moiety also represents the potential to expand this design to target multiple other viruses.
The selectivity of OECTs can be further established by side chain engineering the gate electrode with crown ether functional groups. The size of the cavity in the crown ether units determines the specific alkali metal cations that induce the intercalation effect. Such selective complexing between the targeted metal ions and the crown ether components causes disruption of the p conjugation along the CP backbone, thus generating a reduction current that primarily correlates to the change in concentration of the ions of interest in the environment. Based on this fundamental mechanism, Wustoni et al., copolymerised a traditional EDOT unit with crown ether-engineered EDOT units as the coating of OECT gate electrodes for selective recognition of K + and Na + . The crown ether-functionalised PEDOT system allows selective sensing of the targeted cations in the physiological concentration range without any additional membrane filters, as shown in Fig. 6(c). 73 Additionally, Kousseff et al., reported that functionalising the PEDOT system with crown ethers provides the material with better electrochemical stability and substrate adhesion, in addition to the metal cation selectivity.
74 Summarising the above discussion of the side chain effects on OECT channel materials in biological applications, it can be concluded that the side chains have a direct influence on the following parameters of a mixed conductor: (i) HOMO/LUMO energy levels, affecting, in turn, the bandgap, linear electronic charge transport, and ionic-electronic coupling; (ii) p-p stacking of the polymer backbones, influencing through-space electronic charge transport and crystallinity; (iii) the extent of swelling and ion uptake, controlling the resulting morphology; (iv) the sensitivity and transconductance of the resulting OECT; (v) the selectivity of the materials. In Section 3, we will review the different types of side chains which have been employed in OECT active channel materials, commenting on the various device and material improvements reported across the plethora of side chain engineering literature.
Types of side chains utilised for OECT active channel materials
In addition to the above fundamental mechanisms that give rise to the high sensitivity and selectivity of OECTs, uncovered by side chain-related studies, side chain engineering has been extensively studied to provide a set of systematic tuning strategies for high-performance OECT channel materials as well. Section 3 thus focuses on a selection of the currently most studied side chain types, including the ethylene glycol family, alkyl and alkoxy side chains, hybrid side chains and, finally, charged side chains. Through the various side chain parameters examined here, further detailed design principles will be revealed in this section.
Ethylene glycol (EG) based side chains
As discussed in Section 2, ionic conductance relies heavily on the hydrophilicity of CPs to facilitate ion flow inside the thin film. Given the difficulty of altering the hydrophobic backbones of most CPs, hydrophilic side chains have become an efficient solution. Currently, among the most widely studied hydrophilic side chains are ethylene glycol (EG) based side chains. CPs bearing EG side chains are able to facilitate aqueous solubility, aqueous ion transport, and the stabilisation of ions in the materials. 51,75 It is reported that the doping kinetics of glycolated polythiophenes with respect to small anions can be approximately 150 times faster than their alkylated counterparts, implying their potential in biosensing applications. 50 In addition to the facilitated water and electrolyte uptake, hydration brought about by the introduced hydrophilic side chains induces morphological changes that enable charge injection. Bischak et al., has recently conducted a detailed experimental study, uncovering reversible structural phase transitions in thiophene-based systems engineered with hydrophilic EG side chains. 76 Upon ion injection and electrochemical oxidation, the primary morphology of the glycolated mixed conductor poly[2,5-bis(thiophenyl)-1,4-bis(2-(2-(2-methoxyethoxy)ethoxy)-ethoxy)benzene] (PB2T-TEG) is governed by side chain-induced crystallisation. The cumulative effect of the hydration and injection of the ions facilitates the unzipping of the intertwined polymer chains. The latter is subject to subsequent p-p-stacking-governed zipping upon oxidation (Fig. 5(b)).
76 Such controllable phase transitions were shown to be effective in tailoring the electrochemical characteristics of mixed conductors. Significantly, the above-described phase transitions are dependent on the hydrophilic nature of the EG side chains, as no zipping/unzipping-associated charge injection was observed for the alkylated P3HT polymer in the same study. With all these benefits, the current highest mC* reported for a p-type OMIEC reaches 522 F cm−1 V−1 s−1. 77 However, from the published literature, it is obvious that the OECT performance of the tuned materials does not simply scale with the addition of EG side chains. Careful consideration is required to explore in detail how the introduction of hydrophilic side chains alters the performance.
Modifications of EG based side chains have been explored using different parameters, including the overall chain length, the linkage spacer to the backbone, the total percentage in the bulk material, and finally the engineered backbone positions. As previously studied in the alkylated system, the side chain length can systematically tune the morphology and thus the electronic conductance. 56 An analogous study on the length of EG side chains was performed by Moser et al., increasing the repeating units of the ethylene glycol side chains, tethered to a thiophene backbone, from 2 to 6, as shown in Fig. 7. 46 Among the four presented glycolated polythiophenes, p(g3T2-T) exhibited the optimum volumetric capacitance and charge mobility, rendering an overall mC* exceeding 135 F cm−1 V−1 s−1. Increasing the EG side chain length from p(g3T2-T) to p(g4T2-T) significantly decreases the charge mobility from 0.16 to 0.06 cm2 V−1 s−1, adding flexibility to the polymer backbone, which impedes long-range ordered packing. On the other hand, p(g3T2-T) also achieves the optimum volumetric capacitance within the series, with a value of 211 ± 18 F cm−3, due to the sufficient ion transport and stabilisation provided by the TEG side chains. Further addition of EG repeating units beyond three did not provide additional ion stabilisation, instead resulting in a decreased capacitance. As for the comparison with samples of decreased side chain length, p(g2T2-T), bearing the shortest EG side chains, proved difficult to process into the OECT channel and to transport ions efficiently, due to its reduced solubility and disordered morphology.
Redistribution of EG side chains also impacts the electrochemical performance of CPs. 77 By redistribution, Moser et al., varied the number of repeating units of the EG side chains attached to neighbouring thiophenes while keeping the total amount of units the same, as shown in Fig. 9. Among the investigated polymers, p(g2T2-g4T2) shows the best mC*, up to 522 F cm−1 V−1 s−1, followed by p(g1T2-g5T2) with an mC* of 496 F cm−1 V−1 s−1. The important parameters of the remaining polymer samples are compared in Table 1.
It is worth noting that the respective changes of m and C* under this side chain manipulation also reveal a trade-off between electronic and ionic conductance, as reported in the adjustment of the side chain length. 46 With more data regarding the active swelling of the material in this study, it can be elucidated how these two parameters relate to the extent of expansion. As summarised in Table 1, m is inversely correlated and C* is positively correlated with the degree of active swelling. Intuitively, increasing the hydrophilic chain length increases ion and water uptake and thus the active swelling; however, this relationship becomes weaker when the number of repeating units exceeds 3, which is in agreement with the previous chain length study. 46 Hence, the extent of active swelling is related more to monomers having a side chain length below 3 units, which explains the increased active expansion of the materials when the shortest side chain length is increased from 0 to 3 units. As the capacity for water and ion uptake increases, the charge carrier mobility consequently decreases, which can be attributed to further disruption of the crystalline regions in the materials. On the other hand, the volumetric capacitance, the metric that closely depends on the material's ability to transport and stabilise ions during doping, increases with the degree of active swelling. A slight decrease occurred for p(g3T2) since its additional expansion results from water uptake rather than ion transport into the bulk material. This study again addresses the importance of the trade-off between m and C* for the future molecular design of OECT channel materials. Additionally, the stability of the OECT materials in aqueous environment has been significantly improved, with only a 2% reduction of the initial channel current after 700 doping cycles for p(g3T2), with the previous
benchmark glycolated polymer p(g2T-TT) suffering a 25% decrease of the initial current under the same testing conditions. 51 The percentage of EG side chains is another dimension to consider when tuning the electrochemical performance of the material. 37,79 Giovannitti et al., gradually replaced alkyl side chains by ethylene glycol side chains in random copolymers based on naphthalene-1,4,5,8-tetracarboxylic-diimide-bithiophene (NDI-T2), as shown in Fig. 10. 37 With an increasing percentage of glycolated monomers, the material starts to exhibit a dominant OECT operation mode, with diminished OFET performance, when the EG percentage exceeds 50%, which is aligned with previous research. 51 As summarised in Table 2, when the primary working mode of the material transitions from OFET to OECT, the charge mobility drops drastically. Such a decrease in charge transport efficiency is highly related to the morphological changes brought about by the addition of EG side chains. Analysing the morphology changes reveals that the lamellar spacing inside the film increases with the amount of glycol side chains. Increasing the EG side chain fraction results in a stronger tendency towards disordered microstructures, regardless of water presence. Needless to say, the increased swelling of materials with increasing glycol side chain fraction further interrupts the interconnectivity among crystallites.
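The trade-off between m and C* that recurs throughout these studies can be made concrete with a small ranking sketch based on the mC* figure of merit. The three entries below are hypothetical materials, not the polymers from the cited studies; the numbers are assumptions chosen only to reproduce the qualitative trend that heavier glycolation tends to raise C* but lower m, placing the best mC* at an intermediate glycolation level.

```python
# Hedged sketch: ranking OMIEC channel materials by the mC* figure of
# merit (mobility x volumetric capacitance). All entries are hypothetical.

materials = {
    "lightly_glycolated":   {"mu": 0.30, "C_star": 100.0},  # cm2/Vs, F/cm3
    "balanced_glycolation": {"mu": 0.18, "C_star": 200.0},
    "heavily_glycolated":   {"mu": 0.05, "C_star": 260.0},
}

def mu_C_star(props):
    # Product has units of F cm-1 V-1 s-1, as quoted in the text.
    return props["mu"] * props["C_star"]

ranked = sorted(materials, key=lambda m: mu_C_star(materials[m]), reverse=True)
for name in ranked:
    print(f"{name}: mC* = {mu_C_star(materials[name]):.1f} F cm-1 V-1 s-1")
```

With these assumed values the intermediate material wins (36 vs. 30 vs. 13 F cm−1 V−1 s−1), mirroring the optimum observed at moderate side chain length and fraction in the studies above.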
A similar study, examining the impact of the EG side chain percentage on p-type conjugated polymers, is summarised here for comparison, as it provides more detail on the trend between the TEG side chain fraction and the material properties.79 In this study, the fraction of glycolated monomers in p(g2T-TT) copolymers was gradually increased from 0% to 100%, with an additional homopolymer bearing hexakis-EG side chains, named 2g in Fig. 11. It was observed that the transconductance of the material scales by five orders of magnitude with the amount of EG side chains. As indicated in Table 3, such enhancement in g_m results from an increase in volumetric capacitance and a trend of V_TH approaching 0 V. The study also revealed that a higher fraction of EG content comes with greater swelling, which is coupled with improved volumetric capacitance with or without an applied electrochemical potential, as shown in Fig. 11. Characterisation of the properties and morphological changes occurring in 2g provides essential lessons on controlled hydrophilicity in the system.

Table 1 The electrochemical performances and swelling degree of a series of ethylene glycolated polythiophenes with 'redistributed' side chain lengths.77

Fig. 10 Chemical structures of the investigated mixed conducting copolymers, the P0 to P100 series.

The volumetric capacitance of 2g upon application of an electrochemical potential drops slightly compared
to g-100%.

Table 2 The electrochemical performances and swelling degrees of NDI-T2 copolymers with increased EG side chain percentage from 0% to 100%.37

Uncoupling the doping process of anions and water molecules reveals that ion uptake saturates with the amount of hydrophilicity in g-100%. The additional volume expansion occurring in 2g is a consequence of excessive water uptake, which dilutes the electronic content and lowers C*. In addition, with further penetration of water molecules, 2g films, possessing the highest EG content, show the most severe heterogeneity in the swelling of the crystalline and amorphous regions, with greater expansion of the amorphous regions being observed, which sets a barrier for charge transport between crystallites. As a result, the hole mobility undergoes a steep drop. A weakness of conjugated polymers incorporating a large amount of ethylene glycol side chains is their poor solubility in benign organic solvents, so that their processing has to involve toxic organic solutions such as chlorinated solvents.37,75,80,81 Introducing branching is an efficient solution to alleviate the processing conditions of these materials.82,83 Jones et al. recently reported an acetone-processable CP, PE2-biOE2OE3, that can be readily applied as an OECT channel material, with the chemical structure shown in Fig. 12.82 Upon application of an electrochemical potential, the loss of the initial channel current is only 10% after 1000 cycles. Similarly, Kang et al. exploited the benefit of branched EG side chains and synthesised a series of water/ethanol-processable CPs with a high hole mobility of 0.1 cm² V⁻¹ s⁻¹ in the dry state.83 Although the latter study did not characterise the materials in OECTs, branching of EG side chains could still be a step towards the fabrication of green OECTs in the future.
The introduction of a crown ether, as a special case of EG side chains, represents an elegant way to fine-tune the morphology, adhesion and electrochromic behaviour of CPs (Fig. 6(c) and (d)). Upon decorating PEDOT with a crown ether (PEDOT-Crown) (Fig. 6(d)), Kousseff et al. achieved superior material performance in electrochromic devices compared with the parent PEDOT.74 The advantageous properties of PEDOT-Crown were attributed to the improved surface morphology, stronger adhesion to ITO, more pronounced faradaic behaviour, and bathochromically shifted absorption features. Furthermore, the cyclic nature of the crown ether side chains was found to show selectivity towards alkali metal ions, suggesting improved suitability for biological sensing applications, as already discussed in detail in Section 2.2.2.73,74

The incorporation of hydrophilic EG side chains has been one of the most successful side chain engineering strategies for competitive OECT channel materials, as proven by the significantly improved μC* and thus the transconductance values reported above. Given the numerous controlled studies on different dimensions of EG side chains, using side chains to tune the morphological response of channel materials under the stress of an electrochemical voltage has provided more detailed guidance for material design. Specifically, hydrophilicity should be introduced with care so as to achieve both sufficient ion uptake and minimal microstructure disruption. To further explore how the side chain nature controls OECT performance, the role of hydrophobic side chains is summarised next to complement our understanding.
Alkyl and alkoxy side chains
Table 3 The electrochemical performances of p(g2T-TT) polymers with increasing amount of glycol side chains from 0% to 100%.79

With the great leap in transconductance values achieved by increasing the hydrophilicity of the system, as presented in the previous section, the negative impacts inflicted by the repetitive swelling behaviour also become clear. Lessons from the above studies indicate the necessity of controlling excessive water uptake beyond an optimal point, since it not only accommodates less ion transport but also disrupts the ordered microstructure more severely. Moreover, the accumulation of residual water in each doping/dedoping cycle, due to the strong interaction between the glycol side chains and water molecules, leads to irreversible morphological changes that limit the reversibility, stability and, ultimately, the performance of the materials.84,85 In addition to increasing the backbone rigidity of the materials to improve structural stability upon cyclic hydration, tuning the hydrophobic fraction offers a solution to this issue. Szumska and co-workers reported an NDI-based D-A copolymer suitable for electrochemical application in aqueous environments, with the NDI acceptor monomers bearing alternating hydrophilic EG side chains and hydrophobic alkyl side chains, as shown in Fig. 13(a).84
This study examines the impact of the side chain nature in a similar way to the EG percentage study discussed previously,37 but with a particular focus on the role of alkyl side chains in controlling the swelling behaviour. Regardless of the slight difference in the donor units, an increasing amount of hydrophobic alkyl side chains results in a decreased degree of swelling in both types of polymers. The restricted swelling brings higher stability of the overall material under cyclic electrochemical voltages, owing to less irreversible disruption of the microstructure. Moreover, water retention is significantly controlled, as represented by the measurement of the mass change of the materials in each cycle, shown in Fig. 13(b). The baseline of the polymer bearing purely glycolated NDI units drifted significantly within only 3 scan cycles, whereas the baselines of the polymers involving alkyl side chains remained stable throughout the assessment. It is also noteworthy that the controlled water uptake enabled by the hydrophobic alkyl chains allows a higher fraction of the theoretical capacity of the material to be utilised. The importance of tuning the ratio of hydrophobicity to hydrophilicity in the bulk material is thus revealed.
Alkoxy side chains help to improve the electrochemical stability of CPs in a different way from alkyl side chains. This group of electron-rich side chains can tune the ionisation potential, and thus the stability, of conjugated polymers for application in OECT devices.41,86 In 2018, Giovannitti et al. reported an improvement in the redox stability of benzo[1,2-b:4,5-b′]dithiophene (BDT) based D-A copolymers via the choice of the comonomers, specifically their side chains.86 Among the series of glycolated BDT copolymers synthesised with comonomers bearing different side chains, shown in Fig. 14(a), gBDT-MeOT2 exhibited a significant improvement in material stability and transconductance when applied as the channel in OECTs. With the introduction of oxygen atoms along the side chains of the comonomers, the oxidation potential of the overall material is lowered, requiring a lower turn-on voltage, which is preferable in biological applications. In addition, gBDT-MeOT2 shows improved solubility in common organic solvents compared to the rest of the copolymers. A follow-up study from Giovannitti et al. on the impact of alkoxy side chains engineered onto the donor comonomers was subsequently conducted.41 In this series of synthesised copolymers, presented in Fig. 14(b), p(gPyDPP-MeOT2) exhibited greater redox stability, with almost no loss of the initial current after 400 cycles at a gate voltage of −0.5 V, compared to its unsubstituted analogue. The higher stability could be credited to the greater extent of hole polaron localisation provided by the methoxy side chains. Moreover, the electron-rich methoxy groups in the donor units help shield the polymer backbone from undesired reactions with oxygen in the ambient environment.
From the studies discussed above, it can be seen that D-A type comonomers allow more flexibility in side chain engineering design, in terms of tuning the proportions of mixed types of side chains to achieve improved material properties. In both donor and acceptor units, side chain engineering contributes significantly to controlled polymer swelling and increased stability.
Nevertheless, a comprehensive study that enhances both the performance metrics and the stability by blending mixed types of side chains in copolymers is still lacking. Although the CPs reported in this section are less competitive than the current state-of-the-art materials,41,86 the ability of alkyl and alkoxy side chains to finely control material swelling and stability calls for comprehensive consideration of the impacts of both the hydrophobic and the hydrophilic constituents in the system.
Hybrid and 'spacer' side chains
Given the above considerations, hybrid side chains have become another side chain category that has been rigorously investigated. It is often the case that a combination of these two types of side chains is required to impart solubility/processability (hydrophobic) and to facilitate conduction of aqueous ionic species (hydrophilic, most commonly EG-based side chains) in the resultant polymer, hence enhancing the ionic-electronic coupling as discussed in Section 2.2.1. In addition to tethering side chain moieties of different natures to different comonomers, as discussed in the examples of Sections 3.3.1 and 3.3.2,37,87 an alternative design strategy was presented by Yue et al., whereby a hybrid side chain, combining a hydrophobic alkyl component with a hydrophilic EG unit within the same side chain, was attached to an isoindigo backbone and polymerised with a bis(3,4-ethylenedioxythiophene) (bis-EDOT) donor unit, affording PIBET-AO (Fig. 15).80 The authors hypothesised that merging these components into a single side chain would retain the polarity necessary to induce ionic conduction (hydrophilic EG unit) whilst simultaneously preventing film dissolution (hydrophobic alkyl unit). In order to compare the effects on both OECT performance and polymer microstructure, four different side chain compositions were investigated, namely: the hybrid mixed alkyl-EG side chain (PIBET-AO), linear and branched EG side chains, and branched alkyl side chains.80 The hybrid alkyl-glycol side chain substantially increased polymer-to-substrate adhesion, preserving operation and performance after 90 minutes of ultrasonication in an aqueous electrolyte (Fig.
15). In contrast, replacing the hybrid side chain with a linear glycol side chain led to complete film delamination, terminating device operation over the same 90 minute period. Notably, the polymer decorated with the hybrid alkyl-EG side chain displayed impressive operational stability, with OECT devices retaining their original current over a six-hour period of on-off cycling, totalling 3628 cycles. In contrast, the glycolated derivative (PIBET-O), whereby the hybrid side chains were replaced by linear 6-unit EG chains, retained only 10% of the original current after only 400 cycles. Upon increasing the hydrophilic side chain density by incorporating a branched EG side chain (PIBET-BO), the operational stability decreased again, with a 90% reduction in initial ON current after only 6 minutes (20 cycles). The poorer operational stabilities, moving from hybrid alkyl-EG to linear EG to branched EG, were ascribed to the increase in threshold voltage (and thus the voltage required to switch on the device), leading to overoxidation and increased swelling. These findings were corroborated by a recent study which demonstrated the importance of side chain composition for the resultant electrochemical stability, with a balance of hydrophilic and hydrophobic side chains increasing the redox reversibility of OMIECs in aqueous environments.84

A second study investigated the effect of introducing a methyl spacer by comparing P3MEEMT to an analogue containing an identical EG side chain directly tethered to the thiophene backbone (P3MEET).88 A combination of computational and experimental chemistry was employed to probe the ionic transport properties and the resultant ionic conductivity of the two materials. A follow-up study expanded the derivatives to three, introducing an ethyl spacer between the thiophene backbone and the EG side chain to afford P3MEEET (Fig. 5(c)).89
The three homopolymers were investigated in OECTs and showed drastic variation in performance metrics, highlighting the importance of precise side chain engineering for improved performance as an active OECT channel material.89 The systematic extension of the alkyl linkage between the thiophene backbone and the EG side chain led to an increase in volumetric capacitance from 80 ± 9 to 242 ± 17 F cm⁻³ between P3MEET and P3MEEET. Furthermore, moving from no spacer to methyl and finally ethyl spacers caused the OECT figure of merit μC* to increase by more than two orders of magnitude, from 0.04 ± 0.01 to 11.5 ± 1.4 F cm⁻¹ V⁻¹ s⁻¹ for P3MEET and P3MEEET, respectively.
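As a quick arithmetic check, the ratio of the two μC* values quoted above does indeed exceed two orders of magnitude:

```python
import math

# Reported OECT figures of merit (F cm^-1 V^-1 s^-1) for the no-spacer
# and ethyl-spacer derivatives, as quoted in the text.
uc_p3meet = 0.04
uc_p3meeet = 11.5

ratio = uc_p3meeet / uc_p3meet
orders = math.log10(ratio)
print(f"ratio = {ratio:.1f}, ~{orders:.2f} orders of magnitude")
```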
Electrochemical quartz crystal microbalance with dissipation monitoring (EQCM-D) was utilised to determine the mass exchange between the electrolyte and the active channel material. Upon application of a doping potential (matching the magnitude which resulted in maximum OECT transconductance), significant variation in swelling was observed across the series (Fig. 5(c)). The ethyl spacer derivative, P3MEEET, showed a mass uptake 12 times that of P3MEET and P3MEEMT, leading to volumetric capacitance values which aligned with those calculated from electrochemical impedance spectra of OECT channels. It was postulated that the inclusion of a progressively extended alkyl spacer led to increased accessibility of the diethylene glycol side chain, imparting heightened ionic transport and increased crystallinity for P3MEEET. Importantly, these studies exemplify that the inclusion of a hybrid spacer side chain, which may seem like a simple synthetic modification, has a dramatic effect on numerous properties of the OMIEC.
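For context, EQCM-D converts the measured resonance frequency shift into an areal mass change via the Sauerbrey relation, Δm = −C·Δf/n, which is strictly valid only for thin, rigid films; strongly swollen polymer films often require viscoelastic modelling instead. A short sketch using the standard sensitivity constant for a 5 MHz AT-cut crystal and hypothetical frequency shifts (not data from the study above):

```python
SAUERBREY_C = 17.7  # ng cm^-2 Hz^-1, sensitivity of a 5 MHz AT-cut quartz crystal

def sauerbrey_mass(delta_f_hz, overtone=3):
    """Areal mass change (ng cm^-2) from the frequency shift at a given
    overtone n: delta_m = -C * delta_f / n. A negative frequency shift
    corresponds to mass uptake by the film."""
    return -SAUERBREY_C * delta_f_hz / overtone

# Hypothetical shifts at the 3rd overtone during a doping step; the second
# is chosen 12x the first, mirroring the P3MEEET vs. P3MEET comparison.
uptake_small = sauerbrey_mass(-30.0)
uptake_large = sauerbrey_mass(-360.0)
print(uptake_small, uptake_large)
```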
Recently, the effect of alkyl spacers on n-type D-A NDI-based OMIECs has been investigated.27,90 Based on the initial p(gNDI-gT2)81 polymer, two alkyl spacer derivatives were synthesised, introducing a propyl (C3) or hexyl (C6) spacer to give p(C3-gNDI-gT2) and p(C6-gNDI-gT2), respectively (Fig. 16).90 It was proposed that the hybrid alkyl-glycol side chain may protect the polaron, which is delocalised along the polymer backbone, from mobile ions during OECT operation, preventing charge trapping and thereby aiming to increase the charge carrier mobility. EQCM-D measurements revealed that both passive and active, bias-induced (doped) swelling followed the same trend, increasing from p(C6-gNDI-gT2) < p(C3-gNDI-gT2) < p(gNDI-gT2). This demonstrates that decreasing the overall hydrophilic EG density through the inclusion of hybrid spacer side chains is an effective synthetic design strategy to modulate the degree of swelling, an important factor which must be considered for high OECT performance.91 Similar trends were also observed for both operational stability and overall OECT performance, with both alkyl spacer containing derivatives recording a higher μC* than the solely glycolated p(gNDI-gT2). This study further highlights the importance of side chain engineering, corroborating the previous hybrid side chain studies in confirming that a balanced mix of hydrophilic and hydrophobic side chain density leads to improved OECT operational stability, an essential property for widespread use in bioelectronics.92

A similar design strategy was employed by Ohayon et al., who presented a series of NDI D-A polymers incorporating an alkyl spacer on the NDI unit, polymerised with an unsubstituted bithiophene (T2) donor (p(Cx-T2) series) (Fig.
16), a methoxy-decorated bithiophene unit (p(C4-T2-OMe)) and an alkyl spacer containing bithiophene monomer (p(C4-T2-Cy-EG) series), respectively.27 The authors found that the electron-donating nature of the methoxy groups sufficiently increased the donor ability of the bithiophene unit, resulting in ambipolar charge transport and impeding n-type performance. The inclusion of an alkyl spacer in a hybrid alkyl-glycol side chain also resulted in ambipolar operation; as such, the authors focused on the p(Cx-T2) series, whereby the unsubstituted bithiophene unit facilitated purely n-type behaviour. OECT mobility increased upon lengthening the alkyl spacer between the NDI backbone and the EG side chain, peaking for the hexyl spacer (4.74 × 10⁻³ ± 4.31 × 10⁻⁴ cm² V⁻¹ s⁻¹) before decreasing for the eight-carbon spacer derivative. The trend in OECT performance was justified by ex situ GIWAXS measurements, whereby polymer films were electrochemically reduced and exposed to an aqueous electrolyte solution, mimicking OECT operation. No notable structural changes were observed for the p(Cx-T2) series compared to their undoped, as-cast pristine state, suggesting that the introduction of an alkyl spacer led to crystalline regions with heightened orientation and stability during OECT operation. In contrast, the previously reported glycolated analogue p(gNDI-gT2), lacking any spacer moiety, displayed significant changes in relative peak intensities under replicated OECT operation conditions.81 The efficacy of alkyl spacer side chains was further bolstered by the state-of-the-art μC* figure of merit recorded for p(C6-T2), which at 1.29 ± 0.117 F cm⁻¹ V⁻¹ s⁻¹ is among the highest reported for any n-type active channel OECT material.93 These studies demonstrate the power of side chain engineering on the resultant polymer properties and suggest that the inclusion of a hydrophobic alkyl spacer unit between the polymer backbone and a hydrophilic EG side chain can be
an effective design strategy to improve adhesion, stability and performance, and to modulate swelling. However, a careful balance must be struck between the side chain features and the performance parameters. As with each unique design strategy presented within this review, there is no one-size-fits-all, blanket strategy to produce high-performing OMIECs. Nevertheless, a nuanced and systematic approach to chemical design should include the effects of hybrid/spacer side chains as a point of consideration.
Charged side chains
As mentioned in Section 2.2, to date the most commonly employed active material within OECTs is PEDOT:PSS, owing to its general commercial availability, high conductivity and ease of processing.45 Here the positively charged PEDOT backbone is stabilised by the negative PSS⁻ counterion. The widespread availability of PEDOT:PSS has undoubtedly bolstered the progress of recent OECT research. However, the limitations related to synthetic polymer modification render structure-property relationships difficult to elucidate. Multiple studies have proposed the addition of dopants or alternative blends as potential avenues to overcome these limitations.94,95 In 2014, Inal et al. presented poly(6-(thiophene-3-yl)hexane-1-sulfonate) (PTHS) (Fig. 17), a P3HT analogue replacing the typical hexyl side chain with a hexanesulfonate side chain, affording a conjugated polyelectrolyte with improved ionic conductivity in OECTs.96 Indeed, the inclusion of the hydrophilic sulfonate unit improved the ionic conductivity by imparting swelling ability, facilitating water and ion uptake, and resulting in a volumetric capacitance of 124 ± 38 F cm⁻³.97 High transconductance was also recorded on account of the impressive hole mobility ((1.2 ± 0.5) × 10⁻² cm² V⁻¹ s⁻¹), much improved over the fully alkylated, hydrophobic P3HT.98 Following the introduction of PTHS, various sulfonated thiophene moieties have been utilised within multiple polythiophene backbones in order to bolster ionic conductivity. In 2019, the THS monomer was copolymerised with 3-hexylthiophene to afford polymers with differing ratios of THS:3HT (Fig. 17).99
The authors demonstrated the advantage of combining the 3HT unit, which reduced water solubility, hindered delamination and avoided the need for external crosslinkers, with the swellability, ionic conductivity and high C* of the THS unit. Hole mobilities and volumetric capacitances similar to those of the aforementioned PTHS were recorded, with OECT operation occurring at a lower threshold voltage and with heightened ON/OFF ratios. These studies further highlight the importance of side chain composition, especially the nature and the ratio of hydrophilic to hydrophobic side chain density, for the resultant ionic conductivity, swelling and general OECT performance. Another sulfonated polymer was presented as an evolvable OECT channel material for neuromorphic applications.100,101 Here, the hybrid accumulation-depletion mode OECT is formed in situ via electropolymerisation of 4-(2-(2,5-bis(2,3-dihydrothieno[3,4-b][1,4]dioxin-5-yl)thiophen-3-yl)ethoxy)butane-1-sulfonate (ETE-S), a self-doped conjugated monomer. Upon electropolymerisation to form PETE-S (Fig. 17), the gate electrode acts as the presynaptic terminal, the polymeric channel controls the synaptic weight, and the drain electrode mimics the postsynaptic terminal, replicating a biological synapse.100,101

Reichmanis et al. presented a water-soluble precursor polymer, P3KBT, possessing a charged side chain which can be protonated through acidification to yield the solvent-resistant material poly[3-(4-carboxypropyl)thiophene] (P3CPT) (Fig. 17).102 The authors note that carboxylic acid functionalised side chains can act as solubility mediators: the deprotonated, charged carboxylate salt can be processed from water, allowing facile channel fabrication, whilst post-processing protonation renders the material resilient to delamination, improving OECT operational stability. This is another important strategy to consider within the OECT side chain toolbox of design principles.
The use of charged side chains has also been investigated in electron transport (n-type) materials.85,103 Moia et al. attached a zwitterionic side chain to an NDI core, copolymerising with a glycolated bithiophene monomer to afford p(g7NDI-gT2) (Fig. 18). The positively charged ammonium ion was synthetically tethered to the NDI core via an ethyl spacer, the postulate being that the negatively charged polymer backbone would be compensated by the opposing charge of the ammonium ion, negating the need for charge compensation by mobile cations from the electrolyte. Indeed, the inclusion of the zwitterionic side chain led to enhanced redox reversibility and specific capacity in aqueous electrolytes.

Fig. 17 Chemical structures of charged side chain containing OECT active channel layer materials.96,99,100,102
Recently, the first n-type biofunctionalised polymer was reported, tethering a lysine-inspired side chain to an NDI core to afford p(NDI-T2-L2) (Fig. 18).103 The charged lysine-based side chains enabled electrical communication between the OECT and supported lipid bilayers (SLBs), which are a promising platform to study numerous cellular events.104 The polarity and surface orientation of the lysine-based side chain allowed for interactions with zwitterionic lipid vesicles, forming SLBs, whilst simultaneously providing the hydrophilicity to afford a volumetric capacitance of 95 F cm⁻³. EQCM-D measurements suggested that the nature of the lysine side chain also limited polymer swelling, with only an 8% increase in the original film thickness observed in an aqueous salt solution.103 This is in contrast to PEDOT:PSS based SLB-monitoring OECT devices, which show polymer swelling of up to 80%, potentially detrimental to SLB formation.105,106 Moreover, this seminal work presented the first n-type OECT capable of both interfacing with and monitoring the biomimetic model SLB.
Conclusions and outlook
In this review, we have summarised the current understanding of OECT operational mechanisms and, specifically, the knowledge of ionic-electronic coupling, which sheds light on the significance of side chain engineering in the active channel layer material. The versatile synthetic toolbox of side chains can be employed to tune OECT properties, impacting the material's transconductance, morphology, selectivity, sensitivity and operational stability, and has led to promising OECT device performance.
The composition and nuances of the side chains that have been synthetically tethered to channel materials for OECTs are as vast as the number of backbones that have been studied.21,93 Whilst PEDOT:PSS has dominated the field of OECT applications, the wealth of examples employing side chain engineering, specifically for OECT active channel materials, is extensive and offers valuable insight into the importance of this synthetic tool. Indeed, the so-called "second generation" of semiconducting polymers, namely polythiophene materials, has demonstrated the efficacy of EG-based side chains as the current gold standard for producing OMIECs for OECTs.
From the evolving understanding of what determines OECT channel material performance and how side chains influence these metrics, we summarise the findings as follows. Firstly, hydrophilicity is the most important intrinsic factor determining the ability for ion uptake, especially in bioelectronic applications, where an aqueous environment and ion exchange are essential. Replacing hydrophobic side chains with hydrophilic chains, e.g. EG side chains, is the primary adaptation of the molecular design principles from other thin film applications of CPs to meet the working mechanism of OECTs. The commercial availability, the ability to facilitate ionic transport and the capability to tune polymer swelling are just three reasons why EG-based side chains have accelerated to the forefront of molecular design for OECT channel materials. However, these benefits of glycol side chain building blocks may also deter a deeper investigation into side chain engineering. It should not be understated that side chain engineering can become synthetically demanding: one can envisage the lengthy process needed to synthesise an EG side chain replacement, starting from non-commercially available resources, only to arrive at a poorer-performing channel material, a risk which has undoubtedly hindered research in this area. As such, EG side chains might not be the perfect choice for OECT channel materials; however, without significant investment into side chain engineering of novel alternatives, EG side chains will continue to be employed for the foreseeable future. Despite this, as exemplified throughout, EG-based side chains have undoubtedly bolstered the field, improved OECT device performance by orders of magnitude, increased operational stability and allowed the swellability of materials to be controlled.77,79,84,91
On further examination of how the manipulation of EG side chain percentage, length and position influences polymer performance, the swelling behaviour brought about by the engineered hydrophilicity becomes the major parameter to study. With increasing hydrophilicity in the system, uneven swelling and water retention during cyclic electrochemical doping can permanently disrupt the microstructure, which weakens material stability and hinders charge carrier mobility.50,84 Consequently, one of the major challenges of material design arises from the trade-off between electronic and ionic conductance, which are oppositely correlated with the swelling behaviour. The introduction of mixed types of side chains into these materials can serve to optimise the charge mobility and the volumetric capacitance simultaneously. In studies examining the potential application of D-A copolymers in OECTs, side chains of different natures can be separately engineered onto the donor or acceptor monomers, achieving controlled swelling84 and improved stability.41,86

While PEDOT:PSS retains popularity as a widely used OMIEC, featuring the positively charged PEDOT backbone stabilised by the negative PSS⁻ counterion, a number of homogeneous mixed conductors decorated with charged ionic or zwitterionic side chains have been reported. Not only did the introduction of charged side chains improve the hydrophilicity of the materials, and consequently the swelling ability and transconductance, but it was also found to improve the hole mobility. Furthermore, introducing various ratios of charged and alkyl groups helps strike a balance between the swelling ability of the mixed conductor and its performance.96,97 Last but not least, the introduction of zwitterionic side chains was shown to enhance redox reversibility and specific capacity in aqueous electrolytes.
102 As a result, charged side chains offer tremendous opportunities for fine-tuning the hydrophilicity and swellability of a mixed conducting polymer while maintaining OECT performance.
Utilisation of functionalised side chains affords promising opportunities for the development of novel bioelectronic devices with tailored selectivity (e.g. sialyllactose and crown ether derivatives). The ability of the functionalised side chains to bind specific analytes (e.g. proteins) paves the way towards wearable sensors and point-of-care detection of various viruses with improved sensitivity and specificity.107 However, a trade-off between hydrophilicity, transconductance and operational stability must be struck. Hybrid side chains offer the potential to balance adhesion, stability, performance and swelling, and warrant further study.
The prevalence of EG side chains also raises the question of alternative Group 16 (chalcogen) containing side chains, such as thioethers, which could provide additional stabilising interactions with common polythiophene backbones, but at an unquestionable synthetic cost compared to the commercially available EG side chains. One could also envisage an expansion beyond EG side chains to more biocompatible units that target specific biological interfaces, such as hydrolytically degradable, water-soluble side chains.
The wealth of studies summarised above demonstrates the breadth of choices available when selecting a side chain for an OECT channel material. Whilst great strides have been made in terms of processability, modulation of swelling and overall OECT device performance, future work will continue to utilise side chain engineering as a major tool to further improve OECT channel materials. As the field continues to mature, more detailed investigations into the crucial role side chain engineering plays in the resultant polymer properties will allow novel side chain alternatives to be devised and will ultimately lead to further enhancements within the field of OECT channel materials.
Nadzeya Kukhta
Nadzeya Kukhta received her MSc in Organic Chemistry from Belarusian State University in 2011. She completed her PhD in Materials Engineering at Kaunas University of Technology in 2016 in the group of Prof. J. V. Grazulevicius, focusing on the development of organic semiconductors for optoelectronic and photovoltaic applications. In 2017, Nadzeya joined the group of Prof. M. R. Bryce as a postdoctoral research associate at Durham University, focusing on the investigation of TADF and RTP materials. Since 2020, Nadzeya has been a postdoctoral research associate in the group of Prof. C. K. Luscombe at the University of Washington.
Adam Marks
Adam Marks received his MSc degree in Chemistry from Imperial College London in 2016. He then joined Prof. Iain McCulloch, completing his PhD in 2020, developing OMIEC materials for bioelectronic applications. He then followed the McCulloch group to the University of Oxford, where he was a postdoctoral research associate focusing on the development of electron-transport OMIECs. He is now a postdoctoral researcher with Prof. Alberto Salleo and Prof. William Chueh at Stanford University. His current research interests are focused on the synthetic development of novel materials for bioelectronic devices and the evaluation of materials for polymeric electrocatalysis.
Christine K. Luscombe
Christine Luscombe received her BA, MA, and MSci from the University of Cambridge and completed her PhD under the supervision of Prof. Andrew Holmes and Prof. Wilhelm Huck in the Melville Laboratory for Polymer Synthesis at the University of Cambridge. She then moved to the University of California, Berkeley as a postdoctoral researcher with Prof. Jean M. J. Fréchet. She started her independent career in the Materials Science and Engineering Department at the University of Washington in 2006 and is now a Professor at the Okinawa Institute of Science and Technology Graduate University in Japan.
Fig. 2 Schematic representation of an OMIEC acting as a channel material in an accumulation-mode OECT. Reproduced from ref. 30 with permission from Springer Nature.
Fig. 1 Bioelectronic interface with human body applications of conjugated polymers. Reproduced from ref. 16 with permission from Wiley-VCH.
Fig. 4 presents the comparison of the transconductance values of a collection of the reported OMIEC materials.
Fig. 3 (a) Representation of the OECT operation principle, utilizing a typical OMIEC channel material, PEDOT:PSS. Reproduced from ref. 40. (b) Top panel: depletion mode, where (i) unbiased state of the channel material (PEDOT:PSS); (ii) holes move towards the drain electrode upon positive gate voltage application; (iii) positive gate voltage causes a reduction of the hole flow. Bottom panel: accumulation mode, where (i) unbiased state of the channel material (p-type); (ii) negative gate voltage application causes electrochemical doping; (iii) holes move towards the drain electrode upon negative gate voltage application at both the gate and drain electrodes. Reproduced from ref. 21 with permission from Wiley-VCH.
Fig. 4 The ranges of transconductance values of various OMIEC materials. The dotted lines correspond to the best fits of transconductance g_m = aWd/L, with a being a proportionality constant.42 Reproduced from ref. 44.
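The scaling law quoted in the caption, g_m = aWd/L, can be checked against device data with a zero-intercept least-squares fit. The sketch below uses entirely hypothetical geometry factors and transconductance values (not data from the review) to show how the proportionality constant a would be extracted:

```python
import numpy as np

def fit_proportionality(geom, gm):
    """Zero-intercept least-squares fit of g_m = a * (W*d/L).

    geom: geometry factors W*d/L; gm: measured transconductances.
    The closed-form slope below is the least-squares solution when the
    line is forced through the origin.
    """
    geom = np.asarray(geom, dtype=float)
    gm = np.asarray(gm, dtype=float)
    return float(np.dot(geom, gm) / np.dot(geom, geom))

# Hypothetical devices: geometry factor W*d/L and measured g_m (arbitrary units).
geometry = [10.0, 25.0, 50.0, 100.0]
gm_measured = [0.21, 0.49, 1.02, 1.98]
a = fit_proportionality(geometry, gm_measured)
```

Forcing the fit through the origin mirrors the dotted guide lines described in the caption, which pass through g_m = 0 at zero geometry factor.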
Fig. 5 (a) A cartoon representing the differences in electronic charge transport in dry versus hydrated and doped OMIEC materials. Reproduced from ref. 50 with permission from the American Chemical Society. (b) The role of side chains in the OMIEC doping mechanism. Reproduced from ref. 76 with permission from the American Chemical Society. (c) Chemical structures of no-spacer, methyl-spacer and ethyl-spacer P3MEET derivatives, comparative swelling data and an AFM image of P3MEEET. Reproduced from ref. 89 with permission from the American Chemical Society.
Fig. 8 Sections of chemical structures of pgBTTT and p(g2T-TT) addressing S–O interactions, with corresponding GIWAXS maps. Reproduced from ref. 78 with permission from the American Chemical Society.
Fig. 11 The swelling behaviour and volumetric capacitance with respect to different glycol fractions in p(g2T-TT)- and 2g-based polymers at (a) V = 0 V and (b) V = 0.5 V versus V_OC. Reproduced from ref. 79 with permission from Wiley-VCH.
Fig. 15 Maximum current versus sonication time (left), OECT transfer curve (right) and structure of PIBET-AO (middle). Reproduced from ref. 80 with permission from the American Chemical Society.
Fig. 12 The chemical structure of the reported green-solvent-processable PE2-biOE2OE3. Reproduced from ref. 82 with permission from Wiley-VCH.
This journal is © The Royal Society of Chemistry 2022 | 2022-01-09T16:12:35.791Z | 2022-01-07T00:00:00.000 | {
"year": 2022,
"sha1": "9fd3ebce75e8b8082eb3f1c63c95b9c3b7f85772",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/tc/d1tc05229b",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "77c2c7b265283191c55c24b178e54ae2a96d6bbd",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18636391 | pes2o/s2orc | v3-fos-license | Pomegranate (Punica granatum) juice decreases lipid peroxidation, but has no effect on plasma advanced glycated end-products in adults with type 2 diabetes: a randomized double-blind clinical trial
Introduction Diabetes mellitus, characterized by hyperglycemia, could increase oxidative stress and the formation of advanced glycated end-products (AGEs), which contribute to diabetic complications. The purpose of this study was to assess the effect of pomegranate juice (PJ), containing natural antioxidants, on lipid peroxidation and plasma AGEs in patients with type 2 diabetes (T2D). Materials and methods In a randomized, double-blind, placebo-controlled trial, 44 patients (aged 56±6.8 years) with T2D were randomly assigned to one of two groups: group A (PJ, n=22) and group B (placebo, n=22). At baseline and at the end of the 12-week intervention, biochemical markers including fasting plasma glucose, insulin, oxidative stress markers, and AGE markers including carboxy methyl lysine (CML) and pentosidine were assayed. Results At baseline, there were no significant differences in plasma total antioxidant capacity (TAC) levels between the two groups, but malondialdehyde (MDA) levels were significantly different (P<0.001). After 12 weeks of intervention, TAC increased (P<0.05) and MDA decreased (P<0.01) in the PJ group when compared with the placebo group. However, no significant differences were observed in plasma concentrations of CML and pentosidine between the two groups. Conclusions The study showed that PJ decreases lipid peroxidation. Therefore, PJ consumption may delay the onset of T2D complications related to oxidative stress.
Hyperglycemia in diabetes causes tissue damage through various mechanisms and contributes to diabetic complications (1). Several lines of evidence indicate that these mechanisms are activated by a single upstream event: mitochondrial overproduction of reactive oxygen species (ROS). Therefore, oxidative stress is implicated as a major factor in the development of diabetes complications. Increased formation of advanced glycated end-products (AGEs) is one of the mechanisms that contribute to hyperglycemia-induced tissue damage (2). Due to hyperglycemia in diabetes, reducing sugars react non-enzymatically with free amino groups of proteins to form a diverse group of protein-bound moieties known as AGEs (3). Plasma proteins modified by AGE precursors bind to AGE receptors on cells, such as macrophages, vascular endothelial cells, and vascular smooth muscle cells, and this binding induces the production of ROS, causing multiple pathological changes in gene expression (4).
Some of the best chemically characterized AGEs in humans are pentosidine and carboxy methyl lysine (CML) (5). Peripheral artery disease, a macrovascular complication of the hyperglycemia of diabetes, has been strongly associated with serum malondialdehyde (MDA), an indicator of ROS, and with AGEs in type 2 diabetes (T2D) (6). Dietary supplementation with antioxidant phytochemicals may be a successful strategy to reduce the risk of pathological complications (7). Pomegranate (Punica granatum) juice (PJ) possesses the highest antioxidant capacity when compared with commonly consumed polyphenol-rich beverages (8) and has been shown to ameliorate hypertension and reduce risk factors of atherosclerosis in a few clinical studies (9–12), but the effects of PJ consumption on AGEs have not previously been studied. The present randomized clinical trial was designed to explore the effects of 12-week PJ consumption on plasma AGEs (including pentosidine and CML) and oxidative stress in patients with type 2 diabetes.
Materials and methods
Patients and study design This study was a randomized, double-blind, placebo-controlled trial. Forty-four patients (23 men and 21 women, aged 56±6.8 years), at least 1 year post type 2 diabetes diagnosis, were selected based on their medical records in the Iran Charity Foundation for Special Diseases and Health Center in Tehran, Iran. All the patients were taking oral hypoglycemic agents, and none were using insulin. In addition, patients were not smokers, did not suffer from any other chronic diseases and, if female, were not taking estrogen or progesterone. At baseline, patients were stratified by gender and randomly assigned to one of the two groups: group A (PJ, n=22) and group B (placebo, n=22). Random allocation of patients to treatment groups was performed using sequentially numbered containers. Randomization was performed by an assistant, and the group allocation was blinded to the investigator and participants. Written informed consent was obtained from all the patients. Ethical approval for the trial was obtained from the Ethical Committee of the National Nutrition and Food Technology Research Institute (Tehran, Iran). This clinical trial has been registered in the Iranian Registry of Clinical Trials at www.irct.ir (ID No: IRCT201206144010N8).
Intervention and compliance
At baseline, patients were stratified by gender and randomly assigned to consume 250 ml/day of PJ or a control beverage of similar color and energy content for 12 weeks. The study product was packaged in single-serving labeled bottles so that neither subjects nor staff members were aware of treatment assignment. The subjects were asked not to change their dietary habits, physical activities, or drug regimens. The patients were contacted every week to evaluate compliance with the intervention and inquire about possible side effects. Each patient was provided with a fixed number of PJ bottles and instructed to return the unused bottles at the end of the study. Based on the number of bottles returned by each patient, compliance was determined to be 90%. Patients were excluded from the analysis if they consumed <90% of the study product, changed their medication, or reported severe side effects. No adverse events were reported.
Measurements
Each subject's weight was recorded, while wearing light clothing and no shoes, using digital scales to the nearest 100 g. Height was measured to the nearest 0.5 cm. Body mass index (BMI) was then calculated as weight (kg) divided by the square of height (m²). Dietary intakes of the subjects were assessed using a 3-day dietary recall (2 weekdays and 1 weekend day) at baseline and at the end of 12 weeks. The patients' diets were analyzed with Nutritionist IV software (N Squared Computing, San Bruno, CA, USA).
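The BMI computation described above is straightforward; a minimal sketch (the subject in the example is illustrative, not a trial participant):

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

# Illustrative subject: 78 kg at 165 cm gives a BMI of roughly 28.7 kg/m^2.
example_bmi = bmi(78, 165)
```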
Biochemical analysis
At baseline and at the end of the 12-week intervention, 10 ml of blood was collected from each patient after a 12–14 h overnight fast. Blood samples to which anticoagulant had been added were centrifuged at 4,000 rpm for 10 min. The plasma samples were separated into aliquots and frozen at −80°C until assayed. Fasting plasma glucose (FPG) concentration was assessed by colorimetry using a commercial kit (Pars Azemoon, Tehran, Iran) and a Selectra 2 autoanalyzer (Vital Scientific, Spankeren, The Netherlands). Hemoglobin A1c (HbA1c) was assessed using the ion-exchange chromatography method with a commercial kit (Biosystem, Barcelona, Spain). The coefficient of variation (CV) was 1.3% for FPG and 5.1% for HbA1c. Plasma concentrations of pentosidine and CML were assessed by enzyme immunoassay (ELISA) using the related kits (Cusabio Biotech, Wuhan, China). Plasma concentrations of MDA and total antioxidant capacity (TAC) were measured by colorimetry using commercial kits (Cayman, Ann Arbor, USA; and Biocore Diagnostics, Hamburg, Germany), respectively. The CVs for pentosidine, CML, MDA, and TAC were 6.9%, 7.8%, 6.4%, and 7.3%, respectively.
Pomegranate and placebo juice
To choose the commercial PJ with the highest polyphenol levels, several hand-squeezed and various commercially available juices were analyzed using colorimetric assays. Total phenols were determined with the Folin–Ciocalteu reagent using gallic acid as a standard; the selected juice contained 1,946 mg gallic acid equivalent (GAE)/L (13). Total flavonoid content, measured by the aluminum chloride colorimetric assay using a catechin standard (14), was 345.87 mg catechin equivalent/ml of PJ. The juice was diluted 1:10 (v:v) to measure TAC, which is based on the percent inhibition of 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid); compared with a bovine serum albumin (BSA) standard curve, the juice had a TAC of 7 mmol/L BSA (3). PJ provided 126 kcal, 24 g sugar, and 1 g protein, while the placebo beverage had similar color, taste, and energy content (126 kcal, 24 g sugar, and 1 g protein) but, by colorimetric assay, contained no polyphenols. The juice and the placebo were kept at room temperature (<25°C) until opened, as recommended by the manufacturer.
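Quantification against a standard curve, as in the Folin–Ciocalteu assay above, amounts to a linear fit and its inversion. A sketch with hypothetical calibration absorbances (not the study's calibration data):

```python
import numpy as np

# Hypothetical gallic-acid calibration points: concentration (mg GAE/L)
# versus measured absorbance.
std_conc = np.array([0.0, 250.0, 500.0, 1000.0, 2000.0])
std_abs = np.array([0.02, 0.21, 0.40, 0.79, 1.57])

# Linear standard curve: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def phenols_mg_gae_per_l(absorbance, dilution_factor=1.0):
    """Invert the standard curve and correct for any sample dilution."""
    return (absorbance - intercept) / slope * dilution_factor

# A sample reading of 0.79 falls near the 1000 mg GAE/L standard.
sample = phenols_mg_gae_per_l(0.79)
```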
Statistical methods
Statistical analysis of the data was performed using the Statistical Package for the Social Sciences (SPSS, Inc., Chicago, IL, USA) for Windows, version 16.0. A χ² test was used to compare qualitative variables between the two groups. Normality of quantitative parameters was tested using the Kolmogorov–Smirnov test. Since all parameters had normal distributions, independent t-tests and paired t-tests were used to compare parameters between and within the groups, respectively. Adjustment for differences in baseline covariates and changes in variables during the study was performed by analysis of covariance (ANCOVA) using general linear models. The results were expressed as mean ± SD, and differences were considered significant at p ≤ 0.05.
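The univariate part of this pipeline can be sketched with SciPy on simulated data. The values below are synthetic and only illustrate the sequence of tests (the ANCOVA step would additionally require fitting a general linear model, which is omitted here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated MDA-like values for 22 patients per group (not trial data).
pj_week0 = rng.normal(4.0, 0.5, 22)
pj_week12 = pj_week0 - rng.normal(0.6, 0.2, 22)   # paired follow-up values
placebo_week12 = rng.normal(3.2, 0.5, 22)

# Normality check on standardized values (Kolmogorov-Smirnov, as in the paper).
z = (pj_week0 - pj_week0.mean()) / pj_week0.std(ddof=1)
_, p_norm = stats.kstest(z, "norm")

# Within-group change: paired t-test; between-group difference: independent t-test.
_, p_within = stats.ttest_rel(pj_week0, pj_week12)
_, p_between = stats.ttest_ind(pj_week12, placebo_week12)
```

Because the simulated within-group shift is large relative to its spread, the paired test yields a very small p-value, mirroring the significant MDA decline reported for the PJ group.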
Results
There were no significant differences in baseline characteristics between the two groups. Anthropometric factors did not differ between the two groups at baseline or at the end of week 12 (Table 1). In addition, these factors did not change significantly within the groups during the study. Dietary intakes of participants are shown in Table 2. Energy and carbohydrate intakes differed between the two groups.
At baseline, there was no significant difference in plasma TAC levels between the two groups, but MDA levels were significantly different (p = 0.0001). Within-group analysis showed that levels of MDA and TAC changed significantly in the PJ group compared with baseline (Table 3). Comparison of TAC between the two groups showed higher values in the PJ group than in the control group. ANCOVA was performed to compare MDA between the two groups using the baseline MDA values as the covariate. There was a significant difference between the two groups, indicating lower MDA levels in the PJ group. At baseline, there were no differences in plasma concentrations of CML and pentosidine between the two groups. At the end of the study, no significant changes were found in plasma concentrations of CML or pentosidine between or within the groups.
Discussion
The results of the current study indicate that daily consumption of PJ could increase plasma TAC and decrease plasma MDA in diabetic patients, while it has no effect on AGE levels. Juice extracted from the pomegranate has been shown to have the greatest in vitro antioxidant potency among commonly consumed beverages (8, 15). It is worth mentioning that the baseline MDA levels in the PJ group were higher than in the placebo group. Therefore, in addition to the antioxidative properties of PJ that could influence plasma MDA levels, part of the decline in MDA following PJ consumption could be due to the higher baseline MDA levels in the PJ group. Increased serum TAC and paraoxonase and decreased LDL sensitivity to oxidation have been reported in healthy persons (11) and in patients with carotid arterial stenosis following PJ consumption (16). Pomegranate is known as a very rich source of anthocyanins, ellagic acid, punicalagins, catechins, and gallocatechins (15, 17, 18). Ingestion of an anthocyanin-rich beverage has been reported to decrease plasma and urinary concentrations of MDA and to improve plasma antioxidant capacity in healthy female volunteers (19). Furthermore, ellagic acid has been shown to reduce MDA levels in the brain of streptozotocin-induced diabetic rats (20). Consumption of pomegranate polyphenolic extract has led to a significant decrease in serum MDA in diabetic patients (21). In addition, the juice has been shown to decrease serum lipid peroxides and increase paraoxonase (22). Although the polyphenolic extract of PJ is effective in reducing oxidative stress, the extract is less potent than the juice, which may indicate that other factors in the juice (unique sugars) contribute to mitigating oxidative stress (22).
The mechanism by which PJ or its compounds could ameliorate lipid peroxidation and oxidative stress is not well known but might involve directly neutralizing the generated reactive oxygen species (23), increasing certain antioxidant enzyme activities such as paraoxonase (22), and inhibiting or activating certain transcription factors, such as nuclear factor κB (24, 25) and peroxisome proliferator-activated receptor γ (26). Interestingly, a recent study showed that administration of pomegranate fruit extract for seven consecutive days before and after methotrexate challenge in Swiss albino mice reduced ROS generation in hepatocytes, mainly by differential regulation of the activation of the transcription factors nuclear factor (erythroid-derived 2)-like 2 and nuclear factor κB, as a consequence of which the antioxidant defense mechanism in the liver was upregulated (24). Protein glycation and AGE formation are the result of non-enzymatic reactions with glucose; AGE levels have previously been correlated with HbA1c levels (27, 28). Serum levels of pentosidine and CML have been reported to be higher in patients with type 2 diabetes than in non-diabetic controls (29). In addition, during the process of AGE formation, ROS are produced as by-products of the late steps of glycation reactions, and these radicals in turn further promote glycation (30). AGE levels have been associated with ROS in diabetic patients (31). Previously, in vitro and animal studies have shown that polyphenolic extracts from different sources could inhibit AGE formation (32, 33). In our study, although we found that PJ resulted in a significant improvement in TAC and MDA status, there was no significant difference between the PJ group and the placebo group regarding plasma concentrations of pentosidine and CML. One possible explanation for this finding is that the patients in the current study did not have elevated baseline AGE levels.
In a study by Lapolla et al., plasma pentosidine levels of healthy subjects were 63.2 pmol/ml, which is comparable to the pentosidine levels of our study subjects, while in diabetic patients without peripheral artery disease pentosidine levels were 85.5 pmol/ml and in those with peripheral artery disease they were 109.2 pmol/ml (6). Since no significant difference was found in glycemic control between the two groups, comparable non-enzymatic reactions with glucose and protein glycation could have occurred in both groups. Another possibility is that, to influence plasma AGEs, intake of an antioxidant source should continue for a more prolonged time. In a study by Shimada et al., antioxidant therapy for 6 months significantly decreased hemoglobin carboxymethyl valine residue levels, even though HbA1c did not change (34).
In the current study, although dietary energy and carbohydrate intake differed between the two groups, no significant changes in body weight, plasma glucose, or HbA1c were observed between the two groups.
Some limitations of the present study are that PJ consumption was relatively short in duration and that few parameters reflecting oxidative stress and the advanced glycation process were measured.
In conclusion, 12-week consumption of PJ does not influence plasma levels of the AGEs pentosidine and CML in patients with T2D, while it improves plasma TAC and MDA status. PJ contains simple sugars; however, its consumption did not impair glycemic control in diabetic patients. Therefore, it could be a potential natural drink with favorable properties in the diabetic diet. | 2018-04-03T04:54:06.954Z | 2015-09-08T00:00:00.000 | {
"year": 2015,
"sha1": "69580add4d459aed77019639afde759a524337d4",
"oa_license": "CCBY",
"oa_url": "https://foodandnutritionresearch.net/index.php/fnr/article/download/800/2476",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69580add4d459aed77019639afde759a524337d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253801928 | pes2o/s2orc | v3-fos-license | Machine learning strategies to predict late adverse effects in childhood acute lymphoblastic leukemia survivors
Acute lymphoblastic leukemia is the most frequent pediatric cancer. Approximately two thirds of survivors develop one or more health complications known as late adverse effects following their treatments. The existing measures offered to patients during their follow-up visits to the hospital are rather standardized for all childhood cancer survivors and not necessarily personalized for childhood ALL survivors. As a result, late adverse effects may be underdiagnosed and, in most cases, only taken care of following their appearance. Thus, it is necessary to predict these treatment-related conditions earlier in order to prevent them and enhance the survivors' health. Multiple studies have investigated the development of late adverse effects prediction tools to offer better personalized follow-up methods. However, no solution to date has integrated neural networks. In this work, we developed graph-based, parameter-efficient neural networks and promoted their interpretability with multiple post-hoc analyses. We first proposed a new disease-specific VO$_2$ peak prediction model that does not require patients to participate in a physical function test (e.g., 6-minute walk test) and further created an obesity prediction model using clinical variables that are available from the end of childhood ALL treatment as well as genomic variables. Our solutions were able to achieve better performance than linear and tree-based models on small cohorts of patients ($\leq$ 223) for both tasks.
Introduction
Childhood acute lymphoblastic leukemia (ALL) is the most frequently diagnosed type of cancer in children 1 . The 5-year relative survival rate is currently above 90% 2 . Nevertheless, approximately two thirds of childhood ALL survivors will present one or more health complications 3 , known as late adverse effects (LAEs). The LAEs result from the treatment (e.g., exposure to chemotherapy, cranial radiation therapy) rather than from the cancer itself 3 . The existing follow-up measures, used in clinical settings and offered to patients during their visits to the hospital, are rather standardized for all childhood cancer survivors and not necessarily personalized for childhood ALL survivors 4 . As a result, LAEs may be underdiagnosed and, in most cases, only taken care of once they have already appeared in adulthood. Thus, it is necessary to predict these treatment-related conditions earlier in order to prevent them and enhance the survivors' health.
Between 2013 and 2016, 246 childhood ALL survivors participated in a series of clinical, physiological, biological and genetic evaluations as part of the PETALE study 5 . The main goal was to pinpoint predictive clinical, genetic and biochemical biomarkers that are relevant to establishing personalized intervention plans to reduce LAEs prevalence, while providing knowledge for the improvement of follow-up methods 5 .
Using the valuable data acquired from the PETALE cohort, efforts have been made towards the development of better personalized follow-up methods [6–11]. As an example, an equation based on a linear regression was specifically developed to estimate the maximal oxygen consumption (i.e., VO2 peak) in childhood ALL survivors following a 6-minute walk test (6MWT) 6 . The VO2 peak is an excellent predictor of cardiac health in patients with cancer and is recognized as the gold standard in exercise physiology to measure patients' cardiorespiratory fitness 12 , which plays an important role in the prevention of LAEs in childhood ALL survivors 1 . However, the direct measurement of the VO2 peak, which is usually done by performing a maximal cardiopulmonary exercise test (CPET), is not an optimal solution in clinical settings due to financial and time constraints. Therefore, there is an interest in using a walking test (e.g., 6MWT) when access to comprehensive testing (e.g., CPET) is limited 13 . Moreover, it has been shown that using a disease-specific VO2 peak equation from the 6MWT provides a robust tool to estimate the patient's cardiorespiratory fitness at lower cost 13 .
More recently, it has also been suggested that childhood ALL survivors' cardiorespiratory fitness is associated with specific trainability genes 11 , highlighting the potential impact of some genetic variants in the prediction of VO2 peak. Another study investigated the association between genetic variants (i.e., single nucleotide polymorphisms) and cardiometabolic LAEs (e.g., obesity, dyslipidemia, hypertension) in childhood ALL survivors 7 . The single nucleotide polymorphisms (SNPs) were grouped according to their associated cardiometabolic conditions and further analyzed with eight other biological and treatment-related variables using a logistic regression. The authors found that multiple common and rare variants were independently associated with cardiometabolic conditions, such as dyslipidemia, insulin resistance and obesity 7 . They also suggested that these associations should be considered as indicators for the early assessment of these LAEs. This is an important aspect to take into consideration since, in the PETALE study, 41.8% of childhood ALL survivors had dyslipidemia, 33% were obese and 18.5% had insulin resistance 7 .
In medical contexts, simple models such as linear regression and logistic regression are often favored over more complex machine learning approaches (e.g., deep learning models) due to their ability to be easily interpreted 14,15 . Moreover, due to their modest number of parameters to optimize (i.e., reduced capacity), simple models are less inclined to overfit on small training datasets and, consequently, to exhibit poor generalization performance. Hence, these models are well adapted to a clinical context with a small cohort of patients. However, more sophisticated model architectures (i.e., neural networks) have lately achieved better results in the prediction of clinical events using data from electronic health records 16,17 . Interpretability of neural networks has also been the subject of many studies over the last years 15,18 . Post-hoc methods have been investigated to gain insights into a neural network's behavior following its training. For example, recent work motivated the usage of attention mechanisms within models to help depict the decision-making process behind individual samples 17,19 . Model-agnostic techniques exist as well to compare and visualize features within a layer of a neural network 15,20,21 . On the other hand, interpretability of neural networks can also be strengthened a priori via the design of their architectures by including components with specific functionalities 15 . In particular, some studies explicitly integrated graph-based architectures (i.e., graph neural networks) to leverage the importance of the similarity between patients to solve a prediction task 22,23 .
In this work, our main goal was to design neural networks for the prevention of LAEs in the childhood ALL survivor population. An overview of the prediction tasks and the experimental setup considered in this study is presented in figure 1. We hypothesized that parameter-efficient neural networks could achieve better prediction performance than linear and tree-based models on small cohorts of patients but would require rigorous post-hoc analyses to provide interpretability of their behaviors. In particular, we believed that graph-based architectures would lead to the best results since they can benefit from the links between patients of the cohorts instead of treating each of them separately. We also suggested that the inclusion of genomic variables would be beneficial in the creation of early LAEs prediction models. Towards our goal, we first proposed a new disease-specific VO2 peak prediction model that does not require patients to participate in a physical function test (e.g., 6MWT); even if it has some advantages over the cardiopulmonary exercise test, the 6MWT still requires time and human resources. We further created an obesity prediction model using clinical variables that are available from the end of childhood ALL treatment as well as genomic variables. Overall, our results suggest that neural networks can outperform simple models to predict LAEs in small cohorts of ALL survivors. The conducted post-hoc analyses, including the visualization of features within specific layers and the visualization of attention maps, also demonstrated their usefulness in providing a general understanding of the behaviors of the models.
Figure 1. Experimental setup. (a) Prediction tasks viewed using the childhood ALL treatment timeline. On the left, VO2 peak is predicted using variables measured on the same day. On the right, obesity is predicted using variables available at the end of childhood ALL treatment. (b) Experiment workflow.
(1) Separation of the dataset into a learning set and a holdout set.
(2) Evaluation of the models using random stratified subsampling with 10 splits.
Rather than treating survivors individually, our model captures information from their neighborhood (i.e., the set of survivors connected to them by an oriented edge) when it calculates their predictions. Precisely, a targeted survivor encapsulates the information from his surroundings by calculating a weighted average of his neighbors' standardized features and by applying a transformation to the resulting vector. The weight attributed to each neighbor during the calculation of the weighted average is determined by the attention mechanism of the GAT. Both the attention mechanism and the transformation are parameterized functions whose parameters are learnt during the training of the model. The vector resulting from the transformation is then concatenated to the initial standardized features to create an enriched representation of the survivor (i.e., an embedding). It follows that the VO2 peak of a survivor is estimated with a linear combination of the components within his embedding.
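The aggregation step described above can be sketched in plain NumPy. This is a simplified, single-head illustration with untrained placeholder weights, following the order given in the text (attention-weighted average of neighbor features, then a learned transformation, then concatenation); it is not the authors' implementation:

```python
import numpy as np

def gat_embed(X, neighbors, W, a):
    """X: (n, d) standardized features; neighbors[i]: nodes with edges into i
    (including i itself); W: (d, h) transformation; a: (2*d,) attention vector."""
    n, d = X.shape
    out = np.zeros((n, W.shape[1]))
    for i, nbrs in enumerate(neighbors):
        # Attention score for each incoming neighbor j.
        scores = np.array([a @ np.concatenate([X[i], X[j]]) for j in nbrs])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                      # softmax attention weights
        avg = alpha @ X[nbrs]                     # weighted neighbor average
        out[i] = avg @ W                          # learned transformation
    # Concatenate original features with the aggregated vector (embedding).
    return np.concatenate([X, out], axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))                       # toy standardized features
emb = gat_embed(X, [[0, 1], [1, 0], [2, 3], [3, 2]], np.eye(3), np.ones(6))
```

A final linear layer over `emb` would then produce the VO2 peak estimate; in the real model, `W` and `a` are learned during training rather than fixed as here.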
Construction of the graph
We created the oriented graph structure connecting the childhood ALL survivors of our dataset using their attributes. To guide the attention mechanism towards a subset of connections with intuitively more potential to help with the regression task, we restricted the number of oriented edges pointing at each survivor (i.e., node) by selecting only their 10 nearest neighbors of the same sex. The similarity between survivors was determined using the Euclidean distance based on the numerical features. A self-connection was also added to each node so they could be part of their own neighborhood. The survivors from the holdout set ( Fig. 1b) were not allowed to be connected to each other in order for our experiment to be representative of a real clinical context where each new incoming survivor can only be connected to others that have already been observed (i.e., that are already in the graph).
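The construction above can be sketched as follows; this is a simplified illustration on toy data, not the code used in the study.

```python
import numpy as np

def knn_same_sex_edges(features, sex, k=10):
    """Oriented edges (neighbor -> target) for each survivor: the k nearest
    survivors of the same sex by Euclidean distance on the numerical
    features, plus a self-connection for every node."""
    n = len(sex)
    edges = []
    for i in range(n):
        same = [j for j in range(n) if j != i and sex[j] == sex[i]]
        dists = [np.linalg.norm(features[j] - features[i]) for j in same]
        nearest = [same[idx] for idx in np.argsort(dists)[:k]]
        edges.extend((j, i) for j in nearest)   # edges point at survivor i
        edges.append((i, i))                    # self-connection
    return edges

# toy data: 6 survivors, 2 numerical features
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))
sex = ["F", "F", "F", "M", "M", "M"]
edges = knn_same_sex_edges(X, sex, k=2)
# every survivor has exactly k incoming same-sex edges plus the self-loop
assert all(sum(1 for (_, t) in edges if t == i) == 3 for i in range(6))
```

In the study the holdout survivors would additionally be barred from pointing at each other, which this sketch omits.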
Performance of the prediction model
We compared our new VO 2 peak prediction model to the equation from Labonté et al. 6 by measuring the root-mean-square error (RMSE), the mean absolute error (MAE), the Pearson correlation coefficient (PCC) and the concordance index (C-index) associated with the predictions of both models in the holdout set. Except for the concordance index, our model shows an improvement over the equation from Labonté et al. 6 (final test section of table 1). In figure 2, we can see that the equation from Labonté et al. 6 overestimates the VO 2 peak of childhood ALL survivors, while our model is closer to the real observed values (i.e., the targets). Moreover, our model does not rely on any measurement acquired from a 6MWT.
The results that led to the selection of our final model (i.e., the GAT) are presented in the top section (i.e., evaluation of models) of table 1. The evaluation of models corresponds to the second phase of our experimental setup (Fig. 1b box 2).

Table 1. Performance of the models for the VO 2 peak prediction task. Models are compared using the root-mean-square error (RMSE), the mean absolute error (MAE), the Pearson correlation coefficient (PCC) and the concordance index (C-index). The results reported in the top section (i.e., evaluation of models) refer to the mean ± standard deviation obtained in the second part of our experimental setup (Fig. 1b box 2). The scores recorded following the predictions made by the selected model (i.e., the GAT) and the equation from Labonté et al. 6 in the holdout set are presented in the final test section. The HPs optimization column indicates whether the scores were acquired with hyperparameter values that were manually selected or found by an automated hyperparameter optimization algorithm. HP: hyperparameter.
Analysis of the prediction model
We first visualized the oriented graph structure connecting the childhood ALL survivors of our dataset (Fig. 3a). The resulting graph showed that survivors sharing connections had similar VO 2 peak values, while supporting the fact that women in the childhood ALL survivor population generally have lower VO 2 peak values than men, as is already observed in the non-survivor population 24 . We further projected the embeddings learnt by our model into a 2D space using t-SNE 20 in order to reach a better understanding of the model's behavior (Fig. 3b). The projection suggests that our model can learn embeddings that group survivors of the same sex while generally keeping them closer when they share similar VO 2 peak values. Considering that a shorter distance between two survivors' embeddings generally means that they have closer VO 2 peak values, a clinician can assess the plausibility of a new prediction made by our model by comparing the profile of the survivor associated with the predicted value to the profiles of the most similar survivors for which the VO 2 peak values are already known. For example, in figure 3c, we compared a survivor in the holdout set with the associated three most similar survivors in the learning set. In this case, the prediction made by the model seems legitimate since the closest patients in the learning set have comparable attributes and VO 2 peak values. Note that the similarity measure in figure 3c is based on a weighted Euclidean distance between the embeddings, that is, similarity = 1/(1 + distance). The weight attributed to each dimension of the embeddings during the Euclidean distance calculation corresponds to the absolute value of the weight associated with the same dimension in the last linear layer of the model.

Figure 3. Analysis of the VO 2 peak model. (a) Graph constructed for the VO 2 peak regression task. Connected men (diamonds) and women (circles) present similar VO 2 peak levels. The self-connections were omitted for visualisation purposes. (b) Projection of the embeddings into a 2D space using t-SNE 20 . Men and women form two distinct groups in which close survivors usually share similar VO 2 peak values. (c) Comparison between the profile of a survivor in the holdout set and the associated three most similar survivors in the learning set according to the embeddings learnt by the model.
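One plausible reading of this similarity measure can be sketched as below; `embedding_similarity` is a hypothetical helper, and the assumption that each absolute last-layer coefficient multiplies the squared per-dimension difference is ours.

```python
import numpy as np

def embedding_similarity(emb_a, emb_b, last_layer_weights):
    """Similarity between two survivors' embeddings: 1/(1 + weighted
    Euclidean distance), where each dimension is weighted by the absolute
    value of the corresponding coefficient in the model's last linear
    layer (assumed to weight the squared differences)."""
    w = np.abs(last_layer_weights)
    dist = np.sqrt(np.sum(w * (emb_a - emb_b) ** 2))
    return 1.0 / (1.0 + dist)

a, b = np.array([1.0, 2.0]), np.array([1.0, 5.0])
# a zero coefficient makes the second dimension irrelevant to the distance
assert embedding_similarity(a, b, np.array([1.0, 0.0])) == 1.0
assert abs(embedding_similarity(a, b, np.array([0.0, 1.0])) - 0.25) < 1e-12
```

Dimensions the final layer ignores thus contribute nothing to the reported similarity.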
Construction of the prediction model
We developed a new model for early obesity prediction in the childhood ALL survivor population. To make our model usable directly at the end of the childhood ALL treatment, we considered the age at diagnosis (years), the body mass index (BMI) at the end of treatment (kg/m 2 ), the doxorubicin dose received (mg/m 2 ), the duration of treatment (DT) (years), the effective corticosteroid cumulative dose received (mg/m 2 ), the methotrexate dose received (mg/m 2 ), the sex and 30 SNPs as our observed variables (Fig. 1a). The SNPs are categorical variables that share the following three modalities: homozygous for the reference allele ("0/0"), heterozygous (i.e., one chromosome with the reference allele and the other with the alternate) ("0/1") or homozygous for the alternate allele ("1/1"). All variables mentioned above were kept following a feature selection process (see Feature selection in Methods and Fig. 1b-4). The non-genomic variables excluded are shown in Supplementary Tables S4-S9.
The obesity status of a single individual can vary according to the measure used (e.g., body mass index (BMI), total body fat percentage (TBF), waist circumference) and the specific cut-off value associated with it, which can evolve according to guidelines. Thus, we trained our model to directly predict the future TBF of survivors. This way, any cut-off value can later be applied to evaluate whether a survivor will be obese based on the predicted value. This makes our model independent of any particular cut-off value and consequently ensures that it remains operational over time. In our dataset, the time elapsed between the end of the treatment and the measurement of the TBF was on average 13.21 years (Fig. 1a).
Gene Graph Attention Encoder
We created a novel neural network architecture that efficiently uses the genomic data (i.e., the SNPs) for a regression task while considering the underlying structure of the genome. This new architecture called the Gene Graph Attention Encoder (GGAE) (Fig. 4) encodes the data from the SNPs in a low-dimensional vector named the genomic signature. The latter is further concatenated to the other standardized clinical features to create a given patient embedding that can be used as an input to any feedforward neural network (FNN) architecture. The parameters of the GGAE and the subsequent connected FNN architecture are learned in an end-to-end fashion during the training of the model. It follows that the model generates genomic signatures that are specific to the regression task. In our case, we used a simple linear regression (i.e., a linear layer) as the FNN part to reduce the number of parameters to optimize and allow the post-hoc analysis of the coefficients associated to the standardized clinical features. See section 2.2.6 of Supplementary Methods for further details on the architecture of the GGAE. . In order to create the genomic signature, the GGAE first interprets each chromosome pair as a complete graph (i.e., chromosomal graph) where the nodes represent the observed SNPs associated to the pair. A real-valued vector is mapped to each node of each chromosomal graph according to the SNP's category linked to the node (i.e., "0/0", "1/1" or "0/1"). We refer to each of these vectors as SNP embedding. The GGAE further depict the whole genome as another complete graph (i.e., the genome graph) where each node represents a pair of chromosomes. A real-valued vector (i.e., a chromosomal embedding) is mapped again to each of these nodes by aggregating the SNP embeddings of the associated chromosomal graph (i.e., applying a readout function). Another readout function is finally applied to the genome graph to create the genomic signature.
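A heavily simplified sketch of this encoding path follows. Mean pooling stands in for the attention-based readout functions, and the genotype embedding lookup table is randomly initialized here, whereas the real GGAE learns both end to end.

```python
import numpy as np

rng = np.random.default_rng(2)
EMB_DIM = 4

# learned lookup table: one embedding vector per SNP genotype category
# (trainable in the real GGAE, fixed random values in this sketch)
snp_embeddings = {g: rng.normal(size=EMB_DIM) for g in ("0/0", "0/1", "1/1")}

def genomic_signature(genotypes_by_chromosome):
    """genotypes_by_chromosome: {chromosome_pair: [genotype, ...]}.
    Mean pooling replaces the attention-based readouts of the GGAE."""
    chrom_embeddings = [
        np.mean([snp_embeddings[g] for g in genotypes], axis=0)  # chromosomal readout
        for genotypes in genotypes_by_chromosome.values()
    ]
    return np.mean(chrom_embeddings, axis=0)                      # genome readout

patient = {"chr4": ["0/0", "0/1"], "chr12": ["1/1"], "chr22": ["0/1", "0/1", "0/0"]}
signature = genomic_signature(patient)
clinical = rng.normal(size=6)                  # standardized clinical features
embedding = np.concatenate([clinical, signature])
assert embedding.shape == (6 + EMB_DIM,)
```

The `embedding` vector would then feed the linear layer used as the FNN part.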
Performance of the prediction model
We compared the best model obtained with the inclusion of the SNPs (linear regression + GGAE) to the best model obtained without the SNPs (linear regression) according to four different regression metrics (RMSE, MAE, PCC and C-index) and two binary classification metrics (sensitivity and specificity) (Table 2, final test sections). The sensitivity and the specificity of the models were calculated following the categorization of the survivors as obese or not obese, considering both the real TBFs and the model predictions, with the cut-off values presented by Lemay et al. 1 : >25% (men), >35% (women) and >95th percentile (children). More precisely, for any child, the cut-off value was the 95th percentile measured from a sample of U.S. children of the same sex and age group 32 . The combination of the linear regression with the GGAE achieved the best scores on all metrics except for the specificity, since it misclassified a non-obese survivor (P226) by 0.29 percentage point (Fig. 5). Additional results associated with the evaluation of models phase mentioned in the top sections of table 2 are available in Supplementary Tables S14-S15.

Table 2. Performance of the models for the obesity prediction task. Regression metrics are calculated using the predictions of the TBF. Classification metrics are calculated considering the obesity class (obese or not obese) associated with each prediction following the application of the cut-off values. The results reported in the evaluation of models sections refer to the mean ± standard deviation obtained in the second part of our experimental setup (Fig. 1b box 2). The scores achieved by the selected model without SNPs (linear regression) and the selected model with SNPs (linear regression + GGAE) in the holdout set are displayed in the final test sections. The HPs optimization column indicates whether the scores were acquired with hyperparameter values that were manually selected or found by an automated hyperparameter optimization algorithm. HP: hyperparameter.
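The application of these cut-off values to a predicted TBF can be sketched as follows; `child_p95` is a placeholder for the sex- and age-specific 95th percentile that would be looked up for a given child.

```python
def is_obese(predicted_tbf, sex, is_child, child_p95=None):
    """Classify a predicted total body fat percentage (TBF) using the
    cut-offs reported by Lemay et al.: >25% (men), >35% (women) and, for
    children, the sex- and age-specific 95th percentile (child_p95)."""
    if is_child:
        return predicted_tbf > child_p95
    return predicted_tbf > (25.0 if sex == "M" else 35.0)

assert is_obese(26.0, "M", is_child=False)
assert not is_obese(34.0, "F", is_child=False)
assert is_obese(30.0, "F", is_child=True, child_p95=28.5)
```

Because the model predicts the TBF itself, swapping these thresholds for updated guidelines requires no retraining.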
Attention mechanism
The attention mechanism in the GGAE demonstrated its capability to focus on different SNPs to make a prediction. The attention of the GGAE was mainly directed toward a small subset of SNPs (Fig. 6).
Impact of the clinical features
We analyzed the coefficients associated with the standardized clinical features and the intercepts related to each sex. The age at diagnosis (0.05), the BMI at the end of treatment (2.33), the doxorubicin dose received (0.65) and the effective corticosteroid cumulative dose received (0.44) were found to increase the predicted TBF. Obesity at the end of treatment was already identified as a factor associated with obesity prevalence at the time of interview for survivors in the PETALE cohort 33 . Additionally, corticosteroids have also been reported to increase the obesity risk in other studies 34,35 . It is therefore consistent that positive coefficients are associated with the BMI at the end of treatment and the cumulative corticosteroid dose received. On the other hand, the duration of treatment (-0.32) and the methotrexate dose (-1.56) were found to decrease the predicted TBF.

Figure 6. Attention map produced by the GGAE. Each part of the grid is annotated with a "0", a "1" or a "2" to show whether the SNP of a survivor is respectively homozygous for the reference allele ("0/0"), heterozygous ("0/1") or homozygous for the alternate allele ("1/1"). A bar plot with the average attention score of each SNP is presented over the heatmap.
The intercepts calculated for both sexes (men: 16.31, women: 30.35) support the fact that women generally have a higher TBF than men, as is common in the non-childhood ALL survivor population 36 .
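Putting the reported coefficients and sex-specific intercepts together, the linear part of the model can be illustrated as below. This is a sketch, not the trained model: the features are standardized, and the GGAE's genomic contribution is reduced to a single placeholder term.

```python
# Coefficients reported in the text apply to *standardized* features.
COEFFS = {
    "age_at_diagnosis": 0.05, "bmi_end_of_treatment": 2.33,
    "doxorubicin_dose": 0.65, "corticosteroid_dose": 0.44,
    "duration_of_treatment": -0.32, "methotrexate_dose": -1.56,
}
INTERCEPT = {"M": 16.31, "F": 30.35}

def predict_tbf(std_features, sex, genomic_term=0.0):
    """Linear part of the model; genomic_term stands in for the GGAE's
    contribution, which is omitted in this sketch."""
    return INTERCEPT[sex] + genomic_term + sum(
        COEFFS[name] * value for name, value in std_features.items())

# a survivor whose standardized features all sit at the cohort mean (0)
# gets exactly the sex-specific intercept
baseline = {name: 0.0 for name in COEFFS}
assert predict_tbf(baseline, "F") == 30.35
assert predict_tbf(baseline, "M") == 16.31
```

The baseline case makes the intercept gap between the sexes directly visible.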
Discussion
Over the years, efforts have been pursued towards the development of better personalized follow-up methods for childhood ALL survivors using data from the PETALE study [6][7][8][9][10][11] . Other recent works have presented interesting results associated with the use of neural networks in prediction tasks related to clinical contexts 16,17 . However, until now, these machine learning approaches remained underexplored for the prediction of LAEs in childhood ALL survivors. In our work, we developed graph-based parameter-efficient neural networks for LAE prediction in childhood ALL survivors. In addition to contributing to precision medicine, our solutions constitute a promising avenue for the use of artificial intelligence in clinical settings with restricted numbers of patients. We first created a new disease-specific VO 2 peak prediction model based on a Graph Attention Network 29 . The VO 2 peak is the gold standard for measuring cardiorespiratory fitness 12 , which in turn is a key element for the prevention of LAEs such as obesity, cholesterol disorders and depression 1 . To use this type of neural network architecture and handle the VO 2 peak prediction task as a node regression problem, we created a graph structure with our dataset (Fig. 3a). In addition to achieving better performance than the equation from Labonté et al. 6 , our model does not rely on a walking test (e.g., the 6MWT). The removal of this constraint represents a strong advantage in the context of healthcare, considering that the 6MWT requires time and financial resources. Moreover, with our new model, the VO 2 peak prediction becomes more accessible since all variables needed by the model can be self-reported by the survivors. Therefore, our model could be paired with an online survey that survivors would be asked to fill out at different time points.
The resulting predictions could be further analysed by an exercise physiologist with the support of an interface providing comparisons between the current patient and the most similar survivors for which the VO 2 peak is already known (Fig. 3c). Even though it obtained satisfying results, the VO 2 peak prediction model presented development challenges that should be addressed in future work. The manual construction of an optimal graph structure is a tedious task. Until now, we only explored solutions based on the calculation of distances between the observations of our dataset. However, even if these solutions are conceptually simple, they involve the selection of several additional hyperparameters, such as the choice of a distance metric and the number of neighbors associated with each node. Machine learning methods enabling the graph structure and the parameters of the model to be learned simultaneously should be considered. Among the recent developments relevant to the subject, the Graph Convolutional Transformer 37 is an example of a model that simulates the presence of an edge between any pair of observations within a dataset and learns how to calculate a weight for each of them. In other words, this model features a flexible mechanism allowing each node to determine the best candidates to be part of its neighborhood. Although such a model's complexity grows with the number of nodes in a graph, it is a plausible solution to consider in future work given the small cohort size enrolled in our study. In addition to simplifying the construction of the graph, this approach could lead to the improvement of our current state-of-the-art solution.
We also proposed an obesity prediction model using clinical variables that are available at the end of childhood ALL treatment as well as genomic variables. In addition to showing promising results in the prediction of future obesity, our work presented a novel neural network architecture (i.e., the GGAE) that efficiently encodes the information associated with the SNPs (Fig. 4). Its design improves the modeling of genomic data while allowing the obesity prediction task to be managed similarly to a graph classification problem. The attention map produced by the GGAE (Fig. 6) demonstrates the degree of follow-up personalization pursued by our work. Not only does it allow generating hypotheses about the importance of certain SNPs for the survivor population in general, it also shows the contribution of the allelic constituents within each individual. For example, the map produced for the holdout set suggests that certain SNPs (i.e., 4:120241902, 12:48272895, 15:58838010, 16:88713262, 21:4432365 and 22:42486723) could be generally more relevant than the others for the prediction of the future TBF (top of Fig. 6). Meanwhile, it also indicates on which SNPs the model was focusing the most for each patient; among others, we can mention the higher attention given to the SNP 4:120241902 by patients P018 and P123 (Fig. 6). We acknowledge that the performance gain provided by the usage of SNPs through the GGAE was small (Fig. 5) and, therefore, a study comparing the benefits and the costs of executing whole-exome sequencing should be conducted. Nonetheless, we consider the GGAE an innovative solution for the integration of heterogeneous oncological data in parameter-efficient neural networks and we plan to further explore its potential in future work.
It should be noted that the association between the SNP 12:48272895 (i.e., the VDR FokI polymorphism) and different obesity traits has already been investigated in multiple studies but results were found to be inconsistent 38 .
We further highlight limitations related to our study. First, only a small number of samples were available in the PETALE dataset. Therefore, the scores obtained in the holdout sets related to both prediction tasks might not be fully representative of the future performance of our models on bigger, unseen datasets. Moreover, we hypothesize that the limited number of samples reduced the effectiveness of the automated hyperparameter optimization. More precisely, even though each set of hyperparameters selected by the algorithm was evaluated on multiple sub-samples, it is possible that their sizes were too small to provide valuable estimations of the hyperparameters' reliability. Second, all survivors in our datasets came from a monocentric cohort of individuals of European origin. Hence, our findings may not translate to other ethnic groups of childhood ALL survivors. Third, the time horizon of the prediction is not clearly defined within the current design of the obesity prediction model. The time from the end of treatment could eventually be integrated as an additional variable to predict this LAE within a more precise time frame, while potentially contributing to an increase in its accuracy 39 .
Next, we believe that future work could be separated into three different phases: (i) the validation of the current work using an external dataset; (ii) the application of the current work to other LAEs; and (iii) the development of new architectures based on multi-task learning 40 . During the validation phase, the performance of the new models could first be tested on other cohorts of childhood ALL survivors with European origins. Tests could also be performed on cohorts of different ethnicities to acquire more information about the clinical settings in which our models are reliable. Additionally, investigation could be pursued concerning the possible association between the TBF and the SNPs that received higher attention scores from the GGAE. Except for the VDR FokI polymorphism, we did not find any work that reported relevant results regarding the link between these SNPs and the TBF. Finally, investigations could also be conducted to further quantify the theoretical benefits of using GNNs with datasets that do not naturally have underlying graph structures. In the second phase, considering the promising results achieved in our work for the prediction of the VO 2 peak and the future TBF, the development of prediction models for other LAEs such as dyslipidemia and insulin resistance would be an interesting avenue to explore. In terms of the third phase, we hypothesize that, on small cohorts, neural networks designed for the simultaneous prediction of multiple LAEs via multi-task learning could provide better performance than neural networks built to produce a single output. This hypothesis follows the intuition that a neural network trained to predict LAEs within the same family (e.g., metabolic disorders) should benefit from the common underlying patterns linking the variables to each specific LAE, while being less vulnerable to overfitting since it has to learn parameters that help jointly across the different tasks.
In conclusion, we demonstrated in our work that graph-based parameter-efficient neural networks can achieve better results than linear and tree-based models for prediction tasks in clinical contexts with small cohorts (≤ 223 childhood ALL survivors). We also showed that an improvement in regression performance can be obtained from graph-based architectures by either connecting the patients of a dataset together or providing a better modeling of their individual information (e.g., genomic data). Additionally, we showed that it is feasible to reach a better understanding of the behaviors of these more complex machine learning solutions with post-hoc analysis methods such as the visualization of patients' embeddings and the study of attention maps. Overall, we strongly believe that the design of efficient model architectures and the achievement of thorough post-hoc analyses are key to increasing the progress and trust associated with the use of machine learning with small cohorts in healthcare.
Datasets
All data were taken from the PETALE study. All participants of this study were survivors of European origin who had been diagnosed with childhood ALL between 1987 and 2010 before the age of 19 and were at least 5 years post-diagnosis (see the article from Marcoux et al. 5 for a complete list of the eligibility criteria). Descriptive analyses of the datasets and the procedures of their construction are presented in Supplementary Tables S1-S12 and Supplementary Figures S1-S2, respectively. The VO 2 peak dataset consisted of 164 survivors who reached a valid maximal oxygen consumption while performing a cardiopulmonary exercise test 6 . Among the survivors with a VO 2 peak at or below the median, 90% were women, while 76% of the survivors with a VO 2 peak above the median were men. The obesity dataset consisted of 223 survivors for which the TBF was measured by dual energy x-ray absorptiometry 1 . Among the survivors with a TBF at or below the median, 76% were men, while 77% of the survivors with a TBF above the median were women.
Experimental setup
We developed a framework (Fig. 1b) to compare the performance of different models (see Models section) for the VO 2 peak and the obesity regression tasks. In this framework, 10% of the dataset is first extracted using random stratified sampling (see Random stratified sampling section) to create a holdout set. The holdout set remains hidden until the final best model is selected and ready to be evaluated. The search for the best model is done using the 90% of data left, which is referred to as the learning set. The latter is divided 10 times into different training sets and test sets. Each test set comprises 20% of the learning set and is also extracted using random stratified sampling. For the early obesity problem, a selection of features is conducted on each of these data splits considering the data from the training set (see Feature selection section) to exclude variables that are not helpful for the prediction.
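The splitting scheme can be sketched as follows; this is a simplified stand-in for the framework's sampler, drawing a fixed fraction from each stratum of a toy cohort.

```python
import numpy as np

def stratified_split(strata, frac, rng):
    """Indices for a random stratified sample: frac of each stratum."""
    strata = np.asarray(strata)
    picked = []
    for s in np.unique(strata):
        idx = np.flatnonzero(strata == s)
        n = max(1, round(frac * len(idx)))
        picked.extend(rng.choice(idx, size=n, replace=False))
    return np.array(sorted(picked))

rng = np.random.default_rng(3)
strata = ["F_low"] * 40 + ["F_high"] * 10 + ["M_low"] * 10 + ["M_high"] * 40
holdout = stratified_split(strata, 0.10, rng)           # 10% holdout set
learning = np.setdiff1d(np.arange(100), holdout)        # 90% learning set
splits = [stratified_split(np.asarray(strata)[learning], 0.20, rng)
          for _ in range(10)]                           # 10 test-set draws
assert len(holdout) == 10 and len(learning) == 90
assert all(len(s) == 18 for s in splits)
```

Each of the 10 inner draws carves a fresh 20% test set out of the learning set, mirroring the framework's repeated subsampling.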
Each model is evaluated on these 10 data splits at least twice. The models are first evaluated considering manually selected sets of hyperparameter values. Then, the models are evaluated using a set of hyperparameter values obtained from an automated hyperparameter optimization algorithm (see Hyperparameter optimization section). For each model evaluation, we save the empirical means and standard deviations of the metrics calculated on the test sets for further analyses. The model that achieved the best performance for the greatest number of metrics during one of its evaluations is kept as our final model. The best manually selected set of hyperparameter values, as well as the hyperparameter search spaces used for the automated hyperparameter optimization, are reported for each model in section 2.3.2 of Supplementary Methods.
The selected model is finally trained and evaluated twice on the learning set and the holdout set, respectively. The first training uses the best manually selected set of hyperparameter values and the second uses automated hyperparameter optimization. For the early obesity problem, a selection of features is conducted beforehand on the learning set to exclude variables that are not helpful for the prediction. The selected features are reported in the Results section. Of note, the model comparison step (Fig. 1b box 2) was conducted twice for the early obesity prediction task: we first selected the best model by running the comparisons without the SNPs and then selected the best model considering the SNPs. Both models were finally evaluated on the holdout set (Table 2).
Models
For each regression task and each set of variables tested, we evaluated the performance of a random forest, XGBoost 30 , a linear regression (trained with gradient descent), a multi-layer perceptron (MLP) and two GNNs (GCN 31 and GAT 29 ) combined with the Jumping Knowledge Networks framework 28 . The linear regression with the GGAE was only evaluated for the obesity task with the set of variables including the SNPs. The random forest and the XGBoost implementations were taken respectively from scikit-learn 41
Hyperparameter optimization
Hyperparameter optimization (Supplementary Fig. S3) was conducted by evaluating 200 sets of hyperparameter values sampled using the Tree-structured Parzen Estimator algorithm (TPE) 44 . Each set was evaluated on 10 internal training sets and internal test sets created by sub-dividing the training set (as well as the learning set) using stratified random sampling (see Random stratified sampling section). The average RMSE observed over the 10 internal test sets was used to estimate the performance related to a set of hyperparameter values, and the set associated with the lowest average RMSE was selected. The entire hyperparameter optimization process was executed using the optuna 45 library. The settings of the TPE algorithm are reported in Supplementary Tables S27-S28.
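The selection logic can be illustrated as below. This sketch replaces the TPE sampler, which in the study proposes 200 candidate sets via optuna, with a fixed dictionary of candidates and a toy RMSE function; only the "average RMSE over 10 internal splits, keep the minimum" rule is shown.

```python
import numpy as np

def select_hyperparameters(candidates, rmse_fn, n_inner=10):
    """Pick the candidate hyperparameter set with the lowest mean RMSE
    over n_inner internal train/test splits. (In the study, the TPE
    sampler proposes the candidates; here they are simply given.)"""
    scores = {name: np.mean([rmse_fn(hp, split) for split in range(n_inner)])
              for name, hp in candidates.items()}
    return min(scores, key=scores.get), scores

# toy objective: pretend a learning rate of 0.01 generalizes best
def fake_rmse(hp, split):
    return abs(hp["lr"] - 0.01) * 100 + 0.1 * split

candidates = {"A": {"lr": 0.1}, "B": {"lr": 0.01}, "C": {"lr": 0.001}}
best, scores = select_hyperparameters(candidates, fake_rmse)
assert best == "B"
```

Averaging over the internal splits keeps a single lucky split from dictating the chosen configuration.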
Random stratified sampling
All test sets created (as well as the holdout sets and the internal test sets) were sampled using random stratified sampling. The stratification was performed each time on a temporary column combining sex with a discretized version of the targets (i.e., the VO 2 peak values or the TBFs, depending on the regression task) based on the median in the complete dataset (i.e., the dataset before the extraction of the holdout set). The temporary column had four modalities: women (≤median), women (>median), men (≤median) and men (>median). The sex was considered in the stratification since we knew beforehand that it has an impact on the VO 2 peak 24 and the TBF 36 in the non-childhood ALL survivor population.
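The temporary stratification column can be sketched as:

```python
import numpy as np

def stratification_labels(sex, target):
    """Temporary column combining sex with a median split of the target
    (VO2 peak or TBF), with the median computed on the complete dataset."""
    med = np.median(target)
    return [f"{s} ({'<=' if t <= med else '>'}median)"
            for s, t in zip(sex, target)]

labels = stratification_labels(["F", "F", "M", "M"], [30.0, 45.0, 40.0, 55.0])
assert labels == ["F (<=median)", "F (>median)", "M (<=median)", "M (>median)"]
```

The resulting four modalities are what the stratified sampler balances across the splits.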
Moreover, two additional criteria were established to verify whether a test set (as well as a holdout set or an internal test set) was valid. The first criterion was that, for any test set sampled, the remaining dataset must contain every possible modality associated with the categorical variables. This criterion ensures that every categorical modality has been considered during the training of a model and can be recognized during the evaluation of the same model on the test set. The second criterion was that the numerical values observed within any numerical column of the test set must not be further than 6 interquartile ranges away from the first and third quartiles of the same column in the remaining dataset. This criterion ensures that the numerical values in the test set lie in a region similar to the one seen in the set used for training.
Feature selection
For each training set (as well as each learning set), we trained 10 different random forests using the default hyperparameters of version 0.24.1 of scikit-learn. We then extracted the feature importance calculated by each random forest for each feature. All features with an average feature importance greater than or equal to 0.01 were kept for the training. The selection of the clinical features and of the genomic features (i.e., the SNPs) was done independently.
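This selection procedure can be sketched with scikit-learn on synthetic data; the sketch is illustrative only (the study used version 0.24.1 with its default hyperparameters, applied separately to clinical and genomic features).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.05 * rng.normal(size=80)  # cols 2, 3 are noise

# average the importances of 10 independently seeded random forests
importances = np.mean(
    [RandomForestRegressor(random_state=s).fit(X, y).feature_importances_
     for s in range(10)], axis=0)
kept = [i for i in range(X.shape[1]) if importances[i] >= 0.01]
assert 0 in kept and 1 in kept  # the informative features survive the cut
```

Averaging over several forests damps the run-to-run variability of individual importance estimates.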
Data imputation and transformation
For each pair of training and test sets created (as well as learning/holdout and internal training/internal test set pairs), we imputed the missing data in the numerical columns using the empirical means calculated from the observed data in the training set, and imputed the missing data in the categorical columns using the modes of the observed data in the training set. Once imputed, transformation steps were applied to each pair of training and test sets: numerical columns were centered and reduced using the empirical means and standard deviations calculated from the observed data in the training set, and the modalities of each categorical column were changed to nominal encodings.
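These imputation and standardization steps can be sketched, for one numerical and one categorical column, as follows (the final nominal encoding of the categories is omitted; NaN marks missing numbers and None marks missing categories):

```python
import numpy as np

def fit_transform_pair(train_num, test_num, train_cat, test_cat):
    """Impute with training-set means/modes, then standardize the numerical
    column with training-set statistics."""
    mean = np.nanmean(train_num)
    train_num = np.where(np.isnan(train_num), mean, train_num)
    test_num = np.where(np.isnan(test_num), mean, test_num)
    std = train_num.std()
    train_num, test_num = (train_num - mean) / std, (test_num - mean) / std

    observed = [c for c in train_cat if c is not None]
    mode = max(set(observed), key=observed.count)
    fill = lambda col: [mode if c is None else c for c in col]
    return train_num, test_num, fill(train_cat), fill(test_cat)

tr_n, te_n, tr_c, te_c = fit_transform_pair(
    np.array([1.0, 3.0, np.nan]), np.array([np.nan, 5.0]),
    ["a", "a", None], [None, "b"])
assert abs(tr_n.mean()) < 1e-9          # centered with training statistics
assert tr_c == ["a", "a", "a"] and te_c == ["a", "b"]
```

Using only training-set statistics on both members of the pair prevents information from the test set leaking into the preprocessing.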
Graph construction
The directed graphs considered to train and evaluate the GNNs were built by considering the attributes of the survivors. Specifically, the oriented edges pointing at each survivor (i.e., node) were coming from the nodes of the k nearest neighbors of the same sex. During the evaluation of models phase (Fig. 1b box 2), values of k of 4, 6, 8 and 10 were considered for the evaluation of each GNN model with manually selected hyperparameters (Supplementary Tables S16-S18). The value of k associated with the best performance of a GNN, with manually selected hyperparameters, was further used during the automated hyperparameter optimization of the same model (Supplementary Tables S24-S25). The similarity between survivors was calculated on all the standardized numerical features and the categorical features, excluding the sex. More precisely, we used the value 1/(1 + Euclidean distance) in cases where no categorical attributes were available and the cosine similarity otherwise. The categorical attributes were converted to one-hot encodings to calculate the cosine similarities. The similarity values were set as the weights of the edges for the GCN (Supplementary Methods section 2.2.4).
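The similarity computation can be sketched as below; the one-hot encoded categories are assumed to be passed in pre-computed.

```python
import numpy as np

def survivor_similarity(a_num, b_num, a_cat=None, b_cat=None):
    """Similarity used to pick nearest neighbors: 1/(1 + Euclidean distance)
    on the standardized numerical features when no categorical attributes
    are present; otherwise cosine similarity on the concatenation of the
    numerical features and the one-hot encoded categories."""
    if a_cat is None:
        return 1.0 / (1.0 + np.linalg.norm(a_num - b_num))
    a = np.concatenate([a_num, a_cat])
    b = np.concatenate([b_num, b_cat])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x, y = np.array([0.0, 0.0]), np.array([3.0, 4.0])
assert survivor_similarity(x, x) == 1.0                  # identical survivors
assert abs(survivor_similarity(x, y) - 1 / 6) < 1e-12    # distance of 5
one_hot = np.array([1.0, 0.0])
assert abs(survivor_similarity(y, y, one_hot, one_hot) - 1.0) < 1e-12
```

For the GCN, these similarity values would additionally be stored as edge weights.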
Survivors in the test sets (as well as the holdout sets and the internal test sets) were always excluded from each graph during the training of the GNNs. Moreover, once added for the evaluation, survivors in the test sets were only allowed to have oriented edges coming from the nodes of the associated training graph. As an example, for each prediction task, during the execution of the second phase of the experimental setup (Fig. 1b box 2), training graphs contained only patients from the training set, while patients of the associated test sets were only added to the graphs for the evaluations after the training.
Ethics declarations
All the analyses conducted for the PETALE study were compliant with the Declaration of Helsinki and approved by the Institutional Review Board of Sainte-Justine University Health Center. Written informed consent was obtained from study participants or parents/guardians.
Code Availability
All software code needed to run the experiments that produced the results presented in this work is freely shared under the GNU General Public License v3.0 on GitHub at: https://github.com/Rayn2402/ThePetaleProject.
Data Availability
The datasets analysed during the current study are not publicly available for confidentiality reasons. However, randomly generated datasets with the same format as those used in our experiments are publicly shared in our GitHub repository to test the code implemented for this work.
Author contributions statement
NR and MV conceived the study. NR wrote the manuscript and performed data analysis. NR and MM implemented the machine learning framework used to perform data analysis. HL generated random datasets to validate experiment reproducibility. MV supervised the study. MC, VM, MK, DC and DS contributed to the experimental design. All authors revised the manuscript.
Additional information
Competing interests statement
The author(s) declare no competing interests.
Descriptive analyses of the datasets
In this subsection we present tables with descriptive analyses of the complete dataset, the learning set and the holdout set for each prediction task. Statistics for each numerical feature are presented in the format mean (std) [min, max]. Statistics for each categorical feature (e.g., SNP) are presented in the format count (percentage of group). Note that the statistics are calculated without considering missing values.
Supplementary Figure S1. Construction of the VO2 peak dataset. The 6 survivors without genomic data were excluded to preserve the possibility of an eventual experiment including SNPs. The first outlier was excluded due to a short DT of 6 months. The second outlier was excluded because of its high MVLPA of 238 minutes per day.
Supplementary Figure S2. Construction of the obesity dataset. The outlier was excluded due to a short DT of 6 months.
Architectures of the models
In this subsection, we describe the model architectures that were implemented using the PyTorch [1] and DGL [2] libraries. We denote by N and C the sets of numerical and categorical features observed in a dataset, and by k_j the number of modalities associated with any categorical feature j ∈ C. We use the notation x_i ∈ R^((|N|+|C|)×1) to represent a single data point in a dataset and x_ij for its component associated with feature j ∈ N ∪ C. We also denote the prediction associated with a data point x_i as y_i. With this notation, we can define x_i = [m_i; c_i], where m_i = [x_ij]_{j∈N} ∈ R^(|N|×1) and c_i = [x_ij]_{j∈C} ∈ R^(|C|×1) contain the standardized numerical features and the nominal encodings of the categorical features, respectively.
Categorical embedding
We integrated the categorical features within each of our architectures using categorical embeddings. More precisely, each categorical feature (x_ij s.t. j ∈ C) is first mapped to a vectorial representation (i.e., embedding) h_ij ∈ R^(k_j×1), defined as h_ij = emb_j o_ij, where emb_j ∈ R^(k_j×k_j) is a matrix of parameters initialized independently from a standard normal distribution and o_ij ∈ R^(k_j×1) is a one-hot vector with its non-null value at the position given by the nominal encoding x_ij.
All categorical embeddings are then concatenated to form the enriched representation of c_i, which we denote c'_i.
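Since each o_ij is one-hot, multiplying the parameter matrix by it simply selects one of its columns. A minimal plain-Python illustration of this lookup and of the concatenation into the enriched representation (the project itself uses PyTorch; function names here are illustrative):

```python
import random

def make_embedding(k, seed=0):
    """k x k parameter matrix with entries drawn from a standard normal."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(k)] for _ in range(k)]

def embed(W, code):
    """h = W @ one_hot(code): multiplying by a one-hot vector selects
    column `code` of W."""
    return [row[code] for row in W]

def enrich(codes, mats):
    """Concatenate the embeddings of all categorical features of one sample."""
    out = []
    for code, W in zip(codes, mats):
        out.extend(embed(W, code))
    return out
```

In PyTorch this lookup is what `nn.Embedding` provides directly.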
Linear regression
The linear regression model was implemented such that y_i = w^T [m_i; c'_i] + b, where w ∈ R^((|N| + Σ_{j∈C} k_j)×1) and b ∈ R.
Multi-layer Perceptron (MLP)
The MLP was designed with a single hidden layer containing half the number of features of the input layer. Its architecture is defined as y_i = w^T σ(W [m_i; c'_i] + b_1) + b_2, where W ∈ R^(round(n/2)×n), w ∈ R^(round(n/2)×1), b_1 ∈ R^(round(n/2)×1), b_2 ∈ R, n = |N| + Σ_{j∈C} k_j, and σ is a parametric rectified linear unit (PReLU).
Graph convolutional network (GCN)
Suppose an oriented graph is attached to a dataset, so that each data point x_i is associated with a node v_i. We denote by S_i the set of nodes v_j such that an oriented edge exists from v_j to v_i, and by d_ji the weight associated with this edge.
Following the Jumping Knowledge framework [3], we implemented the GCN [4] such that y_i = w^T [x'_i; h_i] + b_2 (1), with h_i = σ(W Σ_{j∈S_i} (d_ji / Σ_{l∈S_i} d_li) x'_j + b_1) (2), where x'_i = [m_i; c'_i], W ∈ R^(n×n), w ∈ R^(2n×1), b_1 ∈ R^(n×1), b_2 ∈ R, n = |N| + Σ_{j∈C} k_j and σ is a ReLU.
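The message-passing pattern can be illustrated with a simplified sketch that keeps only the similarity-weighted neighbor mean and the jumping-knowledge concatenation of a node's input with its aggregated message; the learned weight matrix, bias and ReLU are deliberately omitted, so this is a structural illustration rather than the model itself:

```python
def aggregate(h, edges, i):
    """Weighted mean of the neighbors' features, using the edge similarities
    as weights; a node with no in-edges keeps a zero message."""
    nbrs = edges.get(i, [])
    if not nbrs:
        return [0.0] * len(h[i])
    total = sum(w for _, w in nbrs)
    agg = [0.0] * len(h[i])
    for j, w in nbrs:
        for d in range(len(agg)):
            agg[d] += w * h[j][d] / total
    return agg

def jumping_knowledge(h, edges):
    """Concatenate each node's own features with its aggregated message,
    mirroring the doubled input size of the final layer (w in R^(2n x 1))."""
    return {i: h[i] + aggregate(h, edges, i) for i in h}
```

Replacing the normalized edge weights with learned attention scores gives the GAT variant described next.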
Graph attention network (GAT)
We designed the GAT [5] using equation (1) but replacing equation (2) by h_i = σ(W Σ_{j∈S_i} α_ij x'_j + b_1), where α_ij represents the attention score given to node v_j by node v_i, calculated with a single attention head of the mechanism introduced by Veličković et al. [5].
Gene graph attention encoder (GGAE)
Suppose that the set of categorical features is composed of the set of SNPs and the set of categorical features that are not SNPs (C = C_SNPs ∪ C_¬SNPs). We further define C_SNPs = ∪_k C^(k)_SNPs, where C^(k)_SNPs is the set of SNPs belonging to chromosome pair k.
Following this notation, the GGAE first calculates the chromosomal embeddings z^(k)_i ∈ R^(3×1) linked to a data point x_i as z^(k)_i = σ(Σ_{j∈C^(k)_SNPs} α^(k)_ij h_ij), with α^(k)_ij = softmax_j(γ(a^(k)T h_ij)), where a^(k) ∈ R^(3×1) is the attention mechanism specific to the chromosome pair k, h_ij ∈ R^(3×1) is the categorical embedding (i.e., SNP embedding) associated with SNP j, σ is a ReLU and γ is a LeakyReLU.
Then, the genomic signature s_i is calculated as s_i = σ(W Σ_k β^(k)_i z^(k)_i), with β^(k)_i = softmax_k(γ(b^T z^(k)_i)), where W ∈ R^(s×3), b ∈ R^(3×1) is the final attention mechanism, s is the signature size, σ is a ReLU and γ is a LeakyReLU.
Descriptions of the hyperparameters
Here, we enumerate and describe the hyperparameters associated with the models we implemented. Each hyperparameter is matched to the models that include it in Table S19. For descriptions of the hyperparameters associated with the random forest and XGBoost, please refer to the documentation of versions 0.24.1 and 1.4.2 of the scikit-learn [6] and xgboost [7] libraries.
attention dropout
Probability of randomly setting an attention score α ij to 0 for any node v i in the execution of a forward pass of the GAT during its training.
batch size
Number of elements in each batch during the training of a model.
β
L2 penalty coefficient for the regularization term in the mean squared error loss used during the training of a model.
dropout
Probability of randomly setting a feature (i.e., a neuron) of the hidden layer to 0 in the execution of a forward pass of the model during its training.
Tree-structured Parzen estimator (TPE)
In this subsection, we provide additional details about the parameters (Supplementary Table S27) and the statistical distributions (Supplementary Table S28) used to run the hyperparameter optimization with the TPE algorithm [9]. Additionally, an illustration of the automated hyperparameter optimization process is displayed (Supplementary Fig. S3).
SUPPLEMENTARY BACKGROUND
3.1 Disease-specific VO2 peak equation
Here, we present the disease-specific VO2 peak equation created by Labonté et al. [10].
Indocyanine Green Retention Test as a Predictor of Postoperative Complications in Patients with Hepatitis B Virus-Related Hepatocellular Carcinoma
Background Accurate preoperative estimation of liver function reserve is key to the safety of hepatectomy. Recently, the indocyanine green retention test at 15 minutes (ICG-R15) has been widely used to estimate hepatic function reserve in different liver diseases. The purpose of this research was to investigate the clinical value of ICG-R15 in predicting postoperative major complications and severe posthepatectomy liver failure (PHLF) in patients with hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) subjected to hepatectomy. Methods A total of 354 HBV-associated HCC patients who underwent hepatectomy were enrolled. The Child–Pugh, model for end-stage liver disease (MELD), albumin–bilirubin (ALBI) and ICG-R15 scores for assessing postoperative complication risk were compared using receiver operating characteristic (ROC) curve and decision curve analysis (DCA). Results Postoperative major complications developed in 32 patients (9.1%) and severe PHLF developed in 57 patients (16.1%). Multivariate analyses revealed that ICG-R15 was an independent factor for predicting postoperative major complications and severe PHLF. ROC curve analyses and DCA plots showed that the predictive abilities of ICG-R15 for postoperative major complication and severe PHLF risk were significantly greater than those of the Child–Pugh, MELD, and ALBI scores. Similar results were obtained when stratifying by different background subgroups. Patients were then divided into three risk cohorts, emphasizing the significant discrepancy in the incidence of postoperative major complications and severe PHLF. Conclusion Compared with the Child–Pugh, MELD and ALBI scores, ICG‐R15 showed significant advantages in predicting postoperative major complications and severe PHLF in HBV-related HCC patients subjected to liver resection.
Introduction
Hepatitis B virus (HBV) infection is related to 70-90% of the patients with hepatocellular carcinoma (HCC) in the Asia-Pacific regions, especially China. 1 Partial hepatectomy is the preferred curative means in select HBV-related HCC patients. 2,3 Although advances in hepatectomy and perioperative care techniques have greatly improved the safety of surgery, postoperative major complications, especially severe posthepatectomy liver failure (PHLF) induced by residual hepatic functional insufficiency, remain the major cause of postoperative death. [4][5][6][7][8] Thus, it is of great significance to estimate liver function reserve prior to hepatectomy.
Currently, the Child-Pugh scoring system is the most commonly applied method to assess liver function reserve; however, its clinical application is limited because two of its indexes (hepatic encephalopathy and ascites) are subjective and arbitrary. 9,10 The model for end-stage liver disease (MELD), originally established to estimate the outcomes of cirrhotic patients, has gradually been recognized as a standard for assessing liver function reserve and sequencing transplant candidates. Nevertheless, the serum creatinine level is strongly influenced by individual factors, such as gender and age, limiting its application. 11 The albumin-bilirubin (ALBI) score is the most recently recognized model for assessing hepatic functional reserve and is often used to predict prognostic risk in different liver diseases, but it remains limited in accurately assessing patients with obstructive jaundice. 12 Therefore, there is still a need to explore better tools to estimate liver reserve function.
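The ALBI score mentioned above is not defined in this paper; for reference, the published formula (Johnson et al., 2015) and its standard grade cut-offs can be computed as follows (units as stated in the comments):

```python
import math

def albi_score(bilirubin_umol_l, albumin_g_l):
    """ALBI = 0.66 * log10(total bilirubin, umol/L) - 0.085 * (albumin, g/L)."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def albi_grade(score):
    """Published cut-offs: grade 1 <= -2.60; grade 2 in (-2.60, -1.39]; grade 3 > -1.39."""
    if score <= -2.60:
        return 1
    if score <= -1.39:
        return 2
    return 3
```

The cohort's median ALBI of −2.38 reported later in the paper corresponds to grade 2 under these cut-offs.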
Indocyanine green (ICG) is a water-soluble fluorescent dye that binds to lipoproteins and albumin after intravenous injection and is excreted unchanged in bile. 13,14 As a quantitative excretory test of functional hepatocytes and liver blood flow, the ICG retention test at 15 minutes (ICG-R15) has become a standard preoperative parameter to evaluate liver function reserve in patients with different hepatic diseases, mostly in Asian series. [15][16][17][18] In this study, we compared the abilities of ICG-R15, Child-Pugh, MELD and ALBI scores for assessing postoperative major complication and severe PHLF risk.
Patient Population
In this study, 354 patients who underwent initial hepatectomy for HBV-related HCC between January 2017 and December 2018 in our hospital were included. HCC patients who received radiofrequency ablation, transarterial chemoembolization or other tumor treatments prior to liver resection were excluded. This study was conducted with the written informed consent of each patient, was approved by the Ethics Committee of Guangxi Medical University Cancer Hospital, and was performed in accordance with the Declaration of Helsinki.
Diagnosis and Definitions
Postoperative pathological examination was the basis for the diagnosis of HCC, and the Barcelona Clinic Liver Cancer (BCLC) criteria were used for HCC staging. Splenomegaly or gastroesophageal varices with thrombocytopenia was defined as clinically significant portal hypertension (CSPH). 19 PHLF was defined as hyperbilirubinemia and abnormal coagulation on postoperative day 5. Grade A PHLF required no specific therapy, grade B required noninvasive treatment only, and grade C required invasive therapies; grade B or above was defined as severe PHLF. 20 The severity of postoperative complications was classified according to the Dindo-Clavien grade, with grade III or above defined as a postoperative major complication. 21
ICG Clearance
Generally, ICG clearance is performed using a continuous infusion technique during hepatic vein intubation. All patients enrolled in our study received the ICG clearance test prior to hepatectomy. After fasting overnight, an appropriate amount of ICG was injected rapidly through a peripheral vein of the forearm. Plasma ICG concentration was monitored by an optical probe connected to the patient, and the ICG-R15 value was measured by a pulse dye densitometry analyzer (DDG3300K, Japan).
Hepatectomy and Follow-Up
Before hepatectomy, abdominal CT or MRI was carried out to assess tumor status and surgical safety. The Child-Pugh score and residual hepatic volume were measured to assess hepatic function reserve. Surgical treatment of liver tumors was based on segmental anatomical resection. The extent of hepatectomy was classified as major resection (removal of three or more Couinaud segments) or minor resection (removal of one or two segments, or wedge resection), based on the number of liver segments resected. 22 More details and indications of the liver resection procedures were described in previous research. 23 All patients were routinely reviewed 1 month after discharge, every 2-3 months in the first postoperative year, and every 3-6 months in the second year. Routine re-examinations included serum biochemistry, α-fetoprotein, abdominal ultrasonography, CT or MRI, and so on.
Statistical Analyses
Categorical variables were shown as frequencies and proportions and were compared using χ 2 test. Continuous variables were shown as median (Q25-Q75) and were compared using Mann-Whitney U-tests.
Using univariate and multivariate logistic regression analyses, we identified independent risk parameters predicting postoperative major complications and severe PHLF. The abilities of Child-Pugh, MELD, ALBI and ICG-R15 to predict postoperative major complications and severe PHLF were tested via the areas under the receiver-operating characteristic (ROC) curves (AUCs) and decision curve analysis (DCA). 24 Additionally, three risk groups were generated by splitting ICG-R15 at its 50th and 85th percentiles: the low-risk group comprised the lowest 50%, the intermediate-risk group the 50th to 85th percentiles, and the high-risk group the top 15%.
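The percentile-based split can be sketched as follows. The paper does not state which percentile convention it used, so a simple nearest-rank rule is assumed here:

```python
import math

def percentile(sorted_vals, p):
    """Nearest-rank percentile (one of several common conventions)."""
    idx = max(0, math.ceil(p / 100.0 * len(sorted_vals)) - 1)
    return sorted_vals[idx]

def stratify(icg_values):
    """Assign each ICG-R15 value to a low / intermediate / high risk group
    using the cohort's own 50th and 85th percentiles as cut-offs."""
    vals = sorted(icg_values)
    p50, p85 = percentile(vals, 50), percentile(vals, 85)
    groups = []
    for v in icg_values:
        if v <= p50:
            groups.append("low")
        elif v <= p85:
            groups.append("intermediate")
        else:
            groups.append("high")
    return p50, p85, groups
```

In the paper's cohort these cut-offs came out at 4.6% and 9.9%, as reported in the Results.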
SPSS software (version 25.0, IBM, USA) was used for statistical analyses. P < 0.05 was considered to be statistically significant.
Patients' Characteristics
The clinical characteristics of the 354 HBV-related HCC patients enrolled are shown in Table 1. The patients included 36 females and 318 males with a median age of 51 years. Overall, 9.4% of the patients had CSPH, and most (60.2%) had cirrhosis. Most patients (86.2%) were categorized as Child-Pugh grade A, and the remainder as grade B. The median MELD was 5 (4 to 7), the median ALBI was −2.38 (−2.59 to −2.16), and the median ICG-R15 was 4.6 (3.2 to 7.35).
Based on the BCLC staging system, 3.4% of the patients were grade 0, 57.9% were grade A, 20.3% were grade B, and 18.4% were grade C. Surgical resections comprised 235 major hepatectomies and 117 minor hepatectomies.
Independent Predictors of Postoperative Major Complications
Factors related to postoperative major complications in univariate logistic regression analyses included male sex, prealbumin, albumin, aspartate aminotransferase, creatinine, Child-Pugh, MELD, ALBI, ICG-R15, tumor size, blood loss and major resection (Table 2, P < 0.05 for all). In multivariate analysis, aspartate aminotransferase, ICG-R15 and major hepatectomy were confirmed as independent predictors of postoperative major complications in HBV-related HCC patients (Table 2, P < 0.05 for all).
Independent Predictors of Severe PHLF
Univariate logistic regression analyses indicated that prothrombin time, prealbumin, albumin, CSPH, cirrhosis, Child-Pugh, MELD, ALBI, ICG-R15, tumor size, portal invasion or extrahepatic spread, and major hepatectomy were related to severe PHLF (Table 3, P < 0.05 for all). In multivariate analysis, prothrombin time, cirrhosis, ICG-R15 and major hepatectomy were identified as independent predictive variables of severe PHLF in HBV-related HCC patients (Table 3, P < 0.05 for all).
Discriminative Abilities of the Models for Major Complications
The AUC of the ICG-R15 (AUC 0.789, 95% confidence interval (c.i.) 0.707 to 0.872) for predicting postoperative major complications was higher than those of the Child-Pugh, MELD and ALBI scores. The DCA plot showed that ICG-R15 has a better net benefit and a wider range of threshold probabilities in assessing postoperative major complications (Figure 1B). Accordingly, ICG-R15 was superior in estimating postoperative major complication risk.
Discriminative Abilities of the Models for Severe PHLF
The AUC of ICG-R15 for predicting severe PHLF was also higher than those of the other scores (Figure 2A, P < 0.05 for all). In addition, the DCA plot indicated that ICG-R15 has a better net benefit and a wider range of threshold probabilities in predicting severe PHLF (Figure 2B). Thus, ICG-R15 also showed a significant advantage in predicting severe PHLF.
Subgroup Analyses
Subgroup analyses were performed according to cirrhosis status, intraoperative status (hepatectomy, blood loss and blood transfusion), and tumor stage. In all subgroups, the AUC values of ICG-R15 for predicting major postoperative complications (Figure 3 and Supplementary Table 2; P < 0.05 for all) and severe PHLF (Figure 4 and Supplementary Table 3; P < 0.05 for all) were significantly higher than those of the other scoring systems.
Application of the ICG-R15 in Patients Risk Stratification
The 50th percentile of ICG-R15 was 4.6%, and the 85th percentile was 9.9%. Three risk groups were then generated (low-risk ≤4.6%, intermediate-risk 4.6-9.9%, and high-risk >9.9%). The incidence of postoperative major complications and severe PHLF differed significantly among the ICG-R15 risk subgroups.
Discussion
In this research, we compared four methods (Child-Pugh, MELD, ALBI and ICG-R15) for assessing postoperative major complications and severe PHLF in HBV-related HCC patients after hepatectomy. We found that ICG-R15 was an independent predictor of postoperative major complications and severe PHLF, and that its predictive abilities were significantly higher than those of the other scoring systems. Furthermore, ICG-R15 also showed clear advantages in predicting postoperative major complications and severe PHLF in subgroup analyses based on cirrhosis status, intraoperative status (hepatectomy, blood loss and blood transfusion), and tumor stage. In addition, the incidence of postoperative major complications and severe PHLF increased across the ICG-R15-based risk strata.
PHLF is the most serious complication after hepatectomy and may lead to death. [4][5][6][7][8] To reduce the risk of postoperative major complications and severe PHLF, it is of great significance to estimate hepatic functional reserve prior to surgery. Commonly, the Child-Pugh, MELD and ALBI scores are the three tools applied for hepatic functional reserve assessment, but each has defects that limit its wide clinical application. [9][10][11][12] Recently, with the development of noninvasive pulse spectrophotometers, the ICG-R15 test has become a standard preoperative parameter for assessing liver function reserve prior to hepatectomy, even in patients with sepsis in intensive care units, hepatosteatosis, or acute hepatitis, or those receiving chemotherapy. [13][14][15][16][17][18] However, it was not clear which of the four mentioned models is the optimal method to assess liver function reserve in HBV-related HCC patients prior to hepatectomy. To address this issue, we first carried out univariate logistic regression analyses to find indicators predicting postoperative major complications and severe PHLF. As expected, each of the four methods alone showed significant differences in predicting major postoperative complications and severe PHLF. However, when other factors were taken into account in multivariate logistic analysis, only ICG-R15 remained an independent predictor of postoperative major complications and severe PHLF. These findings preliminarily revealed that ICG-R15 is a better predictor of postoperative major complications and severe PHLF than the other models. Furthermore, the ROC curve analyses showed that ICG-R15 had higher AUCs for predicting postoperative major complications and severe PHLF than the other three models, and the DCA plots suggested that ICG-R15 had a better net benefit and a wider range of threshold probabilities. These results further verified that ICG-R15 has significantly higher predictive power than the other three models in assessing postoperative major complications and severe PHLF.
In addition, many studies have shown that cirrhosis background, intraoperative status (extent of hepatectomy, blood loss and blood transfusion) and tumor stage are also independent predictors of postoperative complications. 6,19,25 In our research, only major hepatectomy was consistently an independent risk parameter for predicting both major complications and severe PHLF, while cirrhosis was an independent predictor only for severe PHLF. We then compared, within these subgroups, the predictive abilities of the four methods for postoperative major complications and severe PHLF. Notably, in all subgroup analyses, ICG-R15 showed stable and satisfactory predictive performance and was superior to the other three models.
On the basis of risk stratification, this study further analyzed the relationship between ICG-R15 and postoperative major complications and severe PHLF. The incidence of postoperative major complications and severe PHLF differed significantly among the three risk groups and, unsurprisingly, was markedly higher in the high-risk cohort than in the other two groups. It can therefore be concluded that hepatectomy should be selected with caution in the high-risk population. However, there are some limitations to our research. Firstly, all included patients had HBV-related HCC, and other etiologies, such as hepatitis C virus or alcoholic liver disease, still need to be studied. Moreover, this is a retrospective, single-center study, and larger multicenter studies are required to verify our findings.
Data Sharing Statement
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Ethics Approval and Consent to Participate
The study was conducted in compliance with the Helsinki Declaration and approved by the institutional Ethics Committee of Guangxi Medical University Cancer Hospital, and all patients provided written informed consent.
Author Contributions
RYM and TB contributed equally to this work. All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
Funding
The study was supported by the Natural Science Foundation of Guangxi (NO. 8186110284) and Guangxi Traditional Chinese Medicine AppropriateTechnology Development and Promotion Project (GZSY20-18).
Disclosure
The authors declare that they have no competing interests.
Concurrent bloodstream infection with Lodderomyces elongisporus and Candida parapsilosis
We report the case of a 54-year-old patient with central venous catheter related mixed candidaemia with Lodderomyces elongisporus and Candida parapsilosis, who responded to line removal and anidulafungin therapy. Mixed candidaemia was detected on Candida chromogenic agar. Identification of the two isolates was confirmed by MALDI-TOF MS (Bruker). Antifungal susceptibility testing revealed different antifungal MICs. This is the first reported case of mixed Lodderomyces candidaemia and outlines laboratory methodology to aid diagnosis and management.
Introduction
Lodderomyces elongisporus is an uncommon cause of candidaemia, usually associated with immunosuppression or intravenous access devices [1][2][3][4][5][6]. It is one of four species recognised within the Candida parapsilosis complex; other species include C. parapsilosis, C. orthopsilosis and C. metapsilosis [7][8][9]. These four species are physiologically similar and identification by biochemical methods is unreliable, with both Vitek 2 (bioMérieux) and API 20C (bioMérieux) systems generally misidentifying isolates as C. parapsilosis [1]. L. elongisporus is currently on neither database. On chromogenic Candida agar, L. elongisporus colonies are typically turquoise while other species within the C. parapsilosis complex are pink/lavender [1][2][3]6]. MALDI-TOF mass spectrometry or PCR amplification and DNA sequencing of the internal transcribed spacer region and/or D1/D2 domain of the rRNA gene accurately identify all four species within the C. parapsilosis complex [10].
Mixed candidaemia is uncommon, with rates of 3%-5% [11][12][13]. Correct identification of mixed infection and speciation of organisms isolated ensures appropriate management. We report the case of a patient with line-associated mixed candidaemia with Lodderomyces elongisporus and Candida parapsilosis, who responded to anidulafungin and line removal.
Case
A 54-year-old woman was admitted to hospital with recurrent stoma malfunction and prolapse requiring surgical revision, on a background of prior total colectomy and ileostomy formation for chronic pseudoobstruction, and short gut syndrome. A long-term Hickman line for total parenteral nutrition (TPN) was last replaced 10 months prior to admission, with a history of previous Hickman line infections secondary to enteric and skin flora.
The patient had a fever on day +1 of admission and a set of blood cultures was collected from the patient's Hickman line. Budding yeasts were identified on microscopy of the anaerobic BacT/ALERT® (bioMérieux) blood culture bottle after 23 hours of incubation. Empiric anidulafungin (200mg IV loading dose, followed by 100mg IV daily) was commenced. The subsequent isolate was identified on MALDI-TOF (Bruker) as C. parapsilosis (score = 2.1) after sub-culture. Yeasts were also identified in the aerobic bottle after 28.5 hours of incubation. After 2 days of incubation of the sub-culture of the aerobic bottle, 2 morphologic colony types were noted on chocolate agar, subsequently identified on MALDI-TOF as C. parapsilosis (score 2.26) and L. elongisporus (score 2.04). L. elongisporus colonies were observed to have lighter pigmentation than C. parapsilosis. Sub-culture of the aerobic blood culture bottle revealed pink and turquoise colonies after overnight incubation on Chromogenic Candida Agar (ThermoFisher Scientific) (Fig. 1). MALDI-TOF of the green/blue colonies identified these as L. elongisporus (score 2.15). Antifungal susceptibility testing of both isolates was performed using the Sensititre™ YeastOne™ YO10 plate (ThermoFisher Scientific). Anidulafungin was continued for 14 days following Hickman line removal (ceased on day +20). There was no growth in blood cultures collected on day +10 or subsequently, and the patient improved clinically, with resolution of fever following Hickman removal.
Discussion
Mixed candidaemia is difficult to identify using non-differential media alone and may have important impact when selecting antifungal therapy. When yeast is observed in positive blood cultures by microscopy, our laboratory routinely sub-cultures onto Chromogenic Candida agar to aid in identification of mixed candidaemia. Although two phenotypes were observed on chocolate agar, identification of mixed yeasts was more readily seen on chromogenic agar.
Antifungal susceptibility patterns of the two isolates were similar but not identical, with low minimum inhibitory concentrations (MICs) to all agents tested. This is consistent with the few published isolates [1][2][3][4][5][6]9]. Although categorically comparable, MICs for anidulafungin and micafungin had more than fourfold difference between our isolates. Local Australian Therapeutic Guidelines [14] and the Infectious Diseases Society of America guidelines [15] recommend empirical echinocandins as first-line therapy for candidaemia. Fluconazole may be used instead in select non-critically ill patients unlikely to have a fluconazole-resistant Candida species. As the patient tolerated anidulafungin and had a clinical response, the decision was made not to alter therapy once antifungal susceptibility results were available.
In addition, consideration of removal of central venous catheters if present must be part of treatment of candidaemia, particularly in nonneutropaenic patients or those in whom a line is considered the source of infection [16]. Our laboratory uses the method described by Maki et al. for processing of central venous catheter tips [17]. L. elongisporus was not identified on tip culture of the Hickman line in our case, which may have been due to the relatively low inoculum of L. elongisporus compared with C. parapsilosis or misidentification as catheter tip specimens are not routinely plated onto chromogenic agar. As the Hickman line was the likely source of candidaemia, removal was an important part of therapy.
Ethical form
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors have no conflicts of interest to disclose. Consent was obtained from the patient to publish this case report.
Declaration of competing interest
There are none.
Modelling potential cost savings from use of real‐time continuous glucose monitoring in pregnant women with Type 1 diabetes
Abstract Aim To investigate potential cost savings associated with the use of real‐time continuous glucose monitoring (RT‐CGM) throughout pregnancy in women with Type 1 diabetes. Methods A budget impact model was developed to estimate, from the perspective of National Health Service England, the total costs of managing pregnancy and delivery in women with Type 1 diabetes using self‐monitoring of blood glucose (SMBG) with and without RT‐CGM. It was assumed that the entire modelled cohort (n = 1441) would use RT‐CGM from 10 to 38 weeks’ gestation (7 months). Data on pregnancy and neonatal complication rates and related costs were derived from published literature, national tariffs, and device manufacturers. Results The cost of glucose monitoring was £588 with SMBG alone and £1820 with RT‐CGM. The total annual costs of managing pregnancy and delivery in women with Type 1 diabetes were £23 725 648 with SMBG alone, and £14 165 187 with SMBG and RT‐CGM; indicating potential cost savings of approximately £9 560 461 from using RT‐CGM. The principal drivers of cost savings were the daily cost of neonatal intensive care unit (NICU) admissions (£3743) and the shorter duration of NICU stay (mean 6.6 vs. 9.1 days respectively). Sensitivity analyses showed that RT‐CGM remained cost saving, albeit to lesser extents, across a range of NICU costs and durations of hospital stay, and with varying numbers of daily SMBG measurements. Conclusions Routine use of RT‐CGM by pregnant women with Type 1 diabetes, would result in substantial cost savings, mainly through reductions in NICU admissions and shorter duration of NICU care.
Introduction
Type 1 diabetes during pregnancy is associated with increased risks of adverse outcomes such as pre-eclampsia, premature delivery, perinatal morbidity and admission to a neonatal intensive care unit (NICU) [1][2][3][4], which are at least partly attributable to suboptimal glycaemic control as measured by maternal glycated haemoglobin (HbA 1c ) levels [5]. For this reason, the UK National Institute for Health and Care Excellence (NICE) has recommended that glycaemic control should be optimized before and during pregnancy in women with Type 1 diabetes, with self-monitoring of blood glucose (SMBG), at least four, and up to 10 times daily [6,7]. Despite frequent glucose monitoring, optimal glucose control is often difficult to achieve due to pregnancy-related changes in insulin sensitivity and day-to-day variations in insulin pharmacokinetics with advancing gestation [8][9][10].
Real-time continuous glucose monitoring (RT-CGM) offers the potential to improve glycaemic control, compared with SMBG because it provides real-time data on changing glucose concentrations, thereby enabling users to take appropriate action in response to glucose fluctuations [11,12]. The potential value of this approach has been demonstrated in the Continuous Glucose Monitoring in Women with Type 1 Diabetes in Pregnancy Trial (CONCEPTT), in which the use of RT-CGM, in addition to SMBG, resulted in improvements in time in glycaemic target ranges during the second and third trimesters. This was accompanied by improved neonatal outcomes such as fewer large for gestational age infants, fewer NICU admissions > 24 h, less neonatal hypoglycaemia, and a shorter duration of hospitalization among infants of mothers using SMBG and RT-CGM [11]. Importantly, the treatment effect of RT-CGM was comparable in women receiving insulin pump therapy, and in those receiving multiple daily injections (MDI). This is consistent with the experience of RT-CGM users outside pregnancy, and suggests that the potential benefits of RT-CGM are applicable to a broad population of people with Type 1 diabetes [13].
Because RT-CGM and insulin delivery technologies are expensive, it is important to demonstrate the budgetary impact of these advancing technologies in clinical practice. Such evidence can be obtained through the use of budget impact models, which estimate the affordability of an intervention in a specific population over a short-term time horizon [14]. Our aim was to develop a budget impact model to estimate the costs and potential cost savings associated with the introduction of RT-CGM in pregnant women with Type 1 diabetes.
Methods
A model was developed to estimate, from the perspective of National Health Service (NHS) England, the costs associated with the use of RT-CGM by pregnant women with Type 1 diabetes. It assumes that RT-CGM is used throughout pregnancy for ~28 weeks (from 10 to 38 weeks' gestation), and that neonates not admitted to a NICU stayed on a normal postnatal ward (Fig. 1). The model was constructed in Microsoft Excel, v1808 (Microsoft Corp, Redmond, WA, USA), and is available from the authors.
Model inputs
Model inputs are summarized in Table 1. Based on data from the 2014-2016 UK National Pregnancy Diabetes Audit, indicating 4323 pregnant women with Type 1 diabetes over 3 years, we estimated that there were on average 1441 women per year throughout England [18]. Data on rates of complications (pre-eclampsia and NICU admission), durations of hospitalization or NICU stay, and frequency of glucose monitoring by RT-CGM or SMBG, were derived from CONCEPTT [11] and NICE guidance for the management of diabetes during pregnancy [6,7]. The indications for NICU admission and country-to-country NICU admission data were assessed post hoc after peer review.

What's new?
• Real-time continuous glucose monitoring (RT-CGM) improves neonatal health outcomes, with fewer large for gestational age infants, fewer neonatal intensive care unit (NICU) admissions and a shorter neonatal length of hospital stay.
• It is not known whether the costs of implementing RT-CGM into National Health Service England antenatal care, would be offset by the reduction in neonatal complications.
• The approximately threefold higher costs of RT-CGM use, compared with self-monitoring of blood glucose (£1820 vs. £588), are offset by substantial cost savings, mainly through reductions in NICU admissions and a shorter duration of NICU stay.

Neonates admitted to a NICU also had a stay in the postnatal ward, either before or after NICU admission. The duration of these stays was recorded, and if this was less than 24 h the corresponding cost of the postnatal ward admission was not included in the cost calculation; hence, this calculation can be considered conservative. Based on data from CONCEPTT, it was assumed that women would use a mean of four CGM sensors per month, giving a total of 28 sensors between 10 and 38 weeks' gestation. In addition, based on the NICE guidelines on pregnancy (NG3) and management of Type 1 diabetes (NG17) [6,7], it was assumed that women would make an average of 10 fingerstick measurements per day if they were using SMBG alone, and four if they were using SMBG together with RT-CGM.
Costs of managing complications and glucose monitoring were derived from the 2018/2019 NHS National Tariffs [16], NICE guidance [6,7], a published clinical trial of glycaemic control in paediatric intensive care units [15], and commercial data from Medtronic Ltd (Watford, UK). NICE data show that the mean costs of normal and complicated deliveries are £1957 and £3357 respectively, and hence the incremental cost of a complicated pregnancy, compared with normal pregnancy is £1400 [19]. Because the NICE guidance states that women with pre-eclampsia undergo deliveries with complications and comorbidities [7], this incremental cost was multiplied by the proportion of women with preeclampsia. Costs associated with the management of diabetes (e.g. costs of insulin therapy) were not included in the model which focuses on glucose monitoring rather than mode of insulin delivery. All costs are reported as 2018 GBP (£).
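The cost arithmetic described above can be sketched in a few lines. The monitoring costs, NICU daily cost, NICU durations, base-case pre-eclampsia rate and incremental delivery cost are taken from the text; the per-arm NICU admission rates below are hypothetical placeholders (the model's actual admission inputs are not reproduced here), so the totals are illustrative only and do not match the published figures.

```python
# Illustrative sketch of the budget impact calculation described above.
# Monitoring costs, NICU daily cost, NICU durations, pre-eclampsia rate
# and the incremental delivery cost come from the text; the per-arm NICU
# admission rates below are hypothetical placeholders, so the totals are
# illustrative and do not reproduce the published results.

N_WOMEN = 1441             # pregnant women with Type 1 diabetes per year
COST_SMBG = 588            # glucose monitoring, SMBG alone (GBP)
COST_RTCGM = 1820          # glucose monitoring, SMBG + RT-CGM (GBP)
NICU_COST_PER_DAY = 3743   # GBP per day
DELIVERY_INCREMENT = 1400  # complicated vs. normal delivery (GBP)
PREECLAMPSIA_RATE = 0.18   # base-case proportion of complicated deliveries

def annual_cost(monitoring_cost, nicu_rate, nicu_days):
    """Total annual cohort cost: monitoring + NICU + delivery increment."""
    per_woman = (monitoring_cost
                 + nicu_rate * nicu_days * NICU_COST_PER_DAY
                 + PREECLAMPSIA_RATE * DELIVERY_INCREMENT)
    return N_WOMEN * per_woman

# Hypothetical NICU admission rates (placeholders, not CONCEPTT values):
cost_smbg = annual_cost(COST_SMBG, nicu_rate=0.40, nicu_days=9.1)
cost_rtcgm = annual_cost(COST_RTCGM, nicu_rate=0.30, nicu_days=6.6)
saving = cost_smbg - cost_rtcgm
print(f"SMBG: {cost_smbg:,.0f} GBP; RT-CGM: {cost_rtcgm:,.0f} GBP; "
      f"saving: {saving:,.0f} GBP")
```

The structure makes clear why the NICU term dominates: the higher monitoring cost of RT-CGM is outweighed whenever the reductions in admission rate and length of stay are large enough, and sweeping `NICU_COST_PER_DAY` over a range of values gives the shape of a one-way sensitivity analysis.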
Sensitivity analyses
The base case analysis assumed that 18% of deliveries would be complicated by pre-eclampsia [11], the mean cost of NICU was £3743 per day, and the mean duration of NICU care when RT-CGM was used with SMBG, compared with SMBG alone was 6.6 vs. 9.1 days, respectively (unpublished CONCEPTT data). A number of sensitivity analyses were performed to determine the cost impact of varying different inputs. One-way analyses explored the impact of varying the proportion of complicated deliveries from 18% to 32%, and of varying the daily cost of NICU care from £3743 to £2400 or £3800. Two-way analyses investigated the potential cost impact of using between four and 12 blood glucose strips per day, and of durations of normal postnatal ward hospitalization (excluding NICU) of between 1 and 6 days. It is now possible to use RT-CGM without SMBG, so the possibility of RT-CGM use with zero to four SMBG measurements was assessed post hoc, after peer review.
Results
In the modelled population (n = 1441), the total annual costs of glucose monitoring and the management of pregnancies and deliveries in women with Type 1 diabetes were £23 725 648 when glucose monitoring was performed by SMBG alone (Table S1). These costs decreased to £14 165 187 when it was assumed that the entire modelled cohort used RT-CGM together with SMBG during pregnancy (Fig. 2). Hence, the potential cost-saving resulting from RT-CGM use was approximately £9 560 461. The principal drivers of this saving were the daily cost of NICU care (£3743) and the shorter duration of NICU care when RT-CGM was used with SMBG, compared with SMBG alone (6.6 vs. 9.1 days, respectively). The main reasons for NICU admission were preterm delivery (63%), neonatal hypoglycaemia treated with intravenous dextrose (56%), neonatal hyperbilirubinemia (54%) and respiratory distress (26%), with comparable indications for NICU admission when RT-CGM was used with SMBG, compared with SMBG alone. The UK sites had the highest proportion of NICU admissions (63%), followed by Canada (34%) with only one or none in Spain, Italy, Ireland and the USA (Table S2).
The impact of changes in complication rates and NICU costs on the potential cost savings achievable with RT-CGM was examined in sensitivity analyses. In the base case analysis, it was assumed that, in the absence of RT-CGM, 18% of deliveries would be complicated by pre-eclampsia. Increasing this proportion resulted in a progressive increase in the potential savings achievable with RT-CGM, which reached £9 842 896 with a complication rate of 32%. Further analysis showed that RT-CGM was still cost-saving when the daily cost of NICU care was reduced from the base case value of £3743 to £2400 (potential saving £5 444 736), and that the savings increased to £9 735 141 when the daily cost was increased to £3800 (Fig. S1).
Further sensitivity analyses examined the impact of SMBG strip use and length of non-NICU postnatal ward stay. The potential savings associated with RT-CGM use increased from approximately £9.1 million to £9.7 million when the mean number of daily fingerstick measurements in the SMBG group was varied between 4 and 12, respectively (Table S3). Furthermore, RT-CGM remained cost-saving, albeit to lesser extents, when the number of SMBG measurements in the RT-CGM users was increased from four to seven. In addition, greater cost savings were observed when the number of SMBG measurements in the RT-CGM users was reduced to zero, demonstrating the potential savings of newer CGM systems with reduced and/or no need for additional SMBG tests. Similarly, decreasing the duration of postnatal (non-NICU) ward hospitalization from 3 days to 1 day, among RT-CGM users, increased the potential savings achievable (Fig. S2). The maximum potential saving was £11 145 546 when duration of non-NICU postnatal ward admission increased from 3 to 6 days in SMBG users and decreased from 3 days to 1 day among RT-CGM users (Table S4).
Discussion
This study has shown that the routine use of RT-CGM by pregnant women with Type 1 diabetes could produce savings to the NHS of approximately £9.6 million, mainly through reductions in NICU admissions and a shorter duration of NICU stay. Furthermore, RT-CGM remained cost saving, albeit to lesser extents, across a range of NICU daily costs, durations of NICU stay and varied number of daily SMBG measurements. Our model highlights the impact of NICU admissions on the total costs associated with the management of Type 1 diabetes during pregnancy. By contrast, the costs of postnatal ward admissions, in infants not admitted to NICU, and before or after NICU admission, account for smaller proportions of the total costs.
In this budgetary impact model, the cost of RT-CGM use from 10 to 38 weeks' gestation was approximately threefold higher than that of SMBG alone (£1820 vs. £588 respectively), with the assumption that 10 SMBG measurements would be made per day in SMBG users [7]. Nevertheless, sensitivity analyses showed that RT-CGM still remained cost-saving, when SMBG measurements were reduced to less than four per day.
Furthermore, the observed savings may be underestimates because we conservatively assumed that only 18% of pregnancies would be impacted by the additional costs associated with a complicated delivery (£3357 for complicated and £1957 for normal delivery [19]). Additional obstetric morbidities such as hypertensive disorders of pregnancy (any gestational hypertension, worsening of preexisting hypertension) as well as maternal morbidity relating to large for gestational age birthweight (postpartum haemorrhage and perineal trauma) were not included with the incremental complicated delivery costs.
Data on the cost-effectiveness of RT-CGM during pregnancy are scarce [20,21]. A recent systematic review [21] identified only two studies that directly compared CGM with capillary glucose monitoring [22,23], neither of which included cost data. It is noteworthy that in CONCEPTT, the numbers needed to treat with CGM to prevent one neonatal complication were low; six for NICU admissions and large for gestational age, and eight for neonatal hypoglycaemia [11]. This suggests that the potential cost savings seen in the present analysis are achievable. Furthermore, more than 50% of pregnant women in CONCEPTT were using multiple daily injections [11], and hence the costs of insulin treatment would have been lower than with pump therapy. By contrast, in the Juvenile Diabetes Research Foundation study, ~90% of adults with Type 1 diabetes were using insulin pump therapy [24]. Importantly, the clinical efficacy of RT-CGM in women using insulin pump therapy and multiple daily injections was comparable, although rates of NICU admission > 24 h were higher among insulin pump users [25]. However, the costs of insulin therapy were not included in our model, so we cannot draw conclusions about the potential costs of RT-CGM in women using pumps or multiple daily injections.
Because Type 1 diabetes during pregnancy is associated with increased risks of serious pregnancy complications such as congenital abnormalities, stillbirth and neonatal mortality, it imposes particular clinical, societal and financial burdens on healthcare systems [26]. Large for gestational age remains the most common complication, affecting half of all infants born to mothers with Type 1 diabetes, and increases risk for obstetric complications including shoulder dystocia, instrumental and/or operative delivery and postpartum haemorrhage [27]. These costs are considered only in the duration of NICU and postnatal hospitalization. Recent data confirm that the risk of adolescent obesity is 1.5 times higher in infants born large for gestational age [28], suggesting that the acceleration of BMI and sustained obesity persist throughout childhood and adolescence. The longer-term costs associated with childhood overweight and obesity attributable to large for gestational age birthweight in Type 1 diabetes pregnancy are unknown.
Strengths of the present study include the use of outcome data from a multicentre randomized controlled trial, robust sensitivity analyses and the use of contemporary National Diabetes Pregnancy data in the model. Approximately two-thirds of NICU admissions occurred in the UK, making these data representative of the factors affecting NICU admission in the NHS. In addition, the model inputs have been varied to reflect different scenarios of SMBG use and NICU costs, with RT-CGM found to be consistently cost saving. The reductions in large for gestational age neonates, neonatal hypoglycaemia and NICU admissions in RT-CGM users were generalizable across 31 centres from the UK, Canada, Spain, Italy, Ireland and the USA, so although there is no reason to assume that the potential for cost savings would vary substantially in different healthcare settings, they may be most applicable in settings with high NICU admission rates. The study has additional limitations. The modelled population is restricted to England, which may limit the generalizability of our findings, although pregnancy outcome data are comparable with studies from other Northern European, Canadian and USA healthcare settings [1,2,4,5,29,30]. A further potential limitation is that costs associated with the treatment of diabetes, such as diabetes educator time and costs of insulin therapy, were excluded from the model. As a result, it is not possible to determine whether, or to what extent, these costs affect the RT-CGM cost savings. Furthermore, the RT-CGM used during CONCEPTT has been superseded by newer CGM systems with a longer sensor lifespan. Recent improvements in sensor accuracy and reduced need for pre-meal SMBG and/or additional calibration tests also mean that the current costs of glucose monitoring with modern CGM devices may now be lower.
The results of this study have important implications for clinicians and policy-makers. Current NICE guidance recommends that women with diabetes should aim to achieve an HbA1c level of < 48 mmol/mol (< 6.5%) [6], but achieving this level of control throughout pregnancy is often difficult. It was achieved by only 40% of women with Type 1 diabetes in England and Wales, with substantial variability across different maternity clinics [26]. By contrast, the NICE target HbA1c was achieved by 66% of women in CONCEPTT, with no heterogeneity across differing baseline maternal HbA1c levels or across countries. Pregnant women are often among the early adopters of advanced diabetes technologies, with data from the US T1D Exchange clinic registry suggesting that approximately one-third of participants used CGM and three-quarters used insulin pump therapy [29]. The Belgian healthcare authorities have authorized reimbursement of RT-CGM for insulin pump users with Type 1 diabetes treated in selected specialized diabetes centres. Initial data from over 500 users, including 66 women who were pregnant and/or planning pregnancy, suggested potential for sustained improvements in glucose control for up to 12 months [30]. Inclusion of diabetes technology use (both RT-CGM and insulin pump therapy) as key metrics in national and international Diabetes Pregnancy data sets is needed to determine whether the clinical and cost-effectiveness demonstrated in CONCEPTT can be translated into real-world NHS clinical settings.
In conclusion, our results suggest that the higher costs of RT-CGM, compared with SMBG alone, are offset by savings in NICU care. The cost savings associated with RT-CGM use are achieved mainly through reductions in NICU admission rates, and in the shorter length of NICU stay. This is an important message for clinicians and healthcare providers, given that 40% of infants born to mothers with Type 1 diabetes are admitted to NICU [26]. Routine use of RT-CGM by pregnant women with Type 1 diabetes would result in substantial cost savings to the NHS, and probably to other healthcare systems. Recent improvements in sensor accuracy and duration mean that current RT-CGM use may result in more substantial cost savings.
Ethical approval
The clinical study protocol was approved by the Health Research Authority, East of England Research Ethics Committee (12/EE/0310) for all UK sites and at each individual centre for all other sites. All participants provided written informed consent.
Funding sources
Funding for the development of the economic model reported here was provided by Medtronic. CONCEPTT was funded by JDRF grants #17-2011-533, and grants under the JDRF Canadian Clinical Trial Network (CCTN), a public-private partnership including JDRF and FedDev Ontario and supported by JDRF #80-2010-585. Medtronic supplied the CGM sensors and CGM systems at a reduced cost. HRM is supported by Tommy's charity.
Supporting Information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Doc. S1. CONCEPTT Collaborative Group.
Figure S1. Tornado plot showing results of one-way sensitivity analyses of the impact of varying neonatal intensive care unit costs from £2400 to £3800.
Figure S2. Results of two-way sensitivity analyses showing impact of varying duration of postnatal ward care from 1 to 6 days for both cohorts.
Table S1. Costs of management of Type 1 diabetes pregnancies and deliveries, when glucose monitoring was performed with and without real-time continuous glucose monitoring.
Table S2. Diagnoses of neonates admitted to the neonatal intensive care unit.
Table S3. Sensitivity analyses of the impact of changes in the daily number of SMBG measurements on the potential cost savings with RT-CGM use.
Table S4. Sensitivity analyses of the impact of changes in the duration of neonatal hospital admission (postnatal ward without NICU) on the potential cost savings with RT-CGM use.
Localization in Lattice Gauge Theory and a New Multigrid Method
We show numerically that the lowest eigenmodes of the 2-dimensional Laplace-operator with SU(2) gauge couplings are strongly localized. A connection is drawn to the Anderson-Localization problem. A new Multigrid algorithm, capable to deal with these modes, shows no critical slowing down for this problem.
It is well-known that the convergence of local relaxation algorithms for solving inhomogeneous linear equations of the form Lξ = f is determined by the lowest eigenmodes of the problem matrix.
We studied these modes in an example coming from Lattice Gauge Theory, using

L = −△ − ε_0 + δm² .   (0.1)

Here △ is the 2-dimensional covariant Laplace operator, i.e. the discretized Laplace operator with SU(2) matrices as nearest-neighbour couplings (see sect. 1.1 for a more thorough explanation of this), ε_0 is its lowest eigenvalue and δm² is the critical parameter. The lowest eigenvalue has to be subtracted to make the problem critical; otherwise there would be no critical slowing down and no need to apply a multigrid. This subtraction of a constant obviously does not affect the shape of the eigenvectors. Physically, our model corresponds to a Higgs doublet with an SU(2) charge. It is found that the lowest eigenmodes of this model are strongly localized, i.e. they are appreciably large only in a small region of the grid. This result will be described in the first part of this paper. In the second part a recently proposed multigrid algorithm [1] and its performance on the model problem are investigated. This algorithm works extremely well (it eliminates critical slowing down completely), because it is able to handle the localized modes.
Localized modes in a model problem
The fundamental form of an inhomogeneous equation is

L ξ = f .   (1.1)

If we choose L as in eq. (0.1), our equation becomes the propagator equation for a bosonic particle in an SU(2) gauge field background. Acting on a field ξ, the covariant Laplace operator reads

(−△ ξ)(z) = 4 ξ(z) − Σ_{µ=±1,±2} U_{z,µ} ξ(z + µ) ,   (1.2)

with U_{z,µ} ∈ SU(2). The second index denotes the direction of the coupling to the neighbour. The link matrices fulfill U_{z,−µ} = U*_{z−µ,µ}. They are distributed according to the Wilson action [2]

S_W = (β/4) Σ_P Re tr (1 − U_P) ,   (1.3)

where β = 4/g² is the inverse coupling, the sum runs over all plaquettes of the lattice, and U_P denotes the parallel transport around the plaquette. This distribution leads to a correlation between the gauge field matrices with finite correlation length χ for finite β. The case β = 0 corresponds to a completely random choice of the matrices (χ = 0); for β = ∞ all matrices are 1 (χ = ∞). In this sense β is a disorder parameter: the smaller β, the shorter the correlation length and the larger the disorder. Now we want to study the lowest and the highest eigenmodes of this operator. If we look only at the norm of the eigenmodes, we can see immediately that the lowest and the highest eigenmodes will look identical, because of the following
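As an illustration, the covariant Laplacian just described can be built explicitly as a dense matrix. The NumPy sketch below uses Haar-random SU(2) links rather than links equilibrated with the Wilson action, i.e. it corresponds to the fully disordered β = 0 case; it is a minimal construction, not the authors' code.

```python
import numpy as np

def random_su2(rng):
    """A Haar-random SU(2) matrix: a0*I + i*(a . sigma) with |a| = 1."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

def covariant_laplacian(L, links):
    """Dense matrix of the covariant -Laplacian on an LxL periodic lattice.

    links[z, mu] is the SU(2) matrix on the link from site z in direction
    mu (mu = 0, 1); each site carries a 2-component colour vector, so the
    resulting matrix is (2 L^2) x (2 L^2) and Hermitian."""
    n = L * L
    M = np.zeros((2 * n, 2 * n), dtype=complex)
    for x in range(L):
        for y in range(L):
            z = x * L + y
            M[2*z:2*z+2, 2*z:2*z+2] = 4.0 * np.eye(2)
            for mu, (dx, dy) in enumerate([(1, 0), (0, 1)]):
                zp = ((x + dx) % L) * L + (y + dy) % L
                U = links[z, mu]
                M[2*z:2*z+2, 2*zp:2*zp+2] -= U           # forward hop
                M[2*zp:2*zp+2, 2*z:2*z+2] -= U.conj().T  # backward hop
    return M

rng = np.random.default_rng(0)
L = 6
links = np.array([[random_su2(rng) for _ in range(2)] for _ in range(L * L)])
M = covariant_laplacian(L, links)
eps0 = np.linalg.eigvalsh(M)[0]  # lowest eigenvalue, the epsilon_0 of eq. (0.1)
```

With all links set to the identity the spectrum reduces to the free lattice Laplacian, 4 − 2cos(2πk₁/L) − 2cos(2πk₂/L), lying between 0 and 8; with random links the lowest eigenvalue is lifted above zero, which is why ε_0 is subtracted in eq. (0.1).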
Theorem: Let L be an (n × n)-matrix with the following properties:
• L has a constant diagonal α,
• for all i, j with i ≠ j and i + j even: L_ij = 0.
Then the following statement holds: If ξ^k is an eigenvector of L to the eigenvalue λ_k (k < n/2), then the vector with components (−1)^j ξ^k_j is an eigenvector to the eigenvalue λ_{n−k} = 2α − λ_k.
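A quick numerical check of the theorem, on a random symmetric matrix satisfying its hypotheses (constant diagonal, couplings only between indices of opposite parity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
alpha = 4.0

# Random symmetric matrix with constant diagonal alpha and couplings
# only between indices of different parity (i + j odd), as required.
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        if (i + j) % 2 == 1:
            A[i, j] = A[j, i] = rng.normal()
np.fill_diagonal(A, alpha)

lam, vec = np.linalg.eigh(A)  # eigenvalues in ascending order

# The spectrum is symmetric about alpha: lambda_k + lambda_{n-1-k} = 2*alpha.
assert np.allclose(lam + lam[::-1], 2 * alpha)

# Flipping the sign on odd indices maps the eigenvector of lambda_k
# to an eigenvector of 2*alpha - lambda_k.
signs = (-1.0) ** np.arange(n)
v = signs * vec[:, 0]
assert np.allclose(A @ v, (2 * alpha - lam[0]) * v)
```

The underlying reason is that the sign flip S = diag((−1)^j) anticommutes with the off-diagonal (bipartite) part K of L = αI + K, so S L S = 2αI − L.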
This theorem is also true if the matrix elements are matrices themselves. It applies to the Laplace operator if its matrix elements are ordered in the right way (checkerboard fashion). Fig. 2 shows the norm of the lowest eigenmode of the covariant Laplace operator on a 64²-lattice at β = 1.0 and on a 32²-lattice at β = 5.0. For the smaller β-value the localization can be seen clearly; for larger β the localization is in a more extended domain.
This of course poses the question whether all modes are localized. To answer this, we calculated all modes on a smaller grid, using a standard library routine. It was found that only a few of the low modes show localization. It might be possible that on very large grids (with very low values of β) many of the modes are localized, as localization increases with the disorder. But to study this a large computational effort would be needed. (The simple storage of all eigenvectors on a 128²-lattice would need about 1 Gigabyte.) It is expected that the sharpness of localization will increase with the disorder, i.e. with decreasing β. To study this effect we may look at the participation ratio defined as [3]

α(ξ) = (Σ_z |ξ_z|²)² / (N Σ_z |ξ_z|⁴) ,   (1.4)

where N denotes the number of grid points. This quantity measures the fraction of the lattice over which the peak of the localized state ξ is spread. Fig. 3 shows the participation ratio of the lowest eigenmode as a function of β, calculated on 64²-lattices. The dependence on the disorder can be seen clearly. The absolute size of the localized state, measured by Nα, does not depend on the grid size if the grid is larger than the localization peak. We want to remark here that a similar phenomenon of localization has also been found for the two-dimensional Dirac equation in an SU(2) gauge field [4] and for the same two models in four dimensions [5].
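The participation ratio is straightforward to compute; a small sketch using the squared-norm form of the definition (assumed here to match the text's eq. (1.4)), with its two limiting cases:

```python
import numpy as np

def participation_ratio(xi):
    """Fraction of the lattice over which |xi| is spread (eq. 1.4)."""
    p2 = np.sum(np.abs(xi) ** 2)
    p4 = np.sum(np.abs(xi) ** 4)
    return p2 ** 2 / (len(xi) * p4)

N = 64 * 64
uniform = np.ones(N) / np.sqrt(N)  # fully extended state
peaked = np.zeros(N)
peaked[0] = 1.0                    # fully localized state

# A fully extended state gives ~1, a single-site state gives 1/N.
print(participation_ratio(uniform), participation_ratio(peaked))
```

Note that the ratio is invariant under rescaling of ξ, so the eigenvectors need not be normalized before it is evaluated.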
Explaining the localization
How can we understand this phenomenon of localization in non-abelian gauge fields?
We will try to relate it to Anderson localization [6,7,8]. Anderson examined a tight-binding model of an electron in a random potential V(z), leading to a Schrödinger equation

(−△_scalar + V(z)) ψ = E ψ .   (1.5)

Localization of all modes may occur, depending on the disorder and the dimension. In one dimension, arbitrarily small disorder leads to localization; in two dimensions there is a phase transition between weak and strong localization as one increases the disorder; in higher dimensions a transition between non-localized and localized states occurs. A theoretical understanding of this was achieved in the papers [9]. As the Schrödinger operator for this model is again the Laplacian (without a gauge field), we may expect a similarity between our localization problem and Anderson localization, but there are crucial differences: First, our operator is not fully random, because the equilibrated gauge field possesses correlations. Second, our operator shows off-diagonal disorder: it is not the potential that varies from site to site, but the couplings between the sites. And third, the couplings do not vary in strength, but in orientation in colour space, resulting in a frustrated system. Nevertheless, in the following we will try to stress similarities between the two models, giving us an at least intuitive picture of what happens.
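The one-dimensional statement, that the lowest mode localizes ever more sharply as the disorder grows, is easy to illustrate numerically. A sketch of the tight-binding model of eq. (1.5) in 1D with a uniformly distributed random potential of width w follows; the lattice size, disorder strengths and seed are illustrative choices, not values from the text.

```python
import numpy as np

def anderson_1d(n, w, rng):
    """1D tight-binding operator -laplacian + V, with V drawn uniformly
    from [-w/2, w/2] (open boundaries); w sets the disorder strength."""
    H = 2.0 * np.eye(n) + np.diag(rng.uniform(-w / 2, w / 2, size=n))
    hop = np.diag(np.ones(n - 1), 1)
    return H - hop - hop.T

def pr(xi):
    """Participation ratio: fraction of sites the state is spread over."""
    return np.sum(np.abs(xi) ** 2) ** 2 / (len(xi) * np.sum(np.abs(xi) ** 4))

rng = np.random.default_rng(2)
n = 400
weak = np.linalg.eigh(anderson_1d(n, 0.1, rng))[1][:, 0]    # weak disorder
strong = np.linalg.eigh(anderson_1d(n, 5.0, rng))[1][:, 0]  # strong disorder
# The lowest mode is spread over far fewer sites at strong disorder.
print(pr(weak), pr(strong))
```

At strong disorder the ground state simply sits in the deepest potential well, which is the intuitive picture the text appeals to for the gauge model.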
We have to look for a quantity in our model that may play the role of the random potential V(z) in the Anderson problem. There we can extract the value of the potential by calculating the difference between the diagonal element and the norm of the couplings, because this is zero for the Laplace operator without potential, see eq. (1.2). Doing this for our model problem of course only gives δm² − ε_0, independent of the lattice site. To arrive at a varying quantity, we may look at a blocked version of our operator. The simplest blocking procedure one can think of is the "taxi-driver" blocking [10]. Here four points are blocked into one, and all block quantities are calculated by parallel-transporting the fine-grid quantities to one of the four points (e.g. the lower left point) via the shortest possible route. (For the upper right point there are two routes, each weighted with a factor 1/2.) If we use a blocking operator of this type and calculate the blocked operator L_block = C^[j0] · (−△) · A^[0j], we still have an operator with fluctuating bonds and a constant diagonal. But now the bonds fluctuate in strength, so we have to separate the kinetic and the potential part of the diagonal elements by calculating V(x) = L_block(x, x) − Σ_{y≠x} ‖L_block(x, y)‖, as explained above. This quantity now fluctuates on the block lattice, and so we have arrived at a situation much more similar to Anderson localization.
The blocking procedure has a certain arbitrariness (e.g. in the choice of the blocks), but a simple calculation shows that it is mainly the field strength F_µν(z) that enters into V(x). (This has to be expected, taking into account that the quantity involved has to be gauge invariant.) Let us now look at the field strength norm, or, which is more conclusive, at the quantity W(z) = Σ_{µ≠ν} F_µν(z) F_µν(z). Fig. 4 shows this quantity for the same configuration as in figure 2, bottom, and one can see clearly that the localization is in a region where W(z) is small, as should be expected from our argument. This is true for all configurations we looked at: the localization center is always in a region with low field strength sum.
We can support this idea further by looking at the eigenmodes of the following operator, which we call the Anderson-Laplace operator: D_AL = −△_scalar + W(z), where △_scalar is the Laplace operator without gauge field. So now we are looking at a true Anderson localization problem, except that the random potential is not independent at different sites: there is a finite correlation length, as explained in the previous section. Fig. 5 shows the lowest mode of this operator. If compared to fig. 1, bottom, one sees that the center of localization sits at the same place.
In conclusion we can say that our analysis shows that firstly the lowest eigenmodes of the covariant Laplace operator in a SU(2)-gauge field are localized and that secondly this localization occurs where the field strength norm is small. This quantity can be interpreted as a random potential on a block lattice. In this way we were able to draw a connection to Anderson localization.
The problem
Disordered systems are among the most interesting and most difficult models in physics. Here we are interested in the numerical solution of discretized differential equations for such models. (An example is the inversion of the fermion matrix.) As the critical point of the system is approached, simple local algorithms face the problem of critical slowing down: the nearer one gets to the critical point, the slower the convergence.
The use of nonlocal methods may overcome this problem. For the solution of "ordered" problems, multigrid methods have been very successful. Even in the disordered case, the "Algebraic Multigrid" (AMG) [11] was applied with great success to scalar problems like the "random resistor network" [12]. But up to now, no generalization of this method has been found that could be used for Lattice Gauge Theory.
In a previous paper [1] this problem was tackled and a new algorithm was proposed. Here an improved version will be explained. It will be applied to the model problem described above. This algorithm is a unigrid algorithm. To prepare for the following consideration, in the next section the multigrid method will be reformulated in the unigrid language.
The Unigrid
Suppose the equation to solve lives on a "fundamental" lattice Λ_0 with lattice constant a_0. We write the equation as

  L_0 ξ_0 = f_0,   (2.1)

where the index 0 tells us that it is formulated on the fundamental lattice. Inhomogeneous equations of this type also arise when an eigenvalue equation is solved by inverse iteration [13]. We will use inverse iteration heavily in our algorithm later on. One now introduces auxiliary lattices Λ_1, Λ_2, ..., Λ_N, called layers, with lattice constants a_j = L_b^j a_0, where L_b is the blocking factor (typically, one chooses L_b = 2). The last lattice Λ_N consists of only one point. The different ways of transferring information between the lattices make the crucial distinction between a unigrid and a true multigrid method. Let H_j be the space of functions on lattice Λ_j. Then we introduce grid transfer operators A^[0 j]: H_j → H_0 (interpolation) and C^[j 0]: H_0 → H_j (restriction). In a true multigrid, grid transfer operators are composed from transfer operators that act between adjacent layers. From this it is quite clear that every multigrid can be formulated as a unigrid (instead of transferring information directly from Λ_j to Λ_k, go first to the fundamental lattice and afterwards to the target space), but not vice versa. From the point of view of computational complexity, a unigrid method is inferior because an equal amount of work needs to be done on all layers.
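The layer construction can be summarized in a few lines (a sketch under the assumption of a power-of-two grid length and blocking factor L_b = 2; the function name is ours):

```python
def layer_sizes(L0, Lb=2):
    """Grid lengths of the layers: the lattice constant grows as a_j = Lb**j * a_0,
    so the grid length shrinks by Lb per layer until the last lattice Λ_N
    consists of a single point."""
    sizes = [L0]
    while sizes[-1] > 1:
        sizes.append(sizes[-1] // Lb)
    return sizes

print(layer_sizes(16))   # [16, 8, 4, 2, 1]
```

For a 16-point fundamental lattice this gives N = 4 auxiliary layers, the last one a single point.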
The meaning of smoothness
The basic principle of multigrid algorithms (for ordered elliptical problems) originates from the observation that after doing local relaxation sweeps, the error gets smooth. Hence it should be possible to represent the error on a coarser lattice, because the intermediate values can be obtained by smooth interpolation. "Good" interpolation operators are therefore known beforehand; for example, one may use linear interpolation. Normally, the smooth modes are the low-lying eigenmodes of the operator. How can we extend this observation to disordered problems, where no a priori notion of smoothness is given? As explained in [1], we propose the following definition of smoothness in a disordered system, assuming that a fundamental differential operator L_0 specifies the problem. Definition: A function ξ on Λ_0 is smooth on length scale a when in units a = 1. This definition implies that the smoothest function is the lowest eigenmode of L_0. So we arrive at the basic principle stated above even for the disordered case: the slow-converging modes, which have to be represented on coarser grids, are smooth. Of course we still have to show that the definition is sensible and can be used to construct a good algorithm.
As a first step we can see that the notion of algebraic smoothness, as introduced in the context of the AMG, implies smoothness in our sense. The error e of an approximate solution of the differential equation is called "algebraically smooth" if the residual r is small compared to it:

  ‖r‖ ≪ ‖e‖.   (2.5)

The crucial step in the setup of the algorithm is the choice of the grid transfer operators A^[0 j] and C^[j 0]. These operators should be smooth in our sense, because we want to use them to represent a smooth error on a coarser grid. But because our definition of smoothness depends on the problem matrix L, they are not given a priori. Instead, we have to compute these operators.
In the following, we will adopt the Galerkin choice C^[j 0] = A^[0 j]*, where * denotes the adjoint, and the coarse-grid operator L_j will be defined by L_j = C^[j 0] L_0 A^[0 j]. So we only need to construct good interpolation operators.
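For dense matrices the Galerkin construction is a one-liner; the sketch below (our illustration, not the paper's code) checks it on a 1-d Dirichlet Laplacian with piecewise-constant interpolation, where the coarse operator reproduces the same tridiagonal stencil:

```python
import numpy as np

def galerkin_coarse_operator(L0, A):
    """Galerkin choice: C = A* (adjoint), coarse operator L_j = C L_0 A."""
    return A.conj().T @ L0 @ A

# piecewise-constant interpolation from a 4-site layer to an 8-site lattice
A = np.zeros((8, 4))
for k in range(4):
    A[2 * k:2 * k + 2, k] = 1.0

# 1-d Dirichlet Laplacian: tridiagonal (2, -1)
L0 = np.diag(2.0 * np.ones(8)) - np.diag(np.ones(7), 1) - np.diag(np.ones(7), -1)
Lj = galerkin_coarse_operator(L0, A)
```

With this choice, Hermiticity of L_0 is automatically inherited by L_j.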
We have to restrict the operators to a part of the lattice, because otherwise interpolation would be too costly. For the time being we will choose the supports of the interpolation operators to be fixed blocks [x] as shown in figure 1. The operators therefore possess representations as rectangular matrices whose columns A_{·,x} vanish outside the block [x]. It has been found that there is little hope of eliminating critical slowing down for the inversion of the Dirac operator without overlapping blocks [15]. As we intend to apply our algorithm also to this case, we choose overlapping blocks as shown in the figure.

Footnote 1: It has recently been remarked by Sokal [14] that the operator L does not possess eigenvectors in a strict sense, because it maps a space to its dual and there does not exist a natural scalar product on the two spaces. He proposes to look instead at B_0^{-1} L, where B_0 is defined by the relaxation algorithm. In the case of a matrix with constant diagonal this makes no difference (as long as we use e.g. Jacobi iteration), so this gives no problems with our model problem. Note that this is not true for a true multigrid, because there a non-constant effective mass might be generated on a block lattice. In such an algorithm, the definition of smoothness should take the relaxation algorithm into account, as is done in the AMG context.
We now want to look for smooth interpolation operators in the specified sense which fulfill (approximately) the eigenvalue equation restricted to the block [x]:

  L_0|_[x] A_{·,x} = λ A_{·,x},   (2.6)

where L_0|_[x] denotes the restriction of L_0 to the block [x]. As the interpolation operator must vanish outside the block, we impose Dirichlet boundary conditions. The crucial assumption of our algorithm is that the solution A_{·,x} of this equation is smooth on length scale a_j. This is true for the scalar Laplace operator, where the solution is half a sine wave on the block.
If we know such interpolation operators, we can start a unigrid algorithm in the usual way: after relaxing on the fundamental layer, the error e_0 of the approximate solution ξ̃_0, defined as e_0 = ξ_0 − ξ̃_0, should be smooth. It fulfills the error equation L_0 e_0 = r_0, where r_0 = f_0 − L_0 ξ̃_0 is the residual.
As the error is smooth, it can be obtained by smooth interpolation of a function e_1 living on Λ_1: e_0 = A^[0 1] e_1. Inserting this into the error equation and restricting with C^[1 0] yields

  L_1 e_1 = C^[1 0] r_0,   (2.10)

which involves only functions and operators on the block lattice. After relaxing on eq. (2.10), we interpolate our estimate e_1 back to the fundamental lattice and replace our approximation by a better one: ξ̃ ← ξ̃ + A^[0 1] e_1. Now the error is expected to be smooth on the length scale a_1, because we have relaxed it on Λ_1, and so we can proceed to the next layer. When we come to layer Λ_j, the error should be smooth on scale a_{j−1}. After relaxing on this layer, we correct our approximation again: ξ̃ ← ξ̃ + A^[0 j] e_j. By this we have smoothened the error on the larger scale a_j. Eventually we will reach the lattice Λ_N, where we can solve the equation exactly and thereby remove the smoothest mode from the error. The difference to a true multigrid can be seen clearly: the multigrid is defined in a recursive way, and eq. (2.10) is solved by going to the next-coarser lattice without correcting the approximate solution on the fundamental lattice.
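The cycle just described might be sketched as follows (a toy dense-matrix version, our own, with damped Jacobi as the smoother; the parameter values are our choices, and a direct solve stands in for relaxation on the blocked equations):

```python
import numpy as np

def jacobi(L, x, f, sweeps=2, omega=0.8):
    # damped Jacobi relaxation as the fundamental smoother
    D = np.diag(L)
    for _ in range(sweeps):
        x = x + omega * (f - L @ x) / D
    return x

def unigrid_cycle(L0, f0, x, transfers):
    """One unigrid sweep: relax on the fundamental lattice, then for every
    layer form the blocked error equation (A* L0 A) e = A* r, solve it, and
    correct x <- x + A e.  Every transfer maps its layer directly to the
    fundamental lattice, which is what distinguishes a unigrid."""
    x = jacobi(L0, x, f0)
    for A in transfers:                       # coarser and coarser layers
        r = f0 - L0 @ x
        Lj = A.conj().T @ L0 @ A
        e = np.linalg.solve(Lj, A.conj().T @ r)
        x = x + A @ e
        x = jacobi(L0, x, f0)
    return x

# demo: 1-d Dirichlet Laplacian with piecewise-constant transfers to
# 4-, 2- and 1-point layers
n = 8
L0 = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

def piecewise_constant(n_fine, n_coarse):
    A = np.zeros((n_fine, n_coarse))
    b = n_fine // n_coarse
    for k in range(n_coarse):
        A[k * b:(k + 1) * b, k] = 1.0
    return A

transfers = [piecewise_constant(n, m) for m in (4, 2, 1)]
f0 = np.ones(n)
x1 = unigrid_cycle(L0, f0, np.zeros(n), transfers)
```

One sweep already reduces the residual norm well below its initial value, with the last (one-point) layer removing the smoothest mode.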
The ISU-algorithm
But of course the question is: where do we get the smooth operators? Consider for instance the operator A^[0 N−1]. Each of its columns A_{·,x} should satisfy the block eigenvalue equation

  L_0|_[x] A_{·,x} = λ A_{·,x}.   (2.11)

If we solve this equation by inverse iteration (A_{·,x} ∝ L_0|_[x]^{−n} A_{·,x;start} for large n and an arbitrary starting vector A_{·,x;start}), we will have to solve an equation which seems to be exactly as difficult as our starting point, eq. (2.1).
But this is not so. The worst-converging mode of our starting equation is the mode to the lowest eigenvalue of L 0 , but now we want to compute this mode, so it does not contribute to the error of the eigenvalue equation. (We have to do a simple normalization step after each iteration.) Consequently the mode to the second-lowest eigenvalue of L 0 is the one that converges worst and if we could handle this (and all higher modes as well), we could also handle the lowest mode in our inhomogeneous equation (2.1) by solving first eq. (2.11).
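Inverse iteration with per-step normalization, as used throughout the algorithm, can be sketched as follows (illustrative only; a dense solve replaces the unigrid inversion, and the function name is ours):

```python
import numpy as np

def lowest_mode_by_inverse_iteration(L, steps=100, seed=0):
    """Approximate the lowest eigenmode of L by inverse iteration:
    x <- L^{-1} x, followed by a simple normalization step after each
    iteration.  The higher modes die off as (lambda_0 / lambda_k)^steps."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(L.shape[0])       # arbitrary starting vector
    for _ in range(steps):
        x = np.linalg.solve(L, x)
        x /= np.linalg.norm(x)
    return x @ L @ x, x                        # Rayleigh quotient, mode

# small symmetric positive-definite test matrix with a clear spectral gap
M = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) + 0.1 * np.ones((6, 6))
lam, mode = lowest_mode_by_inverse_iteration(M)
```

The returned Rayleigh quotient agrees with the smallest eigenvalue from a direct diagonalization.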
The basic idea of our algorithm is that the higher modes are smooth on shorter length scales. This means that it should be possible to construct them out of pieces which are smooth on these length scales and have supports on parts of the lattice. So the next-lowest modes ξ_low are representable as linear combinations of the interpolation operators on the next-smaller blocks:

  ξ_low ≈ Σ_x c_x A^[0 N−2]_{·,x}.   (2.12)

If this is true (and it is for our model problem), the calculation of A^[0 N−1] becomes feasible. The calculation of A^[0 N−2] is similar: again the worst-converging mode is the mode we aim at, and the next-higher modes can be represented on smaller blocks. Their calculation is therefore simpler. Finally we arrive at the calculation of A^[0 1]_{·,x}, having to solve an equation on a 3 × 3 lattice. This is easily done. Because of the Dirichlet boundary conditions there is no low-lying mode here.
Remark: It might happen that the lowest eigenvalue is much larger than the difference between it and the next eigenvalue. In this case, many inverse iterations have to be done to resolve the two corresponding modes. A possible remedy for this problem is to calculate estimates λ of the lowest eigenvalue as we proceed and to invert not L 0 but L 0 − λ. This problem will not arise on the last layer, because otherwise the operator would not be critical and simple relaxation algorithms will suffice to solve the equation.
We identify the site x ∈ Λ_j with the block [x] in Λ_0 having x at its center. Eq. (2.6) is an eigenvalue equation on [x] for the vector A_{·,x}. It can be solved via inverse iteration by our unigrid method, using the already calculated interpolation operators A^[0 k]_{·,y} with supports inside the block. With this we arrive at the following algorithm for calculating smooth interpolation operators:

1. Choose an arbitrary starting vector A_{·,x;start}.
2. Relax on the fundamental lattice on the block [x].
3. For all 1 ≤ k < j do: calculate the residual of the approximate solution A_{·,x}, block it to layer Λ_k, relax the blocked error equation there, and interpolate the correction back.
4. Normalize the interpolation operator.

We call this method the Iteratively Smoothing Unigrid, or ISU, because it is a unigrid method which computes smooth operators by means of an iterative method (and not directly from the given operator as in the AMG algorithm).
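One column of such an operator might be computed as in the following sketch (our own, with a dense eigensolver standing in for the inverse iteration of the actual algorithm; for the 1-d scalar Laplacian it indeed yields half a sine wave on the block):

```python
import numpy as np

def block_interpolation_column(L0, block):
    """One column A_{.,x}: the lowest eigenmode of L0 restricted to `block`
    with Dirichlet boundary conditions (rows and columns outside the block
    are simply dropped), embedded back into the full lattice and normalized."""
    sub = L0[np.ix_(block, block)]
    w, v = np.linalg.eigh(sub)
    col = np.zeros(L0.shape[0])
    col[list(block)] = v[:, 0]                # lowest restricted eigenmode
    return col / np.linalg.norm(col)

# 1-d Dirichlet Laplacian on 9 sites, block of 3 sites around the center
L0 = np.diag(2.0 * np.ones(9)) - np.diag(np.ones(8), 1) - np.diag(np.ones(8), -1)
col = block_interpolation_column(L0, [3, 4, 5])
```

Up to an overall sign, the entries on the block are (1/2, 1/√2, 1/2), the discrete half sine wave, and the column vanishes outside the block as required.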
Now we could try to use the same method to define a true multigrid algorithm: just replace the eigenvalue equation for A_{·,x} by an equation for operators A^j_{·,x} which interpolate between adjacent layers, and use L_{j−1} as the operator for this equation. In this way, we would get operators which are smooth with respect to the blocked differential operator. But this will often not work. To see this, formulate the new true multigrid algorithm in unigrid language. It involves operators A^[0 j] = A^1 · A^2 ··· A^j. But the product of operators which are smooth on different length scales is smooth only on the shorter of these length scales. So we will never get a transfer operator that is smooth on large scales. To put the same fact another way: in a true multigrid as used for ordered problems, interpolation operators A^j will smoothly interpolate functions that are smooth on all length scales a_k > a_j (the usual linear operators, for example, interpolate constants to constants, regardless of the scale), but in our case A^[0 j] will not be able to do this.
This tells us that our algorithm really is a unigrid algorithm, not just a multigrid in disguise. It is therefore impossible to apply the usual two-grid analysis to prove convergence. Furthermore, we cannot stop the algorithm on a layer j < N, because in this case the modes on the larger scales would not be handled appropriately.
From the above description it is clear that the work involved in calculating good interpolation operators is larger than the time needed for the solution of the equation (2.1) itself. The following table shows the computational costs of the algorithm, compared to a true multigrid and to a local relaxation. Here, L denotes the grid length and d is the dimension.
Performance of ISU
We studied this algorithm for the 2-dimensional bosonic model problem, as described in eqs. (0.1) and (2.1). The subtraction of the lowest eigenvalue makes the problem critical, and we can directly control criticality by tuning δm², the lowest eigenvalue of the full problem. We measured the inverse asymptotic convergence rate τ, defined as τ = 1/ln ϱ_n, where ϱ_n is the quotient of the error norms before and after iteration number n. For large n, this quantity approaches a constant. Fig. 6 shows the inverse asymptotic convergence rate as a function of δm² for grid sizes 32² to 128² at β = 1.0 in an SU(2) gauge field background. The absence of critical slowing down can be seen clearly, and the absolute value of τ is quite small. (τ = 1 corresponds to a reduction of the residual by a factor of e in one multigrid sweep.) The sweeps are V-cycles with one pre- and one post-relaxation step. The results do not vary appreciably when changing β.
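The measured quantity can be reproduced from a sequence of error norms (a sketch using the convention that τ = 1 means a reduction by a factor of e per sweep; the function name is ours):

```python
import numpy as np

def inverse_convergence_rate(error_norms):
    """tau = 1 / ln(rho), with rho the quotient of the error norms before
    and after one iteration.  tau = 1 corresponds to a reduction of the
    error by a factor of e in a single sweep."""
    rho = error_norms[-2] / error_norms[-1]
    return 1.0 / np.log(rho)

# synthetic run where the error shrinks by a factor e^{1/2} per sweep
norms = [np.exp(-0.5 * n) for n in range(6)]
print(inverse_convergence_rate(norms))   # 2.0
```

Smaller τ means faster convergence; critical slowing down would show up as τ growing with the grid size at fixed criticality.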
Remark: To conclude that critical slowing down is absent, it is not sufficient to study only the dependence on δm² for fixed grid sizes. Our method ensures correct treatment of the lowest mode with eigenvalue δm². But the eigenvalue of the second-lowest mode depends on the grid size, so the grid size has to be large enough to make this eigenvalue fairly low as well.
Because of the large amount of work involved in calculating the interpolation operators, it must also be verified that the number of inverse iterations needed for their calculation does not grow with the grid size. We found that six multigrid sweeps with one pre- and no post-relaxation step were sufficient for this on every layer, regardless of the lattice size; doing more sweeps and thereby calculating the operators more exactly did not improve the convergence of eq. (2.1).
So it is clear that there is no critical slowing down for the solution of the model problem. Nevertheless, the question has to be answered at which grid sizes this will pay off, since the overhead is large. A careful comparison with the conjugate gradient algorithm on CPU-time grounds will have to be done. We think this will be worthwhile only if the ISU works as well for fermions.
To understand this success, we can investigate the algorithm more closely. To this end we have calculated all eigenmodes of the covariant Laplace operator (0.1), using a standard library function, and a solution of eq. (2.1). Now we were able to start the algorithm with an initial value whose error we knew in advance, and to monitor the behaviour of the error as the algorithm proceeds, always expanding the error into the eigenmodes. So we could check that the fundamental relaxation indeed smoothens the error by eliminating the contribution of the higher modes. The coarser the lattice becomes, the lower are the modes which are eliminated, until on the last lattice the contribution of the lowest mode is set exactly to zero.
We also checked that it is indeed possible to represent the low-lying modes by the interpolation operators. We calculated the part of the modes which is orthogonal to all interpolation operators on a given layer. This quantity was small, so the overlap between low-lying modes and interpolation operators is large. This latter result is not surprising, as we already know that the lowest modes are localized. If a mode consists of a few localized parts, it seems clear that we can patch it together from operators which are restricted to a part of the grid. This suggests that localization of low-lying modes is a good prerequisite for convergence of our algorithm. Fig. 7 shows the interpolation operators A^[0 N−1] and A^[0 N−2] belonging to the same gauge field configuration as Fig. 2, bottom. It can be clearly seen that the different support boundaries are able to single out the different localization centers. On smaller blocks, the operators are centered in the middle of the block, because here the restriction through the boundary is too strong.
It is crucial for this possibility of representing the localized modes by the interpolation operators that there exists a block into which the modes fit well. This is not true for other models of disordered systems. Our algorithm was also tried on the "random resistor problem" [16], but here the simple method of fixing the blocks geometrically led to critical slowing down (z ≈ 0.7). This is explained by the fact that there existed parts of eigenmodes which did not fit into any of the blocks and could therefore not be combined from the interpolation operators. (We studied this by again calculating all modes and looking at the shape of the badly converging ones.) Nevertheless, it is possible to generalize our algorithm by choosing the block centers in a more sophisticated way, using an algorithm of AMG type for this step, and then applying the ISU algorithm. This will be done in the future, using a C++ class library which is being developed at the moment [17].
Conclusions
We have seen the localization of the lowest (and highest) modes in a 2-dimensional Lattice Gauge Theory. This phenomenon could be understood by connecting it to the problem of Anderson localization.
The performance of a new Multigrid algorithm was studied. For the model problem it showed no critical slowing down. The algorithm is well suited to deal with low-lying localized modes, because good approximations to these modes are used for the interpolation.
The next step will be to investigate localization and the behaviour of the ISU algorithm for the Dirac equation. This work is in progress.
Light-Responsive Molecular Release from Cubosomes Using Swell-Squeeze Lattice Control
Stimuli-responsive materials are crucial to advance controlled delivery systems for drugs and catalysts. Lyotropic liquid crystals (LLCs) have well-defined internal structures suitable to entrap small molecules and can be broken up into low-viscosity dispersions, aiding their application as delivery systems. In this work, we demonstrate the first example of light-responsive cubic LLC dispersions, or cubosomes, using photoswitchable amphiphiles to enable external control over the LLC structure and subsequent on-demand release of entrapped guest molecules. Azobenzene photosurfactants (AzoPS), containing a neutral tetraethylene glycol head group and azobenzene-alkyl tail, are combined (from 10–30 wt %) into monoolein-water systems to create LLC phases. Homogenization of the bulk LLC forms dispersions of particles, ∼200 nm in diameter with internal bicontinuous primitive cubic phases, as seen using small-angle X-ray scattering and cryo-transmission electron microscopy. Notably, increasing the AzoPS concentration leads to swelling of the cubic lattice, offering a method to tune the internal nanoscale structure. Upon UV irradiation, AzoPS within the cubosomes isomerizes within seconds, which in turn leads to squeezing of the cubic lattice and a decrease in the lattice parameter. This squeeze mechanism was successfully harnessed to enable phototriggerable release of trapped Nile Red guest molecules from the cubosome structure in minutes. The ability to control the internal structure of LLC dispersions using light, and the dramatic effect this has on the retention of entrapped molecules, suggests that these systems may have huge potential for the next-generation of nanodelivery.
■ INTRODUCTION
Emerging methods to control the delivery of small molecules have widespread applications spanning from pharmaceuticals, notably for COVID-19 mRNA vaccines, to catalytic reactions, agriculture, and food. Targeting the release of payloads ensures that they are used in a direct manner, which can reduce waste and unwanted side-effects. 1 To enable this, materials that can entrap molecular payloads and release them on-demand using an external stimulus are required as delivery systems. One method for molecular entrapment is to use lyotropic liquid crystals (LLCs), which are formed from the self-assembly of amphiphiles on the addition of a solvent and possess long-range orientational order. 2 These ordered networks have complex, nanoscale internal structures, which restrict outward diffusion of guest molecules. 3 Furthermore, the amphiphilic nature of LLCs means that a variety of molecules of differing hydrophilicities can be contained within them, including drugs, 4 catalysts, 5 or medical imaging agents. 6 However, bulk LLC mesophases are often viscous, making them challenging to administer. To aid their use, they can be broken up in excess aqueous solution to form low-viscosity dispersions of nanoparticles, while retaining the internal order necessary for controlled delivery. 6,7 Monoolein (MO) is an amphiphilic lipid commonly used to create host LLCs due to its propensity to form stable dispersions, as well as its biocompatibility and biodegradability. 3 It can form a variety of LLC phases depending on the solvent concentration and polarity of molecular additives. 8,9 These phases can be broken up to form colloidal dispersions of particles typically between 200 and 300 nm in diameter, 6 most commonly using a high-shear input (sonication or homogenization), 10 with additional interfacial stabilization. 11 The retention and release of entrapped guest molecules are governed by the internal structure of the dispersed LLC particles. 12

Liposomes, which are vesicles with an outer lipid bilayer shell, have been extensively researched and clinically implemented for controlled delivery applications. 13 However, their simple structure can lead to problems with premature drug leakage and fast release rates. 14 To combat this, hexagonal (hexosomes) and cubic (cubosomes) LLC phases are of particular interest, as the complex two- and three-dimensional interfaces between the water channels and the amphiphile bilayer slow the diffusion of entrapped species. 15,16 The release of guest molecules can be controlled by the LLC dimensions, which directly affect diffusion rates. 4 Furthermore, the larger lipid surface area in comparison to simple liposomes allows a higher guest payload to be incorporated. 14 However, undirected release of active molecules results in wasted payload away from the target site, which can even manifest as harm in the case of toxic drugs. 17 As such, methods to control the time and position of release using an external stimulus are needed. Light is particularly attractive as a stimulus, as its intensity, wavelength, duration, and spatial position can be easily adjusted. Light-responsive LLC dispersions have previously been created through the addition of metallic nanoparticles, which induce photothermal phase changes; 18,19 however, the toxicity of nanoparticle additions remains a concern. 20 Alternatively, LLCs can be built from amphiphiles which contain a photoswitchable group. 21−28 Of these, azobenzene photosurfactants (AzoPS) have been the most extensively studied. 29 Despite concerns over its potential toxicity, promising recent work has shown the capability to produce biocompatible azobenzene derivatives. 30 On irradiation with UV light, azobenzene photoisomerizes from the trans (E) to the cis (Z) state, forming a photostationary state (PSS) of mostly cis-isomers, with a composition dependent on irradiation wavelength, solvent, temperature, and chemical structure.
Reverse isomerization can be triggered using visible light, giving a second PSS of mostly trans-isomers, or fully, using heat. 32,33 Isomerization leads to a change in shape 34 and polarity 35 of the molecule, which, when combined into amphiphiles, modifies the molecular geometry and hydrophilicity. 29 This has been shown to have a knock-on effect on the interfacial- and self-assembly of AzoPS at low concentrations, with both charged, ionic 36 and neutral head groups. 37 As such, AzoPS have been investigated for applications such as DNA compaction, 38 microfluidics, 39 foam stability, 40 and micellar entrapment and release. 41,42 At higher concentrations, there have been several reports of the self-assembly of AzoPS into higher-order bulk LLC phases using both charged 24 and neutral surfactants; 23,27,28,43,44 however, research has focused on the latter due to a greater number of hydrogen-bonding sites, thought to aid LLC formation. 45 In work by Peng et al., neutral AzoPS molecules, with an oligooxyethylene head and azobenzene-alkyl tail, formed both lamellar and hexagonal LLCs, with photoisomerization resulting in a loss of the hexagonal phase. 23,26 Houston et al. further demonstrated the ability to control LLC phase formation, including lamellar and hexagonal, using the structure, concentration, and temperature in systems of neutral AzoPS, all of which showed loss of LLC order on isomerization. 25 Some controlled release mechanisms using light-responsive AzoPS have been reported. Aleandri et al. created light-responsive bulk hexagonal and cubic LLC mesophases by introducing small amounts of an Azo-MO analogue into LLC phases containing a mixture of MO and oleic acid. 46 The authors observed that photoisomerization increased the diffusivity of the hexagonal LLC, leading to an increased rate of release of an entrapped hydrophilic dye.
However, interestingly, small-angle X-ray scattering (SAXS) results showed no structural difference in the LLCs upon photoisomerization, suggesting that the Azo-MO analogue did not impart significant organizational change at the concentrations studied (<5 wt %). Looking to nanoparticle dispersions, Pernpeintner et al. formed simple liposomes using phosphatidylcholines that were modified with an azobenzene group in the tail. 47 For clinical applications, the low tissue penetration of UV light remains an issue. To combat this, the group further functionalized the azo-phospholipid using tetra-ortho-chloro substitution to red-shift the isomerization wavelength from UV (365 nm) to red (660 nm). 48 Using this approach, light-driven drug release was achieved both in vitro and in vivo, showing the potential for azobenzene-stimulated release in human tissue. However, to the best of our knowledge, there have been no previous reports of AzoPS-induced light-responsivity in nanoparticles exhibiting internal LLC order, whose increased dimensional control could set them apart as the next generation of nanodelivery devices.
Herein, we demonstrate for the first time light-responsive cubosomes, which show a measurable change in internal structure on photoisomerization. Bulk LLCs were prepared from a mixture of MO, water, and a neutral AzoPS dopant with known LLC-forming abilities 25 (Figure 1a), which transform into cubosomes under application of a shear force (Figure 1b). Using SAXS, polarized optical microscopy (POM), and cryo-transmission electron microscopy (TEM), we show that the internal structure of the cubosomes can be swelled by modifying the bulk LLC precursor composition (i.e., AzoPS or water wt %). Moreover, photoisomerization squeezes the cubic lattice, leading to triggerable release of entrapped guest species (e.g., Nile Red) at rapid timescales in comparison to diffusion rates, enhancing their application as stimuli-responsive delivery materials.
■ RESULTS AND DISCUSSION
Creating Bulk LLCs Containing Light-Responsive AzoPS. We first investigated the formation of LLCs within bulk MO-AzoPS-water mixtures by SAXS. Two different AzoPS structures were investigated, both containing a tetraethylene glycol head group but with differing alkyl spacer (m = 4 or 8) and alkyl tail (n = 6 or 8) lengths, subsequently referred to as C6AzoC4E4 and C8AzoC8E4 (Figure 1a). AzoPS were loaded at 20 wt % with respect to MO, and the water content was 20 wt % with respect to total amphiphile mass. The resulting concentration of AzoPS was above the critical micelle concentration for both structures (Supporting Information, Table S1). A reference sample of MO-water (20 wt %) was also prepared.
MO-water shows sharp Bragg diffraction peaks in the ratio 1:2 (Figure 2a), indicating the formation of a lamellar LLC phase, as expected at this composition and temperature (25 °C). 49 This phase assignment is supported by a characteristic streaky pattern in the POM images (Supporting Information, Figure S1b). On incorporation of AzoPS, there is a shift in the diffraction peaks in the SAXS patterns (Figure 2a), which depends on the tail length of the AzoPS molecule. A characteristic peak ratio of 1:√3:2 suggests the formation of a hexagonal phase in the ternary MO-C6AzoC4E4-water system. 4 A birefringent fan pattern under POM supports this phase assignment (Figure 2b). In contrast, the analogous C8AzoC8E4 system exhibits SAXS peaks in the ratio 1:2, indicating the formation of a lamellar LLC mesophase, visible as a striped pattern under POM (Figure 2c).
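The peak-ratio assignment used here can be sketched numerically (our own illustration; tolerances and function names are ours, and only the two phases discussed are included):

```python
import math

def assign_phase(q_peaks, tol=0.03):
    """Compare measured Bragg peak positions, normalized to the first peak,
    with the textbook ratios 1:2:3 (lamellar) and 1:sqrt(3):2 (hexagonal)."""
    ratios = [q / q_peaks[0] for q in q_peaks[:3]]
    for phase, expected in (("lamellar", [1.0, 2.0, 3.0]),
                            ("hexagonal", [1.0, math.sqrt(3.0), 2.0])):
        if all(abs(r - e) < tol for r, e in zip(ratios, expected)):
            return phase
    return "unassigned"

def repeat_distance(q1, phase):
    """First-peak position -> lamellar d-spacing d = 2*pi/q1, or the
    hexagonal lattice parameter a = 4*pi/(sqrt(3)*q1)."""
    if phase == "lamellar":
        return 2 * math.pi / q1
    return 4 * math.pi / (math.sqrt(3.0) * q1)

# e.g. a lamellar phase with a 4.1 nm spacing puts the first peak at 2*pi/4.1
q1 = 2 * math.pi / 4.1
print(assign_phase([q1, 2 * q1]), repeat_distance(q1, "lamellar"))
```

The same ratio test distinguishes the 1:√3:2 hexagonal signature of the C6AzoC4E4 system from the 1:2 lamellar signature of the C8AzoC8E4 system.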
The sensitivity of the LLC phase to the AzoPS structure demonstrates that there is an interaction between the MO and AzoPS in the LLC, and that they are not forming separate self-assembled structures. The preference for different LLC phases by the two AzoPS structures can be rationalized using critical packing parameter (CPP) considerations for the spontaneous curvature of the amphiphile films. 50−52 The CPP is defined as CPP = v/(a0·lc), where v is the volume of the hydrophobic tail, a0 is the hydrophilic head group area, and lc is the length of the hydrophobic chain. 53 Due to the high amphiphile concentration with respect to water (80 wt %), inverse LLC phases, with negative curvature, are expected to form. 54 Compared to MO, the AzoPS have much larger head group areas, due to the four ethylene glycol groups, and also longer tail lengths. This results in smaller packing parameters (Supporting Information, Table S3), favoring lower-curvature phases when combined with MO, as modeled with similar neutral (but non-light-responsive) surfactants. 55 On decreasing the alkyl chain length from C8AzoC8E4 to C6AzoC4E4, there is a transition from the zero-curvature lamellar phase to the inverse hexagonal phase, with a higher negative curvature. This shows that the AzoPS chain length dominates the packing and spontaneous curvature of the amphiphile bilayer. This is consistent with previous findings for these AzoPS, where the ratio of alkyl chain length to head group area determined LLC formation; 25 since the head group in both AzoPS here is the same, it is expected that the alkyl chain length determines the phase formation.
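As a tiny numerical illustration of the CPP (the input values below are hypothetical, chosen only to show the arithmetic, not measured values for these surfactants):

```python
def critical_packing_parameter(v_tail, a_head, l_tail):
    """CPP = v / (a0 * lc).  CPP near 1 favours flat (lamellar) bilayers;
    CPP > 1 favours inverse phases with negative curvature."""
    return v_tail / (a_head * l_tail)

# hypothetical numbers: v in nm^3, a0 in nm^2, lc in nm
cpp = critical_packing_parameter(0.6, 0.3, 2.0)
print(cpp)   # 1.0, i.e. a bilayer-favouring geometry
```

A larger head group area a0 (as for the tetraethylene glycol head) lowers the CPP, consistent with the lower-curvature phases reported above.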
It is interesting to compare these results to LLC formation in AzoPS-water systems, without MO, from previous work by Houston et al. 25 C6AzoC4E4 forms a lamellar phase at 20 wt % water and 25 °C, showing that incorporation with MO, which has a higher CPP, increases the negative curvature of the amphiphile film, resulting in the inverse hexagonal phase. In comparison, C8AzoC8E4 remains as insoluble crystals on addition of 20 wt % water at 25 °C. Addition to the MO matrix allows solubilization of the hydrophobic AzoPS to form the lamellar phase. Compared to the MO-water system, the incorporation of C8AzoC8E4 decreases the lamellar spacing (from 4.1 to 4.0 nm), visible as a shift in the SAXS peaks to higher q values. As the C8AzoC8E4 chain length is just under double that of MO (3.4 cf. 1.8 nm), this suggests that, in the lamellar phase, one AzoPS molecule spans the MO bilayer. The result would be a decrease in the average lamellar spacing on incorporation of the AzoPS into the bilayer.
Structural Characteristics of LLC Dispersions. The ability to break up bulk MO-AzoPS LLC phases to form low-viscosity dispersions under shear was next investigated. The AzoPS tail length, concentration of AzoPS (10−30 wt %, with respect to MO), and initial water concentration (10−40 wt % with respect to total amphiphile mass in the parent, bulk LLC phase) were all varied to probe changes in the LLC particles with composition. The bulk phases were then homogenized in a solution of Pluronic F-127 (0.3 wt %) to aid interfacial stability.
After homogenization, reference dispersions of MO-water formed particles with a mean Z-average hydrodynamic diameter, D H , between 130 and 190 nm (Table S4, Supporting Information), measured using dynamic light scattering (DLS). A small increase in particle size was observed on the incorporation of AzoPS into the LLCs, giving D H typically between 159 and 220 nm. No clear trend was observed in the particle size with variation of the AzoPS tail length, concentration, or initial water concentration. An outlier of 468 ± 7 nm was measured for the dispersion containing 30 wt % C 8 AzoC 8 E 4 , indicating lower stability and agglomeration of particles with high AzoPS concentrations. This is accompanied by a general increase in the polydispersity index on increasing AzoPS concentration for both tail lengths (Table S4). Despite this, almost all dispersions had a particle polydispersity index below 0.3, which can be considered monodisperse for applications as lipid-based nanoparticles. 56 The colloidal stability of the dispersions was determined by remeasuring the particle size after storage for 10 months at room temperature in the dark. None of the dispersions had visibly phase-separated during this period, with only a little agglomeration at the side of the vials (Supporting Information, Figure S4). However, for most samples, there was a significant decrease in the nanoparticle size, with D H for most AzoPS-containing samples lying between 80 and 117 nm. A similar effect was observed for the reference MO-water dispersions, with D H decreasing to 95−119 nm (SI, Table S5). This decrease in size was accompanied by an increase in the polydispersity index and can be attributed to dehydration of the particles over this time frame. Despite these variations in the particle size, it can be concluded that Pluronic F-127 provides sufficient stabilization to prevent large agglomerates over the timescale of months, giving long shelf-life potential in these systems.
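The Z-average D H reported by DLS follows from the Stokes-Einstein relation. A minimal sketch, assuming water at 25 °C as the dispersant and an illustrative diffusion coefficient (not a measured value from this work):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(d_trans, temp_k=298.15, eta=0.89e-3):
    """Stokes-Einstein: D_H = k_B*T / (3*pi*eta*D_t).
    d_trans: translational diffusion coefficient (m^2/s);
    eta: assumed dispersant viscosity (water at 25 C, Pa*s)."""
    return K_B * temp_k / (3 * math.pi * eta * d_trans)

# An illustrative diffusion coefficient of ~2.6e-12 m^2/s maps to ~190 nm,
# the upper end of the range measured for the MO-water reference dispersions:
d_h = hydrodynamic_diameter(2.6e-12)
print(f"D_H = {d_h * 1e9:.0f} nm")
```

This also makes clear why apparent particle size and polydispersity shift together: both derive from the measured diffusion-coefficient distribution.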
Cryo-transmission electron microscopy was also used to image the particles. Dispersions of MO-water (20 wt %) showed a double-ring outer surface, attributable to the outer amphiphile bilayer of unilamellar vesicles (Figure 3a), as expected from the lamellar LLC in the bulk phase. In these vesicles, there was no sign of internal order. In contrast, MO-AzoPS-water dispersions exhibit visible internal structure in the micrographs (C 6 AzoC 4 E 4 in Figure 3b and C 8 AzoC 8 E 4 in Figure S5, SI). Surface scattering from these particles was further investigated by SAXS. Porod plots (log I vs log q) show two distinct straight-line regions, indicating scattering from two different length scales (Figure 3c). 57 In the lowest q region, the scattering is proportional to ∼q −2 , indicative of scattering from 2D sheets, attributable to the outer bilayer. 58 Guinier analysis in this region was used to estimate the overall particle size, with resulting diameters (100−167 nm) in agreement with those observed using DLS and TEM (SI, Table S6). At higher q, the scattering comes from the interface between the particles and the aqueous phase, with gradients of −3 and −4 corresponding to scattering from rough and smooth 3D fractal interfaces, respectively. 59 This interface gradient decreases for the MO-AzoPS-water systems, indicating that a smoother fractal surface forms in AzoPS-containing particles compared to the MO-water reference. This change in gradient is accompanied by a shoulder region in the MO-AzoPS-water systems between q = 0.01 and 0.02 Å −1 . This corresponds to features of length scales of 30−60 nm, which are visible as small vesicles in the cryo-TEM micrographs (Figure 3b), showing that there is some heterogeneity in the size and order of the dispersed particles.
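The Guinier estimate works by fitting ln I against q² in the low-q limit, where the slope equals −Rg²/3; for a homogeneous sphere the diameter is then 2·Rg·√(5/3). A self-contained sketch on synthetic data (the Rg value and q window are assumptions for illustration, not the measured curves):

```python
import math

# Generate synthetic low-q Guinier data: I(q) = I0 * exp(-(Rg*q)^2 / 3).
Rg_true = 45.0  # nm, assumed
qs = [0.001 + 0.0002 * i for i in range(20)]            # 1/nm, low-q window
intensities = [100.0 * math.exp(-(Rg_true * q) ** 2 / 3) for q in qs]

# Least-squares fit of ln I against q^2; the slope is -Rg^2 / 3.
x = [q * q for q in qs]
y = [math.log(i) for i in intensities]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)

Rg_fit = math.sqrt(-3 * slope)
diameter = 2 * Rg_fit * math.sqrt(5 / 3)  # sphere-equivalent diameter
print(f"Rg = {Rg_fit:.1f} nm, sphere-equivalent D = {diameter:.0f} nm")
```

For Rg = 45 nm the recovered diameter (~116 nm) falls inside the 100−167 nm window quoted above; the chosen q range also satisfies the usual Guinier validity condition q·Rg < 1.3.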
Vesicles formed from MO-water dispersions showed no Bragg diffraction peaks in the SAXS patterns; only a broad peak at q = 0.2 Å −1 is observed (Figure 3d), corresponding to the real-space distance expected from the bilayer packing of MO molecules in the outer layer of the vesicles. 60 Variation of the initial water content in the bulk phase (10−40 wt %) resulted in the same broad peak, showing that this had no effect on internal ordering (SI, Figure S6). However, upon incorporation of AzoPS into the MO-water dispersions, Bragg peaks with a q ratio of √2:√4:√6 become clearly apparent in the SAXS patterns (Figure 3d). This ratio is characteristic of the inverse bicontinuous primitive cubic (Im3m) LLC phase, 4 corresponding to peaks of Miller indices (110), (200), and (211), and indicates that cubosomes are present in the MO-AzoPS-water dispersions. The lattice parameter varied between 145 and 217 Å, which is comparable to other MO-based primitive cubosomes in the literature. 61 This LLC phase was stable across the composition range tested (10−40 wt % initial water, 10−30 wt % AzoPS), for both chain lengths of AzoPS (for C 6 AzoC 4 E 4 see SI, Figure S7) and across the temperature range of 25−55°C (SI, Table S7). To the best of our knowledge, this is the first example where a light-responsive chemical moiety has been incorporated into dispersed nanoparticles to form a cubic LLC phase, which is highly desirable for future controlled release applications.
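The phase assignment amounts to checking the measured peak positions against the √2:√4:√6 spacing of the Im3m allowed reflections. A sketch with hypothetical peak centres (the q values are assumptions, not the measured data):

```python
import math

def peak_ratios(q_peaks):
    """Normalise Bragg peak positions to the first peak."""
    return [q / q_peaks[0] for q in q_peaks]

# Expected spacing ratios for the Im3m reflections (110), (200), (211):
# sqrt(2) : sqrt(4) : sqrt(6), normalised -> 1 : 1.414 : 1.732
IM3M = [math.sqrt(n) / math.sqrt(2) for n in (2, 4, 6)]

q_measured = [0.043, 0.061, 0.0745]  # 1/Angstrom, hypothetical peak centres
ratios = peak_ratios(q_measured)

# Assign the phase if every ratio matches within a small tolerance:
matches = all(abs(r - ref) < 0.02 for r, ref in zip(ratios, IM3M))
print("Im3m assignment:", matches)
```

In practice one would also check that competing cubic phases (e.g. the √2:√3:√4 spacing of Pn3m) fit worse before settling on the assignment.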
The bicontinuous primitive cubic LLC phase is associated with a high packing stress, where some amphiphiles are extended (around the water channels) and others are compressed (around lipid junctions). It has been observed previously that the inclusion of long-chain additives to MO can lower the packing stress by preferentially segregating to regions where the amphiphiles are in extension. 62 In this case, the primitive cubic phase has been stabilized by the AzoPS addition, due to their longer chain length. Notably, variation in the lattice parameter was observed across the AzoPS composition range explored for both amphiphiles. The cubic lattice parameter (a) was calculated from a = 2π√(h² + k² + l²)/q₀, where q₀ is the peak center for the first observed Bragg peak and h, k, and l are the associated Miller indices (110). 61 Increasing the concentration of AzoPS within the LLCs resulted in a stark increase in a (Figure 3e). In the trans state, both AzoPS have a longer hydrophobic tail and larger headgroup area than MO and therefore a lower CPP. This favors a lower spontaneous curvature in the amphiphile bilayer and leads to swelling in the lattice at increasing concentrations. 63 The increase in lattice size is further amplified by the increased thickness of the amphiphile bilayers from the longer chains. 64 In comparison, on increasing the initial water concentration from 10 to 30 wt %, there was little variation in the lattice parameter (Figure 3e), which is directly related to the internal water content in the cubic phase through geometric packing analysis (SI, Table S8). This is as expected, as the internal water within the cubic lattice is free to move into the external solution. However, a decrease in lattice parameter, and therefore internal water content, was measured at the highest water concentration (40 wt %) with C 8 AzoC 8 E 4 . In the MO-water system, transition to a two-phase cubic + water region is expected at 40 wt % water. 49 On incorporation of the more hydrophobic C 8 AzoC 8 E 4 , this could lead to the dispersion being formed in the two-phase region, which, in turn, could lead to preferential segregation of the C 8 AzoC 8 E 4 into the aqueous phase. This would result in AzoPS depletion in the cubic lattice and a decrease in lattice parameter, as expected from the discussion above. This is also accompanied by a decrease in the intensity of the Bragg peaks, indicating a higher proportion of vesicles without internal cubic phases forming in the dispersion at this composition, providing an upper limit on the amount of water that can be incorporated before disordering occurs.
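Applying the lattice-parameter formula to the (110) reflection can be sketched as follows; the first-peak position is an illustrative assumption, chosen to fall inside the 145−217 Å range reported above:

```python
import math

def cubic_lattice_parameter(q0, hkl=(1, 1, 0)):
    """a = 2*pi*sqrt(h^2 + k^2 + l^2) / q0 (units: inverse of q0's units)."""
    h, k, l = hkl
    return 2 * math.pi * math.sqrt(h * h + k * k + l * l) / q0

# Illustrative first Bragg peak centre in 1/Angstrom (an assumed value):
a = cubic_lattice_parameter(0.043)
print(f"a = {a:.0f} Angstrom")
```

Because a scales as 1/q₀, the shift of peaks to higher q on heating (discussed below) translates directly into lattice contraction.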
To retain LLC order, particularly at high concentrations, the choice of AzoPS structure was found to be crucial. On increasing the concentration of C 6 AzoC 4 E 4 above 10 wt %, there was a significant decrease in the intensity of the Bragg peaks in the SAXS patterns (SI, Figure S7), indicating a disordering of the cubic LLC. This low order-retention was further demonstrated by repeating SAXS experiments for three samples made to the same composition (30 wt % C 6 AzoC 4 E 4 and 20 wt % initial water), where no Bragg diffraction peaks were visible for two of the three samples (SI, Figure S9). In contrast, the disordering effect is less prevalent for dispersions containing the longer AzoPS, C 8 AzoC 8 E 4 , with repeat compositions (30 wt % C 8 AzoC 8 E 4 , 20 wt % initial water) retaining the inverse bicontinuous primitive cubic LLC phase to give reproducible results (SI, Figure S9). For both AzoPS structures, it was found that the peak intensity also increases on increasing the temperature to 55°C. This was accompanied by a peak shift to higher q values, indicating a decrease in cubic lattice parameter (SI, Figure S10), as expected from the literature where higher temperatures lead to a dehydration of the surfactant head groups and an increase in the CPP. 4 In the case of these MO-AzoPS systems, this acts to both contract the cubic lattice and increase the cubic phase stability at elevated temperatures. This implies that the AzoPS geometry, quantified by the CPP, is crucial in determining the stability of the cubic LLC phase. The estimated CPP at room temperature for C 8 AzoC 8 E 4 is greater than C 6 AzoC 4 E 4 (0.42 cf. 0.40, see SI, Table S3), implying that amphiphile geometries closer to that for MO (1.16) 65 result in greater stability of the cubic phase and more reproducible cubosome formation.
The LLC disordering at increased AzoPS concentrations provides an upper limit to the amount of light-sensitive material that can be added to these systems while retaining the internal order required for controlled molecular entrapment and release. The ability to tailor the stability through control of the light-responsive surfactant geometry is thus an important result for the subsequent optimization of these systems for molecular delivery applications.
Isomerization within Light-Responsive LLC Dispersions. Having formed LLC dispersions using AzoPS in their native, trans state, the effect of photoisomerization was next investigated. UV−vis absorption spectra for AzoPS within MO dispersions in the trans state show a peak at ∼350 nm, characteristic of the π−π* transition in azobenzene 31 (Figure 4). We note that the spectra show a relatively high background due to Rayleigh scattering from the dispersed particles. When compared to the spectra for AzoPS diluted in water, introduction into LLC dispersions caused a red-shift in the absorption band by 19 nm (Figure S11), attributed to the formation of J-aggregates in the LLC structure. 66 This formation of aggregates also contributes to the asymmetry in the π−π* peak (Figure 4), due to overlap of peak contributions from both aggregates and monomers. 67 On irradiation with UV light (5 min), the trans peak decreases and two new peaks arise at λ max = 319 and 450 nm (Figure 4, for C 6 AzoC 4 E 4 see SI, Figure S12), attributed to the π−π* and previously forbidden n−π* transitions in the cis isomer, indicating that photoisomerization has occurred. The trans isomer can be partially recovered (94%) through irradiation with blue light. A photostationary state, containing a mixture of both isomers that dynamically switch between the two forms, is created on irradiation. 23 The time needed to obtain a cis-dominant PSS and subsequent reversion to the trans state was determined using first-order kinetics, as in previous studies (see SI, Figures S13 and S14). 25 Forward conversion occurred on the order of seconds for both AzoPS types (Table S9, SI). Reversion from the cis back to the trans state required longer irradiation times (∼10 s cf.
∼1 s for trans→cis), due to a combination of the lower irradiance from the blue LED and a lower absorption coefficient for the n−π* transition (for C 8 AzoC 8 E 4 , ε cis,455nm = 340 m 2 mol −1 cf. ε trans,365nm = 1000 m 2 mol −1 ). Thermal reversion lifetimes, on storage in the dark, were on the order of hours (SI, Table S9, Figures S15 and S16).
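The first-order kinetic treatment can be sketched as follows. The rate constants are assumptions chosen only to match the reported time scales (seconds under irradiation, hours for thermal reversion in the dark), not fitted values from the SI:

```python
import math

def absorbance(t, a0, a_pss, k):
    """First-order approach to the photostationary state:
    A(t) = A_pss + (A_0 - A_pss) * exp(-k*t)."""
    return a_pss + (a0 - a_pss) * math.exp(-k * t)

k_forward = 3.0          # 1/s, trans -> cis under UV (assumed rate constant)
k_thermal = 1.0 / 7200   # 1/s, cis -> trans in the dark, ~2 h lifetime (assumed)

# Time to reach 95% of the PSS (i.e. the residual excursion falls to 5%)
# follows from exp(-k*t) = 0.05, giving t = ln(20)/k:
t95_uv = math.log(20) / k_forward
t95_dark = math.log(20) / k_thermal
print(f"UV: ~{t95_uv:.1f} s; thermal reversion: ~{t95_dark / 3600:.1f} h")
```

The same single-exponential form is fitted to the decay (or recovery) of the trans π−π* band to extract k under each light source.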
On storage in the dark over the course of 1 month, the π−π* peak recovered at the same wavelength (∼350 nm). This indicates that there was minimal leaching of the AzoPS into solution over the course of the irradiation and relaxation cycle, which would result in a blue-shift of the peak as observed for the AzoPS alone in solution. With a view to controlled release applications, the high stability of the cis isomer over the course of hours is important for the storage and delivery of LLC particles to the target site. Combined with the rapid photoisomerization of AzoPS, this demonstrates a high degree of temporal control within these systems using light as an external stimulus.
Effects of Isomerization on Structure and Size of MO-AzoPS Dispersions.
The effects of photoswitching on the size and structure of the nanoparticles present in MO-AzoPS-water dispersions were next investigated. Following irradiation to form the cis-dominant PSS, the SAXS patterns exhibited Bragg peaks of the same ratio, √2:√4:√6, across the whole composition range, showing the retention of the inverse bicontinuous primitive cubic phase (Figure 5 and SI, Figures S17−S19). However, the lattice parameter decreased across all samples on isomerization (Figure 5b,c). The measured change in the lattice parameter remained approximately stable on increasing the initial water concentration; however, a dramatic increase in the disparity between the trans and cis-state lattice parameters was observed at higher AzoPS concentrations (>20 wt %). A maximum decrease in lattice parameter of 39% was measured for the dispersion of MO with 30 wt % C 8 AzoC 8 E 4 and 20 wt % water. Repeat experiments for different samples of the same composition showed that this decrease in cubic lattice parameter was reproducible (see SI, Figure S20). We note that for an analogous sample containing C 6 AzoC 4 E 4 , low-intensity Bragg peaks were observed in the trans state, only showing peaks large enough to be assigned in one sample of three. Despite this, the little ordered material present in this sample showed high sensitivity to the isomeric state of the AzoPS, with a 28% decrease in the lattice parameter. Interestingly, in one sample, Bragg peaks only emerged after isomerization (see SI, Figure S20), further demonstrating that the cubic phase stability is dependent on AzoPS geometry.
On photoisomerization, "bending" of the AzoPS tail leads to an increase in tail volume and therefore CPP, increasing the spontaneous curvature of the amphiphile bilayer. This would result in contraction of the cubic lattice (and decrease in corresponding lattice parameter) as previously observed for structural changes in non-light-responsive MO systems. 63 Increasing the concentration of AzoPS within the LLC acts to magnify this effect, resulting in a larger change. The increased stability of the cubic LLC on isomerization can be attributed to the increase in the CPP, which is consistent with observations at increased temperatures and chain lengths in the trans samples. Despite this increase in cubic phase stability on isomerization, the greater disorder at high AzoPS concentrations, especially in C 6 AzoC 4 E 4 , will have a knock-on effect for the ability of these systems to entrap guest molecules. It is therefore crucial to strike a balance between achieving the maximum structural change, obtained from high concentrations of AzoPS, and retaining ordered LLC packing, which is sensitive to the disparity in geometries between MO and AzoPS. In this regard, the longer chain C 8 AzoC 8 E 4 showed a greater ability to retain LLC order at high concentrations, due to its higher CPP, maximizing the light-sensitivity and reproducibility of these systems.
The retention of internal order on isomerization was visible in TEM micrographs (Figure 6a,b); however, these also displayed heterogeneity in the particle structures. The micrographs show a mixture of vesicles, ordered LLC particles, and, for dispersions containing C 8 AzoC 8 E 4 , more complex, multi-particle assemblies consisting of multiple, ordered cubosome regions within a vesicle shell. Further optimization of the preparation method for these systems may be needed to form a homogeneous array of cubosome particles, as has been achieved in the literature previously. 68 DLS studies showed that the particle size increased on isomerization of the AzoPS (Figure 6c), resulting in hydrodynamic diameters mostly between 211 and 257 nm (see SI, Table S11). As in the trans state, an outlying diameter of 571 nm was observed for particles with 30 wt % C 8 AzoC 8 E 4 . The cis isomer of pure azobenzene has a larger dipole moment than the trans, resulting in an increase in the hydrophilicity of the AzoPS molecules on isomerization. 35 The increased interaction between the molecules and the surrounding water may lead to swelling at the particle surface, which may explain the observed increase in hydrodynamic diameter. It is worth noting, however, that this swelling has no effect on the ordered, cubic LLC regions, in which there is a contraction of the lattice, as discussed above. In the trans state, the polydispersity showed a high dependence on the concentration of AzoPS within the LLCs. For low concentration samples (10 wt %), photoisomerization resulted in a large increase in polydispersity, compared to the trans state (Figure 6c). The formation of a PSS upon irradiation could lead to heterogeneity in the interaction between different particles and the surrounding water, resulting in a heterogeneity in the particle size.
In contrast, for dispersions containing higher concentrations of AzoPS (30 wt %), this effect is masked by the high initial polydispersity for trans dispersions, leading to an insignificant change on isomerization.
Light-Induced Release from MO-AzoPS Dispersions. The correlation between the change in lattice parameter upon irradiation and the ability of the cubosomes to retain and release guest molecules was next investigated. Nile Red was used as the guest molecule: a hydrophobic dye that exhibits a high fluorescence intensity when present in a lipid phase but significantly lower fluorescence in water. 69 This allows the location of the dye, either in the lipid-like amphiphile bilayer or the surrounding aqueous dispersion phase, to be monitored from its emission spectrum. A reference dispersion was made by mixing Nile Red (0.03 wt %) into an MO-water (20 wt %) bulk mixture before homogenizing into a dispersion. Following excitation at 550 nm, which avoids inducing unwanted isomerization, a fluorescence peak was observed at 640 nm in the emission spectrum. This reference spectrum showed no change on irradiation of the dispersion using UV light for 3 min (Figure 7a). This was compared with the dispersion that showed the greatest change in the lattice parameter on isomerization, MO-C 8 AzoC 8 E 4 (30 wt %)-water (20 wt %). For this sample, the fluorescence intensity at 640 nm decreased by 72% on irradiation with UV light, under identical conditions (Figure 7a). This indicates that Nile Red is released from the lipid LLC matrix into the aqueous phase as a result of the contraction of the cubic phase, which can be thought of as squeezing the dye from the amphiphile bilayer (Figure 7c).
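Under the assumption that Nile Red fluoresces only when dissolved in the lipid phase, the released fraction follows directly from the intensity drop. A trivial sketch with illustrative intensities normalised to the pre-irradiation signal:

```python
# Fraction of dye released, assuming emission comes only from lipid-bound dye.
i_before = 1.00   # normalised 640 nm intensity before UV (assumed baseline)
i_after = 0.28    # normalised intensity after UV: a 72% drop, as reported

released = 1 - i_after / i_before
print(f"released fraction = {released:.0%}")
```

In a real analysis the residual fluorescence of aqueous Nile Red and any scattering background would need to be subtracted before applying this ratio.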
The change in the fluorescence intensity with time after isomerization was also tracked and compared to an identical dispersion with the AzoPS kept in the trans state, through storage in the dark. Both samples retained a fluorescence peak of roughly the same intensity over the course of 3 h, indicating minimal diffusion of the dye out into the aqueous phase within this period (Figure 7b). This shows that stimulated release using UV irradiation is significant in comparison to the gradual diffusive release from these systems on the time scale of hours, within which thermal relaxation of the cis state back to the trans is not a concern. Structural control of the LLC therefore allows rapid, stimuli-responsive release in comparison to diffusion from the particles in their unirradiated state.
■ CONCLUSIONS
In summary, we have designed light-responsive cubosomes that exhibit a swell-squeeze mechanism to enable triggerable release of entrapped payload. First, light-responsive AzoPS molecules were combined with MO and water to form bulk LLC mesophases whose structure depends on the chain length of the AzoPS, with C 6 AzoC 4 E 4 and C 8 AzoC 8 E 4 forming hexagonal and lamellar phases, respectively. Bulk LLCs were then homogenized to form stable dispersions of particles ∼200 nm in diameter. An internal inverse bicontinuous primitive cubic LLC phase was observed using SAXS and cryo-TEM across the composition (10−40 wt % initial water, 10−30 wt % C 6 AzoC 4 E 4 or C 8 AzoC 8 E 4 ) and temperature (25−55°C) range tested. Notably, the cubic lattice parameter is highly sensitive to the AzoPS concentration: a higher loading leads to swelling of the cubic lattice, offering a method to tune the nanoscale structure. However, the stability of the cubic LLC phase decreased at higher AzoPS concentrations, suggesting that there is a sweet spot to be found between tunability and stability. Upon UV irradiation, AzoPS molecules within the cubosome structure isomerized rapidly between trans and cis states, leading to a small increase in particle size in most samples but retention of the internal inverse bicontinuous primitive cubic phase across the composition range. However, photoisomerization leads to squeezing of the cubic lattice, resulting in a corresponding decrease in the lattice parameter. This squeeze mechanism was successfully harnessed to enable phototriggerable release of trapped Nile Red guest molecules from the cubosome structure into the aqueous phase. It is thought that the "bending" of the AzoPS tail on isomerization leads to an increase in the tail volume and thus the spontaneous curvature of the amphiphile bilayer in the cubic LLC. 
This acts to contract the lattice, which effectively squeezes the hydrophobic dye out of the LLC matrix in a manner that is markedly faster than release due to diffusion.
With a view to application, ordered LLC nanoparticles are promising candidates for the next generation of nanodelivery devices for drugs, catalysts, or other active molecules. Triggerable release of the entrapped payload further directs delivery, which can improve selectivity and, notably for anticancer treatments, reduce drug toxicity to surrounding normal tissue. This proof-of-concept work has shown that cubosomes can be built containing light-responsive AzoPS, swelled (via composition) to encapsulate a variety of different payloads, and subsequently squeezed (via photoisomerization) to induce release. Further work is needed to probe how this swell-squeeze mechanism can be exploited to tune the release of a greater variety of guest molecules with different hydrophilicities. Looking toward clinical application, red-shifting the isomerization wavelength from the UV to infrared regions, using further functionalization of the azobenzene, is a vital step toward improving tissue penetration and paving the way toward light-triggerable release in vivo.
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/jacs.2c08583. Materials and experimental methods; details for the synthesis and characterization of C 6 AzoC 4 E 4 and C 8 AzoC 8 E 4 using NMR spectroscopy, FTIR, mass spectrometry, and percentage yields; AzoPS critical micelle concentrations; MO-water bulk LLC characterization; calculations to estimate MO, C 6 AzoC 4 E 4 , and C 8 AzoC 8 E 4 geometries; DLS results for LLC particle size and stability after 10 months; photographs to show dispersion stability after 10 months; cryo-TEM micrograph for MO-C 8 AzoC 8 E 4 -water dispersion; additional SAXS data and analysis for dispersions in the trans state, Guinier size analysis, table of Bragg peak positions for phase assignment with temperature and composition, and calculations for the final water fraction in the nanoparticles; additional UV−vis absorption spectra, kinetics plots and results for AzoPS LLC dispersions on irradiation with UV and blue light, and thermal reversion of the cis form in the dark; additional SAXS data comparing isomerized dispersions; and DLS results for isomerized dispersions (PDF)

Abbreviations: SAXS, small-angle X-ray scattering; TEM, transmission electron microscopy; UV, ultraviolet
3D printing of silk microparticle reinforced polycaprolactone scaffolds for tissue engineering applications
Polycaprolactone (PCL) scaffolds have been widely investigated for tissue engineering applications; however, they exhibit poor cell adhesion and mechanical properties. Subsequently, PCL composites have been produced to improve the material properties. This study utilises a natural material, Bombyx mori silk microparticles (SMP) prepared by milling silk fibre, to produce a composite to enhance the scaffold's properties. Silk is biocompatible and biodegradable with excellent mechanical properties. However, there are no studies using SMPs as a reinforcing agent in a 3D printed thermoplastic polymer scaffold. PCL/SMP (10, 20, 30 wt%) composites were prepared by melt blending. Rheological analysis showed that SMP loading increased the shear thinning and storage modulus of the material. Scaffolds were fabricated using a screw-assisted extrusion-based additive manufacturing system. Scanning electron microscopy and X-ray microtomography were used to determine scaffold morphology. The scaffolds had high interconnectivity with regular printed fibres and pore morphologies within the designed parameters. Compressive mechanical testing showed that the addition of SMP significantly improved the compressive Young's modulus of the scaffolds. The scaffolds were more hydrophobic with the inclusion of SMP, which was linked to a decrease in total protein adsorption. Cell behaviour was assessed using human adipose-derived mesenchymal stem cells. A cytotoxic effect was observed at higher particle loading (30 wt%) after 7 days of culture. By day 21, 10 wt% loading showed significantly higher cell metabolic activity and proliferation, high cell viability, and cell migration throughout the scaffold. Calcium mineral deposition was observed on the scaffolds during cell culture. Large calcium mineral deposits were observed at 30 wt% and smaller calcium deposits were observed at 10 wt%.
This study demonstrates that SMPs incorporated into a PCL scaffold provided effective mechanical reinforcement, improved the rate of degradation, and increased cell proliferation, demonstrating potential suitability for bone tissue engineering applications.
Introduction
3D printing technologies are enabling the development of complex multi-material structures for tissue engineering applications that are beginning to more accurately reflect the multifaceted biophysical environment within tissues [1,2]. The advancement of 3D printing or bioprinting is facilitating advancements in a range of tissue engineering applications such as bone, cartilage, cardiac, skin, vasculature, and the development of biomimetic disease models [3][4][5][6][7][8]. However, a key priority is the development of advanced biomaterials that are compatible with bioprinting technologies but also provide superior physical and biological properties. This is of utmost necessity in bone tissue engineering applications, which has a demanding set of requirements due to the complexity and remarkable properties exhibited by the nanoscale organised hierarchical composite tissue. In this application, the biomaterial must withstand significant mechanical forces generated in the tissue and promote osteogenic cell behaviour.
A widely investigated biomaterial is the synthetic polymer polycaprolactone (PCL), which has been used for bone tissue engineering applications and is degradable, bioresorbable, and biocompatible in vivo [9,10]. These properties can be readily exploited in tissue engineering applications because PCL is easily processed: it has a low melting temperature (~60°C), is soluble in a range of solvents (e.g. chloroform, dichloromethane, toluene, and acetone), and has excellent blending compatibility and suitable rheological and viscoelastic properties, enabling the fabrication of a variety of scaffolds using both conventional and additive manufacturing techniques [9][10][11].
Despite these advantages, PCL has a degradation profile that can be too long to match the formation and ingrowth of neotissue: it can take up to 4 years to degrade, depending on structural properties (e.g. thickness, porosity, and quantity), molecular weight, and the biological environment [9,[12][13][14][15]. Furthermore, the biomechanical properties of PCL scaffolds are not appropriate for load-bearing applications such as in bone tissue engineering and orthopaedics. PCL, in common with other synthetic polymers, also lacks biological cues and binding motifs to promote cell adhesion and modulate cell behaviour, whilst its hydrophobic chemistry reduces cell attachment and spreading.
Subsequently, strategies to improve the degradation, mechanical, and bioactivity properties of PCL scaffolds are required. The mechanical properties can be improved through blending with both synthetic and natural polymers and the incorporation of particulate fillers such as bioceramics and carbon nanomaterials [16][17][18]. The functionalisation of specific chemistries and biological motifs onto the scaffold surface can enhance the bioactivity of scaffolds [19]. Meanwhile, incorporating bioactive biomaterials such as collagen, gelatin, hydroxyapatite, tricalcium phosphate, and bioactive glass can promote cell attachment, proliferation, and differentiation down specific cell lineages [16,[20][21][22]. For example, the development of a composite scaffold containing poly(L-lactic-co-ε-caprolactone), bovine bone matrix, and gelatin promoted faster and mature bone regeneration in human patients [23,24]. This demonstrates that the biological properties of PCL can be improved through the incorporation of the appropriate biological motifs.
The aim of this study is to address the challenges inherent to PCL of mechanics, degradability, and bioactivity by using silk as the material filler in the form of silk microparticles (SMPs). Silk is a protein-based natural polymer fibre, primarily composed of fibroin (core) and sericin (coating), which is biocompatible, biodegradable, and has excellent mechanical properties [25][26][27][28]. It has been used in Food and Drug Administration (FDA) approved medical devices such as sutures and has been investigated as a scaffold for tissue engineering applications [25][26][27][28]. In the three-dimensional (3D) bioprinting space, silk has predominantly been used as dissolved silk fibroin solution to produce hydrogel bioinks [29][30][31]. However, the dissolution process is time consuming and requires toxic chemicals. The production of silk particles directly through mechanical cutting and milling overcomes these issues. Moreover, the particles retain the natural microstructure of silk. SMPs and fibres have successfully been used as filler agents to enhance the mechanical and biological properties of scaffolds [32][33][34][35][36][37]. A rigid particle filler in a polymeric matrix can improve the compressive modulus, creep resistance, and fracture toughness of the composite [38,39]. Rajkhowa et al. demonstrated a 40-fold increase in compressive modulus and yield strength of a porogen-leached silk scaffold reinforced with SMPs [40]. Furthermore, Mandal et al. [36] used alkali-hydrolysed silk microfibres as a filler in a silk scaffold whilst Gupta et al. [37] used chopped silk fibres to reinforce a lyophilised silk and hydroxyapatite coated scaffold which showed a 5-fold increase in compressive modulus. Additionally, Zhang et al. used SMPs, microfibres, and nanofibres to reinforce a 3D printed chitosan hydrogel which showed a 5-fold increase in compressive modulus and aided in printing fidelity and stability of the structure whilst maintaining biocompatibility [32,33].
Consequently, the development of a PCL and SMP composite scaffold may improve the physical and biological properties of scaffolds for tissue engineering applications. The use of SMPs as a filler provides an alternative to other explored filler materials such as graphene, carbon nanotubes, hydroxyapatite, and bioglass, which have been investigated to modulate mechanical and biological properties [16][17][18]22]. SMPs are biodegradable and thus pose less of a cytotoxic and inflammatory risk than graphene and carbon nanotubes whilst still improving mechanical performance. Furthermore, silk has been shown to promote mineralisation and osteogenic behaviour in scaffolds and can therefore be utilised in a similar strategy to the inclusion of, for example, hydroxyapatite and bioglass [37,[41][42][43][44][45][46]. Consequently, a PCL/SMP composite scaffold can be used by itself or in combination with co-printed hydrogels containing SMPs for bone tissue engineering applications [32,33,47].
Screw-assisted extrusion-based 3D printing was utilised in this study to produce PCL-based scaffolds. This technique has previously been used to produce scaffolds for tissue engineering applications with homogenous and high particle loadings [16,48]. Furthermore, this process offers simplicity of use, lower cost than competing technologies, and applicability to a wide range of materials, including highly filled particle composites, and can be combined with multiple extrusion heads to fabricate multi-material scaffolds, including the printing of cells. Alternative technologies such as selective laser sintering (SLS) have been employed to fabricate complex porous PCL scaffolds without the use of support structures [49,50]. However, they typically produce partially melted structures, which can limit mechanical properties, and the partially melted or loose powder potentially poses inflammatory risks, although thermal post-processing steps can minimise these issues [49]. SLS also cannot easily generate multi-material scaffolds with distinct material regions, and cells cannot be included in the printing process. Consequently, extrusion processes using polymer melts or solutions offer a simpler and widely accessible approach but may require support structures for complex geometries.
This study presents a composite 3D printed PCL scaffold reinforced with SMPs to improve the mechanical, biological, and degradation properties. The rheological and thermal properties of the PCL/SMP composites with different particle loadings were evaluated. The scaffolds were fabricated using a screw-assisted melt extrusion 3D printing system and scaffold morphology was characterised using scanning electron microscopy and micro-computed tomography. The effect of SMP loading on the surface roughness, mechanical properties, wettability, protein adsorption, and in vitro enzymatic degradation of the scaffolds was assessed. Finally, human adipose derived stem cells were used to assess the in vitro biological properties of the scaffold with cell metabolic activity, viability, and morphology evaluated.
Materials
Bombyx mori silk fibres (Automatic Silk Reeling Unit, Ramanagaram, India), reeled and undegummed, were processed using a previously reported protocol [51]. The silk was degummed using a rotary textile dyeing machine (Ahiba IR Pro, Datacolor, USA) for 30 min at 98°C in a 2 g/L sodium carbonate (Sigma-Aldrich, USA) and 1 g/L olive oil soap (Vasse Virgin, Australia) solution, with a material mass (g) to solution volume (mL) ratio of 1:50. The degummed silk fibres were washed three times each in tap water and then deionised water to ensure the removal of all chemical traces. Finally, the silk fibres were dried overnight at 40°C. The SMPs were produced using a previously reported physical milling process consisting of a combination of chopping, wet milling, and spray drying [52]. PCL (M w = 50,000 Da, CAPA 6500, Perstorp, UK) in the form of 3 mm pellets was used as the polymer matrix.
PCL/SMP composites were prepared through melt blending. PCL pellets were melted at a temperature above 80°C and SMPs were then added to the PCL melt to achieve the desired particle loadings of 10 wt%, 20 wt%, and 30 wt% SMPs. The mixtures were physically blended for at least 15 min to ensure a homogeneous dispersion of silk powder. Finally, the prepared blends were cooled and cut into smaller pieces ready for scaffold fabrication and experimental testing.
Particle characterisation
The SMP morphology was assessed by scanning electron microscopy (SEM) and a laser diffraction particle size analyser.
The particles were sputter coated with platinum for 60 s, obtaining a ~10 nm coating, and then imaged by SEM (S-3000N, Hitachi, Japan) at an accelerating voltage of 10 kV. The particle morphology was assessed using the open-source software Fiji.
A laser diffraction particle size analyser (Mastersizer 3000, Malvern Panalytical, UK) was used to quantify the particle size and volume distribution. The SMPs were dispersed in water (refractive index = 1.33) and the particle refractive index was 1.561. A Mie scattering and a general-purpose analysis model was used. Sonication was employed prior to measurement to reduce bubble formation and disperse any agglomerates. Five measurements of the dispersed SMPs were performed.
Rheology
The rheological properties of PCL/SMP composites were measured using a dynamic rotational rheometer (HR-3 Rheometer, TA Instruments, USA). Due to the difficulty in obtaining high shear rates with a polymer melt in a rotational rheometer the Cox-Merz rule was used to correlate the dynamic oscillatory low-shear data obtained with high-shear viscosity, generating the complex viscosity (η*) [53].
Samples were prepared by melt blending into discs of 15 mm diameter and 1 mm thickness. Oscillation testing was implemented using a parallel plate geometry (plate diameter 40 mm), a geometry gap of 1 mm, and a temperature of 140°C. Amplitude sweep measurements, with strain swept from 0.1% to 100% at a constant frequency of 1 Hz, were performed in advance to ensure the selected strain value was within the linear viscoelastic range (LVR). A strain of 1% was then chosen for the dynamic frequency sweep tests over a frequency range of 0.1 to 100 Hz.
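The Cox-Merz conversion described above can be sketched as follows. This is only an illustrative calculation, not the rheometer software, and the example moduli values are hypothetical.

```python
import math

def complex_viscosity(g_storage, g_loss, omega):
    """Complex viscosity eta* = |G*| / omega from oscillatory data.

    g_storage (G'), g_loss (G'') in Pa; omega in rad/s; returns Pa.s.
    """
    return math.sqrt(g_storage ** 2 + g_loss ** 2) / omega

# Cox-Merz rule: the steady-shear viscosity at shear rate gamma_dot is
# approximated by eta*(omega) evaluated at omega == gamma_dot.
eta_star = complex_viscosity(3000.0, 4000.0, 10.0)  # hypothetical G', G'' at 10 rad/s
```

A liquid-like melt (G″ > G′, as observed here at 140°C) simply enters through the larger loss term; the same formula applies at every frequency point of the sweep.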
Thermal analysis
Thermal analysis of PCL and PCL/SMP scaffolds was performed using differential scanning calorimetry (DSC, Q100, TA Instruments, USA) under a nitrogen atmosphere. An indium standard was used to calibrate the heat enthalpy and temperature of the DSC. Each sample (n = 3) of ~5 mg was placed in an aluminium pan for the measurement. All specimens were heated to 70°C at a rate of 10°C/min from ambient temperature to erase the previous thermal history. This was followed by cooling to −30°C and reheating to 70°C at a rate of 10°C/min to evaluate the thermal properties, including melting behaviour and crystallisation. The following equation was used to calculate the degree of crystallinity (X c ) [54]:

X c = ΔH m / (φ · ΔH m *) × 100%

where ΔH m is the melting enthalpy from the second heating curve, φ is the weight fraction of PCL in the composite, and ΔH m * is the melting enthalpy of the polymer with complete crystallisation, reported to be 142 J/g for PCL [55].
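The crystallinity calculation above can be expressed as a short script. The 142 J/g reference enthalpy comes from the text; the measured enthalpy and weight fraction in the example are hypothetical.

```python
def degree_of_crystallinity(dH_m, pcl_weight_fraction, dH_m_ref=142.0):
    """X_c (%) = dH_m / (phi * dH_m*) * 100.

    dH_m: melting enthalpy from the second heating curve (J/g).
    pcl_weight_fraction: phi, weight fraction of PCL in the composite.
    dH_m_ref: melting enthalpy of fully crystalline PCL (142 J/g).
    """
    return dH_m / (pcl_weight_fraction * dH_m_ref) * 100.0

# Hypothetical example: a 30 wt% SMP composite (phi = 0.7) with dH_m = 45 J/g
x_c = degree_of_crystallinity(45.0, 0.7)  # ~45.3%
```

Dividing by φ corrects for the fact that only the PCL fraction of the composite mass can crystallise, so the silk filler does not artificially deflate X c.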
Scaffold fabrication
Scaffolds were fabricated using a screw-assisted extrusion-based additive manufacturing system (3D Discovery, RegenHU, Switzerland). The scaffolds were designed with a 0°/90° lay-down pattern, 300 μm fibre diameter, 300 μm pore size, and 255 μm layer thickness. The material and screw chamber temperatures were kept constant at 140°C and 90°C, respectively, and an air pressure of 6 bar was supplied to the material chamber. Qualitative observation showed that a higher material chamber temperature was required to allow continuous movement of material into the screw chamber, thus the temperature was raised to 140°C. The screw rate was kept constant at 7.5 rpm for all material compositions; however, the feed rate was varied to obtain an appropriate fibre diameter, due to changes in viscosity of the composites, which was assessed using an optical microscope (VHX-5000, Keyence, Japan). Feed rates of 11.75, 12.75, 13.25, and 13.75 mm/s were selected for PCL, 10 wt%, 20 wt%, and 30 wt% SMPs, respectively. A higher SMP loading of 40 wt% was also attempted; however, the high particle filler content resulted in repeated blocking of the extrusion nozzle, and the increase in viscosity required a significantly higher temperature (200°C) to maintain material flow, which could potentially cause material degradation. The printed scaffolds were 40 mm × 40 mm × 2.5 mm with a one-layer-thick brim around the perimeter to provide adhesion to the print platform. Scaffolds were cut to 6.5 mm × 6.5 mm × 2.5 mm for experimental studies, with a calliper used for quality control.
Scaffold morphology
Scaffold morphology was assessed through SEM and micro-computed X-ray tomography (μCT).
The scaffolds for SEM (S-3000N, Hitachi, Japan) were sputter coated with platinum for 60 s to obtain a ~10 nm coating and imaged at an accelerating voltage of 10 kV. Top-down and cross-sectional images were obtained. Energy dispersive X-ray spectroscopy (EDX; Oxford Instruments, UK) was used to investigate the elemental composition of the scaffold surface after in vitro cell culture.
Three-dimensional imaging was performed using μCT (SkyScan 1172, Bruker-microCT, Kontich, Belgium). Scanning parameters were set to a 9 μm pixel size and an X-ray source at 100 kV and 100 mA, with no filters. Samples were rotated 360° around their vertical axis with a rotational step of 0.4°. The images were reconstructed using NRecon (Bruker-microCT, Kontich, Belgium) and subsequently analysed with CTAn (Bruker-microCT, Kontich, Belgium), which allowed quantitative analysis of the scaffolds' morphological parameters such as porosity, pore size, fibre diameter, and surface-to-volume ratio. Interconnectivity was calculated with an algorithm, as previously described, by considering only the open and accessible pore volume within the scaffold [56,57].
Surface topography
The surface roughness (S a ) of the scaffolds (n = 3) was assessed through optical profilometry using a laser scanning digital microscope (LEXT OLS4100, Olympus, Japan) at ×50 magnification on an individual fibre with a scanning area of 257 × 258 μm. The areal scanning method was used to measure a specific area of the surface to obtain the mean surface roughness, S a , the areal (3D) extension of the 2D arithmetic mean roughness parameter (R a ).
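The S a parameter measured above can be illustrated with a minimal sketch of its definition: the arithmetic mean of absolute height deviations from the mean plane. A real profilometer applies form removal and filtering first, which is omitted here, and the height map is hypothetical.

```python
def areal_roughness_sa(heights):
    """S_a: arithmetic mean of absolute height deviations from the mean plane.

    heights: flattened list of surface heights (e.g. in um) over the scanned area.
    """
    mean_h = sum(heights) / len(heights)
    return sum(abs(h - mean_h) for h in heights) / len(heights)

# Hypothetical 2x2 height map flattened to a list (um)
sa = areal_roughness_sa([0.0, 2.0, 4.0, 2.0])  # 1.0 um
```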
Bulk mechanical properties
Uniaxial compression using a universal testing machine (INSTRON 4507, UK) was used to assess the bulk mechanical properties of the scaffolds and the role of SMPs in reinforcing the PCL matrix. The samples (n = 5) in the dry state were compressed at a rate of 0.5 mm/min with a 2 kN load cell up to a strain of 40%. The compressive Young's modulus was calculated from the gradient of the linear elastic region. The compressive strength at 1, 10, and 40% strain and the yield strength (using the 2% strain offset method, typical of polymeric materials) are reported (Fig. S1, supplementary information).
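Extracting the modulus from the gradient of the linear elastic region, as described above, amounts to a least-squares slope of the stress-strain data. The strain window and the data in the example are hypothetical.

```python
def compressive_modulus(strain, stress, linear_region=(0.0, 0.02)):
    """Least-squares slope of stress (MPa) vs strain (fraction) over an
    assumed linear elastic region; returns the modulus in MPa."""
    pts = [(e, s) for e, s in zip(strain, stress)
           if linear_region[0] <= e <= linear_region[1]]
    n = len(pts)
    mean_e = sum(e for e, _ in pts) / n
    mean_s = sum(s for _, s in pts) / n
    num = sum((e - mean_e) * (s - mean_s) for e, s in pts)
    den = sum((e - mean_e) ** 2 for e, _ in pts)
    return num / den

# Hypothetical ideal data with a 67.5 MPa modulus
strain = [0.0, 0.005, 0.01, 0.015, 0.02]
stress = [67.5 * e for e in strain]
E = compressive_modulus(strain, stress)  # 67.5 MPa
```

The yield strength by the 2% offset method would then be found where the measured curve intersects a line of this slope shifted by 0.02 strain.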
Nanoindentation
The nanomechanical properties of the PCL/SMP scaffolds were characterised using nanoindentation (Hysitron TI 950 Tribo-Indenter, Bruker, USA). The equipment was fitted with a standard three-sided pyramidal (Berkovich) probe and the probe area (A) function was calibrated independently before the test. A 50 μm spacing was used between indents with 10 indents for each sample.
Samples were mounted in epoxy and polished down to a 40 nm surface finish. The indenter was forced into the sample at 400 μN/s for 5 s, held at a peak load (P max ) of 2000 μN for 2 s, and unloaded at 400 μN/s. The force and displacement were recorded for the entire duration of the test. The nanoindenter data analysis software (Hysitron TI 950 Tribo-Indenter, Bruker, USA) was used to estimate the hardness (H) from P max and the contact area A:

H = P max / A

where P max is 2000 μN and A is the contact area [58]. The unloading segment of each indentation was analysed using the same software, which follows the Oliver-Pharr model [59] to fit the initial unloading portion (95%-80%) of the force-displacement curve and extract the reduced modulus (E r ):

E r = (√π / 2) · S / √A

where S is the contact stiffness and A is the projected contact area [59].
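The hardness (H = P max /A) and Oliver-Pharr reduced-modulus relations described above can be written out directly. Units must be kept consistent (e.g. load in μN and area in μm² give H in MPa), and the example contact area and stiffness are hypothetical.

```python
import math

def hardness(p_max, contact_area):
    """Nanoindentation hardness H = P_max / A (consistent units)."""
    return p_max / contact_area

def reduced_modulus(contact_stiffness, contact_area):
    """Oliver-Pharr reduced modulus E_r = (sqrt(pi) / 2) * S / sqrt(A)."""
    return (math.sqrt(math.pi) / 2.0) * contact_stiffness / math.sqrt(contact_area)

# Hypothetical indent: P_max = 2000 uN over a 50 um^2 contact area -> H = 40 MPa
h = hardness(2000.0, 50.0)
e_r = reduced_modulus(250.0, 50.0)  # S = 250 uN/um (hypothetical)
```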
In vitro enzymatic degradation
The degradation properties of the PCL and PCL/SMP scaffolds were assessed using a model accelerated in vitro enzymatic degradation method based on previous studies with slight modification [51,55]. All samples were first conditioned to 20°C ± 2°C and 65% ± 2% relative humidity for at least 48 h. The weight of each sample was then recorded before sterilisation by 30 min of ultraviolet (UV) irradiation. Control samples (n = 3) were immersed in 1 mL of 0.1 M phosphate buffered saline (PBS) at pH 7.4 solution in a 24-well plate. Experimental samples (n = 3) were placed in a solution of 1 mL of 0.1 M PBS containing 0.5 mg/mL of lipase from Thermomyces lanuginosus (Novo Nordisk, Copenhagen, Denmark). Samples were then incubated in a standard cell culture incubator (5% CO 2 , 37°C, and 95% humidity) and removed after 6, 12, 24, 48, and 96 h. During the incubation, the lipase and PBS solution were changed every day to ensure enzyme activity.
At each time point, samples were removed and rinsed with deionised water three times and dried in a fume hood overnight. The dried samples were then conditioned again to 20°C ± 2°C and 65% ± 2% relative humidity for at least 48 h and re-weighed. The weight loss of each sample was expressed as a percentage of the original weight. The scaffold mass was considered zero when the scaffold structure itself was degraded and only pieces remained. The scaffold surface morphology was evaluated qualitatively during the degradation process using laser scanning digital microscopy (LEXT OLS4100, Olympus, Japan) at each time point.
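The weight loss reported above is a simple percentage of the conditioned initial mass; a minimal sketch with hypothetical masses:

```python
def weight_loss_percent(initial_mg, final_mg):
    """Mass loss as a percentage of the initial conditioned dry weight.

    Negative remaining mass is clamped to zero, mirroring the protocol's
    convention that a scaffold degraded to loose pieces has zero mass.
    """
    final_mg = max(final_mg, 0.0)
    return (initial_mg - final_mg) / initial_mg * 100.0

# Hypothetical sample: 50.0 mg before degradation, 45.0 mg after -> 10% loss
loss = weight_loss_percent(50.0, 45.0)
```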
Wettability
The wettability of the scaffolds was determined through static water contact angle measurement (KSV Cam 200, Finland). Images were obtained immediately after droplet formation on the scaffold (n = 3) and analysed using the sessile drop technique.
Protein adsorption
Fetal bovine serum (FBS) was used as a model protein system to quantify the amount of protein that would adsorb onto the scaffolds.
The bicinchoninic acid assay (BCA) (Micro BCA Protein Assay Kit, Thermo Fisher Scientific, USA) was used following the manufacturer's instructions and an experimental procedure based on previously reported studies [60]. Briefly, the scaffolds (n = 3) were pre-wetted using a PBS solution in an incubator (5% CO 2 , 37°C, and 95% humidity) overnight. The samples were then immersed in a 10% FBS solution and kept in the incubator for 12 h. The scaffolds were then gently washed in PBS to remove excess and unattached protein. Samples were moved to a new plate to guarantee that only protein adsorbed to the scaffolds was quantified, and the working reagent was added. After 2 h of incubation (5% CO 2 , 37°C, and 95% humidity), 150 μL from each sample was transferred to a 96-well plate and read at an absorbance of 562 nm using a microplate reader (Infinite 200, Tecan, Switzerland). The amount of protein adsorbed was calculated according to a standard curve generated from a dilution series of bovine serum albumin (BSA) standards and normalised to non-protein-treated samples (n = 3).
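Converting the 562 nm absorbance to adsorbed protein via the BSA standard curve is a linear fit followed by inversion; the standard concentrations and readings below are hypothetical.

```python
def fit_standard_curve(concs, absorbances):
    """Least-squares line A562 = m * c + b through the BSA standards."""
    n = len(concs)
    mc = sum(concs) / n
    ma = sum(absorbances) / n
    m = (sum((c - mc) * (a - ma) for c, a in zip(concs, absorbances))
         / sum((c - mc) ** 2 for c in concs))
    return m, ma - m * mc

def protein_from_absorbance(a562, m, b, blank=0.0):
    """Invert the standard curve, subtracting the non-protein-treated blank."""
    return (a562 - blank - b) / m

# Hypothetical BSA standards (ug/mL) and a sample reading of 0.15
m, b = fit_standard_curve([0.0, 5.0, 10.0, 20.0], [0.05, 0.10, 0.15, 0.25])
ug_ml = protein_from_absorbance(0.15, m, b)  # 10.0 ug/mL
```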
Biological assessment

Cell culture and seeding
The printed scaffolds were washed twice in sterile PBS, sterilised by immersion in 80% ethanol for 2 h, washed again twice with sterile PBS, and then dried overnight in a sterile tissue culture laminar flow cabinet ready for cell seeding.
Human adipose derived stem cells (hADSCs, STEMPRO™, Thermo Fisher Scientific, USA) were used for in vitro biological assessment of the scaffolds. hADSCs were maintained and expanded in MesenPRO RS™ media containing 2% (v/v) growth supplement, 1% (v/v) glutamine, and 1% (v/v) penicillin/streptomycin, and incubated under standard conditions (5% CO 2 , 37°C, and 95% humidity). The culture medium was changed every three days. Cells were harvested at passage 7 and at ~80% confluency using a 0.05% trypsin-EDTA solution (Sigma-Aldrich, USA). A cell suspension was prepared (0.33 × 10 6 cells/mL) and 25,000 cells in 75 μL of media were statically seeded onto each scaffold. The solution was pipetted on top of the scaffold in a non-treated 48-well plate and incubated in a standard cell culture incubator for 2 h to allow cell attachment, before the addition of 325 μL of fresh media. The cell culture media was changed every three days. Non-treated well plates were used to minimise cell migration and attachment to the underlying tissue culture plastic (TCP), as proposed by Sobral et al. [61]. All cell-seeded scaffolds were transferred to a new 48-well plate on day 1.
Cell metabolic activity
Cell metabolic activity was assessed at day 1, 3, 7, 14, and 21 after cell seeding using the resazurin assay (Sigma-Aldrich, UK), commonly referred to as Alamar Blue, which functions by the reduction of resazurin to resorufin by metabolically active cells. This assay can provide an indication of cell proliferation. On day 1 all samples (n = 12) were transferred to a new 48-well plate to enable quantification of cell attachment and prevent unattached cells from influencing the result. TCP was used as a positive control, non-seeded scaffolds as a negative control, and media as a blank. At each time point, resazurin solution (0.01% (v/v)) equal to 10% of the well volume (40 μL) was added to each sample and incubated for 4 h. After incubation, 150 μL of each sample was transferred to a 96-well plate and the fluorescence intensity was measured (540 nm excitation/590 nm emission wavelength) with a plate reader (Infinite 200, Tecan, Switzerland). Samples were washed twice in sterile PBS to remove the resazurin solution before the addition of fresh media.
Cell viability
Cell viability was assessed using a Live/Dead assay kit (ThermoFisher Scientific, UK) at day 21 according to the manufacturer's instructions. Culture media was removed from the samples and the TCP control, which were washed twice with PBS before adding 500 μL of a PBS solution containing calcein-AM (2 μM) and ethidium homodimer-1 (EthD-1, 4 μM). Ethanol-treated TCP samples were used as EthD-1 positive (dead) controls. The samples were then incubated for 25 min. Scaffolds were imaged with an inverted fluorescence microscope (Leica DMI6000 B, Leica Microsystems, Germany). Day 1 imaging was attempted; however, the EthD-1 bound to the SMPs, giving rise to false positives in the dead channel (Fig. S2, supplementary information).
Cell morphology
Cell attachment and morphology on the scaffolds were imaged using SEM (Hitachi S-3000 N, Hitachi, Japan) on day 7, 14, and 21. Scaffolds were gently washed twice with PBS and then fixed with 2.5% glutaraldehyde solution for 1 h. After fixation, scaffolds were washed with PBS twice, followed by dehydration in a sequential series of ethanol concentrations (50, 60, 70, 80, 90, and 100%) for 15 min at each concentration, with the 100% ethanol step repeated. A 50:50 mixture of hexamethyldisilazane (HMDS, Sigma-Aldrich, UK) and ethanol was then used to treat the scaffolds for 15 min. Finally, a 100% HMDS solution was used and the samples were left to dry for 24 h in a fume cupboard to allow the HMDS to evaporate before platinum coating and SEM imaging.
Statistical analysis
Statistical analysis was performed using one-way analysis of variance (ANOVA) and multiple comparisons with Tukey post-hoc test using GraphPad Prism software (Graphpad Software Inc., USA) and all values are reported as mean ± standard deviation (SD). The differences between means were considered significant at *p < 0.05, **p < 0.01, ***p < 0.001 and ****p < 0.0001.
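The authors performed the one-way ANOVA in GraphPad Prism; as an illustration of the statistic underlying that test, a minimal pure-Python sketch of the F statistic is shown below (the Tukey post-hoc comparisons are omitted, and the group data are hypothetical).

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across k independent groups.

    F = (SS_between / (k - 1)) / (SS_within / (N - k)), where N is the
    total number of observations across all groups.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical moduli (MPa) for two scaffold groups
f_stat = one_way_anova_f([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])  # 1.5
```

In practice the F statistic would be compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain the p-value reported in the figures.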
Silk microparticle morphology
The SMPs were characterised by a laser diffraction particle size analyser and SEM to determine particle morphology (Fig. 1). The particles have an irregular shape with a spherical cauliflower-like morphology and an equivalent sphere diameter of ~6-7 μm, with 90% of the particles sized below ~10 μm. The particle volume distribution is relatively narrow with only a single peak, which demonstrates that the SMP fabrication process produces uniform particles.
Composite rheology
The PCL/SMP composites were analysed to assess the impact of SMPs on the rheological behaviour of the polymer matrix (Fig. 2). The rheological results were reproducible even after repeated melt-cooling rheology measurements, which indicated good stability and dispersion of the SMPs within the PCL matrix with no additional crystal alignment or degradation induced by the temperature cycling [62]. The storage (G′) and loss (G″) modulus of PCL and PCL/SMP blends all showed liquid-like behaviour with G″ higher than G′ at 140°C, regardless of the SMP weight loading. As the SMP loading increased from 10% to 30%, both G′ and G″ of the composites increased with the silk content, which agrees with previous studies on PCL-based nanocomposites filled with clay [63], starch-based nanoparticles [54], and nano-hydroxyapatite and tricalcium phosphate particles [64]. The addition of SMPs caused a more pronounced effect on G′ than on G″, thus enhancing the elasticity, which is characterised as a pseudo-solid-like response of the composite material [63].
The loss factor tan δ, which indicates the ratio of the loss modulus to storage modulus, as a function of frequency shows that the tan δ of pure PCL decreases with increasing frequency, indicating its viscoelastic liquid-like behaviour (Fig. 2b). At frequencies lower than 10 rad/s, there is a clear difference between PCL and PCL/SMP composites with the tan δ for pure PCL being much higher, indicating the evident elastic response introduced by the addition of SMPs with less energy being dissipated. As the silk content increases, the PCL polymer macromolecule motion is largely retarded by the presence of the SMPs and thus the polymer relaxation is slower at low frequencies [54]. At higher frequencies (> 10 rad/s) the tan δ of all samples converge and the materials exhibit an increasing elastic response as the difference between G' and G" decreases.
The complex viscosity (η*) as a function of frequency is dependent on the SMP content (Fig. 2c). As the SMP content increases, η* increases, which agrees with the changes observed in G′ and G″. The 30 wt% SMP composite showed the largest increase in η* compared to the other samples, due to a more pronounced particle reinforcement of the polymer network structure. As the frequency increases, a clear shear thinning behaviour is observed which is more prominent with increasing silk loading, especially at 30 wt% SMP, which has the largest decrease in η*. The change in viscosity is due to the structural reorganisation of the polymer network caused by the introduction of the SMP filler, which results in particle-particle and particle-polymer interactions. The shear thinning behaviour observed is typical of particle-filled polymers and can be associated with the inter-particle distances being disturbed, which reduces particle-particle interactions and thus disrupts the initial 3D structure [65]. The particles are rearranged in the direction of flow, with a layered structure forming within the matrix which is less resistant to flow, and the materials behave in a more non-Newtonian, shear-thinning manner.
Thermal analysis
DSC was used to determine the thermal properties of the PCL/SMP composites, with the cooling and subsequent heating curves allowing the determination of the crystallisation peak temperature (T c ), crystallisation enthalpy (ΔH c ), melting temperature (T m ), ΔH m , and X c (Fig. 3 and Table 1). The addition of SMPs led to an increased T c , with the increase independent of SMP loading. The pure PCL showed a crystallinity of 45.6%, similar to previously reported studies [54,66]. Increasing the SMP loading led to a decrease in ΔH c and subsequently the X c of the composite. The SMPs had limited influence on T m , even though ΔH m decreased.
An increase in the crystallisation temperature after adding SMPs may indicate that the particles can act as heterogeneous nucleating agents and lower the nucleation free energy required for the PCL matrix, therefore resulting in a higher T c [54]. The decrease in crystallinity caused by the addition of SMPs suggests an interaction with the PCL matrix which disrupts crystal formation [67]. A similar reduction in matrix crystallinity has also been reported in other particle reinforcing studies, such as nano-cellulose reinforced polyethylene oxide [68]. The decrease in ΔH m of the composite is most likely caused by the increasing SMP content, i.e. less PCL per unit weight, since the melting behaviour itself was not affected by the SMPs and only a small change in melting temperature was observed.
Scaffold morphology
The 3D printed scaffold morphologies were observed using SEM and μCT (Figs. 4 and 5). The SEM shows that the composite PCL/SMP scaffolds were successfully 3D printed. The scaffolds have a regular overall geometry with a 0°/90° (log-pile) architecture, and the printed fibres and the pores between them are clearly defined and uniform. The increasing loading of SMPs is clearly observed in the rougher surface of the printed fibres and the particles embedded in the fibre cross-sections.
The scaffold dimensions closely matched the designed criteria of 300 μm for both fibre diameter and pore size. The scaffolds were designed with a 300 μm pore size as studies have demonstrated that values in this region are suitable for bone tissue engineering applications, although the specific pore size is dependent on the final application [69][70][71]. The fibre diameter decreased with increasing loading of SMP, with 30 wt% SMP having a diameter of 269.9 ± 5.0 μm compared to 301 ± 5.3 μm for pure PCL. As the fibre diameter decreased with increasing loading of SMP, the resulting pore size increased (Fig. 5a). The total porosity was approximately 55% for all scaffolds. Scaffold interconnectivity was > 85% for all scaffolds at a pore threshold size of 200 μm and approached 100% at smaller thresholds (Fig. 5b). Full morphological measurements can be found in Table S1 in the supplementary information. This demonstrates excellent interconnectivity, a major advantage of using additive manufacturing for the fabrication of scaffolds. Pore interconnectivity is crucial to allow tissue ingrowth and the diffusion of nutrients, gases, and biomolecules, as well as waste removal, throughout the scaffold [72][73][74][75]. The change in rheological behaviour of the composite material affects material flow and thus the fibre dimensions. Although the printing process has been optimised, the small changes in dimensions throughout a scaffold are not immediately obvious from SEM characterisation but are clear when using μCT, which allows the entire structure to be measured. Additionally, some fibre delamination can be observed at 30 wt% SMP loading which is not observed in the other scaffolds but is related to differences between the actual fibre dimensions and the design dimensions (Fig. S3, supplementary information). Further optimisation of the 3D printing process parameters is required to reach the desired design criteria.
Finally, no information can be determined about the SMP distribution within the PCL matrix using μCT as the absorption contrast was not sufficient to segment the SMPs from the PCL. However, the SMPs are assumed to be homogenous within the PCL due to the thorough physical blending process and subsequent mixing within the screw chamber of the extruder during printing. Furthermore, the interfacial surface energy in the PCL/SMP system will play a critical role in their dispersion; the spherical shape of the SMPs is likely to assist dispersion due to their lower surface area compared to particles with a flaky morphology, which tend to aggregate [76]. The SEM images show an apparently homogenous distribution on the fibre surface and cross-section (Fig. 4).
Mechanical properties
The bulk and nanomechanical properties of the PCL/SMP scaffolds were evaluated using uniaxial compression testing and nanoindentation, respectively. The bulk mechanical properties are important, as they need to match the intended tissue and withstand the physiological forces imposed on the structure, especially in load-bearing applications such as bone and cartilage. The nanomechanical properties influence cell behaviour through mechanotransduction pathways as the cells sense the biophysical environment [77][78][79]. Thus, both should be designed for the intended application. The PCL/SMP scaffolds show the typical behaviour of a cellular solid, with the stress-strain profiles showing an initial linear elastic behaviour, a plateau phase, and then a densification region (Fig. 6a) [80]. The bulk compressive modulus significantly increased with increasing loading of SMPs, from 67.51 ± 4.25 MPa to 119.70 ± 7.34 MPa for PCL and 30 wt% SMP, respectively (Fig. 6b). Although the compressive modulus of cortical bone is considerably higher (7-18 GPa), the 30 wt% SMP scaffold is within the lower region of trabecular bone (0.1-5 GPa) [81]. The difference between the 10 wt% and 20 wt% SMP scaffolds is minimal and not statistically significant. This may have arisen due to differences in scaffold morphology, with the 20 wt% SMP scaffolds having a smaller fibre diameter and larger pore size than the 10 wt% SMP scaffolds; thus the 3D printing process parameters require further optimisation, as previously stated. Consequently, the mechanical properties may not follow a proportional relationship with particle filler loading.
The nanomechanical properties also increase with loading of SMPs into the PCL matrix (Fig. 6c). The reduced Young's modulus significantly increases from 0.55 ± 0.03 GPa to 1.31 ± 0.33 GPa for PCL and 30 wt% SMP, respectively. The stiffness of the scaffold surface significantly increases with the inclusion of SMPs, and stiffer surfaces promote osteogenic differentiation; a surface elasticity of ~40 kPa promotes osteogenic behaviour, and the stiffness of the PCL/SMP composite is considerably higher [79]. However, the elastic modulus of trabecular bone observed through nanoindentation is higher (> 7 GPa) [82,83]. The hardness also significantly increases with SMP loading, from 39.47 ± 3.08 MPa to 72.64 ± 30.96 MPa for PCL and 30 wt% SMP, respectively. However, these values are lower than the hardness observed in trabecular bone (~500 MPa) [83]. A similar nanomechanical behaviour is observed between the 10 wt% and 20 wt% SMP scaffolds, which agrees with the observations for the bulk mechanical properties. This may be related to the quantity of SMPs present at the fibre surface influencing the nanomechanical properties, as qualitative observation of the fibre surface shows a similar quantity of SMPs between the 10 wt% and 20 wt% SMP scaffolds but a large increase at 30 wt% (Fig. 4n). A threshold may be reached at 30 wt% which influences particle packing, distribution volume, and particle-polymer interactions. This can be observed in the previous rheological and thermal analysis, which shows a clear trend related to increasing SMP content with 30 wt% having a distinct behaviour, especially rheologically. This particle threshold behaviour may be responsible for the observed nanomechanical, and in addition bulk mechanical, behaviour.
The inclusion of SMPs as a reinforcing agent within the PCL matrix provides a significant increase to both the bulk and nanomechanical properties of the scaffold compared to PCL alone. Further investigation is required to elucidate the influence of SMPs on the PCL matrix and subsequent bulk and nanomechanical properties.
Surface roughness and wettability
The surface topography and wettability of the 3D printed PCL/SMP scaffolds were determined through laser microscopy and water contact angle measurement, respectively (Fig. 7). Biomaterial surface topography and wettability are important as they influence protein adsorption and cell-material interactions [19,[84][85][86]. Characterising and engineering these surface properties makes it possible to promote specific cell behaviours.
Laser microscopy demonstrated that all 3D printed scaffolds had good print quality, with well-defined fibres and uniform pore distribution, complementing the SEM and μCT imaging. The surface of the pure PCL fibres shows hexagonal features and a considerable number of clearly observable hollow voids or pits (Fig. 7a). These features can be attributed to crystalline microstructure formation, the rheological behaviour during extrusion, and the subsequent cooling [65,87,88]. The hollow, pitted surface of the scaffold fibres disappeared with the addition of SMPs as the crystallisation kinetics and rheology were altered, as previously demonstrated in the DSC and rheological analysis. The fibre surface morphology is sensitive to changes in rheology, as demonstrated by Huang et al. in a 3D printed PCL scaffold containing multi-walled carbon nanotubes, which resulted in a 'sharkskin' effect on the fibre surface [17]. The inclusion of SMPs did not alter the overall fibre morphology (the fibres remained circular and non-wavy) but made the surface significantly rougher due to the presence of microparticles and altered the colouration of the scaffolds.
The addition of SMPs into the PCL scaffolds resulted in a colour change from white to a yellowish tinge throughout, which deepened with higher silk loading (Figs. 7a and S4, supplementary information). The cause of the yellowing is unclear: whilst the microparticles themselves have an off-white yellowish colour, the scaffolds printed with high SMP content were considerably more yellow than the original particles [52]. There could be a degree of thermal degradation of the silk due to the high temperature used during printing; however, thermal analysis shows that the printing temperature of 140 °C is considerably lower than the degradation temperature of the silk particles, approximately 300 °C [33]. Another possible cause is thermal oxidation of some of the side chains in fibroin. Four major amino acid residues in the silk fibroin chain (glycine, alanine, serine and tyrosine) can be involved in thermal oxidation reactions, with the reaction products producing pigmented phenolic groups that contribute to the colour development. Since high-temperature printing could thus destabilise the SMPs through oxidation reactions, further investigation of SMP stability is required, and printing in an inert environment or at lower temperatures might be considered in the future to avoid potential degradation effects.
The PCL/SMP scaffold surface roughness, Sa, increased with larger SMP loading (Fig. 7c). The Sa of the pure PCL scaffolds was 1.03 ± 0.01 μm; the SMP-containing scaffolds had a significantly higher Sa of up to 1.22 ± 0.04 μm at 30 wt% SMP. There was no significant difference between the SMP-containing scaffolds: the size and distribution of the SMPs appeared to be homogeneous within the PCL matrix, resulting in a similar mean surface roughness regardless of SMP loading.
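The areal roughness Sa quoted here is the arithmetic mean of the absolute height deviations from the mean plane of the measured surface. A small self-contained illustration on a synthetic height map (the values are invented, not measured data):

```python
# S_a = (1/N) * sum(|z_i - z_mean|) over all points of the height map.
def mean_areal_roughness(height_map):
    """height_map: 2D list of surface heights; returns Sa in the same units."""
    z = [v for row in height_map for v in row]
    z_mean = sum(z) / len(z)
    return sum(abs(v - z_mean) for v in z) / len(z)

# Synthetic 3x3 height map in micrometres (illustrative only):
heights = [[1.0, 1.2, 0.9],
           [1.1, 1.0, 1.3],
           [0.8, 1.0, 1.1]]
print(f"Sa = {mean_areal_roughness(heights):.3f} um")
```

Laser-microscopy software reports the same statistic evaluated over the full scanned field.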
The wettability of a biomaterial surface depends on surface chemistry and roughness; hydrophilic surfaces have a water contact angle below 90°, whilst above this value a material is considered hydrophobic. The wettability of the PCL/SMP scaffolds decreased (hydrophobicity increased) with increasing SMP content, with the contact angle significantly increasing from ~80° to ~100° for PCL and PCL/SMP 30 wt%, respectively (Fig. 7b and d). PCL is an inherently hydrophobic polymer with a contact angle typically above 90°; the lower value in this study can be attributed to the large pores in the printed structure, which allow the droplet to seep into the structure [9]. The increase in hydrophobicity with increasing SMP content can be attributed to two factors: the increase in surface roughness and the highly crystalline content of the SMPs. As observed by laser microscopy, the surface roughness increases with increasing SMP content, and because PCL is inherently hydrophobic, greater roughness increases hydrophobicity. The interaction between the liquid (water) phase and the solid phase (PCL matrix) is energetically unfavourable, so the increase in surface roughness, and thus surface area, has a negative effect on wetting [89][90][91]. Furthermore, SMPs produced through physical milling retain relatively high contents of the crystalline regions of fibroin, with β-sheet hydrophobic domains populating the structure, which makes the SMPs themselves hydrophobic [92,93]. By contrast, the incorporation of SMPs has been shown to increase the hydrophilicity of a chitosan hydrogel [33]: as chitosan is hydrophilic, the increase in surface roughness from the SMPs promotes energetically favourable interactions and thus an increase in wettability.
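The roughness amplification of wetting behaviour described above is commonly captured by the Wenzel relation, cos θ* = r·cos θ, where r ≥ 1 is the ratio of actual to projected surface area. A hedged illustration (the roughness ratio used here is an assumed value, not one measured for these scaffolds):

```python
import math

def wenzel_apparent_angle(theta_deg, roughness_ratio):
    """Apparent contact angle on a rough surface (Wenzel model).
    theta_deg: intrinsic (smooth-surface) contact angle in degrees.
    roughness_ratio: actual/projected surface area, r >= 1 (assumed here)."""
    c = roughness_ratio * math.cos(math.radians(theta_deg))
    c = max(-1.0, min(1.0, c))  # clamp for physically extreme r values
    return math.degrees(math.acos(c))

# Roughening pushes an intrinsically hydrophobic surface (theta > 90 deg)
# further from wetting, and an intrinsically hydrophilic one closer to it:
print(wenzel_apparent_angle(95.0, 1.5))  # above 95 deg
print(wenzel_apparent_angle(85.0, 1.5))  # below 85 deg
```

This is why the same roughness increase makes hydrophobic PCL less wettable but a hydrophilic chitosan matrix more wettable.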
Protein adsorption
The total amount of protein adsorbed onto the composite surface was determined through immersion of the scaffolds in a 10% FBS solution for 12 h (Fig. 8). Control samples incubated in PBS showed a large amount of protein (fibroin) present on the SMP-containing samples, with the protein quantity following the trend of increasing SMP loading, and no protein observed on PCL-only scaffolds (Fig. S5, supplementary information). This provides a way to quantify the amount of fibroin protein at the surface; the experimental samples were normalised to these controls. The results show a decreasing trend in protein adsorption with higher SMP loading in the scaffolds. The 30 wt% SMP scaffolds have significantly lower protein adsorption than all other scaffold types, whilst the 10 wt% and 20 wt% SMP scaffolds have similar quantities of protein adsorbed.
Typically, hydrophobic materials can adsorb more protein than hydrophilic surfaces [94,95]. The binding of protein on hydrophilic surfaces requires the displacement of water, which creates an energy barrier that must be overcome. Although hydrophobic surfaces can bind protein through its internal hydrophobic domains, this can lead to unfolding and denaturing of the protein. Subsequently, understanding the protein adsorption kinetics, conformation at the surface, and adhesion strength are important factors in determining cell behaviour. Bovine serum albumin is a major component of FBS and has been shown to have increased adhesion on surfaces with contact angles higher than ~60-65° [96]. Nevertheless, the SMP scaffolds reduce the amount of protein adsorbed even though they are more hydrophobic than the PCL-only scaffolds, and even though the increase in roughness should enhance the available surface area for protein binding.
The presence of silk fibroin, predominately as crystalline β-sheets in the SMPs, may minimise protein adsorption onto the PCL surface. Protein adsorption onto differently processed silk cast films showed that higher crystallinity and hydrophobicity reduced protein adsorption [97]. Thus, the highly crystalline and hydrophobic SMPs may reduce protein affinity for the surface and decrease the available surface area for protein adsorption. The kinetics and specific protein adsorption need to be evaluated to understand the mechanism of protein binding on SMPs and the composite material scaffold, thus, subsequently how protein adsorption influences material-cell interactions.
Enzymatic degradation
The main degradation route for PCL is hydrolysis of its ester bonds, which is highly dependent on material crystallinity and water uptake and is therefore slow under typical in vitro and in vivo conditions [9,14,98]. Accelerated degradation studies using chemical or enzymatic methods are necessary to ascertain the role that material fillers such as SMPs play in the degradation profile of a PCL-based composite, and can also be used to determine any cytotoxicity associated with degradation by-products [9,55,[99][100][101]. The PCL/SMP scaffold degradation profiles were evaluated using a model accelerated enzymatic method employing a lipase from Thermomyces lanuginosus, and the changes in scaffold mass, morphology, and surface corrosion were investigated (Figs. 9 and 10).
Control groups immersed in PBS buffer showed no apparent change in mass, indicating that hydrolytic degradation of the scaffolds did not occur. This is expected, as PCL shows little hydrolytic degradation in PBS or water for up to a year [14,15]. In the enzymatic degradation solution, pure PCL scaffolds showed the highest stability throughout the whole period. The presence of SMPs in the composite scaffolds accelerated the degradation process; all PCL/SMP scaffolds lost their scaffolding structure within 3 days. By day 3 all PCL/SMP scaffolds had fully collapsed and the degradation solution appeared turbid due to the release of the SMPs, whilst 20% of the original PCL weight remained (Fig. 9b and c). The differences in degradation rate between PCL/SMP scaffolds with different silk loadings were not statistically significant.
Structural changes in scaffold morphology were observed during the degradation process (Fig. 9a). All scaffolds changed in appearance within 6 h of enzymatic degradation, becoming less shiny, possibly reflecting the loss of PCL at the surface and the exposure of more SMPs (observed as white particles) on the scaffold surfaces. After 48 h of degradation, the scaffolds exhibited uneven fibre edges due to enzymatic corrosion, and all fibres appeared thinner than in the original structures. The PCL/SMP scaffolds showed the greatest fibre diameter reduction (with only 1/2 to 1/3 remaining) compared to pure PCL scaffolds. Correspondingly, inter-fibre pores became larger and delamination between adjacent layers occurred, with some fibres becoming detached from the scaffolds.
To observe the degradation process in more detail, single-fibre surface morphology and height changes were recorded for each scaffold (Fig. 10). The fibre surface progressively became rougher and the fibre diameter decreased. Enzymatic degradation creates holes in the scaffold surface; as lipase only degrades PCL, once the holes in the PCL matrix become larger than the embedded SMPs, the particles detach and are released into solution. The release of SMPs increases the surface area of PCL matrix available for the enzyme to attack, which increases the degradation rate. Additionally, as the printed fibres were optimised for the same diameter, the high-SMP-loading scaffolds contained less PCL, further increasing scaffold degradability.
Furthermore, the reduced crystallinity of PCL after SMP inclusion, as previously described, could make the PCL more prone to degradation by lipase. Several studies on the degradation behaviour of PCL suggest that the degradation process is selective, with the amorphous regions being attacked and degraded prior to the crystalline regions. The proposed explanation is the sufficient spatial degree of freedom in the amorphous regions that allows better penetration of the enzyme into the polymer chains [102,103].
Polymer scaffold degradation is a chemical/biological cleavage process of polymer chains into oligomers, monomers or other low molecular weight degradation products that are eventually metabolised and removed [14,104]. The degradation rate of pure PCL is slow (up to several years in vivo) compared to other biopolymers, which can cause issues depending on the specific tissue engineering application [12,105,106]. As a key goal is matching the degradation rate with the rate of new tissue formation, the faster degradation brought by the addition of SMPs could be advantageous compared to pure PCL scaffolds. Nonetheless, how these results translate to in vivo behaviour is unclear, and further animal testing is required to evaluate whether the introduction of SMPs improves in vivo degradation rates and whether there are any adverse effects from the released SMPs.
Biological assessment
The PCL/SMP scaffolds were initially biologically assessed by the seeding and culture of hADSCs for up to 21 days in vitro, and the cell metabolic activity, viability, and morphology were evaluated (Figs. 11 and 12).
Cell metabolic activity increased in all scaffolds up to day 7, as measured by an increase in fluorescence intensity through the Alamar Blue assay (Fig. 11a). At day 7, the 20 wt% SMP scaffolds had the highest fluorescence intensity whilst the 30 wt% had the lowest. A slight decrease and stabilisation in fluorescence intensity was observed by days 14 and 21 in the PCL, 10 wt%, and 20 wt% SMP scaffolds, which can be attributed to a confluent state being reached and a reduction in the metabolic activity of the cells. The 10 wt% SMP scaffolds had the highest fluorescence intensity by day 21. However, a significant reduction in fluorescence intensity was observed for the 30 wt% SMP scaffolds between days 7 and 14, after which it remained stable until day 21. This decrease can potentially be attributed to material cytotoxicity at high concentrations of SMPs and a reduction in cell numbers.
Cell viability was observed at day 21 using a live/dead assay, which indicates that the PCL, 10 wt%, and 20 wt% SMP scaffolds support viable cells (Fig. 11b-d). The observation of dead cells was complicated due to the SMPs binding EthD-1 stain, as previously described (Fig. S2), thus the determination of the number of unviable cells was not possible. The 30 wt% SMP scaffolds showed only a few observable viable cells in the entire scaffold (Fig. 11e).
Cell morphology, spreading and migration throughout the scaffolds were observed using SEM at days 7, 14, and 21 (Fig. 12). The PCL, 10 wt%, and 20 wt% SMP scaffolds all supported considerable cell proliferation and migration throughout the scaffolds by day 21, as demonstrated by the presence of cells throughout the scaffold cross-section rather than concentrated in a specific location. Only a few cells were observable at all time points on the 30 wt% SMP scaffolds. The PCL and 10 wt% SMP scaffolds showed dense cell and ECM formation by day 14; at day 21, especially in the 10 wt% SMP scaffolds, the pores were beginning to become blocked by cell proliferation and ECM deposition. Most noticeable is the different cell morphology observed on the 20 wt% SMP scaffolds: the hADSCs were tightly bound to the fibre surface, forming dense sheets across it, whilst the cells on the PCL and 10 wt% SMP scaffolds showed considerably more spreading, bridging between fibres, and ECM production. A similar tightly bound cell morphology was observed on the 30 wt% SMP scaffolds at day 21. The differences in cell morphology, spreading, and migration may be due to changes in the surface properties of the scaffold, as the SMPs increase the surface stiffness, roughness, and hydrophobicity, and reduce the amount of protein adsorbed. This changes the cell-material interactions via the adsorbed proteins and subsequently the presentation of integrin binding clusters, which influences cell behaviour [107]. The distribution of the SMPs at the PCL matrix surface will alter the distribution and conformation of the adsorbed proteins; furthermore, the stiffer surface of the SMPs will alter mechanotransduction pathways within the cells, which may be conducive to promoting osteogenic differentiation [77][78][79].
A significant feature observed was the formation of small particles (< 1 μm) and a crust-like layer on the 20 wt% and 30 wt% SMP scaffolds by at least day 14 (Figs. 12 and 13). EDX analysis of the particles shows the presence of calcium which is not observed in the surrounding area where no particles or crust are present. Silk fibroin has been demonstrated to regulate the nucleation and mineralisation of calcium phosphates such as hydroxyapatite [41][42][43][108][109][110][111][112][113][114]. The presence of SMPs may act as a nucleating point and allow precipitation and calcium mineralisation from the cell culture medium. The amorphous regions of silk between the crystalline β-sheets have been shown to be responsible for nucleating hydroxyapatite in a similar process to collagen type I in bone [112]. Although the SMPs are highly rich in crystalline β-sheets, amorphous regions are still present. Additionally, the change in cell morphology observed between 10 wt% and 20 wt% SMP scaffolds could be related to the hADSCs undergoing osteogenic differentiation due to the high presence of a calcium-based mineral. Furthermore, silk fibroin is composed primarily of glycine and alanine and these are the principal degradation by-products which can be utilised for neo-protein synthesis, potentially supporting the increase in cell proliferation observed in this study [25].
Further investigation is required to elucidate the mechanism of calcium mineral formation, the specific form of calcium, and whether the process is spontaneous or cell mediated. At higher SMP loading the mineralisation process becomes more obvious. Furthermore, the adverse cellular response observed at high SMP loading may be related to the calcium mineralisation, which may cause physical damage to the cells or increase the intracellular or local intercellular calcium ion concentration; further study is required to understand this phenomenon. Additionally, the degradation of the SMPs into smaller fragments (silk nanoparticles) during the cell culture period may induce a cytotoxic effect at high concentrations, as Naserzadeh et al. demonstrated during culture of fibroblasts and human umbilical vein endothelial cells incubated with silk nanoparticles [115]. This observation also agrees with a previous cytotoxicity study in which silk nanoparticles at low concentrations showed no cytotoxicity, whereas a decrease in cell viability was observed at higher concentrations [116]. Thus, further investigation is required to determine the optimum silk loading in the composites and the influence of SMP degradation on biocompatibility. Furthermore, the influence of SMPs and calcium nucleation requires in vivo investigation to determine how the material responds in a complex biological environment and the subsequent cell and tissue response.
The results indicate that the PCL, 10 wt%, and 20 wt% SMP scaffolds support metabolically active and viable cells which can proliferate up to day 21 whilst the 30 wt% SMP scaffold exhibits a cytotoxic behaviour from at least day 7. The 10 wt% and 20 wt% SMP scaffolds show equal or better biocompatibility than PCL alone. Although a cytotoxic behaviour is observed at the highest particle loading, the scaffolds at 10 wt% exhibited significant advantages in terms of mechanical reinforcement, increased degradation rate, and cell proliferation thus higher SMP loadings may not be necessary to achieve the desired impact. The 3D printed PCL/SMP scaffolds show potential suitability for load bearing applications such as bone or cartilage tissue engineering. However, further understanding of the influence of SMPs on the osteogenic or chondrogenic differentiation of hADSCs is required.
Conclusion
This study demonstrates the development and characterisation of a new composite material, PCL blended with SMPs. The composite material was used to fabricate scaffolds for bone tissue engineering applications using screw-assisted extrusion-based 3D printing. The rheological behaviour of the PCL/SMP composite showed an increase in material elasticity and shear-thinning behaviour. Scaffolds were successfully fabricated up to 30 wt% SMP loading and showed uniform morphology. The incorporation of SMPs reinforced the PCL matrix, improving the mechanical properties of the scaffolds to within the lower region of trabecular bone mechanical properties. The scaffolds showed increasing roughness and hydrophobicity with higher SMP loading due to the highly crystalline β-sheets in the SMPs. The presence of SMPs accelerated the enzymatic degradation of the scaffolds. A clear difference in material properties is observed for all SMP-containing samples compared to PCL alone, typically correlating with increasing SMP content. However, the mechanical and protein adsorption properties follow a non-proportional trend for 10 wt% and 20 wt%, which have similar values; further investigation is required to understand this behaviour. Preliminary biocompatibility studies showed that hADSCs were viable and proliferated for up to 21 days on scaffolds with up to 20 wt% SMP loading, whereas 30 wt% SMP scaffolds exhibited cytotoxicity. Considerable calcium mineral deposition was observed on the SMP scaffolds, as silk facilitates the nucleation and mineralisation of calcium; further investigation is required to determine how this influences the osteogenic differentiation of hADSCs. The presence of SMPs in the scaffolds significantly improved the degradation rate, mechanical properties, and cell proliferation, indicating that key disadvantages of PCL can be addressed by incorporating silk particles.
The results demonstrate that the 3D printed PCL scaffolds reinforced with SMPs have improved degradation and mechanics with potential osteogenic properties. This study offers a promising alternative approach to overcome limitations associated with PCL-based scaffolds for load bearing tissue regeneration.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2021,
"sha1": "00c566fc12e0392bfae85d0c811cbaa517bd6759",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.msec.2020.111433",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3866f373cf70dbfacb201d1c6431ce5ce0450277",
"s2fieldsofstudy": [
"Materials Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Antioxidant activity of rice plants sprayed with herbicides
ABSTRACT

Understanding the physiological defense behavior of plants subjected to herbicide application may help to identify products with higher or lower capacity to cause oxidative stress in crops. This study aimed at evaluating the effect of herbicides on the antioxidant activity of rice plants. The experimental design was completely randomized, with six replications. Treatments consisted of the herbicides bentazon (photosystem II inhibitor; 960 g ha⁻¹), penoxsulam (acetolactate synthase inhibitor; 60 g ha⁻¹), cyhalofop-butyl (acetyl coenzyme-A carboxylase inhibitor; 315 g ha⁻¹) and a control. After the herbicide application, samples of rice shoots were collected at 12, 24, 48 and 96 hours after application (HAA). The components evaluated were hydrogen peroxide (H2O2), lipid peroxidation and the activity of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT). Bentazon (up to 24 HAA) and penoxsulam (48 and 96 HAA) reduced the CAT activity. Moreover, these herbicides increased the levels of H2O2, lipid peroxidation and SOD activity, indicating a condition of oxidative stress in rice plants. The cyhalofop-butyl herbicide did not alter the antioxidant activity, showing that it causes less stress to the crop.

KEY-WORDS: Oryza sativa; oxidative stress; phytotoxicity.

INTRODUCTION

The selectivity of herbicides depends on several factors, such as ingredient features, plants, application method and environmental conditions, as well as the absorption, metabolism or translocation of herbicides (Hess 2000). Even if a particular active ingredient is classified as selective to plants, the energy expenditure for detoxification of xenobiotics may cause phytotoxicity problems, damaging the growth and development of crops (Song et al. 2007).

Phytotoxicity caused by the use of herbicide chemical control happens mainly due to the increase of reactive oxygen species (ROS). These molecules are highly reactive towards the lipids of cell membranes, causing lipid peroxidation and, consequently, the formation of radicals, irreversibly damaging cell membranes (Fleck & Vidal 2001). The direct consequence of cell membrane damage by lipid peroxidation is leakage of cellular contents, disrupting several physiological processes of plants, such as photosynthesis and defense mechanisms (Kruse et al. 2006).
Bentazon, penoxsulam and cyhalofop-butyl are herbicides intensively used in the rice crop. Bentazon inhibits photosynthesis by binding to the quinone-binding (QB) niche on the D1 protein of the photosystem II complex, localized in the chloroplast thylakoid membranes. Herbicide binding at this protein location blocks the electron transport from QA to QB and stops CO2 fixation and the production of ATP and NADPH2, which are essential for plant growth (Han & Wang 2002).
Penoxsulam acts as an acetolactate synthase (ALS) inhibitor, which interferes with the synthesis of the amino acids valine, leucine and isoleucine (Senseman 2007). These amino acids are necessary for protein synthesis and, therefore, are essential for cell metabolism. Thus, the herbicide will disrupt cell division, consequently stopping plant growth (Oliveira Júnior et al. 2011).
Cyhalofop-butyl acts by inhibiting acetyl-coenzyme A carboxylase, reducing the ability of plants to produce malonyl-coenzyme A, which is needed for the synthesis of fatty acids (Ruiz-Santaella et al. 2006). Fatty acids are essential constituents of the plasma membranes of cells and organelles. Deficiency of fatty acids causes disorders of cell permeability and breaks in the structure of cell membranes (Oliveira Júnior et al. 2011).
Although recommended for rice, there are field reports of phytotoxicity in the crop after the application of bentazon, penoxsulam and cyhalofop-butyl, which drive the investigation of the capacity of these products to cause oxidative stress in the crop. Thus, this study aimed at evaluating the effect of these herbicides on the antioxidant activity of rice plants.
MATERIAL AND METHODS
The experiment was conducted in a greenhouse at the Universidade Federal de Pelotas, in Capão do Leão, Rio Grande do Sul State, Brazil, during the 2011/2012 cropping season.
Plastic pots (8 L) filled with a Yellow-Red Alfisol were used. The experimental design was completely randomized, with six replications. The herbicides tested were: bentazon 480 g L⁻¹ (960 g ha⁻¹), penoxsulam 240 g L⁻¹ (60 g ha⁻¹), cyhalofop-butyl 180 g L⁻¹ (315 g ha⁻¹) and a control. The herbicide solutions were applied with specific adjuvants: mineral oil for bentazon (1,000 mL ha⁻¹) and cyhalofop-butyl (3,000 mL ha⁻¹) and vegetable oil for penoxsulam (1,000 mL ha⁻¹). The herbicide rates were established considering the highest recommended dose for rice (Agrofit 2014). Each experimental unit consisted of 16 plants (IRGA 424 cultivar), in order to obtain enough plant biomass for the laboratory analyses. Spraying was performed at 15 days after crop emergence (DAE), using a pressurized CO2 backpack sprayer equipped with 110.02 nozzle tips, at a spray volume of 150 L ha⁻¹. Flood irrigation began one day after the herbicide application.
For the rice shoot analyses, samples were taken at 12, 24, 48 and 96 hours after application (HAA) and stored at -80 ºC until the evaluation of hydrogen peroxide (H2O2); lipid peroxidation, in terms of thiobarbituric acid reactive species (TBARS); and the activity of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT). The collection times were based on preliminary tests, considering the mode of action of the herbicides used in the study.
Cell damage in tissues was determined by the hydrogen peroxide (H2O2) content (Sergier et al. 1997) and thiobarbituric acid reactive species (TBARS), via the accumulation of malondialdehyde (MDA) (Heath & Packer 1968). To perform these analyses, 0.2 g of leaf tissue were ground in liquid nitrogen, homogenized in 2 mL of trichloroacetic acid 0.1 % (w/v) and centrifuged at 14,000 rpm for 20 min. To quantify H2O2, 0.2 mL of the supernatant were added to 0.8 mL of 10 mM phosphate buffer (pH 7.0) and 1 mL of 1 M potassium iodide. The solution was kept for 10 min at room temperature and the absorbance read at 390 nm. The H2O2 concentration was determined using a standard curve and expressed in millimol per gram of fresh weight (mM g⁻¹ FW).
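Reading a concentration off a standard curve amounts to a linear calibration of absorbance against known concentrations, then inverting the fitted line for each sample. A minimal sketch (the calibration points below are invented for illustration, not the study's data):

```python
# Least-squares line A = m*C + b fitted to standards, then inverted to
# convert a sample absorbance into concentration.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

standards_mM = [0.0, 0.5, 1.0, 2.0]     # known H2O2 concentrations (synthetic)
absorb_390   = [0.02, 0.27, 0.52, 1.02] # corresponding A390 readings (synthetic)
m, b = fit_line(standards_mM, absorb_390)

sample_A = 0.40
conc = (sample_A - b) / m
print(f"H2O2 ~ {conc:.2f} mM")
```

The final value would then be scaled by the extract dilution and the fresh weight of the tissue sample.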
To determine TBARS, 0.5 mL aliquots of the supernatant described above were added to 1.5 mL of thiobarbituric acid (TBA) 0.5 % (w/v) and trichloroacetic acid 10 % (w/v) and incubated at 90 ºC for 20 min. The reaction was stopped on ice for 10 min. The absorbance was read at 532 nm, discounting the non-specific absorbance at 600 nm.
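MDA content is typically derived from the background-corrected absorbance (A532 − A600) via Beer-Lambert. The extinction coefficient of 155 mM⁻¹ cm⁻¹ used below is the value commonly cited for the MDA-TBA adduct; it is an assumption here, as the paper does not state it:

```python
def mda_concentration(a532, a600, epsilon=155.0, path_cm=1.0):
    """MDA concentration (mM) from TBARS absorbance, Beer-Lambert A = eps*c*l.
    epsilon: MDA-TBA adduct extinction coefficient in mM^-1 cm^-1
    (commonly cited value; assumed, not taken from the paper)."""
    return (a532 - a600) / (epsilon * path_cm)

# Illustrative readings (not measured data):
print(f"{mda_concentration(0.48, 0.05) * 1000:.2f} uM MDA")
```

Subtracting A600 removes non-specific turbidity before the coefficient is applied.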
To determine the activity of the antioxidant enzymes SOD and CAT, an extraction was first carried out: 0.2 g of leaf samples were ground in a porcelain mortar in the presence of liquid nitrogen and 0.02 g of polyvinylpyrrolidone (PVP). Then, the following solutions were added to the ground tissue: 900 µL of 200 mM phosphate buffer (pH 7.8), 18 µL of 10 mM EDTA, 180 µL of 200 mM ascorbic acid and 702 µL of ultrapure water. The mixture was centrifuged at 14,000 rpm, at 4 ºC, for 20 min. From this extract, proteins were quantified using the Bradford (1976) method, by adding 60 µL of extract to 2 mL of Bradford solution and reading the absorbance at a 595 nm wavelength. The standard curve was prepared with globulin and the results were expressed in milligrams of protein (mg protein) per FW.
The SOD activity was determined according to a method adapted from Peixoto (1999). In this method, the inhibition of the reduction of nitro blue tetrazolium (NBT) by the enzyme extract is determined, i.e., the prevention of chromophore formation. One unit of enzyme activity (AU) of SOD was regarded as the amount of enzyme required to reach 50 % inhibition of NBT reduction by the SOD contained in the enzyme extract. For the reaction, the following components were added to a test tube: 1 mL of 100 mM potassium phosphate buffer (pH 7.8), 400 µL of 70 mM methionine, 20 µL of 10 mM EDTA, 390 µL of ultrapure water, 150 µL of 1 mM NBT, 20 µL of 0.2 mM riboflavin and 20 µL of the extract. The tubes were then exposed in a chamber to a bright 15 W fluorescent lamp for 10 min, and the absorbance at 560 nm was recorded. Tubes without extract, exposed and not exposed to light, were used as blanks. The activity was determined by calculating the amount of extract that inhibited 50 % of the NBT reaction and expressed in AU mg⁻¹ protein min⁻¹.
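Because one SOD activity unit is defined as the amount of extract giving 50 % inhibition of NBT reduction, the 50 % point is usually located from an inhibition-versus-extract-volume series, e.g. by linear interpolation. A sketch on an invented series (not the study's data):

```python
def volume_at_half_inhibition(volumes_uL, inhibition_pct):
    """Linearly interpolate the extract volume giving 50 % inhibition of
    NBT reduction; that volume of extract contains one SOD unit (AU)."""
    pts = sorted(zip(volumes_uL, inhibition_pct))
    for (v0, i0), (v1, i1) in zip(pts, pts[1:]):
        if i0 <= 50.0 <= i1:
            return v0 + (50.0 - i0) * (v1 - v0) / (i1 - i0)
    raise ValueError("50 % inhibition not bracketed by the series")

# Synthetic inhibition series (illustrative only):
vol = [5, 10, 20, 40]            # uL of extract per tube
inh = [18.0, 34.0, 58.0, 81.0]   # % inhibition of NBT reduction
v50 = volume_at_half_inhibition(vol, inh)
print(f"~{v50:.1f} uL of extract = 1 AU of SOD")
```

The result is then normalised by the Bradford protein content and the reaction time to give AU mg⁻¹ protein min⁻¹.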
The CAT activity was determined by recording the consumption of H2O2 (extinction coefficient of 39.4 mM⁻¹ cm⁻¹). The reaction mixture contained 1 mL of 200 mM potassium phosphate buffer (pH 7.0), 850 µL of ultrapure water, 100 µL of 250 mM hydrogen peroxide and 50 µL of the extract. The absorbance at a wavelength of 240 nm was recorded in a spectrophotometer (Ultrospec 6300 Pro UV/Visible, Amersham Bioscience) for 90 seconds, with readings at intervals of 7 seconds (Sudhakar et al. 2001). For both enzymes (SOD and CAT), it was considered that the decrease of one unit of absorbance was equivalent to one active unit (AU) of enzyme. The activity of the total extract was determined from the amount of extract that reduced the absorbance reading by one AU, being expressed in AU mg⁻¹ protein min⁻¹.
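The CAT measurement reduces to converting the rate of A240 decrease into an H2O2 consumption rate via the extinction coefficient given in the text (39.4 mM⁻¹ cm⁻¹). A sketch of that conversion on an invented absorbance trace:

```python
def cat_rate(absorbances, dt_s, epsilon=39.4, path_cm=1.0):
    """H2O2 consumption rate (mM per min) from A240 readings taken every
    dt_s seconds; epsilon in mM^-1 cm^-1 (value stated in the text)."""
    slope_per_s = (absorbances[-1] - absorbances[0]) / \
                  (dt_s * (len(absorbances) - 1))
    return -slope_per_s / (epsilon * path_cm) * 60.0

# A240 read every 7 s over 90 s (synthetic, steadily decreasing trace):
a240 = [0.80 - 0.005 * i for i in range(13)]
print(f"H2O2 consumed: {cat_rate(a240, 7):.4f} mM min^-1")
```

Dividing this rate by the Bradford protein content of the extract aliquot gives the specific activity per mg of protein.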
Data were analyzed for normality (Shapiro-Wilk test) and subsequently subjected to analysis of variance (p < 0.05). In case of significance, treatment means were compared within each evaluation time by the Tukey test (p < 0.05).
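The analysis pipeline can be sketched with SciPy on invented replicate data (group means loosely echoing the reported SOD values); Tukey's HSD pairwise step is noted in a comment rather than run, to keep the sketch dependency-light.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical SOD activities (AU mg-1 protein min-1), 4 replicates each
control    = rng.normal(5.0,  0.4, 4)
bentazon   = rng.normal(11.9, 0.4, 4)
penoxsulam = rng.normal(11.3, 0.4, 4)

# 1) Shapiro-Wilk normality check on each treatment group
shapiro_ps = [stats.shapiro(g).pvalue for g in (control, bentazon, penoxsulam)]

# 2) One-way ANOVA across treatments; if significant (p < 0.05),
#    follow with Tukey's HSD for pairwise mean comparisons
f_stat, p_value = stats.f_oneway(control, bentazon, penoxsulam)
```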
RESULTS AND DISCUSSION
For H2O2, the evaluations performed at 24, 48 and 96 hours after application (HAA) showed no differences between the herbicide treatments and the control (Figure 1). At 12 HAA, however, higher H2O2 values were observed after the application of bentazon, as compared to the control and the other herbicides (Figure 1), indicating that this herbicide initially increased the production of reactive oxygen species.
Oxidative stress caused by an increased concentration of reactive oxygen species can activate programmed cell death, due to membrane lipid peroxidation, protein oxidation, enzyme inhibition and damage to DNA and RNA (Ma et al. 2013). However, in the later evaluations (24, 48 and 96 HAA), bentazon did not differ from the control in terms of H2O2 levels (Figure 1).
Studies have shown that the use of triazines and ureas can induce oxidative stress (Bowler et al. 1992, Ivanov et al. 2005). Although bentazon does not belong to these herbicide classes, it also acts by blocking electron transport in photosynthesis, which generates reactive oxygen species such as H2O2.
The increase in H2O2 content observed for bentazon at the first evaluation time possibly contributed to the higher values of thiobarbituric acid reactive species (TBARS) detected for the same herbicide at 12 and 24 HAA, compared to the control (Figure 2). The same occurred with penoxsulam at 96 HAA. Reactive oxygen species can react with lipids, proteins and pigments, causing lipid peroxidation and membrane damage (Gill & Tuteja 2010). Malondialdehyde (MDA) is a product of lipid peroxidation; therefore, a higher content of this compound indicates oxidative stress (Han & Wang 2002).
Han & Wang (2002) likewise reported an increase in the malondialdehyde level of rice plants after applying 98.8 mM of bentazon, suggesting that lipid peroxidation arises from the generation of reactive oxygen species following the interruption of electron transport in photosystem II. However, the same authors demonstrated that the physiological response depends on genotype, as the cultivar Tainung 67 kept MDA levels low after bentazon application.
Cyhalofop-butyl did not differ from the control in terms of H2O2 and TBARS at any of the evaluated time points (Figures 1 and 2). Therefore, this ACCase inhibitor did not seem to generate oxidative stress in rice plants. Studies have demonstrated that fluazifop-P-butyl, a herbicide belonging to the same group as cyhalofop-butyl, can cause lipid peroxidation in grasses (Luo et al. 2004). However, rice characteristics such as lack of esterase functionality, reduced absorption through the cuticle and increased herbicide metabolism may help to explain the selectivity of rice to cyhalofop-butyl (Ruiz-Santaella et al. 2006).
For enzyme activities, the highest SOD activity was detected at 24 HAA for bentazon (11.9 AU mg-1 protein min-1) and at 48 and 96 HAA for penoxsulam (11.3 and 12.4 AU mg-1 protein min-1, respectively), when compared to the control (Figure 3).
Figure 3. SOD activity of rice plants in response to the use of bentazon (960 g ha-1), penoxsulam (60 g ha-1), cyhalofop-butyl (315 g ha-1) and control, at 12, 24, 48 and 96 hours after application (Capão do Leão, Rio Grande do Sul State, Brazil, 2011/2012). Means followed by the same letter on the bars do not differ within each evaluation time by the Tukey test (p < 0.05).
High SOD activity in plants has been correlated with tolerance to oxidative stress (Iannelli et al. 1999). This metalloenzyme protects cells from superoxide radicals by catalyzing the dismutation of O2•- into O2 and H2O2. There are reports that plants treated with paraquat (Ekmekci & Terzioglu 2005), fluroxypyr (Wu et al. 2010) and atrazine (Zhang et al. 2014) also showed an increase in SOD activity, which may be linked to increased superoxide radical formation, as well as to increases in gene and protein expression (Verma & Dubey 2003) via superoxide-mediated signal transduction (Fatima & Ahamad 2005).
The effects of bentazon on the antioxidant activity seem to be more evident in the first few hours after application. One study monitored SOD gene expression at 1, 2, 4 and 8 HAA of bentazon in soybean plants and found the highest SOD activity at 4 HAA (Zhu et al. 2009).
Corroborating our results, a lower SOD activity was also observed in maize leaves after the application of clethodim (Radwan 2012). Clethodim acts like cyhalofop-butyl, i.e., by inhibiting acetyl coenzyme-A carboxylase (ACCase). On the other hand, haloxyfop-methyl, also an ACCase inhibitor, increased the SOD activity in wheat plants at 48 HAA (Janicka et al. 2008). The explanation for this variability remains unknown, but it can be hypothesized that the response to herbicides depends on several factors, such as species, active ingredient, concentration, environmental conditions, evaluation period, tissue and age of the plant, as well as enzyme isoforms.
For CAT, the treatment with bentazon showed a lower activity (0.53 AU mg-1 protein min-1) at 24 HAA, when compared to the control (Figure 4). For penoxsulam, a lower CAT activity was also observed, compared to the control, at 48 and 96 HAA (0.54 and 0.52 AU mg-1 protein min-1, respectively). Possibly, the reduction in CAT activity is due to inhibition of enzyme synthesis or a change in the assembly of enzyme subunits under the stress conditions caused by bentazon and penoxsulam (Abedi & Pakniyat 2010). These results may be related to the greater presence of H2O2 detected after the application of these herbicides (Figure 1).
CAT has a comparatively low affinity for H2O2 and, under environmental stresses such as saline conditions and high temperature, it can be inactivated and subsequently degraded (Hertwig et al. 1992). CAT is almost restricted to the peroxisomes, where it essentially works to remove the H2O2 produced during photorespiration. This compartmentalization limits its ability to keep reactive oxygen species levels low in other cellular compartments, such as the chloroplast (Asada 2006).
Our results indicate that bentazon and penoxsulam induce significant changes in the physiological components evaluated, therefore indicating a greater potential of these herbicides to cause oxidative stress in rice, when compared to cyhalofop-butyl. To mitigate the effects of these herbicides, the plant activates its defense system, in which the SOD and CAT enzymes play a prominent role.
In practical terms, considering only the effects on oxidative stress and antioxidant enzymes, cyhalofop-butyl (315 g ha-1) has a lower potential to cause oxidative stress in the rice crop. However, it is noteworthy that this herbicide is recommended only for the control of grasses (Agrofit 2014). If the area is infested by magnoliopsids and sedges, bentazon should be indicated: initially, bentazon causes oxidative stress (up to 48 HAA), but the crop subsequently recovers from this negative condition.
Further studies are also needed to investigate the effects of these herbicides on shoot dry weight and crop yield.
Figure 2. Content of thiobarbituric acid reactive species (TBARS) of rice plants in response to the use of bentazon (960 g ha-1), penoxsulam (60 g ha-1), cyhalofop-butyl (315 g ha-1) and control, at 12, 24, 48 and 96 hours after application (Capão do Leão, Rio Grande do Sul State, Brazil, 2011/2012). Means followed by the same letter on the bars do not differ within each evaluation time by the Tukey test (p < 0.05).
BMC Complementary and Alternative Medicine. How Parents Choose to Use CAM: A Systematic Review of Theoretical Models
Background: Complementary and Alternative Medicine (CAM) is widely used throughout the UK and the Western world. CAM is commonly used for children and the decision-making process to use CAM is affected by numerous factors. Most research on CAM use lacks a theoretical framework and is largely based on bivariate statistics. The aim of this review was to identify a conceptual model which could be used to explain the decision-making process in parental choice of CAM.
Background
There is considerable debate around the definition of Complementary and Alternative Medicine (CAM) [1-3], definitions varying over time [4]. CAM can be defined as "any health improving technique outside of the mainstream of conventional medicine" [2]. One of the most recent definitions divides CAM into mind-body medicine, biologically based therapies, manipulative and bodybased systems and energy medicine, and whole system approaches such as Ayurveda and Traditional Chinese Medicine [5].
CAM is very popular, with recent population based estimates of yearly adult use in the UK of 20% - 28% [6,7] and 34% - 38% in the USA [8,9]. A systematic review of CAM prevalence surveys worldwide found a prevalence of between 23% - 62%, and for over the counter CAM 25% - 46% [10]. CAM is also commonly used for children, with prevalence estimates from 12% in the USA [9] and 11% in Canada [11] to 51% in Australia [12], and 17.9% [13] to 37% [14] in the UK. There is evidence that home remedies are also commonly used for children in the UK [13,15,16].
As in making choices about conventional care treatments, in choosing CAM there are numerous considerations to take into account. Such considerations revolve around the personal perception of the balance between expected drawbacks of action (e.g. side effects) and the anticipated benefits of the treatment [17]. Within this vast continuum, the variables which will impact on the decision making process include desires (utilities associated with each alternative, personal values, goals, etc), beliefs (expectations about processes and outcome, knowledge, means to achieve desired outcome etc) [18] and other practical considerations (e.g. access). Decision making in healthcare may be moving from paternalism to autonomy, finally settling on shared decision making and 'consumerism' [19].
In addition, the decision making process for child health is different to that of adults, as it may include the whole family, not just the patient [20]. Decisions are often made by parents, not the child, although there is debate around the role of children, with the model of constrained parental autonomy suggesting that parents make decisions for their children but this autonomy is not absolute [21]. Family centred care is now a central tenet of healthcare, particularly nursing [22], although a recent literature review found that despite the importance of including children in decision making on their own health, they are rarely involved in the decision making process [23]. Regarding CAM use, particularly for certain ethnic minority groups, children may be even less autonomous than in other areas [24,25]. Adolescents, however, are likely to have a greater degree of autonomy, using CAM due to personal beliefs and control [26]. Parents may use CAM to be a 'good' parent, particularly for children with a serious illness [27]. In addition, parents may not use the same treatments for children as for themselves, including home remedies [28]. Parents may be more cautious with children's health, trusting and visiting practitioners more readily and taking caution in using home remedies [25]. However, they may also feel more strongly dissatisfied with conventional healthcare for their child than for themselves [29]. In addition, females are higher users of healthcare services [30], and mothers are more likely to use CAM for their children than fathers [31,32], indicating that mainly mothers are involved in decision making.
There is a large amount of literature written on the decision-making process in conventional care but the extrapolation of the conventional medicine (CM) decisionmaking process to CAM choices is debatable [33]. The process of choosing to use CAM may be far more dynamic, iterative and more individualistic than the more logical and rational decisions in conventional care [33,34].
The literature about choosing to use CAM encompasses many different approaches. Many studies have identified the factors associated with CAM use in children in the UK [13,15,35,36], the USA [32,[37][38][39][40][41][42] and Canada [11,43]. However, much of the research into reasons for using CAM so far has been atheoretical and lacks a clear and comprehensive conceptual framework to contain and explain the processes which are inherent in CAM decisionmaking [44,45]. Most of the studies are based on survey methods, are cross-sectional and cannot determine directional relationships [46]. In recent years there has been a move to explain the mechanisms which motivate and actualise the choice of CAM.
This review examines the literature relating to the decision-making process in CAM, but excludes models which do not include psychosocial factors or affective values or beliefs such as computational models of decision making, the cognitive processes involved in decision making (e.g. Hypothetico-deductive model [47]) and other descriptive theories of the decision making process which relate to treatment choices (e.g. Prospect theory [48]). This review focuses on the decision which leads to choosing CAM; as such it concerns itself with the models which attempt to explain the choice of complementary or alternative healthcare and the psychosocial factors that are involved in this decision.
Two dominant approaches have been used to study decision-making in CAM: the first originates in the concept of healthcare utilisation, concerned with the factors which enable and encourage the consumption of health services. The second approach views the decision to use CAM as a health behaviour, where the decision to use CAM is viewed within the framework of social and psychological, mainly cognitive, factors. Background on the models reviewed is given in Table 1.
Healthcare utilisation models
The decision leading to choice of healthcare can be modelled by pathway (sequential), or determinants models [49]. Pathway models give stages of healthcare seeking, moving from self-care, adoption of the sick role, seeking medical care and finally recovery [49]. Determinants models focus on explanatory factors of the choices made [49].
The main determinants model is Andersen's sociobehavioural model (SBM) [30,50], particularly prominent in the (conventional) medicine literature [30]. This model sets out three sequential components which mitigate healthcare use; predisposing, enabling and need factors [30]. A recent addition to the model is the role of social support, whereby social influence can encourage the utilisation of healthcare as well as the perception of the efficacy of a given treatment [50].
The Consumer Decision-Making model which is less often used for healthcare has three components: external influences, the consumer decision-making process and the post decision behaviour [51].
The majority of healthcare utilisation models have found that the dominant determinant of healthcare utilisation is need, or illness [49].
Health behaviour models
Health behaviour models take into account psychological influences on behaviour and thus explain individual differences in behaviour, but often do not take account of external characteristics such as sociodemographic variables. Within this approach there are various models which have been applied to health behaviours, such as the Health Locus of Control (HLoC), the Theory of Planned Behaviour (TPB), the Transtheoretical Model (Stages of Change, TTM) and the Self-Regulatory Model (SRM).
Table 1 (excerpt). Background on the models reviewed:
Transtheoretical model (TTM): its distinguishing characteristics are, firstly, that moving through the stages is not necessarily a linear process, but it is necessary to move through all stages in order to incur sustained change; secondly, that the balance of pros and cons of carrying out a given behaviour determines the stage of change in which the individual finds him/herself [92].
Theory of planned behaviour (TPB): attempts to explain behavioural intentions as predicted from three major sources: attitudes, perceived behavioural control and subjective norms. Attitudes include beliefs and expectations about a particular behaviour and the extent to which its consequences are seen as desirable. Subjective norms are the beliefs one has about the expectations of 'significant others' and the motivation to comply with these. Perceived behavioural control is the extent to which one expects the behaviour to be easy or difficult and whether one perceives oneself able to carry it out, often equated with self-efficacy [57].
The self-regulatory model (SRM): explains how individuals hold 'illness beliefs' or 'illness perceptions' about their condition. These are predefined cognitions which represent illness characteristics and coping strategies, related to perceived cause, effects, consequences, duration and sources of control or cure. People go on to form a representation of their coping alternatives, which may be represented as 'treatment beliefs' [70,93].
Braden's self-help model: specifies central variables and relationships involved in a learned response to chronic illness and includes the following elements: side-effects burden, uncertainty, perceived enabling skills, self-help and quality of life. Its utility lies in its ability to connect an individual's use of enabling skills to the management of their illness [72].
The concept of HLoC originates in Attribution Theory [52] and relates to the extent to which individuals view events as under their control (an internal locus of control) or out of their control (an external locus of control). HLoC has been widely applied in the health and other arenas [53,54], based on the prediction that individuals high in internal locus of control are more likely to carry out health-promoting behaviours, whereas those with high external locus of control, who attribute their health to chance, will be unlikely to engage in health-enhancing activities [55]. The TTM was originally developed as a charting of the processes engendered in the elicitation and maintenance of change during the therapeutic process [56] and has been used for various health behaviours. The theory of planned behaviour [57] attempts to explain behavioural intentions as predicted from three major sources: attitudes, perceived behavioural control and subjective norms. The SRM was formulated to describe the process whereby, when confronted with a health threat, individuals seek to overcome the problem and return to normality.
This paper presents the results of a systematic review to identify how these models have been applied to the prediction of CAM use, and focuses on their potential use for children. The paper discusses the methods used in the literature search, followed by the results which discuss the quality of the papers found and their conclusions regarding the suitability of the model they test.
Aim
To identify a conceptual framework which can successfully model the parental decision making process of choosing to use CAM for children, through a systematic review of studies using a decision making model for CAM use.
A systematic search was conducted of the following databases: PsycINFO, ScienceDirect, Academic Search Elite, MEDLINE, PsycARTICLES, Elsevier, BioMed, IngentaConnect, CINAHL and EMBASE. A combination of the following search terms was used: CAM or Complementary or alternative Medicine, choice, decision making, parent or child or adolescent or paediatric or pediatric, model, utiliz*.
Two stages of screening were used to identify relevant articles. A diagram of the process of inclusion/exclusion is given in Figure 1.
At first screening exclusion criteria were: randomised control trial/efficacy trials of CAM; physician/practitioner knowledge/choice/decision making; teaching in medical school; integration into primary care; non-English language; relating to regulation of CAM; survey, comment, editorial or review; published before the year 1995.
At the second selection stage the inclusion criteria were that the paper had to present a model of factors associated with the utilisation or choice of CAM. This was done by screening each paper for the key word 'model' or 'theory' and determining their use or reference to a model within the article (excluding the term 'regression model' and 'integrative model/integrative care model'). The term model was occasionally used in reference to a paradigm, rather than the intended use as in a theoretical framework which depicts a process of sorts; when this occurred it was necessary to exclude such studies.
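The second-stage keyword screen described above can be approximated programmatically; the regular expressions below are our own illustration of the stated rule, not the authors' actual procedure.

```python
import re

# Strip the phrases the authors discounted before looking for hits
EXCLUDED = re.compile(r"\bregression model\b|\bintegrative (care )?model\b",
                      re.IGNORECASE)
KEYWORDS = re.compile(r"\b(model|theory)\b", re.IGNORECASE)

def passes_second_screen(text):
    """True if the text mentions 'model' or 'theory' outside the
    excluded phrases (a simplified sketch of the screening rule)."""
    return bool(KEYWORDS.search(EXCLUDED.sub("", text)))
```

For example, a paper mentioning "the sociobehavioural model" passes, while one mentioning only "a logistic regression model" does not.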
Results
Over 2700 articles were screened for inclusion. Only one study focussed on children, so the review had to include articles related to adult decision making. As seen in Figure 1, 22 articles met the criteria for inclusion in the review. These studies are presented in Additional file 1, and discussed below.
Design
Out of 22 studies surveyed, 16 used quantitative approaches to examine the models in question. Six studies [71,[73][74][75][76][77] used qualitative methodology. In the initial search, many studies were identified which used qualitative methodology to examine the subject of predictors or correlates of CAM use, however in the subsequent analysis these had to be excluded due to lack of use of a model.
All the studies, excluding one [67], used a cross-sectional design. The main disadvantage of such a design is that cause and effect cannot be determined, in spite of the collection of a large number of variables. This often resulted in largely correlational datasets, which, while being informative and to some extent predictive, fail to provide a causal model of CAM decision-making and to identify the exact mechanisms through which CAM choices are made. Only one study examined the propensity for CAM use over two time points, thereby assigning specific causal relationships to use of CAM [67].
Figure 1. Flowchart of the article selection process.
One major advantage of a number of studies was that they used the frequency of CAM use as a variable, allowing comparison of specific predictors of more or less use. By doing this, they were able to distinguish the beliefs of a person who merely tried CAM from those of someone with a committed treatment plan [58].
Sampling
The majority (10) of studies surveyed had medium sized samples of between 123 and 551, indicating attempts to achieve a representative population. Of the studies using self-selection there was a predominance of female respondents, usually because females were more likely to use health services [78] and more likely to use or consider using CAM [6,8,[79][80][81]. Some studies were specifically aimed at the experience of females only, mainly in relation to breast cancer and other female-dominated diseases [61,64,72,73,75], one included males only (prostate cancer) [76]. Other studies based on large scale, often national, studies had much larger samples of between 1672 and 31,044 so were able to pursue a more representative sample in most cases. Qualitative studies had much smaller samples, between 16 and 42, which is appropriate for their methodology.
Overall, the response rate for most studies was highly acceptable at 60% or higher for most studies [82]. This serves to enhance the reliability of the findings in terms of their generalisability. However, for a few studies there was very low response rate, which would indicate that it was unlikely that the sample was representative of the population concerned [44,60,62].
Most studies only included adults, most defined as over 18, but some with limited age ranges [65,70,71,76]. Only one paper [29] focused on the use of CAM for children and utilized a theoretical model.
Settings
Studies were mainly conducted in the US, Canada, Japan and the UK (although limited by English language inclusion only). Participants were recruited from conventional medicine (CM) centres, CAM clinics, health related internet sites, national surveys or random internet mailing.
Measures and analyses
Most of the quantitative studies included in this review tended to use measures which were largely found to be reliable and valid. This was often established in previous studies which used the same measures. In some cases, the reliability of the measures was tested through internal consistency (Cronbach's α), and multi-item responses were used to establish reliability within the studies. Aside from the qualitative studies and a number of the surveys, many of the studies were self-report questionnaires, which are open to biases in the form of response bias, demand characteristics and the introduction of systematic errors.
The qualitative studies used semi structured or open ended interviews. They also tended to use methods which attest to the integrity and validity of the data such as confirming the findings with participants. One qualitative study tested the predictability of the model they developed [71].
The quantitative studies predominately used multivariate logistic regression or multinomial logit regression to explain the relative variance of each of the factors significant in the decision making process although some analyses were limited to bivariate or correlational association [59,66]. The qualitative studies used either grounded theory [73,75], or 'thematic' analysis [74,76,77].
Limitations of the studies
Many studies did not distinguish between different types of CAM, which may have significant implications given that those studies that did differentiate CAM type found that the decision making process did vary for the different CAM modalities [66,[68][69][70]. Some studies were unclear about what was included in their definition of CAM [67,72].
Not all studies controlled for factors which may have biased the sample or introduced extraneous variables, such as; stage of illness, duration of illness and conventional or other treatments [61,67,72]. Studies using the SBM did not always explain the recursive nature of the factors which has recently been described [50].
Although some studies were based on large, nationally representative samples, some used small, potentially underpowered samples [58,59,62]. Two studies additionally only included CAM users, preventing comparison of CAM users and non users [58,75]. Although most studies did not specifically exclude non CAM users, there may have been response bias in terms of CAM users being more likely to take part.
A number of studies used non validated measures, which limited the validity of the study and also makes compari-son between different studies difficult. In particular a number of the studies based on the SBM used non validated measures of health beliefs [65,66], or did not include health beliefs at all [68,69]. Some studies did not provide statistics on the percentage of variance the model explained [59,66].
Discussion
The current review found that almost 100 papers (eliminated from the final analysis) did not use an overarching framework to examine their findings, leaving them open to spurious explanations. Some studies investigated psychological constructs such as beliefs, using validated measures, but refrained from going further to consolidate a model. Other studies set out to validate the items and their interrelationships within a model, discussed below (see table 1 for descriptions of models).
CAM definition
Not all studies included in this review distinguished between different types of CAM therapies. Hendrickson et al. highlight the problem of treating CAM as one modality and illustrate through their study that the determinants of use differ between types of CAM therapy [68]. Most studies included only practitioner-based therapies; others viewed CAM in terms of the behaviour. The lack of a consistent operational definition of CAM use made the papers heterogeneous and difficult to combine into conclusions, and may explain some of the inconsistent findings in the literature. There is a need to examine studies which identify types of CAM in order to compare their findings, reflected in Andersen's suggestion that the outcome of the SBM should ideally relate to a specific type of healthcare service [50]. In addition, a distinction between CAM use as a treat, a preventative strategy or a treatment of disease was often lacking; this may serve to distinguish between diverse CAM users who have different motivations for using such services [66]. Furthermore, other factors influencing the status of CAM which may affect the decision-making process will vary between, and even within, the countries of study. These include professional regulation, legal status, financial access and reimbursement of CAM, and its integration within national health systems.
Healthcare utilisation models
The socio-behavioural model was the most commonly used, and was largely supported for modelling CAM decision making, although some studies were only partially supportive of the model; most commonly enabling factors were not significant. One study [69] was based on an adapted SBM model, the "CAM Healthcare Model" [83].
Here the SBM was extended to examine the concurrent, complementary use of conventional medicine and CAM, and the choice between them, and included self-care practices and products as well as practitioner based CAM [83]. The only study using the SBM for child CAM use added the component of healthcare experience [29]. These adaptations may be important for child use of CAM which is often non practitioner based (88% of CAM use by London paediatric outpatients [84] and 64% in the USA was non practitioner based [85]), and parental use of CAM is very likely to influence child use [11,13,29,40,86,87].
The findings from the studies using the SBM are summarised in Figure 2. As described by Andersen [50], the importance of health beliefs and organisational (enabling) factors may be underestimated due to the inadequate conceptualisation (and therefore measurement) of these in many studies. Findings have supported Andersen's claim that need factors are important, but it should be emphasised that these factors are heavily dependent on social context and health beliefs [50].
This review only identified one study using a theoretical model for child use of CAM. Although this study found support for most components of the SBM, this study was based in the USA, only tested practitioner based CAM, and the survey was not specifically designed to test the SBM.
The lack of studies on child use of CAM using a theoretical model means this review is unable to extrapolate findings to the use of CAM in children. However, it does highlight the need to identify the most suitable model and to test its suitability for application to CAM use by both parents and children, to determine whether the processes are similar to the choices of CAM in adults. The original SBM used the family as the unit of study [30], and although recently this has changed to the individual, due to problems in measuring family-based variables [50], the family focus is especially appropriate for child CAM use.
One of the main strengths of the SBM is that it incorporates variables which may include both subjective (e.g. health beliefs, perception of illness) and objective (e.g. income, symptoms) variables from a variety of domains: socioeconomic, biological, psychological and social. As such this model fits very comfortably with an interdisciplinary and integrative view of healthcare utilisation, whilst taking account of idiosyncratic influences as well.
However, the SBM falls short of naming the specific processes, often complicated and non-linear, which lead to the specific decisions to use CAM among individuals and subgroups [44]. Also, qualitative studies [73,77] highlighted the importance of temporal factors, e.g. deciding to use CAM through continuous appraisal of wellbeing and due to perception of circumstances at that point, which the SBM fails to account for. While the model succeeds in incorporating health beliefs within the predisposing factors, which in turn encompass other affective and cognitive factors, specific health beliefs are often not identified. Empirical study using the SBM needs to ensure that the findings are integrated back into the model and not left as a collection of associations. To this end, it is recommended that future studies utilise a longitudinal prospective approach, whilst ensuring differentiation between different CAM modalities.
The Consumer Decision-Making model [51] was able to take into account the variability in the decision factors and the intricacy of their relationships and effects on each other. However, it is not specifically related to health and does not include factors relating to emotional and interactive aspects of care which may be important for both CAM use and child healthcare. The Consumer Decision-Making model contained no integration of affective or value-laden factors which would differentiate individuals with similar experience from one another [44].
Health behaviour models
When CAM use is viewed as a health behaviour, individual differences which are not explainable in terms of more extrinsic characteristics (e.g. socio-demographic) can be explained. This is important given that CAM use is often a behaviour specific to an individual [77]. The advantages of this approach are two-fold. Firstly, psychological factors, such as cognition, beliefs and values, are considered to be important and proximal to the decision to carry out certain behaviour, and may mediate other more extrinsic factors [88]. Secondly, psychological factors, as opposed to extrinsic factors, have the advantage of being amenable to change, at least to some extent; this is particularly important in relation to health-related interventions including CAM [88].
The health locus of control was predictive of CAM use in two studies [59,61] but not in three [58,60,62]. In addition, many of the HLoC studies were carried out on samples of patients with a chronic illness, so their findings are of limited generalisability. In terms of the decision to use CAM, the health locus of control can relate to the perception of control over illness and treatment.
The TTM and TPB were found to support the prediction of CAM use, with psychological factors being more important than medical or demographic ones, but only when the two models were used together, and findings may be open to selection bias [63]. Both beliefs about the positive effects and worries about the negative effects were important [63]. Family expectation was particularly important in the TPB [63].
The self-regulatory model received little support for predicting CAM use [70]. People may pursue a particular treatment if they perceive their illness in a certain way or hold particular treatment beliefs; for example, having a holistic approach to health and illness was the strongest predictor of CAM use [89]. Braden's self-help model had strong support, as patients used CAM because it was perceived as effective and its use was related to income; however, it was tested by only one study of cancer patients [72].
The Health Locus of Control, Self-Regulatory model, TPB and TTM all had the weakness that they tended to originate from a singular viewpoint, which results in limited integration of the sources of influence; thus they were able to account for only limited variability in the dependent variable.
The models developed using qualitative data may prove useful once empirically tested, although these were all based on patients with chronic health problems [71,75,77].
Limitations of review
Due to disparate terms used in the decision making literature, it was difficult to define search terms to capture all relevant papers. Although searches were kept broad, with extensive hand searching of reference lists to capture a wide range of articles, the search terms could have included terms such as 'framework', although subsequent scanning of results suggested this did not make a difference to the papers chosen for inclusion. Language bias may well be an issue as only English-language papers were chosen (15 were excluded for this reason) [90].
The review only included published papers, which did not capture potentially important sources of information such as theses, conference abstracts and official reports. In addition, papers on this subject were published in journals in a very wide range of subject areas; there may be other databases which should have been included. The review only included studies published after 1994, which may have been a source of bias.
Terms for CAM could have been expanded to include all CAM modalities (e.g. acupuncture, herbal etc), in order to capture studies that may not be indexed under general CAM terms.
Future research
The review found that the SBM has strong support for modelling the decision making process in CAM use. However, a number of methodological limitations were identified which future research needs to address. The decision making process appears to vary depending on the CAM modality; comparison between CAM modalities should be made. Extraneous variables should be controlled for, especially illness characteristics. Quantitative studies should include sufficiently powered samples, validated measures and multivariate analysis. Studies of the SBM should also incorporate the dynamic, interactive nature of the factors in the model.
The use of qualitative methods to explore decision making is particularly interesting and should be considered carefully. The discipline of psychology is particularly inclined towards quantitative methods, except when the subject of study is exploratory. This may represent a disadvantage in applying psychological approaches to the data, as many pertinent findings which arise from qualitative studies are often omitted from subsequent analysis because they do not fit easily into pre-set conceptual categories. As child use of CAM was identified as an underexplored area, the use of qualitative methodology to examine the predictors and correlates of CAM use would be particularly relevant.
Some of the authors are now engaged in a funded research project that uses qualitative methods to clarify validity and to identify the relative importance of the factors, and will then test the model with a quantitative questionnaire, using correlational and regression analysis for validation.
Conclusion
Andersen's socio-behavioural model has been identified as a suitable model for modelling the decision making process resulting in adult CAM use. However, the suitability of applying this model to child CAM use has not been fully studied and needs further clarification. The identification of a suitable decision making model is facilitating theory-guided research into how and why CAM is used for children, through empirical testing. Using an existing model promotes methodological consistency, which is imperative in the field of CAM, which often uses disparate methods and tools. Providing an overarching model which has been tested for a child population will help guide healthcare practitioners' understanding and application to clinical practice [91].
NISTT: A Non-Intrusive SystemC-TLM 2.0 Tracing Tool
The increasing complexity of systems-on-a-chip requires the continuous development of electronic design automation tools. Nowadays, the simulation of systems-on-a-chip using virtual platforms is common. Virtual platforms enable hardware/software co-design to shorten the time to market, offer insights into the models, and allow debugging of the simulated hardware. Profiling tools are required to improve the usability of virtual platforms. During simulation, these tools capture data that are evaluated afterward. Those data can reveal information about the simulation itself and the software executed on the platform. This work presents the tracing tool NISTT that can profile SystemC-TLM-2.0-based virtual platforms. NISTT is implemented in a completely non-intrusive way. That means no changes in the simulation are needed, the source code of the simulation is not required, and the traced simulation does not need to contain debug symbols. The standardized SystemC application programming interface guarantees the compatibility of NISTT with other simulations. The strengths of NISTT are demonstrated in a case study. Here, NISTT is connected to a virtual platform and traces the boot process of Linux. After the simulation, the database created by NISTT is evaluated, and the results are visualized. Furthermore, the overhead of NISTT is quantified. It is shown that NISTT has only a minor influence on the overall simulation performance.
I. INTRODUCTION
Nowadays, simulation is an important part of the design process of a System-on-a-chip (SoC). Virtual Platforms (VPs) can be developed in an early stage of the design process to serve as the enabler of Hardware (HW)/Software (SW) codesign. A VP is a simulation of a complete SoC that can execute target SW without modification. To support the development, standardized frameworks like SystemC-TLM 2.0 are used [1]. SystemC defines standardized interfaces for models and their connection at the Electronic System Level (ESL). The Transaction-level Modeling (TLM) extension abstracts communications between models like memory accesses or interrupts to speed up the simulation.
Compared to real HW, VPs have the benefit of allowing easy debugging and analysis. Due to Moore's law and the related increasing complexity of SoCs, simulations have also become more nested and complex. This problem leads to performance issues in the simulation and the target SW. Additional tools are required to reveal which parts of the simulation or the target SW need to be optimized to overcome those issues. Those tools are called SystemC frontends. A distinction is made between static, dynamic, and hybrid approaches. Static approaches analyze the source code of the simulation and extract information without executing the simulation. Dynamic techniques take the dynamic behavior of the simulation into account. Modules that are created during runtime and the workload that is executed on the VP are considered. For that, the simulation is executed and analyzed. Hybrid approaches combine the two techniques by analyzing both the static and the dynamic behavior. Often, static analysis is used to search and annotate functions of interest. Based on those annotations, the dynamic behavior is analyzed. The static evaluation requires a tool that reads the source code of the simulation. This tool can either be a parser or a compiler.
In this paper, we present NISTT as a novel approach to tracing the behavior of SystemC-TLM-2.0-based simulations. The design goals of NISTT are:
• Revealing insights of the simulation and the target SW
• Tracing of pre-compiled VPs without debugging symbols
• No access to the source code of the simulation
• Capturing the dynamic behavior without a static analysis

As shown in Fig. 1, NISTT is placed between the VP and the SystemC library. It can intercept function calls to the SystemC library to analyze data. NISTT does not place any requirements on the simulation or the used toolchain. The official SystemC library and the preferred toolchain can be used without changes. The standardization of the SystemC Application Programming Interface (API) guarantees compatibility. Furthermore, NISTT is invisible to the simulation. The simulation behavior is untouched.

II. RELATED WORK

Table I shows an extended compilation of existing approaches [13]. As mentioned before, the approaches can be classified as static, dynamic, or hybrid.
The static approaches are often based on a parser that analyzes the C++ source code of the simulation and derives information from the parsed output. Different approaches for the implementation of the parser exist. SystemCXML [2] uses Doxygen's C++ front end to parse the module code. ParSysC [3] uses a Purdue Compiler Construction Tool Set (PCCTS)-based parser to convert the SystemC representation to a Register-Transfer Level (RTL) Intermediate Representation (IR) to analyze the simulated system. Genz et al. also developed a PCCTS-based parser for the static analysis and a code generator that injects additional code into the simulation to evaluate runtime information [4].
A problem that often occurs when SystemC parsers are used is the limitation to a subset of the SystemC language. Therefore, compilers can be used instead of parsers to extract static information. Pinapa [5] uses a modified GNU Compiler Collection (GCC) to get the abstract syntax tree of the simulation from which static information is extracted. Dynamic information is extracted by executing the elaboration phase of the simulation, in which SystemC builds up the module hierarchy. Pinapa has been further developed to PinaVM [6]. PinaVM uses the LLVM IR to insert additional code into the simulation that is used to capture runtime information.
SHaBE [7] and AIBA [8] use GNU Debugger (GDB) to debug the simulation during execution. SHaBE uses a GDB plugin to extract data during the elaboration phase to build up the module hierarchy. AIBA creates a GDB command file from a static analysis which controls GDB during the execution to set breakpoints and store data. This approach has been further developed to support the tracing of TLM transactions [14]. Quiny [9] is a dynamic approach that uses a modified SystemC library that implements C++ operator overloading to retrieve information during runtime. Scoot [10] is a model extractor based on a custom C++ frontend that analyzes the source code using simplified SystemC header files to extract the module hierarchy, sensitivity list of the processes, and the port bindings. ReSp [11] adds Python wrappers to SystemC models which allow interaction during the simulation. The approach proposed by Stoppe et al. [12] uses the debug information of the compiled simulation executable and the SystemC API to capture simulation-related data.
Most presented static approaches try to extract the module hierarchy and the connections between the modules. This information does not include the runtime behavior of the simulation. Furthermore, all approaches need access to the source code of the simulation to either directly analyze it or compile the simulation using customized tools. That can be a drastic limitation, especially for industrial VPs where the source code is not distributed to the customer. Another aspect that stood out is the need for an extensive static analysis most hybrid approaches use to configure their dynamic analysis. It would be beneficial to perform the dynamic analysis without a preceding static one to keep the complexity of the tool as low as possible. For those reasons, the idea for the development of NISTT is to create a tracing tool that works without a static analysis, does not require access to the source code of the simulation, and is as simple as possible.
III. PROPOSED NISTT APPROACH
The design idea behind NISTT is to create a tool that is capable of tracing an already compiled SystemC-TLM-2.0-based simulation without accessing its source code or debug symbols. These requirements increase the usability of the tracing tool compared to existing approaches. To trace an already compiled simulation, NISTT intercepts the calls of the simulation to the shared SystemC library to extract data. The interception of function calls without access to the source code is possible due to the standardized SystemC API. The LD_PRELOAD feature of the Linux dynamic linker/loader, ld, is used to perform this interception. ld is responsible for loading shared libraries that are needed by a program. During runtime, ld dynamically links function calls to those libraries. The shared libraries that are needed by an executable are listed in the dynamic section of the compiled Executable and Linkable Format (ELF) file. The Linux environment variable LD_PRELOAD can be used to define paths of additional shared libraries that are loaded by ld, regardless of whether they are required by the executable or not. When the executable calls a function that is defined in a shared library, ld needs to resolve that call. The matching of the function to be called and the available functions is based on the function name. In the case of a C program, that name corresponds to the name given by the programmer. For C++ programs, a mangled name is used that is created by the compiler based on the name, return type, and parameter types of the function.

TABLE I: Existing approaches and the tools they are based on
SystemCXML [2]: Doxygen-based parser
ParSysC [3]: PCCTS-based parser
Genz et al. [4]: PCCTS-based parser
Pinapa [5]: Modified GCC, modified SystemC library
PinaVM [6]: LLVM, modified SystemC library
SHaBE [7]: GCC plugin, GDB
AIBA [8]: GDB
Quiny [9]: Modified SystemC library
Scoot [10]: Custom C++ frontend, simplified SystemC library
ReSp [11]: Python wrapper
Stoppe et al. [12]: Debug symbols, SystemC API
NISTT: LD_PRELOAD
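As a small aside, such mangled names can be inspected with standard binutils tooling. The sketch below assumes `c++filt` is installed on the host; the symbol is the Itanium-ABI mangling of the SystemC function sc_core::sc_stop(), standing in for any function NISTT might intercept:

```shell
# Demangle an Itanium-ABI C++ symbol name.
# _ZN7sc_core7sc_stopEv encodes sc_core::sc_stop() taking no arguments.
echo '_ZN7sc_core7sc_stopEv' | c++filt
# prints: sc_core::sc_stop()
```

Because the mangled name fully identifies the function signature, a preloaded library only has to export the same mangled symbol to intercept the call.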
Preloading a library that contains a function with the same name as a function of a required shared library overrides the implementation of that function. When the executable calls the function, ld resolves the call. ld searches for an implementation of that function in the loaded shared objects. The first function that is found is the one implemented in the preloaded library because it is loaded before the required libraries. Thus, preloading can be used to intercept calls to a shared library by implementing a function with the same name in a preloaded library.
A. Working Principle of NISTT
NISTT uses LD_PRELOAD to intercept calls to the SystemC library and extract tracing information. That enables interaction with the simulation without changing or accessing the source code of the simulation or dependent libraries. The only requirement is that the simulation must be dynamically linked against a SystemC-TLM-2.0-compatible library.
The working principle of NISTT is shown in Fig. 2. NISTT is a library that needs to be preloaded to a SystemC-TLM-2.0-based simulation using LD_PRELOAD. The library implements SystemC functions whose calls and passed data should be traced. When the simulation calls such a function, ld links the call to the NISTT implementation (1). NISTT can then access and evaluate the passed parameters and store a data point in a database (2). The original SystemC function is called afterward to keep the simulation behavior unchanged (3). For that, the API of the Linux dynamic linker/loader is used. After the original SystemC function returns (4), a second data point can be stored in the database (5). Then, the NISTT function returns (6). When the simulation calls a SystemC function that is not implemented in NISTT, ld directly forwards the call to the original library, as shown by the solid arrows on the left.
B. Traced SystemC Functions
NISTT can trace the following simulation properties:
• SystemC process/coroutine scheduling
• Quantum duration of processes
• Processes waiting on event notifications
• Notification of events
• Course of simulation time and real-time

NISTT overrides multiple variants of the SystemC function wait to intercept its calls. wait can be used inside SystemC threads to suspend the execution of the thread in a non-preemptive way and resume it at a later point in time. Parameters can be passed to the wait function to specify when the SystemC scheduler should resume the execution of the thread. One variant of the wait function accepts an amount of simulation time that needs to pass until the thread is resumed. Another implementation gets a SystemC event or a collection of SystemC events as a parameter. In that case, the thread is resumed once the events have been notified. When an overridden wait function of NISTT is called, the name of the calling SystemC process and simulation time/real-time timestamps are stored in a database along with information on the duration of the suspension. Depending on the parameters of the wait function, that information is either the amount of simulation time that should be waited or the name of the event that needs to be notified.
The data stored on wait calls can be used for various analyses. For instance, they provide information on the scheduling of SystemC threads. SystemC threads are coroutines that use the wait function to suspend their execution by calling the scheduler. During an intercepted wait call, the first data point is stored in the database when the execution of a thread is suspended (2). Before the thread is resumed, a second data point is stored (5). To trace the first entry into a thread, NISTT also intercepts calls to the function sc_thread_cor_fn. This function is used to invoke a coroutine.
Another property that can be derived from wait calls is the quantum duration. In loosely-timed SystemC-TLM 2.0 simulations, the concept of temporal decoupling is used. Thereby, the simulation performance is increased by reducing the temporal accuracy. SystemC threads are allowed to run ahead of the global simulation time to decrease the number of synchronizations. They keep the elapsed time since their last synchronization with the global simulation time as their local time. When the local time of a thread exceeds a limit or an operation that requires high accuracy should be executed, the thread needs to synchronize with the global simulation time. This synchronization is done by calling the wait function with the difference in time as a parameter. This difference in time is called quantum. The used quantum durations of the process provide information on the simulation performance. In general, large quantum durations are targeted because they increase the decoupling and thereby accelerate the simulation.
Apart from the wait function that gets a time as a parameter, SystemC also offers a function to suspend the execution of a thread until a specified event is notified. This function can, e.g., be used in the implementation of a CPU model to implement a Wait For Interrupt (WFI) instruction. To emulate the execution of WFI on a VP, wait can be called with the interrupt event as parameter. Besides tracing threads that are waiting on notifications of events, NISTT can also trace the notification itself. For that, the SystemC function notify of the class sc_event is implemented in NISTT to intercept calls. Depending on the parameters, the event is directly notified, or the notification is delayed by the specified amount of simulation time.
Every data point that is stored in the database contains timestamps of the current simulation time and the elapsed real-time since the beginning of the simulation. Those two timestamps can be used to put the simulation time t_sim in relation to the time needed for the simulation, the real-time t_real. The Real-Time Factor (RTF) can be calculated to measure the simulation performance:

RTF = Δt_sim / Δt_real
The data stored in the database can be visualized and evaluated in a post-processing step. Visualizations using Python and Matplotlib [15] are presented in the next section.
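As a toy illustration of such a post-processing step, assuming a hypothetical CSV export of the per-data-point real-time and simulation-time timestamps (the file name and column layout below are invented for the example), the RTF between consecutive data points can be computed with standard shell tools:

```shell
# Hypothetical trace export: elapsed real-time and simulation-time in seconds.
cat > trace.csv <<'EOF'
real_s,sim_s
0.0,0.0
10.0,1.0
30.0,2.0
EOF

# RTF between consecutive data points: delta simulation time / delta real-time.
awk -F, 'NR > 2 { printf "RTF=%.2f\n", ($2 - ps) / ($1 - pr) }
         NR > 1 { pr = $1; ps = $2 }' trace.csv
# prints:
# RTF=0.10
# RTF=0.05
```

An RTF well below 1, as in this made-up trace, indicates that simulating one second of target time costs far more than one second of wall-clock time.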
In the future, additional SystemC functions can be implemented in NISTT to extend the functionality. However, there are some limitations. Inlined SystemC functions and methods defined in template classes cannot be overridden using preloading. That is because they are directly compiled into the executable that uses those functions and therefore not stored in the shared library.
IV. EXPERIMENTAL EVALUATION
NISTT is used to profile the boot process of a VP in a case study to demonstrate the functionality of the tool. The tracing overhead is measured and compared to an intrusive implementation. As the profiled target, the Virtual Components Modeling Library (VCML)-based [16] VP AVP64 [17] is used. Since NISTT is implemented in a non-intrusive way, no changes in SystemC, VCML, or AVP64 are needed. Fig. 3 depicts the architecture of the VP. It consists of an ARMv8 CPU and peripherals that are connected via a bus. Interrupts are implemented by a TLM-based interrupt protocol. An Operating System (OS) kernel like Linux can be booted from a virtual Secure Digital (SD) card using the SD Host Controller Interface (SDHCI). PL011 ARM PrimeCell UART models serve as user interface. They can be configured to print their output to the host's terminal and read in user input. The VP is a loosely-timed simulation, i.e., the SystemC threads are temporally decoupled, keep their local time, and regularly synchronize with the simulation time. Memory accesses and interrupts are implemented by TLM transactions.
All benchmarks have been executed on a computer equipped with an AMD Ryzen 9 3900X 12-Core CPU, 64 GB RAM, and a Samsung 860 EVO SATA III SSD, running CentOS 7.9 with Linux 3.10.0. The maximum allowed quantum duration for the simulation was 100 µs.
A. Case Study
AVP64 was started with NISTT being preloaded. Fig. 4 depicts the captured results of the first 2 seconds of the Linux boot process. The RTF during the simulation is shown in Fig. 4a. It can be observed that the RTF fluctuates during the simulation. This fluctuation depends on the workload that is executed on the VP. Different parts of the workload cause different CPU utilization. When the utilization is low, idle cycles of the CPU are not simulated, which increases the performance. Besides that aspect, interactions with peripherals can also reduce the RTF due to early quantum terminations. Fig. 4b shows the used quantum durations of the CPU thread. During the periods where the RTF is low, the quanta are comparatively small. If no data points are printed, that point in time has not been simulated for the CPU model. The reason for that is that the state of the CPU stayed unchanged during that time. That is, e.g., the case when the CPU is in idle mode. The Linux idle mode is implemented by executing the WFI instruction. AVP64 emulates this instruction by using the SystemC wait function with the interrupt event as a parameter. Thereby, the execution of the processor thread is suspended until the next interrupt is raised. Fig. 4c visualizes the notified SystemC events. The IN_FREE event is used by the VCML register model to serialize parallel accesses to TLM target sockets of a peripheral. It is notified after each handled transaction of the corresponding peripheral. That means, every time the IN_FREE event is notified, a register of the peripheral has been written. Thus, the figure reveals when interactions with peripherals take place.
Besides the IN_FREE event, the events IRQ[0]_ev and arm_timer_ns of the CPU model are of importance. The first event, IRQ[0]_ev, is notified every time an interrupt is signaled to the CPU by the Generic Interrupt Controller (GIC). That is the reason why the notification pattern of the IRQ[0]_ev of the CPU and the IN_FREE event of the GIC look similar. Every time an interrupt is signaled to the CPU, the CPU interacts with the CPU interface of the GIC to handle and acknowledge the interrupt. The second event of the CPU, arm_timer_ns, is used to implement a timer. When the timer is set to expire after a certain time, the event is programmed to be notified at this time. This delayed notification is visualized in Fig. 4c by an arrow starting at the time of programming and pointing to the time of expiration. There are three periods during the boot process where the timer is programmed to expire after a comparatively long period of time (0.4 s-1.1 s; 1.1 s-1.3 s; 1.3 s-1.8 s). During these periods, no quantum data were recorded. That strengthens the assumption that the OS was in idle mode during those periods and used the timer to wake up.

Table II shows the needed computation time to execute the SC_THREADs of the VP. The simulation of 2 s of the Linux boot took 29.15 s. For 96 % of the simulation, the processor thread of the CPU model was active. Since the boot process is used as the benchmark, the CPU was the most compute-intensive model of the platform.
B. Performance
The idea of NISTT is to trace the execution of a VP and to place as few requirements as possible on the simulation. Therefore, LD_PRELOAD is used to limit the requirements to the usage of a shared SystemC library. This section examines whether the performance of NISTT is reduced due to its non-intrusive implementation. If changing simulation-dependent libraries had not conflicted with the requirements, instrumentation of functions of interest inside the SystemC library would have been an alternative implementation with the same tracing results as NISTT. This alternative, intrusive implementation is compared to NISTT to evaluate the differences in performance. Furthermore, the general overhead of NISTT is classified.
The simulation of the VP is executed in four different configurations to evaluate the tracing overhead of NISTT. As shown in Table III, the configurations differ in the tracing implementation and the way the simulation is linked against SystemC. NISTT requires a dynamic linkage against SystemC to work. The intrusive reference implementation is used to profile the overhead of intercepting function calls using preloading. As a reference, the execution time of the unmodified VP without tracing is measured. The first 2 s of the Linux boot process are again used as the benchmark to quantify the overhead of NISTT and the two intrusive implementations compared to the reference VP without tracing. Fig. 5 shows the results for the different configurations and different activated traces.
When the tracing is implemented but deactivated (cf. None), i.e., additional functions are present in either SystemC or the preloaded NISTT library, but data are not stored in a database, the execution times are similar to the one of the reference implementation without tracing. In general, no clear relation between used linking and needed execution time could be detected. Even NISTT does not harm the simulation performance compared to the corresponding intrusive implementation. What is noticeable, however, is that the kind of tracing that is enabled has an influence on the execution time. That is because the traces are executed with different frequencies and therefore cause various overheads. The Wait for Event trace, e.g., is only triggered 68 times during the simulation and therefore caused only a little overhead. The Quantum trace captured 94 174 data points which results in a visibly higher execution time. The highest overhead is created, and most data points are stored, by the Event trace (195 064 data points) and the SystemC Process trace (208 174 data points). Those results show that the produced overhead is mainly caused by the kind of the trace rather than its implementation (intrusive/non-intrusive) or used linkage. However, a non-intrusive implementation like NISTT has the advantage that the simulation does not need to be adapted to the tool. The standardized SystemC API assures compatibility. Furthermore, no access to the source code of the simulation is needed for the non-intrusive implementation. The simulation itself stays unchanged and does not need to be recompiled to be traced.
V. CONCLUSION
This paper presents a novel approach for tracing a SystemC-TLM-2.0-based simulation in a non-intrusive way. Due to the standardization of the SystemC API, NISTT can trace every simulation that is based on SystemC without imposing special requirements on the implementation. Neither the source code nor debug symbols of the simulation are needed, which drastically increases the usability compared to existing approaches. NISTT stores the captured tracing data in a database to allow evaluation and analysis in a post-processing step. We showed that the non-intrusive implementation of NISTT does not reduce performance compared to an intrusive one, while having the advantage of not requiring compile-time modifications. This also allows the analysis of VPs whose source code cannot be accessed.
However, NISTT also has a limitation due to its implementation using preloading. NISTT can only redirect calls to SystemC functions that are located in the library object file; calls to inlined functions or to functions of template classes cannot be intercepted. Despite this limitation, NISTT is a powerful tool that offers deep insights into SystemC-TLM-2.0-based simulations such as VPs without requiring access to their source code. It captures relevant ESL-simulation data and can easily be extended to trace additional functions of interest.
Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models
Machine learning based traffic forecasting models leverage sophisticated spatiotemporal auto-correlations to provide accurate predictions of city-wide traffic states. However, existing methods assume a reliable and unbiased forecasting environment, which is not always available in the wild. In this work, we investigate the vulnerability of spatiotemporal traffic forecasting models and propose a practical adversarial spatiotemporal attack framework. Specifically, instead of simultaneously attacking all geo-distributed data sources, an iterative gradient-guided node saliency method is proposed to identify the time-dependent set of victim nodes. Furthermore, we devise a spatiotemporal gradient descent based scheme to generate real-valued adversarial traffic states under a perturbation constraint. Meanwhile, we theoretically demonstrate the worst performance bound of adversarial traffic forecasting attacks. Extensive experiments on two real-world datasets show that the proposed two-step framework achieves up to 67.8% performance degradation on various advanced spatiotemporal forecasting models. Remarkably, we also show that adversarial training with our proposed attacks can significantly improve the robustness of spatiotemporal traffic forecasting models. Our code is available at https://github.com/luckyfan-cs/ASTFA.
Introduction
Machine learned spatiotemporal forecasting models have been widely adopted in modern Intelligent Transportation Systems (ITS) to provide accurate and timely prediction of traffic dynamics, e.g., traffic flow [1], traffic speed [2,3], and the estimated time of arrival [4,5]. Despite fruitful progress in improving forecasting accuracy and utility [6], little attention has been paid to the robustness of spatiotemporal forecasting models. For example, Figure 1 demonstrates that injecting slight adversarial perturbations on a few randomly selected nodes can significantly degrade the traffic forecasting accuracy of the whole system. Therefore, this paper investigates the vulnerability of traffic forecasting models against adversarial attacks.

Figure 1: An illustration of adversarial attack against spatiotemporal forecasting models on the Bay Area traffic network in California; the data range from January 2017 to May 2017. (a) Adversarial attack of geo-distributed data. The malicious attacker may inject adversarial examples into a few randomly selected geo-distributed data sources (e.g., roadway sensors) to mislead the prediction of the whole traffic forecasting system. (b) Accuracy drop of victim nodes. By adding less than 50% traffic speed perturbations to 10% of victim nodes, we observe a 60.4% accuracy drop of victim nodes in the morning peak hour. (c) Accuracy drop of neighbouring nodes. Due to the information diffusion of spatiotemporal forecasting models, the adversarial attack also leads to up to 47.23% accuracy drop for neighboring nodes.
In recent years, adversarial attacks have been extensively studied in various application domains, such as computer vision and natural language processing [7]. However, two major challenges prevent applying existing adversarial attack strategies to spatiotemporal traffic forecasting. First, the traffic forecasting system makes predictions by exploiting signals from geo-distributed data sources (e.g., hundreds of roadway sensors and thousands of in-vehicle GPS devices). It is expensive and impractical to manipulate all data sources to inject adversarial perturbations simultaneously. Furthermore, state-of-the-art traffic forecasting models propagate local traffic states through the traffic network for more accurate prediction [5], so attacking a few arbitrary data sources will result in node-varying effects on the whole system. How to identify the subset of salient victim nodes that maximizes the attack effect under a limited attack budget is the first challenge. Second, unlike most existing adversarial attack strategies that focus on time-invariant label classification [8,9], the adversarial attack against traffic forecasting aims to disrupt the target model into making biased predictions of continuous traffic states. How to generate real-valued adversarial examples without access to the ground truth of future traffic states is another challenge.
To this end, in this paper, we propose a practical adversarial spatiotemporal attack framework that can disrupt the forecasting models to derive biased city-wide traffic predictions. Specifically, we first devise an iterative gradient-guided method to estimate node saliency, which helps to identify a small time-dependent set of victim nodes. Moreover, a spatiotemporal gradient descent scheme is proposed to guide the attack direction and generate real-valued adversarial traffic states under a human imperceptible perturbation constraint. The proposed attack framework is agnostic to forecasting model architecture and is generalizable to various attack settings, i.e., white-box attack, grey-box attack, and black-box attack. Meanwhile, we theoretically analyze the worst performance guarantees of adversarial traffic forecasting attacks. We prove the adversarial robustness of spatiotemporal traffic forecasting models is related to the number of victim nodes, the maximum perturbation bound, and the maximum degree of the traffic network.
Extensive experimental studies on two real-world traffic datasets demonstrate the attack effectiveness of the proposed framework on state-of-the-art spatiotemporal forecasting models. We show that attacking 10% of the nodes in the traffic system can degrade the global forecasting Mean Absolute Error (MAE) from 1.975 to 6.1329. Moreover, the adversarial attack can induce 68.65% and 56.67% performance degradation under the extended white-box and black-box attack settings, respectively. Finally, we also show that incorporating the adversarial examples we generate into adversarial training can significantly improve the robustness of spatiotemporal traffic forecasting models.
Background and problem statement
In this section, we first introduce some basics of spatiotemporal traffic forecasting and adversarial attack, then formally define the problem we aim to address.
Spatiotemporal traffic forecasting
Let G_t = (V, E) denote a traffic network at time step t, where V is a set of n nodes (e.g., regions, road segments, roadway sensors, etc.) and E is a set of edges. The construction of G_t falls into two types: (1) prior-based, which pre-defines G_t based on metrics such as geographical proximity and similarity [10], and (2) learning-based, which automatically learns G_t in an end-to-end way [2]. Note that G_t can be static or time-evolving depending on the forecasting model. We denote by X_t = (x_{1,t}, x_{2,t}, ..., x_{n,t}) the spatiotemporal features associated with G_t, where x_{i,t} ∈ R^c represents the c-dimensional time-varying traffic conditions (e.g., traffic volume, traffic speed) and contextual features (e.g., weather, surrounding POIs) of node v_i ∈ V at t. The spatiotemporal traffic forecasting problem aims to predict the traffic states of all v_i ∈ V over the next τ time steps. Note that this formulation is consistent with state-of-the-art Graph Neural Network (GNN) based spatiotemporal traffic forecasting models [2,10,11,12], and is also generalizable to other variants such as Convolutional Neural Network (CNN) based approaches [13].
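As a toy illustration of how such models propagate local traffic states over G_t, the following sketches a single degree-normalized neighborhood aggregation (a stand-in for one GNN layer, not any specific model from the text; the adjacency matrix and features below are made up):

```python
import numpy as np

# Toy one-hop "forecast": predict each node's next state as the
# degree-normalized average of its neighbors' current states.
# A is the n x n adjacency matrix of G_t; X_t has shape (n, c).
def one_step_forecast(A, X_t):
    deg = A.sum(axis=1, keepdims=True)       # node degrees
    return (A @ X_t) / np.maximum(deg, 1.0)  # neighborhood average

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])                 # fully connected triangle
X_t = np.array([[1.0], [2.0], [3.0]])        # one feature per node
X_next = one_step_forecast(A, X_t)           # node 0 -> (2+3)/2 = 2.5
```

This diffusion of information between neighbors is also why perturbing a few victim nodes degrades predictions at neighboring nodes, as Figure 1(c) shows.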
Adversarial attack
Given a machine learning model, an adversarial attack aims to mislead the model into deriving biased predictions by generating an optimal adversarial example, where the adversarial example is constrained by a maximum bound ε under an L_p norm to guarantee the perturbation is imperceptible to humans, and y is the ground truth of the clean example x.
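This generic formulation can be sketched on a toy regressor with a one-step sign-gradient (FGSM-style) perturbation. This is illustrative only: `fgsm_attack`, the linear model, and all numbers are assumptions, not the paper's method.

```python
import numpy as np

def fgsm_attack(w, x, y, eps):
    """One-step sign-gradient perturbation maximizing the squared error
    of the linear model f(x) = w @ x within an L_inf ball of radius eps."""
    pred = w @ x
    grad = 2.0 * (pred - y) * w          # dL/dx for L = (w @ x - y)^2
    x_adv = x + eps * np.sign(grad)      # step that increases the loss
    return np.clip(x_adv, x - eps, x + eps)  # stay inside the eps-ball

w = np.array([1.0, -2.0])   # toy model weights
x = np.array([0.5, 0.5])    # clean example
y = 0.0                     # ground truth
x_adv = fgsm_attack(w, x, y, eps=0.1)
```

The attacker's gain can be checked by comparing the loss on `x` and `x_adv`; the perturbed example stays within the ε-ball yet raises the model's error.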
Note that the adversarial attack happens in the testing stage; the attacker cannot manipulate the forecasting model or its output, and the forecasting model performs well on the benign testing set.
Based on the amount of information the attacker can access in the testing stage, the adversarial attack can be categorized into three classes. White-box attack. The attacker can fully access the target model, including the model architecture, the model parameters, gradients, model outputs, the input traffic states, and the corresponding labels. Grey-box attack. The attacker can partially access the system, including the target model and the input traffic states, but without the labels. Black-box attack. The attacker can only access the input traffic states, query the outputs of the target model or leverage a surrogate model to craft the adversarial examples.
Adversarial attack against spatiotemporal traffic forecasting
This work aims to apply adversarial attacks to spatiotemporal traffic forecasting models. We first define the adversarial traffic state, where S_t ∈ {0, 1}^{n×n} is a diagonal matrix whose i-th diagonal element indicates whether node i is a victim node, and X̃_t is the perturbed spatiotemporal feature, named the adversarial spatiotemporal feature. We restrict the adversarial traffic state by the victim node budget η and the perturbation budget ε.
Note that following the definition of adversarial attack, we leave the topology of G t immutable as we regard the adjacency relationship as a part of the model parameter that may be automatically learned in an end-to-end way.
Attack goal. The attacker aims to craft adversarial traffic states that fool the spatiotemporal forecasting model into deriving biased predictions. Formally, given a spatiotemporal forecasting model f_θ(·), the adversarial attack against spatiotemporal traffic forecasting is defined accordingly, where T_test and T_train denote the sets of time steps of all testing and training samples, respectively, L(·) is the loss function measuring the distance between the predicted traffic states and the ground truth, and θ* denotes the optimal parameters learned during the training stage.
Since the ground truth (i.e., future traffic states) under the spatiotemporal traffic forecasting setting is unavailable at run-time, the practical adversarial spatiotemporal attack primarily falls into the grey-box attack setting.
However, investigating white-box attacks is still beneficial: it helps us understand how adversarial attacks work and can help improve the robustness of spatiotemporal traffic forecasting models (e.g., via adversarial training). We discuss how to extend our proposed adversarial attack framework to the white-box and black-box settings in Section 3.2.
Methodology
In this section, we introduce the practical adversarial spatiotemporal attack framework in detail. Specifically, our framework consists of two steps: (1) identify the time-dependent victim nodes, and (2) attack with the adversarial traffic state.
Identify time-dependent victim nodes
One unique characteristic that distinguishes attacking spatiotemporal forecasting from conventional classification tasks is the inaccessibility of the ground truth at the test phase. Therefore, we first construct a surrogate label for future traffic states to guide the attack direction, where g_φ(·) is a generalized function (e.g., tanh(·), sin(·), f_θ(·)), and δ_{t+1:t+τ} are random variables sampled from a probability distribution π(δ_{t+1:t+τ}) to increase the diversity of the attack direction. In our implementation, we derive φ from the pre-trained forecasting model parameters θ*, and δ_{t+1:t+τ} ∼ U(−ε/10, ε/10). In real-world production [5], forecasting models are usually updated in an online fashion (e.g., hourly). Therefore, we estimate the missing latest traffic states from previous input data, H̃_t = g_ϕ(H_{t−1}), where g_ϕ(·) is the estimation function parameterized by ϕ. For simplicity, we directly obtain ϕ from the pre-trained traffic forecasting model f_θ*(·).
With the surrogate traffic state label Ỹ_{t+1:t+τ}, we derive the time-dependent node saliency (TDNS) for each node, where L(f_θ(H_{t−T+1:t}), Ỹ_{t+1:t+τ}) is the loss function and σ is the activation function. Intuitively, M_t reveals the node-wise loss impact under the same degree of perturbation. Note that M_t may vary with the time step t. A similar idea has been adopted to identify static pixel saliency for image classification [15].
In more detail, the adversarial traffic states in the loss L(f_θ(H_{t−T+1:t}), Ỹ_{t+1:t+τ}) of Equation 6 are updated by the iterative gradient-based adversarial method [8], where H^{(i)}_{t−T+1:t} is the adversarial traffic state at the i-th iteration, α is the step size, and clip_{X_{t−T+1:t},ε}(·) is the projection operation which clips the spatiotemporal features within the maximum perturbation bound ε.
The time-dependent node saliency gradient is then averaged over a batch, where γ is the batch size. We use the ReLU activation function to compute a non-negative saliency score for each time step. Finally, we obtain the victim node set S_t based on M_t, where s_{(i,i),t} denotes the i-th diagonal element of S_t, and Top(·) is a 0-1 indicator function returning whether v_i is among the top-k salient nodes at time step t.
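The selection step amounts to averaging per-node gradients over a batch, applying ReLU, and keeping the top-k nodes. A minimal sketch, with random gradients standing in for the real ∂L/∂H terms (`select_victims` and the shapes are illustrative assumptions):

```python
import numpy as np

def select_victims(node_grads, eta):
    """Pick the eta most salient nodes from a batch of per-node
    loss gradients; returns the 0/1 diagonal of S_t as a vector."""
    saliency = np.maximum(node_grads.mean(axis=0), 0.0)  # ReLU(batch mean)
    victims = np.argsort(saliency)[::-1][:eta]           # top-eta salient nodes
    mask = np.zeros(node_grads.shape[1])
    mask[victims] = 1.0                                   # s_(i,i),t entries
    return mask

rng = np.random.default_rng(0)
grads = rng.normal(size=(64, 10))   # batch of 64 gradient samples, 10 nodes
mask = select_victims(grads, eta=3)
```

Because the gradients depend on the current input window, the resulting victim set is time-dependent, matching the TDNS idea in the text.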
Attack with adversarial traffic state
Based on the time-dependent victim set, we conduct adversarial attacks on spatiotemporal traffic forecasting models. Specifically, we first generate perturbed adversarial traffic features using gradient descent methods. Taking the widely used Projected Gradient Descent (PGD) [8] for illustration, we construct Spatiotemporal Projected Gradient Descent (STPGD) as below, where H^{(i−1)}_{t−T+1:t} is the adversarial traffic state at the (i−1)-th iteration of the iterative gradient descent, α is the step size, and clip_{X_{t−T+1:t},ε}(·) is the operation that bounds the adversarial features in an ε-ball.
Instead of perturbing all nodes as in vanilla PGD, we inject perturbations only on the victim nodes selected in S_t. Similarly, we can generate perturbed adversarial traffic features by extending other gradient based methods, such as MIM [9].
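The masked update can be sketched as follows. This is a simplified stand-in: `stpgd` perturbs a static feature matrix with a user-supplied gradient function instead of the full spatiotemporal model, and the toy inputs below are assumptions.

```python
import numpy as np

def stpgd(grad_fn, x, mask, eps, alpha, steps):
    """Masked PGD sketch: only rows of x whose mask entry is 1 (the
    victim nodes in S_t) are perturbed; each iterate is projected back
    into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                                  # loss gradient w.r.t. input
        x_adv = x_adv + alpha * np.sign(g) * mask[:, None]  # perturb victims only
        x_adv = np.clip(x_adv, x - eps, x + eps)            # project into eps-ball
    return x_adv

x = np.zeros((4, 2))                    # 4 nodes, 2 features each
mask = np.array([1.0, 0.0, 1.0, 0.0])   # nodes 0 and 2 are victims
x_adv = stpgd(lambda z: np.ones_like(z), x, mask, eps=0.3, alpha=0.1, steps=5)
```

With a constant positive gradient, victim rows climb by α per step until the ε projection caps them, while non-victim rows are left untouched.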
In the testing phase, we inject the adversarial traffic states H_{t−T+1:t} into the target system. The details of the adversarial spatiotemporal attack framework under the grey-box setting are given in Algorithm 1.
The overall adversarial spatiotemporal attack can be easily extended to the white-box and black-box settings, which are detailed below.
White-box attack. Since the adversaries can fully access the data and labels under the white-box setting, we directly use the real ground truth traffic states to guide the generation of adversarial traffic states. The detailed algorithm is introduced in Appendix A.1.
Black-box attack. The most restrictive black-box setting assumes limited accessibility to the target model and labels. Therefore, we first employ a surrogate model, which can be learned from the training data or by querying the traffic forecasting service [16,17]. Then we generate adversarial traffic states based on the surrogate model to attack the targeted traffic forecasting model. Please refer to Appendix A.2 for more details.
We conclude this section with the theoretical upper bound analysis of the proposed adversarial attack strategy. In particular, we demonstrate the attack performance against the spatiotemporal traffic forecasting model is related to the number of chosen victim nodes, the budget of adversarial perturbations, as well as the traffic network topology.
Let the L-th layer embeddings of the forecasting model be given; then the upper bound of the adversarial loss satisfies the stated inequality, where λ denotes the maximum weight bound over all layers of the forecasting model, β denotes the parameter of the activation function in f_θ(·), C denotes the maximum degree of G, and η and ε are the budgets for the number of victim nodes and for the perturbations, respectively.
Proof. Please refer to Appendix B.
Algorithm 1: Adversarial spatiotemporal attack under the grey-box setting
Input: Previous traffic data, pre-trained spatiotemporal model f_θ*(·), pre-trained traffic state prediction model g_ϕ(·), maximum perturbation budget ε, victim node budget η, and iterations K.
Result: Perturbed adversarial traffic states H_{t−T+1:t}.
/* Step 1: Identify time-dependent victim nodes */
1 Estimate the current traffic state H̃_{t−N+1:t} with the function g_ϕ(·);
2 Construct the surrogate labels Ỹ_{t+1:t+τ} for future traffic states by Equation 5;
3 Compute the time-dependent node saliency M_t from H̃_{t−T+1:t} and Ỹ_{t+1:t+τ} by Equations 6-9;
4 Obtain the victim node set S_t by Equation 10;
/* Step 2: Attack with adversarial traffic state */
5 Initialize the adversarial traffic state H^{(0)}_{t−T+1:t};
6 Generate the perturbed adversarial features by Equation 11.

For evaluation, all datasets are chronologically ordered; we take the first 70% for training, the following 10% for validation, and the remaining 20% for testing. The statistics of the two datasets are reported in Appendix C.
Baselines. In the current literature, few studies can be directly applied to the real-valued traffic forecasting attack setting. To guarantee the fairness of comparison, we construct two-step baselines as below. For victim node identification, we adopt random selection and use the topology-based methods (i.e., node degree and betweenness centrality [20]) to select victim nodes. We also employ PageRank (PR) [21] as the baseline to decide the set of victim nodes. For adversarial traffic state generation, we adopt two widely used iterative gradient-based methods, PGD [8] and MIM [9], to generate adversarial perturbations. In summary, we construct eight two-step baselines, PGD-Random, PGD-PR, PGD-Centrality, PGD-Degree, MIM-Random, MIM-PR, MIM-Centrality, and MIM-Degree. For instance, PGD-PR indicates first identifying victim nodes with PageRank and then applying adversarial noises with PGD. Depending on the adversarial perturbation method, we compare two variants of our proposed framework, namely STPGD-TDNS and STMIM-TDNS.
Target model. To evaluate the generalization ability of the proposed adversarial attack framework, we adopt the state-of-the-art spatiotemporal traffic forecasting model, GraphWaveNet (Gwnet) [2], as the target model. Evaluation results on more target models are reported in Appendix F.
Evaluation metrics. Our evaluation focuses on both the global and local effects of adversarial attacks on spatiotemporal models, where L(·) is a user-defined loss function. Unlike the majority of adversarial attack targets, which are classification models (evaluated by, e.g., adversarial accuracy), traffic forecasting is a regression task. Therefore, we adopt Mean Absolute Error (MAE) [22] and Root Mean Square Error (RMSE) [23] for evaluation. In the experiments, we select 10% of all nodes as victim nodes, and ε is set to 0.5. The batch size γ is set to 64. The number of iterations K is set to 5, and the step size α is set to 0.1.
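For reference, the two regression metrics in their standard form (a sketch; the global/local G-/L- variants in the text additionally aggregate over nodes and test samples):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the prediction error."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def rmse(y_true, y_pred):
    """Root Mean Square Error: penalizes large errors more than MAE."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

err_mae = mae([1.0, 2.0], [2.0, 4.0])    # (1 + 2) / 2 = 1.5
err_rmse = rmse([1.0, 2.0], [2.0, 4.0])  # sqrt((1 + 4) / 2)
```

Because RMSE squares the residuals, an attack that produces a few very large prediction errors inflates RMSE more than MAE, which is why both are reported.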
Ablation study
We then conduct an ablation study of our adversarial attack framework. Due to the page limit, we report the results of STPGD-TDNS on the PeMS-BAY dataset. We consider two variants of our approach: (1) w/o TDNS, which randomly chooses victim nodes to attack, and (2) w/o STPGD, which applies vanilla PGD noise to the selected victim nodes. As reported in Table 2, we observe (3.91%, 6.28%, 2.97%, 3.60%) and (33.41%, 52.45%, 26.19%, 32.97%) attack performance degradation on the four metrics when removing the proposed TDNS and STPGD modules, respectively. These results demonstrate the effectiveness of the two-step framework. Moreover, we observe that the STPGD module plays the more important role in the adversarial spatiotemporal attack.
Parameter sensitivity
We further study the parameter sensitivity of the proposed framework, including the number of victim nodes η, the perturbation budget ε, and the batch size γ. Due to page limit, we report the result of G-RMSE on the PeMS-BAY dataset. We observe similar results by using other metrics and on the METR-LA dataset. Each time we vary a parameter, we set other parameters to their default values.
Effect of η. First, we vary the number of victim nodes from 0% to 40%. As reported in Figure 2 (a), our approach achieves the best attack performance with a limited victim node budget, and the advantage decreases when the attack can be applied to more nodes.
Effect of ε. Second, we vary the perturbation budget from 0% to 90%. As shown in Figure 2 (b), the G-RMSE first increases and then slightly decreases. This is perhaps because the clip function in Equation 11 weakens the diversity of the attack noise.
Effect of γ. Finally, we vary the batch size from 8 to 128, as illustrated in Figure 2 (c). We observe that the adversarial attack is relatively stable with respect to the batch size. However, an overly large batch size reduces the attack performance, possibly by over-smoothing the gradient average in Equation 8.
Defense adversarial spatiotemporal attacks
Finally, we study the defense against adversarial spatiotemporal attacks. One primary goal of our study is to help improve the robustness of spatiotemporal forecasting models. Therefore, we propose to incorporate the adversarial training scheme for traffic forecasting models with our adversarial traffic states, denoted by AT-TDNS. We compare it with (1) conventional adversarial training (AT) [8] and (2) Mixup [24] with our adversarial traffic states. Note that we also tried other strategies, such as adding L_2 regularization, which fail to defend against the adversarial spatiotemporal attack. Other state-of-the-art adversarial training methods, such as TRADE [25], cannot be directly applied to regression tasks. Please refer to Appendix E for more training details.
The results in G-MAE on the PeMS-BAY are reported in Table 4. Overall, we observe AT or Mixup can successfully resist the adversarial spatiotemporal attack, and AT-TDNS that combines the adversarial training scheme with our adversarial traffic states achieves the best defensive performance. The above results indicate the defensibility of adversarial spatiotemporal attacks, which should be further investigated to deliver a more reliable spatiotemporal forecasting service in the future.
Related work
Spatiotemporal traffic forecasting. In recent years, deep learning based traffic forecasting models have been extensively studied due to their superiority in jointly modeling temporal and spatial dependencies [10,11,6,26,2,12,27,28]. To name a few, STGCN [10] applies graph convolution and gated causal convolution to capture spatiotemporal information in the traffic domain, and ASTGCN [11] proposes a spatial-temporal attention network for capturing dynamic spatiotemporal correlations. As another example, GraphWaveNet [2] adaptively captures latent spatial dependencies without requiring prior knowledge of the graph structure. The key objective of the above-mentioned models is more accurate traffic forecasting; the vulnerability of spatiotemporal traffic forecasting models remains an underexplored problem.
Adversarial attack. Deep neural networks have been proven vulnerable to adversarial examples [8,14]. As an emerging direction, various adversarial attack strategies on graph-structured data have been proposed, including both targeted and non-targeted attacks [29,30]. However, existing efforts on adversarial attacks mainly focus on classification tasks with static labels [9,24]. Only a few works study the vulnerability of GCN based spatiotemporal forecasting models, either under query-based attacks [31] or by generating adversarial examples with evolutionary algorithms [32]. In this paper, we study gradient based adversarial attack methods against spatiotemporal traffic forecasting models, which are model-agnostic and generalizable to various attack settings, i.e., white-box, grey-box, and black-box attacks.
Conclusion
This paper showed the vulnerability of spatiotemporal traffic forecasting models under adversarial attacks. We proposed a practical adversarial spatiotemporal attack framework, which is agnostic to forecasting model architectures and is generalizable to various attack settings. Specifically, we first constructed an iterative gradient guided node saliency method to identify a small time-dependent set of victim nodes. Then, we proposed a spatiotemporal gradient descent based scheme to generate real-valued adversarial traffic states by flexibly leveraging various adversarial perturbation methods. The theoretical analysis demonstrated the upper bound of the proposed two-step framework under human imperceptible victim node selection and perturbation budget constraints. Finally, extensive experimental results on real-world datasets verify the effectiveness of the proposed framework. The reported results will inspire further studies on the vulnerability of spatiotemporal forecasting models, as well as on practical defense strategies against adversarial attacks that can be deployed in real-world ITS systems.
Remarks. Assumption 1 provides a more general activation function assumption, which is met by the ReLU, sigmoid, and tanh functions [10,11,2], among others. We note that [31] also analyzes the traffic forecasting loss under query-based attack. Our theorem differs in two respects: first, we give the worst performance bound of an adversarial traffic forecasting attack, which [31] does not provide; second, our theorem is more general because we do not assume a specific activation function.
C Data statistics
We conclude the data statistics for two-real world datasets in Table 5.
where m represents the number of samples in test sets, and n denotes the number of nodes.
E Defense adversarial traffic states
Given a spatiotemporal forecasting model f_θ(·), adversarial training for spatiotemporal traffic forecasting is defined accordingly, where L(·) is the loss function measuring the distance between the predicted traffic states and the ground truth, θ denotes the parameters learned during the training stage, and T_train denotes the set of time steps of all training samples. We use the following strategies: (1) Adversarial training (AT) [8]. We use adversarial training with the PGD-Random attack method to generate the adversarial samples under the white-box setting.
(2) Mixup [24]. We randomly mix clean and adversarial samples to train the forecasting model; the adversarial samples are also generated by the PGD-Random method under the white-box setting. (3) AT-TDNS. We use adversarial samples generated by our method STPGD-TDNS under the white-box setting to train the model.
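The adversarial training objective above can be sketched on a toy linear regressor. This is illustrative only: `train_adv` uses a one-step FGSM-style inner perturbation rather than the paper's STPGD-TDNS samples, and all names and numbers are assumptions.

```python
import numpy as np

def train_adv(X, y, eps=0.1, lr=0.05, epochs=200):
    """Min-max sketch: perturb inputs to increase the loss (inner max),
    then update the weights on the perturbed batch (outer min)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        pred = X @ w
        grad_x = 2.0 * (pred - y)[:, None] * w[None, :]    # dL/dX
        X_adv = X + eps * np.sign(grad_x)                  # inner max step
        pred_adv = X_adv @ w
        w -= lr * 2.0 * X_adv.T @ (pred_adv - y) / len(y)  # outer min step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, -1.0])   # noiseless linear targets
w_final = train_adv(X, y)
```

Training on the perturbed batch trades a small amount of clean accuracy for reduced sensitivity to ε-bounded input perturbations, which is the effect the defense experiments measure.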
F Further experiments F.1 Experiments on other models
The other spatiotemporal traffic forecasting models are summarized as follows.
(1) STGCN [10] applies graph convolution and gated causal convolution to capture spatiotemporal information in the traffic domain. (2) ASTGCN [11] presents a spatial-temporal attention method for capturing dynamic spatiotemporal correlations. (3) MTGNN [33] creates self-learned node embeddings for forecasting traffic conditions and does not depend on a pre-defined graph.
We report the evaluation results on the other target models in Tables 6-8.
F.2 Ablation study under white-box setting
Since selecting a small set of victim nodes is important for attacking the traffic forecasting model, we conduct a further ablation study to evaluate the TDNS method under the white-box setting. Table 9 reports the overall results on Gwnet under the white-box attack.
F.3 Experiments at different time intervals
We conduct further experiments at different time intervals, including 5 minutes, 10 minutes, 15 minutes, 30 minutes, and 45 minutes. We report the results at different time intervals compared with other baselines in Tables 11-15. Overall, as the time interval increases, the forecasting and adversarial attack performances decrease, as reported in Table 10.
For example, the G-MAE increases from 3.9458 to 6.1329 as the time interval grows from 5 minutes to 60 minutes, while the attack performance degradation drops from 75.93% to 67.80%. One possible reason is that as the time interval increases, the forecasting error of the spatiotemporal model increases, making it more challenging for the adversarial attack methods to estimate the target label and generate effective adversarial examples.
Stellar populations in the nuclear regions of nearby radiogalaxies
We present optical spectra of the nuclei of seven luminous (P(178 MHz) > 10^25 W Hz^-1 sr^-1) nearby (z < 0.08) radiogalaxies, which mostly correspond to the FR II class. In two cases, Hydra A and 3C 285, the Balmer and 4000Å break indices constrain the spectral types and luminosity classes of the stars involved, revealing that the blue spectra are dominated by blue supergiant and/or giant stars. The ages derived for the last burst of star formation in Hydra A are between 7 and 40 Myr, and in 3C 285 about 10 Myr. The rest of the narrow-line radiogalaxies (four) have 4000Å break and metallic indices consistent with those of elliptical galaxies. The only broad-line radiogalaxy in our sample, 3C 382, has a strong featureless blue continuum and broad emission lines that dilute the underlying blue stellar spectra. We are able to detect the Ca II triplet in absorption in all seven objects, with good quality data for only four of them. The strengths of the absorptions are similar to those found in normal elliptical galaxies, and these values are consistent both with single stellar populations of ages as derived from the Balmer absorption and break strengths, and with mixed young+old populations.
INTRODUCTION
In recent years new evidence that star formation plays an important role in Active Galactic Nuclei (AGN) has been gathered:

• The presence of strong Ca II λλ8494,8542,8662Å triplet (CaT) absorptions in a large sample of Seyfert 2 nuclei has provided direct evidence for a population of red supergiant stars that dominates the near-IR light (Terlevich, Díaz & Terlevich 1990). The values found in Seyfert 1 nuclei are also consistent with this idea if the dilution produced by a nuclear non-stellar source is taken into account (Terlevich, Díaz & Terlevich 1990; Jiménez-Benito et al. 2000). The high mass-to-light ratios L(1.6µm)/M inferred in Seyfert 2 nuclei also indicate that red supergiants dominate the nuclear light (Oliva et al. 1995), but a similar conclusion does not hold for Seyfert 1 nuclei.

⋆ Visiting Fellow at IoA, UK
† Visiting Professor at INAOE, Mexico
• The absence of broad emission lines in the direct optical spectra of Seyfert 2 nuclei which show broad lines in polarized light can be understood only if there is an additional central source of continuum, most probably blue stars (Cid Fernandes & Terlevich 1995, Heckman et al. 1995). This conclusion is further supported by the detection of polarization levels which are lower in the continuum than in the broad lines (Miller & Goodrich 1990, Tran, Miller & Kay 1992).
• Hubble Space Telescope imaging of the Seyfert Mrk 477 reveals that the central UV light arises in a resolved region of a few hundred pc, in which prominent CaT absorption and broad He II λ4686Å emission lines reveal the red supergiant and Wolf-Rayet stars of a powerful starburst. The stars dominate the UV to near-IR light directly received from the nucleus (Heckman et al. 1997). As a conservative estimate, at least 50 per cent of the light emitted by the nucleus is stellar. Mrk 477 is not a rare case: a large sample of nearby bright Seyfert 2s and LINERs show similar resolved starburst nuclei of 80 to a few hundred pc in size (Colina et al. 1997, González-Delgado et al. 1998, Maoz et al. 1995), with some of the Seyfert 2s containing dominant Wolf-Rayet populations (Kunth & Contini 1999, Cid Fernandes et al. 1999).
A starburst-AGN connection has been proposed in at least three scenarios: starbursts giving birth to massive black holes (e.g. Scoville & Norman 1988); black holes being fed by surrounding stellar clusters (e.g. Perry & Dyson 1985, Peterson 1992); and pure starbursts without black holes (e.g. Terlevich & Melnick 1985, Terlevich et al. 1992). The evidence for starbursts in Seyfert nuclei strongly supports some kind of connection. However, it is still to be demonstrated that starbursts play a key role in all kinds of AGN. One of the most stringent tests to assess whether all AGN have associated enhanced nuclear star formation is the case of lobe-dominated radio-sources, whose host galaxies have relatively red colours when compared to other AGN varieties. In this paper we address the stellar content associated with the active nuclei of a sample of FR II radiogalaxies, the most luminous class of radiogalaxies (Fanaroff & Riley 1974), which possess the most powerful central engines and radio-jets (Rawlings & Saunders 1991). The presence of extended collimated radio-jets, which fuel the extended radio structure over ≳ 10^8 yr, strongly suggests the existence of a supermassive accreting black hole in the nuclei of these radiogalaxies. This test addresses the question of whether AGN that involve conspicuous black holes and accretion processes also contain enhanced star formation.
In section 2 we introduce the sample and detail the data acquisition and reduction processes. In section 3 we provide continuum and line measurements of the most prominent features of the optical spectra of the radiogalaxies. In section 4 we discuss the main stellar populations responsible for the absorption and continuum spectra. In section 5 we offer notes on individual objects. A summary of the main conclusions from this work is presented in section 6.
DATA ACQUISITION AND REDUCTION
Our sample of radiogalaxies was extracted from the 3CRR catalogue (Laing, Riley & Longair 1983) with the only selection criteria being edge-brightened morphology, which defines the FR II class of radiogalaxies (Fanaroff & Riley 1974), and redshift z < 0.08. This last condition was imposed in order to be able to observe the redshifted CaT at wavelengths shorter than λ9300Å, where the atmospheric bands are prominent. Six out of a complete sample of ten FR II radiogalaxies that fulfill these requirements were randomly chosen. In addition to this sub-sample of FR IIs, we observed the unusually luminous FR I radiogalaxy Hydra A (3C 218). This has a radio luminosity of P(178MHz) = 2.2 × 10^26 W Hz^−1 Sr^−1, which is an order of magnitude above the typical FR I/FR II dividing luminosity.
Spectroscopic observations of a total of seven radiogalaxies, one normal elliptical galaxy to serve as a reference, and five K III stars to serve as velocity calibrators were performed using the double-arm spectrograph ISIS mounted at the Cassegrain focus of the 4.2m William Herschel Telescope in La Palma during two observing runs, on 1997 November 7-8 and 1998 February 19-20. The first run was photometric but the second was not, being partially cloudy on the 20th. The seeing, as measured from the spatial dimension of spectrophotometric stars, was between 0.7 and 0.8 arcsec throughout the nights.
A slit width of 1.2 arcsec centered on the nucleus of galaxies and stars was used. We oriented the slit along the radio-axis for all radiogalaxies, except for Hydra A, for which the orientation was perpendicular to the radio-axis.
An R300B grating centered at λ4500Å with a 2148x4200 pixel EEV CCD and an R316R grating centered at λ8800Å with a 1024x1024 pixel TEK CCD were used in the 1998 run. The projected pixel scales on these chips are 0.2 arcsec/pixel and 0.36 arcsec/pixel, respectively. This configuration provides the spectral resolution necessary to resolve the Mg b and CaT features and, at the same time, offers a wide spectral span: λ3350Å-λ6000Å at 5.1Å resolution in the blue and λ7900Å-λ9400Å at 3.5Å resolution in the red. In the 1997 run, in which we assessed the viability of the project, we used the R600B and R600R gratings instead. This setup covers the λ3810Å-λ5420Å and λ8510Å-λ9320Å ranges in the blue and red arms, at 2.6 and 1.7Å resolution respectively. Just one radiogalaxy (DA 240) was observed with this alternative setup. The dichroics 5700 and 6100 were used in 1997 November and 1998 February, and in both runs we used a filter to avoid second-order contamination in the spectra.
We obtained flux standards (HZ 44 and G191-B2B) for the four nights and gratings, except in 1998 Feb 20, when we were unable to acquire the red spectrum of the corresponding standard due to a technical failure. One calibration lamp CuAr+CuNe exposure per spectral region and telescope position was also obtained for all objects.
The total integration times for the radiogalaxies (from 1 to 3 hr) were split into intervals of about 1200 or 1800 s in order to diminish the effect of cosmic rays on individual frames and to allow lamp flat-fields to be taken with the red arm of the spectrograph between science exposures. The TEK CCD has a variable fringing pattern at the wavelengths of interest, such that the variations are correlated with the position at which the telescope is pointing. Since flat-fielding is crucial for the reddest wavelengths, where the sky lines are most prominent, after every exposure of 20 to 30 min we acquired a flat-field at the same telescope position as that at which the galaxies were being observed. We followed this procedure with all galaxies except DA 240. The same procedure was also used in the case of the elliptical galaxy, splitting its total integration time in two. Table 1 summarizes the journal of observations, where column 1 gives the name of the object; column 2 the radio power at 178 MHz; column 3 the redshift; column 4 the integrated V magnitude of the galaxy; column 5 identifies whether the object is a radiogalaxy (RG), a normal elliptical (E) or a star (S); column 6 gives the date of the beginning of the night in which the observations were carried out; column 7 the position angle (PA) of the slit; column 8 the total exposure time; column 9 the grating used; and column 10 the linear size corresponding to 1 arcsec at the redshift of the galaxies (for H0 = 50 km s^−1 Mpc^−1). The data for the radiogalaxies were extracted from the 3C Atlas (Leahy, Bridle & Strom, http://www.jb.man.ac.uk/atlas/) and for the host galaxy of Hydra A from the RC3 catalogue (de Vaucouleurs et al. 1991).
The data were reduced using the IRAF software package. The frames were first bias subtracted and then flat-field corrected. In the case of the red-arm spectra, the different flats obtained for a single object were combined when no significant differences were detected between them. However, in several cases the fringing pattern shifted position, producing differences of up to 20 per cent; in these cases we corrected each science frame with the flat-field acquired immediately before and/or afterwards. Close inspection of the faintest levels of the flat-fielded frames showed that the fringing had been successfully eliminated. Wavelength and flux calibration were then performed, and the sky was subtracted by using the outermost parts of the slit.
The atmospheric bands, mainly water absorption at λ8920Å-λ9400Å, affect the redshifted CaT region of several radiogalaxies. The bands have been extracted using a template constructed from the stellar spectra obtained each night. The template was built averaging the normalized flux of spectrophotometric and velocity standard stars, once the stellar absorption lines had been removed. The atmospheric bands were eliminated from the spectra of the galaxies dividing by the flux-scaled template. This reduces the S/N of the region under consideration, especially since the bands are variable in time and one of our observing nights was partially cloudy. However, the technique allows the detection of the stellar atmospheric features. The CaT of the elliptical galaxy is not affected by atmospheric absorption. Figure 1 shows the line spectrum of the sky and, as an example, the atmospheric absorption template of 1998 Feb 19. Water-band correction proved to be critical for the detection of the CaT lines when the atmospheric conditions were most adverse. Figure 2 shows extractions of the nuclear 2 arcsec of the spectra of the galaxies. This corresponds to 844 to 2020 pc for the radiogalaxies, and 98 pc for the normal elliptical galaxy.
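The template-division step described above (averaging the normalized, line-free standard-star spectra into a telluric template, then dividing each galaxy spectrum by the flux-scaled template) can be sketched as follows. This is a minimal illustration, not the reduction pipeline actually used: the function names and the synthetic arrays are hypothetical, and real spectra would first need the stellar absorption lines removed and the template resampled to the galaxy wavelength grid.

```python
import numpy as np

def telluric_template(star_fluxes):
    """Average normalized standard-star spectra (stellar absorption
    lines assumed already removed) into an atmospheric-band template
    that is ~1 outside the telluric bands."""
    normed = [flux / np.median(flux) for flux in star_fluxes]
    return np.mean(normed, axis=0)

def remove_telluric(galaxy_flux, template):
    """Divide the galaxy spectrum by the template; because the template
    is normalized to ~1 in unabsorbed regions, the continuum level of
    the galaxy is preserved while the bands are removed."""
    return galaxy_flux / template
```

Dividing rather than subtracting is what makes the correction independent of the galaxy's absolute flux level, at the cost of amplifying noise inside the bands, as noted in the text.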
CaT index
The CaT was detected in all of the objects, although in three cases (3C 285, 3C 382 and 4C 73.08) it was totally or partially affected by residuals left by the atmospheric band corrections and the measurement of its strength was thus precluded. For the remaining cases, the strength was measured in the rest-frame of the galaxies against a pseudo-continuum, following the definition of the CaT index of Díaz, Terlevich & Terlevich (1989). In Hydra A, 3C 285 and 3C 382, the red continuum band is seriously affected by residuals left from the atmospheric absorption removal. We defined two alternative continuum bands, 8575Å< λ < 8585Å and 8730Å< λ < 8740Å, that substitute the red-most continuum window of the CaT index. We checked this new definition against the original one in the elliptical galaxy, which does not have residuals in its continuum bands, and the two systems agreed to within 5 per cent. Velocity dispersions were measured by cross-correlating the galaxy spectra with the stellar spectra obtained with the same setup. The errors in the velocity dispersions calculated in this way were less than 10 per cent.
A high velocity dispersion tends to decrease the measured values of indices based on EW measurements, so the CaT index has to be corrected for the broadening of the absorption lines by the corresponding velocity dispersion. In order to calculate the correction we convolved stellar profiles with gaussian functions of increasing width, and measured the CaT index in them. A good description of the correction found for our data is given by the functional form ΔEW(Å) = (σ(km s^−1) − 100)/200. The corrections were applied to the values measured in the galaxies, which were thus converted into unbroadened indices.
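As a minimal sketch, the linear correction above can be applied directly. The function name and input values are illustrative; since broadening lowers the measured index, the correction is added to recover the unbroadened value.

```python
def unbroadened_cat(ew_measured, sigma):
    """Correct a measured CaT equivalent width (Angstrom) for velocity
    broadening, using the linear form Delta_EW = (sigma - 100) / 200
    with sigma in km/s, as derived from broadened stellar profiles."""
    delta_ew = (sigma - 100.0) / 200.0
    return ew_measured + delta_ew
```

For example, an index of 6.0Å measured in a galaxy with σ = 300 km s^−1 corresponds to an unbroadened index of 7.0Å, while at σ = 100 km s^−1 the correction vanishes by construction.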
λ4000Å and Balmer Break indices
Stellar populations can be dated through the measurement of the λ4000Å or Balmer breaks. In intermediate-age to old populations the discontinuity at λ4000Å results from a combination of the accumulation of the Balmer lines towards the limit of the Balmer absorption continuum at λ3646Å (the Balmer break) and the increase in stellar opacity caused by metal lines shortwards of λ4000Å. Table 3 lists the values of the λ4000Å break index, Δ4000Å, measured in the spectra of the 6 narrow-line radiogalaxies and the elliptical galaxy in our sample. This excludes 3C 382, which has a spectrum dominated by a strong blue continuum and broad emission lines, and shows very weak stellar atmospheric features and no break. We adopted the definition given by Hamilton (1985), which quantifies the ratio of the average flux level in two broad bands, one redwards of the break (4050-4250Å) and one bluewards of it (3750-3950Å). Both bands contain strong metallic and Balmer absorption lines in the case of normal galaxies. In active galaxies, the measurement can be contaminated by [Ne III]λ3869Å emission, which in our case is weak. The contamination by high-order Balmer lines in emission is negligible. The net effect of emission contamination is to decrease the Balmer break index. In the radiogalaxies, we have estimated this effect by interpolating the continuum levels below the [Ne III] emission, and we estimate that the ratio can be affected by 6 per cent at worst, in the case of 3C 192, and by less than 3 per cent for the rest of the objects. Table 3 lists emission-devoid indices.
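The band-ratio measurement can be sketched as below. This is a simplified illustration, not the measurement code used in the paper: it averages Fλ on a uniform rest-frame wavelength grid (Hamilton's original definition averages Fν), and the function and variable names are hypothetical.

```python
import numpy as np

def break_4000(wave, flux):
    """Lambda-4000 break index: mean flux in the band redwards of the
    break (4050-4250 A) divided by the mean flux in the band bluewards
    of it (3750-3950 A). wave and flux are rest-frame arrays."""
    blue = (wave >= 3750) & (wave <= 3950)
    red = (wave >= 4050) & (wave <= 4250)
    return flux[red].mean() / flux[blue].mean()
```

With this convention an index near 1 indicates a flat (young, blue) continuum, while values around 2 are typical of old populations, consistent with the ranges quoted for the sample.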
Hydra A and 3C 285 have spectra which are much bluer than those of normal elliptical galaxies. In order to quantify better the strength of the break and the ages of the derived populations, we have performed a bulge subtraction using as template the spectrum of NGC 4374, scaled to eliminate the G-band absorption of the radiogalaxies. Since the velocity dispersions of the stars in NGC 4374 and in the radiogalaxies are comparable within the spectral resolution of our data, no further corrections were needed. The G-band absorption is prominent in stars of spectral types later than F5 and it is especially strong in type K. NGC 4374 is a normal elliptical galaxy, with a spectral shape which compares well with those of other normal ellipticals in the spectrophotometric atlas of galaxies of Kennicutt (1992). Thus, by removing a scaled template of NGC 4374, we are isolating the most massive stars (M ≳ 1M⊙) in the composite stellar population of the radiogalaxies. Figure 3 shows the bulge subtractions obtained for these two radiogalaxies.
We measured on the bulge-subtracted spectra Δ4000Å and also the Balmer break index as defined by the classical Dλ1 method of stellar classification designed by Barbier and Chalonge (Barbier 1955, Chalonge 1956; see Strömgren 1963). The latter quantifies the Balmer discontinuity in terms of the logarithmic difference of the continuum levels (D) and the effective position of the break (λ1). The method places a pseudo-continuum on top of the higher-order terms of the Balmer series in order to measure the effective position of the discontinuity. Figure 4 shows the placement of continua, pseudo-continua and the measurements of D and λ1 for an A2I star from the stellar library of Jacoby, Hunter & Christian (1984). The functional dependences on the effective temperature and gravity of the stars are sufficiently different for D and λ1 to support a two-dimensional classification.
The Dλ1 method could only be reliably applied in the cases of Hydra A and 3C 285. For the other radiogalaxies, the bulge subtractions led to results that did not allow the identification of the absorption features and/or the break in an unambiguous way due to the resulting poor S/N. Figure 5 shows the Dλ1 measurements performed on the bulge-subtracted spectra of Hydra A and 3C 285. We have placed different continuum levels to estimate the maximum range of acceptable parameters of the stellar populations that are involved. Table 3 lists the λ4000Å and Balmer break indices measured in both the bulge-subtracted and the original spectra of the radiogalaxies.
Lick indices
The presence of prominent Balmer absorption lines, from Hγ up to H12 λ3750Å, is one of the most remarkable features of the blue spectra of two of the seven radiogalaxies, while Hβ and Hα are filled in by conspicuous emission lines. A clear exception to the presence of the Balmer series in absorption is the broad-line radiogalaxy 3C 382.
In order to estimate the Balmer strength, crucial to date any young stellar population involved, we use the EW of the H10 λ3798Å line, which appears only weakly contaminated by emission in the radiogalaxies. H10 is chosen as a compromise: an easily detectable Balmer line that shows both a minimum of emission contamination and clear wings against which to measure the adjacent continuum. The Balmer lines from Hβ to H9 λ3836Å are contaminated by prominent emission, which in Case B recombination comes in decreasing emission ratios to Hβ of 1, 0.458, 0.251, 0.154, 0.102 and 0.0709 (Osterbrock 1989); H10 has an emission contamination of 0.0515 × Hβ. At the same time, the absorption strengths are quite similar from Hβ to H10, although EW(H10) is actually systematically smaller than EW(Hβ) in young to intermediate-age populations. González-Delgado, Leitherer & Heckman (2000) obtain, in their population synthesis models, ratios of EW(Hβ)/EW(H10) between 1.3 and 1.6 for bursts with ages 0 to 1 Gyr and constant or coeval star formation histories. Lines of order higher than 10 have decreasing emission contamination, but they also increasingly merge towards the Balmer continuum limit.
A caveat in the use of H10 as an age calibrator comes from the fact that this line might be contaminated by metallic lines in old populations. Although our measurements of H10 in NGC 4374 are around 1.5Å, an inspection of the spectra of three elliptical galaxies (NGC 584, NGC 720, NGC 821) observed in the same wavelength range (but with lower S/N) and archived in the Isaac Newton Group database indicates that a wide range of EW(H10), from 2 to 4Å, could characterize elliptical galaxies, while their Hβ indices are in the 1 to 2Å regime. If confirmed by better data, these results could indicate that although the upper Balmer series is detected in elliptical galaxies, it could indeed be contaminated by the absorptions of other species. Clearly, more work needs to be done on the near-UV spectra of elliptical galaxies before conclusive evidence can be derived for the behaviour of EW(H10) in old stellar systems, and its contamination by metallic lines.
In all the radio-galaxies observed in this work, the H10 profile is narrow and reproduces the shape of the wings of the lower-order Balmer absorption lines. Hydra A and 3C 285 clearly provide the best fittings. As an illustration, Figure 6 shows the estimated absorption line profiles for the Hβ, Hγ and Hδ lines, assuming a constant ratio between their EWs and that of H10, and also a scaled (×1.4) H10 profile for the case of Hβ in Hydra A.
We also measured indices that are mostly sensitive to the metal content of the stellar populations involved. The Lick indices of Mg and Fe (e.g. Worthey et al. 1994) serve this purpose. In order to avoid the contribution of [O III]λ4959Å to the continuum measurement for the molecular index Mg2, we have displaced the lower continuum band of this index to 4895.125Å< λ < 4935.625Å. This redefinition does not alter the value of the index in the elliptical galaxy, which shows no [O III] emission. Table 4 compiles the EW of H10, and the metallic indices Mg b, Fe5270, Fe5335, [MgFe] and Mg2 of the Lick system, measured in the rest-frame of the galaxies in our sample. The atomic indices are affected by broadening, like the CaT index, while Mg2 is only affected by lamp contributions in the original IDS Lick system (Longhetti et al. 1998). We have calculated broadening corrections as in section 3.1 for the atomic lines, and adopted the corrections calculated by Longhetti et al. (1998) for the molecular lines.
The uncorrected values of these indices are denoted with a subindex u in Table 4. The errors of the individual line and molecular indices were estimated adopting continua shifted from the best-fit continua by ±1σ. This led to average errors between 8 and 10 per cent for individual line and molecular indices, and ∼ 6 per cent for [MgFe].
The agreement between our measurements of Lick indices and those carried out by other authors (González 1993, Davies et al. 1987, Trager et al. 2000a) on our galaxy in common, NGC 4374, is better than 10 per cent.
Comparison with elliptical galaxies and population synthesis models
The analysis of the spectral energy distributions and colours of elliptical galaxies suffers from a well known age-metallicity degeneracy (Aaronson et al. 1978). However, this degeneracy is broken when the strengths of suitable stellar absorption lines are taken into account (e.g. Heckman 1980). The plane defined by the [Hβ] and [MgFe] indices, in this sense, can discriminate the ages and metallicities of stellar systems. It is on the basis of this plot that a large spread of ages in elliptical galaxies has been suggested (González 1993). Bressan, Chiosi & Tantalo (1996) claim, however, that when the UV emission and velocity dispersion of the galaxies are taken into account, the data are only compatible with basically old systems that have experienced different star formation histories (see also Trager et al. 2000a, 2000b). A recent burst of star formation that involves only a tiny fraction of the whole elliptical mass in stars would raise the [Hβ] index to values characteristic of single stellar populations which are 1 to 2 Gyr old (Bressan et al. 1996).
Most likely, the stellar populations of radiogalaxies are also the combination of different generations. Direct support for this interpretation in the case of Hydra A comes from the fact that the stellar populations responsible for the strong Balmer lines are dynamically decoupled from those responsible for the metallic lines (Melnick, Gopal-Krishna & Terlevich 1997).
This interpretation is also consistent with the modest Δ4000Å measurements we have obtained. Figure 7 shows a comparison of the values found in radiogalaxies with those of normal elliptical, spiral and irregular galaxies, including starbursts, from the atlas of Kennicutt (1992). The radiogalaxies 3C 98, 3C 192, 4C 73.08 and DA 240 have indices of the order of 1.9 to 2.3, which overlap with those of normal E galaxies, Δ4000Å = 2.08 ± 0.23. These values correspond to populations dominated by stars of ages 1 to 10 Gyr, if one assumes the coeval population synthesis models of Longhetti et al. (1999). However, Hydra A and 3C 285 have indices in the range 1.4 to 1.6, typical of coeval populations which are 200 to 500 Myr old. Once the bulge population is subtracted, the Δ4000Å indices of Hydra A and 3C 285 decrease to 1.2 and 1.0 respectively, which are typical of systems younger than about 60 Myr. Hamilton (1985) measured the Δ4000Å index in a sample of stars covering a wide range of spectral types and luminosity classes. He found a sequence of increasing Δ4000Å from B0 to M5 stars, with values from about 1 to 4, respectively. A comparison with this sequence leads us to conclude that the break in the bulge-subtracted spectrum of Hydra A is dominated by B-type or earlier stars, while that of 3C 285 is dominated by A-type stars. The Δ4000Å index does not clearly discriminate luminosity classes for stars with spectral types earlier than G0.
The equivalent width of the H10 absorption line in these two radiogalaxies gives further support to the interpretation of the Balmer break as produced by a young stellar population. In Hydra A we find after bulge subtraction EW(H10) ≈ 3.9Å, which, according to the synthesis models of González-Delgado et al. (2000), gives ages of 7 to 15 Myr for an instantaneous burst of star formation, and 40 to 60 Myr for a continuous star formation mode, in solar metallicity environments. In the case of 3C 285, EW(H10) ≈ 6Å would imply an age older than about 25 Myr for a single-population burst of solar metallicity.
The metallic indices of normal elliptical galaxies range between the values 0.56 ≲ log [MgFe] ≲ 0.66 (González 1993), which characterizes oversolar metallicities for ages older than about 5 Gyr. This is also the typical range of our radiogalaxies, although 3C 285 shows a clear departure with log [MgFe] ≈ 0.4. However, [MgFe] tends to be smaller for populations younger than a few Gyr with a similarly oversolar metallic content. Since 3C 285 has a clear burst of recent star formation, we conclude that its overall abundance is also most probably solar or oversolar.
The blue stellar content
A better estimate of the spectral type and luminosity class of the stars that dominate the break in Hydra A and 3C 285 comes from the two-dimensional classification of Barbier and Chalonge. In Figure 8 the solid squares connected by lines represent the maximum range of possible Dλ1 values measured in these radiogalaxies.
The Balmer break index is sensitive to the positioning of the pseudo-continuum on top of the higher-order Balmer series lines, which in turn is sensitive to the merging of the wings of the lines, enhanced at large velocity dispersions. In order to assign spectral types and luminosity classes to the stars that dominate the break, therefore, it is not sufficient to compare the values we have obtained with those measured in stellar catalogues. The values measured for the radiogalaxies could be corrected for their intrinsic velocity dispersions; we have chosen instead to recalibrate the index using template stars of different spectral types and luminosity classes convolved with gaussian functions, until they reproduce the width of the Balmer lines observed in the radiogalaxies (FWHM ≈ 12.5Å). We used the B0 to A7 stars from the stellar library of Jacoby et al. (1984), which were observed at 4.5Å resolution. The values of the Dλ1 indices measured in these broadened stars are represented in Figure 8 by their respective classifications. For comparison we also plot the grid traced by the locus of unbroadened stars, as published by Strömgren (1963). The broadening of the lines shifts the original locus of supergiant stars from the λ1 ≲ 3720Å range (Chalonge 1956) to the 3720Å ≲ λ1 ≲ 3740Å range, occupied by giant stars in the original (unbroadened) classification. Giant stars, in turn, shift to positions first occupied by dwarfs. Most dwarfs have Balmer line widths comparable to those of the radiogalaxies, and thus their locus in the diagram is mostly unchanged.
The value of the D index indicates that the recent burst in Hydra A is dominated by B3 to B5 stars, and the effective position of the Balmer break (λ1) indicates that these are giant or supergiant stars, respectively. These stars have masses of 7 and 20M⊙ (Schmidt-Kaler 1982). From the stellar evolutionary tracks of massive stars with standard mass-loss rate at Z⊙ or 2Z⊙ (Schaller et al. 1992, Schaerer et al. 1993, Meynet et al. 1994) we infer that these stars must have ages between 7 to 8 Myr (B3I) and 40 Myr (B5III). Note that the B4V stars in Figure 8, near the location of Hydra A, cannot produce the break and at the same time follow the kinematics of the nucleus (see section 5.3). Any dwarf star located in the stellar disk of Hydra A would show absorption lines broadened beyond the 12.5Å FWHM we measure in this radiogalaxy, and its position would have been shifted further towards larger values of λ1.
The location in the D − λ1 plane of 3C 285 indicates that the break is produced by A2I stars. These are 15M⊙ stars. Again, ages of 10 to 12 Myr are found for the last burst of star formation in this radiogalaxy. The interpretation of the blue excess in terms of A type stars is further supported by the detection of the Ca II H line in the bulge-subtracted spectrum.
The red stellar content
The CaT index in the radiogalaxies has values between 6 and 7Å. Díaz et al. (1989) find that at solar or oversolar metallicities red supergiant stars have CaT indices ranging from 8.5 to 13Å, red giant stars from 6 to 9Å, and dwarfs from 4.5 to 8.5Å. The values we find are thus compatible with either giant or dwarf stars. However, we favour the interpretation of giant stars, since the old bulge population will be dominated by red giants, as in the case of elliptical galaxies.
We have measured the CaT in a control sample of elliptical galaxies observed by J. Gorgas and collaborators (priv. communication) and combined these measurements with those found by Terlevich et al. (1990) in a sample of elliptical, lenticular, spiral, and active galaxies of different kinds. We find that the range of CaT in elliptical galaxies, 5 to 7.5Å, comprises the range of values of the radiogalaxies (see Figure 9).
García-Vargas, Mollá & Bressan (1998) find in their population synthesis models values of the CaT between 6 and 7Å for ages ranging from 100 Myr to 1 Gyr, and larger afterwards, assuming coeval star formation and solar or oversolar metallicity. A revised version of these models by Mollá & García-Vargas (2000), which includes extended libraries of M-type stars, predicts for ages between 2 and 20 Gyr a constant value of 7Å at solar metallicity, and 8.5Å at 2Z⊙. These synthesis models are based on parametric fittings of the CaT strength in NLTE model atmospheres (Jørgensen, Carlsson & Johnson 1992) and on fittings of empirical values measured in stellar libraries (e.g. Díaz et al. 1989; Zhou 1991). The fits work well in the low metallicity regime. However, at metallicities higher than solar, the relationship between metallicity and the CaT index shows a large scatter, and the linear fittings lose any predictive power (see Figure 4 of Díaz et al. 1989).
Red supergiant stars appear in coeval population synthesis models between 5 and 30 Myr, and create a maximum strength of the CaT index (CaT ≳ 9Å) around 6 to 16 Myr for Z⊙ and 5 to 30 Myr for 2Z⊙ (Mayya 1997). Strengths of CaT ≳ 7Å are characteristic of bursts with ages between 5 and 40 Myr. Leitherer et al. (1999) also find that the total strength of the population depends dramatically on the slope of the initial mass function (IMF) and the star formation history. While a coeval burst with a complete Salpeter IMF yields values surpassing 7Å between 6 and 12 Myr, the same IMF in a continuous star formation mode yields values of only 5.5Å at maximum. The CaT strength values for coeval star formation derived by Leitherer et al. (1999) differ substantially from those derived by García-Vargas and coworkers (1993, 1998) and by Mayya (1997), most probably due to a different calibration of the CaT index.
Mixed populations of young bursts which contain red supergiants, superposed on old populations, can also yield values of the CaT between 4 and 8Å (García-Vargas et al. 1998). Since metal-rich giant stars have CaT values ranging from 6 to 9Å, we regard our observations of the CaT index in radiogalaxies as being compatible with ages of 1 to 15 Gyr.
3C 98
3C 98 shows a double-lobe radio structure which spans 216 arcsec at 8.35 GHz, with a radio-jet that crosses the northern lobe and ends in a bright hotspot. There is little evidence of a southern jet, but a twin hotspot in the southern lobe is also present (Baum et al. 1988, Leahy et al. 1997, Hardcastle et al. 1998). Broad band imaging of the host of 3C 98 reveals a smooth and slightly elongated elliptical galaxy located in a sparse environment (Baum et al. 1988, Martel et al. 1999). Deeper images reveal a faint shell as a sign of a past disturbance (Smith & Heckman 1989). However, the rotation curves of 3C 98 show that the stellar kinematics has negligible rotation (< 25 km s^−1) and no anomalies (Smith, Heckman & Illingworth 1990). Although the optical colours of 3C 98 are similar to those of normal elliptical galaxies (Zirbel 1996), one should note that the colours are modified by the high Galactic extinction towards the source, AV = 0.986 (Schlegel et al. 1998). The Δ4000Å and [MgFe] indices found in this radiogalaxy are characteristic of old metal-rich populations.
An extended narrow-line region with a wealth of structure, and no particular orientation with respect to the radio-axis, is also detected in direct narrow-band images (Baum et al. 1988). The narrow emission lines detected in the optical spectra correspond to highly ionized plasma (Baldwin, Phillips & Terlevich 1981, Saunders et al. 1989, Baum, Heckman & van Breugel 1992). 3C 98 has been detected in X-rays by the Einstein satellite at a flux level f(0.5-3 keV) = 1 × 10^−13 erg cm^−2 s^−1, or LX = 4.2 × 10^41 erg s^−1 (Fabbiano et al. 1984). The source detection was too weak to search for any extension beyond a point source.
3C 192

According to Sandage (1972), 3C 192 is a member of a small group of galaxies. Broad-band imaging reveals the host of 3C 192 to be a round elliptical galaxy with a companion of similar size 70 arcsec away, and no obvious signs of interaction (Baum et al. 1988, Baum & Heckman 1988). The stellar kinematics shows negligible rotation, < 30 km s⁻¹, and no disturbances (Smith et al. 1990). The spectral shape of 3C 192 also shows a blue excess with respect to our template elliptical galaxy.
3C 218 or Hydra A
3C 218 is one of the most luminous radiosources in the local (z < 0.1) Universe, surpassed only by Cygnus A. Although the radio-luminosity of 3C 218 exceeds by an order of magnitude the characteristic FR I/FR II break luminosity, it has an edge-darkened FR I morphology (Ekers & Simkin 1983, Baum et al. 1988, Taylor et al. 1990). The total radio structure extends for about 7 arcmin, such that the radio-jets, which flare at 5 arcsec, are curved and display 'S' symmetry.
3C 218 has been optically identified with the cD2 galaxy Hydra A (Simkin 1979), which dominates the poor cluster Abell 780. This, however, is an X-ray bright cluster with L_X ≈ 2 × 10⁴⁴ erg s⁻¹ in the 0.5–4.5 keV range, as seen by the Einstein satellite (David et al. 1990). The total bolometric luminosity has been estimated to be 5 × 10⁴⁴ erg s⁻¹ from 0.4–2 keV ROSAT observations (Peres et al. 1998). The thermal model that best fits the data supports the existence of a cooling flow which is depositing mass in the central regions of the cluster at a rate of 264 (+81/−60) M⊙ yr⁻¹. Hydra A has an associated type II cooling flow nebula, characterized by high Hα and X-ray luminosities, but relatively weak [N II] and [S II] and strong [O I] λ6300Å emission lines, usually found in LINERs. The Hα extended narrow-line emission (Baum et al. 1988) actually fills the gap between the radio lobes.
The λ2200Å, λ2700Å, B-band, B − V and also the U − I continuum colours of the center of Hydra A have been attributed to a ∼10 Myr burst of star formation involving 10⁸ to 10⁹ M⊙ (Hansen, Jørgensen & Nørgaard-Nielsen 1995, McNamara 1995). This is further supported by the detection of strong absorption lines of the Balmer series in the near-UV spectrum of the nucleus (Hansen et al. 1995, Melnick et al. 1997). We also find strong Balmer absorption lines, which we identify as originating in blue supergiant or giant B stars. One of our best two matches in the Dλ1 classification we use in this work, B3I stars, also indicates ages of 7 to 8 Myr. Heckman et al. (1985) found that the stellar kinematics has negligible rotation (13 ± 18 km s⁻¹), but their observations were limited to the λ4200–5700Å region, and did not include the higher Balmer lines in absorption. On the other hand, Ekers & Simkin (1983) report very fast rotating stars in the central 20 kpc of the radiogalaxy. A two-dimensional analysis of the blue spectrum shows a tilt of the Balmer absorption lines of 450 ± 130 km s⁻¹ in the central 3 arcsec, while the Ca II H,K lines do not show any displacement (Melnick et al. 1997). This tilt is further confirmed by our data. The conclusion derived by Melnick et al. (1997) is that the young stars have formed a disk which is rotating perpendicular to the position of the radio-axis. The star-formation disk has also been detected in U-band images (McNamara 1995).
3C 285
The host galaxy of 3C 285 has been identified with the brightest member of a group of galaxies (Sandage 1972). Optical imaging of the galaxy reveals an elliptical main body and a distorted S-shaped envelope aligned with a companion galaxy ∼40 arcsec to the northwest (Heckman et al. 1986). Narrow-band imaging shows that the S-shaped extension is due to continuum-emitting structures (Heckman et al. 1986, Baum et al. 1988). The narrow emission lines originate from photoionization with a high ionization parameter (Saunders et al. 1989). Sandage (1972) found that the B − V colour of 3C 285 is much bluer than that of a normal elliptical galaxy. Our observations show that the blue light of the nucleus (inner 2 arcsec) is dominated by a burst which contains A2I stars, and thus has an age of 10 to 12 Myr. Saslaw, Tyson and Crane (1978) identified a bright, blue, slightly resolved object halfway between the nucleus and the eastern radio-lobe, which they denoted 3C 285/09.6. Optical spectra and imaging obtained by van Breugel & Dey (1993) showed that the knot is at the same redshift as the galaxy, and that its UBV colours and 4000Å break are consistent with a burst of 70 Myr, which they interpreted as induced by the radio-jet.
3C 285 is a classical double-lobed radiogalaxy of 190 arcsec total extension at 4.86 GHz, with two hotspots and an eastern ridge showing curvature roughly along the line to the optical companion (Leahy & Williams 1984, Hardcastle et al. 1998). The source has not been detected by the Einstein satellite in X-rays, at a flux level f(0.5–3 keV) < 1.5 × 10⁻¹³ erg cm⁻² s⁻¹, or L_X < 4.4 × 10⁴² erg s⁻¹ (Fabbiano et al. 1984).
3C 382
3C 382 has a double-lobe structure, with a clear jet in the northern lobe that ends in a hotspot. A hotspot in the southern lobe is also detected, but a counterpart jet is not clear, although a trail of low fractional polarization is detected (Black et al. 1992). The total 3.85 GHz size between hotspots is 179 arcsec (Hardcastle et al. 1998).
Optically, the radiosource is identified with a disturbed elliptical galaxy dominated by a very bright and unresolved nucleus (Mathews, Morgan & Schmidt 1964, Martel et al. 1999), located in a moderately rich environment (Longair & Seldner 1979). The optical spectra show a strong continuum and prominent broad lines photoionized by a power-law type of spectrum (Saunders et al. 1989, Tadhunter, Fosbury & Quinn 1989). The stellar population of the host galaxy, as we show in our study, is barely detected in the nuclear regions. The Einstein satellite detected 3C 382 in X-rays at a flux level f(0.5–3 keV) = 1.3 × 10⁻¹³ erg cm⁻² s⁻¹, or 2 × 10⁴⁴ erg s⁻¹ (Fabbiano et al. 1984). The source is resolved in ROSAT/HRI observations, but its interpretation is debatable since the luminosity is too high for a galaxy environment which is only moderately rich (Prieto 2000).
DA 240
This is a double-lobed giant radio-galaxy of 34 arcmin angular size between hotspots, with ongoing nuclear activity at 2.8 cm (Laing et al. 1983, Nilsson et al. 1993, Klein et al. 1994). The amplitude of the angular cross-correlation of sources found in optical plates around the position of the radio source is weak, Agg = 0.101 ± 0.118 (Prestage & Peacock 1988).

4C 73.08

4C 73.08 is a giant double-lobed radio-galaxy, with 13 arcmin angular size between hotspots (Meyer 1979, Nilsson et al. 1993).
The environment of this radiogalaxy is also poor, with an amplitude of the angular cross-correlation of optical galaxies around the radiosource of Agg = 0.203 (Prestage & Peacock 1988).
4C 73.08 shows a high excitation spectrum typical of narrow line radio galaxies. The colours of the radiogalaxy and the ∆4000Å and [MgFe] indices are comparable to those of our reference elliptical galaxy.
CONCLUSIONS
We have presented spectra of 7 radiogalaxies in the λ3350–6000Å and λ7900–9400Å ranges. All radiogalaxies show either a clear detection or an indication of a detection of the Ca II λλ8498,8542,8662Å triplet in absorption, and in 6 of them we detected Balmer absorption lines.
On the basis of the ∆4000Å break measurements, we conclude that 4 of these radiogalaxies contain populations which are typical of normal elliptical galaxies, 2 have populations younger than a few hundred Myr, and in one the stellar population cannot be characterized.
In the two cases with young bursts, Hydra A and 3C 285, we subtracted the bulge population, using a normal elliptical galaxy as a template, in order to better characterize the young burst. The ∆4000Å break and Balmer index measurements indicate that the young population is dominated by blue giant and/or blue supergiant stars: B3I or B5III for Hydra A, and A2I for 3C 285. The derived age of the burst is between 7 and 40 Myr for Hydra A, and 10 to 12 Myr for 3C 285.
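The bulge-subtraction step described above can be sketched as scaling the template spectrum to the galaxy over a normalization window and differencing; this is a schematic illustration only, since the normalization procedure is not specified here:

```python
def subtract_bulge(galaxy_flux, template_flux, norm_window):
    """Scale an elliptical-template spectrum to the observed galaxy over
    a set of normalization pixels, then subtract it, leaving the
    young-burst residual."""
    scale = (sum(galaxy_flux[i] for i in norm_window) /
             sum(template_flux[i] for i in norm_window))
    return [g - scale * t for g, t in zip(galaxy_flux, template_flux)]

# Toy check: a spectrum that is pure bulge leaves a zero residual.
residual = subtract_bulge([2.0, 2.0, 2.0, 2.0], [1.0, 1.0, 1.0, 1.0], [0, 1])
```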
The CaT strength, invoked to support the detection of young stellar populations in active galaxies, fails to provide a clear conclusion on the nature of the stars that dominate the red light in these radiogalaxies. The CaT could be due either to the red giant stars that dominate old bulge populations, or to the red dwarfs of a young starburst (t ≲ 7 Myr), or to the red giants and supergiants of a post-starburst (t ≳ 30 Myr), or to a combination of a bulge population and a recent burst of star formation. A mixed population is again favoured as the interpretation of the red spectra.
It is known that although the hosts of FR II sources look like ellipticals, few of them have true elliptical galaxy properties: magnitudes, colours, and structural parameters show a wider dispersion than in normal ellipticals (Baum et al. 1988, Zirbel 1996). Most of the radiogalaxies in our sample have reported structural disturbances in their optical morphologies, show signs of interactions, have close companions, belong to rich environments, and/or have signatures of cooling flows. These are phenomena that facilitate carrying large quantities of gas to the centers of the galaxies and can power the AGN and/or provoke bursts of star formation.
Good-quality data in the blue region for this sample are necessary in order to constrain the ages of the young populations involved, especially in the cases of 3C 98, 3C 192, DA 240, 3C 382 and 4C 73.08, where our bulge subtractions led to poor signal-to-noise ratios and therefore unreliable results.
A detailed analysis of the ages of the last burst of star formation will set the relative chronology of the onset of the radio and starburst activity in these galaxies, and shed new light on the relationships between jets, AGN and star formation.
ACKNOWLEDGMENTS
This work has been supported in part by the 'Formation and Evolution of Galaxies' Network set up by the European Commission under contract ERB FMRX-CT96-086 of its TMR programme. We thank PATT for awarding observing time. IA, ET and RJT also thank the Guillermo Haro Programme for Advanced Astrophysics of INAOE for the opportunity it gave us to meet and make progress on the project during the 1998 workshop 'The Formation and Evolution of Galaxies'. GC acknowledges a PPARC Postdoctoral Research Fellowship, and ET an IBERDROLA Visiting Professorship to the Universidad Autónoma de Madrid. We thank J. Gorgas for providing the CaT spectra of the sample of comparison elliptical galaxies prior to publication, M. García-Vargas for suggestions on how to improve the fringing removal, and an anonymous referee for crucial comments on the relevance of Balmer indices in old populations.

Figure: ∆4000Å break measurements for the radiogalaxies in our sample (RG) and for elliptical (E), spiral (S) and irregular (Irr) galaxies in the atlas of Kennicutt (1992). We have also plotted the indices measured in stars from the library of Jacoby et al. (1984), broadened to mimic the width of the Balmer lines in the radiogalaxies and represented by their corresponding spectral type and luminosity class (e.g. A2I). The grid of solid lines traces the original locus of unbroadened stars. The big symbols at the edges of the grid represent the correspondence into spectral classes and luminosity classes of the frame defined by the grid.

Figure 9. CaT index measured for the radiogalaxies (RG), and compared with the values found in Seyfert 1 (Sy1), Seyfert 2 (Sy2), LINERs, starburst galaxies (SB), elliptical (E) and spiral and/or lenticular (S/S0) galaxies from the samples of Terlevich et al. (1990).
Functional outcome of fracture middle-third clavicle treated in tertiary institute
Introduction: The clavicle is the bone which links the thorax to the shoulder and assists movements at the shoulder joint. It is the first long bone to ossify in the body. Clavicle fractures are the most common fractures in the upper limb. The purpose of this study is to measure the functional outcome of displaced middle-third clavicular fractures treated by open reduction and internal fixation with plate osteosynthesis versus conservative management. Materials and Methods: This study was done in a Tertiary Institute, Ammapettai, between May 2017 and September 2018; ethical committee approval was obtained (IHEC No: 2017/312). Patients were allocated to two groups and were given treatment as per the group: 20 cases were treated conservatively and 20 with plate osteosynthesis. Patients were followed up at the 3rd, 6th, and 9th months. Result: The mean time to union in the conservative group was 10 weeks, with a Constant shoulder score of 90. In the 20 patients treated with plate osteosynthesis, the average time to union was 6 weeks, with a Constant shoulder score of about 95 and an excellent grade. Conclusion: Plate osteosynthesis of displaced midshaft clavicle fractures resulted in excellent functional outcomes and a good union rate.
Introduction
Clavicle fractures are common injuries in adults (2-5%). 1 Fractures of the middle third of the clavicle form 70-80% of the total, whereas lateral fractures contribute 15-30% and medial fractures, the least common, about 3%. Incidence peaks in the 3rd decade of life. 2 Non-operative treatment is no longer considered reliable for achieving good functional outcomes in displaced clavicle fractures. 3 In some studies the non-union rate reported for mid-clavicle fractures treated conservatively is 15%. 4 Midshaft fractures of the clavicle treated conservatively with axial shortening lead to non-union and malunion. 5 Other complications include neurological symptoms, restricted shoulder movement, and a protuberant callus which is cosmetically unfavourable for the patient. Patients with a higher activity level and rigorous daily routine work will not accept a treatment which gives prolonged recovery and restricted shoulder movements.
Early fixation of the clavicle gives better shoulder function and provides comfort to the patient. Successful surgical interventions for middle-third clavicle fractures include plate osteosynthesis and intramedullary nailing, such as "TENS" nailing.
Open reduction and internal fixation with plating provides rigid fixation and early functional recovery, which lowers the incidence of non-union and malunion. Surgical treatment of midshaft fractures results in fewer cases of non-union as compared to conservative treatment. 6 We have taken up this study of middle-third clavicle fractures to assess the functional outcomes in patients undergoing treatment with plate osteosynthesis or conservative management.
Objective
To evaluate the functional outcome of fractures of the middle-third clavicle treated with conservative management or plate osteosynthesis in patients treated in a tertiary institute.
Materials and Methodology
A total of 40 patients were studied in this project. Out of the 40 patients: 1. 20 patients were included in the conservative group (16 males and 4 females); 2. 20 patients were in the plate osteosynthesis group (15 males and 5 females).
In both groups, 16 patients had a right-side clavicle fracture and 4 patients had a left-side clavicle fracture. The mean age of patients treated conservatively was 33.10 years, and that of patients treated with plate osteosynthesis was 34.15 years.
Mode of Injury
In both groups, road traffic accident was the most common mode of injury (80%); fall on an out-stretched hand accounted for 20%. When a patient presented to us, we took two views of the clavicle on x-ray (AP view and Zanca view). Informed and written consent was obtained from all patients in both groups. Patients were followed up for a period of 9 months (at 2, 4 & 6 months) from the date of conservative/surgical intervention and evaluated clinically with the Constant and Murley 7 scoring system.
Patients who were not willing to undergo surgery were managed conservatively.
Statistical Analysis
The collected data were analysed with SPSS software. To describe the data, frequency and percentage analyses were used for categorical variables, and the mean & S.D. were used for continuous variables. To find significant differences between bivariate samples in independent groups (conservative & operative), the unpaired sample t-test was used. To find significance in categorical data, the Chi-square test was used. For both statistical tools, a probability value of ≤0.05 was considered significant.
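For illustration, the unpaired t-test comparison between the two groups can be sketched as follows; the per-patient values below are hypothetical (the study reports only group means, roughly 10.75 vs 6 weeks to union), and the statistic is computed by hand rather than via SPSS:

```python
import math
from statistics import mean, variance

def unpaired_t(a, b):
    """Student's two-sample t statistic with pooled variance,
    as used to compare the conservative and operative groups."""
    n1, n2 = len(a), len(b)
    pooled = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / n1 + 1 / n2))

# Hypothetical union times in weeks (NOT the study's raw data).
conservative = [10, 11, 12, 10, 11]
operative = [6, 7, 6, 5, 6]
t = unpaired_t(conservative, operative)  # well above the 5% critical value
```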
Conservative Treatment
There are various conservative treatment 10,11 options available, the commonest being the use of a sling or 'figure-of-eight' bandage. 12,13 1. In adults, an undisplaced fracture is treated with a triangular sling which supports the upper limb, with active exercises of the fingers, wrist and elbow (50 times, thrice a day). The sling is removed after 3 weeks and shoulder exercises are advised. 2. If the fracture fragments are displaced, the distal fragment is lifted upwards and pulled backwards, and a figure-of-eight bandage is applied with good padding of both axillae with cotton. 3. Often no subsequent therapy is suggested to the patient.
Sometimes, however, a patient will require stretching exercises to regain motion. 4. Periodic check-ups are important to look for pressure sores in the axillary folds caused by the figure-of-eight bandage. 5. The patient should follow a structured rehabilitation programme in order to have a satisfactory outcome. To protect the healing clavicle, it is important to avoid contact sports for a minimum of 4 to 5 months. 6. A midshaft clavicle fracture goes on to heal with any method of immobilization. The choice of immobilization, then, should reflect patient comfort and function issues rather than anticipated healing rates.
Operative Technique
Under general anaesthesia, the patient was positioned supine with a sandbag under the scapula. The shoulder was prepared and draped, and an incision made over the fractured clavicle. The fracture site was identified, reduction was done, and fixation achieved with a 3.5 mm pre-contoured plate. The plate was fixed over the superior surface of the bone, with the goal of achieving a minimum of three screws in the proximal and distal fragments in most cases, with care being taken to preserve soft-tissue attachments. The delto-trapezial fascia was closed with interrupted number-1 absorbable sutures as a distinct layer, followed by skin closure. 14 Instruments used for middle-third clavicle fixation: 1. 3.5 mm reconstruction plate, 1/3 tubular plate; 2. 2.7 mm drill bit; 3. 3.5 mm universal drill guide; 4. Hand drill/pneumatic drill; 5. 3.5 mm tap for cortical screw; 6. Depth gauge; 7. 3.5 mm cortical screws of varying sizes (12-22 mm); 8. Screwdriver; 9. General instruments such as retractors and a periosteal elevator.
Post Operative Protocol
Patients were started on IV antibiotics and analgesics. The wound was inspected on the 2nd post-operative day. On the 12th day after surgery the sutures were removed. The arm was kept in a broad arm sling for six weeks. Pendulum exercises were started within the first 24 hours after surgery. After suture removal, passive flexion and extension exercises were started. All patients were sent for physiotherapy. Patients were followed up clinically and radiologically at 3 weeks and at 3, 6 and 9 months after surgery. The functional outcome was assessed by the Constant and Murley 13 score.
Results
Among the 20 patients in the conservative group, 1 patient had non-union, for whom plate osteosynthesis with bone grafting was done. The average time to union was 10.75 weeks and the Constant shoulder score was 90, a good grade. In the 20 patients treated with plate osteosynthesis, the average time to union was 6 weeks and the Constant shoulder score was about 95, an excellent grade. All other patients had excellent results with good range of movements and excellent radiological union.
Case 1: Pre-op, post-op and 6-month follow-up images; good range of movements achieved.
Case 2: Pre-op, post-op and 6-month follow-up images show rigid fixation with excellent functional outcomes; good range of movements achieved.
Discussion
Our study shows a significant outcome for middle-third clavicle fractures treated operatively. Patients who were treated early for middle clavicular fractures by internal fixation had a better postoperative outcome, quicker pain relief, greater satisfaction, and comfort in daily activities. A study conducted on 868 patients in the U.S.A. with clavicle fractures found that 581 had midshaft diaphyseal fractures, and reported a higher non-union rate of 21% for displaced, comminuted midshaft fractures, which was statistically significant (p < 0.05).
Another study, done in Canada on fifty-two patients with displaced midshaft clavicular fractures, found that 8 patients had non-union and 16 patients had a poor outcome. It was concluded that displacement of the fracture fragments by more than 2 cm gave an unsatisfactory result.
A recent meta-analysis reported that the rate of non-union after plate fixation of displaced midshaft clavicular fractures was 2.2% (ten out of four hundred and sixty patients); compared with non-operative care (15%), the risk of non-union was reduced by 86%. This study also showed that plate fixation was a safe and reliable operative mode.
Union rates ranging from 94-100%, fewer surgical complications, and decreased chances of infection have been reported for operatively treated acute midshaft clavicular fractures. Plate fixation, with the use of prophylactic antibiotics, has proven to be a superior operative mode.
Neurovascular abnormalities were noted in 6% of the patients who were treated conservatively; these abnormalities resulted from callus formation and non-union. In our study there were no neurological abnormalities.
Patients who were treated conservatively had some disability on the affected side, with some loss of muscle strength. In our study the range of motion was good and the mean score was 90.
Treatment of displaced clavicular fractures by internal fixation is the best option, as it provides early pain relief and early return of shoulder function for daily activities and work.
Many methods have been described to treat midshaft clavicle fractures, such as intramedullary nailing, which has resulted in a number of complications: rotational instability, TENS nail migration, and fixation with screws or wires within the fragment, which requires immobilization and is not sufficient. In our study we chose rigid plate fixation, which gives good results in the treatment of acute clavicular fractures.
Operative treatment of acute middle-third clavicular fractures should therefore be considered for patients who choose to return early to their daily activities. Patients should be informed about infection and wound complications.
Limitations
In our study, proper follow-up was not possible up to 1 year owing to irregular attendance by the patients. Open fractures were not included in our study. Patients who presented with established non-union of middle-third clavicular fractures were also excluded.
Conclusion
Operative treatment resulted in early return to function and better anatomical stability as compared to conservative treatment.
Reduction in Fertilizer Inputs Has No Significant Effect on Yield or Performance of Early- and Late-season Indica Hybrid Rice
Increasing rice production by using genetically improved rice cultivars and fertilizer application has been the main objective of rice farming. The double rice-cropping system has been an important rice production system in the middle and lower reaches of the Yangtze River in China since the 1950s. It is of great significance to determine whether hybrid vigor can make a substantial contribution to early- and late-season rice production, and how the heterosis expression of hybrid rice functions under different levels of fertilizer application. The objective of this study was to evaluate the grain yield and associated plant traits of popular hybrid and inbred rice varieties under large-scale promotion, under conditions of customary (high) and combined (low) fertilization, in the early and late seasons of 2017-18 in Changsha County, Hunan Province, central southern China. We found that hybrid rice varieties displayed their respective advantages in the early- and late-rice seasons, but the advantages in their relative yield traits varied. The main advantages of early-season rice were effective panicle number per hill (EPN), 1000-grain weight (KGW), harvest index (HI), yield, and nitrogen use efficiency (NUE), whereas in late-season rice the major advantages were grain number per panicle (GNP), HI, yield, and NUE. The EPN was the main advantage of early-season hybrid rice with a short growth period, and the GNP was the main advantage of late-season hybrid rice with a long growth period. The main yield advantage of hybrid rice was stronger under combined (low) fertilization than under customary (high) fertilization. Hence, high yield can be achieved by selecting good hybrid rice varieties and by using combined fertilization (lower fertilizer use with higher efficiency). This work is of value to rice growers, extension specialists, and fertilizer producers, as it provides data that can be used to maximize yields with reduced fertilizer and pesticide inputs.
Introduction

Some companies have developed "medicinal fertilizers" for farmers. The term "medicinal fertilizer" usually refers to a mixture of pesticides and fertilizers in a given proportion, produced via a special process, which has the integrated function of both the pesticide and the fertilizer. The pesticide can be a herbicide, insecticide, or bactericide; the fertilizer can be either a simple or a multi-nutrient compound fertilizer. Consequently, the medicinal fertilizer can be used for killing or inhibiting insects, bacteria, and weeds, regulating crop growth, providing nutrition for crops, and improving the fertilizer utilization rate. Farmers can combine the frequently used fertilizer with the medicinal fertilizer to solve the issue of labor shortage. In addition to maintaining agricultural growth and increasing yield and harvest, combined fertilization has the advantages of convenience of use and reduced environmental pollution (Li, 2018). On the other hand, many farmers select hybrid rice to increase yield, but the performance of early- and late-season hybrid rice has not been clarified under different fertilization conditions. To address this gap, the present investigation was conducted to find the outcome of using different fertilizer regimens; we explored differences in the performance of hybrid and inbred rice lines under customary (high) and combined (low) fertilization treatments across the 2017 and 2018 rice growing seasons. Our study objectives were to (1) evaluate the performance of hybrid rice in double-cropping seasons and (2) identify the effects of different fertilization regimens on heterosis expression in hybrid versus inbred lines for the purpose of reducing fertilization inputs. We compared yield-related traits, such as biomass accumulation, main yield traits, grain yields, and nitrogen use efficiency (NUE), between the tested early- and late-season hybrid and inbred rice varieties under two fertilization treatments.
This work provides valuable data for growers, extension specialists, and fertilizer producers that can be used to maximize yields with reduced fertilizer and pesticide inputs.
Experimental Design and Operation
The field experiments were conducted at the Hunan Hybrid Rice Research Center experimental base, Changsha County, Hunan Province, China (28°12′N, 112°59′E) in 2017-18. The early-season hybrid rice varieties tested in this study were ZLY89 and LLY268, and the late-season hybrids were TY390 and CLY7; the corresponding inbred varieties were ZZ39 and XZX45 (early season) and HHZ and X2 (late season). A detailed description of each variety is given in Table 1. The experimental field covered an area of 0.2 ha. Customary fertilization (CUF: high fertilizer use) and combined fertilization (COF: low fertilizer use) treatments were used. Customary fertilization of early- and late-season rice in 2017 and 2018 included a basal fertilizer application before transplanting and topdressing about one week after transplanting. The fertilizer application amounts are listed in detail in Table 2. The combined fertilization of early- and late-season rice in 2017 and 2018 included a basal fertilizer application before transplanting and two topdressings with medicinal fertilizer, about one week after transplanting and during the booting and heading stages, as shown in Table 2. The total fertilizer amount (N-P2O5-K2O) used for customary fertilization of early-season rice was 350.84 and 399.82 kg ha⁻¹ in 2017 and 2018, respectively, and the total fertilizer amount (N-P2O5-K2O) used for combined fertilization of early-season rice was 221.67 and 305.34 kg ha⁻¹ in 2017 and 2018, respectively. The total fertilizer (N-P2O5-K2O) amount of customary fertilization used for late-season rice was 443.25 and 399.82 kg ha⁻¹ in 2017 and 2018, respectively, and the total fertilizer (N-P2O5-K2O) amount of combined fertilization used for late-season rice was 221.67 and 291.67 kg ha⁻¹ in 2017 and 2018, respectively (Table 2).
The medicinal fertilizer XSL (N 16%, bensulfuron 0.032%, butachlor 0.608%; China Patent No: ZL201410082830.3) was developed by Hunan Shenlong High Tech Co., Ltd, Changsha City, Hunan Province, China, for use as a topdressing fertilizer to control the growth of weeds at the rice tillering stage. The medicinal fertilizer SAK (N 20%, furosemide 0.1%) was developed by Hunan Shenlong High Tech Co., Ltd, Changsha City, Hunan Province, China, to control insects in the paddy field. The dates of sowing and transplanting and the specifications for transplanting, number of transplanted seedlings per hill, and maturity of each variety are provided in Table 3. Following local farmer field management practice, under customary fertilization conditions the paddy rice was sprayed with pesticides 2-3 times during the entire growth period, but under combined fertilization conditions the paddy rice was not sprayed with pesticides. Each treatment plot was approximately 30 m², and the treatments were replicated three times.
Sampling and measurements
Biomass accumulation was investigated at different stages. After the maturity stage, 20 hills of each variety were sampled for EPN and plant height (PH). Six hills of each variety were investigated for GNP, seed set rate (SSR), 1000-grain weight (KGW), and harvest index (HI). The panicles of every hill were hand-threshed, and the filled grains were separated from unfilled grains by winnowing. The GNP was calculated as the total number of grains divided by the EPN. The SSR was 100 × total filled grains / number of total grains per hill, and the KGW was 1000 × total filled grains' weight / number of filled grains per hill. The total aboveground biomass was calculated as the sum of the dry weights of leaves, stems, rachis, and filled, half-filled, and empty spikelets. The biomass of six hills of each variety was measured after plants were heated at 70 °C for 48 h to a constant weight. The HI (filled grains' weight / total aboveground biomass) was calculated separately (Miao et al., 2011). The yield was determined from more than 300 hills in each plot using a combine harvester, with grains adjusted to a moisture content of 12%. The NUE, defined as the yield produced per unit of N applied (Ju et al., 2016), was approximated using N partial factor productivity (PFPN, kg rice grain per kg N applied) because it integrates fertilizer input, inherent soil N supply capacity, and achieved yield, and as such is the broadest measure of NUE (Cassman et al., 1996; Xie et al., 2020). PFPN = grain yield with N application / N application rate. Standard heterosis (HCK) was calculated according to the following formula: HCK (%) = 100 × (F1 − CK) / CK, where F1 is the performance of a yield trait of the hybrid rice and CK is the performance of that trait in the inbred control variety of early- or late-season rice.
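The two indices above can be computed directly; a minimal sketch with illustrative numbers that are not taken from the paper's tables, assuming the standard heterosis formula HCK (%) = 100 × (F1 − CK)/CK:

```python
def pfp_n(grain_yield_kg_ha, n_applied_kg_ha):
    """N partial factor productivity: kg grain produced per kg N applied."""
    return grain_yield_kg_ha / n_applied_kg_ha

def standard_heterosis(f1, ck):
    """Standard heterosis over the inbred check, HCK (%) = 100*(F1-CK)/CK."""
    return 100.0 * (f1 - ck) / ck

# Illustrative values only (hypothetical 7500 kg/ha yield, 150 kg N/ha applied).
pfp = pfp_n(7500.0, 150.0)          # 50.0 kg grain per kg N
hck = standard_heterosis(7.5, 7.0)  # ~7.1% advantage over the check
```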
Data Analysis
The data for rice attributes, i.e., biomass accumulation per hill (BPH), PH, EPN, GNP, SSR, KGW, HI, yield, and NUE, were subjected to an analysis of variance using the SPSS 17.0 (IBM) software. Means of varieties were compared based on the least significant difference (LSD) test at the P ≤ 0.05 probability level under different fertilization treatments during different seasons. The figures were prepared using Microsoft Excel.
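For readers without SPSS, the one-way ANOVA F statistic underlying the comparison can be sketched in plain Python. This is an illustrative re-implementation, not the study's code; the LSD step additionally requires a critical t value (e.g., from a statistics library), which is omitted here:

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of samples
    (one list of replicate values per variety/treatment).
    Returns F = MS_between / MS_within."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

For example, `one_way_anova_f([[1, 2, 3], [4, 5, 6]])` returns 13.5; a large F relative to the critical F value for the given degrees of freedom indicates significant differences among variety means.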
BPH and PH are independent of fertilizer regimen and cultivar across seasons
Source capacity is typically expressed as units of biomass production, which is directly related to the photosynthetic capacity of plants (Zhang et al, 2009). The most common approach for increasing productivity in rice is to increase biomass production (Peng et al, 2009; Yuan, 2012). To evaluate differences in biomass production related to low- versus high-fertilizer treatments, we compared the total BPH of different cultivars of early- and late-season rice. It was found that early-season rice accumulated more biomass in its late growth stages, whereas late-season rice accumulated more biomass after full heading (Table 4 and Fig. 1). Compared with the inbred varieties, the HCK for BPH of early-season hybrid rice was largely negative. The difference in BPH for these hybrids was inconsistent between the two fertilization treatments. Specifically, the HCK for BPH of late-season hybrid rice at the booting and mature stages showed an increase over the inbred cultivars in 2017, but was lower than that of inbred cultivars in 2018 (Table 5 and S2, Fig. 2). Furthermore, the hybrid rice varieties did not exhibit consistent differences in BPH at different growth stages. Similarly, differences in BPH between fertilization treatments were also non-significant in 2017. However, among late-season rice, COF led to lower biomass accumulation than that of CUF during the full heading and full-ripening stages.
Plant height (PH) was relatively lower among early-season rice than late-season rice, averaging 70-80 cm. The PH of early-season rice was 83.06 and 84.89 cm in 2017 and 78.43 and 79.76 cm in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). The average HCK for PH of early-season hybrid rice varieties was 0.05% and 1.65% in 2017, and 3.12% and 2.63% in 2018 under the CUF and COF treatments, respectively, indicating better performance of the hybrid rice. Under the COF treatment, the average PH of early-season hybrid rice varieties was higher than that under the CUF treatment (Table 7 and S4, Fig. 4).
In contrast, the PH varied widely among late-season rice varieties. The PH of inbred rice variety X2 was the highest, reaching 110 cm, whereas that of other varieties averaged 90-100 cm (Table 6 and S3, Fig. 3). Compared with late-season inbred rice, the HCK for PH of late-season hybrid rice was -3.07% and -1.55% in 2017 and -6.58% and -4.94% in 2018 under CUF and COF treatments, respectively, indicating a decrease in heterosis (Table 7 and S4, Fig. 4). Under the COF treatment, the average PH among late-season hybrid rice varieties was higher on average than that under the CUF treatment, suggesting that higher fertilizer application did not significantly affect plant height as a function of biomass production.
2.2 Low fertilization increased EPN per hill in early-season hybrids and GNP in late-season hybrids

Tiller number is also an informative and quantitative agronomic trait in rice because it is positively correlated with panicle number per m² (Ao et al, 2010; Wu et al, 1998; Miller et al, 1991). EPN represents the tillering ability of varieties and contributes to other yield-related traits. The average EPN of early-season rice varieties was 13.22 and 13.99 in 2017, and 10.50 and 11.21 in 2018 under CUF and COF treatments, respectively (Tables 6 and S3, Fig. 3). Compared with early-season inbred varieties, the average HCK for EPN of early-season hybrid rice was 13.27% and 14.1% in 2017 and 9.07% and 7.45% in 2018 under CUF and COF treatments, respectively (Table 7 and S4, Fig. 4). Taken together, these data indicate that the early-season hybrids produced higher EPN than that of inbred lines; however, the higher fertilizer application did not affect EPN.
The average EPN of late-season rice varieties was 12.31 and 12.4 in 2017 and 15.32 and 16.34 in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). Compared with late-season inbred lines, the average HCK for EPN of late-season hybrid rice varieties was 1.86% and 0.08% in 2017 and 2.62% and -0.29% in 2018 under CUF and COF treatments, respectively (Tables 7 and S4, Fig. 4). These results showed that the EPN of early-season hybrids was higher than that of inbred lines, but lower in late-season hybrids; further, CUF generally led to greater EPN, although not consistently (e.g., COF improved EPN in 2017 early-season rice).
Several field studies have reported that higher grain yields among hybrid rice varieties are associated with a large sink size (spikelets) (Table 6 and S3, Fig. 3). Compared with early-season inbred varieties, the average HCK for GNP in the early-season hybrid was -19.89% and -5.28% in 2017 and -7.71% and -6.58% in 2018 under CUF and COF treatments, respectively (Table 7 and S4, Fig. 4). Given the primarily negative trends in GNP for hybrid varieties, these results indicated that the early-season hybrids did not exhibit strong heterosis through GNP and were not affected by lower fertilizer application.
The average GNP of late-season rice was 80.58 and 87.32 in 2017 and 113.18 and 113.34 in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). Compared with late-season inbred rice varieties, the average HCK for GNP of the late-season hybrids was 16.83% and 11.68% in 2017 and 2.31% and 6.28% in 2018, under CUF and COF treatments, respectively, indicating positive heterosis (Table 7 and S4, Fig. 4). Thus, the HCK for GNP in late-season hybrid varieties was higher under CUF applications than under COF, suggesting that this trait was positively affected by higher fertilizer application in the late-season hybrids.
Fertilizer regimen did not significantly affect either SSR or KGW for hybrid lines
Seed set rate (SSR) is also one of the main quantitative traits for assessing yield. We examined differences in SSR to determine the effects of reduced fertilizer application on yield. It was found that the SSR of early-season rice was 73.5% and 77.57% in 2017 and 91.64% and 91.02% in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). The SSR of early-season rice varieties was higher than 90% in 2018, with negligible differences between varieties but larger differences between years. Compared with early-season inbred varieties, the HCK for SSR of early-season hybrid rice varieties was -9.44% and -5.23% in 2017 and -1.07% and -0.2% in 2018 under the CUF and COF treatments, respectively (Table 7 and S4, Fig. 4). Although all hybrids showed negative heterosis, the early-season hybrid rice varieties had a higher SSR under the COF than under the CUF treatment.
In the late-season hybrids, the average SSR was 72.01% and 77.19% in 2017 and 87.41% and 83.89% in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). Compared with late-season inbred varieties, the HCK for SSR in late-season hybrid rice varieties was -6.9% and 0.51% in 2017 and -6.07% and -7.73% in 2018 under CUF and COF treatments, respectively (Table 7 and S4, Fig. 4). These results showed that SSR was not the main yield heterosis trait in either early-season or late-season hybrid rice, and notably, the heterosis of SSR was unaffected by fertilizer treatments.
Several previous studies have shown that increasing the spikelet size (grain weight) is a feasible approach for increasing rice yield (Huang et al, 2011, Zhang et al, 2009). To measure this trait, we examined 1000-grain weight (KGW) and compared the trait between varieties and across fertilizer treatments to determine if lower fertilization negatively impacted yield. The KGW of early-season rice was 23.36 and 23.63 g in 2017 and 25.74 and 25.68 g in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). Compared with early-season inbred varieties, the average KGW of early-season hybrid rice varieties was -1.02% and 2.02% higher in 2017 and 6.62% and 4.41% higher in 2018 under CUF and COF treatments, respectively (Table 7 and S4, Fig. 4). Hence, the early-season hybrid rice varieties exhibited an advantage in terms of KGW as compared to the early-season inbred rice varieties.
The KGW of late-season rice was 29.43 and 30.06 g in 2017 and 28.55 and 26.87 g in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). However, in comparison with the late-season inbred varieties, the HCK for KGW of late-season hybrid rice varieties was -3.41% and -6.96% in 2017 and -5.0% and -4.92% in 2018 under CUF and COF treatments, respectively, thus exhibiting poor heterosis for KGW, regardless of fertilizer treatment (Table 7 and S4, Fig. 4). These data showed that, in terms of KGW, differences in fertilizer treatment did not lead to signi cant improvement of yield of hybrid rice over that of inbred rice.
2.4 Decreasing fertilizer inputs did not affect HI, grain yield, or NUE for either early- or late-season hybrids

The degree of source-to-sink translocation is often assessed by measuring the harvest index (Sinclair, 1998), which is determined by the rate of transient photosynthesis during grain formation and the remobilization of stored reserves into the growing grain (Blum, 1993). It is generally accepted that the HI (harvest index) of modern high-yielding rice cultivars is approximately 0.5 (Zhong et al, 2006). Here, we found that the HI of early-season rice was 0.32-0.63, with HI values being 0.36 and 0.38 for CUF and COF treatments, respectively, in 2017, and 0.58 and 0.57 for CUF and COF treatments, respectively, in 2018 (Table 6 and S3, Fig. 3). Thus, the difference in HI between fertilization conditions was small, whereas it was greater among varieties and years. Specifically, compared with the early-season inbred varieties, the average HCK for HI in early-season hybrids was -8.96% and 3.38% in 2017 and 5.32% and 4.99% in 2018 under CUF and COF treatments, respectively (Table 7 and S4, Fig. 4).
In contrast, the HI for late-season rice was 0.43-0.57, with HI values being 0.47 and 0.49 for CUF and COF treatments, respectively, in 2017, and 0.52 and 0.50 for CUF and COF treatments, respectively, in 2018 (Table 6 and S3, Fig. 3). Similarly, the differences were small between the fertilization conditions, but significant between rice varieties. In the comparison of heterosis with late-season inbred varieties, the HCK for HI of late-season hybrid rice varieties was higher than that of inbred varieties by 4.55% and 7.21% for CUF and COF treatments, respectively, in 2017 and by 5.25% and 7.42% for the respective treatments in 2018. Thus, there was positive heterosis for this trait (Table 7 and S4, Fig. 4). In the case of HI, late-season hybrid rice performed better under the low-fertilizer COF treatment than under the CUF treatment.
The final metric for performance was yield, and we found that early hybrid varieties yielded between 5.61-7.69 t ha⁻¹ (tons per hectare). The average yield of early-season rice was 6.59 and 6.19 t ha⁻¹ in 2017 and 6.32 and 7.11 t ha⁻¹ in 2018 under CUF and COF treatments, respectively (Table 6 and S3, Fig. 3). The average HCK for yield among early-season hybrids was higher by 2.57% and 16.11% than that of inbred lines in 2017 and by 15.25% and 14.44% than that of inbred lines in 2018 under CUF and COF conditions, respectively (Table 7 and S4, Fig. 4). This indicated a better performance by early-season hybrids than that of the inbred varieties under COF conditions.
The yield for late-season rice ranged from 7.05 to 9.9 t ha⁻¹. The yield of late-season rice was 8.52 and 8.37 t ha⁻¹ in 2017 and 8.90 and 8.61 t ha⁻¹ in 2018 under CUF and COF conditions, respectively (Table 6 and S3, Fig. 3). The average HCK for yield of late-season hybrid varieties was 3.44% and 5.85% higher than that of inbred lines in 2017 and 9.24% and 15.45% higher than that of inbred lines in 2018 under CUF and COF treatments, respectively (Table 7 and S4, Fig. 4). Both the early- and late-season hybrid varieties had higher yields than that of the inbred lines, and yields of hybrids were more stable and consistently higher under the COF treatment than under the CUF treatment. The average PFP N of early-season rice was 37.82 and 42.24 kg kg⁻¹ in 2017 and 26.06 and 38.34 kg kg⁻¹ in 2018 under CUF and COF conditions, respectively (Table 5 and S3, Fig. 3). The average HCK for PFP N of early-season hybrids was 2.36% and 4.12% higher than that of the inbred lines in 2017 and 15.19% and 9.62% higher than that of inbred lines in 2018 under CUF and COF conditions, respectively (Table 6 and S4, Fig. 4).
The average PFP N of late-season rice was 51.85 and 60.49 kg kg⁻¹ in 2017 and 36.73 and 49.14 kg kg⁻¹ in 2018 under CUF and COF conditions, respectively (Table 5 and S3, Fig. 3). The average HCK for PFP N of late-season hybrids was 3.44% and 5.79% higher than that of the inbred lines in 2017 and 9.23% and 15.37% higher than that of inbred lines in 2018 under CUF and COF conditions, respectively (Table 6 and S4, Fig. 4). Both the early- and late-season hybrid varieties had higher PFP N than that of the inbred lines, and the PFP N values were generally higher under the COF treatment than under the CUF treatment. Most of the PFP N values of the early- and late-season hybrid varieties tracked their yields under each fertilization treatment.
Discussion
Source capacity is usually expressed as the amount of biomass production, which is achieved through the plant's photosynthetic capacity. Sink capacity is a function of the number of spikelets per unit land area and their potential size. The degree of source-to-sink translocation is often assessed by measuring the harvest index, which is determined by the transient photosynthesis during grain formation and the remobilization of stored reserves into the growing grain (Zhang et al, 2009; Mae et al, 2006; Sinclair, 1998; Blum, 1993). Rice yield is determined by the sink-source capacity, i.e., the degree of source-to-sink translocation; it is a function of total biomass and harvest index (HI) (Khush, 2013). Hybrid rice yielded approximately 7-19% more grain than inbred cultivars. The higher grain yields observed for hybrid rice cultivars were attributed to high grain weight and biomass accumulation, with a longer crop duration making full use of the heat-light resources in the growing season (Jiang et al, 2016; Bueno and Lafarge, 2009; Yang et al, 2007; Katsura et al, 2007). The attainable early-season rice yield under double rice-cropping systems is characterized by a relatively low grain yield of 5-6 t ha⁻¹ and superiority in sink size (sink capacity, such as spikelets per m²) and biomass production (Wu et al, 2013; Zhong et al, 2006). A recent investigation by Chen et al. (2019) showed that the highest yield of early-season hybrid rice was 7.60 t ha⁻¹ and 7.49 t ha⁻¹ in 2017 and 2018, respectively, with the yield advantage of early-season hybrids typically being less than 5.00% over that of inbred varieties; further, hybrids were shown to produce more panicles per plant but fewer grains per panicle (Chen et al, 2019). In this study, it was shown that the average yield of early-season rice was 6.39 t ha⁻¹ and 6.71 t ha⁻¹ in 2017 and 2018, respectively (Table S3).
Compared with inbred lines, the average yield of early-season hybrid rice increased by 9.34% and 12.46% in 2017 and 2018, respectively (Table S4). Early-season rice hybrids had advantages in terms of EPN, KGW, HI, yield, and NUE (Fig. 5a), with the EPN advantage being especially evident. The results of this study were consistent with those reported by Chen et al. (2019), but, in our study, the yield advantage of early-season hybrid rice over inbred rice was greater and not as limited.
A previous study showed that GNP and EPN were the main traits responsible for yield superiority in two-line super high-yielding hybrid rice varieties with long growth duration, with GNP and EPN showing super-parental advantages. The highest yield of late-season hybrid rice was 9.64 t ha⁻¹, which produced a 6-26% higher grain yield than that of the other cultivars because the higher grain yield was driven by improvements in sink-source capacity (biomass production, panicles and spikelets per m², and grain weight) (Huang et al, 2017b). In this study, it was shown that the average yield of late-season rice was 8.44 and 8.75 t ha⁻¹ in 2017 and 2018, respectively (Table S3). Compared with the inbred lines, the average yield of late-season hybrid rice increased by 4.65% and 12.35% in 2017 and 2018, respectively (Table S4). The late-season hybrid rice had obvious advantages in GNP, HI, yield, and NUE (Fig. 5b and d). The GNP was an obvious advantage for late-season hybrid rice, in contrast to early-season hybrid rice. This result is similar to that of Li et al. (2016).
Nitrogen (N) is the most important nutrient element in irrigated rice production, and current high yields of irrigated rice are associated with large applications of fertilizer (Cassman et al, 1998). To fulfill the yield potential of super hybrid rice, an N input of more than 350 kg N ha⁻¹ was needed (Wang and Peng, 2017). The yield of hybrid rice increased by 59-71% when the N application rate increased from 150 to 210 kg ha⁻¹. The results for grain yield, NUE, and apparent N balance (ANB) indicated that the 180 kg ha⁻¹ rate of N application was the most effective (Yousaf et al, 2016). The average yield of hybrid cultivars LYPJ and YLY1 was 22% and 16% higher than that of inbred cultivars HHZ and YXYZ, respectively; the higher grain yield with N fertilizer in hybrid rice was driven more by a higher yield without N fertilizer than by increases in grain yield with N fertilizer under moderate to high soil fertility conditions (Huang et al, 2017a).
The grain yield of super hybrid rice was higher than that of inbred rice by 3.33-7.41% at N0 (0 kg N ha⁻¹) and 5.94-19.87% at N90 (90 kg N ha⁻¹) (Huang et al, 2018). Another study showed that N2 (125-176 kg N ha⁻¹) had higher agronomic NUE, whereas the difference in grain yield between N1 (225 kg N ha⁻¹) and N2 (125-176 kg N ha⁻¹) was relatively small. In this study (Table S3 and S4, Fig. 5c and 5d), it was shown that the yield advantage of hybrid rice under COF was higher than that under CUF.
Farmers in China usually over-apply synthetic N fertilizer to maximize grain yield, resulting in a steep decline in NUE (Miao et al, 2011; Ju et al, 2009). A PFP N of 41.1 kg kg⁻¹ for irrigated rice was reported previously on a national scale in China (Xu L et al, 2018). PFP N values of 50 kg kg⁻¹ and above are generally considered to be achievable with good management (Xie et al, 2020; Dobermann and Fairhurst, 2000). In this study, the PFP N values of early-season rice were 37.82 and 42.24 kg kg⁻¹ in 2017 and 26.06 and 38.34 kg kg⁻¹ in 2018 under CUF and COF treatments, respectively, whereas the PFP N values of late-season rice were 51.85 and 60.49 kg kg⁻¹ in 2017 and 36.73 and 49.14 kg kg⁻¹ in 2018 under CUF and COF treatments, respectively. NUE under the COF treatment was much higher than that under the CUF treatment. This result provides an important clue about the interaction between yield advantage and NUE in hybrid rice, whose molecular mechanism could be investigated in the future. Previous studies have suggested that small farm sizes and smallholder management are key causes of low agricultural productivity worldwide (Ju et al, 2016). In China, the rice planting area per household averaged 0.33 ha, and 60% of farmers had <0.3 ha of planting area (Xie et al, 2020). Most farmers lacked knowledge of the N-fertilizer application required to obtain optimal grain production. Consequently, they supposed that applying more N, regardless of how much a crop needed, was an "insurance" strategy against low yields (Jiao et al, 2018; Zhang and Yang, 2016; Vitousek et al, 2009). Combined fertilization can reduce N fertilizer and total fertilizer application, still achieve high yield, and further improve NUE when used on hybrid rice, while being convenient for elderly farmers to use against weeds and insect pests.
Conclusion
Rice grain yield is determined by four factors: EPN, GNP, SSR, and KGW (Gravois and Helms, 1992); it is also greatly affected by climate and cultivation conditions. The EPN was the main advantage of early-season hybrid rice with a short growth period, whereas the GNP was the main advantage of late-season hybrid rice with a long growth period. The main yield characteristic advantage of hybrid rice was stronger under combined (low) fertilization than under customary (high) fertilization. In summary, high yield can be achieved by selecting excellent hybrid rice varieties and using combined fertilization (low fertilizer). Additionally, combined fertilizer can reduce the amount of fertilizer used, the number of pesticide sprayings, and labor costs, as well as production costs, bringing more economic returns to rice growers.

Figure 1. Biomass accumulation of early-season and late-season rice under different fertilization conditions. ESR: early-season rice, LSR: late-season rice, CUF: customary fertilization, COF: combined fertilization.
Supplementary Files
This is a list of supplementary files associated with this preprint. Click to download: Supplementarytables.docx
Single cell RNA-sequencing data generated from mouse adipose tissue during the development of obesity
In recent years, the number of obese people has increased rapidly around the world, and obesity has become a major public health problem endangering global health [1]. Obesity is caused by excessive calorie intake over a long period of time, and a high-fat diet (HFD) is one of the important predisposing factors [2], [3], [4]. Adipose tissue (AT) is an important immune and endocrine organ and plays an important role in the body [5]. Obesity leads to AT dysfunction, AT dilation, and cell hypertrophy. Dysfunctional fat cells are the main source of pro-inflammatory cytokines, which aggravate low-grade systemic inflammation and further promote the development of obesity-related diseases [6], [7], [8]. However, whether AT releases pro-inflammatory cytokines in the early stages of obesity development remains unknown. The AT microenvironment is composed of a variety of cells, including fat cells, immune cells, fibroblasts, and endothelial cells. The immune microenvironment (TIME) and its metabolic imbalance can lead to the secretion or dysregulation of related hormones, which causes inflammation in AT [9]. TIME is very important for maintaining AT homeostasis, which is crucial for the occurrence of obesity [10,11]. This dataset uses single-cell RNA sequencing (sNuc-Seq) to analyze the characteristics of TIME changes in the mouse epididymal adipose tissue during the development of obesity, and the changes of cell types and genes in the tissue.
Value of the Data
• These single cell RNA-sequencing profiles, obtained from the epididymal adipose tissue during the development of mouse obesity, can explain the cell types and gene changes in the epididymal adipose tissue during the development of mouse obesity.
• The exploration of these data will provide molecular insights into the changes in the adipose tissue microenvironment triggered by obesity, and clarify the stage of adipose tissue inflammation and the mechanism of adipose tissue neuron apoptosis, which will be useful for researchers studying the occurrence and development of adipose tissue inflammation caused by obesity.
• These data can be further analyzed to better understand the regulatory mechanism of "fibroblast-neutrophil-macrophage-neuron" crosstalk in adipose tissue and guide the treatment of obesity and complications.
Objective
The Objective of this study was to analyze the internal mechanism of adipose tissue changes during obesity formation by single cell sequencing, to provide theoretical support for the treatment of obesity and its complications, and to provide help for relevant researchers.
Data Description
This dataset contains data of mouse epididymal adipose tissue during the development of obesity, including data at three stages: before obesity (Ctrl group), during obesity development (Mid_Ob group), and after obesity formation (Ob group) (Fig. 1). Table 1 describes the data storage location. Table 2 shows the genome sequencing data of mouse epididymal adipose tissue. This analysis completed single-cell transcriptome sequencing of 3 samples, and the number of high-quality Cell Ranger cells in each sample was in the range of 5,021-14,621 before further quality control. After quality-control steps in which doublets, multiplets, and apoptotic cells were eliminated, the number of cells finally obtained was in the range of 4,442-11,672, the average UMI number per cell was in the range of 6,230-10,619, the average gene number per cell was in the range of 2,480-3,449, and the average mitochondrial UMI ratio per cell was in the range of 0.0030-0.0050. Table 3 shows the list of accession numbers of mouse epididymal adipose tissue samples in the GEO database.
Experimental Design, Materials and Methods
All animal experiments were approved by the Animal Ethics Committee of Shandong Physical Education University. The 5-week-old C57BL/6J male mice were purchased from Jiangsu Huachennuo Medical Technology Co., Ltd., and acclimatized for 1 week in SPF facilities. The mice were maintained in a temperature-controlled (25 °C) facility with a 12-h light/dark cycle. Mice in the control group were fed an ordinary diet, and mice in the high-fat diet group were fed a 60% high-fat diet. The mice were weighed at a fixed time each week. Normal mice (Ctrl group), mice during obesity development (Mid_Ob group), and mice after obesity formation (Ob group) were selected, with 8 mice in each group. The criterion for judging the success of Mid_Ob group modeling was that body weight exceeded the average body weight of the control group by 10%, and the criterion for judging the success of Ob modeling was that body weight exceeded the average body weight of the control group by 20% [12]. Serum total cholesterol (TC), triglyceride (TG), low-density lipoprotein (LDL-C), and high-density lipoprotein (HDL-C) of the C57BL/6J male mice were detected by the ELISA method every week. The experimental animals were deprived of water for 12 h before sampling. The mice were anesthetized by intraperitoneal injection of 3% pentobarbital sodium at 1 ml/kg body weight, and cervical dislocation was performed after orbital blood collection. The abdominal cavity of each mouse was opened, the liver was removed, and the epididymal fat was removed with tweezers. Adipose tissue was removed and blood stains were rinsed off with 0.9% saline. Eight pieces of subcutaneous adipose tissue and periepididymal adipose tissue (about 300 mg each) were selected from each group and put into a fixed bottle containing 4% paraformaldehyde for histological fixation. The remaining adipose tissue was cut and divided into labeled 2 ml cryopreservation tubes (Thermo, external-thread cryopreservation tube 375418, USA) and immediately frozen in liquid nitrogen. After sampling,
the samples were transferred to a −80 °C freezer for single-cell sequencing. We mixed the periepididymal adipose tissue of the 8 mice in each group into one tube for single-cell sequencing to reduce errors introduced during the experimental process.
The single-cell sequencing process was as follows: (1) Raw data quality assessment. (2) Quantitative quality control with Cell Ranger: Cell Ranger, the official software of 10x Genomics, was used for sample quality control, with the STAR [13] software integrated in it. Reads were aligned to the reference genome to obtain quality-control results such as the number of high-quality cells, the number of genes, and the genome alignment rate in the raw data, so that the quality of each sample could be evaluated. (3) Post-quantification quality control: Based on the preliminary quality control of Cell Ranger, further quality control of the experimental data was carried out, and multiplets, doublets, and empty droplets were eliminated before downstream analysis. The quality-control standards in this study were: cells with more than 200 retained genes, more than 1,000 UMIs, log10GenesPerUMI greater than 0.7, less than 5% mitochondrial UMIs, and less than 5% red blood cell genes were treated as high-quality cells, and doublet removal was then performed using the DoubletFinder software [14] before downstream analysis. (4) Standardized treatment of gene expression. (5) Cell heterogeneity analysis: dimensionality-reduction clustering, marker gene identification, cell type identification, cell subsets, and other downstream personalized analyses. (6) Gene expression analysis: differential gene analysis, differential gene enrichment analysis, and other downstream personalized analyses. Fig. 2 shows the main flow diagram of data acquisition. Adipose tissue was divided into 18 cell subsets by dimensionality-reduction cluster analysis; different colors in the diagram represent different subpopulations of cells (Fig. 3). Adipose tissue was identified as containing seven cell types, with different colors representing different cell types (Fig.
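The per-cell thresholds listed in step (3) can be expressed as a simple predicate. This is an illustrative re-implementation of the stated cut-offs (the actual pipeline used Cell Ranger and DoubletFinder), assuming log10GenesPerUMI = log10(genes) / log10(UMIs):

```python
import math

def is_high_quality_cell(n_genes, n_umi, mito_umi, rbc_gene_frac):
    """Apply the per-cell QC thresholds described in the text:
    >200 genes, >1,000 UMIs, log10GenesPerUMI > 0.7,
    <5% mitochondrial UMIs, and <5% red blood cell gene counts.
    Doublet removal (DoubletFinder) is a separate step not covered here."""
    if n_genes <= 200 or n_umi <= 1000:
        return False
    if math.log10(n_genes) / math.log10(n_umi) <= 0.7:
        return False
    if mito_umi / n_umi >= 0.05:
        return False
    return rbc_gene_frac < 0.05
```

For example, a cell with 2,500 genes, 8,000 UMIs, 30 mitochondrial UMIs, and 1% red blood cell counts passes, whereas a cell with only 150 genes is filtered out.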
4). From the figure, we can see that fibroblasts and macrophages account for a large proportion, so we speculate that they play a key role in the development of adipose tissue obesity. The dimensionality-reduction clustering diagram of GSM7502687 is shown in Figs. 3a and 4a. Table 4 is the statistical table of differentially expressed genes. The String protein-protein interaction network (p < 0.05, FC > 1.5) is shown in Fig. 6. The Gene Ontology classification is shown in Fig. 7. The KEGG pathway classification is shown in Fig. 8.
The horizontal coordinate is cell population, and the vertical coordinate is Marker gene.In the figure, red indicates high expression and blue indicates low expression ( Fig. 5 ).
Limitations
Not applicable.
Ethics Statement
The animals we selected were male C57BL/6J mice. All experiments were conducted in accordance with the National Institutes of Health Guidelines for the Care and Use of Experimental Animals (NIH Publication No. 8023, revised 1978). Approval for this study was provided by the Shandong Sport University Animal Ethics Committee (China).
Data Availability
Single cell sequencing reveals changes in the adipose tissue of the epididymis during the development of obesity in mice (Original data) (GEO).
The dimensionality reduction clustering diagram of GSM7502687 is shown in Figs. 3a and 4a, that of GSM7502688 in Figs. 3b and 4b, and that of GSM7502689 in Figs. 3c and 4c.
Fig. 2 .
Fig. 2. Schematic diagram of the main process of data acquisition.
Fig. 5 .
Fig. 5. Heat map of Top10 Marker gene expression in each cell subgroup.
Table 1. Data storage location.
Table 2. Single-nucleus transcriptome sequencing results of epididymal adipose tissue of the 3 groups.
Table 3. List of accession numbers of epididymal adipose tissue in mouse in the GEO database.
Table 4. Statistical table of differentially expressed genes. | 2024-02-06T16:48:27.746Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "b44cc48a5c1281ede8842595dd1d97d664c22782",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2024.110119",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9873ebf6e6dce49c45b9b5dfea0aa0a1c6a74fcf",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253174337 | pes2o/s2orc | v3-fos-license | Hydrological Investigation and Characterization of Sokouraba Watershed, Burkina Faso
The purpose of this study is to determine the hydrological characteristics and the morphometric parameters of the Sokouraba watershed (BV), where climatic hazards make the water resource increasingly insufficient. This m/Km respectively. The value of the corrected overall slope index of the watershed is 7.05 m/km, while the Gravelius compactness index is 1.42. This value of KG higher than 1 indicates that the Sokouraba watershed has an elongated shape. The specific gradient value of the watershed is 61.99. This value lies between 50 and 100 m; thus, according to the classification of reliefs, the relief of the Sokouraba catchment area can be qualified as moderate. The calculated values of the length of the main watercourse and the total length of the watercourses are respectively 15.41 km and 63.25 km. The calculated value of the drainage density of the catchment area is 0.21 km-1. The values of the attenuation coefficient and the ten-year rainfall calculated in this study are 0.78 and 100.062 mm respectively. The value of the decennial runoff coefficient thus calculated is 27.83%.
INTRODUCTION
In Africa, the economic development of most countries is dependent on agriculture and livestock. Modernization of the primary sector, which is highly dependent on rainfall, is necessary to boost development. This is why the mobilization of surface water is a major concern for developing countries and more particularly for Sahelian countries where rainfall is relatively low but also characterized by an unequal spatial and temporal distribution [1]. Indeed, in most of these Sahelian countries, the water problem is one of the greatest concerns that threaten economic and social development. The rainfall deficit due to insufficient and irregular rainfall intensifies the difficulties related to access to water. The control of the hydrological functioning of watersheds is one of the solutions that can allow coherent and adequate control and management of surface water resources. Most watershed studies were aimed at good control of water for its conservation and protection against the risks it can cause [2].
Burkina Faso, a landlocked country in the Sudano-Sahelian zone, is not immune to this situation of rainfall precariousness to the point of being placed in a situation of water stress. This rainfall deficit has led to a situation of severe water stress in the region in general and in Burkina Faso in particular, which has been affected by this continuous drought situation [3]. The country is further subject to climatic hazards that make water resources unavailable [4]. In addition, the population size has increased from 10,312,000 in 1996 to about 20,000,000 in 2019 [5,6,7]. One of the corollaries of this galloping demography is the increased demand for water. In this country, which is representative of the Sudano-Sahelian zone, since it is framed in latitude by the 400 mm and 1300 mm interannual isohyets, an in-depth study of the long-term rainfall series and its repercussions is essential [8].
In this country, climate specialists predict an increase in average temperatures of 0.8°C by 2025 and 1.7°C by 2050, and a decrease in rainfall of 3.4% by 2025 and 7.3% by 2050 [9]. The consequences of these climate changes include a marked decrease in water availability, a regression in biomass potential, and a drastic reduction and degradation of pastures. The work carried out by Mahé et al. [10] on the climatic and anthropogenic impact on the Nakambe runoff in Burkina Faso shows that the Nakambe basin at Wayen occupies an area of nearly 21,000 km2 in the Sahelian domain. Despite the decrease in rainfall since 1970, its peak flows and flow coefficients are increasing regularly. Floods are earlier (in August instead of September) and more intense. This increase is also observed for other neighboring Sahelian rivers. The maximum daily flows increase by almost 100%, but the number of days when the flow is higher than half the maximum flow varies little over the same period, reflecting a flood that is not very spread out over time. Thus, climate change, affecting most countries in the world in general and developing countries in particular, today requires a better understanding of the hydrological behavior of watersheds in order to implement effective measures to regulate surface runoff or, at least, to build resilience (Gbohoui et al. [11]). In West Africa, a change in climatic conditions since the 1970s has resulted in a decrease in average annual rainfall and an increase in temperature [12]. Okafor et al. [13] conducted research to study climate change and the spatio-temporal variations of extreme climate indices in the Dano watershed in Burkina Faso, using historical data for the period 1981-2010 and projections for the period 2020-2049.
According to the results of their study, the climate change signal in the future, based on the ensemble mean of the regional climate models, indicates a decrease in the mean annual rainfall by 25.2% and 25.6% for Representative Concentration Pathways 4.5 and 8.5 respectively. The work carried out by Traore et al. [14] to assess the evolution of irrigated areas with Landsat images in the Kou watershed highlighted that, for several years, the pressure on the water resources of the Kou has increased, partly due to the extension of irrigated agricultural perimeters. They further demonstrated that the irrigated area has increased by almost 70% in 20 years, with most of this expansion occurring over the past 10 years. The work carried out by Belemtougri et al. [15] to identify the environmental variables that best explain the geographic variations of the flow intermittency regime, focusing on intermittency duration, suggested that catchment permeability and catchment area are the most critical variables in determining flow intermittency classes in Burkina Faso, as the effect of precipitation can be overruled by those of permeability, catchment area, and Strahler order. The work carried out by Ouedraogo [16] in 2012 on the impact of climate change on agricultural income in Burkina Faso showed that agriculture is very sensitive to rainfall in Burkina Faso: a 1% increase in rainfall leads to a 14.7% increase in agricultural income, while a 1% increase in temperature leads to a 3.6% decrease in farm income. Sensitivity analyses showed that farmers would lose 93% of their income following a 5°C temperature increase. Thus, the Burkinabe government is attempting to alleviate the problem of water deficits by conducting a vast program of maintenance, rehabilitation, and construction of surface water reservoirs. It is in this context that the construction of several dams is planned, including that of Sokouraba.
Located in the province of Kénédougou, it is a hydro-agricultural dam that aims to increase agricultural and pastoral production capacities. The realization of this dam requires a hydrological study, and a characterization of the Sokouraba catchment area is essential. Remote sensing tools and geographic information systems are the most efficient means of determining the hydrological and morphometric characteristics of the watershed. The determination of the physiographic characteristics of a watershed is necessary to determine and analyze its hydrological behavior (precipitated water wave, stream flow, balance) [17,18]. The use of geographic information systems (GIS) in the study of watersheds nowadays allows the management of the water balance of a rainfall-receiving area and the evaluation of its contribution to runoff, the redistribution of water in the soil and towards the water table, and its consumption by vegetation [19]. Thus, the present study focuses on the hydrological study and characterization of the Sokouraba watershed in order to better understand the functioning of its hydrographic networks as well as the quantity of water likely to be available for the construction of a dam in this watershed. In this region, no sufficient study has been conducted in the past to understand the hydrological behavior of the watershed or its hydrological functioning. Sokouraba is a village in the Kangala Commune in the Hauts Bassins Region. This village has 4196 inhabitants. The economic activities of the population of this village, like those of the rural world in Burkina Faso, are based on agriculture and breeding, which are strongly dependent on the availability of water resources.
Study Area
Sokouraba is a village in the Kangala Commune, located in the Hauts Bassins Region, with 4196 inhabitants. This locality is accessible from Ouagadougou by taking national road number one (Ouaga-Bobo, 360 km long), then national road number 8 (Bobo-Orodara-Diéri-Mahon, 122 km long), then the RD 69 road from Mahon to Kangala over a distance of 9 km, and finally the track leading from Kangala to Sokouraba over a distance of about 15 km. It is also accessible from Diéri via the RR20 road for 25 km through Samogohiri. The travel distance from Ouagadougou is about 506 km, passing through Diéri. Moreover, Sokouraba is located about 40 km from the Malian border. According to the phytogeographical division of Guiko [20], Sokouraba belongs to the Southern Sudanian domain and receives an average of 1000 mm of water per year. The climate of this locality is of the Sudanian type.
It is characterized by the alternation of two seasons: the dry season, which is cool from November to February and then hot from March to April, and the rainy season, which begins in May and ends in October. The dry season is characterized by the harmattan, a hot and dry continental trade wind coming from the Saharan anticyclone; however, low rainfall occurs annually during its second half. The rainy season lasts on average 150 to 175 days, with an average annual rainfall of 900 to 1200 mm. The wettest month is August, while December and January are the least rainy months. The average temperatures in the municipality vary between 24 and 30°C, with a relatively small temperature range of 5°C. Air humidity is around 25% in the dry season and around 85% in the wet season. The average annual evaporation varies between 1500 and 1700 mm [21]. During the period from 2004 to 2013, the annual rainfall in the commune varied between 982 and 1357 mm. The year 2006 was the wettest (1357 mm) during this interval, while the minimum was recorded in 2008.
Methods
To carry out this study, a set of tools was used, among which we can mention the geographic information system (GIS) software ArcGIS, used for the characterization of the watershed and the production of the various maps; Google Earth Pro, used for the geolocation of the outlet; and the Hyfran Plus software, used for the frequency analysis of the rainfall data. In addition, data essential to the study, notably climatic data such as rainfall and evaporation from the rainfall station of Orodara and the synoptic station of Bobo Dioulasso respectively, were also processed.
Data from the Orodara rainfall station were selected for the frequency analysis of rainfall because of the station's geographical position relative to the watershed. Maximum daily and annual rainfall series over the period from 1960 to 2019 were used as the basis for this frequency analysis. For the analysis of a rainfall data series, the minimum recommended size is a sample of at least thirty (30) years; with a sample size of fifty-nine (59) years, this condition is met. For dam sizing, two models, namely the Normal or Gauss law and the double exponential or Gumbel law, are used to validate the results of the frequency analysis of rainfall. The objective of this frequency analysis is to determine the different rainfall quantiles, essential for the rest of our study, that correspond to given return periods. The fit is checked by establishing that at least 95% of the points are within the confidence interval. In addition, this analysis was used to determine the climate regime associated with the study area. All of these analyses are performed using the Hyfran Plus software, a tool adapted to the analysis of rainfall data.
The Normal or Gauss law for fitting annual mean rainfall is based on the following function (Equation 1) [22]:

F(x) = (1/sqrt(2*pi)) * integral from -infinity to u of exp(-t^2/2) dt, with u = (x - mu)/sigma (1)

Where F(x) is the distribution function, x is the variable, u is the reduced variable, mu is the mean, and sigma represents the standard deviation.
The double exponential or Gumbel law for the adjustment of maximum daily rainfall is based on Equation 2. Considering the highest rainfall X over any calendar period of the year, the distribution follows the distribution function below [23]:

F(x) = exp(-exp(-(x - x0)/alpha)) (2)

Where F(x) represents the distribution function, x0 represents the position parameter, and alpha corresponds to the scale parameter.
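For illustration, the Gumbel distribution function can be inverted to obtain the rainfall of a given return period T (the quantile at non-exceedance probability 1 - 1/T). This is a minimal sketch; the parameter values used in the example are hypothetical, not the fitted Orodara values.

```python
import math

def gumbel_cdf(x: float, x0: float, alpha: float) -> float:
    # F(x) = exp(-exp(-(x - x0)/alpha))  (Equation 2)
    return math.exp(-math.exp(-(x - x0) / alpha))

def gumbel_quantile(x0: float, alpha: float, t_years: float) -> float:
    # Invert F at p = 1 - 1/T to obtain the T-year rainfall:
    # x_T = x0 - alpha * ln(-ln(1 - 1/T))
    p = 1.0 - 1.0 / t_years
    return x0 - alpha * math.log(-math.log(p))
```

For example, with a hypothetical position x0 = 80 mm and scale alpha = 15 mm, the ten-year daily rainfall is x0 - alpha * ln(-ln 0.9), about 114 mm.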
The DTMs established from the digitized contour lines of the basin were used as the basis for determining the morphological characteristics of the watershed feeding the Sokouraba dam. The ArcGIS software was then used for data processing. The result of this processing made it possible to characterize the Sokouraba watershed through its surface, its shape, and its relief, and thus to identify the type of hydrographic network to which the basin belongs. The longitudinal slope of the basin was determined by tracing the longitudinal profile of the main watercourse and taking the ratio between its variation in altitude and its length (Equation 3):

Il = ΔH / Lc (3)

Where Il represents the longitudinal slope (m/km), ΔH represents the elevation change (m), and Lc corresponds to the length of the main river (km).
The geological map of Burkina Faso was used to identify the different soil types in the watershed. The ORSTOM experiments made it possible to define the permeability indices of the watershed. Thus, after identifying the nature of the soil of the Sokouraba watershed, it was classified according to the classification of Rodier and Auvrey.
The specific gradient of the watershed was determined by multiplying the corrected overall slope index by the square root of the watershed area. The Gravelius compactness index (KG), the equivalent rectangle length (Léq), the overall slope index (Ig), the corrected overall slope index, the gradient (D), and the drainage density (Dd) of the watershed were determined using the following relationships (Equations 4 to 8):

KG = 0.28 × P / √S (4)

Where KG represents the compactness index, P is the watershed perimeter (km), and S represents the watershed area (km²).
Léq = P/4 + √((P/4)² - S) (5)

Where Léq is the length of the equivalent rectangle expressed in km, P is the perimeter of the catchment area expressed in km, and S is the area of the catchment area given in km².
Ig = D / Léq (6)

Where Ig represents the global slope index expressed in m/km, D corresponds to the difference in altitude between the elevations having 5% and 95% of the basin area above them, given in m, and Léq represents the length of the equivalent rectangle expressed in km.
Where Igcor corresponds to the corrected global slope index expressed in m/km, Ig represents the global slope index expressed in m/km, It corresponds to the transverse slope given in m/km, and n represents a coefficient depending on the length of the equivalent rectangle. The decennial runoff coefficient can be determined by calculating the ratio of the runoff volume to the precipitation volume. Very difficult to estimate, its evaluation is based on relatively subjective criteria. Two methods were used for the determination of the coefficient Kr10, namely those of ORSTOM and of the CIEH. The hypsometric curve, which shows the distribution of the surface of the watershed according to altitude, was also produced within the framework of the present work.
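The shape and relief indices above can be sketched numerically. The Gravelius index and global slope index follow Equations 4 and 6 as written; the equivalent-rectangle length uses the closed form L = P/4 + √((P/4)² - S), which follows from requiring the rectangle to have the same perimeter and area as the basin. All input values are illustrative, not the Sokouraba measurements.

```python
import math

def gravelius_index(perimeter_km: float, area_km2: float) -> float:
    # K_G = 0.28 * P / sqrt(S)  (Equation 4); K_G > 1 means an elongated basin.
    return 0.28 * perimeter_km / math.sqrt(area_km2)

def equivalent_rectangle_length(perimeter_km: float, area_km2: float) -> float:
    # Same perimeter and area as the basin: L + l = P/2 and L * l = S,
    # hence L = P/4 + sqrt((P/4)**2 - S).
    half = perimeter_km / 4.0
    return half + math.sqrt(half * half - area_km2)

def global_slope_index(d_m: float, l_eq_km: float) -> float:
    # Ig = D / L_eq  (Equation 6), in m/km.
    return d_m / l_eq_km
```

For instance, a basin with P = 50 km and S = 100 km² has KG = 1.4 and an equivalent rectangle of 20 km by 5 km.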
Determining the design flood consists of estimating the flood flow that the structure must be able to discharge without damage. The design of any structure on a Sahelian river, be it a bridge or a dam, should be carried out with a minimum of knowledge of flood flows [20]. As the watershed is not gauged (no flow measurement device is installed), the empirical methods developed for small watersheds in West and Central Africa were used to determine the flood flows. The methods used are the deterministic method of Rodier [24] (1986), the linear regression method of Puech and Chabi (CIEH), and the exponential gradient method of Gresillon et al. (GRADEX). Thus, the rainfall series of the Orodara station made it possible to determine the ten-year daily rainfall P10 and the hundred-year daily rainfall P100 through statistical adjustment methods.
The peak coefficient is defined as the ratio of the maximum runoff to the average runoff. It is expressed by the following relation (Equation 9):

α10 = Qr10 / Qmr10 (9)

Where α10 corresponds to the peak coefficient, Qr10 expresses the maximum runoff given in m³/s, and Qmr10 represents the average runoff expressed in m³/s. Whatever the area of the watershed, the peak coefficient α10 is admitted to be close to 2.6.
The abatement coefficient, i.e. the reduction coefficient that allows passing, for a given frequency, from a point rainfall to an average rainfall calculated over a certain surface located in a homogeneous rainfall area, was also determined within the framework of the present study. It is used to determine the ten-year average rainfall Pm10. It is obtained using the following relationships (Equations 10 and 11):

A = 1 - (161 - 0.042 × Pan) × 10^(-3) × log10(S) (10)

Pm10 = A × P10 (11)

Where A represents the abatement coefficient, Pan expresses the average annual rainfall (mm), S corresponds to the surface area of the watershed (km²), Pm10 is the average ten-year rainfall (mm), and P10 represents the ten-year rainfall (mm).
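Equations 10 and 11 can be sketched as follows. Note that the paper does not reproduce the abatement expression itself; the log-based form used here is a commonly published variant of Vuillaume's formula and should be treated as an assumption.

```python
import math

def abatement_coefficient(p_an_mm: float, area_km2: float) -> float:
    # Assumed Vuillaume form: A = 1 - (161 - 0.042 * Pan) * 1e-3 * log10(S)
    return 1.0 - (161.0 - 0.042 * p_an_mm) * 1e-3 * math.log10(area_km2)

def mean_ten_year_rainfall(a: float, p10_mm: float) -> float:
    # Pm10 = A * P10  (Equation 11)
    return a * p10_mm
```

For instance, with Pan = 1100 mm and an illustrative area of 100 km², A is about 0.77; combining an abatement coefficient of 0.78 with a ten-year point rainfall of 129 mm gives Pm10 of about 100.6 mm.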
The base time Tb10 and the rise time Tm10 of water at the outlet of the dam are deduced from the charts of the Auvrey and Rodier method. The base time is the time between the beginning and the end of the fast runoff. In a dry tropical zone, the base time is expressed by the following relation (Equation 12), where Tb10 represents the base time and a and b are parameters depending on the global slope index, the permeability, and the climatic zone of the watershed.
The rise time, corresponding to the time between the beginning of the runoff and the maximum of the flood, was determined using the following formula (Equation 13):

Tm10 = Tb10 / 3 (13)

Where Tm10 is the rise time and Tb10 is the base time.
The determination of the ten-year flood was carried out using the ORSTOM and CIEH methods. The ORSTOM method is resolutely deterministic and was developed by Rodier [24] in 1986. The proposed approach is that of a global rainfall-runoff model based on the unit hydrograph theory [21]. For this model, the peak flow corresponding to the surface runoff of the decennial flood is defined by the following relations (Equations 14 and 15):

Qr10 = A × P10 × Kr10 × α10 × S / Tb10 (14)

Q10 = m × Qr10 (15)

Where Qr10 represents the peak flow of the decennial surface runoff, Q10 represents the decennial flood flow, m corresponds to the majoration coefficient taken equal to 1.05, A expresses the abatement coefficient of Vuillaume, Kr10 corresponds to the ten-year runoff coefficient, P10 corresponds to the ten-year maximum daily rainfall, α10 represents the peak coefficient taken equal to 2.6, S represents the surface area of the catchment area, and Tb10 the base time of the ten-year flood.
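Once the chart-derived parameters are known, the ORSTOM estimate reduces to a unit conversion: rainfall depth in metres times the runoff, abatement, and peak coefficients and the basin area in square metres, divided by the base time in seconds. The sketch below assumes that formulation; any area value passed in is illustrative, since the basin area is read from the paper's tables.

```python
def orstom_q10(a: float, kr10: float, p10_mm: float, alpha10: float,
               area_km2: float, tb10_min: float, m: float = 1.05) -> float:
    # Qr10 = A * P10 * Kr10 * alpha10 * S / Tb10 in SI units,
    # then Q10 = m * Qr10 (assumed Equations 14-15), returned in m3/s.
    p10_m = p10_mm / 1000.0    # rainfall depth, mm -> m
    area_m2 = area_km2 * 1e6   # basin area, km2 -> m2
    tb10_s = tb10_min * 60.0   # base time, min -> s
    qr10 = a * p10_m * kr10 * alpha10 * area_m2 / tb10_s
    return m * qr10
```

The majoration coefficient m = 1.05 simply scales the surface-runoff peak to the total ten-year peak flow.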
The CIEH method was proposed by Puech and Chabi-Gonni and was based on 162 watersheds coming essentially from the collection of experimental basins. The expression of the peak flow of the decennial flood is based on a multiple regression scheme and is presented in the following form (Equation 16) [24,25]:

Q10 = a × S^s × Pan^p × Ig^i × Kr10^k × Dd^d (16)
Where a, s, p, i, k, and d correspond to the coefficients to be determined, S represents the surface area of the catchment area, Ig represents the global slope index, Pan corresponds to the average annual rainfall, Kr10 represents the ten-year runoff coefficient, and Dd corresponds to the drainage density. The list of parameters to be included in the model is not exhaustive. This method is statistical, with several variants depending on whether the basin belongs to a given climatic division, geographical position, or division for a country or a group [26,27].
The estimation of the peak flows of floods with return periods greater than 10 years was made according to the GRADEX theory. The principle on which this method is based consists in assuming that, beyond a certain return period, any additional rain that falls will run off. The 10-year return period, corresponding to the precipitation that generated the ten-year flood, is used as a threshold. Thus, any extreme precipitation beyond the 10-year return period generates an additional flow equal to the additional rainfall compared to the 10-year event. The additional flow is expressed by a multiplier coefficient C (Equation 17) greater than 1. This coefficient was proposed in 1977 and is based on the GRADEX method of Guillot and Duband, after a critical study of the various proposed coefficients.
Where C is the surcharge coefficient, P10 is the ten-year maximum daily precipitation, P100 is the hundred-year maximum daily precipitation, Tb10 is the base time, and Kr10 is the ten-year runoff coefficient. This method makes it possible to pass from the ten-year flow to the project flow through a linear relation. Thus, the project flow corresponding to the 100-year flood is given by the following equation (Equation 18):

Q100 = C × Q10 (18)

Where Q100 is the project discharge, C is the surcharge coefficient, and Q10 is the 10-year flood discharge.
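The linear passage of Equation 18 from the ten-year flood to the project flood can be sketched directly; as a sanity check, the flows reported later in this study (Q10 = 128.67 m³/s and Q100 = 173.16 m³/s) imply a surcharge coefficient C of about 1.35.

```python
def gradex_design_flood(q10: float, c: float) -> float:
    # Q100 = C * Q10  (Equation 18); C must exceed 1.
    if c <= 1.0:
        raise ValueError("the surcharge coefficient C must be greater than 1")
    return c * q10

def implied_surcharge_coefficient(q10: float, q100: float) -> float:
    # Back-calculate C from the two flood estimates: C = Q100 / Q10.
    return q100 / q10
```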
Rainfall Data
The treatment of the annual mean and daily maximum rainfall data from the Orodara station by the Gauss and Gumbel laws respectively is summarized in Table 1. The analysis carried out on the annual rainfall trends revealed that 91.50% of the rainfall is observed between May and October and that August, July, and September are the rainiest months, with 25.70%, 19.80%, and 17.50% respectively (Fig. 1).
The results of the quantiles from the frequency analysis that will be used in the rest of our study are shown in Table 2. The Sokouraba watershed with its hydrographic network is represented in Fig. 2. The average annual rainfall recorded is about 1100 mm/year. The values of the wet ten-year and wet five-year rainfall are respectively 1380 and 1280 mm. The dry five-year and dry ten-year rainfall values are 917 mm and 821 mm respectively (Table 2). The longitudinal profile of the main watercourse is presented in Fig. 3; the longitudinal slope is deduced from the ratio of the difference in altitude, which can be read in the same figure, to the length of the watercourse. The calculated value of the average cross slope index of Sokouraba is 14.75 m/km. Since this value differs from the longitudinal slope index by more than 20%, it is necessary to correct the global slope index [28]. The value of the corrected overall slope index of the watershed is 7.05 m/km. The value of the Gravelius compactness index of the watershed calculated in this way is 1.42. This value of KG higher than 1 indicates that the Sokouraba watershed has an elongated shape. The drawing of the hypsometric curve (Fig. 4) allows the determination of the altitudes at 5% and 95% of the cumulative surface percentages, which are respectively 562 m and 505 m. These curves are a useful tool for comparing several basins or the various sections of a single basin. In addition, they can be used to determine the average rainfall over a catchment area and provide information on the hydrological and hydraulic behavior of the catchment and its drainage system. The drainage density value is determined from the total length of the watercourses of the basin and its surface.
Morphometric Characteristics of the Watershed
The values of the attenuation coefficient and the ten-year rainfall calculated in this study are 0.78 and 100.062 mm respectively. This abatement coefficient corresponds to the reduction coefficient which allows passing, for a given frequency, from a point rainfall height to an average height calculated over a certain surface located in a homogeneous rainfall area. The values of the base time and rise time determined are 915.33 minutes and 305.11 minutes respectively. The value of the ten-year runoff coefficient was determined using the CIEH method (Table 4). This method is a function of the climatic zone and substrate and uses regression results obtained on the basis of the geological substrate and annual precipitation. The soil map of the Sokouraba watershed, shown in Fig. 5, indicates that the geology of Sokouraba is heterogeneous and consists of 76.68% clay and 23.32% sandstone. In our case, the relations K4 for clays and K2 for sandstones can be used. Based on these relations, the value of Kr10 determined by the CIEH method is 33.21% (Table 5). The value of the ten-year runoff coefficient was determined again using the ORSTOM method.
On the basis of the general-form analytical equations presented in Table 6, the runoff coefficients Kr were determined for P10 = 70 mm and P10 = 100 mm (Table 6). In a dry tropical regime, for watersheds whose surface exceeds 10 km² with a permeability class P3 (RI) and a corrected global slope index of 7.05 (between 7 and 10), the parameters a, b, and c of Kr70 and Kr100 necessary for determining the runoff coefficient Kr10 by the ORSTOM method are presented in tables x and x. For a ten-year rainfall P10 = 129 mm, higher than 100 mm, the value of Kr10 is obtained by extrapolation between the values of Kr for P10 = 70 mm and P10 = 100 mm. The value of the decennial runoff coefficient thus calculated is 27.83% (Table 7).
The value of the decennial flow was determined initially using the CIEH method (Table 8). For the case of Burkina Faso, eight (08) regression equations are presented in Table 8; they allow approaching the decennial flood according to the most representative parameters, which are the surface of the basin S, the average annual rainfall Pan, the corrected global slope index Igcor, the decennial runoff coefficient Kr10, and the drainage density Dd [29]. Expression No. 3 of Table 8, judged the most representative of the parameters of the Sokouraba catchment area, was used for the estimation of this decennial flow. The value thus calculated is 70.41 m³/s. The ten-year peak flow Q10 is deduced by multiplying the peak flow corresponding to surface runoff by the majoration coefficient m. This coefficient, which is a function of the infiltrability class of the basin and its climatic zone, is taken equal to 1.05 in the present study. The peak flow corresponding to the surface runoff of the decennial flood is defined by the parameters presented in Table 9.
The value of this peak flow calculated in this way is 122.54 m³/s, and the ten-year peak flow Q10 deduced from it is estimated at 128.67 m³/s. The value of the project flow corresponding to the 100-year flood determined in this study was estimated at 173.16 m³/s.
"year": 2022,
"sha1": "c0a3587e86e106141b42e1f9cb887d8adc0f142f",
"oa_license": null,
"oa_url": "https://journalcjast.com/index.php/CJAST/article/download/3973/7945",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8e4e75ad142b29ac967a4e3ab06d89577e3f7378",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
271413073 | pes2o/s2orc | v3-fos-license | Health education as a strategy for encouraging children's vaccination: rapid review
Introduction: Although vaccines have significantly improved the lives of the world's population since their implementation, there has been recent stagnation and, in some cases, even a reversal of previously achieved gains. Aim: To summarize the evidence on how to use health education to encourage the vaccination of children. Outlining: Rapid review, guided by the question "How to use health education to encourage the vaccination of children?", carried out in 2023 using two databases and an electronic library. The searches yielded 2,666 documents, of which 12 articles were selected for data extraction, summarization, and discussion. Results: The community and the home were the main places described for interventions. Fathers, mothers, and guardians/caregivers of children were the audience most mentioned for interventions. Educational instruments, such as booklets, leaflets
INTRODUCTION
In this context, it is noteworthy that efforts to distribute vaccines have been ongoing since the eradication of smallpox, with the launch of the Expanded Immunization Program (PAI) in 1974 by the World Health Organization (WHO) to ensure that all children, in all countries, have access to vaccines [3]. Several countries worldwide have national immunization programs, such as Brazil, which has had the National Immunization Program (PNI) since 1973. This program is an efficient public policy that positively impacts the Brazilian population, reducing morbidity and mortality and adapting to changes in the political, epidemiological, and social fields, being guided by the doctrinal principles of the Unified Health System (SUS in Portuguese): universality, comprehensiveness, and equity in health care [4]. However, despite improving the lives of the world's population, vaccines have recently faced stagnation and, in some cases, a reversal of previous gains, serving as a warning sign for programs that aim to provide universal access to immunization now and in the future. Notably, there was a significant reduction in vaccination coverage of vaccine-preventable diseases before and during the COVID-19 pandemic [8]. The literature attributes the return of already eradicated diseases and the difficulty in preventing new diseases, such as COVID-19, to several factors [9]. Among these factors are the collapse of national immunization programs due to conflicts, wars, and social tensions; the migration of unvaccinated people; inadequate vaccination in hard-to-reach groups and minority populations; as well as the anti-vaccination debate, often influenced by fake news shared through social networks [10]. It is noteworthy that the emergence of the anti-vaccination movement, triggered by Andrew Wakefield around 1998 after the publication of an article that alleged an association between autism and vaccines, is of extreme global relevance.
Vaccine hesitancy is the delay in accepting, or the refusal of, vaccination despite the availability of vaccination services.13 In this context, health education emerges as a strategy, defined by the WHO as any combination of learning experiences designed to help individuals and communities improve their health through increased knowledge, influence on motivation, and improved health literacy, promoting the development of knowledge and skills that enable action on the determinants of health.14 Therefore, this rapid review is justified by the increasing difficulty of vaccination, which makes health education a feasible and inexpensive strategy to combat vaccine hesitancy and, consequently, to encourage vaccination in children. Knowing that governments and public health authorities must be proactive to mitigate potential losses in vaccine acceptance, the objective was to summarize the evidence on how to use health education to encourage the vaccination of children.
Study Design
The rapid review is a highly efficient method primarily used to inform public policy. Despite its speed, it maintains a high level of methodological rigor, ensuring quick and effective evidence synthesis.15 From August to September 2023, we conducted the bibliographic survey, data collection, and analysis with precision and thoroughness.
Research Question
The acronym Population, Concept, and Context (PCC) was used to formulate the guiding question.16 The following were adopted: P: children; C: health education; C: incentive to vaccination, generating the question: "How to use health education to encourage the vaccination of children?" After creating the question, we evaluated it using the acronym FINER (Feasible, Interesting, New, Ethical, and Relevant).17 The question appeared feasible, as a rapid review is a low-cost alternative that did not require financing; interesting, motivating the team to seek answers and inspiring managers and policymakers to act on those answers; new, gaining prominence in recent years and capable of expanding current findings; ethical, meeting current ethical principles and requiring the team to reference the articles correctly; and relevant, as the results could summarize evidence to improve vaccination levels in children and support public policies.
Eligibility Criteria
We adopted the following inclusion criteria: original (primary) articles, without language or geographic restrictions, within the review's scope, and answering the research question, using the last five years (2018-2023) as a timeframe. We excluded articles without an intervention, review articles, letters to the editor, book chapters, duplicate articles, theses, dissertations, and those that did not answer the guiding question.
Data Collection
For the bibliographic survey, we consulted two databases and an electronic library; the search strategies are presented in Chart 1. We exported all identified studies to EndNote® Web software to identify and remove duplicates, then transferred them to the Rayyan web application.18 Two reviewers evaluated the studies' eligibility, with a third reviewer intervening in cases of disagreement. The studies were selected independently, and in a masked manner, by three reviewers, following the steps of the PRISMA 2020 statement.19 We first searched the databases, applied filters according to the inclusion criteria, and removed duplicate records.
In the screening stage, we selected records by reading their titles and abstracts. We then read the full text of the selected articles, which led to the final selection of included studies.
RESULTS
Initially, the researchers identified 2,666 documents, which were reduced to 526 articles using automation tools and the removal of duplicates.
During the screening process, they read the titles and abstracts of the remaining articles, selecting 32 for full reading. Ultimately, 12 articles met the inclusion criteria and were included in this rapid review. Figure 1 shows the flowchart of this process.
DISCUSSION
The present study focuses on vaccine education, a subset of health education aimed at neutralizing the growing global hesitancy towards vaccines and encouraging the development of systems that increase public engagement with vaccination.
Given the prevalence of diseases, epidemics, and pandemics, studies in this field are essential for enabling professionals to quickly advance in their practical roles through vaccination education, developing vaccine policies, and promoting patient immunization.33 The findings underscore the importance of promoting health education. Digital media has provided the public with unrestricted access to health information, presenting challenges for professionals, governments, and health organizations to manage this vast information and ensure its quality and reliability to prevent harm. In this dynamic landscape, health education remains a cornerstone of public health, continually evolving with new concepts and strategies. Its content, methods, and communication channels are designed to inspire behavior modification and develop enduring, transferable skills.34 It is crucial to identify the intervention sites for health education strategies to be effective. The studies described the following locations for health education: primary health care,24,27 community/home,21,24,28 schools,22,29 general hospitals,26,30 obstetric hospitals,32 pediatric hospitals,31 vaccination posts,23 and neonatal intensive care units.25 … between Basic Health Units and public schools.35 Thus, it is clear that strengthening this program as a public policy is necessary. In addition to covering most identified intervention locations, it includes the target populations, as described below.36 The intervention's target populations, a key focus of this research, are notably diverse. They include children,22 health staff,24 pregnant women,30,32 community leaders,28 and parents/guardians/caregivers of children,21,23-27,29,31 all of whom are crucial in the context of improving vaccination rates.
The results of addressing different audiences during interventions align with the literature, which suggests that high and equitable vaccine adherence can only be achieved through research and engagement with target groups. Therefore, before interventions, it is essential to consider social and cultural support, norms, and identity, including the various religious, educational, or philosophical views that may influence attitudes towards vaccination, as well as social determinants such as socioeconomic status, years of schooling, and ethnicity.37 The instruments used in these interventions varied and included educational applications,23 vaccination calendars,30 posters,23 educational booklets, leaflets, and health manuals,23,26,30-32 games,22 social networks,25 slides,26,29 photos,26 and videos.21,22,26,32 Health professionals, as the primary advocates for vaccination, play a pivotal role in promoting trust, validating parents' concerns for their children's well-being, avoiding coercive language, and valuing clear and positive communication. Negative or strained communication between providers and patients can reduce patient confidence and negatively impact health outcomes over time.
Limitations
The main limitation of this study lies in the lack of details in some studies regarding the method of conducting the intervention, which makes replication difficult and demonstrates the need for further research on the topic.These new studies should be conducted with high methodological rigor, enabling their execution in different contexts and contributing to the scientific literature and the creation of effective strategies for the use of health education.
Contributions to Clinical Practice
Data were extracted using the following variables: authorship, year of publication, country of study, study title, type of study, place of study, participants, and main results. Both reviewers performed data extraction, compared the information collected, and synthesized it for inclusion in the review. After extraction, we organized the data into tables. To summarize the findings, we used the data reduction method, critical reading, and classification of results into conceptual categories for discussion.20

Rev Pre Infec e Saúde. 2024;10:5301. periodicos.ufpi.br

Chart 1. Strategies used to search for articles in the databases.
…[MeSH Terms] OR Children[All Fields]) AND ("Health Education"[MeSH Terms] OR "Education, Health"[All Fields] OR "Community Health Education"[All Fields] OR "Education, Community Health"[All Fields] OR "Health Education, Community"[All Fields]) AND (Vaccination[MeSH Terms] OR Vaccinations[All Fields] OR "Immunization, Active"[All Fields] OR "Active Immunization"[All Fields] OR "Active Immunizations"[All Fields] OR "Immunizations, Active"[All Fields])
WoS TM (ALL=(Child*)) AND (ALL=("Health Education") OR ALL=("Education, Health") OR ALL=("Community Health Education") OR ALL=("Education, Community Health") OR ALL=("Health Education, Community")) AND (ALL=(Vaccination*) OR ALL=("Immunization,
Figure 1. Flowchart of the process of identifying, screening, and including articles.
Internet users' utopian/dystopian imaginaries of society in the digital age: Theorizing critical digital literacy and civic engagement
This article proposes a theoretical framework for how critical digital literacy, conceptualized as incorporating Internet users’ utopian/dystopian imaginaries of society in the digital age, facilitates civic engagement. To do so, after reviewing media literacy research, it draws on utopian studies and political theory to frame utopian thinking as relying dialectically on utopianism and dystopianism. Conceptualizing critical digital literacy as incorporating utopianism/dystopianism prescribes that constructing and deploying an understanding of the Internet’s civic potentials and limitations is crucial to pursuing civic opportunities. The framework proposed, which has implications for media literacy research and practice, allows us to (1) disentangle users’ imaginaries of civic life from their imaginaries of the Internet, (2) resist the collapse of critical digital literacy into civic engagement that is understood as inherently progressive, and (3) problematize polarizing conclusions about users’ interpretations of the Internet as either crucial or detrimental to their online engagement.
Introduction
Media literacy, understood as the ability to access, evaluate and produce media content, is crucial to a well-informed citizenry's participation in society. Digital literacy, a variant of media literacy, consists of functional and critical skills and knowledge about the Internet. Two major limitations, however, affect research on whether and how its critical dimension, in particular, facilitates participation in civic life, understood as both community and political life. First, media literacy research has focussed predominantly on users' ability to evaluate online content, with little attention to their understanding of the digital environment, from how Internet corporations operate to the Internet's potentials and limitations for civic life. Second, users' critical reflections have often been approached as conducive to progressive action, thus leaving little room for civic engagement underpinned by different ideologies.
To overcome these limitations and facilitate richer analysis of whether and how critical digital literacy contributes to civic engagement, this article draws on utopian studies and political theory to offer a novel conceptualization of critical digital literacy. Such a conceptualization is grounded in notions of utopian thinking and social imaginaries. While the latter consist of expectations of society that are often ideologically driven (Thompson, 1982), utopian thinking, understood as relying dialectically on both utopianism and dystopianism, represents a powerful force for social change. We live in an age when the social is increasingly intertwined with the digital, which is why (re)imagining and participating in society require an understanding of the digital environment. This article therefore proposes a framework for how critical digital literacy, conceptualized as incorporating utopian/dystopian imaginaries of society in the digital age, facilitates civic engagement.
Such a framework enables us to disentangle the ways in which users construct and deploy, in line with different ideologies, their imaginaries of the Internet from their imaginaries of civic life. Applying utopianism/dystopianism to critical digital literacy prescribes that the latter requires an understanding of the Internet's potentials and limitations for civic life. It is argued here that constructing such an understanding does not necessarily lead to civic engagement and can intersect with different ideologies. At the same time, this article theorizes that deploying, and not just constructing, such an understanding is crucial to pursuing civic opportunities. In doing so, it problematizes media literacy research that has polarized users' positive or negative interpretations of the Internet as, respectively, crucial or detrimental to their online engagement.
The first section of this article discusses the role of the Internet in civic life. A section follows on how, and with what limitations, different traditions of media literacy research have explored digital literacy and how its critical dimension intersects with civic engagement. The article then draws on utopian studies and political theory to frame utopian thinking as relying dialectically on both utopianism and dystopianism in ways that can underpin civic engagement in line with different ideologies. After examining the intersection of utopian studies and media studies, a two-stage framework is proposed for how critical digital literacy, based on the construction (stage 1) and deployment (stage 2) of utopian/dystopian imaginaries of society in the digital age, facilitates civic engagement. Finally, the implications of the framework for media literacy research and practice are discussed.
The Internet and civic engagement
For the past few decades, Western liberal democracy -a system of representative institutions operating under principles of individual liberty and equality -has suffered from a decline in citizens' participation in electoral politics (Coleman, 2017). While this decline is exacerbated by distrust in institutions and traditional media, the Internet is often praised for facilitating civic engagement, understood more comprehensively as how citizens take part in community and political life, which can be either institutional or noninstitutional (i.e. unmediated by institutions) (Dahlgren, 2003). Examples of civic engagement, which might be meaningful to citizens and to their subjectivity but might not necessarily affect decision-making, include community involvement, volunteering, contacting politicians, sharing or commenting on political content on social media, signing a petition, participating in a demonstration and using alternative media (Dahlgren, 2003; Smith, 2013).
Understanding the Internet requires an understanding of its technical features, online content and Internet usage, as well as of the ownership, governance and business models of corporations such as Google and Facebook (Van Dijck, 2013: 28). Within civic life, the Internet is praised for its potential to facilitate decentralization of power, public debate, and interaction between citizens and politicians (Enjolras et al., 2013; Lee and Shin, 2014). It contributes to a deliberative democracy based on citizens' public deliberation (Blumler and Coleman, 2010). Furthermore, it is beneficial for community building, networked activism and better-organized protest (Garrett, 2006). At the same time, the Internet can be used for political repression and voter manipulation based on economic and government surveillance. We live in an age, which some describe as postdigital (e.g. Selwyn and Jandrić, 2020), of facial recognition and datafication. As exemplified by the Snowden revelations or the Cambridge Analytica scandal, this is an age when Internet corporations, which collect, track and profile users' data for advertising purposes, can work closely with governments or political parties (McChesney, 2013). In addition, not only do private interests prevail within public debate online, but also those who participate are predominantly male, white and well educated, which makes the Internet elitist (Hindman, 2009). This problem is exacerbated by the 'economic structure' of the Web, which 'encourages audiences to cluster around' a few sites that enjoy visibility (Hindman, 2009: 55). Internet corporations, furthermore, use algorithms that expose users primarily to information which, regardless of its authenticity, reinforces their pre-existing beliefs. This is referred to as the problem of the filter bubble.
As a result, public debate online is increasingly subject to polarization and misinformation, which undermine democracy's reliance on a well-informed citizenry (Vaidhyanathan, 2018).
Critical digital literacy and civic engagement
Media literacy is often used as an umbrella term encompassing various literacies, including information, media, digital, data, multimodal and network literacies. Functional digital literacy requires operational, information-navigation, social and creative skills that users need in order to engage practically with the Internet. In addition, it can be understood as incorporating users' understanding of what the Internet affords in terms of its technical features, as well as their dispositions towards its advantages and disadvantages in relation, for instance, to finding information or to online safety. By contrast, the critical dimension of digital literacy can be approached not only as the ability to evaluate online content in terms of bias and trustworthiness, but also as knowledge about the role of the Internet in relation to broader socio-political and economic forces (Polizzi, 2020a). Critical digital literacy is essential to the active participation of critically autonomous and well-informed citizens in society (Hobbs, 2010; Polizzi, 2020b). Media literacy research, however, has often prioritized their evaluation of media representations, both offline and online, with little attention to whether or how their understanding of the digital environment facilitates their civic engagement. This literature can be categorized into four traditions: (1) educational research informed by social psychology, (2) research on digital inequalities, (3) research inspired by cultural studies and critical pedagogy, and (4) the New Literacy Studies. Each is now discussed briefly.
Employing methods adopted in social psychology, a few educational studies have found that students' ability to analyse news articles and their knowledge of mass media are positively associated with their intention to participate in civic life (Duran et al., 2008; Martens and Hobbs, 2015). Their ability to evaluate online content, furthermore, corresponds to more exposure to political discussions (Kahne et al., 2012). Despite under-researching students' understanding of the Internet, these studies have focussed on critical aspects of digital literacy. By contrast, another strand of educational research has prioritized students' functional digital skills and dispositions towards the Internet's advantages and disadvantages for playing games, learning, socializing and finding information. According to this strand, students' positive or negative dispositions explain, respectively, their willingness or reluctance to use the Internet (e.g. Chou et al., 2009; Meelissen and Drent, 2008). This strand, however, has under-explored users' understanding of the Internet within civic life and in ways that transcend the individual.
Research on digital inequalities, which has also paid more attention to functional digital literacy, has argued that users' digital skills are crucial to overcoming gaps in democratic participation (e.g. Min, 2010). In addition, a few studies have focussed on users' dispositions towards the Internet, but not in the context of their civic engagement (e.g. Eynon and Geniets, 2016; Hakkarainen, 2012; Reisdorf and Groselj, 2017). According to these studies, users' dispositions may be positive or negative for their online engagement, depending on whether or not the Internet is perceived as safe and useful for health, information seeking, social interaction and online shopping. Recent work on digital inequalities has argued that limited engagement online, if it leads to quality outcomes, is not necessarily problematic (e.g. Van Deursen and Helsper, 2018). But, as with the educational studies reviewed above, this strand of research has ultimately polarized users' positive or negative interpretations of the Internet as facilitating, respectively, their online engagement or disengagement.
Indebted to Marxist education scholar Freire (2000), media literacy research inspired by cultural studies and critical pedagogy has employed notions of critical literacy -which refers to the ability to question power and authority -to examine how students construct critical reflections about mainstream representations in ways that inform their production of alternative content challenging dominant ideologies (e.g. Share, 2007, 2019). However, 'there is little . . . of critical digital literacy that appears specifically "digital"' as in incorporating users' understanding of the Internet as embedded in power structures (Pangrazio, 2016: 164). Exceptionally, recent work on data literacy within this tradition has argued that civil society organizations, on the one hand, should understand the implications of how governmental data can be accessed and used to promote causes important to their communities (e.g. Fotopoulou, 2020). Users, on the other hand, should understand how Internet corporations like Google and Facebook operate and how to protect their privacy online (e.g. Pangrazio and Selwyn, 2019). Similarly, for Buckingham (2007), the critical dimension of digital literacy requires an understanding not only of media representations but also of the political economy of the Internet, along with its implications for public debate, campaigning and surveillance (Banaji and Buckingham, 2013: 82-83). Ultimately, for Fry (2014: 133), digital literacy should be approached as including an understanding of the Internet's potentials and limitations for democracy. Nevertheless, such an approach, and whether it has the potential to challenge polarizing conclusions about users' interpretations of the Internet as positive or negative for their online engagement, has remained under-studied.
Conceiving of critical digital literacy as incorporating an understanding of the digital environment raises the question of how to disentangle users' understanding of the Internet from their understanding of the socio-political order. This question has remained underexplored both within and beyond the critical pedagogy tradition. According to Fotopoulou (2014), while feminist activists are motivated by imaginaries of networked feminism grounded in the Internet's potential for freedom and open data, gaps in their digital skills undermine their civic engagement. Their imaginaries, for Fotopoulou (2014), are not a dimension of their digital literacy. By contrast, according to critical pedagogy, users' critique of the socio-political order, while not necessarily focussed on the Internet, is indicative of critical literacy. A limitation of this tradition lies, however, in its interpretation of the critical. Critical pedagogy research has collapsed users' critique into a normative vision of civic engagement as inherently progressive (e.g. Share, 2007, 2019). In doing so, it has left little room for civic engagement that, while not necessarily critical of the political establishment, may be underpinned by a critical understanding of the Internet's political and democratic potential. By the same token, it has hardly recognized that resisting dominant ideologies does not necessarily imply a critical understanding of the Internet. Critiquing the Internet, furthermore, does not inherently overlap with critiquing the social or with progressive action.
Approaching literacy as a socio-cultural practice, the New Literacy Studies tradition has often emphasized users' creative engagement with multimodality -referring to the integration of different media texts -over their critical understanding of online content and the Internet, while also paying little attention to their civic engagement (e.g. Bulfin and North, 2007; Jewitt, 2008). Exceptionally, a few studies inspired by the New Literacy Studies and critical pedagogy have argued that digital literacy should be based on civic imagination, which enables users to imagine socio-political alternatives (Jenkins et al., 2016; Mihailidis, 2018). These studies, however, have under-researched whether and how users imagine the Internet in ways that intersect with, and may be differentiated from, how they imagine such alternatives. Jenkins et al. (2016) have found that the production and sharing of multimedia content enables young activists motivated by progressive ideals to question the Internet's potential for activism and to critique dominant ideologies. Leaving exceptions aside, however, the New Literacy Studies has generally overemphasized the importance of creating '"new" things, while along the way learning skills of mastery and critique' (Pangrazio, 2016: 167).
Given the limitations of media literacy research, this article conceptualizes critical digital literacy as incorporating users' utopian and dystopian imaginaries of society in the digital age, differentiating between their imaginaries of civic life and of the Internet. Before unpacking how such a conceptualization facilitates richer analysis of whether and how critical digital literacy contributes to civic engagement, the next section draws on utopian studies and political theory to frame utopian thinking as relying dialectically on both utopianism and dystopianism.
Utopianism/dystopianism: a dialectical approach to utopian thinking
Utopian thinking, which deals with questions of power, the socio-political system and participation in civic life, has the potential to generate social change. Utopian studies and philosophy -drawing on science fiction literature, political theory, Marxism and postmodernism -represent an interdisciplinary field that identifies and analyses utopian forms, content and functions in society (Levitas, 2010: 6, 179). Before reflecting on the relationship between utopian thinking, action and social change, a brief account of the history of utopian thinking can help us to grasp the dialectic between utopia and dystopia.
Utopianism consists of movements producing utopias. The term utopia was coined by Sir Thomas More in 1516 when he published his Utopia, which tells the story of a homonymous fictional island and its perfect society. More Latinized two Greek compounds -ou (not) and topos (place), and eu (good) and topos (place) -to refer ambiguously to a place that is both a non-place and a good place. By contrast, dystopia, understood as a fictitious abhorrent socio-political world, is believed to derive either from the Greek prefix dys standing for bad, dysfunctional, or from 'Dis', the Greek mythological underworld of the dead (Ransom, 2009: 118, 123).
Since one person's utopia can be another's dystopia, no binary opposition should be asserted between the two, not least because of the role of dystopianism in shaping utopianism. Utopian thinking can be understood as fulfilling a twofold function: (1) raising awareness through a critique of dystopian limitations of the present, while (2) promoting contemplation of utopian elements projected into the future (Shor, 2010: 125). Essential for critiquing the present and envisioning social change,

the probing of utopian moments of building another world . . . requires some understanding of the dystopian elements of this and future worlds. In order to comprehend the utopian/dystopian dialectic, one needs to define that dialectic in ways that underscore . . . [its] fictive and real nature. (Shor, 2010: 124)

Dating back to Hegelian theory, the concept of dialectic refers to a process of reasoning whereby opposed ideas -thesis and antithesis -are negotiated to reach synthesis (Maybee, 2016). While Harvey's (2000) dialectical utopianism relies on the interdependence of alternative space and time, for post-structuralist Marin (1990: xxiv) utopian thinking, based on imagination and realism, requires the creation of a 'timeless no-place' where contemporary socio-political forces undergo 'critical examination'. Similarly, according to Marxist political theorist Fredric Jameson (2005: 15, 180), 'utopian space is an imaginary enclave within real social space' where tensions are played against each other in a 'negative dialectic'. Such a dialectic suggests that utopian thinking relies on both utopianism and dystopianism, provided that these are not synthesized but in a constant state of conflict. Examples of utopian/dystopian configurations include anti-utopianism, critical utopianism and critical dystopianism.
Inasmuch as utopian thinking must be political and ideological to facilitate social change, anti-utopianism refers to the rejection of utopianism erected from the perspective of power (Jameson, 2005: 199). Theorized in the 1970s in the light of optimism about the anti-war, civil rights and environment movements, critical utopianism consists of 'ideological critique . . . and social dreaming/planning' (Moylan, 2000: 82). Finally, theorized as scepticism about Western neoliberal politics in the 1980s and 1990s, critical dystopianism is forged when a utopian enclave is carved from a dystopia (Moylan, 2000: 185, 189).
Not only has Marxism informed discussions of ideology and utopia, but it has also influenced how utopian thinking may be expected to guide action and social change. Arguably, the utopian/dystopian dialectic is embedded in Marx's dialectical materialism, which sits within his political project of overturning capitalism. Referring to a method through which dialectical reasoning problematizes sociality as developing through material conditions, dialectical materialism reflects the aspiration to transcend the contradictions resulting from power asymmetries through action challenging the status quo (Edgley, 1990). While anti-utopianism has often translated into a rejection of left-wing utopianism, Marxism has informed forms of critical utopianism and critical dystopianism resisting capitalism, the patriarchal society and ecological degradation (Jameson, 2005: 199; Moylan, 2000: 82).
Drawing on Bloch's (1995) approach to utopia as imagination and hope, Levitas (2010) has argued that social change results from combining utopian desire with action. But utopian thinking does not intrinsically translate into action. Marxism assumes a link between the two, which leads to 'over-optimis[m] about . . . utopia' (Levitas, 2010: 200). Notions of action and social change, furthermore, are far from univocal, even within the Marxist tradition. Western Marxism differs from orthodox Marxism in its diminished concern with the socialist revolution as a utopian project aspiring to empower the working class to overturn capitalism through controlling the state (Anderson, 1979). Acknowledging the failure of such a revolution in the West, Western Marxism, central as it is to critical theory, critical pedagogy and cultural studies, has informed work on hope and utopia that has approached radical action as multifaceted.
For Bloch (1995), alienation from Western societies is a precondition for radicalism that resonates with orthodox Marxism in its aspiration to overturn capitalism. By contrast, Giroux (2004: 38) has defined radical hope as a pedagogical practice that teaches citizens to take civic action. Hope, for him, represents 'utopian longing' that serves as a 'subversive force . . . evoking . . . different futures' (Giroux, 2004: 38-39). But its subversive nature does not necessarily equate with a rejection of capitalism. It aligns with a vision of radical democracy that aims to facilitate social justice and equality through institutional and non-institutional politics. Finally, for Raymond Williams (1980: 198), utopian thinking consists of a cultural creativity whereby left-wing possibilities can be imagined.
Although much work relevant to utopian studies is indebted to Marxism, Levitas (2010: 214) has emphasized that 'utopias are not the monopoly of the Left'. While socialist and progressive utopias promote social justice and egalitarianism by rejecting power imbalances, there are also 'utopias of the dominant classes in society' (Levitas, 2010: 214). However different in content or purpose, these utopias also operate through a utopian/dystopian dialectic. Neoliberal utopianism, for instance, promotes individual freedom and free-market values by framing taxation and bureaucracy as dystopian threats. We can portray the neoliberal utopia as a dystopia. But we cannot deny that it projects a vision of a desired society (Levitas, 2010: 215-216). Similarly, conservatism encapsulates a utopia that is critical of the individualistic character of liberalism while promoting preservation, centralized power, defence, law and order, and loyalty to the state (Levitas, 2010: 218).
Since utopianism varies in terms of its socio-political purpose, understanding 'the utopist as a radical revolutionary is problematic' (Morgan, 2015: 107). Ideologies, furthermore, are not fixed systems of ideas and can overlap. Operating through a utopian/dystopian dialectic, democratic socialism and sustainable development exemplify progressive ideologies that, while coexisting with capitalism and liberal democracy, resist social inequalities and environmental degradation by often relying on policy reforms and on institutions as actors for social change (Morgan, 2015: 115, 118). In short, not only does the utopian/dystopian dialectic apply to different ideologies that potentially, but not inherently, underpin participation in society, but the latter can also be institutional or non-institutional, ranging from voting for policy reforms to participating in resistance and activism.
Conceptualizing critical digital literacy as incorporating utopian thinking
Insofar as the complexity of change represented by the Internet requires a more nuanced understanding of its interrelation with the social, what can media literacy research gain from utopian studies and political theory? Conceptualizing critical digital literacy as incorporating utopian thinking, as framed above, has the potential to facilitate richer analysis of whether and how critical digital literacy contributes to civic engagement. Such an approach sheds light on the ways in which users participate in civic life by constructing and deploying, in line with different ideologies, utopian/dystopian imaginaries of society in the digital age. Before discussing this further, it is worth examining the intersection of utopian studies and media studies.
A dialectical approach to utopianism/dystopianism can serve as a lens through which to examine the Internet's potentials and limitations for civic life. As addressed above, the Internet facilitates, for example, decentralization of power, political participation and deliberative democracy, but also political repression, surveillance and misinformation (Enjolras et al., 2013;McChesney, 2013;Vaidhyanathan, 2018). Indeed, media scholars have employed notions of both utopia and dystopia to explore, for instance, the implications of digital commons for transcending online commodification (e.g. Loustau and Davis, 2012), or the potential of Internet-mediated collective action (e.g. Wilken, 2012). Discussions of utopianism and the Internet, in addition, have often been accompanied by discussions of ideology. Mejias (2012), for instance, has argued that optimism about the use of Twitter during the Arab Spring has served as a utopian discourse diverting attention in the West from the deepening of social inequalities resulting from capitalism. According to Turner (2006: 244), furthermore, digital technologies have led to cyberlibertarianism, which amounts to a digital utopianism promoting individual liberty by drawing on progressive values that have 'turned away from political struggle and toward social and economic . . . change'.
Contemporary utopian and dystopian imaginaries of society in the digital age reflect different discourses about the Internet that shape, as examined by research on sociotechnical imaginaries (e.g. Milan and Ten Oever, 2017), policy decisions about the digital infrastructure. According to Cohen (2012: 12), these discourses are embedded within the dialectic of information as freedom and information as control. On the one hand, the Internet is expected to promote either economic and political freedom in line with cyberlibertarianism or collective participation against social injustice. On the other hand, a vision of information as control underpins forms of online coercion as well as the expectation that financial profitability, citizen welfare and collective security will require Internet regulation and surveillance (Mansell, 2017).
As captured by the literature on socio-technical imaginaries, media research intersecting with utopian studies has largely prioritized questions about the digital environment. Less is known, however, about whether and how Internet users draw on utopian thinking to understand and participate in society in the digital age, with a few exceptions that, as discussed below, can be found in media studies on social movements. Mindful of these studies, this article now proposes a framework for researching critical digital literacy within civic life in ways that incorporate utopianism/dystopianism, contributing, in turn, to media literacy research and practice.
A two-stage framework
What does applying utopianism/dystopianism to critical digital literacy involve? What should users know about the Internet? In what ways can we expect their knowledge to intersect with their visions of social change? And what can we expect of their civic engagement once they deploy utopianism/dystopianism? This section proposes a two-stage framework for how critical digital literacy, based on the construction (stage 1) and deployment (stage 2) of utopian/dystopian imaginaries of society in the digital age, facilitates civic engagement (see Table 1).
It is important to keep in mind that the framework is theoretical, which means, as discussed later in this article, that it requires empirical testing. Nevertheless, it is supported by references to empirical studies that, as reported in this section, are grounded in media literacy research and in political research. Furthermore, it should be clarified that critical digital literacy may be expected to facilitate civic engagement as part of a framework that is wider than the one presented here. Such a framework should include multiple elements, from the different dimensions of digital literacy to access to resources such as time and money. It should also include political motivation and efficacy, which refers to citizens' perceived ability to influence decision-making (Morrell, 2003), along with civic literacy, which requires knowledge about the socio-political system (Lund and Carr, 2008). While unpacking such a wider framework transcends the scope of this article, the question of how critical digital literacy, as conceptualized here, reshapes digital literacy more broadly is an important one for the media literacy field. Therefore this section, which theorizes how critical digital literacy facilitates civic engagement on the basis of incorporating utopianism/dystopianism, reflects on the ways in which each stage of the framework presented below intersects with the other critical and functional dimensions of digital literacy. These dimensions range from the critical ability to evaluate online content and knowledge about Internet corporations to functional digital skills, knowledge of digital affordances and general dispositions towards the Internet (Polizzi, 2020a).
Stage 1. The first stage of the framework - constructing utopian/dystopian imaginaries of society in the digital age - requires users to construct (1) imaginaries of civic life and (2) imaginaries of the Internet. More specifically, given the dialectical nature of utopian thinking, on the one hand they need to project visions of social change rooted ideologically in the contemplation of utopian possibilities based on critiquing dystopian elements of the present. 
On the other hand, they need to understand the Internet's potentials and limitations for civic life. Users might construct their imaginaries of civic life, for example, as progressive or neoliberal expectations of the socio-political order that promote ideals of social justice or individual freedom based, respectively, on transcending social inequalities or taxation. At the same time, they should understand that the Internet facilitates, for instance, not just public debate, citizens' interaction with politicians and activism, but also elitism, misinformation, polarization and surveillance. This stage of the framework enables us, therefore, to examine whether and how users' imaginaries of civic life intersect with their imaginaries of the Internet. This means, in practice, that users may well appreciate, for example, that the Internet provides democratizing opportunities to share their political opinions, while also amplifying the spread of misinformation, in ways that may be intertwined with different visions of social change and different ideologies. These may range from socialism and progressivism, with some users longing for forms of egalitarianism, to conservatism or neoliberalism, with others projecting hope for law and order or for the free market. Indeed, we know from media research on social movements, which has hardly engaged with notions of media literacy, that activists' progressive visions of collective freedom or their anti-democratic values are often blended with an understanding of the Internet's implications for surveillance and visibility as well as with cyberlibertarian principles that champion its potential for individual liberty (e.g. Postill, 2014;Treré, 2019). Relatedly, the ways in which users' imaginaries of civic life can intersect with their imaginaries of the Internet are captured by media activism, which refers to activism around traditional media and/or digital technologies. 
On the one hand, for example, British organizations such as the Open Rights Group and the Campaign for Freedom of Information - which, in accordance with progressive principles, are critical of online censorship and surveillance - promote visions of a better society by campaigning for users' privacy and free speech. On the other hand, Accuracy in Media in the United States and Mediawatch UK campaign against media bias and harmful content in line with socially conservative and economically liberal agendas (Hackett and Carroll, 2006: 57).
Finally, this stage of the framework suggests that the process of constructing utopian/ dystopian imaginaries of the Internet in synergy with imaginaries of civic life may well intersect with the other critical and functional dimensions of digital literacy. Media literacy research on the expertise of digital specialists, including information, IT and media professionals, has found that their ability to evaluate online content is underpinned by knowledge about the Internet's potential for public debate but also for misinformation, knowledge that intersects with functional dispositions towards the Internet in relation to accessing information (Polizzi, 2020a). Experts, furthermore, are often conscious of the Internet's implications for the polarization of public debate and for surveillance in ways that are blended with a critical understanding of how Internet corporations like Google and Facebook operate. Such an understanding relies on practical knowledge of how the algorithms of these corporations function and what they afford in terms of the creation of filter bubbles and the tracking of users' data for commercial purposes (Polizzi, 2020a). Similarly, according to data literacy research, users need socio-technical knowledge to understand how search engines and online platforms function, with emphasis on their implications for privacy and surveillance (Pangrazio and Selwyn, 2019). Considering this research, what this stage of the framework adds is that, in concert with their different imaginaries of civic life and ideologies, users should understand the Internet's potentials and limitations for civic life in ways that underpin the other dimensions of their digital literacy.
Stage 2. The second stage of the framework - deploying utopian/dystopian imaginaries of society in the digital age - prescribes how critical digital literacy, as conceptualized here, can facilitate civic engagement. For this to happen, users should not only construct (stage 1) but also deploy (stage 2) imaginaries of civic life in synergy with imaginaries of the Internet. As argued earlier, utopian thinking, provided it relies on both utopianism and dystopianism, can underpin participation in institutional or non-institutional politics in line with different ideologies. It follows that, in order to participate in civic life in ways that are mediated by the Internet, users need to deploy imaginaries of civic life that resonate with different ideologies. This might include, for instance, raising awareness of social justice issues in accordance with progressive ideals, or supporting neoliberal causes. At the same time, they need to deploy imaginaries of the Internet's civic potentials and limitations. To give an example, users' civic engagement, in line with left- or right-wing ideals, might be underpinned by an understanding of the Internet's potential for public debate but also for elitism in ways that enable them to take advantage of using social media to discuss politics or raise awareness of socio-political issues. Given the potential for alternative media to reach wider audiences through social media platforms (Fenton and Barassi, 2016), on the one hand this might include disseminating progressive content in opposition to social inequalities via alternative news sites or activists' own websites, thus avoiding the limitation of interacting primarily with users from higher socio-economic backgrounds. 
On the other hand, it might include using alternative media online to reach different communities with a view to promoting not just left-wing but also right-wing causes, reflecting conservative principles of centralized power or neoliberal values of competitive individualism.
We know from political research that citizens and activists with different political views use the Internet in ways that are informed by knowledge of its implications for political expression, building support and organizing action (e.g. Barassi, 2015;Kwak et al., 2018). Media research on social movements, furthermore, has found that activists know how to adapt to the media ecosystem insofar as they are largely aware of both its potentials and its limitations. McCurdy (2011), for instance, has argued that they often understand that mainstream media have a wider reach but are driven by corporate interests, while alternative media have limited visibility - which is why they use both media, both online and offline, to compensate for their respective limitations. Similarly, Barassi (2015) has found that activists know that social media platforms like Facebook are shaped by corporate power, which has negative implications for users' privacy and in terms of surveillance. At the same time, they value online platforms for their potential to create networks of solidarity. As a result, they use social media to organize action, but they also use alternative platforms, including their own websites and newsletters.
Building on this research, this stage of the framework suggests that critical digital literacy can facilitate civic engagement provided it incorporates imaginaries of the Internet's potentials and limitations for civic life. But this can only happen as long as users deploy other dimensions of their digital literacy in synergy with their imaginaries. In the light of empirical research reported below, we can assume that they will need to deploy, for instance, sophisticated functional digital skills as well as dispositions towards accessing and sharing information online in order to take advantage of the Internet's utopian potential for public debate or for activism. At the same time, they will need to overcome its dystopian limitations in terms of misinformation, polarization or surveillance, which requires knowledge of how Internet corporations operate and function as platforms or search engines. This is how users might be able to produce and share political content online or organize action in line with left- or right-wing ideals, while using platforms and search engines in ways that enable them to evaluate and diversify their exposure to information, or to minimize the tracking of their data. This means, more concretely, that, by deploying different dimensions of their digital literacy together with utopian/dystopian imaginaries of the Internet and of civic life, they might be able, for example, to use social media platforms to raise awareness about the environment or individual economic freedom, while using fact-checking websites to corroborate information or, when discussing sensitive issues, messaging systems with higher encryption.
Indeed, we know from digital inequalities research as well as from political research that users need a range of skills, from operational and information-navigation skills to social and creative skills, in order to use the Internet for civic purposes, from seeking to producing and sharing political content (Anduiza et al., 2010;Min, 2010). In addition, we know that digital experts, who are particularly conscious that the algorithms of Internet corporations create filter bubbles that exacerbate both misinformation and polarization, often deploy their ability to evaluate online content in ways that involve the use of multiple sources. This includes comparing political content across different search engines as well as diversifying their exposure to information by following on social media individuals or organizations with opposing views (Polizzi, 2020a). Finally, according to data literacy research, users' data tactics aimed at protecting their privacy, from managing their privacy settings to obfuscating personal information online, rely on an understanding of the technical features of online platforms as well as of the privacy implications of how the latter operate (Selwyn and Pangrazio, 2018). With these findings in mind, what this stage of the framework adds is that not only do users need to deploy - and not just to construct - imaginaries of society in the digital age in order for their critical digital literacy to facilitate their civic engagement, but also their imaginaries need to be deployed together with, and in ways that underpin, the other dimensions of their digital literacy.
Implications for media literacy research and practice
The framework proposed above contributes to media literacy research by facilitating richer analysis of whether and how critical digital literacy, conceptualized as incorporating utopian/dystopian imaginaries of society in the digital age, facilitates civic engagement. The framework enables us to (1) disentangle users' imaginaries of the Internet from their imaginaries of civic life, (2) resist the collapse of critical digital literacy into civic engagement understood as inherently progressive, and (3) problematize polarizing conclusions about users' interpretations of the Internet as either crucial or detrimental to their online engagement. More specifically:

1. As argued earlier, leaving aside research that has prioritized users' functional over their critical digital skills and knowledge, with little attention to their civic engagement (e.g. Chou et al., 2009;Reisdorf and Groselj, 2017), a few educational studies informed by social psychology have found that digital literacy facilitates civic engagement. These studies have focussed, however, on students' ability to evaluate online content and on their knowledge of traditional media rather than on their critical understanding of the Internet (e.g. Duran et al., 2008;Kahne et al., 2012). Research inspired by critical pedagogy, furthermore, has approached users' critique of dominant media representations as inherently progressive, with little room for different ideologies (e.g. Share, 2007, 2019). Exceptionally, a few studies have framed digital literacy as incorporating an understanding of the digital environment. These include data literacy research (e.g. Pangrazio and Selwyn, 2019) as well as research arguing that digital literacy should be based on civic imagination enabling users to imagine socio-political alternatives (e.g. Mihailidis, 2018). The framework proposed above builds on these studies. The literature, however, has remained silent on whether and how users understand the Internet in ways that intersect with their imaginaries of civic life. By contrast, applying utopianism/dystopianism to critical digital literacy is analytically valuable because it enables us to disentangle how users draw on utopian thinking to understand the Internet - with emphasis on its civic potentials and limitations - from how they construct visions of civic life that can align with different ideologies. Given the dialectical nature of utopian thinking, such an approach suggests that critical digital literacy requires both utopian and dystopian imaginaries of the Internet.

2. Marxist utopian studies have collapsed utopian thinking into action. But utopian thinking does not inherently guide civic engagement. The utopian/dystopian dialectic, furthermore, applies to different ideologies and regardless of whether social change is promoted through institutional politics or resistance and activism. Applying utopianism/dystopianism to critical digital literacy suggests that the latter potentially, but not inherently, facilitates civic engagement. As captured by the framework above, in order to participate in civic life, users need not only to construct (stage 1) but also to deploy (stage 2) their imaginaries of society in the digital age, and alongside the other critical and functional dimensions of their digital literacy. These range from the critical ability to evaluate online content and knowledge of Internet corporations to functional digital skills, knowledge of digital affordances and dispositions towards the Internet (Polizzi, 2020a). It follows that users may well understand the role of the Internet in civic life in synergy, for example, with progressive or neoliberal ideologies, without necessarily deploying such an understanding, or without participating in civic life at all, which might be the result of limited digital skills, of limited resources or of a lack of political motivation (Min, 2010). Media literacy research inspired by critical pedagogy has collapsed users' critique of dominant media representations into a normative vision of civic action and resistance, approached as intrinsically progressive (e.g. Share, 2007, 2019). By contrast, conceptualizing critical digital literacy as incorporating utopianism/dystopianism facilitates broader analytical inquiry by suggesting that users' utopian/dystopian imaginaries of the Internet may (or may not) contribute to their civic engagement in ways that may be blended with different imaginaries of civic life and ideologies.

3. Approaching utopian thinking as projecting utopian possibilities for social change while critiquing dystopian limitations of the present prescribes an imagination/realism dialectic that makes the expectation of constructing both utopian and dystopian imaginaries of the Internet a sine qua non for critical digital literacy. Understanding the Internet's civic potentials and limitations does not necessarily translate into civic engagement. But, as theorized above, deploying, and not just constructing, such an understanding, along with different imaginaries of civic life and ideologies, can enable users to pursue civic opportunities provided that it is deployed together with, and in ways that underpin, the other dimensions of their digital literacy. Conceptualizing critical digital literacy in this way has repercussions for research on digital inequalities and for educational research inspired by social psychology. 
As discussed earlier, these strands of research have largely polarized users' positive or negative interpretations of the Internet as facilitating, respectively, their online engagement or disengagement (e.g. Chou et al., 2009;Hakkarainen, 2012;Reisdorf and Groselj, 2017). By contrast, media studies on social movements have found that activists participate in civic life in ways that are informed by knowledge of both the potentials and the limitations of the media ecosystem (e.g. Barassi, 2015;McCurdy, 2011). Bridging this body of work with media literacy research, applying utopianism/dystopianism to critical digital literacy suggests that the latter can facilitate civic engagement provided it incorporates imaginaries of the Internet's potentials and limitations for civic life.
Besides its implications for media literacy research, the framework proposed here has practical implications for the promotion of critical digital literacy and of digital literacy more broadly. On the one hand, it prescribes that users should understand the utopian/dystopian potential of the Internet for civic life - an understanding that requires knowledge of how Internet corporations operate, with what privacy implications, and how they function as platforms or search engines. On the other hand, the framework suggests that, when deployed in concert with functional digital skills and the ability to evaluate online content across multiple sources, such an understanding, in synergy with different imaginaries of civic life and ideologies, can enable users to pursue civic opportunities online.
In countries of Europe and beyond, educationalists and policymakers are committed to promoting digital literacy as a lifelong set of digital skills and knowledge that users should develop from an early age (Frau-Meigs et al., 2017). To reach adults, most of whom are no longer in school, is challenging. But when it comes to children, often these efforts include ensuring that digital literacy is taught across the school curriculum. When considering, for example, the national curriculum for England, a few recommendations can be made on the basis of the framework described above. While subjects like Computing are suitable for teaching functional digital literacy, the Citizenship curriculum should be revised to ensure that it equips students with knowledge about the digital environment. Such a knowledge area, which is currently missing from the curriculum (Polizzi and Taylor, 2019), should be promoted in tandem with students' imaginaries of civic life. This task involves embedding critical digital literacy within civic education, and encouraging students to be critical of information online as well as to understand the political economy of the Internet and, ultimately, both its potentials and its limitations for civic life. At the same time, they should be encouraged to construct and deploy imaginaries of civic life that may well align with different ideologies.
Conclusion
This article provides a novel perspective for media literacy research by proposing a theoretical framework for researching how critical digital literacy, based on constructing and deploying utopian/dystopian imaginaries of society in the digital age, facilitates civic engagement. In doing so, it opens up possibilities for richer analysis of whether and how critical digital literacy facilitates civic engagement. The framework is grounded in the proposition, borrowed from utopian studies and political theory, that utopian thinking relies dialectically on projecting utopian possibilities for social change while critiquing dystopian limitations of the present. Approaching critical digital literacy as incorporating utopianism/dystopianism allows us to disentangle users' imaginaries of civic life from their imaginaries of the Internet. Such an approach builds on the idea that critical digital literacy should refer not only to the ability to evaluate online content, but also to knowledge of the political economy of the Internet, along with its potentials and limitations for civic life. While critical pedagogy research has framed users' critique and civic action as necessarily progressive, applying utopianism/dystopianism to critical digital literacy suggests that in the digital age users' utopian/dystopian imaginaries of society are potentially, but not inherently, beneficial for their civic engagement in line with different ideologies. At the same time, building on media studies on social movements, such an approach problematizes polarizing conclusions about users' interpretations of the Internet as either crucial or detrimental to their online engagement. 
Indeed, this article prescribes that critical digital literacy can facilitate civic engagement provided users construct and deploy both utopian and dystopian imaginaries of the Internet within civic life, and in ways that underpin and are deployed together with the other critical and functional dimensions of their digital literacy. These dimensions range from the critical ability to evaluate online content and knowledge about Internet corporations to functional digital skills, knowledge of digital affordances and dispositions towards the Internet.
Conceptualizing critical digital literacy in this way has repercussions for how educationalists and policymakers should promote digital literacy through the education system, with civic education being particularly suitable for encouraging students' understanding of the digital environment in synergy with different visions of social change. Such a conceptualization invites new intellectual directions by suggesting that critiquing both the Internet and civic life is paramount for (re)imagining society in the digital age through utopian thinking, which requires an imagining of potentialities together with realism. This article invites researchers working at the intersection of media studies and utopian studies to explore more closely how different socio-technical imaginaries, besides reflecting different discourses about the Internet, can be constructed and deployed by users in the context of their civic practices, which is an empirical question. This is why empirical research cutting across different literatures and epistemologies is needed to test the framework proposed above and explore, in practice, whether and how critical digital literacy, as theorized here, facilitates civic engagement within different contexts and among different populations. Qualitative research should explore how users construct and deploy utopian/dystopian imaginaries of society in the digital age. Meanwhile, quantitative research should measure the extent to which their imaginaries correlate with their civic engagement. New measures and survey items should be created and tested. Finally, regardless of its methodology, research is needed on whether and how critical digital literacy facilitates civic engagement as part of a wider framework including, as well as the other dimensions of digital literacy, access to resources, civic literacy, political motivation and efficacy.
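The quantitative step proposed above - creating and testing new survey measures and correlating imaginaries with civic engagement - can be sketched with standard psychometric steps. The following Python sketch uses entirely invented Likert responses and scale names (an "Internet imaginaries" scale and a civic-engagement score are assumptions for illustration, not measures from this article): item responses are combined into a composite, the scale's internal consistency is checked with Cronbach's alpha, and the composite is correlated with engagement.

```python
from statistics import mean, pvariance

def cronbach_alpha(items):
    # items: one list of responses per survey item (same respondents, same order).
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]          # per-respondent scale totals
    item_var = sum(pvariance(i) for i in items)           # sum of item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

def pearson_r(x, y):
    # Pearson correlation computed from deviations about the means.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 Likert responses from six respondents to three
# "Internet imaginaries" items (invented data for illustration only).
imaginary_items = [
    [4, 5, 2, 4, 3, 5],
    [4, 4, 2, 5, 3, 4],
    [5, 4, 1, 4, 2, 5],
]
# Hypothetical civic-engagement score per respondent (e.g. self-reported frequency).
engagement = [3, 5, 1, 4, 2, 5]

composite = [mean(vals) for vals in zip(*imaginary_items)]
alpha = cronbach_alpha(imaginary_items)
r = pearson_r(composite, engagement)
print(f"alpha={alpha:.2f}, r={r:.2f}")
```

A real study would of course require validated items, appropriate sampling and controls for the wider framework (resources, civic literacy, motivation); the sketch only shows the mechanics of testing a new scale and its correlation with engagement.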
"year": 2023,
"sha1": "9a2ed234216db36573bfd3552cbbb00969dbf9b0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/14614448211018609",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "493862837038c22834e72049a0cc4ce1cc15189c",
"s2fieldsofstudy": [
"Computer Science",
"Sociology",
"Political Science",
"Education"
],
"extfieldsofstudy": [
"Sociology",
"Computer Science"
]
} |
Auxotrophic Selection Strategy for Improved Production of Coenzyme B12 in Escherichia coli
Summary

The production of coenzyme B12 using well-characterized microorganisms, such as Escherichia coli, has recently attracted considerable attention to meet the growing demand for coenzyme B12 in various applications. In the present study, we designed an auxotrophic selection strategy and demonstrated enhanced production of coenzyme B12 using a previously engineered coenzyme B12-producing E. coli strain. To select a high producer, the coenzyme B12-independent methionine synthase (metE) gene was deleted in E. coli, thus limiting its methionine synthesis to only that via the coenzyme B12-dependent synthase (encoded by metH). Following the deletion of metE, significantly enhanced specific production of coenzyme B12 validated the coenzyme B12-dependent auxotrophic growth. Further precise tuning of the auxotrophic system by varying the expression of metH substantially increased the cell biomass and coenzyme B12 production, suggesting that our strategy could be effectively applied to E. coli and other coenzyme B12-producing strains.
INTRODUCTION
Coenzyme B12, also known as adenosylcobalamin, plays an important role in several metabolic reactions occurring in different organ systems of the body (Guo and Chen, 2018). For example, it is required for proper functioning of the nervous system and synthesis of red blood cells, fatty acids, and amino acids (Ko et al., 2014; Martens et al., 2002). The demand for large-scale production of coenzyme B12 has steadily increased owing to its applications in the food, feed additive, and pharmaceutical industries (Fang et al., 2017, 2018). However, chemical synthesis of coenzyme B12 is highly complicated because of its complex structure. To overcome this shortcoming, industrial production of coenzyme B12 through microbial fermentation has been regarded as an efficient method (Biedendieck et al., 2010).
Currently, microorganisms with the inherent ability to synthesize coenzyme B12, including Pseudomonas denitrificans and Propionibacterium freudenreichii (the highest production was 214.3 and 206.0 mg/L, respectively), are widely employed for its industrial production (Fang et al., 2018; Lee et al., 2018; Martens et al., 2002). However, these strains are not well characterized; thus, only limited engineering tools, such as random mutagenesis and plasmid-based gene expression, have been utilized (Fang et al., 2017, 2018; Yin et al., 2019). In addition, these strains are known to have long fermentation cycles and expensive and complex medium requirements (Fang et al., 2017).
The use of genetically well-characterized bacteria can be a compelling strategy for the production of coenzyme B12. In this regard, recent studies have reported the production of coenzyme B12 by exploiting the representative microbial workhorse, Escherichia coli (Fang et al., 2018; Fowler et al., 2010; Ko et al., 2014). It has been demonstrated that E. coli can synthesize coenzyme B12 upon the addition of ado-cobinamide (AdoCbi) and dimethylbenzimidazole (DMBI) (Fowler et al., 2010; Jang et al., 2018) via the native coenzyme B12 salvage pathway (Lawrence and Roth, 1996). Moreover, another group reconstructed the AdoCbi synthetic pathway from P. denitrificans and heterologously introduced it into E. coli BL21(DE3) (Ko et al., 2014). In their study, Ko et al. overexpressed 22 genes using three plasmids; the production of coenzyme B12 (0.65 µg/g dry cell weight [DCW]) was confirmed even without the addition of AdoCbi. More recently, Fang et al. reported the production of unexpectedly high levels of coenzyme B12 (307.00 µg/g DCW) through step-by-step optimization of the synthetic pathway (Fang et al., 2018). The entire synthetic pathway included heterologous expression of 28 genes, and the related genes from different microorganisms were screened for efficient production of coenzyme B12.
In addition to rational strain engineering strategies, selection-based engineering strategies have been valuable for improving the production capabilities of microorganisms. Indeed, recent studies have shown tremendous improvement in biochemical production by devising genetic circuits with biosensors that detect the levels of metabolites of interest. Moreover, coupling the production capability with cell survival has led to a remarkable improvement in titers (Gao et al., 2019; Jang et al., 2019). For example, Xiao et al. developed a circuit to control the expression of the genes responsible for antibiotic resistance or essential amino acid synthesis under the control of a free fatty acid (FFA)-responsive promoter (PAR), allowing only the strains with active FFA production to grow (Xiao et al., 2016). Consequently, it was highly effective in controlling the population quality by minimizing heterogeneity in biological systems, thereby significantly enhancing the production of FFA. Similarly, Rugbjerg et al. introduced a genetic circuit to couple the gene expression of cell wall synthesis with that of mevalonate production (Rugbjerg et al., 2018); the introduction of the circuit enabled stable plasmid maintenance and consistent mevalonate production during long-term cultivation.
In the present study, an auxotrophic selection strategy was designed to increase coenzyme B12 production in E. coli. We leveraged the characteristics of methionine biosynthesis in E. coli: the strain was engineered to synthesize methionine only when coenzyme B12 was present, via cobalamin-dependent homocysteine transmethylase (metH), by deleting the gene encoding cobalamin-independent homocysteine transmethylase (metE). The growth rate of the E. coli strain lacking metE was highly dependent on the concentration of exogenously added coenzyme B12. When the strategy was applied to the EpACRcob strain, a previously reported coenzyme B12-producing strain (Ko et al., 2014), the specific coenzyme B12 production was substantially improved through autonomous modulation of plasmid copy numbers. The expression of metH was further varied to optimize the cell biomass and coenzyme B12 production. The engineered strain exhibited significantly enhanced coenzyme B12 production. We believe that this novel strategy will be useful not only for E. coli but also for several other microorganisms for the production of coenzyme B12.
Construction of the Coenzyme B12 Auxotrophic System in E. coli
To develop the auxotrophic selection strategy for the production of coenzyme B12, we initially sought an essential enzymatic reaction dependent on coenzyme B12 in E. coli. The most well-known enzyme is cobalamin-dependent homocysteine transmethylase (encoded by metH, Figure 1), which catalyzes the transfer of a methyl group from 5-methyltetrahydrofolate to L-homocysteine to produce methionine and tetrahydrofolate (Lago and Demain, 1969; Neil and Marsh, 1999; Raux et al., 1996). Coenzyme B12 is used as a direct mediator of the methyl group during the transfer process. In fact, E. coli possesses another, cobalamin-independent enzyme, encoded by metE, to synthesize methionine (Davis and Mingioli, 1950; Mordukhova and Pan, 2013). Therefore, we decided to delete the metE gene in E. coli for auxotrophic selection. Additionally, we found an early stop codon at the 58th codon of the btuB gene (KEGG number ECD_03851, a pseudogene encoding the cobalamin outer membrane transporter) in the wild-type E. coli BL21(DE3) strain (Studier et al., 2009), the host used in the previous coenzyme B12 production study (Ko et al., 2014). Because BtuB plays an essential role in coenzyme B12 import (Fowler et al., 2010), the early stop codon was replaced with ''CAG'' to incorporate glutamine, the identical codon used by other E. coli species (see the Transparent Methods section); thus, the disrupted btuB gene was functionally expressed and B12 auxotrophic growth was achieved following the addition of coenzyme B12.

Figure 1. The synthetic coenzyme B12 auxotroph was constructed by the deletion of chromosomal metE (encoding cobalamin-independent homocysteine transmethylase). The expression of metH (encoding cobalamin-dependent homocysteine transmethylase) was varied using different synthetic promoters to obtain optimal production of coenzyme B12. See also Figure S1.
Upon deleting metE, methionine is produced only by coenzyme B12-dependent MetH, and the cells are able to grow only in the presence of coenzyme B12. To validate the coenzyme B12-dependent cell growth, the resulting EDMB strain (E. coli BL21(DE3) with metE deletion and functional btuB, Table S1) was cultured in the B12 auxotrophic medium containing all amino acids except methionine (see the Transparent Methods section). As expected, the strain exhibited negligible growth (growth rate <0.01 h−1) because of its inability to synthesize methionine. When the medium was supplemented with varying concentrations of coenzyme B12 ranging from 0 to 1 µM (∼1.6 mg/L), the specific growth rate gradually increased as the concentration of coenzyme B12 was increased to 500 pM (Figure 2), reaching that of the wild type (0.51 h−1) when a sufficient amount of coenzyme B12 was present. Although the range of concentrations over which the auxotrophic selection functions is at the picomolar level, given that the production of coenzyme B12 is not high in E. coli (Ko et al., 2014), it is expected to be applicable to producing strains. Collectively, these results suggest that the coenzyme B12 auxotrophic system was successfully constructed in E. coli.
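For readers who want to reproduce this kind of analysis, the specific growth rate reported here is simply the slope of ln(OD600) against time over the exponential phase. A minimal sketch of that calculation (the OD readings below are hypothetical, chosen to mimic growth near the wild-type rate, and are not the paper's data):

```python
import math

def specific_growth_rate(times_h, od600):
    """Estimate the specific growth rate mu (h^-1) by linear regression
    of ln(OD600) against time over the exponential phase."""
    n = len(times_h)
    ys = [math.log(od) for od in od600]
    mean_x = sum(times_h) / n
    mean_y = sum(ys) / n
    # Ordinary least-squares slope of ln(OD) vs. time.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(times_h, ys))
    den = sum((x - mean_x) ** 2 for x in times_h)
    return num / den

# Hypothetical exponential-phase readings for illustration only:
times = [0.0, 1.0, 2.0, 3.0]
ods = [0.05, 0.083, 0.138, 0.23]  # doubles roughly every 1.4 h
mu = specific_growth_rate(times, ods)  # roughly 0.51 h^-1
```

The same regression applied to OD series measured at each coenzyme B12 concentration would yield the dose-response curve shown in Figure 2.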
Application of the Auxotrophic System for Coenzyme B12 Production
The auxotrophic system was applied to the previously reported coenzyme B12-producing strain, EpACRcob (Tables S1 and S2). The three plasmids (Table S1) harboring the AdoCbi synthetic pathway genes from the EpACRcob strain (Ko et al., 2014) were introduced into the EDMB strain. The resulting EDMBcob and EpACRcob strains were initially cultured in the B12 auxotrophic medium; however, severely reduced cell growth was observed for the EDMBcob strain (the growth rate was 0.03 h−1). Therefore, we decided to use the RB12 medium containing 10 g/L of tryptone instead of the individual amino acids to supplement a low amount of methionine and to enhance protein synthesis at the early phase of culture. This supplementation was helpful for obtaining higher biomass (0.60 g DCW/L, Figure 3A). However, the deletion of metE still resulted in significantly reduced biomass of the EDMBcob strain (Figure 3A, a 5.01-fold decrease). These results indicated that intracellular methionine synthesis was critical for cell growth and that the EDMBcob strain was still affected by the auxotrophic system.
We further quantified the production of coenzyme B12 in both strains to investigate the effect of the selection strategy. The EpACRcob strain exhibited higher production of coenzyme B12 (2.43 µg/g DCW) when compared with the previously reported value (0.65 µg/g DCW) (Ko et al., 2014). Given that most of the culture conditions were the same, this difference in productivity is probably owing to the use of RB12 medium supplemented with glucose as a carbon source, unlike the previously used LB medium (see the Transparent Methods section). Surprisingly, the EDMBcob strain with the metE deletion showed a 2.73-fold increase in specific coenzyme B12 production compared with the EpACRcob strain (Figure 3B, 6.64 µg/g DCW). The result indicates that the auxotrophic selection system was effective in significantly improving the production of coenzyme B12.

Figure 2. The specific cell growth rate (h−1) was calculated and plotted on the y axis according to the coenzyme B12 concentration (x axis). Error bars indicate the standard deviations from three independent cultures.
Because the only genomic differences are the deletion of metE and the functional expression of btuB, it was hypothesized that the increase in specific production could be attributed to altered expression levels of the AdoCbi synthetic genes. Plasmids typically exhibit huge heterogeneity in their plasmid copy numbers (PCNs), which has often affected the production performance of microorganisms (Jahn et al., 2016; Kang et al., 2018). To test this hypothesis, the copy number of each plasmid in both strains was measured by quantitative PCR (qPCR). As mentioned, the copy numbers of the pCcob (ColA origin) and pRcob (RSF1030 origin) plasmids in the EpACRcob strain (Figure 3C) were observed to be relatively lower (14.2 and 16.9 copies per cell, respectively) than those known (20-40 copies and 100 copies, respectively), which could be due to different culture conditions, the use of multiple plasmids, and the metabolic burden from several heterologous gene expressions (Jahn et al., 2016; Zhong et al., 2011). Meanwhile, the pAcob plasmid (p15A origin) exhibited a significantly higher PCN (58.5 copies/cell) than the known PCN (10-12 copies/cell). Moreover, the high PCN of pAcob is consistent with the previously measured high mRNA levels of the genes in the pAcob plasmid (Ko et al., 2014). The measurement revealed that the EDMBcob strain showed dramatic changes in copy number compared with the EpACRcob strain. The copy number of the pCcob plasmid (26.6 copies/cell) was almost 2-fold higher than that in the EpACRcob strain. The increased copy number of the pCcob plasmid led to enhanced expression of the genes responsible for converting hydrogenobyrinic acid a,c-diamide to AdoCbi (Figure S1). On the contrary, the PCNs of the pAcob and pRcob plasmids (41.8 and 13.2 copies/cell, respectively) were 1.40-fold and 1.28-fold lower than those of the EpACRcob strain. The decreased copy numbers might be beneficial for minimizing the wasteful use of resources in gene expression.
There might be other potential factors; nevertheless, the introduction of the auxotrophic selection system affected the PCNs, improving the production of coenzyme B12.
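The relative qPCR arithmetic behind PCN estimates of this kind (in the spirit of the Škulj et al., 2008 method cited in the Transparent Methods) reduces to comparing the threshold cycle of a plasmid amplicon against that of a single-copy chromosomal reference. A simplified sketch assuming ideal, equal amplification efficiencies; the Ct values are hypothetical, not the paper's raw data:

```python
def plasmid_copy_number(ct_chromosome, ct_plasmid, eff_chr=2.0, eff_pla=2.0):
    """Relative PCN from qPCR threshold cycles: with a single-copy
    chromosomal reference gene, PCN = eff_chr**Ct_chr / eff_pla**Ct_pla.
    An efficiency of 2.0 corresponds to perfect doubling each cycle."""
    return (eff_chr ** ct_chromosome) / (eff_pla ** ct_plasmid)

# Hypothetical Ct values for illustration: a plasmid amplicon crossing
# threshold ~4.7 cycles earlier than the chromosomal amplicon implies
# roughly 2**4.7, i.e. about 26 copies per cell.
pcn = plasmid_copy_number(ct_chromosome=20.0, ct_plasmid=15.3)
```

In practice, per-amplicon efficiencies are calibrated from standard curves rather than assumed to be 2.0.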
Optimization of metH Expression for Enhancing Coenzyme B12 Production
Despite the enhanced specific production, the volumetric titer of the EDMBcob strain was lower than that of the EpACRcob strain by 1.83-fold (3.98 µg/L, Figure 3D) because of the decreased cell biomass of the EDMBcob strain. This result suggested that additional tuning of the auxotrophic system was required to restore the cell biomass and thereby increase the overall titer.
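The trade-off described here follows directly from the identity volumetric titer = specific production × biomass. Plugging in the values reported above for the EDMBcob strain (in the units used in the text) reproduces the Figure 3D titer:

```python
def volumetric_titer(specific_production_per_g_dcw, biomass_g_dcw_per_l):
    """Volumetric titer = specific production (per g DCW) x biomass (g DCW/L)."""
    return specific_production_per_g_dcw * biomass_g_dcw_per_l

# Values reported in the text for the EDMBcob strain:
titer = volumetric_titer(6.64, 0.60)  # ~3.98, matching Figure 3D
```

This is why restoring biomass (the goal of the metH tuning below) can raise the overall titer even if specific production drops somewhat.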
It has been known that MetH has a higher catalytic efficiency than MetE (Gonzalez et al., 1992). However, methionine is mostly synthesized by MetE and not MetH because the expression of metH is induced only when cobalamin is present in the medium (Helliwell et al., 2011; Roth et al., 1996). Therefore, it was believed that the reduced cell biomass of the EDMBcob strain presumably resulted from insufficient methionine synthesis with low metH expression at an early stage of cultivation. To validate our assumption, the expression of metH was measured by qPCR (Figure 4). Indeed, the metH expression was very low at the beginning of the cultivation (4 h), reinforcing our assumption that this interfered with the early biomass accumulation. The expression of metH was subsequently induced after synthesis of coenzyme B12 (10 h); however, the cell biomass could not be recovered (Figure S2). This is probably due to a metabolic imbalance in which the proteins essential for cell growth could not be sufficiently synthesized along with the synthesis of coenzyme B12 (Darlington et al., 2018; Segall-Shapiro et al., 2014). Collectively, these results suggested that increasing the metH expression would improve the initial cell growth as well as the overall titer. Therefore, we tried to deregulate the expression of metH using synthetic constitutive promoters (http://parts.igem.org/Promoters/Catalog/Anderson) (Kang et al., 2018; Noh et al., 2017). To restore the cell biomass, the metH expression needed to be higher than before, as its insufficient expression produced less cell biomass. However, excessive expression could also lower the specific coenzyme B12 production.

Figure 3. Cell biomass (A) (see also Figure S2), specific coenzyme B12 production (B), and plasmid copy number (PCN) after 12-h cultivation (C) (see also Figure S3), and overall coenzyme B12 titer (D) after 24-h cultivation. Error bars indicate the standard deviations from experiments conducted in triplicate.
Consequently, multiple strains with varied expression levels of metH were generated by introducing synthetic metH cassettes with constitutive promoters of different strengths (Tables S1 and S2). The resultant strains (EDMB1-5cob) were cultivated, and the expression of metH was measured in the same manner (Figure 4). Although there was a gap between the predicted strength and the measured metH expression levels (Iverson et al., 2016; Kelwick et al., 2015; Noh et al., 2017), varied metH expression levels (up to 7.84-fold) were successfully identified in these engineered strains as intended. Moreover, these strains showed up to 28.6-fold higher metH expression than the EDMBcob strain at an early stage of cultivation and appeared to maintain a relatively constant level during cultivation.
The synthetic expression of metH caused noticeable changes in both cell biomass and specific coenzyme B12 production. All engineered strains, EDMB1-5cob, displayed notably increased cell biomass compared with the EDMBcob strain (Figures 3A and S2). In particular, the cell biomass was generally enhanced at 12 and 18 h as the expression of metH increased (Figure S2), and an almost 5-fold increase in cell biomass was observed in all engineered strains at 24 h (Figure 3A). These values obtained at 24 h corresponded to the recovery of cell biomass to more than 90% of the EpACRcob strain without metE deletion. This indicated that sufficient methionine could be synthesized by the deregulated metH expression in the engineered strains (Figure 4). On the contrary, specific coenzyme B12 production decreased as a result of the enhanced metH expression (Figure 3B). In particular, specific production generally decreased as the metH expression increased, by up to 1.78-fold for the EDMB5cob strain with the highest metH expression, as intended. Nevertheless, the significantly enhanced cell biomass increased the overall titer (Figure 3D). Among the strains, the EDMB2cob strain with a moderate metH expression level showed the highest coenzyme B12 production (13.2 µg/L), which was 3.31-fold and 1.80-fold higher than that of the EDMBcob and EpACRcob strains, respectively. In addition, the PCNs of the EDMB1-5cob strains showed a similar tendency to that of the EDMBcob strain (Figure S3), indicating that they were still affected by the synthetic auxotroph system. Collectively, these results show that the synthetic auxotroph system with precisely controlled metH expression could be successfully applied to coenzyme B12-producing strains.
DISCUSSION
Recent studies have successfully demonstrated the production of coenzyme B12 in the well-known microbial workhorse E. coli (Fang et al., 2018; Ko et al., 2014). However, the heterologous expression of synthetic genes has been an obstacle to further improvement. In the present study, a novel strategy to enhance coenzyme B12 production was designed and applied to a previously constructed coenzyme B12-producing E. coli strain (Ko et al., 2014). Initially, the coenzyme B12 synthetic auxotrophic system was constructed using the characteristics of methionine synthesis in E. coli. Next, this system was applied to the previously reported producer strain to greatly improve coenzyme B12 production. We found modulated copy numbers of the coenzyme B12-producing plasmids after the application of the auxotrophic selection system, which could explain the observed improvement in production.
Optimizing complex metabolic pathways such as that of coenzyme B12 to enhance production has been labor-intensive work in metabolic engineering (Smanski et al., 2014). Our successful application of the auxotroph system, which significantly enhanced coenzyme B12 production without optimization of individual gene expression, suggests that the auxotroph system can be effectively used for complex pathway optimization. In addition, it was shown that the selection efficiency could be optimized by precisely regulating the expression of a key enzyme, which implies that the system can be tuned for different production levels, like other selection-based strategies (Rugbjerg et al., 2018; Xiao et al., 2016).
The strategy could also be applied to other coenzyme B12-producing strains such as P. denitrificans and P. freudenreichii. These strains also possess cobalamin-dependent homocysteine methyltransferase and are known to have a similar regulation system (Ainala et al., 2013; Falentin et al., 2010). Given that these strains produce high amounts of coenzyme B12, the expression of the cobalamin-dependent methyltransferase may need to be optimized at lower levels. Alternatively, the coenzyme B12-binding residues could be intentionally disrupted to lower the affinity, based on the elucidated structure of the cobalamin-dependent methyltransferase (Drennan et al., 1994; Seo et al., 2018). In addition, other cobalamin-dependent enzymes, such as methylmalonyl-CoA mutase (essential for odd-chain fatty acid synthesis) and glycerol dehydratase (essential for glycerol utilization), could be utilized in a similar strategy (Banerjee and Ragsdale, 2003; Neil and Marsh, 1999). Moreover, these coenzyme B12 synthetic auxotrophic systems could be applied to evolutionary engineering approaches (Seok et al., 2018). The short-term change in PCNs was validated to enhance coenzyme B12 production in the current study; however, it could also be used to screen genetically effective mutants as a powerful screening method in long-term evolution. Taken together, we expect our strategy to be widely applied for the efficient production of coenzyme B12.
Limitations of the Study
The system described in the present study may be limited by the binding affinity of MetH to coenzyme B12. When this system is applied to a superior coenzyme B12-producing strain (Fang et al., 2018), its operational range needs to be investigated or modified, if necessary. As discussed, the coenzyme B12-binding residues or expression levels can be altered for effective tuning of the dynamic range. Furthermore, other enzymes involved in different reactions could also be considered. Nonetheless, the improved coenzyme B12 production with our parental strain (EpACRcob) shows great potential. The present study could be valuable for understanding heterogeneity during biochemical production and its minimization by introducing a selection strategy.
METHODS
All methods can be found in the accompanying Transparent Methods supplemental file.
SUPPLEMENTAL INFORMATION
Supplemental Information can be found online at https://doi.org/10.1016/j.isci.2020.100890.

Table S1. Bacterial strains and plasmids used in this study, related to Figures 1-4

The extraction of plasmid and genomic DNA was conducted using the GeneAll® Plasmid SV kit and the GeneAll® Exgene™ Cell SV kit (GeneAll; Seoul, Korea), respectively. The GeneAll® Expin™ Gel SV and GeneAll® Expin™ CleanUp SV kits were used for the purification of DNA. Q5® High-Fidelity DNA polymerase, T4 DNA ligase, restriction enzymes, and NEBuilder® HiFi DNA assembly reagents were purchased from New England Biolabs (Ipswich, MA, USA). Synthetic oligonucleotides (Table S2)

All bacterial strains and plasmids used in the study are listed in Table S1. The EpACRcob strain and three compatible plasmids (Ko et al., 2014) were a kind gift from Prof. Sunghoon Park (Ulsan National Institute of Science and Technology). Sequences of the synthetic promoters (J231 promoter series) were obtained from the Registry of Standard Biological Parts (http://parts.igem.org). Plasmid cloning was performed using E. coli Mach1-T1R (Thermo Scientific; Waltham, MA, USA) as the host.

For the inactivation of chromosomal metE of E. coli BL21(DE3), the lambda-red recombination method was used with the pKD46 and pCP20 plasmids (Datsenko and Wanner, 2000). An FRT-KanR-FRT fragment was prepared by amplifying the pM_FKF plasmid (Noh et al., 2018) with a metE_del_F/R primer pair. For replacing the early stop codon on btuB, the btuB_s.d.m oligonucleotide was directly introduced into the EDM strain via electroporation.
To control the expression level of metH, the native metH was replaced and synthetic expression cassettes were introduced. Initially, the native gene was removed similarly to the metE deletion, except that a metH_del_F/R primer pair was used. The synthetic expression cassettes for metH expression were integrated using the phage-integration method (Haldimann and Wanner, 2001). To construct the pCDFHMI1-5 plasmids required for the integration method, a vector fragment was amplified using pCDF_F/R and pCDFduet-1 as a template. The vector fragments were assembled with fragments amplified using HK022_F/R with pBAC-L6-PT3T4Ei as a template and metH_F1-5/metH_R with genomic DNA of the BL21(DE3) strain, respectively. Thereafter, the fragments for integration were amplified with metH_int_F/R and the pCDFHM1-5 plasmids as a template to remove the replication origin. The fragments were subsequently digested using BamHI and re-circularized for genomic integration (Lee et al., 2016).
Cell culture medium and conditions

To validate coenzyme B12 auxotrophic growth, a seed culture was performed in the RB12 medium. After 12 h of cultivation, cell pellets were washed twice with the B12 auxotrophic medium. Next, the cultures were inoculated into a 15-mL test tube containing 3 mL of the B12 auxotrophic medium at an optical density at 600 nm (OD600) of 0.05. OD600 values were monitored while cultivating the cells at 30°C with agitation at 200 rpm.

The culture conditions for B12 production were determined by referring to the previously optimized conditions (Ko et al., 2014). Cultivation was conducted in 300-mL Erlenmeyer flasks containing 50 mL of the RB12 medium. For seed culture, a single colony was inoculated into a 15-mL test tube containing 3 mL of the RB12 medium. After 12 h, the seed culture was inoculated into fresh RB12 medium at an OD600 of 0.05 and cultured. When the OD600 reached 0.4, isopropyl-β-D-thiogalactoside (IPTG) was added to a final concentration of 0.5 mM for induction. OD600 values were monitored while cultivating the cells at 30°C with agitation at 200 rpm. The pH was periodically measured using an Orion™ 8103BN ROSS™ pH meter (Thermo Scientific) and adjusted to 7.0 with a 10 M NaOH solution.

All analytical cell cultures were performed in biological triplicate. To maintain the plasmids, appropriate concentrations of antibiotics were added to the media (50 µg/mL streptomycin, 50 µg/mL kanamycin, and 34 µg/mL chloramphenicol).
Quantification of cell biomass and coenzyme B12
Cell biomass was measured using a UV-1700 spectrophotometer (Shimadzu; Kyoto, Japan) at a wavelength of 600 nm, and one OD600 unit was converted to 0.31 g/L of dry cell weight (DCW) (Jo et al., 2019). For validation of coenzyme B12 auxotrophic growth, OD600 was measured using the VICTOR 1420 Multilabel Counter (PerkinElmer; Waltham, WA, USA). Similarly, OD600 was measured at 12, 18, and 24 h for quantification of cell biomass during cell culture for coenzyme B12 production.

To quantify the amount of coenzyme B12, cell pellets after 24 h of cultivation were harvested, washed twice, and resuspended in 50 mM sodium acetate buffer (pH 4.0). The cells were completely lysed using a French® Press FA-078A (Thermo Electron Co., Waltham, MA, USA). The cell lysate was centrifuged, and the supernatant was filtered using a 0.22-µm syringe filter to remove the cell debris. Coenzyme B12 was purified from the filtered supernatant using an EASI-EXTRACT® VITAMIN B12 immunoaffinity column (R-Biopharm AG; Darmstadt, Germany) (Ko et al., 2014). The resultant samples were analyzed with ICP-MS (Element XR; Thermo Scientific) at the RIST Analysis & Assessment Center (Pohang, Gyeongbuk, Korea). The amount of cobalt ion, which corresponds to coenzyme B12 (Karmi et al., 2011; Ko et al., 2014), was measured. High-purity argon gas was used as the reaction gas, and the operating conditions were as follows: auxiliary gas flow, 0.80 L/min; sample gas flow, 1.1 L/min; and ICP-RF power, 1250 W. The measurement was performed in biological triplicate.
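A minimal sketch of the conversions described in this section: the OD600-to-DCW factor is from the text, the molar masses are standard values for cobalt and adenosylcobalamin (one cobalt atom per coenzyme B12 molecule), and the cobalt reading is hypothetical, for illustration only:

```python
# Molar masses (g/mol): cobalt and adenosylcobalamin (coenzyme B12).
M_CO = 58.93
M_ADOCBL = 1579.6

def biomass_g_per_l(od600, factor=0.31):
    """Convert an OD600 reading to dry cell weight, using the paper's
    conversion of 0.31 g DCW/L per OD600 unit."""
    return od600 * factor

def b12_from_cobalt(cobalt_ug_per_l):
    """Each coenzyme B12 molecule carries one cobalt atom, so the B12
    mass equals the measured cobalt mass scaled by the molar-mass ratio."""
    return cobalt_ug_per_l * (M_ADOCBL / M_CO)

# Hypothetical readings for illustration only:
dcw = biomass_g_per_l(od600=6.5)              # ~2.0 g DCW/L
titer = b12_from_cobalt(cobalt_ug_per_l=0.15)  # volumetric titer
specific = titer / dcw                         # per-g-DCW specific production
```

Dividing the volumetric titer by the biomass, as in the last line, yields the specific production values that the Results section reports per gram DCW.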
Quantification of plasmid copy number and gene transcripts

The PCN was determined as previously reported (Kang et al., 2018; Škulj et al., 2008). Briefly, cultured samples were harvested and denatured by heating at 95°C for 10 min.
"year": 2020,
"sha1": "24e7ae6d5581cd25698dc762f9609e46d2843f7b",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2589004220300742/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5623165ebf21877cf1d851cd5d59e664184be092",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Neurometabolic characteristics in the anterior cingulate gyrus of Alzheimer's disease patients with depression: a 1H magnetic resonance spectroscopy study
Background: Depression is a common comorbid psychiatric symptom in patients with Alzheimer's disease (AD), and the prevalence of depression is higher among people with AD compared with healthy older adults. Comorbid depression in AD may increase the risk of cognitive decline, impair patients' function, and reduce their quality of life. However, the mechanisms of depression in AD remain unclear. Here, our aim was to identify neurometabolic characteristics in the brain that are associated with depression in patients with mild AD.
Methods: Thirty-seven patients were evaluated using the Neuropsychiatric Inventory (NPI) and Hamilton Depression Rating Scale (HAMD-17), and divided into two groups: 17 AD patients with depression (D-AD) and 20 non-depressed AD patients (nD-AD). Using proton magnetic resonance spectroscopy, we characterized neurometabolites in the anterior cingulate gyrus (ACG) of D-AD and nD-AD patients.
Results: Compared with nD-AD patients, D-AD patients showed lower N-acetylaspartate/creatine (NAA/Cr) and higher myo-inositol/creatine (mI/Cr) in the left ACG. NPI score correlated with NAA/Cr and mI/Cr in the left ACG, while HAMD correlated with NAA/Cr.
Conclusions: Our findings show neurometabolic alterations in D-AD patients. Thus, D-AD pathogenesis may be attributed to abnormal activity of neurons and glial cells in the left ACG.
Background
Depression is a common comorbid psychiatric symptom in patients with Alzheimer's disease (AD). It is associated with cognitive decline in AD patients, and reduced quality of life in patients and their caregivers [1]. Currently, there are no effective therapeutic interventions. Therefore, discovering and understanding the mechanisms and biological signatures related to depression in AD has clinical value.
Neuroimaging studies have yielded mixed results on regional brain volume differences between depressed and non-depressed AD patients. Son et al. [2] reported that AD patients with depression show decreased gray matter volume in the left inferior temporal gyrus compared with non-depressed AD patients. Moreover, Lebedev et al. [3] reported cortical thinning in left parietal and temporal brain regions in AD patients with depressive symptoms compared with non-depressed AD patients. They also reported a strong negative correlation between cortical thickness in the precuneus and parahippocampal cortex and total tau (t-τ) (an AD biomarker) in the cerebrospinal fluid of depressed compared with non-depressed AD patients. Hu et al. [4] reported a significant correlation between depression assessed with the Neuropsychiatric Inventory (NPI) and gray matter atrophy in the left middle frontal cortex. However, this was not supported by Bruen et al. [5]. Studies using single photon emission computed tomography show reduced perfusion in the middle frontal gyrus [6] and dorsolateral prefrontal cortex (DLPFC) [7] regions in AD patients with depression.
These conflicting findings may partly be owing to limitations of the image analysis methods. For example, metabolic data from positron emission tomography (PET) studies are usually expressed relative to the whole brain or a specific structure; because such data do not reflect an absolute metabolic rate, PET may only indirectly reflect regional neuronal activity. In addition, structural measures are not sensitive to neuronal function at early stages [8]. Histological studies of postmortem samples from well-characterized depressed individuals who committed suicide show altered cortical dendritic branching of pyramidal neurons in the ACC [9]. Additionally, the ratio of primed over ramified microglia in the dorsal ACC is significantly increased in depressed individuals who committed suicide compared with healthy controls [10]. Hence, investigation of neurons and glial cells may aid our understanding of depression pathogenesis in AD patients.
Proton magnetic resonance spectroscopy ( 1 H-MRS) is a noninvasive magnetic resonance imaging (MRI) method to measure brain metabolite concentrations, including the neuronal marker N-acetylaspartate (NAA), the membrane phospholipid product choline (Cho), the second messenger metabolite and gliosis marker myo-inositol (mI), and total creatine (Cr), which includes creatine and phosphocreatine and is used as an internal standard [11]. 1 H-MRS has been used in clinical studies of major depressive disorder (MDD) [12,13] and AD [11]. Studies in patients with MDD compared with healthy controls report decreased NAA [14], decreased mI [15], and normal Cho/Cr [16] in the anterior cingulate cortex (ACC). Furthermore, treatment of MDD with lamotrigine and antidepressants may increase NAA and mI in the ACC [14,15]. 1 H-MRS findings also suggest that AD patients exhibit reduced NAA/Cr and elevated mI/Cr ratios in the medial temporal lobe, posterior cingulate gyrus, temporoparietal lobe, hippocampus, and prefrontal lobe [11].
Recently, several studies have used 1 H-MRS in AD patients with behavioral and psychological symptoms. One study of 30 AD patients found significantly lower NAA/Cr and higher mI/Cr ratios in patients with delusions compared to those without. Additionally, patients with activity disturbances had significantly lower NAA/Cr in the ACC compared to those without, but there was no relationship between depression and NAA/Cr or mI/Cr [17]. Another study of 36 AD patients, 19 patients with amnestic mild cognitive impairment (aMCI), and 23 cognitively normal (CN) subjects revealed statistically significant correlations between mI/Cr in the anterior cingulate gyrus (ACG) and total NPI scores. However, the relationship between the depressive domain of the NPI and brain metabolites was not specifically investigated [18]. Tsai et al. [19] reported a positive correlation between depression in AD and the Cho/Cr ratio in the left DLPFC, and the mI/Cr ratio in both the left and right cingulate gyrus. In contrast, in a trial of donepezil versus memantine treatment for AD, no significant correlations between 1 H-MRS measures in the ACG and depression were reported [20]. Therefore, the underlying pathophysiology of AD with depression remains unclear. Inconsistencies may result from heterogeneity in previous research subjects; for example, AD patients may have more than one component of behavioral and psychological symptoms. Furthermore, few studies of depression in AD patients using 1 H-MRS have been performed to date.
The main purpose of our study was to investigate the relationship between depression and brain metabolites in AD patients. Given the known involvement of the ACG in MDD and AD, we examined its contribution to depression in AD patients, and hypothesized that metabolic changes would be observed in depressed AD patients. Thus, we investigated ACG metabolites using 1 H-MRS in AD patients with and without depression.
Patients
Thirty-seven patients with mild AD were recruited between December 2013 and December 2014 from Zhejiang Mental Health Center, China. All patients met the criteria for probable AD from the National Institute of Neurological and Communicative Diseases and Stroke and Alzheimer's Disease and Related Disorders Association [21]. Patients' scores ranged from 18 to 24 on the Mini-Mental State Examination (MMSE), and all patients had a score of 1 on the Clinical Dementia Rating (CDR) scale. A diagnosis of depression was confirmed using the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) [22]. Depression severity was assessed using the Hamilton Depression Rating Scale (HAMD-17) and the NPI [23]. AD patients with depression (D-AD) had a score of 1 on the depression domain and a score of 0 on all of the other 11 NPI domains, and scored ≥ 7 on the HAMD-17. Based on recommendations for inclusion criteria for clinical trials, depression-domain NPI (D-NPI) scores of ≥ 4 are indicative of clinical significance [24]. Non-depressed AD patients (nD-AD) did not meet DSM-IV criteria for depression. All scales were administered by trained neuropsychologists. All patients were right-handed, had more than 6 years' education, and were aged between 65 and 80 years. Caregivers were a patient's spouse and/or a first-degree relative, and also had more than 6 years' education.
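For illustration, the grouping rule above can be written as a small filter. This is a minimal sketch with hypothetical record field names (mmse, cdr, dsm_iv_depression, hamd17, npi_depression) that are not from the study's database:

```python
def classify(patient):
    """Assign a record to 'excluded', 'D-AD', or 'nD-AD' per the stated criteria."""
    # Common inclusion criteria: MMSE between 18 and 24, CDR score of 1.
    if not (18 <= patient["mmse"] <= 24 and patient["cdr"] == 1):
        return "excluded"
    # D-AD: DSM-IV depression diagnosis, HAMD-17 >= 7, and a clinically
    # significant NPI depression-domain score (>= 4).
    depressed = (patient["dsm_iv_depression"]
                 and patient["hamd17"] >= 7
                 and patient["npi_depression"] >= 4)
    return "D-AD" if depressed else "nD-AD"
```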
Patients with a history of neurological disorders (e.g., active epilepsy), psychiatric illnesses (e.g., schizophrenia, major depression, or mania), traumatic brain injury, or significant alcohol and/or other substance abuse, and those taking psychotropic medications, were excluded. To minimize the risk of concomitant vascular pathology, subjects were also excluded if dual-echo MR images showed two or more hyperintense lesions with diameters ≥ 5 mm, or more than four hyperintense lesions with diameters between 0 and 5 mm.
All participants (or their legal representatives) provided formal written consent. The research protocol was approved by the Ethics Committee of Tongde Hospital of Zhejiang Province (No.2013-001).
MRI and MR spectroscopy
MRI and 1 H-MRS were acquired for all study participants on a 3.0 Tesla unit (Siemens MAGNETOM Verio; Siemens Medical Systems, Erlangen, Germany) using an eight-channel phased-array head coil. Foam padding and headphones were used to reduce head motion and scanner noise. Advanced shimming, as provided by the manufacturer, was performed automatically to optimize field homogeneity. A two-dimensional chemical shift imaging (CSI) sequence, using a point-resolved spectroscopy (PRESS) technique, was used to acquire water-suppressed 1 H-MRS simultaneously from the left and right cingulate regions. The imaging parameters were: echo time (TE) = 35 ms, repetition time (TR) = 1500 ms, and averages = 4, with 'weighted' mode used for k-space acquisition, matrix size = 16 × 16 without interpolation, field-of-view (FOV) = 160 mm, voxel of interest (VOI) = 80 mm, slice thickness = 1.5 cm, and 'fully excited VOI' switched on. T2-weighted transverse, sagittal, and coronal gradient echo images (TR/TE = 600/95 ms) were acquired to localize the 1 H-MRS signal. A line was drawn perpendicular to the anterior commissure-posterior commissure (AC-PC) line, cutting across the medial section of the genu of the corpus callosum. The VOI in the anterior cingulate was located in front of this line and consisted mainly of the dorsal ACC and part of the lateral prefrontal cortex (PFC) [25,26] (Figure 1).
Total acquisition time for both MRI and MRS was approximately 10 min. Spectral data were post-processed using a commercially available spectral analysis software package (Syngo spectroscopy post-processing package, Siemens Healthcare, Erlangen, Germany). The spectrum covered a frequency range of 4.3-0.1 ppm. Peak areas of NAA, mI, and Cr were estimated, and the area under each peak was expressed as a ratio relative to Cr in the same spectrum (Figure 2). Voxel placement for spectroscopy and all data analysis were performed by one experienced radiologist who was blind to each subject's diagnosis, and confirmed by another radiologist and one physician. VOI position was carefully adjusted to minimize white matter and cerebrospinal fluid contamination.
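As a rough illustration of this post-processing step (not the Syngo package's fitting algorithm), peak areas can be approximated by integrating the spectrum over narrow windows around the expected chemical shifts and forming ratios to Cr. The spectrum below is synthetic, and the peak positions, widths, and amplitudes are assumptions:

```python
import numpy as np

# Synthetic 1H spectrum over the reported 0.1-4.3 ppm range, with Gaussian
# peaks at typical chemical shifts (NAA ~2.02 ppm, Cr ~3.03 ppm, mI ~3.56 ppm).
ppm = np.linspace(0.1, 4.3, 2048)

def gaussian(center, amplitude, width=0.04):
    return amplitude * np.exp(-0.5 * ((ppm - center) / width) ** 2)

spectrum = gaussian(2.02, 1.4) + gaussian(3.03, 1.0) + gaussian(3.56, 0.6)

def peak_area(center, half_window=0.15):
    """Integrate the spectrum over a small window centred on one peak."""
    mask = np.abs(ppm - center) <= half_window
    return spectrum[mask].sum() * (ppm[1] - ppm[0])  # uniform grid spacing

naa_cr = peak_area(2.02) / peak_area(3.03)  # NAA/Cr ratio
mi_cr = peak_area(3.56) / peak_area(3.03)   # mI/Cr ratio
```

With equal peak widths the ratios reduce to the amplitude ratios (here 1.4 and 0.6), which is why expressing areas relative to Cr cancels scanner-dependent scaling.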
Data analysis
Statistical analyses were performed using the Statistical Package for the Social Sciences, Version 15.0 (SPSS, Inc., Chicago, IL, USA). Demographic and clinical characteristics of D-AD and nD-AD patients were compared using independent-samples t-tests and χ 2 tests. Differences between metabolite ratios relative to Cr (NAA/Cr, Cho/Cr, and mI/Cr) in the left and right ACG were tested using independent-samples t-tests. Pearson's correlation analysis was used to investigate the correlation between metabolite ratios and NPI and HAMD scores. Statistical significance was set at P < 0.05.
Results
Table 1 lists the demographic and clinical data for both groups. Thirty-seven mild AD patients were included in the study; seventeen had depressive symptoms. Scores on the depression domain of the NPI (D-NPI) ranged between 4 and 9, with a mean of 6.76 ± 2.1. There were no significant differences in sex, age, education, or MMSE between the D-AD and nD-AD groups (P > 0.05). Table 2 shows the metabolite ratios obtained by 1 H-MRS. There were significant differences in the NAA/Cr and mI/Cr ratios in the left ACG: the D-AD group had a lower NAA/Cr ratio than the nD-AD group (1.35 ± 0.18 vs. 1.50 ± 0.23; t = −2.161, P < 0.05) and a higher mI/Cr ratio (0.66 ± 0.13 vs. 0.58 ± 0.09; t = 2.213, P < 0.05). No differences were found between groups in the right ACG (P > 0.05). In the D-AD group, Pearson's correlation analysis detected statistically significant correlations between NPI scores and the NAA/Cr and mI/Cr ratios of the left ACG (r = −0.717, P = 0.001; r = 0.492, P = 0.045), and between HAMD scores and the NAA/Cr ratio of the left ACG (r = −0.778, P < 0.001).
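A minimal sketch of these comparisons in SciPy, using simulated samples whose means and SDs mimic the reported left-ACG NAA/Cr values; the data themselves are ours, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated NAA/Cr ratios: D-AD 1.35 +/- 0.18 (n = 17), nD-AD 1.50 +/- 0.23 (n = 20).
d_ad = rng.normal(1.35, 0.18, size=17)
nd_ad = rng.normal(1.50, 0.23, size=20)

# Independent-samples t-test between the two groups.
t_stat, p_value = stats.ttest_ind(d_ad, nd_ad)

# Pearson's correlation between (simulated) NPI depression scores and NAA/Cr
# within the D-AD group.
npi = rng.integers(4, 10, size=17).astype(float)
r, p_corr = stats.pearsonr(npi, d_ad)
```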
Discussion
To the best of our knowledge, this is the first study to examine neurometabolic characteristics by 1 H-MRS in the ACG of AD patients with depression. The major findings of our study are: (1) compared with nD-AD patients, D-AD patients show lower NAA/Cr and higher mI/Cr ratios in the left ACG; and (2) correlations between NPI and HAMD scores and the NAA/Cr and mI/Cr ratios of the left ACG suggest that abnormalities in ACG neurometabolites are associated with depression in AD.
The ACG is involved in the integration of cognition and affect [27,28]. It is also implicated in MDD, in which abnormal metabolism [29,30], decreased blood flow [31], reduced gray matter volume [32][33][34], and ACG involvement in disrupted brain networks [35] have been observed. In a post-mortem examination of depressed individuals who committed suicide, alterations in dendritic branching and microglial phenotypes were observed in the ACC [9]. A recent neuroimaging study of depression in AD found that depression is associated with damage to structures in specific neural networks and functional disruption of cortical neural systems involving the ACG [8]. Therefore, the ACG is considered a crucial region in the neuronal circuitry underlying depression pathophysiology. Our observation that D-AD patients have abnormalities in ACG neurometabolites is consistent with this. Accordingly, it is likely that the ACG also underlies depressive symptoms in AD patients. NAA is the most prominent 1 H-MRS peak and is found only in the nervous system. It is a marker of neuronal density or function, osmoregulation, and energy homeostasis. There is a direct relationship between NAA synthesis, oxygen consumption, and ATP production in the central nervous system [36,37]. Reductions in NAA levels measured by 1 H-MRS are a recognized marker of neuronal loss or dysfunction in depressive disorders [38].
Previous studies have demonstrated that MDD and bipolar disorder patients have lower NAA/Cr levels than healthy controls in the PFC, medial frontal cortex, and ACG [14,15,39,40]. Longitudinal research also shows that NAA/Cr in the pregenual ACC of patients with MDD significantly decreases over 9-10 months and, at baseline, has a logarithmic negative association with illness duration [41]. Furthermore, successful treatment of MDD with antidepressants is associated with normalization of NAA levels in the ACC [14,15]. More specifically, a growing body of research suggests that MDD patients exhibit decreased beta nucleoside triphosphates compared with healthy controls [42,43]. These studies, along with our finding of lower NAA/Cr in D-AD patients, implicate neuronal degeneration and dysfunction in the ACG of depressed AD patients. mI is considered a marker of glial proliferation [44]. It is also involved in the regulation of neuronal osmolarity, metabolism of membrane-bound phospholipids, and the phosphoinositide secondary messenger pathway. Several animal studies of depression report higher mI/Cr in the PFC of depressed compared with control animals [45,46]. Lirng et al. [47] observed that migraine patients with MDD had higher mI/Cr ratios in the bilateral DLPFC compared with patients without MDD. Furthermore, mI/Cr in the right DLPFC positively correlates with scores on the Beck Depression Inventory, suggesting that increased mI/Cr within the DLPFC might be associated with MDD in migraine patients. Torres-Platas et al. [48] reported hypertrophic astrocytes in the ACG of 10 well-characterized depressed suicide cases. Recently, they reported increased microglial priming and increased gene expression of microglial markers in the dorsal ACG in postmortem brain samples from middle-aged depressed individuals who committed suicide [10]. The higher mI/Cr in the ACG of D-AD patients observed in our study is consistent with these previous findings.
We suggest that higher mI/Cr in the ACG region reflects increased glial content and activation. Therefore, our study provides further evidence for the involvement of ACG glial cells in depression of AD patients.
We also found that D-AD patients have abnormal neurometabolic changes in the left ACG, but not the right ACG, suggesting asymmetrical alterations across hemispheres. Prior studies have also shown evidence of asymmetrical ACG abnormalities in late-life depression (LLD). Disabato et al. [49] found significantly smaller left anterior cingulate thickness in late-onset compared with early-onset LLD subjects. The late-onset group also had more hyperintensities than early-onset LLD subjects. Similarly, Yuan et al. [50] found abnormal left ACG volume in remitted geriatric depression (RGD) patients compared with healthy control subjects. Furthermore, there was a significant correlation between left ACG volume and the Rey Auditory Verbal Learning Test delayed recall raw score in RGD patients. Ritchie et al. [51] confirmed that early-onset and late-onset depression exhibit heterogeneity in etiology, including onset age. These findings indicate that age at onset of depressive symptoms in LLD subjects is associated with differences in cortical thickness, and that the left ACG might be involved in the psychopathology and pathophysiology of RGD. One study using 11 C-Pittsburgh Compound B PET imaging found more amyloid plaques in the ACG of LLD patients compared with controls [52]. Meanwhile, a longitudinal study of brain metabolic changes during the conversion from aMCI to AD found that converters had a significantly greater metabolic decrease in the left ACG than non-converters [53]. Therefore, we speculate that the left ACG might be involved in the psychopathology of depression in AD patients.
Some potential limitations of our study should be taken into consideration. First, we used a semiquantitative MRS approach, in which the intensity of each metabolite is normalized to Cr under the assumption that Cr concentration remains relatively constant across different brain diseases. This is the most frequently used method in clinical MRS studies [54]. However, variations in Cr concentration occur during tissue destruction or in systemic diseases, so subsequent studies with absolute measures should verify our findings. Second, only the ACG was investigated; therefore, even though neurodegenerative processes are suggested in D-AD, caution should be taken in extrapolating the findings to other brain areas. Third, although most demographic and clinical features were relatively balanced between the two groups, the sample size was rather small, and these preliminary results need replication in a larger sample. | 2018-04-03T04:39:06.593Z | 2015-12-02T00:00:00.000 | {
"year": 2015,
"sha1": "a18053a69f9143d0ed1f2f7cc96118a860e2da75",
"oa_license": "CCBY",
"oa_url": "https://bmcpsychiatry.biomedcentral.com/track/pdf/10.1186/s12888-015-0691-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a18053a69f9143d0ed1f2f7cc96118a860e2da75",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
270844662 | pes2o/s2orc | v3-fos-license | Ventricular restoration in adults with huge congenital left ventricular aneurysm: report of two cases
Congenital ventricular aneurysms (CVA) are rare cardiac anomalies that have been predominantly described in the Black population. They are characterized by an akinetic ventricular protrusion that is commonly located at the basal and apical segments. Although the diagnosis is often incidental and the majority of patients are asymptomatic, life-threatening events such as persistent ventricular arrhythmias, CVA rupture, and heart failure are not uncommon. However, no standardized therapy is currently available, and good outcomes have been reported with both conservative and surgical management. We report the cases of two young Black African patients with huge symptomatic CVA lesions who underwent successful surgical repair with a ventricular restoration technique. Both patients presented with chest pain and dyspnea. Chest X-ray and transthoracic Doppler echocardiography suggested the diagnosis, which was confirmed by thoracic computed tomography angiography and magnetic resonance imaging. Both patients underwent successful surgery. This case report aims to revisit the diagnostic and therapeutic approach to this rare pathology in our professional environment.
Introduction
Congenital ventricular aneurysms (CVA) are rare entities with an estimated incidence between 0.02% and 0.34% [1,2]. A CVA appears as a saccular and akinetic extension of the ventricular wall, located at the apical segment in the majority of cases [2]. The left ventricle is the most commonly affected, and the differential diagnoses include congenital ventricular diverticula (CVD) and acquired ventricular aneurysms, such as post-myocardial infarction aneurysms and infectious conditions such as viral cardiomyopathy and myocardial involvement in Chagas disease and tuberculosis [3][4][5][6]. Despite the risk of life-threatening complications such as rupture, there has been some reluctance to recommend surgical therapy unless symptoms are present or cardiac lesions are associated. We report the cases of two young adults who underwent ventricular restoration for symptomatic and complicated CVA in our institution. This case report aims to revisit the diagnostic and therapeutic approach to this rare pathology in our professional environment.
Case 1
Patient information: a 27-year-old Black male patient was referred to our department from an outside hospital with a diagnosis of a large apical ventricular pseudo-aneurysm.The patient had no familial history of cardiovascular disease or sudden death.
Clinical findings: he had complained for 3 weeks of recurrent headaches, general body weakness, thoracic compression, and sporadic episodes of palpitations.
Diagnostic assessment: the electrocardiogram showed a Cornell index of 28 mm with ST-T changes in leads V5 and V6. While a color-Doppler transthoracic echocardiogram ruled out valvular disease, congenital anomalies, and ventricular dysfunction, an apical flow acceleration was found, raising suspicion of a left ventricular diverticulum or aneurysm. The latter was confirmed on subsequent magnetic resonance imaging (MRI), which described an akinetic, fibrotic, and partially thrombosed cavity, suggesting an apical CVA (Figure 1), communicating with the left ventricle through a 2-centimetre defect. Additional investigations with 24-hour Holter electrocardiography, chest X-ray, and serology (VDRL, HIV) did not reveal other abnormalities.
Therapeutic interventions:
oral anticoagulation prophylaxis and beta-blocker therapy were initiated, with poor improvement of the symptoms after 3 weeks of treatment. Following a clinical discussion between the patient's cardiologist and the patient, a consensual decision was taken for elective surgical repair. The patient underwent a successful ventricular restoration with a bovine pericardial patch under cardiopulmonary bypass (Figure 2).
Follow-up and outcome of interventions:
the postoperative course was uneventful and the patient was discharged from the hospital 6 days after surgery.
Informed consent: the patient reported his full consent to publish his case.
Case 2
Patient information: a 29-year-old woman with a known history of a thoracic mass was admitted to the emergency department of our institution for progressive tachypnea.
Clinical findings: she had complained for two months of recurrent thoracic compression and dyspnea.
Diagnostic assessment: a chest X-ray revealed a left para-cardiac opacity without parenchymal involvement. The electrocardiogram showed a first-degree atrioventricular block and left ventricular hypertrophy, with a Cornell index of 27 mm, a Cornell product of 3240 mV·ms, and ST-T changes in leads V5 and V6. An emergency angio-computed tomography scan revealed a huge thrombosed cavity communicating with the left ventricle at the apical segment (Figure 3).
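The Cornell index and Cornell product quoted in both cases are simple arithmetic ECG criteria. The sketch below encodes the standard definitions (R-wave amplitude in aVL plus S-wave amplitude in V3, with a 6 mm adjustment for women in the voltage-duration product); the example amplitudes and the 120 ms QRS duration used in the usage note are our assumptions, not measurements from the report:

```python
def cornell_voltage(r_avl_mm, s_v3_mm):
    """Cornell voltage index: R amplitude in aVL plus S amplitude in V3 (mm)."""
    return r_avl_mm + s_v3_mm

def cornell_product(r_avl_mm, s_v3_mm, qrs_ms, female=False):
    """Cornell voltage-duration product (mm*ms); 6 mm is added for women."""
    voltage = cornell_voltage(r_avl_mm, s_v3_mm) + (6 if female else 0)
    return voltage * qrs_ms
```

For instance, RaVL = 20 mm and SV3 = 7 mm give an index of 27 mm, and at a QRS duration of 120 ms the unadjusted product is 27 × 120 = 3240, matching the figure reported for Case 2.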
Therapeutic interventions:
considering the risk of rupture, urgent surgery was planned. A diagnosis of a large CVA was made intra-operatively, and the patient underwent surgical ventricular restoration and mass resection (Figure 4).
Follow-up and outcome of interventions:
the postoperative course was uneventful and the patient was discharged home on the 10th postoperative day. A postoperative angio-computed tomography scan showed no residual shunt and good restoration of the left ventricular shape (Figure 3).
Informed consent: the patient reported her full consent to publish her case.
Discussion
CVA lesions form a large spectrum of heterogeneous patterns, varying from isolated myocardial protrusions, described mostly in autopsy studies, to voluminous protuberances, as observed in the current series. While the precise mechanism of CVA development remains unknown, embryogenic defects within the endocardial tube from the 4th embryogenic week have been hypothesized [7]. Various reports support no sex predominance, while the majority of CVA patients come from African and American regions [2]. Morphologically, CVAs appear as akinetic and fibrous protuberances of the ventricular wall, with a predominant location at the apical segment [1,2]. They are histologically distinct from congenital ventricular diverticula (CVD), whose wall presents intrinsic similarities with myocardial tissue, including contractile activity. As opposed to CVA, CVD is commonly associated with cardiac and/or extra-cardiac lesions [8]. Other differential diagnoses are post-infarction ventricular aneurysms and aneurysmal lesions from infectious processes such as human immunodeficiency virus infection, Chagas disease, and tuberculosis [3][4][5][6]. The heterogeneity of CVA lesions with regard to size, location, thrombosis, and associated anomalies translates into a variety of clinical presentations, from asymptomatic status in the majority to more disabling events such as persistent arrhythmias, heart failure, rupture, and sudden death [9].
The diagnosis of CVA is often incidental during routine investigations for other diseases. Although CVA can be associated with electrical abnormalities [10], these events are relatively rare [11]. Pelliccia et al. suggested a classification of these electrocardiographic abnormalities into 3 groups (distinct, mild, and minor) [12]. According to that classification, the abnormalities observed here were mild, although the sensitivity, specificity, positive predictive value, and negative predictive value of a 12-lead ECG for the diagnosis of CVA are low [11]. CVA lesions can be primarily detected by color-Doppler echocardiography, which is widely accessible and provides a reliable description of ventricular morphology and associated cardiac anomalies, although small apical lesions may be missed [13,14]. Complementary imaging with magnetic resonance imaging (MRI), computed tomography scan, and conventional angiography is often required in doubtful cases and when surgical correction is considered. These modalities provide specific details on CVA tissue, size, and kinesia (to differentiate CVA from CVD), and on associated lesions such as congenital anomalies in children and/or coronary disease in adult patients [15,16]. To date, no consensus exists in clinical practice for the management of CVA owing to its scarce prevalence. Indeed, limited data from case series have presented heterogeneous outcomes with several strategies, including surgical repair, antiarrhythmic treatment, and conservative management, among others [2,9]. In a study by Mayer et al., no cardiac death was observed among patients with CVA and CVD who underwent conservative management over a 13-year follow-up period [17]. Similar experiences with non-surgical approaches were reported by other authors, with imaging follow-up from the fetal period [18,19]. However, surgery was mostly described in cases with associated anomalies or those with clear symptoms, following congestive heart failure, thromboembolism, and sustained
ventricular arrhythmias. The type of surgery depends on the size and location of the CVA, in addition to the associated anomalies. Simple direct suturing of small defects (<2 cm) is often sufficient, whereas larger lesions require more complex procedures such as ventricular restoration with circular patches, as described in post-infarction ventricular aneurysm surgery [1,20]. Care should be taken to avoid restrictive dysfunction following ventricular repairs, and mitral valve insufficiency resulting from displacement or distortion of the papillary muscles. While the operative risk is relatively low (<2%) for isolated lesions, it may increase significantly in cases with associated anomalies. In our case, the repair of a giant thrombosed CVA with partial rupture required femoro-femoral cannulation for cardiopulmonary bypass and moderate hypothermia, considering the risk of complete rupture during chest entry. In cases presenting with sustained ventricular arrhythmias, antiarrhythmic treatment, including radiofrequency ablation or implantable cardioverter-defibrillator (ICD) implantation, has been described as lone therapy or in combination with surgery [21,22].
Conclusion
Fatal complications of CVA could be more common than expected. Thus, an accurate evaluation of ventricular wall abnormalities during routine imaging analysis should be considered in patients presenting with symptoms of heart failure, stroke, or ventricular arrhythmias. Surgical treatment is effective and should be considered in cases refractory to conservative therapy.
Figure 2: intraoperative views of the congenital ventricular aneurysm, A) apical defect, B) closure of the defect with a bovine pericardium
Figure 3: angio-computed tomography scan views: comparative aspects, preoperative (A, B), postoperative (C, D)
Figure 4: excised congenital ventricular aneurysm with diffuse thrombosis (A), histopathology of the congenital ventricular aneurysm wall (B, C) showing fibrotic tissue with diffuse inflammatory cells and a focus of calcification
| 2024-07-01T05:06:01.257Z | 2024-05-06T00:00:00.000 | {
"year": 2024,
"sha1": "79fe748534346ac73909bc4beffd250e12ed806b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "79fe748534346ac73909bc4beffd250e12ed806b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256242437 | pes2o/s2orc | v3-fos-license | A filter method for inverse nonlinear sideways heat equation
In this paper, we study a sideways heat equation with a nonlinear source in a bounded domain, in which the Cauchy data at x = X are given and the solution in 0 ≤ x < X is sought. The problem is severely ill-posed in the sense of Hadamard. Based on the fundamental solution to the sideways heat equation, we propose to solve this problem by the filter method of degree α, which generates a well-posed integral equation. Moreover, we show that its solution converges to the exact solution uniformly and strongly in L^p(ω, X; L²(R)), ω ∈ [0, X), under a priori assumptions on the exact solution. The proposed regularized method is illustrated by numerical results in the final section.
Introduction
In this paper, we determine the surface temperature u(x, t) for 0 ≤ x < X from the known temperature measurement u(X, t) = φ(t) and heat-flux measurement ∂u/∂x(X, t) = ψ(t), when u(x, t) satisfies the following system:

∂u/∂t = ∂²u/∂x² + f(u)G(x, t; u), (x, t) ∈ (0, X) × R,
u(X, t) = φ(t), t ∈ R,
∂u/∂x(X, t) = ψ(t), t ∈ R,    (1.1)
u(x, t)|_{t→±∞} = 0, x ∈ (0, X),

where φ, ψ ∈ L²(R) are given functions. The source terms f(u) and G(x, t; u) are globally Lipschitz functions satisfying (2.15a) and (2.15b), respectively. The problem, called the inverse nonlinear sideways heat equation (INSHE for short), is a model of a problem where one wants to determine the temperature on both sides of a thick wall, but where one side is inaccessible to measurements. In many dynamic heat transfer situations, one wishes to determine the temperature on the surface of a body, where the surface itself is inaccessible for measurements. The physical situation at the surface may be unsuitable for attaching a sensor, or the accuracy of a surface measurement may be seriously impaired by the presence of the sensor. Typical practical applications are the estimation of the heat flux and the temperature at the surface of the body under investigation, e.g., re-entry vehicles, calorimeter-type instrumentation, and combustion chambers [1,2,5,11,13,15,16,19,20]. In such cases, one is restricted to interior measurements, and from these one wishes to compute the surface temperature. Cannon (1984) [4] considered the direct problem for the homogeneous heat equation in the quarter plane (x ≥ 0, t ≥ 0):

u_t = u_xx, x > 0, t > 0,
u(0, t) = φ(t), t ≥ 0,    (1.2)
u(x, 0) = 0, x ≥ 0.

The functions φ(·) and u(x, ·) are to be in L²(R) (φ and u vanish for t < 0). The author proved that, for each φ ∈ L²(R), (1.2) has a unique solution u with u(x, ·) ∈ L²(R) for each x ≥ 0. Fredrik Berntsson (1999) [3] considered the sideways heat equation

u_t = u_xx, 0 ≤ x < 1, t ≥ 0,
u(1, t) = g(t), t ≥ 0,    (1.3)
u(x, ·) bounded.

The author used the spectral method to solve problem (1.3). Error estimates for the regularized solution were derived, and a procedure for selecting an appropriate regularization parameter was given.
In recent years, the linear homogeneous problem (1.1), i.e., the case f(u)G(x, t; u) = 0, has been studied by many authors, and various numerical methods have been proposed, e.g., the boundary element Tikhonov regularization method (Lesnic et al. (1996) [9]), the conjugate gradient method (Hao (2012) [8]), the difference regularization method (Xiong et al. (2006a) [17]), the "optimal filtering" method (Seidman & Elden (1990) [14]), the Fourier method (Xiong et al. (2006b) [18]), the quasi-reversibility method (Elden (1987) [6], Liu & Wei (2013) [10]), and the wavelet, wavelet-Galerkin, and spectral regularization methods (Elden et al. (2000) [7], Reginska & Elden (1997) [12]), to mention only a few.
More important, but also more challenging, is the semilinear sideways heat equation, in which the heat source depends nonlinearly on the temperature; it occurs in many applications related to reaction-diffusion. The function f(u)G(u) is a special type of locally Lipschitz function. For example, if we choose f(u) := u and G(x, t; u) := sin u (each globally Lipschitz individually), then the product f(u)G(x, t; u) = u sin u is locally but not globally Lipschitz. Although there are some works on the nonlinear case, the literature on the case of locally Lipschitz sources f(u)G(x, t; u) is quite scarce. Our results extend problem (1.3), and we propose a new filter method to establish regularized solutions of problem (1.1) in the case of a locally Lipschitz function f(u)G(x, t; u). The paper is organized as follows. In Sect. 2, the formulation of the problem and the regularization method is given. In Sect. 3, a stability estimate in L^p(ω, X; L²(ℝ)), ω ∈ [0, X), is proved under an a priori condition on the exact solution and the locally Lipschitz source term. Finally, we present a numerical result illustrating the proposed regularized method in Sect. 4.
Mathematical problem and mild solution of (INSHE)
For w ∈ L²(ℝ), we denote its Fourier transform and L² norm in the usual way, and suppose that the solution of problem (1.1) is represented as a Fourier transform. Throughout this paper, we let W(x, t; u) = f(u)G(x, t; u) for all (x, t) ∈ (0, X) × ℝ. From (1.1), we obtain a system of second-order ordinary differential equations in the frequency variable, which we solve after some direct calculation, treating the case ξ = 0 separately. From (2.5), the exact form of u is given by (2.6), and we say that u is a mild solution of problem (1.1) if u satisfies the integral equation (2.6). The three kernel functions appearing in (2.7) are unbounded as functions of the variable ξ. Consequently, small errors in high-frequency components can blow up and completely destroy the solution for 0 < x < z < X. A natural idea to stabilize the problem is to replace them by bounded approximations. In a natural way, we replace the terms in (2.7) by regularized versions, where δ > 0 is a small positive number representing the noise level and γ(δ) > 0 is a small regularization parameter. We introduce the first regularized solution U^δ_γ(δ), whose kernels are defined for all 0 ≤ y ≤ X and ξ ∈ ℝ below. We now introduce some notations and assumptions that are needed for our analysis.
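The stabilization idea just described — replacing the unbounded Fourier multipliers by bounded approximations — can be sketched numerically. The following Python sketch, for the homogeneous case W = 0 and ignoring the flux (sinh/ψ) term, uses a simple spectral cutoff as the bounded filter; the paper's specific filter, the function name, and the grid parameters are illustrative assumptions:

```python
import numpy as np

def regularized_continuation(phi_noisy, X, x, gamma, L=20.0):
    """Spectral-cutoff sketch for the homogeneous sideways problem (W = 0):
    continue the measurement phi(t) ~ u(X, t) back to a depth x < X.
    The exact multiplier cosh((X - x)*sqrt(i*xi)) is unbounded in xi, so
    frequencies whose multiplier exceeds 1/gamma are simply discarded;
    gamma plays the role of the regularization parameter gamma(delta)."""
    n = phi_noisy.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * L / n)  # angular frequencies
    mult = np.cosh((X - x) * np.sqrt(1j * xi))
    mult = np.where(np.abs(mult) <= 1.0 / gamma, mult, 0.0)  # bounded filter
    return np.real(np.fft.ifft(mult * np.fft.fft(phi_noisy)))

# noisy Gaussian temperature trace "measured" at x = X (illustrative data)
rng = np.random.default_rng(0)
t = np.linspace(-20.0, 20.0, 256, endpoint=False)
phi_delta = np.exp(-t**2) + 1e-3 * rng.standard_normal(t.size)
u_reg = regularized_continuation(phi_delta, X=1.0, x=0.5, gamma=1e-3)
```

Without the cutoff, the amplified high-frequency noise would dominate the continuation; with it, the reconstruction stays bounded at the cost of losing the discarded frequency content.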
Definition 2.1 (Gevrey space) The Gevrey class of functions of order θ ≥ 0 is defined below and is equipped with the corresponding norm. We also use the Banach spaces of measurable (respectively, continuous) functions w. We assume the following: (H1) The data φ, ψ ∈ L²(ℝ) are noisy and are represented by the observation data φ^δ, ψ^δ, where δ > 0 is a small positive number representing the noise level.
3 Error estimate in L^p(ω, X; L²(ℝ)), 0 ≤ ω < X

First, we state the following lemmas, which will be useful.
Our main result is given in the next theorem.
Second part: estimate of ‖U^δ_γ(δ) − u‖_{L^p(ω,X;L²(ℝ))}. From (2.6) and Lemma 3.2, and then from Lemma 3.1, we obtain the required bounds; moreover, as in (3.15), we obtain a further inequality, and from Gronwall's inequality we conclude the estimate. Next, we estimate ‖U^δ_γ(δ) − V_γ(δ)‖_{L^p(ω,X;L²(ℝ))}. Using the basic inequality (a + b + c)² ≤ 3(a² + b² + c²) and Hölder's inequality, together with calculations similar to (3.15), Lemma 3.1, and the Lipschitz property of W, we obtain the corresponding inequality.

Based on the Fourier transform, for F ∈ L²(ℝ), the exact solution of problem (4.1)-(4.3) is given in closed form. The data φ, ψ ∈ L²(ℝ) are noisy and are represented by the observation data φ^δ, ψ^δ ∈ L²(ℝ), where δ > 0 is a small positive number representing the noise level (δ → 0+). We recall the regularized solution U^δ_γ(δ), where cosh_γ(δ)(y√(iξ)) and sinh_γ(δ)(y√(iξ)) are defined for all 0 ≤ y ≤ X and ξ ∈ ℝ as before. Next, we consider the problem of computing the Fourier transform numerically. Let m, n ∈ ℝ with m < n, put h_t = (n − m)/N_t and t_i = i·h_t + n, i = 1, …, N_t, and note that ξ_k = 2π(k − N_t
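The numerical computation of the Fourier transform on a uniform grid can be sketched as a simple Riemann-sum quadrature (a minimal sketch: the grid variables only loosely mirror the discretization above, and the test function is illustrative; the convention is the unitary one with the factor (2π)^(−1/2)):

```python
import numpy as np

def fourier_quadrature(w, m, n, num, xi):
    """Riemann-sum approximation of
    w_hat(xi) = (2*pi)**(-1/2) * integral_m^n w(t) * exp(-i*xi*t) dt
    on the uniform grid t_i = m + i*h with h = (n - m)/num."""
    h = (n - m) / num
    t = m + h * np.arange(num)
    return h * np.sum(w(t) * np.exp(-1j * xi * t)) / np.sqrt(2.0 * np.pi)

# exp(-t^2/2) is its own Fourier transform in this unitary convention,
# so the quadrature at xi = 1 should be close to exp(-1/2)
approx = fourier_quadrature(lambda t: np.exp(-t**2 / 2.0), -10.0, 10.0, 2000, 1.0)
exact = np.exp(-0.5)
```

Because the integrand decays to machine precision inside [-10, 10], the equispaced sum converges very fast here; for slowly decaying data a larger window or an FFT-based evaluation on the whole ξ grid would be preferable.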
From these results, we observe that the errors at δ = 0.001 are greater than those at δ = 0.0001 and smaller than those at δ = 0.01. Thus, smaller errors in the input data yield more accurate results, which verifies the theoretical estimates.
"year": 2020,
"sha1": "2be6e16a4203aedf7a929dd43a4fa93e7359c15b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13662-020-02601-4",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "2be6e16a4203aedf7a929dd43a4fa93e7359c15b",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
Electroporation in Head-and-Neck Cancer: An Innovative Approach with Immunotherapy and Nanotechnology Combination
Simple Summary

This review summarizes the biological characteristics of head-and-neck squamous cell carcinoma (HNSCC) and its current treatments. Furthermore, it provides insight and outlook on the relationship between electroporation and its implementation (in combination with nanotechnology and immunotherapy) in the treatment of H&N cancers.

Abstract

Squamous cell carcinoma is the most common malignancy that arises in the head-and-neck district. Traditional treatment can be insufficient in the case of recurrent and/or metastatic cancers; for this reason, more selective and enhanced treatments are being evaluated in preclinical and clinical trials to increase the in situ concentration of chemotherapy drugs and promote selective antineoplastic activity. Among cancer treatment types (i.e., surgery, chemotherapy, radiotherapy), electroporation (EP) has emerged as a safe, less invasive, and effective approach to cancer treatment. Reversible EP, using an intense electric stimulus (i.e., 1000 V/cm) applied for a short time (i.e., 100 μs), determines a localized electric field that temporarily permeabilizes the tumor cell membranes while maintaining high cell viability, promoting cytoplasmic uptake of antineoplastic agents such as bleomycin and cisplatin (electrochemotherapy), calcium (Ca2+ electroporation), and siRNA and plasmid DNA (gene electroporation). The higher intracellular concentration of antineoplastic agents enhances antineoplastic activity and promotes controlled tumor cell death (apoptosis). As secondary effects, localized EP (i) reduces the capillary blood flow in tumor tissue ("vascular lock"), lowering drug washout, and (ii) stimulates the immune system to act against cancer cells. After years of preclinical development, electrochemotherapy (ECT), in combination with bleomycin or cisplatin, is currently one of the most effective treatments for cutaneous metastases and primary skin and mucosal cancers that are not amenable to surgery.
To reach this level of clinical evidence, in vitro and in vivo models were developed preclinically to evaluate the efficacy and safety of ECT on different tumor cell lines and animal models, and to optimize the dose and administration routes of the drugs as well as the duration and intensity of the electric field. Improvements in reversible EP efficacy are under evaluation for HNSCC treatment, where the focus is on the development of a combined treatment involving EP-enhanced nanotechnology and immunotherapy strategies.
The main risk factors for HNSCC were identified through epidemiological studies and classified by the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) [3]. The main exogenous risk factors include tobacco, alcohol consumption, exposure to environmental pollutants, and infection by viral agents. Human papillomavirus (HPV-16 and HPV-18) is the most common oncogenic viral agent, and HPV-associated HNSCCs arise primarily from the palatine and lingual tonsils of the oropharynx, whereas HPV-negative HNSCCs begin in the oral cavity, hypopharynx, and larynx. Around 75-85% of HNSCCs are HPV-negative, and they are associated with a poorer prognosis than HPV-positive HNSCCs [4,5]. Moreover, epigenetic alterations such as DNA methylation, histone covalent modifications, chromatin remodeling, and non-coding RNAs are involved in HNSCC (HPV-negative) tumor progression and resistance to traditional therapy [6].
Differences between HPV-positive and HPV-negative HNSCC are also highlighted by distinct gene expression, gene mutation, and immune profiles (Figure 1b). The Cancer Genome Atlas (TCGA) reports comprehensive data on copy number alterations, mutational profiles, and mRNA expression from more than 520 human HNSCCs [7]. Compared with other tumor types, HNSCC is more frequently driven by the loss of tumor suppressors, consisting of losses of chromosomal regions and multiple genetic alterations [7]. The tumor suppressor protein p16 INK4a is encoded by the CDKN2A gene and is considered a prognostic factor for HPV-positive oropharyngeal cancer. Clinical practice guidelines for HNSCC use the p16 immunohistochemistry (IHC) test to identify HPV-positive cancers, and in case of positive results, other specific HPV tests are planned to confirm the cancer origin [5,8].
Histologically, the progression of HNSCC follows an ordered series of steps that can be classified according to specific gene alterations/mutations [2]: epithelial cell hyperplasia, characterized by loss of 9p21 and consequent downregulation of tumor suppressor genes (TSGs) such as CDKN2A.
Invasive carcinoma, in which loss of 6p, 4q27, and 10q23 is observed.
Second primary tumors (SPTs) and metastases localized at distinct anatomical sites (esophagus, lungs, skin), which can express the same molecular abnormalities as the primary tumor or different markers.
Depending on the HNSCC stage and type, specific guidelines and pharmacologic treatments are approved and used in the clinic [5,9,10]. Briefly, first-line treatment includes surgical resection, followed by adjuvant radiation or chemotherapy in addition to radiation (known as chemoradiation or chemoradiotherapy, CRT), depending on the cancer stage and pathological risk factors [2,5]. A suitable treatment strategy should therefore be discussed in a multidisciplinary team including not only surgery and medical oncology but also specialists involved in diagnosis (radiologist, nuclear medicine physician, and pathologist) and in supportive care (nutritionist, pharmacist, researcher, psychologist, physiotherapist, and occupational/speech therapist). Innovative treatments, recently approved by the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), include monoclonal antibodies for selective immunotherapy, depending on the biomarkers overexpressed by the tumor, in recurrent and/or metastatic HNSCC [11,12].
In general, early-stage disease of the oral cavity, larynx, hypopharynx, and p16-negative and p16-positive oropharynx is treated with a single-modality approach: radiotherapy (RT) or conservative surgery. RT for stage I disease requires a total dose of 66-70 Gray (Gy); in advanced disease stages (III-IV), the cumulative dose can be increased to 80.5 Gy [13].
Minimally invasive surgical techniques, such as robotic surgery and laser microsurgery, are widely used to resect limited tissue portions in early-stage disease, preserving organ functionality and improving patient quality of life (QoL) [14]. In the case of locally advanced disease (stages III and IV of the oral cavity, larynx, and hypopharynx), the standard treatment option can be surgery, sometimes in combination with RT ± chemotherapy, or exclusive CRT. In this case, surgery requires the complete resection of the diseased tissue and musculoskeletal, vascular, and/or nerve reconstruction to restore physiological and bodily functionality. Traditionally, organ/tissue reconstruction is performed using autologous materials; for example, a resected esophagus is replaced with an autologous intestinal conduit [15]. In this regard, advances in tissue engineering and regenerative medicine are producing synthetic, biodegradable, and biocompatible engineered scaffolds that are emerging as alternative and useful organ substitutes [16,17]. Some clinical trials are currently underway for the validation of engineered scaffolds also for application in the H&N district (NCT01997437, NCT02949414, NCT01977911, NCT02770209, NCT01242618, and NCT04633928) (www.clinicaltrial.gov, accessed on 28 October 2022).
CRT is also used as a standard treatment, alone or in combination with surgery. The most common chemotherapy drugs used clinically are cisplatin, carboplatin, and 5-fluorouracil (5FU) [18], in combination with taxanes (such as paclitaxel and docetaxel) [19]. In more detail, cisplatin (CisPt) is an antineoplastic agent widely used in many cancer therapies; its mechanism of action is based on DNA binding and consequent cross-linking, which inhibits cell replication and induces apoptosis. Radiotherapy combined with three-weekly administrations of 100 mg/m² cisplatin is the accepted therapeutic standard in HNSCC [20].
Bleomycin (BLM) is a cytotoxic glycopeptide antibiotic produced by Streptomyces verticillus. Its activity induces DNA strand breaks and the production of superoxide radicals that cleave DNA. After intravenous injection, bleomycin remains at a high concentration in the blood for about one hour and is then mainly excreted in the urine. Pharmacokinetic studies of BLM showed higher concentrations in the skin, peritoneum, and lungs [21]. The EMA approved BLM (15 U (USP)/vial) as a powder for solution for injection (local/intratumoral injection), and its use is also allowed in combination with other anticancer drugs or with radiation. Regarding head-and-neck cancer, there is still a role for bleomycin in the (neoadjuvant) treatment of the disease [22-24].
Although chemotherapeutics are effective, their use is accompanied by many severe side effects (e.g., pneumonitis, pulmonary fibrosis, stomatitis and skin changes, fatigue, hair loss, easy bruising and bleeding, infection, anemia, etc.), which reduce the patients' QoL.
Increasing research on HNSCC molecular biology has identified a considerable number of molecular biomarkers that can be used as targets for more selective therapies such as immunotherapy. Among them, EGFR, CD44, CD133, and ALDH1 are overexpressed by HNSCC and cancer stem cells (CSCs) and have prognostic significance [25].
The identified molecular biomarkers of HNSCC and CSCs are reported and explained in Table 1. Among them, one drives the expression of genes promoting cellular proliferation and survival and of genes encoding growth factors and cytokines promoting immunosuppression (IL-6, IL-10, and TGF-beta) [30]; protein tyrosine phosphatase receptors (PTPRs) cause STAT3 hyperactivation in H&N cancers [31]; and PD-L1, a programmed death-ligand transmembrane protein, suppresses the adaptive immune system by binding the receptor PD-1 [32]. EGFR, overexpressed in 80-90% of HNSCC tumors, is associated with low overall survival (OS), while high levels of CD44 are associated with metastasis and poor prognosis. High ALDH1 expression or activity is associated with self-renewal, invasion, and higher metastasization in HNSCC. Moreover, analysis of HNSCC showed that 80% of ALDH1+ cells lie close (<100 µm) to a blood vessel, suggesting that CSCs reside primarily in perivascular niches [33].
Over-expression of PD-L1 (PD-L1+) has been recorded in 24-49% of melanoma cases and is related to faster tumor growth and poor overall survival (OS). PD-L1 is also overexpressed, to varying degrees, in about 80% of gastro-esophageal, colorectal, and bile duct carcinomas and is usually associated with a poor prognosis [34].
Other CSC markers (OCT3/OCT4, SOX2, and NANOG) have been identified, and their levels correlate with tumor grade in oral cancer [35].
Immunotherapy for HNSCC is an innovative and valuable treatment without the potentially devastating side effects of conventional treatments. Basically, immunotherapy works by helping the immune system recognize and attack cancer cells. Using molecular biomarkers as targets, monoclonal antibodies can selectively bind cancer cells or proteins on immune cells while ignoring healthy cells. Incorporating prognostic and predictive biomarkers into clinical management may overcome obstacles to targeted therapies and enable prolonged survival.
Monoclonal antibodies currently used in the treatment of HNSCC are reported in Table 2. Cetuximab (Erbitux) is a chimeric monoclonal antibody (mAb) of the immunoglobulin G1 (IgG1) class. Its affinity for EGFR is approximately 5- to 10-fold higher than that of the endogenous ligands, so it is able to block the binding of these ligands, resulting in inhibition of receptor function.
In 2006, Cetuximab was FDA-approved for H&N cancer treatment, either for locally/regionally advanced SCC in combination with RT or as monotherapy (an initial dose of 400 mg/m² IV over 120 min, followed by weekly doses of 250 mg/m²) for recurrent or metastatic HNSCC progressing after platinum-based therapy. In 2011, the FDA approved Cetuximab for late-stage HNSCC in combination with chemotherapy (recurrent locoregional disease or metastatic HNSCC, in combination with platinum-based therapy and fluorouracil) [39].
Cetuximab induces EGFR internalization, which can lead to its downregulation [40]. Another mechanism of action identified for cetuximab is the induction of antibody-dependent cell cytotoxicity (ADCC) through Fcγ receptors on immune effector cells [41,42]. ADCC is a set of mechanisms by which appropriate subclasses of cells coated with IgG antibodies (IgG1 in humans) are attacked by cell-to-cell cytolysis performed by FcRIIIA (CD16A)-expressing immune cells [43]. In cancer therapy, ADCC is exploited by antibodies that selectively recognize surface proteins on malignant cells. Lattanzio et al. investigated the impact of baseline ADCC on the outcomes of patients presenting locally advanced HNSCC treated with cetuximab and radiotherapy [44]. In this study, patients showing high baseline levels of both ADCC and EGFR had a significantly higher probability of achieving a complete response and a long OS compared with other patients.
Pembrolizumab (Keytruda) is a humanized IgG4 isotype antibody. In 2019, Pembrolizumab was approved by the FDA as a first-line treatment, alone or in combination with platinum and FU, for patients with metastatic or unresectable recurrent HNSCC. Moreover, Pembrolizumab monotherapy is indicated by the EMA for first-line treatment of patients with metastatic or unresectable recurrent HNSCC whose tumors express programmed death-ligand 1 (PD-L1, combined positive score (CPS) ≥ 1).
The Pembrolizumab dose recommended for HNSCC is 200 mg administered as an intravenous (I.V.) infusion (30-min infusion every 3 weeks). Therapy is continued until disease progression or unacceptable toxicity, or for up to 24 months in patients without disease progression. Pembrolizumab efficacy was investigated in a randomized, multicenter, open-label, active-controlled trial conducted on 882 patients with metastatic HNSCC who had not previously received systemic therapy for metastatic disease, or with recurrent disease considered incurable by local therapies (clinical trial reference: KEYNOTE-048, NCT02358031), as shown in Figure 2a-c [37,45]. This emerging PD-1/PD-L1 blockade immunotherapy exhibited more satisfactory curative effects and lower toxicity for patients with advanced HNSCC compared with the standard (Cetuximab + chemotherapy) treatment [46]. Recurrent and/or metastatic HNSCC (50% of patients with locally advanced HNSCC will recur after primary treatment) that is not amenable to surgery and shows PD-L1 expression is currently treated with Pembrolizumab + cisplatin and FU, showing an improvement in overall survival (OS).
In the case of recurrent and/or metastatic HNSCC not expressing PD-L1, Cetuximab combined with platinum-based therapy is the standard of care. In the case of HNSCC progression within 6 months of the last platinum therapy, Nivolumab is both an FDA- and EMA-approved therapy [5].
Recurrent and metastatic HNSCC still remain difficult to treat. The most common metastatic sites are the lung, bone, liver, and skin [47]. Traditional therapies are not able to eradicate all cancer cells, and in some cases surgery is impossible. More selective and enhanced treatments should be used to increase the chemotherapy drug concentration only at the cancer site, enhancing antineoplastic activity.
Innovative approaches for recurrent H&N cancer treatment are under evaluation in preclinical and clinical trials; among these are extracellular vesicles, thermal ablation/hyperthermia, gene therapy, and nano-immunotherapy, as summarized in Figure 3 [11,48-52]. These treatment strategies can be combined to obtain a synergistic and enhanced anticancer effect. In this review, we expand on the application of reversible electroporation (electrochemotherapy, gene electroporation, calcium electroporation) as an HNSCC treatment and on its possible use with immunotherapy and/or nanotechnology-based strategies.
Electroporation
Electroporation/electropermeabilization (EP) consists in the application of a localized electric field that increases the permeability of cell membranes to molecules. In this physically triggered phenomenon, the electric field induces a temporary depolarization of the voltage-gated channels and subsequently increases cell permeability through the formation of hydrophilic pores (≈23 nm radius) [53]. EP of cells in vitro can be used for the introduction of DNA, enzymes, antibodies, and other biochemical reagents. Moreover, EP has begun to be investigated to enhance tumor chemotherapy, transdermal drug delivery, non-invasive sampling for biochemical measurement, and localized in situ gene therapy [54,55].
Briefly, when an electric field pulse (EFP) is applied, a transient change in the permeability and electrical conductivity of the cell membrane (a phospholipid bilayer of thickness h ≈ 3-7 × 10⁻⁹ m) is induced. EFPs must be applied for short times and at high field strengths to achieve membrane permeabilization sufficient to permit the passage of molecules through cell membrane pores. Essentially, the driving force is the physical interaction of the electric field with two deformable materials of different dielectric constants (K): lipids (l), with K_l ≈ 2-3, and aqueous electrolytes, with K_e ≈ 70-80 ≈ K_w (where e denotes electrolyte and w water) [56,57]. At this point, an accumulation of charges due to ion migration occurs, and once a critical threshold is exceeded, the cell membrane undergoes a morphological rearrangement causing the rapid creation of aqueous pathways through lipid-containing barriers in cells and tissues (transient hydrophilic pores are formed). Electroporation occurs when the cell transmembrane voltage (Vm) reaches values (0.5-1 V) much higher than the normal "resting potential" (≈ −0.1 V) of living cells [56,58]. The resting potential is important for two events: the threshold required for permeabilization and the electroporation steps. Because cells are physiologically negatively charged, permeabilization happens first in the area of the cell facing the positively charged electrode, since the membrane capacitance in this area is the first to be exceeded when an external field is applied. Subsequently, the portion of the cell facing the negative electrode is electroporated. Other aspects, such as the extent of permeabilization (the area of the membrane that is permeabilized) on the side facing the positive electrode, can be modulated by the pulse amplitude, i.e., a higher pulse amplitude enlarges the diffusion area through the cell membrane.
Instead, the degree of permeabilization can be controlled by the pulse duration and pulse number, i.e., a longer pulse enhances the perturbation of the membrane in the treated area [59]. It has thus been observed that the cell membrane most sensitive to the electric field is the one closest to the positive electrode, whereas the degree of permeabilization is greater for the membrane facing the negative electrode [60]. Consequently, larger molecules diffuse into the cell through the membrane facing the negative electrode, but the area over which diffusion can take place is larger towards the positive electrode [61].
The electric field parameters that control membrane electroporation are the field strength, pulse duration (nanoseconds, microseconds, or milliseconds), number of applied pulses, and pulse frequency. Usually, nanosecond electric field pulses (nsEFPs) use field strengths of tens of kV/cm (20 kV/cm and greater), while micro- and millisecond pulses use 100-1000 V/cm [57].
The transmembrane potential induced in a cell by an external field, such as an EFP, is usually described by the Schwan equation:

Vm = f · E(t) · R · cos ϕ, (1)

where Vm is the transmembrane potential, f is a cell-shape factor (usually 1.5 for spherical cells), E(t) is the applied external electric field at a given time, R is the cell radius, and ϕ is the polar angle between the direction of E and the specific location on the membrane [62]. Other parameters that can influence membrane electroporation are the composition of the electroporation buffer, the temperature, and the cells' intrinsic properties (size, type, shape, density, and adhesion) [63]. Notably, the smaller the cell radius, the higher the external electric field needed to achieve suitable permeabilization. The electric fields required to permeabilize mammalian cells are thus much lower than those required to permeabilize, e.g., bacteria. It is also evident that, e.g., mitochondria and other intracellular organelles will not be permeabilized by the same electric fields used to permeabilize the cell membrane.
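The steady-state Schwan relation Vm = f·E·R·cos ϕ can be evaluated with a few lines of Python (a minimal sketch; the function name and the example numbers are illustrative). It shows that a 1000 V/cm pulse induces about 1.5 V at the pole of a 10-µm-radius cell, above the 0.5-1 V permeabilization threshold quoted above:

```python
import numpy as np

def schwan_vm(E, R, phi, f=1.5):
    """Steady-state Schwan equation Vm = f*E*R*cos(phi): transmembrane
    potential induced on a spherical cell of radius R (m) by a uniform
    external field E (V/m) at polar angle phi; f = 1.5 for spheres."""
    return f * E * R * np.cos(phi)

# illustrative numbers: a 10-um-radius cell in a 1000 V/cm (1e5 V/m) field
vm_pole = schwan_vm(1.0e5, 10.0e-6, 0.0)   # Vm at the pole facing the electrode
E_threshold = 1.0 / (1.5 * 10.0e-6)        # field (V/m) giving Vm = 1 V at the pole
```

The cos ϕ factor also makes explicit why permeabilization starts at the poles facing the electrodes (ϕ = 0 or π) and vanishes at the equator (ϕ = π/2), and the 1/R dependence of E_threshold shows why smaller cells (e.g., bacteria) need stronger fields.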
Depending on the pulse exposure time and electric field intensity, three different types of EP exist [64]:

1. Reversible EP: the electric field is sufficient (≈1 kV/cm) to exceed the critical threshold, but the cell is still able to return to its initial state (resting potential). Pores form on the microsecond time scale, and membrane resealing takes place over a range of minutes; in an in vivo experiment on mouse skeletal muscle tissue, a 63% resealing time of approximately 9 min was found [65]. Because its action is not irreversibly destructive, this type of EP is used to insert molecules into the intracellular environment (i.e., electrochemotherapy, gene transfer, and calcium electroporation).

2. Irreversible EP: the electric field is extremely high (>2-5 kV/cm), and the number of pores created induces osmotic imbalance or loss of homeostasis, resulting in cell necrosis and death (i.e., non-thermal tissue ablation) [66]. Cell death due to irreversible electroporation is a function of electric field strength and pulse number [67].

3. Thermal irreversible EP: the electric field intensity or exposure time is so high that Joule heating is observed (≈10 kV/cm and T > 50 °C) [68].
Focusing on reversible EP, EFPs cause various effects at different levels. At the membrane level, pore formation (hydrophilic openings) is achieved when ΔVm reaches the threshold, causing an electric breakdown of the cell membrane and lipid reorientation [69]. Pore formation and transport through the pores are conditioned by the EFP and the EP type. However, pore formation is a stochastic event, and cells can modify their size and shape under electric field stimulation. Moreover, the cell type and the medium (conductivity, osmolarity, and solutes) can affect EP results. At the tissue level, a strong EFP causes highly inhomogeneous EP in multicellular tissue, due to differences among cells, the presence of vascularization, and gap junctions between cells, which cause anisotropic electrical properties [70]. To improve EFP homogeneity, the positioning of the EP electrodes should be properly designed so as to maximize the targeted effects on tissue and cells [53,71]. Moreover, clinical application has demonstrated that EP has two other main effects that can contribute to increasing the efficacy of cancer treatment (i.e., cutaneous, mucosal) [72]. First, it causes a local "vascular lock": during electroporation, the perfusion of the treated tissue is blocked as a result of a vasoconstrictor reflex (vasoconstriction of blood vessels), inducing a hypoxic effect [73]. A differential effect between normal and tumor vessels has been demonstrated upon application of EP. In fact, EP has a selective vascular-disrupting action on tumors, destroying small tumor blood vessels (anti-angiogenic effect) without affecting the larger normal blood vessels surrounding the tumor. This effect is amplified when EP is combined with cytotoxic drugs (electrochemotherapy), which also increases the residence time of the molecules at the treated site [74].
Moreover, in tumor blood vessels, a transient vascular constriction induced by EP prevents further bleeding and even stops previous bleeding in the case of hemorrhagic nodules [73,74].
The second important event correlated with EP application is the local activation of immune system cells, causing immune response stimulation and immune cytokine production [75,76]. During EP, uptake of extracellular proteins occurs and, conversely, intracellular proteins escape into the extracellular milieu, acting as a source of damage-associated molecular patterns (DAMPs) that can induce an immunogenic response [77]. Moreover, under local electric stimulation, it has been reported in vitro that macrophage polarization into M1 (pro-inflammatory) or M2 (anti-inflammatory) macrophages can be modulated, and that migration, proliferation, and cytokine production by T cells can be controlled by EFP application [75,78]. In another work, Arnold et al. showed that exogenous electric fields affected the migration, proliferation, and cytokine production of T cells; they demonstrated that human primary T cells migrate directionally to the cathode in low-strength (50-150 mV/mm) electric fields [78]. This immunological stimulation induced by the applied electric field acts synergistically with the chemotherapy drug and the vascular lock in eradicating the tumor. In fact, electrical stimulation is capable of promoting both cell differentiation and cell death in the human cancer environment [79].
Electrochemotherapy (ECT)
The electrochemotherapy technique uses the physical principle of reversible EP to accelerate the penetration of non-permeant or poorly permeant hydrophilic anticancer drugs (e.g., CisPt) into the intracellular environment [69]. The membrane/tissue effects mentioned above for EP help chemotherapy drugs enhance their activity. Membrane poration increases the intracellular drug concentration, enhancing local cytotoxicity (up to 300-700 fold for BLM and 12-70 fold for cisplatin) [80]. The vascular lock entraps drugs in the tumor site and slows down their washout. Finally, the disruption of tumor cells during EP is enhanced during ECT, which induces greater recruitment of antigen-presenting cells (APCs) and release of damage-associated molecular patterns (DAMPs) that, by activating immune cells interacting with pattern recognition receptors (PRRs), act synergistically in fighting cancer by exploiting the immune response [75,81]. Building on this knowledge, mathematical models and optimization techniques have enabled more effective ECT design and electrode positioning by accounting for the effect of tissue inhomogeneity on the electric field distribution and, consequently, on the outcome [82].
ECT is becoming a popular operating procedure for many cancer types, including H&N cancers that do not respond to other first-choice treatments (surgical excision, chemotherapy, radiotherapy), although it is still not approved for all cancer treatments. An advantage of this technique is that the physico-chemical principles on which it works (membrane permeabilization, vascular lock, and immune stimulation) can be applied to all tumor types (skin or subcutaneous tissue) [83]. Furthermore, compared to traditional chemotherapy, ECT employs lower dosages of chemotherapeutic drugs and can be used as a strategy to avoid drug resistance [84]. Studies reported that ECT achieved a faster and more efficient reduction of tumor size than standard chemotherapy for both cutaneous and subcutaneous tumors [85][86][87].
Comprehensive ECT guidelines were published in 2006 by the European Standard Operating Procedures of Electrochemotherapy (ESOPE) project and made a decisive contribution to the standardization of ECT procedures in clinical oncology [88][89][90].
In the pre-treatment examination, patients are selected following specific criteria and their medical history, taking into account any allergy or hypersensitivity to the candidate drugs. Tumor entities, as individual units for treatment, are evaluated by imaging (Magnetic Resonance Imaging, Computerized Tomography, Positron Emission Tomography) in terms of size and number in order to set up a suitable course of treatment and Standard Operating Procedures (SOP). The volume of each identified tumor unit is calculated with the enveloping-ellipsoid formula commonly used in the SOP (Equation (2)) [88,91]:

V = π a b² / 6 (2)

where "a" is the tumor length and "b" is the tumor width. Depending on tumor size and number, the type of anesthesia and the drug administration route are decided; for example, tumors ≤3 cm and fewer than 7 in number are usually treated with local anesthesia (LA) and intratumoral (I.T.) drug injection, whereas for tumors >3 cm or more than 7 in number, general anesthesia and intravenous (I.V.) drug injection are preferred.
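The volume calculation and the anesthesia/route triage above can be sketched as a small helper; the function names and the decision encoding are illustrative, not part of the SOP itself:

```python
import math

def tumor_volume_cm3(a_cm, b_cm):
    """ESOPE ellipsoid approximation V = pi * a * b^2 / 6,
    with a = tumor length and b = tumor width (cm)."""
    return math.pi * a_cm * b_cm ** 2 / 6

def plan_treatment(diameters_cm, n_nodules):
    """Anesthesia/route triage per the thresholds quoted in the text:
    all nodules <= 3 cm and fewer than 7 -> local anesthesia + I.T. drug;
    otherwise -> general anesthesia + I.V. drug."""
    if max(diameters_cm) <= 3 and n_nodules < 7:
        return ("local anesthesia", "intratumoral")
    return ("general anesthesia", "intravenous")
```

For a 2 × 1 cm nodule, `tumor_volume_cm3(2.0, 1.0)` gives ≈1.05 cm³, and a patient with three such nodules would fall in the local-anesthesia, intratumoral-injection arm.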
Drug doses are usually 15,000 IU/m² (I.V.) and 1000 IU/ml (I.T.) for BLM, and 2 mg/ml for CisPt (I.T. only). SOPs for ECT in the treatment of mucosal cancers (e.g., of the oral cavity) and of cutaneous primary and secondary tumors (metastases derived from HNSCC, malignant melanoma, basal cell carcinoma, breast, and salivary gland adenocarcinoma) specify eight pulses of 100 µs duration at appropriate voltages (1 kV/cm) and a frequency of 5 kHz, with the most suitable electrodes depending on tumor location [88].
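The dosing arithmetic can be illustrated with a minimal sketch. Note two assumptions not stated in the SOP text: the Du Bois formula is used here only as one common way to estimate body surface area, and the I.T. doses are treated as per-millilitre concentrations (cf. the 1000 IU/ml BLM used in the trials reported later):

```python
def bsa_dubois_m2(height_cm, weight_kg):
    """Du Bois body-surface-area estimate -- an assumption here,
    since the SOP only states doses per m^2, not a BSA formula."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def bleomycin_iv_dose_IU(bsa_m2):
    """I.V. bleomycin: 15,000 IU per m^2 of body surface area."""
    return 15_000 * bsa_m2

def intratumoral_dose(volume_ml, conc_per_ml):
    """I.T. dosing scales with tumor volume
    (e.g. BLM at 1000 IU/ml, cisplatin at 2 mg/ml)."""
    return conc_per_ml * volume_ml
```

A 170 cm / 70 kg patient (BSA ≈ 1.8 m²) would receive ≈27,000 IU of i.v. BLM, while a 2 ml nodule treated intratumorally with cisplatin would receive 4 mg.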
Different types of electrodes can be selected depending on lesion location and size; for an effective ECT, the electrodes should cover the entire tumor surface. Electrodes that can be used with an electroporator apparatus (i.e., Cliniporator TM, IGEA, Carpi, Italy) are finger electrodes and plate electrodes for small and superficial tumor nodules, or needle electrodes for deeper and thicker tumor nodules. Among the needle electrodes, two parallel needle arrays (4 mm gap between needles) can be selected for small nodules, and a hexagonal array for tumor nodules bigger than 1 cm [89]. Other electrode types were designed and patented to perform better and to reach tumor lesions more easily [92]. Campana et al. studied the effect of electrode position and electric field distribution for non-parallel needles in ECT [93,94]. Their work demonstrated that needle inclinations above α = 30° degrade the homogeneity of the field distribution and should be avoided in living tissue. Moreover, improper insertion of electrodes into tissues can cause local skin burns [95].
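The electrode-selection criteria above can be summarized as a minimal sketch; the depth labels, the return strings, and the hard 30° cut-off are illustrative simplifications of the cited recommendations:

```python
def choose_electrode(depth, largest_nodule_cm, needle_angle_deg=0.0):
    """Electrode choice following the criteria quoted in the text.
    depth: "superficial" (small surface nodules) or "deep" (thicker nodules)."""
    if needle_angle_deg > 30:
        # Inclinations above ~30 degrees make the field distribution
        # inhomogeneous and should be avoided in living tissue.
        raise ValueError("needle inclination > 30 degrees: inhomogeneous field")
    if depth == "superficial":
        return "plate or finger electrode"
    if largest_nodule_cm > 1.0:
        return "hexagonal needle array"
    return "two parallel needle arrays (4 mm gap)"
```

So a superficial 0.5 cm nodule maps to plate/finger electrodes, while a deep 2 cm nodule maps to the hexagonal needle array.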
Follow-up is usually scheduled 4 weeks after the ECT treatment and documented with measurements of tumor size, pictures, and pain reports. If needed, a further treatment can be performed at least 4 weeks after the first one when I.V. BLM was used; for I.T.-treated lesions, instead, further treatment can be applied whenever needed [88].
European Research on Electrochemotherapy in Head and Neck cancer (EURECA project) reported results of the application of ECT in the treatment of mucosal cancers. The study is supervised by the International Network for Sharing Practice in Electrochemotherapy (INSPECT) and included trials from six European institutions [96,97]. ECT in combination with BLM (I.V. 15,000 IU/m 2 ) was used for the treatment of recurrent and/or metastatic H&N cancer using ESOPE guidelines [98]. Patients (n = 43) treated were followed during post-treatment, and tumor response, side effects, and pain were evaluated. The overall objective response after 12 months was 56%, and in 7% of patients, a long-term complete remission was observed. Results indicate that BLM-ECT is a valid treatment for untreatable recurrent H&N mucosal cancer [96].
Other main clinical aspects to be considered in the ECT treatment of cutaneous primary and secondary tumors and mucosal cancer are reported in the literature [89,96,99,100] and summarized in Figure 4. Moreover, clinical trials are ongoing to evaluate ECT efficacy and validate further SOPs for solid tumors such as the liver, pancreas, and lung [101][102][103][104]. However, some possible adverse effects related to ECT treatment should be mentioned, such as pain at the site of application of the electrodes, redness and swelling in the treated area, muscle contraction, nausea, skin breakdown (rash and mild scarring), and rarely even infections at the application site. It was reported by Landström et al. that ECT in the head-and-neck region may have some limitations related to the onset of lethal bleeding, osteoradionecrosis, and fistula [105].
Gene Electroporation (GE)
Cancer gene therapy is based on gene transfection (the insertion of new genetic material into a cell). The introduced DNA is then transcribed into mRNA, which is translated into a specific protein. Gene therapy can be used to obtain different actions: "corrective", "cytoreductive", or "immunomodulatory" [106].
Gene therapy can be used in the treatment of recurrent HNSCC and of localized distant metastatic disease. Many clinical trials are ongoing to validate the safety of gene therapy in HNSCC (for instance, trial no. NCT00009841) or to demonstrate the clinical efficacy of gene therapy combined with chemotherapy or radiation therapy (trial no. NCT00017173) [107].
Brezar et al. combined radiotherapy (RT) with gene electroporation (GE) for the delivery of a plasmid encoding an shRNA against the melanoma cell adhesion molecule (MCAM) [108]. The gene therapy goals were to achieve (i) a vascular-targeted effect mediated by the silencing of MCAM and (ii) an immunological effect mediated by the presence of plasmid DNA in the cytosol activating DNA sensors. Complete antitumor effectiveness of the combined therapy (RT + GE) was achieved in the immunogenic B16F10 melanoma (81% complete responses), while only 27% complete responses were reached in the less immunogenic TS/A carcinoma. Moreover, a significant increase in infiltrating immune cells and a radio-sensitization effect were observed in both radioresistant tumor models, while the expression of IL-12 and TNF-α (pro-inflammatory cytokines of mainly innate immunity) was determined preferentially in the melanoma cancer model. Furthermore, the outcome of the combined-modality treatment seemed to depend on tumor immunogenicity [109]. Sedlar et al. showed increased efficacy of cisplatin ECT by adding intramuscular interleukin-12 (IL-12) gene electrotransfer [110]. They used murine sarcoma (SA-1) and carcinoma (TS/A) models to test the combination of techniques and showed that intramuscular IL-12 gene electrotransfer increased the log cell kill in both tumor models, potentiating the specific tumor growth delay by a factor of 1.8-2 and increasing the tumor cure rate by approximately 20%. Daud et al. used plasmid IL-12 electroporation (six 100-µs pulses at a 1300-V/cm electric field through a six-electrode array) in patients (n = 24) with metastatic melanoma. Post-treatment biopsies revealed an increase in IL-12 protein levels, greater tumor necrosis (52% of patients showed a partial response), and lymphocytic infiltration, demonstrating that IL-12 in combination with electroporation is not only safe to use but also effective and reproducible [111].
Not only has the IL-12 immune cytokine been shown to be effective for electroporation-based cancer therapies, but other cytokine-encoding genes (IL-18, IL-33, IL-15, IFN-α, and IFN-γ) have also been tested in combination with EP to promote T cell production and Th1 cell differentiation, increasing antigen presentation and the recruitment of dendritic cells [112].
GE was studied in a clinical safety trial of the HPV DNA vaccine (pNGVL4a-CRT/E7) for the treatment of H&N cancer patients (trial no. NCT01493154). In this study, the HPV DNA vaccine was injected using an electroporation device (TriGrid TM Delivery System, Ichor Medical Systems, San Diego, CA, USA), and the vaccine's ability to help the body's immune system recognize HPV-positive HNSCC was evaluated. The study proved that electroporation is a safe, tolerable, and promising method for the delivery of the HPV DNA vaccine and should be considered for DNA vaccine delivery in human clinical protocols [113].
Another clinical trial tested HPV-specific immunotherapy in HPV-positive HNSCC patients: a DNA vaccine (INO-3112) was delivered by EP (CELLECTRA™-5P, INOVIO Pharmaceuticals) (trial no. NCT02163057). Compared to other techniques (conventional intramuscular injection and epidermal gene gun-mediated particle delivery), electroporation-mediated intramuscular delivery generated the highest number of E7-specific cytotoxic CD8+ T cells, which correlated with improved outcomes in the treatment of HPV-positive HNSCC. Moreover, the DNA vaccine + EP resulted in significantly higher levels of circulating protein, likely enhancing calreticulin's role as a local tumor anti-angiogenesis agent. The study supports further development of EP as a vaccine/gene delivery technique to enhance immunogenicity, particularly for diseases in which traditional vaccination approaches are ineffective [114].
Calcium Electroporation
Calcium (Ca2+) is an ion normally present in cells at a physiologic intracellular concentration of 10−7 mol/L. It is involved in cell apoptosis, muscle contraction, gene transcription, metabolism, and other functions [115]. In the cell, the endoplasmic reticulum (ER), the sarcoplasmic reticulum (SR, in muscle cells), and the mitochondria act as calcium ion reservoirs. Calcium is pumped into the ER and SR through the sarco-endoplasmic reticulum calcium ATPase (SERCA) [116]. An increase in intracellular Ca2+ concentration above the mentioned physiologic value can lead to toxic effects, since it causes ATP depletion and cell necrosis. By combining EP with Ca2+, it is possible to rapidly increase the intracellular ion concentration in cancer cells. For this reason, calcium electroporation (Ca2+-EP) is emerging as a new cancer treatment able to induce cell necrosis through an increased intracellular calcium influx [117].
The effective anticancer activity of Ca2+-EP was tested both in vitro and in vivo, where it induced effective cell necrosis [118,119]. Moreover, Ca2+-EP was shown to have a stronger effect on tumor cells than on healthy cells, confirming that the healthy tissue surrounding the tumor is less affected by the EP treatment. This can be explained by the fact that the calcium pathway is often modified in cancer cells compared to normal healthy cells: calcium channels, pumps, and exchangers are present in malignant as well as normal cells, but their expression, localization, and/or activity can be altered (e.g., decreased expression of SERCA2 and SERCA3) [120]. These modifications cause reduced calcium transport from the cytosol to the ER in cancer cells.
Plaschke et al. performed a clinical phase I study of Ca2+-EP for recurrent H&N cancer (trial no. NCT03051269), testing the safety of calcium electroporation on mucosal head-and-neck cancers [117]. The patients selected (n = 6) showed recurrence after surgery and RT, were thus not candidates for further surgery, and tolerated palliative chemotherapy poorly. A Ca2+ dose of 0.225 mmol/ml (or 9 mg/ml) was chosen based on the tumor volume (calculated according to the ESOPE guidelines), and a safety margin of 1 cm of tissue surrounding the tumor was also treated [89]. EP was performed using a Cliniporator (model EPS02, IGEA, Carpi, Italy) equipped with a finger electrode with a linear needle array (10 mm long needles), set to a pulse duration of 0.1 ms, an intensity of 1 kV/cm, and a frequency of 1 Hz or 5000 Hz.
Data collected demonstrated that Ca 2+ -EP was safe and did not cause hypercalcemia, cardiac arrhythmias, or other severe side effects. Objective tumor responses were observed in three of the six treated patients, with one patient in complete clinical remission one year after treatment [117].
Vissing et al. performed a non-randomized phase II clinical trial (trial no. NCT04225767) to validate a design protocol and investigate the tumor response to Ca2+-EP in cutaneous tumors [121]. Promising results were obtained using EP set up with 220 mmol/L Ca2+ and eight pulses of 0.1 ms duration and 1 kV/cm amplitude at a frequency of 1 kHz, with a complete response in 66% of the patient panel up to 2 months after treatment.
Falk et al., in a randomized double-blinded phase II study (NCT01941901), evaluated Ca2+-EP for the treatment of cutaneous metastases regardless of tumor histology (cutaneous metastases occur in 9% of all cancer patients) and compared the results with standard ECT [119,122]. The Ca2+ dose was defined as 9 mg/ml (220 mmol/L), while BLM was fixed at 1000 IU/ml; the injected volume used for both treatments was 0.5 ml/cm³ of tumor volume.
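As an illustration of the trial's dosing arithmetic (0.5 ml of solution per cm³ of tumor; Ca2+ at 9 mg/ml, BLM at 1000 IU/ml); the helper names are ours:

```python
def injected_volume_ml(tumor_cm3):
    """Both trial arms injected 0.5 ml of drug solution per cm^3 of tumor."""
    return 0.5 * tumor_cm3

def calcium_dose_mg(tumor_cm3):
    """Ca2+ arm: 9 mg/ml (equivalently 220 mmol/L)."""
    return 9.0 * injected_volume_ml(tumor_cm3)

def bleomycin_dose_IU(tumor_cm3):
    """ECT arm: bleomycin fixed at 1000 IU/ml."""
    return 1000.0 * injected_volume_ml(tumor_cm3)
```

A 2 cm³ metastasis thus receives 1 ml of solution, i.e. 9 mg of Ca2+ in one arm or 1000 IU of BLM in the other.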
EP was performed with a linear array electrode; eight pulses of 0.1 ms duration and 400 V at a frequency of 5 kHz were delivered using a square wave pulse generator (Cliniporator TM , IGEA, Italy). The data achieved after 6 months of treatment showed that Ca 2+ -EP obtained an objective response of 72% (i.e., 66% complete response, 5% partial response) while ECT achieved 84% (i.e., 68% complete response and 15% partial response). This study confirmed that Ca 2+ -EP is feasible in clinical settings with minimal toxicity and is effective in local tumor reduction, and it could further be considered for future treatment in small cutaneous metastases.
Electroporation and Immunotherapy
Immunotherapy is becoming an attractive treatment for HNSCC and for recurrent and/or metastatic H&N cancers [26]. Many approaches have been reported and evaluated, including checkpoint inhibition, T cell transfer therapy, monoclonal antibodies, and cancer vaccination; however, the high doses of immunotherapeutic agents required still cause some side effects [47]. By combining EP with immunotherapy, it is possible to reduce the treatment dose while obtaining a synergistic effect between the EFP and the therapy: the therapy modulates the immune system to produce immune cytokines and agents in the patient's body, while electroporation increases the cellular uptake of these immune agents through the cancer cell membranes.
The synergistic effect of EP immune activation and immunotherapy has been tested: the efficacy of EP in reducing tumor size by initiating a host immune reaction against cancer cells was proved in immunocompetent mice, in contrast with immunodeficient mice, in which no improvement was observed [123].
Nanosecond electric field pulses (NsEFPs), a stimulation modality using electrical pulses lasting only a few hundred nanoseconds (e.g., 30 kV/cm, 100 ns, 200 pulses), showed an inhibitory effect on the proliferation of malignant melanoma, promoting the activation of immune cells (killer T cells) and increasing the release of anti-tumoral cytokines (TNF-α and IL-2). Further investigations could combine this physical therapy with immunotherapy [124].
ECT has been combined with monoclonal antibodies to enhance its antitumoral activity; preclinical evidence suggests that the association of ECT with immune-stimulating agents can be an efficient way to cure the targeted malignant tumors and any distant non-targeted tumor nodules, even undetectable metastases [75,125].
ECT with ipilimumab was used to treat metastatic melanoma, and the results showed a complete cutaneous and visceral response of the 28 treated tumor nodules [126,127].
Ipilimumab (a CTLA-4 inhibitor) and nivolumab (a PD-1 inhibitor) were tested in combination with ECT, and the results demonstrated that ECT + PD-1 inhibitors were more effective [128,129]. Pembrolizumab (a PD-1 inhibitor) was used in combination with ECT in patients with unresectable melanoma with superficial and visceral metastases [130]. The multicenter study showed that the local objective response rate (ORR) was higher in the pembrolizumab-ECT group than in the pembrolizumab group (78% vs. 39%, p < 0.001). Moreover, the one-year local progression-free survival (LPFS) rates were 86% and 51%, respectively (p < 0.001). Table 3 reports the clinical trials that involved electroporation in the treatment of H&N cancers and correlated metastases.
Electroporation and Nanotechnology
Nanotechnology is an emerging field that involves the functionalization and engineering of nanosized materials. The National Nanotechnology Initiative (NNI, 2010) describes nanotechnology as "the understanding and control of matter at dimensions between approximately 1 and 100 nanometers, where unique phenomena enable novel applications" [131]. Thanks to their large surface area, the interaction of nanomaterials (NMs) with cells is maximized, increasing therapeutic effectiveness compared to traditional methods [132]. The result is a decreased risk to the patient and an increased probability of survival [133]. NMs designed for cancer therapy are of various types, such as micelles, liposomes, dendrimers, inorganic nanoparticles (gold, silver, iron), carbon nanoparticles and nanotubes, nanodiamonds, nanoemulsions, viral nanocarriers, polymeric or peptide nanoparticles, and solid lipid nanoparticles; they can be used as stand-alone cancer therapies, as adjuvants, or as part of a combinatorial therapy [134]. Generically, these NMs can be called nanovectors (NVs), a type of targeted delivery vehicle (nanopharmaceutics and bioinspired nanoparticles) that transports nanoscale material [135].
Thanks to their small dimensions, nanomaterials can easily reach tumor sites by exploiting the enhanced permeability and retention (EPR) effect (passive targeting) and protein coronas (PCs). The EPR effect exploits the characteristics of newly formed tumor vessels, whose poorly aligned and defective endothelial cells with wide fenestrations allow the passage of NVs and macromolecules [136].
PCs involve protein adsorption onto particle surfaces, conferring a different biological identity on NVs, which can cause two types of responses: immune blinding (reduced cancer uptake) or immune reactivity (excessive immune activity with increased pro-inflammatory cytokine production). A red blood cell (RBC) membrane coating, PEGylation, or the use of biocompatible materials such as zwitterionic polymers or hydrophilic nanoparticles are strategies that can decrease protein adsorption and thus avoid unwanted responses [137,138]. Further advantages of nanotechnology in cancer treatment are the ability to direct chemotherapies selectively (active targeting) to cancerous cells and neoplasms, to guide cancer surgical resection, and to enhance the therapeutic efficacy of radiation-based and other current treatment modalities. NMs can be functionalized on their surface with ligands (i.e., small molecules, peptides, fluorophores, antibodies) useful to (i) selectively direct NMs in vivo, (ii) exert therapeutic action, and/or (iii) act as imaging agents. Such combinations of actions, for example drug delivery for treatment together with imaging detection for diagnosis, are known as "theranostic" actions [139]. Moreover, depending on the physico-chemical properties of NMs, it is possible to modulate their activity (drug release, energy absorption, re-radiation, and localization) using external triggers such as electric and magnetic fields, hyperthermia, light, and ultrasound [140].
The Nanomedicine Strategic Research and Innovation Agenda (2016-2030) from the European Nanomedicine Community listed the most needed innovations for cancer treatment. These concern improving early detection and diagnosis methods for tumors, circulating tumor cells, and metastases; maximizing the effectiveness of treatment for solid and chemo-resistant tumors; making radiotherapy, immunotherapy, photodynamic, individualized, and hyperthermia therapies more selective and effective; and avoiding side effects through more targeted chemotherapy [141].
NMs have the potential to emerge as alternatives to conventional H&N cancer treatments, as these systems can offer solutions (non-invasiveness, minimized nonspecific delivery failure, reduced multidrug resistance) to the problems encountered with conventional treatments (chemotherapy or radiotherapy) [142][143][144]. The combination of EP with nanomedicines is starting to present itself as a valid adjuvant strategy for the treatment of some diseases, including liver, pancreatic, and bone tumors [145,146]. NVs combined with EP were tested in vitro for H&N cancer treatment: gold nanoparticles (4.79 µg/ml of AuNPs) were used during the electroporation (10 pulses at 200 V, at equal time intervals of 4 s) of Hep-2 laryngeal cancer cells, inducing cell apoptosis, alterations of the cell cycle profile, and morphological changes [147].
Nowadays, liposomes are the most successful drug delivery systems, with a dozen drug products available in the clinic and FDA-approved cancer therapies (e.g., Doxil) [148,149]. Liposomes are self-assembled bilayers of lipid vesicles that possess an aqueous core and a hydrophobic membrane; this peculiar property allows them to encapsulate both hydrophilic and hydrophobic drugs. Depending on the lipid bilayer structure, liposomes can be classified into small unilamellar vesicles (SUV), large unilamellar vesicles (LUV), and giant unilamellar vesicles (GUV), which have a single lipid bilayer, while multilamellar vesicles (MLV) are characterized by more than one lipid bilayer. Proper lipid selection is essential to modulate liposomes' pharmacologic activity: zwitterionic, cationic, or anionic lipids and/or cholesterol have different effects on liposome stability, pharmacokinetics, and delivery of the drug formulation [150]. Further functionalization of the liposome surface, such as grafting poly(ethylene glycol) (PEG), is useful to prevent protein binding and prolong liposome blood circulation while avoiding uptake by the reticuloendothelial system (RES). Moreover, synthetic modification of the terminal PEG molecule with ligands (e.g., monoclonal antibodies and peptides) can be carried out to promote the selective and enhanced accumulation of liposomes in the tumor region with respect to healthy tissues [151,152]. Therefore, liposomes are very versatile drug delivery systems that can achieve passive or active drug targeting depending on their designed formulation (Figure 5A).
Liposomes are considered biocompatible due to their similarity to the composition of biological membranes; they are low-immunogenic, easily modifiable, reproducible, and scalable, and possess a good safety profile [153]. Clinical trials demonstrated that liposomal formulations are less toxic than the drugs alone and have better pharmacological parameters [154]. They represent first-choice drug delivery systems for various diseases; as a matter of fact, liposomes are under evaluation in clinical trials also for H&N cancer treatment (Table 4). The first generation of traditional liposomes was commonly formulated using phospholipids and cholesterol.
The most recent generation of lipid vesicles also includes the use of (i) surface functionalization to reach the specific cell or tissue (targeted therapy), (ii) adjustable and adaptive structure to be administered via transdermal and/or oral routes (transfersome) and, (iii) external stimuli sensitive liposomes (electric and/or magnetic field, ultrasound, UV/light) or internal stimuli sensitive (pH, temperature, redox potential, enzymes, electrolyte concentration) to achieve a spatiotemporal control of drug release (smart delivery system) [155][156][157].
Among smart delivery systems, liposomes and their response to electric pulses (electro-sensitive smart delivery systems) attract a lot of interest in combination with EP treatment to obtain on-site, on-demand drug release [158,159]. Due to the similarity between the phospholipid bilayers of liposomes and cell membranes, EFPs can be used to electroporate both cell and liposome membranes without triggering irreversible damage to the cells. Therefore, when combining nanotechnology-based solutions, the EP goal is to design molecular carriers (e.g., liposomes) of nanoscale dimensions (hundreds of nm) able to guarantee an intracellular or extracellular (close to the target cells) drug release controlled by the application of the external electric field. Liposome poration can permit a fast drug release into the intracellular medium and/or into the extracellular medium close to the cells, and an easy drug uptake by the electroporated cells (the concept is similar to ECT, with the benefit of an increased drug concentration in the tumor due to its vectorization in liposomes) [160][161][162] (Figure 5B). This allows the development of a selective and targeted delivery system where the drug is activated only at the diseased site in the body (e.g., a cancerous tissue), avoiding damage or toxicity to healthy cells and the surrounding tissues.
The most important parameters that influence liposome behavior when subjected to an in vitro electric field are (i) cholesterol ratio, (ii) liposome surface charge, and (iii) liposome size.
Cholesterol is an important component for liposome formation and stability, and it was demonstrated to have a concentration-dependent effect on lipid membrane organization. Raffy et al. evaluated the effect of the cholesterol amount on phosphatidylcholine bilayer stability under an imposed electric field [163]. The study was performed on lipids in the gel state (1,2-dipalmitoyl-sn-3-phosphatidylcholine, DPPC) and in the fluid state (egg 3-phosphatidylcholine, PC). For lipids in the gel state, cholesterol at a concentration of 6% (mol/mol) prevents electropermeabilization, while at concentrations higher than 12% (mol/mol), electropermeabilization and electroinsertion are obtained under milder field conditions (0.3 kV/cm). In the fluid state, instead, cholesterol does not affect the electropermeabilization and electroinsertion of the lipids.
Concerning their size, liposomes >500 nm can cause an immune system reaction in the body, but since their dimensions are closer to those of cells, they require roughly the same electric field amplitude for electroporation. Liposomes <500 nm, instead, show higher cell internalization (due to the EPR effect) and low immunogenicity, but they require higher field amplitudes to be permeabilized, which could cause irreversible electroporation and cell death. Schwan's equation at the steady state explains this direct proportionality between the induced transmembrane voltage (Vm) and the particle radius (Equation (1)).
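The size dependence can be sketched numerically from the steady-state Schwan relation, ΔVm = f · E · r · cosθ with f ≈ 1.5 for a thin spherical shell; the ~1 V threshold and the radii below are illustrative values, not from the cited studies:

```python
import math

def induced_tmp_volts(field_V_per_m, radius_m, theta_rad=0.0, f=1.5):
    """Steady-state Schwan relation: dVm = f * E * r * cos(theta)."""
    return f * field_V_per_m * radius_m * math.cos(theta_rad)

def field_for_tmp(target_Vm, radius_m, f=1.5):
    """Field amplitude needed to reach target_Vm at the pole (theta = 0)."""
    return target_Vm / (f * radius_m)

# Reaching a ~1 V poration threshold at the pole:
cell_field = field_for_tmp(1.0, 5e-6)    # 10 um cell: ~1.3e5 V/m (~1.3 kV/cm)
lipo_field = field_for_tmp(1.0, 50e-9)   # 100 nm liposome: 100x stronger field
```

Because the induced voltage scales linearly with the radius, a 100 nm liposome needs a field about 100 times stronger than a 10 µm cell, which is why sub-500 nm vesicles risk driving the co-exposed cells into irreversible electroporation.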
The strategy to overcome the dependence of the transmembrane potential on the radius of microscopic structures, which holds for pulses with a lower-frequency spectral content, was the application of a second-order model of the induced transmembrane potential (TMP), exploiting nanosecond pulses at higher intensities in the MV/m range (NsEFPs) [162,168].
Denzi et al. explored the applicability of NsEFPs for the remote control of electro-sensitive smart delivery systems, in order to achieve electropermeabilization of the liposome membrane (100, 200, and 400 nm liposomes with a membrane thickness of 5 nm) with electric field amplitudes similar to those needed to simultaneously permeabilize a biological cell membrane [161]. Using unilamellar liposomes (200 nm) and 12 NsEFPs, they demonstrated that it is possible to permeabilize both liposomes and cells with comparable electric field intensities without damaging the cells (10% of the cell membrane area was porated). The results confirmed that the difference between cell and liposome dimensions is not so crucial when using NsEFPs [161,162].
Caramazza et al. studied NsEFP-induced liposome activation in dry and wet experiments. A fluorescent dye was used as a drug model to visualize its release after electric field stimulation (10 ns, 14 MV/m, 2 and 4 Hz). In total, 15-20% drug release was achieved after a single treatment, so multi-dose NsEFP treatment could potentially be used in the future to improve the amount of drug released. Moreover, in their work, the authors considered the effect of the NsEFPs' interaction with liposomes in terms of electromagnetic energy absorption, temperature distribution, and pore density formation [159]. Retelj et al. evaluated the ability of NsEFPs to control intracellular drug release from liposomes (50-500 nm). The aim was to obtain efficient electro-sensitive drug release while keeping the plasma and nuclear membranes of cells intact. The results showed that with shorter pulses (10 ns) and larger liposomes, the possibility of selective electroporation is higher, with smaller risks for cell viability. Liposomes of 500 nm could be electroporated using ≈20 kV/cm, while 50 nm liposomes required a higher amplitude, >150 kV/cm. Moreover, liposomes with higher internal conductivity and lower membrane permittivity are favorably electroporated using NsEFPs. The liposomes' location inside the cell, however, did not influence their electroporation, so liposomes of similar sizes are electroporated simultaneously [160].
Tian et al. demonstrated the in vitro antitumor activity of liposomes loaded with a dual PI3K/mTOR inhibitor (NVP-BEZ235) in combination with irreversible electroporation (IRE) for H&N cancer treatment. After 1 month, in a nude mouse model, only the combination of irreversible EP and drug-loaded liposomes was able to eradicate tumor masses, demonstrating that this combination of treatments could be useful to eradicate cancer and prevent recurrence [169].
These demonstrations open the way to the feasible use of nano-pulses in electro-sensitive smart drug delivery applications, where the electric pulse acts both as a remote controller of drug release from the carrier (i.e., liposomes) and as a facilitator of drug internalization into the cell.
Conclusions and Future Perspectives
The incidence rate of H&N cancers is likely to increase, and the recurrent and metastatic forms in particular are the most frequent and difficult to treat. Improved antitumoral treatments are an important tool with which to counteract this increase in aggressive forms and improve patients' quality of life. In this regard, nanomedicine and medical technologies have clearly shown their clinical potential in cancer treatment. Interestingly, the use of nanovectors (nanopharmaceutics and bioinspired nanoparticles) has demonstrated higher therapeutic potential in the treatment of different types of cancers, including H&N, as they increase target selectivity, attenuate drug toxicity, and protect drugs from rapid clearance [51,135,170].
In this review, the latest innovations in the treatment of H&N cancer exploiting EP were summarized and discussed. Reversible EP can be considered a safe and effective technique able to act synergically, permeabilizing cancer cell membranes and, at the same time, inducing functional vasoconstriction and host immune system activation. Currently, ECT with bleomycin is an emerging treatment modality for superficial and cutaneous metastatic cancer, and its improvement using immunotherapy and/or nanotechnology approaches should be considered and evaluated. The potential of EP and ECT as advanced treatments exploiting nanotechnology-based or immunotherapy solutions has been preliminarily investigated.
Many clinical trials have already started to test the synergistic activity of electrochemotherapy with monoclonal antibodies and the action of liposomal nanocarriers against H&N cancers. Moreover, the effect of the electric field applied to lipid nanosystems for the on-site and on-demand control of drug release in vitro is well described in the literature and awaits practical confirmation in the clinic. | 2022-11-04T18:06:37.415Z | 2022-10-31T00:00:00.000 | {
"year": 2022,
"sha1": "3a82cdf11c933d5ee63c7810f04419bc13b73541",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/21/5363/pdf?version=1667202709",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "493708440ac96f5fcb6f7c7ee93d902e8aad470d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233717212 | pes2o/s2orc | v3-fos-license | Detection of Stephanurus dentatus in wild boar urine using different parasitological techniques
Stephanurus dentatus is a nematode that parasitizes the urinary tract of domestic and wild Suidae, especially in tropical areas. However, there is a lack of information about stephanurosis in wild boar (Sus scrofa), making it necessary to develop sensitive techniques with which to diagnose this pathogen in order to carry out further research. In Spain, a high prevalence of this nematode has been evidenced in Doñana National Park (DNP). The objective of the present work is twofold: first, to compare the efficacy of three parasitological techniques to detect S. dentatus eggs in the urine of infected wild boar: (i) gravity sedimentation, (ii) sedimentation by centrifugation, and (iii) flotation; and second, to determine whether the quantification of eggs can serve as an indicative value of the host's parasite intensity. In order to accomplish these purposes, 27 wild boars from DNP were necropsied, and the urinary system of each animal was examined in order to determine parasite intensity. While all the aforementioned techniques can be used to detect eggs in urine, the most effective in terms of egg quantification are sedimentation by gravity and by centrifugation, as they allow a greater number of S. dentatus eggs to be detected. However, none of the results obtained with these techniques correlated significantly with the number of adult nematodes parasitizing the host, meaning that egg counts in urine can provide only an approximate indication of the parasite intensity of wild boar.
Introduction
Stephanurosis is a parasitic disease of great significance that is widely distributed in domestic and wild Suidae (Stewart et al., 1964) and is caused by Stephanurus dentatus (Diesing, 1839), a nematode known as the pig's kidney worm. Adult nematodes are located in cysts and nodular granulomatous lesions in the perirenal fat, ureters and kidney (Morosco et al., 2017). Clinical signs are rare, but it has been shown that pigs with high parasite intensities can be harmed owing to the destruction of the functional tissue of the kidney (Hale and Marti, 1983). This can lead to the retention of metabolic waste, which results in a poor appetite and, therefore, a loss of weight and weakness in the infected animal (Islam et al., 2015). This nematode has principally been reported in tropical and subtropical countries, and is particularly common in pigs reared in traditional free-range production systems (Batte et al., 1960; Waddall, 1969). Although S. dentatus is mainly present in tropical areas, it has also occasionally been detected in different areas of the Iberian Peninsula (Cádiz, Granada, Madrid and Portugal) (Cordero del Campillo et al., 1994). However, there is a lack of information about S. dentatus in wild boar (Sus scrofa), and its distribution and prevalence in wildlife, therefore, remain unknown. In a recent study conducted in several populations of wild boar in South-central Spain, infection was evidenced only in wild boar from Doñana National Park (DNP), with a remarkably high prevalence (76.5%), suggesting a clustered distribution of this parasite (Moratal et al., 2018). However, since this is the single epidemiological study of stephanurosis conducted in wild boar in Europe, additional research is needed to clarify the role of wild boar as a potential reservoir of S. dentatus for domestic pigs. In fact, there are large areas of extensively managed Iberian pig farms near DNP. Although S. dentatus has never been identified as a problem for the pig industry in the Iberian Peninsula, the wild boar is a reservoir of many diseases that are potentially transmissible to the domestic pig, since there is a flow of pathogens between these two sympatric species (Gortázar et al., 2007). Stephanurosis might, therefore, represent an important health problem that could be highly relevant in specific areas.
Precise diagnosis is a fundamental element when studying the various epidemiological and health aspects of different parasitosis in order to implement surveillance, control and preventive programs. Endo-macroparasite infections in wildlife can be investigated by employing post-mortem examination when the sampling of dead animals is feasible. However, the diagnosis of nematode infections in living hosts relies mostly on non-invasive methods to detect the presence of eggs or larvae in host excreta, such as feces or urine. Individual intensity measures through the use of non-invasive methods can provide important information on nematode distribution throughout the host population, such as transmission rates or seasonal host-parasite interactions in wildlife, thus increasing our understanding of parasite epidemiology (Arneberg et al., 1998;Cattadori et al., 2005).
Considering that stephanurosis has barely been studied in the Iberian Peninsula, and taking into account the hyperendemic focus recently detected in DNP (Moratal et al., 2018), it is necessary to assess sensitive diagnostic techniques with which to design further studies. In this context, the aims of our study were: (i) to compare the efficacy of three parasitological techniques (gravity sedimentation, sedimentation by centrifugation, and flotation) in order to detect S. dentatus eggs in the urine of infected wild boar, and (ii) to determine whether the quantification of eggs in the urine can serve as a proxy for the adult parasite intensity in this host species.
Sampling, necropsy, and parasite intensity
The study was carried out on twenty-seven wild boar (two females and twenty-five males; seven sub-adults and twenty adults), which were shot by park rangers as part of the DNP health-monitoring and population control program, approved by the park's Research Commission in accordance with the management rules established by the Autonomous Government of Andalusia. The sex ratio in this study is skewed in favor of males, since urine samples are more difficult to obtain in females, in which the bladder is normally emptied after death. The necropsy of these animals was performed in the field, and the urinary system, including the perirenal fat, kidneys and ureters, was collected as described by Moratal et al. (2018) from those animals whose urinary tract was in a good condition, in order to determine the intensity of parasites (n = 23). Furthermore, urine samples were collected from those animals in which there was bladder content (n = 27). The bladder was removed from the animal by making a cut in the lower part of the urethra to prevent the contents from leaking out. The bladder was subsequently shaken, and its content transferred to sterile jars. The urine was preserved by the immediate addition of 10% formalin to the total sample volume.
Laboratory procedures
The urinary tract of each wild boar was dissected to detect S. dentatus specimens. Adult nematodes were collected in 70% ethanol and later counted and morphologically identified in accordance with the method described by Skryabin (1991). The sex ratio was calculated by means of the microscopic examination (Motic, B1 Series) of adult nematodes at 40x magnification, as described by Skryabin (1991), and was found to be 1 female per 1.1 males and relatively constant among the infected wild boar (±5.1%). In order to compare the performances of the techniques employed to quantify parasite eggs, each urine sample was analyzed using three different techniques, based on the Manual of Veterinary Parasitological Laboratory Techniques (MAFF, 1986), with some variations, as detailed below. Before carrying out the different techniques, the urine samples were shaken in order to homogenize them. In the case of the gravity sedimentation technique with a Favatti counting chamber (hereafter referred to as "sedimentation"), 4 ml were taken from each sample and placed in a 15 ml falcon tube, where they were left to settle at room temperature for 1 h, after which 4.5 ml of the supernatant was removed. A 1:10 dilution of the sediment was made in distilled water in order to facilitate the counting of eggs. A volume of 0.5 ml was taken from this dilution and deposited in a 1 × 1 cm Favatti chamber. The egg count was performed under a microscope (Motic, B1 Series) at a magnification of 40x. In the case of the sedimentation by centrifugation technique (hereafter referred to as "centrifugation") and the subsequent counting in the Favatti chamber, the protocol followed was the same as that described above, but in this case the sample was sedimented by means of centrifugation at 800×g for 5 min.
Finally, the flotation technique (hereafter referred to as "flotation") was performed by taking 4 ml of each sample and centrifuging it at 800×g for 5 min, after which 4.5 ml of the supernatant was removed. In this case, zinc sulfate solution (specific gravity: 1.200) was used to perform the 1:10 dilution of the sediment and as the flotation solution. The quantification of the eggs was performed in a McMaster chamber, waiting for 5 min after the sample had been deposited, using a microscope at a magnification of 40x. These techniques were compared on the basis of the number of S. dentatus eggs in wild boar urine that each parasitological method detected, since there is no gold-standard technique in the scientific literature or previous studies on the diagnosis of this parasite. This comparison is based on the premise that parasitic diagnostic techniques that detect a greater number of eggs have greater precision (Godber et al., 2015). Therefore, we assumed that the highest values obtained by the different methods were the closest to the real value. The S. dentatus eggs were identified on the basis of the descriptions provided by Skryabin (1991).
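Chamber counts from workflows like the ones above can be converted back to an egg concentration per ml of urine by undoing the dilution and concentration steps. The sketch below follows the volumes mentioned in the protocol (0.5 ml examined, 1:10 dilution, 4 ml of starting urine), but the residual sediment volume is an assumption made purely for illustration:

```python
def eggs_per_ml(chamber_count, chamber_volume_ml, dilution_factor,
                sediment_volume_ml, original_volume_ml):
    """Back-calculate egg concentration (eggs/ml of urine) from a
    chamber count. All volumes and factors are illustrative assumptions."""
    # eggs per ml of the examined dilution, scaled back to the sediment
    eggs_in_sediment = (chamber_count / chamber_volume_ml
                        * dilution_factor * sediment_volume_ml)
    # spread over the volume of urine originally processed
    return eggs_in_sediment / original_volume_ml

# Example: 12 eggs counted in 0.5 ml of a 1:10 dilution of 0.5 ml of
# sediment obtained from 4 ml of urine.
epm = eggs_per_ml(chamber_count=12, chamber_volume_ml=0.5,
                  dilution_factor=10, sediment_volume_ml=0.5,
                  original_volume_ml=4.0)
print(f"{epm:.0f} eggs per ml of urine")  # → 30 eggs per ml of urine
```

Keeping the back-calculation explicit like this makes counts comparable across techniques that use different dilutions or chamber volumes.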
Statistical analysis
The association between the number of eggs detected (continuous response variable) and the technique used (explanatory factor) was studied by performing a generalized linear mixed model, in which the random variable was the host. We also included the number of parasites present in the urinary tract as a continuous explanatory factor, and the interaction between the technique and the number of parasites. A negative binomial error and a log link were used. The P-value was set at 0.05. Analyses were conducted using IBM SPSS V21 (StatSoft Inc.).
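The choice of a negative binomial error (rather than Poisson) implies that the egg counts were overdispersed, i.e. their variance exceeded their mean — typical of parasite counts, which tend to be aggregated in a few heavily infected hosts. That assumption can be checked with a few lines of standard-library Python; the counts below are invented for illustration, not the study's data:

```python
from statistics import mean, pvariance

def is_overdispersed(counts):
    """True when the sample variance exceeds the mean, i.e. when a
    Poisson model (variance = mean) would understate the spread and a
    negative binomial error is the more appropriate choice."""
    return pvariance(counts) > mean(counts)

# Synthetic egg counts for illustration only:
poisson_like = [3, 4, 5, 4, 3, 5, 4, 4]     # variance ~ mean
clumped = [0, 1, 0, 25, 2, 0, 30, 1]        # variance >> mean

print(is_overdispersed(poisson_like))  # → False
print(is_overdispersed(clumped))       # → True
```

When this check comes out true, Poisson standard errors would be too small, so the negative binomial is the safer error structure for inference on the technique effect.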
Results and discussion
Stephanurus dentatus eggs were detected in all infected animals, showing that the sensitivity of all three techniques is 100% (Table 1). These results indicate that any of the three techniques can be used as an accurate method to diagnose the presence of this nematode species in the urine of Suidae hosts. However, there are significant differences between the three techniques in terms of the number of eggs counted in the urine (F = 4.40, 2, 66 d.f., p = 0.015; Fig. 1), with the sedimentation by centrifugation and by gravity being more efficient than the flotation technique. In general, flotation methods are usually employed for the laboratory diagnosis of nematode infections using the McMaster chamber (MAFF, 1986;Kaminsky, 2014), since it is a rapid method and has a high diagnostic sensitivity. However, in our study, the flotation technique detected the lowest number of eggs (Fig. 1).
With regard to the possible negative effect of the sample preservation method on the estimates of the number of eggs recovered and on technique performance, Maurelli et al. (2014) applied several diagnostic methods to detect the eggs of Capillaria plica, a nematode that parasitizes the urinary tract of dogs, using both fresh urine and urine preserved in formalin; after testing different flotation solutions, the aforementioned authors concluded that the flotation technique with saturated saline solution and fresh urine provided the best results. In our study, the samples were taken in the field and could not be processed immediately, so they were preserved in 10% formalin. We could not, therefore, compare our method with other preservation methods, which is a factor to be considered in future studies. Interestingly, in our study we observed the formation of aggregates in the urine, possibly as a result of the addition of formalin, as described elsewhere (Boon and Kok, 2008); this flocculation process may have influenced the detection of S. dentatus eggs, especially when using the flotation technique. In particular, aggregates of coagulated proteins may have prevented the eggs from floating freely, and they might, therefore, have been outside the upper detection area of the McMaster chamber. This could explain why the flotation technique was the least effective diagnostic method in our study.
Although there was a positive trend, none of the egg counts obtained with any of the techniques can be reliably applied in order to infer the number of adult nematodes infecting the host (F = 3.05, 2, 66 d.f., p = 0.054; Fig. 2), and counts in urine can, therefore, provide only guidance on the parasite intensity in wild boar. Numerous studies have attempted to develop non-invasive methods with which to determine parasite intensity by associating the number of excreted eggs or larvae with the adult parasites found in the host (Budischak et al., 2015; Seivwright et al., 2004). However, previous studies have shown that the estimation of parasite intensity through the use of the non-invasive method of counting parasite stages excreted by the infected host is not always reliable (Gillespie, 2006; Romeo et al., 2014). There are several factors, related not only to the parasite, the host and the environment, but also to the diagnostic methods, that determine the number of eggs and larvae excreted and detected. In particular, it has been shown that the elimination of parasitic forms is usually intermittent over time, as the excretion of eggs and larvae of some nematodes that infect wild ungulates shows daily or seasonal fluctuations (Vicente et al., 2005). One relevant determinant of the number of parasite eggs is the number of adult reproductive females present in the host, along with the presence of mature males (Stear et al., 1997). This factor is not relevant for our study since, as mentioned above, the sex ratio of mature nematodes remained relatively constant in our study population. Several host-dependent factors (e.g., age, sex, nutritional status and mating or breeding season) that potentially influence the host immune response may condition the number of reproductive females that eventually become established and the number of eggs that they successfully produce and excrete.
Conclusion
This is the first study to investigate and compare different diagnostic methods with which to detect the presence of S. dentatus in infected Suidae. Moreover, if we assume that these techniques are equally effective in the domestic pig, a species sympatric with the wild boar, the diagnostic techniques described here would be useful to detect and monitor the presence of S. dentatus in the pig farming sector, especially in endemic tropical and subtropical areas in which it may lead to economic losses. Further studies are required in order to develop a precise and effective diagnostic method, and to propose essential tools for the study of the epidemiology of this parasitic disease, which has rarely been studied in Europe. There is also a need to evaluate other potential factors that may affect the S. dentatus egg count in urine, such as the sample preservation method.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Table 1. Mean (±SD), maximum and minimum number of Stephanurus dentatus adults found in infected wild boar (n = 23) and number of eggs counted in urine according to each parasitological technique (n = 27). | 2021-05-05T05:17:58.279Z | 2021-04-15T00:00:00.000 | {
"year": 2021,
"sha1": "cc12d3dd106141d23dc3f233c85b2a6c6561985f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijppaw.2021.04.006",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc12d3dd106141d23dc3f233c85b2a6c6561985f",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19869365 | pes2o/s2orc | v3-fos-license | Same site submucosal tunneling for a repeat per oral endoscopic myotomy: A safe and feasible option
Per oral endoscopic myotomy (POEM) is a novel endoscopic procedure for achalasia treatment. Due to its novelty and high success rates, a repeat procedure is usually not warranted, making the feasibility and safety of such approach unknown. We report the first case of a successful repeat POEM done at the same site of a previously uncompleted POEM. An 84-year-old female with type 2 achalasia presented for a POEM procedure. The procedure was aborted at the end of tunneling and before myotomy due to hypotension, which later resolved spontaneously. POEM was re-attempted at the same site of the original tunnel 1 year afterward, and surprisingly we didn’t encounter any submucosal fibrosis. The procedure felt similar to a native POEM and a myotomy was performed uneventfully. Our case is the first to suggest that submucosal tunneling during a repeat POEM can be done at the same site. Hypotension during POEM is a rare complication that should be recognized as a potential result of tension capnothorax, it can however, be managed with close supportive care.
INTRODUCTION
Per oral endoscopic myotomy (POEM) is a novel endoscopic procedure for achalasia treatment. The principal techniques involve endoscopic submucosal tunneling followed by a myotomy [1]. Overall, it has reported success rates ranging from 82% to 100% and can be safely performed, with a small number of reported major complications [1-3]. However, one of the complications that can occur is hemodynamic instability, which has been reported in up to 20% of patients in one study [4]. Due to its novelty and the high success rates, a repeat procedure is rarely warranted, making the feasibility and safety of such an approach unknown. Here, we report the first case of repeat submucosal tunneling successfully performed at the same site as a previous POEM procedure.
CASE REPORT
An 84-year-old female presented with progressive dysphagia to both solids and liquids and failure to thrive over several months. Her other medical problems included gastroesophageal reflux disease, hypertension, deep vein thrombosis, severe osteoarthritis of both hips, lower extremity contractures, and chronic low back pain. The initial laboratory workup was essentially unrevealing. A manometry study confirmed severe achalasia type 2. The decision was made to proceed with the POEM procedure. During endoscopy, she was placed in the supine position, which was standard practice at our institution. The incision site was first injected with a premixed solution of saline and methylene blue (5 mL/500 mL), followed by careful dissection to the submucosal layer using a triangle tip knife. A submucosal tunnel was made from the incision site to 2 cm distal to the cardia, but after the complete submucosal tunneling process, just before myotomy (Figure 1), she developed severe hypotension and bradycardia. Consequently, the procedure was aborted. Chest x-ray revealed a left apical pneumothorax, pneumomediastinum, pneumoperitoneum, and extensive subcutaneous emphysema. Her hypotension resolved with supportive care within minutes of aborting the procedure. A gastrografin swallow study was obtained, which did not show any evidence of contrast leakage, but it demonstrated a grossly dilated esophagus consistent with achalasia, and postoperative edema with slow emptying at the gastroesophageal junction (Figure 2). Thereafter, she underwent upper endoscopy with botulinum toxin injection every 2-3 mo, but eventually her symptoms stopped responding to botulinum treatment. A repeat POEM was thus performed 1 year later. She was placed in the same supine position due to her medical comorbidities. A severely dilated sigmoid esophagus was observed (Figure 3A). The GE junction was tight, and some pressure was required to traverse the endoscope, consistent with known achalasia.
Due to the great difficulty of orienting the endoscope on a different plane, the submucosal incision was made at the exact same site (Figure 3C) as the original tunnel, and surprisingly we did not encounter any submucosal fibrosis or technical challenges. The repeat tunneling at the same submucosal plane was successfully completed and felt similar to a native POEM (Figure 4). A myotomy was quickly and uneventfully performed, followed by mucosal closure with hemostatic clips (Figure 3). The length of the myotomy was 8 cm, which is the standard at our institution. At 4-wk follow-up her symptoms had remarkably improved, as shown by a decrease in the Eckardt score from 9 to 4. Her reflux symptoms also remained stable on the same dose of omeprazole.
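The score referenced above is the standard Eckardt symptom score for achalasia: dysphagia, regurgitation, and retrosternal chest pain are each graded 0-3, and weight loss contributes a further 0-3 depending on kilograms lost, for a maximum of 12. A sketch of the scoring (the cut-offs follow the commonly published definition, not anything stated in this report):

```python
def eckardt_score(dysphagia, regurgitation, chest_pain, weight_loss_kg):
    """Eckardt symptom score (0-12). Symptom arguments are graded
    0 = none, 1 = occasional, 2 = daily, 3 = with each meal; weight
    loss in kilograms is converted to its 0-3 grade."""
    for grade in (dysphagia, regurgitation, chest_pain):
        if grade not in (0, 1, 2, 3):
            raise ValueError("symptom grades must be 0-3")
    if weight_loss_kg <= 0:
        wl = 0
    elif weight_loss_kg < 5:
        wl = 1
    elif weight_loss_kg <= 10:
        wl = 2
    else:
        wl = 3
    return dysphagia + regurgitation + chest_pain + wl

# Hypothetical component values that would yield a score of 9:
# dysphagia with each meal, daily regurgitation, occasional chest
# pain, 12 kg weight loss -> 3 + 2 + 1 + 3
print(eckardt_score(3, 2, 1, 12))  # → 9
```

In the achalasia literature, a post-treatment score of 3 or less is conventionally regarded as treatment success.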
DISCUSSION
Our case is the first to highlight the feasibility and safety of performing a repeat POEM at a location where submucosal tunneling was previously performed. As discussed above, there are limited scenarios in which a repeat POEM would be considered. They include intraprocedural complications resulting in an incomplete procedure, insufficient symptomatic relief, and recurrent symptoms after an initial improvement [5]. The question remains as to whether it is feasible to repeat a POEM procedure, and if so, what the best approach would be.
POEM on a site of prior endoscopic mucosal resection is considered relatively contraindicated due to fear of encountering fibrosis [2]. A recent report on repeat POEM procedures opted to create the submucosal tunnel on the opposite side of the scarred mucosectomy area due to concern for an obliterated submucosal space [5]. However, this did not apply to our case, meaning that submucosal fibrosis does not necessarily result from a first POEM attempt. Although myotomy may theoretically lead to fibrosis, this has not been confirmed in the literature. In addition, the myotomy is performed from 3 cm proximal to the gastroesophageal junction and is therefore unlikely to impact the development of submucosal fibrosis in the proximal tunnel.
Moreover, the opposite-side approach may not always be feasible due to different patient positions and endoscopic orientations, as in our case, where patient positioning was very limited. Therefore, we opted for the same posterior approach, as it allowed the most flexible endoscopic maneuverability. Same-site repeat POEM also preserves the opposite side of the esophagus for other potential procedures. Similar to our findings, a double POEM was recently reported whereby tunneling was done more proximally to the original tunnel to extend the myotomy [6]. Hypotension during POEM is a rare complication that should be recognized as a potential sign of tension capnothorax; it can, however, be managed with close supportive care [7,8]. Other commonly reported physical findings include subcutaneous emphysema, mediastinal emphysema, and pneumoperitoneum without hemodynamic instability, all of which are believed to be normal physiologic reactions to the procedure [9]. In summary, this report suggests that should POEM need to be reattempted, same-site operation, including incision, submucosal tunneling and myotomy, is a viable method.
Case characteristics
An 84-year-old female with progressive dysphagia to both solids and liquids and failure to thrive.
Clinical diagnosis
She had severely dilated sigmoid esophagus and tight gastroesophageal junction upon passing gastroscope, consistent with manometry-proven achalasia type 2.
Laboratory diagnosis
Laboratory testing on initial presentation was essentially unremarkable.
Imaging diagnosis
No imaging was required to make the diagnosis. The manometry study revealed panesophageal pressurization and an elevated integrated relaxation pressure, diagnostic of achalasia type 2.
Pathological diagnosis
Biopsy was not required to establish the diagnosis.
Treatment
Per oral endoscopic myotomy (POEM) was performed twice: the first submucosal tunneling was completed without myotomy due to hemodynamic instability. The second attempt was successfully performed via the same site tunneling.
Related reports
Only 2 other reports on repeat POEM were found, but neither of them reports on performing repeat submucosal tunneling on the same site and orientation of the original tunnel. | 2017-08-04T23:48:44.101Z | 2016-10-16T00:00:00.000 | {
"year": 2016,
"sha1": "f73ba4d24e8a50fcc0fb22b2d151663b05cf80d3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4253/wjge.v8.i18.669",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a396fced870e8ce0abb4525be1b143bbdc9ff36",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251372389 | pes2o/s2orc | v3-fos-license | Do hospital workers experience a higher risk of respiratory symptoms and loss of lung function?
Background Hospital work environments contain various biological and chemical exposures that can affect indoor air quality and have an impact on the respiratory health of the staff. The objective of this study was to investigate potential effects of occupational exposures on the risk of respiratory symptoms and on lung function in hospital work, and to evaluate potential interaction between smoking and occupational exposures. Methods We conducted a cross-sectional study of 228 staff members in a hospital and 228 employees of an office building as the reference group in Shiraz, Iran. All subjects completed a standardized ATS respiratory questionnaire and performed a spirometry test. Results In Poisson regression, the adjusted prevalence ratios (aPR) among the hospital staff were elevated for cough (aPR 1.90, 95% CI 1.15, 3.16), phlegm production (aPR 3.21, 95% CI 1.63, 6.32), productive cough (aPR 2.83, 95% CI 1.48, 5.43), wheezing (aPR 3.18, 95% CI 1.04, 9.66), shortness of breath (aPR 1.40, 95% CI 0.93, 2.12), and chest tightness (aPR 1.73, 95% CI 0.73, 4.12). Laboratory personnel in particular experienced increased risks of most symptoms. In linear regression adjusting for confounding, there were no significant differences in lung function between the hospital and office workers. There was an indication of synergism between hospital exposures and current smoking on FEV1/FVC% (interaction term β = − 5.37, 95% CI − 10.27, − 0.47). Conclusions We present significant relations between hospital work, especially in laboratories, and increased risks of respiratory symptoms. Smoking appears to enhance these effects considerably. Our findings suggest that policymakers should implement evidence-based measures to prevent these occupational exposures. Supplementary Information The online version contains supplementary material available at 10.1186/s12890-022-02098-5.
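The adjusted prevalence ratios above come from Poisson regression with adjustment for confounders; the unadjusted analogue can be sketched directly from 2×2 counts, with a 95% CI from the log-transformed ratio. The counts below are hypothetical, chosen only to show the mechanics:

```python
import math

def prevalence_ratio(cases_exp, n_exp, cases_ref, n_ref, z=1.96):
    """Crude prevalence ratio with a log-transform 95% CI.
    (The paper's aPRs additionally adjust for confounders via
    regression, which this sketch does not do.)"""
    pr = (cases_exp / n_exp) / (cases_ref / n_ref)
    # Standard error of ln(PR) for two independent proportions
    se = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_ref - 1 / n_ref)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts: 40/228 symptomatic hospital workers vs 21/228
# symptomatic office workers (not the study's actual data).
pr, lo, hi = prevalence_ratio(40, 228, 21, 228)
print(f"PR = {pr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

Because both groups have the same denominator here, the PR reduces to the ratio of case counts; a CI that excludes 1.0 marks a statistically significant excess in the exposed group.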
System worker data reported that healthcare workers were more likely to report recent respiratory symptoms than workers in general [8]. A more detailed survey of four groups of Texas health professionals licensed as of 2003 (physicians, nurses, occupational therapists, and respiratory therapists) found that those working with instrument cleaning and administration of aerosolized medications showed a higher occurrence of adverse respiratory outcomes. In contrast, the risk of symptoms and diseases related to the use of latex gloves has decreased since the year 2000 [4]. A recent survey of more than 2000 New York healthcare workers showed that use of alcohol, bleach, and other disinfectants was associated with reported asthma symptoms [9,10]. No previous study has addressed potential interaction between smoking and occupational exposures involving disinfectant products.
The objective of this study was to investigate potential effects of occupational exposures among hospital workers on the risk of respiratory symptoms and lung function level in Shiraz, Iran, and to evaluate potential interaction between occupational exposures and smoking.
Study design and study population
This was a cross-sectional study conducted in 2015. The study population comprised 228 hospital workers of a large teaching hospital in Shiraz, Iran, and a reference group of 228 office workers recruited from the same area near the hospital. The reference group workers were in managerial, administrative, or clerical jobs, and they visited the hospital only occasionally. To be included in this study, the participants had to have at least 1 year of work experience in the current job. The exclusion criteria included any history of previous respiratory or heart diseases, any history of asthma, chest or heart surgery, recent eye or ear surgery, heart attack, stroke, bloody phlegm (i.e. haemoptysis), systolic blood pressure above 180 mmHg, recent severe common cold, or previous exposure to toxic pollutants in other occupations. The sample size was based on a comparison of the prevalences of respiratory symptoms between the exposed and the reference groups (power = 90%, alpha = 0.05).
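The sample-size comparison described above can be sketched with the standard normal-approximation formula for two proportions. Everything below is a hedged illustration: the symptom prevalences are hypothetical placeholders, not the study's actual planning values, and the z-quantiles for a two-sided alpha of 0.05 and 90% power are hard-coded.

```python
import math

def n_per_group(p1, p0, z_alpha=1.96, z_beta=1.2816):
    """Approximate participants needed per group to detect a difference
    between two symptom prevalences p1 (exposed) and p0 (reference).
    z_alpha corresponds to two-sided alpha = 0.05, z_beta to 90% power."""
    pooled_var = p1 * (1 - p1) + p0 * (1 - p0)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_var / (p1 - p0) ** 2)

# Hypothetical planning prevalences of 30% vs. 15%:
print(n_per_group(0.30, 0.15))  # 158 per group
```

Smaller expected differences between the groups drive the required n per group up quickly, which is consistent with the paper's later caution about limited power for interaction analyses.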
Data collection
Both the exposed and reference groups answered a set of standardized questions modified from the American Thoracic Society's (ATS) Respiratory questionnaire [11], which has been validated in multiple populations around the world [12][13][14].
This questionnaire inquired about respiratory symptoms, including current cough, productive cough, phlegm production, wheezing, shortness of breath, and chest tightness. It contained questions on age, gender, weight, height, working history, current job title, current workplace, past occupations, history of previous diseases, and some additional demographic characteristics. In addition, the participants were asked to fill in a checklist of the work environment characteristics, including the dimensions of the workspace, number of people working in the same area, ventilation system of the workplace, and availability of heating and cooling systems. The questionnaire also inquired about job tasks and their duration, chemical products used, availability and application of chemical safety guidelines, use of personal protective equipment, and training in occupational health issues. These questions were modified to suit the Iranian environment, translated into Farsi, and then back-translated into English by a different person. The back-translated questionnaire was then compared with the original version, and corrections were made to the translated questionnaire.
Exposure assessment
We assessed occupational exposure on the basis of the study subject's job category. First, all the healthcare workers constituted the exposed group and office workers formed the reference group. Second, we formed 6 subcategories with different types of exposures: nurses, laboratory workers, nurses' aides, cleaners, surgical technicians, and others. Unfortunately, the study size was not large enough to address effects of individual exposures.
Outcome assessment
The main outcomes of interest were the occurrence of six respiratory symptoms, including (1) cough, (2) mucus production, (3) cough accompanied by mucus, (4) wheezing, (5) shortness of breath, and (6) chest tightness, and three lung function parameters, including (1) forced vital capacity (FVC), (2) forced expiratory volume in one second (FEV1), and (3) FEV1/FVC ratio. Both absolute values and percentage predicted values of the lung function parameters were used as outcomes. Forced expiratory maneuvers were performed according to the ATS/ERS guidelines using a Spiroanalyzer ST-150 spirometer (Fukuda Sangyo Inc, Japan). The equipment was calibrated every four hours according to the manufacturer's recommendations. Predicted values were derived from the GLI reference equation [15]. Spirometry was conducted by the same trained, experienced technician for both groups at their workplaces. The study was approved by the ethics committee of the Shiraz University of Medical Sciences. All participants signed informed consent before participation.
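The derived spirometry outcomes are simple ratios; a minimal sketch is below. The liter values are illustrative only, and a real predicted value would come from the GLI reference equations cited above rather than being supplied by hand.

```python
def percent_predicted(measured, predicted):
    """A lung function parameter as a percentage of its reference-equation
    prediction (the predicted value would come from the GLI equations)."""
    return 100.0 * measured / predicted

def fev1_fvc_percent(fev1_l, fvc_l):
    """The FEV1/FVC ratio expressed as a percentage."""
    return 100.0 * fev1_l / fvc_l

# Illustrative values, not study data:
print(round(percent_predicted(3.1, 3.6), 1))  # 86.1
print(fev1_fvc_percent(3.0, 4.0))             # 75.0
```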
Statistical methods
We compared the risk of respiratory symptoms and the levels of lung function parameters between hospital workers (the exposed group) and office workers (the reference group). Prevalence ratio (PR) was used as the measure of effect. We adjusted the effect estimates for potential confounding in Poisson regression analyses, producing adjusted PRs with corresponding 95% confidence intervals (CI). Poisson regression models were fitted using the SAS procedure GENMOD with a logarithmic link function [16]. Multiple linear regression models were applied to estimate the effects of occupational exposures on the lung function parameters. The effect estimates were adjusted for age, height, marital status, education, and sex (model 1), and then additionally for smoking (never, ex, and current) and use of waterpipe (never, ex, and current) (model 2).
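For intuition about the PR scale, a crude (unadjusted) prevalence ratio with a Wald-type CI on the log scale can be computed directly from 2×2 counts; the paper's own estimates come from adjusted Poisson models in SAS GENMOD. The counts below are hypothetical, not taken from the study's tables.

```python
import math

def crude_prevalence_ratio(cases_exp, n_exp, cases_ref, n_ref, z=1.96):
    """Unadjusted prevalence ratio (exposed vs. reference) with an
    approximate 95% CI computed on the log scale."""
    pr = (cases_exp / n_exp) / (cases_ref / n_ref)
    se_log = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_ref - 1 / n_ref)
    return pr, pr * math.exp(-z * se_log), pr * math.exp(z * se_log)

# Hypothetical counts: 54/228 exposed cases vs. 28/228 reference cases
pr, lo, hi = crude_prevalence_ratio(54, 228, 28, 228)
print(f"PR = {pr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```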
We assessed the excess relative risks (ERR) for the joint effects of occupational exposures and smoking status on the risk of the five studied symptoms. The relative excess risk due to interaction (RERI) was quantified on an additive scale by calculating the risk that is more than expected based on summing the independent effects of these factors. This can be expressed in terms of ERRs as follows: RERI = ERR(exposed and smoking) − ERR(exposed only) − ERR(smoking only), where ERR = RR − 1. We estimated the 95% CI for RERI using the method of variance estimates recovery [17]. For RERI, a 95% CI excluding the null value of 0 corresponds to statistical significance at the p = 0.05 level. Only the estimates for which sufficient information for calculation of ERRs and estimation of RERIs was available are given in the results. Data were analyzed using the SAS statistical package v.9.4 (SAS Institute Inc., Cary, NC, USA).
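Once the relative risks for the singly and jointly exposed groups are estimated, RERI itself is simple arithmetic; a sketch with hypothetical relative risks follows (the CI would still require the variance-recovery method cited above).

```python
def reri(rr_both, rr_exposure_only, rr_smoking_only):
    """Relative excess risk due to interaction on the additive scale:
    RERI = RR11 - RR10 - RR01 + 1 = ERR11 - ERR10 - ERR01.
    RERI > 0 indicates super-additivity (synergism)."""
    return rr_both - rr_exposure_only - rr_smoking_only + 1

# Hypothetical relative risks for the jointly and singly exposed groups:
print(reri(rr_both=4.5, rr_exposure_only=2.0, rr_smoking_only=1.8))  # about 1.7
```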
Characteristics of the study population
The characteristics of the exposed and reference groups are presented in Table 1. Most of the characteristics were similar between the two groups. However, hospital staff had longer work experience than the reference group, and a shorter duration of education. Hospital staff reported current smoking more frequently, although there was no significant difference with respect to the lifetime smoking. In addition, use of a face mask was reported by 67.1% of the exposed participants overall, and was most frequent among operating room workers (100%), followed by cleaners (90.9%), nurses' aides (75.7%), nurses (62.8%), laboratory workers (50.0%), and others (36.4%). Availability of a local exhaust system was reported by 9.7% of the exposed group.
Prevalence of lung function impairment among the exposed and control groups
Table 2 shows the prevalence of lung function impairment in the four categories of the exposed group and the reference group. The overall prevalence of lung function impairment was 22.0% (n = 50) in the exposed group and 17.5% (n = 40) in the reference group. Within the exposed group, 16.7% showed a restrictive pattern, while 5.3% showed obstruction. In the reference group, the majority of those with lung function impairment had restriction (n = 29, 12.7%), followed by obstruction (n = 11, 4.8%). Obstruction in combination with restrictive impairment was not observed in either group.
Effects of exposure on respiratory symptoms
The prevalences of respiratory symptoms among the hospital and the reference groups are reported in Table 3. The prevalences of all symptoms were higher in the hospital workers compared to the office workers. The results showed that even after adjusting for potential confounders, significantly higher PRs were found in the exposed hospital staff group with adjusted PRs for cough 1.90 (95% CI 1.15, 3.16), phlegm production 3.21 (95% CI 1.63, 6.32), productive cough 2.83 (95% CI 1.48, 5.43), and wheezing 3.18 (95% CI 1.04, 9.66) ( Table 3).
Effects of exposure on lung function
Additional file 1: Table S1 presents the estimated differences in lung function parameters between the exposed and the reference groups. The negative values represent adverse effects. None of the effect estimates for hospital workers analyzed collectively were statistically significant, although the direction of effect was negative (i.e. worse lung function in the exposed group) for most estimates. However, Additional file 1: Table S1 also presents comparisons between subgroups of the exposure and the reference groups. In the unadjusted models, the nurse subgroup showed a statistically significant decrease in FEV1 (unadjusted difference = − 0.20 L, 95% CI − 0.36, − 0.03) and FVC (unadjusted difference = − 0.25 L, 95% CI − 0.45, − 0.06) compared to the reference group, although the differences were not significant after adjusting for confounding.
Synergistic effect of occupational exposures and smoking
Tables 4 and 5 present the results of studying potential interaction between occupational exposures and smoking (both former and current smoking) on the risk of respiratory symptoms and lung function levels. Table 4 shows estimates for the independent and joint effects of current smoking and occupational exposures and the corresponding excess relative risks (ERRs) and relative risks due to interaction (RERIs). The observations of RERI should be interpreted with caution, because an analysis of joint effects would ideally be derived from a larger study population. Table 5 presents the effect estimates from the linear regression models, both from the main-effects models and from models with interaction terms included. The interaction between occupational exposure and current smoking was significant for FEV1/FVC (β = − 5.31, 95% CI − 9.46, − 1.16, p = 0.03). Marginally significant evidence of interaction between occupational exposure and former smoking was also observed for FEV1/FVC (p = 0.05).
[Table 1 caption: Characteristics of the study population of 228 healthcare workers (HCWs) and 228 office workers (reference group). Information on waterpipe status is missing for one person in the hospital staff group and for three persons in the reference group. a Mann-Whitney U test; b χ2 test.]
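In the linear models, the interaction coefficient is exactly the departure of the jointly exposed group's fitted mean from additivity of the two main effects. A sketch follows, with all coefficients hypothetical except that the interaction is set near the reported magnitude:

```python
def fitted_fev1_fvc(exposed, smoker, b0=80.0, b_exp=-1.0, b_smk=-2.0, b_int=-5.3):
    """Fitted FEV1/FVC% from a linear model with an exposure-by-smoking
    interaction term (hypothetical coefficients)."""
    return b0 + b_exp * exposed + b_smk * smoker + b_int * exposed * smoker

# The interaction coefficient equals the deviation from additivity:
departure = (fitted_fev1_fvc(1, 1) - fitted_fev1_fvc(1, 0)
             - fitted_fev1_fvc(0, 1) + fitted_fev1_fvc(0, 0))
print(departure)  # about -5.3
```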
Discussion
We studied the prevalence of respiratory symptoms and the level of lung function among staff working in a large and busy hospital in Iran. We found that hospital workers had significantly higher prevalences of cough, phlegm production, productive cough, and wheezing compared to the reference group of office workers from the same hospital area, working in a neighboring building. In addition, among the specific healthcare worker groups, nurses, nurses' aides, and laboratory workers had increased prevalences of several respiratory symptoms. We also observed significant reductions in the levels of FEV1 and FVC among the subgroups of nurses and other hospital workers in unadjusted models (Additional file 1: Table S1). Furthermore, the prevalence of restrictive lung function impairment was higher in the exposed group compared to the reference group.
Healthcare workers are exposed to multiple agents potentially harmful for respiratory health. Disinfection of medical instruments in healthcare is likely to expose workers to substances leading to airway inflammation, especially in work tasks requiring high volumes or concentrations of disinfectants. Use of disinfectants to clean surfaces may also be linked to workers' exposure to chemical agents that may cause adverse respiratory effects. In our study, local exhaust ventilation was used only in a few work areas.
We also addressed potential interaction between current or former smoking and occupational exposures, i.e., whether smoking modifies effects of such exposures. In our study, current smoking was found to have an independent adverse effect on FEV1/FVC%. Lack of any significant effect on the other lung function parameters could be due to the so-called healthy smoker selection, which means that healthier people are more likely to start smoking and to continue it [18]. In addition, the relatively young average age of this study population (mean 36.3 years, SD 8.25) could explain the rather modest effects on lung function, as the subjects had relatively short duration of smoking. In the main effect models, lung function, apart from FEV1/FVC %-predicted, was statistically significantly reduced in former smokers, suggesting lack of a recovery from the adverse effects of previous smoking. The absence of a parallel finding in current smokers is likely explained by two plausible general explanations: (1) the healthy smoker effect, meaning that those susceptible to adverse effects of smoking have already quit smoking while those resistant to these continue smoking, and (2) the relatively young study population with on average short duration of smoking. The interaction between current smoking and exposure to healthcare chemicals was significant on FEV1/FVC%, suggesting synergism between these two exposures.
Study population
We achieved a 100% response rate for both the exposed and reference groups (after using an incentive), which practically eliminates potential bias related to reduced participation. We compared hospital staff to a reference group of office workers, and these two groups were found to have similar demographic and personal characteristics. There were no significant differences with respect to lifetime smoking between the two groups. The small sample sizes, particularly after including smoking status in the models, generated wide confidence intervals. Although the sample sizes were designed to detect significant main effects, they may not be large enough to detect statistically significant interaction.
Study design
This was a cross-sectional study; therefore, it is not possible to rule out the possibility that workers with respiratory health problems were more likely to have left the work than workers who remained healthy [19]. This type of selection would lead to underestimation of the relations of interest for current exposures.
Outcome and exposure assessment
Occurrence of respiratory symptoms was assessed with the same questionnaire in a similar way among the occupational subgroups of the hospital staff and that of the control group of office workers. In addition, the spirometry measurements were conducted according to same protocol for both the healthcare worker and the office worker groups.
Exposure was assessed in this study with two methods: (1) on the basis of the broad job category, i.e. healthcare worker vs. office worker; and (2) on the basis of subcategory based on job titles. Both types of exposed categories were consistently related to respiratory symptoms. Unfortunately, we were not able to directly measure the occupational exposures and the study size was not optimal for addressing specific exposures.
Confounding
We collected information on several potential determinants of the studied outcomes, which were adjusted for as potential confounders in the multivariate models: personal characteristics (sex, age), socioeconomic status (education, and marital status), and smoking habits. There is evidence that long-term exposure to air pollution reduces lung function and increases occurrence of respiratory symptoms. We recruited participants of both groups from the same hospital area located in Shiraz, Iran, and thus minimized potential confounding role of air pollution exposure.
Synthesis with previous knowledge
The most commonly reported exposures among hospital staff in the United States were cleaning products, latex, and poor indoor air quality in general [20]. Some recent studies have shed additional light on the role of cleaning products in the hospital environment [9,10,21]. In the present study, the prevalence of phlegm production was significantly greater in nurses compared to the office workers (i.e. the reference group), which is consistent with a study by Smedbold et al. [22] from Norway. The authors concluded that a poor indoor environment might have affected the nasal mucosa of the nursing personnel, causing increased nasal mucus production.
The high prevalence of respiratory symptoms among hospital staff in our study is consistent with previous studies from the United States [20,23]. In the present study, the most prevalent symptoms among hospital staff were shortness of breath (31.1%) and cough (23.7%). These prevalences are somewhat higher than those reported in other studies conducted in different parts of the world. In our study, poor indoor air quality in general and exposure to detergents and disinfectants [24] are possible explanations underlying the higher prevalence of respiratory symptoms in the hospital staff. In our study, the clinical laboratory workers reported the highest prevalence of respiratory symptoms among the hospital group. This finding is consistent with the results of the study by Mirabelli et al. [25]. Increased occurrence of symptoms among nurses was also reported in previous studies by Arif et al. [24] and Pechter et al. [20]. The previous studies have adjusted for smoking but have not explored its potential modifying effect among healthcare workers. We did not identify any previous study that had investigated potential interaction between occupational exposures and smoking among healthcare workers. In our study, the interaction between current smoking and healthcare work exposure was significant in relation to FEV1/FVC, which suggests synergism between these two exposures, i.e., current smokers seemed to be more susceptible to the adverse effects of the exposures in hospitals.
We assume that the environmental conditions and the workers of the studied teaching hospital in Shiraz, Iran, represent well the situation also in other hospitals in Iran and other countries in the same region and in areas with similar environmental and socioeconomic conditions. Thus, the results are generalizable to such areas of the world.
Conclusions
In this study from Iran, the results showed that the prevalence of all respiratory symptoms, except for chest tightness, was higher among hospital staff compared to the reference group of office workers. Laboratory workers were found to be at the highest risk of experiencing respiratory symptoms compared to the comparison group of office workers and to workers in other hospital jobs. No significant differences were found in lung function between the hospital and office workers. There was a suggestion of a synergistic effect between occupational exposures and current smoking on reduced FEV1/FVC%. Our findings are relevant for policymakers to justify implementation of evidence-based measures to reduce occupational exposures in order to prevent respiratory illness in hospital staff. | 2022-08-08T13:33:35.626Z | 2022-08-08T00:00:00.000 | {
"year": 2022,
"sha1": "de03fec342b6a48647d6b79d7355d3cb2f54af9a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "de03fec342b6a48647d6b79d7355d3cb2f54af9a",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
36688877 | pes2o/s2orc | v3-fos-license | Position 834 in TM6 plays an important role in cholesterol and phosphatidylcholine transport by ABCA1
ATP-binding cassette protein A1 (ABCA1) plays a key role in eliminating excess cholesterol from peripheral cells by generating nascent high-density lipoprotein (HDL). However, it remains unclear whether both phospholipids and cholesterol are directly loaded onto apolipoprotein A-I (apoA-I) by ABCA1. To identify the amino acid residues of ABCA1 involved in substrate recognition and transport, we applied arginine scan mutagenesis to residues L821–E843 of human ABCA1 and predicted the environment to which each residue is exposed. The relative surface expression of each mutant suggested that residues L821–E843 pass through the plasma membrane as TM6, and the four residues (S826, F830, L834, and V837) of TM6 are exposed to the hydrophilic internal cavity of ABCA1. Furthermore, we showed that L834 is critical for the function of ABCA1. ABCA1 plays a key role in cholesterol homeostasis by generating HDL. We determined that four amino acid residues (S826, F830, L834, V837) of TM6 are exposed to the hydrophilic internal cavity of ABCA1 and L834 is critical for the function.
Cholesterol, a key component of cell membranes, is required for cell proliferation; however, excess accumulation of cholesterol is toxic to cells, and excess deposition in peripheral tissues causes atherosclerosis. Therefore, intracellular cholesterol concentration is strictly maintained by various mechanisms, including regulation of synthesis, storage, uptake, and elimination. ATP-binding cassette protein A1 (ABCA1) plays a key role in eliminating excess cholesterol from peripheral cells by generating nascent high-density lipoprotein (HDL), which consists of phosphatidylcholine (PC), cholesterol, and apolipoprotein A-I (apo A-I).
Because the defect in ABCA1 causes Tangier disease, in which patients have very low or absent circulating HDL, [1][2][3] it is clear that ABCA1 is essential for HDL generation. However, many questions persist regarding the mechanism of HDL generation, e.g. whether both phospholipids and cholesterol are substrates for ABCA1; and whether lipids are directly loaded onto apo A-I by ABCA1. To address these questions, in this study, we tried to identify the amino acid residues of ABCA1 involved in substrate recognition and transport.
Like ABCA1, MDR3 (also called ABCB4) transports PC as a physiological substrate; it functions in canalicular membrane of hepatocyte and is involved in bile formation. 4) Because MDR3 shares a highly conserved amino acid sequence (76% identity and 86% similarity) with MDR1, a multi-drug transporter, 5) it is predicted that MDR1 and MDR3 share similar mechanisms of substrate recognition and transport. Indeed, like MDR1, MDR3 also transports various drugs under certain conditions and is inhibited by cyclosporine A and verapamil, inhibitors of MDR1. [6][7][8] Previously, we reported that cyclosporine A and its non-immunosuppressive analog, PSC833, also inhibit ABCA1 via direct binding, 9) suggesting that MDR1, MDR3, and ABCA1 share similar substrate-binding sites. TM6 of MDR1 plays an important role in substrate recognition; 10,11) hence, we predicted that TM6 of ABCA1 is also involved in substrate recognition. Recently, we determined the three-dimensional (3D) structure of CmABCB1, an MDR1 ortholog of Cyanidioschyzon merolae, at 2.4 Å resolution. 12) The structure contains a spacious internal cavity, in which the substrate-binding site is predicted to be located, and a portion of TM6 faces the internal cavity. Therefore, we predicted that a portion of TM6 of ABCA1 would also face the internal cavity and be involved in substrate recognition.
To test this prediction, we first assigned the orientation of TM6 of ABCA1 by replacing each amino acid residue in TM6 with an arginine to investigate whether arginine can be accommodated at each position. Because arginine has a large side chain with positive charges, the introduction of this residue to positions that interact with the lipid bilayer or other TMs should disrupt the protein folding of ABCA1, hindering protein trafficking from the ER to the plasma membrane. By contrast, if an arginine residue is introduced to a position facing the internal cavity, it would not affect protein folding or trafficking to the plasma membrane. After determining which residues were predicted to face the internal cavity, we then analyzed their functions.
Materials and methods
Plasmids and transfection. The expression vectors for wild-type ABCA1, for ABCA1-K939M/K1952M (ABCA1MM), in which two lysine residues (K939 and K1952) crucial for ATP hydrolysis were replaced by methionine, 13) and for all the other ABCA1 mutants were constructed using the In-Fusion Kit (Takara Bio). The integrity of the mutated DNA was confirmed by sequencing. ABCA1 cDNA was fused with the green fluorescent protein (GFP) at the C-terminus and then inserted into the pIRESpuro3 vector (Takara Bio). The influenza virus hemagglutinin (HA) epitope sequence (encoding YPYDVPDYA) was introduced between G207 and D208 as previously reported. 14) Human embryonic kidney (HEK) 293 cells were transfected with each expression vector using Lipofectamine LTX with Plus Reagent (Invitrogen). Stable transformants were selected in the presence of 1 μg/mL puromycin.
Anti-HA antibody immunostaining. Cells grown on collagen-coated coverslips were fixed with 4% paraformaldehyde at room temperature for 30 min. After blocking with PBS(+) containing 1% BSA, the cells were incubated with anti-HA antibody (1 μg/ml) in PBS(+) containing 0.02% BSA for 15 min at room temperature. After washing with PBS(+), cells were incubated with an Alexa Fluor 546-conjugated secondary antibody and observed on a confocal microscope. To compare the surface expression of ABCA1, the intensity of Alexa Fluor 546 of 10 cells was analyzed with ImageJ and normalized to the GFP intensity.
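The normalization just described (surface anti-HA signal divided by total GFP signal, expressed relative to the wild-type ratio) can be sketched as follows; the intensity lists are invented placeholders, not measurements.

```python
def relative_surface_expression(ha_intensities, gfp_intensities, wt_ratio):
    """Mean Alexa Fluor 546 (anti-HA, surface) intensity over mean GFP
    (total protein) intensity, relative to the wild-type HA/GFP ratio."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(ha_intensities) / mean(gfp_intensities)) / wt_ratio

# A hypothetical mutant with half the wild-type surface/total ratio:
print(relative_surface_expression([2.0, 4.0], [4.0, 8.0], wt_ratio=1.0))  # 0.5
```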
Flow cytometry analysis. HEK293 cells were harvested after trypsinization and washed twice with Hanks' balanced salt solution (HBSS). The cells were then incubated with both an anti-HA (F-7) antibody (Santa Cruz Biotechnology; diluted 1/200) and an Alexa Fluor 633-conjugated secondary antibody (diluted 1/500) at room temperature for 30 min and analyzed with a flow cytometer (Accuri C6, BD). The amount of ABCA1 on the cell surface was calculated from the histogram of double-positive (GFP+/HA+) cells.
Cellular lipid release assay. Cells were incubated in the presence of 10 μg/mL apolipoprotein A-I (apoA-I) for 24 h in Dulbecco's modified Eagle's medium (DMEM) containing 0.02% bovine serum albumin (BSA). The cholesterol and PC contents in the medium were determined using a colorimetric enzyme assay 15) or a fluorescence enzyme assay. 16)
Statistical analysis. Values are presented as the means ± SD (n ≥ 3). Statistical significance was determined by Dunnett's test.
Results
Single arginine mutations were introduced at positions 821-843 of human ABCA1, a region which, based on predictions by SOSUI and PredictProtein, likely spans the membrane as TM6. HEK293 cells were transfected with the mutant cDNAs, and cell surface localization was monitored by immunostaining of the HA tag inserted at position 207 in the extracellular loop (Fig. 1(A)). This peptide insertion had no effect on the subcellular localization or the function of ABCA1. 17) The relative surface expression of each mutant was calculated by dividing HA immunofluorescence by GFP fluorescence, and was compared with that of wild-type ABCA1 (Fig. 1(B)). Mutants could be classified into two groups: group 1 mutants were localized to the plasma membrane at more than ~50% of the wild-type efficiency, and group 2 mutants were localized to the plasma membrane at less than 25% of the wild-type efficiency. Group 1 (filled bars) consisted of 14 mutants: L821R, T822R, T823R, S824R, S826R, F830R, L834R, G836R, V837R, T839R, W840R, Y841R, I842R, and E843R. Group 2 (empty bars) consisted of nine mutants: V825R, M827R, M828R, L829R, D831R, T832R, F833R, Y835R, and M838R.
Amino acid residues of group 1 (filled circles) and group 2 (empty circles) were placed in the helix model, and two faces (A and B) of the helix are shown in Fig. 2. This model suggested three features of TM6: (i) arginine replacement of residues predicted to be located at either end of the helix (L821, T822, T823, S824, G836, V837, T839, W840, Y841, I842, and E843) did not affect protein trafficking, suggesting that these residues are in a hydrophilic environment; (ii) all the residues whose replacement severely affected protein trafficking (V825, M827, M828, L829, D831, T832, F833, Y835, and M838) are located in the middle of the helix, suggesting that they face the hydrophobic environment of membrane lipids or are involved in helix-helix interactions; and (iii) four amino acid residues (S826, F830, L834, and V837) whose replacement did not affect protein trafficking formed a line along the B face from the extracellular side to the cytosolic side (Fig. 2). The first two features are consistent with the prediction that the amino acid residues from L821 to E843 pass through the membrane as TM6. The third feature suggested that the four positions (S826, F830, L834, and V837) face the hydrophilic internal cavity of the protein, which can accommodate the large hydrophilic side chain of arginine.
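The helical-wheel argument can be reproduced numerically: an ideal alpha-helix has 3.6 residues per turn, i.e. about 100° of rotation per residue, so residues on one face have wheel angles clustered within a narrow arc. The sketch below uses this idealized geometry rather than the "Helical Wheel Projection" program mentioned in the figure note.

```python
def wheel_angle(position, reference=826, deg_per_residue=100):
    """Idealized helical-wheel angle (degrees) of a TM6 residue,
    measured relative to S826."""
    return ((position - reference) * deg_per_residue) % 360

# The four arginine-tolerant mid-helix positions of TM6:
angles = {pos: wheel_angle(pos) for pos in (826, 830, 834, 837)}
print(angles)  # {826: 0, 830: 40, 834: 80, 837: 20} -- one ~80 degree arc
```

By contrast, trafficking-impaired positions such as D831 map well outside this arc, consistent with their assignment to the lipid-facing or helix-packing surfaces.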
Lipid export activity of arginine mutants
Next, to determine whether the four amino acid residues that face the internal cavity (S826, F830, L834, and V837) are involved in substrate recognition or the transport process of ABCA1, we established cells stably expressing the respective arginine mutants. Because W840, whose arginine substitution was found in a Tangier disease patient, 18) was mapped one turn below V837, we also established cells stably expressing the W840R mutant. Cells stably expressing the D831R mutant, which should not be localized to the plasma membrane, were established as a negative control. We used flow cytometry analysis with an antibody against the HA peptide to determine how efficiently each ABCA1 variant was localized to the cell surface (Fig. 3(A)). In the case of cells expressing wild-type ABCA1(207HA)-GFP or five mutants (S826R, F830R, L834R, V837R, and W840R), the HA and GFP fluorescence intensities were well correlated, suggesting that these five mutant ABCA1 proteins were localized to the plasma membrane as efficiently as the wild type. In the case of the D831R mutant, however, the HA and GFP fluorescence intensities were not correlated, suggesting that this mutation hindered trafficking to the plasma membrane, as predicted in Fig. 1. Next, we measured apoA-I-dependent cholesterol and PC efflux from these mutants (Supplemental Fig. 1) and normalized lipid efflux efficiency to the total amount of ABCA1 on the cell surface, as described in the Materials and methods section (Fig. 3(B)). The lipid efflux efficiency of the L834R mutant was as low as that of the non-functional mutant ABCA1MM, whereas the S826R, F830R, and V837R mutants exhibited cholesterol and PC efflux activity as high as that of the wild type (Fig. 3(B)). The W840R substitution, which was found in a Tangier disease patient, did not affect the function of ABCA1 (Fig. 3(C)).
These results suggest that among the five amino acid residues predicted to be lined up along the B face of TM6 (Fig. 2), only L834R affects the function of ABCA1, although it does not affect protein folding.
Effect of amino acid substitution of L834 on lipid export activity
The results described above suggested that L834 is critical for substrate recognition or transport by ABCA1. To elucidate the role of L834 in the function of ABCA1, we substituted L834 with each of the 18 remaining amino acid residues (i.e. other than L and R). Cells stably expressing each mutant were established, and cell surface expression was analyzed. HA and GFP immunofluorescence intensities were well correlated in cells expressing most mutants, with the exceptions of L834D, L834E, and L834Q (Supplemental Fig. 2). We found that L834D, L834E, and L834Q could be localized to the cell surface when they were transiently transfected (Supplemental Fig. 3). Lipid efflux from the L834 mutants was measured and normalized to the total amount of ABCA1 on the cell surface (Fig. 4). The results revealed that 17 of the substituted amino acid residues did not significantly affect the lipid efflux activity of ABCA1 or the ratio of cholesterol efflux to PC efflux, whereas substitution with lysine reduced, and substitution with arginine abolished, the lipid efflux activity (both cholesterol and PC efflux) of ABCA1.
Discussion
In this study, we applied arginine scan mutagenesis to amino acid residues L821-E843 of human ABCA1 to predict the environment to which each residue of TM6 is exposed. Based on the output of SOSUI and PredictProtein, these residues are expected to form TM6. The relative surface expression of each mutant suggested that both ends of the helix formed by L821-E843 are in a hydrophilic environment, whereas the middle part is in a hydrophobic environment. Thus, as expected, amino acid residues L821-E843 pass through the plasma membrane as TM6.
Arginine scan mutagenesis was first performed by Loo et al. 19) Those authors introduced single arginine mutations into a processing mutant (G251V) of MDR1 (ABCB1), which is defective in folding and trafficking to the cell surface, and succeeded in identifying amino acid residues whose arginine mutations promoted maturation. The results suggested that those residues faced an aqueous drug translocation channel within MDR1. 19) In this study, we applied arginine scan mutagenesis to wild-type ABCA1 and succeeded in predicting the environment that each residue faces, suggesting that this method is also effective when applied to the wild-type protein.

Notes to Fig. 2: Amino acid residues whose arginine substitution did not affect protein trafficking (group 1) and whose arginine substitution impaired trafficking to the plasma membrane (group 2) are shown as filled circles and empty circles, respectively, on the two faces ((A) and (B)) of the helix. The helix model was built with the help of the program "Helical Wheel Projection."
The results of arginine scan mutagenesis suggested that the four amino acid residues (S826, F830, L834, and V837), whose replacement did not affect the protein trafficking, lined up along one face of TM6 from the extracellular side to the cytosolic side of the membrane (Fig. 2). Recently, we determined the 3D structure of CmABCB1, an MDR1 ortholog of Cyanidioschyzon merolae, at 2.4 Å resolution. 12) The structure contains a spacious internal cavity, in which the substrate-binding site is predicted to be located, and a portion of TM6 faces the internal cavity. Face A of TM6 of ABCA1 contains four amino acid residues (S826, F830, L834, and V837) that are predicted to be exposed to the hydrophilic internal cavity, which can accommodate the large hydrophilic side chain of arginine.
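The clustering of S826, F830, L834, and V837 on one face of TM6 can be checked with a simple helical-wheel calculation. The sketch below is illustrative only (it is not the "Helical Wheel Projection" program used here) and assumes an ideal α-helix with ~100° of rotation per residue:

```python
def wheel_angle(position, reference=826, degrees_per_residue=100.0):
    """Angular position of a residue on an ideal helical wheel, relative to S826."""
    return ((position - reference) * degrees_per_residue) % 360.0

# The four cavity-facing residues land within a narrow (~80 degree) arc ...
face = {p: wheel_angle(p) for p in (826, 830, 834, 837)}
# ... while D831, whose arginine mutant failed to traffic, falls on another face.
other = wheel_angle(831)
```

With these assumptions, S826, F830, L834, and V837 fall at 0°, 40°, 80°, and 20°, respectively, while D831 sits at 140°, consistent with the two faces shown in Fig. 2.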
The amino acid substitutions of L834 revealed that only arginine and lysine affected the lipid transport activity of ABCA1. Arginine abolished transport almost completely, whereas lysine had a more moderate effect. This observation suggested that the function of ABCA1 is affected by the length of the side chain at position 834, the number of positive charges, or both. However, the ratio of transported cholesterol and PC was unchanged. Negative charges on the side chain did not affect the function, although they slightly affected trafficking to the plasma membrane. These results suggest that the side chain at position 834 is not directly involved in substrate recognition. Alternatively, position 834 could play an important role in conformational changes during the transport process. The L834R mutation may make the inward-open form quite stable and hinder the conformational change to the outward-open form. Indeed, apo A-I did not bind to the ECD of the L834R mutant (data not shown), a binding event that is believed to be dependent on the conformational change of ABCA1 after ATP hydrolysis. 14) The amino acid residues of TM6, including L834, are conserved among mammalian ABCA1 proteins, further indicating the importance of this transmembrane helix for the functions of ABCA1.
Probst et al. 18) reported that a Tangier disease patient was heterozygous for two mutations, which result in the amino acid substitutions W840R and N935S. N935, which is in the highly conserved Walker A motif of the amino-terminal ATP-binding domain, is a Tangier mutation originally reported by Bodzioch et al. 1) Because family members heterozygous for N935S showed subnormal plasma HDL levels but only mild disease, 1) and because a patient heterozygous for W840R and N935S showed severe HDL deficiency, it was predicted that W840R is a Tangier mutation. 18) However, it was not clear whether the W840R substitution itself affects the function of ABCA1. In this study, we observed no obvious effects of this mutation on the trafficking and function of ABCA1 when expressed in HEK293 cells, suggesting that this mutation does not impair the transport activity of ABCA1 when expressed on the cell surface. Thus, W840R might affect protein trafficking or some type of post-translational regulation of ABCA1 in vivo.
In summary, based on our arginine scan mutagenesis, we predicted that four amino acid residues (S826, F830, L834, and V837) of TM6 are exposed to the hydrophilic internal cavity of ABCA1, and identified L834 as critical for the function of ABCA1. These findings should facilitate the study of the functional mechanism of ABCA1 in nascent HDL formation.
Supplemental material
The supplemental material for this paper is available at http://dx.doi.org/10.1080/09168451.2014.993358.
Characterization of an Algorithm for Autonomous, Closed-Loop Neuromodulation During Motor Rehabilitation
Background: Recent evidence demonstrates that manually triggered vagus nerve stimulation (VNS) combined with rehabilitation leads to increased recovery of upper limb motor function after stroke. This approach is premised on studies demonstrating that the timing of stimulation relative to movements is a key determinant of the effectiveness of this approach.

Objective: The overall goal of the study was to identify an algorithm that could automatically trigger VNS on the best movements during rehabilitative exercises, while maintaining a desired interval between stimulations, to reduce the burden of manual stimulation triggering.

Methods: To develop the algorithm, we analyzed movement data collected from patients with a history of neurological injury. We applied 3 different algorithms to the signal, analyzed their triggering choices, and then validated the best algorithm by comparing its triggering choices to those selected by a therapist delivering VNS therapy.

Results: The dynamic algorithm triggered above the 95th percentile of maximum movement at a rate of 5.09 (interquartile range [IQR] = 0.74) triggers per minute. The periodic algorithm produces stimulation at set intervals but low movement selectivity (34.05%, IQR = 7.47), while the static threshold algorithm produces long interstimulus intervals (27.16 ± 2.01 seconds) with selectivity of 64.49% (IQR = 25.38). On average, the dynamic algorithm selects movements that are 54 ± 3% larger than therapist-selected movements.

Conclusions: This study shows that a dynamic algorithm is an effective strategy to trigger VNS during the best movements at a reliable triggering rate.
Introduction
Neurological injuries are a common cause of disability in the U.S. There are approximately 800 000 strokes each year, and over 300 000 people live with the effects of spinal cord injury (SCI).1,2 Many survivors are left with long-term upper limb hemiparesis, which can lead to disability.3 There is a clear and present need to develop interventional strategies to reduce this disability.
VNS therapy is premised on the timing of VNS concurrent with upper limb movement during rehabilitative exercises.7 Limb movement is driven by engagement of motor networks in the central nervous system, and the concurrent VNS generates a rapid release of neuromodulators that facilitates synaptic plasticity in the active motor networks. Consequently, degradation of the timing between VNS and the occurrence of the target movement reduces the efficacy of this approach. Studies in animal models show that delaying VNS until after training results in significantly less recovery.8 Moreover, even VNS delivered during rehabilitative exercises fails to be effective if it is not delivered concurrent with the best movements.9 Finally, emerging clinical studies provide some preliminary evidence of the need for precise timing. Whereas VNS delivered by a therapist during movements produces robust enhancement of upper limb recovery, additional VNS delivered during unsupervised exercises, where stimulation did not explicitly coincide with movement, provides comparatively modest benefits.5,6,10 In addition to these lines of evidence, the excellent adherence to the use of VNS at home and compounding functional benefits raise the prospect that a strategy to allow precise timing of stimulation during movement holds promise to maximize the benefits of VNS.10

Telerehabilitation solutions promote therapy adherence after neurological injury and demonstrate equivalent or better outcomes when compared to conventional face-to-face therapy.11 Additionally, telerehabilitation increases engagement and permits longer courses of rehabilitation, which has been shown to produce additional recovery.12,13 Take-home systems, such as RePlay, have been developed to support high-repetition motor rehabilitation and could readily be combined with strategies to improve VNS stimulation timing. To this end, we sought to leverage advances in rehabilitative technology and use miniaturized sensors in conjunction with an algorithm to design a system that automatically triggers stimulation based on selected parameters of movement during rehabilitation. To develop the algorithm, we analyzed data previously collected from 14 stroke and 18 cervical SCI patients using motion controllers to perform rehabilitative exercises. We selected the relevant sensor dimension for each exercise to isolate a single signal, then allowed an algorithm to simulate when to deliver stimulation as the exercise progressed. We simulated the application of 3 different algorithms to the signal and analyzed their triggering choices. Based on this analysis, we determined that a dynamic algorithm reliably selected the best movements with the desired timing intervals. The dynamic algorithm continually adapts over time to adjust for intermittent periods of rest and person-to-person variability, and can flexibly be applied to signals from various controllers (handheld sensors and touchscreen) while maintaining similar triggering characteristics. In addition to the simulated triggering analyses, we compared the dynamic algorithm to actual manual VNS triggers selected by a therapist from the same dataset. Validation of this approach shows that the dynamic algorithm selects optimal movements comparably to or better than a trained human observer. These findings lay the groundwork for the implementation of this approach to supplement delivery of VNS in future studies.
Study Design and Testing Protocol
All procedures were approved by the Institutional Review Board at the University of Texas at Dallas.14,15 A total of 32 participants, ages 23 to 77 years, with a history of upper limb motor impairment due to stroke or SCI (mean time since neurological injury 5.8 ± 1.4 years) were recruited for a VNS clinical study with RePlay, a tablet-based rehabilitation system,16 and ReStore, an implantable device for VNS.17 All participants had motor impairments in the upper limbs with some residual function. Participants with stroke mostly completed range-of-motion exercises and SCI participants primarily completed isometric strength exercises, according to their deficits. We pooled their data together to increase the number and type of unique exercises to analyze. Study participants used the FitMi handheld motion controller (Flint Rehab, California) and the ReCheck system to perform rehabilitative exercises while playing games on an Android tablet.18 The FitMi controller is a rubberized puck that contains several sensors, including a 3-axis accelerometer/gyrometer, magnetometer, and a force sensor. The ReCheck system supports 4 isometric tasks and 3 range-of-motion tasks via interchangeable modules. In addition to the FitMi and ReCheck controllers, an additional game allowed use of the tablet's touchscreen for gameplay. Physical therapists guided participants to perform exercises that challenged range of motion and isometric strength. Measurements from the sensors included rotation angle (°), pressing and pinching force (g), and movement distance. Measurements from the touch screen included the speed of finger movements across the screen. The study contained both traditional repetition-based exercises and dynamic game-controlling movements, each measured by a sensor array housed in the selected handheld device. If patients were not able to grasp and hold the device, the device was affixed to a stabilizing base.
Signal Processing
Movement Signal Capture and Preprocessing. To capture movement data during rehabilitative exercises, the tablet application streamed and processed incoming 60 Hz data from the controllers and saved the data to local storage for offline analysis (Figure 1). Custom Python routines were developed for simulations and analysis.
Each algorithm accepted a preprocessed discrete signal of a single dimension. To select the relevant channel, RePlay required users to select specific exercises prior to movement initiation. This selection process guides the software to isolate the movement data to the specific sensor and dimension that matched the exercise. The software then processed the rotation or force signals to construct movement signals that describe rate of change over time. To begin, unprocessed movement signals were smoothed with a moving average filter (discrete, linear convolution, 300 ms of movement data in the smoothing window; kernel size varied with the calculated sampling rate). A 300 ms smoothing window was chosen because large movements require approximately 300 ms to complete.19 At each point in time, the gradient of the most recent values of the smoothed movement signal was obtained to calculate the rate of change. The mean gradient value over the window was calculated, resulting in a single value that represented the average rate of change over the 300 ms window. Each consecutive movement signal value was processed in the same manner. This signal processing resulted in a rate-of-change-based signal that could be used as input to an algorithm for stimulation (Figure 4). For force-based exercises, this signal indicates the rate of change of force, which may be positive (pressing or gripping) or negative (releasing). For rotation-based exercises, a positive signal indicates a clockwise angular rate of change of the sensor puck, and a negative signal indicates a counterclockwise angular rate of change. For touchscreen data, this created signals that described the speed of touch over time.
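The preprocessing described above (a moving-average smooth followed by a windowed mean gradient) can be sketched as follows. This is a simplified reconstruction assuming a 60 Hz sample rate and NumPy; the exact filter implementation in the study's Python routines may differ:

```python
import numpy as np

def rate_of_change_signal(raw, fs=60.0, window_s=0.3):
    """Smooth a raw sensor trace and return its windowed mean rate of change."""
    k = max(2, int(round(fs * window_s)))          # ~300 ms of samples
    smoothed = np.convolve(raw, np.ones(k) / k, mode="same")
    out = np.zeros_like(smoothed, dtype=float)
    for i in range(1, len(smoothed)):
        seg = smoothed[max(0, i - k + 1): i + 1]   # most recent ~300 ms
        out[i] = np.mean(np.gradient(seg))         # average rate of change
    return out
```

As in the text, the sign of the output conveys direction: positive for pressing or clockwise rotation, negative for releasing or counterclockwise rotation.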
We processed manual triggering data for comparison with the algorithms. A Google Pixel 2 smartphone triggered and recorded therapist-selected manual stimulations during the study, while a Samsung Galaxy Tab S4 tablet captured movement data during gaming and repetition-based exercises. To align clocks between devices, we calculated the average difference between session start time on the tablet and stimulation start time on the smartphone to realign the triggers with the movement data.
Movement minimums were selected for each exercise to separate movement from noise.The movement minimum for each exercise was determined by measuring the mean maximum value of the signal when a control subject held the controller with the arm at rest.Any activity that did not exceed the value of the movement minimum was excluded.
Dynamic Algorithm Process. As movement began and progressed, signal preprocessing was performed as described above, and each sample was passed to the algorithm. The algorithm placed each incoming sample into a buffer that held up to 3000 of the most recent samples. The buffer size was selected so that it was long enough to capture short game sessions (~1 minute) but short enough to adjust to changes in movement amplitude that may occur due to fatigue or level changes in longer games (>4 minutes). Stimulation was prevented at the beginning of the game prior to movement initiation. The dynamic algorithm continuously analyzed the 3000-sample buffer, calculated a rate-of-change threshold value at the user-selected percentile, and adjusted it for every new preprocessed sample. If the dynamic threshold was surpassed by an incoming value, the algorithm delivered a simulated VNS trigger. Optionally, users could set values for the minimum interstimulus interval and directionality.
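A minimal sketch of the dynamic triggering logic, assuming a 60 Hz stream, a 3000-sample buffer, a 95th-percentile threshold, a 5-second minimum interstimulus interval, and a short warm-up to prevent stimulation before movement begins. Names and parameter defaults here are illustrative, not the study's code:

```python
from collections import deque
import numpy as np

class DynamicTrigger:
    def __init__(self, percentile=95.0, buffer_len=3000, min_isi_s=5.0, fs=60.0):
        self.buf = deque(maxlen=buffer_len)   # most recent movement samples
        self.percentile = percentile
        self.min_gap = int(min_isi_s * fs)    # refractory period, in samples
        self.warmup = int(fs)                 # block triggers until data accrues
        self.since_last = self.min_gap

    def update(self, sample):
        """Feed one preprocessed rate-of-change sample; return True to trigger VNS."""
        self.buf.append(abs(sample))
        self.since_last += 1
        if len(self.buf) < self.warmup or self.since_last < self.min_gap:
            return False
        threshold = np.percentile(self.buf, self.percentile)
        if threshold > 0 and abs(sample) > threshold:
            self.since_last = 0               # enforce minimum interstimulus interval
            return True
        return False
```

Because the threshold is recomputed from the rolling buffer on every sample, the triggering criterion tracks recent performance, which is what lets the same parameters serve patients with very different impairment levels.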
Static Algorithm Process.
As movement began and progressed, signal preprocessing was performed as described above, and each sample was passed to the algorithm. As each sample was analyzed, the static algorithm compared its value to a user-selected multiple of the movement minimum for that exercise. If the static threshold was surpassed by an incoming value, the algorithm delivered a simulated VNS trigger. Optionally, users could set values for the minimum interstimulus interval and directionality.
Periodic Algorithm Process.
No signal preprocessing or continuous analysis was required for the periodic algorithm, as it is signal agnostic. Users were required to set a value for the interval between stimulations. The algorithm delivered a simulated VNS trigger periodically at the set interval.
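For comparison, the static and periodic strategies can be sketched as simple functions over a recorded session. Thresholds and intervals follow the descriptions above (a multiple of the movement minimum; a fixed inter-stimulus interval); the function names and sample-based units are illustrative assumptions:

```python
def static_triggers(signal, movement_min, multiplier=32, min_isi_samples=300):
    """Trigger whenever the signal exceeds a fixed multiple of the noise floor."""
    threshold = movement_min * multiplier
    triggers, last = [], -min_isi_samples
    for i, v in enumerate(signal):
        if abs(v) > threshold and i - last >= min_isi_samples:
            triggers.append(i)
            last = i
    return triggers

def periodic_triggers(n_samples, interval_samples=720):
    """Trigger at a fixed interval (720 samples = 12 s at 60 Hz), ignoring movement."""
    return list(range(interval_samples, n_samples, interval_samples))
```

The contrast with the dynamic approach is visible in the signatures alone: the static version needs a per-exercise noise floor and multiplier chosen in advance, and the periodic version never looks at the signal at all.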
Statistics
Data are reported as mean ± standard error of the mean (SEM) or median with interquartile range (IQR). Where appropriate, standard parametric statistical tests (paired or unpaired t-tests) were used to make comparisons. Statistical tests for each comparison are noted in the text. Paired 2-tailed t-tests were used to determine differences in the triggering quality and rate of the algorithms, and unpaired 2-tailed t-tests were used to determine differences in the movement signal pairings. The threshold for statistical significance was set at P < .05. Error bars in figures represent SEM. Whiskers in boxplots represent Q1 − (1.5 × IQR) or Q3 + (1.5 × IQR).
Collection of Quantitative Upper Limb Movement Data From Stroke and SCI Patients
We sought to design a real-time algorithm that could identify the best movements during a variety of different rehabilitative exercises and, on average, produce 5 stimulation pairings per minute, based on the most effective paradigms from preclinical studies. To develop this algorithm, we simulated various triggering algorithms on a previous set of rehabilitative movement data collected from 14 stroke and 18 SCI patients with impairments in upper limb motor function. The dataset includes captured movements from 9 sensing devices and 7 games. Data were collected from 1160 exercise sessions of 30 seconds or longer. Individual sessions had unique characteristics relative to each participant, game, exercise, and controller type (Figure 1). FitMi puck and ReCheck device movement was measured as rotation angle or force, and touch screen movement was measured as swipe speed.
We developed 3 algorithms premised on differing selection criteria and applied them to the previously collected data. The first algorithm delivered stimulation triggers when the movement signal exceeded a dynamically adjusted minimum activity threshold, which varied during the exercise session based on recent movement (Figures 2A and 3A). The second algorithm delivered stimulation triggers when the movement signal exceeded a fixed minimum activity threshold (Figures 2B and 3B). The third algorithm delivered stimulation triggers at a regular interval, irrespective of the movement signal (Figures 2C and 3C). To evaluate the performance of these algorithms, we applied each algorithm to the previously collected data sets and examined movement magnitude and the interval between triggering instances. Additionally, we compared the performance of the dynamic algorithm to previously recorded triggering selections made by a therapist during rehabilitation exercises.
A Dynamic Algorithm Pairs VNS With the Best Movements and Produces Minimum Triggering Rate Variability
First, we explored the performance of a dynamic algorithm.21-23 Since variability of impairment levels between stroke or SCI patients is a consideration, the dynamic algorithm was designed to actively adjust the stimulation threshold to account for differences in performance across subjects and exercises (Figure 4). We examined the algorithm at multiple minimum activity thresholds created from distributions of recent movement (percentiles: 45%, 55%, 65%, 75%, 85%, and 95%). Analysis of the algorithm showed that VNS was routinely paired with movements above the set minimum percentile of recent movement. When the algorithm was set to pair VNS with movements above the 95th percentile of recent movements, the algorithm triggered at 5.09 (IQR = 0.74) stimulations per minute and selected movements at a percentile of 97.61% (IQR = 0.88) during VNS (Figure 5A and 5B).
All triggers were paired with movements above the 95th percentile of recent movements. Moreover, the dynamic algorithm produced triggers in all exercise samples, indicating that this approach is able to adjust to various types of signals and levels of participant performance.
A Static Threshold Algorithm Pairs VNS With Movements But Produces Substantial Triggering Rate Variability
Second, we evaluated the performance of a static threshold algorithm. The static threshold reduced the complexity of the signal processing but, as a tradeoff, could reduce the flexibility to accommodate differences in performance across different patients and exercises. We explored the performance of this algorithm at multiple fold increases over the noise floor (movement minimum multipliers = 1, 2, 4, 8, 16, and 32). For comparison, we set the level that produced a median triggering rate similar to the dynamic algorithm. Figure 5A and 5B show that a movement minimum multiplier of 32× produces a triggering rate of 4.94 (IQR = 1.78) stimulations per minute and selectivity of 64.49% (IQR = 25.38).
In 29.03% of sessions, no stimulation triggers occurred during the entirety of the session. Individual analyses of these sessions revealed that participants were moving during the session, but the movements were not large enough to surpass the static threshold. In 43.86% of sessions, total movement-VNS pairings fell below 95% selectivity. Individual analyses of these sessions revealed an abundance of activity above the minimum activity threshold that was set. As expected, when the triggering threshold was increased, the selectivity of the algorithm increased and the triggering rate decreased (Supplemental Figure 1). At high thresholds, selection emphasized performance-based triggering, so stimulations only occurred on the largest movements. However, this consequently produced less frequent triggers. Similarly, when the triggering threshold was decreased, the selectivity of the algorithm decreased and the triggering rate increased.
A Periodic Algorithm Provides Consistent Inter-Stimulation Intervals But Poor Selectivity
Finally, we investigated the performance of a periodic algorithm. This represents the simplest implementation of signal processing, but because it does not expressly account for performance, it may fail to trigger stimulation concurrent with the best movements. We evaluated performance at multiple inter-stimulus intervals (6, 6.67, 7.5, 10, 12, and 15 seconds). By design, each timing parameter resulted in a consistent stimulation interval at the set value (Figure 5A). Because this approach does not consider movement when determining triggering, the algorithm often produced trigger events when no movement was occurring and only rarely produced trigger events during large movements. As a result, the algorithm consistently triggered during movements that were below the 50th percentile of recent movements. The median movement selectivity was 34.05% (IQR = 7.47) when the inter-stimulus interval was set to 12 seconds (Figure 5A and 5B). Because the algorithm does not account for movement, it frequently triggered during periods of rest, which decreased the percentile of the selected movements. When the inter-stimulus interval was shortened or lengthened, the selection characteristics of the algorithm did not improve (Supplemental Figure 1). Thus, the periodic algorithm produced reliable trigger intervals but demonstrated poor selectivity for the best movements.
Not All Algorithms Create Appropriate Triggering Intervals
In the periodic algorithm, the inter-stimulus interval is the only input parameter and is constant across exercise types and patients (Figure 5B). The threshold and dynamic algorithms employ a minimum inter-stimulus interval to ensure stimulations are separated by at least 5 seconds. When the minimum activity threshold in the static algorithm is set to 32 times the movement minimum, the mean inter-stimulus interval is 27.16 ± 2.01 seconds for sessions where triggering occurred. When the minimum percentile of recent movement is set to 95% in the dynamic algorithm, the average inter-stimulus interval is 13.87 ± 0.22 seconds. Thus, the periodic and dynamic algorithms can produce triggering near the desired 12-second interval, but the threshold algorithm does not consistently produce enough triggers.
The Dynamic Algorithm Maximizes Movement Magnitude Across All Exercises and Capabilities
Triggering stimulation to coincide with the best movements during rehabilitation is necessary for VNS-dependent benefits.24 We compared the selectivity of the algorithms to determine which balanced consistent timing with triggering on the best movements. The quality of the movements selected by the algorithms is represented as the percent of maximum movement during each exercise. Overall, the dynamic algorithm resulted in the greatest percent of maximum movement compared to the periodic and static algorithms (Figure 5B; periodic: 33.86 ± 1.01%, paired t-test, P = 1.13 × 10−34; static: 64.30 ± 3.04%, paired t-test, P = 2.77 × 10−12). This indicates that the dynamic algorithm provides the most reliable selection of the best movements across exercises and participants (Figure 6, Supplemental Table 1).
The Dynamic Algorithm Selects Larger Movements Than Supervised Manual Triggering
Based on the ability to trigger at the desired interval and the selection of the largest movements, the dynamic algorithm represented the optimal triggering paradigm of those tested. Because the algorithm is ultimately intended to facilitate the delivery of VNS therapy by reducing the burden on a therapist to trigger stimulation, we sought to directly validate performance by comparing the dynamic algorithm's stimulation selection to that delivered by a trained therapist. To do so, we reanalyzed a large set of rehabilitative data in which a therapist triggered stimulation and compared the paired movement magnitude and stimulation timing to the dynamic algorithm's selections. We individually normalized the movement data by calculating the average paired peak size within ±1 second of periodic stimulations at 12-second intervals throughout the therapy session, for all possible 2-second samples. Normalization was performed per participant, per exercise, per game, for each therapy date. The selection quality of the manual and dynamic algorithm triggers is represented by the percent improvement of the paired movement peaks over the periodic algorithm.
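The normalization described above can be sketched as follows: each trigger is scored by the peak movement within ±1 second of the trigger, and scores are expressed as percent improvement over the mean peak paired by 12-second periodic stimulation. This is a simplified reconstruction (per-participant, per-exercise grouping omitted), not the study's analysis code:

```python
import numpy as np

def percent_gain_over_periodic(signal, trigger_idx, fs=60.0,
                               interval_s=12.0, win_s=1.0):
    """Percent improvement of trigger-paired peaks over periodic-paired peaks."""
    sig = np.abs(np.asarray(signal, dtype=float))
    w = int(win_s * fs)

    def paired_peak(i):
        # Largest movement within +/- win_s seconds of a stimulation.
        return sig[max(0, i - w): i + w + 1].max()

    step = int(interval_s * fs)
    periodic = range(step, len(sig), step)
    baseline = np.mean([paired_peak(i) for i in periodic])
    selected = np.mean([paired_peak(i) for i in trigger_idx])
    return 100.0 * (selected - baseline) / baseline
```

Under this scoring, a value of 0% means the triggers paired movement no better than a movement-blind countdown timer.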
Both manual triggers and the dynamic algorithm pair VNS with larger movement peaks than the periodic algorithm. Peak-pairing performance of the dynamic algorithm indicates that the algorithm selects large movements at least as well as a trained human observer (Figure 7, P = 1.77 × 10−74; dynamic algorithm stimulations: 25 203, manual stimulations: 31 079). On average, the dynamic algorithm selects movements that are 70.28 ± 3.03% larger than the periodic algorithm and 54.38 ± 2.97% larger than therapist-selected movements (Figure 7).
Discussion
Here, we report the design of an algorithm capable of pairing VNS with the best movements during upper limb rehabilitation following neurological injury. We recorded movement data from 14 stroke and 18 SCI patients during a variety of different rehabilitative exercises. We used this data to develop a dynamic algorithm and compared it to alternative and clinically employed algorithms to determine which strategy exhibits the best triggering criteria. After testing a range of parameters within each of the algorithms, we identified a set of parameters within the dynamic algorithm that can select the best movements while maintaining a consistent median triggering rate. Additionally, we validated that the dynamic algorithm performs at least as well as a trained human observer, indicating that this approach represents a means to provide unsupervised, closed-loop VNS during rehabilitation.
The motivation for this study was to develop an algorithm to facilitate feedback-controlled neurostimulation during rehabilitation, aimed at increasing the dose and quality of VNS pairings. Previous studies show that stimulation timing and trial selection affect the magnitude of VNS-dependent enhancement of post-stroke and post-SCI recovery.8,9,20,24,25 A recent preclinical study clearly illustrates the reliance of VNS effects on trial selection. Pairing VNS with the strongest forelimb movements during rehabilitative training significantly enhanced recovery of forelimb strength, whereas pairing the weakest movements failed to promote recovery.9,21-23 Moreover, several studies confirm that a matched amount of stimulation that is not paired with movement fails to enhance recovery.6,8,25 These provide the rationale for developing an algorithm that can trigger stimulation concurrent with the best movements. Several additional lines of evidence demonstrate the importance of stimulation timing on VNS-dependent effects.26 Faster rates of stimulation (ie, shorter inter-stimulation intervals) are associated with smaller VNS-dependent effects in preclinical models.27,28 Moreover, clinical evidence shows that large amounts of periodic VNS during rehabilitation provide only modest benefits compared to stimulation delivered explicitly concurrent with exercises.5,10 Together, these findings reinforce the importance of incorporating inter-stimulation timing into an algorithm for unsupervised stimulation. Overall, these findings indicate that VNS is likely most effective when therapy sessions include an effective number of stimulations that pair the best movements with longer intervals between stimulations.
Given the reliance on movement selection and inter-stimulation timing, we developed algorithms with different characteristics to achieve VNS triggering based on these factors, including replicating the methods that were effective in preclinical studies.21-23 Successful translation of this approach from bench to clinic requires task consideration. Patients perform continuous tasks while playing games for several minutes at a time to maximize repetitions within the allotted rehabilitation session time. Thus, the dynamic algorithm employed here was augmented to fit a continuous signal while still adjusting the VNS threshold based on recent movement. The dynamic algorithm selects larger movements than a trained observer. During upper limb physical therapy with RePlay and ReCheck, the dynamic algorithm triggered stimulation on movements that were 54.38 ± 2.97% larger than movements selected by a trained physical therapist (unpaired 2-tailed t-test, P = 1.77 × 10−74). We individually normalized the movement data by calculating the average paired peak size within ±1 second of periodic stimulations at 12-second intervals throughout the therapy session. The dynamic algorithm and the periodic algorithm were applied in post-hoc analysis, and the manual stimulations were conducted in real time.
Each of the algorithms explored here has unique benefits. The periodic algorithm employs simple timing for the sake of prioritizing the rate of VNS over the quality of its pairings and matches the conventional method for unsupervised VNS delivery. This algorithm is beneficial for ease of implementation and simplifies validation testing. However, since the periodic algorithm is operated by a countdown timer, it does not account for movement quality and thus does not provide selection of the best movements. Alternatively, the static algorithm can select movements while maintaining an optimum stimulation rate. The capability to select movements makes a static threshold appealing, but the realistic application of such an algorithm is hindered by the large variability of impairment levels observed in patients with neurological injuries. Additionally, motor performance can fluctuate day-to-day or with improvement over the course of rehabilitation, which complicates selection of an appropriate static threshold. If the threshold is set too high, a patient may not perform any movements of a magnitude great enough to trigger VNS. If the threshold is set too low, movement of virtually any magnitude will trigger stimulation, which limits selection of the best movements. The dynamic algorithm compensates for this issue by automatically individualizing the triggering criteria in real time without the need to fine-tune parameters for each person or session. An adaptive minimum activity threshold fluctuates according to recent movement to achieve selectivity on the best movements while maintaining the optimum rate of stimulation. The dynamic threshold ensures the algorithm can be applied to various exercises across a range of impairment levels without losing its main advantages. These characteristics increase the number of ideal VNS pairings during therapy.
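To make the comparison concrete, the three triggering strategies can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation; the 95th-percentile criterion, 3000-sample history, 5 s minimum inter-stimulation interval, and 12 s period are taken from the figures of this paper, and all class and function names are hypothetical.

```python
from collections import deque

class DynamicTrigger:
    """Fire when the current sample is in the top 5% of recent movement,
    subject to a minimum inter-stimulation interval (illustrative sketch)."""
    def __init__(self, history=3000, percentile=95.0, min_isi=5.0):
        self.buf = deque(maxlen=history)   # rolling window of recent samples
        self.percentile = percentile
        self.min_isi = min_isi             # seconds between stimulations
        self.last_stim = float("-inf")

    def threshold(self):
        # Adaptive minimum activity level: a percentile of recent movement.
        if not self.buf:
            return float("inf")
        ranked = sorted(self.buf)
        k = int(len(ranked) * self.percentile / 100.0)
        return ranked[min(k, len(ranked) - 1)]

    def update(self, t, sample):
        fire = (sample >= self.threshold()
                and t - self.last_stim >= self.min_isi)
        self.buf.append(sample)
        if fire:
            self.last_stim = t
        return fire

def static_trigger(sample, preset_threshold):
    """Static algorithm: fixed, manually tuned minimum activity level."""
    return sample >= preset_threshold

def periodic_trigger(t, last_stim, period=12.0):
    """Periodic algorithm: countdown timer, movement is ignored."""
    return t - last_stim >= period
```

The sketch shows the trade-off discussed above: the static and periodic rules are stateless one-liners, while the dynamic rule carries a rolling sample distribution so that its threshold tracks each patient's recent performance.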
VNS has emerged as an FDA-approved strategy to enhance rehabilitation following neurological injury.5,6 In its conventional implementation, VNS is delivered by a therapist who pushes a button during movements they want to reinforce. This method enhances motor function, and improvements persist over time, demonstrating that supervised rehabilitation with VNS can generate long-lasting, clinically significant improvements in stroke patients.5,6 Maximizing the clinical impact of VNS for stroke recovery may depend on the selection of an algorithm that can properly address the many unique movements that take place during rehabilitation. Some clinical observations support this notion. Whereas patients demonstrated significant changes in function when VNS was triggered by a therapist observing movement, patients from a pilot study demonstrated comparatively modest gains when receiving unsupervised VNS that was not explicitly paired with movements, even when stimulation was delivered for years.5,10 That approach uses a stimulation paradigm congruent with the periodic algorithm described in this study. The absence of continued improvements in function may reflect the lack of consistent stimulation during the best movements. Given the advantage in selectivity with the dynamic algorithm, it is reasonable that using this approach to deliver unsupervised closed-loop stimulation during rehabilitation over a long time course may represent a means to drive greater recovery.16,28
Since the conventional approach involves a therapist pairing VNS manually with rehabilitative movements, we sought to determine if the dynamic algorithm could select movements at least as well as a trained observer. An algorithm capable of matching the selection characteristics of a human observer would allow for paired VNS during unsupervised rehabilitation and also let therapists focus on rehabilitative exercises rather than stimulation timing. In the current study, therapists observed patients during gaming and pressed a button to deliver stimulation when large movements were observed, with a limit of at least 5 seconds between consecutive stimulations. We used this manual stimulation timing data to conduct post-hoc timing analysis of the dynamic algorithm and the human observer on the same movements. The result of this comparison indicates that the algorithm is able to match and exceed the selection of large movements during therapy. The automation of this effective conventional approach indicates the algorithm could be used for unsupervised at-home VNS.
Full automation of movement selection must consider unbalanced deficits that could be present during bidirectional exercises. Motor impairments after neurological injury commonly present as deficits that exist to a greater degree in 1 direction over another, such as a moderate impairment of forearm supination and severe impairment of forearm pronation in the same arm. These deficits appear as low levels of activity in 1 direction of the respective movement signal. We considered this scenario and designed the dynamic algorithm to handle bidirectional movements by optionally maintaining 2 separate movement distributions, which provides automatic adjustment of 2 individual thresholds during bidirectional movements (Supplemental Figure 2). The static algorithm can compensate when different minimum activity thresholds are set for each direction, but extensive manual tuning would be needed. The periodic algorithm is unable to compensate for a deficit imbalance, indicating it is least suitable for handling bidirectional movement training.
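A minimal sketch of the optional bidirectional mode described above: two independent rolling distributions, one per movement direction, each with its own adaptive threshold. The class name, example directions, and parameters are hypothetical, not taken from the paper's implementation.

```python
from collections import deque

class BidirectionalTrigger:
    """Keep separate sample histories for positive (e.g. supination) and
    negative (e.g. pronation) movement, so an imbalanced deficit in one
    direction does not mask movements in the other (illustrative sketch)."""
    def __init__(self, history=3000, percentile=95.0):
        self.pos = deque(maxlen=history)
        self.neg = deque(maxlen=history)
        self.percentile = percentile

    @staticmethod
    def _thr(buf, pct):
        # Percentile of the direction-specific distribution.
        if not buf:
            return float("inf")
        ranked = sorted(buf)
        return ranked[min(int(len(ranked) * pct / 100.0), len(ranked) - 1)]

    def update(self, sample):
        if sample >= 0:
            fire = sample >= self._thr(self.pos, self.percentile)
            self.pos.append(sample)
        else:
            fire = -sample >= self._thr(self.neg, self.percentile)
            self.neg.append(-sample)
        return fire
```

Because each direction is ranked only against its own history, a severely impaired direction still produces triggers on its relatively best movements, which a single shared distribution would suppress.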
This algorithm is designed to identify instances where a single-dimension signal outperforms its own previous activity; however, greater complexity of movement analysis that requires multidimensional examination may be valuable, such as during bimanual exercises. In the future, signal preprocessing could combine multiple dimensions by averaging multiple channels together, taking the largest channel, or many other methods that could be tested and employed. Future implementations could utilize multiple signals competitively to prevent stimulation during movement with characteristics that would suggest unwanted compensation.
In addition to developing an algorithm that can select optimal movements while maintaining an effective triggering rate, the simplicity of the algorithm is a consideration. It is likely that more complex methods, such as approaches premised on machine learning, could be comparable or superior at movement selection. However, we sought to develop an algorithm that yielded the appropriate behavior with minimal complexity based on 2 overarching considerations. First, machine learning algorithms can be computationally intensive, and we sought to avoid a scenario that required combining a high-bandwidth computation algorithm and the software that governs control of the VNS system on the same smart device. Second, the black-box nature of machine learning complicates verification and validation testing for regulatory approval, a crucial consideration in eventual deployment of this approach. We expect this algorithm to be useful in future applications of medical devices with physiologic closed-loop control technology or for integration into Software as a Medical Device; accordingly, we have followed all relevant and emerging guidance, such as simplicity of system integration and operational transparency for clinicians, while meeting our requirements for rate and signal peak selection.
Here, we describe an algorithm that can be used to pair VNS with the best movements at a reliable interval to replicate and build on the manually paired VNS delivery paradigm that produced clinical benefits. Automatic closed-loop VNS can be achieved with an algorithm that dynamically modulates a minimum activity threshold based on previous movements. This approach performs at least as well as a trained human observer, providing initial evidence of validity. If effective, this strategy could improve the timing of VNS delivery during rehabilitation, reduce the burden on therapists of simultaneously overseeing rehabilitative exercises and stimulation delivery, and allow for closed-loop unsupervised stimulation at home to extend the duration of therapy. Future studies should implement this algorithm to control VNS delivery and determine whether this approach can complement conventional VNS therapy to generate greater recovery in individuals with neurological injury. The algorithm may also be effective in providing closed-loop neuromodulation via transcranial magnetic stimulation, spinal cord stimulation, cortical stimulation, deep brain stimulation, or peripheral nerve stimulation.
Figure 1. Examples of rehabilitative exercises and the corresponding movement signals collected from participants with stroke or SCI. Game titles and exercise types are listed above each representative signal. Participants controlled 9 sensing devices and 7 games during rehabilitative exercises to produce the movement signals in this study. Not all combinations are shown here. (A) Some exercises are conducted during gameplay. The Space Runner game responds to a force signal. Participants control the Fruit Ninja game with a touch screen, where they drag a finger across the touch screen to produce a signal that represents finger location over time. Here, the touch signal is shown as either swiping or not swiping. Participants control the Traffic Racer game by rotating a sensor puck that rests on a table. (B) Some exercises are performed without companion games to replicate traditional rehabilitation. A sensor puck can be used to detect movement for repetitive exercises such as curls, shoulder abduction, or reach across.
Figure 2. Graphical description of each algorithm. (A) The dynamic algorithm measures the movement in progress, adjusts a minimum activity level, and triggers stimulation if that movement exceeds the threshold. (B) The threshold algorithm measures the movement in progress and triggers stimulation if that movement is larger than a preset minimum activity level. (C) The periodic algorithm employs a countdown timer that triggers stimulation upon expiration.
Figure 3. Examples of triggering during rehabilitative exercises with each algorithmic method. A rotation exercise produced the representative signal in each plot. (A) The dynamic algorithm triggers stimulation when movement crosses a variable threshold based on a percentile of recent movement. The green curve represents a threshold that marks the 95th percentile of recent movement in the direction of supination. (B) The threshold algorithm triggers stimulation when movement crosses beyond preset movement levels. The blue horizontal line represents the preset movement level, which is set to 32× the movement minimum in this example. (C) The periodic algorithm triggers stimulations every 12 seconds, regardless of movement. Vertical dashed lines represent VNS stimulations. Red dots represent the movement level that coincided with the VNS trigger.
Figure 4. Illustrative example of movement sampling and the processes employed by the dynamic algorithm. (A) Prior to analysis by the algorithm, the movement signal is collected by the sensor and preprocessed. Step 1: Users select specific exercises from the RePlay application to guide the rehabilitation and select the relevant sensor dimension. Step 2: Sensors capture movement in a single relevant dimension while the rehabilitation exercise is performed. Step 3: The movement data is continuously smoothed with a convolution filter to remove the noise. The gradient of the samples in the last 300 ms of the smoothed signal produces the rate of change. The preprocessing ends by calculating the average rate of change within the 300 ms window. (B) Preprocessed movement signal samples are continuously delivered to the algorithm for triggering decisions. Step 1: The algorithm receives single values previously calculated from the average rate of change in the movement signal. Step 2: The dynamic algorithm identifies the size of the current movement sample by its location in the distribution of up to 3000 recent samples. Step 3: If the movement magnitude is in the top 5% of the 3000 samples, the algorithm triggers VNS if the 5 s minimum inter-stimulus interval has passed.
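The preprocessing steps in the caption above (smoothing, differentiation over the trailing 300 ms, averaging) can be sketched as follows. The 100 Hz sampling rate and the moving-average kernel length are assumptions for illustration, not values from the paper.

```python
import numpy as np

def preprocess(signal, fs=100, kernel_len=10, window_ms=300):
    """Return the average rate of change over the trailing window_ms of a
    smoothed 1-D movement signal (illustrative sketch; fs and kernel_len
    are assumed values, not taken from the paper)."""
    # Step 3a: smooth with a moving-average convolution filter
    kernel = np.ones(kernel_len) / kernel_len
    smoothed = np.convolve(signal, kernel, mode="valid")
    # Step 3b: gradient of the last window_ms of the smoothed signal
    n = max(2, int(fs * window_ms / 1000))
    rate = np.gradient(smoothed[-n:], 1.0 / fs)
    # Step 3c: the preprocessing output is the average rate of change
    return float(rate.mean())
```

A linear ramp rising at 2 units/s, for example, yields a preprocessed value of about 2.0 regardless of the kernel length, since averaging preserves the slope.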
Figure 5. The dynamic algorithm yields the most robust selection of large movements and a reliable triggering interval. (A) The periodic algorithm produced consistent triggers every 12 seconds for a rate of 5 stims/minute. Variance of the periodic triggering rate seen here is from gameplay times not divisible by 12. The static threshold algorithm produced a median triggering rate of 4.94 (IQR = 1.78) stimulations per minute with considerable variance. The dynamic algorithm produced a median triggering rate of 5.09 (IQR = 0.74) stimulations per minute with moderate variance. No significant differences between median rates were observed. The notched boxes in each plot represent 1160 exercise sessions and include all controllers, games, and participants. Notches represent a 90% CI of the median. (B) The quality of the movements selected by the algorithms is represented as the percent of maximum movements during each exercise. The periodic algorithm triggered VNS on 34.50% (IQR = 7.47) of maximum movement. The threshold algorithm triggered VNS on 64.50% (IQR = 25.38) of maximum movement. The dynamic algorithm triggered VNS on 97.61% (IQR = 0.88) of maximum movement.
Figure 6. Illustrative examples of exercise signals after processing by the dynamic algorithm. Each processed signal is labeled with its game title and exercise type. Puck movements were translated to rate-of-change signals. Swiping speed was extracted during gameplay and used as the main signal for Fruit Ninja. Red dots indicate movement instances that triggered VNS. The dynamic algorithm triggers a stimulation when movement crosses an active threshold, represented as a percentile of recent movement. The green curve represents a threshold that marks the 95th percentile of recent movement (last 3000 samples).
Figure 7. The dynamic algorithm selects larger movements than a trained observer. During upper limb physical therapy with RePlay and ReCheck, the dynamic algorithm triggered stimulation on movements that were 54.38 ± 2.97% larger than movements selected by a trained physical therapist (unpaired 2-tailed t-test, P = 1.77 × 10−74). We individually normalized the movement data by calculating the average paired peak size within ±1 second of periodic stimulations at 12-second intervals throughout the therapy session. The dynamic algorithm and the periodic algorithm were applied in post-hoc analysis and the manual stimulations were conducted in real time.
Interactive comment on "Analysis of the application of the optical method to the measurements of the water vapor content in the atmosphere – Part 1: Basic concepts – the measurements of the water vapor content in the atmosphere with the optical method"
1) There are many studies on the measurement of water vapor columns recently published, e.g., the FTIR instruments of P. Demoulin or R. Sussmann and the microwave radiometer of J. Morland. Since these articles are not mentioned by the authors, I conclude that they are not informed about the state of research. However, it is important that the authors relate their measurement technique to other measurement techniques in the introduction and maybe later in the trend analysis.
Introduction
Atmospheric water vapor is the most important trace gas in the atmosphere, since it plays the key role in its energy budget, the water cycle, cloud formation, and precipitation, as well as in the greenhouse properties of the atmosphere. Due to its large temporal and spatial variability, its observation still poses great challenges to experimentalists.
The optical method for measurements of the atmospheric water vapor content has already been used for almost a century (Fowle, 1912, 1913, 1915). In order to determine the total column water vapor content, the absorption caused by the water molecules is measured, using the Sun or a star as a light source. The inverse problem, i.e. the retrieval of the water vapor content from the measured value of the absorption, requires careful calibration. Although other methods for the observation of the atmospheric water vapor content were also introduced later (such as radiosondes, microwave radiometry, and GPS delay), the optical method has not lost its value. Numerous sun photometers designed for studying aerosol components of the atmosphere contain a channel aligned on water vapor absorption bands, which makes it possible to observe quantitatively the water column in the atmosphere (Michalsky et al., 1995; Halthore et al., 1997; Schmid et al., 1996; Ingold et al., 2000; Leiterer et al., 1998). Since 1995, Lindenberg Meteorological Observatory has routinely been monitoring the aerosol optical thickness at standard wavelengths using sun and star photometers, and the water vapor content has also been retrieved from these measurements (Leiterer et al., 1998, 2001; Alekseeva et al., 2001; Novikov et al., 2010). Analysis of the internal accuracy of these data is presented in Galkin et al. (2010).
Previously, the water vapor content in the atmosphere was observed at night-time by astronomical methods, in order to reveal and analyze variations in the atmospheric transparency in the regions of telluric water vapor bands during astronomical observations (Galkin and Arkharov, 1980, 1981; Alekseeva et al., 1983). When using a filter centered on a water vapor absorption band, the measured flux is affected by the absorption in the set of lines with different degrees of saturation and values of the parameters describing the absorption in the line. Therefore, the dependence of the absorption by water vapor on the water vapor content should be determined empirically. Generally, to describe the dependence of the optical thickness in the absorption band on the water vapor total column and pressure, the power function is used (Golubitskyi and Moskalenko, 1968; Moskalenko, 1968, 1969). A similar approach was used at Pulkovo Observatory, in the process of compilation of the Pulkovo Spectrophotometric Catalog, to retrieve extraterrestrial brightnesses of stars in the spectral regions affected by telluric contamination (Galkin and Arkharov, 1980, 1981; Alekseeva et al., 1997).
The necessary empirical spectral parameters of the power function were obtained with
Full the SF-68 spectrophotometer and the unique Pulkovo multipass vacuum cell VKM-100 within the interval of the water vapor content 0.3-5.0cm ppw (cm of precipitated water) (Alekseeva et al., 1994).On the basis of these data and individual spectral transmission curves for filters used in star and sun photometers, the empirical parameters can be calculated and subsequently used to determine the atmospheric water vapor content from observations with a particular instrument.
In recent publications, water vapor absorption spectra were calculated on the basis of radiative transfer models (e.g., LOWTRAN or MODTRAN), in order to obtain the empirical parameters of the power function (Michalsky et al., 1995; Halthore et al., 1997). Schmid et al. (1996) and Ingold et al. (2000) determined the empirical parameters from the comparison of photometrical data (obtained with a sun photometer) with the measurements of the atmospheric water vapor made with microwave radiometers or radiosondes. These empirical parameters differ noticeably from those calculated within the models.
While the overall uncertainty of the water vapor content obtained by the optical method is about 10% (Schmid et al., 1996; Ingold et al., 2000), the error of one photometric measurement itself is only about 0.5% (Galkin et al., 2010). The loss of accuracy during the water vapor retrieval procedure is first of all a problem of the theoretical or experimental way of defining the calibration dependence between the absorption for the given filter and the water vapor content along the line of sight.
Therefore, our goal was to check the reliability of the calibration on the basis of laboratory modeling for the absorption by atmospheric water vapor with the use of the VKM-100 multipass vacuum cell. In this cell, a variation of the absorption by water vapor can be accurately related to the variation of water vapor content along the line of sight, which is attained by varying the number of passages of the light through the cell. This makes it possible to study the form of the approximation for the relative calibration dependence on the water vapor content (in relative units of the number of passages), for various values of the pressure and temperature, with the accuracy ∼1%. To this end, numerous measurements were made with the Pulkovo cell to calibrate
Lindenberg's ROBAS-30 sun photometer, star photometer, and high-resolution ASP-12 spectrograph. It is possible to derive the absolute calibration from measurements of the humidity in the cell, which are currently made with polymer sensors of limited accuracy.
In addition, the calibration (i.e. the determination of the empirical parameters) was made on the basis of the Pulkovo Catalogue (Alekseeva et al., 1997), and also of calculated spectra taken from the MODTRAN-4 database. Further on, we used radiosonde data to calibrate our photometers. These methods are described in Sect. 3. The results obtained from these approaches are presented in Sect. 4 and discussed in Sect. 5. On the basis of these studies, we have developed some ideas to improve the accuracy of the photometric method, in order to fully explore the potential of this technique as an independent reference for determination of the atmospheric water vapor content.
The empirical approximation for the absorption in the water vapor spectrum
Since more than 90% of measurements made with photometers are carried out when the amount of the water vapor along the line of sight is within the interval 0.5-5.0 cm ppw, the absorption in this interval should be calculated or obtained experimentally. At a certain moment, the effective pressure and temperature for the atmospheric water vapor deviate from their average values by no more than 5%. In the optical method, the absorption in a certain interval of wavelengths is averaged by the filter or a slit of the spectrophotometer over the spectral lines which lie within the given interval. In addition to the absorption in multiple lines within the wavelength interval of the used filter, the observed signal value is also influenced by Rayleigh scattering and aerosol absorption. In the course of observations, all these factors are taken into account routinely; this procedure is described in detail in Alekseeva et al. (2001), Novikov et al. (2010), and Galkin et al. (2010).
According to the statistical model, the absorption in multiple spectral lines is given by the expression (Goody, 1964)

A = 1 − T = 1 − exp(−Σ_i W_i / ∆λ), (1)

where A is the absorption in multiple spectral lines, T the transmission, W_i the equivalent width of the i-th line, and ∆λ the wavelength interval. The expression (Eq. 1) makes it possible to calculate the absorption in multiple lines depending on the pressure, the temperature, and the amount of water vapor, provided the spectroscopic parameters of the individual lines are known. The expression (Eq. 1) does not yield the analytical dependence of the absorption on the number of absorbing water vapor molecules W, the pressure P, and the temperature of the vapor. More promising is the empirical approach to the determination of this dependence (at least from the physical parameters W and P) based on the approximation of the variations of the optical depth τ as a function of the water vapor content W and the pressure P by a power law:

τ = β W^µ P^n, (2)

where β, µ, n are empirical parameters. Note that Eq. (2) contains separate dependences on W and P, with different power indices. The temperature dependence of the optical depth can be included in the parameter β. However, the influence of the temperature on the transmission does not exceed 1-2% for the temperature interval in the atmosphere, and can therefore be neglected. In operations with star and sun photometers, star magnitudes m are commonly used, defined by the relation m = −2.5 log(I) + const, where I is the intensity of the optical star radiation (in W m−2). Therefore, the absorption by water vapor in terms of the star magnitudes is, according to Eq.
(2), ∆m = c W^µ, where c is the absorption by water vapor in star magnitudes for 1 cm ppw, and µ the dimensionless parameter describing the variation of the absorption with the water vapor concentration. The parameter c is constant for a given pressure. In real observations, in the first approximation it corresponds to the average effective pressure of the water vapor in the atmosphere, P_eff. (for Lindenberg, P_eff. = 0.845 atm = 856 hPa). Estimates show that P_eff. deviates from its average value by less than ±70 hPa, which corresponds to an expected maximum variation of c of ±4-5%. Therefore, the dependence of the parameter c on P_eff. should be taken into account only for a small number of abnormal cases. Schmid et al. (1996) and Ingold et al. (2000) used similar approximations for the dependence of the transmission on the amount of the water vapor and calculated the empirical parameters using MODTRAN. In these cases, the empirical parameters a and b are related to our parameters c and µ as a = c/1.086 and b = µ. In the literature, other approximations of the absorption in multiple spectral lines were also discussed, in the form of a combination of trigonometric functions or polynomials of different degrees. Some of them are reviewed in Golubitskyi and Moskalenko (1968) and Moskalenko (1968, 1969). However, in practice the approximation by the power function (Eq. 2) is preferred. This approximation successfully represents the dependence of absorption on the concentration of the absorbent, pressure, and temperature as a product of functions of these parameters. However, the disadvantage of the power approximation is that it insufficiently accurately represents the calculated or experimental dependence of absorption on the amount of the absorbent. Therefore, the values of the empirical parameters depend, in particular, on the interval of the contents of the water vapor within which the approximation is carried out. Thus, further studies are necessary to obtain a more accurate form of the approximation (Eq. 2); for
example, different parameters may be used in the expression (Eq. 2) for different intervals of water vapor contents, or another analytical form of the approximation may be searched for.
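As a numerical illustration of the power-law calibration in magnitude form, the retrieval simply inverts ∆m = c W^µ. This is a sketch of the arithmetic only; the parameter values used in the example are invented for illustration, not the calibrated Lindenberg values for any real filter.

```python
def absorption_mag(W, c, mu):
    """Water vapor absorption in stellar magnitudes, Delta m = c * W**mu,
    with W the water vapor column in cm of precipitable water (ppw)."""
    return c * W ** mu

def retrieve_W(delta_m, c, mu):
    """Invert the calibration to recover the column W (cm ppw) from a
    measured absorption delta_m; c and mu come from the calibration."""
    return (delta_m / c) ** (1.0 / mu)

# Hypothetical filter parameters (NOT calibrated values):
# c = 0.2 mag per (cm ppw)**mu, mu = 0.55
```

The round trip retrieve_W(absorption_mag(W, c, mu), c, mu) returns W exactly, which is the basic consistency check for any chosen (c, µ) pair.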
The water vapor absorption was studied with the use of the VKM-100 multipass vacuum cell, in which the system of mirrors was placed according to White's scheme (Galkin et al., 2004; White, 1942).
Figure 1a presents the general optical schematic diagram of the cell. The spherical mirrors A, B, and C with the radius of curvature 96.5 m are mounted so that the mirrors A and B form a consecutive set of images of the entrance slit on the mirror C. The mirror C reflects the mirror A onto the mirror B, and vice versa. The input objective O1, located in the plane of the entrance slit E, reflects the light source (restricted by the diaphragm S) onto the mirror A. The diaphragm S restricts the size of the light beam to the solid angle of the mirror A, thereby eliminating superfluous light scattering in the cell. The number of light passages varies due to variation of the relative position of the optical axes of the mirrors A and B and, hence, to variation of the number of images on the mirror C. We can see in Fig. 1a that the mirrors A and B should be adjusted so that in the upper row of images formed on the mirror C, an odd number of images is formed; given that, the last (even) image will be placed on the exit slit. In contrast to White's scheme, instead of the exit slit, the mirror D is introduced, which reflects the mirror B onto the output objective O2. Thereby, the system of mirrors A, B, and C makes it possible to obtain multiple passages of light, starting with the minimum number of passages equal to 4, and then increasing it by an integer factor. Thus, the images of the entrance slit appear in the exit window of the cell (behind the objective O2) after a number of passages equal to 5 (4+1), 9 (8+1), 13 (12+1), 17 (16+1), etc. The maximum number of passages is restricted by the number of images of the entrance slit which can be placed along the mirror C (for the VKM-100 cell, this number reaches a hundred images, which corresponds to the length of the path of 40 km). However, in practice, the maximum number of passages is substantially lower due to light losses on reflection, which vary as r^N, where r is the reflection index and N the
number of reflections. For the path length of 4100 m, the signal decreases by 6 star magnitudes (by a factor of 250), which corresponds to a reflection index of the mirrors of ∼89% (aluminum covering). Another reason limiting the maximum distance that light can pass in the cell is the diffusion of the entrance slit image with the increase in the number of reflections. This is due to insufficient quality of the surfaces of the mirrors, caused by difficulties with the testing of the curvature radius for mirrors with such small curvature. Improving the quality of mirror surfaces and using silver covering (with the reflection index 95-96%), one may substantially increase the maximum number of light passages and the corresponding interval of contents of water vapor along the line of sight.
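The reflection-loss figures quoted above can be checked with a short calculation. The sketch below assumes roughly one mirror reflection per 97.5 m passage of the cell, which is an approximation introduced here for illustration.

```python
import math

CELL_LENGTH_M = 97.5  # base length of the VKM-100 cell

def signal_loss_mag(path_m, r):
    """Attenuation in stellar magnitudes over a multipass path, assuming
    roughly one mirror reflection per passage (an approximation)."""
    n_reflections = path_m / CELL_LENGTH_M
    return -2.5 * math.log10(r ** n_reflections)

def reflectance_from_loss(path_m, loss_mag):
    """Mirror reflection index implied by an observed loss in magnitudes."""
    n_reflections = path_m / CELL_LENGTH_M
    return 10.0 ** (-loss_mag / (2.5 * n_reflections))
```

With these assumptions, a 6-magnitude loss over the 4100 m path implies r ≈ 0.88, consistent with the ∼89% aluminum reflection index quoted in the text.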
The length of the cell is 97.5 m; the minimum path length used for our measurements was 500 m. The measurements were also made with the path lengths 900, 1300, 1700, 2100, 2500, 2900, 3300, 3700, and 4100 m. Figure 1b presents the general structure of the experiment.
The amount of water vapor along the line of sight depends on the path length and the absolute humidity in the cell. The latter was measured by four polymer sensors connected with the control unit; the data obtained from the sensors were periodically logged and averaged. A detailed study of these sensors for various values of relative humidity, temperature, and pressure was carried out at Lindenberg Meteorological Observatory. Our sensors were calibrated to the standard humidity of saturated vapor above various salt solutions and also to the data obtained with TOROS reference devices used for measurements of humidity at the frost point and by Vaisala sensors that used the FN technique introduced at Lindenberg Observatory (Leiterer et al., 1997).
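Converting the sensor readings to a water vapor amount along the light path is a one-line unit conversion; the humidity value in the comment below is hypothetical, chosen only to illustrate the units.

```python
def column_cm_ppw(abs_humidity_g_m3, path_m):
    """Water vapor along the light path in cm of precipitable water:
    1 cm ppw = 1 g/cm^2 = 1e4 g/m^2 of column water vapor."""
    return abs_humidity_g_m3 * path_m / 1.0e4

# e.g. a (hypothetical) absolute humidity of 10 g/m^3 over the 2500 m
# path gives 2.5 cm ppw, inside the 0.5-5.0 cm ppw interval covered
# by most photometric observations
```

This also shows why the multipass geometry matters: at fixed humidity, the achievable water vapor interval scales linearly with the selectable path length.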
A comparison between our sensors and reference instruments in a climate chamber was carried out in Lindenberg by Galkin et al. (2006) and showed that the accuracy of the measurements of humidity in our cell was only 5-10%.
For several years, the calibration of star and sun photometers with the VKM-100 cell was made in accordance with the scheme in Fig. 1b. The water vapor content, as a rule, was determined by the polymer sensors. Some of the calibrations of the
photometers with the VKM-100 cell were accompanied by measurements made with the high-resolution ASP-12 spectrograph. The equivalent width of the water vapor absorption line at 694.3803 nm was determined (see Fig. 1c). The measurement of the equivalent width of this line makes it possible to determine the water vapor content in the cell at various pressures. These measurements made it possible to determine the water vapor content under conditions of low relative humidity (<30-40%), when the measurements with polymeric sensors were unreliable (Galkin et al., 2006).
Later on, ASP-12 and the sun photometer ROBAS-30, calibrated under the same conditions in the cell, were used for simultaneous determinations of atmospheric water vapor content made from observations of the Sun.The purpose of these observations was to compare the experimental data obtained by two optical methods for identical light paths in the atmosphere.The first instrument used an isolated absorption line, with the intensity independent of the temperature, while the second analyzed a set of lines with different intensities and temperature dependence.
The comparison between the two techniques of observations depends not only on specific features of the accepted methods, but also on imperfections of the used photometers.This is the reason why, to discriminate the sources of errors, we carried out our observations simultaneously.The results of the comparison of the photometers will be considered in detail in a separate study.Figure 2 presents the relative spectral transmission curves of the filters used in Lindenberg star and sun photometers, and the spectral distributions for the parameters c and µ in the region of 935 nm water vapor absorption band (Alekseeva et al., 1994).
Figure 2 displays a high degree of variability of the spectral parameters c and µ within the broad wavelengths interval of the sun filters.Using the data presented in Fig. 2, it is possible to calculate the parameters c and µ for any given filter.
Measurements of the intensity of light that passed through the cell were carried out with star and sun photometers (Fig. 1b). Figure 3 presents an example of such measurements made with the BAS-30 sun photometer, with the path length of 2500 m, through air with P = 0.9 atm and through an evacuated cell (P = 0.001 atm). The
ratio of intensities of the spectra observed with the filled and empty cell yields the water vapor transmission for the given path length. The transmission obtained for another path length indicates the variation of the transmission with the increase of the water vapor content along the line of sight (Fig. 4). In Fig. 4, the measured transmission (in star magnitudes) is presented as a function of the length of the light path in the cell (in units of the minimum path, 500 m). This illustrates the variation of the transmission in on-sky measurements with the increase of the zenith distance of the observed object.
The approximation of the data in Fig. 4 by a power function (Eq. 3) yields the values of the parameters c and µ with the standard deviations σ_c = 0.004 and σ_µ = 0.014.
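The power-law fit of Eq. (3) can be sketched as a linear least-squares fit in log-log coordinates, since Δm = c·W^µ implies log Δm = log c + µ·log W. The values c = 0.20 and µ = 0.55 used to generate the synthetic data below are assumptions for the example, not the paper's measured parameters.

```python
import numpy as np

# Fit dm = c * W**mu by linear regression of log(dm) on log(W).

def fit_power_law(W, dm):
    """Least-squares estimates of (c, mu) for dm = c * W**mu."""
    mu, log_c = np.polyfit(np.log(W), np.log(dm), 1)
    return np.exp(log_c), mu

W = np.arange(1.0, 9.3, 0.2)               # path in units of the 500 m minimum
rng = np.random.default_rng(0)
dm = 0.20 * W**0.55 * np.exp(rng.normal(0.0, 0.005, W.size))  # small noise

c_hat, mu_hat = fit_power_law(W, dm)       # recovers roughly (0.20, 0.55)
```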
The procedure of measurements of the parameters does not last longer than half an hour, which provides an opportunity to study the dependence of the empirical parameters on the conditions in the cell (the temperature, pressure, water vapor content, and path length).The error of determination of the parameter c is caused primarily by the error of the sensors used for the measurements of the absolute humidity.On the other hand, the given technique makes it possible to carry out further experiments in order to increase our level of knowledge about the absorption of water vapor and various forms of approximations for the absorption as a function of the water vapor content.
The results of determination of the empirical parameters with different methods
The study of absorption by water vapor under various physical conditions makes it possible to consider separately the dependence of absorption on the amount of absorbing substance, pressure, and temperature. The dependence of absorption on the amount of water vapor along the line of sight for constant pressure and temperature is established easily (by varying the number of passages of light in the cell); however, the variation of pressure or temperature for constant humidity presents more serious experimental problems.
The measurements of the parameters c and µ were carried out with the star photometer and the sun photometer BAS-30 for pressures ranging from 0.1 to 1 atm with a step of 0.1 atm. Table 1 presents the results obtained for one of the filters (948.0 nm) with the star photometer for various pressure values.
It follows from Table 1 that the parameter µ only weakly depends on pressure. This justifies the assumption of separate dependences of the absorption on pressure and concentration. Parameter c corresponds to the amount of water vapor in the minimum path length of 500 m. To recalculate the parameter c for 1 cm ppw, we used the readings of the humidity sensors, which resulted in an increase of the determination error for the parameter c. Table 1 also contains standard deviations for the determined parameters.
They are specified only by the accuracy of the photometric measurements, the stability of the source of radiation, and the adjustment of the optical scheme. Table 2 presents the results obtained for the other water vapor-centered filters of the star photometer and the BAS-20 and BAS-30 sun photometers, and laboratory data for the SF-68 spectrophotometer (taken from Alekseeva et al., 1994). Columns 1 and 2 contain the central wavelengths of the water filters and their widths; columns 3 and 4, the parameters c and µ obtained from direct measurements in the cell. All values of the parameter c were recalculated for 1 cm ppw and the pressure 0.845 atm, corresponding to the effective pressure of the water vapor in the atmosphere at sea level.
The columns 5 and 6 present c and µ calculated from the spectral transmission functions (according to Alekseeva et al., 1994) and from the transmission curve of the water vapor-centered filter of the photometers.The columns 7 and 8 contain c and µ calculated with MODTRAN-4.The last columns (9 and 10) present c and µ obtained from on-sky observations with sun and star photometers calibrated by radiosonde data.
In practice, parameters for Lindenberg sun photometers were determined as follows.
For a given photometer and for a particular water vapor-centered filter, the empirical parameters c and µ were derived from the laboratory spectra obtained at Pulkovo with the VKM-100 vacuum multipass cell and the SF-68 spectrophotometer within the range of water vapor contents 0.5-5.0 cm ppw along the line of sight (Alekseeva et al., 1994). Then the parameter µ (column 6) was used to derive the extraterrestrial magnitude m_0 of the Sun in the water vapor band and the parameter c from radiosonde and observational photometric data, for the time interval during which the calibration of the photometers did not vary. Figure 5 presents an example of the resulting calibration curve, based on the relation m_obs − (α_Ray + α_aer)·M = m_0 + c·[W_RS80·M]^µ. Here, m_obs is the measured signal from the Sun (in star magnitudes), α_Ray and α_aer are the Rayleigh and aerosol components of atmospheric extinction (in star magnitudes), W_RS80 is the water vapor content derived from radiosonde data (cm ppw), and M is the air mass. Figure 5 demonstrates that the 7154 individual measurements obtained closely match a linear dependence. For the other observational periods, the sun photometers ROBAS-20 and ROBAS-30 were calibrated in a similar way. For the star photometer, the procedure of determination of the parameters c and µ was slightly different; however, the basic principle of selecting the parameter µ on the basis of laboratory data and recalibrating the parameter c according to radiosonde data was maintained.
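A minimal sketch of this calibration, assuming the linear-in-[W·M]^µ form implied by Eq. (3): with µ fixed from laboratory data, a straight-line fit of the extinction-corrected signal against [W_RS80·M]^µ yields the extraterrestrial magnitude m_0 (intercept) and the parameter c (slope). All numerical values below are synthetic.

```python
import numpy as np

# Regress y = m_obs - (alpha_Ray + alpha_aer)*M on x = (W_RS80*M)**mu.

def calibrate(m_corr, W_rs, M, mu):
    """Return (m0, c) from a linear fit of corrected magnitudes vs (W*M)**mu."""
    x = (W_rs * M) ** mu
    c, m0 = np.polyfit(x, m_corr, 1)
    return m0, c

rng = np.random.default_rng(1)
M = rng.uniform(1.0, 3.0, 200)              # air masses
W = rng.uniform(0.5, 3.0, 200)              # zenith water vapor, cm ppw
m0_true, c_true, mu = -13.0, 0.20, 0.55     # assumed illustration values
y = m0_true + c_true * (W * M) ** mu + rng.normal(0.0, 0.005, 200)

m0_hat, c_hat = calibrate(y, W, M, mu)      # recovers roughly (-13.0, 0.20)
```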
The total volume of observational data obtained with the star and sun photometers from 1995 to 2008 was processed with the parameters determined as described above and is presented in Fig. 6. The results of the determination of the water vapor column contents by the optical method, and their comparison with those obtained with other techniques, will be discussed in more detail in another publication.
Discussion
Table 3 presents the values of the empirical parameters for some sun photometers taken from the literature. Since the wavelength for the measurement of the atmospheric water vapor content was established by WMO, and most photometers carry out measurements at this wavelength, the parameters obtained in different studies can be compared directly. The parameter b = µ essentially does not depend on the width and the shape of the transmission curve of the filter; therefore, provided the intervals of water vapor contents are close to each other, a direct comparison of different values of the parameter is possible. The parameter a = 0.921·c depends on the half-width and shape of the filter transmission curve, and also on the set of data used for the calibration.
The height of the point of observations above the sea level also affects the value of parameter a.
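The factor relating the two parameterizations follows from switching between magnitude and transmission form: an absorption of Δm = c·W^µ magnitudes corresponds to a transmission T = 10^(−0.4·Δm) = exp(−0.4·ln(10)·c·W^µ), so a = 0.4·ln(10)·c ≈ 0.921·c and b = µ. A one-line check:

```python
import math

# Convert magnitude-form parameters (c, mu) into exponential-transmission
# parameters (a, b), where T = exp(-a * W**b) and dm = c * W**mu.

def mag_to_exp_params(c: float, mu: float) -> tuple[float, float]:
    return 0.4 * math.log(10.0) * c, mu

a, b = mag_to_exp_params(1.0, 0.55)     # a is the 0.921... conversion factor

# Sanity check that both forms give the same transmission for some W:
W = 2.0
T_mag = 10.0 ** (-0.4 * 1.0 * W ** 0.55)
T_exp = math.exp(-a * W ** b)
```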
Figures 7 and 8 present the comparison between our determinations of the parameters c and µ (Alekseeva et al., 1994, and Table 2) and the data taken from other studies (Table 3). Taking into account that the parameters may depend on the spectral resolution used, the data in the figures are presented as a function of the half-width of the transmission curve of the filter. Tables 2 and 3 contain the values of the parameters obtained using different techniques with the same photometers, with photometers of different types, and even with different light sources (the Sun and stars). We present this comparison to demonstrate the consistency of these various types of data and, on the other hand, to find some trends in the variation of the parameters as a function of the half-width of the filter Δλ (nm). From the total volume of the data (Fig. 7), for the parameter µ we obtain: Figures 7 and 8 indicate that, in spite of the differences in the techniques used for the determination of the parameters, their consistency is fairly satisfactory. The determination errors for the water vapor contents obtained with the calculated parameters correspond to the standard deviations of water vapor contents measured in real observations, both in the present study and in the studies of Michasky et al. (1995), Halthore et al. (1997), Schmid et al. (1996), and Ingold et al. (2000). The photometric measurement error of 0.005 mag with the star photometer and 0.001 mag with the sun photometer corresponds to a potential uncertainty of the measured water vapor content of ∼1%.
In reality, the error of the determination of the water vapor content is 5-10% for both sun and star observations. In our opinion, the principal cause of the accuracy loss is the same in both cases: the use of expression (Eq. 3), [m−m_0] = c·W^µ, for the determination of the extraterrestrial star magnitudes and then of the water vapor content. For example, in Fig. 5 the straight line corresponds to the parameters c and µ obtained in the laboratory for the range of water vapor contents 0.5-5 cm ppw. The points represent real measurements with the sun photometer, obtained in Lindenberg during 1.5 years (7154 values). One can see from this figure that the postulated function describes the observed data well enough for the basic range of water vapor contents (1-9 cm ppw along the line of sight, or 1-3 for W^µ), where the overwhelming majority of points are located, and it corresponds to a determination error of 5-10%. At the same time, some deviations of the points from the straight line can be noted for small and very large water vapor contents, which testifies to the insufficient accuracy of the accepted approximation. Therefore, µ depends on the interval of water vapor contents and tends to decrease with an increase of the latter. The parameter c depends on pressure. The height distribution of water vapor in the atmosphere varies within a wide range. The variations in the water vapor distribution affect the effective pressure of water vapor and thereby specify the value of parameter c. To a larger extent, the parameter c depends on the interval of water vapor content for which the parameter was determined. All these factors should
be studied and subsequently included in the processing algorithm, in order to maintain the accuracy of 0.5% (already reached in photometric observations) and to decrease the error of the calibration of the water vapor content to closer to 1%. An additional analysis of the other sources of error is necessary to achieve approximately the same accuracy in natural photometric measurements (Rayleigh scattering, aerosol absorption, errors in the determination of extraterrestrial star magnitudes, stability of the instrumental photometric system, etc.). This was partially done (for older observations) in Galkin et al. (2010). We plan to return to this problem in our following "Paper II", devoted to the analysis of the data obtained in Lindenberg by various devices and methods in 1995-2007. A detailed examination of the humidity observed in the cell with the use of calibrated sensors showed that it is impossible to determine the integrated values of the water vapor content in the cell with an accuracy better than 5% using the sensors (Galkin et al., 2006). This is due both to the insufficient accuracy of the sensor readings and to the inhomogeneities of the water vapor content along the length of the cell, caused by the temperature gradient and local peculiarities. Further on, we plan to use a new thermohygrometer with four calibrated polymeric sensors for additional testing of the homogeneity of the water vapor content along the length of the cell. The data of this testing will make it possible to recalibrate the humidity scale obtained on the basis of Pulkovo spectroscopy by comparison with the standard Lindenberg humidity scale used for the calibration of radiosondes.
In order to calibrate the photometric measurements and determine the zero-point of the scale for the empirical parameter c, the total (integrated) water vapor content along the total optical path in the cell in absolute units (cm ppw) should be known. To this end, we suggest using the ASP-12 vacuum high-resolution spectrograph, with which it is possible to derive the water vapor content from the absorption in a separate narrow water vapor line (primarily, 694.3803 nm) with known half-width and intensity.
The absorption in an isolated spectral line is strictly related to the parameters of the line (its intensity, half-width, and shape) and to the physical conditions under which the line is formed (the concentration of the absorbing substance, the pressure
and temperature of the absorbing mixture). If the physical conditions under which the measurements are obtained are known, the line parameters can be determined. Conversely, if the line parameters are known, the measurements of absorption in the isolated spectroscopic line can be used for the determination of the physical conditions under which the line is formed. The growth curve method (the dependence of absorption on the line parameters and on the physical conditions under which the line is formed) is widely used in astrophysical studies of physical conditions in the atmospheres of the Earth, planets, and stars.
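A numerical sketch of a growth curve for a single line: the equivalent width EW(N) = ∫(1 − exp(−N·S·φ(ν))) dν grows linearly with the column N while the line is optically thin and saturates afterwards. The Gaussian profile and the values of S and the line width are arbitrary illustration choices, not the parameters of the 694.38 nm line.

```python
import numpy as np

# Equivalent width of a Gaussian line as a function of column density N,
# illustrating the linear (thin) and saturated parts of the growth curve.

def growth_curve(N, S=1.0, sigma=1.0):
    """EW for optical depth tau(nu) = N * S * phi(nu), phi a unit-area Gaussian."""
    nu = np.linspace(-10 * sigma, 10 * sigma, 4001)
    phi = np.exp(-0.5 * (nu / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    tau = N * S * phi
    return np.trapz(1.0 - np.exp(-tau), nu)

ew_thin = growth_curve(0.01)    # close to N*S = 0.01: optically thin regime
ew_sat  = growth_curve(100.0)   # far below N*S = 100: saturated regime
```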
In the 1970s, the water vapor line at 694.38 nm (the parameters of which were repeatedly determined at that time) was commonly used for the determination of the water vapor content in the Earth's atmosphere. The typical measurement accuracy for the intensity and half-width of the line was ∼10%.
In the last decade, extensive studies of absorption in water vapor bands at optical and IR wavelengths have been made (see references in the HITRAN 2008 database; Rothman et al., 2009). According to the database, there currently exist a number of lines for which the line parameters have been determined with an accuracy of 1%.
The method of determination of the water vapor content from the absorption in a separate narrow line was successfully used at Pulkovo for astrophysical purposes when the Pulkovo spectrophotometric star catalog was being composed (Alekseeva et al., 1994, 1997). However, in that case it was not necessary to know the real absolute water vapor contents in the cell with very high accuracy; only the relative homogeneity of the scale for the empirical parameter C was really important. As a result, when we tried to apply our previous laboratory spectral tables for the parameters C and µ (Alekseeva et al., 1994) to geophysical instruments (Lindenberg's star and sun photometers), it appeared that our scale of water vapor contents differed systematically from the radiosonde scale (possibly due to an incorrect zero-point of the scale for the parameter C). Therefore, we had to correct our water vapor content scale, recalculating the parameters C with the use of a large volume of interpolated radiosonde data for every year of photometric observations (as described above).
In order to transform our optical method of star and sun photometry into an independent reference method for the determination of the atmospheric water vapor contents, it is necessary to repeat at Pulkovo the series of spectral measurements for the determination of the parameters C and µ with the VKM-100-ASP12 laboratory complex, with a substantially higher accuracy. To this end, we are planning to introduce to this complex the new AvaSpec-3648TEC-USB2 laboratory fiber spectrometer.
In order to achieve the desired accuracy ∼1%, the optical connection between the exit window of the cell and the entrance window of ASP-12 should be provided with the use of a fiber optic cable (with the length ∼25 m).We are also planning to use an improved detection system in ASP-12, with the signal-to-noise ratio of the order of 10 3 .
For a more accurate determination of the relative humidity in the cell, a new mirror system should be mounted. Currently, with the reflection factor of the aluminum-coated mirrors less than 89%, on the 4100 m path the signal is attenuated by about 6 star magnitudes (a factor of 250). The optical quality of the mirrors is also insufficient. With a new mirror system (with silver coating), the number of light passes could be increased to extend the interval of the measured water vapor content.
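A back-of-the-envelope check of the quoted attenuation, under the assumption of roughly one mirror reflection per 97.5 m traversal (the actual optical layout may differ): reflectivity alone accounts for about 5 of the quoted ~6 magnitudes, with the remainder presumably from other optical losses (that attribution is our assumption).

```python
import math

# Loss in star magnitudes after n reflections off mirrors of reflectivity R:
# dm = -2.5 * log10(R**n).

def reflection_loss_mag(R: float, n_reflections: int) -> float:
    return -2.5 * n_reflections * math.log10(R)

n = round(4100 / 97.5)                  # ~42 traversals of the 97.5 m cell
loss = reflection_loss_mag(0.89, n)     # ~5.3 mag from reflectivity alone
factor = 10 ** (0.4 * loss)             # corresponding intensity ratio, ~130x
```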
The extension of the interval of the measurements, both with respect to line intensity and to the wavelength interval, will make it possible to involve more lines with various parameters in the measurements of the water vapor content in the cell.
Conclusions
Since 1995, sun and star photometers have been in operation at Lindenberg Meteorological Observatory (currently, Richard-Aßmann-Observatory); using these photometers, we have measured the aerosol optical thickness and atmospheric water vapor content. As a result, a unique database has been formed. To retrieve the water vapor content in the atmosphere from our measurements, we have developed an algorithm based on laboratory data obtained at Pulkovo Observatory with the VKM-100 multipass vacuum cell. Here, we present the empirical parameters that characterize the
We strongly believe that with the use of our integrated approach, i.e. the combination of laboratory modeling of the absorption, numerical modeling of the atmospheric absorption, and a detailed analysis of measurements made with different types of photometers, the needed accuracy can be attained.
where [m − m_0](W) is the absorption in a water vapor band (m and m_0 are the star magnitudes with and without the absorption, respectively), and c and µ are the empirical parameters that describe the absorption at a given pressure. In particular, c is the

2.2 The usage of the multipass vacuum cell for determination of the empirical parameters c and µ

Fig. 5. The calibration curve for the ROBAS-20 sun photometer, the 945.51 nm filter, and the observational period from April 2002 to August 2003. The observational values (m_obs − (α_Ray + α_aer)·M) are plotted as a function of the radiosonde data [W_RS80·M]^µ.
Fig. 2. The relative spectral transmission curves for filters of the Lindenberg's star (blue solid curve) and sun (red and green curves) photometers, and the spectral distributions of the parameters c(λ) (blue) and µ(λ) (magenta) in the region of 935 nm water vapor absorption band.
Table 1. The results obtained for one of the filters (948.0 nm) of the star photometer for various pressure values. | 2018-04-14T07:23:47.762Z | 2010-12-15T00:00:00.000 | {
"year": 2010,
"sha1": "6f125ed06a1df7485820280a1c99b3ce628746fb",
"oa_license": "CCBY",
"oa_url": "https://amt.copernicus.org/articles/4/843/2011/amt-4-843-2011.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6f55cfa243351795e49e89c55ea8b4432e0f8097",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
231936423 | pes2o/s2orc | v3-fos-license | Validation of the STEP deflation algorithm of the Midmark IQvitals Zone Vital Signs Monitor: part of a novel clinical ecosystem
Objectives Assess the accuracy of the Midmark IQvitals Zone Vital Signs Monitor STEP deflation algorithm according to the ANSI/AAMI/ISO 81060-2 Standard. Methods A total of 85 subjects completed the testing protocol. All standard requirements for gender, blood pressure (BP) values, and arm circumferences were met. Manual auscultation was performed by testers blinded to the device; the manual BP values were compared to the device readings. Results The Standard Criterion 1 data analyses showed mean ± SD device minus manual BP values of 1.22 ± 6.3 mmHg for SBP and −1.67 ± 6.09 mmHg for DBP. The SD values for Criterion 2 were 5.06 mmHg (SBP) and 4.98 mmHg (DBP). Conclusions The device passed all Standard requirements. The Midmark IQvitals Zone device has features to improve accuracy and reduce or eliminate transcription errors and inaccuracy from improper patient positioning.
Introduction
As the world phases out the use of mercury, diagnostic oscillometric-based automated sphygmomanometers have become the standard for both in-office and out-of-office blood pressure (BP) determination. A recent publication by Muntner et al. [1] stresses the need for precision in the evaluation of BP. The critical dividing lines for various diagnostic categories are at 120, 130, and 140 mmHg. Thus, the automated BP device must be able to achieve a high level of accuracy. The current worldwide standard for BP device validation is the ANSI/AAMI/ISO 81060-2 Standard [2]. The protocol for meeting the requirements of the Standard calls for testing of ≥85 subjects, with necessary goals for gender, BP levels, and arm sizes.
The goal of this study was to validate the Midmark IQvitals Zone Vital Signs Monitor (IQvitals Zone) STEP deflation algorithm to meet the Standard protocol requirements. A validated linear deflation algorithm also is available to provide an option for clinician preference. Innovative improvements to that algorithm are being finalized and will be validated in a future study. IQvitals Zone has novel features that ensure precision and provide added efficiency. These features include programmable automated BP determination without an operator present during the readings to help reduce the effects of white coat hypertension, automated transfer of BP data to the electronic medical record (EMR) via a Bluetooth Low Energy connection to eliminate transcription errors, and, when used in conjunction with the Midmark 626 Examination Chair, accomplishing American Heart Association (AHA) and American College of Cardiology (ACC) requirements for patient positioning [1].
Materials and methods
The IQvitals Zone was developed with a STEP deflation algorithm that utilizes advanced signal processing to monitor accurately and analyze the stability and quality of the baseline prior to extracting true pulse waves with enhanced accuracy of measuring the mean arterial pressure point. This algorithm allows for short reductions of pressure by increments of 10 mmHg, with a brief variable hold duration based on the patient's heart rate. The hold duration ranges from 1 s for rapid heart rates to approximately 2.5 s for slow heart rates at each pressure step until DBP is achieved, followed by rapid deflation.
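The paper states only the endpoints of the per-step hold duration (about 1 s at rapid heart rates, up to about 2.5 s at slow ones) and not the actual mapping, so the linear interpolation and the 40/180 bpm anchor points below are purely hypothetical illustration choices, not the device's algorithm.

```python
# Hypothetical sketch of a heart-rate-dependent hold duration for a STEP
# deflation: slow heart rates get longer holds so enough pulses land on each
# 10 mmHg pressure step. Anchors and linearity are our assumptions.

def hold_duration_s(heart_rate_bpm: float,
                    hr_slow: float = 40.0, hr_fast: float = 180.0,
                    t_slow: float = 2.5, t_fast: float = 1.0) -> float:
    hr = min(max(heart_rate_bpm, hr_slow), hr_fast)      # clamp to anchor range
    frac = (hr - hr_slow) / (hr_fast - hr_slow)
    return t_slow + frac * (t_fast - t_slow)

# hold_duration_s(40) -> 2.5 s, hold_duration_s(180) -> 1.0 s
```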
Validation testing for the IQvitals Zone STEP deflation algorithm was performed independent of manufacturer supervision by the staff at Clinimark, LLC in Louisville, Colorado, USA. For this study, 87 subjects were recruited and two were excluded because the observers could not hear the K sounds well enough to assign K1 and K5 values. The 85 subjects completing the study included 50 adults and 35 children per the testing requirements [2]. Subjects ranged in age from 3 to 77 years and arm circumferences were 15.5-44 cm. For this study, the same-arm sequential protocol was followed. Written informed consent from adults and assent from children (7-17 years of age) were obtained for all study subjects. The studies were approved by the Salus Institutional Review Board and were performed in late 2019 and into 2020.
Procedure
Subjects were seated in the Midmark 626 examination chair, which provides adjustments to ensure recommended positioning: feet flat on the floor, back supported, and the arm with the cuff applied supported at heart level. The testing room was quiet; no talking was allowed during BP acquisitions. The Standard protocol was strictly followed, with alternating readings performed by testing personnel doing simultaneous auscultation or IQvitals Zone device BP measurements. The personnel performing auscultation were blinded to the results of the device readings. The cuff for the auscultation method was a standard two-piece cloth cuff with a bladder diameter between 0.37 and 0.5 times the subject's arm circumference. IQvitals Zone cuffs of the recommended size were used for the automated readings.
Analyses
Each individual device reading was compared to the average of the two auscultation readings, one prior to and one following the device reading. Means ± SDs were calculated per both Criterion 1 and Criterion 2 of the Standard [2].
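The Criterion 1 comparison described above can be sketched as follows: each device reading is differenced against the mean of its two bracketing auscultatory readings, and the mean ± SD of those differences (and the derived Bland-Altman limits of agreement) are reported. The readings below are made-up numbers, not study data, and this sketch omits the per-subject averaging used in the Standard's Criterion 2.

```python
import statistics as st

# Device-minus-reference differences, where the reference for each device
# reading is the average of the auscultatory readings before and after it.

def criterion1(device, ausc_before, ausc_after):
    diffs = [d - (a + b) / 2 for d, a, b in zip(device, ausc_before, ausc_after)]
    return st.mean(diffs), st.stdev(diffs)

device      = [118, 124, 131, 140, 109, 122]
ausc_before = [117, 122, 133, 138, 110, 121]
ausc_after  = [119, 124, 131, 140, 108, 123]

mean_diff, sd_diff = criterion1(device, ausc_before, ausc_after)
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)  # Bland-Altman 95% limits
```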
Results
Data were analyzed per the Standard requirements [2] and expressed as the mean ± SD of the differences between the device and manual BP readings ( Table 1). The mean difference values approached zero, indicating a robust result. The Standard also requires Bland-Altman plots demonstrating the scatter of data points expressed in a different way ( Fig. 1a and b) for both SBP and DBP readings.
Discussion
As clinical guidelines evolve, requiring more precise estimation of BP, automated devices must be developed and validated to perform in compliance with these strict requirements [1]. In addition to the automated device itself, the clinical technique of individual assessments is critical [1]. To satisfy the AHA/ACC guidelines [1], patients must be seated and rested in a quiet room for 5 min, have their feet flat on the floor with their back supported, and have the arm with the cuff applied supported at heart level. Often in current medical care delivery, these guidelines are not followed. If, for instance, the patient is seated on a traditional examination table, none of the three requirements listed can be achieved. Midmark has developed an examination chair (Midmark 626 Barrier-Free Examination Chair) with adjustment capabilities incorporated to ensure that all three of these requirements are met. Use of the IQvitals Zone device in conjunction with the 626 examination chair forms a novel ecosystem that will help achieve the most accurate BP measurements.
Routine manual clinical office BP measurement introduces many factors that can reduce accuracy. A recommendation for Automated Office BP (AOBP) has been incorporated into the AHA/ACC report [1]. AOBP involves multiple readings and various averaging calculations, with the possibility of several protocols (e.g. the SPRINT BP protocol). During many situations in an office visit, when the patient is alone in the examination room, an automated device could perform AOBP readings without operator presence. The IQvitals Zone has six programmable protocol modes: Spot, Interval, Continuous, Averaging, SPRINT, and a Custom protocol allowing users to designate BP protocols with a user-specified number of readings and interval lengths.

Fig. 1. Bland-Altman plots of (a) SBP and (b) DBP measurements.
With the use of most current automated devices, BP readings are manually transcribed into the EMR, a process prone to human error. The IQvitals Zone utilizes Bluetooth Low Energy technology for confidential electronic transfer of BP readings into nearby computers used for that purpose. The data are stored securely and can be transferred directly into the correct patient's EMR via a protected software link. This feature saves time and eliminates transcription errors.
Conclusion
The Midmark IQvitals Zone automated oscillometric BP device has a STEP deflation algorithm meeting all requirements of both the ANSI/AAMI/ISO 81060-2 Standard and the British Hypertension Society protocol requirements [3], with an AA rating. Because of the rapid deflation rate, the time with the cuff inflated is reduced, thus increasing patient comfort. The IQvitals Zone is part of a connected ecosystem that can increase clinical efficiency, reduce errors made in patient positioning, and eliminate transcription errors. AOBP criteria can be programmed to help reduce the effects of white coat hypertension. Taken together, features of the Midmark IQvitals Zone automated vital signs device make it ideal for obtaining more accurate BP readings to improve healthcare for all patients.
"year": 2021,
"sha1": "0071da8afa0ac89bf9b8a6e6937d27adf3074023",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/bpmonitoring/Fulltext/2021/06000/Validation_of_the_STEP_deflation_algorithm_of_the.11.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "97cec8782ec7e14f3d75c99cc19051b7d52b98d0",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210838690 | pes2o/s2orc | v3-fos-license | Analytic Eigenbranches in the Semi-classical Limit
We consider a one parameter family of Laplacians on a closed manifold and study the semi-classical limit of its analytically parametrized eigenvalues. Our results establish a vector valued analogue of a theorem for scalar Schrödinger operators on Euclidean space by Luc Hillairet which applies to geometric operators like Witten’s Laplacian on differential forms.
We consider a one parameter family of operators, Δ_t = Δ + tA + t²V (1), where Δ is a selfadjoint (bounded from below) Laplacian and A, V ∈ Γ(end(E)) are smooth symmetric sections. This is a selfadjoint holomorphic family of type (A) in the sense of [9, Section VII §2]. According to the Kato-Rellich theorem, see [9, Theorem VII.3.9], its eigenvalues can be organized in analytic families referred to as analytic eigenbranches of Δ_t. More precisely, there exist eigenbranches λ_t and eigensections ψ_t, both analytic in t ∈ ℝ, such that Δ_t ψ_t = λ_t ψ_t and ‖ψ_t‖ = 1 (2).

Communicated by Daniel Aron Alpay.
This article is part of the topical collection "Spectral Theory and Operators in Mathematical Physics" edited by Jussi Behrndt, Fabrizio Colombo and Sergey Naboko.
Furthermore, it is possible to choose a sequence of analytic eigenbranches, λ_t^(k), and corresponding analytic eigensections, ψ_t^(k), such that at every time t, the sequence λ_t^(k) exhausts all of the spectrum of Δ_t, including multiplicities, and ψ_t^(k) forms a complete orthonormal basis of eigensections. The analytic parametrization of the spectrum, λ_t^(k), is unique up to renumbering. The eigensections, on the other hand, are by no means canonical, and it seems more natural to consider the spectral projections instead.
In this note we study the semi-classical limit of the analytic eigenbranches, i.e., the behavior of λ_t as t → ∞. We will show that t⁻²λ_t converges to a finite limit μ, see part (b) of the theorem below. Moreover, if the potential V is scalar valued, i.e., if V = v · id_E for a smooth function v, then μ has to be a critical value of v, see part (h) of the theorem below. These observations are analogous to a result of Luc Hillairet [8] who considered the scalar case on M = ℝⁿ with A = 0. While Hillairet's proof uses the invariance of semi-classical measures under the corresponding Hamiltonian flow, our approach is entirely elementary and does not make use of these concepts. The ideas entering into the proof, however, appear to be essentially the same. Notably, in order to show that μ has to be a critical value of V, we too rely on commutator computations. Avoiding semi-classical measures makes the generalization to the vector valued case considered here straightforward. As in Hillairet's argument, convergence of t⁻²λ_t follows from the fact that this quantity, suitably corrected due to the presence of A, is bounded and monotone, cf. (14) below. The fact that the Laplacian is semi-bounded enters crucially at this point.
Let us emphasize that analytically parametrized eigenbranches may cross and will in general not remain in the same order. Hence, the asymptotics of analytic eigenbranches might be quite different from the well understood asymptotics of the spectral distribution function, i.e., the asymptotics of the eigenvalues ordered increasingly. In particular, we cannot rule out the existence of analytic eigenbranches which do not correspond to any eigenvalue of the approximating harmonic oscillator associated with the deepest wells, cf. the concluding remarks at the end of this note.
The asymptotics of the spectral distribution function in the semi-classical limit has applications in quantum mechanics and geometric topology [4][5][6][7]. We merely mention Witten's influential paper [12] and proofs of the Cheeger-Müller theorem [1][2][3]. In these geometric applications, a Morse function f provides a deformation of the deRham differential, and the associated Witten Laplacian is a one parameter family of operators acting on differential forms, i.e., E = Λ*T*M, which is of the type considered here with scalar valued V = |df|². Hence, in this case the absolute minima of V coincide with the critical points of f.
Let us return to a general one parameter family of operators considered above, see (1), and an analytic eigenbranch as in (2). Subsequently, we will use the notation λ̇_t := ∂λ_t/∂t, ψ̇_t := ∂ψ_t/∂t, and Δ̇_t := ∂Δ_t/∂t. We have the following analogue of Theorem 1 in [8].
Theorem. For each analytic eigenbranch the following hold true: (e) The limit, μ, has the following interpretations: Furthermore, for each sequence t_n → ∞ such that, cf. (d), the following hold true: Hence, the eigensections ψ_{t_n} localize near μ: (h) If, moreover, the potential V is multiplication by a (scalar) function, then the eigensections ψ_{t_n} localize near the critical points of V. For every neighborhood Ũ of the critical set of V we have (9); in particular, μ has to be a critical value of V.
Proof. From (2) we obtain (10). Differentiating the second equation in (2) we get (11). Differentiating (10) and using the selfadjointness of Δ_t, this leads to (12). Combining this with (3) we obtain λ̇_t = ⟨ψ_t, Δ̇_t ψ_t⟩. Since A and V are bounded operators, this implies λ̇_t = O(t), whence (a).
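The elided first-variation computation is presumably the standard one. The following is a reconstruction, assuming the family has the form Δ_t = Δ + tA + t²V and the normalization ⟨ψ_t, ψ_t⟩ = 1:

```latex
\begin{aligned}
\dot\lambda_t
&= \tfrac{d}{dt}\,\langle \psi_t, \Delta_t \psi_t\rangle
 = \langle \dot\psi_t, \Delta_t\psi_t\rangle
 + \langle \psi_t, \dot\Delta_t\psi_t\rangle
 + \langle \psi_t, \Delta_t\dot\psi_t\rangle \\
&= \lambda_t\bigl(\langle \dot\psi_t, \psi_t\rangle
 + \langle \psi_t, \dot\psi_t\rangle\bigr)
 + \langle \psi_t, \dot\Delta_t\psi_t\rangle
 = \langle \psi_t, \dot\Delta_t\psi_t\rangle ,
\end{aligned}
```

since differentiating ⟨ψ_t, ψ_t⟩ = 1 kills the first bracket; with Δ̇_t = A + 2tV and A, V bounded, this yields λ̇_t = O(t).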
From (1) and (3) we immediately get (13). Combining this with (10) and (11) and using the boundedness of A, we obtain (14). Hence, as Δ is bounded from below, there exists a constant C such that, for sufficiently large t, the quantity t⁻²λ_t + Ct⁻¹ is monotone. In view of (a) it is bounded too. Whence t⁻²λ_t + Ct⁻¹ converges, as t → ∞. This immediately implies (b). Similarly, one can show (c) and (d): Rewriting (13) we get (15). As t⁻²λ_t is bounded, we must have (16), for otherwise t⁻²λ_t would diverge logarithmically. Moreover, (17) holds since Δ is bounded from below. Combining (15)-(17) we obtain (18). This completes the proof of (c) and (d); the statements on boundedness follow immediately from (a) and (15). To see (e) note that (1) and (10) give the first equality. Using (d) we obtain the second equality in (e). The third equality follows from (12). The last one is immediate. To see (f), note first that the case s = 1 is immediate from (5) since there exists a constant C such that ‖ψ‖²_{H¹(M)} ≤ C(⟨Δψ, ψ⟩ + ‖ψ‖²). Using the eigensection equation Δ_t ψ_t = λ_t ψ_t and (a), one obtains constants C_s which permit establishing (f) inductively for all odd integers s ∈ ℕ. The even case can be reduced to the odd one using an estimate which readily follows from the Cauchy-Schwarz inequality.
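The elided monotonicity estimate can plausibly be reconstructed as follows, again assuming Δ_t = Δ + tA + t²V, the lower bound Δ ≥ -c with c ≥ 0, and the first-variation formula λ̇_t = ⟨ψ_t, Δ̇_t ψ_t⟩:

```latex
\frac{d}{dt}\bigl(t^{-2}\lambda_t\bigr)
 = t^{-2}\dot\lambda_t - 2t^{-3}\lambda_t
 = -\,t^{-2}\langle A\psi_t,\psi_t\rangle
   - 2t^{-3}\langle \Delta\psi_t,\psi_t\rangle
 \le t^{-2}\|A\| + 2c\,t^{-3},
```

so with C := ‖A‖ + 2c the quantity t⁻²λ_t + Ct⁻¹ is non-increasing for t ≥ 1; being bounded, it converges.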
Using the estimate in (f) for s = 2 and the eigensection equation, (6) follows. As V − μ is invertible over the compact set M∖U, there exists a constant c > 0 such that |V − μ| ≥ c on M∖U. Combining this with (6), we obtain (7). Let us now turn to the proof of (h). Suppose D is a first order differential operator acting on sections of E. Since V is scalar valued, the commutator σ := [D, V] = DV − VD is a differential operator of order zero, namely the principal symbol, σ = σ_D(dV) ∈ Γ(end(E)). As Δ_t is a Laplacian, the commutator [D, Δ_t] is a differential operator of order at most two. Using the Cauchy-Schwarz inequality and (f), we thus obtain (18). Using Δ_t ψ_t = λ_t ψ_t one readily checks (19). Since V is scalar valued we have [σ, V] = 0, and [σ, Δ_t] is a differential operator of order at most one. Proceeding as above, the Cauchy-Schwarz inequality and (f) yield a similar estimate. Combining the latter with (18) and (19), we arrive at (20). Specializing to D = ∇_X, where ∇ is some linear connection on E and X is a vector field on M, we obtain σ = X · V = dV(X). Choosing X = grad(V) with respect to some auxiliary Riemannian metric on M, we have σ = |dV|², and (20) becomes (8).
On M∖Ũ we have 0 < c̃ ≤ |dV|² for some constant c̃, and thus (8) implies (9). Combining the latter with (g), we see that μ has to be a critical value of V. This completes the proof of the theorem.
Let us mention that many of the preceding statements remain true for continuously parametrized eigenbranches. In particular, the limit in (4) exists for every continuous eigenbranch λ t , and this limit has to be a critical value of V , provided V is scalar. Since continuous eigenbranches are piecewise analytic, this can readily be derived by observing that the constants implicit in (a) are uniform for all analytic eigenbranches.
Concluding remarks
At the very end of Section 3 in [8], Hillairet points out that for M = ℝ¹ and nondegenerate minima, the limit μ has to be the absolute minimum of V. Indeed, in this situation the spectrum is known to be simple, hence the analytic eigenbranches cannot cross and remain in the same order: λ_t^(1) < λ_t^(2) < ··· The semi-classical asymptotics of the k-th eigenvalue, however, is governed by the deepest wells.
For the operators considered in the theorem above we propose the following conjecture. As t → ∞, the following statements are equivalent: for one (and then all) real α = 1.
Better estimates are available if the geometry is flat near the minima, cf. [10, Equation (1.15)]. Under additional assumptions [11] one even obtains a full asymptotic expansion in terms of integral powers of t.
For every eigenvalue ω of the harmonic oscillator associated with the minima of V, there exists an analytic eigenbranch for which (22) holds true with μ the absolute minimum of V. Moreover, the number of these eigenbranches coincides with the multiplicity of ω. However, it is unclear, if these exhaust all analytic eigenbranches. Hence, an intriguing problem remains open: Are there analytic eigenbranches with different asymptotics which are not governed by the approximating harmonic oscillator associated with the deepest wells?
Sex Differences in Treatment Quality of Self-Managed Oral Anticoagulant Therapy: 6,900 Patient-Years of Follow-Up
Background Patient self-management (PSM) of oral anticoagulant therapy with vitamin K antagonists has demonstrated efficacy in randomized, controlled trials. However, the effectiveness and efficacy of PSM in clinical practice, and whether outcomes differ between females and males, have been sparsely investigated. The objective is to evaluate the sex-dependent effectiveness of PSM of oral anticoagulant therapy in everyday clinical practice. Methods All patients performing PSM affiliated to Aarhus University Hospital and Aalborg University Hospital, Denmark in the period 1996-2012 were included in a case-series study. The effectiveness was estimated using the following parameters: stroke, systemic embolism, major bleeding, intracranial bleeding, gastrointestinal bleeding, death and time spent in the therapeutic international normalized ratio (INR) target range. Prospectively registered patient data were obtained from two databases in the two hospitals. Cross-linkage between the databases and national registries provided detailed information on the incidence of death, bleeding and thromboembolism on an individual level. Results A total of 2,068 patients were included, representing 6,900 patient-years in total. Males achieved a significantly better therapeutic INR control than females; females spent 71.1% of the time within the therapeutic INR target range, whereas males spent 76.4% (p<0.0001). Importantly, death, bleeding and thromboembolism were not significantly different between females and males. Conclusions Among patients treated with self-managed oral anticoagulant therapy, males achieve a higher effectiveness than females in terms of time spent in therapeutic INR range, but the incidence of major complications is low and similar in both sexes.
Introduction
Oral anticoagulant therapy (OAT) with vitamin K antagonists (VKA), e.g. warfarin or phenprocoumon, remains the mainstay for preventing thromboembolism in a variety of clinical conditions. Mechanical heart valves, atrial fibrillation and recurrent venous thromboembolism are the most frequent clinical indications for long-term treatment [1]. The recent approval of new oral anticoagulant drugs (e.g. dabigatran, apixaban and rivaroxaban) for patients with atrial fibrillation has increased the number of treatment options for this patient group; however, VKA remains the cornerstone of OAT [2]. Furthermore, the investigation of the efficacy and safety of dabigatran in mechanical heart valve patients was terminated prematurely due to an excess of adverse events in the dabigatran group, leaving this patient group entirely dependent on VKA [3]. The new oral anticoagulant drugs cannot be used in patients with renal impairment [4].
VKA impedes coagulation and consequently increases the risk of bleeding; hence, meticulous monitoring of coagulation time, measured using the International Normalized Ratio (INR), and appropriate dosage adjustments are mandatory for patients prescribed VKA [1,5]. General practitioners and hospital departments generally perform conventional management of VKA therapy, but the risk of major complications continues to cause concern [6]. Patient self-management (PSM) of OAT is a concept empowering trained patients to monitor and adjust their treatment in home settings [1,7]. Randomized, controlled trials (RCT) have demonstrated the efficacy and safety of PSM, with self-managed patients achieving a significant reduction in major thromboembolism compared to conventionally monitored patients [8,9]. The risk of thromboembolism can also be halved without a concomitant significant increase in mortality or bleeding [8][9][10]. The advantages of PSM in RCT represent the efficacy of PSM under ideal circumstances. Inclusion into an RCT is a distortion of usual practice; hence, benefits shown in clinical trials might not translate into everyday clinical practice. Population-based studies evaluating clinical events are crucial for obtaining results generalizable to the general population [11,12]. A limited number of studies have evaluated the effectiveness of PSM in clinical practice, and these follow-up studies indicate an advantage of PSM compared to conventional management [12][13][14][15][16]. However, all these studies, apart from [12], were small and/or only used surrogate endpoints.
Sex-related differences are found regarding the risk of thromboembolism and death among patients with atrial fibrillation [17]. In addition, a meta-analysis suggests that the efficacy of PSM may be sex-dependent, with males benefiting the most [10]. When compared to conventional care, males performing PSM achieve a significant reduction in thromboembolism, whereas females do not. Further investigations are important, as a considerable difference in a real-life setting may impact the approach to the educational program, which is mandatory for patients desiring to commence PSM. Therefore, we found it interesting to investigate whether differences in the quality of warfarin treatment in such patients (as reflected by TTR) exist.

Competing Interests: Erik Lerkevang Grove has received speaker honoraria from AstraZeneca, Baxter, Bayer, Boehringer Ingelheim, Pfizer and Sysmex and serves on advisory boards for AstraZeneca, Bayer, and Bristol-Myers Squibb. Torben Bjerregaard Larsen has served as an investigator for Janssen Scientific Affairs, LLC and Boehringer Ingelheim and has been on the speaker bureaus for Bayer, BMS/Pfizer, Roche Diagnostics, Takeda and Boehringer Ingelheim. Thomas Decker Christensen has been on the speaker bureaus for AstraZeneca, Boehringer Ingelheim, Pfizer, Takeda and Bristol-Myers Squibb, and has been a member of an advisory board for Bristol-Myers Squibb. Other authors: none declared. No other relationships are present. The project was funded by a non-restricted grant from Takeda, Denmark, and this does not alter the authors' adherence to PLOS ONE policies on sharing data and materials.
The aim of this study was to evaluate the sex-dependent performance of selfmanaged OAT assessed by major bleeding, intracranial bleeding, gastrointestinal bleeding, stroke, systemic embolism, death and time spent within therapeutic INR target range (TTR).
Study design
A case-series study was conducted at two Danish centers; Center of Self-Managed Oral Anticoagulation, Department of CardioThoracic and Vascular Surgery, Aarhus University Hospital and Center of Thrombosis, Aalborg University Hospital.
The study was approved by the Danish Data Protection Agency (ref. 2012-41-0633). Ethical approval is not required for register-based studies in Denmark.
Consent from patients is not required according to Danish law and was therefore not obtained. Patient records/information was not anonymized in the databases.
Study population
Out of approximately 3 million inhabitants in Western Denmark, an estimated 30,000 persons are prescribed VKA. General practitioners or hospital departments in Denmark referred potentially eligible patients to Aarhus University Hospital in the period between the 1st of June 1996 and the 30th of June 2012, or to Aalborg University Hospital in the period between the 1st of April 2008 and the 31st of December 2012. All patients were required to attend an educational programme containing a minimum of three teaching lessons, covering basic theoretical and practical skills including use of a coagulometer, interpretation of INR values, and VKA dosing. Over a period ranging from 3 to 27 weeks, patients gradually became self-managed. Finally, the patients were requested to demonstrate their skills in a multiple choice exam. The training scheme currently used is shown in Table 1.
The only inclusion criterion for this study was that patients could be regarded as capable of self-managing their OAT, and this criterion was met when the patient successfully passed the final exam. Patients regretting their decision to undertake PSM before passing the exam were not considered self-managed and were excluded from this study. Patients discontinuing PSM after passing the exam were excluded if fewer than two INR measurements were reported or the time lapse between the first and second INR measurement exceeded 6 weeks. Disabled patients and/or patients under the age of 15 could become self-managed if a caregiver or parent participated in the educational program and exam. It should be emphasized that the parent will always be involved to some extent. When we refer to the patient, therefore, this implicitly includes the parental involvement.
The therapeutic INR target range was 2-3 for the majority of indications, such as atrial fibrillation, venous thromboembolism, thrombophilia or mechanical aortic valves, and 2.5-3.5 for mechanical mitral valves. All patients used the portable CoaguChek, CoaguChek S or CoaguChek XS coagulometer (Roche Diagnostics, Switzerland) equipped with CoaguChek PT-test strips. The patients were requested to measure their INR value once a week and report all INR data to their affiliated center.
Outcome measures
The quality of OAT managed by PSM was assessed using both clinical and surrogate outcome measures. The latter included: TTR, the percentage of INR measurements within the target range, and the variance (standard deviation squared, SD²) of the INR values. The clinical outcome measures were major bleeding, intracranial bleeding, gastrointestinal bleeding, stroke, systemic embolism and death.
The definition of major bleeding was: acute posthaemorrhagic anaemia, haemothorax, recurrent and persistent haematuria, menopausal and other perimenopausal bleedings, haemorrhage from respiratory passages, unspecified haematuria and haemorrhage not classified elsewhere. Intracranial bleeding included: subarachnoid, intracerebral and epidural haemorrhage, other nontraumatic intracranial haemorrhage, focal brain injury and traumatic subdural or subarachnoid haemorrhage. Gastrointestinal bleeding was defined by: gastric, duodenal, peptic and gastrojejunal ulcer, gastritis and duodenitis. Stroke included cerebral infarction and stroke not specified as haemorrhage or infarction. Systemic embolism included arterial embolism and thrombosis. Death was defined as all-cause death and was therefore not limited to cardiovascular deaths. The observation time began on the date of the first registered INR value and terminated on the date of the first clinical outcome in each outcome category, or the last registered INR value, whichever came first. Deaths recorded within 6 weeks after the last INR measurement were included in the analysis. Patients were considered to have discontinued PSM if the time lapse between two INR values exceeded 6 weeks, and person-time was censored in that case.
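The observation-time rule above (follow-up from the first INR value, stopping at the first event or the last INR value, with censoring after a gap exceeding 6 weeks) can be sketched as follows. This is an illustrative reconstruction; function and variable names are not from the study's code.

```python
from datetime import date, timedelta

MAX_GAP = timedelta(weeks=6)  # censoring threshold between INR values


def follow_up_days(inr_dates, event_date=None):
    """Observation time in days: from the first INR date until the first
    clinical event or the last INR value before a gap exceeding 6 weeks,
    whichever comes first."""
    if not inr_dates:
        return 0
    end = inr_dates[0]
    for prev, cur in zip(inr_dates, inr_dates[1:]):
        if cur - prev > MAX_GAP:
            break  # censor at the last INR value before the gap
        end = cur
    if event_date is not None and event_date < end:
        end = event_date
    return (end - inr_dates[0]).days
```

For example, weekly measurements with a later gap of more than 6 weeks are censored at the last measurement before the gap.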
Data collection
Data was obtained from nationwide registers, medical records and local databases at Aarhus University Hospital and Aalborg University Hospital.
Trained nurses, or patients using an online system developed for OAT, have prospectively entered INR treatment data into the two databases, and aggregated data were retrieved and collated into a spreadsheet (Microsoft Excel, Microsoft Corp., Redmond, WA, USA). Data originating from medical records were entered in EPIDATA (Epidata software version 3.1, Epidata Association, Denmark) before conversion into the spreadsheet. The collected information included: civil registration number, sex, clinical indication for anticoagulant therapy, VKA history, and date of the exam.
Finally, the compiled data was linked to three nationwide registers. The Danish National Patient Register was used for obtaining data on co-morbidity, thromboembolism and bleeding events at baseline and in the follow-up period. This register contains information on all hospital admissions in Denmark since 1977, hospital dates of admission and discharge, surgical procedures, and up to 20 discharge diagnoses coded by physicians according to the International Classification of Diseases (ICD) [18]. Data on death was obtained from the Danish Civil Registration System, which contains records of date of birth, emigration, and death for all Danish residents [19]. Data on comorbid medications was obtained from the Danish National Prescription Registry using the Anatomical Therapeutic Chemical (ATC) Classification. This registry contains information on all prescription drugs sold in Denmark since 1994 [20].
Statistics
Characteristics of patients at baseline are presented as proportions for discrete variables and means (SD) for continuous variables. Co-morbidities were calculated according to Charlson's co-morbidity index and classified into three groups (low, medium or high co-morbidity) depending on the index value [21]. An index value of 0 was classified as low, whereas values of 1-2 or >2 were classified as medium or high, respectively.
All aberrant therapeutic INR ranges were standardized to the predominant individual target ranges, 2-3 and 2.5-3.5. Thus, INR target ranges within 1.5-2.4 and 2.5-4.2 were standardized to 2-3 and 2.5-3.5, respectively. TTR was calculated according to Rosendaal's method [22]. The percentage of INR measurements within range was calculated for each patient, and the mean percentage is presented. The variance of INR values was calculated as the mean of all patients' intra-patient variances.
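Rosendaal's method assumes the INR changes linearly between consecutive measurements and credits each day of follow-up accordingly. A minimal sketch of the idea (illustrative, not the study's implementation):

```python
from datetime import date


def rosendaal_ttr(measurements, low=2.0, high=3.0):
    """Time in therapeutic range by Rosendaal linear interpolation.

    measurements: list of (date, inr) tuples, sorted by date.
    Returns the fraction of follow-up days with interpolated INR
    inside [low, high].
    """
    in_range_days = 0.0
    total_days = 0.0
    for (d0, i0), (d1, i1) in zip(measurements, measurements[1:]):
        days = (d1 - d0).days
        if days == 0:
            continue
        total_days += days
        if i0 == i1:
            in_range_days += days if low <= i0 <= high else 0.0
            continue
        # INR is interpolated linearly between the two measurements;
        # the time in range is proportional to the overlap of the
        # traversed INR interval with the target interval.
        lo_i, hi_i = sorted((i0, i1))
        overlap = max(0.0, min(hi_i, high) - max(lo_i, low))
        in_range_days += days * overlap / (hi_i - lo_i)
    return in_range_days / total_days if total_days else 0.0
```

For instance, an INR rising linearly from 1.5 to 2.5 over ten days crosses the lower limit of 2.0 halfway through, so half of those days count as in range.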
Clinical endpoints are described using incidence rates and hazard rate ratios between females and males. A Cox proportional-hazards model was used for estimating the effect of sex for each outcome. A supplementary adjusted analysis was performed to evaluate the effect of sex conditional on comparable baseline characteristics. Exact 95% confidence intervals (CI) were used, and a two-sided P-value <0.05 was considered statistically significant. All analyses were performed using SAS software, version 9.3 (SAS Institute), and STATA software, version 12 (StataCorp LP, TX, USA).
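The crude incidence rates and the female-to-male rate ratio underlying these comparisons can be illustrated with a minimal sketch. The numbers below are hypothetical; the study's actual estimates came from Cox models fitted in SAS/Stata.

```python
def incidence_rate(events, patient_years, per=100.0):
    """Crude incidence rate per `per` patient-years of follow-up."""
    return per * events / patient_years


def rate_ratio(events_f, py_f, events_m, py_m):
    """Unadjusted female-to-male incidence rate ratio."""
    return (events_f / py_f) / (events_m / py_m)


# Hypothetical example: 6 events in 2,300 patient-years (females)
# vs. 12 events in 4,600 patient-years (males).
female_rate = incidence_rate(6, 2300)
ratio = rate_ratio(6, 2300, 12, 4600)
```

A rate ratio of 1.0 indicates identical crude event rates in the two groups; confidence intervals and adjustment for baseline covariates require a survival model such as Cox regression.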
Study population
During the period from the 1st of June 1996 to the 31st of December 2012, a total of 2,186 self-managed patients were identified at Aarhus University Hospital (1,405 patients) and Aalborg University Hospital (781 patients). A total of 118 patients were excluded for two reasons: firstly, 100 patients discontinued PSM with fewer than 2 registered INR measurements or a time lapse exceeding 6 weeks between the first and the second INR measurement. Secondly, 18 patients currently affiliated to Aalborg University Hospital were identified with a previously terminated PSM treatment course at Aarhus University Hospital. After the exclusions, the study cohort comprised 2,068 patients spanning 6,900 patient-years, including 1,383 patients affiliated to Aarhus University Hospital and 685 patients affiliated to Aalborg University Hospital.
A flow chart of the study patients is displayed in Figure 1. Patient demographics and baseline characteristics are summarized in Table 2. The ratio of females to males was one to two (698 women vs. 1,370 men), and significant differences were seen in age, indication and duration of prior VKA therapy. However, co-morbidities at baseline and previous complications did not differ.
Therapeutic INR control
A full report of INR values could be obtained for all patients affiliated to Aarhus University Hospital. No INR data were available at Aalborg University Hospital prior to 2010, due to the implementation of a new database that year. Therefore, the obtained INR measurements, and consequently the observation time for patients affiliated to Aalborg, began at the onset of PSM or on the 1st of January 2011 at the earliest. The total number of INR measurements was 354,045.
The achieved therapeutic INR control is provided in Table 3. TTR was 71.1% and 76.4% for females and males, respectively (p<0.0001). The percentage of INR measurements within the target range was 67.7% for females and 72.6% for males (p<0.0001). The across-patient mean intra-patient variance of INR measurements was 0.39 for females and 0.30 for males (p<0.0001).
Clinical outcome measures
The number of events and incidence rates of complications are shown in Table 4. The hazard rate ratios between females and males are shown in Figure 2. Females experienced more complications in terms of thromboembolic events, whereas males experienced more intracranial bleeding events. However, no significant differences between females and males were seen for the clinical outcome measures.
Discussion
This is the first study comparing the sex-dependent effectiveness of OAT managed as PSM in everyday clinical practice. The main result is that males have a significantly higher TTR than females, whereas the incidence of death and major complications was low and essentially similar in both sexes. We found no significant difference in outcome between the crude and the adjusted analyses.
The finding of males having a higher TTR than females might be attributable to the significant differences in the baseline distribution of patient characteristics. There was a significant sex difference in the clinical indications for therapy: the vast majority of the mechanical heart valve patients were males, whereas most patients with venous thromboembolism were females. As the efficacy of PSM varies with the clinical indication for therapy and mechanical heart valve patients appear to benefit most, this may be a contributing cause of the results favouring males [8,10]. In contrast, females were significantly younger at baseline, thus representing an advantage, since there is an association between age and clinical outcome [10]. TTR is a predictor of the clinical outcome, but it is merely a surrogate parameter and may not reflect the risk of clinical complications sufficiently [23,24]. This may explain the non-significant findings in clinical outcome.
Little information regarding sex-dependent outcomes is available. Importantly, the present results are in accordance with a meta-analysis which found a small but significant sex difference in clinical outcome, favoring males [10]. Because the meta-analysis did not report any INR data, comparing the level of therapeutic control between the sexes is not possible. The overrepresentation of male subjects in the current study is noteworthy and consistent with previous studies on PSM, reporting up to fourfold more males than females [10,25,26]. The reasons for this imbalance are unknown; in comparison, the ratio of males to females receiving VKA in the Danish population is more evenly divided, about 60% to 40% (www.medstat.dk). General practitioners and hospital departments referred all enrolled patients, and it cannot be determined whether females are reluctant to adopt PSM, or whether females are not offered PSM to the same extent as males.
Since the definitions of major complications are inconsistent, a comparison of incidence rates between trials is difficult, but the low incidence rates of this study are in line with previous studies, reporting incidence rates of major thromboembolism and major bleeding of 0-1.6 and 0.6-1.3 per 100 patient-years, respectively [12,14,27]. Further, we did not have data on complications not requiring hospital care, which may contribute to underreporting of events in this study. The all-cause mortality was low, with 0.50 and 0.53 deaths per 100 patient-years in females and males, respectively. The young population with little co-morbidity may have affected the results in a positive direction. The level of therapeutic control observed for both sexes in the current study matches well with that of RCT, with self-managed patients achieving a TTR of 64-79% and keeping 59-68% of their INR determinations within range [25][26][27][28]. This suggests that the level of therapeutic control demonstrated in RCT can be maintained outside trial conditions. Moreover, the present results are concordant with small studies on the effectiveness of PSM [13][14][15][16]. Importantly, the anticoagulant control in both conventional care and PSM varies extensively among study populations, and comparisons should therefore be interpreted with caution. Nevertheless, the low incidence rates of adverse events and the high level of INR control in the present study indicate that the efficacy of PSM achieved in RCT can be translated into everyday clinical practice, showing that effectiveness and efficacy coincide. This has also been found by Nagler et al. [12]. Our study has some limitations that should be acknowledged. The low age and relatively low co-morbidity in our study population may reflect a pre-selection of patients. Furthermore, data are lacking on the number of patients who regretted their choice to perform PSM before passing the exam.
Another major limitation is caused by the implementation of a new database at Aalborg University Hospital on the 1st of January 2011, which restricts the follow-up time. The implication of this is that the follow-up time does not cover the entire treatment course starting from the onset of PSM in Aalborg. However, it would not be expected that this lack of data affects the sex-dependent results. At most, it may affect the incidence of complications if the risk of complications is higher at the onset of PSM. A multivariable adjustment model was included. However, we emphasise that this model estimates sex differences assuming males and females are alike on the included controlling factors. The main focus of the current study was, however, the possible overall sex difference in performance of PSM.
The primary strength of this study is the design, where local databases provided the treatment data and nationwide registries provided independent information on the clinical outcomes. The Danish civil registration number assigned to all Danish citizens and residents enabled an unambiguous linkage on an individual level. The two-center design, the unselected cohort of patients performing PSM, and the long-term follow-up with 6,900 patient-years of experience are also important strengths. A high external validity of our data is expected. All hospital care and use of equipment, including the coagulometers and strips, are free of charge for Danish patients, which might reduce the impact of socioeconomic factors.
The future of VKA is currently under discussion, since the new oral anticoagulants (e.g., dabigatran, apixaban, rivaroxaban and edoxaban) offer the potential advantage of requiring no monitoring of INR values. Regarding efficacy and safety, the new oral anticoagulants compare well with VKA managed as conventional care [29][30][31]. Whether these new drugs can match VKA treatment managed as PSM remains to be clarified. In this study, we have demonstrated that the quality of care in self-management is high in a daily clinical setting, and the new types of treatment will have to match these results in order to replace VKA in patients able to perform PSM.
In conclusion, patient self-management of oral anticoagulant therapy outside trial conditions is clinically effective for both females and males, and results in a high TTR and a low incidence of death and major complications. Males can achieve a higher effectiveness than females in terms of TTR, but the incidence of clinical complications is similar in both sexes.
Contractual Efficiency of PPP Infrastructure Projects: An Incomplete Contract Model
This study analyses the contractual efficiency of public-private partnership (PPP) infrastructure projects, with a focus on two financial aspects: the nonrecourse principal and incompleteness of debt contracts. The nonrecourse principal releases the sponsoring companies from the debt contract when the special purpose vehicle (SPV) established by the sponsoring companies falls into default. Consequently, all obligations under the debt contract are limited to the liability of the SPV following its default. Because the debt contract is incomplete, a renegotiation of an additional loan between the bank and the SPV might occur to enable project continuation or liquidation, which in turn influences the SPV’s ex ante strategies (moral hazard). Considering these two financial features of PPP infrastructure projects, this study develops an incomplete contract model to investigate how the renegotiation triggers ex ante moral hazard and ex post inefficient liquidation. We derive equilibrium strategies under service fees endogenously determined via bidding and examine the effect of equilibrium strategies on contractual efficiency. Finally, we propose an optimal combination of a performance guarantee, the government’s termination right, and a service fee to improve the contractual efficiency of PPP infrastructure projects.
Introduction
Public-private partnerships (PPPs) are innovative arrangements enabling the procurement of infrastructure services by governments with private participation. Pioneered in the United Kingdom through the private finance initiative in 1992 [1], PPP arrangements have since been adopted in countries with various levels of wealth on all continents [2][3][4]. Despite their popularity, PPP infrastructure projects have produced diverse results. On the one hand, many projects in a broad range of sectors have been successfully developed using PPPs, with significantly improved efficiency [5][6][7]. On the other hand, various problems have been encountered in PPP infrastructure development projects worldwide [8,9]. For example, PPPs have resulted in high contracting costs, which reduce project efficiency. There is also evidence suggesting that a bilateral monopoly relationship between the government and private companies creates hold-up opportunities, which can lead the projects to be either abandoned or taken in-house [10]. This point suggests that incentives for private companies to enhance contractual efficiency are important in PPP infrastructure projects.
A PPP infrastructure project usually involves a large upfront investment. A special purpose vehicle (SPV) set up by the sponsoring companies raises project funds primarily via project financing, which includes features such as the nonrecourse principal and incompleteness of debt contracts. The nonrecourse principal releases the sponsoring companies from the debt contract in the event that the SPV falls into bankruptcy. Consequently, obligations under the debt contract after an SPV's bankruptcy are limited to the liability of the SPV. In addition, debt contracts in PPP projects are incomplete contracts [11]. As Hart and Moore [12] noted in relation to incompleteness of debt contracts, considerable importance has been attached to the question of whether contract efficiency can be achieved when considering default risks. The focus on incomplete debt contracts emphasizes the fact that renegotiation is an important mechanism for the allocation of control rights across states [13,14], which thus creates incentives for borrowers to strive to avoid adverse states [15].
This study aims to analyse the effects of renegotiation between the SPV and the bank on ex ante moral hazard and ex post inefficient liquidation by considering the nonrecourse principal and the incompleteness of debt contracts. Assume a situation in which the SPV falls into default because of cost overruns during a PPP project. A renegotiation with the bank will be necessary if the SPV is to obtain an additional loan. On the one hand, the option to renegotiate might be harmful to ex post efficiency. In general, if the PPP infrastructure project has large external benefits, a social loss will occur if the bank, which makes decisions based on the future revenues and risks of the project, refuses to provide financial support. On the other hand, the renegotiation might also worsen the ex ante asset substitution problem [16]. In PPP projects, asset substitution, which is a type of moral hazard of the SPV, refers to a situation in which the SPV prefers a risky project strategy (i.e., a strategy that leads to lower operating costs in the good states but higher costs in the bad states) to a safe strategy because of limited liability and contractual incompleteness. Suppose there is an adverse state in which the bank believes that the project is worth more if it continues as a going concern in the hands of the current SPV. The SPV can renegotiate an additional loan, and a debt forgiveness agreement is likely to be reached because of the nonrecourse principal and the limited liability of the SPV. Hence, the SPV can capture some of the going-concern surplus, and its payoff in the insolvency state will be nonnegative. Thus, the incentive for the SPV to adopt a suboptimal project strategy increases.
In particular, we build a three-stage incomplete contract model of a PPP project developing infrastructure services that are paid for by the government. In the basic model, given an exogenous service fee, we investigate how the renegotiation between the SPV and the bank, which occurs after cost overruns have placed the project at risk, affects the SPV's choice of strategy (moral hazard) before it falls into default and possible project liquidation after it falls into default. Moreover, we consider the case in which the service fee is endogenously determined by competitive bidding. The basic model is then extended by introducing a performance guarantee into the concession contract and termination rights implemented by the government when the SPV falls into bankruptcy. In most concession contracts for PPP projects, the government asks the SPV to deposit an amount of money as a performance guarantee to safeguard against project risk [17]. If the SPV fails to fulfil its contractual responsibilities because of bankruptcy, the government has the right to terminate the concession contract and confiscate the performance guarantee. Thus, the performance guarantee is viewed by governments as a hedge against the possible bankruptcy of the project [18]. However, the effect of a performance guarantee on the ex ante moral hazard of the SPV has not been investigated sufficiently. We examine how the combination of a termination right and a performance guarantee improves both the ex ante and ex post efficiency of PPP projects.
In the PPP finance literature, the focus thus far has been on the trade-off between public and private financing [19]. The benefits of private financing include lowering the shadow cost of public financing [20], taking full advantage of lender knowledge and expertise to evaluate project risks [21] and improve incentives [22], and efficient termination of bad projects [23], whereas the costs of private financing include higher interest rates, exacerbated moral hazard from introducing further risk-sharing [22], and loss of the consumer surplus that the higher prices set by private companies generate [20]. Our contribution here is that we focus on two essential features of private financing, namely, the nonrecourse principal and the incompleteness of debt contracts. We present an incomplete contract model to examine the causes of the ex ante moral hazard of the SPV and ex post project liquidation.
The remainder of this study is organized as follows. Section 2 presents related literature about the financial features of PPP infrastructure projects and sources of moral hazard behaviours of the SPV. Section 3 formalizes the PPP project contract scheme, which includes a debt contract and a concession contract, as an incomplete contract model. Section 4 analyses the mechanism for determining concession prices under competitive bidding systems. Section 5 investigates the effect of a performance guarantee on the efficiencies of PPP projects. Section 6 presents our conclusions and outlines issues to be examined in future studies. The Appendix provides proofs of our conclusions.
Literature Review
2.1. Financial Features of PPP Infrastructure Projects. In a typical PPP infrastructure project, all financing is run through the SPV created for the sole purpose of developing the project. This firm is managed by the sponsors, who are equity investors responsible for the construction and operation of the project. Because the typical PPP infrastructure project involves a large initial investment, the equity usually cannot cover the total investment; thus, the sponsors must raise a large amount of funding via debt. This arrangement leaves the sponsors highly leveraged, typically with banks providing 70 percent to 90 percent of their funds. This financial arrangement for a PPP infrastructure project has two important features. First, the sponsors provide no guarantees beyond the right to be paid from the cash flows of the project [24]. When the SPV falls into default, the banks will recover the debt only from the revenue of the project [25,26]. Moreover, the SPV's payoff in the default state will be nonnegative because of its limited liability. This feature therefore leaves an opportunity for risk-taking behaviour by the SPV. In other words, the SPV might prefer a high-risk, high-return project strategy, that is, moral hazard behaviour.
Second, the contractual term of a PPP infrastructure project is usually approximately 20-30 years [27]. At the contract negotiation stage, the bank and the SPV cannot forecast all future events and write them into the contract; thus, the debt contract must be incomplete. A renegotiation between the bank and the SPV is necessary when an unpredictable event, for example, a cost overrun, occurs [28,29]. The renegotiation also affects the efficiency of the project. A successful renegotiation promises continuation of the project, whereas failure of the renegotiation might lead to liquidation or termination of the project.
2.2. Contractual Efficiency and Moral Hazard of the SPV. In a PPP infrastructure project, the government is concerned with both the social and financial efficiency of the project contract. Social efficiency refers to the project being implemented efficiently; for example, the SPV chooses an optimal project strategy to reduce project cost and enhance quality. We analyse this contractual efficiency from two aspects: ex ante efficiency and ex post efficiency. The former is affected by the SPV's moral hazard behaviour. The latter is related to the continuation or liquidation of the project after a cost overrun occurs. Because the PPP infrastructure project is characterized by huge externality, project liquidation will cause a loss of ex post contractual efficiency. Conversely, financial efficiency refers to the minimum expenditure paid by the government for purchasing the infrastructure services. In this study, we focus on PPP projects developing infrastructure services that are paid for by the government. How to achieve the best project value with minimum money, that is, achieve value for money (VFM) for the PPP project, is the primary goal of the government.
The causes of a moral hazard of the SPV can be divided into two aspects: internal causes and external causes. The internal causes come from the precondition that the SPV is a rational economic actor and behaves opportunistically [30]. If the objectives of the government and the SPV coincide, the SPV is expected to reach decisions that maximize the government's interests because such decisions will also maximize its own interests. However, because the SPV's main objective is to maximize its profit obtained from the project, which diverges from that of the government, the SPV might behave opportunistically regardless of the government's interest [31].
The external causes are related to asymmetric information, contractual incompleteness, and inadequate monitoring. First, neither the government nor the bank can observe the strategy or the behaviour of the SPV after signing the contract. In other words, information about the strategy and the behaviour is private information of the SPV [32]. It is difficult for the government or the bank to judge or verify the real effort level of the SPV. Thus, the SPV can take advantage of this asymmetric information to maximize its interests at a cost to the government; for example, it can choose the risky strategy or shirk in the construction or operation stage. This moral hazard behaviour of the SPV might in turn make cost overruns occur more easily, which triggers the liquidation or termination of the project.
Second, the cost overrun caused by the moral hazard of the SPV cannot be written into the PPP contract [25], which is therefore an incomplete contract. In practice, the design of PPP contracts is often affected by the challenge of including the "appropriate" level of flexibility: with too much flexibility, a moral hazard of the SPV is likely to occur; with too little flexibility, opportunities for welfare-enhancing renegotiations will be lost [33]. When a cost overrun occurs, the SPV must renegotiate with the bank to procure additional funding to enable the continuation of the project. The renegotiation not only has an effect on the ex post efficiency of the project but also influences the ex ante moral hazard of the SPV. If the renegotiation causes a large loss to the SPV, that loss will be effective for deterring the moral hazard of the SPV. However, the nonrecourse principal and the limited liability of the SPV lead the renegotiation to impose little negative effect on the SPV's payoff. Consequently, it will increase the possibility of a moral hazard of the SPV.
Finally, during the contractual term of a PPP, the SPV has a large measure of freedom to manage the infrastructure [6]. The government pays the predetermined service fee to buy the infrastructure services provided by the SPV. The government expects the SPV to use its professional expertise to improve the efficiency of the project, but it is difficult for the government to monitor or supervise the behaviour of the SPV. This insufficient monitoring or supervision is also an external cause that can trigger a moral hazard problem of the SPV. Under insufficient monitoring, an SPV that aims to maximize its own interests will prefer to choose a risky project strategy or a lower effort level.
In summary, the internal and external causes of the moral hazard problem suggest that the government should provide the SPV with proper incentives to improve contractual efficiency. Many studies lying fully in the realm of the incomplete contract literature have focussed on the effects of the allocation of property rights on project efficiency [34,35]. Their findings showed that the optimal allocation of property rights, depending upon the characteristics of the tasks involved in PPP projects, will create proper incentives for the private company to improve project efficiency [36,37]. Complementing these studies, this study focusses on the incompleteness of debt contracts of PPP projects and proposes an optimal combination of a performance guarantee, a termination right, and a service fee determined via competitive bidding to deter the moral hazard of the SPV.
Assumptions.
To examine the effects of the incompleteness of the debt contract on project efficiencies, we assume that the SPV has no equity in the investment and raises all funds by means of a debt contract. The only role of the government in the basic model is to purchase the infrastructure services provided by the SPV. The PPP project is interpreted as a three-period game, as shown in Figure 1. At date t = 0, the SPV is selected by a tender. Then, a concession contract is concluded between the government and the SPV, and a debt contract is concluded between the bank and the SPV. After choosing the project strategy (e.g., construction method), the SPV begins the construction of the facility. We consider the case in which project risks exist (e.g., construction risk). The project costs are determined at date t = 1. In this stage, whether the project is continued is determined under the initial debt contract. When it is impossible to continue the project, a renegotiation in which the bank decides whether to modify the initial contract and provide an additional loan to the SPV is held between the SPV and the bank. If no additional loan is provided by the bank, the SPV cannot continue the project, and control rights are transferred to the bank. The bank then decides whether to continue or liquidate the project. The operation begins at date t = 2. If the project continues, the government pays the SPV the service fee. The SPV then repays the debt, and the project terminates at t = 2.
After signing the concession contract with the government, the SPV enters into a debt contract and starts the project at time t = 0. The initial investment I and the repayment D are described in the debt contract. Because the repayment includes not only the principal but also the risk premium of the project, D > I holds. The SPV then chooses the project strategy, which is classified into two types, denoted by σ_j (j = S, R). Project strategy σ_S represents the safe strategy, whereas σ_R represents the risky one.
The project costs determined at t = 1 are denoted by the function C(σ_j, θ_i) (j = S, R; i = 1, 2, 3), which depends upon both the project strategy σ_j and the state variable θ_i (i = 1, 2, 3) realized with positive probabilities p_i = Prob[θ = θ_i] > 0. The project cost is uncertain and is thus referred to as the cost risk. We assume that the cost risk is known to both the SPV and the bank. However, the realization of θ_i (i = 1, 2, 3) is observable but not verifiable to outsiders. Thus, the situations contingent on the cost risk cannot be described in the debt contract. Therefore, the debt contract is an incomplete contract. Suppose that the financial market is perfectly competitive, and both the SPV and the bank are risk neutral. Assume that

C(σ_R, θ_1) = c_0 < C(σ_S, θ_1) = C(σ_S, θ_2) = c_1 < C(σ_R, θ_2) = C(σ_R, θ_3) = C(σ_S, θ_3) = c_2. (1)

That is, the project cost associated with the risky project strategy is lower than that associated with the safe one when the state variable θ_1 is realized. The bank cannot observe the project strategy chosen by the SPV, but it can observe the costs. Given (1), if the state variable θ_3 is realized, the project cost is determined as c_2, which is independent of the project strategy. Thus, the bank fails to identify the type of project strategy chosen by the SPV; that is, the information concerning the project strategy is private information held by the SPV. When the project proceeds, the SPV earns the service fee paid by the government, denoted by R, at t = 2. We assume that the service fee is determined exogenously in the basic game. Furthermore, we assume that R < c_2; if R ≥ c_2 held, the SPV would earn nonnegative payoffs independent of the project strategy employed. We also assume that the government intends to maximize the VFM of the project. Suppose that the repayment occurs after the SPV is paid. The discount rate is assumed to be zero. After the project cost occurs at t = 1, a renegotiation can occur between the SPV and the bank. At the beginning of t = 1, all of the borrowing I = c_1 is invested into the project. When the cost becomes c_2, the SPV cannot continue the project unless
the bank provides an additional loan. Thus, the SPV will propose a renegotiation with the bank. If the bank decides to provide the additional loan, a new debt contract will be written, and the project is continued by the SPV. However, the project will be liquidated if the bank refuses to provide further financial support. At this point, the bank will suffer a loss equal to the initial investment I.
The last assumption states that the safe project strategy is socially optimal; namely,

[p_1 c_0 + (p_2 + p_3) c_2] − [(p_1 + p_2) c_1 + p_3 c_2] = Λ > 0, (4)

where Λ represents the social cost of a moral hazard.
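As a quick numerical illustration of the social-optimality condition, the sketch below instantiates the cost structure with hypothetical values (none of the numbers come from the paper; p_1, p_2, p_3 denote the state probabilities, c_0 the risky-strategy cost in the good state, and c_1 < c_2 the low and high costs, following the notation assumed above) and verifies that the expected-cost gap Λ is positive.

```python
# Illustrative check with hypothetical numbers: the safe strategy is socially
# optimal when Lambda = E[C(sigma_R)] - E[C(sigma_S)] is strictly positive.
p1, p2, p3 = 0.5, 0.3, 0.2        # state probabilities, p1 + p2 + p3 = 1
c0, c1, c2 = 80.0, 100.0, 150.0   # c0 < c1 < c2, as in the cost assumption

exp_cost_safe = (p1 + p2) * c1 + p3 * c2   # safe: c1 in states 1-2, c2 in 3
exp_cost_risky = p1 * c0 + (p2 + p3) * c2  # risky: c0 in state 1, c2 in 2-3
Lambda = exp_cost_risky - exp_cost_safe
print(Lambda)   # positive => safe strategy is socially optimal
```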
Equilibrium Solutions of the Basic Model.
The basic model is a game with complete information, whereas the information on the project strategy is asymmetric between the SPV and the bank. We solve the subgame-perfect equilibria by backward induction. First, let us focus on the subgame wherein the bank and the SPV renegotiate whether to continue the project and derive two lemmas.
Lemma 1. If R + I − C(σ_j, θ_i) − D ≥ 0 (5a) and C(σ_j, θ_i) ≤ I (5b) hold, the renegotiation never occurs.
Lemma 1 shows that if the SPV's profit is nonnegative and the project cost does not exceed the initial investment amount, the SPV should complete the project under the initial debt contract and make repayments. If (5a) holds, any renegotiation request from the SPV should be rejected because the bank knows that the SPV can fully repay the contract amount. However, (5a) or (5b) might not hold contingent on the realization of the state variable θ_i. In this case, the project cannot proceed unless the bank extinguishes part of the debt and provides an additional loan.
Lemma 2. In the renegotiation, the bank provides an additional loan if and only if

R − (c_2 − I) ≥ 0 (6)

holds.
Mathematical Problems in Engineering 5
When the project cost is determined as c_2, the SPV needs an additional loan to continue the project. Suppose that the bank has all of the bargaining power in the renegotiation. Then, the new repayment D° is written as

D° = R. (7)

The left-hand side of (7) is the payoff earned by the bank at date 2, which must cover the additional loan c_2 − I required, as guaranteed by (6). Lemma 2 shows that the bank can recover part of the debt by continuing the project. If (6) is satisfied, the bank will provide an additional loan. Note that the initial loan is a sunk cost; thus, it is not considered in the renegotiation. Conversely, if (6) is not satisfied, the bank refuses to provide an additional loan, which results in the liquidation of the project. Different subgame-perfect equilibria are available depending upon the amount of the service fee. Thus, two scenarios arise. In Scenario 1, the SPV will continue the project to date 2 independent of the cost risk. In Scenario 2, the project will be liquidated when the project cost is determined as c_2.
Scenario 1. First, consider the case in which the SPV chooses the safe project strategy σ_S at t = 0. When the state variables θ_1, θ_2 are realized, the project costs become C(σ_S, θ_1) = C(σ_S, θ_2) = c_1. The SPV can complete the project without any additional loan from the bank. Given the repayment D, the payoffs earned by the SPV are π_spv(θ_1) = π_spv(θ_2) = R + I − c_1 − D. However, when state variable θ_3 is realized, an additional loan equal to c_2 − c_1 is necessary for the project to continue. From Lemma 2, the bank chooses to provide an additional loan and captures the entire service fee R, which is paid at date 2. Thus, after θ_3 is realized, the SPV obtains the payoff written as π_spv(θ_3) = 0. Correspondingly, the bank's payoffs are represented by π_bank(θ_1) = π_bank(θ_2) = D − I > 0 and π_bank(θ_3) = R − c_2 < 0. In the case in which the SPV chooses the risky project strategy σ_R, a lack of funds is realized for the state variable θ_2 or θ_3. The payoffs of the SPV and the bank are represented by

π_spv(θ_1) = R + I − c_0 − D, π_spv(θ_2) = π_spv(θ_3) = 0,
π_bank(θ_1) = D − I, π_bank(θ_2) = π_bank(θ_3) = R − c_2,

respectively. Then, consider how the SPV chooses the project strategy at t = 0. Given the amount of repayment D, the SPV's expected payoffs are written as Π_spv(σ_S) = (p_1 + p_2)(R + I − c_1 − D) and Π_spv(σ_R) = p_1(R + I − c_0 − D). The conditions that ensure that the SPV chooses the safe project strategy are written as

(p_1 + p_2)(R + I − c_1 − D) ≥ p_1(R + I − c_0 − D), (8a)
R + I − c_1 − D ≥ 0. (8b)

Condition (8a) is the incentive-compatible constraint, and condition (8b) is the participation constraint corresponding to the safe project strategy. In contrast, the conditions for choosing the risky project strategy are

p_1(R + I − c_0 − D) > (p_1 + p_2)(R + I − c_1 − D), (9a)
R + I − c_0 − D ≥ 0. (9b)

Before the SPV chooses the project strategy, the bank determines the amount of the repayment at date 0. With respect to the project strategies chosen by the SPV, the bank's expected payoffs are Π_bank(σ_S) = (p_1 + p_2)(D − I) + p_3(R − c_2) and Π_bank(σ_R) = p_1(D − I) + (p_2 + p_3)(R − c_2). Suppose that the financial market is perfectly competitive. Then, the repayments necessary for the bank to break even are D_S = I + p_3(c_2 − R)/(1 − p_3) and D_R = I + (1 − p_1)(c_2 − R)/p_1. The second terms in D_S and D_R represent the premiums for undertaking the risk of debt forgiveness.
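The Scenario 1 break-even repayments can be checked numerically. The sketch below uses hypothetical parameter values (not taken from the paper) and the notation assumed above, and verifies that each repayment indeed yields zero expected bank profit under the corresponding strategy.

```python
# Numerical sketch of the Scenario 1 break-even repayments (hypothetical
# parameters). The bank breaks even in expectation, anticipating debt
# forgiveness in the states where the cost rises to c2 and it recovers
# only the service fee R.
p1, p2, p3 = 0.5, 0.3, 0.2
c1, c2 = 100.0, 150.0
R = 130.0        # service fee, with R < c2
I = c1           # all of the borrowing is invested at t = 1

D_safe = I + p3 * (c2 - R) / (1 - p3)        # break-even repayment, safe
D_risky = I + (1 - p1) * (c2 - R) / p1       # break-even repayment, risky

# Verify zero expected bank profit under each strategy.
profit_safe = (p1 + p2) * (D_safe - I) + p3 * (R - c2)
profit_risky = p1 * (D_risky - I) + (p2 + p3) * (R - c2)
print(round(D_safe, 2), round(D_risky, 2))
print(round(profit_safe, 10), round(profit_risky, 10))
```

With these numbers the risky strategy carries the larger premium (D_risky > D_safe), since debt forgiveness is needed more often.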
Scenario 2. From Lemma 2, the bank's decision-making in the renegotiation differs between Scenarios 2 and 1 when the project cost is determined as c_2.
When the project cost is determined as c_2, the bank will refuse to provide an additional loan, and the project will be liquidated. Because of the limited liability of the SPV, its payoff is 0 even when the project is liquidated. Thus, the SPV obtains the payoffs shown in Scenario 1. Given the repayment D, the expected payoffs obtained by the SPV are Π_spv(σ_S) = (p_1 + p_2)(R + I − c_1 − D) and Π_spv(σ_R) = p_1(R + I − c_0 − D). However, the bank's payoffs differ from those shown in Scenario 1. When the project cost is determined as c_2, the bank chooses to liquidate the project and obtains the payoff represented by −I. Thus, given the project strategy chosen by the SPV, the break-even repayments are written as D_S = I/(1 − p_3) and D_R = I/p_1.
Note. The establishing condition column shows the establishing conditions of the relevant equilibrium solutions. Π_spv(σ*) shows the expected payoffs earned by the SPV under the equilibrium strategy σ*. The ✓ sign in the default column implies that the debt contract is not modified; the Δ sign means that the bank extinguishes part of the debt and the project continues. The × sign means that the project is liquidated. Finally, we derive the following equilibrium solutions, which are summarized in Table 1 (the proof is provided in Appendix B).
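The Scenario 2 break-even repayments can be sketched in the same way (hypothetical numbers, notation as assumed above): with liquidation after a cost overrun, the bank loses the whole loan I in the bad states, so the break-even repayments carry a default premium instead of the debt-forgiveness premium of Scenario 1.

```python
# Scenario 2 sketch with hypothetical parameters: the bank recovers nothing
# on liquidation, so each repayment solves a zero-expected-profit condition.
p1, p2, p3 = 0.5, 0.3, 0.2
I = 100.0

D_safe = I / (1 - p3)    # solves (1 - p3) * (D - I) - p3 * I = 0
D_risky = I / p1         # solves p1 * (D - I) - (1 - p1) * I = 0

print(round(D_safe, 2), round(D_risky, 2))
```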
Bidding Systems and Equilibrium Solutions
4.1. Bidding Systems. In the basic model, the equilibrium solutions are derived by assuming that the service fee R is an exogenous variable. However, in practice, the service fee is determined by bidding before the concession contract is signed. This study assumes that a bidding process concerning the concession contract is implemented and that the bidder offering the lowest price is selected as the SPV at the beginning of a PPP project. For example, assume that n firms attend the bid and that the prices submitted by the firms are denoted by R_k (k = 1, 2, ..., n); then the price submitted by the winning bidder w must satisfy R_w = min{R_1, R_2, ..., R_n}. In many real-world PPP projects, the winning bid is selected following a comprehensive evaluation of the proposals presented by all bidders. However, the cost of the comprehensive evaluation, which is represented by a trade-off between the evaluation items and the service fee, is generally measured in monetary terms. Suppose that perfect competition is observed during the bidding process. In the real world, there are significant transaction costs, for instance, the costs of participating in the bidding process. Thus, in many cases, the number of bidders participating in the bidding process is restricted, and perfect competition might not be realized. However, because the purpose of this study is to analyse the structure of PPP contracts, we assume that perfect competition is observed in the bidding process. We first analyse a case in which the bid price is determined by perfect competition and no constraints are imposed on the bid price. Assuming perfect competition means that the lowest-price bidder is awarded as the winner. The equilibrium solutions in this case are referred to as competitive bidding equilibrium solutions (CBEs). In CBEs, the bidders that choose the safe project strategy might not be selected because of excessive competition.
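The lowest-price award rule described above can be stated in a couple of lines; the firm names and bid values below are hypothetical.

```python
# Minimal illustration of the lowest-price award rule: with bids R_1..R_n,
# the winner w satisfies R_w = min{R_1, ..., R_n}.
bids = {"firm1": 132.0, "firm2": 128.5, "firm3": 130.0}   # hypothetical bids
winner = min(bids, key=bids.get)   # firm with the lowest submitted price
print(winner, bids[winner])
```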
4.2. Competitive Bidding Equilibrium Solutions (CBEs). Suppose that competitive bidding is implemented at t = 0 to determine the service fee R. The government signs the concession contract with the winning bidder, who offers the lowest feasible price. We find the lowest R from the equilibrium solutions of the basic model. First, denote the lowest service fee (the marginal service fee hereafter) in equilibrium solution A by R_A^min. The marginal service fees R_B^min, R_C^min, and R_D^min for equilibrium solutions B, C, and D are defined analogously. The order of the marginal service fees depends upon the stochastic variable θ_i (i = 1, 2, 3) and the project cost, c_1 or c_2. Given (4), the order of the marginal service fees is summarized in four patterns from which we derive four CBEs, as shown in Table 2 (the proof is included in Appendix B).
Note. The ✓ sign in the liquidation column means that the project continues, whereas the × sign means that the project is liquidated. The ✓ sign in the moral hazard column means that a moral hazard does not occur, whereas the × sign means that a moral hazard occurs.
Equilibrium D in the basic model is particularly reflected in CBE 1, in which neither a moral hazard nor inefficient liquidation can be avoided. CBE 2 reflects equilibrium C, in which the moral hazard problem is avoided, but the project is liquidated when the project cost is determined as c_2. CBE 3 and CBE 4 both reflect equilibrium B, in which a moral hazard occurs, and the inefficient liquidation of the project is avoided.
Proposition 3. When the service fee is determined via competitive bidding for a PPP concession contract, a moral hazard and inefficient liquidation cannot be avoided simultaneously during the contract period.
Contract Termination and Performance Guarantee.
We have shown that the financial efficiency and social efficiency of the PPP project cannot be achieved simultaneously via the competitive bidding system. Here, we address this issue by adding a performance guarantee and a right of termination to the concession contract, as shown in the performance guarantee model. The differences between the basic model and the performance guarantee model are as follows. At the beginning of the project, the SPV is obligated to deposit guarantee money with the government. Assume that the SPV is prohibited from procuring this guarantee money from debt funding. Suppose that the project cost realized at t = 1 is c_2; thus, the project cannot continue unless an additional loan is provided by the bank. Then, the government exercises its termination right and cancels the concession contract with the SPV. The bank must then decide whether to accept the termination of the concession contract. If the bank accepts, the project is liquidated. The government would then pay compensation equal to the repayment D for acquiring the assets of the project. After making this payment, the government incurs the additional expenditure c_2 − c_1 to continue the project. The government must decide whether to find an alternative SPV or to continue the project itself. When the concession contract is terminated, the guarantee money confiscated by the government is used wholly or partly for repayments to the bank. Conversely, if the bank rejects the termination of the concession contract, it can modify the debt contract and provide an additional loan to the SPV. Thus, the SPV can continue the project to its conclusion. Finally, if the termination right is not exercised by the government and the project continues until t = 2, the guarantee money will be returned to the SPV.
Equilibrium Solutions of the Performance Guarantee Model
When the project cost is determined as c1, the SPV can complete the project without any additional loan. The amount of performance guarantee money, denoted by K, is paid at t = 0 and returned at t = 2. The SPV's payoffs are the same as those in the basic model. However, when the project cost is c2, an additional loan equal to c2 − c1 is required to enable the continuation of the project. The government exercises its termination right and confiscates the guarantee money. The government then acquires the assets of the project by paying the bank compensation equal to the repayment D, and can sign a new concession contract with an alternative SPV (or the initial SPV). The alternative SPV bears the additional cost c2 − c1 and continues the project. Therefore, when the project cost is determined as c2, the SPV's payoff is −K.
Conversely, the bank's payoffs are not contingent on the project state, because the bank always receives compensation equal to the repayment D. Therefore, the bank obtains expected payoffs that are independent of the strategy (safe or risky) chosen by the SPV, represented by Π_bank. Given the competitive financial market, the repayment D is determined by the bank's zero-profit condition.
Given guarantee money K and the repayment D, the expected payoffs of the SPV under the safe and the risky project strategy are written as Π_spv(s) = (p1 + p2)(α − c1) − p3K and Π_spv(r) = p1α − (p2 + p3)K, where α denotes the service fee. The conditions ensuring that the safe project strategy is chosen are written as (13a) and (13b), which represent the incentive constraint and the participation constraint, respectively, for the choice of the safe project strategy by the SPV. Consider the case in which the participation constraint (13b) holds but the incentive constraint (13a) does not. For the risky project strategy to be feasible, the two conditions (14a) and (14b) must be satisfied, which represent the incentive constraint and the participation constraint, respectively, corresponding to the risky project strategy.
Note: the √ sign in the liquidation column means that the project continues; the √ sign in the moral hazard column means that a moral hazard does not occur, whereas the × sign means that a moral hazard occurs. In equilibria E and F, the ex post efficiency of the project is ensured when the project cost is determined as c2, because the project will be continued by an alternative SPV after the government terminates the existing concession contract. In addition, in equilibrium E the SPV chooses the safe project strategy, so the socially optimal contract is realized. However, in equilibrium F the SPV chooses the risky project strategy, and a moral hazard cannot be prevented.
Competitive Bidding Equilibrium Solutions of the Performance Guarantee Model.
To find the optimal value of the performance guarantee, we consider competitive bidding to determine the service fee endogenously. We refer to the competitive bidding equilibrium solutions of the performance guarantee model as P-CBEs. Because (13a) and (13b) can be rewritten as (16a) and (16b), the area Ω shown in Figure 2 satisfies (16a) and (16b) simultaneously. Conversely, from (14a) and (14b), the conditions corresponding to the risky project strategy are rewritten as (17a) and (17b); the corresponding area in Figure 2 represents the risky project strategy. When the service fee, denoted by α*(K), is determined by competitive bidding, two cases occur. First, if K < (p1(p1 + p2)/p2)c1, then α*(K) is equal to the lowest value existing in equilibrium F, obtained from (17b). Second, if the performance guarantee is determined as K ≥ (p1(p1 + p2)/p2)c1, only equilibrium E is feasible, and the service fee α*(K) follows from (16b). As shown in Table 3, the following P-CBEs are derived: in P-CBE 1, a moral hazard and liquidation are both avoided, whereas a moral hazard occurs in P-CBE 2.
Proposition 4.
In competitive bidding for a concession contract, if the amount of the performance guarantee satisfies condition (21), then the socially optimal PPP project is possible. At the same time, maximum VFM is also realized.
Proposition 4 suggests that the performance guarantee is not only an instrument for hedging risks but also a means of increasing social and financial efficiency by disciplining the SPV. The optimal performance guarantee increases if p1 on the right-hand side of (21) is larger or p2 is smaller. Meanwhile, the social cost Λ arising from a moral hazard decreases with a larger p1 or a smaller p2. Note that the choice of the safe project strategy will be socially efficient only if the equity cost of raising the guarantee money is small. However, if the equity cost is greater than the social cost arising from a moral hazard, it might be preferable to choose the risky project strategy. Conversely, because we assume that the government does not behave strategically and pays the bank compensation equal to the initial loan, the bank has no incentive to exercise its step-in right, which is stipulated in some concession contracts for PPP projects [38]. Thus, in future studies, it might be necessary to formulate a performance guarantee model incorporating a step-in right in cases in which the government behaves strategically.
Moreover, when the amount of the performance guarantee satisfies (21), the expected payments from the government are (p1 + p2)c1 + p3c2. No additional costs (quasi-rent) are incurred because the SPV's expected payoff is zero, and no risk premium exists because the bank bears no risk. Thus, the government's expected payment is lowest; that is, value for money is realized.
Implications of the Performance Guarantee Policy.
Proposition 4 argues that the social optimum and the maximization of VFM are realized in PPP projects if the guarantee money satisfies (21) and competitive bidding is implemented before the concession contract is signed. By comparing the CBEs in the basic model and the P-CBEs in the performance guarantee model, we conclude that the performance guarantee system has the following four desirable features.
First, inefficient liquidation does not occur, because the government guarantees the continuation of the project even when it becomes difficult for the initial SPV to continue it. Second, the government commits to paying compensation to the bank to ensure the continuation of the project in that event; thus, there is no risk premium in the debt contract because the bank bears no risk. Third, the performance guarantee gives the SPV an incentive to choose the safe project strategy, so moral hazard is avoided when the performance guarantee satisfies (21). Fourth, additional payments are avoided if the appropriate guarantee money is collected in advance. According to agency theory, moral hazard can be avoided at a cost (quasi-rent), an additional payment from the principal to the agent. In a PPP project, any additional payment is in effect income transferred from the government to the SPV. Thus, the performance guarantee system not only ensures social welfare but also minimizes the expected payments from the government; that is, VFM is maximized by the introduction of the performance guarantee system.
Furthermore, the performance guarantee has certain characteristics in its practical application. Note that the optimal amount of guarantee money is not a specific value but rather a range satisfying (21), and it is not easy to pin down an exact value within that range; the government must therefore gather sufficient information about the project risks. In addition, because the guarantee money is obtained from equity funding, the opportunity cost of equity increases with the size of the guarantee. It is therefore desirable to set the guarantee money as low as possible within the range satisfying (21). Issues relating to the calculation of the performance guarantee and the opportunity costs of equity have not been addressed here and will be considered in future studies.
Conclusions
In this study, we formulate an incomplete contract model of PPP projects, which are assumed to provide infrastructure services paid for by the government, to analyse the mechanisms of ex ante moral hazard and ex post liquidation. When the SPV is selected through competitive bidding, we find that this method cannot prevent a moral hazard and inefficient liquidation from occurring simultaneously. We then introduce a performance guarantee and a termination right for the government into the project contracts. The findings indicate that the optimal performance guarantee satisfying (21) not only provides the SPV with an incentive to choose the safe project strategy but also minimizes the expected payment from the government. In addition, inefficient liquidation is avoided because the government exercises its right to terminate the concession contract and finds an alternative SPV to continue the project after the existing SPV defaults. That is, social efficiency and VFM can be achieved by setting the optimal performance guarantee level satisfying (21), as stipulated in the concession contract.
From a real-world perspective, a performance guarantee can help to ensure the continuation of PPP infrastructure projects facing project risks, because the guarantee money plays an important role in encouraging banks to provide additional loans to the SPV. In addition, governments can use performance guarantees to give SPVs proper incentives to choose socially efficient project strategies, thereby enhancing contractual efficiency. By setting the optimal performance guarantee level, it is possible to design an optimal contract that maximizes the VFM.
A different approach is necessary under different assumptions. First, after observing the state realized at t = 1, it might be necessary to develop a real-option approach to analyse future behaviour. Second, this study does not consider the possibility that the SPV strategically liquidates the project; for instance, the SPV might strategically default in an attempt to trigger liquidation. Third, this study assumes that the bank has all of the bargaining power in the renegotiation that occurs when an additional loan is necessary at t = 1. However, the initial SPV might have considerable bargaining power because of the monopolistic conditions that exist in many international PPP projects; in this case, the performance guarantee would serve to restrain the SPV's bargaining power. Finally, the government might also behave strategically. We will address the issue of strategic behaviour on the part of both the SPV and the government in future studies.
A. Notations
To make the paper more readable, this appendix provides a list of frequently used notations so that readers need not search back and forth for their meanings.
respectively, from (8a) and (8b), which represent the incentive-compatibility condition and the participation condition corresponding to the safe project strategy. Because ((1 − 3 ) 2 / 2 ) 1 + 3 2 ≥ (1 − 3 ) 1 + 3 2 holds, the condition for equilibrium A, in which the safe project strategy is chosen, follows. Under assumption conditions (2), (3), and (4), equilibrium A always exists. Conversely, given = + (1 − 1 )( 2 − )/ 1 , we obtain < ] 1 and ≥ ] 3 from (9a) and (9b), respectively. Define ] 3 = (1 − 1 ) 2 . Therefore, the condition for equilibrium B, in which the risky project strategy is chosen, follows, and the necessary and sufficient condition for equilibrium B to exist is obtained. From the conditions that guarantee that the SPV chooses the safe project strategy, the necessary and sufficient condition for equilibrium C to exist is written as Λ ≥ 2 /( 1 + 2 ) 1 . Conversely, the conditions corresponding to the risky project strategy are written analogously. Similarly, it is easy to prove that the following equivalent relationships exist (proof omitted). Therefore, (B.7) can be derived from (B.6) and rewritten as (B.8). Next, consider the case in which V3 ≥ V1 and V4 ≤ V5 hold. From (B.10), we obtain (B.11), which can be rewritten as (B.12). Equations (B.8) and (B.12) show that two patterns concerning the order of the marginal service fees exist, pattern 1 and pattern 2 (B.13). The necessary and sufficient conditions for pattern 1 and pattern 2 are represented by (B.4) and (B.9), respectively. Clearly, reversal of the order of the service fees in pattern 1 and pattern 2 is possible.
Figure 1: Timing of the basic model.
Figure 2: Areas representing the alternative project strategies.
Table 1: Equilibrium solutions of the basic model.
Table 2: Competitive bidding equilibrium solutions (CBEs) in the basic model.
Table 3: P-CBEs of the performance guarantee model.
"year": 2018,
"sha1": "e7b47e763ff2364ed4f40000e8a1ab5cd22b5e2e",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2018/3631270.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e7b47e763ff2364ed4f40000e8a1ab5cd22b5e2e",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
Differences in the predictive value of red cell distribution width for the mortality of patients with heart failure due to various heart diseases
Background Increased red blood cell distribution width (RDW) is associated with adverse outcomes in patients with heart failure (HF). The objective of this study was to compare the differences in the predictive value of RDW in patients with HF due to different causes. Methods We retrospectively investigated 1,021 HF patients from October 2009 to December 2011 at Fuwai Hospital (Beijing, China). HF in these patients was caused by three diseases: coronary heart disease (CHD), dilated cardiomyopathy (DCM) and valvular heart disease (VHD). Patients were followed up for 21 ± 9 months. Results The RDW, mortality and survival duration were significantly different among the three groups. Kaplan–Meier analysis showed that cumulative survival decreased significantly with increased RDW in patients with HF caused by CHD and DCM, but not in those with HF caused by VHD. In a multivariable model, RDW was identified as an independent predictor of mortality in HF patients with CHD (P < 0.001, HR 1.315, 95% CI 1.122–1.543). The group with N-terminal pro-brain natriuretic peptide (NT-proBNP) and RDW both above the median had the lowest cumulative survival among patients with HF due to CHD, but not among patients with HF due to DCM. Conclusions RDW is a prognostic indicator for patients with HF caused by CHD and DCM; thus, RDW adds important information to NT-proBNP in HF patients with CHD.
Introduction
Heart failure (HF) is a chronic, progressive illness that carries a very poor prognosis and is highly prevalent worldwide. [1,2] HF affects nearly four million Chinese people and is associated with elevated rates of mortality. [3] To target effective therapies for the most appropriate patients, there is a need for a simple but accurate prognostic indicator.
Red cell distribution width (RDW) is readily available from a standard full blood count and is a measure of variation in red blood cell (RBC) size. It is used clinically for the morphological classification of anemia and the differential diagnosis of small cell anemia. [4] Furthermore, RDW has been shown to be a powerful predictor of short- and long-term outcomes in patients with HF. [5–7] However, the etiology of heart failure is complex: it may result from myocardial systolic dysfunction, coronary artery disease, endocrine disease, heart valve disease, hypertension, acute pulmonary embolism, emphysema or other chronic lung diseases. Thus, in this study, our primary aim was to investigate differences in the prognostic value of RDW among patients with HF due to various heart diseases in order to expand the application of RDW in such patients.
Study samples
We identified 1,021 consecutive patients with HF hospitalized in the HF ward of Fuwai Hospital (Beijing, China) from October 2009 to December 2011. The diagnosis of HF was made by two clinicians with broad experience according to the European Society of Cardiology (ESC) guidelines. [8] To reduce the impact of other factors on RDW or the endpoint, the following patients were excluded: those with incomplete medical records, those under 18 years old, and those with diseases such as infective endocarditis, aortic dissection, constrictive pericarditis, hydropericardium and pulmonary thromboembolism. In addition, patients with thyroid disease, acute cerebrovascular disease, cancer, chronic obstructive pulmonary disease, end-stage renal disease and other diseases with the potential to change RDW were excluded. All the patients were given standard medication according to the guideline recommendations during hospitalization and after discharge. Furthermore, all patients or their families were followed up by telephone after discharge. The endpoint was defined as all-cause death. The study protocol was approved by the local ethics committee in accordance with the Declaration of Helsinki, and all study participants gave informed consent.
Clinical data collection
Clinical information on the patients, including age, sex, body mass index (BMI) and complicating diseases, was recorded on admission. Blood samples were taken from the participants for measurements at baseline, including routine blood tests, N-terminal pro-brain natriuretic peptide (NT-proBNP) and biochemical indexes. The blood samples were collected using a standard procedure after a 1-h fast, and were sent to the core laboratory of Fuwai Hospital for immediate testing using standard techniques. The RDW, RBC count, and hemoglobin (Hb) were tested using Sysmex XE-2100 blood cell analyzers and appropriate reagents; alanine aminotransferase (ALT), aspartate aminotransferase (AST), total protein (TP), albumin (ALB), total bilirubin (TBIL), direct bilirubin (DBIL), blood urea nitrogen (BUN), serum creatinine (CREA), uric acid (URIC) and high-sensitivity C-reactive protein (hs-CRP) were assayed using a Hitachi 7180 biochemistry autoanalyzer; plasma NT-proBNP was measured with a dedicated kit (NT-proBNP assays; Biomedica, Vienna, Austria). Chest X-radiography and echocardiography were performed and the left ventricular ejection fraction (LVEF) was measured according to the biplane Simpson rule. During the follow-up period, adverse events after discharge, such as rehospitalization due to HF or all-cause death, were recorded. If a patient was readmitted several times for HF exacerbation, we recorded the time of the first readmission. If a patient was readmitted to hospital and then died, death was regarded as an adverse event and the time of death was recorded. Following an all-cause death event, the follow-up period for this patient ended.
Statistical analyses
The results are presented as percentages for dichotomous variables, mean ± SD for parametric continuous variables, and median (interquartile range) for nonparametric continuous variables. Two groups were compared by the t-test or the Mann–Whitney U-test, and multiple groups by the Kruskal–Wallis H-test; further pairwise comparisons were performed with the Bonferroni correction. Univariate analysis was used to select clinical variables related to the endpoint with a P-value < 0.05. A multivariate Cox proportional hazards model was used to calculate risk ratios for independent predictors of mortality with incremental increases in continuous variables. The RDW was non-normally distributed and is represented as the median [first quartile, third quartile: M (Q1, Q3)]. Survival was estimated by the Kaplan–Meier method and compared between groups with the log-rank test. Receiver operating characteristic (ROC) curve analysis was used to estimate the predictive value of RDW for death risk in patients with HF; the areas under the curve (AUC) were compared by the Z-test. The data were analyzed statistically using SPSS version 17.0.
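As an illustration of the survival estimate used throughout this study, the following is a minimal pure-Python Kaplan–Meier sketch. The study itself used SPSS 17.0; the follow-up data below are invented toy values, not patient data.

```python
# Minimal Kaplan-Meier estimator (illustrative sketch, not the SPSS analysis
# used in the study). Input: (time, event) pairs where event=1 is death and
# event=0 is censoring. Returns [(time, survival probability)] at death times.

def kaplan_meier(data):
    data = sorted(data)
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for time, ev in data if time == t and ev == 1)
        ties = sum(1 for time, ev in data if time == t)
        if deaths > 0:
            # Product-limit update: multiply by the conditional survival at t.
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= ties
        i += ties
    return curve

# Toy follow-up times in months: deaths at 6, 12 and 20; censoring at 15 and 24.
print(kaplan_meier([(6, 1), (12, 1), (15, 0), (20, 1), (24, 0)]))
```

Note how the censored patient at 15 months leaves the risk set without lowering the survival curve, which is the defining feature of the product-limit estimate.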
RDW is an independent risk factor for death in patients with HF
In the univariate analysis, the significant clinical variables were BMI, NYHA heart function classification, blood pressure, heart rate, LVEF, RDW, NT-proBNP, hs-CRP, Hb, TP, ALB, ALT, AST, TBIL, DBIL, BUN, CREA, and URIC. In the multivariate Cox proportional hazards analysis, RDW was identified as a significant variable in predicting mortality (Table 2).
Differences in the prognostic value of RDW in patients with HF due to various heart diseases
CHD, DCM, and VHD are the main causes of HF. Therefore, we selected patients with these three main causes for this study and stratified the population into three groups accordingly. In the univariate analysis, RDW remained a predictor of mortality in patients with CHD and DCM, but not in those with VHD (Table 3).
Kaplan–Meier analysis was used to evaluate the predictive ability of RDW on cumulative survival by stratifying the patients into four groups according to the quartiles of RDW. Cumulative survival was significantly lower in the higher RDW quartiles in patients with CHD and DCM, but did not differ significantly in those with VHD (Figure 2).
As shown in Table 4, after adjustment for potential confounding factors (NT-proBNP, Hb, LVEF, and age) in a multivariable Cox proportional hazards model, RDW remained an independent predictor for CHD mortality; however, RDW was no longer a predictor for DCM mortality.
We stratified the population into three groups according to the cause of HF and found that the median RDW, mortality during the follow-up period and median survival time were significantly different among the three groups of patients. Median RDW and mortality were significantly higher in patients with VHD and DCM than in patients with CHD, and the survival time of these patients was shorter than that of patients with CHD, with no differences between the patients with VHD and DCM (Table 5). Parameters of the ROC curves examining the power of RDW to predict mortality in patients with HF are shown in Table 5. The prognostic value of the RDW differed among the various heart diseases. The AUC of RDW for predicting mortality due to CHD and DCM was 0.704 (P < 0.001, 95% CI: 0.609–0.799) and 0.753 (P < 0.001, 95% CI: 0.647–0.860), respectively, with no significant difference between the two values (P > 0.05). However, the AUC of the RDW for predicting mortality from VHD was 0.593 (P = 0.168).
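The AUC values reported above can be read as a rank statistic: the probability that a randomly chosen patient who died has a higher RDW than a randomly chosen survivor. A minimal sketch of this computation follows; the RDW values are hypothetical, not the study data.

```python
# AUC computed as the Mann-Whitney U statistic: the fraction of (death,
# survivor) pairs in which the death case has the higher score, counting
# ties as half. Illustrative toy RDW values only.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

died     = [15.2, 14.8, 16.1]        # RDW (%) of patients who died (toy data)
survived = [13.1, 13.9, 14.8, 12.7]  # RDW (%) of survivors (toy data)

print(auc(died, survived))
```

An AUC of 0.5 corresponds to no discrimination (as approached by the VHD group above), while values near 1 indicate that deaths almost always had the higher RDW.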
RDW adds important prognostic information to NT-proBNP in HF due to CHD
We then stratified the population into four groups according to the medians of RDW (13.4%) and NT-proBNP (1566.0 fmol/mL) to explore whether RDW adds prognostic information to NT-proBNP. We found that the group of patients with RDW > 13.4% and NT-proBNP > 1566.0 fmol/mL had the lowest cumulative survival among patients with HF due to CHD (Figure 3A). Moreover, at the same level of NT-proBNP, cumulative survival was significantly lower in patients with increased RDW. In patients with HF due to DCM, the cumulative survival of patients with RDW < 13.4% and NT-proBNP > 1566.0 fmol/mL was significantly lower than that of the group with RDW > 13.4% and NT-proBNP > 1566.0 fmol/mL (Figure 3B).
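The two-way median split described above can be sketched as follows. The cut-offs are the study's reported medians, but the patient values are hypothetical.

```python
# Two-way median split into four groups, as used in the survival comparison.
# Cut-offs follow the paper (RDW 13.4%, NT-proBNP 1566.0 fmol/mL); the
# patient values below are invented for illustration.

RDW_CUT, BNP_CUT = 13.4, 1566.0

def group(rdw, bnp):
    # Returns ("high"/"low" RDW, "high"/"low" NT-proBNP) relative to the medians.
    return ("high" if rdw > RDW_CUT else "low",
            "high" if bnp > BNP_CUT else "low")

patients = [(14.1, 2100.0), (12.9, 900.0), (13.0, 1800.0), (15.3, 1200.0)]
print([group(r, b) for r, b in patients])
```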
Discussion
RDW reflects the variability in size of circulating RBCs and, when elevated, defines the state of anisocytosis. Felker et al. [9] first reported the increase in mortality and morbidity in patients with CHF and elevated RDW. Subsequent studies showed that increased RDW is associated with increased mortality in patients with chronic and acute HF, [10–12] acute myocardial infarction, [13,14] coronary artery disease, [15–17] acute coronary syndromes, [18] and stroke. [19] However, HF is a complex disease with a variety of causes. [20] Therefore, our primary aim in this study was to investigate differences in the prognostic value of RDW among patients with HF due to various heart diseases and to apply it more effectively in predicting outcomes of HF patients.
Table 5. RDW, mortality and survival time in the three groups, and ROC curve analysis.
Prognostic value of RDW in patients with HF due to various heart diseases
Kaplan-Meier analysis was used to evaluate the effect of RDW on cumulative survival by stratification of the patients into four groups according to the quartile of RDW.
RDW and CHD-HF
Cumulative survival was significantly lower in patients with CHD in the higher quartiles of RDW. The AUC of RDW for predicting the mortality of HF patients due to CHD was 0.704, showing that RDW is an effective predictor of mortality in this group.
After adjustment for potential confounding factors (NT-proBNP, Hb, LVEF, and age) in a multivariable Cox proportional hazards model, RDW remained an independent predictor of CHD mortality. Consequently, we considered combining RDW with another biomarker to test whether RDW could provide additional prognostic information. NT-proBNP is a natural choice, as it is a well-established prognostic marker in HF. [21,22] The lowest cumulative survival for the patients with CHD was observed in the group with both RDW and NT-proBNP above the median values. In addition, at the same level of NT-proBNP, the cumulative survival of patients with RDW above the median was lower than that of those with RDW below the median (dark blue line vs. red line and light blue line vs. green line in Figure 3A). These data indicate that the combination of NT-proBNP and RDW provides more information on the prognosis of patients with HF due to CHD than NT-proBNP alone.
RDW and DCM-HF
Cumulative survival was significantly lower in patients with DCM in the higher quartiles of RDW. The AUC of RDW for predicting the mortality of HF patients due to DCM was 0.753, showing that RDW is an effective predictor of mortality in this group.
However, after adjustment for potential confounding factors (NT-proBNP, Hb, LVEF, and age) in a multivariable Cox proportional hazards model, RDW was no longer a predictor for DCM mortality. The combination of NT-proBNP and RDW failed to provide additional information for prognosis of patients with HF due to DCM than that provided by NT-proBNP alone. This may be related to the small number of patients in this group.
RDW and VHD-HF
Cumulative survival in every RDW quartile group was similar in patients with VHD. The AUC of RDW for predicting the mortality of HF patients due to VHD was 0.593, indicating that RDW is not an effective predictor of mortality in this group.
These published data and the novel observations of this study provide a further understanding of why measurement of RDW is more appropriate in HF caused by coronary artery disease and DCM than in VHD.
Study limitations
The current study has several limitations. Although it was a single-center study with consecutive patient enrolment from both in- and outpatient departments, patient heterogeneity may represent some background bias. HF is associated with many diseases; because of the small number of cases of HF due to some of them, we investigated only the three main causes: CHD, DCM, and VHD. CHD (55.7%) is the leading cause of heart failure in China, followed by other causes including DCM, VHD, and hypertension. In the present study, heart failure patients were enrolled consecutively; thus, the number of patients with CHD was the highest (n = 503), while the numbers with the other two diseases, DCM (n = 155) and VHD (n = 155), were lower. Further large-scale studies are required to obtain more comprehensive data.
Conclusions
Accumulating evidence shows that different strategies are required to treat HF caused by different diseases. We identified differences in the prognostic value of RDW in patients with HF due to CHD, DCM and VHD. In conclusion, our study indicates that RDW is predictive of the mortality of HF patients due to CHD and DCM, but not VHD. In a multivariable Cox proportional hazards model, RDW was no longer found to be an independent predictor of DCM mortality. The combination of NT-proBNP and RDW provides more information on the prognosis of patients with HF due to CHD than NT-proBNP alone. This study should prompt further evaluation of the association between RDW and outcome in HF due to different etiologies, to improve our understanding of the pathophysiology and to improve risk stratification.
"year": 2015,
"sha1": "f60095c2912b077a770b9a8db70ab1a8050104f7",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f60095c2912b077a770b9a8db70ab1a8050104f7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Optimization of l‐DOPA production by Brevundimonas sp. SGJ using response surface methodology
Summary l‐DOPA (3,4‐dihydroxyphenyl‐l‐alanine) is an extensively used drug for the treatment of Parkinson's disease. In the present study, optimization of nutritional parameters influencing l‐DOPA production by Brevundimonas sp. SGJ was attempted using response surface methodology (RSM). A Plackett–Burman design was used for screening of critical components, while further optimization was carried out using a Box–Behnken design. The optimized levels of factors predicted by the model were pH 5.02, 1.549 g l−1 tryptone, 4.207 g l−1 l‐tyrosine and 0.0369 g l−1 CuSO4, which resulted in the highest l‐DOPA yield of 3.359 g l−1. The optimization of the medium using RSM resulted in a 8.355‐fold increase in the yield of l‐DOPA. The ANOVA showed a significant R2 value (0.9667), model F‐value (29.068) and probability (0.001), with an insignificant lack of fit. The highest tyrosinase activity observed was 2471 U mg−1 at the 18th hour of the incubation period, with a dry cell weight of 0.711 g l−1. l‐DOPA production was confirmed by HPTLC, HPLC and GC‐MS analysis. Thus, Brevundimonas sp. SGJ has the potential to be a new source for the production of l‐DOPA.
Introduction
L-DOPA is considered the most potent drug available in the market for the treatment of Parkinson's disease (Kofman, 1971; Rani et al., 2007). L-DOPA is marketed as tablets under various brand names, such as Sinemet®, Atamet®, Parcopa® and Stalevo® (Ali and Haq, 2006). The world market for L-DOPA amounts to about 250 tons year−1, and the total market volume is about $101 billion year−1 (Koyanagi et al., 2005). Tyrosinase catalyses the conversion of L-tyrosine to L-DOPA. Tyrosinases (EC 1.14.18.1) are copper-containing enzymes widely distributed in plants, animals and microorganisms. They are involved in two steps of melanin synthesis, that is, from L-tyrosine to L-DOPA and then to L-DOPAchrome. Tyrosinase thus catalyses two successive reactions: the cresolase and catecholase reactions (Claus and Decker, 2006).
Although the chemical synthesis of L-DOPA has been reported, it is also produced by various biological systems. L-DOPA is conventionally extracted from the seeds of Mucuna pruriens and Vicia faba (Chattopadhyay et al., 1994). L-DOPA production has also been reported from plant sources such as Portulaca grandiflora and banana (Bapat et al., 2000; Rani et al., 2007). Bacterial and fungal sources have been reported for the production of L-DOPA, including Erwinia herbicola (Koyanagi et al., 2005), Aspergillus oryzae (Ali and Haq, 2006), Yarrowia lipolytica (Ali et al., 2007), Acremonium rutilum (Krishnaveni et al., 2009) and Bacillus sp. JPJ (Surwase and Jadhav, 2011). In addition, L-DOPA has been produced by immobilized tyrosinase (Gabriela et al., 2000). The chemical process of L-DOPA production involves harsh conditions; hence, eco-friendly bioconversion of L-tyrosine to L-DOPA is highly important. Ajinomoto started the commercial production of L-DOPA using Erwinia herbicola in a fed-batch process, which has many advantages over the classical chemical process (Krishnaveni et al., 2009).
The high production cost and high commercial value of L-DOPA have motivated many researchers to search for alternative sources for its synthesis. The optimal design of the culture medium is a very important aspect in the development of fermentation processes. The conventional method for medium optimization involves changing one parameter at a time while keeping all others constant. This method may be very expensive and time-consuming. In addition, it fails to determine the combined effect of different factors (Lee et al., 2003;Zhang et al., 2007). Statistical experimental designs have been used to address these problems, such as the response surface methodology (RSM). In the present investigation, a Plackett-Burman design was used for screening of process parameters, while optimization of the critical factors for L-DOPA production was carried out using RSM with a Box-Behnken design.
Plackett-Burman design for screening of critical factors
Microorganisms have been exploited as an alternative source of L-DOPA. Given the potential uses of L-DOPA and the high demand for it, there exists a need to develop low-cost industrial media formulations. Statistical analysis using a Plackett-Burman design indicated that pH (X1), tryptone (X3), L-tyrosine (X7) and CuSO4 (X8) significantly affected L-DOPA production, with P-values less than the significance level of 0.1. The remaining components, including temperature (X2), yeast extract (X4), beef extract (X5), glucose (X6), MgSO4 (X9), K2HPO4 (X10) and NaCl (X11), were found to be insignificant, with P-values above 0.05. The experimental runs and their respective L-DOPA yields are presented in Table S1. Statistical analysis of the responses was performed, as shown in Table 1. The model F-value of 28.48 implies that the model is significant; there is only a 0.01% chance that a model F-value this large could occur due to noise. Values of P < 0.05 indicate that the model terms are significant. 'Adeq Precision' measures the signal-to-noise ratio, with a ratio greater than 4 regarded as desirable (Anderson and Whitcomb, 2005). The 'Adeq Precision' ratio of 4.389 obtained in this study indicates an adequate signal; thus, this model can be used to navigate the design space. Regression analysis was performed on the results, and a first-order polynomial equation (Eq. 1) was derived, representing L-DOPA production as a function of the independent variables. However, statistical analysis showed that the relationship between the significant independent variables and the response could not be adequately described by a first-order equation; the first-order model is therefore not appropriate for predicting the response, and further investigation was conducted through a second-order model.
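The screening logic described above — estimating each factor's main effect as the difference between the mean responses at its high (+1) and low (−1) settings — can be sketched for a standard 12-run Plackett–Burman matrix. The design generator is the published 12-run row; the response values and the influential factors below are invented for illustration and are not the study's data.

```python
import numpy as np

# 12-run Plackett-Burman design for up to 11 two-level factors.
# Standard generator row (Plackett & Burman, 1946); rows are cyclic
# shifts of the generator, plus a final all-minus run.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
rows = [np.roll(gen, i) for i in range(11)]
rows.append(-np.ones(11, dtype=int))
X = np.array(rows)                      # shape (12, 11), entries +/-1

# Orthogonality check: factor columns are mutually orthogonal, which
# is what lets each main effect be estimated independently.
assert np.allclose(X.T @ X, 12 * np.eye(11))

# Hypothetical responses (g/l of L-DOPA), illustration only: factor 7
# (index 6, "L-tyrosine") and factor 1 (index 0, "pH") are made
# influential by construction.
y = 1.0 + 2.0 * X[:, 6] + 0.5 * X[:, 0]

# Main effect of factor j = mean(y at +1) - mean(y at -1)
effects = np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                    for j in range(11)])
print(np.round(effects, 3))   # factors 1 and 7 stand out
```

With an orthogonal two-level design, each effect estimate is unaffected by the settings of the other factors, which is why only 12 runs suffice to rank 11 candidates.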
Medium optimization by response surface methodology
Medium optimization using the Box-Behnken design was carried out with the components found to be significant in the Plackett-Burman design: pH (X1), tryptone (X3), L-tyrosine (X7) and CuSO4 (X8). Table S2 presents the design matrix and the results of the 29 experiments carried out using the Box-Behnken design.
The results obtained were submitted to ANOVA using the Design expert software (version 8.0, Stat-Ease, USA), and the regression model was given as Eq. 2, where X1 is pH, X3 is tryptone, X7 is L-tyrosine and X8 is CuSO4. The ANOVA of the quadratic regression model (Table 2) demonstrated that Eq. 2 is a highly significant model (P = 0.001). The model F-value of 29.068 implies that the model is significant; there was only a 0.01% chance that a model F-value this large could occur due to noise. The goodness of fit of the model was checked using the determination coefficient (R2). In this case, the value of the determination coefficient was R2 = 0.9667. The value of the adjusted determination coefficient (Adj R2 = 0.9335) was in reasonable agreement with the predicted R2 (0.8217). The lack-of-fit value for regression Eq. 2 was not significant (0.9280), indicating that the model equation was adequate for predicting L-DOPA production under any combination of values of the variables. 'Adeq Precision' measures the signal-to-noise ratio, with a ratio greater than 4 considered desirable (Anderson and Whitcomb, 2005). The 'Adeq Precision' ratio of 15.968 obtained in this study indicates an adequate signal. Thus, this model can be used to navigate the design space (Table 2).
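The quantities reported in this ANOVA (R², adjusted R²) come from an ordinary least-squares fit of the second-order polynomial. A minimal sketch with two coded factors and synthetic data (the coefficients and noise level are made up, not the study's fit) shows how they are computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coded settings for two factors (e.g. pH and L-tyrosine)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
# Synthetic response from a known quadratic surface plus small noise
y = 3.0 + 0.4*x1 + 0.6*x2 - 0.8*x1**2 - 0.5*x2**2 + 0.3*x1*x2 \
    + rng.normal(0, 0.05, 30)

# Design matrix for the full second-order model:
# Y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Goodness of fit: R^2 and adjusted R^2, as reported in the ANOVA table
resid = y - A @ coef
ss_res = float(resid @ resid)
ss_tot = float(((y - y.mean()) ** 2).sum())
r2 = 1 - ss_res / ss_tot
n, p = len(y), A.shape[1]
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)
print(round(r2, 4), round(adj_r2, 4))
```

Adjusted R² penalizes the extra quadratic and interaction terms, which is why the paper reports it alongside the raw R² (0.9667 vs 0.9335).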
Response surface curves
Graphical representation provides a method to visualize the relationship between the response and experimental levels of each variable and the type of interactions between test variables to deduce the optimum conditions. These techniques have been widely adopted for optimizing the processes of enzymes and peptides, solvents, polysaccharides, etc. (Wang and Lu, 2005). Three-dimensional (3D) graphs were generated for the pairwise combination of the four factors while keeping the other two at their optimum levels for L-DOPA production. The graphs are given here to highlight the roles played by various factors in the final yield of L-DOPA.
The interaction of two variables, pH and tryptone, is shown in Fig. 1A; the 3D response surface plot indicates that the interaction of these components moderately affected the production of L-DOPA. The higher and lower levels of these components did not affect the L-DOPA yield drastically, whereas mid-levels provided the maximum yield. The lower L-DOPA yields at acidic and alkaline pH might be due to inhibition of tyrosinase activity and reduced cell viability. At alkaline pH, the yield is further reduced by the conversion of L-DOPA into downstream metabolites such as dopaquinone and melanin (Ali and Haq, 2006). pH values of 3.5 and 5.4 have been reported for the bioconversion of L-tyrosine to L-DOPA using biomass of Aspergillus oryzae and Yarrowia lipolytica, respectively (Ali and Haq, 2006; Ali et al., 2007). The shape of the 3D response surface curve of the interaction between pH and L-tyrosine, depicted in Fig. 1B, shows that L-DOPA production was drastically affected by slight changes in the levels of these two factors. Higher and lower concentrations of both factors resulted in lower L-DOPA yields.
The response surface curve shown in Fig. 1C illustrates that the interaction between pH and CuSO4 moderately affected the yield of L-DOPA. This is because the tyrosinase involved in the conversion of L-tyrosine to L-DOPA is a copper-containing enzyme (Claus and Decker, 2006). The use of CuSO4 in media for L-DOPA production by Acremonium rutilum has been reported earlier (Krishnaveni et al., 2009). Figure 1D shows the quadratic effect of tryptone and L-tyrosine on L-DOPA productivity: the L-DOPA yield was gradually enhanced by increasing the concentrations of both components to their optimum levels, after which the yield decreased steadily. The statistical analysis (Table 2) shows that the interaction between these components was insignificant.
The effect of the interaction between tryptone and CuSO4 on the production of L-DOPA is shown in Fig. 1E, which indicates that the L-DOPA yield was not greatly altered by changes in the concentrations of these two medium components. The shape of the response surface curve and the statistical analysis (Table 2) indicate that the interaction between these ingredients was insignificant. The response surface curve and contour plot of L-tyrosine and CuSO4 shown in Fig. 1F indicate a positive effect on L-DOPA production. A lower concentration of CuSO4 resulted in a lower yield of L-DOPA, while a higher concentration of L-tyrosine inhibited L-DOPA production due to its decreased solubility (Ali et al., 2007; Surwase and Jadhav, 2011).
Validation of the experimental model
Validation was carried out under conditions predicted by the model. The optimized levels predicted by the model were pH 5.02, 1.549 g l -1 tryptone, 4.207 g l -1 L-tyrosine and 0.0369 g l -1 CuSO4. The predicted yield of L-DOPA with these concentrations was 3.361 g l -1 , while the actual yield obtained was 3.359 g l -1 . A close correlation between the experimental and predicted values was observed, which validates this model.
L-DOPA yield, biomass trend and tyrosinase activity
The L-DOPA production before and after optimization is illustrated in Fig. 2A, which indicates that in the medium before optimization (nutrient broth with 1 g l-1 L-tyrosine), L-DOPA production started after the 6th hour with a yield of 0.068 g l-1, gradually increased to 0.402 g l-1 at the 18th hour, and then decreased to 0.194 g l-1 at the 24th hour. In contrast, in the medium optimized by RSM, L-DOPA production started at the 6th hour with a yield of 0.427 g l-1, gradually increased to 3.359 g l-1 at the 18th hour, and finally decreased to 1.582 g l-1. The decrease in the L-DOPA yield after the 18th hour was due to the conversion of L-DOPA to further metabolites, such as dopaquinone and melanin (Ali et al., 2007; Surwase and Jadhav, 2011). Thus, medium optimization by RSM resulted in an 8.355-fold increase in the L-DOPA yield over the yield before optimization. A literature survey revealed that single- and multiple-stage cell suspension cultures of Mucuna pruriens have been reported to yield 0.028 g l-1 L-DOPA within 15 and 30 days, respectively (Chattopadhyay et al., 1994). Portulaca grandiflora has been reported to produce 0.488 g l-1 of L-DOPA at the 16th hour (Rani et al., 2007); Acremonium rutilum produced 0.89 g l-1 L-DOPA, whereas Egyptian black yeast yielded 0.064 g l-1 (Krishnaveni et al., 2009; Mahmoud and Bendary, 2011). The biomass of Aspergillus oryzae and Yarrowia lipolytica has been reported to produce 1.686 and 2.960 g l-1 L-DOPA, respectively (Ali and Haq, 2006; Ali et al., 2007). Thus, Brevundimonas sp. SGJ in the present study produced the highest yield of L-DOPA (3.359 g l-1).
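The fold-improvement figure follows directly from the two peak yields quoted in this paragraph:

```python
# Peak yields reported in the text (g/l at the 18th hour)
before = 0.402   # nutrient broth + 1 g/l L-tyrosine
after  = 3.359   # RSM-optimized medium

fold = after / before
print(round(fold, 3))   # ~8.356, consistent with the reported ~8.355-fold gain
```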
The biomass trend and tyrosinase activity during L-DOPA production with the optimized medium are depicted in Fig. 2B, which shows that the dry cell weight increased gradually up to the 15th hour (0.614 g l-1) and then remained nearly constant, with a final weight of 0.711 g l-1. Meanwhile, tyrosinase activity increased slowly up to 2471 U mg-1 at the 18th hour and then decreased suddenly to 1629 U mg-1 at the 24th hour. This decrease in tyrosinase activity may be due to the conversion of L-tyrosine to L-DOPA and of L-DOPA to dopaquinone, after which substrates for tyrosinase were no longer available (Ali et al., 2007; Surwase and Jadhav, 2011).
L-DOPA analysis
The HPTLC peak profile (Fig. S1A) and the HPTLC plate (Fig. S1B) of the cell-free broth showed a distinct peak and band at Rf 0.24, which closely matched that of standard L-DOPA (Rf 0.23). These results provided preliminary confirmation of L-DOPA production in the medium. The HPLC elution profile of standard L-DOPA showed a peak at a retention time of 2.723 min (Fig. S2A), while the HPLC elution profile of the broth after incubation showed a prominent peak at a retention time of 2.725 min (Fig. S2B). This analysis confirmed the production of L-DOPA. GC-MS analysis of the extracted cell-free broth showed a distinct mass peak at m/z 197, which corresponds to the molecular weight of L-DOPA, confirming the production of L-DOPA by Brevundimonas sp. SGJ (Fig. S2C).
The Brevundimonas sp. SGJ reported here produced a high yield of L-DOPA and has several advantages over the plant, fungal and bacterial sources used earlier, such as a short incubation period, efficient production and a requirement for only simple medium components. Previous L-DOPA production by bacterial sources such as Erwinia herbicola used pyrocatechol, a toxic phenolic compound, as substrate and required polyacrylamide gel, an expensive chemical (Koyanagi et al., 2005; Surwase and Jadhav, 2011). Thus, the present study contributes to the optimization of the nutritional requirements that will be most useful for large-scale production of L-DOPA using Brevundimonas sp. SGJ.
Chemicals and microorganisms
L-tyrosine and L-DOPA were purchased from Sigma-Aldrich (St. Louis, MO, USA), and all other chemicals were purchased from Himedia (India). The bacterial strain producing L-DOPA was isolated from soil samples collected from the Shivaji University, Kolhapur region, using a serial dilution technique. The nutrient broth (Himedia, India) used for the isolation was supplemented with 1 g l-1 L-tyrosine. The bacterium was identified as Brevundimonas sp. SGJ (NCBI GenBank ID: HM998899) by 16S rDNA analysis.
Inoculum preparation and L-DOPA production
The nutrient broth (Himedia, India) used for the cultivation of the isolated bacterium consisted of 5 g l-1 peptone, 1.5 g l-1 beef extract, 1.5 g l-1 yeast extract and 0.5 g l-1 NaCl, supplemented with 1 g l-1 L-tyrosine, with the pH adjusted to 5.5. A 2 ml aliquot of a 6-h-grown cell suspension was inoculated into 100 ml of the same medium for L-DOPA production in 250 ml Erlenmeyer flasks. The flasks were kept in an incubator shaker at 30°C and 120 r.p.m., and L-DOPA was assayed after 18 h.
Experimental design
Plackett-Burman design. A Plackett-Burman design was used to select the most critical physical and nutritional parameters for L-DOPA production by Brevundimonas sp. SGJ. The factors affecting the yield of L-DOPA were selected by screening various carbon sources, nitrogen sources, mineral salts and physical factors, such as pH and temperature. In addition, some of these variables were selected from the primary literature review (Krishnaveni et al., 2009;Mahmoud and Bendary, 2011).
A total of 11 process parameters, including pH (X1), temperature (X2), tryptone (X3), yeast extract (X4), beef extract (X5), glucose (X6), L-tyrosine (X7), CuSO4 (X8), MgSO4 (X9), K2HPO4 (X10) and NaCl (X11), were tested at two levels: low (-1) and high (+1). The low and high levels were pH, 4.5 and 6.5, and temperature, 20°C and 40°C, while the levels of the medium components (g l-1) were tryptone, 0.5 and 2.5; yeast extract, 0.5 and 2.5; beef extract, 0.5 and 2.5; glucose, 0.5 and 2.5; L-tyrosine, 1.5 and 5.5; CuSO4, 0.01 and 0.05; MgSO4, 0.001 and 0.005; K2HPO4, 0.5 and 2.5; and NaCl, 0.1 and 0.5. This design characterizes a model that identifies the significant variables when no interaction among the factors is expected (Plackett and Burman, 1946; Anderson and Whitcomb, 2005; Wang and Lu, 2005). Therefore, a first-order multiple regression can model the data properly (Eq. 3), where Y is the predicted response (L-DOPA production), b0 is the intercept, and bi are the linear coefficients. The design matrix created using the Design expert software is presented in Table S1. Three replicates at the centre point were also performed to detect curvature in the model and to estimate the pure experimental error, which indicates lack of fit. The statistical significance of the first-order model was assessed using Fisher's test for analysis of variance (ANOVA), and the multiple correlation coefficient (R2) was used to express the fit of this first-order model.
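The two-level coding used here maps each coded setting (−1/+1) to an actual value through the factor's centre point and half-range. A small helper using the ranges quoted above (the function itself is a generic illustration, not from the study):

```python
# Mapping between coded (-1/+1) and actual factor levels for the
# (low, high) ranges quoted in the text. The conversion helpers are
# a generic sketch of the standard coding convention.
levels = {
    "pH":         (4.5, 6.5),
    "tryptone":   (0.5, 2.5),    # g/l
    "L-tyrosine": (1.5, 5.5),    # g/l
    "CuSO4":      (0.01, 0.05),  # g/l
}

def to_actual(name, coded):
    """Actual level = centre + coded * half-range."""
    lo, hi = levels[name]
    center, half = (lo + hi) / 2, (hi - lo) / 2
    return center + coded * half

def to_coded(name, actual):
    """Coded level = (actual - centre) / half-range."""
    lo, hi = levels[name]
    center, half = (lo + hi) / 2, (hi - lo) / 2
    return (actual - center) / half

print(to_actual("pH", -1), to_actual("pH", 0), to_actual("pH", 1))
# -> 4.5 5.5 6.5
print(round(to_coded("L-tyrosine", 4.207), 3))  # the optimum, in coded units
```

Working in coded units keeps the design matrix orthogonal regardless of the physical scale of each factor.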
Box-Behnken design.
Once the critical factors were identified via screening, a Box-Behnken design for the independent variables was used for further optimization. The design, with each variable at three levels and replicates at the centre point (Box and Behnken, 1960; Anderson and Whitcomb, 2005), was used to fit a polynomial model by a response equation (Eq. 4). A second-order model is designed such that the variance of Y is constant for all points equidistant from the centre of the design. The Design expert software was used in the experimental design and data analysis. A multiple regression analysis of the data was carried out to define the response in terms of the independent variables. Response surface graphs were obtained to understand the effects of the variables, individually and in combination, and to determine their optimum levels for maximum L-DOPA production. All trials were performed in triplicate, and the average L-DOPA yield was used as the response Y.
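Once the second-order model of Eq. 4 is fitted, the predicted optimum is its stationary point: writing the model as y = b0 + b·x + xᵀBx, the gradient vanishes at x* = −½B⁻¹b, which is a maximum when B is negative definite. A sketch with made-up coefficients (not the study's fitted values):

```python
import numpy as np

# Illustrative fitted second-order model in two coded variables:
# y = b0 + b.x + x' B x, with invented coefficients. B is negative
# definite, so the stationary point is a maximum of the surface.
b0 = 3.2
b  = np.array([0.2, 0.4])
B  = np.array([[-0.9, 0.15],
               [0.15, -0.6]])

# Stationary point: grad = b + 2 B x = 0  ->  x* = -0.5 * B^-1 b
x_star = -0.5 * np.linalg.solve(B, b)
y_star = b0 + b @ x_star + x_star @ B @ x_star

# Negative eigenvalues of B confirm x* is a maximum
assert np.all(np.linalg.eigvalsh(B) < 0)
print(np.round(x_star, 3), round(float(y_star), 3))
```

The coded optimum x* is then converted back to actual units (pH, g/l) to give settings such as those reported in the validation section.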
Biomass trend, tyrosinase activity and L-DOPA production
After validation of the experiment using the optimum process parameters generated by the Design expert software, the L-DOPA production was observed in the medium before optimization (nutrient broth with 1 g l -1 L-tyrosine) and after optimization. The biomass trend, tyrosinase activity and L-DOPA production were observed at 3 h time intervals for up to 24 h. The biomass trend was obtained by measuring the dry cell weight. The tyrosinase activity was determined by the previously described method (Kandaswami and Vaidyanathan, 1973).
L-DOPA assay
L-DOPA produced in the broth was determined according to Arnow's method (Arnow, 1937). The reaction mixture was centrifuged at 5000 r.p.m. for 15 min, and 1 ml of the supernatant was mixed with 1 ml of 0.5 N HCl, 1 ml of nitrite molybdate reagent and 1 ml of 1 N NaOH; the final volume was adjusted to 5 ml with distilled water. The absorbance was measured at 530 nm using a double-beam UV-visible spectrophotometer (Shimadzu, Japan).
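Arnow's assay is quantitative only through a standard curve relating A530 to known L-DOPA concentrations. The calibration step can be sketched as follows; the absorbance values are hypothetical, chosen to be exactly linear for illustration:

```python
import numpy as np

# Hypothetical L-DOPA standard curve for the Arnow assay: absorbance
# at 530 nm vs known concentrations (invented values following
# Beer-Lambert linearity; not the study's calibration data).
conc = np.array([0.0, 0.5, 1.0, 2.0, 3.0])         # g/l
a530 = np.array([0.00, 0.11, 0.22, 0.44, 0.66])    # absorbance units

slope, intercept = np.polyfit(conc, a530, 1)

def dopa_conc(absorbance):
    """Interpolate an unknown sample back to concentration (g/l)."""
    return (absorbance - intercept) / slope

print(round(dopa_conc(0.739), 2))  # ~3.36 g/l, the scale of the peak yield
```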
L-DOPA analysis
High-performance thin-layer chromatography (HPTLC) analysis of the cell-free broth was performed using a HPTLC system (CAMAG, Switzerland). The conditions used for HPTLC were similar to those in the previously described method (Surwase and Jadhav, 2011). High-performance liquid chromatography (HPLC) analysis of the cell-free broth was carried out (Waters model no. 2690) on a C18 column (4.6 mm × 250 mm, Symmetry) using methanol as the mobile phase, at a flow rate of 1 ml min-1 for 10 min, with UV detection at 280 nm. The standard L-DOPA and cell-free broth were prepared in HPLC-grade water and injected into the HPLC column (Krishnaveni et al., 2009; Surwase and Jadhav, 2011). For gas chromatography-mass spectrometry (GC-MS) analysis, the broth obtained after incubation was centrifuged at 5000 r.p.m. for 15 min, and the supernatant was collected. The supernatant was extracted twice with equal volumes of chloroform in a separating funnel. The chloroform fraction was recovered in a new flask and evaporated. The residues were dissolved in methanol and used for further analysis. GC-MS analysis was carried out with a QP2010 gas chromatograph coupled with a mass spectrometer (Shimadzu). The analysis was performed under conditions standardized earlier (Rani et al., 2007; Surwase and Jadhav, 2011).
Conclusion
From the results of this study, it is evident that the use of RSM helped not only to identify the most significant factors in L-DOPA production but also to locate the optimum levels of these factors with minimum resources and time. Thus, the optimized medium formulation obtained in this study using RSM proved more effective in improving L-DOPA production than the classical 'one factor at a time' method. In addition, Brevundimonas sp. SGJ presents a promising new source for L-DOPA production, with advantages over traditional sources.
Supporting information
Additional Supporting Information may be found in the online version of this article: Table S1. Plackett-Burman design matrix performed for the L-DOPA production. Table S2. Box-Behnken design matrix for coded variables along with actual and predicted responses for L-DOPA production.
Please note: Wiley-Blackwell are not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

NF-κB–responsive miRNA-31-5p elicits endothelial dysfunction associated with preeclampsia via down-regulation of endothelial nitric-oxide synthase
Inflammatory cytokines, including tumor necrosis factor-α (TNFα), were elevated in patients with cardiovascular diseases and are also considered as crucial factors in the pathogenesis of preeclampsia; however, the underlying pathogenic mechanism has not been clearly elucidated. This study provides novel evidence that TNFα leads to endothelial dysfunction associated with hypertension and vascular remodeling in preeclampsia through down-regulation of endothelial nitric-oxide synthase (eNOS) by NF-κB–dependent biogenesis of microRNA (miR)-31-5p, which targets eNOS mRNA. In this study, we found that miR-31-5p was up-regulated in sera from patients with preeclampsia and in human endothelial cells treated with TNFα. TNFα-mediated induction of miR-31-5p was blocked by an NF-κB inhibitor and NF-κB p65 knockdown but not by mitogen-activated protein kinase (MAPK) and phosphatidylinositol 3-kinase inhibitors, indicating that NF-κB is essential for biogenesis of miR-31-5p. The treatment of human endothelial cells with TNFα or miR-31-5p mimics decreased endothelial nitric-oxide synthase (eNOS) mRNA stability without affecting eNOS promoter activity, resulting in inhibition of eNOS expression and NO/cGMP production through blocking of the functional activity of the eNOS mRNA 3′-UTR. Moreover, TNFα and miR-31-5p mimic evoked endothelial dysfunction associated with defects in angiogenesis, trophoblastic invasion, and vasorelaxation in an ex vivo cultured model of human placental arterial vessels, which are typical features of preeclampsia. These results suggest that NF-κB–responsive miR-31-5p elicits endothelial dysfunction, hypertension, and vascular remodeling via post-transcriptional down-regulation of eNOS and is a molecular risk factor in the pathogenesis and development of preeclampsia.
The microRNAs (miRNAs) are small noncoding RNAs of ~22 nucleotides, and their biosynthesis is controlled at multiple levels, such as transcription and nuclear and cytoplasmic processing. Mature miRNAs induce post-transcriptional silencing of target genes by binding to their 3′-untranslated regions (3′-UTR) (1). Like gene expression, miRNA biogenesis is also regulated at the transcriptional level under pathophysiological conditions, including inflammation, and contributes to the pathogenesis of vascular diseases.
The inflammatory response is finely controlled by transcription factors, including nuclear factor-κB (NF-κB), to prevent tissue or organ injury; however, prolonged and persistent activation of NF-κB evokes elevated production of cytokines, such as tumor necrosis factor-α (TNFα) and the interleukin (IL) family, which are important risk factors for several human diseases, including cardiovascular disorders and preeclampsia (2,3), although anti-angiogenic factors, hypoxia, and reactive oxygen species are also involved in the pathogenesis of these diseases (4). Chronic inflammation elicits functional alterations in vascular endothelial cells, which play a key role in regulating vascular relaxation and homeostasis (3,5). Studies have demonstrated that miRNA biogenesis is stimulated under inflammatory conditions and causes endothelial cell dysfunction, indicating that some miRNAs are risk factors in the pathogenesis of vascular diseases (2,3,5).
Preeclampsia is a unique disease occurring after 20 weeks of gestation and is characterized by hypertension with proteinuria. In patients with preeclampsia, circulating levels of proinflammatory cytokines, including TNFα and ILs, are elevated in maternal and cord blood (6) and are associated with endothelial dysfunction (6,7), resulting in vascular dysfunction associated with hypertension (8,9). In addition, recent studies have shown that expression levels of some miRNAs were increased in placentas or maternal blood in patients with preeclampsia (3,10). In fact, miR-155-5p is up-regulated in such patients and in endothelial cells treated with TNFα, which impairs vasorelaxation through down-regulation of endothelial nitric-oxide synthase (eNOS) (3). This suggests that miRNAs regulated under inflammatory states are involved in the development of vascular dysfunction.
Nitric oxide (NO) produced by eNOS is a potent vasodilator that contributes to the maintenance of vascular tone and blood pressure. Thus, eNOS-deficient mice spontaneously develop hypertension and defects in vascular remodeling (11,12). The pathogenic role of NO in vascular abnormalities and a preeclampsia-like phenotype have been demonstrated in mice deficient in eNOS or administered a NOS inhibitor (12,13). Recent studies showed that eNOS is down-regulated by several miRNAs, which target the 3′-UTR of eNOS mRNA, leading to inhibition of endothelial function and vasorelaxation (7,10). This suggests that miRNAs targeting the eNOS transcript induce endothelial dysfunction associated with preeclampsia.
In this study, we found that miR-31-5p was up-regulated in an NF-κB-dependent manner in human umbilical vein endothelial cells (HUVECs) stimulated with TNFα and in sera from patients with preeclampsia. This miRNA suppressed eNOS expression by targeting its transcript and subsequently inhibited endothelial function and behavior, which are typical characteristics of preeclampsia. These results indicate that NF-κB-responsive miR-31-5p contributes importantly, along with other miRNAs including miR-155-5p (3), to preeclamptic hypertension through negative regulation of the eNOS/NO pathway.
miR-31-5p, predicted to target eNOS, is up-regulated in TNFα-stimulated HUVECs
TNFα is up-regulated in patients with atherosclerosis and preeclampsia and induces endothelial dysfunction through biogenesis of miRNAs (3,10). To identify new miRNAs up-regulated in human endothelial cells under inflammatory conditions, we stimulated HUVECs with 10 ng/ml TNFα for 24 h and analyzed the HUVEC miRNA expression profile using Affymetrix microarrays. Several miRNAs were identified as significantly up- or down-regulated compared with their levels in untreated control cells (Fig. 1A). As shown previously (3,7,14), miR-155-5p and miR-146a-5p were up-regulated in response to TNFα. Among the up-regulated miRNAs, miR-31-5p was newly identified as potentially targeting the 3′-UTR of human eNOS mRNA, but not its mutant generated in the seed sequences of the eNOS mRNA 3′-UTR (Fig. 1, A and B). The expression level of miR-31-5p was continuously elevated up to 24 h after stimulation with TNFα (Fig. 1C).
miR-31-5p is elevated in patients with preeclampsia and is induced by TNFα-mediated NF-κB activation
We examined biogenic regulation of miR-31-5p in patients with preeclampsia and in endothelial cells under inflammatory conditions. miR-31-5p levels increased 2.7-fold in the sera of patients with preeclampsia compared with those in the sera of healthy pregnant women (Fig. 2A), whose clinical characteristics are summarized in Table S1. Serum levels of TNFα were also elevated in the patients compared with those in normal pregnant women (76.9 ± 5.7 versus 21.5 ± 1.6 pg/ml) (Fig. 2B), and the TNFα levels were highly correlated with miR-31-5p levels in the sera of the patients (Fig. 2C). Approximately 80% of total miR-31-5p in sera from preeclamptic patients was found in the exosomes, and the rest was present in the nonexosomal fraction (Fig. 2D), which is consistent with a previous report (15). Serum levels of miR-31-5p were also elevated in patients with atherosclerosis (Fig. S1). We next examined whether TNFα regulates miR-31-5p biogenesis and eNOS expression in endothelial cells. Stimulation of HUVECs with 0.05-10 ng/ml TNFα increased miR-31-5p levels and down-regulated eNOS protein levels in a dose-dependent manner at concentrations of TNFα greater than 0.05 ng/ml (Fig. S2, A and B), which are comparable with serum TNFα levels (41.39-107.94 pg/ml) in preeclamptic patients. Moreover, direct treatment of HUVECs with sera from patients with preeclampsia resulted in elevated miR-31-5p levels and down-regulated eNOS levels, compared with cells exposed to sera from healthy pregnancies (Fig. S2, C-E). The miRNA levels also increased in HUVECs subjected to other inflammatory stimuli, such as lipopolysaccharide (LPS), IL-1β, and interferon (IFN)-γ (Fig. 2E), which have previously been shown to down-regulate eNOS expression in HUVECs (7). Because miRNA biogenesis is regulated at the transcriptional and post-transcriptional levels (1), we examined whether miR-31-5p biogenesis is transcriptionally regulated using various signal transduction inhibitors. TNFα-induced biogenesis of miR-31-5p was blocked by the NF-κB inhibitor Bay 11-7082 but not by the p38 MAPK inhibitor SB203580, the JNK inhibitor SP600125, the extracellular signal-regulated kinase inhibitor PD98059, or the phosphatidylinositol 3-kinase inhibitor LY294002 (Fig. 2F), suggesting that TNFα stimulates miR-31-5p expression through activation of NF-κB. As mature miRNA is generated by a sequential two-step process involving cleavage of primary miRNA (pri-miRNA) into precursor miRNA (pre-miRNA) in the nucleus and its subsequent maturation in the cytoplasm (1), we further examined which step of miR-31-5p biogenesis was regulated by NF-κB inhibition. TNFα-induced increases in the levels of pri-, pre-, and mature miR-31-5p were blocked by treatment with Bay 11-7082 or knockdown of NF-κB p65 (Fig. 2, G-I). These results suggest that TNFα increases the transcriptional biogenesis of miR-31-5p via NF-κB activation under inflammatory conditions.

Figure 1. eNOS mRNA-targeting miR-31-5p is up-regulated in TNFα-stimulated endothelial cells. A, HUVECs were treated with TNFα (10 ng/ml) for 24 h, followed by analysis of miRNA expression profiles using Affymetrix miRNA microarrays (n = 3). Ctrl, control. B, computational target prediction using TargetScan showed complementarity between miR-31-5p and the 3′-UTR of human eNOS and generation of its 3′-UTR mutant (mt). Wild-type (WT) and mutant eNOS mRNA 3′-UTRs were cloned into the psiCHECK-2 vector. C, time-dependent expression of miR-31-5p in HUVECs stimulated with TNFα was determined at the indicated time periods by qRT-PCR (n = 5). ***, p < 0.001.

Endothelial dysfunction by miR-31-5p
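The serum fold changes quoted above come from qRT-PCR; relative quantification is commonly done with the 2^(−ΔΔCt) method, although the excerpt does not state which normalization the authors used. A sketch with invented Ct values:

```python
# Sketch of the 2^(-ddCt) relative-quantification step behind
# qRT-PCR fold changes such as the reported ~2.7-fold serum increase.
# The Ct values below are invented for illustration.
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    d_case = ct_target_case - ct_ref_case    # normalize to reference RNA
    d_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_case - d_ctrl)

# e.g. miR-31-5p vs a small-RNA reference (hypothetical Ct values)
print(round(fold_change(24.1, 20.0, 25.5, 20.0), 2))  # ~2.6-fold
```

Lower Ct means more template, so a case sample whose normalized Ct drops by 1.4 cycles corresponds to roughly a 2.6-fold increase in the miRNA.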
miR-31-5p down-regulates human eNOS, but not mouse eNOS, by targeting its mRNA 3′-UTR
Because miR-31-5p was predicted to target the 3′-UTR of eNOS mRNA (Fig. 1B), we examined the expression levels of eNOS in HUVECs treated with TNFα or transfected with miR-31-5p mimic and inhibitor. Treatment with TNFα or miR-31-5p mimic decreased eNOS mRNA and protein levels, and the effects of TNFα were reversed by transfection with an miR-31-5p inhibitor (Fig. 3, A and B). In addition, treatment of primary human aortic endothelial cells (HAECs) with TNFα resulted in increased miR-31-5p biogenesis and down-regulated eNOS, and these effects were blocked by Bay 11-7082 and an miR-31-5p inhibitor (Fig. S3, A and B), suggesting that the TNFα/NF-κB pathway is also essential for miR-31-5p-dependent down-regulation of eNOS in HAECs. However, the promoter activity of eNOS was not affected by either treatment with TNFα or transfection with miR-31-5p mimic and inhibitor (Fig. S4). Notably, treatment with TNFα and miR-31-5p mimic resulted in significant decreases in the half-life of eNOS mRNA, from 27.2 to 9.4 h and 11.5 h, respectively, and the TNFα-mediated decrease was recovered to 20.1 h by an miR-31-5p inhibitor (Fig. 3C). This suggests that TNFα inhibits eNOS expression by biogenesis of miR-31-5p, which targets the eNOS transcript. To verify whether eNOS is a bona fide target of miR-31-5p, we explored the ability of miR-31-5p to target the 3′-UTR of eNOS mRNA. TNFα treatment inhibited the reporter activity of the eNOS mRNA 3′-UTR, but not of its mutant, and the inhibitory effect of TNFα was attenuated by an miR-31-5p inhibitor (Fig. 3D). In addition, transfection with miR-31-5p mimic inhibited the WT 3′-UTR activity, but not its mutant activity, as did TNFα (Fig. 3D). As expected, treatment with TNFα or miR-31-5p mimic decreased NO production and cGMP synthesis, and the effects of TNFα were abolished by an miR-31-5p inhibitor (Fig. 3, E-G).
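The mRNA half-lives quoted here are typically obtained by assuming first-order decay (e.g., after blocking transcription) and fitting log(remaining mRNA) against time; t½ = ln 2/k, with k the fitted decay constant. A sketch with synthetic data constructed to have t½ = 9.4 h:

```python
import numpy as np

# Half-life estimation from an assumed first-order mRNA decay via a
# log-linear fit of remaining mRNA vs time. The time points and
# fractions are synthetic, built to have t1/2 = 9.4 h; they are not
# the study's measurements.
t = np.array([0.0, 4.0, 8.0, 12.0, 24.0])      # hours
frac = np.exp(-(np.log(2) / 9.4) * t)          # fraction of mRNA remaining

k = -np.polyfit(t, np.log(frac), 1)[0]         # fitted decay constant (1/h)
t_half = np.log(2) / k
print(round(t_half, 1))   # recovers 9.4 h
```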
Endothelial dysfunction by miR-31-5p

Notably, computational analysis showed that miR-31-5p could also target the eNOS mRNA 3′-UTR of nonhuman primates, but not of other species, such as mice and rats (Fig. S5A). Treatment with TNFα inhibited eNOS expression in mouse endothelial cells, and this inhibition was not reversed by a mouse miR-31-5p inhibitor. In addition, transfection with a mouse miR-31-5p mimic did not affect eNOS expression in mouse endothelial cells (Fig. S5B). Taken together, our data suggest that NF-κB-responsive miR-31-5p is essential for TNFα-mediated down-regulation of eNOS in humans, but not in rodents, leading to impairment of the human NO/cGMP signaling pathway.
miR-31-5p is a more potent inhibitor of eNOS expression than miR-155-5p
As TNFα-responsive miR-155-5p is known to target the eNOS transcript (3, 7), we compared the expression levels of miR-31-5p and miR-155-5p and their inhibitory effects on eNOS expression. Treatment of HUVECs with TNFα increased the levels of both miRNAs, with a larger increase for miR-155-5p than for miR-31-5p, and their induction was completely blocked by Bay 11-7082 (Fig. S6). The TNFα-induced decrease in eNOS 3′-UTR-reporter activity was rescued by transfection with an inhibitor of miR-31-5p or miR-155-5p; however, the effect of the miR-31-5p inhibitor was stronger than that of the miR-155-5p inhibitor (Fig. 4A). Moreover, miR-31-5p mimic inhibited eNOS 3′-UTR-reporter activity more potently than did miR-155-5p mimic (Fig. 4B). The inhibitory effect of TNFα on eNOS 3′-UTR activity was more effectively reversed by mutation of its complementary sequence for miR-31-5p than for miR-155-5p, and was completely blocked by mutation of both miRNA-binding sites (Fig. 4C). Consistently, the suppressive effects of TNFα on eNOS expression and NO production were more potently blocked by an miR-31-5p inhibitor than by an miR-155-5p inhibitor, and transfection with miR-31-5p mimic suppressed eNOS expression and NO production more effectively than did transfection with miR-155-5p mimic (Fig. 4, D-F). These data suggest that miR-31-5p is a more potent negative regulator of eNOS expression than miR-155-5p.
miR-31-5p inhibits in vitro eNOS-dependent angiogenesis
Decreased eNOS expression and activity result in endothelial dysfunction associated with impaired angiogenesis and vascular remodeling, which are currently accepted as characteristics of the pathogenesis of preeclampsia (12, 13, 16). Thus, we examined the effect of TNFα and miR-31-5p on the in vitro angiogenic properties of endothelial cells. TNFα inhibited endothelial cell proliferation, a key characteristic of angiogenesis, and this effect was reversed by transfection with an miR-31-5p inhibitor; however, treatment with miR-31-5p mimic or the NOS inhibitor N^G-nitro-L-arginine methyl ester (L-NAME) suppressed endothelial cell proliferation (Fig. 5A). Similar regulatory effects on cell migration and tube formation, which are other important processes in angiogenesis (3), were also observed in HUVECs treated with TNFα, miR-31-5p mimic or inhibitor, and L-NAME (Fig. 5, B and C). Because vascular endothelial growth factor (VEGF) is important for angiogenesis, we further examined the effect of TNFα-induced miR-31-5p on VEGF-induced angiogenesis. Treatment with TNFα, miR-31-5p mimic, or L-NAME effectively inhibited VEGF-induced angiogenesis.

[Figure 3 legend residue: cells were transfected with control miRNA, miR-31-5p mimic (M), or miR-31-5p inhibitor (I) and stimulated with TNFα for 24 h; eNOS mRNA and protein were determined by qRT-PCR and Western blotting; mRNA decay was followed after DRB treatment (n = 5); eNOS 3′-UTR-reporter activity (WT or mutant) was determined by luciferase assay; intracellular NO, total nitrite, and cGMP levels were determined by confocal microscopy, Griess reaction, and ELISA, respectively. ***, p < 0.001.]
trophoblast invasion, and these effects were blocked by co-treatment with ODQ and knockdown of protein kinase G, a downstream mediator of cGMP (Fig. 7B). These results suggest that TNFα-induced endothelial miR-31-5p inhibits trophoblast invasion associated with arterial remodeling during pregnancy by decreasing eNOS expression.
NF-κB-responsive miR-31-5p impairs relaxation of human placental arteries
Because endothelium-derived NO stimulates vascular relaxation by activation of the sGC/cGMP pathway (18), we examined whether TNFα-induced miR-31-5p regulates vasorelaxation in an ex vivo cultured model of placental arterial vessels from healthy pregnant women. Treatment of the arterial rings with TNFα increased miR-31-5p biogenesis (Fig. 8A), resulting in decreased eNOS levels (Fig. 8B), and these events were reversed by co-treatment with Bay 11-7082 (Fig. 8, A and B). Consistent with these findings, TNFα decreased cGMP production in cultured arterial rings, and this effect was reversed by Bay 11-7082 (Fig. 8C). Moreover, treatment of human placental arterial rings with TNFα or miR-31-5p mimic inhibited the vasorelaxant response to the vasodilator calcitonin gene-related peptide (CGRP)-α, and the vasoconstrictor effect of TNFα was hampered by an miR-31-5p inhibitor (Fig. 8D).
Discussion
The inflammatory response is tightly regulated by activation of NF-κB in various vascular diseases, such as atherosclerosis and preeclampsia (19, 20), which are highly associated with endothelial dysfunction. Among the many inflammatory cytokines, TNFα is the most potent activator of NF-κB and is considered an important mediator of endothelial dysfunction in the pathogenesis of preeclampsia. This strongly suggests that endothelial dysfunction is linked to transcriptional expression of NF-κB-responsive genes in endothelial cells. Although NF-κB-responsive adhesion molecules, such as vascular cell adhesion molecule-1, intercellular adhesion molecule-1, and E-selectin, are known biomarkers of endothelial dysfunction, they cause vascular inflammation by interacting with immune cells, but
they do not directly alter endothelial cell function. Therefore, how NF-κB induces endothelial dysfunction is still unclear.
Several miRNAs are reportedly involved in the pathogenesis of cardiovascular diseases (21), such as miR-155-5p, which targets BCL6 or high-mobility group box-transcription protein in atherosclerosis (22, 23), as well as miR-31-5p, which targets cardiac troponin-T or large tumor suppressor homologue 2 in myocardial infarction and neointimal growth (24, 25). Our previous report demonstrated that TNFα stimulated NF-κB-dependent impairment of endothelial function and vasorelaxation via biogenesis of miR-155-5p, which targets the eNOS transcript (3). This study also found that NF-κB-responsive miR-31-5p caused endothelial dysfunction via destabilization of eNOS mRNA. Consequently, both miRNAs were involved in dysregulation of the eNOS/cGMP pathway, leading to impairment of angiogenesis, trophoblast invasion, and vasorelaxation, which are known pathological characteristics of preeclampsia. However, miR-31-5p inhibited the eNOS/NO pathway more potently than miR-155-5p. These results suggest that NF-κB-dependent miR-31-5p, in addition to miR-155-5p, is crucial for endothelial dysfunction and vasorelaxation through down-regulation of eNOS expression under inflammatory conditions (Fig. 9).
Abnormal implantation of the placenta causes poor placental perfusion and leads to placental and fetal hypoxia, causing systemic inflammation. Accordingly, serum levels of pro-inflammatory cytokines, such as TNFα and IL-6, are significantly increased in patients with preeclampsia compared with those in normal pregnant women (3, 8), indicating that preeclampsia is a systemic inflammatory disorder. Although the pathological link between inflammation and preeclampsia is unclear, recombinant TNFα infusion in animal models induces proteinuric hypertension, a clinical hallmark of human preeclampsia (9, 26). TNFα activates NF-κB in placentas and in circulating immune cells of preeclamptic patients (27, 28), indicating that placental and maternal NF-κB activation is involved in the pathogenesis of preeclampsia. Accumulating evidence has shown that inflammatory NF-κB activation accelerates endothelial dysfunction, an early determinant of hypertension and impaired vascular remodeling in the progression to preeclampsia (3, 7). In fact, inhibition of NF-κB activation or TNFα activity using a pharmacological inhibitor or a neutralizing antibody prevents TNFα-mediated endothelial dysfunction and high blood pressure in hypertensive patients (3, 29). Therefore, reciprocal cross-talk between inflammation and NF-κB activation is an important factor in endothelial dysfunction, a key event in the pathogenesis of preeclampsia.
[Figure 8 legend residue: arterial rings were treated as indicated (n = 5); eNOS mRNA and protein levels were analyzed; cGMP levels were measured using a cGMP assay kit; rings transfected with miR-31-5p mimic or inhibitor (i-miR-31) were stimulated with TNFα for 24 h; concentration-dependent vasorelaxant responses to CGRP-α were measured by strain-gauge plethysmography (n = 6) in endothelialized (End) or de-endothelialized (DeE) rings (n = 5). **, p < 0.01, and ***, p < 0.001.]

Among biomarkers associated with vascular function, the reduced bioavailability of eNOS-derived NO is regarded as a
representative parameter of endothelial dysfunction (12). Although eNOS is a constitutively expressed enzyme, its expression can be up-regulated by growth factors and hormones, such as VEGF and estrogen (30). However, inflammatory stimuli, such as TNFα and LPS, down-regulate the expression of this enzyme without transcriptional alteration through activation of NF-κB (7). In addition, it has been shown that inhibition of the NF-κB pathway can rescue decreased eNOS expression by preventing destabilization of eNOS mRNA under inflammatory conditions (3-7). This suggests that eNOS is negatively regulated in an NF-κB-dependent manner at the post-transcriptional level. Our data demonstrated that NF-κB activation is crucially involved in TNFα-induced eNOS down-regulation and endothelial dysfunction via biogenesis of miR-31-5p, which destabilizes eNOS mRNA. We also found that serum levels of miR-31-5p were significantly increased and mostly associated with circulating exosomes in patients with preeclampsia. The levels of exosomes in the maternal circulation are markedly increased in preeclampsia (31, 32), and they correlate strongly with the concentration of exosome-associated placenta-specific alkaline phosphatase, a membrane-bound protein of syncytiotrophoblasts (31). Exosomes originating from syncytiotrophoblasts can mediate intercellular communication with neighboring cells or endothelial cells by delivering their bioactive components, including miRNAs (33). Thus, it is possible that placenta-derived exosomes impair endothelial function in patients with preeclampsia by delivering miR-31-5p. However, TNFα, but not miR-31-5p, decreased mouse eNOS expression, indicating that the targeting activity of miR-31-5p against eNOS mRNA is species-specific.
These results suggest the possible involvement of unidentified TNFα-induced miRNAs in the down-regulation of mouse eNOS expression and in the pathogenesis of a murine preeclampsia-like syndrome (8, 9), which is different from the naturally occurring human disease (34).
Our previous data demonstrated that eNOS levels were reduced in HUVECs freshly isolated from women with preeclampsia and that the levels were restored to those of normal HUVECs after 60 h of culture in fresh medium (3). This suggests that the decreased eNOS levels in the patients may be due to elevated levels of cytokines, including TNFα (3), and are rapidly recovered upon withdrawal of cytokines during in vitro culture. This phenomenon is similar to the resolution of preeclampsia symptoms, probably due to the reduction of cytokine levels within a few weeks after delivery (35). These observations provide evidence that inflammatory eNOS down-regulation is crucially involved in the pathogenesis of preeclampsia. Thus, our findings strongly suggest that the NF-κB/miR-31-5p axis is essential for vascular dysfunction in the development of preeclampsia by inhibiting the eNOS/NO pathway.
Noncoding miRNAs inhibit gene expression post-transcriptionally via destabilization of target gene transcripts and contribute to the pathogenesis of various human diseases, including preeclampsia. A number of miRNAs, including miR-155-5p, miR-335, and miR-854, reportedly decrease eNOS mRNA stability and its translation (7, 10). In particular, miR-155-5p is up-regulated through activation of NF-κB in response to TNFα and LPS (7). Thus, the eNOS/NO pathway is negatively regulated by activation of NF-κB during inflammation that resembles the pathogenic conditions of preeclampsia (3, 13, 34). miR-31-5p is also up-regulated by NF-κB signaling in keratinocytes stimulated with inflammatory cytokines, including TNFα (36). Consistent with this, we also found that miR-31-5p is NF-κB-responsive and inhibits eNOS expression. This suggests that NF-κB-induced miR-31-5p, in addition to miR-155-5p, causes endothelial dysfunction by inhibiting the eNOS/NO pathway and is a molecular risk factor for preeclampsia.
Endothelial NO, produced from L-arginine by the catalytic reaction of eNOS, plays a crucial role in regulating vascular relaxation and homeostasis via sGC-dependent cGMP production. Indeed, eNOS-deficient mice develop spontaneous hypertension and sFlt-1-induced proteinuria (11, 13). Chronic inhibition of NOS activity in rats with L-NAME induces preeclampsia-like syndromes, such as systemic hypertension, glomerular sclerotic injury, and proteinuria (16). Therefore, decreased eNOS-derived NO synthesis is thought to be an important risk factor for endothelial dysfunction linked to the development of preeclampsia; however, some studies have shown conflicting results, such as increases, decreases, or no changes in plasma NO levels of patients (37-39). Our previous and present studies clearly demonstrated that TNFα was elevated in maternal plasma and stimulated endothelial dysfunction by decreasing eNOS expression, subsequently inducing characteristic features of preeclampsia in vitro (3). A preeclampsia-like syndrome can be induced in pregnant baboons by infusion with TNFα (26), probably via biogenesis of miR-31-5p and miR-155-5p. These observations suggest that TNFα increases the risk of preeclampsia by impairing the eNOS/NO pathway. Our results showed that NF-κB-responsive biogenesis of miR-31-5p induced by TNFα is essential for impairment of human endothelial function associated with vascular relaxation and remodeling.
A typical miRNA is capable of silencing hundreds of genes through complementary base-pairing with sequence-specific sites on the 3′-UTR of RNA transcripts, and thus a single gene transcript can be targeted by more than one miRNA. Accordingly, eNOS expression has been shown to be regulated by multiple miRNAs, including miR-155-5p, miR-335, miR-543, and miR-584 (3, 10, 40). Notably, among these miRNAs, miR-155-5p has been shown to increase in sera of patients with preeclampsia and to contribute to TNFα-mediated endothelial dysfunction and vasoconstriction in vitro (3). The effects of TNFα were reversed by the NF-κB inhibitor aspirin and the anti-inflammatory molecule carbon monoxide (3, 5), which can reduce the risk of preeclampsia (41, 42). This indicates that TNFα-induced endothelial dysfunction may be associated with NF-κB-responsive miRNAs. We also found that TNFα-induced miR-31-5p was elevated in preeclamptic patients, down-regulated eNOS expression, and impaired the angiogenic and vasorelaxant activity of endothelial cells. However, our data showed that miR-31-5p was more potent than miR-155-5p in suppressing eNOS expression. This difference in inhibition can be explained by the different target-site types on the eNOS transcript: an 8-mer for miR-31-5p and a 7mer-m8 for miR-155-5p (43). Thus, miR-31-5p is the more important contributor to reduced eNOS expression and endothelial dysfunction under inflammatory conditions, which results in impairment of vascular relaxation and remodeling. Nevertheless, we suggest that both miRNAs may act synergistically in the pathogenesis of preeclampsia.
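The 8-mer vs. 7mer-m8 distinction invoked here refers to the canonical seed-site types: a perfect Watson-Crick match in the 3′-UTR to miRNA nucleotides 2-8, either followed by an A (8mer) or not (7mer-m8). A minimal site-scanning sketch using toy sequences (deliberately not the real miR-31-5p or eNOS sequences):

```python
def revcomp_to_dna(rna):
    """Reverse complement of an RNA sequence, written as DNA."""
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(rna))

def seed_sites(mirna, utr):
    """Find canonical seed sites of an RNA miRNA (5'->3') in a DNA 3'-UTR.

    8mer    = match to the reverse complement of miRNA nt 2-8, followed by 'A'
    7mer-m8 = the same 7-nt match without the downstream 'A'
    Returns a list of (position, site_type) tuples.
    """
    core = revcomp_to_dna(mirna[1:8])  # miRNA nucleotides 2-8 (0-based 1..7)
    hits, i = [], utr.find(core)
    while i != -1:
        kind = "8mer" if utr[i + 7:i + 8] == "A" else "7mer-m8"
        hits.append((i, kind))
        i = utr.find(core, i + 1)
    return hits

# Toy sequences only:
print(seed_sites("AGCUAGGCUA", "TTTGCCTAGCAGGGGCCTAGCTT"))
# → [(3, '8mer'), (14, '7mer-m8')]
```

An 8mer site's extra downstream A is what gives it higher predicted efficacy than a 7mer-m8 site, which is the basis of the potency argument in the paragraph above.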
Taken together, our findings demonstrate that NF-κB-responsive miR-31-5p decreases eNOS expression by reducing eNOS mRNA stability, which dampens NO/cGMP axis activity. This event elicits endothelial dysfunction associated with the preeclamptic characteristics of hypertension and defective vascular remodeling. These findings also suggest a possible mechanistic link between NF-κB-induced miR-31-5p and vascular dysfunction through down-regulation of eNOS expression during the development of preeclampsia. Furthermore, our data provide evidence that NF-κB-responsive miR-31-5p may be a novel predictive risk factor for endothelial dysfunction and may contribute, synergistically with miR-155-5p, to the pathogenesis of preeclampsia.
Human specimens
Human specimens were obtained from 11 healthy male adults and 11 male patients with atherosclerosis, as well as from 17 normal pregnant women and 17 patients with preeclampsia after full-term normal deliveries, according to the protocols approved by the Institutional Review Board at Kangwon National University Hospital (KNUH-2017-01-010-004), and informed consent was obtained from women with normal and preeclamptic pregnancies. This study conformed to the principles outlined in the Declaration of Helsinki. Blood samples were centrifuged at 12,000 × g for 8 min at 4 °C, and plasma/serum was obtained and kept at −80 °C until use. Preeclampsia was defined by clinical diagnosis with systolic blood pressure >140 mm Hg or diastolic blood pressure ≥90 mm Hg and proteinuria ≥0.3 g/24 h.
Endothelial cell culture and treatment

HAECs were purchased from ScienCell Research Laboratories (Carlsbad, CA). HUVECs were cultured as described previously (3) and used at passages 2-6, because at these passages they preserve their specific characteristics, which are quite similar to those of freshly isolated normal cells (44). The cells were grown in M199 medium supplemented with 20% fetal bovine serum (FBS), 100 units/ml penicillin, 100 ng/ml streptomycin, 3 ng/ml basic fibroblast growth factor, and 5 units/ml heparin at 37 °C in a humidified CO2 incubator. Cells were cultured in serum-free medium for 2 h and transfected with 80 nM siRNAs or miRNAs using Lipofectamine RNAiMAX as described previously (5). After recovery in fresh medium for 24 h, cells were pretreated with or without Bay 11-7082 (5 μM), SB203580 (10 μM), SP600125 (10 μM), PD98059 (10 μM), LY294002 (10 μM), or L-NAME (1 mM) for 30 min, followed by stimulation with TNFα (10 ng/ml), LPS (1 μg/ml), IL-1 (20 ng/ml), or IFN-γ (20 ng/ml) for the indicated time periods or for 24 h. Separately, mouse endothelial cells were isolated from the lungs of male C57BL/6 mice using Dynabeads magnetic beads (Invitrogen), as described previously (45) and according to the manufacturer's instructions. Mouse endothelial cells were cultured in Dulbecco's modified Eagle's medium supplemented with 20% fetal bovine serum, transfected with 80 nM miRNAs, and treated with 10 ng/ml TNFα for 24 h.
miRNA and mRNA quantification
Total miRNAs from cells, tissues, and sera were isolated using an miRNeasy mini kit or an miRNeasy serum/plasma kit (Qiagen, Hilden, Germany), as reported previously (3). cDNAs for miRNA quantification were prepared from 1 μg of miRNA using a miScript II RT kit. Quantitative real-time PCR (qRT-PCR) was performed with a miScript SYBR Green PCR kit according to the manufacturer's instructions. The levels of primary, precursor, and mature miR-31 were analyzed using a miScript Primer Assay kit with miR-31-specific and universal primers. The relative levels of miR-31 were normalized to the housekeeping gene SNORD-95. In addition, total RNAs were isolated from cultured cells and vascular tissues using TRIzol reagent (Invitrogen) and used to synthesize first-strand cDNAs with Moloney murine leukemia virus reverse transcriptase (Promega, Madison, WI), followed by quantification of eNOS mRNA levels by qRT-PCR. The mRNA levels of eNOS were normalized to glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The primers used in this study are listed in Table S2.
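Relative levels from qRT-PCR against a housekeeping normalizer (SNORD-95 for miRNA, GAPDH for mRNA) are conventionally computed with the Livak 2^-ΔΔCt method. A sketch with invented Ct values for illustration:

```python
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative quantification against a reference gene.
    Returns the fold change of the target in the treated condition
    relative to the control condition."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Hypothetical Ct values: eNOS vs. GAPDH in treated vs. control cells
print(rel_expression(26.0, 18.0, 24.0, 18.0))  # → 0.25
```

A two-cycle increase in the target Ct at constant reference Ct corresponds to a four-fold reduction in expression, consistent with the direction of the eNOS changes reported above.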
Measurements of intracellular NO and cGMP
Intracellular NO levels were measured in situ using DAF-FM diacetate, as reported previously (3). HUVECs were transfected with 80 nM miR-31-5p mimic or inhibitor for 24 h and treated with TNFα for another 24 h. Cells were incubated with 5 μM (final concentration) DAF-FM diacetate for 30 min in a CO2 incubator. NO levels were determined using a confocal laser microscope. Cellular cGMP levels were determined using a cGMP assay kit (R&D Systems).
Assay of in vitro endothelial cell angiogenesis and trophoblast invasion
HUVECs were transfected with 80 nM miRNAs, followed by treatment with TNFα (10 ng/ml) or L-NAME (1 mM) for 24 h. HUVECs were incubated with or without vascular endothelial growth factor (VEGF, 10 ng/ml), and angiogenic properties were assessed as described previously (3). Endothelial cell proliferation was determined by [3H]thymidine incorporation assay. The cell migration assay was performed using Boyden chambers. Tube formation was determined after culturing HUVECs on a layer of growth factor-reduced Matrigel. Trophoblast invasion was assayed in a co-culture system of HUVECs and trophoblast-derived HTR-8/SVneo cells (3). For the trophoblast invasion assay, HUVECs (2 × 10^4 cells) transfected with miRNAs and treated with TNFα were placed in the lower chambers of transwell plates with 6.5-mm diameter polycarbonate filters (Costar, New York), followed by treatment with or without L-NAME (1 mM), DETANO (100 μM), ODQ (50 μM), or 8-Br-cGMP (300 μM). Trophoblast-derived HTR-8/SVneo cells (5 × 10^3 cells) were placed into the upper wells pre-coated with 100 μl of growth factor-reduced Matrigel (Corning Inc., Corning, NY) and cultured for 5 h in a CO2 incubator as reported previously (3). Cells were fixed and stained with hematoxylin and eosin, and the upper surface of the filter was wiped with a cotton swab to remove nonmigrating cells. The cells that had invaded to the lower side of the filter were observed using an inverted phase-contrast microscope (×100), and images were captured with a video graphic system. Cell invasion was quantified by counting the cells in all fields in each assay.
Measurement of vascular tension
Human umbilical arteries (~40 mm from the insertion point in the placenta) from normal pregnancies were dissected, and the Wharton's jelly and connective tissue were removed in ice-cold oxygenated Krebs-Ringer bicarbonate solution (in mM: NaCl 118.3, KCl 4.7, MgSO4 1.2, KH2PO4 1.2, CaCl2 1.6, NaHCO3 25, glucose 11.1) as described previously (3). The normal and de-endothelialized vessels were cut into 3-mm rings and transfected with 80 nM miRNAs in Opti-MEM reduced-serum medium using Lipofectamine RNAiMAX. The vessel rings were pretreated with Bay 11-7082 (5 μM) in Dulbecco's modified Eagle's medium for 30 min and stimulated with TNFα (10 ng/ml) for 24 h. Some rings were used for analysis of miR-31-5p, eNOS mRNA and protein, and cGMP by qRT-PCR, Western blotting, and an ELISA kit. Others were mounted onto stainless steel wire stirrups (150 μm) in a multiwire myograph system (DMT-620, Aarhus, Denmark) and placed in 10-ml organ baths containing Krebs-Ringer solution. The rings were passively stretched at 30-min intervals in increments of 100 mg to reach the optimal tone (200 mg) for human vessels. After the arterial rings had been stretched to their optimal resting tone, the contractile response to 100 mM KCl was determined. The response to a maximal dose of KCl was used to normalize responses to agonist across the vessel rings. The relaxant response of human vessels to CGRP-α was assessed as described previously (3).
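Relaxant responses in such myograph experiments are typically reported as a percentage of the developed (pre-contraction) tone. A minimal calculation sketch with hypothetical tension readings in mg (not values from this study):

```python
def percent_relaxation(baseline_mg, precontracted_mg, after_mg):
    """Relaxation as % of developed tone:
    100 * (precontracted - after) / (precontracted - baseline)."""
    return 100.0 * (precontracted_mg - after_mg) / (precontracted_mg - baseline_mg)

# Hypothetical tensions: resting 200 mg, pre-contracted 800 mg,
# tension after a given CGRP-α concentration 350 mg
print(percent_relaxation(200.0, 800.0, 350.0))  # → 75.0
```

Normalizing to the developed tone (and, across rings, to the maximal KCl response as stated above) makes rings of different caliber and contractility comparable within a concentration-response curve.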
Western blot analysis
HUVECs were suspended in RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% Nonidet P-40, 0.5% deoxycholic acid, 0.1% SDS) and incubated on ice for 30 min for complete cell lysis. Cell debris was removed by centrifugation at 12,000 × g for 15 min. Cell lysates (30 μg of protein) were separated by SDS-PAGE, and target protein levels were determined by Western blot analysis (7).
Reporter gene assay
The plasmid psiCHECK-2-eNOS-3′-UTR was constructed by ligating the PCR product (430 nucleotides) of the human eNOS 3′-UTR into the psiCHECK-2 vector at the XhoI and NotI sites as described previously (4). The mutant psiCHECK-2-eNOS-3′-UTR was prepared by site-directed mutagenesis (Fig. 1B). HUVECs were transfected with 2 μg/ml psiCHECK-2 vector containing either the WT eNOS 3′-UTR or its mutant, or with the pGL3-human eNOS promoter (1.6 kb)-Luc construct (7), alone or in combination with miR-31-5p mimic or inhibitor using Lipofectamine 3000 (Invitrogen). After 4 h of incubation, cells were allowed to recover in fresh medium for 24 h, followed by incubation with TNFα for 24 h. Reporter gene activity was assayed with a luciferase assay system or a dual-luciferase reporter assay kit (Promega, Madison, WI).
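In the psiCHECK-2 system the cloned 3′-UTR sits downstream of the Renilla luciferase gene while firefly luciferase serves as the internal transfection control, so 3′-UTR-reporter activity is conventionally the Renilla/firefly ratio relative to the control condition. A sketch with hypothetical luminescence counts:

```python
def utr_activity(renilla, firefly, renilla_ctrl, firefly_ctrl):
    """Renilla/firefly ratio normalized to the control condition;
    1.0 means no change in 3'-UTR-dependent reporter output."""
    return (renilla / firefly) / (renilla_ctrl / firefly_ctrl)

# Hypothetical counts: treatment halves Renilla output, firefly unchanged
print(utr_activity(4000.0, 20000.0, 8000.0, 20000.0))  # → 0.5
```

Dividing by the firefly signal cancels well-to-well differences in transfection efficiency and cell number, so only 3′-UTR-dependent repression remains in the ratio.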
Isolation of exosomes
Serum samples (1 ml) were centrifuged at 1,500 × g for 10 min to remove cells and cell debris. The collected supernatant was centrifuged again at 17,000 × g for 15 min to remove large vesicles, and the supernatant was then spun in an ultracentrifuge at 100,000 × g for 1.5 h at 4 °C. The pellets containing exosomes were carefully separated from the exosome-depleted supernatant and resuspended in PBS. Both the exosome suspension and the exosome-depleted serum were processed to extract miRNAs.
Statistical analysis
Quantitative data are expressed as mean ± S.D. of at least three independent experiments performed in triplicate, except for vascular tone data, which are presented as mean ± S.E. (standard error of the mean). Statistical analyses were performed with GraphPad Prism 6.0 for Windows (GraphPad Software). Statistical significance was determined using either the unpaired Student's t test or one-way analysis of variance followed by Tukey's post hoc test, depending on the number of experimental groups analyzed. Significance was established at a p value < 0.05.
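For the two-group comparisons, the unpaired Student's t statistic mentioned above can be computed directly with a pooled variance (illustrative numbers only; comparisons across more than two groups require ANOVA with Tukey's post hoc test, as stated):

```python
import math

def students_t(a, b):
    """Unpaired two-sample Student's t statistic (pooled variance)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

print(round(students_t([1, 2, 3], [4, 5, 6]), 3))  # → -3.674
```

The p value is then obtained from the t distribution with na + nb − 2 degrees of freedom, which is what Prism reports.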
We show that every connected real Lie group can be realized as the full automorphism group of a Stein hyperbolic complex manifold.
Introduction
Saerens and Zame, and independently Bedford and Dadok proved that, given a compact real Lie group K there always exists a strictly pseudoconvex bounded domain D ⊂ C n such that Aut(D) ≃ K. By the theorem of Wong-Rosay (which states that every strictly pseudoconvex bounded domain with non-compact automorphism group is isomorphic to the ball) it is clear that an arbitrary non-compact real Lie group can not be realized as the automorphism of a strictly pseudoconvex bounded domain in C n . However, as we proved in an earlier paper [16], for any connected real Lie group G there does exist a complex manifold X on which G acts effectively. Moreover, X can be chosen in such a way that it enjoys several of the key properties of strictly pseudoconvex bounded domains. Namely, X can be chosen such that it is both Stein and hyperbolic in the sense of Kobayashi.
The purpose of the present note is to prove that it is possible to rule out additional automorphisms, i.e. it is possible to achieve Aut(X) ≃ G.
Theorem 1. Let G be a connected real Lie group. Then there exists a Stein, complete hyperbolic complex manifold X on which G acts effectively, freely, properly and with totally real orbits such that Aut(X) ≃ G.
The idea is to follow the strategy of Saerens and Zame: Construct the desired manifold as an open subset of a larger Stein manifold in such a way that the given group acts on this open subset. Ensure that every automorphism of this open subset can be extended to the boundary, then modify the boundary in such a way that this CR-hypersurface simply has no automorphisms other than those from the given group. The latter can be done using the fact that a CR-hypersurface (unlike a complex manifold) does have local invariants. A principal difficulty in this approach is to obtain an extension of automorphisms of the open subset to the boundary. If one is concerned only with compact Lie groups, then one can work with a strictly pseudoconvex bounded domain D. For such a domain it is evident that for every automorphism φ of D there exists a sequence x n ∈ D such that both x n and φ(x n ) converge to a strictly pseudoconvex point in the boundary. This is the starting point for the extension of the automorphism φ to the boundary ∂D.
Now, our goal is to obtain a result for arbitrary connected Lie groups, which are not necessarily compact.
This lack of compactness creates some difficulties. There are two main problems: First, an arbitrary non-compact Lie group is not necessarily linear. For instance, the universal cover of SL(2, R) cannot be embedded into a linear group. Second, as already mentioned, the theorem of Wong-Rosay implies that in general a non-compact Lie group cannot be realized as the full automorphism group of a strictly pseudoconvex bounded domain with smooth boundary. Thus we have to work with domains which are unbounded or whose boundary is not everywhere smooth. The trouble is that it is then no longer clear that for every automorphism φ there exists a sequence x_n in the domain such that both x_n and φ(x_n) converge to a nice point in the boundary.
In [15] a result similar to ours is claimed for certain Lie groups with a rather sketchy outline of a possible proof.
The first of the aforementioned two problems is dealt with by assuming the group G to be linear, while the second problem is simply ignored. Since the second problem is in fact a serious obstacle, the proof sketched in [15] cannot be regarded as complete.
We proceed in the following way: To deal with the first problem, we note that every Lie algebra is linear by the theorem of Ado. Therefore, in a certain sense, every Lie group is linear up to coverings and the first problem can be attacked by working carefully with coverings.
For the second problem, we use bounded domains whose boundaries are smooth outside an exceptional set E which is small in a certain sense. Exploiting this smallness, we prove that for every automorphism φ there must exist a sequence x_n such that both x_n and φ(x_n) converge to a boundary point outside the "bad set" E.
Once this has been verified, we can prove (using arguments similar to those used in [13], [2]) that φ extends as a holomorphic map near lim(x n ), and use the theory of Chern-Moser-invariants to deduce that φ is in fact given by left multiplication with an element of G.
1.1. Disconnected Lie Groups. The result of Bedford and Dadok resp. Saerens and Zame is valid for all compact groups, not only connected ones. However, compactness implies that in this case there are no more than finitely many connected components.
We conjecture that our main theorem is valid for arbitrary real Lie groups, including those with finitely or countably infinitely many connected components.
As a first step regarding disconnected Lie groups, we proved in [17] that the statement of our main theorem does hold for countable discrete groups.
Linearization
Given a real Lie group G, we look for a bounded domain on which this group acts. For this purpose we use the theory of hermitian symmetric spaces.
We will need the following: Proposition 1. Let G̃ be a simply-connected real Lie group. Then there exists a natural number n and a Lie group homomorphism ξ : G̃ → Sp(2n, R) such that the following conditions are fulfilled: 1. ξ has discrete fibers. 2. ξ(G̃) is a closed subgroup of Sp(2n, R).
Proof. By Ado's theorem there is an injective Lie algebra homomorphism Lie(G̃) → Lie GL(m, R) for some m ∈ N. Since G̃ is simply-connected, this induces a Lie group homomorphism ξ 0 : G̃ → GL(m, R) with discrete fibers. Let V = R m and W = V ⊕ V * where V * is the vector space dual of V . Then W carries a natural symplectic structure, given by ω((v, α), (v ′ , α ′ )) = α ′ (v) − α(v ′ ), which is evidently preserved by the natural diagonal action of GL(V ) on W . Hence there is an embedding i : GL(m, R) ֒→ Sp(2m, R). Let ξ 1 = i ◦ ξ 0 : G̃ → Sp(2m, R), H = ξ 1 (G̃) and H ′ its commutator group. Then H ′ is already closed in Sp(2m, R). The quotient group H/H ′ is a connected commutative real Lie group, hence H/H ′ ≃ (S 1 ) k × R l for some k, l ∈ N ∪ {0}. It is easy to see that there is a closed embedding j : H/H ′ ֒→ Sp(2m ′ , R) for some m ′ ∈ N. Furthermore there is an embedding ζ : Sp(2m, R) × Sp(2m ′ , R) ֒→ Sp(2n, R) with n = m + m ′ . Now let τ : H → H/H ′ denote the natural projection and define ξ : G̃ → Sp(2n, R) by ξ(g) = ζ(ξ 1 (g), j(τ (ξ 1 (g)))).
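The GL(V)-invariance of the canonical symplectic form used in this proof is a direct check; the following sketch (standard notation, with the form written out explicitly as assumed above) records it:

```latex
% Canonical symplectic form on W = V \oplus V^*:
\omega\bigl((v,\alpha),(v',\alpha')\bigr) = \alpha'(v) - \alpha(v').
% GL(V) acts diagonally by g\cdot(v,\alpha) = (g v,\ \alpha\circ g^{-1}), hence
\omega\bigl(g\cdot(v,\alpha),\, g\cdot(v',\alpha')\bigr)
  = (\alpha'\circ g^{-1})(g v) - (\alpha\circ g^{-1})(g v')
  = \alpha'(v) - \alpha(v')
  = \omega\bigl((v,\alpha),(v',\alpha')\bigr),
% so the diagonal action embeds GL(m,\mathbb{R}) into Sp(2m,\mathbb{R}).
```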
Hermitian symmetric domains
For basic facts on symmetric spaces, see e.g. [9]. Let S = Sp(2n, R) and let K denote a maximal compact subgroup. Then the quotient manifold D 0 = S/K can be endowed with the structure of a hermitian symmetric domain. Furthermore there exist open embeddings ("Cayley transform") D 0 ֒→ C N ֒→ Q such that 1. D 0 is realized as a bounded domain in C N , 2. Q is a projective manifold (the "compact dual" of D 0 ) and 3. the Sp(2n, R)-action on D 0 extends to an Sp(2n, C)-action on Q.

Lemma 1. There exists a natural number m ∈ N and points p 1 , . . . , p m ∈ D 0 such that the identity is the only element of Sp(2n, C) fixing each of the points p 1 , . . . , p m .

Proof. We choose a sequence of points p i ∈ D 0 recursively. First p 1 is chosen arbitrarily. When p 1 , . . . , p k are already chosen, we define I k = {g ∈ Sp(2n, C) : g(p i ) = p i for all i ≤ k}. Then we proceed as follows: If dim(I k ) > 0, we choose p k+1 such that there is an element a k+1 in the connected component I 0 k such that a k+1 (p k+1 ) ≠ p k+1 . This ensures dim I k+1 < dim I k . If dim I k = 0, then I k is countable. Thus Λ = ∪ a∈I k \{e} {q ∈ Q : a(q) = q} is a countable union of nowhere dense analytic subsets of Q. It follows that Λ is a set of measure zero for any Lebesgue class measure on Q. In particular Λ ∩ D 0 ≠ D 0 and we can choose p k+1 ∈ D 0 \ Λ. By the definition of Λ this choice enforces I k+1 = {e}.

Proposition 2. Let G̃ be a simply-connected real Lie group. Then there exists a discrete central subgroup Γ such that for G = G̃/Γ the following properties hold: There exists a natural number N, a bounded domain D ⊂ C N , complex analytic subsets E ⊂ C N , Z ⊂ D and a G-action on Z such that 1. There is a G-invariant non-empty open subset Ω of Z such that G acts freely, properly, and with totally real orbits on Ω.
Proof. By prop. 1, there is a discrete central subgroup Γ ofG such that G =G/Γ can be embedded into some Sp(2n, R) as closed Lie subgroup. Let D 0 = Sp(2n, R)/K be the associated hermitian symmetric space and Q and S C = Sp(2n, C) as described in the beginning of this section.
By lemma 1 there is a natural number m and a point p = (p 1 , . . . , p m ) ∈ (D 0 ) m ⊂ Q m whose isotropy group in S C is trivial. Because the S C -action on Q is algebraic, the S C -action on Q m is algebraic as well. In particular every S C -orbit in Q m is locally closed. We obtain a fiber bundle τ : S C → G\S C , where G\S C denotes the quotient of S C by the left action of G. Let U ⊂ G\S C be a relatively compact open contractible subset and Ω = {g · p : g ∈ τ −1 (U)}.
Chern-Moser-invariants
4.1. Chern-Moser-invariants. For every real-analytic strictly pseudoconvex CR-hypersurface M in a complex manifold X and every point p ∈ M there is a system of local coordinates (z, w) = (z 1 , . . . , z n−1 , w) centered at p in which M is given by ℑ(w) = ρ(z, z̄, ℜ(w)), where ρ is a real-analytic function whose power series development is given as ρ = ⟨z, z̄⟩ + Σ k,l≥2, r≥0 F k,l,r (z, z̄, ℜ(w)), where F k,l,r is a polynomial of bidegree (k, l) in z and z̄ and degree r in ℜ(w).
A point p ∈ M is called umbilical if F 2,2,0 = 0. For non-umbilical points we define scalar invariants K k,l,r (for k, l ≥ 2, r ∈ N) given by K k,l,r = ||F k,l,r || 2 where || || denotes the euclidean norm, i.e., the norm induced by the scalar product for which the monomials in the coordinates ℜ(w), z i ,z i constitute an orthonormal basis.
If x, y are non-umbilical points on M such that the CR-hypersurface germs (M, x) and (M, y) are isomorphic, then all these invariants K k,l,r must assume the same values at x and y.
For convenient application later on, we define K d = Σ k+l=d K k,l,0 for d ≥ 4.
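As a standard illustration (not taken from the text above): for the boundary sphere of the unit ball all higher-order terms in the normal form vanish, so every point is umbilical, while at a non-umbilical point of a general hypersurface the lowest invariant is automatically positive:

```latex
% Chern--Moser normal form of the sphere S^{2n-1} = \partial B^n at any point:
\Im(w) = \langle z, \bar z \rangle, \qquad F_{k,l,r} \equiv 0 \quad (k,l \ge 2),
% so every point of the sphere is umbilical.  At a non-umbilical point of a
% general M one has instead
K_4 = K_{2,2,0} = \|F_{2,2,0}\|^2 > 0 .
```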
4.2. Jet bundles.
We recall the notion of jets (see [8]): For manifolds X and Y and points x ∈ X, y ∈ Y , the set of k-jets J k (X, Y ) x,y is the set of equivalence classes of map germs f : (X, x) → (Y, y), where two real-analytic map germs are equivalent iff their respective Taylor series developments agree up to order k. J k (X, Y ) is the disjoint union of all J k (X, Y ) x,y (with x ∈ X and y ∈ Y ). There is a natural manifold structure on J k (X, Y ) for which we obtain a fiber bundle ("source map") α : J k (X, Y ) → X.
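As a standard illustration of these notions (assuming X = R^n, Y = R with fixed coordinates), a k-jet is just a truncated Taylor expansion, so the jet space is finite-dimensional:

```latex
j^k_x f \;\longleftrightarrow\; \bigl(\partial^{\alpha} f(x)\bigr)_{|\alpha| \le k},
\qquad
\dim J^k(\mathbb{R}^n, \mathbb{R})
  \;=\; \underbrace{n}_{\text{source } x}
  \;+\; \underbrace{\tbinom{n+k}{k}}_{\text{Taylor coefficients}} .
```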
4.3. Transversality. We will need the multijet transversality theorem. Let W be a submanifold of codimension c in J k s (X, Y ). Then the multijet transversality theorem implies that for every map f in a residual subset of C ∞ (X, Y ) the multijet extension of f is transversal to W ; in particular, for such f the preimage of W under the multijet extension is of codimension at least c in X (s) .
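For reference, the multijet transversality theorem in the form used here reads (standard statement, cf. [8]; j^k_s f denotes the s-fold multijet extension of f on the configuration space X^(s) of s-tuples of pairwise distinct points):

```latex
% Multijet transversality: for every submanifold W \subseteq J^k_s(X,Y), the set
T_W \;=\; \bigl\{\, f \in C^{\infty}(X,Y) \;:\; j^k_s f \pitchfork W \,\bigr\}
% is residual in C^{\infty}(X,Y).  Transversality forces
\operatorname{codim}\bigl( (j^k_s f)^{-1}(W) \bigr)
  \;\ge\; \operatorname{codim}(W) = c
\quad \text{in } X^{(s)} .
```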
Remark. 1. In the statement on the codimension, the codimension of the empty set is to be understood as +∞. 2. A subset of a topological space V is called residual if it is the intersection of countably many open dense subsets. If V has the Baire property, then every residual subset of V is dense. The function spaces C ∞ (X, Y ) and C ω (X, Y ) have the Baire property (for any pair of manifolds (X, Y )). 3. Similar results hold for the function spaces of type C ω , i.e. real-analytic mappings; in fact they can be deduced from the transversality results for C ∞ -maps, using the fact that C ω -maps are dense in C ∞ .
4. In the real-analytic category, W does not need to be smooth; it suffices if W is a (possibly singular) real-analytic subset. As explained in [13], this can be verified using the fact that a real-analytic set admits a stratification into smooth real-analytic manifolds.

Let us now assume that there is a real Lie group G acting holomorphically on X with totally real orbits. Let us furthermore assume that the action is proper. Then orbits can be separated by invariant functions. Around any given point p ∈ X, we may choose local holomorphic coordinates x i in such a way that x i (p) = 0 for all i. It follows that for every real homogeneous polynomial P there is a G-invariant real-analytic function whose jet at p realizes P in these coordinates. As a consequence, we obtain the statement below: Lemma 2. Let G be a real Lie group acting holomorphically and properly on a complex manifold X with totally real orbits.
Let J k + (X, R) G x,t denote the set of all k-jets induced by germs of G-invariant functions f for which the CR-hypersurface germ defined by f = t is strictly pseudoconvex around x. Let J k + (X, R) G = ∪ x∈X,t∈R J k + (X, R) G x,t . Then K 4 = . . . = K k = 0 defines a real-analytic subspace of codimension at least k − 3 in J k + (X, R) G . Now we can prove the proposition given below.
Proposition 3. Let G be a real Lie group acting holomorphically and properly on a complex manifold X with totally real orbits. Assume dim C (X) ≥ 2. Let p ∈ X.
Then G · p admits an open G-invariant neighbourhood Ω such that: 1. The inclusion map G · p ֒→ Ω is a homotopy equivalence. 2. The boundary ∂Ω is a smooth, real-analytic, strictly pseudoconvex hypersurface. 3. There is a nowhere dense real-analytic subset Σ ⊂ ∂Ω such that for all x, y ∈ ∂Ω \ Σ the CR-hypersurface germs (∂Ω, x) and (∂Ω, y) are isomorphic if and only if x = g · y for some g ∈ G.
Proof. Let r = dim R (X) − dim R (G), B the open unit ball in R r and i : B ֒→ X a real-analytic embedding with i(0) = p which is everywhere transversal to the G-orbits. Then W = G · i(B) is an open G-invariant neighbourhood of the G-orbit G · p. Since the G-action on X is free and proper, we may and do assume that the map G × B → W given by (g, b) → g · i(b) is a diffeomorphism. An easy calculation in local coordinates shows that W ǫ = G · i({b ∈ B : ||b|| < ǫ}) is strictly pseudoconvex for all sufficiently small ǫ > 0. We fix now a number 1 > δ > 0 such that W δ is strictly pseudoconvex. Then W δ is a G-invariant open neighbourhood of G · p fulfilling conditions (1) and (2). We define the "umbilical locus" U k = {j ∈ J k (B, R) : K 4 (j) = 0} and the "locus of coinciding scalar invariants" E k = {(j, j ′ ) ∈ J k 2 (B, R) : K d (j) = K d (j ′ ) for 4 ≤ d ≤ k}. Since the set of jets inducing strictly pseudoconvex hypersurface germs is an open subset in J k (B, R), U k and E k can be regarded as locally closed real-analytic subsets in J k (B, R) resp. J k 2 (B, R). Fix k such that k − 3 > 2 dim R (B). Then lemma 2 implies that the codimension of E k exceeds the dimension of B × B.
The multijet transversality theorem implies that there is a residual set A ⊂ C ω (B, R) such that for every f ∈ A the jet extensions of f are transversal to both U k and E k .
Since A is residual, it is dense in C ω (B, R). Therefore A × R intersects the open set Θ. Let (ρ 1 , t 0 ) ∈ (A × R) ∩ Θ. Let Σ 0 ⊂ W be the set of all points x ∈ W such that the CR-hypersurface {y ∈ W : ζ(ρ 1 )(y) = ζ(ρ 1 )(x)} is umbilical at x. Then transversality of ρ 1 with respect to U k implies that Σ 0 is a nowhere dense, locally closed real-analytic subset of W . As a consequence, we can find a real number t close to t 0 such that (ρ 1 , t) ∈ Θ and such that Ω = {y ∈ W : ζ(ρ 1 )(y) < t} has the following property: Σ 0 ∩ Ω is nowhere dense in Ω. Now (ρ 1 , t) ∈ Θ implies that conditions (1) and (2) are fulfilled for our choice of Ω. Furthermore transversality of ρ 1 with respect to E k (in combination with codim R (E k ) > dim R (B × B)) implies that Ω fulfills condition (3) of the proposition. This completes the proof.
Privalov's theorem
We are now in a position to use the classical theorem of Privalov in order to show that for every automorphism φ there is a sequence x n such that both x n and φ(x n ) converge to a point in the good part of the boundary. Furthermore let Ω̃ denote the universal covering of Ω and π : Ω̃′ → Ω ′ and M̃ → M the corresponding coverings.

Then for every holomorphic automorphism φ ∈ Aut(Ω̃) there is a sequence x n in Ω̃ and points q, q̃ ∈ M̃ such that lim x n = q and lim φ(x n ) = q̃.
Proof. Fix φ ∈ Aut(Ω̃). Let ∆ be the unit disk in C, ∆̄ its closure in C and ∂∆ its boundary.

We choose a C ∞ map ζ : ∆̄ → Ω̃′ such that 1. ζ| ∆ maps ∆ holomorphically into Ω̃. 2. ζ −1 (M̃ ) is a subset of positive Lebesgue measure in ∂∆ ≃ S 1 . Now we consider η : ∆ → C N given by η = π ◦ φ ◦ ζ. Then η is an N-tuple of bounded holomorphic functions. It follows ([10], [12]) that the non-tangential limit exists almost everywhere on ∂∆. For t ∈ ∂∆, let lim n−t η(t) denote this non-tangential limit. Evidently lim n−t η(t) ∈ Ω ′ ∪ E wherever defined. We claim that A = {t : lim n−t η(t) ∈ E} is a set of measure zero. Indeed, t ∈ A implies that for every holomorphic function f on C N which vanishes on E we obtain lim n−t (f ◦ η)(t) = 0. If A were not a set of measure zero, it would follow from Privalov's theorem ([10]) that f ◦ η would vanish identically for every such f . But this would imply η(∆) ⊂ E, contradicting η(∆) ⊂ Ω. Thus A must be a set of measure zero. It follows that there exists a point q ∈ ∂∆ ∩ ζ −1 (M̃ ) such that the non-tangential limit for η exists at q and is not in E.
Now fix a triangle T ⊂ ∆̄ with its three vertices on ∂∆, one of which is q (T denotes the triangle with interior, i.e., the convex hull spanned by the three vertices). By the definition of the notion "non-tangential limit" we have a limit v = lim x∈T,x→q η(x) and thus a continuous map η̄ : (T ∩ ∆) ∪ {q} → Ω ′ with η̄ = η on T ∩ ∆. Let W be a simply-connected open neighbourhood of v in Ω ′ , and V an open connected neighbourhood of q in η̄ −1 (W ). Observe that π : Ω̃′ → Ω ′ is an unramified covering. Since W is simply-connected, it follows that π −1 (W ) is a disjoint union of connected components each of which is isomorphic to W . Connectedness of V implies that φ(ζ(V )) is contained in one connected component of π −1 (W ). Together with lim x∈T,x→q η(x) = v this implies that there is a point ṽ ∈ π −1 (v) such that lim x∈T,x→q φ(ζ(x)) = ṽ = q̃. For any sequence t n in int(T ) converging to q we now obtain a sequence x n = ζ(t n ) with convergent limits lim x n = ζ(q) ∈ M̃ and lim φ(x n ) = q̃ ∈ Ω̃′ .
Finally we note that q̃ cannot be in Ω̃: φ is an automorphism of Ω̃, and since lim x n ∉ Ω̃, the image sequence φ(x n ) cannot converge inside Ω̃. Hence q̃ ∈ M̃.
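The two classical facts about boundary values invoked in the proof above are (standard statements, see [10], [12]; Γ_α(t) denotes a Stolz angle at t):

```latex
% Fatou: a bounded holomorphic h : \Delta \to \mathbb{C} has non-tangential limits
h^{*}(t) \;=\; \lim_{z \to t,\; z \in \Gamma_{\alpha}(t)} h(z)
\qquad \text{for almost every } t \in \partial\Delta .
% Privalov (uniqueness): if h^{*}(t) = 0 on a subset of \partial\Delta of
% positive Lebesgue measure, then h \equiv 0 on \Delta .
```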
Extension through the boundary
We need the following well-known extension result.
Proposition 5.
Let Ω be an open subset in a Stein manifold Z. Assume that there are points q,q ∈ ∂Ω, an automorphism φ ∈ Aut(Ω), and a sequence of points x n ∈ Ω with lim x n = q and lim φ(x n ) =q.
Assume in addition that ∂Ω is real-analytic and strictly pseudoconvex near q andq.
Then there exists an open neighbourhood V of q in Z and a holomorphic map Φ : V → Z such that Φ| Ω∩V = φ| Ω∩V .
Proof. First, [6] implies that φ can be extended to a continuous map φ̄ on Ω̄ near q. Since φ̄ is continuous and φ̄| Ω is holomorphic, it is clear that φ̄| ∂Ω is a continuous CR-map. (For a not necessarily differentiable function the notion "CR-map" is defined by regarding derivatives in the sense of distributions. Then the condition "CR" translates into the vanishing of certain integrals involving test functions, which is a closed condition; hence holomorphy of φ̄| Ω = φ implies that φ̄| ∂Ω is a CR-map.) Thus [3] implies that this extension is already C ∞ and finally [1] or [5] yield that there is a holomorphic extension into some open neighbourhood.
Rigidity
Lemma 3. Let Ω be a strictly pseudoconvex domain in a Stein manifold V . Let f be a holomorphic function on V such that f (∂Ω) ⊂ R.
Then f is constant.
Proof. By the assumption of Ω being strictly pseudoconvex it follows that for every point p ∈ Ω close enough to the boundary there exists a continuous map ζ : ∆̄ → V such that 1. ζ is holomorphic on ∆, 2. ζ(0) = p, 3. ζ(∂∆) ⊂ ∂Ω. Now the maximum principle applied to the plurisubharmonic function g(x) = (ℑf (x)) 2 implies that ℑf (p) = 0. Thus the real-analytic function ℑf vanishes in some open subset of Ω and therefore (by the identity principle) it vanishes everywhere. Hence f is both holomorphic and everywhere real-valued and therefore constant.
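That g = (ℑf ) 2 is plurisubharmonic for holomorphic f is a one-line computation (standard fact, recorded here for convenience):

```latex
% With \Im f = (f - \bar f)/(2i) and \bar\partial f = 0 = \partial \bar f:
\partial \bar\partial \,(\Im f)^2
  \;=\; -\tfrac{1}{4}\, \partial \bar\partial (f - \bar f)^2
  \;=\; \tfrac{1}{2}\, \partial f \wedge \bar\partial \bar f ,
% and i\,\partial f \wedge \bar\partial \bar f \ge 0, so (\Im f)^2 is
% plurisubharmonic.
```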
Proposition 6.
Let Ω be an open G-invariant subset of a complex manifold Z on which G acts freely with totally real orbits. Assume that the boundary ∂Ω is a smooth CR-hypersurface.
Let φ be an automorphism of Ω, q ∈ ∂Ω and V an open neighbourhood of q in Z such that φ| V ∩Ω extends to a holomorphic map φ̃ : V → Z.
Assume that for every x ∈ V ∩ ∂Ω both x and φ̃(x) are contained in the same G-orbit.
Assume furthermore that ∂Ω is strictly pseudoconvex near q. Then there exists an element g ∈ G such that g · x = φ(x) for all x ∈ Ω.
Proof. Let g 0 ∈ G be such that φ̃(q) = g 0 · q. We may now replace φ by the automorphism x → g −1 0 · φ(x) and thereby assume that φ̃(q) = q. Now we have to show that φ = id Ω .
Let n = dim C (Ω) and d = dim R (G). Let i : B n−d = {v ∈ C n−d : ||v|| < 1} → Z be an embedding such that i(0) = q and that i(B n−d ) is everywhere transversal to the G-orbits. The G-action induces a real-analytic map ψ : Lie(G) × Z → Z given by ψ(v, x) = exp(v) · x. This extends to a holomorphic map ψ C : U → Z where U is an open neighbourhood of (0, q) in (Lie(G) ⊗ C) × Z. By appropriately shrinking V and U we may assume that U = N × V where N is an open neighbourhood of 0 in Lie(G) ⊗ C. Now we obtain a holomorphic map ζ : N × B n−d → Z given by ζ(v, z) = ψ C (v, i(z)); this map ζ yields local holomorphic coordinates near q. In these local coordinates, since φ̃ maps each point of V ∩ ∂Ω into its own G-orbit, the Lie(G) ⊗ C-coordinates of ζ −1 ◦ φ̃ form a holomorphic map all of whose components are real-valued on V ∩ ∂Ω. Because ∂Ω is strictly pseudoconvex near q, it follows that this map is constant (lemma 3). Since φ̃(q) = q, constancy means that it is constant zero. Thus φ̃ ≡ id V . Finally, by the identity principle it follows that φ(x) = x for all x ∈ Ω, as desired.
8. Reduction to the simply-connected case

Lemma 4. Let G be a connected real Lie group, G̃ its universal covering and Γ = π −1 ({e}) where π : G̃ → G is the natural projection map.
Assume that there exists a simply-connected complex manifold X̃ with Aut(X̃) ≃ G̃ such that the Γ-action on X̃ is free and properly discontinuous.
Let X = X̃/Γ. Then Aut(X) ≃ G.

Proof. Every automorphism of X lifts to an automorphism of X̃, because X̃ is a universal covering space for X. Therefore the automorphism group of X is isomorphic to N/Γ where N denotes the group of all elements of Aut(X̃) which normalize Γ. But Γ is the kernel of a group homomorphism, hence normal. Thus N = G̃ and consequently Aut(X) ≃ N/Γ = G̃/Γ ≃ G.
It remains to be shown that X̃ can be constructed in such a way that X = X̃/Γ will be Stein and completely hyperbolic. Complete hyperbolicity is easy, since X̃ being completely hyperbolic implies that X is completely hyperbolic, too.
The Stein property is more involved, since for an arbitrary unramified covering X̃ → X, Steinness of X̃ does not imply that X is Stein, too.
Proposition 7. Let G be a real Lie group, π : G̃ → G its universal covering, Γ = π −1 (e), and X̃ a complex manifold on which G̃ acts properly and freely with totally real orbits.
Let p ∈ X̃.
(As usual, Ω̃ ⊂ Ũ is called locally Stein iff every point x ∈ Ũ admits an open neighbourhood V in Ũ such that V ∩ Ω̃ is Stein.) Proof. Essentially, we follow the argumentation in [16].
Let Z denote the center of G̃. Then there exists a discrete cocompact subgroup Λ in Z such that Γ ⊂ Λ ([16], lemma 1). Let G 1 = G̃/Λ and X 1 = X̃/Λ. Let G C be the simply-connected complex Lie group corresponding to the complex Lie algebra Lie(G) ⊗ C and j : G̃ → G C the natural Lie group homomorphism induced by the Lie algebra embedding Lie(G) ֒→ Lie(G) ⊗ C.
Let n = dim C (X̃) and d = dim R (G) = dim C (G C ). Let i : B n−d = {v ∈ C n−d : ||v|| < 1} → W be a holomorphic embedding such that i(0) = p and that i(B n−d ) is everywhere transversal to the G̃-orbits.
We choose a small open neighbourhood N 1 ⊂ N of 0 in Lie G C such that the map ζ : G̃ × N 1 × B → X̃ given by ζ(g, n, z) = g · ψ(n, i(z)) has the property that ζ(g, n, z) = ζ(g ′ , n ′ , z ′ ) only if there is an element v ∈ Lie G̃ such that g ′ = g · exp(v) and exp(−v) exp(n) = exp(n ′ ). This is possible, because G̃ acts freely with totally real orbits. For x ∈ ζ(G̃ × N 1 × B) we define ξ(x) ∈ Ad(G C ) by ξ(x) = Ad(g · exp(n)) if x = ζ(g, n, z). Then ξ is a well-defined, holomorphic and G̃-equivariant map from a G̃-invariant open neighbourhood W 0 of p to Ad(G C ). Moreover ξ is constant along the orbits of the center Z of G̃. Therefore it induces a holomorphic map ξ 1 : W 1 → Ad(G C ), where W 1 denotes the image of W 0 in X 1 . Observe that Ad(G C ) is a linear complex Lie group. It follows that Ad(G C ) is Stein ([11]) and hence admits a strictly plurisubharmonic exhaustion function ρ 1 : Ad(G C ) → R + .
Next we consider the real quotient map τ : X̃ → X̃/G̃ = Y . Let y 1 , . . . , y r be real-analytic local coordinates on Y with y i (τ (p)) = 0. Then x → Σ i y i (τ (x)) 2 defines a G̃-invariant real-analytic function ρ 0 on a neighbourhood of G̃ · p, which is easily verified to be strictly plurisubharmonic near G̃ · p.
By appropriately shrinking W 0 and W 1 we may assume that there is an ǫ > 0 such that W 0 = {x : ρ 0 (x) < ǫ}.
Next we recall that by lemma 2 in [16] the natural map W 1 → (W 1 /G) × Ad(G C ) is proper.
Therefore ρ 1 ◦ ξ 1 + ρ ′ 0 , where ρ ′ 0 denotes the function induced on W 1 by the G̃-invariant function ρ 0 , is a continuous exhaustion function on W 1 . On the other hand, this function is also strictly plurisubharmonic.
Proof of the Main theorem
Here we prove our main theorem.
Proof. Let G̃ denote the universal covering of G, π : G̃ → G the natural projection and Γ = π −1 {e}. By prop. 2 there is a quotient G 1 of G̃ by a central discrete subgroup Γ 1 and a G 1 -action on a complex manifold Ω 1 which is free, proper and with totally real orbits. Moreover, there is a number N, a bounded domain D ⊂ C N and closed complex analytic subsets Z ⊂ D, E ⊂ C N and an embedding of Ω 1 as an open submanifold in Z such that the closure of Ω 1 in C N is contained in Z ∪ E. Fix p ∈ Ω 1 . We may replace Ω 1 by some appropriately chosen invariant open neighbourhood of G 1 · p. Therefore we may and do from now on assume that π 1 (Ω 1 ) ≃ π 1 (G 1 ) ≃ Γ 1 . (And we keep this assumption throughout all further replacements of Ω 1 by invariant open subsets of itself.) Let Ω̃ denote the universal covering of Ω 1 .
From prop. 7 we deduce that, after replacing Ω 1 with some G 1 -invariant open subset, we may assume that U/Γ is Stein for every open G̃-invariant locally Stein submanifold U of Ω̃.
Next we apply prop. 3, again replacing Ω 1 by an appropriate smaller G 1 -invariant open submanifold. Now Ω 1 has a smooth, real-analytic and strictly pseudoconvex boundary B in Z, and there is a nowhere dense real-analytic subset Σ ⊂ B such that for every x, y ∈ B \ Σ the CR-hypersurface germs (B, x), (B, y) are isomorphic if and only if x = g · y for some g ∈ G 1 .
Let φ ∈ Aut(Ω̃). Let τ : Ω̃ → Ω 1 be the covering map. We may assume that there is a G 1 -invariant open subset X ⊂ Z such that the closure of Ω 1 in Z is contained in X. Furthermore we may assume that the inclusion Ω 1 ֒→ X induces an isomorphism of the fundamental groups. Then τ : Ω̃ → Ω 1 extends to a covering τ ′ : X̃ → X with Ω̃ ֒→ X̃. Let Σ̃ and B̃ denote the preimages of Σ resp. B under τ ′ . By prop. 4 there is a sequence of points x n ∈ Ω̃ and points q, q̃ ∈ B̃ such that lim x n = q and lim φ(x n ) = q̃. By prop. 5 it follows that φ extends to a holomorphic map Φ in an open neighbourhood U of q in X̃. Because φ −1 extends to a holomorphic map near q̃ by the same arguments and φ ◦ φ −1 = id, this extension Φ is locally biholomorphic and Φ(B̃ ∩ U) ⊂ B̃.
For every z ∈ B̃ ∩ U the CR-hypersurface germs (B̃, z) and (B̃, Φ(z)) are isomorphic and consequently there is an element g z ∈ G̃ such that g z · z = Φ(z).
By prop. 6 it follows that there is one element g ∈ G̃ such that φ(z) = g · z for all z ∈ Ω̃. Thus φ ∈ G̃. Since φ was an arbitrary automorphism of Ω̃, it follows that Aut(Ω̃) = G̃. By lemma 4 this implies that Aut(Ω) = G where Ω = Ω̃/Γ. Finally let us discuss the Stein condition and hyperbolicity. Since Ω 1 injects into a bounded domain D ⊂ C N , it is hyperbolic. Because Ω̃ → Ω 1 and Ω̃ → Ω are both unramified coverings, this implies the hyperbolicity of Ω. Moreover, by the same arguments as in [16], we may conclude that Ω is even complete hyperbolic.
Concerning the Stein property, let us recall the application of prop. 7 further above. Our choice of Ω 1 at that point had the property that U/Γ was Stein for every locally Stein open G̃-invariant subset U of Ω̃. Subsequently we shrank Ω 1 , replacing it by some open subset with strictly pseudoconvex boundary. Clearly an open subset with strictly pseudoconvex boundary is locally Stein. Therefore Ω = Ω̃/Γ is Stein for our final choice of Ω 1 .
The Effectiveness of Oral Care Guideline Implementation on Oral Health Status in Critically Ill Patients
Intubated patients need specific oral care due to the use of endotracheal tubes, and an oral nursing care guideline is needed to guide nurses in providing it. This study tested the effectiveness of implementing such a guideline. Rogers' Diffusion of Innovations Theory was used to introduce an oral nursing care guideline to 28 nurses working in an intensive care unit in a hospital over 2 months, using mass and private communication within the hospital management system. The oral care guideline was applied to 47 intubated patients. The accuracy of oral care practice was assessed by nurse research assistants, and patients' oral health status was examined by dental nurse research assistants. The accuracy of practice among nurses ranged between 88% and 100%. In total, 97.47% (n = 46) of patients had an acceptable oral health status after receiving oral care based on the oral nursing care guideline. The oral nursing care guideline was implemented with high accuracy and increased patients' oral integrity after its implementation.
Introduction
Oral care is a vital procedure for critically ill patients in the intensive care unit (ICU). Oral care may affect clinical outcomes as well as the wellness of intensive care patients (Atay & Karabacak, 2014). The primary goal of oral care is to promote oral hygiene, to decrease microbial colonization in the oropharynx and dental plaque, and to reduce aspiration of contaminated saliva (Feider, Mitchell, & Bridges, 2010). In addition, oral care also helps to promote holistic patient care by increasing patient comfort (Adib-Hajbaghery, Ansari, & Azizi-Fini, 2013) and preventing halitosis (Coker, Ploeg, Kaasalainen, & Fisher, 2013).
An oral nursing care guideline (ONCG) is required to guide nurses in delivering appropriate oral care. Guidelines for oral care in intubated patients are widely available and followed in developed countries (Batiha et al., 2015). The same study reported that, by following an oral care guideline, ventilator-associated pneumonia (VAP) rates could be reduced by 50%, mechanical ventilator use was reduced from 7.3 days to 5.0 days, VAP onset was delayed from 2.3 days to 4.9 days, and mortality rates were reduced from 20% to 13.9% (Batiha et al., 2015). Many organizations recommend the application of an oral care guideline for critically ill patients. For example, the American Association of Critical-Care Nurses and the Centers for Disease Control and Prevention recommended oral care practices which include the use of chlorhexidine gluconate (0.12%) in oral care to reduce the risk of VAP (American Association of Critical Care Nurses, 2017; Tablan, Anderson, Besser, Bridges, & Hajjeh, 2004), suctioning subglottic secretions before brushing the teeth (Tablan et al., 2004), and brushing the patient's teeth for 3 to 4 minutes (American Association of Critical Care Nurses, 2017; Tablan et al., 2004).
While evidence on oral care practice in intubated patients in most developed countries is widely available, evidence about oral care practice in intubated patients in Indonesia is very limited. Research has shown that oral care in Indonesia was delivered by only approximately 60% of nurses, due to an imbalanced nurse-to-patient ratio, inadequate equipment, and nurses not understanding how to perform oral care based on the procedure (Setianingsih, Riandhyanita, & Asyrofi, 2017). The nurse-to-patient ratio in the ICU of the same study was 1:4, lower than the required standard of 1:1, and there was a shortage of oral solutions and oral equipment. Furthermore, nurses' knowledge and use of evidence-based oral care practices remained variable for oral care in ICU patients (Setianingsih et al., 2017).
In addition, a published ONCG for intubated patients in Indonesia is not available to date. Therefore, in this study, an ONCG for intubated patients with mechanical ventilators, modified to fit the local context, was introduced to nurses in an ICU in Indonesia using Rogers' Diffusion of Innovations Theory.
Literature Review
Poor oral health has been recognized to have consequences for some systemic diseases including respiratory diseases (Coker et al., 2013), specifically VAP (Yurdanur & Yagmur, 2016). VAP was a complication in 8% to 28% of patients receiving mechanical ventilation (Cirillo et al., 2015). The mortality rate of VAP was found to be between 24% and 60.90% and can reach 84.3% (Ganz et al., 2013; Inchai et al., 2015). The hazard of VAP is increased by intubation, since the body's primary reflexes for clearing aspirated microbes are impaired by the endotracheal tube (Khan et al., 2017). Microbes in the oral cavity, such as A. baumannii, cause VAP and can be controlled by regular oral care (Feider et al., 2010; Safdar, Crnich, & Maki, 2005). Oral care must be managed to prevent these microbes from recolonizing the mouth in critically ill patients admitted to the ICU.
Intubated patients with mechanical ventilators need specific oral care due to their condition. Careful consideration should be given to the technique, equipment, solution, and frequency of oral care. The endotracheal tube may cause debris accumulation and provides a perfect environment for microbial growth, causes the mouth to be open continuously and leads to xerostomia, drying of the mucous membrane, accumulation of dental plaque, and reduction of the distribution of salivary immune factor (Blot, Vandijck, & Labeau, 2008).
The endotracheal tube can limit oral inspection and access to oral care and cause hypersalivation by inducing a hyperactive gag reflex (Blot et al., 2008).
Management of oral care in intubated patients includes oral assessment and the selection of oral care equipment, solutions, and frequency (Yurdanur & Yagmur, 2016). Oral assessment resembles a diagnostic procedure, providing nurses with valuable information for effective and efficient treatment and about possible complications. First, the equipment for oral care must be selected based on benefits, convenience, harms, and other features, for example, its ability to remove plaque (Yurdanur & Yagmur, 2016). A toothbrush as part of standard oral care is preferred (Ames et al., 2011; Liao, Tsai, & Chou, 2015; Lorente et al., 2012; Prendergast, Jakobsson, Renvert, & Hallberg, 2012) for the removal of dental plaque in the oral cavity, as dental plaque is proven to be effectively removed by mechanical disruption (Needleman et al., 2011; Scannapieco et al., 2009). Second, it is recommended that solutions for oral care should not irritate the mucosa, should not cause dry mouth, and should be able to remove plaque. Solutions for oral care may include chlorhexidine, saline solution, and purified water. Studies showed that even a concentration of 0.12% to 0.2% chlorhexidine was still effective for the prevention of VAP (Ames et al., 2011; Liao et al., 2015; Needleman et al., 2011; Zuckerman, 2016). Third, the frequency of oral care varied between different studies. A recent study delivered oral care two times per day with a significant reduction of VAP (Yao, Chang, Maa, Wang, & Chen, 2011). To increase the effectiveness of oral care, the frequency should be determined by daily oral assessment (Ames et al., 2011; Yurdanur & Yagmur, 2016).
Rogers' Diffusion of Innovations Theory has been used in nursing practice to introduce new innovations. This theory comprises four main elements: the innovation, communication channels, time, and the social system (Rogers, 2003). Rogers (2003) described the innovation-decision process as ''an information-seeking and information-processing activity, where an individual is motivated to reduce uncertainty about the advantages and disadvantages of an innovation'' (p. 172). The innovation is then adopted into the organization through the innovation-decision process, which involves five steps: knowledge, persuasion, decision, implementation, and confirmation. These stages typically follow each other in a time-ordered manner under the influence of the social system. Rogers' Diffusion of Innovations Theory has been used to introduce a delirium screening test in mechanically ventilated patients (Bowen, Stanton, & Manno, 2012). Furthermore, the theory was also successfully used as a framework in the adoption of peripheral nerve block for orthopedic ambulatory surgery (Leggott et al., 2016). A study evaluating Braden Pressure Ulcer Screening Scale implementation has also been successfully conducted in Bangladesh (Banu, Sae-Sia, & Khupantavee, 2014). It was believed that the innovation-decision process and the four main elements of Rogers' (2003) Diffusion of Innovations Theory could successfully be used as a framework to implement the innovation of an ONCG for nurses working in an ICU context in Indonesia.
Objectives of the Study
The major aim of this study was to test the effectiveness of the implementation of ONCG for intubated patients in the ICU. The specific objectives were to examine nurses' accuracy of practice and patients' oral health status.
Design
This study used a developmental research design in which an ONCG for intubated patients was implemented over 2 months using mass and interpersonal communication within a hospital management system.
Research Questions
The research questions were as follows: (a) What percentage of oral care practices was performed accurately? and (b) What percentage of patients had good oral health status?
Sample
Participants in this study consisted of 28 nurses working in the ICU as nurse participants and 47 intubated patients admitted to the ICU during the study period as patient participants. All ICU nurses from the study site were included as nurse participants except the head nurse and five nurses who were research assistants (RAs).
Inclusion or Exclusion Criteria
For patient participants, all patients admitted to the study setting who met the following inclusion criteria were recruited: (a) aged 17 years or more and (b) orally intubated with a mechanical ventilator. The exclusion criteria were (a) edentulous, (b) facial fracture or trauma affecting the oral cavity, (c) unstable cervical fracture, and (d) unstable vital signs.
Ethical Considerations
This study was approved by the institutional review board at the university with Certificate of Approval No. 2017 NSt -Qn 048 and by the Research Committee at the hospital in which this study was performed. Permissions from the head of the ICU department, the head nurse, and the nurse participants were obtained prior to the study. Patient consent was obtained from patients or patients' families using consent forms that followed the hospital's standardized format.
Settings
This study was done in a 17-bed ICU in a hospital in Indonesia. The ICU is a general unit admitting mixed surgical and medical cases considered life-threatening and requiring ventilator support. Each bed is equipped with a mechanical ventilator and a bedside monitor; heart rate, oxygen saturation, blood pressure, and cardiac rhythm are continuously monitored. Each shift has six nurses: one nurse is dedicated to a single cardiac surgery patient, four nurses are each responsible for three patients, and one nurse is responsible for four patients.
Guideline Development
The ONCG used in this study consisted of six components: oral assessment, preparation of equipment and patients, oral care procedure, oral reassessment, patient monitoring and care, and documentation. The ONCG was developed from the American Association of Critical Care Nurses' Endotracheal Tube and Oral Care (AACN's ETT & OC) procedure (Wiegand, 2011) and current evidence (Feider et al., 2010; Munro & Grap, 2004; Prendergast, Hagell, & Hallberg, 2011), with adjustments to local conditions. During guideline development, the department head, the head nurse, and staff nurses of the ICU were contacted to obtain information about the current situation in the study setting, particularly how nurses deliver oral care to intubated patients. This information was used to adjust the ONCG for use in the study setting, and the ONCG makes use of equipment available there. Furthermore, the ONCG was translated into Bahasa Indonesia to ensure easy understanding by nurses and hospital management without language barriers. The ONCG was validated by five experts: lecturers in nursing and dentistry, a nurse practitioner, and a dentist. Validity was checked with a result of 1.0, and interrater reliability yielded a Kappa value of .96; interrater reliability was tested between the researcher and five nurse RAs in 10 intubated patients with mechanical ventilators.
Measurement Tools
Four instruments were used in this study: a Nurse Demographic Data Questionnaire (Nurse-DDQ), a Patient Demographic Data Questionnaire (Patient-DDQ), the Checklist Form of Accuracy of Oral Nursing Care Practice (AOP), and the mucosal-plaque score (MPS) for oral health status. The Checklist Form of AOP had 25 items, each with a three-option response format: correctly practiced, incorrectly practiced, and not practiced. A score of 1 was given to correct practice, and a score of 0 to incorrect or absent practice for each item. The total AOP score was transformed into a percentage by dividing the number of correctly performed items by 25 and multiplying by 100. The AOP was observed by nurse RAs once between the fifth and eighth weeks to evaluate each nurse participant's accuracy of oral nursing care practice, while the MPS was assessed by dental nurse RAs once before the first oral care and once between the fifth and eighth weeks to assess the patient's oral integrity after receiving oral care. The MPS describes oral integrity, with total scores ranging from 2 to 8: a score of 2 to 4 indicates good or acceptable oral integrity, 5 to 6 indicates unacceptable oral integrity, and 7 to 8 indicates poor oral integrity. The ONCG, the Checklist Form of AOP, and the MPS were originally written in English and were translated into Bahasa Indonesia using the back-translation technique.
The Checklist Form of AOP and the MPS validity were assessed by five experts with S-CVI results both equal to 1.0. The reliability of the AOP was tested by the researcher and nurse RAs in 10 patients and resulted in a Kappa value of .96. The reliability of the MPS was evaluated by dental nurse RAs and the researcher in 10 patients and yielded a Kappa value of .92.
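The AOP percentage and the MPS classification described above can be sketched in code. This is only an illustration of the stated scoring rules; the function names are ours, not from the study's materials.

```python
def aop_percentage(item_scores):
    """AOP: 25 checklist items, each scored 1 (correct) or 0 (incorrect
    or not practiced). Returns the accuracy of practice as a percentage."""
    if len(item_scores) != 25:
        raise ValueError("the AOP checklist has exactly 25 items")
    return sum(item_scores) / 25 * 100

def mps_category(total_mps):
    """Classify a total mucosal-plaque score (range 2-8) using the
    cut-offs stated in the text."""
    if not 2 <= total_mps <= 8:
        raise ValueError("total MPS ranges from 2 to 8")
    if total_mps <= 4:
        return "good/acceptable"
    if total_mps <= 6:
        return "unacceptable"
    return "poor"
```

For example, a nurse performing 22 of the 25 items correctly scores 88%, the lowest accuracy reported in this study.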
Implementation Strategies
This study used Rogers' (2003) Diffusion of Innovation Theory to guide the implementation of the ONCG for intubated patients in the ICU. The implementation used the four elements of the theory: (a) the ONCG as the innovation; (b) a workshop, booklet, printed presentation slides, demonstrations, and private coaching as communication channels; (c) a period of 2 months as the time; and (d) the hospital management as the social system. The ONCG consists of six steps: oral assessment, preparation of equipment and patients, oral care procedure, oral reassessment, monitoring and care, and documentation. The workshop was arranged as two 1-day workshops for two groups of nurses in the first week, and all nurse participants were required to attend according to their shift schedules. During the workshop, a dentist delivered a presentation on the definition of oral care, its purposes, complications of irregular oral care, the importance of oral care in intubated patients, and oral care techniques. The researcher explained the oral assessment and the details of the ONCG to the nurse participants. Booklets and printed presentation slides were provided to all attendants at the end of the workshop; the booklets contained the ONCG accompanied by an image-guided oral assessment tool for easy implementation and a common understanding among participants. Demonstrations were arranged after the workshop to show the oral assessment and the oral care process according to the ONCG in a real-life situation, and all participants were required to attend. Private coaching sessions were provided for all participants to ensure that the innovation was clearly understood.
The coaching sessions were provided during the first to the fourth week by accompanying participants one-to-one and discussing the oral assessment and the oral care process step-by-step according to the ONCG. The time frame of this research was 2 months: the first month was focused on the implementation process, and the second month was dedicated to evaluation and data collection. An ICU in a hospital in Indonesia served as the hospital management (social) system targeted for implementation of the ONCG. Nurses in the ICU were the targets of implementation, while the ICU's head nurse and department head were the support system. The ICU's head nurse was actively involved in supporting the research by encouraging guideline-based oral care in intubated patients.
Data Collection
Seven RAs, five nurse RAs and two dental nurse RAs, were recruited for data collection. The five nurse RAs were recruited from among the ICU nurses to assess the accuracy of oral nursing care practice and were therefore excluded from the study sample. In addition, two dental nurse RAs from another department of the hospital were recruited to assess the oral integrity of intubated patients. From the first to the fourth week after the workshop, all nurse participants practiced delivering oral care based on the guideline under supervision. Nurses were accompanied when performing oral care to boost their confidence and to receive detailed explanations until all nurses understood and were confident in performing oral care by themselves.
Patient participants were given oral care by nurse participants based on the ONCG from the first day of intubation until extubation. The daily oral care frequency was determined using an oral assessment, as practiced in a previous study (Chalmers, King, Spencer, Wright, & Carter, 2005): a patient in a healthy oral condition needs oral care 2 times a day, a patient in a poor oral condition needs oral care 3 times a day, and a patient in an unhealthy oral condition needs oral care every 4 hours. The AOP and MPS data were collected during the fifth to the eighth week.
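The assessment-to-frequency rule above can be sketched as a small lookup. This is a hedged illustration: the condition labels and function name are ours, and mapping "every 4 hours" to six sessions per day is our interpretation.

```python
def daily_oral_care_frequency(oral_condition):
    """Map a daily oral assessment result to oral care sessions per day,
    following the schedule described in the text (Chalmers et al., 2005):
    healthy -> 2x/day, poor -> 3x/day, unhealthy -> every 4 hours
    (interpreted here as 6x/day)."""
    frequencies = {"healthy": 2, "poor": 3, "unhealthy": 6}
    try:
        return frequencies[oral_condition]
    except KeyError:
        raise ValueError(f"unknown oral condition: {oral_condition!r}")
```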
Data Analysis
Data obtained from data collection were analyzed using descriptive statistics: percentage, mean, minimum-maximum, and standard deviation. Data were analyzed using computer statistical software.
Nurse Participant Demographic Characteristics
There were 28 nurses who participated in this study. The mean age of participants was 32.75 years (SD = 5.60), and 19 nurses (67.9%) were female. The majority of nurse participants held an associate degree or diploma in nursing (60.7%). The duration of service in the ICU ranged from 4 months to 17.83 years, with a mean of 5.12 years (SD = 4.17 years). None of the nurses had ever received formal training in oral care for intubated patients (Table 1).
Patients' Demographic Characteristics
There were 47 intubated patients who participated in this study. The mean age of patient participants was 48.43 years (SD = 16.11). Most of the patient participants were postsurgery (89.4%). Eighty-five percent of participants (n = 40) were intubated and supported with mechanical ventilators for 1 to 3 days, with a median of 2 days (SD = 1.61). The characteristics of the patient participants are presented in Table 2.
The Accuracy of Oral Nursing Care Practice in ICU
The results showed that 50% of nurse participants (n = 14) had 100% accuracy of practice, 42.9% (n = 12) had 90% to 99% accuracy, and 7.1% (n = 2) had 88% accuracy. The complete presentation of the accuracy of oral nursing care practice per dimension is available in Table 3.
Patients' Oral Health Status
The results showed that before application of the ONCG, the majority of patient participants (n = 41, 87.2%) had a good or acceptable MPS status, while four patients (8.5%) had abundant amounts of confluent plaque (Table 4). After the ONCG was delivered, the majority of patient participants (n = 46, 97.9%) had a good or acceptable MPS status, and none of the patients (0%) had abundant amounts of confluent plaque (Table 4). Note. SD = standard deviation; HHD = hypertensive heart disease.
Discussion
The results of the study showed that the implementation of ONCG using Rogers' Diffusion of Innovation Theory helped to increase the accuracy of oral care practice and improve patients' oral integrity.
Accuracy of Practice
Accuracy of practice after the ONCG implementation in the ICU was high. Several factors influenced this result and are explainable from the perspective of Rogers' Diffusion of Innovation Theory. The attributes of the innovation itself may contribute to the adoption process; as Rogers (2003) stated, the perceived attributes of an innovation, such as relative advantage, compatibility, complexity, trialability, and observability, affect its rate of adoption. First, the innovation itself is of low complexity and simple enough for easy understanding: the ONCG contains only six basic elements, namely oral assessment, preparation, oral care, patient monitoring, oral reassessment, and documentation. Moreover, the ONCG has high trialability, contains elements of regular oral care procedures, and is based on up-to-date evidence. Second, the communication channels used in this study might also have played an important role in the success of implementation. The five steps of innovation diffusion, namely knowledge, persuasion, decision, implementation, and confirmation, were incorporated into the communication channels: mass media included the workshop, booklet, presentation printout, and demonstrations, while interpersonal communication included private coaching and consultative sessions. Private coaching and consultation were used to increase nurse participants' confidence in performing oral care based on the guideline, and interpersonal communication is believed to be the most powerful and effective approach (Rogers, 2003). Third, the time dimension used in this study is believed to have been appropriate for the successful implementation of the ONCG.
This is similar to recent studies (Banu et al., 2014; Bowen et al., 2012) that successfully used a 2-month period for implementing innovations in a single, limited health-care setting. In addition, persuasion from the head nurse, senior nurses, and the researcher during the workshop and implementation phases also promoted a positive attitude toward oral care in intubated patients. Senior nurses in each shift were encouraged to motivate and inspire other nurses to follow the oral care guideline during their shift; this method had also been used in a recent study (Bowen et al., 2012) with good results.
In this study, the results showed that nurse participants had 88% to 100% accuracy of oral care practice. This high percentage indicates that nurses had adopted the ONCG into the ICU context in Indonesia. This successful oral care practice is due to the process of enhancing the ICU nurses' knowledge and practice of oral care. In addition, the private coaching and consultative sessions provided by the principal researcher would enhance nurses' confidence in performing oral care for critically ill patients on ventilators.
The main obstacle for most nurse participants was the preparation part, specifically the step of placing a tissue or towel across the patient's chest during oral care, because some nurses were not accustomed to using a towel or tissue in any health-care procedure. When a nurse is not used to supplying a tissue or towel in regular care, this step of the ONCG tends to be skipped. The remaining 7.1% of nurse participants had an accuracy of 80% to 89%, probably because these nurses could not yet remember all steps in sequence while the ONCG was newly implemented in the ICU. Although a previous study (Ibrahim, Mudawi, & Omer, 2015) showed that working experience and educational background were related to nursing care practice, this association was not found in this study: 50% of junior nurses (working experience of less than 5 years) had a high accuracy of practice, the same proportion as senior nurses (working experience of 5 years or more). Moreover, education level was also not related to accuracy of practice.
Patients' Outcome
The results of this study showed that patients' oral health status improved after receiving oral care based on the guideline. The number of patients with a good or acceptable mucosal score, plaque score, or MPS increased after receiving oral care, and the number of patients with an unacceptable or poor status was reduced. This result might be attributed to the implementation of the ONCG, particularly the continuous oral care and monitoring, which ensured that patients' oral condition could be maintained. This is similar to the study by Ames et al. (2011), in which oral assessment was used to determine the frequency of oral care and oral assessment scores improved after oral care. In this study, oral care was delivered following the ONCG, which required an oral assessment prior to oral care delivery to determine the oral care frequency, as practiced in previous studies (Ames et al., 2011; Prendergast et al., 2011), and all patients received oral care a minimum of twice a day as recommended (Ames et al., 2011; Prendergast et al., 2011). The combination of 0.12% chlorhexidine and a toothbrush was effective in removing dental plaque, as shown by the oral reassessment after oral care delivery: the plaque score improved greatly, with fewer patients showing visible plaque deposition, and the mucosal score improved slightly, with a small increase in the number of patients with normal gingiva and mucosa. This is in accordance with previous studies (Khan et al., 2017; Needleman et al., 2011), which reported that oral care is effective in eliminating bacterial colonization in dental plaque and in preventing and treating gingivitis.
Strengths and Limitations
This study has several strengths. First, this is the first reported study to implement ONCG for intubated patients in Indonesia. Second, the ONCG as the innovation was easy to use, very detailed, simple, and cost-effective. Third, the tools used in this study were available in English and Bahasa Indonesia and have been validated and tested for reliability with high validity and high reliability. Finally, the data were collected by RAs without interference from the researcher.
This study also has limitations. The sample size was small, so the results cannot be generalized to other hospital settings. In addition, the ONCG was developed for use in a hospital in a developing country, where the context may differ from other settings.
Implications for Practice
The ONCG was found to be effective in improving patients' oral integrity. Therefore, it is recommended that the ONCG be formally adopted as a standard procedure in the care of intubated patients. Furthermore, it is highly recommended that an oral assessment always be performed before oral care, and that oral care always be based on the assessment result, with a minimum of 2 times a day.
Conclusions
Results of this study showed that the ONCG for intubated patients with mechanical ventilators was implemented effectively in the ICU, with an accuracy of practice of 88% or more. Oral health status, as the patient outcome measure, was found to be acceptable after oral care based on the ONCG. Accordingly, it is recommended that hospitals formally ratify the ONCG as a standard procedure in their ICUs, and that hospital management follow Rogers' diffusion theory when adopting modern health-care technologies or innovations. Furthermore, it is highly recommended that nurses perform an oral assessment before delivering oral care.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Graduate School, Prince of Songkla University through Thailand Education Hub for ASEAN Countries (TEH-AC) Scholarships.
"year": 2019,
"sha1": "2e393b08d75353128e584eb0b177f58122ef42b4",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/2377960819850975",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85d83904c6b13f13c51b3181fce52d64ba404c9b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A vesicle-aggregation-assembly approach to highly ordered mesoporous γ-alumina microspheres with shifted double-diamond networks† †Electronic supplementary information (ESI) available. See DOI: 10.1039/c8sc02967a
Highly ordered mesoporous γ-alumina microspheres with a shifted double-diamond mesostructure have been synthesized via a vesicle-aggregation-assembly approach.
Introduction
Porous alumina materials have received considerable attention owing to their wide applications in catalysis and adsorption realms, 1-8 the performance of which is strongly affected by their porosity, microstructures and crystalline phases. Enormous efforts have been devoted to the structural manipulation of mesoporous alumina, [9][10][11][12][13][14] ascribed to the superior features of ordered mesostructures, such as the relatively large pore size, high pore volume and low diffusion resistance. Nevertheless, different from the controllable sol-gel chemistry of silicates, [15][16][17][18][19] the fast hydrolysis-condensation reactions of alumina precursors always result in phase separation or undesirable co-assembly with templates. On the other hand, the metastable transition phases of alumina also have a great influence on their performance. Especially, γ-phase alumina possesses various excellent properties (e.g., hardness, corrosion resistance, hydrolytic stability, amphoteric character and thermal stability), making it a promising candidate for further applications. 20 However, to obtain alumina materials with the γ-phase, high-temperature treatment is usually inevitable, which always results in the destruction of the formed morphology and mesostructure. Therefore, it is highly desirable but rather difficult to fabricate mesoporous alumina materials with both highly ordered mesostructures and γ-phase frameworks simultaneously.
Until now, several strategies have been performed to prepare such materials, including the nano-casting route and the surfactant-directing route. For the former, ordered mesoporous carbons, owing to their predesigned ordered mesostructures, rigid frameworks and easy selective removal, 21,22 are usually adopted as a hard template to synthesize ordered mesoporous γ-aluminas. With ordered mesoporous carbon CMK-3 as a rigid template, Zhang and co-workers have prepared ordered mesoporous γ-alumina by repetitive filling and hydrolysis processes and a subsequent stepwise calcination procedure. 23 By functionalizing the ordered mesoporous carbon with nitric acid and controlling the infiltration times, Wu et al. demonstrated an effective approach to obtain ordered mesoporous γ-alumina replicas with different pore architectures. 24 Nevertheless, these synthetic routes are laborious and time-consuming, and are poorly suited to mass production. In contrast, the surfactant-directing route to prepare ordered mesoporous materials is more feasible and economical. However, for the alumina precursors, the susceptibility to hydrolysis and the tendency to crystallize at high temperature always result in the formation of disordered mesostructured aluminas, [25][26][27] causing great trouble in the controlled synthesis of ordered mesoporous γ-aluminas. Until now, only a few studies on the synthesis of such materials have been reported with this approach. 8,10,11,28 By using AlCl3·6H2O as a precursor and the block copolymer poly(ethylene-co-butylene)-b-poly(ethylene oxide) (KLE) as a soft template, Kuemmel et al. have reported the synthesis of ordered mesoporous γ-alumina through a dip-coating approach. 10 Detailed research on the preparation of mesoporous γ-alumina was performed by Yan and co-workers with organic alumina as precursors and ethylene oxide-based surfactants as templates in the presence of additives such as nitric or citric acid.
11 Despite these successes, some drawbacks, such as the strict control of relative humidity, the assistance of chelants, the use of expensive surfactants and the adoption of complicated synthetic procedures, are always encountered in the soft-template-based synthesis. Furthermore, the pore sizes of these mesoporous alumina materials are all less than 20 nm, impeding effective mass transportation, especially in macromolecule-involved applications.
Recently, taking full advantage of the interlocking property of the bicontinuous mesostructures (double gyroid, 17,29-37 double diamond [38][39][40][41][42] and double primitive [43][44][45][46][47][48][49]), Che and co-workers have demonstrated a novel approach to prepare ordered inverse materials (silica and titania scaffolds) with ultra-large mesopores (pore size > 50 nm) by using lab-synthesized block copolymers as templates through a multilayer core-shell bicontinuous microphase-templating route. 39,41,43 However, the morphology control of these novel mesoporous materials has not been achieved yet. Furthermore, this method was not successful in preparing mesoporous alumina analogues. Therefore, the synthesis of such ultra-large-pore mesoporous γ-alumina materials with a highly ordered mesostructure and a well-defined morphology simultaneously is still a great challenge and has not been reported yet.
Herein, for the first time, we report a facile, repeatable synthesis of highly ordered, ultra-large-pore mesoporous γ-alumina microspheres with a shifted double-diamond mesostructure via a new vesicle-aggregation-assembly approach. With aluminum isopropoxide as a source and the amphiphilic diblock copolymer poly(ethylene oxide)-b-poly(methyl methacrylate) (PEO-b-PMMA) as a soft template, the as-made Al3+-based gel/PEO-b-PMMA composite microspheres can be obtained via hydrogen bonding interaction and co-assembly induced by the evaporation of the acidic tetrahydrofuran (THF)/H2O mixed solvents. These composite microspheres possess a diameter of 1-12 μm and a unique inverse bicontinuous mesostructure (double diamond, Pn-3m). After direct calcination at 900 °C in air, the composite microspheres can be transformed into mesoporous γ-alumina microspheres with retained morphology, although the shrinkage in the unit cell size is about 27.5%. Meanwhile, the mesostructural symmetry lowers to a shifted inverse bicontinuous mesostructure (shifted double diamond, Fd-3m) owing to the leaning of the two intertwined but disconnected networks. The highly ordered mesoporous γ-alumina materials exhibit ultra-large mesopores (~72.8 nm), bicontinuous columnar frameworks and high thermal stability (as high as 900 °C). It is remarkable that the mesoporous frameworks are composed of fully crystallized γ-alumina nanoparticles with an average size of ~15 nm. The ordered mesoporous γ-alumina materials can be employed as a support for Au nanoparticles, and the formed Au/mesoporous γ-alumina composites show excellent performance in the catalytic reduction of 4-nitrophenol.
Synthesis of ordered mesoporous aluminas
The diblock copolymer poly(ethylene oxide)-block-poly(methyl methacrylate) (PEO-b-PMMA) was prepared using an atom transfer radical polymerization (ATRP) method. The structural formula was calculated to be PEO113-b-PMMA335 according to the 1H nuclear magnetic resonance (1H NMR) spectra, and the polydispersity index (PDI) was 1.31 based on gel permeation chromatography (GPC) tests (Fig. S1, ESI†). The detailed synthetic steps and characterizations are shown in the ESI.† The ordered mesoporous alumina samples were synthesized via a solvent-evaporation-induced vesicle-aggregation-assembly approach. In a typical procedure, aluminum isopropoxide (400 mg) was added into a THF solution (15 mL) containing PEO-b-PMMA (80 mg) with stirring for 30 min. Sequentially, concentrated hydrochloric acid (2.0 mL) was added into the above solution, followed by further stirring for 30 min. The obtained clear solution was poured into a Petri dish with a diameter of 15 cm to evaporate the reaction solvents at room temperature for 48 h, and further dried at 40 °C for 48 h in an oven. The as-made Al3+-based gel/PEO-b-PMMA composites were scraped and collected. The composites were calcined in air with a ramp of 1 °C min⁻¹ to 400 °C for 4 h, followed by a ramp of 10 °C min⁻¹ to a certain temperature (500-900 °C) for 1 h.
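As a rough illustration of the calcination program described above, the total furnace time can be computed from the ramp rates and hold times. This sketch assumes a 25 °C starting temperature, which the text does not state, and the function names are ours.

```python
def ramp_minutes(t_start_c, t_end_c, rate_c_per_min):
    """Time in minutes to ramp between two temperatures at a fixed rate."""
    return abs(t_end_c - t_start_c) / rate_c_per_min

def total_program_hours(final_temp_c, start_c=25.0):
    """Total furnace time for the program in the text: 1 °C/min to 400 °C,
    hold 4 h, then 10 °C/min to final_temp_c, hold 1 h."""
    minutes = (ramp_minutes(start_c, 400.0, 1.0) + 4 * 60
               + ramp_minutes(400.0, final_temp_c, 10.0) + 1 * 60)
    return minutes / 60
```

For the 900 °C program this gives roughly 12 h of total furnace time, most of it spent on the slow initial ramp that protects the mesostructure during template removal.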
The procedure of Au-loading on mesoporous g-alumina and catalytic reduction of 4-nitrophenol were carried out according to previous reports 50,51 and are shown in the ESI. †
Measurements and characterization
Field-emission scanning electron microscopy (FESEM) images were taken using a Hitachi S4800 scanning electron microscope. Samples were directly dispersed onto conductive tapes attached to a sample holder for observation under vacuum. Transmission electron microscopy (TEM) experiments were conducted on a JEOL JEM-2100F microscope (Japan) operated at 200 kV. The samples for TEM measurements were suspended in ethanol and dropped onto Cu grids. To investigate the interior structures of the resultant mesoporous alumina microspheres, samples were embedded in a resin and cut into thin sections with a thickness of ~100 nm in an ultramicrotome. Small-angle X-ray scattering (SAXS) measurements were carried out on a Xenocs XeUss 2.0 small-angle X-ray scattering system. Nitrogen sorption isotherms were measured at 77 K with a Micrometrics Tristar 2420 analyzer. Before measurements, all samples were degassed under vacuum at 180 °C for 6 h. The Brunauer-Emmett-Teller (BET) method was utilized to calculate the specific surface areas using adsorption data in a relative pressure range from 0.075 to 0.225. Using the Barrett-Joyner-Halenda (BJH) model, the pore volumes and pore-size distributions were derived from the adsorption branches of the isotherms, and the total pore volumes were estimated from the adsorbed amount at a relative pressure P/P0 of 0.995. Wide-angle X-ray diffraction (XRD) measurements were conducted on a Bruker D8 Advance X-ray diffractometer using a Cu Kα radiation source (λ = 1.5406 Å). 27Al MAS NMR experiments were performed on a Bruker 400WB AVANCE III spectrometer with 4 mm ZrO2 rotors, spun at 12 kHz. Single-excitation pulse experiments were performed with a 10° pulse width of 0.33 ms, an acquisition time of 10 ms and a relaxation delay of 0.3 s. The chemical shifts were referenced to a 1.0 M AlCl3 solution. The UV-vis spectra were recorded on a PerkinElmer Lambda 750S UV-vis spectrometer at 25 °C.
Results and discussion
FESEM images show that the as-made Al3+-based gel/PEO-b-PMMA composites prepared from the vesicle-aggregation-assembly approach are sphere-like particles with a wide size distribution (diameter 1-12 μm) (Fig. S2, ESI†). High-magnification SEM images of a single microsphere reveal that the surface of the as-made microsphere is composed of obvious crystal facets (Fig. 1a and b). Furthermore, two intertwined but disconnected nanorod networks embedded in a matrix can be clearly observed (Fig. 1b, marked by purple and green arrows), implying that a bicontinuous cubic mesostructure is formed. The shortest circuit of the ordered mesostructure is composed of six points forming a regular hexagon (Fig. 1b inset), indicating a bicontinuous double-diamond mesostructure (space group Pn-3m) based on Wells' theory. 52 The unit cell size is calculated to be ~131 nm from the d-spacing value (d110) of the double-diamond mesostructure. After direct calcination at 900 °C in air, both the spherical morphology and the ordered mesostructure are well retained (Fig. 1c-e), indicating that the obtained mesoporous alumina microspheres have high thermal stability. A shifting of the two intertwined but disconnected alumina networks occurs (Fig. 1e, marked by purple and green arrows), suggesting that the mesostructural symmetry lowers from Pn-3m (double diamond) to Fd-3m (shifted double diamond or single diamond). 33,42 In addition, the unit cell size of the mesoporous alumina microspheres reduces to ~95 nm, which is much smaller than that (~131 nm) of the as-made Al3+-based gel/PEO-b-PMMA composites, indicating a large shrinkage (~27.5%) of the alumina frameworks due to the crystalline-phase transformation and further condensation. More importantly, it can be clearly observed that the interior of the microspheres possesses a highly ordered mesostructure (Fig. 1f), which is also composed of two shifted alumina networks.
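The unit-cell estimates above follow from the cubic lattice relation a = d·√(h² + k² + l²), and the quoted shrinkage is simply the relative change in unit cell size. A minimal sketch (function names are illustrative):

```python
from math import sqrt

def cubic_unit_cell(d_spacing_nm, h, k, l):
    """Cubic lattice parameter a from an observed d-spacing of the
    (hkl) reflection: a = d * sqrt(h^2 + k^2 + l^2)."""
    return d_spacing_nm * sqrt(h * h + k * k + l * l)

def shrinkage(a_before_nm, a_after_nm):
    """Fractional shrinkage of the unit cell on calcination."""
    return (a_before_nm - a_after_nm) / a_before_nm
```

With the d110 spacing of the as-made composite, a = d110·√2 ≈ 131 nm, and the reduction to ~95 nm after calcination corresponds to a shrinkage of ~27.5%, consistent with the values in the text.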
Therefore, these results clearly indicate that the whole alumina microspheres are composed of such intertwined but disconnected scaffolds with a uniform mesostructure.
TEM images of a microsection show that the mesoporous alumina microsphere prepared by the vesicle-aggregation-assembly approach after calcination at 900 °C in air is composed of many irregular but ordered domains (Fig. 2a); the corresponding diffraction patterns are shown in Fig. 2d-f, respectively. According to the diffraction points, the unit cell size is calculated to be ~97 nm, in agreement with that (~95 nm) obtained from the SEM results. In addition, high-resolution TEM (HRTEM) images of the alumina frameworks clearly show a lattice spacing of 0.199 nm corresponding to the d400 of γ-alumina (Fig. 2b and c), indicating that the mesoporous alumina microspheres are composed of highly crystallized γ-alumina networks.
The SAXS pattern of the as-made Al3+-based gel/PEO-b-PMMA composites can be assigned to the possible reflections of Pn3m symmetry (double diamond) (Fig. 3A(a)), although the intensity of these peaks is too weak to strictly prove the attribution of the structural symmetry. The broadened and weak SAXS peaks should be attributed to the slight distortion of the formed mesostructure in each ordered domain, which inevitably occurs during the microphase separation process in the confined space. After calcination at 900 °C in air, the scattering peaks of the mesoporous alumina appear at different locations (Fig. 3A(b)), suggesting that a change of structural symmetry occurs after the removal of the amphiphilic block copolymer templates. Furthermore, the allowed reflections of the Fd3m symmetry (single diamond), with a unit cell size of ~99 nm based on the first 111 reflection, are consistent with the SAXS pattern of the obtained mesoporous γ-alumina. Therefore, these results further confirm a structural change to the Fd3m symmetry during the calcination, matching well with the SEM and TEM results. Nitrogen sorption isotherms of the ordered mesoporous alumina microspheres prepared by the vesicle-aggregation-assembly approach after calcination at 900 °C in air exhibit representative type IV curves with a sharp capillary condensation step in a relative pressure range of 0.95-0.99 (Fig. 3B), suggesting ultra-large mesopores. The hysteresis loop indicates high permeability between mesopores. The BET surface area and pore volume are calculated to be as low as 52 m2 g−1 and 0.34 cm3 g−1. The relatively low surface area should mainly be attributed to the undetectable contribution of micropore surface area, because almost no micropores are left in the highly crystallized columnar frameworks after the high-temperature calcination process. The pore size distribution derived from the adsorption branch reveals a pore size centred at around 72.8 nm (Fig. 3C), which corresponds to the porous space created by the intertwined but disconnected scaffolds.
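For a cubic mesostructure such as Fd3m, the unit-cell size follows from any interplanar spacing via a = d_hkl·sqrt(h² + k² + l²). The sketch below illustrates this relation for the first 111 reflection; since the measured d-spacing is not quoted in the text, the d111 value here is back-calculated from the reported a ≈ 99 nm and is purely illustrative:

```python
import math

def cubic_cell_size(d_hkl, h, k, l):
    """Cubic lattice: unit-cell edge a = d_hkl * sqrt(h^2 + k^2 + l^2)."""
    return d_hkl * math.sqrt(h**2 + k**2 + l**2)

# Illustrative d_111 back-calculated from the reported a ~ 99 nm
d_111 = 99.0 / math.sqrt(3)   # ~57.2 nm
print(round(cubic_cell_size(d_111, 1, 1, 1), 1))  # 99.0
```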
Thermogravimetric analysis (TGA) experiments were employed to elucidate the transformation process from the as-made Al3+-based gel/PEO-b-PMMA composites to the mesoporous γ-alumina microspheres. Four obvious weight-loss intervals can be observed (Fig. S3b, ESI†): (i) a preliminary weight loss of ~10% below 110 °C; (ii) an intense weight loss of ~30% between 110 and 160 °C; (iii) a slow weight loss of ~20% between 160 and 350 °C; (iv) a further loss of ~10% up to 400 °C, followed by an almost stable weight until 900 °C. The first two weight-loss intervals below 160 °C are attributed to the massive loss of H2O and chlorine compounds from the Al3+-based gels with increasing temperature (the removal of chlorine compounds is determined according to the following XRD results). 53,54 The third weight loss between 160 and 350 °C occurs over the same interval as that of the template PEO-b-PMMA (Fig. S3a, ESI†), indicating the decomposition of the block copolymer. Finally, a further weight loss between 350 and 900 °C can be associated with the removal of H2O during the condensation and crystalline-phase transformation of the alumina. 54 The wide-angle XRD pattern of the as-made Al3+-based gel/PEO-b-PMMA composites shows many well-resolved peaks (Fig. 4A), which can be indexed to the crystalline structure of synthetic chloraluminite (JCPDS no. 44-1473). The relative intensities of the diffraction peaks of the composites and of synthetic chloraluminite are somewhat different, implying different contents of Al3+-based compounds in the Al3+-based gels. After the temperature reaches 100 °C, the relative intensities of these diffraction peaks are completely different from those of the as-made Al3+-based gel/PEO-b-PMMA composites (Fig. S4A, ESI†), indicating that the contents of the components in the Al3+-based gels change greatly with increasing temperature. Subsequently, no diffraction peak can be observed at 150 °C (Fig. S4B(b), ESI†), suggesting a rearrangement of the Al phase, in accordance with the intensive weight loss over this interval in the TGA curve. Afterwards, the frameworks are still composed of amorphous alumina after the removal of the block copolymer template at 400 °C (Fig. 4B(a)); they begin to crystallize at 700 °C (Fig. 4B(d)) and fully transform into the γ-alumina phase at 900 °C (Fig. 4B(f), JCPDS no. 10-0425). The average crystal size is calculated to be about 15 nm using Scherrer's equation. In addition, the elemental mapping image, together with the energy-dispersive X-ray (EDX) spectrum, shows that the Cl element is uniformly dispersed at a high content (up to 26.83 wt%) in the as-made Al3+-based gel/PEO-b-PMMA composites (Fig. S5, ESI†), further demonstrating that the Al3+-based gels are composed of various Cl-rich compounds. After calcination at 900 °C in air, no Cl element is detected in the EDX spectra of the obtained mesoporous alumina, and the atomic ratio of Al to O is close to 2 : 3 (Fig. S6, ESI†), suggesting a transformation of the frameworks to Al2O3. 27Al MAS NMR spectra of the as-made Al3+-based gel/PEO-b-PMMA composites reveal one resonance signal close to 0 ppm (Fig. 4C), suggesting that the Al3+ ions mainly exist in a 6-fold coordinated form. After calcination at 400 °C to remove the template, three bands at around 63, 33 and 7 ppm are observed (Fig. 4D(a)), which can be assigned to 4-fold (AlO4), 5-fold (AlO5) and 6-fold (AlO6) coordination, respectively. 55 As the temperature increases, the resonance band for 5-fold coordinated Al3+ ions gradually weakens and completely disappears at 900 °C (Fig. 4D), which is related to the transformation of the crystalline phase from amorphous alumina to γ-alumina, agreeing well with the results from the wide-angle XRD measurements. An intermediate state of the reaction mixture after evaporation for 2 h was also captured (Fig. S7, ESI†).
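The average crystal size mentioned above comes from Scherrer's equation, D = Kλ/(β·cosθ), with λ the X-ray wavelength from the Methods section, β the peak width (FWHM, in radians) and θ the Bragg angle. A minimal sketch follows; the FWHM and peak position used here are assumed, illustrative inputs chosen to land near the reported ~15 nm, not measured values:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Mean crystallite size (nm) from peak broadening via Scherrer's equation."""
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha wavelength from the Methods; FWHM and 2-theta are assumed values.
print(round(scherrer_size(0.15406, 0.64, 67.0), 1))  # ~14.9 nm
```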
Because the evaporation rate of the solvent THF is much faster than that of water, the mixed solvent at this stage is water-rich. The reaction solution is opaque after the evaporation of most of the volatile solvent THF (Fig. S7a inset, ESI†). Furthermore, cryo-transmission electron microscopy (cryo-TEM) images show massive numbers of vesicles in the reaction mixture (Fig. S7, ESI†). These vesicles are in two states: a few vesicle-aggregates (Fig. S7a, marked by red arrows, ESI†) and separate vesicles in the process of attaching to others (Fig. S7b, ESI†). These phenomena imply that the Al3+-based oligomers/PEO-b-PMMA composite vesicles are formed during the evaporation of THF, and that these vesicles further attach to each other to form large aggregates. This clearly reveals an important intermediate state during the formation of the resultant as-made Al3+-based gel/PEO-b-PMMA composite microspheres.
Based on the above results, we propose that the highly ordered mesoporous γ-alumina microspheres with shifted double-diamond networks are formed through a solvent evaporation induced vesicle-aggregation-assembly process (Scheme 1). In the first stage, the water-insoluble amphiphilic diblock copolymer PEO-b-PMMA dissolves well in a strongly acidic solution with a high THF/H2O volume ratio owing to its good solubility in THF. As the volatile solvent THF evaporates, the long hydrophobic PMMA segments of the template PEO-b-PMMA tend to aggregate to form hydrophobic domains, while the hydrophilic PEO segments remain in the solution. In order to reduce surface tension, the polymer molecules self-assemble into vesicles at a very early stage, with PEO segments as the inner and outer walls (Scheme 1, Step 1). These composite vesicles subsequently aggregate and undergo microphase separation into bicontinuous composite microspheres (Scheme 1, Steps 2 and 3). In the subsequent calcination process, the two interpenetrating but disconnected alumina networks shift owing to the decomposition of the template PEO-b-PMMA. However, owing to the unique interlocking property of the bicontinuous cubic mesostructure, collapse of the formed alumina microspheres is effectively avoided, accompanied only by a change of the structural symmetry from double diamond to shifted double diamond (single diamond) (Scheme 1, Step 4). Furthermore, the amorphous frameworks can be transformed into crystallized frameworks composed of γ-Al2O3 nanocrystals without destruction of the formed ordered mesostructure as the temperature is increased. This should be attributed to the maximum release of internal stress by the unique rod-like alumina frameworks during the rearrangement of the atoms. Therefore, the mesoporous γ-Al2O3 microspheres with a shifted double-diamond mesostructure result from a complex process, mainly including the formation and aggregation of composite vesicles, the microphase separation between the hydrophobic and hydrophilic domains, and the shifting of the two individual networks. In particular, the transformation of the mesophase from composite vesicle-aggregates to bicontinuous mesostructural composite microspheres is mainly attributed to the change in the ratio of the hydrophobic to the hydrophilic domain as the reaction solvents evaporate. The highly ordered mesoporous γ-alumina obtained after calcination at 900 °C in air was employed as a support for Au nanoparticles for the catalytic application.
Fig. 4 Wide-angle XRD patterns of (A) as-made Al3+-based gel/PEO-b-PMMA composites prepared by the vesicle-aggregation-assembly approach and (B) ordered mesoporous aluminas obtained after calcination at (a) 400 °C, (b) 500 °C, (c) 600 °C, (d) 700 °C, (e) 800 °C and (f) 900 °C in air; 27Al MAS NMR spectra of (C) the as-made composites and (D) the calcined aluminas at the same temperatures. The peaks marked with a * label in (C) and (D) are spinning sidebands.
Scheme 1 The formation process of ordered mesoporous γ-alumina microspheres with shifted double-diamond networks via a solvent evaporation induced vesicle-aggregation-assembly approach. Step 1: the formation of tiny Al3+-based oligomers/PEO-b-PMMA composite vesicles, with PMMA segments as the hydrophobic interlayer and Al3+-based oligomer-associated PEO segments as the hydrophilic inner and outer walls, caused by THF evaporation induced self-assembly.
Fig. 5d plots the relationship between ln(Ct/C0) and reaction time (t), wherein the ratio of the 4-nitrophenol concentration (Ct at time t) to its initial value C0 (t = 0) is given directly by the relative absorbance At/A0.
TEM and STEM images show that the frameworks of the ordered mesoporous γ-alumina are well retained after the loading of Au nanoparticles (Fig. 5a and S8a, ESI†). The uniform distribution of tiny Au nanoparticles (≤2 nm) is confirmed by the elemental mapping image and the HRTEM image (Fig. S8d, ESI† and Fig. 5b), and the Au content is determined to be 1.51 wt% according to the EDX results (Fig. S8e, ESI†). The HRTEM image of one Au nanoparticle shows lattice fringes with a spacing of ~0.24 nm (Fig. 5b inset), which corresponds to the d111 of single-crystalline Au, further confirming that the Au nanoparticles are successfully synthesized and loaded onto the columnar γ-alumina frameworks by the post-impregnation method. In the catalytic reduction process, the mixed solution of 4-nitrophenol and sodium borohydride first shows a strong absorption peak at 400 nm (Fig. 5c), which reflects the formation of 4-nitrophenolate ions. 51 After the addition of the Au/mesoporous γ-alumina composite catalyst, the absorption peak at 400 nm decreases rapidly with time, while a new absorption peak appears and grows at 305 nm, corresponding to the reduction of 4-nitrophenol to 4-aminophenol. The values of ln(Ct/C0) versus the reaction time (t) show a good linear fit with a kinetic constant k of 0.0888 min−1 (Fig. 5d), which is much higher than that of the high-Au-loading mesoporous silica composites reported previously. 51 The enhanced catalytic performance may stem from the smaller Au nanoparticles, larger mesopores and better accessibility.
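The kinetic analysis above assumes pseudo-first-order behaviour (sodium borohydride in large excess), so ln(Ct/C0) = −k·t and k is the negative slope of the linear fit. A minimal sketch of that fit follows; the data points are synthetic, generated from the reported k = 0.0888 min−1 rather than read off Fig. 5d:

```python
# Pseudo-first-order fit: ln(C_t/C_0) = -k*t, so k is the (negative) slope.
k_true = 0.0888                     # min^-1, the reported rate constant
times = [0, 2, 4, 6, 8, 10]         # min (synthetic sampling times)
ln_ratio = [-k_true * t for t in times]

# Least-squares slope through the origin: k = -sum(t*y) / sum(t*t)
k_fit = -sum(t * y for t, y in zip(times, ln_ratio)) / sum(t * t for t in times)
print(round(k_fit, 4))  # 0.0888
```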
Conclusions
In summary, a novel vesicle-aggregation-assembly approach induced by solvent evaporation has been demonstrated to synthesize highly ordered mesoporous γ-alumina microspheres with a unique shifted double-diamond mesostructure, using the block copolymer PEO-b-PMMA as a soft template and aluminum isopropoxide as a precursor in an acidic THF/water binary solvent. During evaporation of THF and subsequently water from the mixed solution, a complex co-assembly process, including the formation of composite vesicles, aggregation between these vesicles and microphase separation of the composite vesicle-aggregates, leads to the formation of as-made Al3+-based gel/PEO-b-PMMA composite microspheres with an inverse double-diamond mesostructure (Pn3m). Moreover, after calcination at 900 °C in air to remove the template and crystallize the alumina frameworks, a change of structural symmetry occurs from Pn3m to Fd3m, which results from the shifting of the two intertwined but disconnected alumina networks. The obtained alumina microspheres are composed of columnar, crystallized γ-alumina networks (nanocrystal size ~15 nm), and these networks create porous space with ultra-large mesopores (~72.8 nm). Furthermore, the γ-alumina microspheres exhibit thermal stability as high as 900 °C. Finally, the novel mesoporous γ-alumina material can be used as a support for Au nanoparticles in the catalytic reduction of 4-nitrophenol with sodium borohydride. This work may pave a promising way for the designed synthesis of novel mesoporous materials with unique mesostructures and morphologies.
Conflicts of interest
There are no conflicts to declare.
Integration of Virtual Prototyping and Building Information Modeling to Optimize the Construction Site Planning and Management
Objectives: To study the possibility of using Building Information Modeling (BIM) to improve the efficiency of site planning and the management of spaces within buildings. Method: To achieve this, the literature and previous research in the field were consulted, and data were collected by interviewing a group of engineers, collecting project documents and maps, and taking photographs of the buildings. Findings: By analysing the case study, obtaining real area data using BIM technology and comparing it with international standards for space management on campus, a significant difference was found: no international standard was used in the design of these spaces, and they are not suitable for students, staff and teaching staff. The researcher recommends using BIM technology in the early stage of a project to reduce design errors in space management, given the efficiency of this technology and the ease of obtaining information and data. Although there are obstacles to the use of BIM practices owing to the scarcity of qualified personnel in this field, in the near future BIM systems will be used on a large scale in the construction industry. Applications: BIM technology is one of the most important technologies in the construction industry and increases performance during the project life cycle. As the construction industry develops, such technology has to be used, increasing design efficiency and improving construction work while saving time and cost
Introduction
Numerous recent studies attempt to implement the use of Building Information Modeling (BIM) in order to gain benefits in different aspects of the construction process, including the management of the construction site and the use of models on site. From a site-planning point of view, international research emphasises two main aspects: 4D scheduling, safety, or both together. 4D scheduling research started several years ago with 3D CAD systems implemented with time in order to visualise work progress. With the development of BIM, research on 4D scheduling naturally moved to this system, given the opportunity to embed schedules automatically 1 . BIM is very different from the traditional CAD method in building design 2 . Through the use of the BIM approach by project participants, it is easy to exchange project information.
The BIM model can store energy performance data such as power consumption, temperature, CO 2 emissions, occupancy and humidity, as well as site planning, green technology applications, and many others. An important benefit of the BIM model is the lower cost of construction projects, achieved by making building planning, design, construction and maintenance more efficient and delivering better project value 3 .
BIM Software
There are lists of companies considered developers of BIM software, and these products vary to serve various purposes. It is pointed out in 4 that the major BIM software is produced by Autodesk, Graphisoft and Bentley. BIM software can be classified into many groups, for example:
BIM Dimensions
Several concepts, terms and applications have been developed around BIM, including what are known as the BIM dimensions (3D, 4D, 5D, 6D and 7D), since BIM does not deal only with a 2D or 3D framework but goes beyond it to other dimensions. Each dimension serves certain purposes within the construction project. Figure 1 shows the dimensions of BIM and their content.
• 3D dimension: This dimension is for visualisation; it enables stakeholders to see the building in a virtual environment before it is actually built, and also enables them to see updates of this representation throughout the life of the building 5 -7 . • 4D dimension: This dimension is concerned with adding scheduling to the previous dimension (3D).
Theoretical Approach
• This study focuses on a theoretical analysis of the scientific literature. It starts from the general concept of BIM as a new collaborative process, and then continues with its analysis as an "n-D" tool. • The general concept of Building Information Modeling as a tool and process to manage information in different areas of site planning.
Experimental Analysis
• General site planning vs. BIM site planning comparison: a quantitative analysis for a case study of an already existing project, based on the automatic process of extracting information from BIM models. • The case study was modeled in Revit software according to BIM to calculate the actual areas and compare them with the international standard.
Case study: Medicine College
The Medicine College is one of the important colleges on the Diyala University campus in Iraq. It consists of six main buildings (deanship, library and four faculty departments). A local company implemented this project. The type of contract is a unit-price contract, and it was built in 2002. The college has 422 students for the year 2017-2018 across different departments (Figure 2).
Part I: division of internal spaces of buildings
The internal spaces of the ground and first floors of the deanship and faculty department buildings in the case study were divided into (office room, lecture hall, laboratory and service room), and the library was divided into (library, lecture hall, book store, office room). This process is done using the architectural tab and the room panel, as shown in Figure 3.
Part II: rooms schedule
A schedule is created for all building floors to determine their areas. From the "view" menu choose "schedule", then choose "room" from the "filter list", as shown in Figure 4.
After creating the room schedule, a table is generated which contains the (number, name, area) of each room and space inside the buildings, as well as the level on which each room is located, for all buildings. The method of adding a "parameter" is based on the availability of that "parameter" in Revit software (Figures 6 and 7).
Case Study Room Schedule
The researcher made a room schedule for the ground and first floors of all buildings, then calculated the actual area values with Revit software and compared them with the standard values to find the differences between them and to give recommendations (Figures 8-10).
According to the code used in designing lecture halls and laboratories, the student area is about 86% of the total lecture-hall area, with about 1.9 m2 per student in lecture halls and 4 m2 per student in laboratories 12 . According to the code, the room area for teaching staff is about 22 m2 for a professor, 20 m2 for an assistant and 15 m2 for a lecturer 12 . According to the code, the room area for staff is about 18 m2 for two staff members, 12 m2 for one staff member and (24-30) m2 for a manager 12 .
The number of students according to the standard is calculated by the following formulas:
Y = (0.86 × AA)/1.9 …… for students in a lecture hall (1)
Y = AA/4 …… for students in laboratories (2)
E = X − Y (3)
Where: Y = the number of students according to the standard; AA = actual value of area; AS = standard value of area; X = the real number of students; E = the difference in the numbers of students; Q = meets the requirements of the standard.
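The capacity check implied by eqs. (1)-(3) can be sketched as follows; the area and enrolment figures used here are illustrative, not values from the project tables:

```python
def lecture_hall_capacity(actual_area_m2):
    """Eq. (1): 86% of the hall area is student space, at 1.9 m2 per student."""
    return int(0.86 * actual_area_m2 / 1.9)

def laboratory_capacity(actual_area_m2):
    """Eq. (2): 4 m2 per student in a laboratory."""
    return int(actual_area_m2 / 4)

# Eq. (3): shortfall E = X - Y (real enrolment minus standard capacity)
X = 120                            # real number of students (assumed)
Y = lecture_hall_capacity(150.0)   # a 150 m2 lecture hall (assumed)
print(Y, X - Y)  # 67 53
```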
Project I (Medicine College)
This project contains (22) rooms for staff on the ground floor and (24) rooms on the first floor (Figure 11). Table 1 shows that the actual area values for the ground floor (Project I) meet the standard requirements, except for two rooms (A21O21, A21O22), which do not meet the standard requirement because they are not sufficient for one staff member and need (2) m2 more. Table 2 shows that the actual area values for the first floor (Project I) meet the standard requirements, except for two rooms (A22O23, A22O24), which do not meet the standard requirement because they are not sufficient for one staff member and need (2) m2 more.
Project II (Medicine College)
This project is divided into four parts (library, lecture hall, book store, office room). The ground floor contains two book stores with an area of about (27 m2) each and two free units of about (21 m2) each; the area of the library and lecture hall is about (405 m2) (Figures 12-15).
The first floor contains (5) office rooms of about (21 m2) each, a library and book store of about (274 m2), and a computer room of about (45 m2).
Project III (Medicine College)
This project contains (14) rooms on the ground floor and (22) rooms on the first floor. Table 3 shows a big difference between the actual area values calculated by Revit software according to BIM and the standard area values: the staff rooms need (10, 3, 8) m2 more to meet the staff-room requirement. There is also a big difference between the real number of students and the number of students according to the standard: for the lecture halls the difference is about (104, 53, 24) and for the laboratories about (80, 51, 51, 124), so they do not meet the standard requirement.
The researcher suggests developing the buildings by reallocating spaces according to international standards in order to accommodate the existing numbers of students and teaching staff, or making the number of students suitable for the existing spaces. Tables 4-6 show a big difference between the actual area values calculated by Revit software according to BIM and the standard area values. In the case of teaching staff, the spaces do not meet the teaching-staff room requirement, while they do meet the standard requirement in the case of office rooms.
Relationship between sociodemographic factors and selection into UK postgraduate medical training programmes: a national cohort study
Introduction Knowledge about allocation of doctors into postgraduate training programmes is essential in terms of workforce planning, transparency and equity issues. However, this is a rarely examined topic. To address this gap in the literature, the current study examines the relationships between applicants’ sociodemographic characteristics and outcomes on the UK Foundation Training selection process. Methods A longitudinal, cohort study of trainees who applied for the first stage of UK postgraduate medical training in 2013–2014. We used UK Medical Education Database (UKMED) to access linked data from different sources, including medical school admissions, assessments and postgraduate training. Multivariable ordinal regression analyses were used to predict the odds of applicants being allocated to their preferred foundation schools. Results Applicants allocated to their first-choice foundation school scored on average a quarter of an SD above the average of all applicants in the sample. After adjusting for Foundation Training application score, no statistically significant effects were observed for gender, socioeconomic status (as determined by income support) or whether applicants entered medical school as graduates or not. Ethnicity and place of medical qualification were strong predictors of allocation to preferred foundation school. Applicants who graduated from medical schools in Wales, Scotland and Northern Ireland were 1.17 times, 3.33 times and 12.64 times (respectively), the odds of applicants who graduated from a medical school in England to be allocated to a foundation school of their choice. Conclusions The data provide supportive evidence for the fairness of the allocation process but highlight some interesting findings relating to ‘push-pull’ factors in medical careers decision-making. These findings should be considered when designing postgraduate training policy.
Strengths and limitations of this study
► This is one of the first studies to use linked individual-level data from the UK Medical Education Database, enabling longitudinal analysis and comparisons across previously discrete datasets.
► A large-scale study that focuses on the time of exit from medical school and selection to the next stage of postgraduate medical training in the UK.
► The sample did not include international medical graduates or students who sat an aptitude test other than the UK Clinical Aptitude Test at the time of applying to medical school.
► We did not examine outcomes by individual medical schools because of non-convergence issues with statistical models.
Background
Efforts to minimise the barriers against entry into medicine have had mixed success, despite policy and investment drives. [1][2][3][4] In the last 30 years, the UK medical student body has become increasingly diverse in terms of gender, ethnicity and age, but not in terms of socioeconomic background (an individual's or family's economic and social position in relation to others, based on income, education and occupation 5 ). Indeed, a recent independent review concluded that: 'Medicine has a long way to go when it comes to making access fairer, diversifying its workforce and raising social mobility.' 6 Much research has examined the barriers associated with selection into medical school for those from lower socioeconomic groups. 7 8 While getting a medical school place is the first hurdle in medical education and training, those who successfully complete medical school then face many other selection challenges for postgraduate education and training. The precise nature of these differs by context: in some countries, like the UK and Australia, medical graduates apply for early-stage training programmes of 1 or 2 years, then apply for specialty training. In other countries, such as the USA and Japan, those graduating from medical schools apply directly for residency (specialty) training. Yet, relatively little is known about the relationship between individual characteristics, such as socioeconomic background, and outcomes on selection processes for postgraduate medical training. The few studies addressing this tend to focus on selection into specialty training, and relate to ethnic differences in the academic attainment of doctors, 9 10 gender or country of primary medical qualification. 11 12 To the best of our knowledge, there has been no research looking at the relationships between individual characteristics and allocation into the first stage of postgraduate medical education in the UK, Foundation Training. This is a generic 2-year training programme which bridges the gap between medical school and being eligible to apply for specialty (including general practice/family medicine) training. Successful completion of the first year of Foundation Training (FY1) is needed for full medical registration. The process of assigning applicants to positions is based on a matching algorithm between allocation score and applicant choice. Applicants with the highest ranking are most likely to receive their first choice of training post. The UK Foundation Programme Office (UKFPO) reports that around 20% of applicants do not get allocated to their first-choice foundation school, and 12% of applicants in 2016 were allocated to a foundation school that was lower than their fifth preference. 13 In 2009, the Department of Health in England commissioned a review of selection to the Foundation Programme. The aim of this review was to recommend a reliable, robust, valid, feasible and sustainable method for allocation which would minimise the risk of successful legal challenge. 14 A new tool, the situational judgement test (SJT), was introduced, replacing the old 'white space' questions on an online application form.
Scores from the SJTs and the standardised Educational Performance Measure (EPM) (see methods section) are added together to form the Foundation Training application score.
The purpose of this study was to investigate the relationship between individual characteristics and allocation to Foundation Training. In seeking to understand the social equity and fairness of the postgraduate selection process, the present study tests the hypothesis that persistent inequalities continue to exist even after non-traditional students have gained entry to medical school. Since students from low socioeconomic backgrounds face financial, social and cultural barriers to higher education in general, 15 we envisage that those who enter medicine face similar challenges. For example, because of the extra financial burden, students from less affluent backgrounds may opt out of intercalated degrees or medical electives abroad despite these being factors that contribute towards attainment at medical school and future progress. [16][17][18] Our aim, therefore, was to determine if the allocation of trainees to their preferred foundation schools differs on the basis of socioeconomic class or other individual characteristics.
Methods
This is a longitudinal cohort study of students who entered UK medical schools in 2007 and 2008, and who commenced their postgraduate training in 2013 and 2014. We used linked individual-level data from the UK Medical Education Database (UKMED: https://www.ukmed.ac.uk/) as the basis for this study. UKMED allows the analysis of data from a number of sources, including medical school admissions, assessment and postgraduate training. UKMED also contains demographic data such as age, gender, ethnicity, and whether the individual was a school leaver or graduate at the time of entry to medical school. Variables relating to socioeconomic status are also available. These have been used widely in previous UK research examining factors that influence educational achievement of different types of pupils, particularly in terms of widening participation. [19][20][21][22] They included: parental occupation (derived from the National Statistics Socioeconomic Classification); entitlement to free school meals (FSM); income support; participation of local areas (POLAR), an indicator of the participation of young people in higher education by UK geographical area; type of school (state funded or independent); and parental education. We also included place of medical graduation (UK country: England, Scotland, Wales and Northern Ireland) in the analysis.
Twenty-one foundation schools offered postgraduate training at the time of the study. Applicants rank their choice of foundation school in order of preference, and allocation to Foundation Training (offers) is based on an algorithm of the Foundation Training application score. This score is the sum of the overall medical school performance (EPM) and performance on the SJT; together they have a maximum score of 100 points, and an applicant's score out of 100 is their UKFPO application score. The SJT is worth up to 50 points. 23 The EPM is also worth a maximum of 50 points and comprises three parts: medical school performance (34-43 points), additional degrees (0-5 points) and other educational achievements such as publications and prizes (0-2 points). All students are ranked according to medical school performance and are then grouped into deciles, with those in the lowest decile receiving 34 points and those in the highest decile receiving 43 points. This can be thought of as a baseline medical school performance of 33 points awarded to all students, with 1-10 additional points corresponding to each decile of performance. All applicants who have a satisfactory SJT are offered a foundation place. All things being equal, high-performing applicants would be offered a place at a foundation school that was high on their preference list, and lower-scoring applicants would be offered places at a foundation school that was lower in their order of preference. As a number of applicants can withdraw from the Foundation Programme for various reasons, this sample contains only those who commenced Foundation Training in 2013 and 2014.
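As an illustration of the scoring arithmetic described above (the function names are ours; the point bands are taken from the text), the application score can be sketched as:

```python
def epm_points(decile, degree_points, achievement_points):
    """Educational Performance Measure (EPM), maximum 50 points.

    decile: 1 (lowest) to 10 (highest) medical school performance decile,
            worth 34-43 points (a 33-point baseline plus 1-10 decile points).
    degree_points: additional degrees, 0-5 points.
    achievement_points: publications and prizes, 0-2 points.
    """
    assert 1 <= decile <= 10
    return (33 + decile) + degree_points + achievement_points


def application_score(decile, degree_points, achievement_points, sjt_points):
    """EPM (max 50) + SJT (max 50) = application score out of 100."""
    return epm_points(decile, degree_points, achievement_points) + sjt_points
```

For example, a top-decile student with the full 5 points for additional degrees, 2 for other achievements and a maximal SJT would score the full 100 points.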
The Foundation Training (UKFP) application scores were not normally distributed. For that reason, we converted the application score into a 'percentile rank' to determine each individual's ranking in relation to others within the sample. This allowed us to evaluate the effect of a change of one decile group. We used Kruskal-Wallis and, where necessary, Mann-Whitney U tests to compare the scores across independent groups. We also transformed the rank of the foundation school allocated into an ordinal dependent variable. The following values were assigned: (1) for being allocated to the applicant's first-choice foundation school; (2) for allocation to a second or third choice and (3) for allocation to a foundation school outside the applicant's top three choices. χ² tests were used to examine the relationship between applicants' sociodemographic characteristics and the dependent variable, choice of foundation school. Standardised z scores for the Foundation Training application scores were calculated to permit comparison across the ordinal dependent variable. Finally, a multivariable ordinal logistic regression was used to estimate the effect of several factors against the outcome measure. The variable, UK country of medical qualification, was also fitted in the models to take account of various unmeasured characteristics associated with regional variation across the four UK countries which are not otherwise represented in the models. Its inclusion had the effect of greatly improving the models' predictive power based on log likelihoods. All the data analyses were done using IBM SPSS Statistics for Windows, V.24 (IBM).
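The rank transformations described above can be sketched as follows (a minimal illustration; the boundary convention for the percentile deciles is our assumption, and tied scores are ignored):

```python
import numpy as np

def percentile_rank(scores):
    # Percentage of applicants scoring at or below each score (no tie handling).
    order = scores.argsort().argsort()
    return 100.0 * (order + 1) / len(scores)

def decile_group(pr):
    # 1 = top decile (90th-100th percentile) ... 10 = bottom (0-10th percentile);
    # the handling of exact decile boundaries here is one plausible convention.
    return 10 - min(int(pr // 10), 9)

def preference_category(allocated_rank):
    # Ordinal outcome: 1 = first-choice school, 2 = second or third choice,
    # 3 = outside the top three choices.
    if allocated_rank == 1:
        return 1
    if allocated_rank in (2, 3):
        return 2
    return 3
```

This ordinal outcome is what the cumulative-odds (ordinal logistic) models described above are fitted against.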
Patient and public involvement
Patients and the general public were not involved in the design of this research. Ethics approval was not required because this study was a secondary analysis of anonymised data. The graduating cohort of 2012 (n=3702) was removed from the sample because the SJT component of the UKFP selection process was piloted that year and did not contribute towards the Foundation Training application score. In addition, 11.6% (n=1594) of students were excluded from the analysis because they were on the Academic Foundation Programme (AFP), which has a different, and completely separate, selection process. Applicants to the AFP are nominated by their graduating medical school, and recruitment is coordinated at regional level and takes place nearly 6 months before the national application process.
Results

The final eligible sample included 8467 Foundation Programme doctors (applicants who had accepted a place and commenced their training in 2013 or 2014). Table 1 displays their sociodemographic characteristics in relation to application scores. The table also shows the numbers and percentages of doctors and the preference category of their allocated foundation schools. Frequency data show that 71.3% of doctors were allocated to their first-choice foundation school; 15.0% to their second or third choice and 13.7% to a foundation school that was not one of their top three choices. χ² tests showed statistically significant associations between certain sociodemographic characteristics and category of allocation to foundation school. Female applicants were significantly more likely (p<0.001) than male applicants to be allocated to a higher-choice foundation school (73% vs 69% first choice). Students who attended state-funded (high) schools were significantly more likely to be allocated to a higher-choice foundation school than students who attended privately funded schools (p<0.01) (74% vs 67% first choice), as were those who entered medical school as graduates compared with typical school leavers (p=0.016) (76% vs 70% first choice). A significantly larger proportion of applicants coming from families that were at some point recipients of income support (p=0.028), and those entitled to FSM (p=0.043), did not get a place in a higher-choice foundation school (65% vs 73% and 70% vs 73% first choice, respectively). Applicants from white ethnic backgrounds were significantly more likely to be allocated to a higher-choice foundation school than black or Asian applicants (p<0.001) (79% vs 47% and 56% first choice, respectively). The majority (93%) of non-Caucasian applicants had graduated from medical schools in England, and nearly the same proportion (90%) accepted a Foundation Training post at an English foundation school.
The z-score of the Foundation Training application score for the first-choice group was 0.25 indicating that those applicants who were allocated to a foundation school of their first choice scored on average a quarter of a standard deviation above the average of all applicants in the sample.
Although differences in allocation to preferred school would be expected to reflect the individuals' selection scores (table 2), other sociodemographic factors might also be influential. Therefore, we performed an ordinal regression analysis to determine whether the odds of applicants getting allocated to a preferred foundation school differed significantly for different groups. Variables that were not significantly associated with allocation to foundation school during univariate analysis, using a conservative p<0.10, were removed from the regression models. Where two or more measured independent variables appeared to measure the same construct, we only included the variable which we thought contributed more to the explanation of the dependent variable. For example, since a majority of the students on FSM also come from families in receipt of income support, the FSM variable was dropped from the regression models. Application scores were divided into deciles according to the percentile rank we had calculated to determine an individual's score relative to others in the sample. Thus, applicants in the highest (90th-100th percentile) rank were assigned a value of 1, those in the second highest (80th-89th percentile) rank were assigned a value of 2 and so on, until those in the bottom (0-10th percentile) were assigned a value of 10. We could then evaluate the effect of a change of one decile group. The results of the multivariate ordinal regression are shown in table 3. Three separate models were fitted: model 1 without controlling for the effect of application score, model 2 after accounting for the effect of the application score and model 3 after accounting for the effect of the application score and the allocated foundation school competition ratio. The ordinal regression is an extension of the binary logistic regression. The ORs represent each of the cut-off points, and the odds are expressed as a single cumulative OR for each group.
The β results in table 3 are log ORs. Negative values represent a reduction in the odds of being allocated a preferred choice; positive values represent an increase. As log ORs are difficult to interpret, we present the OR, the exponential of β, in the text. In model 3, for every unit increase of the Foundation Training 'decile' application score rank (1 best, 10 worst), the odds of an applicant getting allocated to a foundation school they had ranked higher in their order of preference were multiplied by a factor of 0.58 (β=−0.551, p<0.001). In lay terms, after considering other factors, the lower an individual's application score, the less likely they are to get their preferred choices of foundation school.
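Concretely, converting a log OR (β) to an OR is a simple exponentiation. A quick check (our own sketch, using coefficients reported in this section) recovers the multiplicative factors quoted in the text:

```python
import math

def odds_ratio(beta):
    # An ordinal (or binary) logistic regression reports log odds ratios;
    # exponentiating beta gives the multiplicative change in the odds.
    return math.exp(beta)

# e.g. beta = -0.551 for one decile-rank step corresponds to an OR of about 0.58,
# i.e. each step down the score deciles multiplies the odds of a preferred
# allocation by roughly 0.58.
```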
The models confirmed that there are significant effects in allocation to preferred school related to certain sociodemographic variables. Notably, model 3 shows that, after controlling for the presence of multiple factors, including the application score and the foundation school competition ratio, the following groups had significantly lower odds of being allocated to their higher-choice foundation schools: those from non-white ethnic groups; those who attended privately funded (high) schools; those who came from areas with a high proportion of young people in higher education (POLAR); and those who graduated from an English medical school.
The odds of an applicant from an Asian ethnic group being allocated to a foundation school that they had ranked higher in their order of preference were 0.66 times (β=−0.410, p<0.001) those of a white applicant (about two-thirds of the odds). Similarly, given that the other variables in the model are held constant, the odds of a black applicant being allocated to a foundation school of higher choice were 0.61 times, or a little over half, the odds of a white applicant (β=−0.490, p<0.001). Despite obtaining similar application scores, applicants who attended privately funded (high) schools had lower odds, by a factor of 0.77, compared with those who attended state-funded schools (β=−0.258, p<0.001). The odds for applicants who came from areas of high participation of young people in higher education (POLAR) to be allocated to a foundation school of their preferred choice were 0.66 times (β=−0.413, p<0.001), or 34% lower than, those of applicants who came from areas of low participation. We also compared places of primary medical qualification by UK country. All other things being equal, the odds of applicants who graduated from medical schools in Wales (β=0.153, p=0.353), Scotland (β=1.172, p<0.001) and Northern Ireland (β=2.537, p<0.001) were 1.17, 3.22 and 12.64 times, respectively, the odds of applicants who graduated from a medical school in England being allocated to a foundation school of higher preference. After adjusting for Foundation Training application score, no statistically significant effects were observed for gender, socioeconomic status (as determined by income support) or whether applicants entered medical school as graduates, or not.
Discussion

This large-scale study of two cohorts of applicants for the first stage of postgraduate medical training in the UK provides reassuring data. First, there is a clear relationship between an individual's performance on foundation school selection (their application score) and whether or not they are allocated to their first choice of foundation school. Second, the foundation school selection process does not appear to discriminate against applicants from lower socioeconomic groups. On the other hand, after controlling for the effect of the application scores, our analysis indicated that certain sociodemographic factors are strong predictors of allocation to preferred choices: ethnicity, type of (high) school attended, being from an area of high educational participation and (UK) country of medical qualification.
The patterns observed for areas of high educational participation and fee-paying high schools appear, at face value, to discriminate against those from higher socioeconomic groups. This seems counter-intuitive given that previous research indicates that social class is one of the factors associated with admission to medical school 3 24 25 and specialty choice. [26][27][28] Several related factors may explain this finding. For example, different foundation schools have differing competition ratios. The 2016 competition ratio, which is the number of applicants expressing a first-choice preference divided by the number of training programme places available, was highest in the London area (1.49), compared with the South of England (1.25), Scotland (1.12), Northern Ireland (0.97), Wales (0.93) and the rest of England (0.65). 23 A large proportion of UK medical schools and medical students are situated around London and the South of England 29 and many medical students wish to do their Foundation Programme in a familiar region or have the opportunity to access training in the capital. 30 Related to this, London and the South of England are where much of the UK population is based, and many medical students and early career doctors prefer to train and work near their family and friends. 31 32 Finally, applicants from areas of high educational participation and from independent schools (and note there is a strong relationship between these two factors 33 34) may be more likely to apply to highly competitive foundation schools (eg, London and the South of England). Taking these factors as a whole, London and the South of England are very popular places to train and work, and so there is more competition for places. Applicants who put these regions as their top choice(s) are therefore less likely to get their top choice(s).
Interestingly, the less competitive foundation schools also have the highest number of 'home applicants', again supporting the suggestion that early career doctors wish to train and work near family and friends. 30 31 Our finding that those from Black and minority ethnic (BME) backgrounds are less likely to be allocated their first-choice foundation school is consistent with the wider literature on postgraduate training showing that those from BME backgrounds tend to do less well in many different medical examinations. 9 10 35 However, it is evident that, even after controlling for the effect of the application score and foundation school competition ratio, those from ethnic minorities appear to be disadvantaged. This finding may also be linked with the geographical preferences discussed above because a higher proportion of the UK medical student population from BME backgrounds live in London and the Southeast of England. 29 It is clear that the marked differences observed in the ethnic profile of medical students across the UK countries also continues into the Foundation Training. Although we have not carried out a 'head-to-head' comparison of diversity across the foundation schools, UK demographic patterns suggest that these differences relate to student/ foundation doctor origin/home (ie, the proportion of BME groups differs across different UK countries and cities).
The merit of this study is that it is one of the first to use the UKMED database. [36][37][38] UKMED links several large datasets together, enabling longitudinal analysis and comparisons across previously discrete datasets. Another merit of the study is that it is large in scale and focuses on the time of exit from medical school and selection to the next stage of postgraduate training in the UK. As with any study, there are limitations. Some of the markers included in the analysis overlap, particularly socioeconomic class, ethnicity and graduating from English medical schools. This is unavoidable given the links between place, poverty and ethnicity in the UK. 39 40 The wider literature also shows that there is a link between class and university preferences, 41 42 and there are hints in the medical education literature that this might also be the case for medical school preferences. 43 The nature of the data, coupled with the complexities of 'class' in the UK, mean we could not separate out the individual contributions of these intersecting factors on foundation allocation. Our sample did not include international medical graduates, or students who sat an aptitude test other than the UKCAT at the time of applying to medical school. The reason for this is simple: admissions aptitude test data from other medical schools are not yet held by UKMED. However, the UKCAT is sat by more than 85% of those applying for medical school in the UK (personal communication, UKCAT, 29 November 2017), so it represents the majority of applicants. We would have liked to examine outcomes by individual medical school rather than just UK country (eg, Scotland, England, etc) given previous research has highlighted that students from different medical schools perform differentially on postgraduate examinations. 35 We were unable to do so because of non-convergence issues with the statistical models.
The UKFP SJT is relatively new but there are already some indicators of its psychometric properties, particularly predictive validity. 44 45 The nature of our study design means we have identified patterns but further, qualitative work is required to explore the reasons for these patterns. Future research could also usefully focus on the next stage of postgraduate training by examining the relationship between allocation to foundation school and later specialty training allocation.
In conclusion, this large-scale study has shown, first, that there is a clear relationship between an individual's performance on application to the Foundation Programme (their application score) and whether or not they are allocated to their first choice of foundation school. Second, the UKFP allocation process does not appear to discriminate against applicants from lower socioeconomic groups. However, ethnicity, type of (high) school attended, being from an area of high educational participation and (UK) country of medical qualification are strong predictors of allocation to preferred choices. The data provide supportive evidence for the fairness of the allocation process but highlight some interesting findings relating to 'push-pull' factors in medical careers decision-making. These findings should be considered when designing postgraduate training policy since this is an important stage of trainee doctors' working careers. In particular, policy initiatives could focus on the benefits of training at a local foundation school.
Contributors JAC led the funding bid which was reviewed by KW, BK and PWJ. KW and PWJ advised on the nature of the data. BK managed the data, carried out the data analysis under the supervision of GJP and JAC, and wrote the first manuscript. GJP advised on all the statistical analysis. JAC guided the first draft of the introduction and discussion sections of this paper. BK and GJP wrote the first drafts of the methods and results sections. JAC edited the drafts. All authors reviewed and agreed on the final draft of the paper.
Funding This study is part of BK's doctoral programme of research funded by the UKCAT Research Panel, of which JAC is a member.
Competing interests KW is the special advisor (Recruitment) for the UK's Foundation Programme (UKFPO).
Patient consent Not required.
Ethics approval The Chair of the local ethics committee ruled that formal ethical approval was not required for this study given the fully anonymised data was held in a safe haven, and all students who sit UKCAT and GAMSAT are informed that their data and results will be used in educational research. All students applying to the UKFPO also sign a statement confirming that their data may be used anonymously for research purposes. No patients or members of the general public were involved in this research.
Provenance and peer review Not commissioned; externally peer reviewed.

Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Lamina-specific contribution of glutamatergic and GABAergic potentials to hippocampal sharp wave-ripple complexes
The mammalian hippocampus expresses highly organized patterns of neuronal activity which form a neuronal correlate of spatial memories. These memory-encoding neuronal ensembles form on top of different network oscillations which entrain neurons in a state- and experience-dependent manner. The mechanisms underlying activation, timing and selection of participating neurons are incompletely understood. Here we studied the synaptic mechanisms underlying one prominent network pattern called sharp wave-ripple complexes (SPW-R) which are involved in memory consolidation during sleep. We recorded SPW-R with extracellular electrodes along the different layers of area CA1 in mouse hippocampal slices. Contribution of glutamatergic excitation and GABAergic inhibition, respectively, was probed by local application of receptor antagonists into s. radiatum, pyramidale and oriens. Laminar profiles of field potentials show that GABAergic potentials contribute substantially to sharp waves and superimposed ripple oscillations in s. pyramidale. Inhibitory inputs to s. pyramidale and s. oriens are crucial for action potential timing by ripple oscillations, as revealed by multiunit recordings in the pyramidal cell layer. Glutamatergic afferents, on the other hand, contribute to sharp waves in s. radiatum where they also evoke a fast oscillation at ~200 Hz. Surprisingly, field ripples in s. radiatum are slightly slower than ripples in s. pyramidale, resulting in a systematic shift between dendritic and somatic oscillations. This complex interplay between dendritic excitation and perisomatic inhibition may be responsible for the precise timing of discharge probability during the time course of SPW-R. Together, our data illustrate a complementary role of spatially confined excitatory and inhibitory transmission during highly ordered network patterns in the hippocampus.
INTRODUCTION
The hippocampus expresses a variety of highly ordered spatiotemporal activity patterns which are believed to underlie memory formation and memory consolidation (Buzsáki, 1989;Harris et al., 2003;Buzsáki and Draguhn, 2004). During immobility and slow-wave sleep of rodents, the CA3 network generates repetitive bursts of activity which propagate along CA1 and the subiculum towards deep layers of the entorhinal cortex (Buzsáki et al., 1992;Buzsáki, 1996). In extracellular field potential recordings, these sharp wave-ripple complexes (SPW-R) appear as monophasic synaptic potentials superimposed by a fast "ripple" oscillation at ∼200 Hz (Ylinen et al., 1995).
SPW-R provide a template for sequential activation of selected neurons which repeat previously acquired representations of space- or context-dependent experience (O'Keefe, 1976;Wilson and McNaughton, 1994;Harris et al., 2003). In addition, several studies show that individual cells or units are activated with astonishing temporal precision within individual ripple cycles, which last only ∼5 ms (Buzsáki et al., 1992;Ylinen et al., 1995;Csicsvari et al., 1999b). The mechanisms mediating selective and temporally precise activation of hippocampal neurons during such fast oscillations are, however, only partly understood. The intense activation of fast spiking interneurons during SPW-R suggests a role for phasic GABAergic inhibition (Csicsvari et al., 1999b;Klausberger et al., 2003;Ellender et al., 2010;Hájos et al., 2013).
Field potential or EEG recordings provide a spatially weighted average of all intrinsic and synaptic conductance changes detected by the recording electrode. Ion fluxes cause neuronal current sources or sinks which propagate along the dendritic-somaticaxonal axis of the cell and cause balancing currents of opposite sign at locations remote from the site of origin. In principle, lamina-specific recordings of field potentials should therefore be ideally suited to dissect the different components of a complex electrical network event. However, despite the highly ordered laminar structure of hippocampal networks it is still a major challenge to unravel the different components underlying extracellular field potentials (Johnston and Wu, 1995;Csicsvari et al., 2003;Pettersen et al., 2006;Makarova et al., 2011). A major experimental difficulty is given by the critical contribution of many different mechanisms to compound network events which can cause complete disruption of the studied pattern upon systemic application of receptor blockers or other drugs.
Here, we used spatially restricted application of excitatory and inhibitory receptor blockers during multi-laminar recording of SPW-R in CA1 to dissect the differential contribution of GABAergic inhibition and glutamatergic excitation to this highly patterned activity. Using an established in vitro model of SPW-R in mouse hippocampal slices we found strong contributions of both, rhythmic inhibition and excitation to ripple oscillations. The power and the leading frequency of rhythmic EPSPs and IPSPs, respectively, differ between different hippocampal layers, reflecting the strongly laminar structure of CA1. While excitatory transmission from upstream CA3 networks seems to be essential for neuronal recruitment, the precise timing depends more critically on inhibition in perisomatic layers. Thus our study reveals complementary functions of simultaneous glutamatergic and GABAergic influences during SPW-R.
MATERIALS AND METHODS
Experiments were performed on adult male C57Bl6 mice aged 4-8 weeks in compliance with German law and with the approval of the state government of Baden-Württemberg (Nr. T08/10). Brains of CO2-anesthetized mice were excised and cooled to 1-4 °C in artificial cerebrospinal fluid (ACSF) containing (in mM): 124 NaCl, 3 KCl, 1.8 MgSO4, 1.6 CaCl2, 10 glucose, 1.25 NaH2PO4 and 26 NaHCO3, saturated with 95% O2/5% CO2 (pH 7.4 at 37 °C). After removal of the cerebellum and frontal brain structures, we prepared horizontal slices of 450 µm on a vibratome (VT 1000 S; Leica, Germany). The tissue was allowed to recover for at least 2 h in a Haas-type interface recording chamber at 35 ± 0.5 °C before we started the experiments. Most slices used for recordings were from the middle part of the hippocampus.
Field potentials were recorded with bipolar platinum/iridium wires (Science Products, Hofheim, Germany) which were fixed in a line of eight electrodes in a custom-made holder. The distance between individual contacts was approximately 75 µm. This array was positioned perpendicularly to the CA1 pyramidal cell layer (Figure 1A) such that all laminae from alveus to stratum lacunosum-moleculare were covered. Usually, electrodes #5 or #6 were placed over the pyramidal cell layer, as visible from the large positive amplitude of spontaneous sharp waves (see Section Results). Drugs were locally applied by leakage from large glass microelectrodes with tip diameters of ∼15 µm. This technique leads to local diffusion of substances into the tissue with a diameter at half-maximal concentration of 262 ± 55 µm, as has been assessed previously (Bähner et al., 2011). Pipettes were filled with 10 µM gabazine (Tocris Bioscience) or with 200 µM CNQX (Tocris Bioscience; Sachidhanandam et al., 2013;Sato et al., 2014) dissolved in ACSF and were placed on the surface of the slice at about 75 µm distance from the recording electrode in s. radiatum, s. pyramidale or s. oriens. Effects of the respective drugs were assessed 40 min after the beginning of local application. For wash-out, the pipette was removed and data was recorded 60 min afterwards.
Field potentials were amplified 100 times by an EXT 10-2F amplifier (npi electronics, Tamm, Germany), low-pass filtered at 2 kHz and digitized at 5-10 kHz (CED 1401 interface; Cambridge, UK). Data were sampled with the Spike2 program (CED) and analyzed with custom-written routines in Matlab (The MathWorks, Natick, MA). Sharp waves were usually detected in s. radiatum from the channel with the highest negative event amplitudes. When these events were locally suppressed following drug application to s. radiatum, we searched for the largest positive transients in s. pyramidale. Detection threshold was usually 200 µV in low-pass filtered traces (corner frequency 50 Hz). In experiments showing rather low amplitudes, the threshold was lowered to 100 µV. Slices were excluded if baseline SPW-R frequency was <1 Hz. For statistical analysis, the channel with the largest positive sharp wave amplitude was chosen as representative data for s. pyramidale. Likewise, the channel with the largest negative sharp wave amplitude was chosen as representative data for s. radiatum.
For analysis of high-frequency oscillations (ripples), events underwent a continuous wavelet transform (complex Morlet wavelet) starting 33 ms before and ending 67 ms after the peak of a detected sharp wave. We then calculated the peak power of ripples (frequency >140 Hz) and their leading frequency. Current source density (CSD) analysis of field potentials was computed using the spline inverse current source density (iCSD) method (Pettersen et al., 2006). The respective Matlab routine was kindly provided by these authors.
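As a rough illustration of the "leading frequency" computation, the sketch below finds the spectral peak above 140 Hz in a short snippet around a sharp wave. Note the substitution: the paper used a complex Morlet wavelet transform, whereas this stand-in uses a plain discrete Fourier power spectrum, which gives the same qualitative answer for a stationary snippet.

```python
from math import cos, sin, pi

def ripple_peak_frequency(snippet, fs_hz, f_min=140.0):
    """Leading ripple frequency: frequency of the power-spectrum peak
    above f_min Hz (naive DFT; a stand-in for the paper's Morlet wavelet)."""
    n = len(snippet)
    best_f, best_p = None, -1.0
    for k in range(n // 2 + 1):
        f = k * fs_hz / n
        if f < f_min:
            continue
        re = sum(v * cos(2 * pi * k * i / n) for i, v in enumerate(snippet))
        im = sum(v * sin(2 * pi * k * i / n) for i, v in enumerate(snippet))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f
```

The frequency resolution is fs/n, so for a 100-ms snippet at 5 kHz the bins are 10 Hz apart; a wavelet transform trades some of this frequency resolution for time resolution within the event.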
For detection of extracellularly recorded action potentials ("units"), we applied a 500 Hz high-pass filter. Subsequently, single events were extracted by setting a threshold at four times the standard deviation (SD) of an "up-only" filtered signal (Cohen and Miles, 2000). This threshold was raised stepwise up to seven times SD if visual inspection of multi-unit activity (MUA) autocorrelation histograms indicated a reduced signal-to-noise ratio. Slices were excluded from MUA analysis when units could not be unambiguously identified. Coupling jitter between units and ripples was calculated from the width of peaks in the respective cross-correlation, as previously described (Both et al., 2008). A similar approach was chosen for analysis of MUA autocorrelation histograms, and coupling jitter was calculated in the same way as for cross-correlograms.
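A minimal sketch of the unit-extraction step, under two stated assumptions: "up-only" is interpreted here as half-wave rectification, and the SD is computed on that rectified trace (the paper does not spell out either detail). The snr_ok flag replaces the authors' visual inspection of autocorrelograms.

```python
# Hypothetical sketch: threshold crossings on a rectified high-pass signal
# at base_factor * SD; the factor is raised toward max_factor when the
# signal-to-noise ratio is judged insufficient (a boolean here; in the
# paper this judgement was made by visual inspection of MUA autocorrelograms).
def detect_units(hp_signal, base_factor=4.0, max_factor=7.0, snr_ok=True):
    x = [max(v, 0.0) for v in hp_signal]  # "up-only" rectification (assumed)
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    thr = (base_factor if snr_ok else max_factor) * sd
    # rising threshold crossings = putative unit discharge times
    return [i for i in range(1, n) if x[i] >= thr > x[i - 1]]
```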
Average data were determined from 5-min sections. In general, quantitative results are given as mean ± SEM or as the first and third quartiles (P25 and P75) if data were not normally distributed. For better visualization, local drug effects are normalized to the baseline value in some figures, whereas statistical significance was computed on the basis of the original values. Parametric tests were used if groups passed a normality test; otherwise, nonparametric statistics were used. As differential pharmacological effects were examined, sample sizes were rather small for each subgroup and data were not normally distributed in many cases. Therefore, nonparametric ANOVA (Friedman test) was conducted throughout this study to compare baseline, wash-in and wash-out conditions. P values < 0.05 were regarded as significant. If no significant difference was revealed, the P value was specified. Otherwise, post hoc analysis (Dunn's multiple comparisons test) was performed and the P value of the post hoc test was specified.
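The Friedman comparison of baseline, wash-in and wash-out can be sketched directly. This is a generic illustration, not the authors' code; it assumes no tied values and uses the chi-square approximation, which for three groups (df = 2) has the closed-form survival function exp(-x/2). Dunn's post hoc test is omitted.

```python
from math import exp

def friedman_test_3(baseline, washin, washout):
    """Friedman chi-square for three related samples (n subjects each).
    Assumes no ties within a subject. With df = 2 the chi-square survival
    function is exp(-x/2), which gives the approximate P value directly."""
    groups, k = (baseline, washin, washout), 3
    n = len(baseline)
    rank_sums = [0.0] * k
    for i in range(n):
        # rank the three conditions within subject i (1 = smallest value)
        order = sorted(range(k), key=lambda j: groups[j][i])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    chi2 = 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
    return chi2, exp(-chi2 / 2.0)
```

With the small per-group sample sizes reported here, an exact-table Friedman P value would be preferable to the chi-square approximation; the sketch uses the approximation for brevity.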
RESULTS
Local field potentials (LFPs) were recorded from the CA1 region of 50 mouse hippocampal slices. We regularly observed spontaneous events resembling sharp waves and superimposed fast oscillations (ripples), similar to previous findings from rodents (SPW-R) in vivo (Buzsáki et al., 1992;Ylinen et al., 1995) and in vitro (Kubota et al., 2003;Maier et al., 2003). In order to dissect the laminar profile of SPW-R we used a linear array of eight equidistant extracellular electrodes which were placed perpendicularly to the pyramidal cell layer of CA1 between s. lacunosum-moleculare and the alveus ( Figure 1A). Previous work indicates that sharp waves in CA1 are generated by synchronous excitatory inputs from CA3 pyramidal cells via the Schaffer collateral (Buzsáki, 1986;Csicsvari et al., 2000;Maier et al., 2003Maier et al., , 2011Both et al., 2008). In line with this mechanism, the slow component of the spontaneous local field potential transients revealed a strong negative deflection in s. radiatum ( Figure 1B). In contrast, sharp waves were positive-going in s. pyramidale. Analysis of all eight recording positions confirmed this phase reversal between dendritic and somatic layers, with very low sharp wave amplitudes in the extreme positions (s. lacunosum-moleculare and s. oriens, respectively; Figure 1C). Current source density analysis (Mitzdorf, 1985;Pettersen et al., 2006) revealed pronounced current sinks in s. radiatum as well as current sources in s. pyramidale. These data are consistent with the reported excitatory input to the proximal dendritic layer and simultaneous perisomatic inhibition (Ylinen et al., 1995;Ellender et al., 2010;Maier et al., 2011). This interpretation was subsequently tested by lamina-specific application of glutamatergic and GABAergic receptor antagonists, respectively.
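The CSD analysis above used the spline iCSD method of Pettersen et al. (2006). As a rough illustration of the underlying principle only, the classic second-spatial-derivative estimator (Mitzdorf, 1985) can be sketched as follows; the conductivity value is an assumed placeholder, and the 75-µm spacing matches the electrode array described in the Methods.

```python
# Classic CSD estimate: CSD_k = -sigma * (V[k-1] - 2*V[k] + V[k+1]) / h^2,
# evaluated at the interior channels of the linear array. Sinks and sources
# appear with opposite signs; a linear voltage gradient yields zero CSD.
def csd_second_difference(lfp_by_channel, spacing_um=75.0, sigma=0.3):
    h = spacing_um * 1e-6  # electrode spacing in meters
    return [-sigma * (lfp_by_channel[k - 1] - 2.0 * lfp_by_channel[k] + lfp_by_channel[k + 1]) / h ** 2
            for k in range(1, len(lfp_by_channel) - 1)]
```

The spline iCSD method improves on this estimator by modeling the sources as smooth cylindrical distributions, which reduces boundary artifacts at the first and last channels.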
A major fraction of excitatory synaptic inputs was antagonized with the AMPA/kainate glutamate receptor antagonist CNQX (200 µM). When applied to s. radiatum, CNQX reversibly reduced sharp wave amplitude in s. radiatum (Figures 2A,B). At the same time, SPW-R frequency in s. radiatum decreased (1.49 ± 0.16 Hz at baseline, 0.68 ± 0.21 Hz after local wash-in and 0.91 ± 0.18 Hz after wash-out; n = 7 slices; P < 0.01). In one out of seven slices SPW-R was completely abolished and started to recover after ∼2 min of drug washout. Field potential amplitudes in s. pyramidale were also significantly reduced. Conversely, when we applied CNQX to the pyramidal cell layer, sharp wave amplitude was stable in this layer, but showed a slight reduction in s. radiatum. In contrast to these findings no significant change of sharp wave amplitude or frequency was noted following application of CNQX in s. oriens ( Figure 2C). Together, these results indicate that sharp waves are indeed generated by a lamina-specific excitatory input to s. radiatum.
The positive-going sharp waves in s. pyramidale can be generated in at least two different ways: they may reflect balance currents following excitatory input to the dendrites or, alternatively, arise from outward currents generated by inhibition within the pyramidal cell layer itself (Johnston and Wu, 1995 pp. 426-434;Ylinen et al., 1995). We therefore applied the GABA A receptor antagonist gabazine (10 µM) to s. pyramidale. As a result, the positive field potential deflection in s. pyramidale was strongly diminished while the negative-going transient in s. radiatum remained unaffected (Figures 2D,E). In two of six slices, the transient in s. pyramidale reversed and we recorded negative deflections after local wash-in of the drug. When the GABAergic antagonist was applied to s. oriens, effects were very similar to those observed after disinhibiting s. pyramidale. Application of gabazine in s. radiatum had no significant effects ( Figure 2F). These results indicate the lamina-specific contribution of GABAergic inhibition in s. pyramidale and oriens to sharp waves. Sharp waves in CA1 were regularly superimposed by fast oscillations, reminiscent of hippocampal ripples in vivo (Buzsáki et al., 1992) and in vitro (Maier et al., 2003). These network oscillations were most pronounced in s. pyramidale but could also clearly be identified in the apical dendritic layer (s. radiatum) and in the proximal part of the basal dendritic layer (s. oriens; Figure 3A). The laminar distribution of ripple energy revealed a continuous decay between s. pyramidale and s. lacunosum-moleculare and a similar, though much steeper decay in s. oriens ( Figure 3B). Current source density analysis confirmed the rapid interplay between sinks and sources in s. pyramidale and also in s. radiatum (Ylinen et al., 1995;Sullivan et al., 2011). Interestingly, the ripples had slightly lower frequency in s. radiatum as compared to s. pyramidale ( Figure 3C).
We therefore examined the phase of fast oscillations recorded in s. radiatum, relative to the ripple oscillation in the pyramidal layer, during the course of SPW-R ( Figure 3D). Interestingly, ripple troughs in s. radiatum significantly preceded the corresponding peaks in s. pyramidale at the beginning of a SPW-R. Towards the end of a SPW-R, this phase shift decreased systematically (Figures 3E,F). Thus, ripple frequencies are not uniform across CA1, allowing complex temporal interactions between dendritic and somatic layers.
We next analyzed effects of CNQX and gabazine on ripple oscillations. Based on the laminar differences in ripple phase and frequency described above, we looked for different contributions of synaptic excitation and inhibition in the respective layers. Glutamatergic transmission was suppressed by local application of CNQX to s. radiatum, pyramidale or oriens, respectively. Despite a tendency to reduced ripple energy in all layers (Figure 4B), significant effects were layer-specific. Application of CNQX to s. radiatum clearly suppressed the fast oscillation within the same layer while an apparent reduction in s. pyramidale was not significant (Figures 4A,B). Application of CNQX to the pyramidal layer also attenuated ripples in s. radiatum, though this effect was less pronounced. No significant effects were observed following application in s. oriens (Figure 4B). These results indicate that glutamatergic transmission in s. radiatum contributes significantly to high-frequency oscillations in this layer.
In addition, we examined the role of synaptic inhibition for ripples. Application of gabazine to s. pyramidale or s. oriens consistently reduced ripple energy in s. pyramidale (Figures 4C,D). Following application to s. radiatum, however, no significant effects on ripples were observed. Interestingly, high-frequency oscillations within s. radiatum were unaffected by gabazine, even upon application within the same layer ( Figure 4D). Together, these data support the importance of rhythmic perisomatic or proximal-dendritic GABAergic inhibition for SPW-R (Ylinen et al., 1995).
Laminar block of excitatory and inhibitory transmission had differential effects on ripple frequency. Application of CNQX to s. radiatum had no significant impact on its median (181 ± 10 Hz at baseline, 178 ± 11 Hz after local wash-in and 185 ± 12 Hz after wash-out; n = 6 slices, P > 0.1) and variability. In contrast, ripple frequency variability was strongly increased following application of gabazine to s. pyramidale (semi-quartile range: 30.3 ± 4.6 Hz at baseline, 114.3 ± 20.4 Hz after local wash-in and 39.9 ± 5.7 Hz after wash-out; n = 6 slices, P < 0.05). Its median showed a tendency to increase (179 ± 9 Hz at baseline, 209 ± 22 Hz after local wash-in and 180 ± 10 Hz after wash-out; n = 6 slices, P > 0.05). These findings underline the key role of phasic inhibition for synchronization of fast oscillations during SPW-R (Buzsáki et al., 1992;Ylinen et al., 1995) specifically in s. pyramidale and s. oriens (Bähner et al., 2011).
Network oscillations have been suggested to entrain action potentials of multiple neurons into a common rhythm (Buzsáki and Draguhn, 2004). During SPW-R, in particular, unit discharges in s. pyramidale are tightly coupled to ripple troughs, as can be seen in cross-correlation histograms (Buzsáki et al., 1992; Figure 5A). Autocorrelation histograms underline this observation. They show peaks at intervals of about 5 ms, confirming periodic changes in discharge probability at ripple frequency (Csicsvari et al., 1999b). We tried to dissect the impact of different synaptic components on MUA.
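The coupling measure above was derived from the width of the peak in the unit-ripple cross-correlogram (Both et al., 2008). As a simplified, hypothetical proxy for that quantity, the sketch below measures the spread of each spike's lag to its nearest ripple trough; tightly phase-locked spikes give a value near zero, while dispersed firing gives a larger value.

```python
from statistics import pstdev

def coupling_jitter_ms(spike_times_ms, trough_times_ms):
    """Spread (population SD) of spike-to-nearest-trough lags, in ms.
    A simplified stand-in for the cross-correlogram peak-width measure
    used in the paper; it captures the same intuition of timing precision."""
    lags = []
    for s in spike_times_ms:
        nearest = min(trough_times_ms, key=lambda t: abs(t - s))
        lags.append(s - nearest)
    return pstdev(lags) if len(lags) > 1 else 0.0
```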
Apparently, application of CNQX to any of the three layers tested interfered with recruitment of units in s. pyramidale. This effect was significant after application to s. radiatum and s. oriens ( Figure 5B). Coupling jitter, which was calculated from cross-correlation between MUA and field ripples, remained stable or was slightly reduced ( Figure 5C). MUA autocorrelation histograms also remained largely unaffected. After application of CNQX to s. radiatum, coupling jitter was 40.8 ± 3.7% at baseline, 38.8 ± 4.5% after local wash-in and 44.7 ± 4.5% after wash-out (n = 4 slices). Similar effects were observed after CNQX had been applied to s. pyramidale or s. oriens.
Finally, we examined if local application of gabazine would affect unit discharge behavior. In s. pyramidale, MUA frequency was not significantly affected ( Figure 5E). Coupling of MUA to ripple troughs, however, was reversibly impaired (Figures 5D,F). Analysis of autocorrelation histograms further indicated that unit firing became more dispersed. Coupling jitters were 28.9 ± 0.8% at baseline, 40.2 ± 2.6% after local wash-in and 25.1 ± 1.7% after wash-out (n = 4 slices). Interestingly, application to s. oriens yielded very similar effects. Autocorrelation coupling jitter was 31.0 ± 2.0% at baseline, 43.7 ± 2.3% after local wash-in and 34.8 ± 3.3% after wash-out (n = 5 slices, P < 0.05). In contrast, no significant changes in MUA frequency, cross- or autocorrelation were observed after gabazine had been applied to s. radiatum. The precision of action potential timing thus crucially depends on inhibitory currents in s. pyramidale and s. oriens (Buzsáki et al., 1992;Ylinen et al., 1995).
DISCUSSION
Sharp wave-ripple complexes (SPW-R) reflect highly ordered activity patterns, which are believed to support specific cognitive functions like memory consolidation (Buzsáki, 1989). The underlying cellular mechanisms have been studied both in vivo (Buzsáki et al., 1992) and the in vitro slice preparation (Maier et al., 2003). These studies have shown that perisomatic inhibition is of key importance (Ylinen et al., 1995;Ellender et al., 2010) and that, at the same time, glutamatergic inputs from upstream projection neurons mediate synaptic excitation and propagation of activity (Buzsáki, 1986;Csicsvari et al., 2000;Both et al., 2008;Maier et al., 2011). SPW-R are recorded as an LFP in s. pyramidale during behavioral states of awake immobility, originally called large irregular activity (Vanderwolf, 1969). These transient field potentials have typical waveforms, laminar profiles and propagation patterns (Buzsáki et al., 1992;Ylinen et al., 1995;Chrobak and Buzsaki, 1996) which are also visible in hippocampal slice preparations (Kubota et al., 2003;Behrens et al., 2005;Both et al., 2008;Ellender et al., 2010). The relationship, however, between such extracellular compound potentials and the underlying excitatory and inhibitory synaptic currents, action potentials and other processes in multiple cells, is not trivial. In fact, LFPs result from a large variety of local and remote currents, including balance currents between different layers and far-reaching effects from remote current sinks and sources (Herreras, 1990;Johnston and Wu, 1995 pp. 426-434;Sirota et al., 2008;Kajikawa and Schroeder, 2011). The trilaminar anatomy of the cornu ammonis, which is formed by a line of multiple equally oriented cells ("open field" arrangement; Johnston and Wu, 1995 pp. 428 f.), provides ideal conditions to untangle the lamina-specific mechanisms underlying field potential deflections. 
Here we made use of a hippocampal slice preparation that preserves network activity patterns while allowing for flexible pharmacological manipulation without systemic side effects. Interpretation of our results should take into account that field potential recordings can be affected by potential fluctuations in remote areas. Diffusion of the drugs following local application is, however, more restricted. Therefore, pharmacological effects may have been underestimated as compared with bath application of drugs. Nevertheless, we demonstrate that glutamatergic and GABAergic receptor antagonists exert different and lamina-specific effects on sharp waves and superimposed ripple oscillations in CA1. Our data confirm a major excitatory input in s. radiatum which provides synaptic excitation at a different (lower) frequency than the resulting local network ripple within CA1. Moreover, both sharp waves and superimposed ripples are generated by both excitatory and inhibitory inputs, with different contributions of either mechanism in different laminae. We report that local application of CNQX suppressed sharp waves in s. radiatum. This finding indicates that the corresponding sink is largely generated by synchronous activation of AMPA/kainate receptors from Schaffer collateral afferents (Buzsáki, 1986;Csicsvari et al., 2000;Both et al., 2008). A critical involvement of NMDA receptors seems unlikely, as SPW-R are insensitive to 2-amino-5-phosphonopentanoic acid (APV) under our recording conditions (unpublished finding). We cannot exclude that the high concentration of CNQX close to the tip of the application pipette also affected GABA A receptor-mediated currents, as previously reported (McBain et al., 1992;Maccaferri and Dingledine, 2002). Local GABAergic potentials in s. radiatum, however, would be expected to generate positive field potential transients within the same layer, in contrast to our finding of reduced negative transients. Conversely, SPW-R amplitude in s. pyramidale was reduced following local application of gabazine. In some slices we recorded negative transients in the pyramidal layer. The source in the pyramidal layer thus seems to be largely due to active GABAergic outward currents (Ylinen et al., 1995), presumably evoked by parvalbumin-positive basket cells. These interneurons target the perisomatic compartment of CA1 pyramidal cells and are highly active during SPW-R (Ylinen et al., 1995;Csicsvari et al., 1999b;Klausberger et al., 2003;Bähner et al., 2011). Our data could, however, not demonstrate the contribution of passive return currents, as had been discussed previously (Ylinen et al., 1995). Thus, sharp waves in different layers are generated by clearly different processes which can be pharmacologically distinguished: synaptic excitation in the dendritic cell layer and synaptic inhibition in perisomatic regions. As a complicating factor, application of CNQX to s. radiatum suppressed sharp waves in s. pyramidale as well. This effect could be ascribed to a reduction of feed-forward inhibition (Gulyás et al., 1999;Pouille and Scanziani, 2001). Overall, the laminar profile of SPW-R is predominantly evoked by local active sinks and sources, respectively, rather than remote passive ones (Herreras, 1990;Johnston and Wu, 1995 pp. 426-434).
How is the superimposed high-frequency oscillation generated? Previous work based on recordings from individual neurons in vivo (Ylinen et al., 1995;Csicsvari et al., 1999b) and in vitro (Bähner et al., 2011;Maier et al., 2011) suggests a key role for GABAergic interneurons, again presumably parvalbumin-positive basket cells. Those cells target the perisomatic compartment of CA1 pyramidal cells, show fast spiking strongly coupled to field ripples (Klausberger et al., 2003) and have therefore been proposed to generate the current sources at ripple frequency observed in s. pyramidale (Ylinen et al., 1995). Indeed, we observed that local application of gabazine reduces ripple energy in s. pyramidale. This suggests a substantial contribution of the predicted synchronous GABAergic currents. Effects on the sharp wave component seemed more pronounced, which could indicate that local excitatory currents in s. radiatum also have an impact on ripples in the pyramidal layer. It should be noted, however, that ripple energy is calculated as an integral of the continuous wavelet transform. Therefore, it might be considerably greater than zero even at baseline level, apparently attenuating drug effects. Moreover, high-frequency oscillations in the wavelet transform may contain rhythmically entrained unit activity which is still present after application of gabazine. In addition, our data indicate that GABAergic currents in s. oriens, which might tune axonal excitability, contribute to ripple oscillations. This would be consistent with an involvement of hypothesized axo-axonic gap junctions that allow ectopic action potential genesis (Bähner et al., 2011;Traub et al., 2012). Experimental work and modeling studies also indicate that phasic inhibition is crucial for the tight phase locking of pyramidal cell action potentials to ripples (Buzsáki et al., 1992;Bähner et al., 2011). Indeed, block of GABA A receptors strongly interfered with the coupling of pyramidal layer MUA to field ripples. This confirms that during SPW-R, fast-spiking interneurons act as a clock that precisely tunes the timing of principal cell discharges. Synchronous action potentials, in turn, might also contribute to the shape of high-frequency oscillations recorded in the pyramidal layer ("mini" population spikes; Buzsáki, 1986;Ylinen et al., 1995).
Ripples have thus been a phenomenon primarily linked to the pyramidal layer. Initially, alternating sinks and sources were depicted as being confined to it (Ylinen et al., 1995). Recently, however, the coexistence of a concomitant high-frequency oscillation in s. radiatum has been reported in vivo (Sullivan et al., 2011). In vitro we observed a similar laminar CSD profile, depicting a fast oscillation of relevant energy also in s. radiatum. Interestingly, this oscillation is characterized by a slightly, but distinctively lower frequency. It is thus unlikely a mere epiphenomenon of ripples in the pyramidal layer. These radiatum "ripples" rather seem to be evoked by precisely timed local glutamatergic currents, as evidenced by their sensitivity to CNQX. Those might be elicited by Schaffer collateral inputs, considering that CA3 initiates SPW-R (Buzsáki, 1986) and shows slower high-frequency oscillations (Csicsvari et al., 1999a;Maier et al., 2003;Both et al., 2008;Sullivan et al., 2011). Though a recent study (which did not include subregional coherence analysis) concludes that ripples are not transferred wave by wave (Sullivan et al., 2011), the tight cross-correlation between single CA3 pyramidal cells and ripples in CA1 has been well-documented, especially for corresponding subregions (Both et al., 2008). The CA3 network thus generates a highly-synchronized output pattern rather than providing diffuse excitation onto CA1 pyramidal cells. This signal very likely contains some frequency component slightly below the typical ripple spectrum, and should substantially contribute to high-frequency oscillations in CA1 s. radiatum. Nevertheless, this downstream CA1 network has intrinsic properties that allow the generation of ripples, as evidenced by recordings from CA1 minislices (Maier et al., 2003). The interplay between excitatory and inhibitory events has recently been directly demonstrated by whole-cell recordings from CA1 pyramidal cells.
These show that excitation is phase-advanced at the beginning of SPW-R and progressively synchronizes with inhibition towards the end of each complex. Interestingly, we observed an analogous phase shift between ripple troughs (s. radiatum) and corresponding peaks (s. pyramidale), which progressively decreased during the course of individual SPW-R. This finding is consistent with a slower frequency in s. radiatum and underlines the existence of an additional high-frequency oscillation distinct from ripples in the pyramidal layer. In conclusion, CA3 principal neurons could assist in suprathreshold excitation of downstream neurons forming a cell assembly via precisely timed and spatially confined currents. On the other hand, the decreasing phase-shift between excitation and inhibition tightens the temporal window for spike generation towards the end of an individual SPW-R and hence might contribute to the termination of this sharply delineated network burst.
Though the relationship between LFP waveforms and the underlying multi-neuronal activity patterns may be complex (Henze et al., 2000;Csicsvari et al., 2003;Pettersen et al., 2006), during SPW-R they reflect a characteristic signature of different neuronal assemblies (Reichinnek et al., 2010). Our data indicate that SPW-R recorded in CA1 mainly reflect a weighted average of well-coordinated local synaptic currents. At least two distinct sources of high-frequency oscillations can be distinguished: in s. pyramidale, they seem due to GABAergic inputs from local interneurons, while in s. radiatum, the specific waveform is largely evoked by long-range glutamatergic afferents, likely from CA3. In addition, experimental and theoretical approaches suggest that supralinear dendritic interactions (Memmesheimer, 2010) and ectopic action potential generation (Bähner et al., 2011;Traub et al., 2012) might play a role in assembly formation.
"year": 2014,
"sha1": "a09e236b51644f5e0c3b6f94a8c5cbd9cae5bdd9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fncir.2014.00103/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a09e236b51644f5e0c3b6f94a8c5cbd9cae5bdd9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Acetylcholine Receptor Gating: Movement in the α-Subunit Extracellular Domain
Acetylcholine receptor channel gating is a Brownian conformational cascade in which nanometer-sized domains (“Φ blocks”) move in staggered sequence to link an affinity change at the transmitter binding sites with a conductance change in the pore. In the α-subunit, the first Φ-block to move during channel opening is comprised of residues near the transmitter binding site and the second is comprised of residues near the base of the extracellular domain. We used the rate constants estimated from single-channel currents to infer the gating dynamics of Y127 and K145, in the inner and outer sheet of the β-core of the α-subunit. Y127 is at the boundary between the first and second Φ blocks, at a subunit interface. αY127 mutations cause large changes in the gating equilibrium constant and with a characteristic Φ-value (Φ = 0.77) that places this residue in the second Φ-block. We also examined the effect on gating of mutations in neighboring residues δI43 (Φ = 0.86), εN39 (complex kinetics), αI49 (no effect) and in residues that are homologous to αY127 on the ε, β, and δ subunits (no effect). The extent to which αY127 gating motions are coupled to its neighbors was estimated by measuring the kinetic and equilibrium constants of constructs having mutations in αY127 (in both α subunits) plus residues αD97 or δI43. The magnitude of the coupling between αD97 and αY127 depended on the αY127 side chain and was small for both H (0.53 kcal/mol) and C (−0.37 kcal/mol) substitutions. The coupling across the single α–δ subunit boundary was larger (0.84 kcal/mol). The Φ-value for K145 (0.96) indicates that its gating motion is correlated temporally with the motions of residues in the first Φ-block and is not synchronous with those of αY127. This suggests that the inner and outer sheets of the α-subunit β-core do not rotate as a rigid body.
INTRODUCTION
The diliganded gating isomerization of the acetylcholine receptor (AChR), between C(losed) and O(pen) structures, is a conformational "wave" that links a change in affinity for ligands at the transmitter binding sites with a change in the ionic conductance of the pore. In the α-subunit, the first group of amino acids to undergo a C→O structural change is near the transmitter binding sites (Corringer et al., 2000;Chakrapani et al., 2003). Homologous residues in the ACh binding protein (AChBP) have been shown to change conformation as a result of agonist binding (Brejc, 2001;Celie et al., 2004;Hansen et al., 2005). The second group of residues to move in gating is near the base of the extracellular domain (ECD), in loops 2 and 7 (the "cys-loop") (Chakrapani et al., 2004). These ECD movements subsequently propagate into the transmembrane domain (TMD), toward an equatorial "gate" in M2 that regulates the conductance of the pore.
The relative timing of a residue's gating motion can be inferred from the rate constants of the diliganded C↔O gating reaction (Auerbach, 2007). The slope Φ of a log-log plot of the opening rate constant vs. the equilibrium constant for a series of mutations of a single residue is thought to give the relative time in the channel-opening process at which the mutated residue converts from its C to its O structure. In the extracellular region of the α-subunit, Φ values decrease from the transmitter binding site (Φ = 0.93), to the cys-loop and loop 2 (Φ = 0.78), to the M2-M3 linker (Φ = 0.64).
Currently, there is a model for the structure of closed, unliganded Torpedo AChRs at ~4 Å resolution (Unwin, 2005), in which the β cores of the ECD of the two α subunits are rotated with respect to those in the three non-α subunits. This observation led to the proposal that during diliganded C→O gating (as opposed to agonist binding) there is a symmetry-restoring rotation of the inner β-sheet of the α-subunit ECD.
Here we report the results of mutations on gating of two α-subunit residues that are near the top of the inner (strands 1, 2, 3, 5, 6, and 8) and outer (strands 4, 7, 9, and 10) β sheets of the α-subunit, Y127 (on β-strand 6) and K145 (on β-strand 7). αY127 lies at a boundary between the first two Φ blocks, with its atoms <4 Å from αD97 in loop A (Φ = 0.93) in the Torpedo AChR structure (Unwin, 2005). However, αY127 and αD97 are ~9 Å apart in the mouse α-subunit ECD fragment structure (2qc1.pdb, Dellisanti et al., 2007). αY127 is located at or near the C terminus of β-strand 6, one position from the C128-C142 disulfide bond that defines the cys-loop (loop 7) of the eukaryote pentameric receptor superfamily (Fig. 1). αY127 also is at a subunit interface and faces either the ε (γ in embryonic AChRs) or δ subunit, and for this reason the structure of this residue is poorly resolved in the monomeric ECD fragment (Dellisanti et al., 2007). Mukhtasimova and Sine (2007) found that the mutation αY127T substantially decreases K eq, as do the mutations εN39A and δN41A in nearby residues of these non-α subunits. Moreover, the effects of these perturbations were not independent, which suggests that these positions are coupled energetically and are a link for the intersubunit propagation of the gating conformational cascade.
In both the Torpedo AChR and ECD fragment structures, αK145 is <4 Å from two residues whose mutation significantly changes K eq, αD200 (in loop C) and αY93 (in loop A) (Akk, 2001). Although rate constants for only a few mutations of each of these positions have been measured, the values are consistent with a Φ-value near 1, which places these neighboring amino acids in the first, Φ = 0.93 block. M144, next to K145 in sequence, was measured to have a Φ-value of 0.84 ± 0.05. Mukhtasimova et al. (2005) found that substitution of A, Q, and E side chains at αK145 all reduce K eq substantially and that the effects of αK145E and αD200N mutations are not energetically independent, and proposed that interactions between αK145-αD200 vs. αK145-αY190 (based on structure) stabilize the C vs. O conformation, respectively.
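The energetic-independence arguments above, and the coupling energies reported in this paper, follow from thermodynamic mutant-cycle analysis. A generic sketch, assuming gating equilibrium constants have been measured for wild type, each single mutant, and the double mutant (the numeric values in the test are placeholders, not data from this study; T = 295 K is an assumed recording temperature):

```python
from math import log

def coupling_energy_kcal(keq_wt, keq_a, keq_b, keq_ab, temp_k=295.0):
    """Mutant-cycle interaction energy:
    ddG = -RT * ln[(K_ab * K_wt) / (K_a * K_b)], in kcal/mol.
    Zero means the two mutations perturb gating independently."""
    rt = 0.0019872 * temp_k  # gas constant in kcal/(mol*K) times T
    return -rt * log((keq_ab * keq_wt) / (keq_a * keq_b))
```

By this convention a positive value means the double mutant opens less readily than predicted from the single mutants, i.e., the two positions interact energetically during gating.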
We have extended these studies regarding αY127 and αK145 by more extensive Φ-value analysis, and have related the results to the ECD rotation hypothesis for gating. First, we measured rate constants from single-channel currents and estimated Φ for αY127 (all 20 natural amino acid side chains) and its neighbor in the δ-subunit, I43. Second, we estimated the magnitude of the energetic coupling between αY127 and either δI43 or αD97, in six different constructs. Third, we examined the kinetic behavior of mutations to residues in the ε, δ, and β subunits that are homologous to αY127. Fourth, we measured the diliganded gating rate constants of four mutants of position αK145. The results show that a point side chain substitution at αY127 can change K eq by a factor of ~290,000, that αY127 is a member of the second Φ-block, and that the coupling between αY127 and αD97 or δI43 is measurable but small (<1 kcal/mol). Regarding αK145, mutations alter the channel opening rate (relative to the change in K eq) to a greater extent than for αY127, which suggests that these two residues do not move synchronously in the gating reaction.
Figure 1 legend: Only the α ε (left) and ε subunits are shown; the horizontal lines mark, approximately, the membrane. In α δ the three Φ blocks that link the transmitter binding site with the gate are color coded as purple (Φ = 0.93, W149, K145), orange (Φ = 0.78, Y127), green (Φ = 0.65, S269), and red (Φ = 0.31, L251). (B and C) Expansion of boxed region in A. αK145 (purple) is on β-strand 7 and αY127 (orange) at the α ε /ε (B) and α δ /δ (C) subunit interface. αY127 is <4 Å from residues αD97 and αN94 in loop A (purple), αQ48 in loop 2 (orange), and εN39/δI43 in β-strand 1 (black). Structures were displayed by using PYMOL (DeLano Scientific).
MATERIALS AND METHODS
For the details of mutagenesis, expression, electrophysiology, rate constant determination, and Φ-value analysis, see Jha et al. on page 547 of this issue. In brief, mouse AChR subunits were transiently expressed in HEK 293 cells and recordings were from cell-attached patches (22°C, ∼−100 mV membrane potential). Agonist was added to the pipette solution (500 μM ACh, 5 mM carbamylcholine, or 20 mM choline). Currents were analyzed with QUB software (www.qub.buffalo.edu). Opening and closing rate constants were estimated from interval durations by using a maximum-interval likelihood algorithm (Qin et al., 1997) after imposing a dead time of 25 μs.
Φ was estimated as the slope of the rate-equilibrium free energy relationship (REFER), which is a plot of log k o vs. log K eq . Each point in the plot represents the mean of at least three different patches.
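To make the REFER procedure concrete, the sketch below recovers Φ as the slope of an unweighted linear fit of log ko against log Keq. The mutant series is synthetic (generated with a slope of 0.77), not data from this study.

```python
import math

def estimate_phi(k_open, k_eq):
    """REFER Phi: slope of an unweighted linear fit of log10(ko) vs. log10(Keq)."""
    xs = [math.log10(v) for v in k_eq]
    ys = [math.log10(v) for v in k_open]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic, noise-free mutant series generated with Phi = 0.77,
# mimicking a series such as the alphaY127 mutants (not measured data).
k_eq = [0.001, 0.01, 0.1, 1.0, 10.0]
k_open = [1000.0 * v ** 0.77 for v in k_eq]
phi = estimate_phi(k_open, k_eq)
print(round(phi, 2))  # -> 0.77
```

With real data each (ko, Keq) point would be the mean over at least three patches, as described above.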
Mutations of αY127 and its Homologues
In vertebrate α1 subunits, position 127 is always a Y, but in non-α1 subunits it is never a Y (it is, rather, S, A, T, or V). A tyrosine at position 127 is thus a specific marker for the vertebrate neuromuscular α1-subunit. The location of Y127 in the Torpedo AChR structure is shown in Fig. 1. Fig. 2 and Table I show the results of single-channel kinetic analyses of wild-type AChRs plus all 19 natural amino acid substitutions at αY127.

[Fig. 2 caption: In the continued presence of such a high concentration of agonist, openings occur in clusters (open is down) separated by long nonconducting sojourns in "desensitized" states. Each cluster reflects C↔O gating of a single AChR. (B) Three gain-of-function constructs (F, W, and H) were activated by 20 mM choline; currents for all other mutants were elicited by 500 μM ACh. Example clusters for WT are shown for both agonists. Calibration bars: horizontal = 100 ms for choline and ACh; vertical = 2 pA (choline) and 6 pA (ACh). There was no apparent correlation of side chain hydrophobicity or volume with the change in the diliganded gating equilibrium constant (Keq); r = 0.26 (hydrophobicity) and 0.28 (volume).]

16 of the mutations decreased Keq (D by ∼4,900-fold) while the three aromatic side chains H, W, or F increased Keq (F by ∼59-fold). There was no correlation between side chain hydrophobicity or volume and the change in Keq. The change in Keq in AChRs having D vs. F at position 127 (in both α subunits) was ∼290,000-fold, which represents an energy difference of ∼7.4 kcal/mol. For comparison, the maximum fold-changes in Keq caused by mutations of some other α-subunit residues are shown in Table II. In our hands, Y127 is the most sensitive position ever reported for a point side chain substitution in both α subunits. The substantial changes in Keq indicate that the energetic consequences of the mutations are substantially different in C vs.
O, which implies that αY127 changes its structure, environment, or both (i.e., moves) in the gating reaction.
The mutation-induced changes in Keq at αY127 arose mainly from changes in the channel opening rate constant (ko). Fig. 3 shows a REFER analysis (a log-log plot of ko vs. Keq) of the mutational series at αY127. Each ∼10-fold change in Keq arose, on average, from an ∼6.2-fold change in ko and an ∼1.6-fold change in kc. The slope of this relationship, Φ, was 0.77 ± 0.02. Notice that the results for AChRs activated by different agonists scatter about the same line and that the Φ estimate was similar regardless of whether the AChRs were activated by acetylcholine (0.85 ± 0.04), carbamylcholine (0.75 ± 0.2), or choline (0.75 ± 0.04) (Fig. 3).
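Given Φ, the expected partitioning of a Keq change between the two rate constants follows from the slope alone: a 10-fold change in Keq corresponds to a 10^Φ-fold change in ko and a 10^(1−Φ)-fold change in kc. The check below uses Φ = 0.77 (the measured per-mutant averages quoted above differ slightly from this idealized partitioning):

```python
phi = 0.77  # REFER slope measured for alphaY127

ko_fold = 10 ** phi        # fold-change in opening rate per 10-fold Keq change
kc_fold = 10 ** (1 - phi)  # fold-change in closing rate per 10-fold Keq change
print(round(ko_fold, 1), round(kc_fold, 1))  # -> 5.9 1.7
```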
The Φ-value for αY127 is the same as those for several residues in loop 2 and the cys-loop (Φ = 0.80 ± 0.05 and 0.78 ± 0.03) (Jha et al., 2007) and R209 in the pre-M1 linker (0.74 ± 0.02, on an E45A background), but is different from those for the transmitter binding site (0.93 ± 0.02) (Grosman et al., 2000) and residue αD97 in loop A (0.93 ± 0.03) (Chakrapani et al., 2003). This result suggests that position 127 moves relatively early in the diliganded channel-opening process and that its gating motions are correlated temporally with other residues in the second (Φ = 0.78) gating block, but occur after those in the first (Φ = 0.93) gating block. We measured the single-site association and dissociation rate constants (k+ and k−) and the equilibrium dissociation constant (k−/k+ = Kd) for ACh binding to the closed conformation in one mutant construct, Y127C (Fig. 4). In this mutant Kd = 144 μM, which is in the range of previous measurements for wild-type AChRs exposed to 140 mM NaCl (100-150 μM) (Chakrapani et al., 2003). Similarly, the association and dissociation rate constants in the mutant, k+ = 2 × 10^8 M−1 s−1 and k− = 3.0 × 10^4 s−1, were similar to wt values.
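As a consistency check on the binding measurement, the equilibrium dissociation constant follows directly from the two rate constants (Kd = k−/k+):

```python
k_plus = 2e8     # association rate constant, M^-1 s^-1 (Y127C mutant)
k_minus = 3.0e4  # dissociation rate constant, s^-1

K_d_uM = k_minus / k_plus * 1e6  # dissociation constant in micromolar
print(round(K_d_uM, 1))  # -> 150.0, consistent with the ~144 uM kinetic estimate
```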
Kinetic Analyses of AChR Mutants
We also probed the effects on gating of mutations to residues in the β, ε, and δ subunits that are homologous to αY127. In the non-α subunits, which are homologous in both sequence and structure to the α subunits in the vicinity of αY127, the residue in question (βS127, εT127, or δS129) immediately precedes in sequence the extracellular disulfide bond. Seven mutations of these three positions all yielded AChRs having wt-like gating behaviors (Table IV).

αI49, δI43, and εN39

We next examined the gating properties of AChRs having mutations of residues that are close to αY127 (Fig. 1 B). αI49 is at the N terminus of β-strand 2, ∼5 Å from αY127. The gating kinetics for three mutants of this position, C, V, and Y, did not change Keq by greater than threefold (Table I). Thus, we have no evidence that the αI49 side chain moves relative to its local environment between the C and O conformations.
εN39 and δI43 are neighbors of αY127 in the companion, non-α subunit. A REFER analysis of position δI43 is shown in Fig. 5. All four of the tested substitutions decreased Keq, with Φ = 0.86 ± 0.10. Although this result indicates that δI43 moves early in the reaction, we are unable to distinguish this Φ-value from those of the first (0.93; agonist and loops A, B, and C) and second (0.77; Y127, loop 2, and cys-loop) blocks of the α-subunit. At εN39, F and D substitutions caused a small (less than threefold) change in Keq, and the substitution of an Ile at this position also generated currents having wt-like kinetic behavior (when activated by 30 μM ACh). The substitution of an H increased the cluster open probability relative to the wt, but the kinetics of these intracluster intervals were complex, with at least two conducting and two nonconducting states apparent. Therefore, unambiguous values of ko and kc could not be estimated. These results suggest that εN39 moves during gating, but we were unable to estimate a Φ-value for this position.
Coupling of αY127 Gating Motions within and between Subunits
In the α-subunit, two residues in loop A, part of which contributes to the transmitter binding site, may be close to αY127: αD97 and αN94. Mutation of αD97 causes a substantial change in K eq and has a Φ-value that is different from that of αY127 (0.93 vs. 0.77). We therefore tested whether an interaction between αY127 and αD97 couples the gating motions (energy transfer) between the transmitter binding site (in the fi rst Φ-block) and the cys-loop (in the second Φ-block).
[Table II footnote: Fold-changes in gating equilibrium constant are from the literature and are for diliganded gating measured by single-channel analysis; ∆∆G = −RT ln(Keq^wt/Keq^mut).]

We probed a D97↔Y127 interaction by measuring the gating kinetics of AChRs having a mutation (in both α subunits) at both of these positions (Table III). Six pairwise combinations were tested, with two different side chains at Y127 (H and C) and three different side chains at D97 (M, Y, and H). By themselves, the mutations at position 127 either reduced Keq (C, by 201-fold) or increased Keq (H, by 7.3-fold), while those at position 97 always increased Keq (M, Y, or H, by 5.5-, 20-, and 7.3-fold, respectively). The hallmark of energetic coupling between αY127 and αD97 is a fold-change in Keq with both sites mutated that is not equal to the product of the fold-changes for each site mutated alone. With Y127H (activated by choline), the observed values of Keq for the three D97 mutants were, on average, modestly (∼2.5-fold) smaller than predicted assuming independence (Table III). With the Y127C constructs (activated by ACh), the observed values of Keq for the three D97 mutants were close to those predicted assuming independence. The average coupling energy was 0.53 kcal/mol for the Y127H background and −0.37 kcal/mol for the Y127C background. These results suggest that the magnitude of the coupling energy can vary with the side chain substitution. However, the coupling energy was small for both of the tested backgrounds, especially when one considers that this energy is spread between two Y127-D97 pairs (two α subunits). Overall, the results suggest that although a large energy change is associated with positions D97 and Y127 when examined individually, a D97↔Y127 perturbation in combination is not an important component of energy transfer within the transition state of diliganded gating. The assumption that these residues interact at the Φ-block boundary, however, is based on the proximity of the two residues in the Torpedo AChR structure. Two problems with this assumption are that Y127 and D97 are >9 Å apart in the mouse α-subunit fragment structure, and that neither structure reflects a ligand-bound AChR.
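The independence test used here is the standard double-mutant (thermodynamic) cycle: if two mutations act independently, the fold-change in Keq for the double mutant equals the product of the single-mutant fold-changes, and any shortfall translates into a coupling free energy ΔΔG_int = −RT ln(observed/predicted). A sketch with the Y127H/D97M numbers, where the observed double-mutant fold-change is illustrative (back-calculated from the ∼2.5-fold average shortfall, not a tabulated value):

```python
import math

RT = 0.587  # kcal/mol at 22 deg C (recording temperature)

def coupling_energy(fold_double, fold_a, fold_b):
    """Double-mutant cycle: positive values mean the double mutant's Keq
    falls short of the independent (multiplicative) prediction."""
    predicted = fold_a * fold_b
    return -RT * math.log(fold_double / predicted)

# Y127H increases Keq 7.3-fold and D97M 5.5-fold; assume the observed
# double-mutant fold-change sits ~2.5-fold below the prediction.
dG = coupling_energy(7.3 * 5.5 / 2.5, 7.3, 5.5)
print(round(dG, 2))  # -> 0.54 kcal/mol, close to the ~0.53 average reported
```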
[Fig. 3 caption: Φ was estimated as the slope of an unweighted linear fit to a log-log plot of normalized ko vs. normalized Keq for all 19 mutants (Table I). The slope Φ = 0.77 ± 0.02 makes αY127 a member of the second Φ-block, which includes the cys-loop and loop 2. Open circles, filled circles, and open squares are choline, ACh, and carbamylcholine data points, respectively.]

[Fig. 4 caption: (Chakrapani and Auerbach, 2005). There is no significant effect of this mutation on ACh binding to closed AChRs, and we speculate that αY127 mutations that change Keq do so by changing the unliganded gating equilibrium constant rather than the closed/open affinity ratio. Calibration bars for single-channel traces: horizontal = 100 ms, vertical = 6 pA.]

There is a reason to suspect that
loop A moves as a consequence of agonist binding (in addition to channel gating), so we do not know the separation between these residues in fully liganded AChRs. We next measured the extent of coupling between αY127H (7.3-fold increase in Keq) and δI43H (13.8-fold decrease). Together, these mutations caused a 2.2-fold increase in Keq, whereas if they were independent we would expect a 1.9-fold decrease. This approximately fourfold effect indicates that there is a modest degree of coupling between the αY127 and δI43 side chains (+0.84 kcal/mol; Table IV). Note that this interaction occurs at a single subunit interface and should therefore be considered substantially greater than the αY127-αD97 interaction.

αK145

We measured the gating rate constants for four different mutations of αK145, which is on β-strand 6 (Fig. 1). In the unliganded Torpedo structure, this residue is within 4 Å of αD200 and the loop A residue αY93, two residues that have been shown to move during diliganded C↔O gating. K145 is also likely to be close to the moving residue αY190 (Chen et al., 1995) when the transmitter binding site is occupied by an agonist (Celie et al., 2004). Finally, αK145 is near αT202, a residue that has not yet been probed at the rate constant level.
All four of the mutations of K145 (C, A, R, and D) decreased Keq, by up to 282-fold (Table V). These decreases were, in all cases, almost exclusively due to decreases in ko. Fig. 6 shows the REFER for αK145. The Φ-value was 0.96 ± 0.04.

Mukhtasimova and Sine (2007) studied the kinetic behavior of two αY127 mutants (F and T) plus εN39A and δN41A. Further, they measured the coupling between three pairs and two triplet combinations of these mutants. Although they studied human AChRs activated by ACh in 142 mM KCl and we studied mouse AChRs activated by ACh or choline in 140 mM NaCl, both sets of results are in general agreement. Mutations to Y127 have a profound effect on channel gating (Keq), and this residue is a site where gating motions are coupled between subunits.
[Table III footnote: Mutations of αD97 (M, H, and Y) on Y127H or Y127C constructs generally showed a fold-change in Keq approximately half that predicted from the product of the single-mutant fold-changes. The coupling energies for both double-mutant series are small, suggesting that the coupling energy is distributed across multiple sites between the first and second Φ blocks.]

The main difference between the two sets of results concerns the αY127F mutation. We measured a much larger increase in Keq for Y127F (58.7-fold vs. a 2.2-fold increase). We speculate that this difference can be traced to an immeasurably fast opening rate constant for this construct in the experiments where the mutant AChRs were activated by ACh. In wt AChRs the difference in
Keq for different agonists is manifest almost exclusively as a difference in the opening rate constant (Φ = 0.93; Grosman et al., 2000). Assuming that this pattern pertains to the Y127F mutant, then ko with ACh should be ∼400 times larger than ko with choline (Chakrapani and Auerbach, 2005). In this case, our measurement of ko with choline (2853 s−1) translates to a ko with ACh of >10^6 s−1, which is too fast to be detected experimentally. Perhaps the brief gaps observed in the experiments with human AChRs (Fig. 2 and Table II in MS) did not arise from C↔O gating but rather from channel block by the agonist or some other process.
Our results do not agree with the proposal that aromatic side chains can be substituted at position αY127 without consequence. Mukhtasimova et al. (2005) also measured the gating rate constants for E, Q, and A mutants of αK145. They report that these mutations decrease ko but leave kc essentially unchanged, which is consistent with our estimated Φ-value of 0.96 for this position.
Structure-Function
A D-to-F side chain substitution at αY127 changes Keq by ∼290,000-fold. The magnitude of this change is substantially greater than that caused by any other ECD side chain substitution observed so far, even considering the fact that both α subunits carried the mutation. (The change in Keq would be ∼540-fold if the energy difference between C and O were equally distributed between the two α subunits.)
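The fold-change/energy arithmetic behind these statements is ΔΔG = RT ln(fold-change), and splitting the energy equally between the two mutated α subunits corresponds to taking the square root of the fold-change. A quick check (RT ≈ 0.587 kcal/mol at 22°C):

```python
import math

RT = 0.587      # kcal/mol at 22 deg C
fold = 290_000  # approximate D-vs-F fold-change in Keq (both alpha subunits mutated)

ddG = RT * math.log(fold)      # total C-vs-O energy difference, kcal/mol
per_subunit = math.sqrt(fold)  # fold-change if energy is split over two subunits
print(round(ddG, 1), round(per_subunit))  # -> 7.4 539
```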
The relationship between a change in structure and the magnitude of the change in Keq is complex. Although we measured Keq for all 20 natural side chains at αY127 and for four side chains at αK145, we are nonetheless unable to draw strong conclusions about the chemical nature of the forces behind the αY127 gating motions. We note, however, that the mutations of αY127 that increased Keq are aromatic and flat. There is no apparent correlation between side chain volume or hydrophobicity and the magnitude of the change in Keq. Also, the charged side chains D, K, R, and E all reduced Keq at αY127 (by 4847-, 1282-, 553-, and 104-fold, respectively), and D and R reduced Keq at αK145 (by 282- and 60-fold, respectively), so the sign of the charge at both of these positions appears not to be an important determinant of Keq.
[Fig. 5 caption: Φ was estimated as the slope of an unweighted linear fit to a log-log plot of normalized ko vs. normalized Keq for all four mutants (Table IV); the slope was Φ = 0.86 ± 0.10. Calibration bars for single-channel traces: horizontal = 100 ms, vertical = 6 pA.]

[Table IV residue (column headings and footnote): Construct; Agonist; ko (s−1); kc obs (s−1); kc cor (s−1); Keq (ko/kc cor); normalized Keq (mut/wt); n. In the β, δ, and ε subunits, none of the mutants at residues homologous to Y127 showed a fold-change in Keq greater than threefold; these residues may not move during AChR gating. Abbreviations are as indicated earlier.]

The gating motion of αK145 (as evidenced by the mutation-induced change in Keq) occurs approximately synchronously (same Φ-value) with other residues near the transmitter binding site, in loops A, B, and C. The movement of αK145 is correlated temporally with the movement of its close neighbors αD200 and αY93. The movement of αY127 occurs after the movement of αK145, and approximately synchronously with residues in the cys-loop and loop 2.
Rotation Hypothesis
The mutation-induced changes in Keq at positions αK145 and αY127 are consistent with the proposal that gating entails a rotation of the α-subunit β-sandwich core (Unwin et al., 2002). However, some observations of AChR function appear to be inconsistent with this hypothesis. (a) A substituted-cysteine accessibility study of residues between L36 and I53 in strands β1 and β2 of the α7 AChR showed that the rates of reaction with MTSEA in the presence of ACh varied significantly (McLaughlin et al., 2007). However, the rate of reaction decreased and increased, respectively, for the closely apposed residues M40 and N52, a result that is unexpected for a rigid-body rotation of the β-core. (b) The effects of mutations on Keq have been measured for seven different residues in the inner β strands of the ECD core: αL40A (in strand 1); αI49C, V, and Y, αV54L, and αR55A and W (in strand 2); and αA122L, αS126V and A, and αY127 (in strand 6). Of these constructs, only the αY127 mutants changed Keq by greater than threefold and, hence, gave a clear indication of motion. Although the lack of change in Keq does not unequivocally indicate a lack of gating motion, it would be surprising if a rotation altered the energetic environment only around αY127.
More residues (Celie et al., 2004) and mutations in both the inner and outer leaflets of the β-core need to be examined to test the energetic consequences of such a rotation. (c) The asynchrony of motion (different Φ-values) for αY127 and αK145 is unexpected if the β-core rotation were a rigid-body motion. In summary, the results suggest that the hypothesis of a β-sandwich core rotation in the gating reaction is, at best, incomplete. Because the rotation hypothesis arose from a comparison of the structures of α vs. non-α subunits in unliganded AChRs, we speculate that such movements may occur upon ligand binding rather than channel gating.
Φ Map

[Fig. 6 caption: REFER for αK145 (Table V); the Φ-value estimated for K145 is Φ = 0.96 ± 0.04. Calibration bars for single-channel traces: horizontal = 100 ms, vertical = 6 pA.]

Fig. 7 shows the map of Φ superimposed on the mouse α-subunit fragment structure (2qc1.pdb; Dellisanti et al., 2007). The Φ-values for the purple residues are ∼0.93, those for the orange residues are ∼0.77, and the white residues show no indication of a gating motion (∆Keq < threefold). This pattern suggests that the diliganded gating motions in the α-subunit mainly propagate along the α-ε (α-δ) subunit interface. Φ changes significantly (by ∼0.16 units) between αD97 and αY127 (which are within 4 Å in 2bg9.pdb and 9 Å apart in 2qc1.pdb), whereas Φ is the same for residues that
are separated by much larger distances. For example, residues separated by >20 Å that have similar Φ-values are αY127 and αF135 (Φ ≈ 0.78) and αD97 and αD152 (Φ ≈ 0.93). These results, along with similar comparisons elsewhere in the AChR, suggest that residues are grouped into contiguous domains within which all members have approximately the same Φ-value, and that Φ can change abruptly in space, as expected from discrete boundaries. However, it is important to mention again that we cannot be certain that αD97 and αY127 are as closely apposed in the end states of our reaction (the agonist-bound closed↔open) as they are in the unliganded Torpedo structure or the toxin-bound α-subunit fragment. The results indicate that the first gating Φ-block extends at least to αK145, and perhaps to αM144, which has a Φ-value of 0.84 ± 0.05 (Chakrapani et al., 2004).

[Table V residue (column headings): Construct; Agonist; ko (s−1); kc obs (s−1); kc cor (s−1); Keq (ko/kc cor); normalized Keq (mut/wt); n.]
Coupling
Our results indicate that there is only a small amount of energetic coupling (<0.6 kcal/mol) between αY127 and αD97 even though these side chains are close, are mutation-sensitive, and have different Φ-values (Fig. 1). It is therefore unlikely that an interaction between these two residues is an important link in the propagation of the AChR gating conformational wave. Mukhtasimova and Sine (2007) found large coupling coefficients between the intersubunit pairs αY127T/εN39A (1.7 kcal/mol) and αY127T/δN41A (3.8 kcal/mol). Our estimate of coupling for the αY127H/δI43H pair was somewhat smaller (0.84 kcal/mol) but still larger than for the αY127/αD97 pair. Our results support the idea that αY127 is a site where the gating conformational cascade in the α-subunit is linked to that in the δ or ε subunits. The Φ-value of δI43 (0.86 ± 0.10) cannot be distinguished from those of either αD97 (0.93 ± 0.01) or αY127 (0.77 ± 0.02). Thus, we are unable to use Φ-value analysis to determine whether the δ-subunit motions are synchronous with those of α or, if not, which subunit precedes the other.
The Framework for AChR Gating
The results presented here and in the two companion papers support the idea that the appropriate framework for understanding the mechanism of diliganded AChR gating is a "brownian conformational wave." All 29 of the newly probed positions have Φ-values that are similar to those previously reported for other amino acids in the extracellular region of the AChR α-subunit, with magnitudes as expected based on location. There is little doubt that in the AChR the map of Φ is highly organized and that residues are clustered into Φ-blocks. Whatever mechanisms are proposed for AChR gating, and whatever physical interpretation is applied to Φ (relative timing, fractional side chain structure, multiple pathways), these must account for this highly ordered map of Φ-values, which has been derived from an extensive array of experiments.
The results do not support the notion that there is a single, rate-limiting structural transition at the intersection of the C and O conformational ensembles. If there is a rotation of the α-subunit β-core, it is unlikely to be a rigid-body rotation, because αK145 on the outer sheet and αY127 on the inner sheet belong to two different Φ-blocks. Although R209 and E45 both move and make a substantial energy contribution to the TR, these energy changes apparently do not arise from the perturbation of a salt bridge between this pair. The movement of the M2-M3 linker is an important TR event, but a full cis-trans isomerization of the P272 or G275 backbone is not necessary for efficient gating. Rotations, electrostatic forces, changes in backbone bond angles, and hydrophobic interactions may occur in various regions of the protein, but each of these structural transitions contributes only a fraction of the total energy of the TR barrier.
Rather, the experimental results suggest that this TR barrier is a broad, corrugated, flat plateau (Auerbach, 2005). The map and range of Φ-values, the spatially distributed effects of mutations on Keq, and the rather weak coupling energies that we have observed between specific pairs of moving residues all suggest that the barrier for diliganded gating arises from the motions of many different metastable intermediate structures that are separated, sequentially, by small energy barriers. This energy distribution is certainly not isotropic, because some moving residues make larger energy contributions than others.
Several important regions of the AChR have not yet been mapped for Φ, including most of M1, the upper half of M2, and some regions of the ECD in the α-subunit, and many regions of the non-α subunits. This map of the TR, along with high resolution structures of the diliganded C and O end state ensembles, should serve as a guide for understanding the details of the structural transitions that constitute AChR gating.
We would like to thank Mary Merritt and Mary Teeling for technical assistance.
Olaf S. Andersen served as editor.
Influence of desiccation on the survival of Bulinus globosus under laboratory conditions
Abstract Environmental changes are generally known to influence the distribution and abundance of schistosome intermediate host snails (IHs). However, the influence of hydrologic changes per se on the length of survival of schistosome IHs is not fully understood. To explore how desiccation may influence the survival of Bulinus globosus, the main IH of Schistosoma haematobium in southern Africa, we conducted a study under laboratory conditions in which snails were subjected to periods of desiccation and their survival evaluated. A desiccation period of 28 to 49 days post-draining of water was associated with an increase in mortality of 33.2 and 42.4% in large (mean shell height 7.81 ± 0.44 mm) and small (mean shell height 5.94 ± 0.68 mm) B. globosus snails, respectively. Although the duration of desiccation had no effect on the depth of burrowing, large snails burrowed deeper into the soil than small snails. The LT50 and LT90 of snails designated as large (7.81 ± 0.44 mm) were 73.35 ± 10.32 and 110.61 ± 21.03 days, respectively. On the other hand, the LT50 and LT90 for snails designated as small (5.94 ± 0.68 mm) were 59.64 ± 8.56 and 84.19 ± 12.09 days, respectively. The survival of B. globosus during desiccation depended on the size/age of the snail, with large snails aestivating and surviving for a longer period by burrowing deeper into the soil. We therefore conclude that adult B. globosus may play a significant role in habitat recolonization after a period of drought, a common phenomenon in schistosomiasis-endemic areas when populations crash.
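Median and 90% lethal times such as the LT50/LT90 values above are conventionally obtained by fitting a time-mortality regression (e.g. logistic or probit) and inverting it at 50% and 90% mortality. The sketch below inverts a logistic model with illustrative coefficients, chosen only to land near the large-snail LT50; they are not the study's fitted values:

```python
import math

def lethal_time(p, b0, b1):
    """Invert the logistic time-mortality model logit(p) = b0 + b1 * t
    for the time t at which the expected mortality equals p."""
    return (math.log(p / (1 - p)) - b0) / b1

# Hypothetical logistic coefficients for a large-snail group.
b0, b1 = -4.38, 0.06
print(round(lethal_time(0.50, b0, b1), 1))  # LT50 in days -> 73.0
print(round(lethal_time(0.90, b0, b1), 1))  # LT90 in days -> 109.6
```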
Introduction
Schistosoma intermediate host snails (IHs) are vulnerable to climatic changes (Yang et al. 2007;Morley and Lewis 2013). Given their role in schistosomiasis transmission, it is important to evaluate their responses to predicted future changes in aquatic habitats due to climate change. Furthermore, increased anthropogenic activities may negatively affect aquatic and freshwater ecosystems (Lytle and Poff 2004;Tisseuil et al. 2012) resulting in altered functional diversity, integrity of the freshwater ecosystems (Konar et al. 2013) and abundance of freshwater organisms (Kingsford 2011;Lund et al. 2016). Increasing evidence also suggest that differences in the response of IHs to climate change may lead to changes in their interspecific and timing of interactions (Paull et al. 2012), distribution and abundance (Zhou et al. 2008;Stensgaard et al. 2013). Therefore, it is important to investigate the adaptation strategies of IHs to abiotic stresses such as water reduction and desiccation in order to predict their future potential responses to climate change.
Bulinus globosus (Gastropoda: Planorbidae) is the major IH snail of Schistosoma haematobium in southern Africa (Appleton and Madsen 2012). The snail is aquatic, breeds in freshwater habitats, and can reproduce through selfing or outcrossing depending on the environmental conditions (Jarne et al. 1992). Hence, Bulinus globosus is considered to have a high biotic potential and can breed throughout the year in aquatic environments (O'keeffe 1985). Its rapid growth and re-population of habitats even after population crashes due to drought (Harrison and Shiff 1966; O'keeffe 1985; Marti 1986) make it critical to gather information on the behavior of the species during desiccation in search of strategies to enhance the effectiveness of vector control in schistosomiasis control programs.
Despite considerable efforts to control schistosomiasis, changes in climate and land use practices have led to the creation of more suitable habitats for IHs, with a subsequent increase in disease transmission (Yang et al. 2007; Zhou et al. 2008; McCreesh et al. 2015; Stanton et al. 2017; Yigezu et al. 2018). In addition, experimental studies (Paull and Johnson 2011; Kalinda et al. 2017a), field studies (Marti 1986), and predictive models (Manyangadze et al. 2016b; Kalinda et al. 2018) have concluded that a rise in temperature may lead to an increase in snail fecundity and hence abundance. This may in turn increase the risk of transmission of schistosomiasis due to the abundance of IHs. Nevertheless, efforts are being advocated to significantly reduce morbidity due to schistosomiasis through mass drug administration (MDA) programmes (Fenwick et al. 2009; Wang et al. 2009).
In the climate change-schistosomiasis theory, much emphasis has been placed on the effect of temperature on IHs (Yang et al. 2007; Kalinda et al. 2017a, 2017b) and on disease dynamics (Stensgaard et al. 2013; McCreesh et al. 2015; Ngarakana-Gwasira et al. 2016); however, studies on the effect of abiotic disturbances such as desiccation through long-term and seasonal droughts are limited. Ephemeral rivers and seasonal water ponds, which are the common habitats for IHs of Schistosoma parasites, are subject to water level alterations and drying due to normal seasonal cycles and climatic and eco-hydrologic factors (Darby et al. 2008; Whitehead et al. 2009). Furthermore, hydro-ecological models have predicted a potential increase in the frequency and intensity of droughts (Van Vliet and Zwolsman 2008). This may have long-term implications for the population structure and abundance of IHs of parasites of medical and veterinary importance (Woolhouse and Taylor 1990; Bavia et al. 1999).
In the context of predicted climate shifts, correctly identifying the mechanisms through which desiccation and resumption of favorable conditions may affect IHs will improve the accuracy of predicting the potential impact of climate change on schistosomiasis. Process-based experiments may be essential in providing significant cues on the potential impact of desiccation on freshwater snails. This will significantly contribute to critical knowledge needed for an integrated approach to eradicate schistosomiasis and other snail-borne diseases, especially in sub-Saharan Africa where snails are now found in habitats initially thought to be unfavorable (Stanton et al. 2017). In view of this, the present study examined the survival of Bulinus globosus during substratum drying under laboratory simulated desiccation conditions.
Breeding of experimental animals
Bulinus globosus snails were collected from a habitat in Ingwavuma (latitude: −26.9965, longitude: 32.27257) in uMkhanyakude district of KwaZulu-Natal province, South Africa (Manyangadze et al. 2016b; Kalinda et al. 2017a). Snails were allowed to breed to give the F1 generation snails used in the study. To create two distinct age groups, the laying of egg masses was staggered to create a 3-week age difference. This allowed us to have two groups of snails differing in age by 3 weeks. Snails were allocated to experimental jars when the young snails were 4 weeks old (average shell height of 4.03 ± 0.56 mm) and the large snails were 7 weeks old (average shell height of 6.32 ± 0.71 mm). The F1 generation was maintained in 2-L cubical plastic aquaria (18 × 18 × 10 cm) filled with filtered pond water and kept at room temperature (24.0-25.0 °C). The snails were fed ad libitum on blanched lettuce and Tetramin tropical fish food (Tetra®). The experimental room had a photoperiod of 12-hour light/12-hour dark.
Experimental design
Snails were assigned to two groups: large snails comprising 7-week-old F1 generation snails, and small snails comprising 4-week-old F1 generation snails. This design was chosen to evaluate the influence of snail size/age on survival during desiccation. Four hundred and eighty snails, split into experimental and control groups, were randomly allocated to 48 2-L cubical plastic aquaria, with each aquarium containing 10 snails. Two control groups with the same shell height range and age as the experimental groups were included. Each aquarium was given a 5-cm layer of soil (to provide sufficient depth for burrowing) collected from a natural breeding site of B. globosus (Manyangadze et al. 2016a; Kalinda et al. 2017a). Prior to use, the soil was washed in boiling water to kill any organisms within it, and a composite sample was profiled using the sedimentation method. Before snails were added, each experimental 2-L cubical plastic aquarium was filled with 400 mL of filtered pond water until the soil was saturated and the water level was 4 cm above the surface of the soil. After 48 h, B. globosus F1 generation snails from each group were randomly allocated to the aquaria. On day 8, after acclimatization, water was gradually drained from the aquaria of the experimental groups only. Three times a week (every 2 days), 30 mL of water (cumulatively 90 mL a week) was drained from each experimental aquarium using a graduated pipette (Koprivnikar et al. 2014; Poznańska et al. 2017). This process was continued for a period of 3 weeks, during which 270 mL of water was drained from each aquarium. Feeding of snails in the experimental groups was stopped at day 21 from the beginning of the experiment, while snails in the control group continued to be fed ad libitum. After the water was drained in the experimental group, the soil was left to dry, whereas water continued to be changed twice a week in the control group.
Measurement of water parameters, snail parameters and soil moisture
Water temperature, pH and conductivity in the experimental and control groups were monitored throughout the study period using a Hanna combo meter (Hanna Instruments HI 98129 pH/EC/TDS/°C pocket instrument). The measurement of water quality parameters was stopped when the water dried up in the experimental group.
The study pre-determined the desiccation endpoints to be at days 57, 64, 71, and 78 (Betterton et al. 1988; Poznańska et al. 2015). At each desiccation endpoint (Figure 1), 12 aquaria were randomly selected to determine snail survival and revival: six aquaria from the experimental and six from the control groups. For the experimental aquaria, we carefully dug through the soil layers using a spatula in order to retrieve the snails buried in the soil without damaging their shells. The height of soil in each aquarium had been marked in order to determine the depth of burrowing precisely. Furthermore, the drying soil was carefully removed along its developing cracks to avoid damaging the shells of any snails. Since most snails remained attached to the soil at the time of digging, the undisturbed soil was collected and the depth recorded according to the locations of the snails, which were often visible. The depth at which each individual snail within the aquaria was found was measured and recorded. This was done at each desiccation endpoint (Figure 1), and the shell height of the snails was also measured.
We checked snail survival in the control experiments daily while in the experimental aquaria, this was done at each desiccation endpoint (Figure 1). To determine the status of snails dug at each desiccation endpoint (whether dead or alive), each snail was put in a small cup which had been filled with filtered pond water. The snails were then observed for three hours. Snails that opened their opercula with extension of their soft bodies were recorded as 'alive' and those that did not were recorded as 'dead'.
Determination of soil moisture was done using the gravimetric method (Reynolds 1970) at each desiccation endpoint. For the control aquaria, water was decanted from the aquaria and muddy soil was collected for determination of moisture content. For the experimental aquaria, drying soil samples were also collected, and three samples of 50 g of soil from each aquarium were oven-dried at 105 °C in an incubator (EcoTherm Labotec) for 24 hours. After oven-drying, the soil was reweighed to obtain the soil water content. The experiment was run until day 79, when it was terminated. Although snails in the control aquaria were still alive, all the snails in the experimental aquaria had died, and hence it was not possible to make further comparisons with the survival of the control snails.
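The gravimetric calculation behind this method is simple: soil water content is the mass lost on oven-drying, expressed as a percentage of the oven-dry mass. A minimal sketch of that arithmetic follows; the function name and the sample masses are illustrative, not values from this study:

```python
def gravimetric_moisture(wet_g: float, dry_g: float) -> float:
    """Gravimetric soil water content as a percentage of oven-dry mass:
    100 * (wet - dry) / dry."""
    if dry_g <= 0:
        raise ValueError("dry mass must be positive")
    return 100.0 * (wet_g - dry_g) / dry_g

# Hypothetical 50 g field-moist sample weighing 41.5 g after drying at 105 °C
print(round(gravimetric_moisture(50.0, 41.5), 1))  # -> 20.5
```

In the study's setup this would be computed for each of the three 50-g subsamples per aquarium and averaged.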
Data analysis
We summarized the water quality parameters as means and standard deviations. We also compared the shell height of snails among groups at each desiccation endpoint using one-way analysis of variance (ANOVA), after carrying out tests of normality and homogeneity of variance to check the ANOVA assumptions. A generalized linear model with a Poisson link function was used to determine the influence of desiccation time on the number of snails that died, after determining the number of snails that had resuscitated at each desiccation endpoint. The numbers of snails that died during the course of the experiment and that were resuscitated were expressed as proportions.
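The Poisson regression step can be sketched in a few lines. The fit below is a minimal Newton-Raphson stand-in for R's `glm(y ~ x, family = poisson)` using only the Python standard library; the death counts are hypothetical, not the study's data:

```python
import math
from statistics import mean

def poisson_glm(x, y, iters=30):
    """Single-predictor Poisson regression with log link, fitted by
    Newton-Raphson; x is centered internally for numerical stability."""
    xbar = mean(x)
    xc = [xi - xbar for xi in x]
    a, b = math.log(mean(y)), 0.0          # start at the null model
    for _ in range(iters):
        mu = [math.exp(a + b * xi) for xi in xc]
        ga = sum(yi - mi for yi, mi in zip(y, mu))                  # dL/da
        gb = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, xc))   # dL/db
        haa = sum(mu)                       # observed information matrix
        hab = sum(mi * xi for mi, xi in zip(mu, xc))
        hbb = sum(mi * xi * xi for mi, xi in zip(mu, xc))
        det = haa * hbb - hab * hab
        a += (hbb * ga - hab * gb) / det    # Newton step
        b += (haa * gb - hab * ga) / det
    return a - b * xbar, b                  # un-center the intercept

# Hypothetical cumulative death counts at the four desiccation endpoints
days, deaths = [57, 64, 71, 78], [12, 21, 33, 45]
a, b = poisson_glm(days, deaths)
```

A positive slope `b` corresponds to mortality rising with desiccation time, which is the direction of the effect reported in the Results.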
A probit regression model was also used to determine the lethal times (LT) for 50% (LT50) and 90% (LT90) snail mortality for both large and small snails, using the R package MASS (Ripley 2015). Data analysis was done in R version 3.4.2 (R Core Team 2013).
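The idea behind the LT50/LT90 estimates can be illustrated with a simple probit fit: transform the observed mortality proportions with the inverse normal CDF, regress them on time, and invert the fitted line at the 50% and 90% quantiles. The sketch below uses only the Python standard library and hypothetical mortality counts; it is not the maximum-likelihood fit that R's MASS performs:

```python
from statistics import NormalDist, mean

def lethal_times(days, dead, total):
    """Estimate LT50 and LT90 from grouped mortality data via a simple
    probit fit (least-squares line on probit-transformed proportions)."""
    nd = NormalDist()
    # observed mortality proportions, clipped away from 0/1 so the
    # probit transform stays finite
    p = [min(max(d / n, 0.01), 0.99) for d, n in zip(dead, total)]
    y = [nd.inv_cdf(pi) for pi in p]
    # ordinary least-squares line y = a + b * day
    xbar, ybar = mean(days), mean(y)
    b = sum((x - xbar) * (yi - ybar) for x, yi in zip(days, y)) \
        / sum((x - xbar) ** 2 for x in days)
    a = ybar - b * xbar
    lt50 = (nd.inv_cdf(0.5) - a) / b   # inv_cdf(0.5) == 0
    lt90 = (nd.inv_cdf(0.9) - a) / b
    return lt50, lt90

# Hypothetical grouped mortality counts at the four desiccation endpoints
days, dead, total = [57, 64, 71, 78], [12, 21, 33, 45], [60, 60, 60, 60]
lt50, lt90 = lethal_times(days, dead, total)
print(round(lt50, 1), round(lt90, 1))  # -> 69.0 86.7
```

LT90 necessarily exceeds LT50, and steeper mortality curves pull the two estimates closer together.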
Water quality parameters
The soil that was used comprised 42% sand, 44% silt, and 14% clay. Water temperature (±SD) in the four treatments ranged from 23.02 ± 0.12 °C to 23.12 ± 0.03 °C, while pH was generally neutral (7.58 ± 0.05 to 7.59 ± 0.01) in all experimental groups including the control. The mean total dissolved solids ranged from 604.7 ± 70.64 to 851.9 ± 20.81 in all experimental groups including the control.
Snail growth
Length of desiccation had no effect on the shell height of either control or experimental snails (F3,32 = 2.84, p = 0.0536). At desiccation endpoints day 57 and 78, the mean (±SD) shell height of the small size control group was 6.50 ± 0.65 and 7.71 ± 0.72 mm, respectively. On the other hand, the shell height of snails in the small size experimental group at desiccation endpoints day 57 and 78 was 5.27 ± 0.52 and 5.98 ± 0.56 mm, respectively (Figure 2A). Significant differences in the height of snails were observed at desiccation endpoints day 64 (t = 3.06, p = 0.028) and 78 (t = 2.72, p = 0.042). The interaction between initial snail size and desiccation time led to no significant difference in the growth of snails in the small size control and experimental groups (F3,16 = 0.48, p = 0.699).
The shell height of snails in the large size control group at desiccation endpoints day 57 and 78 was 8.90 ± 0.41 and 10.3 ± 0.69 mm, respectively. The shell height of snails in the experimental group at desiccation endpoints day 57 and 78 was 7.8 ± 0.35 and 7.9 ± 0.44 mm, respectively (Figure 2B). The interaction between initial snail size and desiccation time had no effect on the growth of snails in the large size control and experimental groups (F3,16 = 3.03, p = 0.0653). Nevertheless, significant differences in the shell height of snails were observed at desiccation endpoints day 64 (t = 3.05, p = 0.0379), 71 (t = 2.89, p = 0.044) and 78 (t = 3.40, p = 0.0272).
Snail size and depth of burrowing into the soil
We found a significant size effect on the depth of burrowing (F1,16 = 26.08, p = 0.001), such that large size snails burrowed deeper than small size snails (Figure 3). It was further observed that length of desiccation (F3,16 = 1.91, p = 0.168) and the interaction between snail size and length of desiccation (F3,16 = 1.01, p = 0.417) had no effect on depth of burrowing. On average, large size snails sampled at desiccation endpoints day 57 and 78 burrowed to depths of 1.69 ± 0.27 and 2.09 ± 0.55 cm below the soil surface, respectively. On the other hand, small size snails sampled at desiccation endpoints day 57 and 78 burrowed to 0.84 ± 0.23 and 1.37 ± 0.33 cm below the soil surface.
There was a reduction in soil moisture content as the desiccation period increased (Figure 4A). The number of snails that resuscitated decreased with increasing desiccation period (coefficient = −0.0338, z = −2.71, p = 0.007; Figure 4B). Furthermore, snail mortality increased linearly with the length of desiccation (coefficient = 0.0314, z = 3.52, p < 0.001; Figure 4C).
Results show that large size snails were more resilient to desiccation than small size snails (Figure 5). The lethal time at which 50% (LT50) of the experimental large size snails died was 73.35 ± 10.32 days, while it was 59.64 ± 8.56 days for the small snails. Furthermore, the LT90 for large size snails was 110.61 ± 21.03 days, while it was 84.19 ± 12.09 days for the small size snails.
Discussion
This study was designed to mimic conditions to which snails are exposed in their natural habitats. It was observed that B. globosus was tolerant of drying conditions and capable of surviving and resuscitating after being exposed to desiccating conditions for 50 days. The observed resilience of these snails to drying, and their eventual resuscitation at the resumption of favorable conditions, may potentially lead to the repopulation of habitats. These findings are consonant with the conclusions of Harrison and Shiff (1966), who suggested that B. globosus can rapidly colonize habitats and rebuild its population density following the resumption of favorable conditions. The study further showed that soil moisture is an important determinant of snail survival during desiccation. According to Stubbington and Datry (2013), IHs can remain alive in the soil as long as it remains moist. The results from the current study, in a broader sense, agree with earlier studies by Shiff (1960) and Woolhouse and Taylor (1990). Snail size was associated with high tolerance to desiccation and reduced mortality. Although earlier studies exposed snails to damp mud/soil (Shiff 1960; Chu et al. 1967; Woolhouse and Taylor 1990) to evaluate their resistance to desiccation, the inclusion of water and its gradual reduction in this study allowed us to explicitly observe the response of B. globosus snails to progressive desiccation.
Both large and small size snails burrowed into the soil; nevertheless, large size B. globosus snails burrowed deeper in search of moisture, and this increased their survival. According to Poznańska et al. (2015), IHs burrow into the soil as they follow decreasing water levels, and this increases their chances of survival. Furthermore, the current study indicates that soil moisture content is a vital factor determining the length of snail survival and the number of snails that resuscitate. Our findings are corroborated by Barlow (1935) and Betterton et al. (1988), who suggested that desiccation rate affects both the survival and resuscitation of snails negatively. In the current study, the differences in the number of snails that resuscitated between the two groups may be particularly important for the repopulation of snail habitats. Large size snails with a shell height of 8 mm burrowed deeper into the soil. This corroborates the finding of Betterton et al. (1988), who found that B. globosus with a shell height of about 9 mm burrowed deeper into the soil. Furthermore, Hira (1968) also observed that the survival of B. globosus was higher for snails with shell heights of 8.1-12.9 mm. On the other hand, Chu et al. (1967) indicated that Bulinus truncatus snails with shell heights of 4-8 mm survived far longer than those with shell heights of 8-12 mm. Collectively, our results suggest that tolerance to desiccation may be strongly associated with the size or age of the snail at the time of desiccation. Furthermore, our results suggest that adult B. globosus may play a significant role in habitat recolonization after a period of drought, a common phenomenon in schistosomiasis endemic areas, when populations crash.
Although studies focusing on the desiccation capacity and burrowing of IHs have largely been neglected, their importance may increase as the need to incorporate the ecology of snails in schistosomiasis control programs rises. Our study found that the survival of snails decreased linearly with an increase in desiccation time. The proportion of snails that resuscitate at the end of the desiccation period may be directly influenced by the duration of the desiccation period. According to Paraense (1975), a desiccation period longer than five months led to the complete eradication of Biomphalaria glabrata.
On the other hand, Woolhouse and Taylor (1990) observed that B. globosus was resilient to desiccation. This may have been enhanced by its ability to burrow into the soil (Rubaba et al. 2016). A study by Sturrock (1970) further suggested that desiccation can reduce the population density of IHs and alter the age structure of resuscitating snails. Furthermore, Whitehead et al. (2009) suggested that desiccation at schistosomiasis transmission sites disrupts the population build-up of snails and increases snail mortality, thus reducing the risk of schistosomiasis transmission. This process had earlier been exploited in Zimbabwe by Chandiwana et al. (1988), who demonstrated that effective management of water in irrigation schemes could reduce snail populations. Results from the current study and studies done by others demonstrate the importance of manipulating snail habitats in combination with MDA for greater success in reducing the prevalence of schistosomiasis.
Our study provides insights into the relationship between snail survival, population extinctions, and recolonization of habitats. Field and experimental work has shown that the availability and persistence of water at transmission sites may increase snail populations and potentially increase transmission of schistosomiasis. For example, water projects such as irrigation schemes have been observed to enhance the creation of suitable sites for IHs (Yang et al. 2005; Steinmann et al. 2006), increasing the potential risk of transmission of schistosomiasis. Our data showed that a reduction in soil moisture led to an increase in snail mortality. Predictive models have continued to project altered rainfall and water availability in certain parts of sub-Saharan Africa where B. globosus is the main IH of S. haematobium. Results from our study indicate that such changes will be vital in the recolonization of snail breeding sites. Recolonization of habitats by IHs after the resumption of favorable conditions such as water availability is consistent with results from other field experiments (Betterton et al. 1988; Manyangadze et al. 2016b). Bulinus globosus is a highly fecund snail (Marti 1986; Kalinda et al. 2017a) and rapidly recovers to repopulate habitats (Harrison and Shiff 1966).
The current study focused on the potential influence of habitat change through water reduction on B. globosus population dynamics in terms of mortality and recovery. Burrowing and snail size played an important role in the survival of B. globosus snails. Increased snail survival, especially at transmission sites, increases the probability of recolonization and the risk of schistosomiasis transmission. Taking B. globosus as a model snail in the fight against schistosomiasis, increasing the duration of desiccation, and potentially applying molluscicides at the onset of the rains, may further reduce snail population size and habitat recolonization.
Ethics approval
All experimental protocols and procedures of this study were reviewed and approved by the Animals Ethics Committee of the University of KwaZulu-Natal (UKZN) (Ref: REC/ 052/017PD) in accordance with the South African national guidelines on animal care, handling and use for biomedical research.
Availability of data and material
The data supporting the writing of this manuscript can be accessed from the project Centre of the Tackling Infection to Benefit Africa (TIBA), University of KwaZulu-Natal. Data can be requested by following the guidelines laid out in the Data Access Policy of the University of KwaZulu-Natal.
Simultaneous Visualization of 5S and 45S rDNAs in Persimmon (Diospyros kaki) and Several Wild Relatives (Diospyros spp.) by Fluorescent in situ Hybridization (FISH) and MultiColor FISH (MCFISH)
5S ribosomal DNA (rDNA) was visualized on the somatic metaphase chromosomes of persimmon (Diospyros kaki) and ten wild Diospyros species by fluorescent in situ hybridization (FISH). The digoxigenin (DIG)-labeled 5S rDNA probe was hybridized onto the chromosomes and visualized by incubation with anti-DIG-fluorescein isothiocyanate (FITC). Strong signals of the 5S rDNA probe were observed on several chromosomes of the Diospyros species tested. Furthermore, multicolor FISH using 5S and 45S rDNA probes, differently labeled with DIG and biotin, revealed separate localization of the two rDNA genes on different chromosomes of the Diospyros species tested, suggesting that 5S and 45S rDNA sites can be used as chromosome markers in Diospyros. The number of 5S rDNA sites varied with the Diospyros species. More 5S rDNA sites were observed in four diploid species native to southern Africa than in three Asian diploid species: the former had four or six 5S rDNA sites while the latter had two. Three Asian polyploid species had four to eight 5S rDNA sites. Among the Asian species, the number of 5S rDNA sites seemed to increase with the ploidy level of the species. These features of 5S rDNA sites were very similar to those of 45S rDNA sites in Diospyros. Phylogenetic relationships between D. kaki and the wild species tested are discussed based on the number and chromosomal distribution of 5S and 45S rDNA.
Persimmon (Diospyros kaki Thunb.) is one of the most important fruit trees cultivated for centuries in Japan (Sugiura and Subhadrabandhu, 1996). This species is also distributed in temperate regions of east Asia and at least 1000 cultivars have been developed. The genus Diospyros, to which persimmon belongs, consists of 400-500 species (Yonemori et al., 2000). Wild species of Diospyros are mostly diploid (2n = 2x = 30) with a small number of tetraploid species (2n = 4x = 60), while cultivated species, such as D. kaki, are hexaploid (2n = 6x = 90). Some of the seedless cultivars of D. kaki have been reported as nonaploid (2n = 9x = 135) (Tamura et al., 1998;Zhuang et al., 1990). Thus, it is speculated that a single or several diploid and/or tetraploid wild species were involved in the speciation of the cultivated polyploid Diospyros species.
So far, the relationship between wild and cultivated Diospyros species has been discussed based on the limited information obtained from isozyme, mitochondrial DNA and chloroplast DNA analyses (Nakamura and Kobayashi, 1994; Tao and Sugiura, 1987; Yonemori et al., 1998). Little information is available about the speciation of cultivated hexaploid species and the phylogenetic relationships among Diospyros species. Although chromosome numbers and nuclear DNA contents for Diospyros species have been reported (Tamura et al., 1998; Zhuang et al., 1990), only a few cytological studies have been conducted for Diospyros. The small chromosomes of Diospyros (2-3 µm long on average at metaphase) make observation with a light microscope difficult (Choi et al., 2002; Tamura et al., 1998).
Fluorescent in situ hybridization (FISH), using labeled specific DNA fragments as probes, was developed in the late 1980s as a highly effective plant cytogenetic technique (Jiang and Gill, 1994). Direct visualization of defined DNA sequences, such as ribosomal RNA genes (rDNA), on the chromosomes has been applied to study phylogenetic relationships in horticultural plants such as Allium and Brassica (Hasterok et al., 2001; Richroch et al., 1992). Furthermore, multicolor FISH (MCFISH) using 5S and 45S rDNA-specific probes simultaneously has provided valuable information on the evolution of rDNA sites and the relationships between wild and cultivated polyploid species (Mishima et al., 2002; Raina and Mukai, 1999; Schrader et al., 2000; Taketa et al., 1999). Recently, FISH using a 45S rDNA probe was found useful to elucidate the chromosomal location and the variation in the number of sites of 45S rDNA in 10 Diospyros species (Choi et al., 2003).
In the present study, we performed FISH for physical mapping of 5S rDNA and MCFISH for simultaneous visualization of 5S and 45S rDNA genes on the somatic metaphase chromosomes of D. kaki and wild Diospyros species. The phylogeny of Diospyros species tested is discussed with special reference to variation in the numbers and chromosomal location of 5S rDNA sites compared with those of 45S rDNA.
PLANT MATERIALS AND CHROMOSOME PREPARATION.
Diospyros species used in this study are shown in Table 1 with their ploidy levels and regional distributions. Young roots were collected from rooted shoots in vitro or seedlings as described before (Choi et al., 2002). Root tips (1-2 cm long) were pretreated with 2 mM 8-hydroxyquinoline solution for 5 h at 4 °C and fixed in a methanol-acetic acid (3:1) solution. Chromosome samples were prepared by an enzymatic maceration and air drying method (Fukui, 1996). The enzyme solution was composed of 4% (w/v) cellulase RS (Yakult Honsha, Tokyo), 1% (w/v) pectolyase Y-23 (Kikkoman Co., Tokyo), 0.07 M KCl and 7.5 mM Na2EDTA (pH 4.0). 5S rDNA PROBE PREPARATION. For 5S rDNA detection, the 5S rDNA coding region was PCR-amplified from genomic DNA of Diospyros kaki using oligonucleotide primers designed by Fukui et al. (1994a): a forward primer 5S-F (5'-GGATGCGATCATACCAGCAC-3') and a reverse primer 5S-R (5'-GGGAATGCAACACGAGGACT-3'). A standard PCR method was carried out using a thermal cycler (Perkin Elmer Cetus, Norwalk, Conn.). Total DNA of D. kaki, isolated by the cetyltrimethylammonium bromide (CTAB) method (Doyle and Doyle, 1987), was used as the template DNA. For PCR, a 50-µL reaction mixture containing 50 ng of template DNA, 0.2 mM of dNTP mixture, 0.4 µM of each primer and 1 unit of Taq DNA polymerase (Takara Shuzo Co., Ltd., Tokyo, Japan) was used. Samples were amplified by 35 thermal cycles of 94 °C for 30 s, 58 °C for 1.5 min, and 72 °C for 2 min. The PCR product was cloned into a TA cloning vector (Promega, Tokyo, Japan). Nucleotide sequences of the inserts of several clones were determined using an automatic DNA sequencer (ABI PRISM, Applied Biosystems, Calif.). The inserts of the cloned DNA were amplified and labeled with digoxigenin (DIG)-11-dUTP (Roche, Mannheim, Germany) simultaneously by PCR using the 5S-F and 5S-R primers as described by Fukui et al. (1994a).
The FISH procedure using the 5S rDNA-specific probe was the same as that described by Fukui et al. (1994a). The MCFISH procedure using DIG-labeled 5S and biotinylated 45S rDNA probes was performed as described by Taketa et al. (1999) with a slight modification of probe concentration and composition as follows. The probe mixture consisted of 20 ng of both labeled probes per slide, dissolved in 15 µL of 50% formamide in 2× SSC. The biotinylated probe was detected with avidin-rhodamine (1%, v/v, Roche, Mannheim, Germany), and biotinylated anti-avidin (1%, w/v, Vector Laboratories, Burlingame, Calif.) was applied for secondary amplification of the fluorescent signals. The DIG-labeled probe was detected with anti-DIG-fluorescein isothiocyanate (FITC) (10%, w/v, Roche, Mannheim, Germany) at the step of secondary amplification of the biotinylated probe, for simultaneous detection of both probes. The chromosomes on the slides were counterstained with 0.5 µg·mL-1 4,6-diamidino-2-phenylindole (DAPI) and mounted with an antifadant solution as described by Choi et al. (2002). The chromosome samples were observed with a fluorescent microscope (Axiophot, Zeiss, Oberkochen, Germany) with a high-sensitivity cooled CCD camera (PXL 1400, Photometrics, Ariz.). The B- and G-light excitation filters were used for the detection of FITC and rhodamine, respectively. The signal images were analyzed by imaging software (IPLab Spectrum 3.1, Signal Analytics, Calif.). More than 30 to 50 cells from at least ten metaphase slides of each species were observed for FISH and MCFISH. Figure 1 shows the nucleotide sequence corresponding to the coding region of 5S rDNA cloned from D. kaki. The fragment of 5S rDNA from D. kaki showed higher than 97% DNA sequence identity to that from tomato, tobacco and Arabidopsis thaliana.
5S rDNA SITES IN ELEVEN DIOSPYROS SPECIES.
FISH using DIG-labeled 5S rDNA as a probe was performed on somatic chromosomes in 11 Diospyros species with different ploidy levels. The 5S rDNA-specific probe hybridized to various numbers of sites, ranging from two to six in the diploid species. Diospyros glabura, native to southern Africa, showed four 5S rDNA-specific signals on the chromosomes (Fig. 2A), and the other southern African species, D. austroafricana, D. lycioides and D. simii, appeared to have six 5S rDNA sites (Fig. 2B-D). Four Asian diploid species, D. ehretioides, D. morrisiana, D. oldhami and D. lotus, carried two 5S rDNA sites on the short arms of chromosomes (Fig. 2E-H). A variation in signal intensity was frequently observed in D. lotus (Fig. 2H and Fig. 3F, arrow). Three polyploid species carried four to eight rDNA sites on their chromosomes. Tetraploid D. rhombifolia had four chromosomes carrying 5S rDNA sites on the short arms (Fig. 2I). Hexaploid D. kaki bore eight 5S rDNA sites on its chromosomes (Fig. 2J). Three of the four pairs of 5S rDNA sites of D. kaki were visualized at the centromeric regions, while the other pair was detected at proximal regions on the long arms of chromosomes (Fig. 2J, arrows). Another hexaploid, D. virginiana, bore six 5S rDNA sites, which were visualized on the short arms of chromosomes (Fig. 2K). All green signals from fluorescein isothiocyanate (FITC) were detected at centromeric or proximal regions on separate chromosomes of the Diospyros species tested in this study. SIMULTANEOUS VISUALIZATION OF 5S AND 45S rDNA GENES ON THE CHROMOSOMES OF DIOSPYROS SPECIES BY MCFISH. MCFISH, performed using DIG-labeled 5S rDNA and biotin-labeled 45S rDNA probes, visualized 5S rDNA (green signals detected with anti-DIG-FITC) and 45S rDNA (red signals detected with avidin-rhodamine) sites simultaneously on the chromosomes of the nine Diospyros species (Fig. 3A-I).
The 5S and 45S rDNA sites were separately localized on different chromosomes in all species used in this study.
Discussion
Probes specific to 45S rDNA and its subunits (5.8S, 18S or 25S) have been reported as effective chromosome markers in fruit trees (Roose et al., 1998; Schuster et al., 1997; Yamamoto et al., 1999), including Diospyros species (Choi et al., 2003). Since the 5S rDNA gene can be another useful chromosome marker (Cuadrado et al., 1995; Hasterok et al., 2001; Kamisugi et al., 1994; Mukai et al., 1990; Roose et al., 1998; Schuster et al., 1997), we determined in this study whether a 5S rDNA-specific probe could serve as a chromosome marker for Diospyros species. The signal intensity was strong enough to be detected and the probe hybridized to several chromosomes, suggesting that the 5S rDNA-specific probe is an effective chromosome marker in Diospyros. Hence, the 5S rDNA gene, an especially effective marker for diploid species, was clearly visualized by FISH even in the hexaploid species, such as D. kaki and D. virginiana. Variations in the number of 5S rDNA sites of the Diospyros species tested appeared to be quite similar to those of 45S rDNA sites (Table 2; Choi et al., 2003). The number of 5S rDNA sites in diploid species varied from two to six, whereas that of 45S rDNA sites varied from two to eight. The four species native to southern Africa had more 5S and 45S rDNA sites than the Asian diploids. Geographical variability in the number of rDNA sites has been observed in other plant taxa such as Oryza and Zamiaceae (Fukui et al., 1994b; Tagashira and Kondo, 2001). A recent study on the sequence analysis of the ITS and matK regions of Diospyros species revealed that the African species were phylogenetically the most distant from D. kaki among the species tested (unpublished laboratory data). The difference in the number of 5S rDNA sites between African and Asian species, as also observed for 45S rDNA sites, suggests that the southern African species have evolved as an independent phylogenetic group in Diospyros.
This could suggest that southern African species were not involved directly in the speciation of cultivated hexaploid species. Furthermore, within the Asian species, the number of 5S rDNA sites seemed to increase with the ploidy level of the species, as did the number of 45S rDNA sites (Table 2). This result differs from many other plant species that showed no correlation between the numbers of 5S and 45S rDNA sites (Castilho and Heslop-Harrison, 1995; Mishima et al., 2002; Schrader et al., 2000; Taketa et al., 1999).
Since the chromosomal distribution of rDNA genes could be an additional important indicator in phylogenetic studies among related plant species (Castilho and Heslop-Harrison, 1995; Maluszynska and Heslop-Harrison, 1993; Zhang and Sang, 1999), we mapped the 5S and 45S rDNA sites of nine Diospyros species simultaneously by MCFISH (Fig. 3). In all nine species tested, 45S rDNA was located at the nucleolus organizer region (NOR) or other chromosomal regions, while 5S rDNA was detected at the proximal or centromeric parts of chromosomes other than those bearing the NOR. The different multilocus chromosomal distributions of the two rDNA genes showed that they are independent of each other, as indicated in many other plants (Cuadrado et al., 1995; Kamisugi et al., 1994; Leitch and Heslop-Harrison, 1992; Rogers and Bendich, 1987). The localization of 5S and 45S rDNA genes on different chromosomes of the Diospyros species tested suggests that reorganization has rarely occurred between the two rDNA sites, as indicated in Brassica (Hasterok and Maluszynska, 2000). As shown in Fig. 2H (arrow), a variation in signal intensity between 5S rDNA sites was frequently observed in D. lotus. Although in situ hybridization is not a quantitative method, differences in signal strength are correlated with variation in copy number of the target site in the chromosomes (Leitch and Heslop-Harrison, 1992). This length variation of 5S rDNA sites observed in D. lotus presumably originated from major deletion-insertion events in the region of the 5S rDNA repeat units during the evolution of the species, as reported in rye and tobacco (Cuadrado et al., 1995; Kitamura et al., 2000).
In conclusion, FISH and MCFISH using the 5S and 45S rDNA probes successfully visualized their sites on the chromosomes of Diospyros. The pattern of localization of the 5S and 45S rDNA genes on the chromosomes can serve as an effective set of chromosome markers for Diospyros species. Furthermore, the similarity in the pattern of variation in the number of 5S and 45S rDNA sites may have a significant implication for elucidating the phylogenetic relationships and the speciation of polyploid species of Diospyros.
"year": 2003,
"sha1": "9789a2fa9ce479b3d30b41b6469f4f360284df84",
"oa_license": null,
"oa_url": "https://journals.ashs.org/downloadpdf/journals/jashs/128/5/article-p736.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5c40cd3c0f3ef4389d34f3cc26e5143ea8fc9630",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
15741657 | pes2o/s2orc | v3-fos-license | Mealybug Chromosome Cycle as a Paradigm of Epigenetics
Recently, epigenetics has had an ever-growing impact on research, not only for its intrinsic interest but also because it has been implicated in biological phenomena such as tumor emergence and progression. The first epigenetic phenomenon to be described, in the early 1960s, was chromosome imprinting in some insect species (Sciaridae and Coccoidea). Here, we discuss recent experimental results that dissect the phenomenon of imprinted facultative heterochromatinization in Lecanoid coccids (mealybugs). In these insect species, the entire paternally derived haploid chromosome set becomes heterochromatic during embryogenesis in males. We describe the role of known epigenetic marks, such as DNA methylation and histone modifications, in this phenomenon. We then discuss the models proposed to explain the noncanonical chromosome cycle of these species.
Epigenetics
The first appearance of the term epigenetics can be ascribed to Conrad Waddington, who stated in 1942 that "epigenetics is the branch of biology which studies the causal interactions between genes and their products, which bring the phenotype into being" [1]. In the modern view, epigenetics encompasses all those hereditary (genetic) phenomena that depend not on the DNA sequence itself but on functionally relevant molecular signatures which are imposed over the sequence ("epi" in Greek means "over"). All the systems involved in the regulation of gene expression are based on interactions between proteins and DNA. Some mechanisms inhibit or activate the expression of a single gene by acting on the promoter region, and thus reflect the structural organization of the gene itself (gene regulation). Epigenetic systems, however, can regulate phenotypic expression regardless of the gene sequence and are transmitted from one cell generation to the next or from the parents to their progeny. These systems modulate the functional behavior of chromosomal regions, entire chromosomes, or even whole sets of chromosomes [2]. According to Denise Barlow, "epigenetics has always been all the weird and wonderful things that cannot be explained by genetics." Epigenetic phenomena occur in all the kingdoms, from yeast to metazoans and plants. Some are limited to just one or a few species. For example, RIP (rearrangement induced premeiotically) [3] and MIP (methylation induced premeiotically) [4] were reported in fungi, where they seem to protect the genome from transposable elements. The term paramutation, on the other hand, was coined to describe a heritable change in the gene expression of an allele imposed by the presence of another specific allele, which occurs only in plants [5]. Paramutation seems to require physical interaction between the two homologous alleles [6], as does the quite similar transvection phenomenon described in Drosophila by Lewis in 1954 [7].
Other phenomena are universal, at least in eukaryotes. These include, for example, the double-stranded RNA-mediated posttranscriptional gene silencing (PTGS).
Classical genetics has always considered the two parental copies of a gene functionally equivalent in determining the offspring phenotype, regardless of their origin. Genomic imprinting identifies, instead, the epigenetic process by which specific genes, single chromosomes, or entire haploid chromosome sets exhibit a differential functional behavior that depends upon their parental origin [8][9][10]. The first evidence for the existence of genomic imprinting (and indeed the first use of this term in a genetic sense) came from the early works by Helen Crouse, in the 1960s, on the fungus gnat Sciara coprophila [11,12], and from the subsequent studies on Coccidae [13,14], showing that reciprocal crosses are not always equivalent (reviewed in [15,16]). Nonetheless, those findings were seen as curiosities in their time, until imprinting evidence was uncovered in the mouse in the mid-1980s [17,18].
A strong impetus to genomic imprinting studies came from the demonstration that failure of imprinting is responsible for severe syndromes in humans (reviewed in [19,20]). For example, some human syndromes are caused by the transmission of both homologs from a single parent (uniparental disomy) [21].
The elaboration of the parent-of-origin-specific epigenetic information proceeds through three steps, namely, establishment, maintenance, and erasure (Figure 1) (reviewed in [22,23]). During gametogenesis a genome-wide erasure of the parent-specific epigenetic "marks" occurs, followed by the establishment of the signatures specific to each sex. After fertilization, the differential epigenetic marks carried by the two parental pronuclei are maintained and faithfully transmitted through the subsequent mitotic divisions during development. The imprinting marks specific to each parental allele are then "read" by the cellular machinery and translated into a differential, parent-of-origin-specific functional behavior. Genomic imprinting represents a paradigmatic example of epigenetic regulation, found not only in insects and mammals but also in yeast and plants [24,25].
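The three-step cycle of establishment, maintenance, and erasure can be caricatured as a minimal sketch (our illustration only; the dictionary keys and mark names are placeholders, not real molecular signatures):

```python
# Toy model of the imprinting cycle: erasure and establishment in the
# gametes, then faithful maintenance through somatic divisions.

def erase(marks):
    """Gametogenesis step 1: genome-wide erasure of inherited marks."""
    return {}

def establish(marks, parent_sex):
    """Gametogenesis step 2: lay down sex-specific marks on the gamete."""
    return {"origin_mark": parent_sex}

def fertilize(maternal_marks, paternal_marks):
    """The zygote carries both pronuclei, each with its own marks."""
    return {"maternal": maternal_marks, "paternal": paternal_marks}

def mitosis(zygote_marks):
    """Maintenance: marks are copied unchanged through somatic divisions."""
    return dict(zygote_marks)

# A grandpaternal mark is erased before the egg receives its own signature.
egg = establish(erase({"origin_mark": "male"}), "female")
sperm = establish(erase({"origin_mark": "female"}), "male")
embryo = fertilize(egg, sperm)
assert mitosis(embryo) == {"maternal": {"origin_mark": "female"},
                           "paternal": {"origin_mark": "male"}}
```

The point of the sketch is only the order of operations: erasure precedes establishment in each germline, and fertilization brings together two differently marked genomes that mitosis then propagates unchanged.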
Hereafter we will describe the unusual chromosome system of the Lecanoid coccids (mealybugs), and the molecular machinery which is used by males of these insects to perform one of the most striking epigenetic phenomena: the imprinted facultative heterochromatinization of the entire paternal haploid chromosome set.
The Mealybug Chromosome System
Coccid insects are very small; most species are less than one centimeter in length. This group of Hemiptera exhibits marked sexual dimorphism. The body of females is globose and flattened, with the head fused to the thorax, and other segmental boundaries are often not clearly visible. Females are always wingless and frequently neotenic; they are covered with protective secretions such as wax, lacquer, or silk. Males are much smaller than females and have an elongated body with wings.
Coccid species are identified based on male and female morphology, as well as on the karyotype. At the beginning of the last century, two large groups were identified on the basis of morphological criteria: the Margaroididae and the Lecano-Diaspidoidae [26]. Karyocytological analysis confirmed the validity of this subdivision [27]. The Margaroididae retain the XX-XO mechanism of sex determination. The Lecano-Diaspidoidae have no differentiated sex chromosomes, but these species possess a very complex and intriguing chromosome system. In the male line of Diaspidoids, the whole paternal chromosome complement is discarded from midcleavage embryo cells, while in Lecanoid (mealybug) males, the whole paternally derived chromosome set undergoes heterochromatinization and the males become functionally haploid, a condition known as parahaploidy (Figure 2). After fertilization, all the embryo chromosomes are euchromatic. In female embryos all the chromosomes retain the euchromatic state, whereas in embryos destined to develop into males, the whole haploid set of paternal chromosomes becomes heterochromatic after the 7th cleavage division (Figure 2) [28]. This implies that, at least in males, the parental origin of the two chromosome sets must remain distinguishable until the blastoderm stage, when the heterochromatinization process specifically acts upon the paternally derived chromosomes. This system thus lends itself to the investigation of the behavior of epigenetic marks before and across the onset of heterochromatinization (see Section 3 for details). The mealybug chromosome system exhibits the characteristics of a genuine imprinting phenomenon [29]: the imprint is established in the gametes, maintained through the embryonic and adult somatic cell divisions, and erased in the germline (Figure 1).
In somatic cells, the heterochromatic paternal chromosomes cluster and form a chromocenter that makes it very easy to distinguish male from female embryos. The chromocenter is noticeable in the nuclei of most tissues except the Malpighian tubules and the gut, where the facultative heterochromatin reverts to a euchromatic state [30,31]. The maternal euchromatic chromosomes are always distinguishable from the paternal ones until metaphase, when they too reach a high degree of condensation. Based on these features, paternal chromosome inactivation in male mealybugs represents, together with X-chromosome inactivation in female mammals, the clearest and largest-scale example of facultative heterochromatinization. Facultative heterochromatinization may be defined as the developmentally regulated and tissue-specific cis-spreading of a heterochromatic state onto a euchromatic region, with a remodeling of the chromatin conformation that eventually leads to inactivation of all the genes it harbors. In contrast to constitutive heterochromatin, facultative heterochromatin is not composed of specific DNA sequences and in general involves only one of the two homologous sites; in these respects it represents a true epigenetic phenomenon.
The paternal origin of the heterochromatic set was established by Brown and Nelson-Rees (1961) [32]. These authors irradiated Planococcus citri males with X-rays prior to mating and then scrutinized their male offspring. Due to their holocentric nature, chromosome fragments are not lost during paternal spermatogenesis and embryo development, so the authors could demonstrate that the radiation-induced chromosomal aberrations were present only in the heterochromatic haploid set of the sons. In contrast, in the male offspring of X-ray-treated females, only the euchromatic chromosomes were damaged. Using an analogous strategy, the same authors demonstrated the genetic inactivity of the heterochromatic set [32]. The parahaploid male progeny of X-ray-treated males exhibited normal vitality, whereas the survival of the diploid daughters decreased with increasing X-ray dose. This apparent paradox is easily explained if one considers that, in the sons, any paternally transmitted mutation was harbored by heterochromatic chromosomes and hence was not expressed, while in the female progeny, any dominant lethal mutation was expressed. Nevertheless, the heterochromatic haploid chromosome set is not completely genetically inert in males, since at least three different effects were found that could be ascribed to some residual activity of the paternal genome. First, the survival of male offspring of heavily X-ray-treated males (60,000 to 90,000 rep) depended on the amount of heterochromatic material, since the loss of heterochromatic fragments reduced vitality [33]. The second effect was related to fertility: 100% of the male offspring of irradiated males (30,000 rep) survived, but a large percentage was sterile, and the frequency of sterile individuals increased with the radiation dose [33].
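The logic behind the apparent paradox in the irradiation experiment can be made explicit with a toy model (our illustration, not the authors' analysis): dominant lethal mutations transmitted by an irradiated father sit on the silenced heterochromatic set in sons but on an active euchromatic set in daughters.

```python
# Toy model of parahaploid mutation shielding: a paternally transmitted
# dominant lethal kills only offspring in which the paternal set is expressed.

def survives(sex, paternal_lethals):
    """Return True if the offspring survives, given the number of dominant
    lethal mutations on the paternally derived chromosome set."""
    if sex == "male":
        # In sons, the paternal set is heterochromatic and silent,
        # so the lethals are never expressed.
        return True
    # In daughters, both sets are euchromatic, so any lethal is expressed.
    return paternal_lethals == 0

assert survives("male", paternal_lethals=5) is True    # sons unaffected
assert survives("female", paternal_lethals=5) is False # daughters die
assert survives("female", paternal_lethals=0) is True
```

This binary caricature ignores the residual activity of the heterochromatic set described in the text (the dose-dependent sterility and vitality effects), which is exactly what makes the real system more interesting than the model.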
The third effect can be deduced from the observation that, in the male progeny of interspecific crosses, the heterochromatic set from one species could not be substituted for that (of an equivalent amount) of another [34]. Moreover, the activity of heterochromatic rDNA loci was demonstrated by the observation that in male cells ribosomal genes located on heterochromatic chromosomes were associated with nucleoli and nascent rRNA [35].
In mealybugs, meiosis is atypical since the meiotic divisions are inverted: in both male and female mealybugs, the first meiotic division is equational, with separation of sister chromatids, while the second one is reductional, with segregation of the homologs (inverted meiosis) [36,37]. In females the remaining meiotic events are canonical, since homologous chromosomes undergo crossing over and independent assortment. During male meiosis, each spermatogonial precursor cell produces a cluster of synchronously dividing spermatogonia. Each spermatogonium divides four times to produce a cyst of 16 primary spermatocytes, which then undergo the two meiotic divisions (Figure 3) [30]. The reductional second meiotic division is characterized by a nonindependent assortment of chromosomes, with the maternal euchromatic set segregating from the paternal heterochromatic one through a monopolar spindle (Figure 4) [37]. The two meiotic divisions thus generate a quadrinucleate spermatid with two nuclei containing the maternally derived euchromatic chromosomes and two nuclei containing the paternally derived heterochromatic ones. Only the spermatids containing the euchromatic chromosomes differentiate into sperm, while the heterochromatic products fail to form mature sperm and slowly degenerate in situ; only the 32 "euchromatic" spermatids start the elongation process that ends with the production of 32 mature sperm (Figure 3). As a consequence of this extreme meiotic drive, only the maternally derived euchromatic chromosomes are transmitted to the progeny.
Figure 3: P. citri spermatogenesis. Each spermatogonial precursor cell produces a cluster of synchronously dividing spermatogonia, and after four mitotic divisions a cyst of 16 primary spermatocytes is obtained. Primary spermatocytes undergo an inverted type of meiosis, characterized by nonindependent assortment. Meiosis produces a quadrinucleate cell with 2 elongated spermatids containing only the euchromatic chromosomes (gray staining) and 2 pycnotic spermatids containing only the heterochromatic chromosomes (dark staining). Only the euchromatic spermatids differentiate into 32 mature sperm.
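The cell counts described above follow from simple arithmetic, which can be checked in a few lines (a sketch of the bookkeeping, not of any published code):

```python
# Bookkeeping for P. citri spermatogenesis: 4 mitotic divisions per
# spermatogonium, then two meiotic divisions that yield quadrinucleate
# spermatids of which only the two euchromatic nuclei mature into sperm.

MITOTIC_DIVISIONS = 4
spermatocytes = 2 ** MITOTIC_DIVISIONS          # 16 primary spermatocytes per cyst
nuclei_per_spermatid = 4                        # each meiosis yields a quadrinucleate spermatid
euchromatic_nuclei = nuclei_per_spermatid // 2  # only these elongate into sperm

mature_sperm = spermatocytes * euchromatic_nuclei
assert spermatocytes == 16
assert mature_sperm == 32   # matches the 32 mature sperm in the text
```

The factor-of-two loss at the last step is the quantitative face of the meiotic drive: half of the meiotic products, those carrying the paternal heterochromatic set, never become sperm.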
In summary, mealybug males exhibit not only two remarkable epigenetic phenomena, chromosome imprinting and facultative heterochromatinization on a genome-wide scale, but also a dramatic deviation from canonical meiosis, represented by inverted meiosis, nonindependent chromosome assortment, and extreme meiotic drive.
The Mechanisms of Imprinted Facultative Heterochromatinization in Mealybugs
We carried out an extensive scrutiny of the epigenetic mechanisms in the mealybug P. citri and found that the machinery underpinning imprinted facultative heterochromatinization involves HP1-like and HP2-like proteins, as well as specific posttranslational histone modifications [28,[38][39][40][41][42]. Chromatin remodeling events have been commonly indicated as a mechanism by which eukaryotic cells regulate most epigenetic phenomena (reviewed in [43]), though the relevance of histone modifications as the carrier of epigenetic memory has been questioned [44]. Chromatin remodeling involves the interplay of many different posttranslational modifications of histones and of a number of nonhistone proteins. Histone modifications play a central role in the regulation of gene expression, and this led some authors to postulate the existence of a "histone code" as a regulatory code modulating the potentialities of the genetic code [45,46]. Though the crosstalk of histone modifications does actually influence chromatin function, their combinations probably do not identify a true code. The interplay between the heterochromatin protein HP1 [47] and the lysine 9 trimethylated isoform of histone H3 (K9H3me3) has been shown to be pivotal for the assembly of silent chromatin domains [48][49][50]. The human (SUV39H1), murine (Suv39h), and Drosophila (SU(VAR)3-9) histone methyltransferases (HMTases) that selectively di- and trimethylate histone H3 at lysine 9 generate a binding site for HP1 family proteins [48][49][50]. Moreover, in mammals, yeast, and Drosophila, it has also been shown that HP1 is, in turn, associated with the K9H3 HMTase, suggesting a self-maintenance model for the propagation of heterochromatic domains in native chromatin that may well be responsible for epigenetic memory [51][52][53][54].
Facultative heterochromatinization in the nuclei of male mealybugs does not occur simultaneously in all cells of the 7th cleavage embryo but takes place as a wave, beginning at one end of the embryo and spreading to the other (Figure 2) [28]. HP1-like distribution in P. citri embryos was investigated using an antibody against Drosophila HP1 (the C1A9 antibody [47]) [28]; this antibody recognized a protein of similar mass (29 kDa) which shared the Drosophila HP1 epitope [10,28]. The establishment of a well-formed chromocenter in male embryo nuclei was preceded by the appearance of aggregates of HP1-like immunostaining that then continued to decorate the male-specific heterochromatin [28]. These results led us to hypothesize that the P. citri HP1-like protein might play a causative role in facultative heterochromatin formation (Figures 5 and 6). This hypothesis was confirmed by cloning the P. citri HP1-like gene [40], which was found to coincide with pchet2, a chromodomain-containing gene identified by Epstein and collaborators in 1992 [55]. The pchet2 sequence was used to construct double-stranded interfering RNA that was employed to knock out pchet2 expression in coccid embryos. The knockout resulted in the inhibition of facultative heterochromatin formation [40]. In fact, the lack of chromocenter development following PCHET2 depletion made it very difficult to distinguish male embryos. The role of PCHET2 was also confirmed in some adult tissues in which the reversion of heterochromatinization occurs. In gut tissues, for example, the loss of chromocenters was accompanied by the dispersion of the PCHET2 signal (Figure 6).
The distribution of the histone modifications K9H3me3 and K20H4me3 in P. citri nuclei was also coincident with facultative heterochromatin [38,39]. Moreover, the immunological detection of these histone modifications in male embryos preceded the appearance of facultative heterochromatin. Significantly, pchet2 knockout led to the loss of immunostaining for both K9H3me3 and K20H4me3 [40]. Interestingly, a study on the inactivation of the human X chromosome showed a similar colocalization of HP1 with the K9H3me3 and K20H4me3 histone modifications on the inactive X [56]. The K9H3me3-HP1-K20H4me3 pathway is thus an evolutionarily conserved mechanism for the epigenetic silencing of large chromosomal domains by facultative heterochromatinization.
Figure 4: Meiotic spindle immunostaining showing that meiosis II metaphase plates are associated with a monopolar spindle. Bar represents 10 μm [40].
The pattern of acetylation of histone H4 (AcH4), a histone modification that has been associated with active chromatin, was investigated in P. citri by Ferraro and collaborators [57], who found that the male-specific heterochromatic chromocenter is devoid of this modification, as is also the case for the inactive X chromosome in female mammals [58].
Interestingly, all the factors implicated in mealybug facultative heterochromatin assembly are already associated with constitutive heterochromatin. The same is true of the heterochromatin protein HP2, which was isolated as a constituent of D. melanogaster constitutive heterochromatin. Using an antibody against Drosophila HP2, we demonstrated that it also decorates the P. citri male-specific heterochromatin [41].
The features of the imprinting cycle (see the last paragraphs of Section 1) focused the search for its molecular mechanisms on DNA methylation, whose characteristics (establishment, maintenance, and erasure) fulfill the requirements of the imprinting cycle in mammals [23] (Figure 1). In the chromosomal domains where imprinted genes lie, sequence elements have been identified that are essential to imprinted gene expression. These "imprinting control elements" (ICEs) are rich in CpG dinucleotides (many correspond to CpG islands), which exhibit parent-of-origin-specific differential DNA methylation. Following fertilization, allele-specific methylation marks are maintained throughout development and modulate the imprinted differential expression of the alleles [59]. These differentially methylated regions (DMRs) may lie either at the boundary between reciprocally imprinted genes or in the promoter of antisense silencing RNAs [60,61]. In mealybugs, the role of DNA methylation in imprinting was first studied by Scarbrough and collaborators in 1984 [62]. These authors showed the presence of methylated cytosines in the male genome of P. calceolariae and measured, by HPLC, the total amount of methylated cytosines in males (0.68 ± 0.02%) and females (0.44 ± 0.04%) [62]. However, these studies failed to directly correlate DNA methylation and chromosome heterochromatinization. The occurrence of CpG methylation in P. citri was confirmed by our group at both the molecular and cytological levels [63]. We showed that the paternally derived chromosomes were hypomethylated at CpG dinucleotides compared to the maternal chromosomes in both males, where they are inactivated, and females, where they remain active. This result indicates that in mealybugs, as in mammals, parent-of-origin-specific differential DNA methylation is the molecular signal that imprints chromosomes.
However, since in males paternal heterochromatic chromosomes are less extensively methylated than their maternal euchromatic counterparts, we concluded that DNA methylation in mealybugs does not induce genetic inactivation, as it occurs in vertebrates [63]. On the other hand, the lack of a direct correlation between DNA methylation and gene silencing seems to be a common feature in insects (reviewed in [64]).
The Mealybugs as a Paradigm of Epigenetics
The reprogramming of the parent-of-origin-specific epigenetic marks during gametogenesis is one of the key features of genomic imprinting. In mealybugs, the chromatin remodeling events that occur during gametogenesis and lead to the facultative heterochromatinization of an entire haploid set of chromosomes in the male progeny were thoroughly scrutinized by immunocytological analysis of male and female gametogenesis (Figure 7) [42]. K9H3me3, K9H3me2, K20H4me3, PCHET2, and HP2-like could not be detected in females from meiosis to mature oocytes, whereas in males they marked all stages from spermatogonia to spermatids, with a distribution pattern that changed according to cell type. In spermatogonia, for example, whereas K9H3me3, K9H3me2, and PCHET2 were enriched within the heterochromatin, HP2-like and K20H4me3 were found in the euchromatin [42]. However, at the spermatid stage, K9H3me3, K9H3me2, PCHET2, and HP2-like reallocated over both the euchromatin- and the heterochromatin-containing spermatids, which were produced by nonindependent assortment during inverted meiosis. The redistribution of epigenetic signals in spermatids might be related to the establishment of parental imprinting. These results are in agreement with the model proposed by Brahmachari and collaborators [65,66], who described the reorganization of the male-specific NRC (nuclease-resistant chromatin) [67,68] during spermatogenesis. These authors found that NRCs were acquired during maturation by sperm nuclei that contained the maternal, originally NRC-free, chromosome set [66]. Following spermiogenesis, PCHET2, the mealybug HP1-like protein, was lost from mature sperm, whereas K9H3me3, K9H3me2, K20H4me3, and HP2-like were still detectable, thus ruling out the possibility that PCHET2 plays a role in the imprinting mechanism. Sperm that entered the oocyte possessed distinct K9H3me3 and K9H3me2 signals that were still found in the early pronucleus.
Thus, K9H3 di- and trimethylation turned out to be the best candidates for the marks that imprint the paternal chromosomes. Buglia and Ferraro reported that the two euchromatic spermatids originating from a single meiosis were labeled with different levels of K9H3me3 and of C1A9-positive immunostaining, suggesting that the two resulting sperm produce male or female progeny according to the amount of these epigenetic factors [69]. However, the nuclei of quadrinucleate spermatids share a common cytoplasm, making it unlikely that an enrichment of K9H3me3 in one of the "euchromatic" spermatids could occur independently of the other "euchromatic" spermatid. Accordingly, in our scrutiny of P. citri spermatogenesis, we failed to observe any significant difference in labeling between the two euchromatic spermatid nuclei stemming from the same meiosis with any of the epigenetic factors we tested [42].
Taken as a whole, these observations suggest that the sex determination of the zygote very likely depends upon some unknown factor(s) deposited in the cytoplasm of the egg by the mother. This scenario is consistent with the studies of Nelson-Rees [70], who showed that the sex ratio fluctuates widely from female to female and is markedly influenced by the mother's age. Additionally, in insects the sex ratio can be affected by different environmental factors acting on the parents, such as extreme temperature, starvation, and lack of resources [71][72][73][74][75][76]. In P. citri females, various factors, such as population density [77,78], temperature [70,79], and mating age [70,78,80], were found to influence sex allocation. Ross and collaborators tested three environmental factors (rearing temperature, food deprivation, and age at mating) and showed that the effect of high temperature was rather weak, food restriction appeared to be strongly associated with reduced longevity, while older age at mating affected sex allocation, resulting in female-biased sex ratios [81]. The mechanism of this phenomenon is still unclear, although PCHET2 and the histone modifications involved in facultative heterochromatinization [10,28,37,40] are thought to be also involved in sex determination [82]. Females might alter the concentration of these proteins in their eggs to modulate the sex ratio of their broods. Along these lines, Buglia and collaborators observed increased concentrations of a C1A9-positive-staining protein in the eggs of females that were aged prior to mating [83]. They supposed that these females would produce male-biased offspring (although sex ratio data were not provided), whilst the opposite effect of maternal ageing prior to mating was observed in other studies [81].
We can hypothesize that the embryo cytoplasm, at the blastoderm stage, determines whether the paternal chromosomes, which are marked by DNA hypomethylation [63] and K9H3 methylation [42], will undergo heterochromatinization or not, giving rise to a male or a female embryo, respectively. Given the causative role of PCHET2 in male-specific heterochromatin formation [40], the amount of PCHET2 in the developing embryo may be crucial to steer the embryo toward male or female development. As reported above, facultative heterochromatin forms in 7th-cleavage male embryos as a wave from one pole of the embryo toward the other [10], suggesting a graded distribution of PCHET2 in the embryo. Since PCHET2 could be detected neither in the sperm nor in the oocyte [42], its presence in the embryo should be the result of early de novo synthesis under the control of the above-mentioned maternal factor(s).
Khosla et al. have also suggested that a unique chromatin organization is a mechanism of genomic imprinting in coccids [65]. The nuclease-resistant chromatin (NRC) [67] represents an altered organization of 10% of the paternal genome, not cytologically equivalent to heterochromatin, but perhaps containing the putative centers for facultative heterochromatin nucleation. At the cleavage stage a choice is made between the maintenance or loss of the NRC transmitted with the sperm, leading to a male or female developmental pathway, respectively [65]. The model of Khosla et al. [65] can be well reconciled with our cytological dissection of imprinting marks during spermatogenesis [42]. We can assume, as suggested by Khosla et al., that NRC regions represent chromosome inactivation centers scattered at many loci along the chromosomes. NRCs may be imprinted in the mature sperm by DNA hypomethylation and K9H3 trimethylation marks that then spread to the whole paternal genome. Then, in the cleavage embryo, some maternal factor(s) might regulate the amount of PCHET2, which gradually spreads from one embryo pole to the other. A critical amount of PCHET2 will then determine whether the imprinted paternal chromosomes become heterochromatic, thus leading to male development, or remain euchromatic, losing the repressive histone modifications and NRCs, and eventually leading to female development.
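The threshold logic of this model can be caricatured in a few lines (entirely our speculative sketch; the threshold value and units are invented, and the real maternal factor(s) remain unidentified):

```python
# Speculative threshold sketch of the PCHET2 model: maternal factors set the
# embryonic PCHET2 level, and only imprinted paternal chromosomes in embryos
# above a critical amount become heterochromatic (male development).

PCHET2_THRESHOLD = 1.0  # arbitrary units; hypothetical

def embryo_sex(pchet2_level, paternal_set_imprinted=True):
    """Male if the imprinted paternal set heterochromatinizes, else female."""
    if paternal_set_imprinted and pchet2_level >= PCHET2_THRESHOLD:
        return "male"    # paternal set becomes heterochromatic
    return "female"      # paternal set stays euchromatic, marks are lost

assert embryo_sex(1.5) == "male"
assert embryo_sex(0.5) == "female"
assert embryo_sex(1.5, paternal_set_imprinted=False) == "female"
```

The sketch makes the model's central prediction explicit: sex follows the maternally controlled PCHET2 dose, not anything intrinsic to the zygote's chromosomes themselves.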
Perspectives
Based on the characteristics presented in this paper, the phenomenon of imprinted facultative heterochromatinization in mealybugs represents one of the most remarkable examples of epigenetics in eukaryotes. The mealybug chromosome system offers a powerful tool with which to dissect the phenomenon of facultative heterochromatinization and the mechanisms of parental imprinting. The conservation in mealybugs of almost all the epigenetic mechanisms that act in mammals strongly supports the use of these species as a model for epigenetics. Most epigenetic mechanisms, such as histone modifications and their interplay with the HP1 proteins, show the same functional role in mealybugs as in mammals; others, namely DNA methylation, exhibit a different involvement in epigenetics.
From the above considerations, it appears that a genome-wide approach to mapping the distribution of epigenetic marks along the coccid genome represents a new challenge for the functional analysis of epigenomes. The epigenetic landscape of the mealybug genome might be useful (i) to determine whether there are DNA sequences that act as inactivation centers scattered along the chromosomes, as suggested by the inactivation of small fragments from irradiated chromosomes; (ii) to highlight the possible role of small RNAs in facultative heterochromatinization and imprinting; (iii) to analyze a specific functional role for the different histone modifications; and (iv) to verify the presence and distribution of DNA methylation and its relationship to the histone modifications. | 2014-10-01T00:00:00.000Z | 2012-04-08T00:00:00.000 | {
"year": 2012,
"sha1": "c96815d24ef7a0f03fad2fcb57763bfbeba3592b",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/gri/2012/867390.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2770d4a37fee135175e5503d4207ec633fe3d62",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
260643270 | pes2o/s2orc | v3-fos-license | Complement Inhibitors for Advanced Dry Age-Related Macular Degeneration (Geographic Atrophy): Some Light at the End of the Tunnel?
Geographic atrophy (GA) affects around 5 million individuals worldwide. Genome-wide, histopathologic, in vitro and animal studies have implicated the activation of the complement system and chronic local inflammation in the pathogenesis of GA. Recently, clinical trials have demonstrated that intravitreal injections of pegcetacoplan, a C3 inhibitor, and avacincaptad pegol, a C5 inhibitor, both produce a statistically significant reduction in the growth of GA of up to 20% in a dose-dependent fashion. Furthermore, the protective effect of both pegcetacoplan and avacincaptad appears to increase with time. However, despite these anatomic outcomes, visual function has not improved, as these drugs appear only to slow down the degenerative process. Unexpected adverse events included conversion to exudative NV-AMD with both drugs. Occlusive retinal vasculitis and anterior ischemic optic neuropathy have been reported in pegcetacoplan-treated eyes.
Introduction
Age-related macular degeneration (AMD) is a significant cause of blindness, accounting for 8.7% of all blindness cases worldwide. Projections show that its incidence will increase by 2040, when it may affect up to 400 million people [1]. Late AMD has two components, a neovascular (NV) component and a non-neovascular component. The advanced late stage of the non-NV component is called geographic atrophy (GA) [2]. GA affects around 5 million individuals worldwide [1,3,4].
GA is characterized by an insidious and progressive loss of photoreceptors, retinal pigment epithelium (RPE) and the choriocapillaris. It typically starts in the perifoveal region and spares the central fovea until the very end [4,5]. Given the ability of spectral-domain optical coherence tomography (SD-OCT) to discriminate the different macular anatomic layers, and its widespread availability in the daily management of AMD, the Classification of Atrophy Meeting group (CAM) [6] recently defined incomplete retinal pigment epithelium and outer retinal atrophy (iRORA) and complete retinal pigment epithelium and outer retinal atrophy (cRORA) in the hope of better characterizing GA. These definitions are based on the extent of RPE and outer retina loss as seen on SD-OCT [7]. cRORA was defined as (1) a region of hypertransmission of at least 250 µm in diameter; (2) a zone of attenuation or disruption of the RPE of at least 250 µm in diameter; (3) evidence of overlying photoreceptor degeneration, whose features include outer nuclear layer (ONL) thinning, external limiting membrane (ELM) loss and ellipsoid zone (EZ) or interdigitation zone (IZ) loss; and (4) the absence of a scrolled RPE or other signs of an RPE tear [7] (Figure 1). iRORA was defined by the following criteria: (1) a region of signal hypertransmission into the choroid; and (2) a corresponding zone of attenuation or disruption of the RPE, with or without persistence of basal laminar deposits (BLamD), together with evidence of overlying photoreceptor degeneration, i.e., subsidence of the inner nuclear layer and outer plexiform layer (OPL), presence of a hyporeflective wedge in the Henle fiber layer (HFL), thinning of the ONL, disruption of the ELM or loss of integrity of the EZ, when these criteria do not meet the definition of cRORA (Figure 2). The pathophysiologic mechanisms involved in the initiation and progression of GA remain poorly understood.
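The cRORA/iRORA criteria above can be written out as a simple decision rule. The following Python sketch is only an illustration of the published thresholds, not a validated grading tool; the function name and the boolean inputs are hypothetical simplifications of what an OCT grader actually assesses.

```python
def classify_atrophy(hypertransmission_um: float,
                     rpe_disruption_um: float,
                     photoreceptor_degeneration: bool,
                     rpe_tear_signs: bool) -> str:
    """Toy decision rule for the CAM criteria summarized above.

    cRORA: hypertransmission and RPE attenuation/disruption each at least
    250 um in diameter, overlying photoreceptor degeneration, and no signs
    of an RPE tear.  iRORA: the same qualitative features present but not
    meeting the 250-um thresholds of cRORA.
    """
    if rpe_tear_signs:
        return "not RORA (RPE tear)"
    if not photoreceptor_degeneration:
        return "no RORA"
    if hypertransmission_um >= 250 and rpe_disruption_um >= 250:
        return "cRORA"
    if hypertransmission_um > 0 and rpe_disruption_um > 0:
        return "iRORA"
    return "no RORA"

print(classify_atrophy(300, 280, True, False))  # -> cRORA
```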
Several clinical trials have tested different drugs with different mechanisms of action including brimonidine, ciliary neurotrophic factor delivered by encapsulated cell therapy, an antiamyloid beta monoclonal antibody and visual-cycle modulators such as fenretinide and emixustat, among others. None of these approaches have yielded positive results [8][9][10][11].
Genome-wide, histopathologic, in vitro and animal studies have implicated the activation of the complement system and chronic local inflammation in the pathogenesis of GA [12][13][14][15][16]. The complement system is composed of more than 30 plasma proteins that are part of the innate immune system. Its three main functions are defense against infection, bridging the adaptive and innate immune systems, and the disposal of waste by mediating the clearance of immune complexes and apoptotic cells [17]. The complement system can be activated via three pathways, namely the classical pathway, the lectin pathway and the alternative pathway, each with different activators [17]. For instance, circulating C1q binds antigen-antibody complexes, activating the classical pathway. Mannan-binding lectin recognizes mannose-rich carbohydrates on pathogen surfaces, activating the lectin pathway. The alternative pathway appears to be in a constant low state of activation, in which C3 is spontaneously hydrolyzed into C3a and C3b, and it may be amplified via a feedback loop. All three pathways converge upon a common terminal pathway that sequentially assembles C5b, C6, C7, C8 and C9, culminating in the formation of the membrane attack complex (MAC). The MAC promotes cell lysis by forming pores across the cellular lipid bilayer [4,[17][18][19].
The main sources of complement components and many circulating complement regulatory proteins are liver hepatocytes, which release these proteins into the bloodstream. However, in certain tissues with limited access to these circulating proteins, the machinery to synthesize complement extrahepatically exists. A quantitative polymerase chain reaction analysis revealed that the cells in the human RPE-choroid complex express a complete set of transcripts associated with both the alternative and classical complement pathways. In contrast, with the exception of C5 and C7, the other components of the lectin and terminal pathways rely on the systemic circulation to be delivered to the human RPE-choroid complex.
Under normal conditions, complement-related gene expression is limited to the terminal pathway and to inhibitors of the alternative pathway [16]. In susceptible individuals that carry risk variants of complement components, the expression of these variants in the RPE may lead to dysregulation and over-reactivity of the complement cascade. These events can promote AMD by several mechanisms. For instance, drusen, the hallmark of early and intermediate AMD, are extracellular deposits composed of complement activators, complement regulatory proteins and complement factors. This suggests that a chronic local inflammatory and immune-mediated event at the level of the RPE-Bruch's membrane complex may play a central role in drusen biogenesis [16]. Hyperactivity of the MAC may promote the lysis of cells in the RPE, choriocapillaris and photoreceptors [20]. The complement fragments C3a and C5a direct macrophages and microglial cells into the subretinal space, promoting inflammation [20]. The Y402H variant of factor H can promote the accumulation of phagocytes, leading to inflammation in the subretinal space and the RPE cells [20,21]. Dysregulation of the complement cascade can lead to NLRP3 inflammasome activation. The NLRP3 inflammasome is a protein complex composed of NLRP3, the adaptor molecule ASC and caspase 1. Once activated, this system promotes the secretion of the cytokines interleukin-1β and IL-18, which in turn lead to a specific type of lytic cell death called pyroptosis [4,20,22].
Recently, clinical trials have demonstrated a reduction in the growth of GA following the inhibition of C3 and C5. The United States Food and Drug Administration (USFDA) has recently approved pegcetacoplan (Syfovre, Apellis Pharmaceuticals Inc., Waltham, MA, USA), a C3 inhibitor, for the treatment of GA. Avacincaptad pegol (Zimura, Iveric Bio, Parsippany, NJ, USA), a C5 inhibitor, has been granted a Fast Track designation by the USFDA. The purpose of the current manuscript is to review the inhibition of the complement factors C3 and C5 in patients with GA.
Pegcetacoplan (Syfovre)
Pegcetacoplan binds to C3 and C3b and regulates the overactive complement system. It consists of two copies of a tridecapeptide covalently conjugated to a linear polyethylene glycol molecule through a Lys linker to enhance its half-life [18,23]. The phase 2 FILLY study included 246 patients with GA [24]. The diagnosis of GA had to be confirmed by blue fundus autofluorescence (FAF), with an area of atrophy ≥2.5 mm² and ≤17.5 mm² and with hyperautofluorescence present in the junctional zone of the GA area. Eyes with multifocal lesions had to have at least one lesion ≥1.25 mm² [24]. The patients were randomized to monthly or every-other-month (EOM) intravitreal injections of 15 mg of pegcetacoplan, or to sham injections [24].
The growth of GA depends on the baseline area of GA [25]. The square root transformation of the GA lesion area corrects for this dependency [26]. The primary outcome of the FILLY trial was the 12-month change from baseline in the square-root-transformed area of atrophy as measured by FAF. A statistically significant reduction in the growth rate of the square-root-transformed GA area was seen in both the monthly (29%) and the EOM (20%) arms when compared to the control arm. This beneficial effect was most pronounced between months 6 and 12 of treatment, when the reduction in the growth rate was 45% and 33% in the monthly and EOM arms, respectively. Conversely, if the treatment was stopped at month 12, the effect on GA growth was significantly reduced [24]. The FILLY study identified both an extrafoveal GA lesion and a larger low-luminance deficit (LLD) as independent risk factors for GA progression, and showed that after correcting for these factors the treatment effect was maintained [27]. Intravitreal pegcetacoplan also significantly reduced photoreceptor loss and thinning, as assessed by a fully automated deep learning algorithm that segmented the RPE and photoreceptors in SD-OCT volume scans [28]. Pegcetacoplan also significantly reduced the progression of iRORA to cRORA: at one year, iRORA had progressed to cRORA in 50.0% of eyes in the monthly arm, 60% in the EOM arm and 82% in the sham group. These results suggest that pegcetacoplan may be beneficial in earlier stages of AMD [29]. Pegcetacoplan did not affect foveal encroachment by GA during the 12 months of treatment in this study [24].
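The square root transformation described here is straightforward to compute: growth is compared as the difference between square roots of the lesion areas, which reduces the dependence on baseline lesion size. A minimal Python sketch with illustrative (hypothetical) numbers, not trial data:

```python
import math

def sqrt_transformed_growth(baseline_area_mm2: float, followup_area_mm2: float) -> float:
    """Change in the square-root-transformed GA lesion area (units: mm).

    Taking square roots before differencing reduces the dependence of the
    measured growth on the baseline lesion size.
    """
    return math.sqrt(followup_area_mm2) - math.sqrt(baseline_area_mm2)

def percent_reduction(treated_growth: float, sham_growth: float) -> float:
    """Percent reduction in growth of a treated arm relative to a sham arm."""
    return 100.0 * (1.0 - treated_growth / sham_growth)

# Hypothetical example: both arms start at 7.0 mm2; sham grows to 9.0 mm2,
# treated grows to 8.4 mm2 over the same interval.
g_sham = sqrt_transformed_growth(7.0, 9.0)
g_treated = sqrt_transformed_growth(7.0, 8.4)
print(round(percent_reduction(g_treated, g_sham), 1))
```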
Even though the primary endpoint of the pegcetacoplan trials was based on FAF imaging, FAF has certain shortcomings that need to be stated. FAF identifies RPE atrophy, but it does not identify photoreceptor loss, it does not assess the status of the junctional zone, nor does it reliably identify RPE atrophy within the fovea. In contrast, SD-OCT can reliably overcome these FAF shortcomings [28,30]. Mai et al. [30] compared and correlated the FAF and the SD-OCT outcomes of the FILLY trial. SD-OCT was found to be as reliable as FAF at determining GA lesion growth. In addition, SD-OCT was able to assess EZ impairment, leading the authors to conclude that SD-OCT could be a more sensitive monitoring tool during GA treatment [30]. Another post hoc analysis of the FILLY trial analyzed the SD-OCT images at baseline, 2 months, 6 months and 12 months. Deep learning automated segmentation of the RPE and photoreceptor thickness was performed. Intravitreal pegcetacoplan led to a reduction in photoreceptor loss and thinning when compared to the sham injections. These results provide proof of principle that intravitreal C3 inhibition can preserve photoreceptors [28].
In another post hoc analysis of the FILLY trial, Vogl et al. [31] used deep learning algorithms to identify disease activity and the effects of pegcetacoplan on the progression of GA. The GA lesions on the Heidelberg Spectralis SD-OCT were automatically segmented. The local progression rate was calculated by using a growth model that measured the local growth of the GA lesion margins. Furthermore, they also looked at the mean photoreceptor thickness, hyper-reflective foci concentration and the direction of the GA growth at each individual point on the GA lesion margin. They found that the local progression rate was slower in areas with thicker photoreceptor layers and lower hyper-reflective foci concentrations. For lesions that are actively growing towards the fovea, the closer they get to the center of the fovea, the more the growth slows. There is a peak progression rate at 1 mm eccentricity to the fovea. These researchers confirmed that pegcetacoplan was able to significantly slow down the local progression rate of GA [31].
Outcomes related to visual acuity and the quality of vision did not demonstrate any beneficial effect of pegcetacoplan. All three groups showed a gradual decline in vision, low-luminance BCVA and LLD, without any significant between-group differences [24,32,33].
The pivotal phase 3 clinical trials OAKS and DERBY enrolled 637 and 621 patients, respectively [32]. Both studies had the same design, and their inclusion criteria were similar to those from FILLY. The primary outcome measured was the growth of the GA area from baseline to month 12. Pegcetacoplan significantly reduced the growth of GA by 21% in the monthly arms and 16% in the EOM arms in the OAKS trial. In contrast, the primary outcome was not achieved in DERBY since the reductions obtained were only 12% and 11% for the monthly and EOM arms, respectively [32]. As time went on, the percent reduction in GA growth grew in the pegcetacoplan-treated eyes as compared to the sham-treated eyes. At 24 months, a reduction in GA growth of 36% was achieved in the group receiving monthly injections and 29% in the EOM arms of the DERBY trial compared to a 24% reduction in the monthly arms and 25% in the EOM arms in OAKS [33].
Given the important role that the complement system plays in fighting infection, one of the major concerns of using complement inhibitors is the theoretical risk of an increase in infections. In the FILLY trial, culture-positive infectious endophthalmitis was reported in 2.3% of the eyes in the monthly group compared to 0% in the sham group. In the EOM group, there was a single case (1/79 = 1.3%) of endophthalmitis, which happened to be culture-negative [24]. The 24-month combined incidence of endophthalmitis from OAKS and DERBY was 0.5% in both the monthly and EOM groups. The combined rate of ocular inflammation was 3.8% in the monthly injections group and 2.1% in the EOM group. No cases of retinal vasculitis were reported [33]. However, on 15 July 2023, the American Society of Retinal Specialists (ASRS) Research and Safety in Therapeutics (REST) Committee warned its members that six patients had developed occlusive retinal vasculitis following an administration of pegcetacoplan. These occurred between 7 and 13 days after the pegcetacoplan injection. Up until then, approximately 60,000 vials of pegcetacoplan had been distributed. No specific lots were identified (https://www.asrs.org/clinical/clinical-updates/9327/ASRS-Research-and-Safetyin-Therapeutics-REST-Committee-Update-on-Adverse-Events, accessed on 22 July 2023).
An unexpected dose-dependent increase in the rate of new-onset exudative NV-AMD was observed in the eyes treated with pegcetacoplan when compared to the sham-treated eyes. In the FILLY study, 20.9% of eyes treated monthly converted vs. 8.9% of EOM eyes vs. 1% of sham-treated eyes [24]. In the OAKS and DERBY studies, the conversion rates to exudative NV-AMD were much lower at 6.0%, 4.1% and 2.4% in the monthly, EOM and sham-injection groups, respectively [32]. These rates increased to 11.9%, 6.7% and 3.1% in the monthly, EOM and sham groups, respectively, at 24 months [33]. The risk factors for the development of macular neovascularization (MNV) included MNV in the fellow eye at baseline and the presence of the double-layer sign (DLS) on SD-OCT at baseline [34]. The DLS consists of two highly reflective layers, the RPE and another highly reflective layer beneath the RPE, typically found in areas of the branching vascular network seen in polypoidal choroidal vasculopathy (PCV) [35]. However, the DLS is not pathognomonic of PCV. Other studies found a high correlation between the presence of the DLS and the presence of type 1 MNV or nonexudative neovascular AMD [36,37]. The mechanism of exudative MNV following complement inhibition remains unclear.
Ischemic optic neuropathy (ION) was reported in 1.7%, 0.2% and 0% of eyes in the pegcetacoplan monthly, EOM and sham groups, respectively. All of these eyes had discs at risk and multiple systemic risk factors. The underlying mechanism behind the presentation of ION has yet to be determined [33].
Avacincaptad Pegol (Zimura)
Eculizumab is a humanized murine monoclonal antibody that targets C5. C5 inhibition potentially preserves the anti-inflammatory properties of C3a. It has been approved for the treatment of paroxysmal nocturnal hemoglobinuria and atypical hemolytic uremic syndrome. Patients underwent systemic treatment with eculizumab via an intravenous infusion of 600 mg to 1200 mg weekly for 24 weeks. Systemic eculizumab did not slow down the growth of GA [38].
Avacincaptad pegol is a pegylated RNA aptamer with a high affinity for complement factor C5 [39]. The GATHER1 study was a phase 2/3 study that consisted of two parts. In part 1, 77 patients were randomized to receive 1 mg of avacincaptad, 2 mg of avacincaptad or sham monthly intravitreal injections. In part 2, 209 patients were randomized to receive 2 mg of avacincaptad, 4 mg of avacincaptad or sham monthly intravitreal injections. The 4 mg dose of avacincaptad was delivered as two intravitreal injections of 2 mg [40]. Eligible patients had to have GA without involvement of the foveal center, and it had to be located within 1500 µm of the center of the fovea. The total area of GA had to be between 2.5 and 17.5 mm², as determined by blue FAF imaging. Patients with multifocal GA were also included if they had at least one focal lesion of ≥1.25 mm² [39,40].
As in the pegcetacoplan studies, square root transformations of the GA lesion area were performed to compare the growth rates of the GA lesion size. At a 12-month follow up, the monthly 2 mg arm had a 27.4% reduction in GA growth compared to the sham group. The 4 mg monthly arm had a similar reduction in GA growth of 27.8% when compared with the sham group at month 12 [39]. The earliest growth-reducing effect of GA was seen at 6 months, with a 28.4% reduction for the 2 mg injection arm and a 26.6% reduction for the 4 mg arm [39]. At an 18-month follow up, there was a further reduction in the growth in both treatment arms with respect to the sham arm. The 2 mg and 4 mg arms had a 28.1% and 30% reduction in the GA lesion size, respectively [40].
Based on the favorable results of the GATHER1 study, the confirmatory phase 3 study GATHER2 was designed and executed. In GATHER2, 447 patients were randomized to receive 2 mg of avacincaptad or sham injections [41]. The 2 mg dose was chosen since the efficacy of the 2 and 4 mg doses was similar and the safety profile was better in the 2 mg arm [40]. The inclusion criteria were identical to GATHER1. At 12 months, a statistically significant reduction in the mean rate of GA growth of 14.3%, using a square root transformation, was reported in the group treated with 2 mg of avacincaptad versus the sham-treated group [41].
Similar to pegcetacoplan, in the GATHER1 study the avacincaptad-treated eyes had a higher risk of converting to exudative NV-AMD. Exudative NV-AMD was diagnosed by clinical examination, SD-OCT or fluorescein angiography; OCT angiography was not available. At a 12-month follow up, 9.0% of the eyes in the 2 mg cohort and 9.6% of the eyes in the 4 mg cohort developed MNV [39]. By 18 months, conversion was observed in 11.9%, 15.7% and 2.6% of cases in the 2 mg, 4 mg and sham groups, respectively; conversion in the fellow eyes ranged from 3% to 3.6%. Unfortunately, the patients that developed MNV exited the study; therefore, details of the clinical course and impact on BCVA are limited [40]. In GATHER2, at 12 months, 6.7% and 4.1% of the avacincaptad- and sham-treated eyes, respectively, developed MNV. In GATHER2, there were no cases of intraocular inflammation, endophthalmitis or ION [42].
Supplementary Table S1 summarizes the DERBY, OAKS, GATHER1 and GATHER2 studies.
Conclusions and Future Directions
Since treatment options are now available for patients with GA, a clear distinction should be made between atrophy secondary to an inherited retinal disease and atrophy secondary to AMD [43]. Multimodal imaging is particularly useful when making this distinction. For instance, end-stage Stargardt disease may easily be confused with GA secondary to AMD. Both OCT-A and indocyanine green angiography (ICG-A) may differentiate between these two conditions [44,45]. The areas of atrophy in Stargardt disease typically manifest ICG-A hypocyanescence, whereas atrophic areas in AMD typically manifest iso- or mild hypercyanescence [44]. OCT-A imaging showed that eyes with macular atrophy secondary to Stargardt disease had choriocapillaris loss in the center with persistent tissue at the margins. In contrast, in the AMD eyes, the choriocapillaris was present but rarefied [45].
Despite the success shown by intravitreally delivered C3 and C5 inhibitors in slowing the progression of GA, many challenges still lie ahead. Both have demonstrated dose-dependent reductions in the rate of growth of GA. A post hoc analysis of the FILLY trial showed that pegcetacoplan reduced the rate of progression from iRORA to cRORA, suggesting that it may be beneficial in earlier stages of AMD [29]. The protective effect of both pegcetacoplan and avacincaptad appears to increase with time. However, despite these anatomic outcomes, visual function has not improved, as these drugs appear only to slow down the degenerative process [24,33,[39][40][41]. Patients who opt for treatment with either of these drugs will soon realize that it carries a very high injection burden with no benefit that they themselves can perceive. The key remaining questions involve the timing of treatment initiation in an individual patient. How early should we intervene to avoid the development of irreversible damage? Would the outcomes be the same if patients were treated earlier? Unfortunately, we do not currently have the answers to these and many other questions.
GA lesions are very heterogeneous and vary particularly in their growth patterns. Reported GA growth rates vary anywhere from 0.53 to 2.6 mm² per year [46]. Many factors influence the direction of growth (towards the fovea vs. towards the periphery) and the velocity of growth. The FAF pattern in the junctional zone of the GA lesion may predict the rate of GA progression: eyes with banded or diffuse hyper-FAF patterns demonstrated faster growth rates than eyes without hyper-FAF patterns or with only focal hyper-FAF patterns [47]. Others have reported that the presence of reticular pseudodrusen (subretinal drusenoid deposits) accelerates the growth of GA [48][49][50]. Lesion size also determines growth rates: smaller lesions tend to grow more slowly whereas larger lesions tend to grow faster, and extrafoveal and multifocal lesions grow faster than foveal and unifocal lesions [46,51]. In patients whose fellow eye also harbors a GA lesion, the GA lesion grows faster [52]. Choriocapillaris flow deficits and impairment may also affect the growth rates of GA lesions [53,54]. Patient ethnicity may also play a role. A recent study highlighted differences in GA lesion phenotype, growth rate and associated features between Asians and non-Asians [55]. Asian patients exhibited thicker choroids, fewer drusen and smaller GA lesions with fewer GA foci compared to non-Asian patients. The proportion of eyes with a diffuse or banded FAF pattern was similar between both groups. In general, Asian eyes with GA had slower lesion growth than non-Asian eyes [55]. Ideally, risk stratification should guide the decision to initiate therapy with these novel drugs. However, before risk calculators that take into account the direction of growth, location and predicted growth rate can be used, as suggested by Guymer, much work remains to be done.
In order to accomplish this, a recent editorial by Guymer [43] emphasizes the need to image all of our GA patients with FAF and SD-OCT. Artificial intelligence algorithms may then use these images to help the decision-making process.
A better understanding of the role of the complement system in the pathogenesis of AMD is definitely needed. A genetic analysis of 47 genes that included CFH, CFI, C2/CFB and C3 was undertaken in the FILLY trial. None of these genes influenced the response to treatment with pegcetacoplan. Two genetic factors, rs2230199 in C3 and rs3750846 in ARMS2, were found to influence the rate of growth of GA, regardless of the treatment group to which the patients were assigned. This indicates that pegcetacoplan slows GA progression independent of these genetic risk factors [24]. Furthermore, a recent post hoc analysis on aqueous humor and plasma samples of 81 patients from the Chroma and Spectri trials showed that the complement levels or activities in the aqueous humor or plasma did not correlate with the GA lesion size or growth rate [56].
A significant concern with these complement inhibitors is the conversion to exudative NV-AMD, which appears to be dose dependent with both C3 and C5 inhibition [34,42]. It remains unclear if these patients will require a temporary or chronic VEGF suppression. Despite timely treatment with anti-VEGF agents, these eyes could potentially develop a further loss in vision and experience an increased treatment burden as well. Several hypotheses have been put forth to explain this phenomenon [34,57]. As OCTA becomes more available, it is hoped that future trials will incorporate it to further assess the mechanisms involved in the conversion of GA to exudative NV-AMD following C3 or C5 inhibition. Similarly, the unexpected findings of ION and the recent alert regarding occlusive retinal vasculitis are worrisome and merit further study [33].
In summary, the intravitreal inhibition of either C3 or C5 slows down the progression of GA. It is hoped that these results stimulate further research in the field to obtain additional insights that can fuel future drug research and development in GA and its precursor states, so that patients with this condition may see some light at the end of the tunnel.
Conflicts of Interest:
The authors declare no conflict of interest. Lihteh Wu has received lecture fees from Bayer, Roche and Lumibird Medical. | 2023-08-07T15:02:01.883Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "a6d2d2a9b56e6fedd57175c74e42a83a16d3761b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5dc4e231fd6e8991636067dadfe43d78cb2034d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231769213 | pes2o/s2orc | v3-fos-license | Exploring the Effect of Pulsed Electric Fields on the Technological Properties of Chicken Meat
Pulsed electric field (PEF) is a non-thermal technology which is increasingly drawing the interest of the meat industry. This study aimed at evaluating the effect of PEF on the main technological properties of chicken meat, by investigating the role of the most relevant process parameters such as the number of pulses (150 vs. 300 and 450 vs. 600) and the electric field strength (0.60 vs. 1.20 kV/cm). Results indicated that PEF does not exert any effect on meat pH and just slightly affects lightness and yellowness. Low-intensity PEF treatments improved the water holding capacity of chicken meat by significantly (p < 0.001) reducing drip loss up to 28.5% during 4 days of refrigerated storage, without damaging proteins’ integrity and functionality. Moreover, from the analysis of the process parameters, it has been possible to highlight that increasing the number of pulses is more effective in reducing meat drip loss rather than doubling the electric field strengths. From an industrial point of view, the results of this explorative study suggested the potential of PEF to reduce the undesired liquid inside the package, thus improving consumer acceptance.
Introduction
Pulsed electric field (PEF) is a non-thermal technique that relies on the application of high-intensity electrical pulses for short duration times to biological tissues placed between two electrodes [1]. Although this technology was first adopted in the food industry about 50 years ago, PEF is still considered an emerging technology and is increasingly raising the interest of the scientific community due to its sustainability, environmental friendliness, and recent advancements in industrial applications [2,3]. PEF is applied as a single treatment or in combination with other technologies to synergistically enhance product quality, microbial stability, and process yields [4,5]. The abovementioned goals can be achieved through the modulation of several process parameters (e.g., electric field strength, number of pulses, time of treatment, etc.) leading to the electrical breakdown of cell membranes (i.e., electroporation) and resulting in the creation of pores, which work as conductive channels [6,7].
Although in the past 50 years several mechanisms of pore formation have been proposed [8][9][10], recent advancements in the field highlighted that the main mechanism of electroporation involves the development of aqueous pores in the lipid bilayer of the cell membrane, along with variations to individual membrane lipids and proteins, as suggested by Kotnik et al. [11]. In more detail, the same authors reported that pulsed electric fields trigger peroxidation of the membrane lipids, which causes deformations at the level of the lipid tails, thus increasing the permeability of the bilayer to water, ions, and other small molecules [12]. Depending on the electric field strength, the electroporation process can be either reversible or irreversible [13]. In the first case, the cell might be damaged but is still able to fully recover, while in the second, membrane integrity is lost, causing an extensive leakage of intracellular content and, eventually, cell death [6,14]. Based on the reversibility of the process, PEF technology can have different applications in the food industry. Irreversible electroporation at high field strengths (e.g., 10-50 kV/cm) is particularly studied as an alternative to traditional thermal processing for microbial inactivation, with the advantage of minimally altering the sensorial and nutritional characteristics of foods [1,14,15]. On the contrary, irreversible electroporation at lower field strength values (0.5-5 kV/cm) might be fruitfully exploited to enhance mass transfers during further technological processes such as drying, freezing, freeze-drying, and osmotic dehydration [8,[16][17][18].
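The field-strength ranges cited above map roughly onto distinct application regimes. A toy Python sketch of that mapping (the boundaries come from the ranges quoted in this paragraph and are indicative only; the 5-10 kV/cm gap between the two cited ranges is collapsed here for simplicity):

```python
def electroporation_regime(field_kv_per_cm: float) -> str:
    """Map a nominal field strength to the application regimes cited above.

    Indicative only: the text quotes ~10-50 kV/cm for microbial inactivation
    and ~0.5-5 kV/cm for mass-transfer enhancement; values between 5 and
    10 kV/cm are assigned to the lower regime here for simplicity.
    """
    if field_kv_per_cm >= 10.0:
        return "irreversible (microbial inactivation range)"
    if field_kv_per_cm >= 0.5:
        return "irreversible in tissue / mass-transfer enhancement range"
    return "sub-threshold"

print(electroporation_regime(30.0))  # -> irreversible (microbial inactivation range)
print(electroporation_regime(0.6))   # -> irreversible in tissue / mass-transfer enhancement range
```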
Having this in mind, the attraction towards the application of PEF in the meat industry has increased in recent years, as evidence grows on its ability to induce microstructural changes that might lead to the improvement of meat functional properties [19]. However, based on the available literature, a remarkable number of studies concerning the application of pulsed electric fields on pork and beef has been carried out over the past 20 years, while studies regarding the effect of PEF on poultry meat are still scarce. According to the last reports of the Organization for Economic Co-operation and Development (OECD), poultry is the most widely eaten type of meat worldwide [20]. With growing trends in further processing and increasing rates of broiler breast muscle abnormalities [21], meat technological properties (e.g., water holding capacity, emulsifying, and gelling properties, etc.) are gaining progressively more importance and the application of innovative technologies is actually considered as the most promising option for the improvement of such properties in poultry processing plants [1].
Considering these aspects, the knowledge concerning the feasibility of emerging technologies on chicken meat should be deepened. For this purpose, this study aimed at evaluating the implications of different PEF treatments on the main quality traits and the technological properties of chicken meat.
Materials and Methods
Two separate trials were carried out as part of the experimental design of this study. For the first experiment, a total of five chicken breasts (Pectoralis major muscle) belonging to the same batch of broilers, homogeneous in weight, sex, and age, were purchased at 24 h postmortem from a local market (Coop Italia Società Cooperativa, Cesena, Italy) and chilled at 4 °C ± 1. Breast muscles were cut to obtain both the left and right fillets. According to the scheme reported in Figure 1, with the aid of a scalpel, a total of five subsamples of 2 cm³ and weighing about 10 g were cut following the fiber direction from the cranial position of each fillet (n = 10 subsamples/breast muscle), thus obtaining a total of 50 meat subsamples. Samples thus obtained were randomly divided into five experimental groups according to PEF treatments (Table 1; the total pulse number was calculated by multiplying frequency, i.e., number of pulses/s, and total treatment time [16]).
Treatment parameters were selected on the basis of previous studies that have reported that the electroporation process in animal tissues occurs when an electric field strength equal to or higher than 0.6 kV/cm is applied [4,19,22].
PEF treatments were performed using a laboratory-scale PEF system (Mod. S-P7500, Alintel, Pieve di Cento, Italy) delivering a maximum output current and voltage of 60 A and 8 kV, respectively. The generator provides monopolar rectangular-shaped pulses with adjustable pulse width (5-20 µs), pulse frequency (50-500 Hz), and total treatment time (1-600 s). For this experiment, meat samples (dimensions of 2 × 2 × 2 cm, weighing about 10 g) were placed in the treatment chamber (5 cm length × 5 cm width × 5 cm height), which consisted of two parallel stainless-steel electrodes (3 mm thick) with a 4.7 cm fixed gap. Output voltage and current were monitored using a PC-oscilloscope (Picoscope 2204a, Pico Technology, Cambridgeshire, UK). Meat samples were treated at room temperature in tap water, with an initial electrical conductivity of 246 ± 2 µS/cm at 18 °C, measured using an EC-meter (Mod. Basic 30, Crison, Barcelona, Spain), while control samples were not subjected to any treatment or exposed to water. Since it is generally held that the electric field distribution strongly depends upon the orientation of the electric field with respect to the muscle fibers [23], meat samples were placed between the electrodes in such a manner that the electric field was delivered perpendicularly to the muscle fibers to facilitate electroporation. During PEF treatments, the frequency (50 Hz), the distance between electrodes, the pulse width (20 µs), and the sample:water mass ratio (w/w, 1:10) were kept constant, while the electric field strength and the treatment time were modulated according to Table 1. Temperature changes due to PEF treatments, measured with a temperature probe (mod. TESTO 445, Testo GmbH & Co, Milan, Italy), were negligible.
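For orientation, the nominal treatment parameters follow directly from the generator settings and chamber geometry just described. The helpers below are an illustrative sketch, not part of the paper's protocol; the study's total specific energy input is computed per ref. [16], which is not reproduced here.

```python
# Hedged sketch: derive nominal PEF parameters from the setup described
# in the text. The helper names and example values are illustrative.

def field_strength_kv_cm(voltage_kv: float, gap_cm: float) -> float:
    """Nominal field strength E = U / d for a parallel-plate chamber."""
    return voltage_kv / gap_cm

def pulse_count(frequency_hz: float, treatment_time_s: float) -> float:
    """Number of pulses = frequency (pulses/s) x total treatment time (s),
    as stated in the table footnotes."""
    return frequency_hz * treatment_time_s

# With the 4.7 cm electrode gap used here, 0.6 kV/cm needs roughly 2.8 kV,
# well within the generator's 8 kV ceiling:
voltage_needed = 0.6 * 4.7                # ~2.82 kV
max_field = field_strength_kv_cm(8.0, 4.7)  # ~1.70 kV/cm at full output
pulses = pulse_count(50, 3)               # 50 Hz for 3 s -> 150 pulses
```

At full output the chamber can reach about 1.7 kV/cm, which is consistent with the 1.2 kV/cm treatments being feasible with this geometry.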
Trial 2
As for the second trial, another batch of five chicken breasts (Pectoralis major muscle), homogeneous in weight, sex, and age, were purchased at 24 h postmortem from the same local market and collected as described for experiment 1. Meat samples were then randomly divided into five experimental groups according to PEF treatments (Table 2) and handled as described for the first experiment. In Table 2, the total specific energy input was calculated according to [16], and the number of pulses by multiplying frequency (i.e., number of pulses/s) and total treatment time.
Analytic Determinations
pH

The pH of meat samples was assessed following the procedure proposed by Jeacocke [24]. In detail, 2.5 g of manually minced meat were homogenized for 30 s at 13,500 rpm with an UltraTurrax T25 Basic (IKA-Werke, Staufen im Breisgau, Germany) in 25 mL of a 5 mM sodium iodoacetate and 150 mM potassium chloride solution (pH 7.0). The pH of the homogenate was then assessed with a pH meter (mod. 3510, Jenway, Staffordshire, UK) previously calibrated at pH 4.0 and 7.0. The pH measurements of the PEF-treated samples were carried out immediately after the treatments.
Color
Color (CIE L* = lightness, a* = redness, and b* = yellowness) of meat samples was assessed through a reflectance colorimeter (mod. Chroma Meter CR-400, Minolta, Milan, Italy) equipped with illuminant source C and previously calibrated with a reference color standard ceramic tile. Due to the small dimensions of the meat samples, color measurements were performed as a single measurement before and immediately after the treatment, and data were expressed as the difference in lightness, redness, and yellowness (∆L*, ∆a*, ∆b*, respectively) following the PEF process.
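The before/after differences described above reduce to per-channel subtractions. The sketch below also computes the overall CIE76 color difference (ΔE*ab), which is not used in the paper but is a common yardstick for whether a color change is visible to the eye; the Lab readings in the example are hypothetical.

```python
import math

def color_deltas(before, after):
    """Per-channel CIE Lab differences (dL*, da*, db*) after treatment."""
    return tuple(a - b for a, b in zip(after, before))

def delta_e(before, after):
    """Overall CIE76 color difference; values below roughly 2-3 are
    generally considered hard to perceive by eye (assumption, not from
    the paper)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(after, before)))

# Hypothetical Lab readings where only lightness shifts by 3.38 units,
# matching the largest dL* magnitude later reported for group T3:
before = (55.0, 2.0, 10.0)
after = (58.38, 2.0, 10.0)
dL, da, db = color_deltas(before, after)
total_diff = delta_e(before, after)
```

When only one channel changes, ΔE equals that channel's delta, so a ΔL* of about 3.4 sits near the threshold of what an observer would notice.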
Drip Loss
The term "drip loss" refers to the fluid released by raw meat through passive exudation during refrigerated storage [25]. Meat samples of homogeneous weight (10 g) and dimensions (2 cm³) were individually weighed, placed in plastic boxes, and stored at 4 °C for 96 h. Then, each sample was blotted free of any superficial liquid with a paper towel and weighed to assess drip loss, calculated according to the formula:

Drip loss (%) = [(initial weight − weight after storage)/initial weight] × 100

Drip loss was assessed by repeatedly measuring the same sample after 24, 48, 72, and 96 h of refrigerated storage.
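The formula above translates directly into a one-line helper. The weights in the example are hypothetical, chosen so the result matches the order of magnitude reported later (about 1.85% after 24 h for the best-performing group).

```python
def drip_loss_pct(initial_weight_g: float, weight_after_g: float) -> float:
    """Drip loss (%) = [(initial - after) / initial] * 100, as defined above."""
    return (initial_weight_g - weight_after_g) / initial_weight_g * 100.0

# A 10 g sample weighing 9.815 g after 24 h at 4 degC (illustrative values):
loss_24h = drip_loss_pct(10.0, 9.815)   # ~1.85 %
```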
Protein Solubility
Protein solubility was measured according to Warner et al. [26] with slight modifications. In detail, sarcoplasmic protein solubility was determined by homogenizing 1 g of meat in 10 mL of ice-cold 25 mM potassium phosphate buffer (pH 7.2) for 30 s at 13,500 rpm with an UltraTurrax T25 Basic (IKA-Werke, Staufen im Breisgau, Germany). Homogenates were kept at 4 °C for 20 h and then centrifuged at 2600× g for 30 min. The protein concentration of the supernatant was measured by the Bradford assay, using bovine serum albumin as the standard. Total protein solubility (myofibrillar + sarcoplasmic) was similarly determined by homogenizing the same meat aliquot in 1.1 M KI and 0.1 M potassium phosphate buffer (pH 7.2). Myofibrillar protein solubility was then calculated as the difference between total and sarcoplasmic protein solubility.
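The solubility bookkeeping described above reduces to a simple mass balance, with the myofibrillar fraction obtained by difference. The Bradford concentrations in the sketch are hypothetical illustration values, not data from the study.

```python
def solubility_mg_per_g(protein_conc_mg_ml: float, extract_volume_ml: float,
                        meat_mass_g: float) -> float:
    """Soluble protein recovered in the extract, expressed per g of meat."""
    return protein_conc_mg_ml * extract_volume_ml / meat_mass_g

# 1 g of meat homogenized in 10 mL of buffer, as in the protocol above;
# the Bradford readings (mg/mL) are made-up example values:
sarcoplasmic = solubility_mg_per_g(4.0, 10.0, 1.0)    # 40 mg/g (low-salt buffer)
total = solubility_mg_per_g(15.0, 10.0, 1.0)          # 150 mg/g (high-salt buffer)
myofibrillar = total - sarcoplasmic                   # 110 mg/g, by difference
```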
Protein Denaturation Enthalpy
Total protein denaturation enthalpy of control and treated samples was assessed through a differential scanning calorimeter (DSC) DSC Q20 (TA Instruments, Wetzlar, Germany), equipped with a low-temperature cooling unit Intercooler II (Perkin-Elmer Corporation, Waltham, MA, USA). Temperature and melting enthalpy calibrations were performed with ion-exchanged distilled water (mp 0.0 °C) and indium (mp 156.6 °C), while the heat flow was calibrated using the heat of fusion of indium (∆H = 28.71 J/g).
For the calibration, the same heating rate and dry nitrogen gas flux of 50 mL/min used for the analysis were applied. According to Baldi et al. [27], each muscle sample was weighed (about 25 mg) into a 50-µL aluminum pan, sealed hermetically, and then loaded into the DSC instrument at room temperature. The heating rate of the DSC scans was 5 °C/min over a range of 20-90 °C. Empty aluminum pans were used as reference and for baseline corrections. Three replications for each meat sample were performed, and the results were elaborated with PeakFit software (SeaSolve Software Inc., Framingham, MA, USA).
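As a rough numerical companion to the DSC protocol, the helper below integrates a baseline-corrected heat-flow peak over temperature and converts it to a specific enthalpy, dividing by the heating rate and sample mass (ΔH = ∫P dT / (β·m)). The triangular peak in the example is synthetic; the study's actual thermograms were processed with PeakFit.

```python
import numpy as np

def denaturation_enthalpy_j_per_g(temps_c, heat_flow_mw, sample_mass_mg,
                                  heating_rate_c_per_min):
    """Integrate a baseline-corrected endothermic peak (heat flow in mW vs.
    temperature in degC) and convert to J/g of sample:
    dH = integral(P dt) / m = integral(P dT) / (beta * m), beta in degC/s."""
    temps = np.asarray(temps_c, dtype=float)
    flow = np.asarray(heat_flow_mw, dtype=float)
    beta = heating_rate_c_per_min / 60.0                      # degC/s
    # Trapezoidal area under the peak, in mW * degC:
    area = float(np.sum((flow[1:] + flow[:-1]) / 2.0 * np.diff(temps)))
    energy_mj = area / beta                                   # mW * s = mJ
    return energy_mj / sample_mass_mg                         # mJ/mg == J/g

# Synthetic triangular peak (60-80 degC, 0.5 mW apex) for a 25 mg sample
# scanned at 5 degC/min, the settings described above:
dh = denaturation_enthalpy_j_per_g([60.0, 70.0, 80.0], [0.0, 0.5, 0.0],
                                   25.0, 5.0)
```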
Statistical Analysis
Data were checked for normality by means of the Shapiro-Wilk test (SAS Institute, Cary, NC, USA), and variables characterized by a non-normal distribution were properly transformed. Then, data obtained from experiments 1 and 2 were separately analyzed using the one-way ANOVA option of the general linear model (GLM) procedure of SAS software (SAS Institute, Cary, NC, USA), considering PEF treatment as the main effect. Means were separated using Tukey's honestly significant difference test of the GLM procedure and considered significant at p < 0.05. Subsequently, differences among experimental groups were explored through preplanned orthogonal contrasts, which involve linear combinations of the group mean vectors rather than of the individual variables. In more detail, multiple orthogonal contrast analysis was performed to estimate the main effects of the treatment (control vs. treated), the electric field strength (low vs. high), and the number of pulses (150 vs. 300 and 450 vs. 600 for experiments 1 and 2, respectively) on the qualitative and technological traits of chicken meat.

Results

Table 3 shows the results concerning the effects of PEF treatment on pH, color variations, and water holding capacity of chicken meat samples. The application of pulsed electric fields did not exert any effect on meat pH, regardless of both the electric field strength and the number of pulses. Regarding the color parameters, while both a* and b* were not affected by the PEF treatments, the lightness of meat (L*) was significantly modified, with T3 showing the greatest variation (3.38) in absolute terms. Intriguingly, the application of low electric field strengths (0.6 kV/cm) was associated with a significant increase in the lightness of meat compared to higher electric field strengths (1.2 kV/cm) (p < 0.01), while the number of pulses did not exert any significant effect.

Table 3. Average pH, color variation (∆L*, ∆a*, ∆b*), and water holding capacity (drip loss, %) of chicken meat samples subjected or not to pulsed electric field treatments. Data are expressed as least square means ± standard deviation. a-d = means lacking a common letter significantly differ (p < 0.05). * = p < 0.05; ** = p < 0.01; *** = p < 0.001; n.s. = not significant.

The most promising result concerns meat water holding capacity, measured through the drip loss analysis. Aside from the T1 group, meat samples subjected to PEF treatments showed significantly lower drip losses compared to control after 24 h of refrigerated storage. Among the experimental groups, T4, characterized by the highest total specific energy input (1.19 kJ/kg, Table 1), exhibited the lowest values (1.85%). Overall, the analysis of preplanned orthogonal contrasts evidenced that the application of PEF significantly reduced the loss of liquids from meat after 24 h of refrigerated storage (p < 0.001). In detail, both the electric field strength and the number of pulses exerted a remarkable effect on drip loss (p < 0.05 and 0.001, respectively); however, increasing the number of pulses (from 150 to 300) reduced drip losses to a greater extent than increasing the electric field strength (from 0.6 to 1.2 kV/cm). While no difference was detected at 48, 72, and 96 h of storage, total drip loss was found to be significantly different among the experimental groups (p < 0.05), as a direct consequence of the results obtained after the first sampling time. The application of PEF significantly reduced the total drip loss of meat by 28.5% compared to untreated samples (5.22 vs. 7.30%, p < 0.01). Intriguingly, total drip loss was significantly affected by the number of pulses applied (p < 0.05), rather than by the electric field strength.
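The preplanned orthogonal contrasts described in the statistical analysis can be sketched numerically. The coefficient vectors below mirror the three comparisons of experiment 1 (control vs. treated, 0.6 vs. 1.2 kV/cm, 150 vs. 300 pulses); the group means used in the example are hypothetical illustration values, not data from the study, whose actual analysis was run in SAS.

```python
import numpy as np

# Group order: control, T1 (0.6 kV/cm, 150 p), T2 (0.6, 300),
#              T3 (1.2, 150), T4 (1.2, 300).
contrasts = {
    "control_vs_treated": np.array([4, -1, -1, -1, -1]),
    "low_vs_high_field":  np.array([0,  1,  1, -1, -1]),
    "150_vs_300_pulses":  np.array([0,  1, -1,  1, -1]),
}

def contrast_estimate(group_means, coeffs):
    """L = sum(c_i * mean_i); each coefficient vector sums to zero."""
    return float(np.dot(coeffs, np.asarray(group_means, dtype=float)))

# With equal group sizes the three contrasts are mutually orthogonal:
for name_a, a in contrasts.items():
    for name_b, b in contrasts.items():
        if name_a != name_b:
            assert np.dot(a, b) == 0

# Hypothetical drip-loss group means (%) for illustration only:
means = [2.60, 2.40, 2.00, 2.30, 1.85]
l_treated = contrast_estimate(means, contrasts["control_vs_treated"])
# l_treated > 0 here means the treated groups lose less liquid on average.
```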
Experiment 1
Results concerning protein solubility and total denaturation enthalpy of chicken meat subjected to PEF treatments are shown in Table 4. The application of pulsed electric fields did not exert any effect on the abovementioned parameters, regardless of either the electric field strength or the number of pulses applied.

Table 4. Average protein solubility (mg/g) and total protein denaturation enthalpy (∆H, J/g) of chicken meat samples subjected or not to pulsed electric field treatments. Data are expressed as least square means ± standard deviation. n.s. = not significant.
Experiment 2
Since the outcomes obtained from experiment 1 suggested that the number of pulses exerts a predominant effect on meat technological properties rather than the electric field strength, a second trial was carried out to extend the experimental design by further increasing the number of pulses while keeping the levels of electric field strength fixed. Table 5 shows the results concerning pH, color variations, and water holding capacity of samples as affected by PEF treatments. Similarly to what was observed in experiment 1, the application of PEF did not exert any effect on meat pH, regardless of both the electric field strength and the number of pulses applied. Regarding color parameters, meat redness (a*) was not modified after any of the selected treatments, while both the L* and b* parameters varied significantly after the application of PEF (p < 0.05 and 0.001, respectively). In more detail, meat samples subjected to T7 and T8 showed the most relevant variations of the L* parameter (+2.60 and +2.34, respectively). As for b*, T5 was associated with a significant decrease in the yellowness of meat (−1.79), while no differences were detected among the other experimental groups, which shared increased b* values. The analysis of preplanned orthogonal contrasts evidenced that a higher number of pulses (600 vs. 450) led to a remarkable increase in both the L* and b* parameters (p < 0.01 and 0.001, respectively), while the application of low electric field strengths (0.6 kV/cm) was associated with a significant decrease in b* values (p < 0.01).
In agreement with experiment 1, the application of PEF significantly affected the water holding capacity of meat samples after 24 h of refrigerated storage (p < 0.001). In more detail, meat samples subjected to T7 and T8 showed significantly lower drip loss values compared to the other experimental groups (p < 0.05), while T5 and T6 exhibited average values analogous to those detected for untreated samples. Overall, the preplanned orthogonal contrast analysis showed that both the electric field strength and the number of pulses exerted significant effects on the drip loss of meat (p < 0.05 and 0.001, respectively). However, increasing the number of pulses (from 450 to 600) reduced drip loss values by almost twice as much as increasing the electric field strength (from 0.6 to 1.2 kV/cm) (31.0% vs. 15.4%). While no differences were detected at 48, 72, and 96 h of storage, total drip loss was found to be significantly different among the experimental groups (p < 0.01), as a consequence of the results detected at 24 h. Overall, PEF-treated samples exhibited an 11.5% reduction in drip loss values (p < 0.05) compared to their untreated counterparts. In more detail, the application of a higher number of pulses (600 instead of 450) produced a significant reduction (p < 0.01) in the total amount of liquid lost from meat during refrigerated storage.
Results concerning protein solubility and total protein denaturation enthalpy of chicken meat subjected to PEF treatments are shown in Table 6. The application of PEF treatments did not exert any effect on the solubility of proteins, with the only exception of the sarcoplasmic fraction, whose solubility was significantly higher in samples subjected to T8 (p < 0.05). Furthermore, the analysis of preplanned contrasts showed that the application of 600 pulses led to a significant increase in the solubility of sarcoplasmic proteins, which improved by about 12% compared with 450 pulses (p < 0.01).
Regarding total protein denaturation enthalpy, the application of pulsed electric fields did not exert any significant effect, regardless of either the electric field strength or the number of pulses applied.

Table 5. Average pH, color variations (∆L*, ∆a*, ∆b*), and water holding capacity (drip loss, %) of chicken meat samples subjected or not to pulsed electric field treatments. Data are expressed as least square means ± standard deviation. a,b = means lacking a common letter significantly differ (p < 0.05). * = p < 0.05; ** = p < 0.01; *** = p < 0.001; n.s. = not significant.

Table 6. Average protein solubility (mg/g) and total protein denaturation enthalpy (∆H, J/g) of chicken meat samples subjected or not to pulsed electric field treatments. Data are expressed as least square means ± standard deviation. a,b = means lacking a common letter significantly differ (p < 0.05). * = p < 0.05; ** = p < 0.01; n.s. = not significant.
Discussion
The application of PEF is drawing the interest of the meat industry thanks to its ability to accelerate mass transfer during drying and brining, improve tenderization, enhance the micro-diffusion of water-binding agents, and increase meat safety [4]. However, the use of PEF to improve the technological properties of chicken meat is still an unexplored field. Overall, the data evidenced that the PEF treatments tested within the experiments do not affect meat pH, regardless of the number of pulses and the electric field strength applied. This result corroborates the findings of Khan et al. [28], in which the application of pulsed electric fields at both 0.30 and 1.25 kV/cm did not change the pH of chicken breast muscles. Similar results were also observed on beef [22,29]. However, the application of PEF with a high total specific energy input (>150 kJ/kg) was found to be associated with a significant decrease in muscular pH values, due to a modification in meat conductivity caused by the leakage of intracellular liquids following cellular electroporation [30]. Considering that the highest total energy input achieved within this study was 2.42 kJ/kg (Tables 1 and 2), it is reasonable to assume that the treatments performed in the experiments were not strong enough to induce cell breakdown and thus the leakage of cellular fluids.
Only a few studies are available in the literature concerning the effect of PEF on meat color, which is known to be the main factor guiding customers' purchasing choices. A recent study carried out in the U.S. reported that consumers have a clear preference for lighter colored poultry meat over darker meat, especially for breast meat [31]. The effect of PEF on meat color strongly depends upon the magnitude of the temperature increase during the treatment, which might alter the redox state of myoglobin [19]. Indeed, high-intensity treatments coupled with an elevated number of pulses may increase the samples' temperature, thus promoting myoglobin oxidation and meat discoloration [5]. Data from the available literature showed that PEF treatments with a high total specific energy input (>50 kJ/kg) caused a significant decrease in meat lightness [32], while milder settings did not affect the color parameters of either beef or turkey meat [33,34]. Although low-intensity treatments (<5 kJ/kg) were performed within this study, PEF slightly changed meat lightness and yellowness, while redness was not affected. However, the trends observed for meat color variations were inconsistent between the two experiments and are complex to interpret. Indeed, while in the first experiment a lower electric field strength and number of pulses were associated with an increase in meat lightness, in the second trial they were linked to a decrease in both the lightness and yellowness of samples. Considering the low total specific energy input of the treatments (see Tables 1 and 2) and the poor myoglobin content of chicken meat [35], it is reasonable to assume that the color variations were likely not due to an alteration of meat pigments caused by an increase in sample temperature during the process, but might rather be linked to a possible redistribution of water within cellular compartments after the PEF application.
Indeed, PEF might have favored the movement of water within cellular spaces, thus leading to a change in the refractive properties of the tissue [36]. Although further research should be performed to test this hypothesis, it is noteworthy that the meat color variations induced by PEF treatments in this study were negligible and probably not detectable by the human eye.
In muscular tissue, water is distributed in several compartments and can be present either as bound or free water: the first is tightly bound to meat proteins, while the second is held between myofilaments and myofibrils, as well as outside the fibers [37]. There is growing evidence concerning the ability of PEF to modify the microstructure of meat through the formation of pores, thus suggesting its potential to influence the water holding capacity of meat by facilitating water movements [5]. The overall outcomes of this study indicated that low-intensity PEF treatments can significantly improve the water holding capacity of chicken breast fillets, reducing the loss of liquids from meat by 13% up to 28.5% during 4 days of refrigerated storage. Moreover, in both experiments, doubling the number of pulses reduced drip losses to a greater extent than increasing the electric field strength. There are conflicting reports in the literature concerning the role of PEF in the water holding properties of meat; the divergences found among the studies are ascribable to the intrinsic properties of the muscle (i.e., amount of fat and connective tissue, pH, fiber diameter, muscle contractile state), the dimension and initial moisture of the samples, as well as the PEF processing conditions (i.e., total specific energy input, number of pulses, frequency, etc.) [19]. It is generally believed that high-intensity PEF treatments leading to irreversible electroporation (i.e., irreversible electrical breakdown of cell membranes) are associated with an increase of drip loss from meat due to a number of mechanisms including protein denaturation, myofibril fragmentation, as well as cell rupture and leakage of cell fluids into extracellular spaces [38].
Considering the use of low total specific energy inputs and short exposure times, it can be hypothesized that the PEF treatments performed within this study induced reversible cell electroporation, meaning that the cell membranes momentarily modified their permeability without losing their integrity [4,7]. Within this scenario, several mechanisms might explain the remarkably reduced drip losses of meat following low-intensity PEF treatment. The first hypothesis deals with a possible re-compartmentalization of moisture following cellular electroporation that might have facilitated water movements between extra- and intracellular compartments [1,5]. Indeed, the exposure of skeletal muscle tissue to a sufficiently high external electric field likely caused a rapid increase in membrane permeability (i.e., membrane electroporation) due to the formation of temporary pores in the phospholipid bilayer of the cell membranes [39]. These pores are described as "aqueous" since they are particularly hydrophilic; the application of the electric field exposes the polar heads of the membrane phospholipids, allowing greater interaction with water molecules [7]. Most theoretical works on electroporation suggest that the pores reseal within milliseconds up to a few minutes after the field is removed [39]. Given these considerations, a water re-compartmentalization following the temporary changes in membrane permeability can be hypothesized, which might have favored the transition of water from extra- to intramyofibrillar spaces within the skeletal muscle tissue. Water molecules that penetrated the lipid bilayer might thus be trapped in the pores, leading to reduced water loss from meat. The second hypothesis is related to the potential of PEF to change the polarity of amino acid side chains, which is responsible for their hydrophilic or hydrophobic behavior [40]. In 1999, Yeom et al.
[41] put forward the theory that high-intensity PEF treatments are able to modify the secondary and tertiary structures of proteins by increasing the content of β-sheets to the detriment of α-helices. This theory was further validated in 2008 by Zhao and Yang [3], who suggested that, as a direct consequence of these conformational changes, PEF might influence protein hydrophobicity. Later on, the ability of PEF to break covalent bonds and generate new kinds of interactions within the peptide chains was reported by Poojary et al. [15]. Within this context, we could hypothesize that the low-intensity PEF treatments performed within this study induced conformational changes in proteins, which likely modified the attraction/repulsion interactions between polar and apolar amino acids, thereby enhancing the interactions between proteins and water molecules. However, based on the available literature and considering the different PEF intensities applied, it is not possible to establish which of the two abovementioned mechanisms, taken individually or jointly, is responsible for the increased water holding capacity of meat following PEF. Thus, further studies must be carried out to investigate in depth the distribution of water in the muscle after the application of PEF. From a technological point of view, the potential of PEF to reduce water loss from meat during refrigerated storage might reduce the presence of undesired liquid inside meat packages, improving consumer acceptance.
Within this study, the application of PEF did not exert any effect on the solubility of the chicken meat protein fractions. Results in the literature concerning the role of PEF in muscular protein solubility are scarce; however, those available highlighted that the functional properties of proteins are drastically impaired as the intensity of PEF and the treatment time increase [42]. Accordingly, a recent study performed on proteins isolated from pale, soft, and exudative (PSE) chicken meat reported that the solubility of the myofibrillar fraction improved with the application of 18 kV/cm, while a further increase in the electric field strength was associated with a worsening of protein functionality, due to the occurrence of protein denaturation and aggregation [43]. Thus, since myofibrillar proteins are of great importance for meat technological properties (e.g., water holding capacity) [44], it is essential to modulate the process parameters in order to avoid detrimental effects on meat proteins and their ability to hold water molecules. Considering the low electric field intensities applied within this study (<2.5 kV/cm), it is reasonable that PEF did not negatively affect protein solubility, thus preserving protein functional properties. Moreover, the DSC analysis evidenced that the application of PEF did not trigger protein denaturation processes, thus corroborating the outcomes concerning protein solubility, which is generally considered an indicator of the protein denaturation level. However, considering the explorative approach of this study, further research should be carried out to validate these results and investigate the effective potential of this emerging technology for application in the poultry field.
Conclusions
This explorative study broadens the knowledge about the role of pulsed electric fields in the technological properties of chicken meat. The overall outcomes suggest that the PEF treatments performed within the experiments improve the water holding capacity of meat without damaging protein integrity and functionality. Based on these results, the application of PEF might represent a fruitful opportunity for the poultry industry to reduce the presence of undesired liquid inside meat packages, improving consumer acceptance and meat yields. Further investigations must be carried out to understand in depth the mechanisms involved in the enhancement of water holding capacity after the application of PEF and to establish whether this characteristic is also preserved during subsequent meat processing and cooking operations.
Data Availability Statement:
The data presented in this study are available on request from the corresponding authors.
Conflicts of Interest:
The authors declare no conflict of interest.
Skin Cancer Diagnosis by Using Fuzzy Logic and GLCM
Image processing is one of the most powerful and popular computer science technologies in use today, especially in the medical sciences. It is commonly used to diagnose and detect many kinds of cancer at an early stage, such as skin cancer. In this paper, two techniques have been used to detect skin cancer: fuzzy logic and the GLCM (Gray Level Co-occurrence Matrix), which together can distinguish cancerous from non-cancerous skin. The classification is based on feature values extracted from the GLCM; these features are Contrast, Correlation, Energy, Entropy, and Homogeneity. Our contribution is a new two-phase diagnosis algorithm, the first phase identifying the normal situation and the second skin cancer. After the design and implementation of the algorithm, the results were good, as shown in the implementation section.
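To make the feature set concrete, the sketch below computes the five GLCM features named above from a small grayscale patch using plain NumPy. It is an illustrative implementation under assumed settings (a single horizontal offset and 8 gray levels); the paper does not specify its offsets, quantization, or the fuzzy classification stage, so those concrete choices are assumptions, not the authors' pipeline.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=8):
    """Build a normalized gray-level co-occurrence matrix for one pixel
    offset and return the texture features used for classification:
    contrast, correlation, energy, homogeneity, and entropy."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                     # joint probabilities p(i, j)

    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * glcm).sum(), (j * glcm).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * glcm).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * glcm).sum())
    nz = glcm[glcm > 0]                    # skip empty cells for the log
    return {
        "contrast": ((i - j) ** 2 * glcm).sum(),
        "correlation": (((i - mu_i) * (j - mu_j) * glcm).sum()
                        / (sd_i * sd_j) if sd_i > 0 and sd_j > 0 else 1.0),
        "energy": (glcm ** 2).sum(),
        "homogeneity": (glcm / (1.0 + np.abs(i - j))).sum(),
        "entropy": -(nz * np.log2(nz)).sum(),
    }

# A uniform patch collapses the matrix to one cell; a checkerboard puts
# all mass off-diagonal:
flat = glcm_features(np.zeros((8, 8), dtype=int))
checker = glcm_features(np.indices((8, 8)).sum(axis=0) % 2)
```

A uniform patch yields zero contrast and maximal energy, while the checkerboard yields contrast 1 and correlation −1; it is this kind of separation between smooth and textured regions that the extracted features exploit.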
Introduction
Skin cancer, the most common cancer in humans, begins in the skin. Some cancers can also begin in other organs and spread to the skin, but these malignancies are not considered skin cancers. The different types of skin cancer are generally classified as malignant melanoma and non-melanoma skin cancer (NMSC), the latter including basal cell carcinoma and squamous cell carcinoma as the major subtypes. Malignant melanoma, as one of the types of skin cancer, has one of the highest rates of increase among all cancers. It derives from epidermal melanocytes and can arise in any tissue that contains these cells; it typically appears on the lower limbs in females and on the back in males [1]. As it occurs on the skin surface, it is detectable by visual inspection. The clinical appearance varies with the type and site of the tumor. Much effort has been made over the last two decades to improve the clinical diagnosis of melanoma, including alternative imaging technologies such as dermoscopy and several diagnostic algorithms. Dermoscopy is a non-invasive diagnostic technique for the early diagnosis of melanoma and the assessment of other pigmented and non-pigmented lesions of the skin that are not well observed with the unaided eye [2]. Polarized dermatoscopes do not need to be in contact with the skin and can quickly scan numerous lesions. As a rule, the polarized view is as good as the fluid immersion technique and may be ideal for assessing vessels, although it may be useful to wipe a scaly lesion with oil to improve the view; surface scale may also be removed by repeated tape stripping. Dermoscopy can likewise automate the investigation and thereby reduce the amount of dull and repetitive tasks to be performed by doctors. The rest of this paper is organized as follows: Section 2 describes related work on skin cancer image classification.
Section 3 gives an overview of the skin. Section 4 explains the components of the proposed system to assist in skin cancer detection and prevention. Section 5 covers the fuzzy logic component, and Section 6 reports the experimental results.
Damilola [3] proposed automatic diagnosis of skin cancer using a well-defined segmentation and classification technique. Arivazhagan [4] developed a texture-analysis-based method for recognizing human skin diseases, in which diseases are classified by extracting independent components. Sparavigna and Marazzato [5] proposed a texture-based method in which differences in the color and coarseness of skin are quantitatively evaluated using a statistical approach to pattern recognition.
[6] used morphological operators for segmentation and wavelet analysis for feature extraction, which resulted in an improved melanoma diagnosis system. Alcon et al. [7] used images of pigmented skin lesions acquired with a consumer digital camera for automatic melanoma diagnosis, achieving an accuracy of 86%, a sensitivity of 94%, and a specificity of 68%. Odeh [8] presented a diagnosis system based on a neuro-fuzzy inference algorithm for three different types of skin lesions. Ogorzalek et al. [9] proposed computer-aided enhanced diagnostic tools based on non-standard image decomposition. Blackledge [10] recognized and classified digital images using texture-based characterization, and also described a decision engine based on fuzzy logic and membership-function theory. Patwardhan et al. [11] used a wavelet-transform-based classification system for skin lesion images that exploits a semantic representation of the spatial frequency information contained in the images.
SKIN: AN OVERVIEW
Skin is one of the most remarkable organs of the human body. Its main function is to protect the body from infection; it also protects the body from ultraviolet (UV) radiation and serves as a storehouse of water and fat. The skin consists of several layers, the main ones being the epidermis and the dermis.
Epidermis: The epidermis is the outermost layer of the skin. Its primary function is to protect the human body and provide an effective barrier against the outside world. The thickness of the epidermis varies in different types of skin.
Dermis: The dermis is the middle layer of skin, standing between the epidermis and the hypodermis. It is composed of connective tissue, cells, and ground substance, and can contain blood vessels, sweat glands, fat, and hair follicles. It ranges from 1-4 mm in thickness and is usually much thicker than the epidermis.
Sweat glands: Sweat glands regulate temperature and remove waste by secreting water, sodium salts, and nitrogenous waste (such as urea) onto the skin surface.
Fat: Fat is a macronutrient for the body, also called triglycerides; fats are solid at room temperature.
Hair follicle: The hair follicle is the part of the skin from which hair grows. There are hair follicles all over the skin, except on the lips, the palms of the hands, and the soles of the feet.
Connective tissue: Connective tissue is one of the four types of biological tissue. It supports, connects, or separates different types of tissues and organs in the body [12].
Hypodermis: The hypodermis is the innermost layer of the skin. It lies beneath the dermis and is attached to it. It is essentially composed of a type of cell specialized in accumulating and storing fat, known as adipocytes, and it acts as an energy reserve.
Skin cancer and its types: Skin cancer is an abnormal growth of skin cells.
There are several types of skin cancer; the most common are melanoma and non-melanoma, the latter comprising basal cell skin cancer and squamous cell skin cancer.
Melanoma: Melanoma usually occurs on the skin. It starts in melanocytes, can grow within a short time, and can spread to many parts of the body.
Basal cell skin cancer: Basal cell carcinoma is very slow growing and does not spread to other parts of the body; with proper treatment, basal cell cancers are completely curable.
PROPOSED METHOD
In this project we propose an algorithm for diagnosing normal versus cancerous skin. The algorithm consists of six steps, runs in two stages (a training stage and a testing stage), and gave good results. All images must share the same properties and environment so that the results are comparable across samples; therefore every image is resized and converted to gray level. In this step the color image is converted into a gray image before applying the image-processing tools, since a color image carries much more information than a gray image and would be more difficult to process, while our method needs non-color images, as shown in Figure 3.
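As a rough sketch of this preprocessing step (the paper does not specify its resize routine or grayscale conversion; the standard luminance weights and a nearest-neighbour resize below are common defaults, not taken from the paper):

```python
import numpy as np

def to_gray(rgb):
    """Convert an RGB image (H, W, 3) to gray level using the standard
    luminance weights (an assumed stand-in for a library conversion)."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize so every sample has the same
    dimensions before feature extraction."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# Toy 4x4 RGB image normalised to a common 2x2 gray patch.
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
gray = resize_nearest(to_gray(rgb), 2, 2)
```

In a real pipeline the target size would be fixed once for the whole dataset so that GLCM statistics are computed over comparable image grids.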
Results
We used two groups of images, one for the training stage and the other for the testing stage. We found that the GLCM analysis step produces many features, but not all of them work in the classification step; only nine of them are useful for classification (sum, standard deviation, variance, median, maximum, minimum, skewness, mean, and entropy). The ranges of these features are collected as shown in Table 1, and the output ranges are shown in Table 2. The input ranges are typically adjusted and can serve as input for classification, as shown in Figure 10; the output ranges are shown in Figure 11.
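The GLCM computation and some of the derived texture features can be sketched from first principles as follows (a from-scratch illustration; the paper's exact pixel offsets, quantization levels, and the feature ranges of Tables 1 and 2 are not reproduced here):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalised to probabilities (a real system might instead use
    skimage.feature.graycomatrix)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy, homogeneity and entropy of a GLCM --
    four of the texture descriptors named in the paper."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return contrast, energy, homogeneity, entropy

# Toy 4-level image: features feed the classification stage.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = glcm_features(glcm(img, levels=4))
```

In practice several offsets (horizontal, vertical, diagonal) are accumulated, and the resulting feature vectors from the training images define the value ranges used later for classification.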
Conclusion
In this project we used the GLCM for image analysis and feature extraction, and we found nine features (sum, mean, standard deviation, variance, median, maximum, minimum, skewness, and entropy) that work for classification and can easily diagnose normal versus abnormal skin from the image. We tried several other analysis methods to obtain features for classification, such as the DWT (Discrete Wavelet Transform) and the DCT (Discrete Cosine Transform), but they did not give good features, because the two cases (normal and cancer) are very close and similar in color, background skin texture, tumor shape, and so on. However, we found that if the background is cut away, the tumor shape alone is asymmetric, and after applying the GLCM we obtained features that can separate normal from abnormal. Second, the segmentation step plays a crucial role in obtaining the features: we create a mask with only two values (0 and 1, black and white) by thresholding, in order to cut the tumor area from the background. For diagnosis we used fuzzy logic, because both the input and the output are ranges rather than single values.
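A minimal sketch of the thresholding mask and a two-class fuzzy decision of the kind described above (the threshold value, membership shapes, and feature ranges are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

def threshold_mask(gray, t):
    """Binary (0/1) mask separating the lesion from the background,
    as described in the segmentation step."""
    return (gray > t).astype(np.uint8)

def tri(x, a, b, c):
    """Triangular fuzzy membership function (illustrative; the
    paper's actual membership shapes are not given)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def diagnose(entropy):
    """Toy two-class fuzzy rule comparing membership of one GLCM
    feature in hypothetical 'normal' vs 'cancer' ranges."""
    normal = tri(entropy, 0.0, 1.0, 2.5)
    cancer = tri(entropy, 1.5, 3.0, 5.0)
    return "cancer" if cancer > normal else "normal"
```

A full system would combine memberships of all nine features through fuzzy rules before defuzzifying into the final diagnosis.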
Inhibition of p38 Mitogen-activated Protein Kinase Impairs Influenza Virus-induced Primary and Secondary Host Gene Responses and Protects Mice from Lethal H5N1 Infection
Background: Early cytokine dysregulation upon infection with highly pathogenic avian influenza viruses (HPAIV) is a major determinant of viral pathogenicity. Results: p38 MAPK controls HPAIV-induced gene expression by regulating interferon synthesis and subsequently interferon signaling, whereas its inhibition protects mice from lethal infection. Conclusion: p38 MAPK is crucial for the induction of hypercytokinemia upon infection. Significance: Targeting p38 MAPK is a promising approach for antiviral intervention.
Highly pathogenic avian influenza viruses (HPAIV) induce severe inflammation in poultry and men. One characteristic of HPAIV infections is the induction of a cytokine burst that strongly contributes to viral pathogenicity. This cell-intrinsic hypercytokinemia seems to involve hyperinduction of p38 mitogen-activated protein kinase. Here we investigate the role of p38 MAPK signaling in the antiviral response against HPAIV in mice as well as in human endothelial cells, the latter being a primary source of cytokines during systemic infections. Global gene expression profiling of HPAIV-infected endothelial cells in
the presence of the p38-specific inhibitor SB 202190 revealed that inhibition of p38 MAPK leads to reduced expression of IFNβ and other cytokines after H5N1 and H7N7 infection. More than 90% of all virus-induced genes were either partially or fully dependent on p38 signaling. Moreover, promoter analysis confirmed a direct impact of p38 on the IFNβ promoter activity. Furthermore, upon treatment with IFNβ or conditioned media from HPAIV-infected cells, p38 controls interferon-stimulated gene expression by coregulating STAT1 by phosphorylation at serine 727. In vivo inhibition of p38 MAPK greatly diminishes virus-induced cytokine expression concomitant with reduced viral titers, thereby protecting mice from lethal infection. These observations show that p38 MAPK acts on two levels of the antiviral IFN response. Initially the kinase regulates IFNβ induction and, at a later stage, p38 controls IFNβ signaling and thereby expression of IFN-stimulated genes. Thus, inhibition of MAP kinase p38 may be an antiviral strategy that protects mice from lethal influenza by suppressing excessive cytokine expression.
Systemic infection of humans and birds with highly pathogenic avian influenza viruses (HPAIV) of the H5N1 subtype is characterized by severe internal bleeding, multiorgan failure, and hyperreaction of the host immune response that leads to massive overproduction of cytokines and chemokines known as the "cytokine storm." There is evidence that the latter contributes to the pathogenesis of human H5N1 disease (1)(2)(3), but the source of this cell-intrinsic phenomenon is still under investigation. So far, neuro- and endothelial cell tropism are known to play a role in HPAIV infections (4,5) and, recently, specific response patterns upon H5N1 infection of endothelial cells (ECs) have been shown to play a crucial role in overwhelming proinflammatory responses compared with infections with low pathogenic strains. This induction of proinflammatory and antiviral genes is strongly regulated by the nuclear factor κ-light-chain enhancer of activated B-cells (NF-κB) and is specifically modulated by the transcriptional regulators HMGA1 (high-mobility group protein HMG-I/HMG-Y) and NFATC4 (nuclear factor of activated T-cells, cytoplasmic 4) (6,7). In contrast, Hui and colleagues (8) showed that the cytokine response in primary human macrophages mainly depends on IRF3 (interferon regulatory factor 3) and activator protein 1 (AP1) signaling.
However, innate immune cell recruitment and early innate cytokine and chemokine production in the lungs of infected mice have been shown to be independent events, and are both regulated by the pulmonary endothelium (9). The blockade of inflammatory signaling in these cells drastically reduces the immune pathologic effects of HPAIV infection in mice. Therefore, it is very likely that endothelial cells are the main modulators of the HPAIV-induced cytokine storm, whereas lung epithelial cells and inflammatory infiltrates are currently thought to be central for cytokine dysregulation (10).
Cells respond to virus infection by launching a broadly reactive antiviral program, mainly orchestrated by the key cytokine interferon β (IFNβ). It has been shown that the initial induction of IFNβ transcription depends mainly on the same three transcription factors that are thought to be crucial for the cytokine storm: IRF3, NF-κB, and AP1 (6,8,11). Upstream of these transcription factors are signaling cascades, which allow the fine regulation of each signaling step within the cascade. Finally, IFNs activate a signal transduction pathway that triggers the transcription of a diverse set of genes that, in total, establish an antiviral response in target cells (12).
Infection with influenza A virus leads to the activation of a variety of intracellular signaling pathways including all four so far known mitogen-activated protein kinase (MAPK) cascades (13). MAP kinases are able to regulate gene expression at both the transcriptional and post-transcriptional levels by different mechanisms, thereby controlling diverse cellular processes (14). Among the different MAP kinase subgroups, a strong link has been established between the p38 pathway and inflammation. It has been postulated that diseases like Alzheimer disease and inflammatory bowel disease are associated with dysregulation of the p38 pathway (15,16) and it has been shown that a variety of pathogen- or cell stress-related stimuli can activate p38 MAPK (17,18). Therefore, the kinase plays an essential role in the production of proinflammatory cytokines such as IL-1β, TNF-α, and IL-6 (19). In recent years, special consideration has been given to the p38 pathway concerning its role in stimulation-specific interferon induction and signaling. Recently, the involvement of p38 MAPK in the HPAIV-mediated dysregulation of cytokine expression in primary human monocyte-derived macrophages and bronchial epithelial cells was hypothesized (8,20,21). Furthermore, hyperactivation of p38 and increased cytokine concentrations in plasma samples from patients infected with severe seasonal influenza have been reported (22).
In this study, biochemical as well as genetic tools were used to dissect the role of the p38 pathway in the HPAIV-induced cytokine storm in primary endothelial cells. Global gene expression profiling confirmed that nearly all (94%) HPAIV-induced genes are either partially or fully dependent on this pathway. Further analysis showed that p38 acts not only in the primary induction of cytokines but also affects the secondary cytokine-induced response by modulating the JAK-STAT pathway. Moreover, this study provides evidence for the first time that inhibition of p38 MAPK significantly protects mice from lethal influenza by reducing cytokine-induced pathogenicity. Therefore, interference with the p38 MAPK pathway might be a new target for therapeutic intervention in HPAIV infection.
EXPERIMENTAL PROCEDURES
Ethics Statement-All animal studies were performed in compliance with animal welfare regulations of the German Society for Laboratory Animal Science (GV-SOLAS) and the European Health Law of the Federation of Laboratory Animal Science Associations (FELASA). The protocol was approved by the State Agency for Nature, Environment and Consumer Protection (LANUV), Germany (permission number Az 8.87-50. 10.36.09.007).
Mouse Experiments-BALB/c mice were obtained from the Harlan-Winkelmann animal breeding facilities. Eight- to 10-week-old mice were anesthetized by intraperitoneal injection of 200 μl of a solution of 0.5% ketamine (Ceva) and 0.1% xylazine (Ceva) in PBS. Mice were infected or stimulated by the intranasal route in a 50-μl volume as indicated. Health status of the animals was monitored daily. In agreement with animal welfare regulations, mice were killed upon a body weight loss of 25%. Mouse survival curves are represented by Kaplan-Meier analysis.
Reagents and Plasmids-Cells were preincubated with different concentrations of the p38-specific inhibitors SB 202190 or SB 203580 (DMSO soluble, Calbiochem) for 30 min at 37°C before infection or stimulation as indicated. BALB/c mice were treated intraperitoneally with 20 mg/kg/day SB 202190 hydrochloride (Synkinase) or SB 203580 hydrochloride (Axon). These pyridinyl-imidazole compounds specifically inhibit p38α and -β by competing with ATP for the same binding site.
Recombinant human IFNβ and -γ were obtained from the PBL Interferon Source and used in concentrations from 100 to 500 units/ml as indicated. The double-stranded RNA analog poly(I:C) was purchased from Amersham Biosciences. Mice were stimulated with 1 μg of poly(I:C) in 50 μl of PBS via the intranasal route.
The retroviral expression plasmid pCFG5-IEGZ HA was previously described (6). The expression plasmid pRC/CMV STAT1α Y701F was obtained from Addgene, provided by J. Darnell (Laboratory of Molecular Cell Biology, Rockefeller University, New York) and previously described (24). The open reading frame from STAT1 Y701F was cloned into pCFG5-IEGZ HA and the double mutant (Y701F/S727A) was obtained by site-directed mutagenesis. Primer sequences are included in supplemental Table S1. The dominant-negative MKK6 mutant expressing vector pCFG5-IEGZ MKK6A was a kind gift from E. Serfling (Institute of Pathology, Wuerzburg). The luciferase reporter construct pTATA-IFNβ-luc was a kind gift from J. Hiscott (Lady Davis Institute, Montréal, Canada) and contains the whole IFNβ promoter upstream of the luciferase gene. Reporter gene constructs pTA-ISRE-luc and pTA-GAS-luc were obtained from Clontech. Upstream of the luciferase gene, pTA-ISRE contains five copies of the ISRE-binding sequence and pTA-GAS-luc two copies of the STAT1 enhancer element.
Plaque Titration-Plaque forming units of a given virus suspension were determined by a standard plaque assay as described earlier (25). Mouse lung titers were analyzed at the times indicated. Lungs were collected and placed in PBS with Collagenase A (0.7 mg/ml; Roche) to obtain a 10% tissue homogenate and incubated for 90 min at 37°C. Next, the samples were homogenized by passing them through a 20-gauge needle (0.5 mm diameter) and centrifuged at 10,000 × g for 5 min. Supernatants were taken for plaque titration.
Retroviral Gene Transfer-The empty retroviral vector pCFG5-IEGZ HA or pCFG5-IEGZ STAT1/MKK6 expressing the different phospho-mutants were transfected in Phoenix packaging cells (Orbigen) with polyethyleneimine, selected with 250 μg/ml Zeocin (Invitrogen), and retrovirus-containing supernatants were harvested and used for infection of A549 cells as previously described (6). Retrovirally transduced A549 cells were selected with 250 μg/ml Zeocin for 2 weeks to obtain stable cell lines and the efficiency of retroviral gene transfer was measured by flow cytometric detection of recombinant enhanced GFP (EGFP), which was coexpressed with the gene of interest, using a FACSCalibur cytometer (BD Biosciences) 48 h after transduction. Transduction rates ranged from 90 to 100% and stable STAT1 mutant-expressing cells were subcloned to obtain equal expression levels of the transgenes as measured by Western blot.
RNA Isolation, cDNA Synthesis, and qRT-PCR-Total RNA from cells was isolated using the RNeasy Kit (Qiagen) according to the manufacturer's instructions. Lungs from mice were collected at the time points indicated and total RNA was isolated using TRIzol reagent (Invitrogen). TRIzol lysis was performed according to the manufacturer's protocol, introducing a secondary phase separation step. Samples were homogenized using a FastPrep-24 homogenizator (MP Biomedicals) with Lysing Matrix D (MP Biomedicals). Isopropyl alcohol-precipitated RNA was dissolved in 0.3 M NaAc (pH 5.2) and phenol (pH 4.1-5.6) was added in a ratio of 1:1 (v/v). After vortexing and centrifugation (4°C, 5 min, 13,000 × g), chloroform was added to the RNA-containing upper phase (1:1, v/v) and again vortexed and centrifuged (4°C, 5 min, 13,000 × g). Subsequently, RNA was precipitated by adding 96% EtOH (1:3, v/v) to the upper phase and followed by washing.
Three micrograms of total RNA were reverse transcribed with Revert AID H Minus Reverse Transcriptase (MBI Fermentas) and oligo(dT) primers according to the manufacturer's protocol. The cDNA was used for qRT-PCR, which was performed using a Roche LightCycler 480 and Brilliant SYBR Green Mastermix (Agilent) according to the manufacturer's instructions. Primer sequences are included in supplemental Table S1. Relative changes in expression levels (n-fold) were calculated according to the 2^-ΔΔCt method (27).
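The 2^-ΔΔCt calculation reduces to a few lines; the Ct values below are hypothetical and only illustrate the arithmetic:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalise the target
    gene to a reference gene in each sample, then compare treated
    (e.g. infected) vs control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 3 cycles earlier
# relative to the reference after treatment.
fc = fold_change(22.0, 18.0, 25.0, 18.0)
```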
Luciferase Assay-Transfection of Vero or A549 cells with different luciferase reporter plasmids (0.3 μg) was performed with Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. Luciferase assays were carried out 24 h post-transfection as previously described (28). Relative light units were normalized to protein concentrations determined with a standard Bradford assay.
DNA Microarray and Statistical Data Analysis-Primary HUVEC were treated with 20 μM SB 202190 or DMSO for 30 min at 37°C and subsequently infected with FPV for 5 h with a multiplicity of infection (m.o.i.) of 5 or left uninfected. Total RNA was isolated from three independent experiments using the RNeasy kit (Qiagen). Samples were processed for microarray hybridization using Affymetrix Human Genome 133 Plus 2.0 Gene Arrays according to the manufacturer's protocol. Fluorescent signals detected by the GeneChip Scanner 3000 were recorded and computed by Affymetrix GeneChip Operating Software version 1.4. Parts of the data set concerning FPV-infected HUVEC and control HUVEC have also been used in a previous study by our group (7).
For a more elaborate data analysis, the Expressionist Suite software package from GeneData (Basel, Switzerland) was used as previously described (29). Only genes with a fold-change (FC) of >2.0 or <-2.0 and p ≤ 0.05 (paired t test) in three independent experiments were considered as regulated. "On/off"-regulated genes were evaluated as described (29), considering genes with on/off ratios of 0:3, 0:2, 1:3, 3:0, 2:0, and 3:1, respectively. From this group, only regulations with a high FC of ≥5 and p < 0.05 were included in the list of regulated genes to differentiate on/off phenomena occurring around the background threshold. Principal component analysis was applied to mathematically reduce the dimensionality of the entire spectrum of gene expression values of the microarray experiment to three components (30).
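The reduction of expression profiles to three principal components can be sketched with an SVD-based PCA (a generic implementation, not the Expressionist Suite's algorithm; the 9 × 50 matrix is an arbitrary stand-in for arrays × genes):

```python
import numpy as np

def pca_3d(x):
    """Project samples (rows) onto the first three principal
    components via SVD of the mean-centred matrix -- the kind of
    reduction used to display expression profiles in 3-D."""
    xc = x - x.mean(axis=0)
    # Singular vectors in vt are ordered by decreasing variance.
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:3].T

rng = np.random.default_rng(0)
coords = pca_3d(rng.normal(size=(9, 50)))  # 9 arrays x 50 genes
```

Each row of `coords` then gives one array's position in the three-dimensional vector space used for the vector-cloud visualisation.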
By definition, strictly p38-dependent genes were either fully switched off or reduced below threshold in the presence of SB 202190. By this mathematical method the strength of mRNA induction as well as constitutive expression at a high level may not be reflected to calculate strict versus partial dependence. Therefore, the distinction might be to some extent arbitrary.
To identify functional categories of genes that are overrepresented in the data sets of regulated genes, Gene Ontology (GO) annotations to every probe set spotted on the Affymetrix 133 Plus 2.0 Array were assigned and compared with the distribution of GO annotations in the gene group of interest by applying the Fisher exact test. In the case of genes that were represented by two or more probe sets, only one transcript was taken into account to avoid potential bias.
RESULTS
p38 MAPK Is Activated Upon Influenza A Virus Infection in Endothelial Cells-Compared with low pathogenic influenza viruses, HPAIV are unique in inducing a broad spectrum cytokine response, contributing to virus-associated immune pathology. Previously, activation of p38 MAPK was shown to modulate antiviral signaling responses in bronchial epithelial cells and monocyte-derived macrophages upon influenza virus infection (20,21). Furthermore, differential activation of diverse MAPKs was observed in macrophages upon influenza infection with various subtypes (31). But, so far, nothing is known about the activity patterns of p38 in endothelial cells upon infection with diverse influenza isolates. Therefore, primary HUVEC were infected with two highly pathogenic subtypes of human or avian origin, A/Thailand/KAN-1/2004 (H5N1) or A/FPV/Bratislava/79 (H7N7). Both influenza isolates activate p38 upon infection, demonstrated by phosphorylation of threonine (Thr180) and tyrosine (Tyr182) residues in the activation loop of p38 MAPK (Fig. 1A, upper panels). Activation of p38 was comparable, despite slight differences in viral replication kinetics. Viral polymerase protein PB2 of the H5N1 (KAN-1) strain was detected moderately earlier (Fig. 1A, middle panels), but the 9-h post-infection titers were not significantly higher compared with the H7N7 (FPV) isolate as determined by standard plaque titration (Fig. 1B).
p38 MAPK Signaling Has a Major Impact on the HPAIV-induced Gene Profile-To gain broad insight into what role p38 MAPK might play in the HPAIV-induced cytokine storm, a comparative global gene expression study was performed. HPAIV-induced gene expression was monitored at 5 h post-infection, which is well within the first replication cycle of influenza virus infection and thus minimizes the effects of secondary infection. In this array, primary HUVEC were preincubated with 20 μM SB 202190, a specific p38α/β inhibitor, or left untreated for 30 min and subsequently infected with the HPAIV strain FPV (H7N7). Total RNA of infected and uninfected control HUVEC was processed for microarray hybridization and the data sets from untreated FPV- or mock-infected cells were used for comparison (7). FPV infection led to the up-regulation of 82 mRNAs, including 19 genes being switched on compared with mock-infected control cells. More than 4000 mRNAs were down-regulated or switched off upon H7N7 infection. The final analysis exclusively focused on up-regulated genes because unspecific 5′ cap snatching mechanisms (32) or interference with the processing of cellular RNAs by the viral NS1 protein (33) significantly contribute to the process of gene down-regulation by influenza viruses, which makes the identification of specifically down-regulated genes impossible. To verify the impact of p38 MAPK on the FPV-induced gene profile, principal component analysis was performed, displaying all influenza-inducible genes as vector clouds in a three-dimensional vector space. Here, the consistency of the gene profiles within the same experimental group was confirmed, indicating reliable reproduction of data in the different experiments (Fig. 2A). Furthermore, clear separation of the experimental groups could be observed, illustrating that the induced gene expression patterns in these groups were specific and distinct from each other. Comparison of the two different data sets of FPV-infected HUVEC showed that inhibition of p38 led to a partial reversion of the FPV-induced gene spectrum.
This result implies that either only a distinct gene subset is strictly dependent on p38 signaling or that there are some genes that show only a slight requirement for the kinase. Indeed, mathematical analysis showed that 71% of FPV-induced genes were strictly dependent (switched off in the presence of SB 202190) and 23% partially dependent on p38 signaling (Fig. 2B and supplemental Table S2). Although this distinction might be to some extent arbitrary, more than 90% of the FPV-induced genes were found to be p38 dependent.
By functional clustering according to gene ontology annotations, Viemann and colleagues (7) revealed that the majority of mRNAs induced upon FPV infection belong to the inflammatory viral response and cell-cell signaling categories (7). p38-dependent genes cluster into the immune/inflammatory response and chemotaxis categories, demonstrating that p38 MAPK plays a prominent role in the expression of the major gene groups induced by HPAIV and is thereby crucial for the dysregulation of cytokines and chemokines (Fig. 2C). To validate the microarray data, quantitative real-time RT-PCR (qRT-PCR) analysis was performed for a subset of FPV-induced mRNAs in the presence or absence of SB 202190. All tested FPV-induced mRNAs were at least partially down-regulated upon p38 inhibition, as shown in supplemental Fig. S1. To exclude nonspecific off-target effects of the inhibitor in microarray experiments, qRT-PCR analysis was performed for different FPV-induced mRNAs in the presence or absence of different doses of SB 202190 (5, 10, and 20 μM) in comparison with a second p38-specific inhibitor SB 203580 (Fig. 2D). As expected, treatment with both inhibitors led to significantly reduced cytokine expression levels in a concentration-dependent manner. These observations reflect the prominent role of p38 MAP kinase in the expression of HPAIV-induced genes.
p38 MAPK Activity Is Required for H5N1-induced Expression of IFNs and ISGs-In 2009, Nencioni and colleagues (34) described a decrease in viral titers due to the retention of viral ribonucleoproteins in the nucleus when p38 MAPK was inhibited in Madin-Darby canine kidney cells. Furthermore, it was shown that virus internalization is decreased upon p38 inhibition in bronchial epithelial cells (35). To ensure that the observed effects on gene induction were not due to replication differences, the potential influence of SB 202190 on viral propagation of FPV and KAN-1 was assessed in HUVEC by standard plaque assays. The results shown in Fig. 3, A and B, clearly indicate that the inhibitor did not significantly affect replication efficiency of the two viruses in endothelial cells. Thus, the observed impact of p38 inhibition on gene expression in HUVEC is replication independent and is primarily due to direct interference with the inducing signaling pathways.
To assure that the observed effects of p38 MAPK signaling on the HPAIV-induced gene profile were not specific for the avian isolate FPV but also true for other highly pathogenic isolates of other subtypes, HUVEC were infected in the presence or absence of SB 202190 with the human H5N1 isolate KAN-1. Total RNA was isolated 2, 4, and 8 h post-infection, and qRT-PCR analysis was performed for a subset of KAN-1-induced mRNAs. Because inflammatory and immune genes were mainly affected by p38 inhibition, this study is focused on IFNβ, as a major mediator of the innate antiviral response (Fig. 3C) and interferon-stimulated genes (ISGs) (Fig. 3D). qRT-PCR data confirmed the dependence of IFNβ and ISG production on p38 MAP kinase signaling in cells infected with either FPV or KAN-1. Furthermore, nonspecific off-target effects of SB 202190 on KAN-1-induced cytokine expression were ruled out in an inhibitor-independent approach by using siRNAs specific for different p38 isoforms, p38α (MAPK14) and p38β (MAPK11) (supplemental Fig. S2, A and B). This method also allows to evaluate the role of the two different isoforms, and the results indicate a more prominent function of p38α in cytokine expression upon H5N1 infection, especially in the case of IFNβ.
Viral RNA is the main pathogen-associated molecular pattern recognized by different pattern-recognition receptors, inducing the type I IFN response in virus-infected cells. Especially detection of viral 5′-triphosphate RNAs by the cytoplasmic helicase RIG-I (retinoic acid inducible gene I) plays an important role in influenza A virus infection (36). To test if the blockade of p38 MAPK inhibits viral RNA-induced signaling, HUVEC were transfected with RNA from uninfected or HPAIV-infected A549 cells in the presence or absence of SB 202190 for 3 h (supplemental Fig. S2C). In contrast to RNA from uninfected cells, stimulation with total RNA from virus-infected cells led to an induction of IFNβ and ISG mRNAs in a p38 MAPK-dependent fashion. These results clearly show that p38 plays an important role in the induction of IFNβ and consequently in the expression of interferon-stimulated genes upon HPAIV infection in endothelial cells, confirming previous results obtained in monocytic cells (8).
p38 MAPK Signaling Has a Direct Impact on IFNβ Promoter Activity-p38 MAPK signaling can influence the production of IFNβ and ISGs on different levels. It has recently been shown that there is a stimulation-specific contribution of p38 MAPK to IFNβ gene expression in human macrophages (37) that might be due to the activation of ATF-2 (38) as well as the cross-regulation of NF-κB (39) and IRF3 (40), the major components of the IFNβ enhanceosome. Moreover, p38 MAPK modulates IFN signaling by affecting the JAK/STAT pathway. For instance, p38 can positively regulate JAK/STAT signaling by the phosphorylation of STAT1 at serine 727 (Ser727) or by the activation of cytosolic phospholipase A2 (41,42). Modulation of STAT signaling may in turn influence IFNβ expression as a positive regulatory feedback loop. One of the first ISGs produced in response to IFN signaling is interferon regulatory factor 7 (IRF7). IRF7 can form homo- or heterodimers with IRF3 and replaces IRF3 in the later stages of IFNβ expression, thereby again regulating IFNβ expression (12). To address whether p38 MAPK signaling mainly affects the initial production of IFNβ or IFN signaling on the STAT1 level in response to influenza A virus infection, experimental systems assessing both signaling steps were used.
The first indication of a direct impact of p38 MAP kinase on the IFN enhanceosome activity in response to HPAIV infection was provided by the observation that a reduction in IFN mRNA levels occurs as early as 2 h post-infection when p38 is inhibited (Fig. 3C). To verify that p38 MAPK directly affects the production of IFN on the promoter level, Vero cells lacking functional type I IFN genes were transfected with total RNA isolated from HPAIV-infected A549 cells and an IFN reporter plasmid (Fig. 4A). Reduction of IFN reporter activity in the presence of SB 202190 indicated a direct effect of p38 signaling on IFN expression, independent of any auto- or paracrine actions of viral RNA-induced type I IFN. Furthermore, nonspecific off-target effects of the inhibitor were ruled out by using a dominant-negative mitogen-activated protein kinase kinase 6 mutant (MKK6A) acting upstream of p38 MAPK (supplemental Fig. S3A).
The release of type I IFNs and other JAK/STAT activating cytokines from influenza virus-infected cells can be monitored by STAT1 phosphorylation on tyrosine at position 701 (Tyr701), which is a hallmark of IFN signaling. To explore the impact of p38 MAPK on IFN mRNA production in HPAIV-infected cells on the protein level, conditioned medium experiments were performed with DMSO- or SB 202190-pretreated HUVEC, which were infected with KAN-1 for 5 h (5 m.o.i.). Supernatants were subsequently transferred to untreated HUVEC for 15 min and STAT1 phosphorylation was assessed. Fig. 4B shows that the observed STAT1 Tyr701 phosphorylation induced by the conditioned media from infected HUVEC (lane 7) was reduced when p38 MAPK was inhibited in the donor cells (lane 8). Conditioned supernatants from uninfected HUVEC had no effect on STAT1 phosphorylation (lanes 5 and 6). These results clearly show that p38 signaling is required for primary expression of IFNs and other STAT1-activating cytokines.
IFN Signaling Is Modified by p38 MAPK Activity-The results so far raised the question as to whether the observed impact of p38 inhibition on secondary ISG expression is simply due to reduced levels of IFNs in the primary response to infection or whether additional steps in signaling are regulated by the kinase. To discriminate between these two scenarios, the consequences of SB 202190-mediated p38 inhibition on the induction of ISG mRNAs were tested after stimulation with conditioned medium. This provides a realistic stimulation with the complete set of IFNs and other cytokines and chemokines that are released upon HPAIV infection. Conditioned media from mock- or KAN-1-infected HUVEC were transferred to SB 202190- or DMSO-pretreated acceptor cells, respectively. qRT-PCR confirmed efficient IFN mRNA expression in donor cells (Fig. 5A). A potential transfer of infectious particles onto the reporter cells was ruled out by gene-specific qRT-PCR of viral genes; no viral genomic RNA or M1 mRNA was detectable (data not shown). The reporter cells treated with KAN-1-conditioned medium showed expression of ISGs such as MX1 or OAS1, and the inhibition of p38 MAPK activity led to a significant reduction in mRNA induction (Fig. 5B). These observations clearly indicate that, besides the impact of p38 on the primary viral gene induction, the kinase additionally controls the induction of IFN- or other STAT-activating cytokine-stimulated gene expression.
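The conditioned-medium comparisons above rely on qRT-PCR relative quantification. As an illustration, the standard 2^-ΔΔCt calculation behind such fold-change values can be sketched as follows; the function name and Ct values are illustrative assumptions, not data from this study:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-ΔΔCt (Livak method): normalize the target gene Ct to a reference
    # (housekeeping) gene, then compare the treated sample to the control.
    delta_sample = ct_target - ct_ref
    delta_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(delta_sample - delta_control)

# Illustrative Ct values: the ISG crosses threshold 4 cycles earlier
# (relative to the reference gene) in stimulated vs. control cells,
# corresponding to a 16-fold induction.
fold_induction = relative_expression(20.0, 15.0, 24.0, 15.0)
```

With these made-up numbers the function returns a 16-fold induction; the same arithmetic underlies the ISG mRNA fold changes reported in Fig. 5.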
To further confirm that p38 really acts on IFN-induced gene expression responses, the effect of p38 inhibition on cells stimulated for 2, 4, or 8 h with IFNβ or -γ in the presence or absence of SB 202190 was examined. Fig. 5C shows reduced IFN-induced expression of the ISG mRNAs for MxA and IP10 in HUVEC at all time points when p38 signaling was impaired. Interestingly, this was also true for the IFNγ-induced genes GBP1 and VCAM1 (Fig. 5D). These results were confirmed by promoter analysis of interferon-stimulated genes using the dominant-negative MKK6 mutant, thereby ruling out nonspecific off-target effects of the inhibitor (supplemental Fig. S3B).
MAP Kinase p38 Directly Influences ISG Promoter Activity by the Phosphorylation of STAT1 at Ser727-A common mediator of signaling induced by type I and II IFN is STAT1. In the case of IFN stimulation, it forms the IFN-stimulated gene factor 3 together with STAT2 and IRF9 and activates transcription from promoters containing interferon-stimulated response elements (ISRE). Upon stimulation with IFNγ, STAT1 homodimers are formed that enhance interferon γ-activated site (GAS)-dependent gene transcription. It has previously been proposed that STAT1 serine phosphorylation caused by a variety of stimuli is sensitive to the inhibition of p38 MAPK (42,43), but there is also evidence for the existence of a STAT-independent mechanism upon IFN stimulation (44). Although tyrosine phosphorylation of STAT1 constitutes the essential prerequisite for biological activity by triggering DNA binding after nuclear accumulation, phosphorylation at serine 727 seems to be required for full transcriptional activity induced by IFNs (24,45).
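The two signaling branches described above map each IFN class onto a distinct STAT1-containing complex and promoter element. As a compact summary, this relationship can be written as a small lookup table; the dictionary and function names are assumptions for illustration only:

```python
# Stimulus -> (transcription-factor complex, promoter element), as
# described in the text for type I and type II interferon signaling.
IFN_SIGNALING = {
    "type I IFN": ("ISGF3 (STAT1/STAT2/IRF9)", "ISRE"),
    "IFN-gamma": ("STAT1 homodimer", "GAS"),
}

def promoter_element(stimulus):
    # Returns the DNA element driving ISG transcription for a stimulus.
    return IFN_SIGNALING[stimulus][1]
```

This is why the reporter assays discussed later pair IFNβ stimulation with ISRE-luc and IFNγ stimulation with GAS-luc constructs.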
To study the impact of STAT1 Ser727 phosphorylation on the production of ISGs and its dependence on p38 MAPK activity in HPAIV infection, A549 lung epithelial cells were used. These cells allow the efficient transfection of dominant-negative STAT1 constructs, which is not possible in HUVEC. A549 showed the same dependence of IFN and ISG induction on functional p38 MAPK signaling as observed in HUVEC (supplemental Figs. S3 and S4A), and inhibitory effects of the compound or p38α MAPK knockdown on viral replication in A549 could also be ruled out (supplemental Fig. S4B, C). Western blot analysis confirmed that infection with 5 m.o.i. of both H7 (FPV) and H5 (KAN-1) subtype viruses resulted in the activation of p38 (Fig. 6A, middle panels, lanes 3 and 4), which occurred simultaneously with the phosphorylation of STAT1 at Ser727 (upper panels). To examine whether p38 MAPK activity is required for Ser727 phosphorylation, A549 cells were pretreated with 5 or 10 μM SB 202190 for 30 min, infected with FPV (5 m.o.i.), and subsequently incubated with the respective concentrations of SB 202190 for the indicated time points (Fig. 6B, left). Western blot analysis confirmed reduced STAT1 Ser727 phosphorylation (upper panel, lanes 7, 8, 11, and 12) in a concentration-dependent manner when p38 was inhibited, implying that the kinase is required for serine phosphorylation of STAT1 in influenza virus infection. Efficient infection was confirmed by immunoblotting for viral proteins PB2 and M1 (middle panels).
Furthermore, nonspecific off-target effects of SB 202190 on the FPV-induced STAT1 Ser727 phosphorylation were ruled out by using a second p38-specific inhibitor, SB 203580 (Fig. 6B, right, upper panel, lanes 6 and 9). In addition, the dependence of STAT1 Ser727 phosphorylation on a functional p38 pathway was confirmed by molecular means using MAPK14-specific siRNA (Fig. 6C, left, upper panel, lane 4), which showed a reduction in phosphorylation levels of ~70% when p38α was knocked down (Fig. 6C, right).
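The ~70% reduction cited here is the kind of value typically obtained from densitometry of Western blot bands normalized to a loading control. A minimal sketch of that normalization follows; the band intensities are made-up values chosen to reproduce an approximately 70% reduction, not measurements from this study:

```python
def percent_reduction(band_ctrl, band_kd, loading_ctrl, loading_kd):
    # Normalize each phospho-STAT1 band to its loading-control band,
    # then express the knockdown lane relative to the control lane.
    norm_ctrl = band_ctrl / loading_ctrl
    norm_kd = band_kd / loading_kd
    return 100.0 * (1.0 - norm_kd / norm_ctrl)

# Illustrative densitometry readings (arbitrary units):
reduction = percent_reduction(1000.0, 330.0, 500.0, 550.0)  # ~70%
```

Normalizing to the loading control is what makes the comparison robust to unequal protein loading between lanes.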
To rule out any secondary effects of released IFNs on STAT1 phosphorylation at Ser727, Vero cells lacking functional type I IFN genes were pretreated with different concentrations of SB 202190 or DMSO and subsequently infected with FPV (5 m.o.i.) for the indicated time points (Fig. 6D, left). Immunostaining of phosphorylated STAT1 at Ser727 clearly demonstrates the dependence of this post-translational modification on functional p38 signaling in the context of HPAIV infection (upper panel, lanes 11 and 12). Efficient viral propagation was confirmed by immunoblotting for viral proteins PB2 and M1 (middle panels). In addition, these findings were confirmed by a second p38-specific inhibitor, SB 203580 (Fig. 6D, right, upper panel, lane 9), and by an inhibitor-independent approach using MAPK14-specific siRNA (Fig. 6E, left, upper panel, lane 4). Here, as observed in A549 cells, FPV-induced STAT1 Ser727 phosphorylation was reduced by ~70% when p38α was knocked down (Fig. 6E, right). Furthermore, an influence of p38 MAPK inhibition on viral replication in Vero cells was ruled out by standard plaque assays (Fig. 6F).
To further verify the crucial role of p38-mediated STAT1 Ser727 phosphorylation on IFN-induced gene transcription, reporter gene assays were performed in cells that stably express a dominant-negative STAT1 (Y701F) or STAT1 (Y701F/S727A) double mutant. Equal expression levels of the different mutants were confirmed by Western blot analysis of total cell lysates (Fig. 6G). Mutant-expressing cells were stimulated with IFNβ (ISRE-luc) or -γ (GAS-luc) for 8 h. Fig. 6H shows that stimulation with IFNβ induced ISRE-dependent promoter activity around 4-fold in empty vector-expressing cells. Expression of the dominant-negative STAT1 Y701F mutant decreased promoter activity by 50% and, indeed, expression of the STAT1 double mutant (Y701F/S727A) further reduced IFN-induced promoter activity. In the case of GAS promoter activity, only empty vector-expressing cells showed increased activity after IFNγ stimulation (Fig. 6I). Expression of the dominant-negative STAT1 Y701F mutant led to a complete loss of activity that was not further affected by the S727A mutation.
p38 MAPK Inhibition Leads to the Suppression of Early Innate Immune Responses in Vivo-The induced cytokine storm during severe influenza infections leads to major morbidity and mortality. A significant association between excessive early cytokine response, immune cell recruitment, and poor outcome has been documented for avian H5N1 infection (2). To investigate the role of p38 MAPK signaling in the dysregulation of cytokine expression after infection with a human pathogenic H5N1 isolate in vivo, BALB/c mice were infected with 10× LD50 KAN-1 that had never been passaged in mice. Infection of BALB/c with this strain causes severe disease, resulting in neurological deficits and high mortality rates. Directly after infection, mice were treated intraperitoneally with 20 mg/kg of water-soluble SB 202190 hydrochloride or vehicle once per day. Two days post-infection, lungs from mice were extracted and total RNA was isolated for qRT-PCR. Fig. 7A shows lung IFN mRNA levels from PBS-treated control mice in comparison to KAN-1-infected mice. The KAN-1-induced expression of IFN mRNA observed in vehicle-treated mice was nearly completely abolished in the presence of the p38 inhibitor SB 202190. This was also true for different ISGs such as OAS1 and IP-10. Interestingly, the transcription of the NF-κB-dependent gene IL6 was also significantly suppressed after p38 MAPK blockade (right). These results impressively illustrate the importance of p38 in the induction of the cytokine storm in vivo. To verify whether the observed suppression of cytokine expression after p38 inhibition occurs due to alterations in viral propagation, BALB/c mice were infected with 10× LD50 KAN-1 and treated with SB 202190 hydrochloride or vehicle directly after infection, as described above. Two days post-infection, lungs from mice were extracted and viral lung titers were determined via standard plaque titration.
Following SB 202190 treatment, viral titers were decreased by ~10-fold compared with the lung titers of vehicle-treated mice (Fig. 7B, left). Interestingly, these effects on viral replication were not observed upon treatment with another p38 inhibitor (SB 203580 hydrochloride, Fig. 7B, right), whereas reduced cytokine expression levels upon KAN-1 infection could be observed, although less pronounced compared with SB 202190 treatment (supplemental Fig. S5).
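Fold changes such as the ~10-fold titer reduction above follow from the standard plaque-assay back-calculation (plaques divided by dilution times plated volume). A sketch with illustrative plaque counts, not the paper's raw data:

```python
def titer_pfu_per_ml(plaques, dilution, inoculum_ml):
    # Back-calculate the stock titer from a counted plate:
    # counted plaques / (dilution factor x volume plated).
    return plaques / (dilution * inoculum_ml)

# Illustrative counts from two hypothetical lung homogenates:
vehicle = titer_pfu_per_ml(50, 1e-5, 0.1)   # ~5e7 PFU/ml
treated = titer_pfu_per_ml(45, 1e-4, 0.1)   # ~4.5e6 PFU/ml
fold_reduction = vehicle / treated           # roughly 11-fold
```

Plates in the countable range (roughly 10-100 plaques) at different dilutions are normally averaged; the single-plate version here is kept minimal for clarity.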
Considering a possible attenuated viral propagation after p38 inhibition, a decrease in cytokine expression would not be surprising. Thus, cytokine induction was determined in a replication-independent system using the double-stranded RNA analog poly(I:C). BALB/c mice were treated intraperitoneally with 20 mg/kg SB 202190 hydrochloride or vehicle 3 h prior to intranasal stimulation with 1 μg poly(I:C). Lungs from mice were extracted 6 h post-stimulation and total RNA was isolated for qRT-PCR. Fig. 7C shows the lung cytokine mRNA levels from PBS-treated control mice in comparison to poly(I:C)-stimulated mice. The repression of dsRNA-induced responses in the lungs of mice that were treated with SB 202190 (Fig. 7C) reflects the direct function of p38 MAP kinase in the induction of excessive cytokine and chemokine production.
Dysregulation of the innate cytokine response indicates disease severity and death during HPAIV infection (1,2). To analyze whether blunting the expression of cytokines using the p38 inhibitor SB 202190 could protect mice from lethal influenza infection, BALB/c mice were infected with 10× LD50 of KAN-1 and treated intraperitoneally with SB 202190 hydrochloride or vehicle, as described earlier. Fig. 7D shows the body weight and survival curves of KAN-1-infected mice treated with SB 202190 in comparison to vehicle-treated mice. Although vehicle-treated mice lost body weight as early as 5 days post-infection, weight loss in SB 202190-treated mice was delayed and was detectable 8 days post-infection. Both experimental groups showed peaks in weight loss 10 days post-infection, with more pronounced effects in control mice. This is supported by the survival curves that show enhanced survival of SB 202190-treated mice. Only one-third of these mice died, whereas nearly 85% mortality was observed in vehicle-treated mice. Furthermore, these results on survival proportions could be fully confirmed with the p38-specific inhibitor SB 203580 hydrochloride (Fig. 7E). Considering the fact that viral propagation was not affected by SB 203580 treatment, these findings demonstrate that early suppression of cytokine amplification by inhibiting p38 MAP kinase activity significantly protects mice from lethal influenza virus infection and impressively emphasizes the important role of the cytokine storm in the viral pathogenicity of HPAIV.
DISCUSSION
Human influenza virus infections caused by highly virulent H5N1 variants are associated with high mortality rates that clearly emphasize the need for further knowledge about the biologic characteristics of these infections to identify new targets for antiviral interventions. It has been shown that HPAI viruses induce generalized infections associated with an overwhelming production of cytokines and chemokines that has been hypothesized to contribute to viral pathogenesis (1, 2). Endothelial cells have been shown to play a major role in cytokine dysregulation (6,7,9). Furthermore, innate immune cell recruitment and early innate cytokine expression seem to be independent events with endothelial cells at the center of both processes (9), possibly repositioning the role of immune cells in this context to secondary importance. So far, different transcription factors like IRF3, NF-κB, AP1, and NFATC4 have been shown to be involved in H5N1-induced hyperactivation of the innate immune system, with their significance depending on the cell type used (6-8). Furthermore, it is well known that p38 MAP kinase represents an important factor in various inflammatory diseases, modulating the actions of the aforementioned transcription factors upon different stimuli in different cell types (38-40). The aim of this study was to analyze the impact of p38 MAP kinase on HPAIV-induced cytokine induction in primary endothelial cells. Furthermore, the role of the kinase in H5N1 pathogenicity was analyzed in vivo, identifying p38 MAPK as a likely target for antiviral intervention for the first time.
Global mRNA profiling confirmed that upon infection of primary endothelial cells with a highly pathogenic influenza A virus of the H7N7 subtype, p38 had an excessive impact on the FPV-induced transcriptome. By mathematical calculation, nearly all HPAIV-induced genes were either partially (23%) or fully (71%) dependent on this pathway; the majority of these mRNAs belong to the immune/inflammatory response and chemotaxis gene ontology categories. These findings indicate a pivotal role of p38 MAPK in HPAIV-induced cytokine and chemokine dysregulation. Furthermore, this suggests a function for this signaling pathway in immune cell recruitment upon influenza infection by regulating the expression of a number of chemotactic cytokines such as CXCL9, -10, -11, and CCL5 by endothelial cells; these cells are of primary importance for innate immune cell recruitment (9). Additional experimental approaches are required to reveal whether p38 MAPK is critical for chemokine expression and subsequent immune cell recruitment upon HPAIV infection in vivo, as has been shown for enteric bacterial infection of the colonic mucosa (46).
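The partitioning of induced genes into partially (23%) vs. fully (71%) p38-dependent classes implies threshold rules on expression ratios measured with and without the inhibitor. A hypothetical sketch of such a classification follows; the cutoffs and function names are assumptions for illustration, not the authors' published criteria:

```python
def classify_dependence(fold_untreated, fold_inhibited,
                        induced_cutoff=2.0, reduction_cutoff=0.5):
    # Hypothetical rules: a gene counts as induced above `induced_cutoff`;
    # "fully dependent" if the inhibitor abolishes induction, "partially
    # dependent" if induction is at least halved but still present.
    if fold_untreated < induced_cutoff:
        return "not induced"
    if fold_inhibited < induced_cutoff:
        return "fully dependent"
    if fold_inhibited <= reduction_cutoff * fold_untreated:
        return "partially dependent"
    return "independent"
```

Applied across all virus-induced transcripts, tallying the class labels would yield the kind of percentage breakdown reported above.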
Interestingly, IFN mRNA was found among the FPV-induced mRNAs that were switched off in the presence of the p38 inhibitor SB 202190, highlighting the obligatory dependence of HPAIV-induced IFN production on functional p38 signaling. So far, IRF3 and NF-κB have been shown to be essential for activation of the IFN promoter in H5N1-infected endothelial cells (6,7). Although IRFs are also the most abundant transcription factors modulating the FPV-induced transcriptome, as determined by promoter analysis, IFN induction could not be confirmed to be NF-κB-dependent upon H7N7 infection (7). In contrast, the present study shows that p38 MAPK activity is needed for both H7N7- and H5N1-induced IFN expression, indicating a more global role of p38 signaling in cytokine induction provoked by HPAIV infection. This was confirmed by infection studies and stimulation with total RNA from FPV- or KAN-1-infected cells, respectively. Similar results were obtained in alveolar epithelial cells, indicating a general impact of p38 signaling on HPAIV-induced IFN production that is not cell type specific. This supports previous results from Hui and colleagues (8) obtained in primary human macrophages.
[Fig. 7 legend, D and E: mean % body weight normalized to initial weight ± S.E. and %-survival of 1-9 (D) or 1-10 (E) animals (initial group sizes) from two independent experiments; animals were excluded from the analysis when reaching less than 75% of the initial body weight.]
Besides a direct impact on the IFN promoter that could be verified experimentally, not only by inhibitor studies but also by the overexpression of a dominant-negative MKK6 mutant acting upstream of p38 MAPK, induction of HPAIV-induced ISGs was decreased upon p38 inhibition and siRNA-mediated knockdown. Furthermore, all genes previously described to be ISGs were found among the p38-regulated genes. Stimulation with type I and type II IFN or virus-conditioned media revealed a direct dependence of ISG expression on p38 MAPK activity in endothelial cells. In addition to the aforementioned IRF transcription factors, promoter analysis of the FPV-induced transcriptome identified the ISRE consensus motif to be overrepresented within the group of up-regulated genes (7). One of the major factors needed for type I and type II IFN signaling is STAT1. Upon activation, STAT1 participates in gene induction via ISRE and GAS consensus motifs, thereby inducing the transcription of ISGs. One possibility for p38-dependent enhancement of ISG expression is the direct phosphorylation of serine 727 within the STAT1 protein. This phosphorylation site has been shown to be important for full transcriptional activity induced by IFNs (24,45). Although the role of p38 MAPK signaling in Ser727 phosphorylation upon IFN signaling is still controversial, the present study clearly shows for the first time the obligatory dependence of influenza virus-induced Ser727 phosphorylation of STAT1 on functional p38 MAPK signaling. Such a tyrosine phosphorylation-independent Ser727 phosphorylation step has been observed upon stimulation by different stress stimuli and was shown to be important for transcriptional effects that are independent of STAT1 binding to DNA. Therefore, it is likely that STAT1 can function as a transcriptional co-activator by interacting with DNA-bound factors (47), thereby modulating influenza A virus-induced cytokine and chemokine expression.
Several studies have suggested that the STAT2 transactivating domain provides IFN-stimulated gene factor 3 transactivation function, arguing that the STAT1 transactivating domain and its phosphorylation at serine 727 are not essential for type I IFN-induced transcriptional activity (48,49). Here, a more recent report showing a beneficial impact of Ser727 phosphorylation on type I IFN-induced transcription was confirmed by promoter studies using the STAT1 Y701F and Y701F/S727A double mutants (45).
Dysregulation of early cytokine induction in HPAIV infection appears to determine disease severity by enhancing viral pathogenicity (1,2). The same cytokines that orchestrate the infiltration of immune cells, resulting in phagocytosis and intracellular killing of the pathogen and the control of infection, are responsible for tissue remodeling and organ damage when produced in excessive amounts. Consequently, although their primary function is to protect the host and repair tissue when injured, these cytokines are mediators of disease and thus are targets for anti-inflammatory therapy (50). Inhibition of p38 MAP kinase in vivo by intraperitoneal treatment with the chemical compound SB 202190 clearly demonstrated that H5N1-induced cytokine hyperactivation was nearly completely abolished, and this impairment was independent of reduced viral replication, as shown by stimulation with the dsRNA analog poly(I:C). A major drawback of anti-inflammatory therapy against infections is a reduction in the antiviral host gene response, allowing the pathogen to propagate unhindered, thereby supporting further spread. In contrast, SB 202190-mediated inhibition of p38 MAPK signaling in vivo even led to reduced viral propagation, indicating the presence of a virus-supportive function for p38 in influenza A virus-infected animals. Previously, it was hypothesized that virus internalization is impaired by p38 MAPK inhibition due to reduced early endosome antigen 1 (EEA1) phosphorylation upon TLR4-MyD88 signaling, which has been described to enhance endocytosis (35). Furthermore, the retention of viral ribonucleoprotein complexes in the nucleus was observed with defective MAPK p38 signaling and linked to reduced phosphorylation of viral nucleoprotein (34). However, although all of the proteins involved are expressed in both HUVEC and A549 cells, reduced viral replication was not observed in either cell type.
Whether these mechanisms described in vitro might be the reasons for impaired viral replication in vivo needs to be further analyzed.
In conclusion, the present study reveals for the first time that inhibition of the p38 MAPK pathway significantly protects mice from lethal H5N1 infection. Furthermore, an overall virus-supportive function of p38 MAP kinase was confirmed in infected animals, although considerable levels of ongoing viral replication were still observed. These findings demonstrate that early suppression of cytokine amplification by inhibiting p38 activity leads to significant protection of mice from lethal influenza virus infection and emphasizes the important role of the cytokine storm in the viral pathogenicity of HPAIV. Targeting p38 MAP kinase might thus be a promising approach for antiviral intervention.
Bacterial supergroup-specific "cost" of Wolbachia infections in Nasonia vitripennis
Abstract The maternally inherited endosymbiont, Wolbachia, is known to alter the reproductive biology of its arthropod hosts for its own benefit and can induce both positive and negative fitness effects in many hosts. Here, we describe the effects of the maintenance of two distinct Wolbachia infections, one each from supergroups A and B, on the parasitoid host Nasonia vitripennis. We compare the effect of Wolbachia infections on various traits between the uninfected, single A‐infected, single B‐infected, and double‐infected lines with their cured versions. Contrary to some previous reports, our results suggest that there is a significant cost associated with the maintenance of Wolbachia infections where traits such as family size, fecundity, longevity, and rates of male copulation are compromised in Wolbachia‐infected lines. The double Wolbachia infection has the most detrimental impact on the host as compared to single infections. Moreover, there is a supergroup‐specific negative impact on these wasps as the supergroup B infection elicits the most pronounced negative effects. These negative effects can be attributed to a higher Wolbachia titer seen in the double and the single supergroup B infection lines when compared to supergroup A. Our findings raise important questions on the mechanism of survival and maintenance of these reproductive parasites in arthropod hosts.
| INTRODUCTION
Wolbachia are maternally inherited, obligatory intracellular endosymbionts of the order Rickettsiales (Hertig & Wolbach, 1924), which are widely found in arthropods and filarial nematodes (Bandi et al., 1998;Rousset et al., 1992;Weinert et al., 2015). To enhance their own transmission, these bacteria often alter host reproductive biology with mechanisms like male-killing, feminization, parthenogenesis, and cytoplasmic incompatibility (CI) (Werren et al., 2008). While CI leads to an increase in the number of infected individuals in the population, male-killing and feminization shift the offspring sex ratio towards females, which is the transmitting sex for Wolbachia. Thus, Wolbachia increases the fitness of infected hosts over uninfected ones as it increases its own rate of transmission. The vast majority of Wolbachia-host association studies reveal many negative effects on the hosts. In addition to reproductive traits, many other life-history traits like longevity and developmental time are also known to be compromised. A review of such negative effects of Wolbachia on hosts where CI is prevalent is presented in Table 1. In Trichogramma kaykai and T. deion, the infected (thelytokous) line shows reduced fecundity and adult emergence rates compared with the antibiotically cured (arrhenotokous) lines (Hohmann et al., 2001;Tagami et al., 2001). In Leptopilina heterotoma, a Drosophila parasitoid, adult survival rates, fecundity, and locomotor performance of both sexes are severely compromised in Wolbachia-infected lines (Fleury et al., 2000). Larval mortality has been observed in both sexes of insecticide-resistant Wolbachia-infected lines of Culex pipiens (Duron et al., 2006). Wolbachia infections can also result in a range of behavioral changes and altered phenotypes in Aedes aegypti (Turley et al., 2009). While these cases highlight a parasitic effect of Wolbachia, there are several examples where no such effect is discernible (Hoffmann et al., 1996).
Moreover, there are also examples where Wolbachia has now become a mutualist and offers specific and quantifiable benefits to its host. One such example of an obligate mutualism with Wolbachia has been reported in the common bedbug Cimex lectularius where Wolbachia, found to be localized in bacteriomes, provides essential B vitamins needed for growth and fertility (Hosokawa et al., 2010). Such examples of arthropod-Wolbachia mutualism have now been reported from various arthropod taxa (Miller et al., 2010;Pike & Kingcombe, 2009). This shift from parasitic to mutualistic effect can also happen in facultative associations as seen in Drosophila simulans, where within a span of just two decades, Wolbachia has evolved from a parasite to a mutualist (Weeks et al., 2007).
The negative effects of Wolbachia on their hosts are not unexpected. The presence of bacteria within a host entails sharing of nutritional and other physiological resources (Kobayashi & Crouch, 2009;Whittle et al., 2021), especially with Wolbachia, as they are obligate endosymbionts and cannot survive without cellular resources derived from their hosts (Foster et al., 2005;Slatko et al., 2010). Accordingly, Wolbachia is known to compete with the host for key resources like cholesterol and amino acids in A. aegypti (Caragata et al., 2014). The precise molecular mechanisms of many of these negative effects have not been ascertained and are generally ascribed to partitioning-off of host nutrients for its benefit, but what is clear is that Wolbachia infections can impose severe nutritional demands on their hosts (Ponton et al., 2014).
However, it is also known that Wolbachia can elicit antipathogenic responses from their hosts where the host resistance or tolerance to the infection increases (Zug & Hammerstein, 2015). For example, Wolbachia induces host methyltransferase gene Mt2 towards antiviral resistance against Sindbis virus in D. melanogaster (Bhattacharya et al., 2017). Wolbachia can utilize the immune deficiency (IMD) and Toll pathways (Pan et al., 2018) and increase reactive oxygen species (ROS) levels in Wolbachia-transfected A. aegypti mosquitoes, inhibiting the proliferation of the dengue virus (Pan et al., 2012). Such immune responses require additional allocation of resources, which can further affect other physiological traits of the host. This concept of a "cost of immunity" is well-established and suggests a trade-off between immunity and other life-history traits (Zuk & Stoehr, 2002). For example, elevated ROS levels negatively affect many host traits like longevity and fecundity (Dowling & Simmons, 2009;Monaghan et al., 2009;Moné et al., 2014;Selman et al., 2012). Thus, there is sufficient evidence to conclude that Wolbachia can have substantial negative effects on the overall fitness of its host.
One of the arthropod hosts infected by Wolbachia is the parasitoid wasp Nasonia vitripennis. N. vitripennis, being cosmopolitan, has been used to study Wolbachia distribution, acquisition, spread, and Wolbachia-induced reproductive manipulations (Landmann, 2019;Werren et al., 2008). However, the effect of the endosymbiont on the life-history traits of this wasp remains poorly understood with conflicting reports. N. vitripennis harbor two Wolbachia supergroup infections, one each from supergroup A and supergroup B (Perrot-Minnot et al., 1996), and the presence of these two infections has been found in all lines of N. vitripennis from continental North America to Europe (Raychoudhury et al., 2010), indicating that it has reached fixation across the distribution of its host. The two Wolbachia in N. vitripennis together cause complete CI, but single infections of supergroup A Wolbachia cause incomplete CI while supergroup B infections still show complete CI (Perrot-Minnot et al., 1996). In some N. vitripennis lines, Wolbachia has been reported to cause enhanced fecundity (Stolk & Stouthamer, 1996), but a similar effect has not been observed in some other lines (Bordenstein & Werren, 2000).
In this study, we investigate what, if any, negative effects CI-inducing Wolbachia infections have in N. vitripennis. We investigate the effects of Wolbachia infections in a recently acquired line of N. vitripennis from the field. This line, like other N. vitripennis lines, has two Wolbachia infections, one each from supergroups A and B.
Sequencing of the five alleles from the well-established multi-locus strain typing (MLST) system (Baldo et al., 2006)
| Nasonia vitripennis lines used, their Wolbachia infections, and nomenclature
The N. vitripennis NV-PU-14 line was obtained from Mohali, Punjab, India, in 2015. NV-PU-14 was cured of Wolbachia by feeding the females with 1 mg/ml tetracycline in 10 mg/ml sucrose solution for at least two generations (Breeuwer & Werren, 1990). The curing was confirmed by PCR using supergroup-specific ftsZ primers (Baldo et al., 2006), and CI crosses between the infected and uninfected lines. NV-PU-14 also served as the source strain for separating the two Wolbachia infections into single A and single B infected wasp lines.
To separate the Wolbachia supergroups, we utilized relaxed se- … As these lines were in culture for 3 years, many of the infected lines were cured again to obtain "recently cured" lines, to minimize the effects of any host divergence that might have accumulated within them.
Another N. vitripennis line, NV-KA, obtained from Bengaluru, Karnataka, India, in 2016, was similarly named wAwB(KA). The MLST sequences of the two Wolbachia strains (one each from supergroups A and B), even in wAwB(KA), were found to be identical to wAwB(PU), and were also identical to all other N. vitripennis studied across the world (Prazapati, personal communication). wAwB (KA) was also cured of Wolbachia to obtain 0(wAwB KA).
All these wasp lines were raised on Sarcophaga dux fly pupae with a generation time of 14-15 days at 25°C, 60% humidity, and a continuous daylight cycle.
| Sequential mating and sperm depletion of the males
To test the effect of Wolbachia on male reproductive traits like mating ability, individual males were assayed for the number of copulations they can perform and sperm depletion. As N. vitripennis is haplodiploid, every successful mating will result in both female and male progenies while an unsuccessful one will result in all-male progenies. The males used were obtained from virgin females hosted with one fly pupa for 24 h and were not given any external sources of nutrition (usually a mixture of sucrose in water) before the experiment.
Each male was then mated sequentially with virgin females from the same line. At the first sign of a male not completing the entire mating behavior (Jachmann & Assem, 1996), it was given a rest for half an hour and was subjected to mating again, until it stopped mating altogether. The mated females were hosted after a day with one fly pupa for 24 h. The females were then removed, and the offspring were allowed to emerge and were then counted. The average number of copulations and the number of copulations before sperm depletion were compared using the Kruskal-Wallis test with a significance level of .05. The Mann-Whitney U test, with a significance level of .05, was used for comparisons between two groups.
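As a minimal illustration of the rank-based comparison described above, the sketch below computes the Kruskal-Wallis H statistic in Python with NumPy. It omits tie correction and p-value computation, and the copulation counts are hypothetical; this is not the authors' actual analysis code.

```python
import numpy as np

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    data = np.concatenate(groups)
    n = len(data)
    # Assign 1-based ranks; ties would need averaged ranks,
    # which this minimal version does not handle.
    order = np.argsort(data)
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    h = 0.0
    start = 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * (r.mean() - (n + 1) / 2) ** 2
        start += len(g)
    return 12.0 / (n * (n + 1)) * h

# Hypothetical copulation counts for three infection-status lines:
h = kruskal_wallis_h([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0])
```

For real analyses, scipy.stats.kruskal and scipy.stats.mannwhitneyu provide tie handling and p-values.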
| Host longevity, family size, and fecundity
To test whether the presence of Wolbachia has any influence on longevity, emerging wasps of both sexes were kept individually in ria vials at 25°C, without any additional nutrition. Survival following emergence was measured by counting the number of dead individuals every 6 h. The Kaplan-Meier analysis, followed by log rank statistics, was used to identify differences between the strains with a significance level of .05.
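The Kaplan-Meier estimate used for the survival comparison can be sketched as follows; this is a minimal pure-Python version with hypothetical death times (in hours), not the statistical package used in the study.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  time of death or censoring for each individual
    events: 1 = death observed at that time, 0 = censored
    Returns (time, S(t)) pairs at each distinct death time.
    """
    n_at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        if deaths:
            # Survival drops by the fraction of at-risk individuals dying at t.
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        # Both deaths and censored individuals leave the risk set.
        n_at_risk -= sum(1 for ti in times if ti == t)
    return curve

# Hypothetical survival times (hours) with no censoring:
curve = kaplan_meier([12, 18, 18, 30], [1, 1, 1, 1])
```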
To test for the effect of Wolbachia infections on the adult family size of virgin and mated females, each female was sorted at the pupal stage and separated into individual ria vials. To enumerate the brood size of mated females, some of these virgins were offered single males from the same line and observed till mating was successful.
All the females were then hosted individually with one fly pupa for 24 h. These were kept at 25°C for the offspring to emerge, which were later counted for family size, by randomizing the ria vials in a double-blind assay. The differences between groups were compared using the Kruskal-Wallis test with a significance level of .05. The Mann-Whitney U test, with a significance level of .05, was used to compare two groups.
To investigate whether Wolbachia affects the female fecundity, emerged females were hosted with one host for 24 h. The host pupa was placed in a foam plug, so that only the head portion of the pupa was exposed and available for the females to lay eggs. The females were removed after 24 h, and the eggs laid were counted under a stereomicroscope (Leica M205 C). The differences in fecundity were compared between groups using the Kruskal-Wallis test with a significance level of .05. The fecundity difference between two groups was compared using the Mann-Whitney U test with a significance level of .05.
| Estimation of the relative density of Wolbachia infections across different developmental stages of N. vitripennis
To collect the different developmental stages, females were hosted
| The presence of Wolbachia reduces the life span of both males and females
Wolbachia can compete with the host for available nutrition, which can increase nutritional stress, resulting in a shortened life span for many hosts (Caragata et al., 2014;McMeniman et al., 2009).
Therefore, we first investigated the effect of Wolbachia infections on the survival of both male and female wasps. As Figure 1
| The presence of Wolbachia reduces the number of copulations a male can perform
Wolbachia is known to be associated with a reduction in the number of matings a male can perform in Ephestia kuehniella (Sumida et al., 2017). What is evident is that the presence of Wolbachia is also associated with a reduction in the capability of a male to mate. Furthermore, by curing the infected lines again, we show that this decrease is not due to the host genotype but is an effect of the presence of Wolbachia in these lines.
| Wolbachia-infected males deplete their sperm reserves faster than the uninfected ones
Nasonia vitripennis males are prospermatogenic (Boivin et al., 2005): each male emerges with its full complement of mature sperm and has not been reported to produce any more during the rest of its life span (Chirault et al., 2016). Thus, if a single male is mated sequentially with as many females as it can mate with, it should eventually run out of this full complement of sperm and produce all-male broods even after successful copulation. As Figure 2 indicates, each male did run out of sperm at the tail end of this continuous mating and produced only male progenies (shown by black dots). We looked at the number of matings done by these males before sperm depletion to see whether Wolbachia affects sperm production in the males (Figure 3).

Figure 2. Wolbachia-infected males show a reduction in the number of copulations. Males from different Wolbachia infection status strains were mated sequentially until each of them stopped mating. Some of the matings had "no emergence" of progenies because of poor host quality (shown by white dots). The results show that the presence of Wolbachia is associated with the reduction in the number of copulations a male can perform. The figure also shows whether the progenies of these sequential copulations produce any daughters or not, as a measure of sperm depletion. The details of sperm depletion are shown in Figure 3. Sample sizes for the strains 0(PU), wA(PU), 0(wA PU), wB(PU), 0(wB PU), wAwB(PU), and 0(wAwB PU) were n = 7, n = 7, n = 7, n = 6, n = 5, n = 6, and n = 7, respectively.
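Because Nasonia is haplodiploid, sperm depletion can be read directly off the brood compositions: the first all-male brood after a run of successful copulations marks the depletion point. A minimal sketch with hypothetical daughter counts per sequential mating:

```python
def copulations_before_depletion(daughters_per_mating):
    """Count copulations before sperm depletion.

    daughters_per_mating: number of daughters emerging from each
    sequential mating of one male. In haplodiploid Nasonia, an
    all-male brood (zero daughters) after copulation signals that
    no sperm was transferred, i.e., the male's reserve is depleted.
    """
    count = 0
    for daughters in daughters_per_mating:
        if daughters == 0:   # all-male brood: sperm depleted
            break
        count += 1
    return count

# Hypothetical brood records for one male:
n = copulations_before_depletion([14, 11, 9, 7, 0, 0])
```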
| Wolbachia-infected females produce fewer offspring
Wolbachia is known to have a negative impact on the progeny family size of its host (Hoffmann et al., 1990;Hohmann et al., 2001). To test whether a similar effect is seen in N. vitripennis, we enumerated the family sizes for both virgin and mated females for the four different Wolbachia-infected lines and their recently cured counterparts.
As Figure 4
| Wolbachia negatively impacts the fecundity of infected females
To check whether the differences in the family sizes between the different infected lines of N. vitripennis were due to the number of eggs being laid by the females, we looked at the fecundity of both virgin and mated females across these lines. Among the virgin females … The results thus suggest a negative effect of Wolbachia on egg production in females. The assay also established that the differences in family sizes can be due to differences in the fecundity of the females.
| Relative Wolbachia density in single and multiple Wolbachia-infected N. vitripennis lines
Wolbachia density has a major role to play in expressing the effects of the infection on host biology (Hoffmann et al., 1996; Min & Benzer, 1997). An increase in cellular Wolbachia density is often associated with a greater expression of their effects (Breeuwer & Werren, 1993). Thus, we estimated Wolbachia titers across the different developmental stages of N. vitripennis.
| DISCUSSION
The results from this study (summarized in Table 2) … However, the egg-to-larval-to-pupal stage mortality could also have an effect on the brood sizes, but these were not assayed.
In most cases, these negative effects disappear with the removal of Wolbachia … and wB(PU) lines (i.e., a synergistic effect). Since the two supergroup infections are bidirectionally incompatible with each other, it is plausible that they are also competing for the host nutrition, which can further enhance the negative impacts of these infections.
Previous reports have suggested a direct correlation between
Wolbachia density and the level of CI (Breeuwer & Werren, 1993; Dutton & Sinkins, 2004; Ikeda et al., 2003; Noda et al., 2001; Ruang-Areerate & Kittayapong, 2006). Our results also suggest that the cost of Wolbachia maintenance is correlated with the density of the Wolbachia titer, and hence with the intensity of the effect of CI. Our results also confirm these previous reports of a positive correlation between Wolbachia abundance and the level of CI induced, not only in N. vitripennis but in other insect taxa as well (Kondo et al., 2002).
wB(PU) shows complete CI, while wA(PU) shows incomplete CI (Figure S1). Thus, higher levels of Wolbachia in wB(PU) than in wA(PU) can also explain the more severe effects in wB(PU) than in wA(PU).
The negative fitness effects of CI-inducing Wolbachia and the nutritional competition raise important questions about the maintenance of these endosymbionts over long evolutionary time scales.
Theoretical studies indicate that evolution towards mutualism can aid the long-term persistence of these maternally inherited reproductive parasites (Prout, 1994; Turelli, 1994; Zug & Hammerstein, 2015). Host suppressor alleles have been identified which confer resistance against feminizing (Rigaud et al., 1999) and male-killing Wolbachia (Hornett et al., 2006). However, no such host genetic factors have been found for CI-inducing Wolbachia, especially in N. vitripennis. Therefore, a possible explanation for the maintenance of these multiple infections comes from the high efficiency of transmission of these infections in N. vitripennis, which is nearly 100% (Breeuwer & Werren, 1990). Theoretical studies also suggest that, even in the presence of selective pressures, multiple infections are maintained and transmitted owing to the fitness advantages conferred and to CI (Vautrin et al., 2008).
Another possibility can be that these Wolbachia infections in N.
vitripennis are relatively recent, the evidence of which comes from the rapid spread of Wolbachia in populations of N. vitripennis across North America and Europe (Raychoudhury et al., 2010). These recent infections, although bearing a cost on the host at present, might eventually lead to the evolution of host resistance against them.
Our results indicate supergroup B to be a "stronger" Wolbachia than supergroup A and any competition for nutritional resources and niche habituation between them should drive out supergroup A Wolbachia. Moreover, wA(PU) has milder effects on females with the reduction in longevity being the only pronounced negative effect.
Therefore, the continuation of this supergroup infection is difficult to explain. One possibility could be the supergroup A infection conferring mutualistic effects on the host. This strain is closely related to other supergroup A Wolbachia strains like wMel in D. melanogaster and wHa, wAu, and wRi in D. simulans (Díaz-Nieto et al., 2021). wMel in D. melanogaster and wHa, wAu, and wRi in D. simulans are known to provide defense against viral infections to their hosts (Bhattacharya et al., 2017;Pimentel et al., 2021;Teixeira et al., 2008). The continued presence of supergroup A Wolbachia in N. vitripennis could be due to such defenses against viral infections, but this hypothesis remains to be tested.
The higher cost of maintenance of supergroup B Wolbachia can be an attribute of the CI phenotype induced by supergroup B Wolbachia. Complete CI (i.e., nearly 100%) is a rare event reported mainly for supergroup B Wolbachia in Culex pipiens, Aedes aegypti (Sinkins et al., 2005; Xi et al., 2005), and N. vitripennis (Figure S1 and Bordenstein et al., 2006). This essentially means that nearly the entire sperm complement of each male carries the Wolbachia-induced modification.
CI-inducing Wolbachia is known to have negative effects on various physiological traits in the vast majority of its host population (summarized in Table 1). The present study also suggests such effects, or a "cost," associated with the maintenance of Wolbachia infection in N. vitripennis. This is in contrast to previous reports suggesting positive fitness effects (Stolk & Stouthamer, 1996) or no fitness effects (Bordenstein & Werren, 2000) of Wolbachia on N. vitripennis. However, the strains used here are all from India, and the negative effects seen can be unique to these lines. Although the lines used here have the same or very similar Wolbachia as far as sequence uniformity is concerned across the five MLST alleles, lines from other continents need to be analyzed to confirm whether this effect is ubiquitous in N. vitripennis.
ACKNOWLEDGMENTS
We thank the Indian Institute of Science Education and Research (IISER) Mohali for the funding and graduate fellowship for AT.
Partial funding was obtained from grant no. BT/PR14278/ BRB/10/1417/2015, Department of Biotechnology, Government of India, awarded to Nagaraj Guru Prasad and RR.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The raw data for all the experiments have been archived at Dryad (https://doi.org/10.5061/dryad.w0vt4b8s9).
Studying hemispheric lateralization of 4-month-old infants from different language groups through near-infrared spectroscopy-based connectivity
Introduction: Early monolingual versus bilingual experience affects linguistic and cognitive processes during the first months of life, as well as functional activation patterns. A previous study explored the influence of a bilingual environment in the first months of life on resting-state functional connectivity and reported no significant difference between language groups.
Methods: To further explore the influence of a bilingual environment on brain development, we used the resting-state functional near-infrared spectroscopy public dataset of the 4-month-old infant group in the sleep state (30 Spanish; 33 Basque; 36 bilingual). Wavelet transform coherence, graph theory, and Granger causality methods were applied to the functional connectivity of the frontal lobes.
Results: The results showed that functional connectivity strength was significantly higher in the left hemisphere than in the right hemisphere in both the monolingual and bilingual groups. The graph theoretic analysis showed that the characteristic path length was significantly higher in the left hemisphere than in the right hemisphere for the bilingual infant group. Contrary to the monolingual infant group, a left-to-right direction of information flow was found in the frontal regions of the bilingual infant group in the effective connectivity analysis.
Discussion: The results suggest that the left hemispheric lateralization of functional connectivity in frontal regions is more pronounced in the bilingual group than in the monolingual group. Furthermore, effective connectivity analysis may be a useful method to investigate the resting-state brain networks of infants.
Keywords: infant brain network, resting-state, functional near-infrared spectroscopy, functional connectivity, hemispheric lateralization
Introduction
The human brain is a complex dynamic system that usually behaves as a structurally and functionally interrelated network (1). Studying neural connectivity patterns can provide valuable insights into the working mechanisms of the human brain. Linguistic function influences brain development in an age-dependent manner (2); however, the neural mechanisms of bilingualism, as reflected in brain connectivity during early brain development, remain unclear. Previous studies showed that early environmental factors, including caregiver education level or socioeconomic status, can alter functional brain connectivity (3, 4). One study proposed that a bilingual environment in the first months of life triggers a series of biochemical events at the microscopic level, increasing the generation of cellular substrates that regulate neuroplasticity and the timing of their synthesis. This may lead to structural changes at the macroscopic level, manifested as an increase in the size of specific brain language areas and stronger connectivity between brain regions associated with language function (5).
Until now, few studies have focused on functional connectivity differences in resting-state brain networks in infants as young as 4 months old. Based on these findings, the purpose of this study was to compare whether 4-month-old infants growing up in monolingual and bilingual environments exhibit differences in functional connectivity. To address this question, Blanco et al. (6) studied resting-state functional connectivity (RSFC) at the group level in 4-month-old infants with functional near-infrared spectroscopy (fNIRS); however, statistical tests showed no differences in RSFC between infants growing up in monolingual and bilingual backgrounds at 4 months of age. We therefore performed a complementary analysis, based on their study, from the perspective of hemispheric lateralization.
RSFC can be defined as synchronized brain activity between brain regions that function together in supporting functionally relevant sensory and cognitive processes. RSFC can be measured in humans of all ages and provides a window onto neural specialization throughout the lifespan. Studies have shown that RSFC is reliable for assessing language-related networks in clinical settings (7). fNIRS is a promising technology: it allows continuous data acquisition over long periods at high temporal sampling rates, and it places few physical constraints on participants. As an emerging neuroimaging tool, fNIRS has been successfully used to localize brain activation during cognition and to determine the functional connectivity of resting-state brain activity (8).
MRI studies of adults suggested that long-term exposure to two languages may alter functional connectivity in the brain (9, 10). Studying the RSFC in monolingual and bilingual infants may shed light on the effects of long-term environmental factors (e.g., early bilingual experiences) on the intrinsic properties of functional brain systems.
Based on previous studies of lateralization related to functional brain organization (11)(12)(13), and considering the spatial resolution and coverage of the fNIRS setting, it is expected that differences in functional connectivity will appear in regions of the frontal lobe involved in language networks in bilingual infants.
Data acquisition
The experimental dataset of the study subjects was taken from the Open Science Framework (OSF) website (6). The dataset comprised Basque monolingual infants (n = 33), Spanish monolingual infants (n = 30), and bilingual (Spanish-Basque) infants (n = 36), with a mean age of 4 months (Figure 1A). The datasets were recorded during the infants' sleep.
Pre-processing of data
The fNIRS signals were pre-processed following Blanco et al. (14): the dataset was pre-processed in MATLAB using in-house scripts and third-party functions. The pre-processing flow is shown below (Figure 2).
(1) First, the raw light intensity data are converted into optical density variations (15).
Analysis of data
To explore differences in brain network connectivity among infants from different language backgrounds, functional connectivity, effective connectivity, and graph theory analyses were performed on the 9-min sleep resting-state fNIRS signals of the 4-month-old infant groups from the three language backgrounds.
Analysis of the functional connectivity of the bilateral frontal lobes
Functional connectivity is a non-directional method for analyzing brain functional connectivity: it assesses the statistical correlation of activity in different regions of the brain from a functional integration perspective (23). In this study, the functional connectivity of the bilateral frontal lobes was analyzed with the Wavelet Transform Coherence (WTC) method. WTC is a time-frequency analysis method that has been successfully applied to resting-state brain network connectivity (24). It evaluates the local intercorrelation of two time series and can therefore reveal local phase information that is difficult to obtain with traditional time series analysis methods (e.g., the Fourier transform) (25). This study used the wavelet transform coherence toolkit developed by Grinsted et al. (download at: http://noc.ac.uk/using-science/crosswavelet-wavelet-coherence). First, the correlation between symmetrical channels in the left and right frontal regions (e.g., 1-24 and 2-25 are corresponding channels) was analyzed using the WTC (Figure 1B). All eight channel pairs (1-24, ..., 8-31) were located in the left/right frontal regions. Channels 1, 3, and 5 are located in the left middle frontal gyrus; channel 2 in the left superior frontal gyrus; and channels 4, 6, 7, and 8 in the left inferior frontal gyrus. Channels 24, 26, and 28 are located in the right middle frontal gyrus; channel 25 in the right superior frontal gyrus; and channels 27, 29, 30, and 31 in the right inferior frontal gyrus (Figure 1C). The mean coherence values in the 0.02-0.25 Hz band were calculated to represent the strength of the RSFC. We conducted eight one-way ANOVAs on the functional connectivity values between the different language groups, one for each of the eight channel pairs. Multiple comparison analysis was performed between the language groups for channel pairs with a significant one-way ANOVA.
The Fisher-z transform was performed on the coherence values before the multiple comparison analysis.
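The Fisher-z transform is the inverse hyperbolic tangent of the coherence (or correlation) value, which makes its sampling distribution approximately normal before parametric comparison. A minimal stdlib-only sketch:

```python
import math

def fisher_z(r):
    # Fisher z-transform: z = arctanh(r) = 0.5 * ln((1 + r) / (1 - r)),
    # defined for values strictly inside (-1, 1).
    return math.atanh(r)

z = fisher_z(0.5)  # ≈ 0.5493
```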
Functional connectivity analysis of frontal regions in the ipsilateral hemisphere
Correlation analysis of the time series of HbO signals between any two channels within the frontal regions of the left and right hemispheres (L1-L8, R1-R8) was performed using WTC analysis. The average of the coherence values in the 0.02-0.25 Hz band was calculated to characterize the functional connectivity strength within each cerebral hemisphere. This yielded the left and right intrahemispheric frontal functional connectivity matrices. The average intrahemispheric functional connectivity matrix based on HbO signals was plotted separately for the subjects in each of the three language background conditions. Each element of the functional connectivity matrix is the correlation value between the two channels indicated by its row and column coordinates. We first performed a two-way ANOVA.
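Wavelet transform coherence itself requires a dedicated toolbox (the study used Grinsted et al.'s MATLAB toolkit). As a simplified stand-in for the band-averaging step, the sketch below computes Welch magnitude-squared coherence in Python and averages it over the 0.02-0.25 Hz band; the sampling rate, segment length, and signals are illustrative assumptions, not the study's acquisition parameters.

```python
import numpy as np

def welch_coherence(x, y, fs, nperseg=256):
    """Welch-averaged magnitude-squared coherence (requires len(x) >= nperseg).

    The segment count cancels in the coherence ratio, so the
    cross-/auto-spectra are simply accumulated over segments.
    """
    step = nperseg // 2
    win = np.hanning(nperseg)
    Pxx = Pyy = Pxy = 0
    for start in range(0, len(x) - nperseg + 1, step):
        xs = (x[start:start + nperseg] - x[start:start + nperseg].mean()) * win
        ys = (y[start:start + nperseg] - y[start:start + nperseg].mean()) * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Pxx = Pxx + X * np.conj(X)
        Pyy = Pyy + Y * np.conj(Y)
        Pxy = Pxy + X * np.conj(Y)
    f = np.fft.rfftfreq(nperseg, d=1 / fs)
    coh = np.abs(Pxy) ** 2 / (Pxx.real * Pyy.real)
    return f, coh

def band_mean_coherence(x, y, fs, band=(0.02, 0.25), nperseg=256):
    # Mean coherence over the frequency band used in the study.
    f, coh = welch_coherence(x, y, fs, nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return float(coh[mask].mean())

fs = 5.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 540, 1 / fs)              # 9 min of samples
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(t.size)
self_coh = band_mean_coherence(x, x, fs)   # identical signals: coherence 1
```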
Small-world characterization in the ipsilateral hemisphere
First, we traversed all the channels in the left and right cerebral hemispheres, computed the wavelet coherence values between every pair of HbO signal sequences, and obtained the adjacency matrix. The i-th row of the adjacency matrix contains the correlation values between channel i and the other channels, representing the strength of functional connectivity. The statistical properties of the small-world network are then calculated from the adjacency matrix: the clustering coefficient evaluates the degree of node aggregation, and the characteristic path length evaluates the global characteristics of the network. The distance d_{ij} between node i and node j in the network is the number of edges on the shortest path connecting the two nodes. The characteristic path length L_{mean} is the average of the shortest path lengths between any two nodes in the network, where N is the number of nodes, i.e.,

L_{mean} = \frac{1}{N(N-1)} \sum_{i \neq j} d_{ij}

The clustering coefficient characterizes the clustering properties of the network nodes. Assuming that node i is associated with k_i edges, at most k_i(k_i - 1)/2 edges can exist among these k_i neighboring nodes, and the actual number of edges existing between them is E_i. The ratio of E_i to the maximum possible number of edges, k_i(k_i - 1)/2, is defined as the clustering coefficient of node i, C_i, i.e.,

C_i = \frac{2 E_i}{k_i (k_i - 1)}

The clustering coefficients of all nodes in the network are averaged to obtain the clustering coefficient of the whole network, C_{mean}, where N is the number of nodes, i.e.,

C_{mean} = \frac{1}{N} \sum_{i=1}^{N} C_i

In addition, the brain functional network studied in this paper is an unweighted network, in which any two nodes are in one of two states, connected or unconnected. To distinguish connected from unconnected, we need to define a threshold T.
When the connection strength between two nodes is greater than or equal to this threshold, the two nodes are considered connected; when it is less than this threshold, they are considered unconnected. Once the threshold is set, the brain network can be built from the fNIRS signals to study its small-world properties. Note that if the threshold is set too high, the network will contain too many isolated nodes and may miss important information; if it is set too low, the connection density becomes too large and too many spurious connections arise. The range of the threshold T is determined by conditions on the average node degree and sparsity of the network, which all subjects are required to satisfy (26). First, the average node degree must be greater than twice the natural logarithm of the number of nodes N (here N = 23); therefore, for all subjects, the average node degree of the network must be no less than 6. Second, the network density (the ratio of the number of edges actually present in the network to the maximum possible number of edges) must be less than 50%. The threshold T satisfying the above conditions ranges over 0.51 < T < 0.71. To verify the robustness of the graph theory analysis results, we varied the threshold T from 0.51 to 0.71 in steps of 0.01. Finally, we averaged the resulting 21 C and L values for each subject separately (27).
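The thresholding step and the two small-world metrics can be sketched as follows; this toy Python version (binary network, breadth-first-search shortest paths) stands in for the authors' MATLAB pipeline, and the example adjacency matrix is hypothetical.

```python
import numpy as np
from collections import deque

def binarize(adj, T):
    # Edge exists where connection strength >= threshold T; no self-loops.
    A = (np.asarray(adj) >= T).astype(int)
    np.fill_diagonal(A, 0)
    return A

def clustering_coefficient(A):
    # C_mean: average over nodes of 2*E_i / (k_i * (k_i - 1)).
    N = len(A)
    C = np.zeros(N)
    for i in range(N):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            continue  # convention: C_i = 0 for degree < 2
        E = A[np.ix_(nbrs, nbrs)].sum() / 2
        C[i] = 2 * E / (k * (k - 1))
    return C.mean()

def characteristic_path_length(A):
    # L_mean: BFS shortest paths, averaged over reachable pairs only.
    N = len(A)
    total, pairs = 0, 0
    for s in range(N):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs if pairs else float("inf")

# Hypothetical 3-channel coherence matrix, thresholded at T = 0.6:
W = np.array([[1.0, 0.8, 0.7],
              [0.8, 1.0, 0.9],
              [0.7, 0.9, 1.0]])
A = binarize(W, 0.6)
C_mean = clustering_coefficient(A)
L_mean = characteristic_path_length(A)
```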
| Analysis of the effective connectivity of the bilateral frontal lobes
Effective connectivity analysis is a directional method of brain connectivity analysis. The effective connectivity between different brain regions describes the causal relationships among their interactions, and the results can reflect the direction of information flow between brain regions. In this study, the effective connectivity of the bilateral frontal lobes was analyzed using the Granger causality (GC) mathematical model (23, 28). The GC analysis is based on an autoregressive model: when analyzing two time series, signal A is said to "Granger-cause" signal B if information from the past of A helps to predict B better than considering only information from the past of B (28). To explore the hemispheric lateralization properties of infants under the different language background conditions, this study applied GC analysis to the corresponding channels between the frontal lobes of the left and right hemispheres (e.g., 2-25 and 5-28 are corresponding channels). The GC values were calculated for each corresponding channel pair in both the left-to-right and right-to-left directions. The GC value in the left-to-right direction minus the GC value in the right-to-left direction was defined as the difference of influence (DOI), which indicates the net causality in the left-to-right direction (29). The DOI values were calculated for each channel pair within the left/right frontal regions based on the HbO signals; for example, channel 1 in the left frontal region was paired with channel 24 in the right frontal region. We then averaged the DOI values of all channel pairs as the effective connectivity value for each subject (27). DOI values were calculated separately for each language background condition.
In this study, we used the Granger causal connectivity analysis (GCCA) MATLAB toolkit developed by the University of Sussex to analyze the resting-state effective connectivity (29). This toolkit includes data pre-processing, data stationarity checks, model validity and consistency verification, and DOI calculation. The choice of the lag order in the GC model affects the Granger output; therefore, to verify the robustness of the effective connectivity analysis (28), multiple lag orders (2nd, 3rd, 4th, and 5th order) were used with the GC method. We first performed four one-way ANOVAs on the DOI values between the different language groups at the four orders (2, 3, 4, and 5). Multiple comparison analyses were conducted for orders with significant one-way ANOVA results, and FDR correction was applied to the statistics.
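The DOI computation can be illustrated with a bare-bones Granger causality estimate: fit a restricted autoregressive model (the target's own lags) and an unrestricted one (adding the other channel's lags) by ordinary least squares, and take the log ratio of residual variances. This is a sketch of the idea behind the GCCA toolkit, not its implementation; the simulated signals are hypothetical.

```python
import numpy as np

def ar_residual_var(target, predictors, p):
    """Residual variance of an OLS fit of target[t] on p lags of each predictor."""
    T = len(target)
    rows = []
    for t in range(p, T):
        row = []
        for s in predictors:
            row.extend(s[t - p:t][::-1])  # lags 1..p of each predictor
        rows.append(row)
    X = np.column_stack([np.ones(T - p), np.array(rows)])
    y = target[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ beta).var()

def granger(x, y, p=2):
    # GC from x to y: improvement in predicting y when x's past is added.
    v_restricted = ar_residual_var(y, [y], p)
    v_full = ar_residual_var(y, [y, x], p)
    return np.log(v_restricted / v_full)

def doi(left, right, p=2):
    # Difference of influence: net left-to-right causality.
    return granger(left, right, p) - granger(right, left, p)

# Hypothetical simulation: y is driven by x with a one-sample lag.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
y[1:] = 0.8 * x[:-1]
y += 0.1 * rng.standard_normal(2000)
net = doi(x, y, p=2)   # positive: information flows from x to y
```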
Results of functional connectivity analysis of bilateral frontal lobes
Statistical tests showed differences in resting-state brain functional connectivity between infants with different language backgrounds (Figure 3). Only the one-way ANOVA for the channel pair "L7-R7" yielded a significant difference in functional connectivity values between the three language groups [F(2, 87) = 5.813, p = 0.0043]. Multiple comparison analysis showed that the functional connectivity of the Basque group was significantly higher than that of the bilingual group [t(87) = 3.409, p = 0.0021]. No significant differences were found between the Basque group and the Spanish group [t(87) = 1.744, p = 0.0696], or between the Spanish group and the bilingual group [t(87) = 1.665, p = 0.0696]. The multiple comparison results were corrected using a false discovery rate (FDR) correction to control the expected percentage of false "discoveries." However, no significant differences were observed in the mean strength of functional connectivity across all eight frontal channel pairs.
Results of functional connectivity analysis of ipsilateral frontal lobes
Each element of the functional connectivity matrix is the correlation value between the two channels represented by the horizontal and vertical coordinates (Figure 4).
The mean of the elements in the functional connectivity matrix was calculated to characterize the functional connectivity strength of each frontal lobe. The elements on the main diagonal of the matrix are the autocorrelation results for each fNIRS channel, with a value of 1; these values were excluded when calculating the mean of the connectivity matrix. A two-way ANOVA showed that the interaction term was not significant [F(2, 174) = 0.2888, p = 0.7495] (Figure 5).
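A sketch of this averaging step, with a hypothetical 3-channel connectivity matrix; a boolean mask simply excludes the unit diagonal before taking the mean.

```python
import numpy as np

def mean_connectivity(conn):
    """Mean of a square connectivity matrix, excluding the diagonal
    autocorrelation entries (which are identically 1)."""
    conn = np.asarray(conn, dtype=float)
    off_diag = ~np.eye(conn.shape[0], dtype=bool)  # mask out the main diagonal
    return float(conn[off_diag].mean())

# Hypothetical 3-channel WTC connectivity matrix (diagonal = autocorrelation)
m = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.6],
              [0.2, 0.6, 1.0]])
strength = mean_connectivity(m)  # mean of the six off-diagonal values
```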
Results of small world characterization in the hemisphere
Based on the functional connectivity matrix of the brain network, the average characteristic path length (L) and clustering coefficient (C) values for the three groups in the left and right frontal regions are displayed in Table 1; asterisks (*) mark significant differences (p < 0.05, FDR correction). Only the bilingual infant group showed a significant difference in the L value [t(29) = 2.497, p = 0.0185], which indicates that infants in the bilingual group had stronger left hemisphere lateralization than infants in the monolingual groups. No significant differences were found in the monolingual groups. We also noted that some of the p-values for the bilingual group were significant when 0.61 < T < 0.71 (Figures 6A,B).
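The small-world measures reported here (characteristic path length L and clustering coefficient C) can be computed from a thresholded connectivity matrix roughly as follows. The matrix and threshold below are hypothetical, and the paper's exact pipeline may differ.

```python
import numpy as np
from itertools import combinations

def small_world_metrics(conn, threshold):
    """L and C of the binary graph obtained by thresholding a connectivity matrix."""
    conn = np.asarray(conn, dtype=float)
    n = conn.shape[0]
    adj = (conn > threshold) & ~np.eye(n, dtype=bool)  # binarize, drop self-loops

    def bfs_dists(src):
        # Shortest hop-distances from src via breadth-first search
        dist = np.full(n, np.inf)
        dist[src] = 0
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(adj[u]):
                    if dist[v] == np.inf:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        return dist

    # Characteristic path length: mean shortest path over all node pairs
    all_dists = [bfs_dists(i) for i in range(n)]
    L = float(np.mean([all_dists[i][j] for i, j in combinations(range(n), 2)]))

    # Clustering coefficient: mean fraction of closed neighbour pairs per node
    cs = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(adj[u, v] for u, v in combinations(nbrs, 2))
        cs.append(2.0 * links / (k * (k - 1)))
    C = float(np.mean(cs))
    return L, C

# Hypothetical fully connected 3-channel network after thresholding
L, C = small_world_metrics([[1.0, 0.8, 0.8],
                            [0.8, 1.0, 0.8],
                            [0.8, 0.8, 1.0]], threshold=0.5)
print(L, C)  # 1.0 1.0
```

Note that L is undefined (infinite) for disconnected graphs, so in practice the threshold T is swept over a range, as the authors do with 0.61 < T < 0.71.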
Results of effective connectivity analysis of bilateral frontal lobes
We noticed that for all lag period parameters, DOI values were negative in the left and right frontal lobes of the monolingual groups, while DOI values were positive in the frontal lobes of the bilingual group (Figure 7A). This shows that the direction of effective connectivity between the left and right frontal lobes of bilingual infants was opposite to that of monolingual infants (Figure 7B). The results of the two multiple comparison analyses were corrected using a false discovery rate (FDR) correction to control the expected percentage of false "discoveries."
Discussion
This study aimed to explore the hemispheric lateralization properties of RSFC in 4-month-old infants based on fNIRS. Functional connectivity in frontal regions of the left and right hemispheres was measured using WTC analysis and GC analysis. The hemispheric lateralization properties of functional connectivity were verified using graph-theoretic analysis.
Several research groups have reported the use of resting-state functional near-infrared spectroscopy (R-fNIRS) for the assessment of functional connectivity (8, 29, 30). Their results show that RSFC analysis based on R-fNIRS data is valid and reliable for studying brain function in healthy and diseased populations: hemodynamic regulation in the resting state, and the connections within and between functional networks, can be measured efficiently by optical methods. In addition, gender differences in brain networks during visual working memory have previously been investigated using GC methods (27). These findings indicate that the analysis methods used in this study are reasonable and valid.
The results of the unilateral functional connectivity analysis showed left hemispheric lateralization of functional connectivity in the frontal regions of both the bilingual and Basque infant groups. The graph theory analysis confirmed that the left hemisphere lateralization of functional connectivity was more pronounced in bilingual infants, and also revealed significant differences in the mean characteristic path lengths between the left and right hemispheres. This may imply that the intrinsic functional organization of infants growing up in a bilingual environment is more strongly shaped. We will later conduct a more detailed analysis using other parameters of the small-world characteristics and other analysis methods in order to gather enough evidence to make these conclusions more convincing. Previous studies have shown that frontal and temporal brain regions associated with language function exhibit significant left hemisphere lateralization in adult and adolescent groups (9, 31, 32), which supports the reliability of our findings. No significant differences were found in the group-level analysis of bilateral and unilateral functional connectivity, which is generally consistent with the findings of Blanco et al. (6). However, there were significant differences in functional connectivity strength between language groups for channel pair 7-30, which is located in the inferior frontal gyrus. This may indicate a key role of the inferior frontal gyrus in the development of language function in infants, and we will follow up with further analysis of this language center. The effective connectivity analysis showed that the direction of information flow in the frontal regions of the bilingual infant group was from left to right, whereas in the monolingual infant groups it was from right to left.
In addition, the difference in average brain connectivity between the bilingual and monolingual infant groups was more significant in the effective connectivity analysis than in the functional connectivity analysis. Medvedev et al. (33) have shown that the direction of information flow in effective connectivity analysis is related to the dominant center of the brain in the resting state. This indirectly confirms the more significant left hemispheric lateralization of brain functional connectivity in the frontal regions of bilingual infants. Based on the above findings, we conclude that effective connectivity analysis can serve as a complement to resting-state brain network analysis in the present study. We expect that the findings of this paper can serve as a reference for the study of the lateralization of brain functional networks. We will also extend this work, for example by studying differences in brain network lateralization across age groups of monolingual and bilingual infants. Furthermore, we will verify and supplement the conclusions of this study and provide a more comprehensive guide for the application of fNIRS in the analysis of resting-state brain functional connectivity.
Conclusion
In this study, we found that the functional brain connectivity of infants growing up in a bilingual environment had more significant left hemisphere lateralization properties compared to those growing up in a monolingual environment. Furthermore, effective connectivity analysis can complement the results of resting-state brain network analysis.
Data availability statement
Publicly available datasets were analyzed in this study. This data can be found here: https://doi.org/10.17605/OSF.IO/7FZKM.
"year": 2022,
"sha1": "3f981bd6ecc3c1a734f8fec508b3c4aa5c6e462d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "3f981bd6ecc3c1a734f8fec508b3c4aa5c6e462d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Influence of Changes in Bone-Conduction Thresholds on Speech Audiometry in Patients Who Underwent Surgery for Otosclerosis
OBJECTIVES
Otosclerosis is an underlying disease of the bony labyrinth that results in hearing loss. In some cases, the involvement of the bony part of the cochlea results in mixed hearing loss. The aim of this analysis was to seek a correlation between the results of speech audiometry tests and the changes in bone-conduction thresholds observed after surgical treatment.
MATERIALS AND METHODS
The analysis included 140 patients who were hospitalized and surgically treated for otosclerosis. The patients who were treated with stapedotomy were divided into subgroups based on the value of the bone-conduction threshold before the surgery. An audiological assessment was performed, with pure-tone threshold audiometry and speech audiometry tests taken into account.
RESULTS
The effectiveness of the surgery was judged by the change in the speech audiometry test results after 12 months of observation. After the surgery, it was found that a significant improvement, characterized as achieving 100% understanding of speech, occurred in 61.90% of the patients.
CONCLUSION
There is a correlation between the improvement in speech audiometry tests and the bone-conduction curve after stapedotomy. The changes achieved in the bone-conduction curve in the frequency range up to 3,000 Hz had a significant impact on the improvements in speech audiometry test results; higher frequencies provide additional data that improve the hearing process. A mean bone-conduction threshold between 21 and 40 dB in the pure-tone audiometry examination performed before surgery is a favorable prognostic factor for the improvement of the bone-conduction threshold after surgery.
INTRODUCTION
Otosclerosis is a primary disease of the bony labyrinth and is characterized by progressive hearing deterioration. In the literature, the first report on otosclerosis appeared in 1735. The case described by Antonio Maria Valsalva from Bologna referred to the immobilization of the stirrup plate in the oval window. The definition and concept of otosclerosis as an isolated affliction of the labyrinth capsule was introduced to the otologic terminology by Politzer in 1894.
Effective surgical treatment is reflected in an improvement of the course of the air conduction threshold curve. The area between the air and bone-conduction threshold curve is reduced, which is referred to as the closure of the cochlear reserve. Favorable prognostic factors for improving hearing after surgery on the middle ear are good bone conduction and a large air-bone gap in the audiometry test performed before the surgical procedure [1][2][3][4][5] .
In Fleischer's classification, the postoperative outcome is predicted using the mean cochlear reserve measured preoperatively at frequencies of 500, 1,000, and 2,000 Hz in the tonal audiogram (Table 1) [6] .
For many years, the classification by G. Shambaugh has also been used, which treats the improvement in hearing as dependent on the preoperative threshold values of bone conduction.
On the basis of the bone thresholds, one can determine with fairly high accuracy the so-called Shambaugh-Carhart hearing prognosis curve (Table 2) [7] .
Currently, the practice of presenting the results of otosclerosis treatment by comparing only the mean cochlear reserves before and after surgery is not recommended. The American Academy of Otolaryngology-Head and Neck Surgery Committee on Hearing and Equilibrium guidelines recommend assessing otosclerosis treatments by analyzing the change in the bone-conduction deficit based on its thresholds at 0.5, 1, 2, and 3 kHz in correlation with the change in the cochlear reserve [8] .
Hearing is one of the human senses, and its most important role is the perception and analysis of speech. The treatment of choice for otosclerosis is a surgery aimed at improving ossicular chain movement by using a prosthesis in place of the stapes superstructure. Restoration of right ossicular chain movement after stapedotomy leads not only to changes in the air-bone gap but also to objectively measurable changes in bone-conduction thresholds.
The understanding of speech is an important element in the subjective evaluation of hearing improvement after surgery for otosclerosis.
The aim of this analysis was to determine the correlation between the results of speech audiometry tests with the changes in bone-conduction thresholds observed after stapedotomy surgery.
MATERIALS AND METHODS
The study included 140 patients who underwent first-time surgery for otosclerosis between 2010 and 2016. The patients' ages ranged from 19 to 71 years old, and the mean age was 39.31 years. The study group was composed of 87 women between 19 and 71 years old (average age, 40.33 years old) and 53 men between 27 and 59 years old (average age, 38.23 years old). Most patients were between 41 and 50 years old. A distinctly lower number of operations were performed in patients over 60 years of age ( Figure 1).
In all patients, an interview; a physical otorhinolaryngological examination; and a complete set of audiological examinations, including an audiometric examination, tuning fork tests (with a c2 512 Hz tuning fork), and pure-tone threshold audiometry and speech audiometry tests, were performed. The audiometric tests were performed in a sound-proof and sound-absorbent cabin in the audiometric laboratory. In the cabin, the equivalent A-weighted sound level did not exceed 25 dBA (L Aeq (A-weighted equivalent sound level) = 25.1 dBA). The intensity of the stimulus for each tested frequency was determined with an accuracy of 5 dB. During the examination, the room was occupied only by the patient being examined. Threshold values for air and bone conduction were determined using an audiometer equipped with TDH-39 headphones (MIDIMATE 622, Madsen, Dybendalsvaenget, Taastrup, Denmark). The audiometer met the International Organization for Standardization (ISO) standards for air conduction (ISO 389-1985) and for bone conduction (ISO 7566-1987).
The evaluation of the patient's understanding of speech was carried out using an AAD80 audiometer (Zalmed, Warsaw, Poland) and a Technics amplifier and cassette player (Panasonic, Osaka, Japan). The fluctuations of the signal level did not exceed 1.5 dB, and the signal-to-noise ratio was above 63 dB. To complete the evaluation, the NLA-93 (new articulation list -93) word test was used. The test material consisted of 10 balanced lists containing 24 monosyllabic nouns in each list. The test was balanced acoustically, grammatically, phonemically, semantically, energetically, and structurally. The test was always carried out by the same person with the same apparatus, and the control values were established based on a study of healthy Poles [9] .
The mean air-bone gap and the mean bone-conduction threshold were calculated as the arithmetic mean across the speech frequencies tested (500 Hz, 1,000 Hz, 2,000 Hz, and 3,000 Hz) both before the surgical procedure and 12 months after the operation.
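A sketch of these calculations with hypothetical threshold values for one patient; the four speech frequencies are those stated above.

```python
# The four speech frequencies used for averaging (Hz)
SPEECH_FREQS = (500, 1000, 2000, 3000)

def mean_threshold(thresholds_db):
    """Arithmetic mean of thresholds (dB) across the speech frequencies."""
    return sum(thresholds_db[f] for f in SPEECH_FREQS) / len(SPEECH_FREQS)

def mean_air_bone_gap(air_db, bone_db):
    """Mean air-bone gap (cochlear reserve) over the speech frequencies."""
    return mean_threshold(air_db) - mean_threshold(bone_db)

# Hypothetical pre-operative thresholds for one patient (dB HL)
air = {500: 55, 1000: 60, 2000: 50, 3000: 55}   # air-conduction thresholds
bone = {500: 20, 1000: 25, 2000: 30, 3000: 25}  # bone-conduction thresholds
print(mean_threshold(bone))          # 25.0
print(mean_air_bone_gap(air, bone))  # 30.0
```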
MAIN POINTS
• Better speech comprehension after stapedotomy correlates with changes in bone-conduction thresholds.
• Improvement in social communication is observed for changes in bone-conduction thresholds up to 3,000 Hz.
• The primary goal of otosclerosis treatment is not to close the cochlear reserve but to improve speech understanding.

The data collected during the study were statistically analyzed (analysis of variance in Statistica, StatSoft, Cracow, Poland). Fisher's exact test, which is sometimes used in the literature as an alternative to the analysis of variance for categorical data, was also applied. Two-by-two (2x2) and multileveled contingency tables (cross-tabulation tables) were used to verify non-parametric hypotheses.
Cramér's coefficient was used to assess the correlation between two variables.
Statistically significant results were indicated when p<0.05.
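A sketch of Cramér's V for a hypothetical 2x2 cross-tabulation of the kind described above (speech-audiometry improvement versus bone-conduction improvement). The Yates continuity correction is disabled, since Cramér's V is conventionally computed from the uncorrected chi-squared statistic.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V association measure for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    chi2 = chi2_contingency(table, correction=False)[0]  # uncorrected chi-squared
    n = table.sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Hypothetical counts: rows = speech improvement yes/no,
# columns = bone-conduction improvement yes/no
table = [[40, 10],
         [15, 35]]
v = cramers_v(table)
print(round(v, 3))  # moderate association, V ≈ 0.5
```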
When analyzing the results of the operation, the patients were grouped based on the type of operation performed: Group A, stapedotomy (N=126 people) and Group B, stapedectomy (N=14 people).
Given the small number of patients for whom a stapedectomy was performed, this group of patients was excluded from further analysis.
Patients were operated on by two otosurgeons using the same surgical technique and with similar experience in the surgical treatment of otosclerosis. Classic stapedotomy was performed for each patient with a piston-like prosthesis with a diameter of 0.6 mm. This procedure allowed the analyzed patients to be treated as a homogeneous group.
The patients who were treated with a stapedotomy were divided into subgroups based on the bone-conduction threshold before the surgery. Owing to the need to create groups of sufficient size that would enable a stronger statistical analysis, Shambaugh's classification (Table 2) was modified, and a division was proposed as shown in Table 3.
The diagrams obtained from the speech audiometry examinations performed before and 12 months after surgery were analyzed for all patients. This resulted in the identification of two groups of patients, those showing an improvement and those showing no improvement in the free-field speech discrimination test.
RESULTS
On preoperative examination, all patients reported hearing loss, and 81.42% (N = 114 patients) complained of tinnitus. The subjectively perceived hearing loss was unilateral in 27.15% of patients (N = 38) and bilateral in 72.85% (N = 102).
The mean values of the air-bone gap and the mean bone-conduction thresholds, both at the beginning of the treatment and after 12 months of observations, were analyzed for the different bone-conduction groups (Table 4). Twelve months following the surgery, the change in the mean air-bone gap with respect to the presurgical gap was statistically significant in each of the bone-conduction groups. Simultaneous analysis of the changes in the mean air-bone gap between particular groups did not show a significant difference.
In the analysis of changes in the mean bone-conduction threshold over the 12-month observation period, a statistically significant improvement was observed in the A II group. The change in the mean value of the bone-conduction threshold in the A I and A III groups after the same period of time was not statistically significant (Table 5).
In addition, irrespective of the previous division of the patients, those that showed an improvement in the results of the speech audiometry test after surgical treatment were examined.
In the preoperative evaluation, 42.14% of patients had a 100% understanding of speech, and 57.86% of patients did not have full understanding of speech. In 18.57% of the patients, the articulation curve assumed the shape of a bell (in terms of speech audiometry test results, this shape is characteristic of cochlear involvement in perceptive hearing loss), and in 81.43%, it had a slanted course (typical of conductive hearing loss in speech audiometry test results).
Wiatr and Wiatr, Speech Audiometry Changes after Stapedotomy

The effectiveness of the surgery was judged by the change in the speech audiometry test results after 12 months of observation; it was found that a significant improvement, characterized as achieving 100% understanding of speech, occurred in 61.90% of the patients. In 11.43% of the patients, the articulation curve continued to take the shape of a bell, and in the remaining 88.57%, it continued to be slanted (Figure 2).
The correlation between the changes in the speech audiometry results and the changes in the bone-conduction threshold was analyzed in the group of patients with improvements in the speech audiometry test results after 12 months of observation ( Table 6). The results showed a statistically significant change in the bone-conduction threshold value for the studied frequencies (p < 0.05). The patients achieved 100% understanding of speech in the examination carried out 12 months after surgery.
The results for the group of patients with poorer results in speech audiometry following the surgery were examined with analysis of variance (Table 7). Compared with the presurgical values, the mean bone-conduction thresholds at 500 Hz, 1,000 Hz, and 3,000 Hz obtained 12 months after the operation were not significantly different (p > 0.05). At 2,000 Hz, a statistically significant improvement in the mean bone-conduction threshold was found (p<0.05).
DISCUSSION
Konopka et al. [10] noted that in numerous publications reporting the results of the treatment of this disease, the main emphasis was on the assessment of the closure of the air-bone gap and not on the assessment of speech understanding [11][12][13][14] . However, it should be remembered that the primary goal of otosclerosis treatment is, in fact, not to close the cochlear reserve but to improve speech understanding.
Bone conduction is a complex and multicomponent process. It is impossible to explain the observed changes using one mechanism. Bone-conduction thresholds in otosclerosis may be affected by various factors, such as proteolytic enzymes released from otosclerotic lesions in the bone tissue adjacent to the cochlea, damage to the sensory-nervous apparatus, deterioration over time, or the Carhart effect. The analysis of the correlation of bone-conduction thresholds with changes in the results from the speech audiometry tests showed that, in groups in which significant improvement in the speech audiometry results had been achieved, a statistically significant improvement in the mean bone-conduction threshold was observed. When carried out in the group of patients with no improvement in speech audiometry, the same analysis did not reveal a significant improvement in the mean bone-conduction threshold at 500 Hz, 1,000 Hz, and 3,000 Hz. However, a statistically significant improvement occurred at 2,000 Hz. This improvement has no significant effect on the final result of the speech audiometry test. This is explained by the phenomenon of redundancy, described by Bocca and Callearo [15] , which characterizes all-natural languages and refers to the full content of information in the transmitted verbal material. A common conclusion reached by various research centers indicates that for the national language, an improvement in the bone-conduction thresholds at frequencies higher than 3,000 Hz is not important in terms of understanding speech [15] . On the other hand, according to the literature, higher frequencies provide a set of additional data that are very important for the quality of hearing (the naturalness of speech and music signals) [16,17] . The results obtained here require further analysis with regard to comparisons with other studies. | 2020-10-28T19:20:49.484Z | 2020-10-19T00:00:00.000 | {
"year": 2020,
"sha1": "32ceb8f5c9706e305384cd528a0f0b0cae624d51",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5152/iao.2020.8139",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "05152f8d5d19b500af312ef0a2d97f9ade56ce48",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Nexus of Service Quality, Customer Experience, and Customer Commitment: The Neglected Mediating Role of Corporate Image
Quality of service is a major determinant of customer commitment to the organization. Therefore, it is important to understand the importance of service quality for the corporate image as well. In this study, the predicting roles of quality of service and customer experience have been unveiled in customer commitment through the mediating effect of corporate image. The population frame used in this study is the customers of logistic services providers in China. Total data from the 366 customers have been used to analyze the hypotheses formulated. The sample has been selected using convenience sampling and the software used for data analysis is Smart-PLS. The analytical technique used is partial least square structural equation modeling. Results of the study show that service quality and customer experience have a significant role in the customer commitment to the suppliers. In addition, it has also been found that service quality and customer experience have a major contribution to building the corporate image of the services suppliers. Further, corporate image played a significant mediating role in the relationship between service quality and customer commitment. The study has theoretically contributed to the body of literature by finding the importance of service quality for predicting customer commitment to the suppliers.
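The mediation logic summarized above (service quality → corporate image → customer commitment) can be illustrated outside Smart-PLS with an ordinary-least-squares indirect-effect (a×b) estimate and a percentile bootstrap. All data below are simulated for illustration only and do not reproduce the study's results.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 366  # sample size matching the study

# Simulated standardized scores (hypothetical, for illustration only)
service_quality = rng.normal(size=n)
corporate_image = 0.5 * service_quality + rng.normal(scale=0.8, size=n)
commitment = 0.3 * service_quality + 0.4 * corporate_image + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    """a*b indirect effect: x -> m (path a), m -> y controlling for x (path b)."""
    a = np.polyfit(x, m, 1)[0]                   # slope of m ~ x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # slope of m in y ~ x + m
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(service_quality[idx], corporate_image[idx], commitment[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo > 0)  # CI excludes zero -> mediation supported in the simulated data
```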
INTRODUCTION
Price, quality, and functionality are all important product/service features, and their implications for customer behavior have been studied extensively by scholars and practitioners. In the 20th century, however, there was a significant shift from a strictly transactional to a more relational perspective. This shift toward commitment emphasizes the value of creating and sustaining greater customer commitment instead of focusing only on individual transactions (Coviello et al., 2002). More recently, a new trend has emerged, with a greater emphasis on customer experience as a crucial organizational essential (Klaus and Maklan, 2013). Commitment is an important factor in establishing and managing long-term meaningful relationships, according to mainstream consumer research (Alamgir and Uddin, 2017).
Likewise, in customer-based organizational management, commitment is seen as a necessary prerequisite for achieving desirable outputs like the confirmation process, prospective intents, and revenue generation. It has also been evident that there is a rising prevalence of research in marketing that examines brand commitment (Rucker et al., 2014). In the domain of organizational management, this study utilizes a suitable theoretical framework consisting of the affective, calculative, and normative dimensions of customer commitment. This research is consistent with the prior study which classifies the principle of commitment as a multidimensional concept (Eisingerich and Rubera, 2010).
The framework is commonly used for empirical study in a variety of areas, including marketing. The three aspects of commitment are built on well-defined conceptions that encompass both subjective and attitudinal (calculative and prescriptive) elements of customer behavior (Allen and Meyer, 1990).
The importance of CE in the marketing of brands has gained a lot of theoretical and empirical backing. Nevertheless, some disagreement prevails over its ability to generate brand loyalty (Lemon and Verhoef, 2016). In research conducted by Francisco-Maffezzolli et al. (2014), the association of CE with customer commitment was found to be indirect. Moreover, the direct effects of CE on customer commitment remained ambiguous (Ramaseshan and Stein, 2014), which exposed a significant gap in the research on the direct association between the two variables. There is thus a need to explore the direct relationship of CE with customer commitment, that is, to evaluate the impact of customers' experience with the brand on their commitment to the brand. If this association proves significant, it will contribute to future studies, and companies will start focusing on this aspect to provide a better experience to customers. The market is fast evolving, and we are now living in a period of a thriving multinational service economy that prioritizes service quality (Hsu et al., 2006).
Recognizing what customers expect from a service can help businesses allocate resources and make adjustments depending on customer needs. As a result, a thorough grasp of what a client considers superior service has become a critical concern for every business process (Chien and Chi, 2019). According to related research, improving service quality has become one of the most important management strategies for increasing customer commitment and engagement (Su and Teng, 2018), and it is one of the most important variables impacting a company's performance (Ismail et al., 2021). Previously, service quality had not been explored as having an impact on customer commitment; rather, in many cases it was tested alongside customer commitment in evaluating customer loyalty (Izogo, 2017).
This posed a substantial gap: the two are distinct aspects of organizational management and should be tested in a way that evaluates the role of service quality in customer commitment. It has been observed that service quality was associated with customer satisfaction, loyalty, and perceived value in the past (Özkan et al., 2020). According to Reichheld (1993), the main objective of business management teams is to determine whether the services supplied to customers are linked to customers' demands. Such service could provide a competitive edge to service providers in the marketplace through the loyalty factor. Customer commitment could be used as an assessment approach through which customers' aggregate reactions to services and products are evaluated (Roy et al., 2022). In a consumer market, offering high-quality service that results in delighted consumers is the key to long-term competitive advantage. Service quality provided along the supply chain may aid in the development of loyal customers, which leads to corporate success. Service quality is linked to corporate success, cost savings, customer happiness, consumer loyalty, and profits (Nguyen et al., 2018). Service quality is an evaluative phenomenon of bi-directional exchanges, involving service users and service providers at the dyadic level (Prakash, 2019). Hence, it is assumed that providing good-quality services to customers could influence their commitment levels. Moreover, alongside service quality, corporate image may boost customer commitment, as suggested by Chien and Chi (2019).
Previously, corporate image has been explored as a mediator between service quality and customer satisfaction (Chien and Chi, 2019), which suggested its further utilization as a mediator, leaving a gap in marketing research on service quality provision. This kind of mediating role of corporate image helps in providing the best services to customers. If a company possesses a good corporate image, it helps customers make decisions about purchasing its products. Therefore, utilizing corporate image as a mediator helps shape service-customer relationships. In a scholarly study, the reputation or image of the company, customer happiness, and firm performance were all found to be very significant (Chien and Chi, 2019). Scholars defined image as a general characteristic of a firm with reference to which the firm is seen as excellent or negative (Alamgir and Uddin, 2017). To be more specific, corporate image relates to the public's perceptions of a corporation (Alamgir and Uddin, 2017). Kim et al. (2020) argue that image or repute is significant because it shows how a business compares with its competitors in terms of stakeholder perceptions of the company's willingness to operate in a given way.
The corporate image may influence a company's capability to increase pricing without losing customers (Kim et al., 2020). Corporate image has been associated with many variables studied in the past, such as loyalty, service quality, and customer satisfaction. Most notably, it has been studied as a mediator between service quality and customer satisfaction (Chien and Chi, 2019). It was also observed that corporate image, along with corporate reputation, was more of a concern for organizations in the service sector (Özkan et al., 2020). Developing a stronger corporate image with a solid organizational reputation is especially crucial for service firms in this regard. It is thought to aid considerably in establishing client commitment (Yasin, 2021). Therefore, the use of corporate image as a mediator between service quality and customer commitment is useful.
Customer commitment is typically improved by positive opinions of how a firm operates, and financial service organizations are aware that building and maintaining a positive company image and a solid corporate reputation provide a long-term competitive edge. As a result, a good company image and corporate reputation are valuable assets for a service business, because clients have many options (Özkan et al., 2020). Keeping in view the significance of corporate image in developing customer commitment in China, this research utilized corporate image as a mediator. A corporate image is a powerful tool of CSR in organizational processes. It helps in developing a bond between customers and the enterprise. This research helps in practicing corporate social responsibility in enterprises. Therefore, it will add to the literature on CSR-based firms. This research is based on the following research questions:

RQ1. How can service quality and customer experience play a role in customers' commitment?

RQ2. How does corporate image help in mediating the relationships between service quality, customer experience, and customer commitment?
To answer these questions, this study tries to find out the possible relationships between service quality, customer experience, and customer commitment. This research also figured out the mediating role of corporate image in between service quality, customer experience, and customer commitment.
THEORETICAL SUPPORT
The expectation disconfirmation theory (EDT) by Oliver and DeSarbo (1988) is a basic theory used by marketing and consumer behavior experts. EDT is defined in terms of a set of expectations in relation to their confirmation or disconfirmation (Oliver and DeSarbo, 1988). Expectations are a collection of beliefs that a consumer has about a service or product, and disconfirmation is the contrast between pre-consumption perception and post-consumption reality. This disparity may have both beneficial and adverse implications. A positive variance indicates that the customers' experience after consuming the products is better than the expectations they had before consumption. Similarly, a negative variance indicates that the experience after consumption is not as good as the expectations were beforehand (Yi et al., 2021).
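The disconfirmation logic described above reduces to simple arithmetic on the gap between perceived performance and expectation. A minimal sketch (the function name and the three-way classification are ours, added for illustration only):

```python
def disconfirmation(expectation, perceived):
    """Classify the post-consumption outcome under EDT by the sign of
    the perceived-performance minus expectation gap."""
    delta = perceived - expectation
    if delta > 0:
        return "positive disconfirmation"  # experience beat expectations
    if delta < 0:
        return "negative disconfirmation"  # experience fell short
    return "confirmation"                  # expectations exactly met

print(disconfirmation(4.0, 4.5))  # positive disconfirmation
```

On a 5-point scale, an expectation of 4.0 met by a perceived performance of 4.5 yields the positive variance that, per Yi et al. (2021), corresponds to contentment.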
Negative disconfirmation is discontent with a specific product or service; positive disconfirmation is contentment with products and services provided by the brands (Yi et al., 2021). Customer satisfaction leads to customer commitment to the product and the brand (Sigit Parawansa, 2018). Therefore, this theory has a significant contribution to customer commitment. Consumer behavior and marketing, human capital, recreational behavior, medical, sociology, service quality, brand management, and administration are just a few of the areas where the theory is applied (Uzir et al., 2021). Hence, we inferred that assessing the impact of service quality on customer commitment could be based on principles of EDT.
Customers expect an advantage or efficacy from a service or product that exceeds their expectations, according to EDT. This comparison indicates whether clients are satisfied or dissatisfied with the service, which in turn leads to their commitment to the service brands. This commitment is determined by the level of service offered by the organization when delivering purchased products to customers, as well as how they view the company's service and image (Uzir et al., 2021). This research also draws support from the theory of cognitive psychology (Folkes, 1988). Based on this theory, it can be argued that corporate image can filter the psychological dimension of customers' decisions to buy services and products. It may have a facilitating role in defining the commitment of customers to the company's services and products. Drawing on relationship marketing theory (Berry, 1995), this research seeks to investigate the possible relationships between service quality, customer experience, corporate image, and customer commitment. Relationship marketing has been offered as a theory since the 1970s. It refers to the process of establishing, developing, and managing long-term, mutually beneficial connections between two entities. According to the theory, relationship marketing allows a company to gain a deeper understanding of individual customers' needs, enhancing its ability to meet customer aspirations with greater success (Saglam and El Montaser, 2021). Relationship management is no longer about gaining a greater portion of the customer's money, but about gaining a bigger share of the customer's mind, thoughts, and personal wealth. Customer commitment is the goal of relationship marketing, which may lead to recurrent purchases by loyal customers (Roberts-Lombard, 2020).
Long-term interactions among partners can be professionally handled through an engaged approach based on relationship marketing principles. It will also result in better knowledge of their different needs and desires, lowering the chances of a failed relationship. As a result, relationship marketing seeks to gain satisfied customers who are willing to commit to a long-term connection with the company (van Tonder and Petzer, 2018). Drawing support from these theories, this research tried to find out the possible relationships between service quality, customer experience, the mediating corporate image, and customer commitment. These theories strongly back customer engagement and satisfaction, which ultimately lead to improved and enhanced customer commitment to the services and products of organizations. These relationships usually occur as an exchange between customers and organizations.
Service Quality, Corporate Image, and Customer Commitment

Service quality was described by Gronroos (1988) as the consequence of an assessment procedure in which consumers evaluate their aspirations against the services they consider themselves to have received. It is defined as the level of quality of services and products given to consumers, as well as their level of satisfaction with those services. It is the product of the interplay between customers' conceptions of expected and perceived services, as well as workable causal relationships such as technical ability and features of the product and image (Uzir et al., 2021). Service quality has been identified as a critical and focal aspect of customer-oriented organizations by Omar et al. (2021).
Service quality (SQ) is commonly measured with a model referred to in the literature as SERVQUAL. This model is based on five dimensions: assurance, empathy, tangibility, reliability, and responsiveness (Uzir et al., 2021). The model is utilized for measuring and capturing service quality for customers. Tangibility refers to the appearance of the service and its surroundings, which makes the existence of the service tangible. Empathy refers to the attention given to customers passionately or individually. Dependability and consistency of the services are referred to as reliability. The fourth dimension of SQ is responsiveness, which refers to the provision of services with a willingness to serve the customers. The last dimension is assurance, which deals with the confidence and trust elements of service quality offered to customers (Kim, 2021).
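To make the measurement concrete, a SERVQUAL construct score can be obtained by averaging a respondent's Likert ratings within each dimension and then across dimensions. A small sketch using the 16-item composition reported later in the Measurement section (the response values themselves are invented):

```python
from statistics import mean

# Item counts per dimension follow the 16-item SERVQUAL scale used in
# this study (Chien and Chi, 2019); the Likert responses are invented.
responses = {
    "empathy":        [4, 5],
    "assurance":      [4, 4, 5, 3],
    "responsiveness": [5, 4],
    "reliability":    [3, 4, 4],
    "tangibility":    [4, 4, 5, 4, 3],
}

# One score per dimension, then an unweighted overall SQ score
dimension_scores = {dim: mean(items) for dim, items in responses.items()}
overall_sq = mean(dimension_scores.values())

print(dimension_scores["empathy"])  # 4.5
print(round(overall_sq, 3))         # 4.133
```

In practice a PLS model weights items empirically rather than averaging them equally; the unweighted mean is used here only to show how the five dimensions compose an overall score.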
According to SERVQUAL, service quality has a positive and important influence on customer satisfaction and commitment in a range of circumstances, such as home services, hospitality businesses in Southeast Asia, life insurance companies in Malaysia, the healthcare sector in India, eatery enterprises in Korea, and food retail in Chile, as well as the financial sector globally (Uzir et al., 2021). The SQ model, which has been used in a variety of service organizations to examine the influence on client satisfaction and other aspects of service quality, focuses on distinct dimensions of service quality. The numerous prominent characteristics of a service business have also been demonstrated to be among the most promising aspects of consumer commitment (Yi and Nataraajan, 2018).
Many past studies have also focused on evaluating the impact of service quality on customer satisfaction, which leads to customer commitment, in the context of logistics and delivery of digital purchases (Buldeo Rai et al., 2019). Corporate image has two elements: the first is functional and the second is emotional. The features of the functional element are observable and may be measured, while the emotional element has intangible characteristics associated with psychological attributes such as feelings, attitude, and perception of the company and its services (Chattananon et al., 2007).
The management literature has proposed several well-known and similar brand image or corporate image models that identify the factors that influence a positive corporate image (Chattananon et al., 2007). Some scholars have also looked into the relationships between service quality and customer commitment and found significant associations (Chenet et al., 2010). That study found an association between service quality and different customer-related attributes which ultimately affected the customers' commitment to the services. Similarly, a few researchers found an association between service quality and corporate image, with service quality showing a significant association with corporate image (Özkan et al., 2020). A corporate image is a bridging tool between firms and customers. Customers are more oriented toward firms that practice CSR in their processes. Therefore, based on this literature support, we proposed the following hypotheses.
H1: Service quality has a strong association with customer commitment

H3: Service quality has a possible association with corporate image

Customer Experience, Corporate Image, and Customer Commitment

What people desire are not objects but gratifying experiences (Abbott, 1956). Scholars expanded on this perspective in the 1980s by addressing the experiential dimensions of services, which go beyond intellectual or cognitive events (Holbrook and Hirschman, 1982). The customer experience (CE) refers to any exchange of services and products and hence encompasses the consumer's whole shopping experience. This includes a customer's intellectual, emotive, behavioral, and interpersonal reactions to brand-related activities. Numerous CE interpretations can be found in the literature, indicating a discrepancy in how to define it (Lemon and Verhoef, 2016). CE is a multifaceted concept that focuses on a customer's cognitive, emotional, behavioral, perceptual, and interpersonal reactions to a company's products throughout the entire buying process (Lemon and Verhoef, 2016).
The following remarks are based on the evaluated CE definitions. Most scholars regard CE as a very subjective, personal idea that varies depending on the interactions that make up the customer's whole buying process. Unlike engagement, which represents a consumer's interest in a given brand contact, CE encompasses the whole purchasing experience of the customer (Hollebeek and Rather, 2019). CE is also commonly thought of as a multi-dimensional phenomenon that indicates a consumer's reaction to a specific product stimulus. CE is usually understood to have intellectual, emotive, behavioral, sensory, and social aspects; however, there is some controversy about its dimensionality. The experiential focus can vary from macro-level to micro-level items. Although CE may span the full customer journey, sub-experiences could concentrate on certain touch points during the purchase process (Khan et al., 2020).
The term commitment refers to an explicit or implicit promise by trading partners to maintain their connection indefinitely. According to Moorman et al. (1992), commitment is defined as a customer's desire to sustain a valuable relationship by exerting full effort in doing so. As a result, commitment implies the consumer's willingness to make concessions to ensure long-term interpersonal advantages, as well as the person's attachment to the brand. This kind of attachment to the brand makes the customer forgiving of concerns attached to the brand. It also results in less influence from competing brands. While the meaning of customer commitment is up for discussion, authors find certain similarities in most approaches. Commitment refers to a person's natural desire to stay with their exchange relationship (Khan et al., 2020).
Second, client commitment is frequently regarded as a multifaceted notion. Authors concentrate on the analytical and emotive commitments of clients, which fulfill their cognitive and emotional needs, respectively. Normative commitment, on the other hand, focuses on what customers believe they should do (Khan et al., 2020). Going through these definitions of customer commitment and customer experience, it was assumed that customer experience might have a direct relationship with commitment as a dimension of the customer purchase journey. Moreover, keeping in view the functionality and significance of corporate image, it was assumed that customer experience could affect both corporate image and customer commitment. Therefore, the authors devised the following hypotheses:

H2: Customer experience has an impact on customer commitment

H4: Customer experience has an impact on corporate image
Mediating Role of Corporate Image
As researchers, we are doing our best to identify the elements that have a significant impact on consumer purchasing decisions in the financial sector. Several studies have shown that corporate image has a substantial impact on customer behavior and business efficiency. Consumers, investors, and various stakeholders all observe the company's brand or image, which establishes an image and reputation. As a result, businesses are now becoming increasingly conscious of the importance of preserving and enhancing their corporate image among the stakeholders. To protect the company's reputation in a competitive market, the corporate image must represent the company's goals, beliefs, and ethics. It aids the organization in distinguishing its image from that of its rivals by presenting a sense of originality (Zameer et al., 2015).
The corporation's goals and ideals should be expressed in both visual and non-visual forms. Trademarks such as emblems, monograms, advertisements, and clothing make up the visual aspect, while training practices, techniques, and language make up the non-visual aspect. Corporate image is malleable and can be influenced by external circumstances. The administration should continually consider the organization's image and reputation. Administrators have acknowledged the value of the corporate image in recent years, but it is tough to explain it to various audiences (Herstein et al., 2008). The spectrum of interpretations retained in the minds of stakeholders is represented by the corporate image. It is primarily formed by technical skill, such as how the audience perceives the quality of service, and it is also generated by functional value, which includes how products are provided (Zameer et al., 2015).
Based on the literature discussed in previous sections, service quality has shown an impact on corporate image (Chenet et al., 2010; Özkan et al., 2020). Corporate image has also been studied and investigated as a mediator in different contexts, having a facilitating role in organizational management regarding customers' purchase activities. Some investigations focused on the mediating role of corporate image between service quality and customer satisfaction (Chien and Chi, 2019). Some researchers also found a significant mediating role of corporate image between service quality and students' loyalty (Yingfei et al., 2021). This literature support suggests that corporate image could also mediate between service quality, customer experience, and customer commitment. Therefore, the following hypotheses were suggested.
H5: There could be a possible mediation of corporate image between SQ and customer commitment

H6: There could be a possible mediation of corporate image between CE and customer commitment

Based on the above literature support and hypotheses, the following framework has been developed (see Figure 1).
METHODOLOGY
For analysis purposes, a quantitative research design with a deductive approach was chosen. It helps to confirm or reject the hypotheses based on whether the quantified relationships are found significant. Certain tests are proposed in the research to reach the conclusions. This approach helps in reducing the chances of bias in the statistical analysis. In this study, data collection was conducted through self-administered surveys to acquire data from the potential participants. The population frame used in this study is the logistic organizations that offer logistic services in numerous supply chains. The convenience sampling technique was used to choose the sample (Etikan et al., 2015). It is considered beneficial because of the convenient availability of the participants along with its cost-effectiveness and time-saving abilities (Nawaz et al., 2022). The collected data were run through statistical analysis to reach conclusions regarding the hypotheses formulated in the literature review. The sample size in this study was 366. The unit of analysis was the customers of the organizations that provide logistic services (Centobelli et al., 2020). The time horizon was cross-sectional, as the data were collected at one point in time. The ethical considerations of the research were met by letting the respondents fill out the survey anonymously, and it was promised to maintain their anonymity. The data were analyzed using the software Smart-PLS version 3.3.5, as it is a vigorous and robust software that easily deals with non-normal and small data sets, giving accurate results. This uses partial least squares structural equation modeling (PLS-SEM).
Measurement
The study used the questionnaire survey method for the purpose of data collection. The questionnaire was based on a 5-point Likert scale (1-5) where 1 represented "strongly disagree" and 5 represented "strongly agree." The scale for the variable of service quality, addressing the five facets of empathy (2 items), assurance (4 items), responsiveness (2 items), reliability (3 items), and tangibility (5 items), was taken as a composite variable consisting of 16 items in total from the study by Chien and Chi (2019). The scale for the variable of corporate image, consisting of seven items, was also taken from Chien and Chi (2019). The scale for the variable of customer experience, consisting of three items, was taken from Bawack et al. (2021). The scale for the variable of customer commitment, consisting of five items, was taken from Yilmaz Uz (2019). All these scales were screened in the initial stage using validity and reliability tests.
Demographics Details
The results for the demography of the respondents show that male and female participation in the study was almost equal. The highest number of participants reported possessing a bachelor's degree, followed by a master's. The highest participation was seen from the age group of 31 to 40 years, followed by 41 to 50 years. Further, most of the respondents had an experience of less than 1 year with their service provider, followed by 1-3 years, and so on. The results of the demographic analysis can be seen in Table 1.
DATA ANALYSIS AND RESULTS
The analytical technique used for data analysis in the study is structural equation modeling. The details for the data analysis are given below.
Measurement Model
The figure obtained for the measurement model is given below (see Figure 2). Table 2 shows the output statistics of the measurement model. It includes convergent validity, which consists of the factor loadings, the average variance extracted, and the variance inflation factors. The cut-off margin given in the literature for the factor loadings is 0.6 (Xiaolong et al., 2021). In the present study, all items loaded on their respective factors with loadings above 0.6, indicating significant factor loading of the items on their respective variables. The average variance extracted, according to Archer et al. (2021), should be above 0.5. Table 2 shows that all values for AVE are above 0.5, indicating a substantial explanation of the variance by the items of the variables. The last measure for convergent validity is the variance inflation factor. The value of VIF, according to Craney and Surles (2007) and Waheed and Baig (2017), should be below 5.5 to show that collinearity is not established among the variables or items. In the present study, all values for VIF meet the acceptance range given in the literature.
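For reference, the AVE criterion quoted above can be computed directly from standardized outer loadings as the mean of their squares. A brief sketch with hypothetical loadings:

```python
def average_variance_extracted(loadings):
    """AVE = mean of squared standardized outer loadings; values above
    0.5 satisfy the convergent-validity cut-off used in the text."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical loadings for a three-item construct, all above the
# 0.6 factor-loading cut-off mentioned above
print(round(average_variance_extracted([0.82, 0.77, 0.70]), 3))  # 0.585
```

Note that every loading can clear the 0.6 cut-off while the AVE only narrowly exceeds 0.5, which is why both criteria are checked separately.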
The reliability of the scales explains the internal consistency of the scales used for a particular study. The scales used in the present study have been checked through the Cronbach alpha and the composite reliability. According to Bujang et al. (2018), the values for Cronbach alpha and the composite reliability should be above 0.7. In the present study, all variables of the study show significant statistics for these criteria, thus falling within the acceptance range.
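Both reliability indices can be reproduced from first principles: Cronbach's alpha from raw item scores and composite reliability (Jöreskog's rho) from standardized loadings. A sketch with invented data:

```python
def cronbach_alpha(items):
    """items: one list of respondent scores per scale item."""
    def var(xs):                       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

def composite_reliability(loadings):
    """rho_c = (sum L)^2 / ((sum L)^2 + sum(1 - L^2))."""
    s = sum(loadings) ** 2
    return s / (s + sum(1 - l ** 2 for l in loadings))

# Invented scores for a 3-item scale answered by four respondents
print(round(cronbach_alpha([[4, 5, 3, 4], [4, 4, 3, 5], [5, 4, 2, 4]]), 3))  # 0.8
print(round(composite_reliability([0.82, 0.77, 0.70]), 3))                    # 0.808
```

Both values clear the 0.7 acceptance threshold cited from Bujang et al. (2018); composite reliability is usually slightly higher than alpha because it weights items by their loadings instead of assuming equal contributions.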
The distinction among variables measuring related concepts was checked with the help of discriminant validity. For this, the heterotrait-monotrait ratio (HTMT) and the Fornell and Larcker criterion were used. These two tests are the most commonly used for checking the discriminant validity of a scale (Nawaz et al., 2019). Table 3 shows the Fornell and Larcker criterion results. The table indicates that the highest value in each column appears at the top, that is, the square root of each construct's AVE exceeds its correlations with the other constructs, indicating the absence of collinearity and a higher degree of distinction, and thus the presence of discriminant validity (Fornell and Larcker, 1981). Similarly, the second test is the HTMT ratio, which also helps in detecting an excessively high degree of correlation or collinearity among the variables. Table 4 indicates the results of this test. According to Franke and Sarstedt (2019), the values obtained in the table should be below 0.9 for scales to pass the criteria of discriminant validity. In this study, all values obtained through the HTMT ratio are below 0.9, indicating the presence of discriminant validity.
The R-square values obtained in this study are substantial. R-square, also known as the coefficient of determination, indicates model fit through regression fit. In this study, the highest R-square value was obtained for the variable corporate image, at 70%. It is followed by the variable customer commitment, which shows an R-square value of 52.7%.
The f-square value shows the effect size of the relationships established in the literature. In this study, the highest f-square value, 0.83, was obtained for the relationship between service quality and corporate image, which is a large effect size. It is followed by the relationship between customer experience and customer commitment, at 0.14, which is considered a medium effect size.
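The f-square statistic compares the R-square of the endogenous construct with and without the predictor in question. A sketch, where the excluded-model R-square of 0.451 is back-calculated from the 0.70 and 0.83 values reported above rather than taken from the study:

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f^2 for one predictor of an endogenous construct:
    (R^2 with the predictor - R^2 without it) / (1 - R^2 with it).
    Guidelines: 0.02 small, 0.15 medium, 0.35 large."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# With corporate image's R^2 of 0.70, an f^2 of 0.83 for service
# quality implies R^2 would drop to about 0.451 without that path.
print(round(f_squared(0.70, 0.451), 2))  # 0.83
```

Read this way, the large effect size means removing the service quality path would cost corporate image roughly a quarter of its explained variance.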
Structural Model
The structural model, also known as the inner model, was used to accept or reject the hypotheses of the study. For this purpose, the study used the structural equation modeling technique considering the t-statistics and p-values, obtained through the resampling technique of bootstrapping at a 95% confidence interval. Therefore, the hypotheses are accepted or rejected at the 5% significance level (Andrade, 2019). Output for the structural model is given in Figure 3.
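The bootstrapping idea can be illustrated on synthetic data: resample the cases with replacement, re-estimate the indirect effect a×b each time, and form a t-statistic from the bootstrap distribution. This is only a simplified OLS-based sketch, not the SmartPLS procedure used in the study; the data and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a built-in mediation chain X -> M -> Y
n = 366                                       # matches the study's sample size
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.8, size=n)   # a-path (true a = 0.6)
y = 0.5 * m + rng.normal(scale=0.8, size=n)   # b-path (true b = 0.5)

def indirect_effect(x, m, y):
    """a*b estimate: a from the slope of M on X, b from the partial
    slope of M in the regression of Y on M and X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a * b

# Percentile bootstrap: resample cases with replacement, re-estimate
boot = np.empty(1000)
for i in range(1000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

t_stat = boot.mean() / boot.std(ddof=1)       # bootstrap t-statistic
print(t_stat > 1.96)                          # significant at the 5% level
```

An indirect effect whose bootstrap t-statistic exceeds 1.96 is judged significant at 5%, which is exactly the decision rule applied to H5 and H6 below.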
Results of the study showing direct effects have been given in Table 5. The findings have indicated that service quality has a significant and positive impact on the customer commitment toward the logistic services. This indicates the acceptance of the first hypothesis that service quality has an impact on customer commitment with t-statistic = 2.67, p < 0.05 (H1). The results supported the second hypothesis of the study that postulated that customer experience has an impact on customer commitment with t-statistic = 5.69, p < 0.05. The results for the third hypothesis showed t-statistic = 12.73, p < 0.05; therefore, it is accepted that service quality has an impact on the corporate image. The fourth direct effect of the study was also accepted with t-statistic = 2.81, p < 0.05; therefore, it is accepted that customer experience has a positive impact on the corporate image of the organizations. The indirect effects of the study have been reported in Table 6. The first indirect hypothesis is about the mediating role of the corporate image between the relationship of service quality and customer commitment. The results of the data analysis show t-statistic = 2.46, p < 0.05, thus supporting the hypotheses and accepting H5. Furthermore, the second indirect hypothesis is about the mediating role of the corporate image in the relationship between customer experience and customer commitment. The results of the hypotheses show t-statistic = 1.54 and p > 0.05 indicating that data do not support the hypotheses, hence rejected (H6).
DISCUSSION
This study was conducted to evaluate the impacts and roles of service quality and customer experience on customer commitment. This research was focused on finding the associations between these aspects of customers and producers of the services and products. This empirical study also tried to find out the mediating role of corporate image between service quality, customer experience, and customer commitment. This kind of relationship of the corporate image is necessary as it provides an indication of CSR at the firm level. The customers are influenced by the corporate image of the firm. It happens because the corporate image has been used as a tool for implementing CSR at the organizational level. The association of service quality, customer experience, and corporate image was also analyzed in this study. Previously, research has been done in the direction of associations between service quality, corporate image, and customer satisfaction (Chien and Chi, 2019). Similarly, some researchers tried to find out the relationship between customer experience and customer commitment and found significant and positive associations (Keiningham et al., 2017). This study found a significant and positive association between service quality and customer commitment. These results are in line with previous findings (Chien and Chi, 2019), as service quality affected the behaviors of the customers positively in that research. Customer satisfaction was affected by service quality in that research, which leads to customer commitment. Hence, it is assumed that customer commitment to the services and products of brands is a depiction of their satisfaction with the quality of services. This research assessed the commitment of customers regarding logistics and delivery of products and services.
Similar results have also been obtained previously in the investigations evaluating the impact of service quality on customer satisfaction which leads to customer commitment in the context of logistics and delivery of digital purchases (Buldeo Rai et al., 2019). This study also looked into the relationship of service quality with corporate image, which also showed a positive and significant association. It indicated that if the quality of services is improved in terms of empathy, assurance, tangibility, responsiveness, and reliability, then it could lead to developing a better corporate image.
These dimensions were also tested by (Uzir et al., 2021) in evaluating the impact of service quality on corporate image and customer satisfaction. The results were similar to current research which supported the association between service quality and corporate image. Other direct relationships of customer experience, corporate image, and customer commitment were also studied in this research. The results indicated that customer experience had a strong association with both the corporate image and customer commitment. These results might be obtained because the experience of customers plays an important role in their decision-making about buying and not buying the products and services of the companies. These results are also supported by the fact that customer experiences are multidimensional constructs concentrating on their cognitive, emotional, behavioral, sensory, and social responses to a firm's products across the complete buying experience (Lemon and Verhoef, 2016). Therefore, a sense of satisfaction arises among the customers which leads to a commitment to products and services. These results are also in agreement with a few scholars in the recent past who tried to find out the relationship between customer experience and customer commitment and found significant and positive associations (Keiningham et al., 2017). The indirect or mediating effects of the corporate image were also tested in this study. The results indicated that there was not a positive mediation of corporate image between service quality and customer commitment. These contradictory results to previous research by (Uzir et al., 2021) indicated that there was no need to spend more on developing a corporate image when the organization was already providing good quality services to the customers. It could not aid in boosting the relationship of service quality with customer commitment. 
The possible reason may lie in the fact that if brands provide assurance, reliable products, tangibility, empathy, and proper responsiveness, then customers do not need to rely on the established image of the corporation's products and services.
The last indirect effect, of corporate image between customer experience and customer commitment, was significant, indicating that customer experience alone is not enough to develop a sense of commitment among customers towards buying brands' products and services; corporate image aided in building customer commitment. Corporate image has previously been explored as a mediator in a variety of scenarios, with a facilitating role in organizational management when it comes to customer purchasing activities. Some studies looked at its role in mediating the relationship between service quality and customer satisfaction (Chien and Chi, 2019), while others found that corporate image plays an important role in mediating the relationship between service quality and student loyalty (Hassan et al., 2020).
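The mediation tests discussed above can be illustrated with a minimal, regression-based sketch of the product-of-coefficients approach. The data and variable names below are synthetic illustrations, not the study's actual dataset or estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic data: service quality (X) -> corporate image (M) -> commitment (Y)
service_quality = rng.normal(size=n)
corporate_image = 0.6 * service_quality + rng.normal(scale=0.8, size=n)
commitment = 0.4 * corporate_image + 0.3 * service_quality + rng.normal(scale=0.8, size=n)

def ols_slopes(y, predictors):
    """OLS coefficients for y ~ predictors (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(corporate_image, [service_quality])[0]              # path X -> M
b = ols_slopes(commitment, [service_quality, corporate_image])[1]  # path M -> Y, controlling for X
indirect = a * b  # product-of-coefficients estimate of the mediated effect
print(f"a = {a:.2f}, b = {b:.2f}, indirect = {indirect:.2f}")
```

In practice the significance of the indirect effect would be assessed with a bootstrap or Sobel test rather than read directly off the point estimate.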
Managerial Implications
The findings of this study carry certain implications for service quality in the logistics industry. First, it is important for organizations to improve service quality so that they enrich their corporate image among customers, which helps them access and penetrate new markets. Word of mouth has always been a major source of informal marketing that brings business to firms (Fazli-Salehi et al., 2019). Further, it is also important that the logistics industry finds new ways of collaborating to strengthen its services and capture a major share of the market, which contributes to a better corporate image and, ultimately, a higher level of customer satisfaction.
Theoretical Contribution
The present study contributes to the literature on the service industry and service quality by examining how, and to what extent, customer experience and service quality play a role in customer commitment. It further adds to the literature by showing that service quality and customer experience have a positive and significant impact on the corporate image of service providers, and that the corporate image of service providers is an important and significant mediating factor bridging the effect of service quality on customer commitment.
Limitations and Future Research
Like other research studies, this study has certain limitations. First, the results have been drawn from only one service industry, logistics, which may introduce some bias. It is therefore recommended that this study be replicated in other service-providing industries to endorse the present findings. Second, the present study measured the mediating role of corporate image between service quality and customer commitment; it would be interesting to repeat the study with a moderating variable to see whether organizational leadership plays a role in affecting customer commitment. Furthermore, it is recommended to study other mediating variables, such as technology and digitalization, in relation to customer retention, since these have modified the approaches and perspectives of customers in the services industry.
CONCLUSION
Investigating the important drivers of corporate image and customer commitment is crucial in today's competitive logistics industry, especially in the post-pandemic situation, in which customer experience and service quality are considered key factors. Therefore, this study has attempted to investigate the role of service quality (considering its five major components: responsiveness, empathy, reliability, tangibility, and assurance) and customer experience in maintaining the corporate image of logistics service-providing firms, and how this consequently affects customer commitment. The study was carried out with customers of the logistics service industry in China. It found that service quality and customer experience play a vital role in building corporate image, which further contributes to customer commitment to these organizations. Moreover, the study also found a significant mediating role of corporate image in the relationship between service quality and customer commitment.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
ZM conceived and designed the concept. BK-H collected the data and supervised. YY wrote the article. All authors read and agreed to the published version of the manuscript.
FUNDING
The study was funded by scientific project code (21BJY056) Research on the Mechanism and Path of High-quality Development of the Innovation-Driven Trade under the New Pattern of Double Cycle. The authors would like to express their appreciation to the Ningbo Key Research Base for Philosophy and Social Studies (Regional Open Cooperation and Free Trade Zone Research Base) for its valuable contribution to the success of the project.
The Importance of Trust in Knowledge Sharing and the Efficiency of Doing Business on the Example of Tourism
The ability to share knowledge in an organization may determine its success. Knowledge is one of the basic resources of an enterprise and the basis for undertaking various types of strategic action. Knowledge management should be focused on processing all available information in a way that creates value as defined by the organization's employees and by customers. Raising the issue of knowledge sharing requires mentioning trust, a factor conditioning an effective atmosphere and cooperation in an organization. The main purpose of the article is to present the relationship between trust and knowledge sharing, taking into account the importance of this issue for the efficiency of doing business. To formulate conclusions, data from surveys carried out in 148 different tourist facilities were used. Data were collected by applying the diagnostic survey method, using a questionnaire-based survey technique. The results showed that trust is important in knowledge sharing and plays an important role in achieving a high level of performance efficiency. The study consists of an introduction, literature review, research results and a discussion of the results. At the end of the article, conclusions, limitations and recommendations for future research are presented.
Introduction
Nowadays, in a knowledge-based economy, trust is becoming more and more important and is treated as an extremely valuable element in company management. It influences interpersonal relations and encourages employees to monitor their own actions more willingly and objectively. Knowledge, as a strategic resource, can contribute to gaining a competitive advantage for an enterprise on the market. The transfer and exchange of knowledge is an essential basis for creating new ideas and developing new opportunities [1]. The most important aspect of knowledge management is the need to share knowledge, and this applies to all employees of the enterprise. Knowledge is social in nature, in the sense that it arises from the process of continuous communication between people. Encouraging people to share knowledge rather than passively accumulate it is considered the first step towards effective, pro-development management in a modern enterprise. It is difficult to talk about knowledge without taking into account cooperation between people and the existence of conditions for cooperation [2]. According to Handy [3], trust is one of the life bases of every organization. He distinguished seven principles that are to govern trust in the enterprises of the future: trust cannot be blind, it must have boundaries, it requires learning, it is absolute, it requires bonds, it requires personal contacts, and it requires leaders. Research on knowledge sharing tends to narrow the issue, either to analysis of a specific industry or to certain aspects [4,5]. Knowledge sharing is a multi-level analysis model [6] and is considered important for the functioning of an
• Knowledge duplication is a form of central control of the knowledge dissemination process. The purpose of this is to quickly provide knowledge to many employees. These resources should be distributed immediately and permanently so that users have access to them. Knowledge duplication concerns two important areas, which are the induction of employees into the organizational culture and their training. In the first case, it is about familiarizing employees with applicable norms and values and informing them about the role they will play in the organization and the requirements they will face; in the second, about their professional development.
• Sharing experiences from previously implemented projects and documenting them. The tools supporting this process are IT networks (Internet, intranet, extranet), teamwork software or expert systems.
• Exchange of current experience, leading to the development of knowledge. The exchange of experience is possible thanks to the use of benchmarking teams (which look for the best solutions outside the company; their task is also to support the transfer of the best solutions created within the company, with particular emphasis on improvements in key processes in the organization) and teams for the best solutions (informal exchange of information between employees, with the possibility of using information and telecommunications technologies).
There is explicit and implicit knowledge in every organization. Trust plays a very important role in the process of obtaining tacit knowledge. This is intuitive knowledge, related to the experience of employees, and it very often determines the functioning and success of an organization. The exchange of tacit knowledge is fostered by results-based trust and cognitive trust [9].
Trust always has a positive impact on the functioning of an organization. Knowledge transfer and exchange are a proven way to build trust in groups, enabling shared learning. The process of building trust largely depends on the quality of knowledge and the pace at which knowledge is exchanged [10]. Trust depends on risk, arises from the mutual dependence of two parties, is accompanied by vulnerability, and is related to expectations about the future [11]. In addition, trust influences the organization's coordination, triggers creative thinking, encourages participation in transactions, promotes the exchange of information, increases the company's ability to survive a crisis, is a key factor in building networks of cooperation and social cohesion, and enables the creation of a civic culture. It is an important element in teamwork, developing interpersonal relations, leadership, setting goals and negotiations [10,11]. Finally, trust is a central organizational value whose achievement requires strong ethical attitudes and, in practical management, the definition of operational values. This value affects economic results and should be the subject of lasting desires and actions. It is a conviction that the actions undertaken will lead to achieving the set goals and obtaining benefits for all stakeholders [11].
The purpose of the article is to present the relationship between trust and knowledge sharing, taking into account the importance of this issue in the efficiency of doing business.
Literature Review
In the literature on the subject, it is believed that knowledge sharing is most positively related to knowledge management in organizations [3,12,13]. According to Paliszkiewicz [14], the objectives of knowledge management are as follows:
• increasing the results of business operations;
• achieving the company's goals;
• overall company development.
From the definition perspective, knowledge sharing is the exchange of knowledge, experience and skills throughout the organization [15]; it is a mutual transfer, i.e., the exchange of knowledge understood as all information, abilities, skills and experiences relevant from the organization's point of view. The goal of this process is to transform the individual knowledge of each participant in the process into organizational knowledge [16]. Knowledge sharing is a centrally managed process of knowledge dissemination within a specific group of employees, or knowledge transfer between individuals or teams of employees [16]. Knowledge transfer is the acquisition of knowledge from a database or the right source, its transfer to the recipient, and its proper assimilation and use. G. von Krogh, I. Nonaka and M. Aben [17] indicate three conditions that are necessary for successful knowledge transfer:
• knowledge transfer participants must be aware of the circumstances in which they exchange knowledge;
• while waiting for the transfer of knowledge, its profitability must be studied;
• they must be properly motivated to carry out knowledge transfer.
E.K. Sveiby presents nine basic streams of knowledge flow in the organization [18]. In internal communication these are the following transfers of knowledge:
• between units/employees;
• from employees to the internal structure;
• from the internal structure to individual competences;
• within the internal structure (construction of integrated IT systems).
In the organization's communication with the environment, the streams of knowledge flow relate to the following transfers:
• from employees to the environment;
• from the environment to employees;
• from the environment to the internal structure;
• from the internal structure to the external structure (e.g., a customer database);
• between organizations in the environment with which the company cooperates (e.g., how to make our clients contact each other).
All activities related to the flow of knowledge are ultimately aimed at achieving greater efficiency, and trust is undoubtedly of great importance in the process of sharing knowledge. The issue of trust spans various scientific disciplines: management, psychology, sociology, economics and philosophy. Trust, in general, is a kind of assumption relating to the future behavior of other people, including certain assumptions that determine the further behavior of the individual [19].
Trust can also be defined as a subjective prediction of the level of probability of the attitude of the other party, determining the undertaking of specific actions by an individual or a group. This means that trust refers to a situation where the likelihood of the other party taking specific actions is so high that the individual or group decides to cooperate [20].
Organizational trust is a mechanism based on the assumption that other members of a given community are characterized by honest and cooperative behavior based on shared standards [21]. It can also be stated that trust is a certain belief on the basis of which individual A, in a particular situation, agrees to depend on individual B (a person, entity or organization) with a sense of relative security, although negative consequences are possible [22]. Trust shown to other employees in an organization is based on the principle of reciprocity, according to which something should be done for a colleague without expecting immediate compensation, in the hope that in the future this colleague will return the favor [23]. Trust in an organization is very often associated with organizational culture and an appropriate organizational climate [2,24]. It is important that the work atmosphere is favorable to cooperation and creative action. Several factors express the phenomenon of trust [24]: an instinctive feeling that person A will not act against person B, honesty and justice, positive expectation, positive interpersonal relationships, credibility, good will and, finally, effective action. According to Paliszkiewicz [25], trust in an organization has an impact on many factors, including motivation and training and development processes, which may further contribute to achieving higher operational efficiency.
Building trust in an organization is a long-term process and depends on many factors, such as organizational culture or the broadly understood human resources policy applied to employees. When employees are convinced that the organization is properly fulfilling its goals and mission and treating employees correctly, its credibility will continue to grow. Employees will then be open to change and innovation, which contributes to the significant development of the company. Knowledge management is the awareness that sharing knowledge and using it wisely is effective in the development of a company, and that employees cooperating with each other get better results. A knowledge-based economy is the most effective way of managing. The issue of knowledge sharing is significant here and is affected by [26,27]:
• factors depending on the organization (integration of the idea of sharing knowledge with business strategy, organizational culture, teamwork support, direct management support and the example set by leaders at the top, providing time and creating opportunities to transfer knowledge, atmosphere, work environment, lack of employee fear about career development or loss of position, appreciating and rewarding behaviors related to knowledge sharing, communication system efficiency, availability and quality of information technology, company size, industry and organizational structure);
• interpersonal factors (interpersonal relationships, reciprocity, commitment, trust in the proper use of knowledge, identification with specific behavior, avoidance of embarrassment, sense of belonging to a group or team, seeking community and cooperation);
• individual factors (greed, willingness to profit, fear of punishment, self-esteem, personality traits such as optimism, self-confidence, altruism and openness to experience, costs and time to acquire knowledge, age, gender, education, family status, work experience, work position);
• factors depending on knowledge (the type of knowledge, which determines the possibilities and time of its transfer).
There is explicit and implicit knowledge in the organization. Trust plays a very important role in the process of obtaining tacit knowledge. This is intuitive knowledge, related to the experience of employees, and it very often determines the functioning and success of an organization. As Holste and Fields [9] and Levin and Cross [28] wrote, the sharing of tacit knowledge is fostered by results-based trust and cognitive trust, while competence- and confidence-based trust favors its reception and transfer.
Trust will always have a positive impact on the functioning of the organization. Thanks to trust, it is possible to exchange knowledge in the organization and jointly build the culture of a learning organization, which in turn can translate into the organization's successes.
In the following, the concepts of efficiency and economic effectiveness will be presented. There is strong competition in the modern market economy, and entity owners, including those from the tourism industry, must compete with each other. In addition to various aspects of competitiveness such as location, tourist attractions, quality of services, etc., the economic efficiency of a venture is also important. An entrepreneur in a market economy acts both as a buyer of necessary production factors and as a seller of manufactured products or services. The manufacturing process, the technologies used, and the economic benefits achieved from the activities undertaken depend on the information held and the entrepreneur's decisions.
The concept of efficiency is ambiguous and interpreted in different ways. As E. Skrzypek points out, "it is defined by terms such as operational capability, positive result, profitability, productivity, effectiveness, purposefulness, rationality, cost effectiveness or utility" [29]. It refers to the relationship between various "effects, objectives, inputs and costs" [29] in different perspectives. This concept functions in the social sciences, is treated theoretically by economists, sociologists, financiers and management specialists, and is also used in practice by economic analysts, company managers, etc. Juchniewicz indicates that the concept of efficiency refers to the economy as well as to business activities, that is, to the functioning of enterprises, processes, management, decision making, and finance or investment. The term efficiency comes from English and means effective, efficient, real [30]. In a broader sense, it means the benefits achieved by a given country, economy or enterprise from its activity [31]. In the literature, the concept of economic efficiency is often encountered; it means "action without waste and focused on achieving the best result within the available resources and technologies" [32].
Samuelson and Nordhaus point out that efficiency means there is no waste. They refer to the production possibilities curve, indicating that an efficient economy is on the edge of its production capacity [33]. This concept is also equated with allocative efficiency (also called Pareto efficiency), meaning reaching the limit of possible utilities [33]. "Efficiency is a process in which society extracts maximum consumer satisfaction using the available means" [34]. Economic efficiency can be considered in a narrow and a broad sense. In a narrow sense, it is understood as the ratio between the value of the expenditure incurred and the value of the effects obtained thanks to it, that is, as the ratio between the amount of expenditure of used materials and the quantity of goods produced [35]. In a broad sense, it means the best results in the production or distribution of goods and services at the lowest costs [35]. It is through the prism of economic efficiency that the competitive potential of business ventures is determined. M. Szudy indicates that economic efficiency is one of the conditions for achieving economic success at the level of the entire economy as well as at the level of individual entities [36]. Economists consider efficiency in the context of the functioning of business entities in short and long periods, as well as on a micro- and macroeconomic scale. The microeconomic approach refers to the enterprise: it is a real ability to improve market position and achieved results [29]. According to Bojarski, macroeconomic efficiency consists in taking into account all direct and indirect effects of a given undertaking in the national socio-economic system and selecting the undertaking which is the most beneficial, from the point of view of economic efficiency, for the entire economy [37]. Szudy indicates that the effective functioning of the economy requires efficiency in three dimensions: static, dynamic and distributional.
Static efficiency is identified with the management of specific resources in order to avoid waste; dynamic efficiency is associated with the process of increasing resources and assets through creativity and action despite risk; and distribution efficiency relates to the recognition by society of a fair distribution of the social product [38]. The efficient use of economic resources takes place in accordance with the principle of rational management. Efficiency in managing is one of the ways to assess the functioning of a household or enterprise, defined as the relation of effects to the means used [29]. The concept of efficiency is related to the functioning of the organization. It is often defined as the organization's ability to achieve its goals and strategies. One can distinguish organizational effectiveness and management effectiveness. Efficiency is therefore an important tool for measuring management effectiveness. The efficiency of an organization means its effectiveness and productivity measured by the degree of achievement of relevant goals. According to Drucker, effectiveness is the degree of goal achievement [39]. Management effectiveness is a measure of the efficiency of the person in charge and his predispositions; it also means creativity in the formulation and achievement of the goals set. Drucker points out that effectiveness is "a key element in human and organizational development that serves the self-realization and ability of modern society to survive" [39]. According to Lawless, the effectiveness of an organization, that is, its efficiency, depends on the following variables: performance, morale (the degree to which members' needs are met), adaptability, flexibility, institutionalization and stability [40].
Various types of efficiency can be distinguished in the literature. J. Dąbrowski distinguishes three types of economic efficiency: technical, economic and social. Technical management efficiency refers to "the properties of things used in the manufacturing process, indicating the relationship of benefits obtained (e.g., the number of operations performed) to expenditure (e.g., energy consumption, time needed). In a broader sense (...) it applies to the entire production process (technics, technology, work organization) and indicates the relation of the amount of product produced to the factors of production involved" [41]. This efficiency is expressed in natural units and plays an important role in determining the other types of efficiency. The economic efficiency of management can be analyzed in relation to technical-economic and socio-economic aspects. The term is defined as the relation of obtained effects to incurred outlays, taking into account the prices of production factors [41]. Efficiency in a technical and economic context is expressed in cost intensity, outlay efficiency, productivity or profitability. It defines the relationship between the resources used and the utility values produced [42]. The analysis of economic efficiency in the socio-economic context additionally takes into account the property rights to resources [41]. The third type of economic efficiency relates to social issues. In efficiency analyses, the social effects of management are compared with the fulfillment of society's expectations. In a broader sense, it is associated with the management process, i.e., management rationality related to the well-being of the whole society.
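The ratio definition used throughout this section (effects obtained relative to outlays incurred) can be made concrete with a minimal sketch; all figures below are invented for illustration only.

```python
# Efficiency in the narrow economic sense: the ratio of effects to outlays.
# The numbers are hypothetical and serve only to illustrate the definition.

def efficiency(effects: float, outlays: float) -> float:
    """Economic efficiency as the ratio of effects obtained to outlays incurred."""
    if outlays <= 0:
        raise ValueError("outlays must be positive")
    return effects / outlays

# Two hypothetical processes delivering the same value of effects
process_a = efficiency(effects=120_000, outlays=100_000)  # 1.2
process_b = efficiency(effects=120_000, outlays=150_000)  # 0.8

# A ratio above 1 means the value of effects exceeds the outlays incurred
print(process_a, process_b)  # 1.2 0.8
```

Process A is the more efficient of the two: it produces the same effects with lower outlays, in line with the "action without waste" reading of economic efficiency quoted above.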
Pszczołowski, following Kotarbiński, also distinguishes three meanings of efficiency: objective (dealing with the development of science), economic-technical, and social, understood as the relationship between inputs and the effects of these inputs manifested in the sphere of norms and values, society and ecology [43].
The literature on the subject analyzes the efficiency of business activities from various perspectives, it may relate to the efficiency of production or the efficiency of the organization. "A comprehensive approach to efficiency should include an assessment of: social purposefulness of operations, economic rationality of organizational processes and financial efficiency of management" [44].
It is also worth paying attention to the dimensions of efficiency. Many dimensions of effectiveness are distinguished in the literature; Martyniak, Bielski, Matwiejczuk, Pohl, Skrzypek and Łoś wrote about them. The most common dimensions are as follows [45]:
• economic, in the context of economy, productivity and profitability;
• market, referring to the degree of satisfaction of the client's needs, as well as to the efficiency category in the strictly market and market-economy dimension.
From the tourism perspective, the efficiency of a tourist enterprise's activities can be analyzed on an economic, social and ecological level, and broadly covers most of the dimensions mentioned.
In the next part of the study we also discuss the issue of efficiency measures. Contemporary business entities conduct management efficiency analyses on an ongoing basis, using various sets of measures. However, this measurement is often difficult; as Głodziński indicates, this may result from [46]:
• "quantitative immeasurability of some (partial) effects/outlays,
• immeasurability of the value of some (partial) effects/outlays, imperfections of measuring tools,
• simultaneous use of the same outlays to obtain different, separately analyzed effects in the absence of an exact division possibility,
• the lack of a direct cause-effect relationship between effects and expenditures, while the presence of an indirect relationship (often only of an intuitive nature),
• lack of comparability between inputs and results as a result of their presentation using various measurement units".
One of the concepts of efficiency analysis is a multidimensional approach. Especially this type of approach, where "the assessment takes into account different aspects and different points of view" [47] is important in assessing service activities, including in the tourism industry. The multidimensional approach in the analysis of economic efficiency has been the subject of many studies [48,49].
Głodziński presents the procedure for transforming various categories of efficiency into economic efficiency, indicating that it requires the application of a specific standardization procedure. "For normalization to be possible, the impact of effects and production factors on the economic situation of the object (...) should be measurable. This means that the results and outlays identified must be quantified (quantitative measurability) and then valued (measurability of value)" [46].
Results and outlays are quantified by determining the relevant aspects for each category: financial aspects for financial efficiency, organizational aspects for organizational effectiveness, technical aspects for technical efficiency, social aspects for social efficiency, environmental aspects for environmental performance, and similarly for marketing and legal effectiveness; the individual results and outlays are then valued in monetary terms [46].
A. Łoś proposed a model for measuring efficiency in a tourist enterprise, pointing to the mutual relations between three approaches to efficiency, with reference to the Kaplan and Norton concept [50]. The model takes into account efficiency in a narrow perspective, from the enterprise's point of view, and in a broad perspective, from an economic and social point of view. It emphasizes the features that distinguish tourism, i.e., immateriality and incompatibility, as well as the special conditions of the business and the services provided, which include seasonality, the rigidity of supply and the specificity of individual types of tourist services, e.g., catering, spa, hotel, etc. The proposed model covers three areas: dimension, main groups of criteria and detailed groups of criteria.
The efficiency perspective has been divided into three zones: I, II and III. The model includes, in zone I, dimension A, divided into A1 and A2; in zone II, dimension A and dimension B, divided into B1, B2, B3 and B4; zone III contains, next to A and B, dimension C, divided into C1, C2 and C3. Zone I covers the economic and financial dimension, comprising two efficiency criteria: A1, profitability of services, and A2, service productivity and efficiency. The profitability of services in the case of hotels or tourist facilities can be measured by: ROA (return on total assets), ROE (return on equity), ROI (return on investment), RevPAR (room revenue per available room) or GOPPAR (gross operating profit per available room). In the model of hotel efficiency evaluation, the following were proposed as detailed measures of service productivity and efficiency (A2): the indicator of the technical utilization of the facility, the indicator of the average number of nights spent, and the indicator of the seasonal use of the accommodation facility. The main efficiency criteria for the organizational and market dimension, B, include: B1, achievement of objectives, where the detailed measure of effectiveness for the hotel was the degree of achievement of the objective in relation to the expenditure on the purchase of an automatic reservation system (cost reduction by 10%).
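The hotel-level measures mentioned here (RevPAR and the technical utilization of the facility) are simple ratios and can be sketched as follows; the hotel size, revenue and room-night figures are hypothetical.

```python
def revpar(room_revenue: float, available_room_nights: int) -> float:
    """Revenue per available room-night (RevPAR)."""
    return room_revenue / available_room_nights

def occupancy_rate(sold_room_nights: int, available_room_nights: int) -> float:
    """Technical utilization of the facility: share of room-nights sold."""
    return sold_room_nights / available_room_nights

# Hypothetical 50-room hotel over a 30-day month
available = 50 * 30          # 1500 room-nights on offer
sold = 1050                  # room-nights actually sold
revenue = 157_500.0          # total room revenue for the month

print(f"occupancy = {occupancy_rate(sold, available):.0%}")   # 70%
print(f"RevPAR    = {revpar(revenue, available):.2f}")        # 105.00
```

Unlike the average rate per sold room, RevPAR divides revenue by all available room-nights, so it captures both pricing and utilization in a single figure, which is why it appears among the profitability criteria above.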
Materials and Methods
In relation to the considerations made in this article, research was carried out on the importance of trust in knowledge sharing, including a reference to efficiency. The research concerned the tourism industry. The survey covered 148 employees representing various types of tourism companies and was conducted at the turn of 2019/2020. The diagnostic survey method was used, with a questionnaire that included several parts:
• the first part included questions about trust;
• the second part covered the importance of trust in knowledge sharing;
• the third part concerned the importance of trust in knowledge sharing in relation to efficiency;
• the fourth part collected background information about the respondents.
The survey was anonymous. Data collected during the research were presented graphically and in tabular form. The study involved 99 women (67%) and 49 men (33%). Most respondents were up to 45 years of age and had higher education. The respondents were representatives of micro-enterprises (36%), small enterprises (59%) and medium-sized enterprises (5%). All companies whose employees took part in the survey had been on the market for 2-5 years. The main purpose was to collect information from people who, having worked in a given organization for several years, could meaningfully convey their observations on the examined issue. Therefore, employees who had worked in organizations operating on the market for 2-5 years were taken into account. This period was considered sufficient to make observations and indicate elements significant for the study. The research was carried out as a pilot study to identify the issue and to prepare well for the main research.
As part of the study, the respondents received a spreadsheet in which areas of organizational trust were presented in six blocks, including:
• employees' competences and attitude: I am a competent employee; I am an involved employee; people in the organization are happy to share their knowledge with colleagues; people in the organization openly admit mistakes if they made them;
• work atmosphere: there is a nice atmosphere at work; there is no lobbying in the organization; I always say what I think openly; employee evaluation is fair; assessment criteria are precise and clearly defined;
• remuneration policy as well as development and promotion opportunities: I am satisfied with the remuneration policy; the organization has a policy of equal opportunities; the organization is involved in employee training and development.
Results
During the research, the respondents were first presented with phrases regarding the perception of trust, which allowed them to present their opinions on the meaning of this concept (Figure 1). The survey questionnaire used the approach developed in the literature [53]. In each of the indicated research areas, the respondents assigned a grade on a scale of 1-5 to each wording, where 1 meant low significance and 5 high significance. In the areas of the image of the organization, the competences and attitude of management, the competences and attitude of employees, and the work atmosphere, most responses (about 90%) indicated grade 4 on the five-point scale. The statements on remuneration policy and the possibility of development and promotion received slightly less optimistic scoring, grade 3 (77%). The weakest score (2) was given to the area of knowledge of the organization's mission, vision and goals (96%). The obtained results show that in the surveyed companies employees identify with the organization and are proud to work in it, which is a very positive feeling, especially in the context of the knowledge sharing process. It is also clear that the example "goes from above", i.e., the respondents emphasized the importance of managers setting an example in the process of sharing knowledge. The respondents also drew attention to the pay policy of the companies in which they work. This shows that pay is a very important element for employees, which, if addressed, could contribute to much better results. An area that raises many doubts is knowledge of the mission, vision and goals of the organization; the low rating obtained confirms that many companies pay too little attention to this issue.
For the needs of the study, selected analyses were prepared with the use of the Statistica program. The scales were summed up and a box-and-whisker chart was prepared from the data obtained during the research. The chart relates to the areas of organizational trust, for which the respondents used a 5-point Likert scale, and specifies the median, the first and third quartiles, and the maximum and minimum values. The box-and-whisker chart makes it possible to determine which assessments were usually indicated by the respondents (most often grade 4) and, based on the width of the box, to see the diversity of answers (the largest relates to pay policy as well as development and promotion opportunities) (Figure 2).
Figure 3 shows how respondents assess the knowledge sharing process. Several statements are presented here which, in the authors' opinion, best reflect the importance of this issue. The obtained answers indicate the diversity of the respondents' approach. Most people (95%) said that this is a necessary process in every organization for it to develop and succeed on the market. Approximately 91% indicated that this is a process in which management's example plays a very important role, which means that respondents see a significant role of superiors in this process. For the respondents it was also important that, in order to increase the willingness to share knowledge among employees, attention should be paid to additional remuneration that could improve employee satisfaction. A contented employee will work more efficiently, which will also translate into better results achieved by the organization.
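The five-number summary drawn in such a box-and-whisker chart (minimum, quartiles, median, maximum) can be reproduced with Python's statistics module in place of Statistica. The ratings below are made-up stand-ins for the 5-point Likert responses, not the survey data.

```python
# Five-number summary behind a box-and-whisker chart, computed with the
# standard library. Ratings are invented placeholders, not the survey data.
from statistics import quantiles

def five_number_summary(ratings):
    """Return (min, Q1, median, Q3, max), the values drawn in a box plot."""
    q1, q2, q3 = quantiles(ratings, n=4, method="inclusive")
    return min(ratings), q1, q2, q3, max(ratings)

# Pay / development policy: the widest box (most diverse answers) in Figure 2.
pay_policy = [3, 2, 4, 3, 5, 3, 2, 4, 3, 1]
print(five_number_summary(pay_policy))
```

A wider interquartile range (Q3 minus Q1) corresponds to a wider box, i.e., more diverse answers in that trust area.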
The surveyed employees from the tourism industry also pointed out that in the area of knowledge sharing it is important to share it skillfully, which means that not every employee can do it. The research also used a combination of results on an ordinal scale (the so-called semantic differential), which makes it possible to draw conclusions about which elements men and women pointed out in the analyzed area. The analysis of knowledge sharing shows that women largely paid attention to such factors as preparation (to convey knowledge well), the superior setting an example (an example comes from the top) and additional gratification (for willingness to share knowledge). Men indicated the necessity of the process and efficiency of operation (Figure 4).
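A semantic-differential profile of this kind boils down to comparing mean ratings per factor between the two groups. A minimal sketch, with invented placeholder scores rather than the actual survey responses:

```python
# Semantic-differential comparison: mean rating of each knowledge-sharing
# factor, split by gender. All scores are invented placeholders.

ratings = {
    "preparation to convey knowledge": {"women": [5, 4, 5], "men": [3, 3, 4]},
    "superior sets an example":        {"women": [5, 5, 4], "men": [4, 3, 3]},
    "necessity of the process":        {"women": [3, 4, 3], "men": [5, 5, 4]},
}

def mean(xs):
    return sum(xs) / len(xs)

# One mean per (factor, gender) pair, rounded for display.
profile = {
    factor: {gender: round(mean(scores), 2) for gender, scores in by_gender.items()}
    for factor, by_gender in ratings.items()
}

for factor, means in profile.items():
    print(f"{factor}: women {means['women']}, men {means['men']}")
```

Plotting the two per-gender mean profiles side by side gives the semantic-differential chart of Figure 4.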
Table 1 lists the knowledge sharing barriers identified by the respondents, divided according to the literature approach. Among organizational barriers, the lack of a transparent motivating system rewarding knowledge sharing (68%) comes first; less important but still visible are an "outdated" organizational culture (7%), no positive examples from the top of the organizational hierarchy (8%) and a lack of indication of the benefits of knowledge sharing (6.8%). As less important, the respondents indicated the organizational hierarchy, the lack of appropriate procedures and an inadequate work atmosphere. Among individual barriers, the most important turned out to be differences in the level of knowledge and experience (45%), lack of time (30%), a sense of danger that sharing knowledge may harm us (9%) and even a personal dislike of others (7%). In the case of technological barriers, most respondents indicated a lack of training in the use of modern technologies in knowledge sharing (89%) and a lack of consistency between expectations and technical capabilities (7%). The question can be asked: how could the organization contribute to the removal of these barriers, and is it even possible? Further in the research, the respondents were asked such a question.
The obtained answers clearly indicate the great importance of financial incentives that would encourage company employees to engage in such activities. The surveyed employees of companies from the tourism industry also noted the importance of training in this area.
Another issue highlighted in the research concerned organizational support for this process. As many as 112 people said that the management pays great attention to it. A tendency was noticed that the smaller the company, the greater the emphasis on knowledge sharing and the higher the declared degree of trust among colleagues. The surveyed employees also noticed that a very important role was played by the work atmosphere, which made employees willing to participate in this process. In the next part of the research, the effects of knowledge sharing at the individual and organization level were highlighted.
Table 1. Barriers to knowledge sharing (% of respondents, N = 148).

Organizational barriers:
• No transparent incentive system favoring knowledge sharing: 68
• "Outdated" organizational culture: 7
• No positive examples from management: 8
• No indication of the benefits of sharing knowledge: 7
• Organizational hierarchy: 0.7
• No consistency between knowledge sharing and achieving organizational goals: 5
• No proper procedures: 3
• Inadequate work atmosphere: 1

Individual barriers:
• Age differences: 3
• Gender differences: 2
• Cultural differences: 2
• Differences in knowledge and experience: 45
• The sense of danger that sharing knowledge can harm us: 9
• No time: 30
• Personal dislike of others: 7
• No language knowledge: 1

Technological barriers:
• No training in the use of modern technologies in knowledge sharing: 89
• No IT support: 1
• No consistency between expectations and technical capabilities: 7
• Reluctance to use IT tools in the process of sharing knowledge: 3

Table 2 presents the effects of knowledge sharing noticed by the respondents, divided into two areas: individual and organization. At the individual level, the respondents noted an increase in competences (90%), personal development (82%), loyalty to the company (66%), openness to others (68%) and a sense of importance (82%); other effects included reduction of stress and proper organization of work. At the organization level, respondents considered the improvement of operational efficiency (93%), company development (93%), achievement of competitive advantage (93%) and increased trust among colleagues (93%) to be the most important; further indications included improvement of work organization and an appropriate atmosphere of cooperation. As part of the research, a question was also asked whether, in the respondents' opinion, trust was an element that helped or facilitated the process of knowledge sharing. A relationship with the age of respondents was noticed: people under the age of 35 stated that trust played a key role in knowledge sharing, respondents aged 36-40 no longer fully confirmed this statement, and those over 40 were very skeptical about trust. The respondents were also asked whether they had observed that the efficiency of the company in which they work increased over the last few years.

Table 2 (fragment; organization-level effects, % of respondents):
• Improving cooperation: 67
• The right working atmosphere: 80
• Increased trust among colleagues: 93
• Group integration: 53
• Equalizing of differences in knowledge: 13

Source: own study. * Respondents had the opportunity to choose a maximum of 3 effects in each area.
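Read as percentages of the N = 148 respondents (with the source's decimal comma in "0,7" taken as 0.7), the shares within each barrier block should sum to roughly 100%, which a quick check confirms:

```python
# Sanity check on Table 1: within each barrier block the reported shares
# (percentages of the N = 148 respondents) should cover roughly 100%.

organizational = [68, 7, 8, 7, 0.7, 5, 3, 1]   # "0,7" in the source = 0.7%
individual     = [3, 2, 2, 45, 9, 30, 7, 1]
technological  = [89, 1, 7, 3]

for name, shares in [("organizational", organizational),
                     ("individual", individual),
                     ("technological", technological)]:
    total = sum(shares)
    print(f"{name}: {total:.1f}% of respondents")
    assert 97.0 <= total <= 101.0  # each block accounts for ~all respondents
```

The totals (99.7%, 99% and 100%) support reading the table values as percentages rather than raw counts.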
The next question in the research questionnaire concerned the issue of efficiency. People were asked how they evaluate defined performance planes in the companies in which they work (the study did not take into account figures related to efficiency, only the subjective assessment of the respondents). From the tourism perspective, the efficiency of a tourist enterprise can be analyzed on economic, social and ecological planes; for the purposes of the research, a development plane was added to assess effectiveness at various levels, taking into account the specificity of the tourism industry. Each respondent was asked to assign to each plane a grade on a scale of 1 (hardly visible) to 5 (very visible). The results are shown in Table 3. Economic efficiency was assessed by the vast majority (87.2%) with grade 4; only 6.1% of respondents indicated that it is hardly visible. On the social plane, the largest number of respondents (79.1%) gave grade 3, and no respondent gave the lowest score of 1. On the ecological plane, the largest number of respondents indicated the lowest score (66.2%), although the highest score (5) also appeared, for 1.4% of respondents. Interesting results were obtained on the development plane: as many as 94.6% of respondents indicated grade 4. This shows that in the companies in which they work, a lot of attention is paid to new skills, which in the long run can contribute to increased business efficiency.
The scales were summed up and a box chart was prepared using the data obtained during the research. The chart relates to the assessment of effectiveness on the various planes (economic, social, ecological and developmental). The greatest diversity of responses concerns the social and ecological planes. The most frequently chosen answers relate to grades 3 and 4. It is worth noting that very few people assigned the lowest grade, 1 (Figure 5).
Discussion
The research represents only a small sample of respondents who expressed their subjective assessment of the importance of trust in knowledge sharing, taking into account the effectiveness of operations. The analyzed issue is undoubtedly important from the point of view of theoretical and practical considerations. The perception of the process of sharing knowledge is also confirmed by much scientific research. For example, Flaszewska [53] and Ryszko [54], based on the research they conducted, stated that the most favorable factors for sharing knowledge could be financial incentives in the form of higher remuneration and additional bonuses, which are not always available in organizations. Paliszkiewicz [2] states that encouraging people to share knowledge is considered the first step towards effective, pro-development management in a modern enterprise. If difficulties arise in the area of knowledge sharing, they translate into a reduction in the efficiency, effectiveness and competitiveness of the organization. This statement is confirmed by research conducted among the 500 largest American companies, which demonstrated the scale of losses they suffered each year as a result of ineffective knowledge sharing or non-sharing of knowledge; the consequences included project delays, professional burnout and waste of resources [55]. Sharing, flow and transfer of knowledge within an organization enables tasks to be carried out more cost- and quality-effectively. It also enables the introduction of new participants to the knowledge exchange network, which is extremely important in the conditions of the information, and network, society [56]. The results have significant management implications. They indicate an important direction that should be strengthened through the use of all available human resource management tools so that, thanks to trust in the knowledge sharing process, the best economic results for the company can be achieved.
This concept and its implementation require an appropriate approach from the management because, as the respondents noticed, "the example goes from above". The support of the supervisor and the quality of communication have a significant impact on the trust of the first-line employee [57]. The importance of trust is also indicated in research by Jabłoński A. and Jabłoński M. [58]. It is worthwhile to use the incentive possibilities that exist in the organization to bring the expected result. The use of motivational tools will contribute to greater confidence, which will facilitate the process of sharing knowledge. Effective motivation will also lead to a favorable atmosphere among colleagues; in this way it will contribute to building an appropriate organizational culture model, providing a positive background for implementing the knowledge sharing process.
Conclusions
Nowadays, trust is an important element in the functioning of any organization. It is crucial in building friendly interpersonal relations and can be compared to a huge force that affects the efficiency of doing business. Trust increases the willingness to act and to jointly pursue organizational goals. According to Rudzewicz [59], organizations should influence employee relations and their work satisfaction, which will result in an increased level of effectiveness of the entire organization. Based on the research carried out and the results obtained, it can be concluded that in the group of surveyed employees of organizations from the tourism industry, the knowledge sharing process is recognizable and is considered to occur in every organization. The conducted research is only a partial picture of a phenomenon that is of great importance for every organization in achieving the desired efficiency of doing business. The greater the awareness of the benefits of sharing knowledge, the greater the guarantee of positive effects. Trust plays a very important role here because it determines the satisfactory achievement of results. As can be seen from the conducted research, the management is also very aware of the importance of the process of sharing knowledge because, as the respondents stated, managers set a good example. This may determine an appropriate human resource policy and an organizational culture focused on sharing knowledge. The respondents drew attention to many barriers that accompany this issue, thus expressing how much more needs to be done in this regard. There are also expectations regarding rewards for sharing knowledge and training in the use of various types of tools. The respondents also pointed out visible effectiveness on various planes: economic, social and developmental. The study allowed for collecting opinions among employees of companies from the tourism industry.
The tourism industry is specific when it comes to trust issues. Here, employees very often compete with one another in some way, which may not contribute to a positive attitude towards knowledge sharing. It can be stated that the collected material is a basis for improving the research tool and gives the prospect of broader research in this field and industry, which can contribute to formulating a knowledge-sharing strategy in an efficiency-based organization and lead to ever better results for companies from the tourism industry.
"year": 2020,
"sha1": "97645558d5899db4c979edb171d1d3dd5e4c5ba5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2078-2489/11/6/311/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ddfa13558fb38cc72149e8288d9152c03618aafe",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Business"
]
} |
Acoustic horn design for joining metallic wire with flat metallic sheet by ultrasonic vibrations
Ultrasonic metal welding is a green manufacturing technique and one of the most advanced solid-state welding processes, in which similar or dissimilar metallic components are joined by the application of high-frequency vibrations (>20 kHz) and pressure. Ultrasonic metal welding is accompanied by slip and plastic deformation, so the base metals being welded do not melt; instead, a homogeneous coalescence of the two metals forms at the joining area, and the joint retains the parent metal properties. The major problem faced by industries using the ultrasonic metal welding process is poor weld quality and weld strength. The design of the acoustic horn, or sonotrode, plays a dominant role in producing quality welds. The primary function of the sonotrode is to vibrate at the level required for welding and to transmit the vibration energy to the point where welding of the metals takes place. For producing quality welds, the vibration energy must be transmitted to the weld interface without much loss. Therefore, there is a need for accurate design of sonotrodes in the ultrasonic metal welding process. This work focuses on designing a stepped sonotrode used for joining a metallic wire with a metallic sheet, based on significant design parameters such as amplitude gain and von Mises stress factor, using modal and harmonic analysis. Experimental trials are conducted using the stepped sonotrodes, and their effectiveness is evaluated based on the improvement of the strength of the joint in tension.
Introduction
Ultrasonic Metal Welding (USMW) is a joining process in which two metallic specimens are joined by the application of ultrasonic vibrations under moderate pressure, where the vibrations are applied at the interface between the weld specimens. USMW is one of the most widely used advanced methods for welding non-ferrous metals such as aluminium (Al), copper (Cu), nickel (Ni), gold (Au) and silver (Ag). It eliminates most of the problems existing in traditional welding techniques, such as the fusion of metal through the application of heat by flame or electric arc in combination with cleaning and fluxing agents and sometimes filler metals. Ultrasonic welding of a metallic wire to a flat metallic sheet, such as a terminal plate or lead tab, is a good example that needs in-depth study.
This type of joint is encountered in numerous industrial applications such as consumer durable products, automotive components, switchgear, bus bars, fuses, circuit breakers, ignition modules, contacts, starter motors, microelectronic wires and battery connectors. The schematic representation of ultrasonic metal welding is shown in Fig. 1.
The main components of the USMW machine are the converter, transducer, booster, sonotrode, anvil and pneumatic system. The converter raises the standard electrical supply to the required operating frequency. The piezo-electric transducer converts the high-frequency electric supply into ultrasonic vibrations. The booster modifies the amplitude of the vibrations as applied to the sonotrode. The sonotrode transmits the vibrations to the work materials, which are held on an anvil in lap joint configuration. The pneumatic system moves the ultrasonic stack up and down to apply the required clamping pressure on the specimens to be welded. The quality of joints is influenced by predominant factors such as the composition and geometry of the weldment, the hardness of the work piece, the cleanliness of the surfaces to be joined, and the selection of welding conditions: power, clamping pressure, amplitude of vibration, weld time and tooling. During ultrasonic welding, the acoustic horn or sonotrode is in direct contact with the work material. The role of the sonotrode is to transmit and concentrate ultrasonic vibration at the spot where the materials are to be welded. The basic requirement of sonotrode design is to obtain the optimum amplitude of vibration at the interface where the weld is to be formed. The design of the sonotrode is influenced by many factors such as the shape, size, profile and material. Due to the dynamic nature of the sonotrode, significant research activities have been carried out to study the dynamic characteristics of sonotrode configurations and fundamental modal properties, such as the natural frequencies and mode shapes of the sonotrode, using analytical and finite element methods.
Fig. 1. Ultrasonic metal welding system components

Kim et al. [1] discussed a method for designing an ultrasonic metal welding sonotrode based on modal and harmonic analyses. It was found that the largest amplitude change occurred at the tip of the sonotrode when the length of the sonotrode is kept equal to half of the wavelength of the sound wave passing through it. The frequency characteristics of the sonotrode were analysed using ANSYS software. Ziad Al Sarraf [2] presented an approach for the design and simulation of an ultrasonic sonotrode for spot welding that resulted in improved welding performance. Constantin et al. [3] performed finite element analysis on various sonotrode profiles such as conical, exponential and parabolic cubical sonotrodes. The variation of the oscillation amplitude for the different sonotrode profiles was reported. Jeong Seok Seo et al. [4] developed a one-wavelength sonotrode operating at a resonant frequency of 40 kHz. The performance of the sonotrode was effectively verified by conducting experiments. This study also showed that the tip of the sonotrode was made to vibrate at a resonant frequency close to the working frequency of the metal welder used in the experiments. Vinod et al. [5] employed the finite element method for designing sonotrodes used for ultrasonic machining. A systematic procedure was developed for the design of the sonotrode. A double conical shape sonotrode was considered, various stress components in the sonotrode for several frequencies were studied, and it was concluded that the stress induced in the sonotrode was minimum at the resonance condition. Nad [6] reported that the performance of ultrasonic welding equipment depends on the proper design of the sonotrode. The dynamic characteristics of different geometrical shapes of sonotrodes were presented in this work. Modal analysis was carried out by numerical simulation using the finite element method. Amin et al. [7] established a computer aided design procedure for the selection of sonotrode profile and material based on finite element analysis. A new design profile was suggested using parts with different geometries. An optimization procedure was adopted to obtain maximum magnification and safe working stresses based on the material used for fabrication of the sonotrode. Shuyu [8] analysed the propagation characteristics of longitudinal and torsional vibrations in sonotrodes with exponential profiles and obtained the conditions for resonance of the longitudinal and torsional vibrations. Jeongdong et al. [9] designed a sonotrode for high power ultrasonic transducers using classical mathematical methods and a numerical method. Based on the analysis, it was determined that a sonotrode with a catenoidal profile provides larger amplification compared to a sonotrode with an exponential profile. It was also indicated that the sonotrode with the least curvature could possibly achieve larger amplification. Kuo et al. [10] made an attempt to design a rotary ultrasonic milling tool using the finite element method. The harmonic piezoelectric vibrations of the ultrasonic milling system were simulated by the finite element method to study the frequency, amplitude of vibration and stress distribution.
Based on the literature survey, it was found that only a few research works have been reported pertaining to the design issues of the stepped sonotrode. An evaluation of the design of a stepped sonotrode, particularly one used in lateral drive ultrasonic metal welding systems for joining metallic wire and sheet, using a design of experiments approach involving the significant process parameters, appears not to have been reported. This work was carried out to fill this gap. The research work reported in this paper, comprising finite element analysis combined with experimental validation of the performance of the sonotrode, will contribute an effective design approach for academic and industry practitioners.
Two different types of stepped sonotrode are considered in this work, as shown in Fig. 2. The types of sonotrode are: Type I, the existing traditional sonotrode with dissimilar cross section and abrupt change in cross section, commonly used for joining metallic wire with metallic sheet specimens in lateral drive ultrasonic metal welding systems; and Type II, a sonotrode with similar circular cross section and gradual change in cross section using a tapered profile at the middle of the sonotrode. A systematic study based on the methodology shown in Fig. 3 is carried out for designing a stepped sonotrode using finite element analysis employing ABAQUS software, followed by experimental investigation, with the following objectives: 1) To extract the mode shapes and the corresponding natural frequencies of the stepped sonotrodes. CAD software is used to model the profile based on theoretical calculations.
2) To determine the dynamic characteristic design parameters such as amplitude gain and von Mises stresses developed in the stepped sonotrodes from finite element analysis.
3) To study the performance of the sonotrodes by conducting experiments and compare them for optimum performance. Joint strength in tension is measured for this purpose.
Design of sonotrode
The stepped sonotrode profile is used in a variety of ultrasonic metal welding applications. The stepped sonotrode offers many advantages such as higher amplitude gain, ease of manufacture and extended service life [11]. Therefore, it is essential to design a sonotrode with higher amplitude gain while keeping the stress levels in the sonotrode as low as possible. The critical aspect of the design of a sonotrode for ultrasonic metal welding is that the resonance frequency of the sonotrode must match the working frequency of the ultrasonic transducer. It is important to consider various aspects such as the selection of the working frequency, selection of the sonotrode material, establishment of the velocity of sound propagation in the selected sonotrode material and the calculation of the dimensions of the sonotrode. The governing equation for the sonotrode is derived by considering an elastic medium subjected to longitudinal vibration. When a long slender bar as shown in Fig. 4 is excited along its axis, longitudinal vibration occurs and it undergoes axial deformation. Therefore, a straight bar of length L and cross-sectional area A is subjected to a longitudinal displacement u(x, t). The basic assumptions are that the medium is homogeneous, isotropic and has a free-free boundary condition. The governing equation of the bar is derived by considering a point location x along the section of the bar and a small element dx of the bar. When this element is subjected to an applied force F, the element moves from its initial position with a displacement u, and the change in displacement is u + du. The length of the element is increased to dx + du, as shown in Fig. 4.
The stepped sonotrode is designed on the basis of the axial vibration of an elastic member with varying cross section. The plane wave propagation in the bar is assumed to be only in the axial direction, and propagation along lateral directions is neglected. The generalized wave equation, which governs the acoustic behaviour of the sonotrode with change in cross section, is shown in Eq. (1):

∂²u/∂t² = c²·[∂²u/∂x² + (1/A)(∂A/∂x)(∂u/∂x)], (1)

where c is the velocity of propagation of sound through the sonotrode and is given by Eq. (2):

c = f·λ = √(E/ρ). (2)

Here, f is the frequency at which the sonotrode vibrates (20,000 Hz), λ is the wavelength of the ultrasonic waves in m, E is the modulus of elasticity in N/m² and ρ is the density of the material in kg/m³.
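As a numerical sketch of Eq. (2), the required half-wave sonotrode length follows directly from the material constants. The modulus and density below are typical handbook values assumed for a D2-type tool steel, not the exact figures from Table 1:

```python
import math

def acoustic_velocity(E, rho):
    # Eq. (2): velocity of longitudinal sound waves, c = sqrt(E / rho)
    return math.sqrt(E / rho)

def half_wave_length(f, E, rho):
    # A half-wave sonotrode is lambda/2 = c / (2 f) long
    return acoustic_velocity(E, rho) / (2.0 * f)

E = 210e9     # Young's modulus in N/m^2 (assumed, typical tool steel)
rho = 7700.0  # density in kg/m^3 (assumed)
f = 20_000.0  # working frequency in Hz (from the paper)

c = acoustic_velocity(E, rho)    # roughly 5.2 km/s
L = half_wave_length(f, E, rho)  # roughly 0.13 m, i.e. about 130 mm
```

With these assumed constants the half-wave length comes out near 130 mm; the paper's actual dimensions depend on the measured properties in Table 1.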
The selection of material is the fundamental step in the design of a sonotrode. Materials such as tool steel, die steel, maraging steel and titanium are commonly used for making sonotrodes for welding soft metals. The sonotrode material considered in this work is AISI D2 steel, and the material properties of AISI D2 steel are shown in Table 1. Sonotrodes may be of full wavelength or half wavelength. The half-wavelength sonotrode is generally used for ultrasonic metal welding applications so as to save material and reduce manufacturing cost, and is the type considered in this work. The geometrical dimensions and the material of the sonotrode play a significant role in enhancing the performance of the sonotrode. The geometry and governing dimensions of the standard stepped sonotrode are shown in Fig. 5.
The total length of the sonotrode (L) considered in this work is half of the wavelength (λ/2) of the waves passing through it and is divided into two equal parts L1 and L2. The total length of the sonotrode is calculated using Eq. (2). Normally, the areas of cross section at the larger end and the smaller end are in the ratio 2:1 for steel and 3:1 for titanium; this gives improved acoustical performance [9]. The specifications of the designed sonotrode are shown in Table 2. The commercially available software Pro/Engineer Wildfire 4.0 is used to develop CAD models of the sonotrode. The sonotrodes are modeled based on the calculated theoretical dimensions.
Finite element analysis of sonotrode
The commercially available ABAQUS/CAE 6.12 "Standard/Explicit Model" software is used in this work to analyse the design of the ultrasonic metal welding sonotrode. Modal analysis is performed on the sonotrodes to obtain the possible mode shapes, and harmonic analysis is performed to obtain the amplitude gain and the von Mises stress developed in the sonotrodes. Since the sound wave propagates through the sonotrode, the generalized wave Eq. (3) is applicable in this analysis. The meshing of the CAD model requires care, as the accuracy of the results depends on the type of element, the number of elements and the variation in size of the elements. The element used in meshing the sonotrode models is the standard C3D10. This element is a general purpose tetrahedral element with a very attractive capability for automatic meshing and provides better results in vibration analysis. Table 3 shows the FEM models of the sonotrode. During analysis, the material of the sonotrode is assumed to be homogeneous and isotropic, and there is no change in material properties along the length of the sonotrode. The fixed boundary condition shown in Eq. (4) is applied at the transducer end of the sonotrode by constraining all degrees of motion, i.e. all displacement components at that end are set to zero:

u_x = u_y = u_z = 0, θ_x = θ_y = θ_z = 0. (4)

The range of frequency is set between 0-25 kHz. The Lanczos method is used to determine the natural frequencies of the sonotrode.
Various mode shapes of the sonotrodes, such as bending, extension and twisting, and the corresponding natural frequencies are obtained using finite element analysis and are shown in Table 4. From the mode shapes identified, the extension mode shape is selected according to the application, and harmonic analysis is then carried out for the extension mode shape of each type of sonotrode. The pre-processing of the Type II sonotrode for harmonic analysis is shown in Fig. 6, and the analysis was carried out for both types of sonotrode. Generally, for many ultrasonic metal welding applications pertaining to the joining of metallic wire and sheet, the input amplitude of vibration of the sonotrode is in the range of 30 to 60 µm and the clamping pressure is in the range of 2 to 5 bar. In this work, the analysis is carried out by providing input amplitudes of 30 µm, 40 µm, 50 µm and 60 µm at the transducer end of the sonotrode and the maximum clamping pressure of 5 bar applied at the tip of the sonotrode, as shown in Fig. 6. Since the sound wave propagates through the sonotrode, the length of the sonotrode is made equal to half the wavelength [1] to obtain the maximum amplitude of vibration at the tip of the sonotrode. It is essential to have a gain for the amplification of the vibration amplitude from the transducer end to the required level at the tip. The amplitude gain is computed theoretically as 2.98 using Eq. (5) [3]:

M = (D/d)², (5)

where D is the diameter at the larger end in mm and d is the diameter at the smaller end in mm. From the results of the harmonic analysis shown in Fig. 7 and Fig. 8, the amplitude gain is found to be 2.35 for the Type I sonotrode and 2.85 for the Type II sonotrode for an input amplitude of 30 µm. The results of the analysis are compared with the amplitude gain obtained theoretically; it is clear that the amplitude gain of the Type II sonotrode is in closer agreement with the theoretical value than that of the Type I sonotrode. In practice, the duration of safe working of the sonotrode for producing defect-free joints is a significant requirement. The service life of the sonotrode is a vital parameter which ultimately depends on both the design of the sonotrode and the operational parameters. The stresses developed during operation must be at safe levels for a sonotrode to have an effective useful life. On account of this, an analysis is performed to determine the von Mises stress in both types of sonotrode. The von Mises stresses obtained using the input parameters of 30 µm amplitude and 5 bar pressure for the two types of sonotrode are shown in Fig. 9. Similarly, analysis is carried out to determine the von Mises stress at the other input amplitudes of 40 µm, 50 µm and 60 µm. Since it is not possible to present all the analysis results pictorially, the results are shown in Table 5 and the values are plotted in Fig. 10. From the harmonic analysis it is evident that the Type II sonotrode has a significant amplitude gain of 2.85 and a minimum von Mises stress of 1027 N/mm² at the maximum input amplitude of 60 µm, which is significantly low compared with the Type I sonotrode. On average, there is a 57 % reduction in stress levels at different input amplitudes in the Type II sonotrode. Experiments are conducted to study the performance of the sonotrode.

[Journal of Vibroengineering, November 2018, Volume 20, Issue 7]
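If Eq. (5) is the usual area-ratio magnification of an ideal stepped horn, the theoretical gain is simply the squared diameter ratio. The diameters below are hypothetical, since Table 2 is not reproduced here:

```python
def stepped_horn_gain(D, d):
    # Ideal stepped-horn amplitude magnification (cf. Eq. 5):
    # the ratio of the two cross-sectional areas, (D / d)^2
    if D <= 0 or d <= 0:
        raise ValueError("diameters must be positive")
    return (D / d) ** 2

gain = stepped_horn_gain(20.0, 10.0)  # 4.0 for a 2:1 diameter ratio
```

A theoretical gain of 2.98, as quoted in the paper, corresponds to a diameter ratio of about 1.73 under this formula; the actual diameters are those in Table 2.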
Details of experiments
The purpose of conducting experiments is to evaluate the performance of the customized Type II sonotrode against the standard Type I sonotrode. The performance of the sonotrodes can be evaluated by comparing the strength in tension of the joints obtained with each sonotrode. The Type II sonotrode is fabricated using a MAKINO CNC machining center. The experimental trials are conducted based on Taguchi's method. Taguchi's method of experimental design provides a simple, efficient and systematic approach for conducting experimental trials. Considering three factors at three levels, an L9 orthogonal array is employed [13] for conducting the experimental trials. Ultrasonic welding trials are conducted using a conventional lateral drive ultrasonic metal welding machine (2500 W, 20 kHz) for different ranges of weld parameters. The specimens used in this work are copper sheet (as received) of 100 mm length, 25 mm width and 0.31 mm thickness, and copper wire (as received) of 0.91 mm diameter and 100 mm length, with an overlap length of 6 mm in lap joint configuration [14]. The schematic representation of the joint is shown in Fig. 11. The experimental setup and the fabricated Type II sonotrode are shown in Fig. 12. The quality of the joint depends on many parameters. In this work, the parameters weld time, clamping pressure and amplitude of vibration of the sonotrode [15][16][17] are identified as control factors and varied at three levels as shown in Table 6. The strength of the joint in tension is considered as the response variable. Two replicates of each experimental trial are conducted and the average of the two results is taken as the strength of the joint in tension. The weld specimens are prepared according to ASTM international codes [12] for testing the strength of the joint by tensile loading. A 10 kN tensile testing machine is used to determine the joint strength of the specimens. The specimens (as received) are cleaned with acetone to remove surface impurities. The experimental trials conducted using the Type I sonotrode are shown in Table 7 and those conducted using the Type II sonotrode in Table 8. Based on the experimental results, it is found that the strength of the joints obtained using the Type II sonotrode is greater than that of the specimens obtained using the Type I sonotrode. The welded specimens and the tensile loading conditions of the welded specimens are shown in Fig. 13. The increase in tensile strength of the welded joints for each experimental trial using the Type II sonotrode is compared with the joint strength values obtained using the Type I sonotrode. The percentage increase in strength of the joint for each experimental trial is shown in Fig. 14. This increase in joint strength obtained using the Type II sonotrode is due to the higher amplitude gain and the changes in the tip of the sonotrode. For the weld joint to have adequate strength, the specimens must be pressed and rubbed against each other by the sonotrode tip.
The Type I sonotrode has a square tip of side 5 mm for pressing the metallic wire on the flat surface, wherein the metallic wire tends to slip when both clamping pressure and ultrasonic vibrations are applied through the sonotrode to the weld specimens. In the Type II sonotrode, the tip is made uniform, with a rectangular tip of 20 mm length and 5 mm width, wherein the ultrasonic vibrations and clamping pressure are transmitted efficiently to the weld specimens, thereby improving the strength of the joint. Hence, the customized Type II sonotrode performs more effectively than the Type I sonotrode.
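The L9 trial plan mentioned above can be sketched as follows. The factor levels used here are placeholders; the actual values are those listed in Table 6 of the paper:

```python
# First three columns of the standard Taguchi L9(3^4) orthogonal array,
# one row per trial; entries are level indices 1-3.
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

# Hypothetical levels for the three control factors (see Table 6 for the real ones):
levels = {
    "weld_time_s":  [1.0, 2.0, 3.0],
    "pressure_bar": [2.0, 3.5, 5.0],
    "amplitude_um": [30.0, 45.0, 60.0],
}

def build_trials(array, levels):
    # Map each row of level indices onto the corresponding factor values.
    names = list(levels)
    return [
        {name: levels[name][idx - 1] for name, idx in zip(names, row)}
        for row in array
    ]

trials = build_trials(L9, levels)  # 9 balanced trials instead of 3^3 = 27
```

The array is balanced: each level of each factor appears in exactly three of the nine trials, which is what makes the Taguchi approach efficient compared to a full factorial design.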
Conclusions
In this work, an investigation was carried out for designing sonotrodes used for joining metallic wire with a flat metallic surface. The dynamic characteristics of the two different types of stepped sonotrodes were studied and reported. The finite element analysis results and experimental results revealed the following with regard to the design, analysis and performance of the sonotrode.
The modal analysis demonstrated various mode shapes in bending, extension and twisting modes of the sonotrodes. The sonotrodes were designed to operate at the longitudinal extension mode shape. These mode shapes for the Type I and Type II sonotrodes were obtained at 19552 Hz and 19814 Hz respectively using finite element analysis.
Harmonic analysis was performed to determine the amplitude gain and the stresses induced in the sonotrodes. The amplitude gain of the stepped sonotrode was analytically computed as 2.98, and the amplitude gains obtained using ABAQUS software were 2.35 and 2.85 for the Type I and Type II sonotrodes respectively.
The von Mises stress developed in both types of sonotrode was analyzed and it was found that the Type II sonotrode was subjected to a lower stress level than the Type I sonotrode. The stress levels were reduced by 57 %. This reduction in stress was due to the provision of a gradual change in cross section at the middle of the sonotrode and could result in an enhancement of the useful life of the sonotrode.
The performance of the sonotrodes was evaluated by conducting experimental trials. The tensile strength of the weld specimens obtained using the Type I sonotrode was compared with that obtained using the Type II sonotrode. The joint strength under tensile loading conditions was increased by 37 % using the Type II sonotrode. The scope of this work can be further extended with a study on the service life of the sonotrode based on wear at its tip, and on changes in the geometric shape, geometric dimensions and material of the sonotrode based on shape optimization. The T-peel strength of the joint can also be considered for comparison of joint strength.
Fig. 2 .
Types of stepped sonotrode: a) type I, b) type II
Fig. 14 .
Fig. 14. Percentage increase in strength of the joint using Type II sonotrode
2958. ACOUSTIC HORN DESIGN FOR JOINING METALLIC WIRE WITH FLAT METALLIC SHEET BY ULTRASONIC VIBRATIONS.
Table 1 .
Material properties of AISI (American iron and steel institute) D2 steel
Table 3 .
FEM models of various types of stepped sonotrode
Table 5 .
Simulation results of von Mises stresses at 5 bar pressure
Table 6 .
Process parameters and the levels
Table 7 .
Experimental results using type I sonotrode
Table 8 .
Experimental results using type II sonotrode
Effects of Facets of Mindfulness on College Adjustment Among First-Year Chinese College Students: The Mediating Role of Resilience
Introduction
College life is a challenging stage in which students transition from adolescence to early adulthood. College students need to adjust to various problems, including those related to learning, campus life, interpersonal relationships, career selection, emotions, and self. The aim of this study was to test the associations between different facets of mindfulness, resilience, and college adjustment, as well as the mediation effect of resilience between mindfulness and college adjustment among first-year college students.
Methods
This survey study recruited 765 first-year college students in China. The psychological variables were assessed by the Five Facet Mindfulness Questionnaire, the Connor–Davidson Resilience Scale, and the Chinese College Student Adjustment Scale.
Results
The current study showed that mindfulness and resilience were positively correlated with college adjustment. Resilience significantly mediated the associations between four dimensions of mindfulness (ie, describing, acting with awareness, observing and non-reactivity) and college adjustment.
Conclusion
The findings support the potential importance of enhancing mindfulness and resilience to facilitate adjustment among first-year college students. Limitations and implications are discussed.
Introduction
College students need to adjust to various problems, including those related to learning, campus life, interpersonal relationships, career selection, emotions (eg, homesickness, depression, and dissatisfaction), and self. 1,2 College life also represents a challenging stage, as this period is a transition from adolescence to early adulthood, and people in transitional stage are particularly vulnerable to the development and chronification of emotional/mental disorders. 3 Therefore, the best possible care needs to be ensured for these age groups. The effects of adaptation to a new life situation on short-term and long-term well-being, mental health, and quality of life have been well documented. 4,5 Successful adjustment to college life is negatively associated with depressive symptoms, academic stress, and psychological distress. [6][7][8] It also has lasting consequences on the development of personality and careers in adulthood. 4 Mindfulness is a multi-dimensional concept and has received much attention in contemporary clinical and social psychology given its apparent benefits for stress coping, behavior and emotion regulation, psychological health, and interpersonal relationships. 9 This study aimed to test the roles of different facets of mindfulness in college adjustment and the potential mediators of the relationships among first-year college students.
Mindfulness and College Adjustment
Mindfulness can be a positive trait that assists new college students in successfully adapting to their college life. Mindfulness entails being aware of the present moment experience in a non-judgmental manner, and is a multifaceted concept (ie, non-reactivity to inner experience; observing/attending to sensations, thoughts or feelings; acting with awareness; describing feelings with words; non-judging of inner experience). 10,11 People with high levels of mindfulness tend to be aware of their body sensations, thoughts, and emotions with less reactivity and keep a stance of equanimity instead of engaging in suppression or excessive fixation. The non-judgmental view and acceptance-based approach can provide a metacognitive insight into an understanding, and acceptance, of intrapersonal and interpersonal difficulties. [12][13][14][15] The clarity and vividness of the experience and orienting to the present moment with curiosity and openness can facilitate one's psychosocial adjustment to life changes and new environments. [16][17][18] Although a number of studies have reported positive correlations between mindfulness and psychosocial wellbeing, 11,[19][20][21][22][23][24] we only found two studies testing the association between mindfulness and college adjustment. 25,26 One study included 92 first-year students and reported positive associations between mindfulness and college adjustment. 25 However, this study had a small sample size and did not report the role of each mindfulness facet in college adjustment. The other study recruited undergraduate students (N=2496) and found that mindfulness is a strong predictor of college adjustment. 26 Scholars have highlighted the importance of examining mindfulness as a multi-faceted construct, as the specific mindfulness facets may correlate differentially with aspects of psychological adjustment. 27,28 For example, one recent study including 353 undergraduates (55.8% of whom were first-year students) reported significant effects of non-reactivity and non-judging on stress and greater emotional well-being. 29 Another study among 310 undergraduates found that observing was negatively related to self-reported physical health, acting with awareness and non-judging were positively related to emotional well-being, and non-judging was positively associated with social functioning. 30 If particular mindfulness facets predict adjustment more robustly, those facets could be emphasized in mindfulness-based interventions to enhance their effectiveness.
Resilience as a Potential Mediator
Resilience is an important trait that helps individuals to cope with adversity and achieve successful adjustment and personal growth during trying circumstances. 31 Ryff, Singer, Love, and Essex (1998) defined resilience as the capacity to maintain and recover high well-being when facing life changes and adversities. 32 Previous research has demonstrated that resilient individuals could maintain their physical and psychological health by buffering negative consequences in difficult times 31 and by enhancing psychological well-being. 33 Resilience has been found to enhance adaptive coping, positive emotions, life satisfaction, interpersonal satisfaction, and personal growth, [34][35][36][37][38][39][40][41] which are closely related to one's psychosocial adjustment to new environments. Thus, resilience may be an important source of college adjustment among new college students. One study reported a positive effect of resilience on adjustment to college among 514 first-year undergraduate students in the southern United States. 42 Moreover, resilience may act as a mediator between mindfulness and college adjustment. A review study of trait mindfulness and resilience to trauma suggested that a mindful and accepting orientation towards negative experience can help prevent ruminative and depressogenic thinking, hence promoting resilience following trauma. [43][44][45] Mindfulness has demonstrated the potential to foster resilience, as mindful people are better able to respond to difficult situations without reacting in automatic and nonadaptive ways, are open to new perceptual categories, tend to be more creative, and can better cope with difficult thoughts and emotions without becoming overwhelmed or shutting down. 46,47 One study among 141 college students reported positive relationships between mindfulness and resilience. 48 Furthermore, only one study has tested the mediation role of resilience, showing that it was an important mediator between mindfulness and subjective well-being among college students. 34 This study hypothesized that resilience would significantly mediate the association between mindfulness and college adjustment among new college students.
The Present Study
This study aims to investigate the relationships among mindfulness, resilience, and college adjustment in first-year college students. It is hypothesized that mindfulness would be positively associated with college adjustment. Based on the above stated rationale and the existing literature showing that mindfulness is an antecedent to resilience, 44,[48][49][50][51] and that resilience is positively correlated with subjective well-being and positive coping with life changes/adversities, [34][35][36][37][38][39][40][41] it is expected that mindfulness would exert a significant indirect effect on college adjustment through the mediation effect of resilience. Furthermore, since mindfulness is a multi-faceted construct, this study tested how the different facets would be associated with resilience and college adjustment (Figure 1).
Participants and Procedure
This paper-and-pencil survey was conducted in a convenience sample at Pingdingshan University, Henan province, China, during September and October 2018. The inclusion criteria of this study were as follows: 1) being a first-year college student; and 2) being willing to participate in the survey. Students from 15 classes (each class including 45-55 students) were invited and completed the survey during a 20-minute class break. One of the authors explained to the participants that participation was anonymous and voluntary and that refusal would have no negative consequences, and was available throughout the survey process to answer any queries raised by the participants. Data confidentiality was guaranteed and only the researchers could access the data. The participants took 10-15 minutes to complete the questionnaire.
In total, 767 students agreed to participate in the study and 2 did not complete the questionnaire. Among the 765 participants, about half (54.4%) were aged 18-19 years, and 71.6% were female. About half of the participants (48.8%) majored in liberal arts/social science. Most of them (78.0%) reported a family monthly income of RMB 5,000 or less (Table 1).
Ethical Statements
This study followed the Declaration of Helsinki. Ethical approval was obtained from the Survey and Behavioral Research Ethics Committee of the Chinese University of Hong Kong (Ref# 055-18). Informed consent was shown to all the participants to ensure they knew that the study was anonymous, that no incentive was provided, and that they could quit the study at any time without any penalty. Written consent was obtained from each participant.
Measures
The questions and scales were listed in the questionnaire in the following order: background variables, mindfulness, resilience, and college adjustment.
Mindfulness
Trait mindfulness was measured by the Chinese version of the 39-item Five Facet Mindfulness Questionnaire (FFMQ). 52 The FFMQ measures five unique mindfulness facets: observing (eg, "When I'm walking, I deliberately notice the sensations of my body moving"), describing (eg, "I am good at finding words to describe my feelings"), acting with awareness (eg, "When I do things, my mind wanders off and I'm easily distracted"), non-judging of inner experience (eg, "I criticize myself for having irrational or inappropriate emotions"), and non-reactivity to inner experience (eg, "I perceive my feelings and emotions without having to react to them"). Participants rated each item on a 5-point Likert scale, ranging from 1 (never) to 5 (always). Scores of the negative-worded items were reversed when calculating mean scores and internal reliability of the scale. Higher mean scores indicated higher levels of mindfulness. In the present study, Cronbach's alphas of the subscales were 0.78, 0.80, 0.85, 0.71, and 0.56, respectively.
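The scoring rule described above (reverse-code the negatively worded items, then average) can be sketched as follows; the item numbers and reverse-key here are illustrative, not the actual 39-item FFMQ key:

```python
def score_likert_scale(responses, reverse_items, low=1, high=5):
    # responses: {item_number: raw rating}; reversed score = low + high - raw
    scored = [
        (low + high - rating) if item in reverse_items else rating
        for item, rating in responses.items()
    ]
    return sum(scored) / len(scored)

# Toy 4-item example with items 2 and 4 negatively worded (hypothetical key):
mean_score = score_likert_scale({1: 4, 2: 2, 3: 5, 4: 1}, reverse_items={2, 4})
# (4 + 4 + 5 + 5) / 4 = 4.5
```

The same function can be applied per facet by passing only that facet's items, which is how subscale means such as the five FFMQ facets are typically obtained.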
Resilience
The Chinese version of the 25-item Connor-Davidson Resilience Scale was used to assess participants' psychological resilience. 31,53 The sample items include "Past success gives confidence for new challenge" and "When things look hopeless, I don't give up". Respondents rated items on a Likert scale from 0 (not true at all) to 4 (true nearly all the time). Higher scores suggested greater resilience. The Cronbach's α was 0.93 in the current sample.
College Adjustment
The 66-item Chinese College Student Adjustment Scale (CSAI) was used to assess college adjustment in the aspects of learning adaptivity, interpersonal adaptivity, self-adaptivity, career choice adaptivity, livelihood self-management adaptivity, environmental general evaluation, and somatic-mental symptoms. 54 Items (eg, "I can handle daily affairs independently"; "I seldom take part in other activities except study") were rated on 5-point Likert scales, ranging from 1 (disagree) to 5 (agree). Higher scores indicated better overall adjustment to college life. In the present study, the Cronbach's α of the overall scale was 0.94, while some of the subscales had relatively low reliabilities (Cronbach's α < 0.50) in our sample. Thus, we used the total score in the analyses.
Data Analysis
Descriptive statistics were computed for the background and psychological variables. Associations between background variables and psychological variables were examined with Pearson's correlation analysis, independent t-tests, or ANOVA, as appropriate. Pearson's correlation analysis was conducted to examine the associations among mindfulness, resilience, and adjustment. SPSS version 17.0 was used; p values of 0.05 or less indicated statistical significance.
AMOS 17.0 was used for model testing. Path analysis was used to test the proposed mediation model. Goodness of fit was tested using the χ 2 test, the Comparative Fit Index (CFI), and the root mean square error of approximation (RMSEA). Non-significant p values (>0.05) of the χ 2 test indicate adequate model fit; CFI values >0.95 and RMSEA values <0.08 indicate good model fit. Standardized path coefficients (β) and unstandardized path coefficients (B) were reported. Bootstrapping analyses tested the mediation hypotheses. The 95% confidence intervals (CI) of the indirect effects were obtained from 5,000 bootstrap samples. A statistically significant mediation effect was observed when the CI did not include zero. A widely accepted rule of thumb for path analysis is 10 cases/observations per indicator variable as a lower bound on an adequate sample size; 55 with 8 observed variables, a sample of 80 would be adequate. Thus, our sample size was sufficient for all the analyses.
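The percentile-bootstrap procedure for an indirect effect can be sketched in plain Python (an illustrative a*b bootstrap with simple OLS slopes, not the AMOS estimator; function and variable names are ours):

```python
import random

def bootstrap_indirect_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the indirect effect a*b of
    x -> m -> y: resample cases with replacement, recompute the
    slopes of m on x and y on m, and take percentiles of a*b."""
    def slope(u, v):  # OLS slope of v regressed on u
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
        den = sum((ui - mu) ** 2 for ui in u)
        return num / den

    rng = random.Random(seed)
    idx = range(len(x))
    effects = []
    for _ in range(n_boot):
        s = [rng.choice(idx) for _ in idx]  # resample with replacement
        xs, ms, ys = [x[i] for i in s], [m[i] for i in s], [y[i] for i in s]
        effects.append(slope(xs, ms) * slope(ms, ys))
    effects.sort()
    lo = effects[int((alpha / 2) * n_boot)]
    hi = effects[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi  # mediation is significant when the CI excludes zero
```

With 5,000 resamples this mirrors the procedure described above: the mediation hypothesis is supported when the returned interval does not contain zero.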
Preliminary Analyses
The means, standard deviations (SD), and correlation coefficients of the studied psychological variables are displayed in Table 2. Only age was significantly associated with college adjustment (r = 0.08, p = 0.03) and was therefore adjusted for in further analyses. Although all the participants were first-year college students, those who were older showed higher levels of college adjustment. Other background variables were not significantly associated with college adjustment (p > 0.05). The FFMQ subscales of observing, describing, acting with awareness, and non-reactivity were each positively associated with resilience and college adjustment (p < 0.05). However, the non-judging subscale was negatively associated with resilience (r = −0.22, p < 0.01) and not significantly associated with college adjustment (r = 0.05, p > 0.05). Resilience was positively associated with college adjustment (r = 0.60, p < 0.01).
In addition, the direct effects of describing (B = 0.07, β = 0.11, p < 0.01) and acting with awareness (B = 0.24, β = 0.39, p < 0.01) on college adjustment were significant and positive. However, observing, non-judging, and non-reactivity were not significantly associated with college adjustment (p > 0.05). These results suggest that resilience partially mediated the associations between describing/acting with awareness and college adjustment, while fully mediating the associations between observing/non-reactivity and college adjustment.
Discussion
The current study represents the first attempt to test the relationships among mindfulness, resilience, and college adjustment, and the mediating role of resilience in explaining the association between mindfulness and college adjustment among first-year college students. It is also the first study to investigate these relationships in a Chinese population. The hypotheses were generally supported by the data.
We found that four dimensions of mindfulness (ie, observing, describing, acting with awareness, and nonreactivity) were positively associated with resilience, consistent with previous findings in college students and in
other populations. [43][44][45]48 Indeed, a mindful, accepting orientation toward negative experience, and the ability to respond to it without automatic, maladaptive reactions, helps prevent rumination and maladaptive coping, thereby promoting resilience following trauma. [43][44][45] In addition, the study found that the four dimensions of mindfulness were also positively associated with college adjustment, in line with previous studies in Western cultures. 25,29 Individuals who tend to be aware of their body sensations, thoughts, and emotions with less reactivity, and who have metacognitive insight into, understanding of, and acceptance of new challenges, are more likely to adjust to college life and a new environment. [12][13][14][15] Mindfulness may also help to reduce students' maladaptive coping and behavioral reactions to new challenges and stress. For example, a study found that mindfulness could mediate the relationship between antagonism and problem gambling in late adolescents. 56 Both mindfulness and college adjustment are multi-dimensional concepts, and their relationship may be explained by different mechanisms. Future work may test how different dimensions of mindfulness affect the learning, interpersonal, self, career choice, and livelihood self-management adaptivity, environmental general evaluation, and somatic-mental symptom aspects of college adjustment. Mindfulness-based interventions may provide a practical means of enhancing these characteristics of resilience and facilitating college adjustment. 57,58 Future studies should test the efficacy of such interventions in enhancing college adjustment in first-year college students. Unexpected results regarding the non-judgment dimension of mindfulness were found. First, we did not find significantly positive associations between non-judgment and resilience/college adjustment in our sample.
In addition, we found that the mean score of non-judging was relatively low in our sample, below the scale midpoint, while the other four mindfulness subscales had mean scores above their scale midpoints. Furthermore, non-judging was negatively correlated with observing and non-reactivity, inconsistent with previous results in college students. 59,60 Similarly, Baer et al (2004) also reported a significant negative correlation between non-judging and observing. 61 The authors explained that for individuals with no meditation experience, non-judgment with respect to thoughts and feelings in daily experience might not necessarily mean attending or non-reacting to the experience, but that people with meditation experience should be expected to show higher levels of all five dimensions of mindfulness and positive correlations among them. Non-judgment means letting go of the automatic judgments that arise in one's mind with every experience he/she has, which is a core component of mindfulness. 10,11 People without meditation experience may automatically avoid negative experiences (instead of accepting them or letting them go), and view such reactions as non-judgmental. Future studies should test the potential moderating effect of meditation experience. Qualitative studies are encouraged to clarify how the local sample understood the items and the concept of non-judgment and mindfulness as a whole in the local culture. Some studies have pointed out that Chinese individuals tend to be self-critical, 62 and thus may have low levels of self-acceptance and non-judgment. Comparison studies across cultures are warranted.
Resilience was positively associated with college adjustment. This adds evidence to the argument that resilience is an important trait that can assist individuals in quickly and adaptively coping with challenges. [34][35][36][37][38][39][40][41] Furthermore, resilience might be a mechanism underlying the mindfulness-adjustment relationship. The mediation effect of resilience highlights the importance of resilience in conveying the beneficial effects of mindfulness on college adjustment. Specifically, resilience could fully explain the associations between observing/non-reactivity and college adjustment; it partially mediated the describing/acting with awareness-adjustment associations. The theoretical underpinning for this result is that the observing, awareness without maladaptive reactivity, and acceptance aspects of mindfulness may facilitate the development of resilience characteristics, such as self-efficacy, zest, and psychological flexibility in the face of challenges; these characteristics of resilient individuals may in turn enhance adjustment to college life and other well-being outcomes (eg, life satisfaction). 34 The partial mediation suggests that there may be other mediators that can explain the describing/acting with awareness-adjustment associations. For example, it is possible that individuals with an ability to act with awareness may stay in touch with contextual cues and readily available sources of positive reinforcement within their environment, thus adapting to college successfully. Future work should include these potential mediators in the mindfulness-adjustment model. Such mediation research may help counselors and intervention developers to understand why particular persons may not be responding to a mindfulness-based intervention and allow them to tailor interventions to individual needs. For example, in order to enhance college adjustment, mindfulness practice may need to explicitly link its concepts to how to enhance one's resilience when facing challenges.
In addition, given the importance of resilience in facilitating college adjustment, other resilience-enhancement interventions, such as problem-solving training to improve stress-coping skills and interpersonal programs to promote social support, may also benefit these first-year students.
This study has some limitations. First, the cross-sectional design only provides evidence for associations among mindfulness, resilience, and college adjustment. Longitudinal studies are needed to establish the causal linkages among these variables. Second, the data relied on self-report measures. The non-reactivity subscale of the FFMQ showed relatively low reliability (Cronbach's α = 0.56), so caution is needed when interpreting the results. Future studies should validate the results with other well-validated scales of non-reactivity. In addition, this study only focused on how mindfulness and resilience may relate to college adjustment. Some potentially important factors of college adjustment, such as self-esteem and social support, were not included in the model and are worthy of future examination. Finally, the study utilized a convenience sample based on students in attendance during the period of data collection. Convenience sampling limits generalization to broader populations. Future studies need to replicate our findings in a random sample and in different cultural populations.
In conclusion, we found that mindfulness and resilience were positively correlated with college adjustment, and resilience significantly mediated the associations between four dimensions of mindfulness (ie, describing, acting with awareness, observing and non-reactivity) and college adjustment. The findings provide preliminary support for colleges to develop strategies that promote mindfulness and resilience among first-year college students to facilitate their college adaptation and enhance their wellbeing outcomes in various aspects of college life.
Funding
The study is funded by Research Fund for Young Scholars of Pingdingshan University (PXY-QNJJ-2018011).
Disclosure
The authors report no conflicts of interest in this work.
Two-Stage Segmentation Framework Based on Distance Transformation
With the rise of deep learning, segmenting lesions with deep networks to assist diagnosis has become an effective means of advancing clinical medical analysis. However, the partial volume effect of organ tissues leads to unclear and blurred ROI edges in medical images, making high-accuracy segmentation of lesions or organs challenging. In this paper, we hypothesize that the distance map obtained by performing a distance transformation on the ROI edge can be used as a weight map that makes the network pay more attention to learning the ROI edge region. To this end, we design a novel framework that flexibly embeds the distance map into a two-stage network to improve left atrium MRI segmentation performance. Furthermore, a series of distance map generation methods are proposed and studied to explore how best to express the weights that assist network learning. We conduct thorough experiments to verify the effectiveness of the proposed segmentation framework, and the results demonstrate that our hypothesis is feasible.
Introduction
The atrium is a component of the heart, one of the most important organs of humans, and its operation is closely related to human health. Atrial fibrillation is a common and persistent arrhythmia. When it occurs, the body's heartbeat will be fast and irregular, and the atria will not contract normally, which may cause thrombosis to block blood vessels and increase the risk of stroke and heart failure.
In order to confirm the location of the lesion or compare the structure of organs and tissues, medical image analysis usually requires a professional diagnostician to manually mark the target area to gain a deeper understanding of its anatomy. An important reason for the poor treatment of atrial fibrillation in existing studies is the lack of in-depth understanding of the anatomical structure of the atrium. Although the expert's manual segmentation of medical images can reconstruct the atrium for further research, this requires experts to have professional knowledge and rich work experience, and the cost of training such a doctor is huge. Therefore, it is of great significance to use intelligent computer methods to automatically segment the atrial structure in medical images to assist doctors in researching and treating atrial fibrillation.
Traditional image analysis mainly relies on manually designed features, which are then classified with machine learning algorithms. As an emerging branch of machine learning, deep learning transforms the original feature representation space into another space through layer-by-layer feature transformation, thereby making tasks such as recognition, classification, and segmentation easier [1][2][3][4]. Compared with traditional hand-crafted features, learning from large amounts of data can better characterize the rich information inherent in the data. The success of deep learning in the computer vision domain has also brought many inspirations to medical image research.
However, a noticeable defect in medical imaging is that the partial volume effect of organs or tissues can easily lead to unclear or blurry edges that restrict precise segmentation [10]. Given the significance of atrium segmentation, this paper explores how to use deep learning to strengthen the learning of features near the ROI edge to improve left atrium MRI segmentation performance.
In summary, our main contributions are as follows. Regarding the distance map as the learning weight of the edge region, we propose a new segmentation framework based on two-stage learning. Specifically: (1) we use a simple two-stage network as the basic framework and design a branch in its first stage to incorporate distance map information; (2) we design and discuss three methods for generating distance maps with the edge as the target, to effectively express the weights used to guide deep learning; (3) to further optimize network learning, Distdice Loss is proposed to emphasize the contribution of the distance map to network training; and (4) experimental results demonstrate that our network sets a new state-of-the-art performance on the left atrium MRI segmentation dataset. On the ASC dataset, our method achieves a Dice score of 94.10% and an Assd of 0.82 mm, improvements of 2.72% and 0.53 mm over the two-stage baseline, respectively.
Two-Stage Learning
In addition to using the end-to-end one-stage training approach to segment medical images, some scholars have made many attempts using the two-stage idea and achieved exciting results [11][12][13]. Two-stage learning usually produces a rough segmentation in the first stage and then feeds it into the second stage for continued training. It allows the deep neural network to learn features more effectively and achieve precise segmentation by providing training guidance or applying specific techniques in the first or second stage. Tang et al. [14] used a fully convolutional neural network to roughly segment the liver area in the first stage and cropped CT sub-images as the input of the second stage. Based on this, an edge enhancement network was proposed to segment the liver and tumor simultaneously and more accurately. Boot et al. [15] proposed a novel deep learning method based on a two-stage object detector that combines the enhanced Faster R-CNN and Libra R-CNN structures; a segmentation network placed on top of this structure accurately extracts and localizes various features (ie, edges, shapes). Jiang et al. [16] proposed a two-stage cascaded U-Net, using a variant of U-Net as the first-stage network to obtain a rough prediction; in the second stage, the network width is increased and two decoders are used to improve performance. The prediction map is refined in the second stage by cascading the preliminary prediction with the original input to exploit auto-context. These studies fully illustrate the potential of two-stage learning in the field of image segmentation. We follow these studies and exploit the advantages of two-stage learning to improve segmentation performance.
Distance Transformation
The idea of distance transformation has been widely used in many fields, including computer vision [17], image analysis [18], pattern recognition [19], and so on. The distance transformation algorithm can be used for shape matching and interpolation, skeleton extraction, separation of glued objects, target refinement, etc. Distance transformation is generally used to transform binary images [20]. In the image space, the pixels in a binary image can be divided into background pixels and target pixels. Take the case where the target pixel is 1 as an example: the pixel value of the target area is equal to 1 and the pixel value of the background area is equal to 0. The distance image generated by the distance transformation is a grayscale image rather than a binary image. The gray value represented by each pixel in this gray image is the distance from that pixel to the nearest background pixel.
Suppose there is a binary image with a connected area, which is the target area. Let P denote the target pixel set, Q the background pixel set, and D the distance map. The distance transformation can then be defined as:

D(p) = min_{q ∈ Q} d(p, q), p ∈ P (1)

First, the target pixels in the image are divided into external points, internal points, and isolated points. As shown in Figure 1, the left image is a schematic diagram of internal points and the right image is a schematic diagram of isolated points. Consider the center pixel and its four-neighborhood pixels: if the center pixel is a target pixel and its four-neighborhood pixels are also target pixels, the center pixel is an internal point; if the center pixel is a target pixel and its four neighboring pixels are all background pixels, the center pixel is an isolated point. Pixels in the target area that are neither internal points nor isolated points are boundary points. Next, the internal points and non-internal points in the binary image form the point sets C1 and C2, respectively. For each internal point in C1, the minimum distance to the pixels in C2 is calculated through the distance function, and the set of these minimum distances constitutes C3. Then the maximum value max and minimum value min in C3 are computed. Taking a two-dimensional RGB image as an example, the gray value N obtained by conversion of each internal point can be expressed as:

N = 255 × (C3(x, y) − min) / (max − min) (2)

Here, C3(x, y) represents the shortest distance from a pixel in C1 to the pixels in C2. The distance function used in this paper is the Euclidean distance, so the distance transformation is the Euclidean distance transformation. The Euclidean distance between two pixels (x1, y1) and (x2, y2) is calculated as:

d = √((x1 − x2)² + (y1 − y2)²) (3)
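As a concrete illustration of the Euclidean distance transformation defined above, here is a deliberately simple brute-force numpy sketch (for real volumes one would use an optimized routine such as scipy.ndimage.distance_transform_edt; the function name here is ours):

```python
import numpy as np

def euclidean_distance_transform(binary):
    """Brute-force Euclidean distance transform: for every target pixel
    (value 1), compute the distance to the nearest background pixel
    (value 0). Background pixels keep distance 0. Only suitable for
    small images; shown for clarity, not efficiency."""
    target = np.argwhere(binary == 1)
    background = np.argwhere(binary == 0)
    dist = np.zeros(binary.shape, dtype=float)
    for (r, c) in target:
        diffs = background - np.array([r, c])
        dist[r, c] = np.sqrt((diffs ** 2).sum(axis=1)).min()
    return dist

img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1  # a 3x3 target block
d = euclidean_distance_transform(img)
# the block center (2, 2) is 2 pixels from the nearest background pixel
```

The resulting grayscale map grows toward the interior of the target, exactly the behavior the text describes for D.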
Overall Network Architecture
In our method, the expected distance map after distance transformation with the edge of the left atrium as the target area is a grayscale image and also a weight map: the closer an area is to the edge of the left atrium, the larger its pixel value, and vice versa. Incorporating such a weight map into training can make the network pay more attention to the area near the edge of the left atrium. Moreover, the distance map is generated offline, adding no extra overhead to training. Figure 2 is a schematic diagram of the overall architecture of the method. In the figure, "Label" represents the real label, "Map label" represents the distance map generated based on the real label, "Input" represents the training image, "Distance map" and "Segmentation", respectively, represent the distance map and the rough segmentation result output by the network in the first stage, and "Output" represents the output of the second stage of the network. Network training is divided into two stages. In the first stage, a variant of the U-Net [21] structure is used as the training network (U-Net1). It adds a branch parallel to the original U-Net up-sampling path. For clarity, the down-sampling path in the original U-Net is named the image encoder and the up-sampling path the image decoder. The newly added up-sampling branch is named the distance decoder. The image encoder is composed of an initial convolutional layer and three basic modules. All convolutional kernels are of size 3, and the number of channels in each layer is 16, 32, 64, and 128 in sequence. The basic module of each layer consists of a convolution module and a down-sampling operation. Each convolution module is composed of two convolutional layers, with group normalization and a ReLU activation function inserted before each convolutional layer.
The decoding part of the network trained in the first stage has two branches: the image decoder and the distance decoder. These two decoders share the image encoder mentioned above. Before each up-sampling step in a decoder, the feature map is concatenated via a skip connection with the encoder feature of the same level. The image decoder and the distance decoder are also composed of three basic modules, each consisting of a convolution module and an up-sampling operation. The up-sampling operation of the image decoder uses transposed convolution, while that of the distance decoder uses trilinear interpolation. After concatenating the final up-sampling results of the image decoder and the distance decoder along the channel dimension, they are input into the second-stage network (U-Net2) for training. The configuration of the second-stage network is the same as that of the first-stage network, but the distance decoder is removed. The softmax function outputs the final prediction.
Distance Map Generation
The primary purpose of this method is to obtain a learning weight map that can assist in the segmentation of the left atrium edge. When the pixel value of the target area in the label is 1 and the pixel value of the background area is 0, the distance map generated according to the label should satisfy that the closer the target edge area is, the larger the pixel value, and vice versa. Corresponding to the pixels in the original image, the pixel values in the distance map represent the strength that the network needs to learn. In order to find a distance map that can effectively represent the learning intensity, this section discusses three different ways of generating distance maps and then validates the performance of the three methods in subsequent experiments.
The first method, named Method A, obtains the edge image of the left atrium and inverts it. Then, a distance map is generated by the Euclidean distance transformation. Next, the distance map is normalized to [0, 1], and the result of subtracting the distance map from 1 is used as the final distance map for supervision. The supervision distance map then satisfies the property that pixels close to the edge have larger values. Figure 3a shows the distance map generated by this method. The second method, called Method B, is derived from the paper [22]. First, we perform a distance transformation on the real label area and subtract the result from the maximum generated distance value. The absolute value of this result is multiplied with the original label to generate an error-compensation distance map. Second, we invert the original label and perform the same steps to calculate the distance map inside the left atrium. Third, we normalize the results generated in the first two steps separately and add them voxel-wise to obtain the final result. Figure 3b shows the distance map generated by this method.
In addition to the above two methods, we also explored Method C to generate distance maps. In the distance maps generated by Method A and Method B, the pixel value represents the distance to the target area. Assume an extreme situation: only the pixels in the target area are infinitely close to the target area and the other pixels are the opposite. Therefore, we tried a simple and extreme distance map: directly use the edge of the left atrium as a supervised label to guide the training of the distance decoder. Figure 3c shows the distance map generated by this method.
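Method A can be sketched as follows (a 2D numpy illustration under our reading of the steps; the paper operates on 3D volumes, and the helper name is ours):

```python
import numpy as np

def method_a_distance_map(label):
    """Build a Method A-style weight map: ~1 at the target edge,
    decaying toward 0 far from it. Brute-force, for small 2D labels."""
    # 1. Edge pixels: target pixels with at least one 4-neighbour background pixel
    #    (zero-padding treats the image border as background).
    pad = np.pad(label, 1)
    neigh_min = np.minimum.reduce([pad[:-2, 1:-1], pad[2:, 1:-1],
                                   pad[1:-1, :-2], pad[1:-1, 2:]])
    edge = (label == 1) & (neigh_min == 0)
    # 2. Distance from every pixel to the nearest edge pixel (brute force).
    edge_pts = np.argwhere(edge)
    coords = np.indices(label.shape).reshape(2, -1).T
    dists = np.sqrt(((coords[:, None, :] - edge_pts[None, :, :]) ** 2)
                    .sum(-1)).min(1)
    dist_map = dists.reshape(label.shape)
    # 3. Normalise to [0, 1] and invert, so edge pixels get weight 1.
    return 1.0 - dist_map / dist_map.max()
```

Pixels on the left atrium edge thus receive the maximum learning weight, and the weight decays smoothly on both sides of the edge.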
Loss Function
As shown in Figure 2, the proposed framework requires three loss functions: one for the first-stage image decoder branch, one for the distance decoder branch, and one for the second-stage training. The image decoder branch training is no different from regular segmentation, so its loss function is always set to Dice Loss [23]. We discuss two loss functions for the distance decoder branch: Mean Absolute Error Loss (MAE Loss) and Mean Square Error Loss (MSE Loss) [24]. MAE Loss is the sum of the absolute differences between the label and the prediction, while MSE Loss is the expectation of the squared difference between the label and the prediction.
Compared with general segmentation, the input of the second stage adds a distance map in addition to the original image. In order to emphasize the contribution of the distance map to training, we propose Distdice Loss, which uses the distance map to weight each pixel's contribution to Dice Loss:

L_MAE = (1/N) Σ_i |y_i − p_i| (4)

L_MSE = (1/N) Σ_i (y_i − p_i)² (5)

L_Distdice = 1 − 2 Σ_i d_i y_i p_i / (Σ_i d_i y_i + Σ_i d_i p_i) (6)

In Formulas (4)-(6), Y = {y_i} represents the labels, P = {p_i} represents the predictions output by the second-stage network, and D = {d_i} represents the distance map output by the first-stage distance decoder.
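A minimal numpy sketch of such a distance-weighted Dice loss (our reading of Distdice Loss; the paper's exact weighting may differ):

```python
import numpy as np

def distdice_loss(pred, label, dist_map, eps=1e-6):
    """Distance-weighted Dice loss: each voxel's contribution to the
    Dice overlap is scaled by the edge-distance weight map, so
    disagreements near the ROI edge are penalised more heavily."""
    w = np.asarray(dist_map, dtype=float)
    inter = (w * pred * label).sum()
    denom = (w * pred).sum() + (w * label).sum()
    return 1.0 - 2.0 * inter / (denom + eps)
```

With a uniform weight map this reduces to ordinary Dice Loss; a map that peaks at the edge shifts the training gradient toward edge voxels.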
Dataset
The Atrial Segmentation Challenge (ASC) 2018 dataset is a public dataset for left atrium segmentation. It comprises 154 cases of 3D MRI data with an original resolution of 0.625 × 0.625 × 0.625 mm³. The University of Utah (NIH/NIGMS Center for Integrative Biomedical Computing (CIBC)) provided most of the data, and the rest came from several other institutes. All clinical data were approved by institutional ethics boards. Each 3D MRI volume was acquired with a clinical whole-body MRI scanner, and each patient record contains the original MRI scan and the corresponding left atrium annotation, manually marked by medical experts. The original MRI is grayscale, and the labels are in binary format. The dataset is split into a training set of 100 patients and a test set of 54 patients. Since the official test set is not available, we randomly re-split the training set, selecting 80 MRI scans for training and the remaining 20 for evaluation.
Implementation Details
The experiments are based on the Linux Ubuntu 16.04 LTS system and the PyTorch deep learning framework. Each experiment uses an NVIDIA GeForce GTX 1080 Ti graphics card with 11 GB of memory. Before the experiments, the distance maps were generated according to the three methods introduced in Section 3.2. The evaluation metrics are the Dice similarity coefficient [25] and the average symmetric surface distance (Assd) [26].
All data are normalized, and a complete input image is randomly cropped according to the size of 232 × 232 × 32 and the batch size is set to 1. Gradient descent uses Adam optimizer, and the initial learning rate is 1 × 10 −4 . The update method of the learning rate is shown in Formula (7), where α 0 is the initial learning rate and α is the current learning rate. In addition, e in Formula (7) is the current epoch and N is the maximum training epoch, which is set to 110. As the training progresses, the learning rate will slowly decay until it reaches zero.
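The learning-rate schedule described here (decaying from α0 to zero over N = 110 epochs) can be sketched as follows; the polynomial exponent is our assumption, since Formula (7) is not reproduced in the extracted text:

```python
def poly_lr(epoch, base_lr=1e-4, max_epoch=110, power=0.9):
    """Polynomial decay from base_lr toward zero: reaches exactly 0
    at epoch == max_epoch. power=1.0 gives plain linear decay; the
    exact exponent used in the paper is assumed here."""
    return base_lr * (1.0 - epoch / max_epoch) ** power
```

In practice this would be called once per epoch to update the Adam optimizer's learning rate.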
Effectiveness of Two-Stage Learning
In order to explore the performance of the one-stage network and the two-stage network, we compared the two networks based on experiments. The two-stage network is similar to the network structure shown in Figure 2, but the distance decoder branch is deleted. In other words, only the output of the first-stage image encoder is used as the second-stage input. The one-stage network is a classic 3D U-Net network, and its structure is the same as the second stage of the two-stage network.
As shown in Table 1, the two-stage network achieves improvements of 5.04% in Dice score and 8.12 mm in Assd over the original 3D U-Net, which verifies the effectiveness of two-stage learning. In addition, Figure 4 shows a schematic diagram of partial segmentation results and the segmentation difference between the one-stage and two-stage methods. The units of Dice score and Assd in Figure 4 are % and mm, respectively. It can also be intuitively observed from the figure that the segmentation of the two-stage network is closer to the ground truth.

Table 1. Performance comparison between one-stage network and two-stage network.

Network      Dice (%)   Assd (mm)
One-stage    86.34      9.47
Two-stage    91.38      1.35
Effectiveness of Distance Map
The method designed in this paper rests on the idea that a distance map generated with the edge of the left atrium as the target carries greater weight in the area close to the edge, guiding the network to pay more attention to the edge and strengthening the learning of edge features. Finding a distance map that reasonably represents this edge-learning weight is therefore a key point. This section compares and analyzes the three distance maps introduced in Figure 3 from an experimental point of view.
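As a rough illustration of this idea (not the paper's exact Methods A-C, which are defined in Figure 3), the sketch below computes each pixel's distance to the nearest mask-edge pixel and converts it into a weight that peaks at the edge and decays away from it; the exponential weighting and the brute-force distance computation are simplifying assumptions for a small 2-D example.

```python
import numpy as np

def edge_distance_map(mask):
    """Weight map that is largest at the edge of a binary mask.

    Edge pixels are foreground pixels with at least one background
    4-neighbour. Brute force; assumes a non-empty mask. Real pipelines
    would use scipy.ndimage.distance_transform_edt instead.
    """
    mask = np.asarray(mask, dtype=bool)
    h, w = mask.shape
    # A foreground pixel is interior if all four 4-neighbours are foreground.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = mask & ~interior
    ey, ex = np.nonzero(edge)
    yy, xx = np.mgrid[0:h, 0:w]
    # Euclidean distance from every pixel to its nearest edge pixel.
    d = np.sqrt((yy[..., None] - ey) ** 2 + (xx[..., None] - ex) ** 2)
    dist = d.min(axis=-1)
    # Weight decays with distance from the edge (1.0 on the edge itself).
    return np.exp(-dist)
```

The returned map can then be multiplied into a voxel-wise loss so that misclassifications near the boundary are penalised more heavily.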
We compare the segmentation performance with different design choices and show the results in Table 2. The network structure used here has been described in detail in Section 3.1. From the table, we can observe that: (1) the three distance maps generated by Method A (Figure 3a), Method B (Figure 3b), and Method C (Figure 3c) bring 2.72%, 1.88%, and 1.98% improvements in average Dice score, and 0.53 mm, 0.43 mm, and 0.47 mm improvements in average Assd compared to the two-stage network, respectively; and (2) among the three methods, Method A has achieved the highest performance, which brings 0.84% and 0.74% improvements in average Dice score and 0.10 mm and 0.06 mm improvements in average Assd compared to Method B and Method C, respectively. Table 2. Performance comparison of distance map methods.
Network                  Dice (%)    Assd (mm)
Two-stage                91.38       1.35
Design with Figure 3a    94.10       0.82
Design with Figure 3b    93.26       0.92
Design with Figure 3c    93.36       0.88

As shown in Table 2, although Method C uses only the edge of the left atrium as the distance map, it still improves network performance, which confirms that the idea of the distance map module is sound and feasible. However, the information provided by Method C is quite limited and cannot supply the continuous, gradually varying information of a real distance map, so its performance is not optimal. In addition, the performance of Method B is lower than that of Method A and even slightly worse than that of Method C. The reason may be that, although the distance map generated by Method B provides continuous information, the intensities of pixels at the same distance inside and outside the left atrium edge are asymmetric, which may interfere with the network's learning. These results show that the distance map generated by Method A provides the most reasonable auxiliary information to help network learning. Figure 5 shows partial segmentation results; the units of Dice score and Assd in Figure 5 are % and mm, respectively.
Network Optimization
The method proposed in this paper is based on the network architecture shown in Figure 2, with the distance map generated by Method A introduced in Section 3.2. On this basis, this section explores the optimization of the method. Table 3 shows comparative results for different loss-function combinations during training. The loss function used by the image decoder in the first stage is always Dice Loss. In Table 3, MAE Loss and MSE Loss denote the optional loss functions for the first-stage distance decoder, while Dice Loss and Distdice Loss denote the optional loss functions for the second-stage training. As shown in Table 3, the highest segmentation accuracy is achieved when the distance decoder uses MSE Loss and the second-stage training uses Distdice Loss, with an average Dice score of 94.10% and an average Assd of 0.82 mm. Figure 6 shows the segmentation results obtained with the different loss-function combinations; the units of Dice score and Assd in Figure 6 are % and mm, respectively.
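As a sketch of the loss functions named above, the snippet below gives a standard soft Dice loss and a guessed distance-weighted variant ("Distdice") in NumPy; the paper's exact formulation of Distdice Loss is not reproduced in this excerpt, so the per-voxel weighting scheme here is an assumption.

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss (1 - Dice) on a probability map vs. a binary target."""
    inter = (prob * target).sum()
    return 1.0 - 2.0 * inter / (prob.sum() + target.sum() + eps)

def dist_weighted_dice_loss(prob, target, dist_weight, eps=1e-7):
    """Assumed form of 'Distdice Loss': Dice with per-voxel weights taken
    from the distance map, emphasising voxels near the organ edge.
    The paper's actual definition may differ."""
    w = dist_weight
    inter = (w * prob * target).sum()
    return 1.0 - 2.0 * inter / ((w * prob).sum() + (w * target).sum() + eps)
```

With uniform weights the weighted variant reduces to the plain soft Dice loss; a distance map like the one in Section 3.2 shifts the penalty toward boundary voxels.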
Comparison with Other Methods
Table 4 summarizes the quantitative results of our proposed method and several state-of-the-art methods, including LG-ER-MT [27], DUWM [28], MC-Net [29], V-Net [30], Bayesian V-Net, and AJSQnet [31]. Among them, LG-ER-MT, DUWM, and MC-Net use a semi-supervised strategy with uncertainty prediction, while V-Net, Bayesian V-Net, and AJSQnet are trained on all labeled data; Bayesian V-Net adapts the vanilla V-Net with a Bayesian network. MC-Net has the best Dice of 90.34% and Assd of 1.77 mm in the semi-supervised field. Among the fully supervised methods, AJSQnet has the best Dice of 91.14% and Bayesian V-Net has the best Assd of 1.52 mm. Notably, our proposed two-stage method guided by distance transformation outperforms MC-Net, AJSQnet, and Bayesian V-Net on both the Dice and Assd metrics, with scores of 94.10% and 0.82 mm. Our method brings 3.76%, 2.80%, and 2.96% improvements in average Dice score and 0.95 mm, 0.78 mm, and 0.70 mm improvements in average Assd compared to MC-Net, AJSQnet, and Bayesian V-Net, respectively. Table 4. Performance comparison of our method and compared methods.
Discussion and Conclusions
Medical images contain plentiful information and are therefore well suited to mining with deep learning. A crucial problem, however, is that the edges of organ tissues, which should provide valuable boundary information, become visually blurred due to objective factors such as the partial volume effect. We therefore aimed to conduct meaningful experiments on the edges of medical images, and one idea worth expanding is distance transformation. In addition, two-stage learning has shown advantages in improving network structure and guiding training, and has gradually become a widely followed approach to medical image segmentation.
Based on the above, we propose a two-stage segmentation method for medical images based on distance transformation. By using the edge of the left atrium as the target area for distance transformation, the obtained distance map can be used as a learning weight map to make the network pay more attention to the area near the edge of the organ. The training is divided into two stages in total. In the first stage, two branches are derived to predict the rough segmentation of the left atrium and the distance map, respectively, and the two are merged into the second stage of training to obtain accurate segmentation results. The experimental results proved that our idea is practical and effective.
There are still limitations in our study. On the one hand, our method has three loss functions: the first-stage image decoder training, the first-stage distance decoder training, and the second-stage training. This paper only discusses the loss functions of the first-stage distance decoder training and the second-stage training. In the future, we will focus on exploring the three loss functions for joint training and exploring the space of optimization models. On the other hand, this article only conducted experiments on left atrium MRI images. Other forms of medical images (such as X-ray, CT, etc.) are different from MRI images in terms of generation principles and image characteristics, which may affect the performance of the algorithm. The generalization ability of the algorithm in other organs and other forms of medical imaging needs further verification.
In conclusion, the method proposed in this paper exploits the fact that pixel values in a distance map obtained by distance transformation vary with the distance from the target area. It improves the accuracy of image segmentation through a two-stage training method, providing new ideas for exploring medical image segmentation.
Proteasomal Degradation and N-terminal Protease Resistance of the Codon 145 Mutant Prion Protein*
An amber mutation at codon 145 (Y145stop) of the prion protein gene results in a variant of an inherited human prion disease named Gerstmann-Sträussler-Scheinker syndrome. The characteristic features of this disorder include amyloid deposits of prion protein in cerebral parenchyma and vessels. We have studied the biosynthesis and processing of the prion protein containing the Y145stop mutation (PrP145) in transfected human neuroblastoma cells in an attempt to clarify the effect of the mutation on the metabolism of PrP145 and to gain insight into the underlying pathogenetic mechanism. Our results demonstrate that 1) a significant proportion of PrP145 is not processed post-translationally and retains the N-terminal signal peptide, 2) most PrP145 is degraded very rapidly by the proteasome-mediated pathway, 3) blockage of proteasomal degradation results in intracellular accumulation of PrP145, 4) most of the accumulated PrP145 is detergent-insoluble, and both the detergent-soluble and -insoluble fractions are resistant to mild proteinase K (PK) treatment, suggesting that PK resistance is not simply because of aggregation. The present study demonstrates for the first time that a mutant prion protein is degraded through the proteasomal pathway and acquires PK-resistance if degradation is impaired.
dation, and in some cases, can transmit the disease (PrP Res ) (1)(2)(3). Recent NMR studies on recombinant PrP have shown that the N-terminal domain (23-120 in mouse recombinant PrP) is highly flexible and has a random coil structure, whereas the C-terminal region (129-219) contains two short β-sheet structures and three α-helical domains (4,5). The major conformational change that causes PrP C to become the pathogenic and infectious PrP Res isoform is thought to involve refolding of the region between residues 90 and 112, which would lead to conversion of the region containing the two short β-sheet structures and of the first α-helix into a large β-sheet formation (3,6). However, the remaining C-terminal structures, including the two other α-helices and the disulfide bond, need to be preserved for PrP Res to be infectious (3,6).
The events leading to the conversion of PrP C to PrP Res are, at present, not fully understood. In the inherited prion diseases, which have only been associated with mutations in the PrP gene (PRNP), the mutation is believed to destabilize the mutant PrP (PrP M ), which then undergoes a spontaneous conformational change into the protease-resistant and pathogenic form (7)(8)(9). Twenty-three pathogenic mutations in PRNP have been reported to date, which are associated with three phenotypes: Creutzfeldt-Jakob disease, fatal familial insomnia, and Gerstmann-Sträussler-Scheinker disease (GSS) (8,9). Despite their congenital presence, all mutations cause diseases that become symptomatic in the adult or advanced age. Of the seven PRNP mutations associated with GSS, a chronic cerebellar ataxia and dementia characterized by the presence of prominent amyloid plaques containing internal PrP M fragments, all are missense mutations except for the mutation at codon 145 that replaces tyrosine (TAT) with a stop (TAG) codon (Y145stop) (10-12).
The 145 mutation is especially challenging because it results in the premature termination of protein synthesis and yields a truncated PrP (PrP 145 ) that lacks the C-terminal 146-231 amino acids including the glycosylphosphatidylinositol anchor, which links PrP to the cell surface, and the two sites for N-glycosylation, which are known to stabilize the PrP molecule (13,14). Thus, although PrP 145 includes the 90-112 segment where the major conformational changes take place and almost all of the ∼80-147 internal fragment that is found in PrP amyloid, it lacks the C-terminal region containing the two α-helices and the disulfide bond, which are required for the PrP C -PrP Res conversion (15,16). Moreover, most of the PrP 145 is likely to include the 23-90-residue N-terminal region of PrP, which has been shown to be superfluous for conversion to PrP Res (15)(16)(17). Yet, the Y145stop mutation is associated with a phenotype not basically different from that of the GSS subtypes associated with other mutations, which do not result in the truncation of PrP M (10). The only significant difference is the presence of numerous PrP-amyloid deposits in the cerebral vessels rather than in the brain parenchyma as in the other GSS subtypes (10-12).
Currently, there are no animal or cellular models of the Y145stop mutation. Expression of the truncated PrP 145 was not detected in transgenic animals and transfected neuroblastoma cells following deletion of the 144-231 region, which results in a PrP M identical in primary structure to that generated by the Y145stop mutation (17,18). To examine the effects of this mutation on the metabolism of PrP 145 , hence to gain a better understanding of the mechanisms involved in the pathogenesis of this GSS variant, we have transfected human neuroblastoma cells with PRNP constructs carrying the Y145stop mutation or wild type PRNP. We report, for the first time, that mutant PrP 145 is degraded through the proteasomal pathway. Inhibition of proteasomal degradation results in the accumulation of PrP 145 in intracellular compartments, including the endoplasmic reticulum (ER), the cis-medial-Golgi compartment, and the nucleus. Most of the accumulated PrP 145 is aggregated and partially resistant to mild proteinase K treatment. Protease-resistant PrP 145 is also present in the detergent-soluble fraction, suggesting that the PrP 145 protease resistance is not simply because of aggregation.
Materials, Cell Culture Conditions, and Production of Transfected Cell Lines-Opti-MEM, fetal bovine serum, penicillin/streptomycin, methionine- and cysteine-free Dulbecco's modified Eagle's medium, and Lipofectin were from Life Technologies Inc.; hygromycin B and lactacystin were from Calbiochem; Tran 35 S-label was from ICN; protein A-Sepharose was from Amersham Pharmacia Biotech; all other chemicals were purchased from Sigma. Transfected M17 cells expressing wild type or mutant prion protein (Y145amber) were generated as described (13,19). All cultures were maintained at 37°C in Opti-MEM supplemented with 5% fetal calf serum and penicillin-streptomycin in a humidified atmosphere containing 5% CO 2 . Cultures of transfected cells were supplemented with 500 µg/ml hygromycin. The following antibodies were used: anti-N, rabbit antiserum to a synthetic peptide corresponding to human PrP residues 23-40 (B. Ghetti, Indiana University); 3F4, a monoclonal antibody that recognizes an epitope on human PrP residues 109-112 (R. Kascsak, New York State Institute for Basic Research in Developmental Disabilities); anti-protein disulfide isomerase (M. Lamm, Case Western Reserve University); anti-calnexin rabbit immune serum (A. Helenius, Yale University); anti-α-mannosidase II (M. Farquhar, University of California, San Diego); anti-cathepsin-D (R. A. Nixon, Harvard University); and anti-Man-6-phosphate (L. Traub, Washington University School of Medicine).
In Vitro Transcription and Translation-The codon 145 mutant was originally produced in the phagemid pVZ1, which contains both T7 and SP6 bacteriophage RNA polymerase promoters, using oligonucleotide-directed mutagenesis. Using a BamHI-cleaved template, run-on capped RNA was produced using the Cap-scribe system (Roche Molecular Biochemicals) as recommended by the manufacturer. Transcripts were analyzed using ethidium bromide-stained gels to assess their purity. The in vitro transcription products were translated into protein using the Promega message-dependent rabbit reticulocyte system with or without added canine pancreatic microsomes (Promega) to cleave the signal peptide. To control for microsome activity, a transcript derived from β-lactamase was used. The conditions used in the translation reaction were essentially as described by the manufacturer.
Metabolic Labeling, Immunoprecipitation, and Western Blots-In a typical experiment, 9 × 10⁶ cells were used for each condition. Equal amounts of total protein were used from cells expressing either normal or mutant PrP. Immunoprecipitation and Western blots were performed essentially as described (19), with the following modifications. For pulse-chase experiments, cells were preincubated in the presence or absence of the indicated inhibitors (lactacystin 80 µM, N-acetyl-leucyl-leucyl-norleucinal (ALLN) 80 µM, brefeldin A 5 µg/ml) for 1 h before labeling with 0.166 mCi/ml of Tran 35 S-label (ICN) in methionine-cysteine-free Dulbecco's modified Eagle's medium with 5% dialyzed serum. Where indicated, appropriate inhibitors were included in the labeling and chase medium. At the indicated times, the medium was collected to check any secreted PrP 145 , and the cells were lysed in a buffer containing 0.5% Nonidet P-40, 0.5% deoxycholate, and 10 mM EDTA in Tris-buffered saline (20 mM Tris, 150 mM NaCl, pH 7.4) containing a mixture of protease inhibitors. Cell debris was cleared by centrifugation at 290 × g, and the clarified cell lysate and medium samples were subjected to immunoprecipitation with the appropriate antibodies in the presence of 1% bovine serum albumin and 0.1% N-lauryl sarcosine. Protein-antibody complexes bound to protein-A-Sepharose (Amersham Pharmacia Biotech) were washed four times with 0.5 ml of wash buffer (150 mM NaCl, 10 mM Tris-HCl, pH 7.8, 0.1% N-lauryl sarcosine, and 0.1 mM phenylmethylsulfonyl fluoride), and bound protein was eluted by boiling in sample buffer (Tris-HCl, pH 6.8, 3% SDS, 10% glycerol, 5% β-mercaptoethanol) and analyzed by SDS-PAGE fluorography. PrP bands were quantitated by PhosphorImager analysis (Molecular Dynamics).
For Western blots, proteins from cell lysates were precipitated with 5 volumes of cold methanol at Ϫ20°C, fractionated by SDS-PAGE, and electrophoretically transferred to Immobilon-P (Millipore) for 2.5 h at 70 volts at 4°C. Membranes containing transferred proteins were blocked in Tris-buffered saline containing 10% nonfat dry milk and 0.1% Tween 20 for 1 h at 37°C and probed with anti-PrP antibodies (anti-N diluted 1:4000, 3F4 diluted 1:50,000, or anti-C diluted 1:3000) dissolved in antibody dilution buffer (Tris-buffered saline, 1% normal goat serum, and 0.05% bovine serum albumin). Immunoreactive bands were detected with the appropriate secondary antibody conjugated to horseradish peroxidase (anti-rabbit diluted 1:3000, anti-mouse diluted 1:4000) and visualized on an autoradiographic film by ECL (Amersham Pharmacia Biotech). To quantitate the relative density of immunoreactive bands, exposed autoradiographic film was scanned at 42-mm resolution with a GE10 densitometer and quantitatively analyzed using Quantity One software (PDIG20, QS30).
For biotinylation of surface proteins, untreated, or cells treated with ALLN for 2 h were biotinylated with 0.2 mg/ml of sulfo-N-hydroxysuccinimide-biotin in PBS for 15 min on ice. Excess biotin was quenched with 50 mM glycine (in PBS), and after three more washes with PBS, the cells were lysed as above and subjected to immunoprecipitation with 3F4. The immunoprecipitated proteins were fractionated by SDS-PAGE and electroblotted to Immobilon-P, and the biotinylated PrP was detected by horseradish peroxidase-conjugated streptavidin and ECL.
Detection of Associated Chaperone Proteins with PrP 145 -Cells expressing PrP C or PrP 145 were radiolabeled for 2 h in the presence or absence of 80 M ALLN and lysed with a nondenaturing buffer containing 1% CHAPS or 2% Triton X-100 in the presence of a mixture of protease inhibitors. The lysate was subjected to immunoprecipitation with anti-KDEL (Stressgen), anti-calnexin, or anti-Grp 94 antibodies (Stressgen) and fractionated by SDS-PAGE to check co-immunoprecipitation of any of the PrP C or PrP 145 forms with the above ER chaperones. In a parallel experiment, the PrP was immunoprecipitated from the lysates, and the presence of any associated chaperones was evaluated by immunoblotting the electrophoretically transferred proteins by specific antibodies.
Detection of Ubiquitinated PrP-For detecting ubiquitinated PrP 145 , untreated and ALLN-treated PrP C -and PrP 145 -expressing cell lysates were immunoprecipitated with 3F4 as above. After fractionating on SDS-PAGE, the immunoprecipitated proteins were transblotted and probed with anti-ubiquitin antibodies 1510, 5-25 (Chemicon), or LB112 (J.Q. Trojanowski, University of Pennsylvania). Immunoreactive bands were detected with appropriate horseradish peroxidase-conjugated secondary antibodies. Alternatively, PrP C and PrP 145 cells were transfected with myc-tagged ubiquitin-expressing plasmids H 6 M-Ub, or H 6 M-Ub K48R (dominant negative ubiquitin mutant) (Ron R. Kopito, Stanford University). Steady state levels of PrP C and PrP 145 were determined in transient transfectants by Western blotting. To detect ubiquitinated PrP 145 , transient transfectants were radiolabeled in the presence of ALLN, immunoprecipitated with anti-myc antibody to isolate myc-tagged ubiquitin-conjugated PrP 145 , and analyzed by SDS-PAGE.
Confocal Immunofluorescence Microscopy-For immunofluorescent staining of PrP, transfected cells expressing PrP C or PrP 145 were grown on poly-D-lysine-coated glass coverslips overnight and treated with lactacystin in complete medium for the indicated times. Control cells (0 h) received only medium. For washout experiments, cells treated with lactacystin for 4 h were washed with complete medium, and incubated in fresh medium for 0-4 h. At the indicated times, cells were rinsed with PBS and fixed in 3% paraformaldehyde for 30 min at room temperature. Free aldehyde groups were quenched with 50 mM NH 4 Cl (in PBS), and the cells were permeabilized with PBS containing 0.1% Triton X-100, 0.1% SDS, 2.5 mM MgCl 2 , and 5 mM KCl for 5 min. Nonspecific sites were blocked with PBS containing 10% goat serum and 0.2% bovine serum albumin, followed by 0.2% gelatin in PBS. Cells were then incubated with anti-PrP antibody (3F4, diluted 1:25), followed by fluorescein-conjugated secondary antibody for 35-40 min each. For subcellular immunolocalization, subsequent incubations were done with anti-calnexin, anti-α-mannosidase II, anti-cathepsin-D, or anti-mannose-6-phosphate antibodies, respectively, followed by rhodamine-conjugated secondary antibodies. The cells were rinsed in PBS, mounted in gel-mount (Biomeda Corp., Foster City, CA), and observed using a laser scanning confocal microscope (Bio-Rad). To compare the fluorescence intensity of lactacystin-treated and washout samples accurately, all experimental conditions were kept constant, including antibody titer, total cell confluence, laser strength, magnification, and basal instrument settings. A single 0.5-µm optical section was photographed in each case.
Assay of "Detergent Insolubility" and Proteinase K Resistance-Aggregation, partial resistance to proteinase K (PK) digestion, and insolubility in nonionic detergents are the hallmarks of PrP Res . However, when applied in this context, detergent insolubility implies insolubility of PrP in a buffer containing 0.5% Nonidet P-40 and 0.5% sodium deoxycholate when centrifuged at 100,000 × g for 1 h. Untreated cells and cells treated with lactacystin for 2 h were lysed at 4°C in 1 ml of Tris buffer containing 0.5% each of Nonidet P-40 and sodium deoxycholate. After a brief centrifugation at 290 × g, cell debris (P 1 ) was discarded, and the supernatant (S 1 ) was divided into two equal parts. One part was set aside, and the other was centrifuged at 100,000 × g for 1 h at 4°C to obtain a high speed supernatant (S 2 ) and pellet fraction (P 2 ). The pellet fraction P 2 was resuspended in the same volume of buffer as S 2 and sonicated. Equal aliquots of S 1 (before ultracentrifugation), S 2 , and P 2 fractions were immunoblotted with 3F4. The S 2 and P 2 fractions of lactacystin-treated cells were also assayed for resistance to mild PK digestion as described previously (19).
To check if aggregated PrP 145 is immunoprecipitated efficiently, lysates from cells incubated in the presence or absence of 80 µM ALLN for 2 h were subjected to ultracentrifugation as described above to separate the S 1 , S 2 , and P 2 fractions. Each fraction was immunoprecipitated with 3F4, and the remaining supernatant was methanol-precipitated and immunoblotted with 3F4 to detect any nonimmunoprecipitated PrP remaining in the supernatant. The 15.5-kDa band was consistently observed in the supernatant of ALLN-treated cells. However, if the lysates were first boiled in the presence of 0.5% SDS, diluted 5-fold, and then subjected to immunoprecipitation, no PrP was detected in the supernatant, suggesting that the inefficient immunoprecipitation of the 15.5-kDa form is because of aggregation.
PrP 145 Is Expressed as Two Isoforms at Steady State-On blots immunostained with the anti-PrP antibody 3F4 (to residues 109-112), the normal or cellular prion protein (PrP C ) migrates as three bands corresponding to the unglycosylated (U), intermediate (I), and highly glycosylated (H) forms of 27, 29-30, and ∼33-42 kDa, respectively (Fig. 1A). In contrast, PrP 145 migrates as two bands, a lower band of 14 kDa (PrP14), the expected molecular mass for PrP with the 145 amber mutation, and an upper, more prominent band of 15.5 kDa (PrP15.5) that accounts for 66% of the total PrP 145 (Fig. 1A). As expected, both bands are readily detected with the 3F4 and anti-N antibodies but not with anti-C-terminal antibody (data not shown). Immunoprecipitation of PrP 145 (which includes both PrP14 and PrP15.5) from cells radiolabeled with [ 35 S]methionine and cysteine for 2 h with the 3F4 antibody shows a similar pattern (Fig. 1B), except that PrP15.5 accounts for only 25% of the total PrP 145 . The cell-associated pool of PrP 145 at steady state is approximately 9 times smaller than that of PrP C (Fig. 1A). Moreover, <1% of the PrP 145 is recovered from the medium, suggesting that the low expression of PrP 145 is not because of secretion.
Three experiments were performed to determine whether PrP15.5 represents a form of PrP 145 with an uncleaved N-terminal signal peptide of 22 amino acids: 1) metabolic labeling with [ 35 S]cysteine, a residue that is present only in the signal peptide; 2) cell-free translation with radiolabeled methionine and cysteine or only cysteine in the absence or presence of microsomes to obtain a translation product with or without the signal peptide, respectively; 3) a short pulse of 30 s with 35 S-labeled methionine and cysteine followed by a chase to investigate whether PrP15.5 converts to the PrP14 form. Metabolic labeling of cells with cysteine for 2 h shows only PrP15.5, whereas both forms are detected when cells are radiolabeled with methionine and cysteine (data not shown). After cell-free translation with radiolabeled methionine and cysteine in the absence of microsomes, only PrP15.5 is retrieved, most of which is converted to the PrP14 form when the microsomes are added co-translationally (Fig. 1C). Translation in the presence of radiolabeled cysteine yields only PrP15.5, which disappears when microsomes are added, confirming that PrP15.5 includes the N-terminal signal peptide that is lost on addition of microsomes (Fig. 1C). The bands obtained in vitro co-migrate with PrP15.5 and PrP14 from radiolabeled cells (Fig. 1C). The short pulse-chase experiment shows that at the end of a 30-s pulse, PrP15.5 is predominant and accounts for 55% of the total PrP 145 (Fig. 1D). The ratio between the two forms is reversed at the end of 2.5 min (Fig. 1D), whereas the total PrP 145 remains unchanged during this time period. This finding suggests that PrP15.5 is converted into the PrP14 form. Both forms then decrease rapidly during the chase, so that only 16% of total PrP 145 remains after 30 min, most of which consists of PrP14 (Fig. 1D).
Immunofluorescence analysis by double immunostaining with anti-PrP and an antibody to calnexin, an endoplasmic reticulum (ER)-specific protein, shows that although PrP C is mostly distributed at the cell surface and in the Golgi region (Fig. 1E, panel 1) (19), a very small amount of PrP 145 is detected in an intracellular compartment, and it co-stains with the Golgi marker α-mannosidase II (Fig. 1E, panel 3). No ER localization is observed with the ER marker calnexin (Fig. 1E, panel 2), probably because the small quantity of PrP 145 that escapes degradation transits through the ER very rapidly. No PrP 145 is detected on the plasma membrane either by immunostaining (Fig. 1E, panels 2 and 3) or cell surface biotinylation (data not shown), excluding the possibility that PrP15.5 is inserted into the cell membrane through the signal peptide.
Together, these results show that 1) PrP 145 is unstable and is not detected in the culture medium in significant amounts, 2) PrP15.5 represents a PrP 145 form with an uncleaved N-terminal signal peptide even though it is apparently translocated efficiently into the ER (see below), 3) no PrP 145 is detected at the cell surface. The very low expression of PrP 145 compared with PrP C suggests that PrP 145 turns over very rapidly. The under-representation of PrP15.5 after radiolabeling and immunoprecipitation (25%) when compared with the steady state level by Western blot analysis (66%) (Figs. 1, A versus B) raises the possibility that this form is not immunoprecipitated efficiently because of a change in its conformation, as previously observed with the PrP M Q217R (19) (see below).
PrP 145 Is Rapidly Degraded in a Pre-Golgi Compartment by the Proteasomal Pathway-To determine whether PrP 145 is degraded by the lysosomes or in a pre-Golgi compartment, we carried out pulse-chase analysis either in the presence of various lysosomal inhibitors (leupeptin, ammonium chloride, or chloroquine) or by blocking transport beyond the ER-cis-Golgi compartment by incubating cells at 15°C or treating them with brefeldin A (20). Neither inhibition of lysosomal activity (data not shown) nor blocking vesicular transport at low temperature (Fig. 2A) or with brefeldin A (Fig. 2B) prevented the degradation of PrP 145 , although, as expected, the rate of degradation was slower at 15°C than at 37°C (see Fig. 1D). A similar analysis at 37°C shows that although PrP C matures into various glycoforms and is stable, with 70% of the total present after 2 h of chase, most of the PrP 145 disappears rapidly, and only 12% remains after 2 h (data not shown). Taken together, these results demonstrate that PrP 145 turns over in a pre-Golgi compartment and is not degraded through the lysosomal pathway.
Because both membrane and secretory proteins can be degraded through the proteasomal pathway (21-23), we evaluated this possibility by treating cells expressing PrP 145 with the proteasomal inhibitor lactacystin or ALLN during pulse-chase experiments. In cells treated with lactacystin or ALLN, 48 and 36%, respectively, of PrP 145 remain after a 2-h chase, as compared with 12% in untreated cells (Fig. 2C; p < 3 × 10⁻⁴ at 1 h*, and p < 6 × 10⁻⁵ at 2 h**; n = 3). PrP14 accounts for most of the protected PrP 145 (Fig. 2C). There is no change in the kinetics of turnover of PrP C in the presence of ALLN or lactacystin under the same experimental conditions (data not shown). To check if the PrP 145 that accumulates intracellularly following proteasomal inhibition is ubiquitinated, ALLN-treated cells were immunoprecipitated with 3F4, and the immunoprecipitates were immunoblotted with a panel of anti-ubiquitin antibodies to detect any ubiquitinated PrP forms. Alternately, PrP 145 cells were transfected with normal or a dominant negative mutant of ubiquitin followed by immunoprecipitation with 3F4. No ubiquitinated PrP 145 was detected in either case, and the co-expression of mutant ubiquitin did not stabilize PrP 145 (data not shown).
[Fig. 1 legend, continued: … 1-6). E, immunofluorescent staining of PrP with 3F4 (green) and of calnexin, an ER marker (red), shows that PrP C is localized to the cell surface and the Golgi region (panel 1). Similar staining of PrP 145 -expressing cells shows a small amount of PrP 145 at steady state, mainly co-localizing with the Golgi marker α-mannosidase II (red; panel 3). No co-localization of PrP 145 is seen with the ER marker calnexin (red; panel 2).]

To evaluate if PrP 145 rescued from proteasomal degradation is secreted, cells expressing PrP 145 were radiolabeled for 2 h in the continuous presence of lactacystin or ALLN, and the cell lysate and culture medium were subjected to immunoprecipitation with 3F4. Only PrP15.5 was secreted, although the intracellular pool of PrP14 exceeded that of PrP15.5 (Fig. 2D). No PrP 145 form was detected in the medium if cells were labeled in the presence of brefeldin A (data not shown).
Accumulated PrP 145 Is Aggregated and Less Sensitive to Protease Digestion-To investigate whether the PrP 145 that accumulates in the absence of proteasomal degradation is aggregated, untreated cells and cells treated with lactacystin for 2 h were lysed in a buffer containing the nonionic detergents Nonidet P-40 and sodium deoxycholate, conventionally used for detecting aggregated PrP. After pelleting the cell debris (P 1) by a low speed centrifugation, the supernatant (S 1) was centrifuged at 100,000 × g for 1 h to separate a soluble (S 2) and an insoluble (P 2) fraction. In untreated cells, almost all of the PrP 145 is recovered in the high speed supernatant or soluble (S 2) fraction, and a small amount of PrP15.5 is detected in the P 2 fraction (Fig. 3A, upper panel). In lactacystin-treated cells, the amount of PrP 145 is at least 4-fold higher, and a significant proportion of it is present in the pellet (P 2) as an insoluble fraction. This fraction contains mostly PrP15.5 (Fig. 3A, lower panel) and increases in amount with extended chase time (data not shown). The insolubility of PrP15.5 and its inefficient immunoprecipitation, especially in the presence of proteasomal inhibitors, was further confirmed when we subjected untreated and lactacystin-treated cell lysates to immunoprecipitation and immunoblotted the remaining supernatant with the same antibody. In untreated cells, virtually no PrP14 or 15.5 could be detected in the supernatant, whereas in lactacystin- and ALLN-treated cells, significant amounts of mostly PrP15.5 remained in the supernatant with nonimmunoprecipitated proteins. However, when the lysates were boiled in the presence of SDS before immunoprecipitation, no PrP15.5 remained in the supernatant in treated or untreated cells (data not shown). These findings may explain the preferential detection of PrP15.5 in immunoblots as compared with pulse-labeled immunoprecipitates (see Figs. 1, A and B) if the presence of the signal peptide induces a change in its conformation that leads to inefficient immunoprecipitation. None of the PrP 145 forms (PrP14 or 15.5) were associated with any of the ER chaperones, Grp78, Grp94, or calnexin (data not shown).
The sensitivity to PK of the PrP 145 that accumulates following treatment with lactacystin was tested on immunoblots of lactacystin-treated cells by digesting the S 2 and P 2 fractions with 3.3 μg/ml of PK for 1 to 10 min. PrP C (data not shown) and untreated PrP 145 were degraded completely after 1 min of PK treatment (Fig. 3B). In contrast, a 1-10-min treatment with PK of the S 2 fraction obtained from mutant cells exposed to lactacystin resulted in the presence of two additional bands of ~14 and ~12 kDa. This finding suggests that both PrP14 and 15.5 comprise a 1.5-2-kDa smaller fragment, which is weakly resistant to PK treatment (Fig. 3B). However, the gel mobility of the "PK-resistant" PrP 145 forms makes the interpretation of the data difficult, and the possibility that the two PK-resistant bands represent the intact and truncated forms of PrP14, whereas PrP15.5 is completely digested, cannot be excluded. Paradoxically, PrP15.5 present in the insoluble P 2 fraction is less PK-resistant, because it is almost completely digested after a 5-min treatment (Fig. 3B). Nonetheless, after a 1-min treatment, a small amount of the 1.5-2-kDa lower band is still detectable, supporting the conclusion that PrP15.5 also generates a smaller PK-resistant fragment (Fig. 3B). The ~14-kDa and ~12-kDa PK-resistant PrP 145 forms were detected not only by the 3F4 antibody but also by an antibody to the PrP N terminus (antiserum to residues 23-40 of PrP) (data not shown). Thus, the PK digestion must occur at the PrP 145 C terminus, beyond residue 112, which is recognized by 3F4, and not around residue 90, as observed in the full-length PrP Res. Overall, only ~1% of the PrP 145 remains after 10 min of PK digestion (Fig. 3B).
PrP 145 Rescued from Proteasomal Degradation Is Found in the Nucleus-The intracellular distribution of PrP 145 accumulated after inhibition of proteasomal function was analyzed by immunofluorescence of PrP 145 in cells treated with lactacystin for 0-4 h (Fig. 4). In untreated cells, PrP 145 immunoreactivity was restricted to the Golgi region, as previously shown (Fig. 4A, 0 h, left panel; Fig. 1E, panel 3). After incubation with lactacystin for 1-4 h, the reactivity was detected in punctate and vesicular structures, although it remained most intense in the Golgi region (Fig. 4A, 1-4 h, left panels). Strikingly, a diffuse PrP staining of the nucleus sparing the nucleolus was also detected with longer treatment (Fig. 4A, 2-4 h, left panels). Replacement of the lactacystin-containing medium with normal medium followed by a chase for 4 h gradually restored the original PrP 145 distribution (Fig. 4A, right panels, 0-4 h). PrP immunoreactivity first decreased in punctate and vesicular structures (Fig. 4A, 0-1 h, right panels), then in the nucleus (Fig. 4A, 1-2 h, right panels), and finally was detected only in the Golgi region, as in the untreated cells expressing PrP 145 (Fig. 4A, 4 h, right panel). The vesicular staining co-localized with calnexin, an ER marker (Fig. 4B, right panel), whereas the reactivity adjacent to the nucleus co-localized with the Golgi marker α-mannosidase II (Fig. 4B, left panel). Nuclear immunoreactivity of PrP 145 in lactacystin-treated cells co-localized with the nuclear marker DAPI and was also observed after immunostaining of deplasticized sections of plastic-embedded cell preparations as well as with immunofluorescence of isolated nuclear fractions (data not shown). No immunoreactivity was observed in the nuclei of PrP C -expressing or nontransfected M17 neuroblastoma cells treated with lactacystin or ALLN for 4 h (data not shown).
The accumulated PrP 145 did not co-localize with the late endosomal-lysosomal marker cathepsin D or with the lysosomal marker mannose 6-phosphate (data not shown). Thus, upon inhibition of proteasomal degradation, PrP 145 accumulates in membrane-bound compartments, including the ER, Golgi apparatus, and the nucleus.

DISCUSSION

The Y145stop mutation in the human prion protein gene, PRNP, is associated with a GSS variant of prion disease. Previous attempts to generate a model of this GSS variant in transgenic mice and transfected cells have failed because no expression of the mutant PrP 145 could be detected in these models (18,24). We now demonstrate that in a transfected cell model, PrP 145 is expressed in two truncated forms, one of which conserves the signal peptide. Both forms are unstable and are rapidly degraded through the proteasomal pathway. However, both accumulate in significant quantities in intracellular compartments and become aggregated and weakly protease-resistant when proteasomal degradation is impaired. These findings may resolve the dilemma posed by the previous models. They also widen the spectrum of pathogenetic mechanisms that may be involved in prion diseases and provide novel avenues of investigation toward the understanding of this puzzling GSS variant.

[Fig. 3 legend: A, untreated cells and cells pretreated with lactacystin for 2 h were lysed with nonionic detergents, and cell debris was pelleted by a low speed centrifugation. The low speed supernatant or detergent-soluble (S 1) fraction was centrifuged at 100,000 × g to obtain high speed detergent-soluble (S 2) and insoluble (P 2) fractions. Immunoblotting of an aliquot of each fraction with 3F4 shows that most of the PrP 145 from untreated cells partitions in the S 2 fraction (lane 2), and very little is recovered in the pellet P 2 (lane 3). In lactacystin-treated cells, in contrast, most of the PrP 145 fractionates in the insoluble P 2 fraction, where PrP15.5 accounts for virtually all the PrP 145 (lane 3) (the amount of PrP 145 in lactacystin-treated lysates is at least 4-fold higher than in untreated cells; an underexposed fluorogram is shown for the sake of clarity). B, immunoblotting of PrP 145 present in the soluble (S 2) and insoluble (P 2) fractions obtained from cells exposed to lactacystin for 2 h reveals the presence of PK-resistant PrP 145 in both fractions after treatment with 3.3 μg/ml of proteinase K. In S 2, both PrP14 and 15.5 forms are present and appear to migrate ~1.5-2 kDa faster after PK treatment, perhaps because of the generation of a shorter PK-resistant fragment (lanes 3-6) (see text for alternative explanations). A significant amount of the PrP15.5 in the P 2 fraction also resists 1 min of PK treatment (lanes 7-10). In contrast, untreated PrP 145 is completely degraded after a 1-min treatment (lanes 1-2) (untreated PrP 145 in lanes 1 and 2 is overexposed to emphasize its complete digestion by PK in 1 min).]
The Uncleaved Signal Peptide Predisposes PrP 145 to Aggregation-Inefficient cleavage of the N-terminal signal peptide because of naturally occurring mutations within the signal has been shown to be pathogenic in various conditions, but it is unprecedented in prion diseases (25-28). This "proform" appears to accumulate intracellularly and tends to aggregate more readily than the signal-cleaved form. The inefficient immunoprecipitation of this form is probably because of a change in conformation of its "soluble" pool, in addition to the formation of detergent-insoluble aggregates (see below). Thus, under normal experimental conditions, the amount of the signal peptide-containing PrP 145 recovered in immunoblots is more than four times the amount recovered after immunoprecipitation. After lactacystin treatment, the signal peptide-containing PrP 145 accounts for almost all of the detergent-insoluble and weakly protease-resistant aggregates that accumulate intracellularly. With continuous lactacystin treatment, both the signal-uncleaved and -cleaved forms are secreted into the medium through a brefeldin A-sensitive pathway, although the signal-uncleaved form comprises the major secreted form. The preferential detection of the latter in the medium could be because of its greater stability. Thus, both forms translocate into the ER lumen, and the signal-uncleaved form is not inserted in the lipid bilayer through the signal peptide. This conclusion is consistent with the lack of detectable PrP 145 on the cell surface by either immunofluorescence or biotinylation. None of the PrP 145 forms were found to be bound to any of the major ER-specific chaperones, either in the presence or absence of lactacystin.
PrP 145 Is Degraded by the Proteasome-The PrP 145 has a half-life of ~10 min and at steady state is nine times less abundant than PrP C. Therefore, it is by far the most unstable of all the forms of mutant PrP we have examined to date (13,19). The lack of all major post-translational modifications and the presence of the signal peptide, both of which target PrP 145 for rapid degradation and aggregation, easily explain the marked instability of PrP 145.
The turnover of both PrP 145 forms that persists at 15°C and in the presence of brefeldin A points to a pre-Golgi site of degradation. Following inhibition of proteasomal degradation, PrP 145 accumulates primarily in the ER, the Golgi, and the nucleus, but apparently not in the late endosomes or lysosomes. The precise site of PrP 145 proteasomal degradation has not been established in this study. PrP 145 might be degraded by proteasomes on the cytosolic face of the ER membrane, as has been reported for the T-cell receptor α-chain and other ER luminal and secretory proteins (29-36). It would then accumulate upstream in the secretory pathway in the ER and Golgi and also diffuse to the nucleus from the cytosol when the degradation is blocked. Recently, cytosolic accumulation of two transmembrane proteins, presenilin-1 and the cystic fibrosis transmembrane regulator, has been described upon inhibition of proteasomal function (37). We do not observe significant accumulation of PrP 145 in the cytosol after proteasomal inhibition. Instead, PrP 145 seems to be specifically targeted to the nucleus by a nuclear localization signal that becomes functional when the carboxyl end of the protein is truncated at residue 145. One type of nuclear localization sequence comprises one or more clusters of basic amino acid residues, which, however, lack a tight consensus sequence (38). Interestingly, the N terminus of PrP has a cluster of amino acids (KKRPKP) similar to the SV-40 large T antigen nuclear localization signal (PKKKRKV). Studies are ongoing to establish if this sequence functions as a cryptic nuclear localization signal.
We did not detect ubiquitinated PrP 145 even though our data prove conclusively that the proteasomal pathway degrades PrP 145 . Whether PrP 145 is degraded without ubiquitination as observed for other proteins (29) or is tagged by some other ubiquitin-like protein (39) remains to be determined.
Applicability of the Present Model to the 145 GSS Human Disease-The PrP 145 forms present several unusual characteristics when they are compared with the other mutant PrPs expressed in transfected cells (13,14,19). First, they are highly unstable and are for the most part rapidly degraded. Second, when degradation is impaired, they become partially resistant to protease digestion. Third, paradoxically, the PrP 145 is more protease-resistant in the dispersed than in the aggregated form, and, in this form, the protease-resistant fragments include the intact N terminus, whereas a 1.5-2-kDa sequence located at the C terminus remains protease-sensitive. This contrasts with the data from the other transfected cell models of inherited prion diseases, in which the mutant PrP spontaneously becomes protease-resistant, the protease-resistant fraction is present only in the aggregated form, and the protease-resistant core includes residues ~90-231 (14,19,41). The present findings argue that the N-terminal region of PrP, including the signal peptide, may also aggregate and become weakly resistant to proteases. Moreover, they indicate that because PrP 145 is also resistant to protease treatment in the dispersed state, the protease resistance is not exclusively because of aggregation but also reflects other mechanisms such as, for example, the presence of a protective ligand or the adoption of a β-sheet conformation in a monomeric or oligomeric state. A recent report supports this assumption (47). It has also been recently shown that the PrP 121-231 C-terminal segment can adopt a β-sheet conformation at acidic pH, independently of other segments (40).
The salient histopathological features of the human Y145stop variant of GSS are the widespread PrP amyloid deposits in vessels and parenchyma of the brain and the presence of intraneuronal fibrillary inclusions called neurofibrillary tangles, whereas spongiform degeneration is lacking (7,11,12). The amyloid deposits have been shown to immunostain with antibodies raised to the N-terminal 25 amino acids of PrP, indicating the presence of N-terminal fragment(s) of PrP 145 (11). In addition, an N- and C-terminus-truncated ~7.5-kDa PrP fragment has been detected in monomeric and oligomeric forms, which by epitope mapping is believed to include amino acids 90-147 (12). A ~7.5-kDa PrP fragment has also been isolated from the amyloid deposits of other GSS variants associated with PRNP point mutations, and it has been found to be the only PK-resistant PrP form recovered from the brain when spongiform degeneration is absent (8,10,42). In the P102L GSS variant, the ~7.5-kDa fragment has been shown to span residues 78-82 to residues 147-150 by sequence and mass spectrophotometric analyses (42). Therefore, the 7.5-kDa fragment present in the amyloid deposits of the Y145stop GSS variant is likely to include residues ~80 to ~145 and to be the only major PK-resistant PrP fragment present in the brain parenchyma of subjects affected by this disease.
It is not immediately evident how the findings of the human disease and the present findings can be reconciled. We did not find a 7.5-kDa PrP fragment or any fragment of smaller size. Data obtained from cell models of inherited prion diseases have been compared with those obtained from brains affected by the corresponding disease in a previous study (13). It was found that although the cell model does not form a PK-resistant PrP comparable with that of the disease, it reproduces the metabolic changes occurring in the mutant PrP in the brain (13). Therefore, it is reasonable to postulate that the PrP14 and 15.5 forms are expressed in the brain of subjects carrying the Y145stop PRNP mutation and are largely cleared through the proteasomal pathway. Effective proteasomal degradation of PrP 145, along with the presence of the PrP C encoded by the normal allele, may prevent the expression of disease until adult age. However, a decrease in proteasomal function with advanced age, or the low but continuous intracellular accumulation and secretion of the aggregated and weakly PK-resistant PrP 145, would result in the formation of the highly amyloidogenic ~7.5-kDa PrP fragment and the formation of amyloid deposits. Future studies of Y145stop GSS variant-affected brains should search for the presence and distribution of the PrP14 and 15.5 forms. It would be important to determine whether PrP 145 is present in aggregated and weakly PK-resistant form and whether some of it is located inside the nucleus. These findings would provide indirect evidence that proteasomal degradation is impaired in the human disease.
Other neurodegenerative diseases have also been shown to involve the proteasome. Recently, it has been shown that the proteasomal system participates in the metabolism of the amyloid β peptide, the main component of the amyloid accumulating in Alzheimer's disease (33). The presence of PrP 145 in the nucleus also provides an interesting analogy with a group of inherited neurodegenerative diseases, which include Huntington's chorea and forms of cerebellar ataxia. In each of these diseases, the presence of polyglutamine repeat expansions leads the mutated protein to adopt a β-sheet structure and to form insoluble, ubiquitinated aggregates in the nucleus (43-46), consistent with proteasomal involvement in these diseases as well. Studies aimed at evaluating changes in proteasomal function with advancing age will provide important information regarding the role of this organelle in the pathogenesis of these disorders and potential therapeutic approaches.
The Development of Videoconference-Based Support for People Living With Rare Dementias and Their Carers: Protocol for a 3-Phase Support Group Evaluation
Background: People living with rarer dementias face considerable difficulty accessing tailored information, advice, and peer and professional support. Web-based meeting platforms offer a critical opportunity to connect with others through shared lived experiences, even if they are geographically dispersed, particularly during the COVID-19 pandemic.

Objective: We aim to develop facilitated videoconferencing support groups (VSGs) tailored to people living with or caring for someone with familial or sporadic frontotemporal dementia or young-onset Alzheimer disease, primary progressive aphasia, posterior cortical atrophy, or Lewy body dementia. This paper describes the development, coproduction, field testing, and evaluation plan for these groups.

Methods: We describe a 3-phase approach to development. First, information and knowledge were gathered as part of a coproduction process with members of the Rare Dementia Support service. This information, together with literature searches and consultation with experts by experience, clinicians, and academics, shaped the design of the VSGs and session themes. Second, field testing involved 154 Rare Dementia Support members (people living with dementia and carers) participating in 2 rounds of facilitated sessions across 7 themes (health and social care professionals, advance care planning, independence and identity, grief and loss, empowering your identity, couples, and hope and dementia). Third, a detailed evaluation plan for future rounds of VSGs was developed.

Results: The development of the small groups program yielded content and structure for 9 themed VSGs (the 7 piloted themes plus a later stages program and creativity club for implementation in rounds 3 and beyond) to be delivered over 4 to 8 sessions.
The evaluation plan incorporated a range of quantitative (attendance, demographics, and geography; pre-post well-being ratings and surveys; psycholinguistic analysis of conversation; facial emotion recognition; facilitator ratings; and economic analysis of program delivery) and qualitative (content and thematic analysis) approaches. Pilot data from round 2 groups on the pre-post 3-word surveys indicated an increase in the emotional valence of words selected after the sessions.

Conclusions: The involvement of people with lived experience of a rare dementia was critical to the design, development, and delivery of the small virtual support group program, and evaluation of this program will yield convergent data about the impact of tailored support delivered to geographically dispersed communities. This is the first study to design and plan an evaluation of VSGs specifically for people affected by rare dementias, including both people living with a rare dementia and their carers, and the outcome of the evaluation will be hugely beneficial in shaping specific and targeted support, which is often lacking in this population.
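The pre-post 3-word survey result described above (greater emotional valence of words chosen after a session) can be operationalized by scoring each chosen word against an affective lexicon. The sketch below is illustrative only: the valence table is hypothetical, and a real analysis would use validated affective word norms rather than these invented ratings.

```python
# Hypothetical valence lexicon on a 1 (negative) to 9 (positive) scale;
# a real evaluation would use validated affective word norms instead.
VALENCE = {
    "anxious": 2.5, "isolated": 2.0, "tired": 3.0,
    "hopeful": 7.5, "connected": 7.0, "understood": 7.8,
}

def mean_valence(words):
    """Average valence of the words found in the lexicon (None if none match)."""
    scores = [VALENCE[w.lower()] for w in words if w.lower() in VALENCE]
    return sum(scores) / len(scores) if scores else None

def pre_post_shift(pre_words, post_words):
    """Positive value = the words chosen after a session were, on average,
    more emotionally positive than those chosen before."""
    return mean_valence(post_words) - mean_valence(pre_words)

print(round(pre_post_shift(["anxious", "isolated", "tired"],
                           ["hopeful", "connected", "understood"]), 2))  # → 4.93
```

Aggregating such shifts across members and sessions, with appropriate paired statistics, would yield the kind of pre-post valence comparison the evaluation plan envisages.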
Introduction
Background

Support groups for people caring for or living with dementia (collectively, people affected by dementia) may be characterized as peer support groups facilitated by individuals with lived experience of dementia, educational or psychotherapeutic groups facilitated by professionals, or a combination of these components [1]. Support groups for people affected by dementia have been shown to reduce depression and carer burden as well as improve self-esteem, well-being, and quality of life [2,3]. These benefits have been shown in both in-person and virtual support group contexts [4-8], with multicomponent groups, involving input from peers and professionals and a focus on psychoeducational and emotional support alongside experience-led guidance, being most effective [9].
These groups tend to focus on providing support for the most common forms of dementia such as typical Alzheimer disease and vascular dementia. Although living with or caring for someone with any form of dementia can be a very isolating and lonely experience [6,10], this is a particular concern for people affected by rarer forms of dementia [11-13]. Rare dementia is an umbrella term referring to atypical, inherited, and young-onset conditions, often characterized by progressive difficulties with cognitive symptoms other than memory [14,15]. As individuals diagnosed with rarer dementias tend to be younger than those diagnosed with typical Alzheimer disease and vascular dementia, they have additional concerns including work, mortgages, and young families [16-19]. Rarer dementias also vary with regard to symptom presentation and impact on caregivers [20-22]. In addition, given the wide geographical spread, there is often a lack of tailored and specific local support available to people affected by these dementias, which is exacerbated by the often long journey to receiving a diagnosis [23].
In March 2020, the United Kingdom entered a nationwide lockdown owing to the COVID-19 pandemic caused by SARS-CoV-2. The restrictions resulting from the COVID-19 pandemic lockdown led to an increase in loneliness and isolation for people affected by dementia [24-26] and severely affected those with rarer forms of dementia [27]. For example, individuals with behavioral variant frontotemporal dementia often experience behavioral disinhibition and compulsions, making it difficult to follow government guidelines on social distancing, whereas those with semantic dementia find it difficult to understand the restrictions in place because of difficulties with comprehension. Those diagnosed with posterior cortical atrophy often rely on touch to help with navigation because of difficulties with vision and spatial awareness, which increases the likelihood of spreading the virus [28]. In response to the pandemic, there was a rapid implementation of a number of telehealth and tele-support services [25,29,30]. These services increase accessibility for people with long-term health conditions and those in rural areas, who would usually have to travel long distances to access health and social care services [31]. Online support groups may also provide an additional benefit for people affected by rarer dementias, even as restrictions lift, as due to the typically younger age of onset, carers and those with the diagnosis may still be working, potentially alongside managing childcare needs, and may therefore benefit from the additional flexibility that these groups provide [8,18,19].
Videoconferencing support groups (VSGs) are a type of web-based support [5,8,32]. VSGs have been found to have similar treatment outcomes when compared with in-person groups [32] and have also been shown to improve dementia caregivers' mental health outcomes [7], including a decrease in burden and an increase in perceived social support and positive perceptions of caregiving [33]. In addition, Banbury et al [4], who implemented a 6-session videoconferencing peer support group for isolated carers of people with dementia, found that some group participants were more comfortable with videoconferencing than in-person groups, as they were in their own homes during the meeting, which felt like a safe space to share.
There is a lack of research on the benefits of VSGs specifically for people with rare dementias. In one of the few studies conducted with caregivers of people living with frontotemporal dementia, O'Connell et al [8] found benefits of VSGs, particularly in terms of being with caregivers who were in a similar situation to themselves with regard to age, relationship with the person with dementia, and their spouse's diagnosis. Importantly, this group did not take place in the individuals' homes but required group members to travel to their local health center to access the group, and group members reported difficulties in social connectivity because of the small screen sizes. Further research is needed to develop virtual support groups that can meet the unique needs of this population in an effective and sustainable way.
Objectives
Considering the barriers to support group access for people with rarer dementias and the additional need for support during the COVID-19 pandemic, we aimed to develop a series of facilitated VSGs tailored to the needs of people affected by rare dementias. Using the study by Hales and Fossey [34] as a guide, along with principles related to user-centered design [35], we describe the development, coproduction, field testing, and evaluation plan for these groups.
Information, Knowledge Gathering, and Coproduction
Coproduction was an iterative process of discussions with experts by experience, clinicians, and academics to develop VSGs within the context of Rare Dementia Support (RDS). RDS is an organization that supports people affected by posterior cortical atrophy, familial Alzheimer disease, familial frontotemporal dementia, frontotemporal dementia, primary progressive aphasia, young-onset Alzheimer disease, and dementia with Lewy bodies. Before this process, RDS provision comprised large in-person support groups for each disease type, held 3 to 4 times per year in London; smaller regional support groups; and one-to-one information, guidance, and advice. The in-person support groups (n=40 to >120) included a mixture of professional and member talks, question and answer, and smaller breakout group sessions (approximately, n=20) covering a range of topics, including postdiagnostic support, communication strategies, legal matters, regional support, activities, and caring in the later stages. It had been a long-held service development plan for RDS to offer smaller group sessions in addition to the larger support group meetings, enabling RDS to address topics in further depth and in a more intimate setting than was possible within the larger support group context because of limited time and large group size.
Consulting Academic Literature
The development of VSG topics continues to be informed by the literature, along with consultation with academic and clinical experts. The Mental Health America Support Group Facilitation Guide [36] was adapted and used as a framework to guide facilitators throughout the online group discussion process. Given the lack of research into support services for rarer dementias described earlier, we focused on young-onset dementia (YOD) for the literature search because of the higher prevalence of rarer dementias in this population [14,37]. A recent study found that one-third of individuals with YOD receive their diagnosis via the memory clinic, a quarter via neurological services, and less than a fifth via young-onset specialist services [38]. The follow-up support that these individuals receive is incredibly variable, with nearly a third of individuals diagnosed with YOD reporting that they do not have any routine follow-up appointments [38]. Therefore, it is important that those affected by these conditions are educated on how to access health and social care services that may be able to provide additional postdiagnostic and ongoing support, as they may not otherwise be linked with these services. It is also important that people diagnosed with YOD have their own dedicated space [39], where they can share openly with their peers. In addition, people affected by YOD frequently experience feelings of predeath grief, which is associated with perceived stress, depression, and carer burden [40-42]. Predeath grief can include feelings of loss resulting from ongoing changes in roles, relationships, and identities [43,44] and may be of particular concern for individuals affected by YOD because of changes in areas such as employment, finances, and child support [17].
Consulting With Experts by Experience
Building on the initial experience of RDS and understanding the literature, RDS staff had a number of conversations with people affected by rare dementias in the early stages of the national lockdown. These conversations increased the awareness of the lack of support services available and highlighted the need for a support group specifically for people living with a rare dementia. They also highlighted specific themes to be covered (eg, educating RDS members on advance care planning and the role of health and social care professionals), which was particularly important given the lack of literature on support needs related to rare dementias.
Consulting With Clinicians
The subsequent small-group discussion themes were developed based on these discussions and integrated with the views of RDS (consultant) neurologists (n=7) who had years of experience interacting with people affected by rare dementias both individually and at support group meetings.
Consulting With Academic Experts
Clinical academics who worked on the RDS Impact Project [45] had a number of discussions to integrate theoretical perspectives with earlier discussions and consequent adaptations to the planned groups. These discussions with experts by experience, academics, and clinicians led to a decision on initial content and themes for the field-testing round of groups discussed below (Table 1), as well as a model describing flow through the groups and intended inputs and outputs ( Figure 1).
Field Testing
The groups, based on the coproduced topics and process model detailed earlier, were subjected to field testing in 2 rounds between May 2020 and September 2020. The aims of field testing were to deliver a service during a pandemic while also making refinements to topics and understanding optimal processes for round 3 of groups where more formal evaluation is proposed.
Recruitment
RDS members (approximately n=2000) received an email with the dates and a brief description of the VSG topics. They were asked to respond with their preferred groups as soon as possible, and recruitment for the groups was closed when the groups reached capacity (round 1: 96 spaces available; round 2: 132 spaces available).
Participants
In total, 154 unique RDS members (175 registrations; round 1: n=76, 43%; round 2: n=99, 57%) registered across the first 2 rounds of the VSGs, with 21 members registering for both rounds. These members included people living with a diagnosis of a rare dementia (n=27, 15%) and people caring, now or in the past, for someone with such a diagnosis (n=127, 73%).
Inclusion criteria were participants (1) aged ≥18 years, (2) with the capacity to consent to take part in the VSGs, and (3) with access to a device and internet connection that would enable VSG participation.
Ethics Approval and Consent
The VSGs are part of the larger RDS Impact Project, conducted under the University College London Research Ethics Committee (8545/004: RDS Impact Study). See the study by Brotherhood et al [45] for details on the ethical procedures and consent.
Delivery
The RDS VSGs were conducted virtually using the GoToMeeting (GTM; LogMeIn Inc) videoconferencing platform. The group facilitators determined the number of sessions, with some groups held as one-off sessions, some as a series of 3 to 4 sessions, and other groups as ongoing. The format of the groups was also at the discretion of the facilitators, with some small groups being primarily experience-led and others being topic-focused or information-based.
Learnings From Field Testing
Initially, small online group discussions were offered as one-off, 1-hour-long information and experience-led sessions for RDS members. Sessions were subsequently increased to 1.5 hours to enable sufficient time for experience sharing alongside planned content. In addition, in the independence and identity, grief and loss, and empowering your identity groups, the facilitators felt that the connections made between group members and the scope of what could be covered within the group warranted a series of 4 sessions, rather than a one-off session. Once the 4 sessions were complete, participants were given the opportunity to continue to meet on a fortnightly or monthly basis, with light touch facilitation. Alongside these ongoing sessions, members who were connected with each other during the sessions were offered the option of one-to-one buddying. Further adaptations were made with regard to the timing of the sessions in the empowering your identity group for people living with dementia, which initially took place in the afternoon; however, the group members found it very difficult to concentrate at that time, so the facilitators moved the subsequent sessions to midmorning.
Facilitators also met as a group to provide feedback on challenges arising from VSG facilitation and strategies for managing them. Shared learnings from this discussion included the challenges of facilitating online groups, such as creating a safe and comfortable environment when not meeting face-to-face and assessing risk in a virtual context. There were a number of downsides to the virtual aspect of the groups, such as the anxiety of managing technological issues within the sessions, lack of in-person debriefing and reflection with colleagues, difficulty in trying to read attendees' body language and nonverbal communication, and absence of boundaries between the facilitators' work and home spaces. The benefits of using technology included the depth of conversations and insights shared by group members, indicating that they felt comfortable being open in the context and safety of their own homes, and the ability to privately address any of the individual participant's concerns through the use of the chat function.
Participants
In round 3, participation in small groups will be offered to the wider RDS membership, with an invitation window of 6 weeks.
On the basis of the recruitment for rounds 1 and 2, we estimated 50 to 100 participants per round.
Group Size
In round 3, group size will be reduced from 12 to 10 members per group, in accordance with facilitator feedback and carer preferences in previous research [8].
Topics
The third round of small groups will include the topics from the first 2 rounds of groups, as well as a later stages program for carers and a creativity club for people living with dementia (Table 1).
Sampling Approach
All nonprofessional RDS members will receive an invitation via email to express their interest in the third round of small groups.
Data Handling
All VSGs, apart from the couples' sessions in round 2 and the creativity club in round 3, will be recorded and automatically transcribed via the GTM platform. The recordings and transcriptions are stored securely on the University College London Data Safe Haven, which is only accessed by RDS Impact Study researchers. Once uploaded, the original files are deleted from GTM. As the accuracy of automated transcription is variable, meeting recordings will also be outsourced for professional transcription.
Overview
The first 2 rounds of VSGs were offered as a rapid service response to the pandemic, without a research plan to assess their impact. On the basis of field testing conducted during round 2 ( Figure 2), a set of quantitative and qualitative investigations was designed to describe and measure the impact of round 3. Specific hypotheses for the quantitative investigations include (1) that session participation in small groups will be associated with increased in-the-moment well-being and (2) that participation, both within and across sessions, will be associated with enhanced social connectedness. Qualitative analysis will explore questions related to understanding how peer support groups work (eg, How are different types of social support delivered in peer support groups?) and specific questions related to the different themes of the groups (eg, In what ways are carers' senses of identity impacted when supporting someone living with a rare dementia?).
Figure 2.
Pilot data from the "3 words" evaluation collected from participants (N=35) in wave 2 small-group conversations. Box and whisker plots show emotional valence, arousal, and dominance plus concreteness ratings of pre- and postsession words (N=301 words; 153 presession words and 148 postsession words). Linear mixed effects models were fitted for each linguistic score using STATA, including participant as a random effect (to account for nonindependence of words produced by each participant) and session theme as a fixed factor, and checking for normality of residuals (independent residual errors for the participants), heteroscedasticity, and linearity of the model. Pre-post differences were significant for all linguistic variables except concreteness, with valence and dominance scores increasing while arousal decreased (coefficients with P value, 95% CIs: valence coefficient=0.12
Attendance, Demographics, and Geography
Participation in the groups will be evaluated against a range of factors, including gender, age, relationship to people living with dementia, diagnosis of person living with dementia, severity (judged by RDS facilitators using the Global Deterioration Scale) [46], frequency of RDS service use, and location. Key research questions include whether small online groups facilitate access to support for geographically dispersed members and later-stage carers, relative to standard face-to-face services typically delivered from our central London base.
Participant Ratings and Surveys
At the beginning and end of each session, participants will be asked to click a link to a web survey and (1) choose 3 words that describe how they are feeling in the moment and (2) complete the Canterbury Wellbeing Scale [47]. This will involve moving a web-based visual analog slider on a scale from 0 to 100 to indicate how they are feeling in the moment along the established 5 dimensions of happiness, wellness, interest, optimism, and confidence, including additional scales for stress and social connectedness. Three-word responses will be evaluated for emotional content using normative data for emotional valence, arousal, and dominance plus concreteness [48,49] and the Linguistic Inquiry and Word Count (LIWC) automated classification of positive and negative emotion words [50].
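As a simplified illustration of how "3 words" responses might be scored against published affective norms, the sketch below averages valence, arousal, and dominance over the words of a response. The norm values, word list, and function names are invented stand-ins for illustration; they are not the study's materials or the actual published norms.

```python
# Hypothetical sketch: scoring a participant's "3 words" response against a
# valence/arousal/dominance norms lookup. Norm values below are invented
# placeholders, not the published norms cited in the text.
from statistics import mean

NORMS = {  # word -> (valence, arousal, dominance), toy values on a 1-9 scale
    "anxious":   (2.5, 6.8, 3.0),
    "tired":     (3.2, 2.9, 3.5),
    "hopeful":   (7.4, 4.8, 6.2),
    "calm":      (6.9, 2.1, 6.0),
    "connected": (7.1, 4.0, 6.4),
}

def score_response(words):
    """Average each dimension over the words that have norm entries."""
    rated = [NORMS[w.lower()] for w in words if w.lower() in NORMS]
    if not rated:
        return None
    return tuple(round(mean(dim), 2) for dim in zip(*rated))

pre = score_response(["anxious", "tired", "worried"])    # "worried" has no toy norm
post = score_response(["hopeful", "calm", "connected"])
print(pre, post)
```

Words without a norm entry are simply skipped, mirroring the practical need to handle out-of-vocabulary responses before any pre-post comparison.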
Linguistics
Online support groups also offer rich multimodal data on participants' thoughts, emotions, and behaviors, which can deepen our understanding of support group processes. Because sessions are voice-recorded, linguistic analysis tools such as LIWC [50] have been used to explore the relationship between dropout rates and the level of emotional support received within sessions [51], as well as differences in the manner of expression between online and face-to-face support groups in young adults living with cancer [52].
The recorded conversations will be transcribed and evaluated cross-sectionally (within sessions) and longitudinally (across sessions) for evidence of thematic development and group cohesion. Specific features of interest include (1) participation, in terms of frequency, quantity, and equality of verbal contributions by individual participants and facilitators; (2) emotional content, such as the emotional valence, arousal and dominance of language used (quantified using norms in the study by Hollis and Westbury [48] and LIWC software, as per the "3 words analysis"); and (3) prevalence of specific features, such as incomplete propositions, hedges, signs of agreement (eg, in terms of use of names and grunts), and changes in pronoun (eg, "I" to "we") and tense use (eg, past vs present vs future orientated utterances).
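One of the participation features listed above, equality of verbal contributions, can be summarized with a simple evenness index over per-speaker word counts. The metric choice (normalized Shannon evenness, where 1.0 means perfectly equal contributions) and the speaker labels and counts below are illustrative assumptions, not necessarily the project's own measure.

```python
# Illustrative sketch: participation equality from per-speaker word counts,
# scored with normalized Shannon evenness (1.0 = perfectly equal shares).
import math
from collections import Counter

def participation_evenness(word_counts):
    total = sum(word_counts.values())
    probs = [c / total for c in word_counts.values() if c > 0]
    if len(probs) <= 1:
        return 0.0  # a single speaker: no sharing of the floor
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(probs))

balanced = Counter({"facilitator": 250, "member_a": 240, "member_b": 260})
skewed   = Counter({"facilitator": 700, "member_a": 40,  "member_b": 10})
print(round(participation_evenness(balanced), 3),
      round(participation_evenness(skewed), 3))
```

A session dominated by one speaker scores close to 0, while evenly shared talk scores close to 1, giving a single number that can be tracked across sessions.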
FaceReader
Facial video data have been analyzed with facial emotion recognition software such as FaceReader (version 7.0; Noldus Information Technology) to track changes in emotional regulation as markers of therapeutic effectiveness in individuals with borderline personality disorder [53]. Although not previously used with RDS groups, the exploratory use of these tools may yield novel metrics of group behavior, which through automation can be applied efficiently to future evaluations of the impact of online support groups.
Video recordings of the online meetings will be processed using FaceReader software, which classifies expressions into the categories of happy, sad, angry, surprised, scared, disgusted, and neutral and generates measures of the intensity of each individual emotion, valence (intensity of "happy" minus the intensity of the negative expression with the highest intensity), and arousal (based on the activation of 20 facial action units).
In addition to quantifying overall differences in valence and arousal within and across sessions as the VSG conversations develop, FaceReader data will be used to explore (1) the relationship between the valence of facial emotion and verbal content of current conversation and (2) the cohesion of the group, taking the statistical variance of valence and arousal metrics among individual listeners to the current speaker in the group as proxies of cohesion.
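The two derived metrics described above can be expressed compactly: valence as the "happy" intensity minus the strongest negative-expression intensity, and a cohesion proxy as the variance of valence across the listeners to the current speaker. The intensity values below are invented, and treating sad, angry, scared, and disgusted as the negative expressions is an assumption about FaceReader's categories.

```python
# Hedged sketch of the FaceReader-derived metrics described in the text.
# Intensities are invented; "negative" category membership is an assumption.
from statistics import pvariance

NEGATIVE = ("sad", "angry", "scared", "disgusted")

def valence(intensities):
    """Happy intensity minus the strongest negative intensity."""
    return intensities["happy"] - max(intensities[e] for e in NEGATIVE)

listeners = [
    {"happy": 0.6, "sad": 0.1, "angry": 0.0, "scared": 0.1, "disgusted": 0.0},
    {"happy": 0.5, "sad": 0.2, "angry": 0.1, "scared": 0.0, "disgusted": 0.0},
    {"happy": 0.7, "sad": 0.1, "angry": 0.0, "scared": 0.0, "disgusted": 0.1},
]

vals = [valence(x) for x in listeners]
cohesion_proxy = pvariance(vals)  # low variance across listeners ~ high cohesion
print([round(v, 2) for v in vals], round(cohesion_proxy, 4))
```
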
Facilitator Ratings: Curative Climate Instrument
To complement participant ratings and observational linguistic and video data, facilitators of each group will be asked after each session to complete an adapted version of the Curative Climate Instrument [54] examining the processes of catharsis, cohesion, and insight within a small group. Originally designed as a measure for individual participants, facilitators will be asked to rate 13 of 14 statements reframed from a facilitator perspective (eg, "People were responsive to each other and made contact with each other through language, gesture, etc.") for both levels of agreement (on a Likert scale from 1=strongly disagree to 7=strongly agree) and confidence in their agreement rating (from 1=extremely unconfident to 7=extremely confident).
Phase 3: Qualitative Analysis Plan
The qualitative data (ie, transcriptions of the VSGs) will be analyzed to explore questions related to peer support groups overall, as well as questions that are specific to the different themes of each group, including those in the subsequent sections.
Qualitative Content Analysis
A directed content analysis [55] of all VSGs will be conducted to explore the question "How is social support delivered in peer support groups?," with a coding framework based on the social support categories by Cutrona and Suhr [56] and the Social Support Behavior Code by Suhr et al [57]. Instrumental, tangible, emotional, and esteem support types will be coded for; examples of each can be seen in Table 2, including the following:

• Suggestions and advice (instrumental support): "I'm just going to put (helpful organization's phone number) in the chat and if you (facilitator) could send it to people."

• Direct task (tangible support): "The biggest problem I see is that we've all got the same problem that, unfortunately, we're watching loved ones deteriorate. We know that there isn't going to be any difference other than a slow deterioration, and we just adjust every time something happens."

• Understanding and empathy (emotional support): "I think it is bureaucracy and you have done well to get through it and stand firm...I think you have been brilliant doing that."
Thematic Analysis
Thematic analysis [58] of facilitator peer support sessions will explore the benefits and challenges of offering small peer support group discussions in a web-based format for people affected by rarer dementias to consolidate learning and develop recommendations for other facilitators embarking on similar initiatives. Benefits may relate to increased accessibility for those who would be unable to travel to in-person meetings because of their location, difficulties using public transport, or caring commitments. Challenges such as those relating to technology (eg, managing background noise and feedback), emotional impact (eg, lack of opportunities for informal conversations over coffee before and after meetings), and other factors will also be explored.
Thematic analysis will also be used to explore questions specific to the themes on which the small-group discussions were focused. For example, for the "Hope and dementia" group theme, how is the sense of hope challenged and sustained for people caring for a loved one with a diagnosis of a rare dementia? For the "independence and identity" group theme, how are the individual and shared identities of those living with a rare dementia and their carers impacted by the diagnosis?
Economic Analysis
An exploratory analysis of the costs of developing the small online groups will be conducted using a microcosting approach from a societal perspective. We will microcost the development and delivery of the intervention to provide a clear representation of the costs of establishing these small groups.
Intervention costs will be requested from the groups. This list is not exhaustive but must include (1) cost of setting up the groups, (2) annual overheads, (3) cost of group materials (print costs and design costs), (4) salary costs for group and program facilitators, (5) training costs for facilitators, and (6) support costs for facilitators and volunteers.
We will also ask the groups to estimate any inputs, financial, time, or otherwise, so that these costs will also be accounted for.
Planned Analysis
We will take guidance from the UK Treasury Office Magenta Book in planning and designing the economic evaluation of this program [59].
This could include the following:

• Cost-benefit analysis, using any participant questionnaire results to calculate quality-adjusted life years alongside the costs and benefits, to calculate the net present value of the program [60]. Cost-benefit analysis undertaken from a societal perspective allows the costs and benefits to be considered separately to derive a net monetary benefit or a ratio of benefits to costs, and considers all the costs and benefits to society [61]. Using deterministic sensitivity analysis, we will adjust the values for individual and multiple parameters and vary the discount rate from 0% to 3.5% to generate a range of scenarios, as recommended by the UK Treasury Green Book [62].
• Return on investment analysis, which would estimate, pound for pound, the return on investment from providing the VSGs.
• Cost consequence analysis, which works well with return on investment to quantify outcomes without traditional market values [61]. It allows outcomes to be quantified and related to costs for each separate course of action where the final outcomes may be multidimensional; that is, it considers the range of relevant costs and outcomes, both anticipated and unanticipated.
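As a toy illustration of the discounting arithmetic behind the sensitivity analysis above, the sketch below computes the net present value of a stream of yearly net benefits at discount rates from 0% to 3.5%. The cash flows are invented placeholders, not program costs.

```python
# Hypothetical sketch: net present value under the Green Book's 0%-3.5%
# discount-rate range. Cash flows are invented for illustration only.
def npv(net_benefits, rate):
    """Discount year-indexed net benefits (year 0 undiscounted) to present value."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

flows = [-10_000, 4_000, 4_000, 4_000, 4_000]  # setup cost, then yearly net benefits
for rate in (0.0, 0.015, 0.035):
    print(f"rate={rate:.1%}  NPV={npv(flows, rate):,.0f}")
```

Higher discount rates shrink the value of future benefits, so the NPV at 3.5% is lower than at 0%, which is exactly the spread the deterministic sensitivity analysis is designed to expose.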
The health economics component of this study will be written in accordance with the Consolidated Health Economic Evaluation Reporting Standards statement [63].
Principal Predictions
We anticipate that connecting people affected by rare dementias together and providing a virtual space where they can share their experiences with others who are affected by the same conditions will be reflected by increased in-the-moment well-being outcomes, as well as an enhancement of social connectedness. We hope to develop a greater understanding of what works and does not in group peer support to improve service delivery for those affected by rare dementias.
To develop support tailored to the specific needs of people affected by rare dementias, it is vital that individuals with lived experience are involved in the design process. RDS members have played a significant role in the development of previous research projects [64,65], and their valuable input continues to shape both the RDS service and associated research, including the development of VSGs.
Although these groups were primarily developed in response to increased support needs during the COVID-19 pandemic, the recordings, pre-and postsession well-being measures, and participant feedback also provided an incredibly rich data source. We described a comprehensive evaluation plan using the data collected from these groups, the outcome of which will be used to further adapt and refine our web-based support provision. In addition, we hope that the learnings from these evaluations will be beneficial for other services that provide support for people with dementia as well as other health conditions, especially those that are rare and where individuals are geographically dispersed.
There are limitations with regard to this study, as the rapid setup of the groups meant that there was limited opportunity for comprehensive feedback and refinement of the group format and delivery ahead of the initial rounds. However, VSG development has also led to rapid learning for RDS staff regarding how to facilitate groups in a web-based context. The evaluation of these groups will further enable the development of a comprehensive framework for the delivery of online support for people affected by rare dementias across one-to-one, family, small-group, and large-group webinar formats.
In addition, because of the setup of the groups, all participants were required to familiarize themselves with the use of videoconferencing software, which likely would have excluded a number of individuals, particularly those living with dementia with additional accessibility needs. The study by O'Connell et al [66] suggests that participants should be provided with the option of joining via phone and video calls when conducting research remotely. This option was made available during the consenting process; however, it was not encouraged during the VSGs. If the participants had issues with internet connectivity during the session, the option of joining via phone was made available at that point. Future iterations might consider providing the option of joining via phone from the outset, rather than purely to overcome technical difficulties, although the impact on group dynamics of members joining via phone versus video may also need to be assessed.
Conclusions and Future Directions
This paper has highlighted the importance of specific and targeted support delivered via VSG for people caring for or diagnosed with a rare dementia, the importance of coproduction, and the need for comprehensive evaluation of these groups to determine their effectiveness and to further adapt and shape services to meet member needs in future. More broadly, the methods and findings of this work may also be of interest to other dementia-related service providers and to providers of services for other long-term conditions.

This work is part of the RDS Impact Project (The Impact of Multicomponent Support Groups for Those Living With Rare Dementias) and is supported jointly by the Economic and Social Research Council (ESRC) and the National Institute for Health and Care Research (grant ES/S010467/1).
The ESRC is a part of UK Research and Innovation. The views expressed are those of the authors and not necessarily those of the ESRC, UK Research and Innovation, National Institute for Health and Care Research, or Department of Health and Social Care. Rare Dementia Support is generously supported by the National Brain Appeal [67].
Simplified Procedure to Determine the Cohesive Material Law of Fiber-Reinforced Cementitious Matrix (FRCM)–Substrate Joints
Fiber-reinforced cementitious matrix (FRCM) composites have been largely used to strengthen existing concrete and masonry structures in the last decade. To design FRCM-strengthened members, the provisions of the Italian CNR-DT 215 (2018) or the American ACI 549.4R and 6R (2020) guidelines can be adopted. According to the former, the FRCM effective strain, i.e., the composite strain associated with the loss of composite action, can be obtained by combining the results of direct shear tests on FRCM–substrate joints and of tensile tests on the bare reinforcing textile. According to the latter, the effective strain can be obtained by testing FRCM coupons in tension, using the so-called clevis-grip test set-up. However, the complex bond behavior of the FRCM cannot be fully captured by considering only the effective strain. Thus, a cohesive approach has been used to describe the stress transfer between the composite and the substrate, and cohesive material laws (CMLs) with different shapes have been proposed. The CML associated with a specific FRCM–substrate joint is fundamental to capture the behavior of the FRCM-strengthened member and should be determined based on the results of experimental bond tests. In this paper, a procedure previously proposed by the authors to calibrate the CML from the load response obtained by direct shear tests of FRCM–substrate joints is applied to different FRCM composites. Namely, carbon, AR glass, and PBO FRCMs are considered. The results obtained show that the procedure makes it possible to estimate the CML and to associate the idealized load response of a specific type of FRCM with the corresponding CML. The estimated CML can be used to determine the onset of debonding in FRCM–substrate joints, the crack number and spacing in FRCM coupons, and the locations where debonding occurs in FRCM-strengthened members.
Introduction
Fiber-reinforced cementitious matrix (FRCM) composites have attracted the interest of the civil engineering industry as an alternative to fiber-reinforced polymer (FRP) composites for strengthening/retrofitting existing concrete and masonry members. FRCMs comprise open-mesh textiles embedded within an inorganic matrix. The textile can be made of various types of fiber, e.g., carbon, basalt, glass, and polyparaphenylene benzobisoxazole (PBO), whereas the matrix is usually a cement- or a lime-based mortar. FRCMs are externally bonded (EB) to existing concrete and masonry members and can be used to improve bending [1][2][3][4] and shear strengths [5][6][7][8], as well as the axial compressive capacity of predominantly axially-loaded members [9][10][11][12]. EB FRCM reinforcement generally fails due to debonding of the composite at the FRCM-substrate interface, with or without damage of the substrate, or at the matrix-fiber interface [13]. Understanding the FRCM bond behavior is thus fundamental to properly assess the reinforcement effectiveness. The bond between FRCM and different substrates was studied using direct shear tests and small-scale beam tests [14][15][16][17][18]. In the single-lap direct shear test set-up recommended by the Italian [19] and European [20] acceptance criteria for FRCM composites, the FRCM strip is applied to one face of the substrate block and a portion of textile is left bare at the loaded end (beyond the bonded area) to be gripped and pulled by the testing machine, while the substrate is constrained (Figure 1a). During this test, the load P applied to the FRCM textile and the relative displacement between the textile and the substrate at the composite loaded end, named global slip g, are measured. An idealized load response obtained by the direct shear test of an FRCM-substrate joint that failed due to debonding at the matrix-fiber interface is shown in Figure 1b. This load response comprises an initial ascending branch and a subsequent descending branch that ends with a constant applied load Pf. Pf is provided by friction at the matrix-fiber interface after debonding has occurred along the entire bonded length and was observed for different FRCM composites [21,22]. The presence of friction is responsible for a peak load P* higher than that associated with the onset of debonding, provided that the bonded length is greater than the minimum length needed to fully develop the bond stress transfer mechanism, i.e., the effective bond length [23].
The bond behavior of an FRCM-substrate joint can be described using the differential equation [23]:

d²s(x)/dx² − [pf/(Ef Af)] τzx(x) = 0 (1)

where s(x) is the matrix-fiber slip (the reference system is shown in Figure 1a), τzx(x) is the matrix-fiber shear stress, pf is the matrix-fiber contact perimeter, Ef is the textile elastic modulus, and Af is the textile cross-sectional area. In the remainder of the paper, only shear stresses in the direction of the load will be considered and the subscript zx will be omitted for the sake of brevity. Equation (1) is based on the assumption of a pure Mode-II loading condition at the interface where debonding occurs. This assumption, which is often adopted to describe the results of single-lap direct shear tests, is supported by the presence of the matrix layer that covers the textile in FRCM composites, which counteracts the effect of a possible Mode-I loading component.
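As an illustration of how the bond differential equation can be integrated numerically, the sketch below takes it in the standard form d²s/dx² = pf·τ(s)/(Ef·Af), marches the slip from the free end (where ds/dx = 0) to the loaded end, and recovers the applied load as Ef·Af times the fiber strain ds/dx at the loaded end. All parameter values and the crude bilinear-with-friction shear stress-slip law are invented placeholders, not calibrated values from the paper.

```python
# Hedged numerical sketch of the bond equation d2s/dx2 = pf*tau(s)/(Ef*Af).
# Parameters (MPa, mm, mm2) and the simple CML are illustrative assumptions.
def tau(s, tau_m=1.0, s0=0.2, tau_f=0.3):
    """Crude linear-then-frictional shear stress-slip law (placeholder)."""
    return tau_m * s / s0 if s < s0 else tau_f

def applied_load(s_free, L=300.0, n=3000,
                 Ef=206_000.0, Af=5.0, pf=30.0):
    """Integrate slip from the free end (ds/dx = 0) to the loaded end."""
    dx = L / n
    s, ds = s_free, 0.0
    for _ in range(n):
        d2s = pf * tau(s) / (Ef * Af)  # curvature of the slip profile
        ds += d2s * dx                 # semi-implicit Euler step
        s += ds * dx
    return Ef * Af * ds / 1000.0       # axial load at the loaded end, kN

print(round(applied_load(0.01), 2), round(applied_load(0.15), 2))
```

A larger assumed slip at the free end mobilizes shear stress over more of the bonded length, so the recovered applied load grows, which is the basic mechanism behind the ascending branch of the P-g response.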
Once the relationship between the matrix-fiber shear stress and slip, i.e., the interfacial cohesive material law (CML), is known, Equation (1) can be used to describe the stress transfer mechanism along the joint bonded length and study the contribution of EB FRCM strips to the capacity of strengthened members [24]. Various shapes of the shear stress-slip relationship were proposed in the literature (Figure 2). Among them, an exponential CML was proposed in [25] to describe the matrix-fiber bond behavior of PBO FRCM-concrete joints. Different piece-wise functions were also proposed. A trilinear CML was used in [26][27][28], while an elasto-brittle relationship and a rigid-cohesive CML were used in [29] and [30], respectively. Finally, a rigid-trilinear CML was proposed to obtain finite values of the effective bond length in PBO FRCM-substrate joints [23]. These shapes can be adopted to describe the CML associated with interfaces with different mechanical properties.
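A trilinear CML of the kind discussed above can be written as a simple piecewise function. The parameter values below are placeholders that, in the calibration procedure described later, would be fitted to direct shear test results; they are not values from the paper.

```python
# Sketch of a trilinear cohesive material law: linear rise to (s0, tau_m),
# linear softening to (sf, tau_f), then a constant friction stress tau_f.
# Parameter values are illustrative placeholders (mm, MPa).
def trilinear_cml(s, s0=0.05, sf=0.6, tau_m=1.2, tau_f=0.25):
    if s <= s0:                                   # elastic ascending branch
        return tau_m * s / s0
    if s <= sf:                                   # linear softening branch
        return tau_m + (tau_f - tau_m) * (s - s0) / (sf - s0)
    return tau_f                                  # residual friction branch

print(trilinear_cml(0.0), trilinear_cml(0.05), trilinear_cml(1.0))
```

Setting tau_f = 0 recovers a purely softening law with no residual friction, the special case noted in the calibration section.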
A procedure to calibrate a trilinear CML (see Figure 2) using the load response obtained with direct shear tests of FRCM-substrate joints was proposed by the authors in [31]. Since FRCM composites can be manufactured using textiles with different fibers and layouts and with different types of matrix, and can be applied to various substrates, FRCM-substrate joints often show peculiar behaviors. To verify the capability of the proposed procedure to capture the complex behavior shown by various FRCM-substrate joints, in this paper it is applied to carbon, AR glass, and PBO FRCM composites bonded to concrete and masonry substrates.
Calibration of the Proposed Trilinear CML

The proposed trilinear CML consists of a linear ascending branch up to the slip s 0 and maximum shear stress τ m, followed by a linear descending branch up to the slip s f and a constant branch corresponding to the friction shear stress τ f; τ f could also be equal to zero. The proposed trilinear CML can be calibrated starting from the applied load P-global slip g response obtained with a direct shear test (experimental P-g response) following 6 steps. These steps were previously described by the authors in [31] and are recalled here for the sake of clarity. The experimental P-g response consists of a set of applied forces P j (j = 1, 2, ..., N) and corresponding measured global slips g j (j = 1, 2, ..., N).

Step 1. In general, experimental data present small oscillations that can affect the proposed calibration procedure. In the first step, Equations (2)-(4) were employed to reduce these oscillations and obtain, from g j and P j (j = 1, 2, ..., N), a set of global slips g k and a set of applied loads P k (k = 1, 2, ..., N), where N is the number of elements in g k and P k, which can be determined using a trial and error procedure until a satisfactory solution is obtained, and int(Z) denotes the positive integer number nearest to the rational number Z.
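Equations (2)-(4) are not reproduced in this excerpt; purely as an illustration of the oscillation-reduction idea in Step 1, the sketch below uses a centered moving average. The window size w, the function name, and the synthetic P-g response are assumptions for illustration, not the paper's actual smoothing scheme.

```python
import numpy as np

def smooth_response(g, P, w=5):
    """Reduce oscillations in an experimental P-g response with a
    centered moving average of window w. Illustration only: the
    paper's smoothing is defined by its Equations (2)-(4)."""
    kernel = np.ones(w) / w
    # 'valid' mode avoids edge artifacts; the smoothed sets are shorter.
    g_s = np.convolve(g, kernel, mode="valid")
    P_s = np.convolve(P, kernel, mode="valid")
    return g_s, P_s

# Synthetic noisy load response (linear trend plus oscillation)
g = np.linspace(0.0, 1.0, 101)
P = 5.0 * g + 0.2 * np.sin(40 * g)
g_s, P_s = smooth_response(g, P, w=5)
```

Note that averaging shortens the data set, consistent with the trial-and-error choice of the number of retained points described above.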
Step 2. The friction shear stress τ f (see Figure 2), if any, can be determined from the approximately constant applied load P f at the end of the P-g response of specimens that showed matrix-fiber debonding. According to the proposed procedure, P f is the average applied load for global slips higher than g f, which is the global slip associated with a slope of the P-g response lower than a certain P′ f that needs to be defined by the user. Once P f is known, τ f can be obtained as the constant shear stress acting at the matrix-fiber interface along the bonded length ℓ (Figure 1). Figure 3a shows an idealized P k − g k curve with the indication of g f and of the average applied load for g k ≥ g f, whereas Figure 3b shows the corresponding P′ k − g k response.

Step 3. The slope h of the ascending branch of the CML can be computed from the slope p 0 of the ascending branch of the P-g curve, where P 1 and P 2 are the applied loads associated with 0.1P* and 0.5P* and g 1 and g 2 the corresponding global slips extracted from the P j − g j response. It should be noted that this method works provided that g 1 and g 2 are smaller than s 0, which should be verified at the end of the procedure. If the slip s 0 resulting from the procedure is smaller than g 2, the procedure can be repeated using a P 2 smaller than 0.5P*.

Step 4. In this step, the oscillations of the ascending and part of the descending branches of the P-g response (note that the descending branch is considered only to ensure that the stress transfer mechanism is fully established) are reduced via Equations (11)-(13). In particular, global slips g k and corresponding applied loads P k (k = 1, 2, ..., N + 1) were obtained, where N + 1 is the number of P k and g k points obtained and j max is the index of the maximum load in the set of P j.
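The slope-threshold logic of Step 2 can be sketched numerically as follows. The threshold name dP_threshold (standing in for the user-defined P′ f) and the idealized plateau response are illustrative assumptions; the conversion of P f to the interface stress τ f over the bonded area is omitted because the corresponding equation is not reproduced here.

```python
import numpy as np

def friction_load(g, P, dP_threshold):
    """Sketch of Step 2: find g_f as the first global slip where the
    numerical slope of the P-g response drops below dP_threshold, then
    average P over g >= g_f to estimate the friction load P_f."""
    slope = np.gradient(P, g)
    idx = np.flatnonzero(slope < dP_threshold)
    g_f = g[idx[0]]
    P_f = P[g >= g_f].mean()
    return g_f, P_f

# Idealized response: rising branch, then a constant friction plateau
g = np.linspace(0.0, 2.0, 201)
P = np.where(g < 1.0, 4.0 * g, 4.0)
g_f, P_f = friction_load(g, P, dP_threshold=0.5)
```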
Step 5. This step allows for identifying the slip s f at the onset of debonding. Equation (14) is employed to compute the shear stress τ k associated with each g k (k = 1, 2, ..., N). The τ k − g k response represents the experimental CML obtained from the P k − g k response. s f is the slip corresponding to τ f in the τ k − g k response and can be computed with Equation (15), where k fr is the minimum index k such that τ kfr−1 > τ f and τ kfr < τ f. Figure 4 shows the τ k − g k relationship provided by Equation (14) considering the P k − g k response obtained with the idealized response of Figure 3a, where the horizontal constant branch starting at s f, computed by Equation (15), is indicated with a red line. It should be noted that Equation (14) was obtained from the well-known fracture mechanics relationship in Equation (16) [32], which is valid only if the free end slip is null.
Step 6. The fracture energy G F, which is the area below the CML from s = 0 to s = s f, can be obtained by applying the trapezoidal rule to the τ k − g k relationship.

Step 6a. Since the applied load is assumed to be evenly distributed across the composite width, i.e., there is no width effect, the fracture energy G F can be obtained, as an alternative to the procedure in Step 6, by rearranging Equation (16) and considering the debonding load P deb, i.e., the applied load associated with the onset of debonding (g = s f). Figure 5 shows the identification of P deb on the idealized load response of Figure 3a.

Step 7. The trilinear CML peak shear stress τ m and corresponding slip s 0 can be obtained from the fracture energy G F, the slope of the ascending branch h, the shear stress at the onset of debonding τ f, and the corresponding slip s f:

τ m = h s 0 (21)

Figure 6 shows the trilinear CML obtained using the proposed procedure, compared with the experimental τ k − g k curve (see Figure 4). In Figure 6, τ kfr and g kfr were replaced with τ f and s f, respectively.

Final checks. To confirm that the calibrated trilinear CML correctly and accurately describes the experimental response, it can be substituted in Equation (16) to obtain the analytical load response to be compared with the corresponding experimental P-g relationship. However, since Equation (16) assumes infinite bonded length, the trilinear CML should be used to solve Equation (1) and the result compared with the experimental load response to assure that the free end slip can be neglected (see Step 5).
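Steps 6 and 7 can be sketched together. The closed-form expression for s 0 below is derived only from the trilinear geometry described above (the area under the triangle plus trapezoid up to s f equals G F, with τ m = h s 0 as in Equation (21)); it is a reconstruction under those assumptions, not the paper's printed equations, and the synthetic check data are illustrative.

```python
import numpy as np

def calibrate_trilinear(g, tau, h, tau_f, s_f):
    """Sketch of Steps 6-7: integrate the experimental tau-g curve up to
    s_f with the trapezoidal rule to get G_F, then recover s_0 and tau_m
    from the assumed trilinear geometry (area balance, tau_m = h * s_0).
    The area balance 0.5*h*s_0**2 + 0.5*(s_f - s_0)*(h*s_0 + tau_f) = G_F
    simplifies to a linear equation in s_0."""
    mask = g <= s_f
    x, t = g[mask], tau[mask]
    G_F = float(np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(x)))  # Step 6
    s_0 = (2.0 * G_F - tau_f * s_f) / (h * s_f - tau_f)
    tau_m = h * s_0                                           # Equation (21)
    return G_F, s_0, tau_m

# Synthetic check: sample an exact trilinear CML, then recover its parameters
h, s_0_true, s_f, tau_f = 10.0, 0.2, 1.0, 1.0
tau_m_true = h * s_0_true  # 2.0
g = np.linspace(0.0, 1.0, 100001)
tau = np.where(g <= s_0_true, h * g,
               tau_m_true + (tau_f - tau_m_true) * (g - s_0_true) / (s_f - s_0_true))
G_F, s_0, tau_m = calibrate_trilinear(g, tau, h, tau_f, s_f)
```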
Results and Discussion

The proposed procedure was applied to obtain the CML that describes the matrix-fiber interface of various FRCM composites. Namely, the experimental load responses of PBO FRCM-concrete joints [23,33], carbon FRCM-masonry joints [34], glass FRCM-concrete joints [35], and glass FRCM-masonry joints [36] were considered. For each type of composite, the P-g responses obtained with two single-lap direct shear tests were analyzed. All composite strips applied either to a concrete block or to a masonry wallet included a single layer of textile except for two PBO FRCM-concrete joints [33], which included two layers of textile. The geometrical and mechanical properties of the textile and matrix comprising the composite strips are provided in Table 1, where t f = textile equivalent thickness, b* = width of a single textile yarn, f f = textile tensile strength, E f = textile elastic modulus, f mu = matrix compressive strength [37], and f mt = matrix flexural strength [37].
Table 1. Geometrical and mechanical properties of the textile and matrix comprising the composites.
The FRCM strips considered had different bonded lengths ℓ and widths b 1 , including a different number of longitudinal yarns n.Each specimen was named following the notation adopted in the corresponding publication.The geometrical properties of the FRCM strips of each specimen, including the number of layers L and the textile cross-sectional area A f , are provided in Table 2, along with the peak load attained P*. Figure 7 shows that, despite the irregularity of the experimental τ k − g k responses (due to the numerical differentiation of the experimental P j − g j responses), the simple trilinear model allows for capturing the experimental P-g responses up to the onset of debonding for different FRCM composites.
Three main critical aspects can be identified in the proposed procedure. The first critical aspect is related to the determination of s f. This slip is defined in Step 5 as the minimum slip corresponding to the crossing of the horizontal line τ = τ f by the τ k − g k response. Due to its irregularity, the τ k − g k response could cross the τ = τ f line at several slips, as happens in Figure 7b,d,h. In such cases, assuming that s f is located in the descending portion of the τ k − g k response, s f should be chosen so that for slips greater than s f the shear stress τ k is similar to τ f. This is the reason why s f ≈ 1.4 mm was chosen for specimen DS_300_50_1 instead of s f ≈ 0.8 mm. A rational criterion to establish whether the right value of s f has been identified consists of decreasing N, which yields a smoother τ k − g k response, and checking if a similar s f is obtained.
The second critical aspect arises from the assumption that the experimental free end slip is zero. The correctness and eventually the influence of this assumption should be checked by comparing the analytical P(g) response obtained with Equation (16), which assumes zero slip at the free end, and the P(g) response obtained with the procedure described in [42], which is based on Equation (1) and allows for nonzero slip at the free end. The two P(g) responses should be consistent, at least up to g = s f.

The third critical aspect arises from the assumption that the bonded length adopted in the experimental tests is greater than the effective bond length. If the bonded length of the FRCM composite considered is not known from previous work, it is necessary to apply the procedure with experimental results obtained with different bonded lengths and check that the obtained CMLs do not depend on the bonded length. If a dependency of the CML on the bonded length is found, it is possible that the short bonded lengths are shorter than the effective bond length. Consequently, the CMLs determined based on the P-g response of those specimens should be disregarded.
The results obtained confirmed that the proposed procedure can be effectively adopted to obtain the CML from the load response of direct shear tests, without the need for a direct measurement of the composite axial strain. Furthermore, the CML shape adopted provided a simple solution of the differential equation in Equation (1). Due to the complex behavior of FRCM-substrate joints, the procedure required a careful analysis of the load response obtained, since slight variations in the CML can be obtained by varying, for instance, the parameters considered to reduce the oscillations in the load response (see Step 1). However, the final checks proposed allow for verifying that the calibrated CML correctly reproduces the experimental results.
Conclusions
In this paper, an analytical procedure to determine a trilinear CML of FRCM-substrate joints was applied to carbon, AR glass, and PBO FRCM composites applied to concrete and masonry substrates. The results obtained allowed for drawing the following main conclusions:

• The proposed procedure may be used to estimate the parameters of a trilinear CML able to accurately reproduce the experimental load response. Attention should be paid in determining the parameters needed for the procedure. However, the accuracy of the procedure can be assessed by comparing the analytical load response provided by the calibrated CML with the experimental load response.

• The proposed procedure represents a valuable tool to estimate the CML of FRCM-substrate joints, which can then be used to identify fundamental features of the FRCM composite, such as the onset of debonding in FRCM-substrate joints, the crack number and spacing in FRCM coupons, and the locations where debonding occurs in FRCM-strengthened members.

• The proposed procedure allows for simply and rapidly obtaining the parameters of the trilinear CML, which can be used in nonlinear finite element models to estimate the behavior of concrete or masonry structural members strengthened with FRCM composites.
Figure 1. (a) Sketch of a specimen used in single-lap direct shear tests; (b) idealized load response obtained by a direct shear test of a FRCM-substrate joint that failed due to debonding at the matrix-fiber interface.
Figure 3. Idealized (a) P k − g k and (b) corresponding P′ k − g k responses.
Figure 4. Idealized τ k − g k curve and slip s f at the beginning of the friction branch (the constant friction branch is indicated with a red line).
Figure 5. Debonding load P deb on the P-g response obtained from Figure 3a.
Figure 6. Comparison between the experimental τ k − g k curve (in black) and the corresponding trilinear CML obtained with the proposed procedure (in red).
Figure 7. Comparison between analytical and experimental load responses and corresponding CML: (a,b) PBO FRCM-concrete joints; (c,d) PBO FRCM-concrete joints with two layers of textile; (e,f) carbon FRCM-masonry joints; (g,h) bare glass FRCM-concrete joints; (i,j) coated glass FRCM-masonry joints.
Author Contributions
Conceptualization, F.F., T.D., and C.C.; methodology, F.F.; software, T.D.; validation, F.F. and C.C.; formal analysis, T.D.; investigation, T.D.; writing-original draft preparation, T.D. and F.F.; writing-review and editing, C.C. All authors have read and agreed to the published version of the manuscript.

Funding
This research was funded in part by the National Center for Transportation (Washington State University) grant RES515729. Drs. D'Antino and Focacci acknowledge the support of the DPC-ReLUIS 2022-2024 project (WP 14) funded by the Italian Department of Civil Protection.
Table 2. Geometrical and mechanical properties of the textile and matrix comprising the composites.
"year": 2024,
"sha1": "6b0fbe8644a44fe9de78ddef56fd4cd8e451d946",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/17/7/1627/pdf?version=1712062236",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9075b4f5a73f059764857385e4b7a2582df62bae",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Prediction of Type 1 Diabetes at Birth: Cord Blood Metabolites vs Genetic Risk Score in the Norwegian Mother, Father, and Child Cohort
Background and aim: Genetic markers are established as predictive of type 1 diabetes, but unknown early life environment is believed to be involved. Umbilical cord blood may reflect perinatal metabolism and exposures. We studied whether selected polar metabolites in cord blood contribute to prediction of type 1 diabetes. Methods: Using a targeted UHPLC-QQQ-MS platform, we quantified 27 low-molecular-weight metabolites (including amino acids, small organic acids, and bile acids) in 166 children, who later developed type 1 diabetes, and 177 random control children in the Norwegian Mother, Father, and Child cohort. We analyzed the data using logistic regression (estimating odds ratios per SD [adjusted odds ratio (aOR)]), area under the receiver operating characteristic curve (AUC), and k-means clustering. Metabolites were compared to a genetic risk score based on 51 established non-HLA single-nucleotide polymorphisms, and a 4-category HLA risk group. Results: The strongest associations for metabolites were aminoadipic acid (aOR = 1.23; 95% CI, 0.97-1.55), indoxyl sulfate (aOR = 1.15; 95% CI, 0.87-1.51), and tryptophan (aOR = 0.84; 95% CI, 0.65-1.10), with other aORs close to 1.0, and none significantly associated with type 1 diabetes. K-means clustering identified 6 clusters, none of which were associated with type 1 diabetes. Cross-validated AUC showed no predictive value of metabolites (AUC 0.49), whereas the non-HLA genetic risk score AUC was 0.56 and the HLA risk group AUC was 0.78. Conclusions: In this large study, we found no support of a predictive role of cord blood concentrations of selected bile acids and other small polar metabolites in the development of type 1 diabetes.

(The Journal of Clinical Endocrinology & Metabolism, 2021, Vol. 106, No. 10, e4063-e4071, doi:10.1210/clinem/dgab400)
Type 1 diabetes is usually preceded by a prodromal phase characterized by islet autoantibodies, often appearing in early childhood years before diagnosis (1). HLA and other genetic factors clearly contribute to type 1 diabetes susceptibility (1), but the increasing incidence implicate nongenetic factors (2). The typically early seroconversion to islet autoantibodies suggests that early life is important (3)(4)(5).
Maternal age, obesity, and birth weight are early life nongenetic risk factors relatively consistently associated with childhood-onset type 1 diabetes (6)(7)(8)(9)(10)(11). Obesity, dysglycemia, kidney function, and related traits, both in nonpregnant and in pregnant women, are associated with perturbations in small metabolites such as amino acids, creatinine, and bile acids (12)(13)(14)(15), many of which have also been associated with birth weight (12). Metabolites such as glucose, lipids, amino acids, and bile acids can cross the placenta, often bidirectionally, via free diffusion and placentally expressed transmembrane transporters (16)(17)(18). Many small metabolites in cord blood are thus correlated with maternal levels during the third trimester (12). For example, plasma creatinine, a marker of kidney function, largely reflects maternal levels when measured in cord blood (19), but has been linked to birth weight (20,21). Maternal circulating bile acids, which may be influenced by maternal gut microbiota, may program offspring metabolism, or influence their microbiome, and have been linked to insulin resistance (14,22,23). Yet, there is only 1 small study (15 cases and 24 controls) in cord blood (24) and 1 study using dried blood spots (25) to date on nonlipid metabolites and later type 1 diabetes. The authors are not aware of any previous study investigating maternal or newborn plasma bile acids and subsequent type 1 diabetes risk.
The aim of the study was to test if selected small metabolites in cord blood, or combinations of these, could predict future risk of offspring type 1 diabetes in the Norwegian Mother, Father, and Child Cohort Study (MoBa), 1 of the largest pregnancy cohorts in the world. To ensure robust quantification, and to minimize multiple testing problems, we chose a targeted metabolomics approach focusing on small metabolites (molar mass ranging from 75 to ~500 g/ mol) with previous evidence for association with metabolic traits (13). In addition, we investigated established genetic susceptibility markers for comparison of predictive values among biomarkers present at birth.
Participants and study design
We designed a nested case-control study in the MoBa cohort (26), which recruited ~114 000 pregnant mothers (41% eligible participated) nationwide from 1999 through 2008 (last birth in 2009). The current study uses data from cord blood samples and repeated questionnaires collected during pregnancy and up to child age 6 months (27). All participating mothers gave written informed consent. The establishment of MoBa and initial data collection was based on a license from the Norwegian Data Protection Agency and approval from The Regional Committees for Medical and Health Research Ethics. The MoBa cohort is now based on regulations related to the Norwegian Health Registry Act. The current study was approved by The Regional Committees for Medical and Health Research Ethics. Children who developed type 1 diabetes by February 5, 2014, were identified by register linkage to the Norwegian Childhood Diabetes Registry (28) and selected as cases. A random sample of the cohort was included as controls. Case and control samples were retrieved simultaneously and treated equally. In total, 166 children were type 1 diabetes cases and 177 children were controls (Fig. 1). Baseline characteristics for those with available blood samples were largely similar to the whole MoBa cohort, except a lower proportion of cesarean section and preterm birth (29).
Cord blood metabolomics profiling
Blood was sampled from the umbilical cord vein using a syringe immediately after birth, collected in a EDTA container and shipped to the biobank for plasma separation and storage at -80°C (30). We used a UHPLC-QQQ-MS platform for targeted metabolomics with focus on robust measurement as described by Ahonen et al (13). Briefly, measurements were normalized, outliers were removed, and then metabolites were excluded on the basis of high missingness (n = 1) or low coefficient of variation (n = 3) before analysis. Details are given in the Supplemental Appendix (31). In total, 27 metabolites were included for analysis, representing various metabolic pathways. A brief description of the selected metabolites is provided in Supplemental Table 1 (31). Mean values and number of nonmissing measurements of the metabolites, in cases and controls, are shown in Supplemental Table 2 (31). The distributions of the plasma metabolite concentrations were approximately symmetric after log-transformation (Supplemental Figure 1) (31). Z scores were calculated to compare metabolites across different scales.
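A minimal sketch of the standardization step described above (log-transform, then column-wise z scores so metabolites on different scales become comparable); the tiny concentration matrix is illustrative, not real data.

```python
import numpy as np

def zscore_log(X):
    """Log-transform metabolite concentrations (rows = samples,
    columns = metabolites), then standardize each column to mean 0
    and SD 1 so effect sizes are per-SD and comparable."""
    L = np.log(X)
    return (L - L.mean(axis=0)) / L.std(axis=0)

X = np.array([[1.0, 10.0], [2.0, 20.0], [4.0, 40.0]])  # illustrative
Z = zscore_log(X)
```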
Genotyping assays and genetic risk scores
Participants were genotyped for established type 1 diabetes susceptibility single-nucleotide polymorphisms (SNPs) including HLA tagSNPs using a custom Illumina Golden Gate assay (Illumina, San Diego, CA), as described earlier (32). We calculated a 51-SNP non-HLA type 1 diabetes genetic risk score (GRS), weighted by the natural log-odds ratio of type 1 diabetes per risk allele reported from large genome-wide association studies (Supplemental Table 3 (31), with further details given in the online supplement to (33)). HLA class II alleles were imputed using the HLA*IMP:02 web service (34) (details given previously (32)) and all HLA genotypes were subsequently confirmed by allele specific PCR (35).
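The weighted GRS described above can be sketched as a dosage-weighted sum of log odds ratios. The three dosages and odds ratios below are illustrative placeholders, not the actual 51 SNPs and weights of Supplemental Table 3.

```python
import math

def genetic_risk_score(dosages, odds_ratios):
    """Weighted GRS: sum of risk-allele dosages (0, 1, or 2), each
    weighted by the natural log of the published per-allele odds
    ratio from genome-wide association studies."""
    return sum(d * math.log(orr) for d, orr in zip(dosages, odds_ratios))

# Illustrative 3-SNP example (placeholder dosages and odds ratios)
dosages = [2, 1, 0]
odds_ratios = [1.30, 1.10, 1.20]
grs = genetic_risk_score(dosages, odds_ratios)
```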
Other covariates
Information on birth weight, maternal age at delivery, and delivery mode was obtained from the nationwide Medical Birth Registry of Norway (36). Information regarding maternal prepregnancy body mass index (BMI) and smoking during pregnancy was obtained from midpregnancy and child's 6-months-of-age questionnaires. Questionnaires can be accessed at www.fhi.no/moba. Maternal type 1 diabetes data were obtained from questionnaires and the Norwegian Patient Registry. Variables were categorized as shown in Table 1.
Statistical methods
We used logistic regression with child type 1 diabetes as outcome, adjusted for selected variables, unless noted otherwise. The primary and secondary analyses were decided a priori. Our primary analysis was to estimate the logit-linear association between each single cord blood metabolite and child type 1 diabetes risk. Tertiles of each metabolite were used as categorical exposures to test our linearity assumption. We generated 6 clusters (groups of individuals with similar patterns across all metabolites passing quality control) using k-means clustering, assessing whether the 6 clusters associated with type 1 diabetes risk, to investigate if any pattern of metabolites associated with type 1 diabetes.
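The clustering step can be sketched with a minimal Lloyd's-algorithm k-means; random data stand in for the real matrix of standardized metabolite values, and the resulting 6 cluster labels would then serve as a categorical exposure in the logistic regression.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's k-means: partition individuals (rows of X,
    e.g., metabolite z scores) into k clusters of similar profiles."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Squared Euclidean distance of each row to each center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Illustrative data: 60 individuals x 27 standardized metabolites
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 27))
labels, centers = kmeans(X, k=6)
```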
Secondary analyses included restricting the primary analysis to children carrying type 1 diabetes risk HLA genotypes. We also evaluated the predictive performance of
all the metabolites simultaneously, by calculating the area under the receiver operating characteristic curve (AUC) and compared with the AUC of well-established genetic type 1 diabetes susceptibility markers. A model including z scores of all analyzed metabolites, a model using HLA and a model using a weighted GRS were used to generate predictions for later type 1 diabetes based on metabolites, HLA, and non-HLA GRS, respectively. Five-fold cross validation was used in the AUC analysis to adjust for potential overfitting. We also investigated groups of metabolites by using the sum of amino acid levels, sum of bile acid levels, and the ratio of taurine/glycine-conjugated bile acids as exposures. Exploratory analyses included principal component analysis (PCA) using principal components with an eigenvalue above 1 as predictors, and the MetaboAnalyst web service for metabolite set enrichment analysis (37). Details of the statistical analysis are given in Supplemental Methods (31).
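The AUC underlying these comparisons can be sketched via its Mann-Whitney formulation; the toy labels and scores are illustrative, and the 5-fold cross-validation and logistic model fitting are omitted for brevity.

```python
import numpy as np

def auc(y, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen case (y = 1) scores higher than
    a randomly chosen control (y = 0); ties count one half."""
    pos = scores[y == 1]
    neg = scores[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1])
s = np.array([0.10, 0.40, 0.35, 0.80])
print(auc(y, s))  # 0.75: 3 of 4 case-control pairs correctly ordered
```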
Missing values were excluded in the analyses, except for the cluster, AUC, PCA, enrichment, metabolite group (amino and bile acids), and ratio analyses, where we imputed the sample mean for missing values to allow inclusion of all metabolites. We used a clustered sandwich estimator for standard errors to account for correlation between siblings. We present nominal P values (not corrected for multiple testing), and 95% CIs for the odds ratio excluding 1.00 were considered equivalent to nominal significance at the 2-sided 5% level. We planned a priori to use the false discovery rate and calculate q-values to account for multiple testing with 27 metabolites. The following covariates were included in our primary adjustment model: sample batch, date of run, child's sex, cesarean delivery, gestational age at birth (in weeks), maternal age, prepregnancy BMI, smoking in pregnancy (including those that quit smoking shortly before or during pregnancy, because the protective association with type 1 diabetes has been observed in those that smoked throughout pregnancy (49)), and parity (categorized as shown in Table 1). Analyses were run in R version 4.0.2 (38) and Stata release 16 (Stata Corp LLC, College Station, TX).
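The planned false-discovery-rate step for the 27 metabolite tests is commonly implemented with the Benjamini-Hochberg procedure; the paper does not name the exact method, so the following is an illustrative sketch with made-up p values, not the study's data:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean rejection mask controlling the FDR at level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])  # largest i with p_(i) <= alpha*i/m
        reject[order[: k + 1]] = True
    return reject

# 27 illustrative p values, e.g. from the 27 single-metabolite models
pvals = [0.001, 0.004, 0.03] + [0.2 + 0.02 * i for i in range(24)]
print(benjamini_hochberg(pvals).sum())  # → 1
```

As the Discussion notes, when no test reaches even the nominal 5% level, the FDR step cannot rescue any finding: the smallest threshold is alpha/m, so a p value must at least beat alpha to have any chance of surviving.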
Results
Pairwise correlation coefficients among metabolites ranged from 0.77 for glycocholate-taurocholate and 0.62 for tyrosine-phenylalanine, to inverse correlations of -0.74 for glutamine-glutamic acid and -0.55 for taurine-glutamine, but most metabolite pairs showed little correlation (Supplemental Figure 2) (31).
Main analysis: Association of individual metabolites and clusters with type 1 diabetes
The strongest associations were observed for aminoadipic acid (adjusted odds ratio [aOR] = 1.23; 95% CI, 0.97-1.55), indoxyl sulfate (aOR = 1.15; 95% CI, 0.87-1.51), taurochenodeoxycholic acid (aOR = 1.12; 95% CI, 0.87-1.44), and tryptophan (aOR = 0.84; 95% CI, 0.65-1.10), with other aORs close to 1.0. No metabolite was nominally significantly associated with type 1 diabetes (Fig. 2). Categorizing metabolites into tertiles showed no deviation from linearity, except for phenylalanine (Supplemental Figure 3) (31): the second tertile of phenylalanine was associated with lower risk of type 1 diabetes, whereas the lowest and highest tertiles carried neutral risk, so the linear estimate for phenylalanine must be interpreted cautiously. Adjusting only for sample batch, date of run, and maternal BMI or birthweight, the variables likely most relevant to both metabolite levels and offspring type 1 diabetes, did not appreciably change our results (data not shown). When individuals were clustered using k-means clustering into 6 groups of cord blood metabolite profiles, no cluster was significantly associated with type 1 diabetes (Fig. 2).
Secondary analyses: Multivariable prediction of type 1 diabetes at birth with receiver operating characteristic curves
Using all 27 metabolites to predict later type 1 diabetes with multiple logistic regression gave an AUC of 0.66, the non-HLA genetic risk score alone gave an AUC of 0.57, the HLA risk group alone gave an AUC of 0.81, and all combined gave an AUC of 0.87 before cross validation (Fig. 3A). Because the non-HLA genetic risk score was externally weighted, it was not expected to give an overly optimistic (overfitted) prediction, whereas the HLA and metabolite predictions were not externally weighted. After accounting for potential overfitting using 5-fold cross-validation, the metabolites showed no predictive value, whereas the HLA group and non-HLA genetic risk score were only marginally attenuated. HLA remained clearly the most important predictor (Fig. 3B). This illustrates that the initial suggestive predictive utility of the metabolites was likely because of overfitting.
Figure 2. Results from the cluster analysis (a heatmap of mean metabolite concentrations in each cluster) and from the main analysis (logistic regression, with each metabolite in a separate model), shown as a forest plot. *Adjusted odds ratio (aOR) for sample batch, date of run, child's sex, cesarean delivery, length of pregnancy (in weeks), maternal age, prepregnancy body mass index (BMI), smoking in pregnancy, and parity.
Other secondary analyses
Analyzing metabolite groups (by summing the z scores of amino acids, or bile acids, and using these variables as exposures) did not show any statistical association with later type 1 diabetes (amino acids aOR = 1.00; 95% CI, 0.96-1.03; bile acids aOR = 1.02; 95% CI, 0.95-1.09). The taurine/glycine-conjugated bile acid ratio was not associated with offspring type 1 diabetes (aOR = 1.07; 95% CI, 0.91-1.26). Restricting our analysis to only children carrying HLA genotypes conferring increased type 1 diabetes risk gave similar results as the main analysis (Supplemental Table 4) (31).
Exploratory analyses
Metabolite set enrichment analysis did not show any statistically significant results (Supplemental Figure 4) (31). PCA analysis of the metabolites detected 8 principal components with eigenvalues larger than 1. The cumulative proportion of variance of the components was 0.62, and none of these components was associated with later type 1 diabetes (Supplemental Figure 5) (31). Pairwise plotting of the principal components did not show any clear pattern (an example is shown in Supplemental Figure 6) (31).
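The eigenvalue-above-1 selection used in the exploratory PCA (the Kaiser criterion: on standardized variables, an eigenvalue of 1 equals the variance of one input) can be sketched with scikit-learn on simulated data of the study's dimensions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Simulated stand-in: 343 children x 27 standardized metabolite values
X = StandardScaler().fit_transform(rng.standard_normal((343, 27)))

pca = PCA().fit(X)
eigenvalues = pca.explained_variance_

# Keep only components whose eigenvalue exceeds 1 (Kaiser criterion)
keep = eigenvalues > 1
scores = pca.transform(X)[:, keep]  # component scores used as predictors

print(int(keep.sum()), round(float(pca.explained_variance_ratio_[keep].sum()), 2))
```

In the study this selection yielded 8 components covering 62% of the variance; the retained component scores were then tested as predictors in the same logistic regression framework.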
Discussion
In this study, we investigated selected small polar metabolites measured in cord blood and their potential association with later type 1 diabetes. No metabolite, and no cluster of individuals with a distinct metabolite profile, was significantly associated with later type 1 diabetes. Two previous studies have measured polar metabolites around birth (24, 25). La Marca et al, using dried blood spots taken shortly after birth, reported lower levels of alanine and carnitines in children who later developed type 1 diabetes compared with children who did not (25). Our findings are not consistent with the results from these studies, but we did not measure carnitines, and cord blood and dried blood spots might not be directly comparable.
Figure 3. Prediction of type 1 diabetes: (A) AUCs before cross validation; (B) 5-fold cross-validated AUCs.
Metabolomics during infancy and early childhood and associations with islet autoimmunity or type 1 diabetes have been investigated in a few previous studies. The DIPP cohort reported lower glutamic acid and tryptophan at 3 or 6 months of age (q-values < 0.1) (39), and lower concentrations of branched-chain amino acids, phenylalanine, and tyrosine in peripheral blood mononuclear cells at 12 months of age (not significant after multiple testing correction), in children who developed type 1 diabetes compared with controls (40). In the international The Environmental Determinants of Diabetes in the Young (TEDDY) cohort, Johnson et al measured metabolites at 9 months of age and reported dicarboxylic acids (adipic acid being most significant) to be associated with increased risk of subsequent islet autoimmunity (41). Li et al, also based on the TEDDY cohort, reported alanine and β-hydroxybutyric acid at 12 and 24 months of age, respectively, to be associated with lower risk of developing islet autoimmunity in those developing autoantibodies against insulin first (42). The Norwegian Environmental Triggers of Type 1 Diabetes study reported differences in amino acid levels, albeit not statistically significant after correction for multiple testing (43), and the German BABYDIAB study reported lower methionine levels (44), after development of islet autoantibodies. There are also 2 publications from TEDDY and 1 from the Diabetes Autoimmunity Study in the Young (DAISY) using polar metabolites to predict later islet autoimmunity. Stanfill et al used classification algorithms to determine the most predictive features of islet autoimmunity in the TEDDY study (504 samples) and reported adipic acid, creatinine, and leucine as influential metabolites (45). Webb-Robertson et al used a machine learning approach to predict islet autoimmunity using data from the TEDDY study (157 case-control pairs), reporting azelaic acid and adipic acid as important features (46).
Frohnert et al predicted seroconversion to islet autoimmunity in the DAISY study (22 cases and 25 controls) and reported 3-methyloxobutyrate (a precursor of valine and an intermediate in leucine synthesis) and pyroglutamic acid (a derivative of glutamic acid) as features that were often selected by the algorithm used (47).
We did not find any strong associations between specific metabolites measured at birth and later type 1 diabetes. Likewise, we did not find any cluster or group of metabolites that was overrepresented in children who later developed type 1 diabetes, and the metabolites studied had low predictive value for later type 1 diabetes. It must be kept in mind that earlier studies are heterogeneous in most aspects: sample handling, measurement method, exposures measured, results reported, statistical analysis, and endpoint (for an overview, see Supplemental Table 5) (31), which makes results difficult to compare. Few studies adjusted for covariates, but adjusting for covariates or stratifying by type 1 diabetes HLA risk did not change our estimates to a large degree. Approaches to controlling multiplicity vary, and it is not always clear how the problem was handled in previous studies. It is not known whether participants/samples included in subsequent publications from the same cohort (such as publications from the DIPP and TEDDY cohorts) overlap to a large degree or are separate. With the exception of Orešič et al (24) and la Marca et al (25), other studies have not measured polar metabolites at birth. There could also be differences between countries or the populations studied that lead to different results.
The previously published prediction studies using postnatal circulating metabolites are likewise not directly comparable, and there are few consistent findings. The reported AUCs differ across studies, as do the data analysis and validation approaches. Frohnert et al, starting with 1552 features (and 22 cases), reported an AUC of 0.92 for prediction of islet autoimmunity in the DAISY study, without a holdout validation set (47). Webb-Robertson et al, using the TEDDY cohort, reported that the top 42 features (of 221) gave an AUC of 0.65 for prediction of islet autoimmunity in the holdout validation set, appreciably lower than the AUC of 0.74 estimated with cross-validation of the training set (46). As seen in our results, cross-validation is essential to avoid overfitting, even when using few features and a relatively large sample size (166 cases, 177 controls). Several factors influence the potential for overfitting and bias, but in general, the larger the number of predictors in relation to the number of subjects with the outcome, and the more flexible the modelling approach (allowing nonlinearities and interactions), the larger the potential for overfitting. Using a nonvalidated AUC analysis could lead to the misinterpretation that these metabolites have predictive value for later type 1 diabetes in the same range as well-established genetic risk factors, and would add to the predictive value of genetic factors. Cross-validation shows that this overly optimistic result is due to overfitting even with a simple logistic regression model, whereas the well-established genetic factors had an essentially unchanged AUC. This is consistent with the fact that overfitting bias tends to be smaller the greater the a priori evidence for the predictors used.
Although our approach was to start with a limited number of predictors (n = 27) in the model without statistical selection, an approach using statistical selection, such as stepwise selection or least absolute shrinkage and selection operator, would not select any metabolites and thus not obtain an overfitted model. We nevertheless think our results provide an important message regarding potential bias in multivariable prediction of type 1 diabetes or other outcomes. Although overfitting and multiple testing are related concepts, reducing the influence of these requires different techniques. Overfitting is a problem of multivariable prediction and can be minimized by cross-validation and independent validation data. For correcting multiple statistical testing, we had planned to control the false discovery rate at 5%, and compute q-values. Because no single metabolite was significant at the nominal 5% level, we conclude that no metabolite was significant, and q-values do not make any sense in this case (48).
Interpretation and implication of results
Our goal was to investigate if metabolites at birth could be predictive of later type 1 diabetes by studying selected metabolites both singly and combined and investigating their predictive value. Our interpretation of the data is that these metabolites, measured in cord blood, are of limited importance in offspring type 1 diabetes. The evidence that intrauterine exposures are associated with type 1 diabetes is not extensive (3). This does not mean that no metabolic changes occur closer to development of disease, but these possible differences do not seem present at birth for the measured metabolites. Lack of predictive value of these metabolites also does not negate the existence of other perinatal exposures that may influence the risk of type 1 diabetes, if we assume such factors are not strongly correlated with the metabolites measured in our study. The relatively consistently replicated associations between maternal obesity, birth weight, and type 1 diabetes could exert their potential effects through other means.
Strengths and limitations
A strength of the study is the use of cord blood samples, which allows us to study exposures before onset of autoimmunity and dysglycemia, and characterize influences during intrauterine life, including potential maternal exposures in pregnancy. We include children from the general population, not selected by family history or HLA risk genotypes, which makes our results more applicable to the general population. This study is the largest measuring polar metabolites in cord blood. The targeted approach allowed us to quantify a set of metabolites with a biological basis. This reduced problems of multiplicity (multiple testing and overfitting), with the potential cost of not including potentially relevant metabolites. Still, increasing the number of measured metabolites would come at a cost of increasing multiplicity problems. As with any observational study, we cannot rule out unmeasured confounders.
Conclusions
In this large study, the selected polar metabolites measured in cord blood were not associated with later type 1 diabetes in the offspring.
The author has applied a CC BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission.
Author Contributions: Conception and design: G.T., L.C.S., K.S. Literature search: G.T., L.C.S. Acquisition of pregnancy cohort data: L.C.S., K.S. Acquisition of incident type 1 diabetes data: T. Skrivarhaug, G.J., P.R.N. Measurement of cord blood metabolites: T. Suvitaival, L.A., C.L.Q. Data cleaning and preparation: G.T., T. Suvitaival. Planning statistical analyzes: L.C.S., G.T., K.S. Performing statistical analyses: G.T. Interpretation of data: all authors. Drafting the manuscript: G.T. Revising the manuscript critically for important intellectual content: all authors. Final approval of the version to be published: all authors. Taking responsibility for the integrity of the data and the accuracy of the data analysis: G.T., L.C.S. Obtaining funding: L.C.S., K.S.
Disclosures: The authors have nothing to disclose. No potential conflict of interest relevant to this article was reported. The authors alone are responsible for the content and writing of the paper.
Data Availability: Aggregated data are available from the authors upon reasonable request. The consent given by the participants does not open for storage of data on an individual level in repositories or journals. Access to individual level data sets requires an application, approval from The Regional Committee for Medical and Health Research Ethics in Norway, and an agreement with MoBa.
"year": 2021,
"sha1": "e14a4f6d76741cf338f9759296ff965c5be3d9a6",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/jcem/article-pdf/106/10/e4062/40443195/dgab400.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a43d31593408e1e96135f287fc2085a1c8492492",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Water Extract of Chrysanthemum indicum L. Flower Inhibits Capsaicin-Induced Systemic Low-Grade Inflammation by Modulating Gut Microbiota and Short-Chain Fatty Acids
Systemic low-grade inflammation induced by unhealthy diet has become a common health concern as it contributes to immune imbalance and induces chronic diseases, yet effective preventions and interventions are currently unavailable. The Chrysanthemum indicum L. flower (CIF) is a common herb with a strong anti-inflammatory effect in drug-induced models, based on the theory of “medicine and food homology”. However, its effects and mechanisms in reducing food-induced systemic low-grade inflammation (FSLI) remain unclear. This study showed that CIF can reduce FSLI and represents a new strategy to intervene in chronic inflammatory diseases. In this study, we administered capsaicin to mice by gavage to establish a FSLI model. Then, three doses of CIF (7, 14, 28 g·kg−1·day−1) were tested as the intervention. Capsaicin was found to increase serum TNF-α levels, demonstrating a successful model induction. After a high dose of CIF intervention, serum levels of TNF-α and LPS were reduced by 62.8% and 77.44%. In addition, CIF increased the α diversity and number of OTUs in the gut microbiota, restored the abundance of Lactobacillus and increased the total content of SCFAs in the feces. In summary, CIF inhibits FSLI by modulating the gut microbiota, increasing SCFAs levels and inhibiting excessive LPS translocation into the blood. Our findings provided a theoretical support for using CIF in FSLI intervention.
Introduction
Certain foods, such as chili, litchi, pepper, etc., are known as "heating" foods in traditional Chinese medicine. The excessive consumption of "heating" foods may cause a number of disorders, such as red and swollen eyes, acne, sores and ulcers in the mouth and tongue, swollen gums, sore throat, yellow urine, constipation and other symptoms; these symptoms are known as "shanghuo" (heating-up) in Chinese medicine [1][2][3]. Modern medicine defines "shanghuo" as a kind of systemic and chronic low-grade inflammation characterized by a significant increase in inflammatory factors such as tumor necrosis factor-α (TNF-α) [4]. Prolonged systemic low-grade inflammation can cause substantial damage to the body and induce chronic diseases such as obesity, diabetes and depression [5][6][7]. The mechanism of food-induced systemic low-grade inflammation (FSLI) has yet to be established. Generally, nonsteroidal anti-inflammatory drugs (NSAIDs) are used to treat high-grade inflammation clinically. However, FSLI is a long-term condition that requires prolonged periods of medication, but such long-term treatment with NSAIDs can induce a number of side effects in the gastrointestinal tract, liver, nervous system and other organs of the body [8]. Therefore, persistent FSLI requires more appropriate prevention and intervention strategies with fewer side effects. Traditional Chinese medicine believes that medicine and food have a homologous relationship, and food components can also act as medicine to prevent and treat various disorders. Thus, according to this tradition, some foods are commonly used to replace anti-inflammatory drugs to inhibit inflammatory "shanghuo". The Chrysanthemum indicum L. flower (CIF) is one such herb that is often added to tea to relieve sore throats. Several reports have shown that CIF has antioxidant activity in vitro [9][10][11][12].
However, the mechanism of action of CIF in FSLI has yet to be elucidated to provide a solid scientific foundation for its applications as a herbal medicine in preventing and treating FSLI.
Previous studies have shown that FSLI is associated with a diet-induced disorder in the intestinal microbiota [13,14]. However, in most of the existing studies, the direct injection of LPS was used to establish the inflammatory model, which may not represent the situation of low-grade inflammation induced by an unhealthy diet. To overcome this problem, a food-induced model of systemic low-grade inflammation needs to be established. Studies have reported that the excessive consumption of chili in some people can induce inflammatory "shanghuo" [1,15,16]. However, the role of gut microbiota in the process of inflammation induced by chili has not been reported in the literature.
In this study, capsaicin, a major component of chili (a representative "heating" food), was used to establish a FSLI model by the oral gavage of mice for 7 days. The CIF were then orally administered to the mice at different doses as the treatment. Inflammatory factors, gut microbiota and SCFAs were analyzed to determine their correlations. The objective of the study was to investigate the anti-inflammatory effect of CIF on capsaicin-induced FSLI and elucidate its mechanism of action, so as to provide a theoretical basis for the development of functional foods and nutritional supplements with efficient anti-systemic inflammation effects.
Preparation of CIF Extract
Dry CIF were purchased from Antai Biotechnology Co., Ltd. (Shenzhen, China). They were mixed with distilled water in a 1:8 ratio (w/w), heated and simmered for 1 h; this extraction was repeated for three sessions, followed by filtration with a Büchner funnel. The filtrate was concentrated to 400 mL at 47 ± 1 °C by rotary evaporator (N-1100V-WB, Tokyo Physicochemical Equipment Co., Tokyo, Japan). The concentrate was desiccated in a laboratory oven for 62 h at 60 °C to produce the CIF powder. The CIF powder was stored at −80 °C in a refrigerator and diluted to 0.4, 0.2 and 0.1 g/mL with pure water before use. The main components of the CIF extract were 3,4-dihydroxybenzoic acid, chlorogenic acid, luteoloside and linarin. The extraction methods and the main components are reported in the literature [17,18].
Preparation of Laboratory Animals
Six-week-old female C57BL/6 J mice, each weighing 20 ± 2 g, were purchased from Beijing HuaFukang Biotechnology Co., Ltd. (Beijing, China) and raised in specific pathogen-free (SPF) conditions, with the license number SCXK (Beijing) 2014007. The mice were first acclimated to an environment of 25 ± 2 °C, humidity 55% ± 5% and a 12 h light/dark cycle. During the experiment, adequate feed and water were provided. After 3 days of adaptive feeding, the mice were randomly divided into 5 groups: a blank control group (CHCA group), model group (CHCB group), low-dose group (CHL group), medium-dose group (CHM group) and high-dose group (CHH group). Mice in the treatment groups were treated with 14.4 g·kg−1·day−1 capsaicin (purity > 99%, Guangzhou Bosen Pharmaceutical Co., Ltd., Guangzhou, China) solution orally for 7 days. Starting from day 10, mice in the CHL, CHM and CHH groups were treated with 7, 14 and 28 g·kg−1·day−1 of CIF for 3 days. The lowest dose used in this study corresponded to a human dose of 567 mg/kg, which was within the range of 278.17 ± 358.0 mg/kg reported in the literature [19]; the human dose was converted to the mouse-equivalent dose as described in [20]. The medium and high doses were 2 times and 4 times the low dose, respectively.
Determination of LPS and Inflammatory Factors in the Serum of Mice
At the end of treatment, mouse blood was collected into a pyrogen-free centrifuge tube. The blood was maintained at 25 °C for 30 min and centrifuged in a refrigerated centrifuge at 8000 rpm and 4 °C for 5 min, and the serum was collected. Lipopolysaccharide (LPS) in the serum was assessed using a limulus amebocyte lysate kit (Xiamen Limulus Reagent Biotechnology Co., Ltd., Xiamen, China). Serum levels of tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), interleukin-6 (IL-6) and interleukin-10 (IL-10) were determined by the respective ELISA kits (Shenzhen Xinbosheng Biotechnology Co., Ltd., Shenzhen, China).
Analysis of Mouse Gut Microbiota
Mouse feces were collected directly into 1.5 mL sterile centrifuge tubes on the last day of the experiment. Fecal samples were analyzed by 16S rDNA high-throughput sequencing. Species information was then obtained by comparison against a reference database to determine the taxonomic categories of the gut bacteria. The gut microbiota analysis followed the methods described in [21].
Determination of SCFAs Content
A 50 mg fecal sample (collected as in Section 2.4) was mixed with 100 µL of 15% phosphoric acid, 100 µL of 125 µg/mL isohexanoic acid solution as internal standard and 900 µL of ether, and the mixture was homogenized for 1 min. The suspension was centrifuged at 12,000 rpm and 4 °C for 10 min, and the supernatant was filtered through a 0.22 µm organic micropore membrane. Acetic acid, propionic acid, butyric acid, isobutyric acid, valeric acid and isovaleric acid in the filtrate were analyzed by gas chromatography-mass spectrometry (GCMS-QP2010, Shimadzu, Japan). A total of 1 µL of sample was injected into the GC-MS, which was equipped with a VF-Wax column. Helium was the carrier gas at a flow rate of 1.0 mL/min. The injection temperature was 250 °C and the ion source temperature was 230 °C.
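Quantification against the spiked internal standard follows the usual single-point ratio form; a minimal sketch with illustrative peak areas and an assumed relative response factor of 1 (the 125 µg/mL internal-standard concentration is from the text; the areas are made up):

```python
def scfa_conc(area_analyte, area_is, conc_is, rrf=1.0):
    """Internal-standard quantification:
    concentration = (analyte area / IS area) * IS concentration / relative response factor."""
    return (area_analyte / area_is) * conc_is / rrf

# Example: an acetic acid peak twice the internal-standard peak, IS spiked at 125 ug/mL
print(scfa_conc(area_analyte=2.0e6, area_is=1.0e6, conc_is=125.0))  # → 250.0
```

In practice the relative response factor for each SCFA would be calibrated from standards run on the same instrument.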
Statistical Analysis
SPSS 24.0 was used to analyze all the results, and each analysis was replicated 5 times (n = 5). All experimental data were expressed as mean ± SD. One-way Analysis of Variance (ANOVA) was performed to compare means in different groups, and a value of p < 0.05 was considered statistically significant.
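The one-way ANOVA group comparison can be sketched with scipy (the paper used SPSS; the readings below are illustrative numbers for five groups of n = 5, not the study's measurements):

```python
from scipy.stats import f_oneway

# Illustrative serum TNF-alpha readings for the five groups (n = 5 each)
chca = [10.1, 9.8, 10.4, 10.0, 9.9]
chcb = [20.5, 21.1, 19.8, 20.9, 20.2]  # model group: elevated
chl = [18.0, 17.5, 18.6, 17.9, 18.2]
chm = [12.0, 11.5, 12.4, 11.8, 12.1]
chh = [11.0, 10.6, 11.3, 10.9, 11.1]

f_stat, p_value = f_oneway(chca, chcb, chl, chm, chh)
print(p_value < 0.05)  # → True; a value below 0.05 was considered significant
```

A significant omnibus F test is usually followed by pairwise post hoc comparisons (e.g. against CHCA and CHCB), which is how the group-versus-group percentages in the Results are typically obtained.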
Effects of Capsaicin and CIF on the Levels of Serum Inflammatory Factor in Mice
To determine whether capsaicin treatment can induce inflammation in mice and the effects of CIF on capsaicin-induced inflammation, the typical inflammatory factors TNF-α, IL-1β, IL-6 and IL-10 in the mouse serum were measured ( Figure 1). Compared with the CHCA group, the serum TNF-α level in the CHCB group was increased by 105.8% (p < 0.05), while IL-6 and IL-10 were decreased by 70.4% and 30.4% (p < 0.05), respectively. These results demonstrated that capsaicin gavage successfully induced FSLI in the mice. On the other hand, compared with the CHCB group, the TNF-α level of mice in the CHM and CHH groups decreased by 54.9% and 62.8% (p < 0.05), respectively. Furthermore, treatment with high-dose CIF increased the levels of IL-6 in the blood of mice by 150.4% (p < 0.05), while the levels of IL-10 in serum significantly increased (p < 0.05) in all groups. In sum, CIF administration led to decreases in the proinflammation factors (TNF-α) and increases in the level of anti-inflammation factors (IL-6 and IL-10) in the serum of FSLI mice. treatment with high-dose CIF increased the levels of IL-6 in the blood of mice by 150.4% (p < 0.05), while the levels of IL-10 in serum significantly increased (p < 0.05) in all groups. In sum, CIF administration led to decreases in the proinflammation factors (TNF-α) and increases in the level of anti-inflammation factors (IL-6 and IL-10) in the serum of FSLI mice.
Effects of Capsaicin and CIF on Serum LPS Levels in Mice
Lipopolysaccharide (LPS) is a major component of the cell wall of gram-negative bacteria and can induce inflammation in vivo when circulating in the blood of the organism [22]. The LPS in the mouse serum was measured in this study (Figure 2). Compared with the CHCA group, the LPS concentration in the serum of mice in the CHCB group increased 2.33-fold (p < 0.05). After intervention with medium and high doses of CIF, the LPS levels in the treatment groups decreased by 75.56% and 77.44%, respectively (p < 0.05). These results further confirmed that capsaicin gavage induced FSLI in the mice, and treatment with medium and high doses of CIF effectively reduced LPS in the FSLI mice.
Effects of Capsaicin and CIF on the Diversity of Intestinal Microflora in Mice
Alpha (α) diversity is one of the important indices reflecting the abundance, evenness and diversity of gut microbiota. As shown in Table 1, the Shannon, Simpson, Chao1 and Ace indices of the CHCB group were significantly lower (p < 0.05) compared with the CHCA group, demonstrating that feeding mice with capsaicin significantly reduced the abundance and diversity of gut microbiota in the mice. On the other hand, all of the indices in the CHL and CHH groups were significantly higher than in the CHCB group (p < 0.05), suggesting that the intervention with low and high doses of CIF enhanced and restored the richness and diversity of gut microbiota that had been affected by feeding the mice with capsaicin. The Venn diagram presents a breakdown of the common and unique operational taxonomic unit (OTU) numbers among each group (Figure 3). Compared with the 287 OTUs in the CHCA group, a markedly reduced number of 101 OTUs were found to be unique to the CHCB group. The number of OTUs endemic to the CHCB group was 37, which was considerably lower than the 89 OTUs endemic to the CHL group and 68 OTUs endemic to the CHH group ( Figure 3A). The distances between the samples were determined in the principal coordinate analysis (PCoA), which reflects the difference in β diversity of gut microbiota ( Figure 3B). The results were not significant, suggesting similar structural characteristics of gut microbiota in each group. In sum, the feeding of mice with capsaicin reduced the richness and diversity of their gut microbiota, while treatment with CIF restored and enhanced the richness and diversity.
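The Shannon and Simpson indices reported in Table 1 can be computed directly from OTU counts; a minimal sketch with toy counts (not the study's sequencing data):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero OTU proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2); higher means more even/diverse."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

even = [25, 25, 25, 25]    # four equally abundant OTUs
skewed = [85, 5, 5, 5]     # one dominant OTU, lower diversity

print(shannon(even) > shannon(skewed))  # → True
print(simpson(even) > simpson(skewed))  # → True
```

Both indices rise with richness (more OTUs) and evenness, which is why the capsaicin-driven loss of OTUs and dominance shifts reported above depress them, while Chao1 and Ace estimate richness alone.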
Effects of Capsaicin and CIF on Gut Microbiota at Phylum and Genus Levels in Mice
Figure 4 shows the effects of capsaicin and CIF on the gut microbiota composition of mice. At the phylum level, the abundance of Bacteroidetes and TM7 in the CHCB group decreased by 79.63% and 97.99%, respectively, while the abundance of Firmicutes increased by 86.15% (p < 0.05) compared with the CHCA group, suggesting that the structure of the gut microbiota was significantly altered by feeding the mice with capsaicin. After CIF intervention, however, the abundance of TM7 was significantly increased 7.73-, 2.90- and 7.90-fold in the CHL, CHM and CHH groups, respectively, compared with the CHCB group, while the abundance of Bacteroidetes did not change significantly (p < 0.05) (Figure 4A). These results suggest that the CIF restored the structural changes in gut microbiota caused by capsaicin.
At the genus level, feeding the mice with capsaicin caused the abundance of Xanthomonas, Stenotrophomonas, Anaerofustis and Lactobacillus in the CHCB group to increase 340.87-, 730.97-, 6.38- and 6.49-fold (p < 0.05), while the abundance of Ruminococcus was decreased by 65.5%, compared with the CHCA group. However, CIF intervention resulted in the levels of Xanthomonas and Stenotrophomonas in inflamed mice decreasing by more than 99.7% compared with the CHCB group. The abundance of Anaerofustis in the CHL and CHM groups decreased by 76.62% and 83.25% (p < 0.05), respectively, compared with the CHCB group. The levels of Lactobacillus in the CHL, CHM and CHH groups decreased by 28.57%, 84.94% and 71.40%, respectively (p < 0.05), while the abundance of Ruminococcus species increased by 137.74%, 61.50% and 263.90%, respectively, compared with the CHCB group.
In sum, feeding the mice with capsaicin caused a significant decrease in the species diversity of their gut microbiota, and changes in the dominant species, while CIF treatment restored the species diversity reduced by capsaicin.
Correlation between Inflammatory Factors and Gut Microbiota
To find the relationship between capsaicin-induced inflammation and the gut microbiota in the mice, as well as the mechanism of the CIF treatment, the Pearson correlation coefficient was used to analyze the correlation between inflammatory factors and gut microbiota (Figure 5). The change in the proportion of Verrucomicrobia was positively correlated with LPS. There was a negative correlation between Tenericutes and IL-10 level. The number of Lactobacillus had a significant positive correlation with LPS and TNF-α (p < 0.05), while the number of Akkermansia was correlated with the level of LPS (p < 0.05). In addition, RF-39 was negatively correlated with the level of IL-10 (p < 0.05).
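The Pearson coefficient used for Figure 5 is straightforward to compute. A minimal sketch follows; the abundance and cytokine vectors here are hypothetical placeholders, not the study's data.

```python
import math

def pearson(x, y):
    """Pearson correlation r = cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-mouse values: relative abundance of a genus vs. serum LPS.
abundance = [0.02, 0.05, 0.11, 0.14, 0.20]
lps_level = [1.1, 1.6, 2.4, 2.9, 3.8]
print(round(pearson(abundance, lps_level), 3))  # close to +1: strong positive correlation
```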
Effects of Capsaicin and CIF on SCFAs Content of Feces in Mice
Feeding the mice with capsaicin led to significant reductions in the level of SCFAs in the feces of the mice (Figure 6). The levels of acetic acid, propionic acid, butyric acid, isobutyric acid and total SCFAs in the CHCB group were reduced (p < 0.05) by 75.5%, 73.1%, 67.0%, 58.5% and 74.3%, respectively, compared with those in the CHCA group, while the concentrations of valeric acid and isovaleric acid did not differ significantly. On the other hand, the treatment of the capsaicin-fed mice with CIF resulted in restoration and increases in the SCFA levels in the mice. Compared with the CHCB group, the total SCFA production in the CHL, CHM and CHH groups increased by 357.0%, 751.6% and 549.5%, respectively (p < 0.05), while butyric acid in these groups increased (p < 0.05) by 23.1%, 287.3% and 25.9%, respectively. The levels of acetic, propionic, isobutyric, isovaleric and valeric acids were increased (p < 0.05) by 843.1%, 650.2%, 242.0%, 227.9% and 746.6% in the CHM group, respectively, compared with the CHCB group. The levels of these acids in the CHH group increased by 632.5%, 565.7%, 127.7%, 84.5% and 167.5%, respectively, compared with the CHCB group. Overall, these results indicate that capsaicin intervention significantly reduced the production of SCFAs in mice, while treatment with CIF significantly increased the production of SCFAs. Medium and high doses of CIF effectively increased the synthesis of SCFAs in mice.
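The percent changes quoted above are simple ratios of a group mean against the relevant reference group. A minimal sketch with hypothetical concentrations, not the study's data:

```python
def percent_change(treated_mean, reference_mean):
    """Percent change of a group mean relative to a reference group mean."""
    return (treated_mean - reference_mean) / reference_mean * 100.0

# Hypothetical mean total-SCFA concentrations (arbitrary units).
control, capsaicin_fed = 100.0, 25.0
print(percent_change(capsaicin_fed, control))      # -75.0: a 75% reduction
cif_treated = 114.25
print(percent_change(cif_treated, capsaicin_fed))  # 357.0: a 357% increase
```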
Discussion
Previous studies have used direct LPS injection to induce systemic low-grade inflammation in laboratory animals such as mice [23]. The drawback of this practice is that the inflammatory effects induced by LPS may not be the same as, or accurately represent, those induced by an unhealthy diet. To overcome this drawback, an inflammation model induced by food components is needed. Capsaicin is the component in chili that gives the "hot" sensation and is widely believed in Chinese medicine to cause "shanghuo" or heating-up symptoms in the body. Some studies have reported the beneficial effects of low-dose capsaicin on the health of mice [24][25][26], while others have shown that high doses of capsaicin can cause intestinal inflammation and physiological disorders [27][28][29][30]. In the present study, a FSLI model was established via the oral administration of high doses of capsaicin. Our results showed that high doses of oral capsaicin significantly increased the serum TNF-α levels in mice to almost twice those of the control group, indicating that excessive capsaicin can induce FSLI in mice. Our findings demonstrate the feasibility of establishing a model of FSLI using capsaicin, which is reported here for the first time.
Chrysanthemum indicum L. is a herb that is commonly added to tea or infused directly as a tea analogue in China to prevent or alleviate heating-up symptoms, especially in summer. People usually add 25-50 g of CIF to 250-500 mL of water, so many studies use the water extract of CIF at concentrations in the range of 0.1-0.2 g/mL [31,32]. The lowest concentration in this study was 0.1 g/mL, in line with habitual intake. As a Chinese herbal medicine, the average human equivalent dose is 278.1 ± 358.0 mg/kg, and the value for single-herb preparations is 322.7 ± 488.4 mg/kg [19]. The lowest dose used in this study corresponded to a human dose of 567 mg/kg, which was within the range reported in the literature. The medium and high doses were 2 and 4 times the low dose, respectively. Chrysanthemum indicum L. is rich in polyphenols such as luteolin, caryolane, acacetin and apigenin, which have been shown to have anti-inflammatory activity [33]. However, there is relatively scant information on the effect of CIF as a whole in treating systemic low-grade inflammation. In this study, we found that exposure to medium and high doses of CIF significantly reduced the level of the inflammatory factor TNF-α, which had been significantly increased by feeding the mice with capsaicin, implying that CIF was able to alleviate capsaicin-induced systemic inflammation. We also found that the serum IL-6 level decreased significantly in capsaicin-induced systemic inflammation, contrary to previous reports that cancer-related chronic inflammation is accompanied by an increase in IL-6 [34]. Treatment with high-dose CIF led to significantly elevated levels of IL-6, restoring it to the level of the control group. Furthermore, feeding capsaicin to mice significantly reduced their serum IL-10 level. Previous reports have shown that IL-10 is an anti-inflammatory factor that inhibits the synthesis of pro-inflammatory factors [35].
The IL-10 levels in the mice significantly increased after the CIF intervention. Taken together, these results demonstrate that CIF reduced capsaicin-induced systemic low-grade inflammation by lowering pro-inflammatory factors such as TNF-α and raising anti-inflammatory factors, including IL-6 and IL-10.
Previous studies have shown that excessive LPS translocation into the blood is one of the main causes of systemic inflammation [36,37]. However, it is not clear whether food-induced FSLI is mediated via a similar mechanism. The results obtained in this study showed that the concentration of LPS in the inflamed mice was more than twice that of the control group, suggesting that high-dose capsaicin promoted excessive LPS translocation into the blood and induced systemic inflammation. On the other hand, CIF treatments significantly decreased the levels of LPS in serum, indicating that CIF effectively prevented the release of large amounts of LPS into the blood and thereby reduced the level of inflammation. This agrees with previous in vitro cell culture studies showing that CIF treatments reduced LPS-induced inflammation. As the concentration of LPS in blood is linked to intestinal mucosal permeability, it can be speculated that CIF reduces intestinal barrier permeability, thus preventing the release of LPS into the blood and lowering the level of inflammation. However, further studies are needed to confirm that CIF enhances intestinal barrier integrity.
In the past 10 years, numerous studies have investigated the role of dietary intake in altering gut microbiota. However, these studies have focused on the effects of major food components such as fat, sugar and dietary fiber on the gut microbiota, while relatively few have examined the effect of minor components such as capsaicin on gut microbiota and their relationship with systemic inflammation, especially inflammation induced by an unhealthy diet. An appropriate amount of capsaicin has been found to have a positive regulatory effect on the structure and function of gut microbiota [25]. In the present study, high-dose capsaicin caused significant decreases in both the abundance and diversity of the gut microbiota of mice. Moreover, the structure of the microflora also changed significantly. Gut microbiota have been reported to play a key role in the development of inflammation [38], and alterations of gut microbiota caused by capsaicin are a potential pathway for its inflammatory effects. On the other hand, intervention with medium and high doses of CIF restored the abundance and diversity of gut microbiota, indicating that CIF probably inhibited FSLI by modulating the structure of gut microbiota. One unanticipated finding was that the abundance of Lactobacillus increased significantly after the oral administration of high-dose capsaicin, while the Pearson correlation analysis indicated that the abundance of Lactobacillus was significantly related to the serum LPS and TNF-α content. This result contradicts the common belief that an increased abundance of Lactobacillus can reduce inflammation levels [39][40][41], indicating that large increases in Lactobacillus may also contribute to systemic inflammation. Wang et al. found that CIF can decrease the abundance of Lactobacillus in metabolic hypertensive rats [42], but no studies have shown whether CIF can decrease the abundance of Lactobacillus in a FSLI model.
This study found that intervention with medium and high doses of CIF can decrease the amount of Lactobacillus increased by high-dose capsaicin, so that the abundance of Lactobacillus was not significantly different from that of the control group, in partial agreement with the findings of Wang et al. One of the main phenolic components of CIF, chlorogenic acid, has been reported to improve the relative abundance of Lactobacillus in a mouse model of dextran sulfate sodium-induced colitis [43]. However, this study found that CIF decreased the abundance of Lactobacillus. The effect of CIF may be to restore the abnormal abundance of Lactobacillus induced by high-dose capsaicin, rather than simply to raise or lower it. Pearson correlation analysis indicated that the abundance of Lactobacillus was significantly related to the serum LPS and TNF-α content, which is closely related to the level of FSLI. The large increase in Lactobacillus may potentially contribute to capsaicin-induced gut microbiota disorder and inflammation, but further studies are needed to confirm this hypothesis. After the intervention with medium and high doses of CIF, the abundance of Lactobacillus became comparable to that of the control group, indicating that CIF can effectively rebalance the structure of gut microbiota by restoring the abundance of Lactobacillus. The main components of CIF are polyphenols; most polyphenols (90-95%) cannot be absorbed directly by the gastrointestinal tract, but are transported to the colon and fermented by specific bacteria to produce SCFAs [44].
Short-chain fatty acids play a key regulatory role in a variety of metabolic functions of the host. Butyric acid can adjust tight junction proteins to enhance the function of the intestinal barrier, while acetic acid and butyric acid inhibit intestinal inflammation [45][46][47]. Medium and high doses of CIF significantly increased the concentrations of acetic acid and butyric acid in mouse feces. Further, CIF treatment significantly increased the abundance of Ruminococcaceae, a family of butyrate-producing bacteria, suggesting that CIF promotes the synthesis of butyric acid, with the consequent enhancement of the intestinal barrier function, which in turn can hinder the entry of LPS into the blood and reduce the level of inflammation.
Conclusions
This study shows that excessive amounts of capsaicin can trigger gut microbiota disorder, increase the amount of LPS released into the blood and induce systemic low-grade inflammation. Therefore, capsaicin can be used to establish a FSLI model. On the other hand, treatments with medium and high doses of CIF can help restore the structure of gut microbiota, increase the production of SCFAs such as butyric acid, prevent the entry of LPS into the blood and inhibit FSLI. These findings provide scientific evidence to support the use of CIF for preventing and treating inflammation induced by an unhealthy diet and lay a foundation for the development of CIF-based functional foods and drinks for the prevention of chronic low-grade inflammation-related disorders.
Institutional Review Board Statement:
The animal study protocol was approved by the Animal Ethics Committee of Guangdong Ocean University (approval ID GDOU-20190724).
Informed Consent Statement: Not applicable.
Data Availability Statement: Sequencing reads were deposited in the NCBI's sequence read archive under accession number PRJNA906648. Further data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "4788ffa91a5352937d153fe8df60a6bd41f600e3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/15/5/1069/pdf?version=1676964876",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad5a05302d2b964a7526be8212e6016ed7afbc5c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A phase transition in random coin tossing
Suppose that a coin with bias θ is tossed at renewal times of a renewal process, and a fair coin is tossed at all other times. Let µ_θ be the distribution of the observed sequence of coin tosses, and let u_n denote the chance of a renewal at time n. Harris and Keane showed that if ∑_{n=1}^∞ u_n² = ∞, then µ_θ and µ_0 are singular, while if ∑_{n=1}^∞ u_n² < ∞ and θ is small enough, then µ_θ is absolutely continuous with respect to µ_0. They conjectured that absolute continuity should not depend on θ, but only on the square-summability of {u_n}. We show that in fact the power law governing the decay of {u_n} is crucial, and for some renewal sequences {u_n} there is a phase transition at a critical parameter θ_c ∈ (0, 1): for |θ| < θ_c the measures µ_θ and µ_0 are mutually absolutely continuous, but for |θ| > θ_c they are singular. We also prove that when u_n = O(n^{-1}), the measures µ_θ for θ ∈ [−1, 1] are all mutually absolutely continuous.
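The observation model in the abstract is easy to simulate. Below is a minimal sketch; the heavy-tailed inter-arrival law, parameter names and seed are illustrative choices, not from the paper.

```python
import random

def sample_observations(n, theta, gamma=0.75, seed=0):
    """Simulate n tosses: a coin with bias theta (P[X=1] = (1+theta)/2) is
    tossed at renewal times of a renewal process with heavy-tailed
    inter-arrival times, and a fair coin is tossed at all other times."""
    rng = random.Random(seed)
    xs = []
    next_renewal = 0
    for t in range(n):
        if t == next_renewal:
            p = (1 + theta) / 2          # biased coin at a renewal time
            # Pareto-tailed inter-arrival time: P[T > t] decays like t^(-gamma)
            next_renewal = t + max(1, int(rng.paretovariate(gamma)))
        else:
            p = 0.5                      # fair coin at all other times
        xs.append(1 if rng.random() < p else -1)
    return xs

biased = sample_observations(10_000, theta=0.8)
fair = sample_observations(10_000, theta=0.0)
print(sum(biased), sum(fair))  # positive theta picks up extra +1's at renewals
```

With the same seed the two runs share renewal times and uniforms, so the biased sequence dominates the fair one coordinatewise; distinguishing µ_θ from µ_0 by statistics of such samples is exactly the question the paper studies.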
Kakutani's dichotomy for independent sequences reduces, in the case of coin tosses, to the following:
Theorem A ([13]). Let µ_0 be the distribution of i.i.d. fair coin tosses on {−1, 1}^N, and let ν_θ be the distribution of independent coin tosses with biases {θ_n}_{n=1}^∞. Then µ_0 and ν_θ are mutually absolutely continuous if ∑_{n=1}^∞ θ_n² < ∞, and mutually singular otherwise.
For a proof of Theorem A see, for example, Theorem 4.3.5 of [7].
Harris and Keane conjectured that singularity of the two laws µ_θ and µ_0 should not depend on θ, but only on the return probabilities {u_n}. In particular, they asked whether the condition ∑_{k=0}^∞ u_k² < ∞ implies that µ_θ ≪ µ_0, analogously to the independent case treated in Theorem A. We answer this negatively in Sections 4 and 5, where the following is proved.
Notation: Write a n ≍ b n to mean that there exist positive finite constants C 1 , C 2 so that C 1 ≤ a n /b n ≤ C 2 for all n ≥ 1.
(ii) The bias θ can be a.s. reconstructed from the coin tosses {X_n}, provided θ is large enough. More precisely, we exhibit a measurable function g so that, for all sufficiently large θ, θ = g(X) a.s. (Results of [10] imply that θ_s is the critical parameter for µ_θ to have a square-integrable density with respect to µ_0.)
[Table: asymptotics of u_n and the corresponding critical parameters]
There are renewal sequences corresponding to the last row for which 0 < θ_s < θ_c = 1; see Theorem 1.4 and the remark following it. Theorem 1.1(ii) shows that for certain chains satisfying ∑_{n=0}^∞ u_n² < ∞, for θ large enough, the bias θ of the coin can be reconstructed from the observations X. Harris and Keane described how this can be done for all θ in the case where Γ is the simple random walk on the integers, and asked whether it is possible whenever ∑_n u_n² = ∞. In Section 6 we answer affirmatively, and prove the following theorem:
Theorem 1.2. If ∑_n u_n² = ∞, then there is a measurable function h so that θ = h(X) µ_θ-a.s. for all θ.
In fact, h is a limit of linear estimators (see the proof given in Section 6). Theorem 1.2 is extended in Theorem 6.1.
There are examples of renewal sequences with ∑_k u_k² < ∞ which do not exhibit a phase transition:
Theorem 1.3. If the return probabilities {u_n} satisfy u_n = O(n^{-1}), then µ_θ ≪ µ_0 for all 0 ≤ θ ≤ 1.
For example, the return probabilities of (even a delayed) random walk on Z² have u_k ≍ k^{-1}.
Remark: The significance of this result is that the asymptotic condition on {u_n} still holds if the underlying Markov chain is altered to increase the transition probability from the origin to itself.
This result is proved in Section 9. It is much easier to prove that µ_θ and µ_0 are always mutually absolutely continuous in the case where the Markov chain is "almost transient", for example if u_k ≍ (k log k)^{-1}. We include the argument for this case as a warm-up to Theorem 1.3. To prove Theorem 1.3 and Theorem 1.4 we refine this argument. The model discussed here can be generalized by substituting real-valued random variables for the coin tosses. We consider the model where observations are generated with distribution α at times when the chain is away from o, and a distribution η is used when the chain visits o.
Similar problems of "random walks on scenery" were considered by Benjamini and Kesten in [3] and by Howard in [11,12]. Vertices of a graph are assigned colors, and a viewer, provided only with the sequence of colors visited by a random walk on the graph, is asked to distinguish (or reconstruct) the coloring of the graph.
The rest of this paper is organized as follows. In Section 2, we provide definitions and introduce notation. In Section 3, we prove a useful general zero-one law, to show that singularity and absolute continuity of the measures are the only possibilities. In Section 4, Theorem 1.1(i) is proved, while Theorem 1.1(ii) is established in Section 5. We prove a more general version of Theorem 1.2 in Section 6. In Section 7, we prove a criterion for absolute continuity, which is used to prove Theorem 1.4 in Section 8 and Theorem 1.3 in Section 9. A connection to long-range percolation and some unsolved problems are described in Section 10.
2. Definitions. Let Υ = {0, 1}^∞ be the space of binary sequences. Denote by ∆_n the n-th coordinate projection from Υ. Endow Υ with the σ-field H generated by {∆_n}_{n≥0} and let P be a renewal measure on (Υ, H), that is, a measure obeying
(2.1) P[∆_{k_1} = 1, ∆_{k_2} = 1, . . . , ∆_{k_m} = 1] = u_{k_1} u_{k_2−k_1} · · · u_{k_m−k_{m−1}} for all 0 < k_1 < k_2 < · · · < k_m,
where u_n = P[∆_n = 1]. Let {T_k}_{k=1}^∞ denote the inter-arrival times of the renewal process: if S_n = inf{m > S_{n−1} : ∆_m = 1} is the time of the n-th renewal (with S_0 = 0), then T_n = S_n − S_{n−1}, and {T_k} is an i.i.d. sequence. We will use f_n to denote P[T_1 = n].
In the introduction we defined u n as the probability for a Markov chain Γ to return to its initial state at time n. If ∆ n = 1 {Γn=o} , then the Markov property guarantees that (2.1) is satisfied. Conversely, any renewal process ∆ can be realized as the indicator of return times of a Markov chain to its initial state. (Take, for example, the chain whose value at epoch n is the time until the next renewal, and consider returns to 0.) Thus we can move freely between these points of view. For background on renewal theory, see [8] or [15].
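The inter-arrival law {f_n} determines the return probabilities {u_n} through the standard renewal equation u_m = ∑_{k=1}^m f_k u_{m−k} with u_0 = 1. A small sketch, with a toy inter-arrival law chosen for illustration:

```python
def renewal_sequence(f, n):
    """Return u_0..u_n from the inter-arrival law f (dict k -> P[T_1 = k])
    via the renewal equation u_m = sum_{k=1}^m f_k * u_{m-k}, with u_0 = 1."""
    u = [1.0] + [0.0] * n
    for m in range(1, n + 1):
        u[m] = sum(f.get(k, 0.0) * u[m - k] for k in range(1, m + 1))
    return u

# Toy law: inter-arrival time is 1 or 2 with equal probability (E[T_1] = 1.5),
# so by the renewal theorem u_n -> 1 / E[T_1] = 2/3.
u = renewal_sequence({1: 0.5, 2: 0.5}, 20)
print(u[1], u[2], u[20])
```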
Suppose that α, η are two probabilities on R which are mutually absolutely continuous, that is, they share the same null sets. In the coin tossing case discussed in the Introduction, these measures are supported on {−1, 1}. Given a renewal process, independently generate observations according to η at renewal times, and according to α at all other times. We describe the distribution of these observations for various choices of η.
Let R^∞ denote the space of real sequences, endowed with the σ-field G generated by coordinate projections. Write η^∞ for the product probability on (R^∞, G). In the case where η is the coin tossing measure with bias θ, write Q_θ for Q_η.
The random variables Y_n, Z_n are defined by Y_n(y, z, δ) = y_n and Z_n(y, z, δ) = z_n.
Finally, the random variables X_n are defined by X_n = (1 − ∆_n) Y_n + ∆_n Z_n. The distribution of X = {X_n} on R^∞ under Q_η will be denoted µ_η.
3. A Zero-One Law and Monotonicity. We use the notation established in the previous section. Let G_n be the σ-field on R^∞ generated by the first n coordinates. If µ_β and µ_π are both restricted to G_n, then they are mutually absolutely continuous, and we can define the Radon–Nikodym derivative ρ_n = dµ_π/dµ_β |_{G_n}. Write ρ for lim inf_{n→∞} ρ_n; the Lebesgue Decomposition Theorem (see Theorem 4.3.3 in [7]) implies that for any A ∈ G,
(3.2) µ_π(A) = ∫_A ρ dµ_β + µ_π^{sing}(A),
where µ_π^{sing} ⊥ µ_β. Thus to prove that µ_π ≪ µ_β, it is enough to show that ρ < ∞ µ_π-a.s.
Lemma 3.1. The tail σ-field T(Y, Z, ∆) is trivial.
Proof. Here E denotes the exchangeable σ-field. The Hewitt–Savage Zero-One Law implies that E, and hence T(∆), is trivial.
The Monotone Class Theorem implies that all bounded G × G × H-measurable functions obey (3.5). We conclude that T(Y, Z, ∆) is trivial. ✷
Proposition 3.2. Either µ_π and µ_β are mutually absolutely continuous, or they are mutually singular.
Proof. Suppose that µ_π and µ_β are not mutually singular. From (3.2), it must be that ρ < ∞ with positive µ_π probability. Because the event {ρ < ∞} is in T, Lemma 3.1 implies ρ < ∞ µ_π-almost surely. Using (3.2) again, we have that µ_π ≪ µ_β. The same argument, with the roles of β and π reversed, yields that µ_β ≪ µ_π also. ✷
We return to the special case of coin tossing here, and justify our remarks in the introduction that for certain sequences {u_n} there is a phase transition. In particular, we need the following monotonicity result.
Suppose now that µ θ1 ⊥ µ 0 . Then for 1 2 < γ < 1 and ℓ a slowly varying function. If Remark. The conditions on θ specified in the statement above are not vacuous.
That is, there are examples where the lower bound on θ is less than 1. There are random walks with return times obeying u_n ≍ n^{-γ}, as shown in Proposition 4.3.
By introducing delays at the origin, u 1 can be made to be close to 1, so that Proof. Let E denote expectation with respect to the renewal measure P and let E θ denote expectation with respect to Q θ . Let u r = max{u i : i ≥ 1} and assume for now that r = 1.
Note that we have defined things so that c(n) ≍ n −p , where p < 1 − γ. Then We need the following simple lemma: Taking expectation proves the lemma. ✷ By this lemma, and thus Combining (4.9) and (4.12), we find that Also, (4.13) follows from Lemma 4.2, and the last term in (4.14) comes from the contributions when j = 0.
If A_n is the event that there is a run of length k(n) after epoch k(n) and before n, then (4.15) and the second moment inequality yield a lower bound on Q_θ[A_n], and by the Zero-One Law (Lemma 3.1) we have that Q_θ[lim sup A_n] = 1. A theorem of Erdős and Rényi (see, for example, Theorem 7.1 in [21]) states that under the measure µ_0, L_n / log_2 n → 1, where L_n is the length of the longest run before epoch n. But under the measure µ_θ, we have just seen that we are guaranteed to see, infinitely often, a run of length (1 + ǫ) log_2 n before time n.
Apply the preceding argument to this subsequence to distinguish between µ_θ and µ_0. ✷
Proposition 4.3. There exists a renewal measure P with u_n ∼ Cn^{-γ}.
Proof. For a distribution function F to be in the domain of attraction of a stable law, only the asymptotic behavior of the tails F(t), 1 − F(−t) is relevant (see, for example, Theorem 8.3.1 in [4]). Thus if the symmetric stable law with exponent 1/γ is discretized so that it is supported on Z, then the modified law is in the same domain of attraction. An example of a Markov chain with U_n ≍ n^{1/4} will be constructed by another method in Section 8.
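The longest-run statistic behind the argument above is simple to compute. A minimal sketch of the Erdős–Rényi comparison (sample size and seed are illustrative):

```python
import math
import random

def longest_run(xs, value=1):
    """Length of the longest consecutive run of `value` in xs."""
    best = cur = 0
    for x in xs:
        cur = cur + 1 if x == value else 0
        best = max(best, cur)
    return best

# Under the fair-coin measure, L_n / log2(n) -> 1 (Erdos-Renyi), so a run much
# longer than log2(n) is evidence against the fair-coin law.
rng = random.Random(7)
n = 1 << 16
tosses = [rng.randrange(2) for _ in range(n)]
print(longest_run(tosses), math.log2(n))  # the longest run is typically near 16
```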
5. Determining the bias θ. In this section we refine the results of the previous section and give conditions that allow reconstruction of the bias from the observations.
For a ≥ 1, let Because ET i = ∞, Cramér's Theorem (see, e.g., [6]) implies that Λ * (a) > 0 for all a. Since lim a↑∞ P[T 1 ≤ a] = 1, it follows that lim a↑∞ Λ * (a) = 0. Also, It is convenient to reparameterize so that we keep track of ϕ Hence, The maximum of ψ(ϕ, ·) over (0, 1] is attained, so we can define We show now that ψ(ϕ, ξ 0 ) > ψ(ϕ, 1), a fact which we will use later (see the remarks following Theorem 5.1). Let ℓ = min{n > 1 : f n > 0}, and note that of length 1 and ⌊ǫk⌋ inter-renewal times of length ℓ, then in particular there are at least k renewals. Consequently, Taking logs, normalizing by k, and then letting k → ∞ yields Thus for ǫ bounded above, the left-hand side of (5.19) is bounded below by Since the derivative of h 2 tends to infinity near 0, there is a positive ǫ where the difference is strictly positive. Thus, the maximum of ψ(ϕ, ·) is not attained at ξ = 1.
In particular, for ϕ > ψ −1 (γ) (equivalently, θ ≥ 2 ψ −1 (γ) − 1), we can recover ϕ (and hence θ) from X: , (see the comments before the statement of Theorem 5.1) we have that The right-hand side of (5.22) is the upper bound on θ c obtained in Proposition 4.1, while the left-hand side is the upper bound given by Theorem 5.1. Thus this section strictly improves the results achieved in the previous section.
We begin by proving that R(X) ≤ ζ ∨ 1, or equivalently, that as the waiting time at n until the next renewal (the residual lifetime at n). We and consequently we have Taking expectations over (∆ n+m+1 , . . . , ∆ n+k(n) ) in (5.26) gives that The equality in (5.27) follows from the renewal property, and clearly the righthand side of (5.27) is maximized when m = 1. Therefore the right-hand side of (5.25) is bounded above by We now examine the probability Q θ [E n | ∆ n+1 = 1] appearing on the righthand side of (5.28). Let N [i, j] def = j k=i ∆ k be the number of renewals appearing between times i and j. In the following, N = N [n + 1, n + k(n)]. We have By conditioning on the possible values of N , (5.29) is bounded by Hence, returning to (5.28), Let q = c(1 − ψ(ϕ)), and since c > ζ ∨ 1, we have that q + γ > 1. Letting Then, using (5.32), it follows that a)a −1−q , and U n ≤ Cn 1−γ , the right-hand side of (5.33) is bounded above by We have that (5.34), and hence (5.33), is bounded above by Since q + γ > 1, (5.35) is bounded as L → ∞. We conclude that (5.31) is summable. Applying the Borel-Cantelli lemma establishes (5.24).
It is convenient to couple together monotonically the processes X θ for different θ. See (3.6) in the proof of Proposition 3.3 for the construction of the coupling, and let {V i } be the i.i.d. uniform random variables used in the construction.
First, using the coupling, we have that R(X θ ) ≥ R(X 0 ) = 1. Hence, It is enough to show that if c < ζ, then The importance of the coupling and the last condition is that a good run in I i implies an observed run (X j = 1 ∀ j ∈ I i ).
The probability of G n i is given by is the probability of at least ξ 0 k(n) renewals in the interval I i , given that there is a renewal at ik(n). Note that p i ≡ p 1 for all i, by the renewal property.
Following the proof of Proposition 4.1, we define D n = Σ_{j=1}^{n/k(n)} 1_{G n j} , and compute the first and second moments of D n . Using (5.36) gives By definition of Λ * , we can bound below the probability p 1 : For n sufficiently large, where ǫ > 0 is arbitrary. Thus, plugging (5.39) into (5.37) shows that for n sufficiently large, gives that for n large enough, We turn now to the second moment, which we show is bounded by a multiple of the square of the first moment.
We compute the probabilities appearing in the sum by first conditioning on the renewal process: Then observe that (5.46) is bounded above by u σ + u σ+k(n) + · · · + u σ+mk(m) . Taking expectation over σ in (5.48), and then plugging into (5.44) shows that where we have used the expression (5.37) for E θ D n . Finally, using (5.49) in (5.41) yields that Now, we have, as in the proof of Proposition 4.1, that G n i happen infinitely often. But since a good run is also an observed run, also the events happen infinitely often. But, if R jk(n) ≥ k(n), then certainly R jk(n) ≥ k(jk(n)).
Thus, in fact the events We conclude that ζ ≤ R(X).
It is not hard to verify that E θ T n = θ and sup n Var θ (T n ) < ∞. Since {T n } is a bounded sequence in L 2 (µ θ ), it has an L 2 -weakly convergent subsequence.
Because the limit T of this subsequence must be a tail function, T = θ a.s.
Finally, standard results of functional analysis imply that there exists a sequence of convex combinations of the estimators T n that tends to θ in L 2 (µ θ ) and a.s.
The disadvantage of this approach is that the convergent subsequence and the convex combinations used may depend on θ; thus the argument sketched above only works for fixed θ. The proof of Theorem 6.1 below provides an explicit sequence of estimators not depending on θ.
We return to the general setting described in Section 2. A collection Ψ of bounded Borel functions on R is called a determining class if µ = ν whenever ∫ ψ dµ = ∫ ψ dν for all ψ ∈ Ψ.
The following theorem generalizes Theorem 1.2. For each pair m i < n i , let Let {ǫ j } be any sequence of positive numbers. We will inductively define We now show how to define (m i+1 , n i+1 ), given n i , so that (6.50) is satisfied.
Observe that
Fix k, and write m, n for m ℓ , n ℓ respectively. We claim that Then applying Cauchy-Schwarz to the right-hand side of (6.53) bounds it by establishing (6.52). Using the bound (6.52) in (6.51), and recalling that |ψ| ≤ 1, .
Consequently,
Proof. In this proof the probability space will always be Υ 2 , endowed with the product measure P 2 , where P is the renewal probability measure. Let be the number of joint renewals in the interval [m, n].
First we show that
∀ C, T 1 + · · · + T k ≥ e Ck eventually. (8.66) Observe that, since we have assumed that u n ≤ C 2 n −1 , the expectation of the sum on the right in (8.68) is finite for C large enough. Thus the sum is finite ∆-almost surely, so the conditions of Lemma 7.1 are satisfied. We conclude that µ η ≪ µ α .
Let A(s) = Σ_{n=1}^{∞} f n s n be the moment generating function for the distribution of the time of first return to x 0 for the chain with transitions P , and let B(s) be the corresponding generating function but for the chain P ′ and state y 0 . Then the generating function for the distribution of the time of the first return of Φ to Proof. Let S 1 , S 2 , . . . be the times of successive visits of Φ to X × {y 0 }, and Observe that Y is a Markov chain with transition matrix P ′ , so {T k } has the distribution of return times to y 0 for the chain P ′ .
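The formula elided after "first return of Φ to" is plausibly the composition B(A(s)): the first return of Φ to the product state is a random sum of i.i.d. return times with generating function A, where the number of summands is itself a P ′ -return time with generating function B, and generating functions of such random sums compose. A hedged sketch (σ i and N are labels introduced only for this sketch):

```latex
C(s) \;=\; \mathbb{E}\Bigl[ s^{\,\sigma_1 + \cdots + \sigma_N} \Bigr]
\;=\; \mathbb{E}\bigl[ A(s)^{N} \bigr]
\;=\; B\bigl(A(s)\bigr),
```

where the σ i are i.i.d. first-return times of the P -chain to x 0 (generating function A) and N is distributed as the first-return time of the P ′ -chain to y 0 (generating function B), independent of the σ i by the construction of Φ.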
We use the following Tauberian theorem: (ii) W (y) ≍ y α ℓ(y) for large y.
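The theorem being invoked is presumably Karamata's Tauberian theorem for power series (Feller, Vol. II, Ch. XIII.5); a hedged reconstruction of the standard statement, for w n ≥ 0, α ≥ 0, and ℓ slowly varying, reads:

```latex
\sum_{n \ge 0} w_n s^n \;\sim\; (1-s)^{-\alpha}\, \ell\!\Bigl(\tfrac{1}{1-s}\Bigr)
\quad (s \uparrow 1)
\qquad \Longleftrightarrow \qquad
\sum_{n \le y} w_n \;\sim\; \frac{y^{\alpha}\, \ell(y)}{\Gamma(\alpha+1)}
\quad (y \to \infty).
```

The same equivalence holds with ∼ weakened to ≍ on both sides (using monotonicity of the partial sums), which is the form matching condition (ii) above.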
We now exhibit Markov chains with no phase transition.
Proposition 8.5. There is a Markov chain that satisfies U n ≍ log log n, and u n ≤ Cn −1 .
The last example is closely related to the work of Gerl in [9], who considered certain "lexicographic spanning trees". Let S ′ n and T ′ n denote the renewal times and inter-renewal times of another independent renewal process, and recall that J is the total number of simultaneous renewals of the two processes. In this section, we prove the following: Theorem 9.1. When u n = O(n −1 ), the sequence {q n } defined in (9.70) decays faster than exponentially almost surely, that is, n −1 log q n → −∞ almost surely.
We start by observing that the assumption u n ≤ c 1 /n implies a bound for tails of the inter-renewal times: Indeed, by considering the last renewal before time (1 + a)n, Choosing a large yields (9.71).
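The bound (9.71) itself is elided here, but it can plausibly be reconstructed from the sketched argument by decomposing over the last renewal before time (1 + a)n; writing U N = Σ_{k≤N} u k , a hedged sketch:

```latex
1 \;=\; \sum_{k=0}^{(1+a)n} u_k\, \mathbb{P}\bigl[T_1 > (1+a)n - k\bigr]
\;\le\; \mathbb{P}[T_1 > n]\; U_{(1+a)n} \;+\; \sum_{k=\lceil an \rceil}^{(1+a)n} u_k ,
```

since (1 + a)n − k > n whenever k < an. With u k ≤ c 1 /k, we have U (1+a)n ≤ 1 + c 1 log((1 + a)n) and the last sum is at most c 1 log((1 + a)/a) + o(1); choosing a large makes that term less than 1/2, yielding P[T 1 > n] ≥ c 2 / log n for large n, which is presumably the content of (9.71).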
Let ω(n) be any function going to infinity, and denote m(n) := n log nω 2 (n) .
Below, we will often write simply m for m(n).
We can conclude that log r n ≤ log (m(n) choose n) + log R n . (9.76) Notice that (m(n) choose n) = e^{O(n log log n)} when ω(n) is no more than polylog n; for convenience, we assume throughout that ω 2 (n) = o(log n). Hence, if we can show that (log R n )/(n log log n) → −∞ almost surely, (9.77) then by (9.76), it must be that (9.74) holds.
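The quantity multiplying R n in (9.76) is naturally read as the number of n-element subsets of [m(n)], and the e^{O(n log log n)} claim then checks out; a hedged verification, using m(n) = n log n · ω 2 (n) and ω 2 (n) = o(log n):

```latex
\log \binom{m(n)}{n}
\;\le\; n \log \frac{e\, m(n)}{n}
\;=\; n \bigl( 1 + \log\log n + 2 \log \omega(n) \bigr)
\;=\; O(n \log\log n),
```

since 2 log ω(n) ≤ log log n for all large n.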
For any n-element set A ⊂ [m(n)], we use the following notation: • For any k ≤ m ′ , let I(k) be the set of indices i such that {T i } i∈I(k) are the k largest inter-renewal times among We have where x 0 = 0 and S 0 := 0. Recalling that u n ≤ c 1 /n, we may bound the right-hand side above. In what follows, k 0 (n) := 10(log n ω(n)) 2 . Assuming this for the moment, we finish the proof of the theorem. The following summation by parts principle will be needed.
This proves the lemma. ✷ Lemma 9.4. Write {T i } n i=1 in decreasing order: Then lim inf Proof. It suffices to prove this lemma in the case where u n ≍ n −1 , because in the case where u n ≤ cn −1 , the random variables T i stochastically dominate those in the first case.
From [5], it can be seen that Since (m ′ log m ′ ) −1 Σ_{i=k 0 (n)+1}^{m ′} log(T i /c) has a nonzero liminf by Lemma 9.4, we see that (log log n) log R n /(n log n) is not going to zero, from which follow (9.77) and the theorem. Let I(k) = {r 1 < r 2 < · · · < r k } be a uniform k-subset of [m(n)]; then the event G n,m has the same probability as the event G n,m , defined as for all n-element sets A = {x 1 < · · · < x n = m} ⊆ [m] and k satisfying m ≥ k > k 0 (n), at least kn/(6m log log n) of the intervals [x i−1 + 1, x i ] contain an element of I(k).
Equivalently, G n,m is the event that for all n-element sets A = {x 1 < · · · < x n = m} ⊆ [m] and k satisfying k > k 0 (n), at least kn/(6m log log n) of the intervals Finally, G n,m can be rewritten again as the event for k obeying m ≥ k > k 0 (n), no kn/(6m log log n)−1 of the intervals [r i , r i+1 − 1] together contain n points.
Proving the inequality (9.83) is then the same as proving that We have that G n,m = ∩_{k=k 0 (n)+1}^{m} G n,m,k .
A coin is chosen for each cluster (connected component), and labels are generated at each x ∈ Z + by flipping this coin. The coin used for vertices in the cluster of the origin is θ-biased, while the coin used in all other clusters is fair. The bonds are hidden from an observer, who must decide which coin was used for the cluster of the origin. For certain Γ (e.g., for the random walks considered in Section 4), there is a phase transition: for θ sufficiently small, it cannot be determined which coin was used for the cluster of the origin, while for θ large enough, the viewer can distinguish. This is an example of a 1-dimensional, long-range, dependent percolation model which exhibits a phase transition. Other 1-dimensional models that exhibit a phase transition were studied by Aizenman, Chayes, Chayes, and Newman in [2].
• In Sections 4 and 8, we constructed explicitly renewal processes whose renewal probabilities {u n } have prescribed asymptotics. Alternatively, we could invoke the following general result.
• An extended version of the random coin tossing model, when the underlying Markov chain is simple random walk on Z, is studied in [17]. Each vertex z ∈ Z is assigned a coin with bias θ(z). At each move of a random walk on Z, the coin attached to the walk's position is tossed. In [17], it is shown that if |{z : θ(z) = 0}| is finite, then the biases θ(z) can be recovered up to a symmetry of Z.
Some unsolved problems. Recall that ∆ and ∆ ′ denote two independent and identically distributed renewal processes, and u n = P[∆ n = 1]. The distribution of the sequence of coin tosses, when a coin with bias θ is used at renewal times, is denoted by µ θ .
1. Is the quenched moment generating function criterion in Lemma 7.1 sharp?
"year": 2004,
"sha1": "eb074b4d615de5d2e7d5b30d2149c8a8325ac316",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1214/aop/1015345766",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "c6559fe6e57652bf82e783ad0f2c038fbf28eae8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Contemporary Review of Submandibular Gland Sialolithiasis and Surgical Management Options
One of the most common disorders of the salivary glands is obstructive sialolithiasis. Salivary gland obstruction is important to address, as it can significantly impact patient quality of life and can progress to extensive cellulitis and abscess formation if left untreated. For small and accessible stones, conservative therapies often produce satisfactory outcomes. Operative management should be considered when stones are inaccessible or larger in size, and options include sialendoscopy, laser lithotripsy, extracorporeal shockwave lithotripsy, transoral surgery, and submandibular gland adenectomy. Robotic approaches are also becoming increasingly used for submandibular stone management. The purpose of this review is to summarize the modern-day management of submandibular gland obstructive sialolithiasis with an emphasis on operative treatment modalities. A total of 77 articles were reviewed from PubMed and Embase databases, specifically looking at the pathophysiology, clinical presentation, diagnosis, and management of submandibular sialolithiasis.
Introduction and Background
Sialolithiasis is the most common disorder of the major salivary glands, accounting for 50% of major salivary gland diseases [1,2]. While sialolithiasis has been noted in approximately 1% of autopsy reports, clinically significant sialolithiasis is less common, with studies showing an incidence of 30 to 60 symptomatic cases requiring treatment per million individuals per year [3][4][5]. Patients with obstructive sialadenitis present with a history of recurrent painful periprandial swelling of the involved gland, best known as the "meal-time syndrome," which may often be complicated by recurrent bacterial infections with fever and purulent discharge at the papilla [6]. If left untreated, salivary gland obstruction can progress to extensive cellulitis, abscess formation, and airway compromise [7].
Sialolithiasis occurs in the submandibular gland in about 80-90% of cases [8]. Historically, submandibular sialoadenectomy was the treatment of choice for complicated or recalcitrant sialolithiasis. Recent advances in minimally invasive treatment options for submandibular sialolithiasis have led to successful stone removal with high rates of gland preservation. This article provides an update on contemporary management of submandibular gland sialolithiasis, with an emphasis on minimally invasive treatment modalities including sialendoscopy and transoral robotic surgery (TORS).
Level of Evidence | Study Design
1 | Properly powered and conducted randomized clinical trial; systematic review with meta-analysis
2 | Well-designed controlled trial without randomization; prospective comparative cohort trial
3 | Case-control studies; retrospective cohort study
4 | Case series with or without intervention; cross-sectional study
5 | Opinion of respected authorities; case reports
Pathophysiology
Sialoliths are composed of both organic substances, including glycoproteins, mucopolysaccharides, and cellular debris, as well as inorganic substances, which consist mainly of calcium carbonates and calcium phosphates [9]. Other inorganic components of sialoliths include minerals such as calcium, magnesium, phosphate, manganese, iron, and copper. The organic substances often predominate in the center of the stone, while the periphery is mostly inorganic [9].
While the pathogenesis of sialolithiasis is unknown, two major hypotheses have been proposed. Some speculate that intracellular microcalculi excreted into the canal may become a nidus for further calcification [10,11]. A second hypothesis suggests that food, debris, or bacteria in the oral cavity can migrate into the salivary ducts and become the nidus for further calcification [12]. Both hypotheses maintain that an initial organic nidus progressively grows by the deposition of layers upon layers of inorganic and organic substances [9].
The etiologic factors responsible for an increased incidence of sialolithiasis in certain individuals remain unknown. While it was traditionally believed that high calcium intake contributes to an increased risk of salivary stone formation, a study investigating the geographic distribution of water hardness showed no link between higher-mineral-content water and increased incidence of salivary stones [13]. Results of a national study in Sweden suggested that genetics may play a role, as familial clustering was common in patients with sialolithiasis. Other risk factors commonly attributed to salivary gland stone formation include dehydration and smoking, use of diuretics and anticholinergic medications, history of gout or trauma, and chronic periodontal disease [7,14,15].
Salivary stones occur predominantly in the submandibular gland, likely because Wharton's duct has a longer course, upward salivary flow, and increased alkalinity and viscosity of the saliva when compared with Stensen's duct [16]. The mean size of submandibular stones is 7.3 mm with an average growth rate of around 1 mm per year, although giant sialoliths measuring up to 70 mm have been described [9,[17][18][19][20]. In general, stones greater than 15 mm in size are considered giant salivary gland calculi [20]. The majority of submandibular stones are located in the distal third of the duct or at the hilum of the gland, while pure intraparenchymal stones are infrequent [2]. The location of submandibular stones is important for guiding management, and a variety of diagnostic modalities may be used for submandibular stone localization.
Clinical presentation
Sialolithiasis is primarily a clinical diagnosis based on patient history and physical examination. Patients commonly report a sudden onset of swelling and pain in the affected gland that is associated with eating, referred to as "meal-time syndrome." Acute sialadenitis in the setting of sialolithiasis presents with pain, swelling, and erythema around the gland. Fevers and chills may also be present [15].
On physical exam, palpation of the floor of the mouth in a posterior-to-anterior direction may allow the sialolith to be seen at the opening of Wharton's duct or palpated along its course [15]. The gland itself may be tender to palpation, particularly in the presence of sialadenitis. Additionally, compression of a salivary gland should cause clear saliva to flow from the associated duct; if this does not occur, a stone may be obstructing salivary flow. Finally, purulent discharge at the orifice raises concern for acute bacterial sialadenitis [7].
Review

Assessment and diagnosis
Aside from patient history and palpation of the duct, various imaging techniques are available for the diagnosis of sialolithiasis (Table 2) [21]. Imaging can identify a sialolith, an abscess, or mimickers of sialolithiasis such as neoplasms. Imaging modalities include conventional sialography, computed tomography (CT), ultrasound (US), and magnetic resonance (MR) sialography [8,9].
Conventional Sialography
In conventional sialography, the duct is cannulated and a radiopaque dye is injected before plain films are taken [22]. Although rarely performed today, it is still regarded as one of the best diagnostic techniques for visualizing the detailed anatomy of the salivary ducts, as it can demonstrate the main duct as well as all its branches, from primary to quaternary ones [23]. The sensitivity of conventional sialography in sialolith detection ranges from 64 to 100%, while its specificity ranges from 88 to 100%. With the use of subtraction, the sensitivity of stone detection increases to 96-100% while specificity is as high as 88-91% [24].
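Sensitivity and specificity figures like those quoted throughout this section come directly from confusion-matrix counts; as a reminder of the arithmetic, a minimal sketch (the counts below are hypothetical round numbers for illustration, not data from the cited studies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 96 of 100 glands with stones correctly flagged,
# 88 of 100 stone-free glands correctly read as clear.
sens, spec = sensitivity_specificity(tp=96, fn=4, tn=88, fp=12)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
# prints: sensitivity 96%, specificity 88%
```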
The disadvantages of sialography include exposure to ionizing radiation and iodine contrast media, pain during contrast medium insertion into the salivary ducts, and calculi dislodgement towards the gland.
Additionally, the quality of the resulting image depends on the experience of the operator performing the cannulation and sialography evaluation [23]. The technique is also associated with several complications, including salivary duct perforations, inflammation, adverse reactions to iodine contrast medium, and bleeding [23]. It is also contraindicated in patients with acute sialadenitis or contrast allergy [7]. Due to these drawbacks, sialography has been largely supplanted by newer imaging modalities [23].
Computed Tomography
High-resolution neck CT is one of the most commonly used modalities for the evaluation of salivary stones [25]. Most stones contain enough calcium to be visible with non-contrast CT, and fine cuts should be used so that small stones are not missed. Non-contrast CT has high sensitivity and specificity for salivary stone detection: a retrospective cohort study reported a sensitivity of 98% and a specificity of 88% using sialendoscopy as the reference standard [4]. It is particularly useful in cases where few small calculi are suspected that may be missed with other diagnostic modalities, especially ultrasound (US).
Two main drawbacks of non-contrast CT imaging are that it exposes the patient to ionizing radiation and provides less detail of ductal dilation and other intraductal or glandular pathologies than conventional or MR sialography [23]. Contrast-enhanced CT imaging may be performed in addition to non-contrast CT studies to provide further detail for the evaluation of complicated stone disease. However, this results in doubling radiation exposure for the patient and carries additional risks associated with intravenous contrast use such as anaphylaxis and acute kidney injury [25]. Traditionally, it has been thought that contrast-enhanced CT should not be used as a stand-alone study because of the concern that blood vessels may resemble small sialoliths and lead to false-positive diagnoses. However, a recent study by Purcell et al. reported a 98% accuracy rate for contrast-enhanced CT for the diagnosis and exclusion of salivary calculi, with the conclusion that there may be no difference in the diagnostic accuracy between contrast-enhanced and non-contrast CT studies [25].
Ultrasound
Ultrasound is another frequently used modality for the diagnosis of submandibular sialolithiasis. Sialoliths typically appear as echogenic round or oval structures producing an acoustic shadow on US [23]. Stones may also lead to proximal distension of the duct, which may be seen in US. The detection of fine stones may be helped by sialogogue injection, which causes salivary duct dilatation and thus facilitates sialolith visualization [23].
US is most suited for stones larger than 2 mm, greater than 90% of which can be detected by ultrasound [26]; stones smaller than 2 mm may not produce any acoustic shadow and are therefore difficult to detect [23]. US has been reported to have a sensitivity ranging from 59% to 94% and a specificity ranging from 87% to 100% for the detection of submandibular sialolithiasis [4,23,24,27]. The differential diagnoses that can arise when using US for sialolithiasis include sarcoidosis, Sjögren syndrome, disseminated lymphoma, and hematogenous metastasis [27]. Advantages of US include its noninvasive nature, relatively low cost, and lack of radiation exposure. Disadvantages include the need for an experienced operator and low sensitivity for detecting salivary gland neoplasms or stone-related complications such as strictures [7].
US and non-contrast CT can also be used in combination for the detection of submandibular sialoliths, which allows for the advantages of one modality to compensate for the disadvantages of the other [4]. CT imaging is more sensitive for individual sialoliths and can illustrate multiple stones, whereas US can demonstrate duct dilation when stones are difficult to visualize directly. CT also provides complementary information regarding possible glandular abscess or tumor, while obtaining US provides radiologists a point of comparison should they be called into the operating room to assess for stones [4].
Magnetic Resonance (MR) Sialography
MR sialography is a noninvasive alternative to conventional sialography in that it does not require salivary duct cannulation or ionizing radiation and contrast exposure. Unlike conventional sialography, it may also be carried out during acute inflammation of the salivary gland [23]. Studies of MR sialography suggest that it may have superior sensitivity to US and a lower procedural failure rate than conventional sialography [24,28]. The sensitivity and specificity of MR imaging sialography are reported to be 91% and 94%-97% respectively [29].
The main advantages of MR sialography are the precise evaluation of the salivary duct including proximal branches, and the detection of very small stones which may not be found with other diagnostic techniques [30]. Additionally, it does not require an experienced operator and permits concomitant evaluation of the salivary glandular parenchyma [31]. The primary drawback of MR sialography is that the diagnosis of sialolithiasis is almost entirely indirect, relying on findings such as ductal obstruction with signal loss and prestenotic dilation. Therefore, some small stones that do not cause full occlusion, such as those found near duct openings or in intraglandular ducts, may remain undetected. Dental amalgams may also lead to distortion artifacts with this modality [28,30].
Primary Care Management
Conservative management is the mainstay of treatment in the majority of patients presenting to a primary care clinician for sialolithiasis [32]. Patients should be instructed to maintain hydration, apply heat to the involved area, use nonsteroidal anti-inflammatory drugs to reduce pain and inflammation, and massage the gland to promote ductal outflow. Nonpharmacologic agents that promote salivary flow, such as lemon wedges and tart candies, may be helpful. After the resolution of the episode, risk factors for sialolithiasis should be identified and modified to prevent future episodes [7].
If sialadenitis is suspected because of increasing pain, fever, or purulent drainage, anti-staphylococcal antibiotics should be administered [15,33]. If there is not an improvement in symptoms within a week of treatment, a culture of any ductal discharge should be obtained and the antibiotic coverage should be broadened until culture results are available [15,34]. US or CT imaging with contrast may be performed if there are signs suggestive of an abscess, such as fluctuance, erythema, and warmth [7].
Interventional Management
Patients who have symptoms of obstruction lasting more than a few days should be considered for operative management, as should those with recurrent episodes of sialolithiasis due to risk for chronic sialadenitis and loss of glandular function [7]. Patients with sialadenitis that worsens or shows no improvement with antibiotics also require operative evaluation as they are at risk for the development of salivary gland abscess and spread of the infection to the floor of the mouth, potentially leading to airway compromise [35].
The objective of interventions for submandibular stone extraction is generally to save the gland. The classic algorithm, first reported by Marchal et al. in 2003, is still commonly used; it recommends sialendoscopy with wire basket extraction for stones smaller than 4 mm, and laser lithotripsy with sialendoscopic extraction for stones greater than 4 mm.
With the advent of new surgical approaches, refinement of existing interventions leading to decreased associated morbidities, and additional studies reporting on the efficacy of the various treatment options, an updated treatment algorithm is necessary for the management of submandibular sialolithiasis (Figure 1). We recommend that stones up to 5 mm can be removed with sialendoscopy using simple basket extraction. Stones between 5 and 7 mm should be removed either with combined sialendoscopy and transoral surgery or using laser lithotripsy with sialendoscopy. Stones larger than 7 mm and those near the hilum or partially inside the gland are amenable to combined approaches. Transoral robotic surgery can also be used to facilitate combined approaches. The following sections discuss these treatment modalities in detail.
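The size-based logic described above can be sketched as a small decision function. This is purely an illustration of the updated algorithm (the thresholds and labels paraphrase the text and Figure 1); it is not clinical guidance:

```python
def suggested_management(stone_mm, hilar_or_intraparenchymal=False):
    """Sketch of the updated submandibular sialolithiasis algorithm.

    Mirrors the text: <=5 mm -> sialendoscopic basket extraction;
    5-7 mm -> laser lithotripsy with sialendoscopy or a combined approach;
    >7 mm, hilar, or partially intraglandular -> combined sialendoscopy
    and transoral surgery (robotic assistance optional).
    """
    if hilar_or_intraparenchymal or stone_mm > 7:
        return "combined sialendoscopy and transoral surgery"
    if stone_mm <= 5:
        return "sialendoscopy with basket extraction"
    return "laser lithotripsy with sialendoscopy or combined approach"

print(suggested_management(4))  # sialendoscopy with basket extraction
print(suggested_management(6))  # laser lithotripsy with sialendoscopy or combined approach
print(suggested_management(9))  # combined sialendoscopy and transoral surgery
```

Note that stone location overrides size here, reflecting the text's point that hilar and intraparenchymal stones are best handled with combined approaches regardless of diameter.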
Sialendoscopy
Sialendoscopy is a minimally invasive technique for submandibular stone visualization and removal that has the potential to avoid nerve injury, facial scarring, and oral injury associated with traditional open surgery ( Table 3) [36]. The sialendoscope combines a delicate semi-rigid fiberoptic endoscope, an irrigation port, and a working channel in a single instrument. The endoscope broadcasts high-definition images to a monitor. Irrigation is used to dilate the ducts, permitting exploration of the branches of the salivary system. The working channel is a conduit for instruments, including custom-designed baskets, graspers, and guidewires that can be used to remove salivary stones [9]. The advent of sialendoscopy has significantly reduced the number of salivary glands removed due to sialolithiasis [2,37,38]. Submandibular sialoliths of up to 5 mm in diameter can be successfully removed through sialendoscopy alone, and the technique is especially useful for mobile stones lying freely in the duct lumen. When used for these indications, submandibular stones can be extracted under endoscopic control without additional interventions or fragmentation in greater than 80% of cases [37,[39][40][41][42].
Sialoliths may be mechanically removed by a basket, mini forceps, grasper, or balloon [43,44]. Several factors are involved in choosing an instrument, including mobility, connection to the ductal wall, size, and ability to bypass stones [43]. For freely floating stones, endoscopic removal is most commonly performed with the use of a basket. Balloons also are suitable tools for the removal of small mobile sialoliths. In cases in which the sialolith cannot be bypassed, mini forceps or a grasper can be used to remove the stone [45].
While considered a safe procedure, complications have been reported following sialendoscopy. The overall complication rate has been reported to be around 3%, which includes ductal strictures, perforations, ranula formation, lingual and facial nerve injuries, infection, and bleeding [43,46]. Many of these complications, as well as treatment failure, can be avoided by selecting patients most amenable to sialendoscopy alone, which are primarily those with stones <5 mm in size.
Sialendoscopy With Laser Lithotripsy
Submandibular stones between 5 and 7 mm in size may be fragmented in the duct lumen using endoscopically guided laser lithotripsy before manual extraction [47]. Holmium:YAG (yttrium-aluminum-garnet) laser-assisted lithotripsy is the most common variation of this technique utilized for salivary gland stones and has been shown to be an effective, safe, and relatively simple method for treating larger submandibular sialoliths [48]. Results from recent studies show that the rate of successful stone extraction for carefully selected patients undergoing sialendoscopy and laser lithotripsy ranges from 81% to 100% [49][50][51][52].
One of the major advantages of this method is the direct visualization of the stone as well as assessment of the ductal system before, during, and after the intervention, as compared to extracorporeal shockwave lithotripsy (ESWL) [48]. By using laser lithotripsy with sialendoscopy, larger stones that otherwise would not be amenable to sialendoscopy alone may be removed. This also obviates the need for more invasive surgical management. However, there are still risks of perforation, stricture, and thermal injuries to nerves and vessels with this technique, which may occur at rates as high as 13% [53]. With continuous cold saline rinsing and avoidance of exposing the ductal walls to the laser, these risks can be minimized [37]. Stents may also be placed after laser utilization to prevent the formation of ductal stenosis [40].
Extracorporeal Shockwave Lithotripsy (ESWL)
Another option for the fragmentation of large sialoliths is to perform ESWL. For this technique, US imaging is used to focus an electromagnetic or piezoelectric shock wave on a submandibular sialolith to fragment the stone. US is also used to continuously monitor the degree of stone fragmentation during each therapeutic session and to avoid lesions to the surrounding tissues [47]. ESWL permits fragmentation of stones of any size and location, which are then excreted either spontaneously by salivary flow through Wharton's duct or manually with sialendoscopy.
Since ESWL is performed as an in-office procedure, it offers several advantages over other interventions in that it is easy to perform, repeatable, and well-tolerated. Most notably, if stone fragments can pass spontaneously, the greatest benefit of ESWL is the avoidance of anesthesia in the operating room. The main drawback of ESWL is that stones often cannot be completely cleared by salivary flow and residual fragments can cause recurrences. For this reason, sialendoscopy is often performed following ESWL treatment, although this combination precludes the advantages of using ESWL as an in-office procedure [47].
The effectiveness of ESWL for complete stone clearance ranges from 26-81% [47,[54][55][56]. According to a large study of over 400 patients by Capaccio et al., complete clearance of residual stone fragments was achieved in 28% of cases with a distal location and 49% of cases with a hilo-parenchymal stone location [47,54]. In general, the success rate for ESWL drops with an increasing stone diameter and perihilar or intraparenchymal submandibular gland stones of < 7 mm are the best candidates for ESWL in countries where it is approved [37,47]. Currently, the technique is not approved by the Food and Drug Administration for use in the United States.
Combined Approach of Open Transoral Surgery With Sialendoscopy
Larger (≥8 mm) submandibular stones as well as those that are difficult to access with sialendoscopy alone, such as intraparenchymal and perihilar stones, can be removed using a combined approach pairing sialendoscopy and transoral stone removal ( Figure 2) [57,58]. First described by Francis Marchal in 2007, this technique involves the use of the sialendoscope to localize the sialolith before performing transoral surgery for stone extraction [42]. Sialendoscopy can again be performed after stone removal to check for additional sialoliths and to remove stone remnants. Successful stone removal rates of 69%-100% have been reported using the combined approach, with a recent large study of the combined approach for hilar or parenchymal submandibular stones by Capaccio et al. reporting a stone removal rate of 98.5% [59]. Furthermore, submandibular gland preservation rates as high as 95% using the combined approach have been published [18,42,[58][59][60]. While large submandibular sialolith removal can be attempted with transoral surgery alone, this can be challenging when stones are in a hilo-parenchymal location, making stone localization difficult. Additionally, limited exposure to the floor of the mouth due to reduced mouth opening, large teeth, or obesity will also make stone removal challenging without sialendoscopy. This further complicates the identification and preservation of the lingual nerve and the placement of sutures in the salivary duct for repair after sialolithotomy [57,58].
Therefore, the combined approach technique offers several advantages over transoral stone removal alone. First, stones can be visualized directly with the sialendoscope and do not rely solely on bimanual palpation. Second, fixation of the stone with the basket and manipulation to a better location within the duct can facilitate precise surgical removal. Additionally, the ability to inspect for stone fragments after sialolith extraction helps detect incomplete treatment and prevent sialolith recurrence. Finally, the ability to irrigate the site of duct repair to check for leaks is another advantage of combining sialendoscopy with transoral removal [58]. As a result of these additional benefits, the combined approach is now the standard of care for larger submandibular stones ≥8 mm in size.
Sialadenectomy
Only 2% to 5% of patients with submandibular sialolithiasis require submandibular gland excision [6,61]. Today, sialadenectomy for submandibular sialolithiasis is primarily reserved for cases where combined or minimally invasive approaches are unsuccessful. It may also be utilized in patients with recurrent stones or for patients who cannot tolerate a second procedure. On removal of the gland and duct, no further obstructive symptoms will occur, resulting in a definitive cure for unilateral sialolithiasis.
The transcervical approach to sialadenectomy is the most common, as it provides direct exposure of the gland and can be performed relatively quickly. However, complications such as scarring, nerve injury, and hematoma may occur [62,63]. To minimize the morbidities associated with the conventional transcervical approach, other techniques for sialadenectomy may be used. Gland resection via an intraoral approach minimizes visible scarring but carries risks of ranula formation, salivary fistula, lingual nerve injury, and scar contracture limiting tongue movement [64,65]. Submental approaches to gland excision have also been described, with possibly improved cosmetic results compared with the transcervical approach [66]. Finally, endoscope-assisted submandibular sialadenectomy through the transoral approach is another option, which further minimizes incision length, scarring, blood loss, and risk of nerve injury [63].
Sialadenectomy should be avoided whenever possible for several reasons. As the gland's function is completely lost after this procedure, patients' quality of life may be significantly impacted. While young patients may compensate for this loss with secretions from the other salivary glands, the function of these glands may already be limited in older patients; in this population, sialadenectomy can therefore lead to xerostomia and significant functional impairments in eating and swallowing [62]. Gland excision is also a more difficult procedure than other minimally invasive techniques and thus carries a greater risk of injury to the lingual, marginal mandibular, and hypoglossal nerves [7]. Visible scarring is a further concern, especially when sialadenectomy is performed through the common transcervical approach [63].
Robotic Surgery for Submandibular Sialolithiasis
Transoral robotic surgery (TORS) using the da Vinci surgical system has been applied to various diseases of the head and neck, including resection of oncologic disease of the oropharynx, hypopharynx, larynx, and parapharyngeal spaces, as well as to salivary gland disorders, including removal of floor-of-mouth ranulas, tumors of the submandibular gland, and salivary gland stones [57,67-70]. Robotic-assisted procedures applied specifically to submandibular stone management include the combined approach of TORS and sialendoscopy, as well as robotic sialadenectomy [57]. TORS is an appealing alternative to open approaches for salivary gland disease and may offer better surgical access, less scarring with improved cosmesis, diminished blood loss, shorter hospital stays, and lower overall morbidity [57,71]. In a study by Tampino et al. of the combined TORS and sialendoscopy approach, the success rate was 94% in 33 patients, with 15.1% of patients experiencing transient tongue paresthesia [72].
The primary advantages of TORS over open approaches for submandibular stone management are that it overcomes the challenges of the reduced operative field between the tongue and the mandible, and helps prevent injury to the delicate structures in the floor of the mouth [57,70]. The magnification and dexterity provided by the robotic surgical system allow precise dissection and preservation of the lingual nerve, hypoglossal nerve, and Wharton's duct. According to Cammaroto et al., the TORS approach is recommended due to its improved hemostatic control of the facial artery, which can be difficult to manage in an intraoral open approach [73].
Additionally, compared to traditional transoral surgery, TORS offers an improved and direct view of the floor of the mouth, and thus allows the entire treatment team to collaborate and participate in the surgery [57]. The ability to perform a 4-handed surgery is an added benefit that facilitates working with multiple instruments in the small field of the oral cavity [68].
Robotic-assisted sialolithotomy is often used in combination with sialendoscopy, utilizing the da Vinci robot system for the surgical portion of the procedure [57,70]. Transoral robotic sialolithotomy has the advantage of removing larger stones intact, since extracorporeal and laser lithotripsy may increase the risk of injury to surrounding soft tissues while also adding substantially to operative time if fragments become embedded in the ducts [74-76]. If the stone cannot be removed, the TORS approach also facilitates subsequent sialadenectomy. Removal of large submandibular stones through a combined approach with TORS and sialendoscopy has been performed successfully at multiple institutions, though a 2% rate of lingual nerve damage has been reported with the combined approach, compared with 0% for the TORS-only approach [57,70,72,76].
Robotic-assisted sialadenectomy is another minimally invasive surgery with distinct benefits over open alternatives. Lee et al. described a postauricular approach for robotic sialadenectomy and reported satisfactory cosmetic results, decreased operative times compared with endoscopic gland resection, and no postoperative complications [74,75,77]. A similar study by Singh et al. found that, although robotic sialadenectomy had longer operative times and more drainage than traditional transcervical approaches, cosmetic outcomes were significantly improved [74]. Finally, DeVirgilio et al. published a case series of patients undergoing robotic sialadenectomy with a modified postauricular facelift approach and reported less scarring and improved cosmetic outcomes compared with endoscopic gland resection, with no complications [75].
Conclusions
The management of submandibular sialolithiasis has undergone enormous changes over the last several decades. With the introduction of minimally invasive techniques for stone management, including sialendoscopy and robotic-assisted surgery, even larger sialoliths can now be removed with minimal surgical morbidity and high gland preservation rates of greater than 95%. In general, stones up to 5 mm can be removed with sialendoscopy alone, stones between 5 and 7 mm are more amenable to sialendoscopy with laser lithotripsy, and stones larger than 7 mm, as well as perihilar or intraparenchymal stones, can be treated with sialendoscopy-assisted transoral surgery. Robotic surgery is also becoming increasingly popular for salivary stone management and may facilitate both sialolithotomy and sialadenectomy. If treatment is performed according to treatment algorithms incorporating these techniques, most patients can be successfully cured of sialolithiasis with minimal morbidity, complications, or recurrence.
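The size and location thresholds in the algorithm above can be restated as a small decision sketch. This is illustrative only: the function name, its signature, and its return strings are our own, and the thresholds come from this review rather than from any validated clinical decision tool.

```python
def suggest_technique(stone_mm: float, perihilar_or_parenchymal: bool = False) -> str:
    """Map stone size/location to the technique suggested by the review's thresholds.

    Thresholds (from the text): <=5 mm -> sialendoscopy alone; 5-7 mm ->
    sialendoscopy with laser lithotripsy; >7 mm, or any perihilar /
    intraparenchymal stone -> sialendoscopy-assisted transoral surgery.
    """
    if perihilar_or_parenchymal or stone_mm > 7:
        return "sialendoscopy-assisted transoral surgery (combined approach)"
    if stone_mm <= 5:
        return "sialendoscopy alone"
    return "sialendoscopy with laser lithotripsy"
```

Under these assumptions, a 6 mm ductal stone maps to sialendoscopy with laser lithotripsy, while any perihilar or intraparenchymal stone maps to the combined approach regardless of size.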
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
"year": 2022,
"sha1": "cbbfe1e7d11daef90d5b9c7e3bd7976adb63ebb2",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/108084-contemporary-review-of-submandibular-gland-sialolithiasis-and-surgical-management-options.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d0735877793294bbe821ccf1ba7379ac6be50e1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
High-intensity laser application in Orthodontics
ABSTRACT Introduction: In dental practice, low-level laser therapy (LLLT) and high-intensity laser therapy (HILT) are mainly used for dental surgery and biostimulation therapy. Within the orthodontic specialty, LLLT has been widely used to treat pain associated with orthodontic movement, accelerate bone regeneration after rapid maxillary expansion, and enhance orthodontic tooth movement, while HILT has been seen as an alternative for addressing soft tissue complications associated with orthodontic treatment. Objective: The aim of this study is to discuss HILT applications in orthodontic treatment. Methods: This study describes the use of HILT in surgical treatments such as gingivectomy, ulotomy, ulectomy, fiberotomy, and labial and lingual frenectomies, as well as applications to hard tissue and other dental restorative materials. Conclusion: Despite the many applications for lasers in Orthodontics, they are still underused by Brazilian practitioners. However, it is quite likely that this demand will increase over the next years, following the trend in the USA, where laser therapies are more widely used.
INTRODUCTION
The term LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. The first working laser was built in the 1960s, although the theoretical framework had been laid in the early 20th century. Since then, lasers have been widely used in routine applications such as laser pointers, barcode readers, CD/DVD/Blu-ray players, scanners, firearm sights, fiber-optic communication, and visual aids and graphic design for the cinema industry, and ultimately in health care, in the medical, physical therapy, and dental fields.
In Dentistry, lasers have two major applications: biostimulation and surgery. Lasers applied for biostimulation procedures, that is, for the activation of regenerative and healing processes, are the so-called low-level laser therapy (LLLT) devices and operate under 500 mW. For this purpose, diode and helium-neon (HeNe) lasers stand out, classified by their active medium. Lasers that work beyond the 500 mW range are applied in high-intensity laser therapies (HILT), also called surgical lasers, given their tissue-cutting capacity. For such uses, the CO2, Nd:YAG, erbium (Er:YAG and Er,Cr:YSGG), and diode lasers are the main examples.
In Orthodontics, LLLT has been applied to relieve pain associated with orthodontic movement, 1-3 accelerate bone regeneration after rapid maxillary expansion, 4-6 and enhance orthodontic tooth movement. 7-12 Although there are several protocols for using LLLT, its effectiveness depends on the frequency of applications in between activation appointments, which makes it less attractive as a routine procedure.
HILT, on the other hand, has become increasingly popular among orthodontists, especially in the USA. It is used to quickly and effectively address soft tissue complications associated with orthodontic treatment 13 through bloodless and atraumatic surgical interventions. The benefits of using HILT for soft tissue oral surgery include better hemostasis, decreased postoperative pain and infection rate, minimal tissue contraction, little or no need for sutures, shorter surgical stages, and decreased trauma, edema, and scarring, 14-20 besides the reduced need for local anesthetics.
Among the high-intensity laser therapies used in orthodontics, diode lasers play a special role given their superficial cutting ability: their shallow penetration makes procedures safer and reduces the risk of pulp damage. Besides, HILT devices tend to be portable and cost-effective. 21 Their wavelength ranges between 810 and 1,064 nm 21 and is absorbed by pigmented tissues that contain hemoglobin, melanin, and collagen. Provided recommended protocols are observed, they have no impact on dental or bone tissues, as they have greater affinity for soft tissues. 22 Depending on the amount of energy emitted, the laser/tissue interaction may lead to coagulation, denaturation of proteins, vaporization, and carbonization in the affected areas. This process seals blood vessels promoting hemostasis, inhibits the pain receptors in the incision area, lowers the risk of infection, and enhances healing. 22 This study will explore HILT indications for addressing soft tissue problems associated with orthodontic treatment, as well as other applications to hard tissue and dental materials.
HILT INDICATIONS IN ORTHODONTICS
The main indication for HILT in orthodontics is in soft tissue surgeries such as gingivectomy, ulotomy, ulectomy, frenectomies, and fiberotomy. Before delving into each of them, some shared features of surgical procedures shall be presented.
Since diode lasers deliver light to tissues through optical fiber cables or disposable optical fiber tips, the first step is to prepare this part of the instrument. For laser devices that use optical fiber cables, the clinician should properly remove the external coating to expose the internal glass fiber before use. Before every patient session, 2 to 3 mm of the fiber should be cut off to prevent cross-contamination. The tip should then be "activated" by applying some pigment to it, to concentrate energy on that area; articulating paper can be used for this purpose (Fig 1). Disposable optical fiber tips do not need to be cut, but the activation step is still required. 13

Next, the laser intensity settings must be adjusted. It is advisable to use the lowest power suitable for a given procedure, in order to avoid thermal damage to surrounding tissues. 13 Most soft tissue procedures can be performed at 1 to 1.2 W. In areas of higher-density soft tissue, such as the palate or distal to the lower molars, settings as high as 1.4 W may be required, while frenectomies usually call for settings of around 1.6 W. 13 Once the power is set, the energy delivery mode should be selected: continuous or pulsed waves. When working with a diode laser, continuous waves should be used for most ablation procedures. 13

The next step is anesthesia. In some cases, anesthetic gel is enough. 13 After drying the soft tissue area, the topical anesthetic gel is rubbed over it with a cotton swab for 3 to 4 minutes. For higher-density tissue areas, such as the palate, distal to erupting molars, and at the frenulum, gel alone may not suffice and injected local anesthesia is required. 13 Practitioners should assess both the anesthetic effect and the patient's sensitivity to decide whether more local anesthesia is needed.
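The power settings quoted above (ref. 13) can be collected into a small lookup sketch. The key names and structure here are our own, introduced only for illustration; actual settings must always follow the device manual and the operator's training, not this sketch.

```python
# Illustrative summary of the diode-laser power settings cited in the text (ref. 13).
# (low_w, high_w) ranges; continuous-wave mode is used for most diode ablation.
DIODE_POWER_W = {
    "soft_tissue_general": (1.0, 1.2),  # most soft tissue procedures
    "dense_soft_tissue":   (1.0, 1.4),  # palate, distal to lower molars
    "frenectomy":          (1.6, 1.6),
}

def starting_power(procedure: str) -> float:
    """Start at the lowest cited power; raise it only if the procedure requires."""
    low, _high = DIODE_POWER_W[procedure]
    return low
```

The "start low" convention in `starting_power` mirrors the text's advice to use as low a power as convenient for a given procedure.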
The surgical procedure per se starts when the surgeon moves the end of the laser tip in a slow and careful "paint brushing" motion, making contact with the target tissue. Special attention should be taken to avoid keeping the laser tip over a specific area for too long, so as not to carbonize or unnecessarily damage the tissue. For ablation procedures, the use of the aspirator is paramount to remove vapor fumes, which may contain bacteria and unpleasant odors. It is also recommended to use a wet gauze 13 to wipe off the tissue build-up that eventually accumulates at the tip of the instrument.
While no specific postoperative care is required, patients should still be advised to keep the treated area clean and free of biofilm. They should also avoid extremely hot and cold foods that may cause pain or other complications. Ordinary pain killers can be prescribed after more complex surgical procedures. 13
Gingivectomy
Patients undergoing orthodontic treatment with fixed appliances may develop gingival hyperplasia, mostly caused by chronic inflammatory processes. 23-25 In such cases, gingival hyperplasia can be treated with gingivectomy together with improved mechanical plaque control, as well as better overall oral hygiene by the patient. 23-25 Gingivectomies can also be of great help for bonding braces in patients with short clinical crowns, as well as for improving aesthetics after orthodontic treatment (Fig 2).
Regardless of whether the gingivectomy is to be performed with a scalpel or HILT, periodontal surgery principles and the concept of biological width should be observed at all times. Biological width is around 3 mm, comprising 1 mm of junctional epithelium, 1 mm of supracrestal attached connective tissue, and 1 mm of gingival sulcus. If this biological width is invaded by excessive removal of gum tissue during gingivectomy, the tissue may grow back, because the biological width tends to return to its original dimensions. Chronic inflammation and unpredictable bone loss are commonly seen when the limits of the restoration exceed the biological width after surgery. With that in mind, cases that require significant tissue removal deliver better results when treated with crown lengthening procedures. The gingivectomy involves anesthesia and periodontal probing, in order to assess the tissue that can be safely removed around the teeth. This probing can be used as a reference for the amount of tissue to be removed.

The laser power should initially be set at 1 to 1.2 W. 13 The optical fiber should be positioned towards the gingival tissue, perpendicular to the clinical crown, and the removal of redundant tissue should start from the proximal aspect. A saline-embedded gauze should be used to clean the area and remove remnants of gingival tissue. After applying the laser, the gingival margin should be assessed and refined with chisels or a 15C scalpel, if needed.

Laser-aided gingivectomy procedures do not require the use of surgical dressing, and the prescription of pain killers depends on the patient's tolerance. Another important aspect to be assessed is post-surgical tissue healing, since contraction of the margins may yield poor functional and aesthetic results after scar tissue is formed. A previous study found that the use of CO2 laser to treat gingival hyperplasia during orthodontic treatment led to increased clinical crowns only during the immediate postoperative period. However, between 30 and 60 days after surgery there were no significant changes to the gingival sulcus depth, probably because the gingival margins relapsed into physiological conditions. 25

Ulotomies and ulectomies

The surgical exposure of fully impacted or partially erupted teeth is performed to allow the bonding of orthodontic devices to the tooth surface, and is another application of HILT in Orthodontics (Figs 3, 4, and 5).

An attentive assessment of the soft tissue to be excised must be carried out in advance. Ulotomies and ulectomies have good results when the tissue is keratinized. The surgical exposure of impacted teeth through non-keratinized or unattached gingiva may result in loss of attached gingiva when those teeth are brought into the arch line by orthodontic forces. 13 The laser intensity should be set to 1 to 1.2 W. When applied to areas with higher soft tissue density, such as the palate or distal to the lower molars, a power setting as high as 1.4 W 13 may be required.
Labial frenectomy
The labial frenum is a membrane that stretches along the midline from the internal surface of the lip to the alveolar mucosa. Among other functions, it limits lip movement, stabilizes the midline, and prevents unnecessary exposure of both the alveolar mucosa and the gingiva. In newborns, the upper labial frenum extends all the way to the incisive papilla and takes part in the suckling function. As teeth start to develop, the frenum undergoes gradual atrophy and attaches to the alveolar mucosa. However, in some cases the upper labial frenum remains attached to the incisive papilla (low frenum attachment or frenum hypertrophy). A low frenum may cause interincisal diastemas. The diagnosis of this condition can be made by observing whether stretching the lip leads to an ischemic incisive papilla. A differential diagnosis will rule out mesiodens (X-ray), habits, absence of lateral incisors, or hereditary factors. 26 Although less frequently, a hypertrophic lower labial frenum may also cause diastemas, 27 but it more commonly leads to another problem: gingival recession, which may occur when the frenum is attached too close to the marginal gingiva. 13 The correlation between the upper lip frenum and interincisal diastema caused upper labial frenectomies to be performed on a regular basis as a preventive procedure until the mid-1940s. 28 Not long after that, clinicians started to realize that the attachment tends to migrate gradually from the palatal to the buccal aspect throughout life as a consequence of alveolar growth and the eruption of the incisors. Therefore, the current recommendation is to wait until the permanent canines erupt before proceeding to a frenectomy.
Regarding the HILT surgical technique, after the anesthesia and laser preparation stages, the lip should be stretched to allow an anatomical assessment of the frenum. Laser irradiation starts from the central part of the frenum towards the sulcus until the redundant frenum tissue is removed (Fig 6); the fibers are removed with no bleeding. The recommended laser setting is 1.6 W. 13 When frenum fibers are deeply inserted, detaching periodontal instruments may be used to excise them. Laser therapy should not be used adjacent to bone tissue, given the risk of thermal damage and tissue necrosis. No suturing is required, since healing occurs by secondary intention.
Lingual frenectomy
Ankyloglossia is an abnormal development of the tongue, characterized by a short and tight lingual frenum that limits tongue movements. It may lead to swallowing and speech disabilities and malocclusion, as well as potential periodontal problems. 13 A visual diagnosis can be made by asking the patient to raise the tongue to the palate, revealing the typical heart-shaped tongue with limited movement (Fig 7A). Lingual frenum ablation can be performed in patients of any age, though early surgery is recommended.
In Brazil, the tongue-tie test law (signed on June 20th, 2014, as Federal Law number 13.002) makes lingual frenum testing compulsory for all newborns. A qualified health-care professional (i.e., a dentist or speech therapist) must perform the assessment protocol for this test. The clinician should elevate the baby's tongue to see whether it is tied, and observe the baby's suckling and crying behaviors. The tongue-tie test should be performed in the maternity ward within the baby's first month, for an early diagnosis. This helps prevent breast-feeding problems that may lead to weight loss and unnecessary bottle-feeding. When an abnormally hypertrophic frenum is diagnosed, surgical removal is recommended. 29 Since the frenum tissue is rather fibrous, local anesthesia should be injected at the base of the frenum. Laser power is adjusted to the lowest intensity required for the surgery. While some authors recommend 1.6 W 13 as the ideal setting, in the case described here the laser was set to 1.5 W (Fig 7B). After activating the tip with articulating paper, the laser is brushed horizontally to detach the frenum, which should be kept stretched for better results (Fig 7C, D, E). Immediately after surgery, a broader range of tongue movement can be perceived (Fig 7F). A week after the procedure, a better-shaped tongue tip and an improved range of motion can be observed, as well as the presence of granulation tissue resulting from secondary intention healing (Fig 7G).
Fiberotomy
The tendency of rotated teeth to relapse is one of the main challenges faced after orthodontic treatment. The slow turnover rate of supracrestal fibers is likely responsible for this relapse, which occurs in 48% of cases among patients wearing retainers for as long as 10 years and is often proportional to the severity of the initial rotation. Apart from the use of orthodontic retainers, only a few additional strategies have been proposed to minimize relapse, the most widely discussed of which is circumferential supracrestal fiberotomy. 30 A scalpel is used in the traditional approach, but HILT can be used as an alternative with equally satisfactory results. 30 Erbium-mediated lasers (i.e., Er:YAG or Er,Cr:YSGG) are the most recommended devices for fiberotomy, as they enable fiber ligaments to reestablish a tissue pattern without inducing superficial necrosis. Although the diode laser is another alternative for this procedure, it may cause tissue sloughing, which hinders tissue recovery and thus extends the healing period. So far, we have only found animal studies using the diode laser for fiberotomy during the retention period after orthodontic treatment. 31 Despite the promising results, 31 further research is required before this procedure can be routinely adopted.
Removal of ceramic brackets
HILT has been used to remove ceramic brackets in several studies, with various laser types: Er:YAG, 32 Nd:YAG, 33 CO2, 34 and diode lasers. 35 However, questions concerning damage to the dental pulp (due to overheating) as well as to the enamel surface still remain. Further research and more in vivo studies are required to demonstrate that HILT can be safely used for this purpose.
Enamel etching
The acid-etching concept was proposed by Buonocore 36 in 1955, and the most common approach consists in applying 37% phosphoric acid to etch the enamel prior to the bonding step. In Orthodontics, Er:YAG lasers may alternatively be used to promote appropriate bond strength for brackets, with the advantages of avoiding tissue reaction against the acid and gaining better control over the area to be etched. This naturally prevents demineralization from occurring in surfaces larger than the bracket area itself. 37
Prevention of white spots
White spot lesions form around braces in 50% of treatments. Besides recommending the thorough hygiene required to prevent such lesions, orthodontists can also use fluoride-releasing materials such as Fuji Ortho LC (GC America Inc., Chicago, IL, USA). HILT can also be used as an alternative: according to in vitro studies, the CO2 laser can change the enamel surface around brackets by reducing carbonate and phosphate contents, which also serves as a caries-prevention strategy in these areas. 38,39
Recycling of brackets
Recycling debonded brackets is a cost-effective option that can be performed through different methods in dental offices. Aluminum oxide blasting is a very common technique for cleaning brackets. However, the Er:YAG laser is an effective means of removing the adhesive from the bracket pad, causing minor impact to the material and providing brackets with close-to-new bonding strength.
DISCUSSION
Although most lasers used in dental practice are relatively user-friendly, precautions should be taken to secure safe and effective operation. First, everyone subject to laser exposure should wear safety glasses (Fig 8): that includes dental professionals, assistants, patients, and any other people in the room (the patient's family or friends, for example). The safety glasses should be chosen specifically according to the laser wavelength. Although most lasers emit wavelengths outside the visible part of the spectrum, their irradiation must not be neglected and caution should be taken. Besides the use of glasses, accidental exposure to laser beams can be avoided by signaling risk areas with warning signs, limiting access to risk areas, minimizing reflective surfaces, and keeping the equipment in good operating condition.
Despite the many advantages of HILT described in this study, some laser features should be taken into account before choosing it as the treatment modality. One of the main factors to consider is that, although laser therapy favors healing quality, the healing period tends to last longer than after conventional dental procedures: 2 to 3 weeks, compared with the usual 7 to 10 days. 40 The use of laser therapy in dental practice is acknowledged in Brazil, with specific training programs available in the market. While any practitioner may operate laser devices, it is recommended that dentists seek specific training, since undergraduate programs seldom include more than a few hours of laser device practice, if any at all.
In Brazil, the price of LLLT equipment ranges between R$ 3,000 and R$ 8,000. HILT equipment tends to be more expensive: a diode laser device is currently priced at around R$ 35,000. Considering the investment dental professionals make in training courses, as well as in purchasing and maintaining equipment, it is recommended that patients be charged specific fees for laser therapy and other laser-aided procedures, even if they are regular orthodontic patients.
CONCLUSION
Despite the many possible indications for lasers in Orthodontics, they are still underused by Brazilian professionals. However, it is quite likely that this demand will increase over the next years, following the trend in the USA, where laser therapies are more widely used.
"year": 2017,
"sha1": "8fe879d60fd75d09d1968511c783d69d484bb8f5",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/dpjo/v22n6/2176-9451-dpjo-22-06-00099.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8fe879d60fd75d09d1968511c783d69d484bb8f5",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Osteoprotegerin prompts cardiomyocyte hypertrophy via FAK/BECLIN1 mediated autophagy inhibition
Aim It has been reported that osteoprotegerin (OPG) induces cardiomyocyte hypertrophy, but the mechanism remains unclear. This study investigated the role of the Focal Adhesion Kinase (FAK) pathway in OPG-induced hypertrophy in cultured cardiomyocytes. Methods H9C2 rat cardiomyocytes were treated with OPG at different concentrations and cellular hypertrophy was evaluated. Meanwhile, the activity of FAK and the phosphorylation of other kinases were detected. An autophagy flux assay was performed in the absence and presence of OPG. The interaction between proteins was analyzed using a co-immunoprecipitation assay. Results We found that OPG induced a cardiomyocyte hypertrophic response, accompanied by dramatic increases in a series of inflammatory factors and cytokines, as well as collagen synthesis. OPG also inhibits autophagy and induces FAK phosphorylation. FAK silencing using siRNA abrogates the effect of OPG on autophagy and cellular hypertrophy. Furthermore, the co-immunoprecipitation assay reveals that OPG inhibits autophagy by enhancing the binding of FAK to Beclin1 Tyr 233. Conclusion The FAK/Beclin1 signaling pathway is essential for OPG-induced autophagy inhibition and the hypertrophic response in cultured H9C2 cells.
Left ventricular hypertrophy (LVH) is an independent risk factor for cardiovascular morbidity and mortality. To date, the mechanism for LVH has not been well understood.
Osteoprotegerin (OPG) is a member of the tumor necrosis factor receptor superfamily of cytokines and a soluble receptor for the receptor activator for nuclear factor-κB ligand (RANKL) [2,3]. It has been shown that OPG stimulates cardiomyocyte hypertrophy, indicated by increased cell surface size, protein synthesis per cell, and a series of hypertrophy-related markers [18]. Consistent with this in vitro study, a clinical study showed that OPG in the coronary circulation is associated with concentric LVH [8]. However, the mechanism by which OPG regulates cardiomyocyte hypertrophy warrants further study.
It is well established that the adhesion and accumulation of extracellular matrix (ECM) is necessary for LVH development [17]. The accumulation of ECM is mediated mainly by integrin/Focal adhesion kinase (FAK) signaling, which serves as a mechanotransducer during normal heart development and in response to physiological and pathophysiological signals such as high blood pressure [20]. As a primary integrin effector, FAK is rapidly activated by mechanical stimuli in cultured neonatal rat ventricular myocytes and in the overloaded myocardium of adult animals [15]. In endothelial cells, OPG activates FAK/steroid receptor coactivator (Src)/extracellular signal-regulated kinase (ERK) signaling, and FAK phosphorylation is associated with the morphological changes [12]. A recent study further confirmed that FAK activation suppresses autophagy and initiates hypertrophic growth in cultured cardiomyocytes [5]. Based on these findings, we hypothesized that FAK mediates the OPG-induced cellular hypertrophic process. In recent years, cardiomyocyte autophagy has been considered to play a role in controlling the hypertrophic response, but the conclusions are controversial. We thus investigated the putative role of autophagy in OPG-induced cellular hypertrophy in cultured cardiomyocytes.
Cell culture
Rat cardiomyocyte H9C2 cells were maintained in DMEM containing 10% heat-inactivated foetal bovine serum (Life Technologies, Carlsbad, CA, USA) in an incubator in an atmosphere of 5% CO2, 95% air at 37°C. After achieving 70-80% confluence, cells were treated with recombinant human OPG (R&D Systems, Minneapolis, MN, USA) at the indicated concentrations (see the Results section).
Morphologic assays in Cardiomyocytes
Cells were grown on the Nunc™ Lab-Tek™ II Chamber Slide™ System (ThermoFisher Scientific, USA) to examine changes in cell morphology after OPG treatment. Briefly, cells were gently washed with Phosphate Buffered Saline (PBS) and fixed in 4% paraformaldehyde, followed by staining with fluorescein isothiocyanate-conjugated Phalloidin (Sigma-Aldrich, St. Louis, MO, USA) for 30 min. Cellular hypertrophy was evaluated by measuring cardiomyocyte surfaces using a digital image analysis system (Leica QwinV3, Leica Microsystems Ltd, Cambridge, UK). Five random fields (with approximately 10-15 cells per field) from every sample were averaged and expressed as µm²/cell. All experiments were repeated 3 times.
Measurement of Protein Synthesis in Cardiomyocytes
The H9C2 cells were gently washed with Phosphate Buffered Saline, trypsinized (0.25% trypsin, Thermo Fisher Scientific, USA), counted using a cell counting chamber (Beckman Coulter, Fullerton, CA) and then lysed. The cell lysates were prepared to determine protein content by Bradford protein assay. The protein synthesis of cells was then determined by dividing the total amount of protein by the number of cells, i.e. protein per cell.
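The protein-synthesis index described above is simple arithmetic; a minimal sketch (the function name and µg/cell-count units are illustrative assumptions, not the authors' code):

```python
def protein_per_cell(total_protein_ug, cell_count):
    """Protein synthesis index: total lysate protein (µg, e.g. from a
    Bradford assay) divided by the number of cells counted in the same sample."""
    if cell_count <= 0:
        raise ValueError("cell count must be positive")
    return total_protein_ug / cell_count
```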
Western blot analysis

Concentrations of protein in whole cell extracts were determined using the BCA protein assay kit. Antibodies against FAK, Akt, ERK, PI3K, MAPK and their phosphorylated forms were purchased from Santa Cruz Biotechnology (Santa Cruz, CA). Binding of secondary antibodies was detected using anti-rabbit or anti-mouse IgG-HRP (1:2,000, sc-2005, Santa Cruz Biotechnology). Image acquisition was performed on an enhanced chemiluminescence detection system (Tanon, Shanghai, China). ImageJ software was used to quantify the density of the specific protein bands.
In addition, autophagic puncta accumulation in the cytoplasm was detected using an Autophagy Detection kit in accordance with the manufacturer's instructions (Abcam, USA). Briefly, cells were grown to 70% confluence and maintained in serum-free Earle's Balanced Salt Solution for 2 hours to induce autophagy, then fixed using 4% paraformaldehyde fixative. To evaluate autophagy status, cells were stained with Autophagy Detection Reagent and Hoechst 33342 nuclear dye solution at 37°C for 30 min in the dark before being imaged with a Nikon TE2000 Microscope (Nikon, Tokyo, Japan). Slides were sealed with a coverslip using mounting medium and then analyzed by confocal microscopy.
Confocal images were collected on a Nikon A1 microscope using a 60x oil immersion objective lens and NIS Elements software. The total numbers of puncta in the cytoplasm were quantified using ImageJ and normalized by cell number in each field. A total of 5 fields were measured for each cell group and assays were repeated 3 times.
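The per-field quantification above (puncta counts normalized by cell number, then averaged across fields) can be sketched as follows; the function name and input layout are illustrative, not the actual ImageJ workflow:

```python
def mean_puncta_per_cell(field_counts):
    """field_counts: (total_puncta, n_cells) pairs, one per imaged field.
    Returns puncta per cell averaged across all fields that contain cells."""
    per_field = [p / n for p, n in field_counts if n > 0]
    if not per_field:
        raise ValueError("no fields with cells")
    return sum(per_field) / len(per_field)
```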
Intracellular Reactive oxygen species (ROS) determination
To detect intracellular ROS, cells (5 × 10⁴ cells/ml, 100 μl/well) were seeded in a 96-well black plate. After treatment, cells were incubated with 10 μM DCFH-DA probe at 37°C for 20 min. Then cells were washed twice with PBS. The fluorescence was read with a fluorescence microplate reader at excitation/emission of 535/610 nm.
Statistical Analysis
An analysis of variance (ANOVA) was used on all variables to determine whether significant differences existed between groups. When a significant F-ratio was obtained, a Tukey HSD post hoc test was used to identify statistically significant differences (p<0.05) among the means. Statistical analyses were performed using JMP software (SAS Institute Inc., Cary, NC), and all values are expressed as means ± SE.
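For illustration, the one-way ANOVA underlying the group comparison can be computed from first principles as below. This is a generic sketch of the F-statistic with a hypothetical function name, not the JMP/Tukey procedure the authors used:

```python
def one_way_anova(groups):
    """One-way ANOVA for a list of groups (each a list of measurements).
    Returns (F, df_between, df_within); compare F against the F-distribution
    with these degrees of freedom to obtain a p-value."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```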
OPG induced cellular hypertrophy
Consistent with previous studies, we found that OPG treatment at increasing concentrations stimulates cells towards a hypertrophic phenotype, indicated by increased cellular surface area and protein content per cell in a dose-dependent manner. The pro-hypertrophic effect started at a concentration of 25 ng/mL and reached its maximum at 250 ng/mL for both cellular surface area and protein content per cell (Fig. 1A and 1B). We then used 100 ng/mL OPG to treat cells for different durations. We found that OPG also induces a hypertrophic response in a time-dependent manner, starting from 12 hours, reaching a maximum effect at 48 hours and remaining stable until 72 hours. In the following studies, we used 100 ng/mL for 24 hours as the treatment condition (Fig. 1C and 1D).
OPG inhibited autophagy
To investigate the effect of OPG on autophagy, we first checked the autophagic puncta accumulation in cells after OPG treatment. As shown in Fig. 2A, OPG treatment significantly attenuated the accumulation of autophagic puncta in H9C2 cells in comparison to control cells without OPG treatment, suggesting OPG may exert an inhibitory effect on autophagy. To confirm this hypothesis, we next used western blot analysis to determine changes in LC3-II and p62 in cultured H9C2 cells.
LC3-II expression is a marker for complete autophagosomes, while p62 is a ubiquitin-binding protein involved in lysosome- or proteasome-dependent protein degradation. Treatment with OPG resulted in a considerable reduction in LC3-II but increased p62 levels (Fig. 2B), suggesting OPG inhibits the early stage of autophagosome formation. To confirm this finding, we used Bafilomycin A1 (Baf A1, 10 nM for 6 hours), a lysosome inhibitor, to conduct an autophagy flux assay. As shown in Fig. 2B, compared to cells treated with Baf A1 alone, Baf A1 and OPG co-treatment did not increase LC3-II expression, nor did it alter p62 levels, implying that OPG does not block late-stage LC3-II degradation but hinders early-stage autophagosome formation. Taken together, the above data provide clear evidence to support the notion that OPG inhibits autophagy in cultured H9C2 cells.
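The interpretive logic of the flux assay can be summarized as a decision rule: with lysosomal degradation blocked by Baf A1, a treatment that reduces autophagosome formation yields less LC3-II than Baf A1 alone. A schematic sketch only; the function name, inputs and the 10% tolerance are arbitrary illustrative choices, not a published analysis method:

```python
def interpret_flux(lc3_ctrl, lc3_baf, lc3_baf_drug, tol=0.1):
    """Classify where a drug acts from normalized LC3-II levels.

    lc3_ctrl: untreated cells; lc3_baf: Baf A1 alone (degradation blocked);
    lc3_baf_drug: Baf A1 + drug co-treatment."""
    if lc3_baf <= lc3_ctrl:
        return "assay invalid: BafA1 failed to accumulate LC3-II"
    if lc3_baf_drug < lc3_baf * (1 - tol):
        # less LC3-II piles up even with degradation blocked -> fewer formed
        return "drug inhibits early autophagosome formation"
    if lc3_baf_drug > lc3_baf * (1 + tol):
        return "drug enhances autophagosome formation"
    return "drug does not alter formation (acts downstream, if at all)"
```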
OPG induced hypertrophy is autophagy dependent
Increasing evidence has revealed an important pathogenic role of altered autophagy in cardiac hypertrophy [14,16] and recent reports have overwhelmingly supported that autophagy insufficiency contributes to maladaptive cardiac remodeling and hypertrophy [21].
In order to clarify the role of autophagy in OPG-induced hypertrophy, we used 3-MA, an autophagy inhibitor, and Rapamycin, an autophagy inducer, to examine their effects on OPG-induced hypertrophy. Interestingly, we found that Rapamycin attenuated the OPG-induced hypertrophy while 3-MA further enhanced this response, confirming that autophagy mediates OPG-induced hypertrophy in H9C2 cells.
OPG inhibited autophagy independently of ROS
We found that OPG increased the ROS level in cultured cells, as indicated by DHE staining. ROS can inhibit autophagy by downregulating the ULK1-mediated p53 pathway in selenite-treated NB4 cells [7], and many chemical compounds modulate autophagy through the ROS pathway [4,10,13,24]. We therefore tested whether the ROS scavenger N-acetylcysteine plays any role in OPG-mediated autophagy inhibition. Co-treatment with N-acetylcysteine only slightly affected the autophagy status in cells treated with OPG, suggesting that OPG does not inhibit autophagy through the ROS pathway (data not shown).
OPG inhibited autophagy via FAK phosphorylation
Previous studies revealed that OPG activates integrin/FAK signaling, resulting in the activation of Akt, and that inhibition of both FAK and Akt signaling significantly reduced OPG-mediated attenuation of TRAIL-induced apoptosis [12]. OPG also stimulated the ERK1/2 and p38 MAPK signaling pathways in different cell types [1,6,11,23]. In cultured H9C2 cells, we found that treatment with OPG induced a robust increase in phosphorylation of FAK at Tyr-397 and Tyr-925, together with increased phosphorylation of PI3K, Akt, ERK and p38 MAPK (Fig. 4A).
Subsequently, we silenced the above-mentioned proteins using siRNA (Fig. 4B) to check their role in OPG-induced autophagy. Of note, we found that siRNA-mediated knockdown of FAK fully abrogated the OPG-induced changes in p62 levels and LC3 modification, as well as the puncta numbers in the cytoplasm (Fig. 4C and 4D). On the other hand, knockdown of PI3K, Akt, ERK and MAPK did not, or only slightly, affect the effect of OPG on autophagy status and cell hypertrophy markers (data not shown).
OPG boosted the FAK-Beclin1 interaction and promoted Beclin1 phosphorylation at Tyr-233
Beclin1 is a core protein regulating autophagy. It has been demonstrated that phosphorylation of Beclin1 at Tyr-233 limits its binding with complex I and complex II, a necessary step for autophagy activation [5]. As shown in Fig. 5A, OPG treatment led to phosphorylation of Beclin1-Tyr-233 in cells (Fig. 5A, left). A recent study identified that FAK binds Beclin1 and induces phosphorylation at the Beclin1-Tyr-233 site; we thus checked whether FAK mediated the OPG-induced phosphorylation of Beclin1-Tyr-233. As expected, with FAK silencing, OPG failed to phosphorylate Beclin1-Tyr-233 (Fig. 5A, right). Silencing Akt, p38 mitogen-activated protein kinase (MAPK) or phosphoinositide 3-kinase (PI3K) did not change the OPG-induced phosphorylation of Beclin1-Tyr-233 and subsequent autophagy inhibition (data not shown). Next, we used a Beclin1 mutant (Tyr-233 to Phe-233) to test the significance of Beclin1-Tyr-233 in OPG-induced cellular hypertrophy. Compared to wild-type controls, we observed that mutation of Tyr-233 to Phe-233 completely abolished OPG-induced cellular hypertrophy, suggesting that phosphorylation of Beclin1-Tyr-233 is critical for the OPG/FAK pathway in inducing the hypertrophic response (Fig. 5B). To check whether OPG affects the binding between FAK and Beclin1, we conducted a Co-IP assay. As shown in Fig. 5C, OPG enhanced the binding between FAK and Beclin1.
Discussion
Our previous study reported that OPG treatment stimulates cardiomyocyte hypertrophy, but the mechanism remains largely unknown. In this study, we found that FAK mediates OPG-induced cardiomyocyte hypertrophy via suppression of Beclin1-dependent autophagy. To the best of our knowledge, this is the first study to reveal the significance of the FAK and autophagy pathway in OPG-induced cardiomyocyte hypertrophy.
OPG has been reported to attenuate TRAIL-induced apoptosis in a variety of cancer cells, including ovarian cancer cells. OPG-mediated protection against TRAIL has been attributed to its decoy receptor function. However, a recent study reveals that OPG can attenuate TRAIL-induced apoptosis in a TRAIL binding-independent manner, through the activation of integrin/FAK/Akt signaling in ovarian cancer cells. In a pulmonary arterial hypertension (PAH) animal model, OPG facilitates PAH pathogenesis by regulating pulmonary arterial smooth muscle cell proliferation through integrin αvβ3/FAK/AKT signaling. OPG also induces cytoskeletal reorganization and activates FAK, Src, and ERK signaling in endothelial cells. Consistent with these findings, we observed that OPG induced FAK phosphorylation. Importantly, silencing FAK abolished the OPG-induced autophagy suppression and hypertrophic response, suggesting FAK is the main mediator of OPG in this process. So far, how OPG affects autophagy in cardiomyocytes remains unclear. Previous studies revealed that OPG inhibits osteoclast differentiation and bone resorption by enhancing autophagy via the AMPK/mTOR/p70S6K signaling pathway in vitro [19], and that OPG inhibits osteoclast bone resorption by inducing autophagy via the AKT/mTOR/ULK1 signaling pathway. Contrary to these studies, we found that OPG actually exerts an inhibitory effect on autophagy in H9C2 cells. We postulate that there may be a cell-type-specific effect of OPG. Increasing evidence has revealed an important pathogenic role of autophagy in cardiac hypertrophy and heart failure.
Although an early study suggested that cardiac autophagy is increased under pressure overload and that this increase is maladaptive to the heart, more recent reports have overwhelmingly supported that autophagy insufficiency contributes to cardiac remodeling and heart failure. Modulation of autophagy has emerged as a new approach for the prevention and treatment of cardiac hypertrophy. For example, Oridonin protects against cardiac hypertrophy by promoting p21-related autophagy [22]. Hexarelin protects cardiac H9C2 cells from angiotensin II-induced hypertrophy via the regulation of autophagy. Resveratrol prevents chronic intermittent hypoxia-induced cardiac hypertrophy by targeting the PI3K/AKT/mTOR pathway. In our study, we found that inhibition of autophagy by 3-MA dramatically enhanced the OPG-induced hypertrophic markers while Rapamycin rescued these changes, underscoring the importance of autophagy in the OPG-induced hypertrophic response in H9C2 cells.
In order to further elucidate the molecular mechanism by which OPG regulates autophagy and hypertrophy, we studied the interaction between FAK and Beclin1, a major autophagy regulator that plays a critical role in both autophagosome formation and autophagosome-lysosome fusion; recent studies reveal that Beclin1 serves as a nexus for autophagy regulation in response to various signaling pathways. A previous study identified Tyr-233 (a site highly conserved from lower vertebrates to mammals; Table 1) as the only FAK phosphorylation site on Beclin1. Indeed, in this study, we observed that FAK mediated the OPG-induced Beclin1-Tyr-233 phosphorylation. It has been reported that phosphorylation of Beclin1 at Tyr-233 can restrain its binding to complex I (Atg14L/Vps15/Vps34) as well as complex II (UVRAG/Vps15/Vps34), which are essential for autophagosome formation and autophagosome-lysosomal fusion, respectively [5]. It is therefore plausible to reason that OPG activates FAK, which in turn phosphorylates Beclin1-Tyr-233, preventing the formation of complexes I and II and thus hindering autophagy activation.
Conclusions
In summary, we uncovered the mechanism by which OPG induces a hypertrophic response in cultured H9C2 cells. Autophagy suppression via the FAK/Beclin1 pathway may serve as a target for future therapeutic approaches against cardiac hypertrophy. | 2019-12-19T09:11:09.980Z | 2019-12-17T00:00:00.000 | {
"year": 2019,
"sha1": "260c3680255bdc61d1fa488f5c1d321936859f6a",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-9636/v1.pdf?c=1631843449000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6b81e07036793260a620c347ac1a68634a0ed889",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": []
} |
258843805 | pes2o/s2orc | v3-fos-license | CRISPRimmunity: an interactive web server for CRISPR-associated Important Molecular events and Modulators Used in geNome edIting Tool identifYing
Abstract The CRISPR-Cas system is a highly adaptive, RNA-guided immune system found in bacteria and archaea, which has applications as a genome editing tool and is a valuable system for studying the co-evolutionary dynamics of bacteriophage interactions. Here we introduce CRISPRimmunity, a new web server designed for Acr prediction, identification of novel class 2 CRISPR-Cas loci, and dissection of key CRISPR-associated molecular events. CRISPRimmunity is built on a suite of CRISPR-oriented databases providing a comprehensive co-evolutionary perspective of the CRISPR-Cas and anti-CRISPR systems. The platform achieved a high prediction accuracy of 0.997 for Acr prediction when tested on a dataset of 99 experimentally validated Acrs and 676 non-Acrs, outperforming other existing prediction tools. Some of the newly identified class 2 CRISPR-Cas loci found using CRISPRimmunity have been experimentally validated for cleavage activity in vitro. CRISPRimmunity offers catalogues of pre-identified CRISPR systems to browse and query, collected resources and databases to download, a well-designed graphical interface, a detailed tutorial, multi-faceted information, and exportable results in machine-readable formats, making it easy to use and facilitating future experimental design and further data mining. The platform is available at http://www.microbiome-bigdata.com/CRISPRimmunity. Moreover, the source code for batch analysis is published on Github (https://github.com/HIT-ImmunologyLab/CRISPRimmunity).
INTRODUCTION
W94 Nucleic Acids Research, 2023, Vol. 51, Web Server issue

The CRISPR-Cas systems are not only fascinating systems to study the adaptive immunity of prokaryotes against viral infection, but have also been extensively investigated for their applications in the development of powerful genome editing tools. In the decade since CRISPR-Cas9 was introduced as a genome editing technology, the field of gene editing has undergone transformative advancements in basic research and clinical applications (1). The advent of programmable genome editing technologies has enabled the application of disease-specific diagnostics and cell and gene therapies (2)(3)(4)(5)(6). Among these technologies, CRISPR-Cas9 and CRISPR-Cas12a, which are derived from bacterial immune systems, are the most widely used genome editing enzymes that use an RNA-guided mechanism to target and cleave DNA sequences (7,8). However, their high frequency of off-target effects (≥50%) is an obstacle for their therapeutic and clinical application (9). Fortunately, viruses have evolved multiple anti-defense mechanisms to specifically inhibit CRISPR activity, such as diverse anti-CRISPR proteins (Acrs), which have enormous potential to be developed as modulators of genome editing tools (10,11).
Recent discoveries of new CRISPR-based enzymes, CRISPR-associated enzymes and anti-CRISPR proteins have greatly enhanced our understanding of the CRISPR-Cas system's role in microbes and its utility for genome editing in other cells and organisms (12). Several databases have emerged to provide both known and predicted information for CRISPR and anti-CRISPR proteins (Table 1), and several web servers have been developed to predict and characterize the CRISPR-Cas and anti-CRISPR systems (Table 2). However, these resources often only focus on specific areas of CRISPR-related research and provide limited information and services. Additionally, there is currently no available tool for identifying novel class 2 CRISPR-Cas loci. It is imperative to have comprehensive data combined with multi-perspective mining to improve our understanding of RNA-guided DNA/RNA cleavage systems, but such comprehensive online resources or web servers are still lacking.
Here, we present CRISPRimmunity (CRISPR-associated Important Molecular events and Modulators Used in geNome edIting Tool identifYing), a new web server designed to facilitate CRISPR-oriented discovery. This platform improves the accuracy of Acr prediction by integrating a 'self-targeting' (ST), 'guilt by association' and machine learning approach. Meanwhile, CRISPRimmunity can identify novel class 2 CRISPR-Cas loci through a comparative genomics screening approach, and several identified candidates have been experimentally validated for in vitro cleavage activity. In addition, CRISPRimmunity is able to dissect the important molecular events during the co-evolution of CRISPR and anti-CRISPR mechanisms, including CRISPR-Cas system detection and classification, Acr and Acr-associated (Aca) protein annotation, ST spacer search (STSS), repeat classification, prophage detection, and bacteria-phage interaction detection. CRISPRimmunity can be utilized either as a web server (http://www.microbiome-bigdata.com/CRISPRimmunity) without registration for general users to study individual input sequences or as a stand-alone tool (https://github.com/HIT-ImmunologyLab/CRISPRimmunity) to perform batch analysis.
Input data requirements and navigation of CRISPRimmunity
CRISPRimmunity is a user-friendly website that provides three functional modules for annotating key CRISPR-associated molecular events, predicting Acrs, and identifying novel class 2 CRISPR-Cas loci. A genome file in (multi-)FASTA or (multi-)GenBank format is required as input data for all modules. The website is divided into three main areas: the top area with buttons for various functions, including viewing historical results ('MY JOBS'), accessing a detailed manual ('HELP'), annotating important molecular events ('ANNOTATION'), predicting Acrs ('PREDICT' → 'PREDICT ACR'), identifying novel class 2 CRISPR-Cas loci ('PREDICT' → 'PREDICT NOVEL EFFECTER PROTEIN'), browsing and querying pre-identified CRISPR-Cas events on 18 408 completely sequenced bacterial strains ('BROWSE'), searching the CRISPR-related results on 235 Acr-containing microbial strains ('CASE STUDY'), and downloading the CRISPR annotation results for 208 209 human gut microbes and the CRISPRimmunity underlying databases for the stand-alone version ('DOWNLOAD') (Supplementary Figure S1A); the middle section displays a graphical mechanism or procedure for each functional module (Supplementary Figure S1B); while the bottom section contains a module execution panel where users can view examples, upload input data, set parameters, and submit new tasks (Supplementary Figure S1C). User data is kept on the server for one month, ensuring easy access to historical results.
HTH domain database (HTHDB). CRISPRimmunity collected 166 HTH-related profiles from the conserved domain database (CDD) of NCBI (17) based on the keyword 'HTH', and built HTH position-specific score matrices (PSSMs) for rapid identification of HTH domain-containing proteins via RPS-BLAST (18).
Note: EV: experimentally validated protein; PE: predicted proteins; IP: interacting phage; TP: this paper. Acr and non-Acr datasets. CRISPRimmunity collected 99 experimentally validated Acrs from Anti-CRISPRdb (13), and 676 non-Acrs from 3 phage and bacteria MGE datasets described in a previous study (30), and fetched the genome sequences of corresponding strains in the custom pipeline using Biopython (31).
Functional module 1: important CRISPR-related molecular events annotation program
CRISPRimmunity dissects the important molecular events involved in the co-evolution of CRISPR and anti-CRISPR mechanisms, including CRISPR-Cas system classification, Acr and Aca protein annotation, STSS, repeat classification, prophage detection, and bacteria-phage interaction detection, to facilitate users with diverse needs.
CRISPR array identification. CRISPRimmunity integrates multiple strategies for identifying CRISPR arrays, including searching for repetitive patterns and using machine learning techniques. A combination of PILER-CR (32), CRT (33), CRISPRCasFinder (34) and CRISPRidentify (35) is incorporated to provide a thorough analysis of CRISPR arrays, and integrated and individual results are available for further analysis.
Protein sequence extraction. Prodigal (36) is utilized with the '-p meta' parameter to extract proteins from a FASTA format genome, and a custom pipeline based on Biopython (31) is employed for a GenBank format genome.
Annotation of CRISPR-Cas protein and repeat type. Proteins within the user-specified distance to the CRISPR array (default: 20 kb) were annotated as the best hit by comparing with CasDB using hmmscan (25) with a threshold e-value < 1e−9. Repeat type was predicted with RepeatDB using a threshold e-value < 1e−6.
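The best-hit selection with an e-value cutoff can be sketched as below; the tuple layout is a simplified stand-in for parsed hmmscan output, and the function name is hypothetical rather than CRISPRimmunity's actual code:

```python
def best_hits(hits, evalue_cutoff=1e-9):
    """hits: iterable of (query, subject, evalue). Keeps, per query, the
    subject with the lowest e-value, discarding hits at or above the cutoff."""
    best = {}
    for query, subject, evalue in hits:
        if evalue >= evalue_cutoff:
            continue  # fails the significance threshold
        if query not in best or evalue < best[query][1]:
            best[query] = (subject, evalue)
    return best
```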
Detection of ST events. CRISPRimmunity detects ST events by utilizing BLASTN (29) to compare the spacer sequence with the input genome, with the parameters '-task blastn-short -word_size 7 -penalty -1 -reward 1', filtering out hits with too many mismatches or low coverage (default: mismatch = 2, coverage = 1), or occurring within the CRISPR array.
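The post-BLAST filtering step can be sketched as follows, assuming pre-parsed tabular hits; the function name and tuple layout are hypothetical, and removal of hits falling inside the CRISPR array itself is assumed to happen elsewhere:

```python
def self_targets(blast_rows, spacer_lengths, max_mismatch=2, min_cov=1.0):
    """Filter BLASTN hits of spacers against their own genome.

    blast_rows: (spacer_id, mismatches, align_len, sstart, send) tuples;
    spacer_lengths: dict mapping spacer_id -> spacer length."""
    kept = []
    for sid, mm, alen, sstart, send in blast_rows:
        cov = alen / spacer_lengths[sid]  # fraction of the spacer aligned
        if mm <= max_mismatch and cov >= min_cov:
            kept.append((sid, sstart, send))
    return kept
```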
Detection of prophage region. CRISPRimmunity utilizes DBSCAN-SWA (38) (a custom prophage detection tool) with default parameters to predict prophage regions in the input genome based on phage-like protein and integration-related features.
Functional module 2: Acr prediction procedure
CRISPRimmunity integrates three strategies to predict Acrs: searching for known Acr homologs, guilt-by-association analysis, and optimization using self-targeting in prophages.
Searching for known Acr homologs. CRISPRimmunity extracts proteins from the input sequence and compares them with AcrsDB using Diamond BLASTP (16) with default parameters. The identified Acr homologs are then functionally annotated by comparing with CDD using RPS-BLAST (18) with a threshold e-value < 0.01. Proteins with unknown function or < 400 aa in size are preliminarily screened as candidates. Subsequently, the candidates are compared to published Acrs based on amino acid sequence, and those with a similarity higher than 90% are excluded.
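The candidate screen above (unknown function or < 400 aa, then exclusion of near-identical published Acrs) can be sketched as a simple filter; names and the input layout are illustrative assumptions, not the server's code:

```python
def screen_acr_candidates(proteins, published_identities,
                          max_len=400, max_identity=0.90):
    """proteins: (pid, length_aa, annotation) tuples, with annotation None
    meaning 'unknown function'; published_identities: pid -> best fractional
    identity (0..1) to any published Acr. Returns retained candidate ids."""
    out = []
    for pid, length, annotation in proteins:
        # preliminary screen: unknown function OR small protein
        if not (annotation is None or length < max_len):
            continue
        # drop candidates nearly identical to already-published Acrs
        if published_identities.get(pid, 0.0) > max_identity:
            continue
        out.append(pid)
    return out
```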
Guilt by association analysis. CRISPRimmunity compares the proteins extracted from the input sequence to AcasDB. The user-specified upstream and downstream neighbors of the Aca homologs (default: 3) are functionally annotated. Next, Acr candidates are screened using the same parameters as previously described in 'Searching for known Acr homologs'.
Optimizing using self-targeting events in prophage. CRISPRimmunity adopts two strategies to predict Acrs in prophages. For prophages containing ST, CRISPRimmunity compares all proteins in the prophage to HTHDB to search for HTH-domain-containing proteins as potential Aca using RPS-BLAST (18) with a threshold e-value < 0.01. Acr candidates are then predicted as described in 'Guilt by association analysis'. For prophages not containing ST, Acr candidates are screened out by the probability of being an Acr predicted with AcRanker (39).
Next, CRISPRimmunity extracts the proteins located near the retained orphan CRISPR(s) based on a user-defined number (default of 5). The function of those proteins is annotated by comparing with CDD using RPS-BLAST (19) with a threshold e-value < 0.01. Initial candidates are chosen based on user-specified size (default of 500 aa) and the presence of a nuclease-associated domain. Because the CRISPR-Cas system evolved to resist phage invasion and similar systems can be observed in multiple strains and species, we analyze whether homologs of the initial candidates appear consistently next to the CRISPR array. To be considered as a final candidate, there must be at least three homologous proteins that appear stably next to the CRISPR array, with < 90% similarity.
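The final confirmation criterion (at least three CRISPR-adjacent homologs that are not trivial near-duplicates) can be sketched as below; the function name and tuple layout are hypothetical, not CRISPRimmunity's implementation:

```python
def confirm_effector_candidate(homolog_hits, min_homologs=3, max_identity=0.90):
    """homolog_hits: (strain, next_to_crispr, identity_to_candidate) tuples
    for homologs of an initial candidate. The candidate is retained only if
    at least `min_homologs` homologs sit next to a CRISPR array and are
    < 90% identical to the candidate (i.e. independent occurrences)."""
    supporting = [h for h in homolog_hits if h[1] and h[2] < max_identity]
    return len(supporting) >= min_homologs
```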
Finally, CRISPRimmunity visualizes the evolutionary relationship between the candidate(s) and previously published class 2 CRISPR-Cas effectors. A maximum likelihood tree is constructed by FastTree (40) with default parameters based on the multiple alignment produced by MAFFT (37) with default parameters. The resulting phylogenetic tree is then visualized using the R package ggtree (41).
User interface and system features
System implementation. The CRISPRimmunity web server is implemented using Django, a high-level Python web framework, and incorporates a well-designed graphical user interface and detailed tutorials to help with feature navigation and results interpretation. The web interface ensures fast rendering and high performance through JavaScript library integration. Additionally, the CRISPRimmunity web server supports all major internet browsers.
System features. CRISPRimmunity provides a concise table and interactive viewer for displaying important features of proteins and CRISPR-Cas loci. A Circle plot is used to intuitively represent key molecular events and anti-CRISPR proteins, with each entity represented by a unique color, and detailed information can be obtained by clicking on the entity. A maximum likelihood tree constructed using FastTree (40) characterizes the evolutionary relationship between the candidate(s) and the previously published class 2 CRISPR-Cas effectors. An export option is provided to export key results in various machine-readable formats for further analysis. The platform also allows for online browsing and querying of pre-identified CRISPR-associated events on the 18 408 completely sequenced bacterial strains and 235 Acr-containing bacterial strains from NCBI, with the option to sort or screen results based on events of interest or CRISPR-system type. Detailed information can be obtained by clicking the 'View' button. Additionally, CRISPRimmunity provides a download option for CRISPR-related results from 18 408 completely sequenced bacterial strains and 208 209 human gut microbes, as well as various CRISPR-oriented databases collected by this study.
Protein expression and purification
Full-length Cj2Cas9 cDNA was sub-cloned into the bacterial expression vector pGEX-6P-1. Cj2Cas9 protein was expressed in E. coli C43 (DE3) cells and purified as previously described (42). Cj2Cas9 expression was induced by 0.3 mM IPTG at 16°C. After overnight induction, cells were collected by centrifugation and resuspended in buffer (25 mM Tris-HCl, pH 8.0, 1 M NaCl, 3 mM DTT). Cells were disrupted by sonication and cell debris was removed by centrifugation. The lysate was purified using glutathione sepharose 4B (GS4B) beads. The bound proteins were cleaved with PreScission protease overnight at 4°C to remove the GST tag. The cleaved Cj2Cas9 protein was eluted from the GS4B resin and further fractionated by heparin sepharose column and ion exchange chromatography via FPLC (AKTA Pure, GE Healthcare).
In vitro transcription and purification of sgRNA
The sgRNAs were transcribed in vitro using home-made T7 polymerase and purified by gel electrophoresis as previously described (42). The transcription template was generated by PCR. RNA was purified by gel electrophoresis on a denaturing (8 M urea) polyacrylamide gel and recovered using the Elutrap system followed by ethanol precipitation. RNAs were resuspended in DEPC-treated H2O and stored at −80°C.
PAM characterization
To characterize the PAM, a plasmid library containing 7 bp of randomized nucleotides immediately 3′ downstream of the target sequence was constructed as previously described ( 43 ). Reactions were performed in a 50 μl system containing 500 nM RNP complex and 2 μg PAM library plasmid. Cleavage reactions were performed at 37 °C for 60 min in cleavage buffer (20 mM HEPES-Na, pH 7.5, 2 mM MgCl2, 100 mM KCl, 1 mM dithiothreitol, 5% glycerol). The 3′-dA overhang was added by incubating the cleaved product with an A-tailing kit (TAKARA). The A-tailing products (100 ng) were ligated with a dsDNA adapter containing a 3′-dT overhang (100 ng) at 16 °C for 1 h. After ligation, the adapter-bearing cleavage products were PCR-amplified to add the sequences required for deep sequencing and subjected to Illumina sequencing offered by the HI-TOM platform ( http://121.40.237.174/Hi-TOM/ ) ( 44 ).
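Downstream of sequencing, PAM characterization reduces to counting nucleotide frequencies at each randomized position among the cleaved-library reads: positions with a strongly skewed base composition reveal the PAM. The sketch below is a minimal, hypothetical version of that counting step, not the HI-TOM pipeline; it assumes each input read is just the 7-nt window immediately 3′ of the protospacer.

```python
from collections import Counter

def pam_position_frequencies(reads, pam_len=7):
    """Per-position nucleotide frequencies over the randomized PAM region.

    `reads` is assumed to be an iterable of strings, each holding the 7-nt
    randomized window extracted from one sequenced cleavage product.
    Returns a list of {base: frequency} dicts, one per PAM position.
    """
    counts = [Counter() for _ in range(pam_len)]
    for r in reads:
        for i, base in enumerate(r[:pam_len]):
            counts[i][base] += 1
    n = len(reads)
    return [{b: c / n for b, c in pos.items()} for pos in counts]
```

For example, if every cleaved read begins with G while the remaining positions are uniform, position 1 of the PAM is inferred to require G, matching the kind of 3′-G signal reported for Cj2Cas9.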
In vitro DNA cleavage assay
Target DNA sequences were cloned into the pUC19 vector using BamHI and EcoRI, and dsDNA substrates were generated by PCR using pUC19 primers. Target DNA sequences contain a 3′-G PAM motif immediately 3′ downstream of the 20-nt target sequence.
The DNA cleavage assay was performed in a 20 μl system containing 0.5 μg Cj2Cas9, 0.1 μg sgRNA and 0.2 μg plasmid or dsDNA at 37 °C. Cj2Cas9 and sgRNA were preincubated in cleavage buffer for 5 min at room temperature. The plasmid cleavage reactions were stopped at different time points (5, 15 and 30 min) by adding 6× DNA gel loading buffer. Plasmid cleavage products were run on a 1% agarose gel at room temperature and visualized by EB staining. The dsDNA cleavage reactions were stopped by adding 2× TBE-urea gel loading buffer. Cleavage products were run on 6% TBE-urea PAGE at room temperature in 1× TBE running buffer and visualized by EB staining.
Anti-CRISPR protein (Acr) prediction
We integrate three strategies to predict Acrs, based on protein sequence features, cassette features of neighboring proteins, and genomic features including prophage and self-targeting. The first strategy searches for homologs of known Acrs to identify potential candidates that may serve as more effective CRISPR switches. We then screen the predicted Acr candidates based on protein size, functional domain, and similarity to known Acrs. This approach is based on the hypothesis that Acrs inhibiting the same type of CRISPR-Cas system share similar sequence features. The second strategy employs the concept of guilt by association to infer Acr candidates by tracing potential Aca proteins, which are commonly located adjacent to the Acr in a cassette. We first infer candidate Aca(s) based on homology to known Aca(s), and subsequently screen the Aca neighbor proteins with the same protein sequence features used in the first strategy to predict Acr candidates. This approach is based on the observation that Aca(s) are typically located downstream of the Acr and likely play a role in regulating its activity ( 45 ). The third strategy identifies self-targeting events within prophages as a strong indicator of the existence of Acrs. We identify prophage regions with self-targeting events and screen the intraregional proteins using the first two strategies (homology and guilt by association). Additionally, we utilize machine learning as a complementary method to infer Acrs in the candidate region. This approach is based on the premise that self-targeting events are successful cases of phage immune evasion, and that Acrs are most likely integrated by the bacteriophage and located in the prophage region ( 46 ). The overall prediction framework is illustrated in Figure 1 A. Figure 1 C shows the results of the Acr prediction module for an example query on Staphylococcus schleiferi strain 5909-02.
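The size-and-homology filter of the first strategy can be illustrated with a toy sketch: keep only short proteins that also have a homology hit to a known Acr. The 250-aa cutoff, the input shapes, and the precomputed hit set are illustrative assumptions, not CRISPRimmunity's actual thresholds or pipeline.

```python
def screen_acr_candidates(proteins, known_acr_hits, max_len=250):
    """Toy first-strategy filter: Acrs are typically small proteins, so keep
    candidates under an (illustrative) length cutoff that also appear in a
    precomputed set of homology hits to known Acrs.

    proteins: {protein_id: amino_acid_sequence}
    known_acr_hits: set of protein_ids with a homology hit to a known Acr
    """
    return [
        pid for pid, seq in proteins.items()
        if len(seq) <= max_len and pid in known_acr_hits
    ]
```

A real implementation would derive the hit set from a sequence search (e.g. profile or BLAST-style alignment) and add the domain and similarity screens described above; this sketch only shows how the filters compose.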
The webpage displays a download option and a summary table of the results at the top, listing the putative Acr(s), CRISPR loci, ST event(s), and prophage region(s) closely associated with the Acr(s). A circle plot at the center of the page provides an intuitive visualization of the putative Acr(s) and three related molecular events, with more detailed information available by clicking on an entity (Figure 1 C, center panel). Four interactive panels provide sequential descriptions of the details of Acrs, CRISPR-Cas loci, self-targets, and prophages. The Acr prediction panel lists all putative candidates one by one, with a summary table providing comprehensive information at three levels: the protein itself, its neighbouring proteins, and the whole genome.
Five proteins located upstream and downstream of the candidate protein are displayed in a linear protein plot, and detailed information can be accessed by clicking on them.
Identification of novel class 2 CRISPR-Cas loci
Numerous orphan CRISPR systems exist in genomes without any homologs of published CRISPR effector proteins or accessory proteins. Therefore, the proteins surrounding these orphan CRISPR arrays may be potential novel CRISPR effector proteins. To identify them, we first explore orphan CRISPR arrays and investigate the proteins surrounding these arrays. We then screen neighboring proteins based on protein size and functional domain. Proteins with a transposase annotation, a nuclease function, or cleavage-active domains such as RuvC, HNH, and HEPN are given higher confidence as CRISPR effectors. As efficient CRISPR-Cas systems are generally preserved through evolution, whether by independent evolution or horizontal gene transfer, and exist across diverse organisms, we search for homologs of candidate effector proteins and detect the corresponding CRISPR-Cas systems in their vicinity. Homologs of proteins and repeats that co-exist across species enhance the confidence in choosing the candidate systems. Finally, we construct a phylogenetic tree based on the alignments of the predicted and known CRISPR-Cas effector protein sequences to gain insight into the origin of the novel systems and their evolutionary relationships with other known CRISPR-Cas systems. The overall prediction framework is illustrated in Figure 2 A. Figure 2 B-F showcases the results of the novel class 2 CRISPR-Cas loci identification module with an example query of Armatimonadetes bacterium isolate ATM2 J3; a summary table reporting the novel CRISPR-Cas loci is displayed at the top of the webpage (Figure 2 B). Interactive panels provide sequential details on novel CRISPR-Cas loci and their homologs. All potential candidates are listed one by one, with detailed information on effector proteins and neighboring proteins, and on the spacers and repeats in the CRISPR array (Figure 2 C-E).
Additionally, an evolutionary panel displays a maximum likelihood tree between the novel candidate(s) and known effectors of class 2 CRISPR-Cas loci, providing insights into their evolutionary relationships (Figure 2 F).
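The neighbor-screening step described above, combining a size window typical of class 2 effectors with a nuclease-domain annotation, can be sketched as a simple filter. The length bounds and domain list below are illustrative, loosely based on the domains named in the text (RuvC, HNH, HEPN), and are not the platform's actual rules.

```python
def candidate_effectors(neighbors, min_len=700, max_len=1500,
                        nuclease_domains=("RuvC", "HNH", "HEPN")):
    """Toy screen of proteins flanking an orphan CRISPR array.

    neighbors: {protein_name: (length_in_aa, [annotated_domains])}
    Keeps proteins inside an (illustrative) class-2-effector size window
    that carry at least one cleavage-active domain annotation.
    """
    out = []
    for name, (length, domains) in neighbors.items():
        if min_len <= length <= max_len and any(d in domains for d in nuclease_domains):
            out.append(name)
    return out
```

In the real pipeline, survivors of such a screen would then be taken through homology search and phylogenetic placement, as described for AbCasN1/AbCasN2 below in the results.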
Dissecting key molecular events during the co-evolution of CRISPR and anti-CRISPR
Bacteria have developed various defense mechanisms to protect against phage attacks, and phages have been evolving anti-defense strategies under selection pressure. The evolutionary relationships and mechanisms between bacteria and phages remain incompletely understood. Nevertheless, several key molecular events can help us investigate the co-evolution between bacteria and phages.
We first comprehensively detect CRISPR arrays through integrative analysis using PILER-CR ( 32 ), CRT ( 33 ), CRISPRCasFinder ( 34 ) and CRISPRidentify ( 35 ), then annotate the type of CRISPR repeat and Cas genes based on the domain profiles, and finally classify the system based on the dominant type of the Cas effector proteins. Next, we examine the occurrence of self-targeting events, which may indicate CRISPR-based autoimmunity, by scanning the spacers against the self-genome. We consider a hit with at most two mismatches between the spacer and a portion of the endogenous genomic sequence a credible self-targeting event. To gain insight into invaders of prokaryotes, we also annotate the potential invading phages or plasmids by sequence alignment. Finally, we annotate the possible Acrs on the genome for the convenience of investigating phage immune escape. The overall prediction framework is illustrated in Figure 1 B.
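The ≤2-mismatch spacer scan described above can be sketched as a brute-force sliding comparison. This is a minimal illustration only: it scans the forward strand, whereas a production scan would also check the reverse complement, use an indexed search for speed, and consider PAM context.

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def self_targeting_hits(genome: str, spacers, max_mismatches: int = 2):
    """Slide each spacer along the genome; report (spacer, position, mismatches)
    for every window within the mismatch tolerance -- a toy version of the
    paper's <=2-mismatch self-targeting rule, forward strand only."""
    hits = []
    for spacer in spacers:
        k = len(spacer)
        for i in range(len(genome) - k + 1):
            d = hamming(genome[i:i + k], spacer)
            if d <= max_mismatches:
                hits.append((spacer, i, d))
    return hits
```

Any reported hit against the host's own chromosome is then a candidate self-targeting event, which the platform cross-references with prophage regions and Acr annotations.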
An example output of the dissection of key molecular events module is illustrated in Figure 1 C, D. The result panel is similar to that of the Acr prediction module. When the Staphylococcus schleiferi strain 5909-02 is used as a query, a panel providing the result of MGE-targeted detection is added compared to the Acr prediction module. This panel reveals hits between CRISPR spacers and custom MGE genome databases, furnishing information on potential phage or plasmid invaders (Figure 1 D).
Performance for Acr protein prediction
We evaluated the performance of Acr prediction by comparing CRISPRimmunity's Acr prediction module with PaCRISPR ( 30 ) and AcrFinder ( 14 ) on an independent dataset containing 99 experimentally validated Acrs and 676 non-Acrs (see Materials and Methods). The cutoff threshold of the PaCRISPR output was set to 0.5 (Table S1). As shown in Figure 3 A, CRISPRimmunity outperformed the other predictors on this independent dataset based on four general evaluation metrics: accuracy, F1-score, precision, and recall. When predicting non-Acrs from phages and bacterial MGEs, all methods achieved high true negative prediction accuracy (Figure 3 B). Specifically, CRISPRimmunity, PaCRISPR, and AcrFinder correctly predicted 217, 209 and 217 non-Acrs out of 217 phage proteins, and 250 and 249 non-Acrs out of 250 bacterial proteins, respectively. In summary, CRISPRimmunity showed good performance in identifying Acrs from both bacterial and phage proteins, outperforming the other predictors on an independent experimentally validated Acr dataset.
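The four evaluation metrics used in this comparison are standard functions of the confusion-matrix counts. A small helper (not part of CRISPRimmunity) for reproducing such comparisons from raw true/false positive and negative counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts.
    Denominator guards return 0.0 for degenerate cases (no positives)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For instance, a predictor that recovers 8 of 13 true Acrs with 2 false positives out of 100 proteins scores accuracy 0.93 and precision 0.8 under these definitions.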
Performance for identifying novel class 2 CRISPR-Cas effectors
To assess the performance for identifying novel class 2 CRISPR-Cas effectors, we used our module in CRISPRimmunity to detect CRISPR-Cas loci in the 307 410 strains downloaded from NCBI in 2020. As a result, a candidate from an Armatimonadetes bacterium species was identified, featuring a complete CRISPR array, a hypothetical protein as a possible effector protein, and three CRISPR auxiliary proteins, Cas1, Cas2 and Cas4. Homology search further revealed other homologous candidate systems, and the candidate effectors were named AbCasN1 and AbCasN2 (Figure 3 C). The sizes of AbCasN1 and AbCasN2 are 867 aa and 857 aa, respectively, and their sequence similarity is 0.44. The similarity of the CRISPR array is 0.76, while that of the auxiliary proteins (Cas1, Cas2, Cas4) is 0.58, 0.61 and 0.47, respectively (Figure 3 C). The C-terminus of AbCasN1 and AbCasN2 displays the RuvC domain with three active sites (D, E and D) (Figure 3 D). The possible PAM sequence was predicted by CRISPRTarget ( 47 ) (Figure 3 E), and a maximum-likelihood tree was constructed based on the multiple alignment of the full-length proteins of the two candidate effectors and other known class 2 effectors. The candidate effectors formed a separate branch, relatively closely related to Cas12j (Figure 3 F). Significantly, this predicted effector protein was independently discovered and experimentally validated in another study, where the structure was resolved, the gene editing activity in vitro and in vivo was demonstrated, and the mechanism of action was elucidated ( 48 ). These results demonstrate the power of CRISPRimmunity in identifying novel class 2 CRISPR-Cas effectors.
Case study 1: identification of a G-PAM CjCas9 from genomic data
Several CRISPR-Cas9 homologs have been utilized for genome editing. Cas9's recognition of DNA requires a specific sequence downstream of the target sequence, known as the protospacer adjacent motif (PAM) ( 49 ). The smallest Cas9 orthologue characterized to date for efficient in vivo genome editing is derived from the Campylobacter jejuni species; named CjCas9, it comprises 984 amino acid residues and recognizes a 5′-NNNVRYM-3′ PAM ( 50 ). Cas9s with simple PAM requirements can facilitate extensive genome editing by targeting a broader range of loci ( 51 ), so we attempted to identify a CjCas9 with a simpler PAM. We used CRISPRimmunity to annotate the CRISPR-Cas systems on 16 723 strains of C. jejuni collected from NCBI. A CRISPR-Cas system with more than three spacers and a distance between the Cas proteins and the CRISPR array of less than 500 nt was assigned high confidence; a total of 10 462 high-confidence type II CRISPR-Cas systems were identified. We then searched for natural variations in the PAM-interacting domain (PID) of Cas9s to identify orthologues of CjCas9. All CjCas9s were compared to the reference CjCas9 protein sequence from the Campylobacter jejuni NCTC 11168 strain. Finally, a CjCas9 from the C. jejuni NCTC 11951 strain (hereafter referred to as Cj2Cas9) was selected based on its single amino acid mutation in the PID compared with the reference CjCas9 protein sequence (Figure 4 A, B). To test the PAM of candidate Cj2Cas9, we constructed a plasmid library containing seven randomized DNA nucleotides immediately downstream of the 20-nt target sequence. We then transformed the expression plasmids of Cj2Cas9 into E. coli, purified ribonucleoprotein (RNP) complexes to cleave the constructed plasmid library in vitro, and performed high-throughput sequencing. Analysis of the sequencing data showed that Cj2Cas9 cleaved dsDNAs with a PAM sequence of 3′-G (Figure 4 C). The purified Cj2Cas9 was subjected to endonuclease digestion to validate its activity in vitro.
As shown in Figure 4 D, Cj2Cas9 successfully cleaved the plasmid with a PAM sequence of 3′-G. Finally, we conducted an endonuclease digestion experiment on double-stranded DNA carrying a 3′-G PAM using Cj2Cas9 and the sgRNAs of various Cas9s, and confirmed the specific double-stranded DNA cleavage of Cj2Cas9 with its own sgRNA (Figure 4 E). In summary, CRISPRimmunity enabled us to identify a Cas9 homolog displaying G-PAM specificity for gene editing.
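The orthologue screen used in this case study reduces to comparing aligned PID sequences residue by residue against the reference and keeping candidates that differ at exactly one position. A toy sketch (hypothetical five-residue sequences; assumes an indel-free alignment of equal length, unlike a real protein alignment):

```python
def pid_mutations(ref_pid: str, query_pid: str):
    """List (position, ref_aa, query_aa) differences between two aligned
    PAM-interacting-domain sequences of equal length."""
    return [(i, a, b) for i, (a, b) in enumerate(zip(ref_pid, query_pid)) if a != b]

def single_mutation_orthologs(ref_pid: str, candidates: dict):
    """Keep candidates whose PID differs from the reference at exactly one
    residue, returning {name: [mutation]} -- the selection rule that singled
    out Cj2Cas9 in this case study, in miniature."""
    return {name: pid_mutations(ref_pid, pid)
            for name, pid in candidates.items()
            if len(pid_mutations(ref_pid, pid)) == 1}
```

A real screen would first extract and align the PID region from each full-length Cas9 before applying the comparison.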
Case study 2: identification of two compact Cas13ds from human gut microbiome
The CRISPR/Cas13 system extends the application of CRISPR technology through RNA-targeted, endonuclease-mediated degradation of RNA ( 52 ). Six subtypes (Cas13a, Cas13b, Cas13c, Cas13d, Cas13X and Cas13Y) have been identified in the Cas13 family, all of which are smaller than Cas9; Cas13X is the smallest, with 775 aa ( 23 ). The gut is the largest micro-ecosystem in the human body, and the gut microbiome affects health in different ways. We used CRISPRimmunity to analyze the CRISPR composition of 208 209 human gut microbes obtained from MGnify ( 53 ) and identified two compact Cas13d candidates (hereafter referred to as cCas13d1 and cCas13d2). The CRISPR loci contain a compact Cas13d gene, a CRISPR array with five spacers, and the presence or absence of Cas1 and Cas2 (Figure 5 A). The HEPN ribonuclease domain of the protein features two RxxxxH motifs located separately at the N and C termini of the protein (Figure 5 B), with a 30-nt spacer flanked by two 36-nt direct repeat (DR) sequences (Figure 5 C). The protein sizes of cCas13d1 and cCas13d2 are 871 aa and 721 aa, respectively, smaller than any previously known Cas13d effector ( ∼930 aa) ( 54 ). To gain insight into the origin and evolution of cCas13ds, we constructed a maximum-likelihood tree on the multiple alignment of the full-length proteins of cCas13ds and previously published Cas13 effectors. The cCas13d family is closely related to the previously known Cas13ds (Figure 5 D). In addition, 15 out of 69 type V CRISPR effector candidates identified have been validated to have in vitro cleavage activities (Figure S2, Table S3). In summary, CRISPRimmunity has the potential to advance genomic engineering and RNA editing technologies by providing accurate identification of novel CRISPR-Cas systems.
Case study 3: annotation of Acrs and key molecular events in 235 microbial strains
Anti-CRISPRdb ( 13 ) has manually collected published Acrs through literature searches. However, a more comprehensive analysis of the strains containing Acrs can improve our understanding of the co-evolution between bacteria and phages. Therefore, we downloaded 235 microbial genomes containing 99 known Acrs from anti-CRISPRdb for systematic analysis, including the annotation of potential Acrs, class 2 CRISPR-Cas effector proteins, and various key molecular events, as displayed on the 'CASE STUDY' panel of the website (Figure 6 A). In addition to the 99 Acrs known from experiments to inhibit CRISPR activity, we predicted 297 homologs of known Acrs and 197 novel Acrs, as shown in Figure 6 B. In 42% of the analyzed microbial genomes, only one Acr or one type of Acr was detectable (Figure 6 C), but in some species, such as Pseudomonas aeruginosa and Moraxella bovoculi, multiple Acrs or multiple types of Acrs were detected simultaneously, indicating a fierce CRISPR and anti-CRISPR arms race in these organisms (Figure 6 D). Among the microbial genomes containing Acrs, self-targeting events were detected in 26%, interacting phages were detected in 64%, and 71% of Acrs were located in prophages, as shown in Table S2. In summary, further analyses of interest can be conducted based on the detailed results of CRISPRimmunity, deepening understanding of the co-evolutionary mechanisms of CRISPR and anti-CRISPR systems.
DISCUSSION
This study introduces CRISPRimmunity, an advanced and user-friendly platform designed for comprehensive CRISPR-oriented analysis. The platform goes beyond existing CRISPR-Cas-associated tools by providing a more integrative and evolutionary perspective on the CRISPR-Cas and anti-CRISPR systems. CRISPRimmunity develops a streamlined, integrated analysis covering the annotation of crucial molecular events, the prediction of Acrs, the identification of new class 2 CRISPR-Cas loci, and the linking of bacteria and phages based on CRISPR signals. CRISPRimmunity offers a variety of functionalities, including diverse methods for detecting CRISPR arrays and a series of custom CRISPR-oriented databases for annotating known Acrs and Acas, class 2 CRISPR-Cas systems, repeats of classified CRISPR systems, HTH domains, and mobile genetic elements. The platform also provides multiple approaches for predicting Acrs, such as guilt-by-association analysis and combined analysis of prophage and self-targeting. When tested on experimentally validated Acrs and benchmark datasets of non-Acrs, CRISPRimmunity outperforms other existing Acr prediction tools. Furthermore, CRISPRimmunity introduces a novel class 2 CRISPR-Cas loci detection pipeline; several of the class 2 effector candidates identified (15/69) have been validated to have in vitro cleavage activities (Figure 4, Supplementary Figure S2, Supplementary Table S3). In addition to its advanced functionalities, CRISPRimmunity provides various visualizations, customization options, and detailed information to facilitate future experimental design, as well as exportable results in machine-readable format for further analysis. Its well-designed graphical user interface and detailed tutorials make it accessible to users with different levels of expertise.
Moreover, CRISPRimmunity provides a 'BROWSE' option that allows CRISPR engineers or experimentalists to browse and query pre-identified CRISPR-related events on NCBI's 18 408 completely sequenced bacterial strains and 235 Acr-containing bacterial strains. The 'DOWNLOAD' option offers downloads of the annotated CRISPR-related results from NCBI's bacterial strains and the human gut microbiome, as well as a suite of CRISPR-oriented databases for further data mining. A stand-alone version of CRISPRimmunity is also available on GitHub at https://github.com/HIT-ImmunologyLab/CRISPRimmunity to facilitate batch analysis.
The adaptive immunity of prokaryotes has undergone a lengthy evolutionary process. The transposon-associated IS200/IS605 IscB and TnpB molecules are the ancestors of type II and V CRISPR-Cas effectors ( 55 , 56 ). Recent studies have revealed that IscB and TnpB molecules are RNA-guided DNA endonucleases ( 57 , 58 ), expanding our knowledge of the adaptive immunity of prokaryotes. Our future goal is to incorporate more types of defense- and anti-defense-related information into CRISPRimmunity, providing additional biological insights and enhancing our understanding of the adaptive immunity in prokaryotes. We hope that CRISPRimmunity will facilitate research into the mechanisms of the CRISPR-Cas and Acr systems from a co-evolutionary perspective, ultimately leading to the discovery of more efficient and less off-target genome editing tools.
DATA AVAILABILITY
CRISPRimmunity is an open-access resource and is publicly available at http://www.microbiome-bigdata.com/CRISPRimmunity. The application is free and open to all users with no login requirement. The source code of CRISPRimmunity is published on GitHub via https://github.com/HIT-ImmunologyLab/CRISPRimmunity.
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online.