Fast Semantic-Assisted Outlier Removal for Large-scale Point Cloud Registration
With current trends in sensors (cheaper, higher data volumes) and applications (new tasks becoming affordable, new ideas about what 3D data could be useful for), there is correspondingly increasing interest in the ability to automatically, reliably, and cheaply register together individual point clouds. The volume of data to handle, and the still-elusive goal of fully reliable and fully automatic registration, mean there is a need to innovate further. One largely untapped area of innovation is exploiting the semantic information of the points in question. Points on a tree should match points on a tree, for example, and not points on a car. Such a natural restriction is clearly human-like: a human would quickly eliminate candidate regions for matching based on semantics. Employing semantic information is not only efficient but natural. It is also timely, given recent advances in semantic classification capabilities. This paper advances this theme by demonstrating that state-of-the-art registration techniques, in particular ones that rely on "preservation of length under rigid motion" as an underlying matching consistency constraint, can be augmented with semantic information. Semantic identity is of course also preserved under rigid motion, and indeed under the wider motions present in a scene. We demonstrate that neither the potential cost of semantic segmentation nor its potential unreliability is an impediment to achieving both speed and accuracy in fully automatic registration of large-scale point clouds.
Point cloud registration is a core capability for applications such as map fusion [2] and localization [3], among others. Given two sets of points in three-dimensional (3D) space, the goal is to estimate a rigid-body transformation, consisting of a rotation matrix R ∈ SO(3) and a translation vector t ∈ R^3, that best aligns the two input point sets. While a variety of methods exist in the literature to address this problem [4,5,6], handling a large proportion of erroneous matches (outliers), as present in many large-scale datasets, remains a difficult challenge for existing methods. To address this problem, we propose a novel method for fast outlier removal that can be used, as a data pre-processor, to significantly boost the performance of existing state-of-the-art algorithms.
The task of robust registration is typically addressed by randomized strategies, where RANdom SAmple Consensus (RANSAC) [4] and its variants [7,8,9] are the tools of choice. RANSAC is appealing due to its simplicity, and it provides relatively satisfactory results on problem instances with low outlier ratios. However, as the number of outliers grows, RANSAC can no longer sustain its performance, since it may require an exponentially large number of iterations to produce acceptable results [4]. This in turn can prove to be a bottleneck for real-time applications.
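To make the sensitivity to the outlier ratio concrete: by the standard RANSAC analysis (a textbook result, not a derivation specific to this paper), the number of iterations N needed to sample at least one all-inlier minimal subset with probability p, given outlier ratio ρ and minimal sample size s, is

N ≥ log(1 − p) / log(1 − (1 − ρ)^s).

For rigid 3D registration a minimal sample is s = 3 correspondences, so at ρ = 0.9 and p = 0.99 this already exceeds 4,600 iterations.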
To achieve a high-quality estimate within a reasonable run-time, it is beneficial if a fraction of the outliers can be removed before executing the core robust fitting algorithm. Along this line of research, several approximate or exact methods have been developed for the outlier removal task, e.g., [10,11,12,13]. However, such methods require evaluating every single pair in the initial putative correspondence set, where each pair requires the execution of a costly branch-and-bound sub-problem. In large-scale applications, methods such as [11] are therefore impractical.
The motivation behind our work comes from human vision, where the reasoning process is guided by semantic information. For example, we would only attempt to match keypoints that belong to the same semantic class (e.g., houses, trees, cars, etc.). In recent years, it has also become possible to obtain semantic information for 2D images or 3D point clouds in real time. However, how to effectively exploit the available semantic information to support classical robust registration algorithms remains an interesting research question.
In this work, we propose novel techniques to partially address the above question. Specifically, we investigate a novel combination of traditional robust estimation techniques, in particular finding pairwise-consistent correspondences using maximum clique [5], with semantic segmentation, which can be obtained from any well-known semantic segmentation framework [14,15,16]. Maximum clique is the ingredient of choice in our work because it offers a convenient mechanism for exploiting the available semantic information to massively reduce the problem size. Moreover, the graph formulation underlying maximum clique allows us, under the guidance of semantic labels, to decompose the original problem into multiple sub-problems, each of which can be solved efficiently. Maximum clique has been used previously for rigid registration [17,5], since the extracted clique is a set of pairwise-consistent (under preservation of distance between points) correspondences, which can be used to estimate the optimal transformation. However, existing works solve this problem on graphs larger than is actually necessary, hence they remain impractical for large datasets. We show that with a novel use of semantic information, one only needs to solve maximum clique sub-optimally and on smaller graphs.
The correspondence set generated by our method can then be processed by any state-of-the-art robust registration algorithm, but with significantly faster run-time, since a large proportion of the outliers have been removed. Experimental results show that we achieve competitive or even better registration results, while our total run-time (semantic segmentation + outlier removal + robust registration) is orders of magnitude faster than that of existing registration algorithms, including techniques that provide globally optimal results [11].
It should be noted that our work differs from recent deep-learning approaches [18,19] to point cloud registration, as the main workhorse behind our work is still a well-known classical robust registration algorithm (based on maximum clique), allowing us to achieve results that are close to globally optimal solutions. Moreover, fully deep-learned approaches usually require the design of a new network architecture, which must be re-trained before it can be used on any new dataset. Our method, on the other hand, only requires off-the-shelf semantic classifiers that have been pre-trained. Furthermore, our underlying ideas are agnostic as to whether the semantic information comes from 3D data, image data, or both.
Related Work
The task of robustly aligning two point sets has long been an active research topic in computer vision. One of the most well-known methods is RANSAC [20], which is based on repeatedly sampling points to generate model hypotheses and discovering the model with the largest consensus set. Variants of RANSAC [21] seek to reduce computing time through guided sampling and accelerated hypothesis evaluation. Although RANSAC and its variants are simple and easy to implement, their processing time grows exponentially when dealing with high percentages of outliers. 4-Points Congruent Sets (4PCS) [22] and its variant [23] have better sampling strategies. However, such methods are sensitive to hyper-parameters, such as the approximate percentage of overlapping points.
In contrast to heuristic methods, several globally optimal algorithms [24,25] have been introduced, mostly based on the branch-and-bound (BnB) technique. Unfortunately, these methods are computationally very expensive. To overcome this limitation, efforts have been made to remove a large number of outliers before applying globally optimal algorithms (e.g., BnB). Parra et al. [11] proposed Guaranteed Outlier Removal (GORE) to remove as many certain outliers as possible. Although GORE avoids running BnB on all elements of the initial match set, it is still computationally expensive, as it evaluates complex bounding functions. Other work [17,5] sought to quickly discover the largest set of consistent correspondences from an initial match set; transformation parameters can then be extracted easily and quickly from that subset by applying robust algorithms (e.g., RANSAC [20]). However, these methods are still computationally expensive on large-scale datasets.
Problem Definition
We consider the registration of two input point sets M and N, in which the number of points in each set can exceed 100,000. As is common, we assume that a set of putative correspondences P has been obtained by an off-the-shelf keypoint extraction technique [26,27], given as P = {p_k}_{k=1}^{D}, where each correspondence pair p_k = (m_k, n_k), out of the set of D correspondences, contains m_k ∈ M and n_k ∈ N. Our goal is to estimate the optimal 6DOF rigid-body transformation (i.e., R* ∈ SO(3) and t* ∈ R^3) that maximizes the following objective function:

(R*, t*) = argmax_{R ∈ SO(3), t ∈ R^3} Σ_{k=1}^{D} I( ||R m_k + t − n_k|| ≤ ε ),   (1)

where ||·|| denotes the ℓ2 (Euclidean) norm in R^3. The indicator function I(·) returns 1 if its predicate (·) is true and 0 otherwise. The maximum-consensus objective (1) has been used widely, e.g., [11,13]. The hyper-parameter ε specifies the inlier threshold, which can be chosen based on prior knowledge about the problem.
Due to the noisy characteristics of input sensors and the imperfection of existing feature extraction techniques, P may contain a large fraction of outliers (the KITTI dataset can contain up to ρ = 90% outliers; please refer to the supplementary material). In order to estimate R* and t*, a subset I* containing (1 − ρ)D inliers must be identified, and vice versa (i.e., if the optimal transformation (R*, t*) is given, I* can easily be obtained). In this work, we introduce a novel approach to effectively remove outliers in P to obtain a subset P′ ⊂ P whose outlier ratio ρ′ is much lower than that of the original set P. Given P′, the optimal transformation between the two point clouds can then be obtained by, e.g., running RANSAC.
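Once a low-outlier correspondence subset is available, (R, t) follows in closed form. Below is a minimal sketch of the SVD-based (Kabsch/Umeyama-style) solver that the paper invokes via [28]; it is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def rigid_transform_svd(M, N):
    """Estimate (R, t) minimizing sum_k ||R m_k + t - n_k||^2.

    M, N: (K, 3) arrays of corresponding 3D points (assumed mostly inliers).
    """
    m_bar, n_bar = M.mean(axis=0), N.mean(axis=0)
    H = (M - m_bar).T @ (N - n_bar)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = n_bar - R @ m_bar
    return R, t
```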
In fact, our outlier removal approach is motivated by a class of registration methods that attempt to solve point cloud alignment without putative correspondences [5,17], which use pairwise length constraints to obtain the optimal set of correspondences by searching for the maximum clique. In practice, however, using all the points as input to such algorithms renders them impractical, hence we still use keypoints.
In the next section, we briefly introduce the Maximum Clique algorithm, which serves as the backbone of our approach.
Maximum Clique (for preserved distance between point pairs)
Formulation. As previously discussed, once the set I* containing the highest number of inliers (the set with maximum consensus) is obtained, the optimal transformation (R*, t*) can be computed using, e.g., SVD [28] or RANSAC [20]. Following [5,17], outliers can be removed by solving for the maximum set of consistent pairs. Specifically, we first define an undirected graph G = (V, E), where each vertex v_k ∈ V corresponds to a correspondence pair p_k between the two point clouds, i.e., V = P. The set of edges E of G is defined as

E = { (v_i, v_j) | d(p_i, p_j) ≤ 2ε },   (2)

where the function d(p_i, p_j) specifies the distance between the two correspondence pairs p_i = (m_i, n_i) and p_j = (m_j, n_j), which can be expressed as

d(p_i, p_j) = | ||m_i − m_j|| − ||n_i − n_j|| |,   (3)

and ε is the user-defined inlier threshold. To assist further discussion in later sections, we also define a clique C = (V′, E′), V′ ⊆ V, E′ ⊆ E, to be a sub-graph of G in which there exists a connection between any two vertices, i.e.,

(v_i, v_j) ∈ E′ for all v_i, v_j ∈ V′, i ≠ j.   (4)

For brevity, we define the clique size |C| to be the number of vertices in C. A maximum clique C* of G is a clique with the maximum number of vertices.
From (3), a pair of correspondences p_i and p_j are said to be consistent if d(p_i, p_j) ≤ 2ε. The graph G defined above is therefore referred to as the consistency graph. Intuitively, if p_i and p_j both belong to the optimal inlier set I*, the difference in lengths between the two segments m_i m_j and n_i n_j must not exceed 2ε (in noise-free settings, ε = 0).
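A minimal sketch of building the consistency graph of Eqs. (2)-(3); the dense D × D formulation below is illustrative and assumes the correspondences fit in memory:

```python
import numpy as np

def consistency_graph(M, N, eps):
    """Adjacency matrix of G: edge (i, j) iff | ||m_i-m_j|| - ||n_i-n_j|| | <= 2*eps.

    M, N: (D, 3) arrays, row k holding the correspondence p_k = (m_k, n_k).
    """
    dM = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)  # pairwise lengths in M
    dN = np.linalg.norm(N[:, None, :] - N[None, :, :], axis=-1)  # pairwise lengths in N
    A = np.abs(dM - dN) <= 2 * eps
    np.fill_diagonal(A, False)   # no self-loops
    return A
```

The O(D²) cost of this construction is precisely what motivates the semantic decomposition of Section 4.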
The task of removing (not necessarily all) outliers in P can be addressed by solving for the optimal set I* that contains the maximum number of consistent correspondence pairs. Since every pair of inliers p_i, p_j ∈ I* must satisfy d(p_i, p_j) ≤ 2ε, we can remove outliers by solving

I* = argmax_{I ⊆ P} |I|  subject to  d(p_i, p_j) ≤ 2ε  for all p_i, p_j ∈ I.   (5)

Together with the consistency graph G introduced above, maximizing Eq. (5) is equivalent to searching for the maximum clique of G, which is discussed in the following section. Interested readers are referred to [5] and the supplementary material for more details.
Solving Maximum Clique
Removing outliers by finding the maximum clique in the consistency graph, Eq. (5), is still computationally challenging, as maximum clique is a well-known NP-hard problem. The majority of optimal algorithms are based on BnB. For instance, Parra et al. [5] introduced an efficient algorithm by deriving an efficient bounding function. However, such algorithms can only handle input data of modest size, and large-scale datasets remain impractical. To assist in the introduction of our algorithm, and for the sake of completeness, this section quickly summarizes the general BnB approach to tackling Eq. (5). Our novel modification will be introduced in Section 4.
A general strategy for solving maximum clique using BnB is to traverse the input graph G = (V, E) in a depth-first-search manner. At the beginning of the algorithm, all the vertices in V are pushed into a set S, i.e., S = V. A node v_i ∈ S is selected and its adjacent nodes are extracted into a set S′ for further exploration. During the traversal, the size R* of the best clique obtained so far is maintained; if the bound on the clique size attainable through v_i exceeds R*, the search recursively continues from v_i to deeper levels. Otherwise, the exploration of v_i is terminated, v_i is removed from S′, and the process continues from other nodes in the queue S′ until S′ = ∅. The search then back-tracks to investigate the remaining nodes in S. More details about the BnB algorithm for maximum clique can be found in [5]. It is well-known that BnB has exponential complexity [29], thus real-time applications rarely employ such techniques unless the problem size is relatively small.
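For illustration, a compact sketch of the depth-first BnB scheme just described, using only the trivial bound |clique| + |candidates|; practical solvers such as PMC [5] rely on much tighter coloring-based bounds:

```python
def max_clique_bnb(A, lower_bound=0):
    """Depth-first branch-and-bound maximum clique (Carraghan-Pardalos style).

    A: adjacency as a list of sets, A[v] = neighbors of vertex v.
    lower_bound: initial clique size to beat; an empty result means no clique
    larger than lower_bound was found (warm start, cf. Section 4).
    """
    best = []

    def expand(clique, cand):
        nonlocal best
        if not cand:
            if len(clique) > len(best):
                best = clique      # record a maximal clique that beats the incumbent
            return
        while cand:
            # Trivial upper bound: even taking all candidates cannot beat the incumbent.
            if len(clique) + len(cand) <= max(len(best), lower_bound):
                return
            v = cand.pop()
            expand(clique + [v], cand & A[v])

    expand([], set(range(len(A))))
    return best
```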
While BnB is a standard framework, deriving a tight bounding function R̂(v) is crucial in determining the execution time. Most existing research relies on approximate graph coloring to obtain an upper bound on the attainable clique size. In addition, other techniques, such as ranking the nodes by their degrees [5], can also be used to speed up the algorithms. However, to the best of our knowledge, most algorithms are still slow on large-scale problems, especially when the graph G is dense.
Proposed Method
A schematic of our proposed approach is depicted in Fig. 1, while the following sections show how integrating semantic information into approximating the maximum clique addresses the challenges discussed above. In particular, we first discuss correspondence pruning using semantic labels. Next, using the provided semantic information containing L different classes, we propose to decompose the graph into L sub-graphs and solve maximum clique for each sub-graph. The solutions of the sub-problems are then combined in a hierarchical manner. Notably, we also demonstrate a novel use of the information obtained from the solved sub-graphs to accelerate the computation of lower-bounding functions while optimizing the remaining sub-graphs. This technique allows us to achieve real-time performance on very large datasets containing an extremely high proportion of outliers.
Correspondence Pruning with Semantic Labels
We use semantic labels to prune the set of input putative correspondences P. Specifically, for each 3D point m ∈ R^3, let l(m) ∈ {1, . . . , L} denote the semantic label (out of the L semantic classes) assigned to m. Based on such a labelling, we prune P by removing correspondences p_k = (m_k, n_k) where l(m_k) ≠ l(n_k). Formally, after pruning, we obtain a new set S ⊆ P:

S = { p_k ∈ P | l(m_k) = l(n_k) }.   (6)

After this pruning procedure (a sketch follows this discussion), due to the inevitable errors of the underlying semantic classifiers, two problems may arise:
- True inliers may be incorrectly removed, and
- The set S can still contain a large fraction of outliers.
For the former problem, our empirical experiments (please refer to the supplementary material) show that our method incorrectly removes an insignificant fraction of the correspondences in I*. Moreover, this does not have a significant impact on the final estimated transformation compared to ground truth (see Section 5 for performance results).
The latter problem, on the other hand, is still challenging. One would expect that after pruning P, since |S| < |P|, applying any state-of-the-art robust fitting algorithm to S would be easier than applying it to the original set P. However, merely pruning the dataset based on semantic class matching still leaves a large number of outliers (please refer to the supplementary material).
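A minimal sketch of the label-based pruning of Eq. (6), assuming per-point labels are stored in parallel arrays:

```python
import numpy as np

def semantic_prune(M, N, labels_M, labels_N):
    """Keep only correspondences whose endpoints share a semantic label (Eq. 6).

    M, N: (D, 3) corresponding points; labels_M, labels_N: (D,) integer labels.
    """
    keep = labels_M == labels_N
    return M[keep], N[keep], labels_M[keep]
```

As noted above, this step shrinks P but does not by itself make the outlier ratio small; its main value is enabling the per-class decomposition that follows.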
In the following section, we show how semantic information can further be utilized to devise a fast algorithm, which can precisely remove outliers in S.
Hierarchical Maximum Clique
Approximating the maximum clique up to a constant factor has been shown to be intractable unless P = NP [30]. Therefore, most research involving maximum clique resorts to heuristic mechanisms that approximate the maximum clique. Inspired by recent works on maximum clique for very large graphs [31,32,33], where graph decomposition plays a major role, we propose a hierarchical maximum clique algorithm for outlier removal.
Our heuristic algorithm stems from the special structure of rigid-body point cloud registration. Recall that, in order to roughly estimate the rotation and translation between the two given point clouds, only a subset of the inliers is sufficient; the rest of the inliers can be recovered after applying the estimated R and t. Therefore, it is adequate for an algorithm to approximate the set of inliers. In addition, in the context of point cloud registration, matching points are expected to belong to the same semantic class. We therefore propose to approximate this set of inliers by first using semantic information to decompose the original graph, and then combining the solutions in a hierarchical manner. The algorithms are discussed in detail as follows.
Semantic Graph Decomposition. After executing the semantic pruning procedure (Section 4.1) to obtain the set S, consider the graph G_S = (V_S, E_S) built from S based on the technique discussed in Section 3.2. Since the labels assigned to the points in each correspondence p_k = (m_k, n_k) ∈ S are identical, with a slight abuse of notation we also use l(p_k) to denote the label associated with the points belonging to p_k, i.e., l(p_k) = l(m_k) = l(n_k). Also, for a graph vertex v_k ∈ V_S, we denote by l(v_k) the label of the associated correspondence p_k.
Note that by decomposing the original graph G_S into L sub-graphs, the total number of vertices remains the same, while edges that connect vertices having different labels are temporarily removed. Therefore, the number of edges in each sub-graph can be massively reduced compared to the original graph, i.e., |E_i| ≪ |E_S| for all i = 1, . . . , L, allowing us to solve maximum clique efficiently on each G_i (see Fig. 2 for an illustration of graph decomposition). Moreover, note that based on the sub-graph definitions, instead of constructing the original graph G_S and performing the decomposition, it is sufficient to construct each G_i directly from the subset of vertices having label i.
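A sketch of this per-class construction, reusing consistency_graph from above, so the full graph G_S is never materialized:

```python
import numpy as np

def decompose_by_label(M, N, labels, eps):
    """Build one consistency sub-graph per semantic class (Section 4.2).

    Returns {label: (index array into the pruned set, adjacency matrix)}.
    """
    subgraphs = {}
    for lbl in np.unique(labels):
        idx = np.flatnonzero(labels == lbl)
        # Each G_i is built only from correspondences of class i, so the
        # (much larger) global graph G_S is never constructed.
        subgraphs[lbl] = (idx, consistency_graph(M[idx], N[idx], eps))
    return subgraphs
```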
After decomposing the original graph into L sub-graphs, it is natural to execute the maximum clique algorithm on the sub-graphs. For each sub-graph G_i, we denote by I*(G_i) the optimal inliers after solving maximum clique for graph G_i. Certainly, if I*(G) denotes the optimal inliers of the original graph G, the union ∪_{i=1}^{L} I*(G_i) need not equal I*(G). In other words, combining the inlier sets obtained from the sub-problems is not guaranteed to recover the optimal solution of the original problem. However, our empirical experiments show that combining the solutions ∪_{i=1}^{L} I*(G_i) is in most cases sufficient to recover the optimal transformation.

Combining sub-optimal solutions. After solving the sub-problems to obtain the set J = ∪_{i=1}^{L} I*(G_i), J may still contain outliers. One possible reason is the presence of moving objects. For instance, the same car can be located at two different positions in the two input point clouds; in this scenario, points belonging to that car become "outliers" w.r.t. the remaining points. Moreover, the noisy set of putative correspondences may contain correspondences that are not meaningful to the underlying point cloud structure but still form a maximum clique. Therefore, to remove the remaining outliers in J, the final stage of our algorithm solves maximum clique on the set J. In outline, the algorithm proceeds as follows:

for each class i = 1, . . . , L:
    if i > 1 then LB(G_i) ← |I_i|
    calculate I*(G_i) using PMC with lower bound LB(G_i)
    estimate (R*_i, t*_i) using RANSAC on ∪_{j=1}^{i} I*(G_j)
    calculate the initial lower bound I_{i+1} for the next class
combine the sub-optimal solutions of the L sub-graphs: J = ∪_{i=1}^{L} I*(G_i)
run PMC again on J to get I*(J)
estimate (R*, t*) using a robust registration algorithm (e.g., RANSAC) on I*(J)
As discussed in Section 3, during BnB the bounding function R̂(v_i) determines whether a vertex v_i should be expanded. For our particular problem, we show that while solving for I*(G_i), an initial lower bound LB(G_i) can be used in combination with R̂(v_i) to effectively prune the search tree.
We make use of the decomposed structure of our problem and utilize the solutions obtained from the classes processed so far, ∪_{j=1}^{i−1} I*(G_j), to compute LB(G_i) during the process of solving for I*(G_i). Our strategy is motivated by the fact that, in order to obtain an acceptable transformation, only a small subset of correspondences Î ⊂ I*, with |Î| ≥ 3, is required. Therefore, after obtaining the set ∪_{j=1}^{i−1} I*(G_j), we seek (R_{i−1}, t_{i−1}) from this subset using RANSAC. Because the fraction of outliers in ∪_{j=1}^{i−1} I*(G_j) is relatively low and |∪_{j=1}^{i−1} I*(G_j)| is small, the run-time required by RANSAC is in most cases very short. Then, based on (R_{i−1}, t_{i−1}), we compute the approximate number of inliers |I_i| in class i and use it as the initial lower bound for solving I*(G_i):

LB(G_i) = |I_i| ≤ |I*(G_i)|.   (7)

At any given node v, LB(G_i) ≤ |I*(G_i)|, while UB(v) bounds the size of any clique reachable through v. Consequently, at a particular node v, if UB(v) < LB(G_i), the node v can be pruned.
|I_i| tends to be a tight initial lower bound: firstly, the maximum clique may contain some, but not many, outliers, so |I_i| is quite close to |I*(G_i)|; secondly, the run-time to calculate |I_i| is negligible.
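Putting the pieces together, a sketch of the hierarchical loop with the warm-started lower bound; ransac_rigid is an assumed helper returning (R, t) from a small, mostly-inlier correspondence set:

```python
import numpy as np

def hierarchical_max_clique(M, N, labels, eps, ransac_rigid):
    """Sketch of the hierarchical loop (cf. the algorithm outline above)."""
    J = []                          # global indices of kept correspondences
    R, t = None, None
    for lbl, (idx, A) in decompose_by_label(M, N, labels, eps).items():
        lb = 0
        if R is not None:
            # Warm start (Eq. 7): inliers of the current (R, t) within this
            # class form a clique, so their count lower-bounds |I*(G_i)|.
            resid = np.linalg.norm(M[idx] @ R.T + t - N[idx], axis=1)
            lb = int(np.sum(resid <= eps))
        adj = [set(np.flatnonzero(row)) for row in A]
        clique = max_clique_bnb(adj, lower_bound=lb)   # [] if the warm start
        J.extend(int(idx[v]) for v in clique)          # is already unbeaten
        if len(J) >= 3:
            R, t = ransac_rigid(M[J], N[J])  # cheap: few, mostly-inlier pairs
    return J, (R, t)
```

The final maximum clique pass over J and the closing RANSAC estimate are omitted here for brevity.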
Experiments
In this section, we test our proposed method on both synthetic and real-world datasets. All experiments are executed on an Intel Core i7 CPU (3.70 GHz) with 32 GB RAM and a GeForce GTX 1080 Ti GPU.
Benchmark Dataset. We evaluate the performance of our proposed method using the KITTI odometry dataset [34]. This dataset contains point clouds captured from a moving car, equipped with a Velodyne HDL-64 LiDAR, around the city of Karlsruhe, Germany. The original point clouds contain around 120,000 points each. Since the ground-truth error in the original KITTI odometry dataset is large, we utilize the ground-truth poses from SemanticKITTI [35] instead.
Baseline Algorithms. To demonstrate the performance of our method, we compare it against the following state-of-the-art approaches on the same dataset:
- GORE [11]: Guaranteed Outlier Removal for point cloud registration. The code was provided by the authors.
- Practical Maximum Clique with pairwise constraints (PMC) [5]. The code was provided by the authors.
- Keypoint-based 4PCS (K4PCS) [23]: a variant of 4PCS applied to keypoints.
We down-sample the input point clouds using a voxel size of 0.1 m. We then apply Intrinsic Shape Signatures (ISS) [26] to detect keypoints and the Fast Point Feature Histogram (FPFH) [36] to compute local geometric features of each point. Finally, we generate the initial correspondences from these features using K nearest neighbors (K = 10), setting the inlier threshold to 0.1.
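A sketch of this front-end using Open3D-style calls; the function names follow Open3D's documented API, but this is an assumed reconstruction of the pipeline, not the authors' code:

```python
import numpy as np
import open3d as o3d

def make_correspondences(pcd_src, pcd_dst, voxel=0.1, k=10):
    """Downsample, detect ISS keypoints, compute FPFH, and match by k-NN."""
    down_s = pcd_src.voxel_down_sample(voxel)
    down_d = pcd_dst.voxel_down_sample(voxel)
    kp_s = o3d.geometry.keypoint.compute_iss_keypoints(down_s)
    kp_d = o3d.geometry.keypoint.compute_iss_keypoints(down_d)
    for kp in (kp_s, kp_d):   # FPFH needs normals
        kp.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
    param = o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100)
    f_s = o3d.pipelines.registration.compute_fpfh_feature(kp_s, param)
    f_d = o3d.pipelines.registration.compute_fpfh_feature(kp_d, param)
    # k-NN matching in FPFH feature space (K = 10 in the text)
    tree = o3d.geometry.KDTreeFlann(f_d)
    pairs = []
    for i in range(f_s.data.shape[1]):
        _, nn, _ = tree.search_knn_vector_xd(f_s.data[:, i], k)
        pairs += [(i, j) for j in nn]
    return np.asarray(kp_s.points), np.asarray(kp_d.points), pairs
```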
Evaluation Criteria. The evaluation is based on the angular and translational error of the estimated transformation (R̂, T̂) against the ground truth (R, T). The angular error is calculated as angErr = 2 arcsin( ||R̂ − R||_F / (2√2) ). The translational error is the Euclidean distance between T̂ and T.
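The two error metrics in code form (a direct transcription of the formulas above, with the arcsine argument clamped for numerical safety):

```python
import numpy as np

def angular_error_deg(R_est, R_gt):
    """2 * arcsin(||R_est - R_gt||_F / (2 * sqrt(2))), in degrees."""
    f = np.linalg.norm(R_est - R_gt, ord='fro')
    return np.degrees(2.0 * np.arcsin(min(1.0, f / (2.0 * np.sqrt(2)))))

def translational_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth translations."""
    return float(np.linalg.norm(t_est - t_gt))
```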
Results on the Semi-synthetic Data
We first show the performance of our method on semi-synthetically generated data. The point cloud "000001.bin" from sequence 00 of the KITTI dataset is loaded and used as the source point cloud M. In this experiment, we use ground-truth semantic information. Motivated by [37], to produce the target point cloud N, we apply a random 6DOF transformation and small Gaussian noise (zero mean, variance 0.01) to M. We then randomly select and remove k% of the points in N to simulate partial overlap, and conduct experiments with k = 10%, 15%, ..., 50%. Fig. 3 shows the median values (angular error, translation error, optimization run-time, and total run-time) over 100 runs for all methods. In this stage, as we use ground-truth semantic information, the total time does not include the run-time for semantic prediction (the semantic prediction run-time is reported in Section 5.2). As shown in Fig. 3, our algorithm outperforms the above methods in terms of run-time, with better (or comparable) errors. The performance of K4PCS is unstable because it is sensitive to input parameters such as the approximate overlap. To evaluate the robustness of our method, we add noise to the semantic labels of the target point cloud. In particular, we randomly select and change h% of the point labels, and conduct experiments with h = 10%, 20%, ..., 90%. Table 2 shows the median values (angular error, translation error, and optimization run-time) over 100 runs for our method. As the table shows, our method is robust to noise in the semantic labels. We report results with noise rates of up to 70%; our method fails when the noise rate exceeds 80%. However, most recent deep-learning networks [14,15,16] for point cloud semantic segmentation achieve over 70% average accuracy. As a result, our pipeline is flexible: practitioners can use any deep-learning network suited to their application.
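For reference, a sketch of the semi-synthetic protocol described above; the translation range is an assumption, while the noise variance and removal rate follow the text:

```python
import numpy as np

def make_target(M, k_percent, noise_var=0.01, rng=np.random.default_rng(0)):
    """Generate target cloud N from source M per the semi-synthetic protocol."""
    # Random rotation via QR decomposition of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0              # ensure a proper rotation (det = +1)
    t = rng.uniform(-1.0, 1.0, size=3)   # translation range: an assumption
    N = M @ Q.T + t + rng.normal(scale=np.sqrt(noise_var), size=M.shape)
    keep = rng.random(len(N)) >= k_percent / 100.0   # drop k% of the points
    return N[keep], Q, t
```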
Results on the Real Data
To demonstrate the practicality of our method, we evaluate its performance on the KITTI odometry benchmark. We use 11 sequences (11 to 21) for testing. Motivated by the DeepVCP [18] experimental setting, we sample the input source at 150-frame intervals and register it against all target scans within 4 m translation. To extract semantic information, we utilize predictions from a state-of-the-art deep-learning model, RangeNet53++ [16]; the mean Intersection over Union (mIoU) and average accuracy obtained are 0.52 and 0.89, respectively. To be fair, we report median values, which are less influenced by outliers, as well as the mean and standard deviation, in Table 3, and compare the median values of all methods in Fig. 4. The total run-time includes the run-time for extracting keypoints, generating initial correspondences, semantic prediction (for our method only), and optimization. Note that the run-time for semantic prediction was taken from the original RangeNet53++ paper [16]. As shown in Fig. 4, our methods achieve better (or comparable) accuracy with substantially faster run-time than the others. Compared to PMC, the optimization run-time of HSMC is up to 99× faster. Moreover, our pipeline is flexible: practitioners can replace PMC with any maximum clique solver, and RangeNet53++ with any state-of-the-art deep-learning model.
Conclusion
We have demonstrated that one can employ semantic constraints, in addition to pairwise preserved-distance constraints, to reduce the number of putative matches (removing outliers among them) sufficiently well to make a post-processing robust registration stage both fast and accurate on large-scale point clouds.
Synthesis and characterization of PVP based catalysts for selected application in catalysis
This research aims to study catalyst activity in selected reactions and the characteristics of the catalyst in order to optimize its performance. It investigates PVP-based catalysts, their properties, and their applications. PVP was prepared in combination with different metal oxides and tested for different catalytic applications, including dye removal. Methyl orange was used as the dye, and different concentrations were tested against different metallic ions in order to optimize the catalyst for dye removal applications. A spectrophotometer was used to determine the concentration of the dye before and after catalyst exposure and to investigate the relation between contact time and concentration. Applying different contact times to the same weight percent of PVP-based catalyst with metallic ions revealed that increasing the contact time, with thorough shaking, decreases the concentration of the dye mixed with the sample. The tests showed that the mixture of PVP and nickel gave the best dye removal among the tested metal ions (copper and ferric), and that ferric had the least effect on dye removal. Wide-angle X-ray diffraction (WA-XRD) was applied to different samples (copper with PVP and ferric with PVP).
INTRODUCTION
Textile effluents and wastewater contain dyes that can cause hazards and considerable harm to the environment and to living organisms [1][2][3][4][5]. These dyes should be removed in order to reduce such harm, and among the various techniques used for dye removal is treatment with a PVP-based catalyst mixed with different metal ions [6][7][8][9][10][11][12].
Polyvinylpyrrolidone (PVP), with molecular weights (Mw) from 2,500 to around one million, is mostly obtained by radical polymerization in solution. The higher-molecular-weight products are polymerized in aqueous solution, generally using hydrogen peroxide as the initiator [56][57][58]. The polymers thus obtained have hydroxyl and carbonyl end groups. More stable end groups can be obtained by polymerization in solvents, which may act as chain-transfer agents and which produce low-molecular-weight products [59][60][61]. Copolymers, especially with monomers such as vinyl acetate derivatives and various acrylic compounds, may likewise be produced by solution polymerization. Popcorn polymerization leads to insoluble PVP: VP is polymerized without initiator in the presence of small amounts of bifunctional monomers [64-66]. The polymeric chips thus formed are highly cross-linked, mostly due to entanglements.
The molecular weight distribution of soluble PVP is broad because of transfer reactions. An unusual property of PVP is its solubility in water as well as in various organic solvents. The glass transition temperature of high-molecular-weight polymers (Mw = 1 million) is about 175°C and falls to values under 100°C with decreasing molecular weight (Mw = 2,500). PVP forms complexes with various compounds, particularly with H-donors such as phenols and carboxylic acids. The complex formed between cross-linked PVP and polyphenols is used commercially for the clarification of beverages. Another commercial use is the complexation of iodine with linear PVP, which yields effective disinfectants of very low toxicity. Further important uses of PVP in the pharmaceutical field are as binders or film-forming agents for tablets, and as solubilizing agents for injections.
The swelling capacity of cross-linked PVP in water is exploited in disintegrating agents for tablets. In the cosmetic field, PVP polymers are used as film formers in hair-styling products. Examples of technical applications are adhesives, textile auxiliaries, and dispersing agents. PVP has many uses and applications; most relate to everyday products, while others are used in the manufacture of larger-scale products. One of the best-known applications of PVP is medical: it plays an important role in pharmaceutical tablets as a binder, since it passes smoothly through the body when tablets are swallowed. Mixing PVP with iodine forms a compound named povidone-iodine that has antiseptic properties. This compound is widely used, for example in liquid hand soaps and surgical scrubs. The most common commercial form, widely used as an everyday antiseptic, is Betadine, owing to its high degree of safety, availability, and low cost. PVP also acts as a good lubricant, which is why it can be used in contact lenses and their solutions to reduce friction, and it may be used in some eye-drop products.
PVP has many other practical uses and applications: it acts as a reducing agent and dispersant in nanoparticle mixtures, serves as a stabilizer in organic solar cells, increases the solubility of drugs so that it can be used as a dissolution aid, can be used as a thickening agent in tooth gels, plays an important role in agriculture as a binder that helps in crop protection, and can be used as an additive in many products such as inkjet papers, ceramics, and batteries.
EXPERIMENTAL SECTION
PVP dissolved in 100 mL of distilled water was added to metal ions dissolved in distilled water, and the mixture was stirred for 2-3 minutes. After thorough mixing, 500 µL of hydrazine hydrate was added and the mixture was heated in a microwave for 5 minutes in 30 s intervals, then dried in an oven. Methyl orange of 50 ppm concentration was prepared in distilled water and added to samples containing different weights of the different metallic ions. The samples were then placed on a shaker for different times (10, 20, 30, 40, and 50 minutes). Finally, a spectrophotometer was used to measure the resulting concentrations, and a curve of concentration versus contact time was drawn to show the effect of contact time.
XRD Characterization.
The XRD pattern of the copper chloride nanoparticles prepared by microwave-assisted synthesis is shown in figure 1. The XRD of CuCl matches reference code 01-081-1841, corresponding to a cubic structure, and the diffraction peaks are ascribed to the (111), (200), (220), (311), (222), and (400) planes. Tables 1-4 show the concentration calculations for the copper-based catalyst supported on 20, 60, 80, and 90 wt% PVP; in each case, increasing the shaking time decreases the concentration, as shown in figures 3-6. Tables 5-9 show the concentration calculations for the nickel-based catalyst supported on 20, 40, 60, 80, and 90 wt% PVP; again, increasing the shaking time decreases the concentration, as shown in figures 11-15. To find the wavelength of methyl orange, a spectrum was recorded with the spectrophotometer by placing the dye in the instrument and reading the wavelength curve; the peak was at 465 nm, which fits the known wavelength of methyl orange reported in the literature, as shown in figure 16. The methyl orange was prepared at a concentration of 50 ppm by applying the dilution equation M1V1 = M2V2. The figures compare the same sample, with the same concentrations and the same mixture of PVP with metal ions, mixed with the dye (methyl orange); the difference between them is the contact time (10, 20, 30, 40, and 50 minutes). To obtain the best contact time between the dye and the mixture, the samples should be held in a shaker at a proper RPM. The resulting graphs show that the best curves were obtained with nickel, better than both ferric and copper, and that ferric gave the poorest curves, because iron is known to present problems when mixed with other chemical compounds, based on previous experiments and articles.
The graphs also show scatter at some points, so the data do not fall on a straight line. This is attributed to deactivation of the catalyst: the catalyst adsorbs the dye, but after a certain time it does not retain the full adsorbed amount and releases some of the dye back into the solution. The graphs also show that decreasing the PVP weight percent and increasing the weight percent of the metal ions leads to more dye removal.
Iron-Based Catalyst Supported on PVP.
The standard curve in figure 17 shows the relation between concentration and absorbance. The curve was obtained by preparing different concentrations of methyl orange and measuring their absorbance. This curve makes reading off concentrations easier when samples are applied: the measured absorbance is located on the curve and the corresponding concentration is read. The fit gave R² = 0.9805, which is acceptable.
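For illustration, a minimal sketch of fitting and inverting such a standard curve, assuming Beer-Lambert linearity; the calibration numbers below are placeholders, not the paper's data:

```python
import numpy as np

# Calibration: known methyl orange concentrations (ppm) vs. measured absorbance.
conc = np.array([10.0, 20.0, 30.0, 40.0, 50.0])      # placeholder standards
absb = np.array([0.15, 0.31, 0.45, 0.62, 0.76])      # placeholder readings

slope, intercept = np.polyfit(conc, absb, 1)          # linear standard curve
r2 = np.corrcoef(conc, absb)[0, 1] ** 2               # fit quality (cf. R^2 = 0.9805)

def concentration_from_absorbance(A):
    """Invert the standard curve: C = (A - intercept) / slope."""
    return (A - intercept) / slope
```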
CONCLUSIONS
An efficient method was adopted to generate a highly active nanoparticle-based catalyst supported on graphene as an ideal support. The Pd-Fe3O4/G catalyst was synthesized via a hydrothermal synthesis technique. Furthermore, the recovery and recycling of the catalyst could be repeated up to seven times with catalytic activity remaining near 100%, thus providing high economic viability. It can be concluded from the characterization that the palladium nanoparticles and magnetite nanoparticles are uniformly dispersed on the surface of the graphene nanosheets.
This easy and efficient recycling of the catalyst is due to the magnetic properties of magnetite, leading to high yields over different substrates for Suzuki cross-coupling reactions.
Systematic review and meta-analysis of experimental studies evaluating the organ protective effects of histone deacetylase inhibitors
The clinical efficacy of organ protection interventions is limited by the redundancy of cellular activation mechanisms. Interventions that target epigenetic mechanisms overcome this by eliciting genome-wide changes in transcription and signaling. We aimed to review preclinical studies evaluating the organ-protective effects of histone deacetylase inhibitors (HDACi) with a view to informing the design of early-phase clinical trials. A systematic literature search was performed. Methodological quality was assessed against prespecified criteria. The primary outcome was mortality, with secondary outcomes assessing mechanisms. Prespecified analyses evaluated the effects of likely moderators on heterogeneity. The analysis included 101 experimental studies in rodents (n = 92) and swine (n = 9), exposed to diverse injuries, including ischemia (n = 72), infection (n = 7), and trauma (n = 22). There were a total of 448 comparisons due to the evaluation of multiple independent interventions within single studies. Sodium valproate (VPA) was the most commonly evaluated HDACi (50 studies, 203 comparisons). All of the studies were judged to have significant methodological limitations. HDACi reduced mortality in experimental models of organ injury (risk ratio = 0.52, 95% confidence interval 0.40–0.68, p < 0.001) without heterogeneity. HDACi administration resulted in myocardial, brain, and kidney protection across diverse species and injuries that was attributable to increases in pro-survival cell signaling and reductions in inflammation and programmed cell death. Heterogeneity in the analyses of secondary outcomes was explained by differences in species, type of injury, HDACi class (Class I better), drug (trichostatin better), and time of administration (at least 6 hours prior to injury better). These findings highlight a potential novel application for HDACi in clinical settings characterized by acute organ injury.
INTRODUCTION
Decades of research have yielded multiple negative clinical trials of organ protection interventions. 1,2,3 A major challenge in this field is to overcome, with a single intervention, the redundancy of the multiple pathways activated in response to injury. 2 Interventions targeting epigenetic processes offer a possible solution. Modifying the regulation of gene expression through alterations in chromatin components other than the DNA sequence can regulate the expression of multiple gene pathways that determine stress responses, energy utilization, and cell survival. 4 Multiple epigenetic mechanisms exist, ranging from DNA methylation, which elicits long-term changes in the genome, to processes with greater plasticity such as histone acetylation and deacetylation. These processes are strongly influenced by adverse environmental stimuli and have evolved to modulate a genome-wide response to stress. The ability to modify epigenetic processes raises the possibility of harnessing this genome-wide response as an organ protection intervention. Histone deacetylase inhibitors (HDACi) increase the acetylation of lysine residues in nucleosomal histones. This reduces their affinity for DNA and leads to transcriptionally active chromatin and the expression of multiple stress response genes. 5 Evidence of efficacy in preclinical models of organ injury has led us to hypothesize that HDACi may have clinical utility as organ protection interventions. The aim of the current study was to systematically review the evidence from these experimental studies and to evaluate differences in the effects of different HDACi and modes of administration across a range of experimental models, with a view to the design of early-phase clinical trials.
METHODS
Search methods, data extraction, assessment, and presentation were performed as recommended by the Cochrane Handbook for Systematic Reviews of Interventions (Version 5.1). 6 Information sources. Potentially eligible studies were identified by searching the NCBI, SCOPUS, and Ovid databases from inception until April 2018 with the following search terms: [(in vitro OR tissue OR cells OR ex vivo OR animal OR human) AND (ischemia reperfusion OR ischemia OR glucose deprivation OR ischemia OR hypoxia OR shock OR trauma OR infarct) AND (brain OR heart OR kidney OR liver) AND (valproate OR HDAC OR epigenetic OR histone acetylation)].
Search quality. To assess search quality, all searches were performed in duplicate by S.Y. with default settings from 1960 up to April 2018. Twenty-five percent of the titles were randomly selected and cross-referenced between search lists. Study selection. Two reviewers (S.Y., M.R.) independently selected eligible studies according to the prespecified inclusion and exclusion criteria. All disagreements were resolved by discussion. Following exclusion of titles that were clearly outside the scope of the review, abstracts of the remaining studies were assessed and excluded if they met any of the following criteria: (1) the study was a review paper, (2) the study was related to cancer/epilepsy/disease, (3) the study was undertaken solely on epigenetic/genetic modification, (4) the study was performed with non-HDACi treatment, or (5) the study was noninterventional. The full articles for the remaining papers were retrieved and subjected to full-text assessment. The inclusion criteria were: (1) the study was conducted in animals, humans, or cells, (2) an experimental model of acute organ injury such as ischemia-reperfusion, hypoxia, shock, trauma, or infarction was used, and (3) the study was performed in brain, heart, kidney, or liver. Studies were further excluded if: (1) they did not assess one of our predefined outcomes listed in the section below, (2) they did not evaluate our prespecified target organs of interest (e.g., eyes), or (3) outcomes were reported in fewer than 3 studies (Fig. 1).
Data extraction. Data extraction was performed by 2 independent authors (S.Y., B.E.) using a standardized proforma covering: author, journal, year of publication, animal species, strain, gender, weight, drug administration time, type of injury, type of HDACi, class of HDACi, and concentration of HDACi. HDACi classes were categorized as Class I, Class II, Class I/II, and Class III. For type of HDACi, valproic acid (VPA), trichostatin A (TSA), sodium butyrate (SB), and other HDACi-related drugs were extracted. The experimental organ injury was classified as ischemia, trauma, or infection. For each comparison, the number of animals in each group, as well as the mean and standard deviation (SD) or standard error (SEM) for continuous outcomes and the number of events for dichotomous outcomes, were extracted. Where outcomes were reported graphically but not as numerical data in the text, the software WebPlotDigitizer Version 4.1 (https://automeris.io/WebPlotDigitizer) 7 was used to extract the values from the graphs. If a published paper involved multiple groups (e.g., using different inhibitors or different concentrations), data from each group were individually extracted. Where there were multiple comparisons from the same paper, the data were treated in a pair-wise manner and included in the analysis separately. This included multiple independent comparisons reported in the same paper, or multiple treatment comparisons against the same control group. For outcomes measured over time in the same group of animals, we used the first measured time point for analysis. For studies measuring the same outcome in blood and organ tissue from the same animals, we analyzed the measurements taken from blood. Data consistency was cross-checked between two independent extraction files, and any inconsistencies were resolved by consensus.
Assessment of methodological quality. Methodological quality was assessed by two reviewers (S.Y., M.R.) against the ARRIVE checklist. 8 A random sample of papers was cross-checked, and disagreements were resolved by consensus. Methodological quality was expressed using graphics adapted from the Cochrane Handbook of Systematic Reviews Collaboration. 9 Papers were judged to be at low risk of bias if this was evident in all the ARRIVE checklist items. Data synthesis. Treatment effects were expressed as the risk ratio (RR) for dichotomous outcomes, and as the standardized mean difference (SMD) for continuous outcomes, for HDACi versus controls. Multivariate meta-analytic models were used to account for non-independence in observed effects. To account for repeated use of the same control group in multiple-armed studies, we estimated the variance-covariance matrix of the effect sizes based on Gleser 2009, 10 and fitted a multivariate random-effects model. In addition, the model included a multilevel structure that takes into account multiple independent comparisons nested within the same papers. All analyses were conducted using the R package metafor. The results are presented using tables and forest plots. For the primary analysis, we grouped mice and rats together as rodents.
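For concreteness, the two effect-size computations in minimal Python form (standard textbook formulas; the review itself used the R package metafor):

```python
import numpy as np

def log_risk_ratio(events_t, n_t, events_c, n_c):
    """log RR and its standard error for a dichotomous outcome."""
    log_rr = np.log((events_t / n_t) / (events_c / n_c))
    se = np.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    return log_rr, se

def smd_hedges_g(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Standardized mean difference with Hedges' small-sample correction."""
    s_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (m_t - m_c) / s_pooled
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)   # correction factor J
    return j * d
```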
Assessment of heterogeneity and reporting biases. Heterogeneity was assessed using the Q statistic and its p value, which tests whether the variability in the effect sizes is larger than would be expected from sampling variability alone. We investigated heterogeneity by performing subgroup analyses: we conducted moderator tests, followed by subgroup analyses if moderators were identified. Prespecified subgroups included: animal type (rats, mice), inhibitor class (Class I, II, III, I/II), inhibitor (VPA, TSA, SB, other), type of injury (ischemia, sepsis, trauma, or other), and first drug administration time (0-6 hours, 6-24 hours, >24 hours for both pre- and post-injury administration). If there were 10 or more papers in a meta-analysis, publication bias was investigated using funnel plots and Egger's test.
Searches.
A total of 4695 records were retrieved through electronic searches from PubMed (n = 1206), SCOPUS (n = 3015), OviD (n = 472), and cross-reference sources (n = 2). After the exclusion of duplicates (n = 1035), titles clearly outside the scope of the review were excluded (n = 2877). Following review of titles and abstracts, 599 studies were excluded because they were review manuscripts (n = 95), associated with viruses, cancer, or epilepsy (n = 68), focused on genetic/epigenetic modification (n = 99), performed with non-HDACi treatment (n = 209), or noninterventional studies (n = 128). A total of 184 manuscripts underwent detailed review: 50 did not report our prespecified outcome measures, 8 studied nontarget organs (e.g., eye), 16 did not evaluate prespecified metabolic stress, and 9 reported outcomes for fewer than 3 comparisons. In total, 101 manuscripts were included in the quantitative and qualitative analysis (Fig. 1a).
Included studies. The characteristics of included studies are summarized in Table I, and Supplemental Table S1. The 101 manuscripts identified in searches reported a total of 448 comparisons due to the evaluation of multiple independent interventions within single studies. The experimental models included rodents (n = 92, 414 comparisons) and swine (n = 9, 34 comparisons).
The most common experimental injury was ischemia (n = 72, 325 comparisons), followed by trauma (n = 22, 85 comparisons) and sepsis (n = 7, 38 comparisons). More than one type of HDACi was evaluated in some studies. The classes of inhibitors used included Class I (17 studies, 55 comparisons) and Class II (7 studies). All study characteristics and findings are listed in Supplemental Table S1 and summarized in Tables I and II.
The primary outcome of experimental mortality was reported in 16 rodent studies and 3 swine studies.
Brain injury was assessed in rodents and swine through the following variables: BDNF, brain infarct, GFAP, lesion volume, neurological score, and rotarod performance, reported in 4, 19, 4, 7, 15, and 7 studies, respectively.
Heart injury was evaluated in rodents and swine through cardiac output, heart dP, heart dP/dT ratio, heart EDP, heart infarct size, heart rate, MAP, and RPP, reported in 3, 6, 10, 8, 7, 15, 14, and 7 studies, respectively.
Kidney injury in rodents was assessed through BUN and creatinine in 14 studies, while liver injury was assessed by measuring ALT and AST levels in a total of 11 studies. The inflammation markers selected in rodent studies were COX-2, IL-10, IL-1β, IL-6, and TNF-α, reported in 24 studies. Measures of homeostasis included glucose, hemoglobin, and lactate levels, reported in 19 studies. Cell survival signaling was evaluated by measuring α-SMA, AKT, β-catenin, GSH, HSP70, iNOS, MMP-2, MPO, NF-κB, P-ERK, pAkt, and TBARS, reported in 40 rodent studies.
Assessment of methodological quality. The grouped assessment of methodological quality, as measured against the ARRIVE checklist, is reported in Fig. 1b, and the assessment for individual studies in Supplemental Table S2. No study was free from important methodological limitations: 87/101 studies did not specify the animal allocation method, 82/101 did not describe the reasons that animals included in the study were excluded from the analyses, 67/101 did not provide baseline data, 71/101 did not report adverse events attributable to the intervention, and 87/101 did not specify any modifications made due to adverse events. Finally, 89/101 studies did not include a sample size calculation in their experimental design. In summary, no study identified in the review was free from potential bias.

Heterogeneity was significant for all outcomes except GFAP (Q = 2.39, p = 0.653). In the swine studies, HDACi resulted in significantly lower brain lesion volumes (SMD −1.52, 95% CI −2.39 to −0.66, p = 0.001) without heterogeneity (Q = 5.04, p = 0.169) (Table II).
Programmed cell death (PCD). BAX, caspase-3, and TUNEL were lower in the HDACi treatment groups, while Bcl-2 and BrdU were higher. Heterogeneity was significant for all outcomes with the exception of BrdU (Q = 8.80, p = 0.066) (Table II).
Subgroup analyses. To investigate sources of heterogeneity (rodents: 19 primary analyses; swine: 0 primary analyses), we conducted moderator analyses to examine characteristics of the HDACi treatment and/or the type of injury associated with the overall effect estimate. Where moderators were identified, heterogeneity was further explored using subgroup analyses (Table II).
Rodent studies. In myocardial protection, the effect of the moderator "timing of HDACi administration relative to the time of injury" was significant for heart infarct size (p = 0.009): the protective effect was greater when HDACi were administered before rather than after the injury. HDACi administration 6-24 hours before the injury (SMD −3.54, 95% CI −5.19 to −1.88) had a greater effect than administration within 6 hours of the injury (SMD −2.22, 95% CI −3.85 to −0.60) (Fig. 2, b). For heart dP/dT ratio and RPP in rodents, effect sizes were moderated by the inhibitor class or the type of inhibitor (Fig. 2, c and d).
For markers of programmed cell death, inhibitor class (p = 0.001) and type (p = 0.0011) were moderators for caspase-3 and Bcl-2, respectively. Compared with the controls, the administration of Class I/II inhibitors showed a significant reduction in caspase-3.

[Figure legend: (e) rodent interleukin 6 (IL-6) by animal type. Effect sizes are presented as SMD (95% CI), and heterogeneity tests as the Q statistic, df, and p value. N, number of animals; SMD, standardized mean difference; SD, standard deviation; CI, confidence interval; df, degrees of freedom; DU, densitometry unit; FC, fold change; CT, cycle threshold.]
Levels of IL-1β were moderated by the type of injury (p = 0.021) and animal (p = 0.046), and IL-6 was moderated by animal (p = 0.030) (Table II). There was a significant reduction by HDACi of IL-1β following ischemia (SMD −2.27, 95% CI −3.63 to −0.91) and trauma (SMD −8.44, 95% CI −12.35 to −4.54) but not with other injury types (Fig. 4, c and d). The reduction in IL-6 was significant in rats (SMD −3.51, 95% CI −5.2 to −1.83) but not in mice (Fig. 4, e). None of the prespecified moderating variables were found to significantly interact with the brain outcomes BDNF and rotarod, heart injury assessed by EDP, the kidney injury outcomes BUN and creatinine, or COX-2, IL-10, or TUNEL (Table II).
Sensitivity analyses. No sensitivity analysis stratified by methodological quality was performed as all of the studies were considered at high risk of bias.
DISCUSSION
Main findings. HDACi reduce mortality as well as myocardial, brain and kidney injury in experimental models of organ injury. This effect was observed across multiple species and against diverse modes of injury. In models of myocardial injury HDACi reduced myocardial infarct volume whilst increasing measures of myocardial contractility. In models of traumatic brain injury HDACi reduced lesion size and improved functional performance. Organ protection was attributable to increases in pro-survival cell signaling and to reductions in inflammation and programmed cell death. These findings highlight a potential novel application for this class of drugs in clinical settings characterized by acute organ injury.
Strengths and limitations. This is, to our knowledge, the first study to systematically review the experimental evidence for HDACi-mediated organ protection. The review used comprehensive search strategies in a wide range of registries and data sources, had access to the full texts of all identified trials, used a contemporary risk of bias assessment, and assessed a wide range of experimental outcomes. The study also had important limitations. First, the quality assessment against the ARRIVE guidelines indicated that all of the 101 included studies had significant methodological limitations and were at risk of bias. Importantly, most studies lacked data on adverse events, which is essential when determining the balance of risks and benefits for any clinical trial. Second, assessment of funnel plots indicated likely publication bias for most outcomes, suggesting that selective reporting may have contributed to our results. This is supported by the observations that no negative published study was identified, and no pre-analysis protocols were reported. Third, heterogeneity was observed for many of the secondary outcome measures, although analysis of the effects of pre-specified modifiers on heterogeneity indicated that much of the variation was attributable to differences in species, type of injury, and type of drug. In rodent models of myocardial protection the effects of HDACi on infarct size were greatest if the intervention was administered 6–24 hours prior to the injury, and on myocardial contractility if the intervention was Class I versus Class I/II HDACi, or TSA versus other compounds. These moderators were also significant sources of heterogeneity in models of traumatic brain injury, where effects were greater when HDACi were administered within 6 hours of injury. Fourth, we included 4 studies that evaluated class III HDACi (sirtuin inhibitors), which act via mechanisms distinct from Class I, II, and IV HDACi. These studies were identified by our prespecified eligibility criteria and were therefore included in our analyses. A post-hoc analysis demonstrated that their inclusion did not materially alter our results (data not shown).
Clinical importance. The limitations of the data notwithstanding, the results demonstrate that HDACi reduce mortality in experimental models by conferring multi-organ protection, often following a single treatment administered, in some cases, after the injury. We speculate that these findings are consistent with a genome-wide activation of stress response genes via an epigenetic process or mitochondrial protection signaling. 112 This was not proven by the current analysis, however, as the evaluation of the mechanisms of action of HDACi in these studies was limited. Additionally, uncertainty as to the mechanism of action was also evident in an early phase I trial in healthy humans. Here, sodium valproate administered as a single dose (120 mg/kg over 1 hour) resulted in changes in leucocyte signaling homologous to those reported in the current analysis; however, these changes were not attributed to alterations in histone acetylation. 113 Other areas of uncertainty relate to the most effective HDACi and the timing of administration. In the current analysis TSA had greater efficacy than VPA; however, this drug has not yet been evaluated in clinical trials. 114 TSA has greater specificity for HDAC inhibition relative to VPA, supporting our primary hypothesis, and further evaluation of pan-HDACi is clearly warranted. Of the many HDACi currently undergoing clinical evaluation in cancer, HIV infection and neurological diseases, Vorinostat (SAHA) has been shown to be the most promising, with acceptable toxicity. 115 In this review Vorinostat was evaluated in 11 studies (31 comparisons) in which it was shown to be effective. VPA, the Class I/II HDACi evaluated most often in preclinical studies, is inexpensive and already widely used in neurological disease. However, even short courses of VPA have significant toxicity, particularly in elderly patients. 116,117 This may not be clinically important in acute settings such as trauma or infarction, where a single large dose would be given post-injury, but may have possible sequelae if used for planned procedures such as surgery.
CONCLUSIONS
In experimental studies HDACi administration results in organ protection against diverse injurious stimuli including ischemia, sepsis, and trauma. Major methodological limitations were identified in all of the included studies and, importantly, adverse effects and toxicity were not reported in most studies. HDACi are now undergoing clinical evaluation in multiple clinical settings. The evidence presented here supports their early phase evaluation as organ protection interventions.
The Casimir effect for fermionic currents in conical rings with applications to graphene ribbons
Abstract
We investigate the combined effects of boundaries and topology on the vacuum expectation values (VEVs) of the charge and current densities for a massive 2D fermionic field confined on a conical ring threaded by a magnetic flux. Different types of boundary conditions on the ring edges are considered for fields realizing two inequivalent irreducible representations of the Clifford algebra. The related bound states and zero energy fermionic modes are discussed. The edge contributions to the VEVs of the charge and azimuthal current densities are explicitly extracted and their behavior in various asymptotic limits is considered. On the ring edges the azimuthal current density is equal to the charge density or has an opposite sign. We show that the absolute values of the charge and current densities increase with increasing planar angle deficit. Depending on the boundary conditions, the VEVs are continuous or discontinuous at half-integer values of the ratio of the effective magnetic flux to the flux quantum. The discontinuity is related to the presence of the zero energy mode. By combining the results for the fields realizing the irreducible representations of the Clifford algebra, the charge and current densities are studied in parity and time-reversal symmetric fermionic models. If the boundary conditions and the phases in quasiperiodicity conditions for separate fields are the same the total charge density vanishes. Applications are given to graphitic cones with edges (conical ribbons).
Introduction
In the last decade, two-dimensional (2D) fermionic models have attracted considerable attention from both the experimental and theoretical points of view. Besides being simplified models in particle physics, they also appear as effective theories describing low-energy excitations of the electronic subsystem in a number of condensed matter systems [1–4]. The condensed matter realizations of 2D fermions include Weyl semimetals, graphene family materials (graphene, silicene, germanene, stanene), topological insulators, high-temperature superconductors and d-density-wave states. The dynamics of the low-energy charge carriers in these systems is governed by the Dirac equation with the Fermi velocity appearing instead of the velocity of light [5–7]. Other examples of systems with Dirac fermions include ultracold atoms confined by lattice potentials, nano-patterned 2D electron gases and photonic crystals. An important advantage of these artificial systems is that the corresponding symmetry and parameters are relatively easy to control. This provides new opportunities for studying the influence of those parameters on the dynamics of Dirac quasiparticles. The interesting effects induced by the change of the parameters include topological phase transitions, merging of the Dirac points, and generation of anisotropy in the hopping parameters.
The emergence of Dirac fermions in condensed matter systems provides an interesting possibility to observe different kinds of effects in systems of interacting fields. Here we have a situation typical for braneworld models in high-energy physics, where a part of the fields is confined on hypersurfaces (branes) whereas other fields propagate in the bulk. An example is the set of a 2D fermionic field and a 3D electromagnetic field. In quantum field theory, the interaction of the fermionic field, confined on a surface, with the fluctuations of the bulk quantized fields gives rise to Casimir-type shifts in the expectation values of physical observables (for the Casimir effect and its applications in high-energy and condensed matter physics see [8–13]). In recent years, the Casimir effect in systems involving graphene structures as boundaries has seen novel developments (see [38–40] for reviews). In [37] it has been shown that the various electronic phases of graphene family materials, tunable by external fields, lead to different scaling laws and significant magnitude changes for the Casimir forces. These features can be used to probe the 2D Dirac physics of the corresponding materials. The topology- and boundary-induced effects in interacting fermionic systems were discussed in [41–46].
In the references just cited the Casimir effect is considered for the electromagnetic field; the role of the 2D fermionic field was reduced to the generation of a boundary condition on the quantized electromagnetic field. In graphene family materials with edges (nanoribbons) or with nontrivial spatial topology (nanotubes and nanorings), Casimir-type effects appear for the quantum 2D fermionic field as well. The topological Casimir effect for the fermionic condensate, for the vacuum expectation values (VEVs) of the energy-momentum tensor and of the current density in cylindrical and toroidal nanotubes has been investigated in [47,48]. The finite temperature effects were discussed in [49]. In finite-length nanotubes, in addition to the topological parts, edge-induced Casimir contributions are present. In carbon nanotubes, these contributions depend on the chirality of the tube and have been studied in [50–52]. The Casimir effect in the more complicated geometry of hemisphere-capped tubes was considered in [53]. The condensed matter realizations of 2D fermions with curved geometries can be used to model the influence of the gravitational field on quantum matter (for various types of mechanisms for the generation of curvature in graphene and the related effects see [54–57]). Both the topological and boundary-induced Casimir effects for the charge and current densities of a fermionic field confined on curved graphene tubes with locally anti-de Sitter geometry have been discussed in [58–60].
In the present paper we investigate the effects of a planar angle deficit on the VEVs of the charge and current densities for a 2D fermionic field confined on a conical ring threaded by a magnetic flux. Among the condensed matter realizations of this system are graphitic cones. These structures are obtained from a graphene sheet by cutting one or more sectors with the angle π/3 and gluing the two edges of the remaining sector. The corresponding planar angle deficit is given by πn_c/3, with n_c = 1, 2, . . . , 5 being the number of removed sectors. Graphitic cones with all these values of the angle deficit were observed experimentally, both as caps on the ends of nanotubes and as free-standing structures (see, for instance, [61–63]). The electronic properties of graphitic cones have been discussed in [64–71]. The background geometry under consideration in the present paper, with a 2D fermionic field, corresponds to the continuum description of finite-radius graphitic cones with the apex cut off. Some limiting cases have been considered previously in the literature. The vacuum polarization effects in the boundary-free geometry with applications to graphitic cones have been discussed in [72–75,77,78]. The zero temperature fermionic condensate, the expectation values of the charge and current densities and of the energy-momentum tensor for a conical geometry with a single circular boundary were studied in [76–78]. The combined effects of the edge and of finite temperature have been considered in [79,80]. The ground state fermionic charge and current densities in planar rings were investigated in [81].
The organization of the paper is as follows. In the next section the field, the background geometry and the mode functions for a fermionic field are presented. In Sect. 3 these modes are used for the evaluation of the VEVs of the charge and current densities in conical rings. Different representations of the VEVs are given and their properties are investigated. Several limiting cases and asymptotics are discussed in Sect. 4, and numerical examples for the behavior of both the charge and current densities are presented. The charge and current densities for the fermionic field realizing the second irreducible representation of the Clifford algebra are considered in Sect. 5. Applications are given to 2D fermionic systems with parity and time-reversal symmetry and to graphene nanocones. The main results are summarized in Sect. 6. The bound states for different boundary conditions on the edges of the ring and their contributions to the VEVs of the charge and current densities are discussed in Appendix A. In Appendix B we consider the contribution of the special mode arising for half-integer values of the parameter related to the enclosed magnetic flux and to the phase in the periodicity condition along the azimuthal direction.
Problem setup and the fermionic modes
For the background geometry under consideration the (2+1)-dimensional line element is given by ds² = dt² − dr² − r²dφ², where the cylindrical spatial coordinates r and φ vary in the ranges r ≥ 0 and 0 ≤ φ ≤ φ_0. The special case φ_0 = 2π corresponds to (2+1)-dimensional Minkowski spacetime described in cylindrical coordinates. For φ_0 < 2π, the line element describes a cone with planar angle deficit 2π − φ_0 and with the apex at r = 0.
As a quantum field we consider a charged fermionic field ψ(x) in the irreducible representation of the Clifford algebra. The latter is realized by two-component spinors. Additionally, the presence of an external classical abelian gauge field A_μ will be assumed. The dynamics of the field is governed by the Dirac equation (iγ^μ D_μ − sm)ψ = 0 (2.2), where the gauge extended covariant derivative is defined as D_μ = ∂_μ + Γ_μ + ieA_μ, with Γ_μ being the spin connection, e the charge of the field quanta, and s = ±1. In (2.2) the curved-space Dirac matrices γ^l, l = 1, 2, are expressed in terms of the flat-space ones, and q = 2π/φ_0. It will be assumed that the field is confined in the region a ≤ r ≤ b (conical ring; the geometry of the problem is depicted in Fig. 1). On the edges of the ring the boundary conditions (1 + iλ_u n_μ γ^μ)ψ(x) = 0, r = u, u = a, b (2.4), will be imposed. Here n_μ is the inward-pointing unit vector normal to the boundary and the parameters λ_a and λ_b take the values ±1. For the boundary at r = u, u = a, b, in the region under consideration the normal is given by n_μ = n_u δ^1_μ. It can be shown that, as a consequence of the conditions (2.4), on the boundaries we get n_μ j^μ = 0, with j^μ = eψ̄γ^μψ being the current density and ψ̄ = ψ†γ⁰ the Dirac adjoint. This means that the normal component of the fermionic current vanishes on the edges and, consequently, the dynamics is completely determined by the field equation and the boundary conditions. The special case with λ_r = 1, r = a, b, corresponds to the MIT bag boundary condition (or infinite mass boundary condition in the condensed matter context) on both edges. Comparing the analytical results on the electronic properties of circular graphene quantum dots, derived within the Dirac model with the bag boundary condition, to those obtained from the tight-binding model, the authors of [82,83] have found good qualitative agreement between the two approaches. Considering different boundary conditions in the continuum model for graphene devices and comparing with experiments, a similar conclusion is made in [84]. Another special case, with λ_r = −1, was considered in [85]. More general boundary conditions for the confinement of fermions and their realizations in graphene-made structures have been discussed in [86–89].
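As a quick consistency check of the statement that the conditions (2.4) eliminate the normal current, the following sketch verifies n_μ j^μ = 0 numerically for a spinor satisfying the bag-type condition. The 2D gamma-matrix representation used here (γ⁰ = σ₃, γ¹ = iσ₁, γ² = iσ₂) is one common choice, assumed purely for illustration and not fixed by the text.

```python
import numpy as np

# Check that (1 + i*lambda*n_mu*gamma^mu) psi = 0 implies psi-bar gamma^1 psi = 0
# on a radial edge, where n_mu = n_u delta^1_mu. Representation below is an
# illustrative assumption: gamma^0 = sigma_3, gamma^1 = i*sigma_1.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g0, g1 = s3, 1j * s1

for lam_n in (+1, -1):                        # lambda_u * n_u = +/- 1
    # projector onto the subspace with (1 + i*lam_n*g1) psi = 0
    P = 0.5 * (np.eye(2) - 1j * lam_n * g1)
    psi = P @ (np.random.randn(2) + 1j * np.random.randn(2))
    assert np.allclose((np.eye(2) + 1j * lam_n * g1) @ psi, 0)
    j_normal = (psi.conj() @ g0 @ g1 @ psi).real   # psi-bar gamma^1 psi
    print(f"lambda*n = {lam_n:+d}:  normal current = {j_normal:.2e}")
```

Running the sketch prints values at machine precision around zero for both signs, in agreement with the statement above.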
The background geometry has nontrivial topology and, in addition to the boundary conditions on the ring edges, one needs to specify the periodicity condition along the azimuthal direction. We will assume the quasiperiodicity condition ψ(t, r, φ + φ_0) = e^{2πiχ}ψ(t, r, φ) (2.6), with a general phase 2πχ. The special cases χ = 0 and χ = 1/2 correspond to untwisted and twisted fermionic fields. The values of the parameter χ realized in graphene cones will be discussed in Sect. 5. As will be seen below, the nontrivial phase in (2.6) can be interpreted in terms of a fictitious flux threading the ring.
Here we are interested in the VEVs of the charge and current densities induced by a magnetic flux threading the conical ring. The magnetic field is localized inside the region r < a and its influence on the characteristics of the fermionic vacuum is purely topological. This is an Aharonov–Bohm type effect related to the nontrivial topology of the background space. In the region under consideration, a ≤ r ≤ b, the covariant components of the vector potential of the gauge field in the system of coordinates (t, r, φ) are given by A_μ = (0, 0, A). Note that for the corresponding physical component one has A_φ = −A/r. The magnetic flux Φ enclosed by the ring is expressed in terms of the covariant component as Φ = −φ_0 A. The physical effects on the ring are completely determined by this flux and do not depend on the radial distribution of the flux in the region r < a.
The VEV of the current density, ⟨0|j^μ(x)|0⟩ ≡ ⟨j^μ(x)⟩, can be evaluated by using the relation ⟨j^μ(x)⟩ = −(e/2) Tr[γ^μ S^(1)(x, x)], where the trace in the right-hand side is over the spinor indices and S^(1)(x, x′) is the fermion two-point function.
Its spinorial components, with spinor indices i and k, are defined as the VEV S^(1)_{ik}(x, x′) = ⟨0|[ψ_i(x), ψ̄_k(x′)]|0⟩. Let {ψ^(κ)_σ(x), κ = ±} be the complete set of positive and negative energy fermionic mode functions obeying the field equation (2.2), the boundary conditions (2.4) and the periodicity condition (2.6); they are specified by a set of quantum numbers σ. Expanding the field operator in terms of the modes and using the anticommutation relations for the fermionic annihilation and creation operators, the VEV of the current density is presented in the form of the mode sum (2.8), where the terms with κ = + and κ = − correspond to the contributions of the positive and negative energy modes. The structure of the mode functions ψ^(κ)_σ(x) is similar to that discussed in [79]. They are specified by the quantum numbers (γ, j), where j = ±1/2, ±3/2, . . . is the total angular momentum and the radial quantum number γ determines the energy of the corresponding mode, κE, with E = √(γ² + m²). Introducing the notation α = χ + eA/q = χ − eΦ/(2π) (2.9), the mode functions are presented in the form (2.10), where ε_j = 1 for j > −α and ε_j = −1 for j < −α. Note that the part eΦ/(2π) in (2.9) is the ratio of the magnetic flux threading the ring to the flux quantum Φ_0 = 2π/e. The function g_{β_j,ν}(γa, γr) of the radial coordinate r, with the orders ν = β_j and ν = β_j + ε_j, is expressed in terms of the Bessel and Neumann functions in (2.12). For the Bessel and Neumann functions we use the notation (2.13), with u = a, b, f = J, Y, and m_u = mu. When the parameter α is equal to a half-integer, the modes with j ≠ −α are still given by (2.10); in this case there is also a special mode with j = −α which is discussed separately in Appendix B. The coefficients of the linear combination of the cylinder functions in (2.12) are obtained from the boundary condition (2.4) at r = a. The further imposition of the boundary condition at r = b determines the eigenvalues of the quantum number γ as roots of equation (2.14). We will denote by z_l, l = 1, 2, . . ., the positive solutions of this equation with respect to γa, assuming that z_l < z_{l+1}. The eigenvalues of γ are expressed as γ = γ_l = z_l/a. Hence, the mode functions are specified by the set of discrete quantum numbers σ = (l, j). The energies of the positive and negative energy modes are given as E_κ = κE with E = √(γ_l² + m²). For a given quantum number j, the equations
(2.14) determining the eigenvalues γ_l of the positive and negative energy modes differ by the change of the sign of the energy. As will be discussed in Appendix A, depending on the set of parameters (s, λ_a, λ_b), purely imaginary solutions of Eq. (2.14) may be present. For all these solutions γ² + m² ≥ 0 and the vacuum state is stable. For a massless field the confinement of the field, in general, induces an energy gap that depends on the geometrical characteristics of the ring; a controllable energy gap plays an important role in graphene ribbons. For large values γa ≫ 1 we can use in (2.14) the asymptotic expressions of the cylinder functions for large arguments. In the case λ_a = −λ_b, to the leading order, the equation for the modes reduces to sin[(b − a)γ] = 0, with γ_l ≈ πl/(b − a) for large l. For λ_a = λ_b and γa ≫ 1, from (2.14) we get s(m/γ) sin x + cos x = 0 with x = (b − a)γ. For γ ≫ m this gives γ_l ≈ π(l + 1/2)/(b − a). Let us present the parameter α from (2.9) in the form α = n_0 + α_0, |α_0| ≤ 1/2 (2.15), where n_0 is an integer. Redefining j → j + n_0, we see that the solutions z_l are functions of b/a, α_0, j, s, λ_u, κ: z_l = z_l(b/a, s, λ_u, j, α_0, κ). By taking into account (2.13) it can be seen that the negative energy solutions for the set (j, α_0) coincide with the positive energy solutions for (−j, −α_0). Another relation between the roots for different sets of parameters directly follows from the definition (2.13): z_l(b/a, s, λ_u, j, α_0, κ) = z_l(b/a, −s, −λ_u, j, α_0, −κ). Combining this with the previous relation we get z_l(b/a, s, λ_u, j, α_0, κ) = z_l(b/a, −s, −λ_u, −j, −α_0, κ).
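To make the asymptotic statement concrete, the following sketch finds the roots of the reduced large-γa equation for λ_a = λ_b numerically and compares them with the half-integer asymptote. It is a simplification, not the full eigenvalue equation (2.14), and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Reduced large-gamma*a mode equation for lambda_a = lambda_b (see text):
#   s*(m/gamma)*sin(x) + cos(x) = 0,  x = gamma*(b - a),
# whose roots approach gamma_l ~ pi*(l - 1/2)/(b - a) for gamma >> m.
s, m = 1.0, 0.5      # representation parameter and mass (illustrative units)
a, b = 1.0, 4.0      # inner and outer radii of the ring (illustrative)

def mode_eq(gamma):
    x = gamma * (b - a)
    return s * (m / gamma) * np.sin(x) + np.cos(x)

# bracket sign changes on a grid, then polish each root with brentq
grid = np.linspace(0.05, 10.0, 4000)
vals = mode_eq(grid)
roots = [brentq(mode_eq, g1, g2)
         for g1, g2, v1, v2 in zip(grid, grid[1:], vals, vals[1:])
         if v1 * v2 < 0]

for l, g in enumerate(roots[:5]):
    asym = np.pi * (l + 0.5) / (b - a)   # half-integer asymptote
    print(f"root {l + 1}: gamma = {g:.4f}   (asymptote {asym:.4f})")
```

The printed roots approach the half-integer multiples of π/(b − a) from above as l grows, as expected from the asymptotic analysis.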
To complete the specification of the fermionic modes it remains to determine the normalization coefficient C_κ in (2.10). It is obtained from the standard orthonormalization condition for fermionic fields. The radial integral involving the square of the cylinder functions g_{β_j,ν}(γa, γr) is evaluated by using the result from [90]; this leads to the expression (2.17) for the normalization coefficient, written in terms of a function defined there. In deriving (2.17) we have also used relations among the cylinder functions. The model under consideration is specified by the set of parameters (χ, A). The first one determines the phase in the periodicity condition in the azimuthal direction and the second one determines the magnetic flux enclosed by the ring. These parameters are not separately gauge invariant: under a gauge transformation with the function ω = b_2φ they change as χ → χ + eb_2/q and A → A − b_2, whereas the parameter α, defined by (2.9), is gauge invariant (indeed, α′ = χ + eb_2/q + e(A − b_2)/q = α). In particular, in the gauge with b_2 = −qχ/e the fermionic field is periodic in the azimuthal direction and the phase χ is interpreted in terms of a fictitious magnetic flux −2πχ/e = −χΦ_0. In this sense, the parameter α can be considered as the ratio of an effective magnetic flux to the flux quantum.
Mode sum
In this section we evaluate the VEVs of the charge and current densities on conical rings. First we assume that all the solutions of the eigenvalue equation (2.14) are real; the modifications in the evaluation procedure required by the presence of imaginary roots are described in Appendix A. Having specified the complete set of mode functions (2.10), for the VEV (2.8) one finds the representation (3.1), where in the summation over j one has j = ±1/2, ±3/2, . . .. Here, for the charge and azimuthal current densities we have defined the functions w_{μ,β_j}(z), with E = √(z²/a² + m²) and w_{1,β_j}(z) = 0. The VEV of the radial current density vanishes. Note that the physical component of the azimuthal current density is given by j^φ = rj². We can see that under the replacements β_j ⇄ β_j + ε_j, κ → −κ, the function (2.13) transforms in such a way that the roots z_l of (2.14) are not changed; the same is true for the product of mode functions entering (3.1). Hence, we conclude that the VEVs (3.1) are odd periodic functions of the parameter α with period 1. This implies periodicity with respect to the enclosed magnetic flux with the period of the flux quantum. Of course, this is the well known feature of Aharonov–Bohm type effects.
For half-integer values of the parameter α the contribution of the modes with j ≠ −α to the VEV ⟨j^μ⟩ is still given by expression (3.1), and the contribution coming from the special mode j = −α is investigated in Appendix B. Redefining the summation variable j in (3.1), it is sufficient to consider the values α = ±1/2; for definiteness, consider the case α = 1/2. Let us present the series over j in (3.1) as Σ_{j,κ} f(β_j, β_j + ε_j, κ). In the part over the negative values of j we pass to a new summation variable, j → −j − 1, which transforms the series to the form Σ_{j,κ} f(β_j + ε_j, β_j, κ). But, as explained above, f(β_j + ε_j, β_j, κ) = f(β_j, β_j + ε_j, −κ), so the expression under the summation sign is an odd function of κ. Hence, the contributions from the positive and negative energy modes cancel each other and the modes with j ≠ −α do not contribute to the charge and current densities for half-integer values of α. As shown in Appendix B, the same is true for the contribution of the mode j = −α if λ_a = λ_b. For λ_a = −λ_b and j = −α the positive eigenvalues of γ are zeros of the function sin[γ(b − a)] and, again, their contribution vanishes; in this case the only nonzero contribution comes from the zero energy mode, and the corresponding charge density is given by (B.8). Hence, for half-integer values of the parameter α the charge and current densities vanish for the boundary conditions with λ_a = λ_b and are determined by the zero mode contribution for λ_a = −λ_b. Returning to general values of α and using the relations (2.20), one finds the values of the charge density on the ring edges.
The azimuthal current density on the edges is related to the corresponding charge density by the simple relation (3.5): on each edge the two densities coincide up to a sign determined by the boundary condition. For planar rings this relation, in the case λ_u = 1, has already been mentioned in [81].
In the problem under consideration, for points away from the boundaries the local geometry is flat and the field tensor of the external gauge field is zero. Consequently, for those points the divergences in the VEVs of local observables are the same as in (2+1)-dimensional Minkowski spacetime, and the renormalization is reduced to the subtraction of the Minkowskian parts. The renormalization group aspects of interacting (2+1)-dimensional fermionic models with applications to graphene have been considered in [91–93] (for the ultraviolet finiteness of massless QED in 2+1 dimensions see, e.g., [94] and references therein). In field-theoretical models on backgrounds with boundaries new surface divergences may arise. Those divergences in the VEV of the energy-momentum tensor have been discussed in the literature for various bulk and boundary geometries (see, for instance, [8–13]). In particular, for non-conformally coupled fields the VEV of the energy density diverges as the Dth power of the inverse distance from the boundary, with D being the spacetime dimension (D = 3 in the problem at hand). For conformally coupled fields the leading term in the asymptotic expansion of the energy density over the distance from the boundary vanishes and the divergence is weaker. The surface divergences in the VEVs of local physical observables are related to the idealization that replaces the physical interaction by the imposition of boundary conditions on all modes of the fluctuating field. In order to obtain finite values for surface quantities, more realistic physical models should be used; for example, a physical cutoff in the ultraviolet range may be provided by the microstructure of the boundary on small scales. Note that, unlike the VEVs of the energy-momentum tensor and of the fermionic condensate, in the problem under consideration the VEVs of the charge and current densities are finite on the boundaries. A similar situation takes place for fermionic models in locally anti-de Sitter spacetime with compact dimensions and in the presence of branes [59,60].
Integral representation
The representation (3.1) has two disadvantages: the roots z_l are given implicitly, as zeros of the function in (2.14), and the terms with large l are highly oscillatory. Both of these difficulties can be overcome by making use of the summation formula (3.6) from [95] (see also [96]).
Here we use the notation (2.13) for the Hankel functions H^(1,2)_ν(x) and the analogous notation for the modified Bessel functions I_ν(x) and K_ν(x). The conditions on the function w(z), analytic in the right half-plane Re z > 0, are formulated in [95]. On the imaginary axis the function w(z) may have branch points. For the series in (3.1) one has w(z) = w_{μ,β_j}(z). The functions w_{μ,β_j}(z) have branch points z = ±im_a on the imaginary axis and obey the relation w_{μ,β_j}(ze^{−πi/2}) = −w_{μ,β_j}(ze^{πi/2}) for z < m_a. By using these properties we can see that the positive and negative energy modes give the same contributions to the VEVs of the charge and current densities, and the latter are presented in the form (3.8), with u = a, b. The functions in the right-hand side of (3.9) are defined there, and for the modified Bessel functions f_ν(z) = I_ν(z), K_ν(z) we use the notation (3.11), where δ_I = 1, δ_K = −1, and u = a, b. The expressions for the VEVs of the charge and current densities contain a summation over j that enters the formulas through β_j, defined in (2.11). Redefining the summation variable j → j + n_0, with n_0 defined by (2.15), we see that the VEVs do not depend on n_0 and only the fractional part of α is physically relevant. Recall that in deriving (3.8) we have assumed that all the roots of Eq. (2.14) are real; in Appendix A it is shown that the representation (3.8) is valid also in the presence of imaginary roots corresponding to bound states. In (3.8), the part ⟨j^μ⟩_a comes from the first term in the right-hand side of (3.6) and is given by the expression (3.12). For its physical interpretation we note that the last term in (3.8) tends to zero in the limit b → ∞; this shows that (3.12) corresponds to the VEV in the region r ≥ a for a cone with a single edge at r = a. By using the identity (3.13), with ν, ρ = β_j, β_j + ε_j, it can be further decomposed as (3.14), where the separate parts come from the first and second terms in the right-hand side of (3.13). For the first part one has the representation (3.15). For the second part we rotate the contour of the integration over z by the angles π/2 and −π/2 for the terms with l = 1 and l = 2, respectively; introducing the modified Bessel functions, we get (3.17) with the notation (3.11). For the representation s = 1 and for the boundary condition with λ_a = 1, this expression for the single-boundary-induced part coincides with the one given in [77] (in comparing the formulas here with the results of [77], the replacements α → −α and α_0 → −α_0 should be made; the difference is related to the fact that in [77], for the evaluation of the VEVs in the geometry with a single boundary, the analog of the negative-energy mode functions (2.10) was used with α replaced by −α). The part ⟨j^μ⟩_0 in (3.14), with 0 < r < ∞, corresponds to the VEV in a conical space without boundaries, and the contribution ⟨j^μ⟩^(b)_a is induced in the region r ≥ a by the presence of the edge r = a.
Another representation of the VEV (3.15) in the boundary-free conical geometry for the case s = 1 is provided in [77]. The parameter s enters (3.15) only as a coefficient in the charge density and the corresponding generalization is straightforward, giving the expression (3.19), where [q/2] means the integer part of q/2, the prime on the summation sign means that for even q the term with l = q/2 should be taken with an additional coefficient 1/2, and we have introduced the functions (3.20). The boundary-free contributions to the charge density for the fields with s = +1 and s = −1 differ only in sign, whereas the azimuthal current densities coincide. We can also further transform the edge-induced contributions to the VEVs. The dependence on j enters through β_j and β_j + ε_j (see (3.11)). It can be seen that for both the series in (3.8) and (3.17) the summation over j can be rewritten with the help of (3.21) and the notation (3.22). As a consequence, the VEVs are presented in the form (3.23), where the functions W_{μ,n_p}(rx) and V^(a)_{μ,n_p}(ax, rx) are given by (3.18) and (3.9) with the replacements β_j → n_p and ε_j → 1; the same replacements should be made in the notation (3.11) for the modified Bessel functions f_ν(z) = I_ν(z), K_ν(z). Note that the ratio of the combinations of the modified Bessel functions in (3.23) can be presented in the form (3.25). Under the replacement of the parameters λ_u → −λ_u, s → −s, the function f^(u)_{n_p}(ax, rx) entering (3.25) transforms in such a way that, for the fields with the parameters (λ_u, s) and (−λ_u, −s), the VEVs of the charge densities differ in sign, whereas the current densities are the same. The expression (3.23) explicitly shows that both the charge and current densities are odd periodic functions of the magnetic flux threading the ring, with the period equal to the flux quantum. The periodicity of physical characteristics in the magnetic flux is a common feature of Aharonov–Bohm type effects.
As has already been mentioned, the part of (3.23) with the second term in the square brackets tends to zero in the limit b → ∞. For a massive field and for fixed r and a, that part decays exponentially, like e^{−2bm}, for b → ∞. In the case of a massless field the decay, as a function of b, is power-law: as (a/b)^{q(1−2|α_0|)+1} for the charge density and as (a/b)^{q(1−2|α_0|)+2} for the azimuthal current. Once again, this shows that the contribution (3.14) corresponds to the VEVs outside a single boundary at r = a, while the part with the second term in the square brackets of (3.23) is induced by the outer boundary.
Another representation
The representation (3.8) for the charge and current densities in the ring is not symmetric with respect to the inner and outer edges. An alternative representation, with the outer-boundary part extracted, is obtained from (3.8) by making use of the relation (3.27), with ν, ρ = β_j, β_j + ε_j. The expressions for the VEVs of the charge and current densities take the form (3.28), in which the first term in the right-hand side is further decomposed into boundary-free and edge-induced parts, with the notations defined in accordance with (3.24) and the functions in the integrand given explicitly. For the special case with s = 1 and λ_b = 1 the expression (3.30) coincides with the corresponding result of [77] (with the replacement α → −α) for the VEVs inside a single circular boundary at r = b. Passing from the summation over j to the summation over n in accordance with (3.21), we obtain the final representation (3.32). For the ratio under the sign of the real part in (3.32) we have the explicit expression (3.33), whose denominator is positive for z ≥ m_u. Relatively simple expressions are obtained for a massless field. In the limit a → 0 the second term in the square brackets of (3.32) behaves like a^{q(1−2|α_0|)} and vanishes for |α_0| < 1/2. From here it follows that the part ⟨j^μ⟩_b corresponds to the VEV in the region r ≤ b for the geometry of a single boundary at r = b with a special boundary condition at the cone apex; the latter corresponds to the imposition of the boundary condition (2.4) on the circle r = a with the subsequent limiting transition a → 0. The part with the last term in the square brackets of (3.32) can be interpreted as the contribution of the inner boundary.
Limiting cases and numerical analysis
In this section we consider some limiting cases of the general results given above and present a numerical analysis of the behavior of the charge and current densities as functions of the parameters of the model. The limiting transitions a → 0 and b → ∞ have already been discussed in the previous section: the contributions to ⟨j^μ⟩, μ = 0, 2, induced by adding the second boundary to the geometry of a single boundary decay as a^{q(1−2|α_0|)} in the limit a → 0 and, for a massive field, as e^{−2bm} in the limit b → ∞, the decay of the outer-boundary contribution for a massless field being power-law in a/b. The limiting transition to the geometry of a conical space with a single boundary at r = b, corresponding to a → 0, can also be seen at the level of the mode functions and of the eigenvalues of the radial quantum number γ: in that limit the radial functions reduce to Bessel functions of the first kind, J_ν, with the corresponding normalization coefficient. For χ = 0 and λ_b = 1 this result coincides with that given in [79]. As shown above, for λ_a = λ_b the charge and current densities vanish for half-integer values of α, corresponding to α_0 = ±1/2. That property can also be seen on the basis of the representation (3.23). Let us consider the case α_0 → 1/2. In (3.23), in the part with p = +1 we pass to the summation over n + 1 and then relabel the summation variable. All the terms with n = 1, 2, . . . in the parts with p = +1 and p = −1 cancel each other and the only nonzero contribution comes from the n = 0 term in the part with p = −1. The expressions for the VEVs of the charge and current densities are then obtained from (3.23) by omitting the summation over n and taking n_p = −1/2. By using the expressions for the functions I_{±1/2}(x) and K_{1/2}(x), it can be seen that the real part of the last term in (3.23) vanishes for μ = 0, 2 and, hence, lim_{α_0→1/2}⟨j^μ⟩ = lim_{α_0→1/2}⟨j^μ⟩_a, where ⟨j^μ⟩_a is decomposed as in (3.14). Transforming the part ⟨j^μ⟩^(b)_a for μ = 0, 2 and using (3.19), we can see that the limiting value lim_{α_0→1/2}⟨j^μ⟩_0 exactly cancels the last term in (4.6) and we get lim_{α_0→1/2}⟨j^μ⟩ = 0. In particular, for λ_a = λ_b the charge and current densities are continuous functions of the magnetic flux. This is not the case for the VEVs in the boundary-free conical geometry and also inside a single circular boundary (see also the discussion in [77]): the VEVs ⟨j^μ⟩_0 and ⟨j^μ⟩_b tend to nonzero values in the limit α_0 → 1/2 and the corresponding charge and current densities are discontinuous functions of α at half-integer values of this parameter. Note that in the case λ_a = −λ_b and in the limit α_0 → 1/2 the expression under the Re sign in (3.23) has a pole whose contribution should be appropriately taken into account; the nonzero limiting values of the charge and current densities for the boundary conditions with λ_a = −λ_b are related to that contribution.
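For reference, the half-integer-order modified Bessel functions invoked in the α_0 → 1/2 limit are elementary; the standard identities (textbook formulas, not taken from the text above) read

$$
I_{1/2}(x)=\sqrt{\frac{2}{\pi x}}\,\sinh x,\qquad
I_{-1/2}(x)=\sqrt{\frac{2}{\pi x}}\,\cosh x,\qquad
K_{1/2}(x)=\sqrt{\frac{\pi}{2x}}\,e^{-x},
$$

so products of the type I_{±1/2}(ax)K_{1/2}(rx) reduce to elementary exponentials, which is what makes the cancellations described above explicit.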
The different behavior of the VEVs in the limits α_0 → ±1/2 for the cases λ_a = λ_b and λ_a = −λ_b is seen in Figs. 2 and 3. In those figures we have plotted the charge (full curves) and current (dashed curves) densities at the radial point r/a = 2 as functions of the parameter α for a conical ring with q = 1.5, b/a = 4, and for the mass corresponding to ma = 0.5. Figure 2 corresponds to the field with s = 1 and Fig. 3 to s = −1. The left and right panels of both figures are plotted for λ_a = λ_b = 1 and λ_a = −λ_b = 1, respectively. As seen from the graphs, the behavior of the VEVs near half-integer values of α is essentially different for the cases λ_a = λ_b and λ_a = −λ_b (left and right panels, respectively). In the first case the VEVs vanish at those values (corresponding to α_0 = ±1/2) and they are continuous periodic functions of the magnetic flux. For λ_a = −λ_b the charge and current densities tend to nonzero limiting values as α_0 → ±1/2 and, as a consequence, they are discontinuous at half-integer values of α. Discontinuities of this kind are also present for persistent currents in mesoscopic normal-metal rings; they appear due to the degeneracy of the energy levels at the corresponding values of the magnetic flux (see, for example, [97]). As discussed in Appendix B, in the case λ_a = −λ_b there is a zero energy mode for the angular quantum number j = −α, and the nonzero values of the charge and current densities for α_0 = ±1/2 are related to the contribution of that mode. We note that for the case λ_a = −λ_b (right panels) the approximate relation j^φ ≈ −λ_a j^0 between the charge and current densities holds to good accuracy also for other values of the radial coordinate; this relation is exact for the zero energy mode. We have also numerically checked that the limiting values of the charge and current densities for the boundary conditions with λ_a = −λ_b, obtained from (3.23) as α_0 → ±1/2, coincide with the contribution of the zero mode (B.8) for α_0 = ±1/2. Now we turn to the investigation of the radial dependence of the VEVs. In Fig. 4 the charge (left panel) and current (right panel) densities are depicted as functions of r/a for a massless field and boundary conditions with λ_a = λ_b = 1. The graphs are plotted for b/a = 8, α_0 = 1/4, and the numbers near the curves correspond to the values of the parameter q. The curve for q = 1 corresponds to a planar ring. As seen, the presence of the angle deficit may essentially increase both the charge and current densities. For the example presented in Fig. 4 the ratio j^0/e is negative near the inner edge and positive near the outer edge. The ratio j^φ/e is positive.
It is of interest to investigate the dependence of the VEVs on the values of the parameters (s, λ_a, λ_b). Figures 5 and 6 display the radial dependence of the charge and current densities for different sets (s, λ_a, λ_b) in the case of a massive field with the mass corresponding to ma = 0.5. The graphs are plotted for q = 1.5, b/a = 8, α_0 = 1/4. The curves with μ = 0 correspond to the charge density j^0 and the curves with μ = 2 correspond to the physical azimuthal component j^φ = rj² of the current density. Figure 5 corresponds to the fields with (s, λ_a, λ_b) = (1, 1, 1) (left panel) and (s, λ_a, λ_b) = (1, 1, −1) (right panel). In Fig. 6, (s, λ_a, λ_b) = (−1, 1, 1) for the left panel and (s, λ_a, λ_b) = (−1, 1, −1) for the right panel. The graphs for other sets of the parameters (s, λ_a, λ_b) are obtained from the ones depicted in Figs. 5 and 6 by taking into account that under the reflection (s, λ_a, λ_b) → (−s, −λ_a, −λ_b) the charge density is an odd function and the current density is an even function. As seen, the charge and current densities are mainly located near the edges, inner or outer. The numerical data confirm the relation (3.5) between the charge and current densities on the edges of the ring. An important point to mention here is that the VEVs of the charge and current densities are finite on the edges of the ring; this is not the case, for example, for the fermionic condensate or for the VEV of the energy-momentum tensor.
Comparing the left panel of Fig. 5 with the graphs in Fig. 4, we see that for a massive field the VEVs are essentially smaller. This need not be the case for other sets of the parameters (s, λ_a, λ_b). In order to see the dependence of the VEVs on the field mass, in Fig. 7 we plot the charge and current densities as functions of ma for fixed values b/a = 8, r/a = 2, α_0 = 1/4 and for the field with s = 1; the numbers near the curves are the values of q. The same graphs for the field with s = −1 are presented in Fig. 8. As the numerical results show, the dependence on the mass is essentially different for the cases s = 1 and s = −1. For the parameters corresponding to Fig. 7 the VEVs decrease (in modulus) with increasing mass. For the example corresponding to Fig. 8, both the charge and current densities initially increase in modulus with increasing mass and take their maximal or minimal values at some intermediate value of ma; a further increase of the mass, as expected, leads to the suppression of the VEVs.
VEVs for two irreducible representations of the Clifford algebra
The fermionic field we have considered lives in two-dimensional space. In an even number of spatial dimensions there are two inequivalent irreducible representations of the Clifford algebra. In this section we show how the VEVs of the charge and current densities for fields realizing those representations are obtained from the results given above. We distinguish the different representations by the parameter s, taking the values −1 and +1 (as will be seen below, it coincides with the parameter s introduced before in front of the mass term in the Dirac equation (2.2)). The corresponding sets of 2 × 2 Dirac matrices will be denoted by γ^μ_(s) = (γ^0, γ^1, γ^2_(s)) and the related fields by ψ_(s)(x).
In the corresponding boundary conditions (5.1), the parameters λ^(s)_r can also differ between the two representations.
Charge and current densities in parity and time-reversal symmetric models
In two spatial dimensions, the mass term in the Lagrangian density for a two-component fermionic field ψ(x) is not invariant under the parity (P) and time-reversal (T) transformations. In the absence of magnetic fields, P- and T-symmetric models can be constructed by combining two fields realizing the different irreducible representations of the Clifford algebra and having the same mass. In accordance with the consideration of the previous subsection, the Lagrangian density for this set of fields, denoted as before by ψ_(s), s = ±1, can be written in two equivalent forms (see the sketch below), where ψ′_(+1) = ψ_(+1) and ψ′_(−1) = γ^0γ^1ψ_(−1). The total current density is given by the formula J^μ = e Σ_{s=±1} ψ̄_(s) γ^μ_(s) ψ_(s) or, equivalently, by J^μ = e Σ_{s=±1} ψ̄′_(s) γ^μ ψ′_(s). The separate fields obey the boundary conditions (5.1) or, in terms of the primed fields, the conditions (1 + isλ^(s)_r n_μγ^μ)ψ′_(s)(x) = 0. Note that, because of the factor s in front of the term with the normal to the boundary, the fields ψ′_(+1) and ψ′_(−1) obey different boundary conditions if the fields ψ_(+1) and ψ_(−1) are constrained by the same boundary condition, and vice versa.
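A sketch of the two equivalent forms referred to above, reconstructed from the surrounding definitions (the overall conventions, in particular the placement of the parameter s, are an assumption consistent with Eq. (2.2) and the primed-field definitions):

$$
\mathcal{L}=\sum_{s=\pm1}\bar{\psi}_{(s)}\big(i\gamma^{\mu}_{(s)}D_{\mu}-m\big)\psi_{(s)}
=\sum_{s=\pm1}\bar{\psi}'_{(s)}\big(i\gamma^{\mu}D_{\mu}-sm\big)\psi'_{(s)}.
$$

In the first form the mass term has the same sign for both fields, while in the second form the two primed fields satisfy the Dirac equation (2.2) with s = ±1.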
We can combine the two-component fields ψ_(s)(x) into a single 4-component spinor field Ψ = (ψ_(+1), ψ_(−1))^T with the Lagrangian density (5.4). For λ^(+1)_u = λ^(−1)_u, the fields ψ_(+1) and ψ_(−1) in the Lagrangian density (5.3) obey the same boundary conditions. In this case the boundary condition for the transformed field ψ′_(−1) differs from the condition for the field ψ′_(+1) = ψ_(+1) by the sign of the term containing the normal to the boundary. As shown above, the charge density is an odd function under the replacement (s, λ_u) → (−s, −λ_u), whereas the azimuthal current density is an even function. From here we conclude that, in the model involving two fields ψ_(+1) and ψ_(−1) with the same masses and the same phases in the periodicity condition (2.6), obeying the boundary conditions (5.1) with λ^(+1)_u = λ^(−1)_u, the VEV of the total charge density vanishes, ⟨J^0⟩ = 0, while for the VEV of the total current density one gets ⟨J^2⟩ = 2⟨j^2⟩, where ⟨j^2⟩ is given by (3.23) with μ = 2 and with s = 1, λ_u = λ^(+1)_u.
In models with two fields ψ_(+1) and ψ_(−1), realizing inequivalent irreducible representations of the Clifford algebra, a nonzero vacuum charge density may appear if the corresponding boundary conditions are different (λ^(+1)_u ≠ λ^(−1)_u) or if the masses of the fields differ. Note, however, that a difference in the masses will break the parity and time-reversal symmetry of the model. Another possibility for the appearance of a nonzero charge density is realized in models with different phases in the periodicity conditions (2.6) for the fields ψ_(+1) and ψ_(−1). The latter situation takes place in semiconducting carbon nanotubes, where the fields under consideration describe the electronic subsystem of graphene tubes.
Current density in graphitic cones
Among the important realizations of 2D fermionic models is graphene. The existence of various classes of graphene allotropes, like carbon nanotubes, fullerenes, graphitic cones, nanoloops and nanohorns, makes graphene an exciting arena for the investigation of the effects of geometry, topology and boundaries on the properties of a quantum fermionic field. Recently, a number of mechanisms have been suggested (see, for example, [98–100]) to generate effective curved background geometries for Dirac fermions in graphene. In particular, they include various types of external fields, lattice deformations, and local variations of the Fermi velocity. The advantage of these graphene-based artificial systems in modelling the influence of gravity on quantum matter is that one can tune the geometrical characteristics of the background spacetime in a controlled manner.
In the long wavelength approximation, the effective field theory for the electronic subsystem in graphene is formulated in terms of 4-component spinors Ψ_S = (ψ_{+,AS}, ψ_{+,BS}, ψ_{−,AS}, ψ_{−,BS})^T, where S = ±1 corresponds to the spin degree of freedom. Each such spinor is decomposed into two 2-component spinors, ψ_+ = (ψ_{+,AS}, ψ_{+,BS}) and ψ_− = (ψ_{−,AS}, ψ_{−,BS}), corresponding to the two inequivalent corner points K_+ and K_− of the hexagonal Brillouin zone of graphene. These two valleys are related by time-reversal symmetry. The separate components ψ_{±,AS} and ψ_{±,BS} give the amplitude of the electron wave function on the triangular sublattices A and B of the graphene hexagonal lattice. In standard units, with the speed of light c and the Planck constant ℏ, the Lagrangian density in the effective field theory is presented as (5.6), where l = 1, 2, e is the electron charge and v_F ≈ 7.9 × 10^7 cm/s is the Fermi velocity of the electrons. The energy gap Δ, introduced in (5.6), is related to the Dirac mass m by Δ = mv_F². A number of mechanisms have been considered in the literature for the generation of an energy gap in the range 1 meV ≲ Δ ≲ 1 eV (see, for example, [5] and references therein). The energy scale in the model is determined by the parameter γ_F = ℏv_F/a_0 ≈ 2.51 eV, where a_0 ≈ 1.42 Å is the inter-atomic spacing of the graphene honeycomb lattice. For the Compton wavelength related to the energy gap one has a_C = ℏv_F/Δ. For a given S, the charge density corresponding to the Lagrangian (5.6) is given by J^0 = eΨ̄_S(x)γ^0_(4)Ψ_S(x), and the current density is given by the analogous expression with the spatial Dirac matrices. The separate parts in (5.6) for given S are the analog of the Lagrangian density (5.4) discussed before. The two-component fields ψ_(+1) and ψ_(−1) correspond to the fields ψ_+ and ψ_−. Hence, the parameter s in the discussion above enumerates the valley degrees of freedom in graphene. On the basis of this analogy, we can apply the formulas for the charge and current densities given above to graphene conical ribbons with edges r = a and r = b. Graphene nanocones have attracted considerable attention due to their potential applications such as probes for scanning probe microscopy, electron emitters, tweezers for nanomanipulation, energy storage, gas sensors, and biosensors. In the problem under consideration the separate parts with S = ±1 give the same contributions to the VEVs and we can consider the VEVs for a given spin degree of freedom, omitting the index S; the total VEVs are obtained with an additional factor of 2. As already mentioned in the Introduction, for the opening angle of graphitic cones one has φ_0 = 2π(1 − n_c/6), with n_c = 1, 2, . . . , 5 being the number of sectors removed from the planar graphene sheet. The analog of the quasiperiodicity condition (2.6) in graphene cones has been discussed in [64,66,68,71]. For graphene cones with odd values of n_c it mixes the valley indices through the factor e^{−iπn_cτ_2/2}, where the Pauli matrix τ_2 acts on those indices. The corresponding condition can be diagonalized by a unitary transformation that diagonalizes the matrix τ_2.
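As a numerical illustration of the scales involved, the following sketch evaluates the Compton wavelength a_C = ℏv_F/Δ for gap values spanning the quoted 1 meV–1 eV range; the specific Δ values are illustrative assumptions, while v_F and a_0 are the values given above.

```python
# Back-of-envelope scales for the graphene realization (a sketch; the
# gap values Delta are illustrative assumptions).
hbar_eVs = 6.582119569e-16   # hbar in eV*s
v_F = 7.9e5                  # Fermi velocity, m/s (= 7.9e7 cm/s)
a0 = 1.42e-10                # inter-atomic spacing, m

for Delta_eV in (1e-3, 0.1, 1.0):       # energy gap Delta = m*v_F^2
    a_C = hbar_eVs * v_F / Delta_eV     # Compton wavelength hbar*v_F/Delta, m
    print(f"Delta = {Delta_eV:7.3f} eV  ->  a_C = {a_C * 1e9:9.2f} nm"
          f"  ({a_C / a0:8.1f} lattice spacings)")
```

For Δ = 1 meV this gives a_C on the order of half a micron, while for Δ = 1 eV the Compton wavelength shrinks to a few lattice spacings, which sets the scale of the replacement mu → u/a_C used below.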
For even values of n_c the spinors corresponding to different valleys are not entwined and an additional diagonalization is not required. Taking into account that only the fractional part of the parameter χ is relevant in the evaluation of the VEVs, it can be seen that the two inequivalent values of the parameter χ realized in graphitic cones correspond to χ = ±1/3. Note that the same inequivalent values of the periodicity phase are realized in semiconducting carbon nanotubes (in metallic nanotubes χ = 0). The fermionic current density in cylindrical and toroidal carbon tubes has been investigated in [48,52].
For a given spin S, the ground state charge and current densities in graphitic cones are obtained from the results in Sect. 3 in accordance with the procedure described in the previous subsection, adding an additional factor v_F for the azimuthal current density. In translating the results given above to graphene-made structures it is convenient to make the replacements mu → u/a_C, u = a, b, r, in the corresponding formulas. If the energy gap is the same for both valleys, the net charge density vanishes as a consequence of the cancellation between the contributions from the different valleys. However, there exist gap generation mechanisms in graphene that break the valley symmetry (for example, chemical doping), and one can have a situation with different masses for the fields ψ_+ and ψ_−. In this case there is no cancellation of the corresponding contributions to the charge density. Note that magnetic flux induced currents in planar graphene rings have been investigated in [81,101–108]. Based on the concept of branes, a model for the emergence of a current density in graphene in the presence of defects has been discussed recently in [109].
Conclusion
The notion of the vacuum in quantum field theory has a global nature and its properties are sensitive to both the local and global characteristics of the background spacetime. In the present paper we have investigated the combined effects of boundaries, topology and magnetic flux on the ground state mean charge and current densities for a fermionic field in two-dimensional conical rings with arbitrary values of the angle deficit. The boundary conditions for the field operator on the ring edges are specified by the set of parameters (λ_a, λ_b). In the special case (1, 1) they reduce to the standard MIT bag boundary condition (the infinite mass boundary condition in the context of 2D fermionic systems). An additional parameter s in front of the mass term in the Dirac equation corresponds to the two inequivalent irreducible representations of the Clifford algebra in (2+1)-dimensional spacetime. The fermionic mode functions are presented in (2.10), where the allowed values of the radial quantum number depend on the specific boundary condition and are roots of equation (2.14). For the whole family of boundary conditions we have considered, the vacuum state is stable and for all the roots γ² ≥ −m². For fields with (s, λ_a, λ_b) = (±1, ±1, ±1) all the eigenvalues of γ are real. In the remaining cases, depending on b/a and ma, purely imaginary eigenvalues γ = iη/a, 0 < η < ma, may appear, corresponding to bound states. For half-integer values of the parameter α from (2.15) and under the condition λ_a = −λ_b there is also a zero mode with the value of the total angular momentum j = −α.
The VEVs of the charge and current densities are evaluated as mode sums over the bilinear products of the mode functions. The VEV of the radial current vanishes, and the contribution of the modes with positive γ to the charge and azimuthal current densities is given by (3.1). In the presence of a bound state or the zero mode, the corresponding contributions, given by (A.8) and (B.8), should be added to (3.1). The charge and current densities on the ring edges are connected by the simple relation (3.5), valid for the whole family of boundary conditions. For half-integer values of α the charge and current densities vanish for boundary conditions with λ_a = λ_b. For conditions with λ_a = −λ_b the only nonzero contribution comes from the zero mode. In the latter case the charge and current densities are discontinuous functions of α (in particular, of the magnetic flux enclosed by the ring) at half-integer values of that parameter.
In the representation (3.1) the summation goes over the eigenvalues of γ, given implicitly as roots of Eq. (2.14). Explicit knowledge of those roots is not required if we apply the summation formula (3.6) to the corresponding series. In the presence of bound states an additional term of the form (A.12) should be added to the right-hand side of (3.6). We have shown that this additional term exactly cancels the contribution coming from the bound states, so the integral representation (3.8) is valid for all the sets of parameters (s, λ_a, λ_b). The first term on the right-hand side of (3.8) corresponds to the VEV in the conical geometry with a single boundary at r = a, and the last term is interpreted as the contribution induced by the second edge at r = b. The former part is further decomposed as (3.14) into the boundary-free and edge-induced contributions, given by (3.19) and (3.17), respectively. An alternative representation, in which the part corresponding to the problem inside a single circular boundary is extracted, is given by (3.28). As a general rule, the moduli of both the charge and current densities increase with increasing planar angle deficit (with increasing q). Depending on the boundary condition, determined by the pair (λ_a, λ_b), the charge and current densities are mainly located near the inner or the outer edge (see Figs. 4-6). We have demonstrated that the behavior of the VEVs as functions of the mass can be essentially different for fields with s = +1 and s = −1. In the former case, for the boundary condition (λ_a, λ_b) = (1, 1), the absolute values of the charge and current densities decrease with increasing field mass. In the case s = −1, for the same boundary condition, the absolute values of both densities first increase with the mass and, after reaching a maximum, tend to zero for large masses, as expected.
It is well known that in two spatial dimensions the fermionic mass term breaks both the parity and time-reversal invariances. P- and T-symmetric fermionic models are constructed by considering a pair of fields, ψ^(+1) and ψ^(−1), with the same masses, realizing the two inequivalent irreducible representations of the Clifford algebra. The VEVs of the charge and current densities for the field corresponding to the second representation and obeying the boundary condition (5.1) are obtained from the formulas in Sect. 3.1 with s = −1 and λ_u = −λ_u^(−1), u = a, b. If, in addition to the masses, the phases in the periodicity condition along the azimuthal direction and the boundary conditions on the edges are the same for the fields ψ^(+1) and ψ^(−1), then the total charge density vanishes, whereas the total current density doubles. In the effective low-energy theory for the electronic subsystem of graphene, the fields ψ^(+1) and ψ^(−1) correspond to the two inequivalent points of the Brillouin zone (the valley degrees of freedom), and the results obtained in the present paper can be applied to the investigation of the charge and current densities induced by an Aharonov-Bohm magnetic flux in graphitic cones. The two inequivalent values of the phase 2πχ realized in graphitic cones correspond to ±2π/3, and for the parameter q one has q = 1/(1 − n_c/6). It is of interest to note that valley-dependent gap-generation mechanisms (for a recent discussion see [110] and references therein) create different masses for the fields ψ^(+1) and ψ^(−1) and, as a result, a nonzero net charge density appears. This breaks the time-reversal symmetry.
We have considered the expectation values of the charge and current densities at zero temperature. An important issue is the generalization of the corresponding results to finite temperatures. The finite-temperature fermionic charge and current densities in conical spaces without boundaries have been discussed in [79,111]. The combined effects of finite temperature and a circular boundary on the fermionic condensate in (2+1)-dimensional conical spacetime are discussed in [80]. At finite temperatures the expectation values of the charge and current densities can be decomposed into the corresponding VEVs and contributions coming from particles and antiparticles. Having the complete set of mode functions (2.10), the latter contributions for the problem under consideration can be evaluated in a way similar to that used in [80] for the fermionic condensate in the conical geometry with a single boundary. As for the fermionic condensate, we expect that finite-temperature effects suppress the boundary-induced contributions to the expectation values of the charge and current densities. Note that the finite-temperature Casimir interaction between a suspended graphene layer, described by the Dirac model for quasiparticles, and a parallel conducting surface has been investigated in [14-17].
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: This is a theoretical study and has no experimental data attached.]

Appendix A: Bound states and their contribution to the VEVs

In the limit b/a → ∞ the equation for the bound states reduces to K^(a)_{β_j}(η) = 0. The latter is the equation for the bound states in a conical space with a single boundary at r = a and has no solutions for sλ_a > 0. In this case, in the limit b/a → ∞, the possible bound states determined from (A.4) tend to ma. If there is a bound state in the geometry of a single boundary, then in the limit b/a → ∞ the corresponding bound state for a conical ring (with the same values of the set (s, λ_a, j, α_0, κ)) tends to a limiting value different from ma. These two situations are illustrated in Fig. 9, where we have plotted the radial quantum number η for the bound states as a function of the ratio b/a for (s, λ_a, λ_b) = (−1, −1, 1) (left panel) and (s, λ_a, λ_b) = (1, −1, −1) (right panel). The graphs are plotted for κ = +, ma = 3 and α_0 = 1/4, and the numbers near the curves are the values of j. The dashed and full curves correspond to q = 1 (planar ring) and q = 1.5, respectively. For the left panel sλ_a > 0 and there is no bound state in the conical geometry with a single boundary r ≥ a; in this case the bound states tend to ma for b/a ≫ 1. For the right panel the equation K^(a)_{β_j}(η) = 0 has a solution, and it is the limiting value of the bound state as b/a → ∞. The right panel also shows that bound states appear only starting from some critical value of b/a. By using the relations between the bound states for different sets of parameters, we see that the graphs in Fig. 9 also give the locations of the bound states for the set (−j, −α_0, −κ), or for the set (−s, −λ_u, −κ), with the same values of the remaining parameters.
It is of interest to compare the numbers of positive- and negative-energy bound states for given values of the other parameters; this comparison is presented in Fig. 10. If bound states are present, their contribution should be added to the right-hand side of (3.1). The corresponding mode functions are given by the analog of (2.10) with γ = iη/a, where B_u is defined in accordance with (2.19). Substituting these mode functions into the mode sum (2.8), the contribution of the bound states to the VEV ⟨j^μ⟩ is given by (A.8), with u = a, b. For the azimuthal current density on the edges we again have the relation (3.5).
For the evaluation of the sum over l in (3.1) we can again apply the Abel-Plana-type formula (3.6). However, in the presence of the modes γa = iη the summation formula (3.6) is modified: an additional term appears on the right-hand side, coming from the poles z = ±iη. The derivation of the summation formula from the generalized Abel-Plana formula is similar to that for (3.6) presented in [95]. The difference is that now the function g(z) in the generalized Abel-Plana formula has poles z = ±iη on the imaginary axis.
In the corresponding integral these poles should be avoided by small semicircles in the right half-plane with centers at z = iη and z = −iη. The contributions of the integrals over these semicircles combine, up to the coefficient −π²/4, into the term (A.11). The summation formula for the series over the positive roots γ_l is then obtained from (3.6) by adding the term (A.11) to the right-hand side of that formula. After applying the summation formula (3.6) with the additional term (A.11), the contribution to the current density from the modes with γ = γ_l is given by (3.8) plus the part coming from (A.11). Taking into account that w_{μ,β_j}(ze^{−πi/2}) = −w_{μ,β_j}(ze^{πi/2}) for z < ma, the additional term in the VEV ⟨j^μ⟩ is brought to the form (A.12), and we see that the contribution (A.12) is the same as (A.8) but with the opposite sign. From here we conclude that the contribution of the bound states to the total VEV ⟨j^μ⟩ is cancelled by the contribution of the additional term (A.11) in the summation formula for the modes γ_l. Hence, all the representations for the charge and current densities given above, starting from (3.8), remain valid in the presence of bound states as well.

For the zero mode, the contribution to the charge density is given by (B.8), and for the azimuthal current density one has j²_{(0)(s)} = −λ_a j⁰_{(0)(s)}/r for all values a ≤ r ≤ b. The reason for the appearance of two signs in the presence of the fermionic zero mode is the same as that discussed in [112].
Appendix B: Special mode and its contribution to the VEVs
In the case λ_b = λ_a, the boundary condition (B.4) leads to the equation

cos z + (λ_a s m/γ) sin z = 0,   (B.9)

with z = γ(b − a). It is the same for the positive- and negative-energy modes. Note that Eq. (B.9) coincides with the eigenvalue equation for a finite-length cylindrical tube (see [52] for the case λ_a s = 1). For the solutions of (B.9) the expression (B.3) simplifies to C^{−2}_{(s)} = 1 − sin(2z)/(2z) and, again, is the same for the modes κ = + and κ = −. Hence, as in the previous case, the contributions of the positive- and negative-energy modes cancel each other in the VEVs (B.5).
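Because γ enters (B.9) both through z = γ(b − a) and through the coefficient λ_a s m/γ, the equation is transcendental and its positive roots are most easily located numerically. The following minimal sketch (ours, not code from the paper; parameter values are purely illustrative) brackets sign changes on a grid and refines them with Brent's method:

```python
# Minimal numerical sketch (ours, not code from the paper): locate the positive
# roots gamma of the eigenvalue equation (B.9),
#     cos z + (lambda_a * s * m / gamma) * sin z = 0,   z = gamma * (b - a),
# by scanning a grid for sign changes and refining with Brent's method.
import numpy as np
from scipy.optimize import brentq

def eq_B9(gamma, a, b, m, lam_a, s):
    z = gamma * (b - a)
    return np.cos(z) + (lam_a * s * m / gamma) * np.sin(z)

def positive_roots(a=1.0, b=2.0, m=3.0, lam_a=1.0, s=1.0, gamma_max=30.0, n=3000):
    grid = np.linspace(1e-6, gamma_max, n)
    vals = [eq_B9(g, a, b, m, lam_a, s) for g in grid]
    roots = []
    for g1, g2, f1, f2 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if f1 * f2 < 0:                      # sign change brackets a root
            roots.append(brentq(eq_B9, g1, g2, args=(a, b, m, lam_a, s)))
    return roots

if __name__ == "__main__":
    for g in positive_roots()[:5]:
        print(f"gamma = {g:.6f}")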
Concluding the analysis in this section: for half-integer values of α the special mode with angular momentum j = −α does not contribute to the VEVs of the charge and current densities in the case λ_b = λ_a. In the case λ_b = −λ_a the only nonzero contributions come from the zero-energy mode (B.7); for the charge density that contribution is given by (B.8).
Biological Age Predictors
The search for reliable indicators of biological age, rather than chronological age, has been ongoing for over three decades and, until recently, largely without success. Advances in molecular biology have increased the variety of candidate biomarkers that may be considered as biological age predictors. In this review, we summarize current state-of-the-art findings on six potential types of biological age predictors: epigenetic clocks, telomere length, transcriptomic predictors, proteomic predictors, metabolomics-based predictors, and composite biomarker predictors. Promising developments consider combinations of these various types of predictors, which may shed light on the aging process and provide further understanding of what contributes to healthy aging. Thus far, the most promising new biological age predictor is the epigenetic clock; however, its true value as a biomarker of aging requires longitudinal confirmation.
Introduction
Chronological age is a major risk factor for functional impairments, chronic diseases and mortality. However, there is still great heterogeneity in the health outcomes of older individuals (Lowsky et al., 2014). Some individuals appear frail and require assistance in daily routines already in their 70s, whereas others remain independent of assistance and seem to escape major physiological deterioration until very extreme ages. In keeping with the unprecedented growth rate of the world's aging population, there is a clear need for a better understanding of the biological aging process and the determinants of healthy aging. Towards this aim, a quest has been embarked upon for (biological) markers that track the state of biophysiological aging and ideally lend insights into the underlying mechanisms.
During the past decades, extensive effort has been made to identify such aging biomarkers that, according to the stage-setting definition (Baker and Sprott, 1988), are "biological parameters of an organism that either alone or in some multivariate composite will, in the absence of disease, better predict functional capability at some late age, than will chronological age". Later on, the American Federation for Aging Research (AFAR) formulated the criteria for aging biomarkers as follows (Johnson, 2006; Butler et al., 2004):

1. It must predict the rate of aging. In other words, it would tell exactly where a person is in their total life span, and it must be a better predictor of life span than chronological age.

2. It must monitor a basic process that underlies the aging process, not the effects of disease.

3. It must be able to be tested repeatedly without harming the person, for example by a blood test or an imaging technique.

4. It must be something that works both in humans and in laboratory animals, such as mice, so that it can be tested in lab animals before being validated in humans.
However, to date, no such marker or marker combination has emerged. Moreover, the existence of such markers has been questioned, because the effects of many chronic diseases are inseparable from normal aging. The rate of biological aging can also vary across different tissues, and hence it may not be feasible to assume a measurable overall rate. On the other hand, as consensus around the definition is missing, the term "aging biomarker" has been widely used in the literature, as reviewed in (Lara et al., 2015; Johnson, 2006; Engelfriet et al., 2013).
Recently, several new biomarkers of biological aging have come into play. They can be separated into molecular biomarkers (based on DNA, RNA, etc.) and phenotypic biomarkers of aging (clinical measures such as blood pressure, grip strength, lipids, etc.); we include both types. The focus of this review is on novel biological age predictors, which we define as markers that predict chronological age, or at least can separate "young" from "old". They should also be associated with a normal aging phenotype or a non-communicable age-related disease independent of chronological age in humans (Fig. 1). A list of the final biological age predictors discussed in the paper can be found in Table 1.
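Operationally, the "independent of chronological age" requirement is usually checked by regressing an age-related outcome on the candidate marker while adjusting for age; a minimal sketch with synthetic data (ours, not from the review; all variable names are illustrative) is:

```python
# Minimal sketch (not from the review) of testing whether a candidate marker
# is associated with an outcome independent of chronological age: regress the
# outcome on the marker while adjusting for age, then inspect the marker term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(40, 90, n)
biomarker = 0.05 * age + rng.normal(0, 1, n)                   # correlated with age
outcome = 0.02 * age + 0.30 * biomarker + rng.normal(0, 1, n)  # synthetic phenotype

df = pd.DataFrame({"age": age, "biomarker": biomarker, "outcome": outcome})
fit = smf.ols("outcome ~ biomarker + age", data=df).fit()
# A significant biomarker coefficient after age adjustment is the pattern
# required of a biological age predictor in the definition above.
print(fit.params["biomarker"], fit.pvalues["biomarker"])
```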
Search Strategy and Selection Criteria
PubMed was used as the search engine where Medical Subject Headings (MeSH) terms "Aging" and "Humans" and the specific term for each of the six marker categories: 1) Epigenetic clock, 2) Telomere length, 3) Transcriptomics, 4) Proteomics, 5) Metabolomics, and 6) Multi-biomarker, were combined. Cited papers in the selected publications and papers that referenced the selected publications were also considered. We also searched in bioRxiv using a combination of the following search terms: "aging", "biomarker", "humans" and each of the six marker categories described above. The searches were performed between 22nd of November 2016 and 16th of January 2017.
We limited the discussion to those predictors that have been trained/identified in a discovery population of human adults, and then validated in a separate cohort. Only scores derived from multiple measurements, such as different probe signals, were considered (except for telomere length due to its classical role as benchmark biomarker), and studies published in English from 2010 and onwards were included.
Epigenetic Clock
A number of recent studies have identified a measure of DNA methylation age (DNAmAge), also referred to as the epigenetic clock, as a viable biological age predictor. Two of these clock measures, the (Horvath, 2013) and (Hannum et al., 2013) calculators, are currently perhaps the most robust predictors of chronological age. Both show high age correlations (r = 0.96 for Horvath and r = 0.91 for Hannum) and small mean deviations from calendar age (3.6 and 4.9 years, respectively) in their corresponding validation cohorts (Hannum et al., 2013; Horvath, 2013). Both algorithms were developed in large samples (n = 8000 for Horvath and n = 656 for Hannum) covering the entire adult life span and different ethnic populations. The Horvath clock is a multi-tissue predictor based on methylation levels of 353 CpG sites on the Illumina 27k array, whereas the Hannum clock uses only 71 CpG sites from the Illumina 450k array and performs best using whole blood samples. Selection of the CpG sites for both predictors was done using a similar penalized regression model, yet the two clocks have only six CpG sites in common. Nevertheless, the correlations between the clocks appear to vary from fairly strong (r = 0.76) to moderate (r = 0.37) (Belsky et al., 2016) in independent studies.
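The general recipe behind both clocks, penalized regression of chronological age on a samples × CpG matrix of methylation beta values, can be sketched as follows (a schematic illustration with synthetic data, not either published pipeline):

```python
# Schematic sketch of building an epigenetic clock with penalized (elastic-net)
# regression, in the spirit of the Horvath/Hannum approach but NOT either
# published pipeline. X holds methylation beta values (samples x CpG sites),
# y holds chronological age; the data below are synthetic.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_cpgs = 400, 2000
X = rng.uniform(0, 1, size=(n_samples, n_cpgs))   # beta values in [0, 1]
w = np.zeros(n_cpgs)
w[:50] = rng.normal(0, 2, 50)                     # only a few informative CpGs
y = 60 + X @ w + rng.normal(0, 3, n_samples)      # synthetic "age", unscaled

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_tr, y_tr)
dnam_age = clock.predict(X_te)                    # predicted "DNAmAge"
print("CpGs retained:", int(np.sum(clock.coef_ != 0)))
print("median abs. deviation (years):", np.median(np.abs(dnam_age - y_te)))
```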
DNAmAge and Mortality
The most striking feature of the Horvath and Hannum clocks is their ability to predict all-cause mortality independent of classic risk factors. A recent meta-analysis of 13 different cohorts with a total sample size of 13,089 demonstrated that the epigenetic clock predicted all-cause mortality independent of several risk factors such as age, body mass index (BMI), education, smoking, physical activity, alcohol use and certain comorbidities. When the authors divided the samples into subgroups by race, sex, follow-up time, BMI, smoking status, physical activity and given comorbidities, they observed, with some exceptions, largely similar mortality associations across subgroups. Furthermore, they showed that a weighted average of the Hannum clock based on distinct aging-associated blood cell counts outperformed the other clock measures in terms of statistically significant associations with mortality. Recently, two studies addressed cause-specific mortality predicted by the epigenetic clock, where the clock was a stronger predictor of cancer mortality than of cardiovascular disease (Zheng et al., 2016). In the study by Zheng et al., the results suggested a dose-responsive relationship between increased DNAmAge and cancer incidence and mortality: for each one-year increase in the difference between epigenetic and chronological age (the Δage), there was a 6% increased risk of developing cancer within three years and a 17% increased risk of dying of cancer in the next five years. The fact that the DNAmAges were measured in blood and not in the cancer tissue itself makes these results intriguing. As the authors speculated, the actual link could be attributed to the role of immune (blood) cells in tumor development via inflammatory mechanisms and pro-apoptotic processes, both of which may themselves accelerate epigenetic aging. Perna and colleagues showed similar results for cancer mortality; they concluded, however, that DNAmAge also predicted cardiovascular mortality, even though the cardiovascular analysis had less power and was only significant for the Horvath clock. Furthermore, in line with the sex differences in overall mortality, consistent observations of men having higher DNAmAges compared to women have been made (Hannum et al., 2013; Horvath and Ritz, 2015; Marioni et al., 2015a). Nevertheless, despite the strong epidemiological evidence, the major open issue is the functional role, if any, of DNAmAge in mortality. Longitudinal studies that could shed light on changes in the predictive value over time are missing.

Fig. 1. The concept of biological age predictors. A biological age predictor could be defined as a biomarker correlated with chronological age (black line) which brings additive information to risk assessments for age-related conditions on top of chronological age. Hence, adult individuals of the same chronological age could possess different risks for age-associated diseases as judged from their biological ages (x's in the figure). Usually, the positive predictive value (red line) of a biological age predictor decreases from midlife onwards due to the increased biological heterogeneity at old age (the confidence interval, described by dashed lines, widens at old age).
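The Δage-mortality analyses described above treat age acceleration as a covariate in a Cox proportional-hazards model of survival; the following is a minimal synthetic-data sketch of that setup (ours, not the cited studies' code):

```python
# Minimal synthetic-data sketch (ours, not the cited studies' code) of a Cox
# proportional-hazards analysis of all-cause mortality on epigenetic age
# acceleration, delta_age = DNAmAge - chronological age, adjusted for age.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 1000
age = rng.uniform(50, 90, n)
delta_age = rng.normal(0, 5, n)                        # age acceleration in years
hazard = np.exp(0.08 * (age - 70) + 0.05 * delta_age)  # assumed true hazard model
time = rng.exponential(10.0 / hazard)                  # survival time in years
event = (time < 12).astype(int)                        # deaths observed in study
time = np.minimum(time, 12)                            # administrative censoring

df = pd.DataFrame({"time": time, "event": event, "age": age, "delta_age": delta_age})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])         # exp(coef) = hazard ratio
```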
DNAmAge and Aging Phenotypes
Several associations with different aging phenotypes and diseases have been demonstrated. When considering analyses in blood, the clocks correlate with certain blood cell types that also show age-related changes (Marioni et al., 2015a; Chen et al., 2016). Hence, only the cell-type-adjusted DNAmAges yield "pure" clock estimates independent of the changes in cell composition. Marioni and colleagues were the first to report associations between blood DNAmAge and fitness measures of aging. They observed that the age- and sex-adjusted Horvath clock was cross-sectionally associated with poorer cognitive performance (fluid intelligence), lower grip strength and poorer lung function at baseline (Marioni et al., 2015b). The baseline DNAmAge at ~70 years did not, however, predict the rate of change of these fitness measures, nor was its change correlated with the changes in the fitness measures during the 6-year follow-up. In a cohort of 38-year-old adults, Belsky and co-authors observed that an increased blood-based Hannum clock, but not Horvath clock, was associated with poorer metrics in balance, motor coordination, self-reported physical limitations, cognitive abilities, self-rated health and facial aging, but not in grip strength (Belsky et al., 2016). In a recent study of epigenetic age and age-related frailty (a state of vulnerability measured as deficits in multiple bodily systems), Breitling and colleagues discovered that the Horvath clock was directly associated with an increased number of deficits (i.e., the frailty index) in two cohorts, even after accounting for several risk factors and blood cell counts (Breitling et al., 2016). Both the Horvath and the Hannum clock have also demonstrated associations with an increase in BMI and with indicators of the metabolic syndrome (Quach et al., 2017). Based on these findings, it seems that the epigenetic clock can indeed reflect aging in different bio-physiological domains and across a wide age range, most notably in cross-sectional settings.
DNAmAge and Diseases of Aging
Several studies addressing associations between age-related diseases and DNAmAge have been conducted. Specifically, acceleration of the Horvath clock in Alzheimer's disease (AD) patients' prefrontal cortex was associated with the presence of plaques, amyloid load, and declines in global cognitive functioning, episodic memory and working memory (Levine et al., 2015b). However, no associations with cognition and memory were observed in non-demented individuals in the same study. The latter finding is in line with the results of a recent study in middle-aged twins where no associations between blood DNAmAge and cognitive abilities were found (Starnawska et al., 2017). However, when cognition was assessed using psychometric measures, direct associations were observed (Belsky et al., 2016; Marioni et al., 2015b). In Parkinson's disease (PD) patients, although they exhibit markedly elevated (computationally estimated) granulocyte counts, DNAmAge is still elevated compared to controls after adjusting for blood cell composition (Horvath and Ritz, 2015). The results of this study thus support the hypothesis that peripheral immunoinflammatory characteristics, observed as accelerated methylome aging of blood cells, are involved in PD. As regards cancer, the Hannum clock shows increased epigenetic age in the tested tumor tissues (Hannum et al., 2013), whereas the Horvath clock assigns increased DNAmAges only to certain cancer types (Horvath, 2013). Nevertheless, increased blood DNAmAge has been shown to predict the incidence of lung cancer (Levine et al., 2015a) and other cancers, as well as cancer mortality (Zheng et al., 2016; Perna et al., 2016). Lastly, in osteoarthritis the Horvath clock in the affected joint cartilage, but not in nearby bone or blood, was associated with increased epigenetic age (Vidal-Bralo et al., 2016).
DNAmAge and Other Biomarkers
Correlations between DNAmAge and telomere length, as well as with clinical measures, have generally been low or non-significant; DNAmAge and telomere length are associated with age and mortality independently of each other. In addition, the cell-type-adjusted Horvath clock is not associated with common disease risk factors such as alcohol use, smoking, diabetes, hypertension, and the levels of high- and low-density lipoproteins, insulin, glucose, triglycerides, C-reactive protein (CRP) and creatinine. Another interesting feature of the epigenetic clock is that offspring of semi-supercentenarians exhibit lower epigenetic age than age-matched controls. As centenarians are an excellent example of successful, healthy agers who managed to escape or postpone the onset of major aging diseases, their offspring's youthful DNAmAge could indicate that common (genetic or shared environmental) factors matter both for protection from aging diseases and for DNAmAge maintenance.
Potential Mechanisms
Currently it is not entirely clear what aspect(s) of physiological or cellular aging the epigenetic clocks represent. Although the original paper on the epigenetic clock (Horvath, 2013) demonstrated that the clock estimate is close to zero in embryonic and induced pluripotent stem cells, that it correlates with cell passage number, and that the ticking rate is highest during organismal growth, it is not purely a mitotic clock since it also tracks chronological age in non-proliferative tissue such as the brain. Recent experimental work in primary endothelial cells demonstrated that cells forced into replicative senescence and oncogene-induced senescence exhibited increased epigenetic aging, measured using the Horvath clock, whereas cells whose senescence was induced by DNA damage did not. Hence, the authors concluded that epigenetic aging is an intrinsic property of cells that is uncoupled from senescence per se (Lowe et al., 2016). This conclusion is somewhat in line with the interpretation by Horvath (Horvath, 2013) that the DNAmAge measure represents the function of the epigenetic maintenance system. Hormonal factors may also play a role, as late menopause is associated with lower DNAmAge and menopause itself also seems to accelerate epigenetic aging. To date only one genome-wide association study (GWAS) has reported genetic associations for the epigenetic clock. Its results suggest that variants near an mTOR complex 2 gene (MLST8) and in a putative RNA helicase (DHX57) are associated with epigenetic age, yet only in the cerebellum. Moreover, the intrinsic Horvath clock has been suggested to represent overall frailty in the body, while the Hannum clock is more related to immune responses (Horvath and Ritz, 2015).
Taken together, the epigenetic clock appears to be associated with a wide spectrum of aging outcomes, most consistently mortality. Its predictive ability is observable in several different tissues, suggesting a pervasive, systems-level mechanism. However, as the epigenetic clock concept is still in its infancy, it is likely that many negative findings have remained unreported. Furthermore, there is a lack of evidence for mortality prediction by the epigenetic clock in tissues other than blood. Lastly, the epigenetic aging rate in one tissue can be quite different from that of another, and for biomarker purposes it is not realistic to obtain a combination of epigenetic age estimates in several tissues. With advances in technologies and increased coverage of arrays, a better understanding of DNA methylation is likely to be achieved soon. The remaining unresolved questions are whether, and if so how, the clock's ticking rate is modifiable, and whether the methylation changes seen with age and aging phenotypes actually drive the phenotypes or merely reflect the work of other genomic control mechanisms, such as histone regulation.
Telomere Length
Telomeres are repetitive DNA sequences capping chromosomes, which shorten every time cells divide; thus, telomere length is a popular marker of biological aging (Blackburn et al., 2006). Today, more than 6,000 publications exist on the topic "telomere length". Many excellent reviews have been written on basic telomere function, on associations with aging (Sanders and Newman, 2013), on the trade-off between cellular senescence and regeneration (Stone et al., 2016), and in the article by Passos in this special issue on aging, just to mention a few. Hence, this review will focus only on summarizing what is known from large-scale and meta-analysis studies on associations between telomere length and age-associated traits, as well as on telomere length in relation to other biological markers. To start with, a meta-analysis of 36,230 participants (Gardner et al., 2014) and the largest population-based telomere length study to date (n = 105,539) (Lapham et al., 2015) concluded that women on average have longer telomeres than men. Hence, women have a lower biological age than men as judged from telomere lengths, which is in accordance with measures of DNAmAge. This observation might be related to the increased longevity seen in women, discussed in detail elsewhere (Barrett and Richardson, 2011), but is important to keep in mind when looking at associations between telomere length and disease. Thus, the associations reported below were found in analyses adjusted for age and sex.
Although no meta-analysis on mortality has been reported yet, the association between short telomeres and increased mortality risk has been shown repeatedly in many studies (Needham et al., 2015; Bakaysa et al., 2007; Deelen et al., 2014), and recently in a large cohort study (n = 64,637) (Rode et al., 2015). Unlike the epigenetic clock, telomere length seems to work equally well for cancer and cardiovascular mortality predictions, and, as alluded to above, the effect is independent of the epigenetic clock. However, a meta-analysis of telomere length and overall cancer risk (23,379 cases and 68,792 controls) showed a null result, indicating that telomeres may play different roles in different cancers (Zhu et al., 2016): short telomeres were found to be risk factors for gastrointestinal and head and neck cancers only. Furthermore, short telomere length has been described as a risk factor for coronary heart disease, as judged from a meta-analysis of 43,725 participants (8,400 events) (Haycock et al., 2014) and from a large-scale observational study (Scheller Madrid et al., 2016). In fact, this relationship has been suggested to be causal by inferring genetic information (Scheller Madrid et al., 2016; Codd et al., 2013). Likewise, in AD patients, telomere lengths have been shown to be shorter, both in observational (Forero et al., 2016a) and causal Mendelian randomization (Zhan et al., 2015) studies. For PD, only small studies exist, with inconclusive results (Forero et al., 2016b). Telomeres have also been associated with many age-related traits such as cognition and physical function. However, studies, and even meta-analysis efforts, are often small, allowing only limited conclusions (Gardner et al., 2013). Technical bias in the measurement of telomere lengths may also contribute to the lack of consistent results. For further reading on telomeres and traits related to aging we suggest another review (Mather et al., 2011). To conclude, the suggestive epidemiological evidence for a causal role of telomeres in aging diseases challenges current knowledge and needs to be further investigated, preferably in longitudinal studies. The discussion around cause or consequence is valid not only for telomeres but for all biomarkers of aging, and is important for future perspectives on healthy aging.
Transcriptomic Predictors
To date, two blood-based sets of gene expression profiles have been developed that fulfill our criteria for a transcriptomic age predictor. The first is a five-transcript predictor presented by Holly and colleagues (Holly et al., 2013), who developed it by truncating their previous six-transcript model that could distinguish young individuals (<65 years) from old (≥75 years) in the InCHIANTI cohort (Harries et al., 2011). The five-transcript predictor was tested in two additional cohorts, Exeter 10,000 (n = 95) and the San Antonio Family Heart Study (SAFHS, n = 1240), together with the InCHIANTI data set (n = 698) (Holly et al., 2013). The authors demonstrated that lower levels of interleukin-6 (IL-6) and blood urea, as well as higher levels of serum albumin and muscle strength, were found in the biologically young group compared to the rest. However, no differences were observed for physical function, CRP, systolic blood pressure, or hematocrit.
The second transcriptomic predictor was based on the expression levels of 1497 transcripts in populations of European ancestry (Peters et al., 2015). The model was trained on 7074 human blood samples from six independent cohorts. The analyses were adjusted for sex, cell counts, smoking and fasting status (where available), as well as for technical array variables. The predictor was replicated in 7909 blood samples from seven independent cohorts, and a high agreement (r = 0.972) was observed between the results of the discovery and replication sets. Correlations between transcriptomic age and chronological age within the cohorts ranged from 0.348 to 0.744, and the average absolute differences between predicted transcriptomic age and chronological age ranged from 4.84 to 11.21 years (mean = 7.8 years). However, there was only partial overlap in the direction of change of the 1497 transcripts among cerebellum, frontal cortex and distinct blood cell subtypes, as well as with blood samples from other ancestries. This is not surprising given that gene expression profiles tend to be tissue-specific. In the combined analysis of all cohorts, Peters and co-authors observed that a higher Δage, reflecting increased biological/transcriptomic aging, was directly associated with higher systolic and diastolic blood pressure, total cholesterol, HDL cholesterol, fasting glucose levels and BMI (Peters et al., 2015). Current smokers also exhibited higher Δages, even after adjusting for BMI. Interestingly, the authors also examined the correlations between their transcriptomic predictor and Horvath's and Hannum's epigenetic clocks in two of the cohorts. The correlations between the transcriptomic and epigenetic predictors ranged from 0.10 to 0.33, and besides the waist-to-hip ratio they did not show similar associations with the examined aging phenotypes.
Hence, it appears that the transcriptomic age and the epigenetic clock describe different aspects of biological aging. When simultaneously examining multiple cohorts that have their transcriptomic profiles produced using different array platforms, it is critical to control for technical variables and probe design to ascertain whether the signatures are truly platform-independent. Nevertheless, the transcriptomic age predictors still await broader validation in independent cohort studies.
Proteomic Predictors
Over the last two decades, several studies have shown effects of aging on protein glycosylation as measured in human serum or plasma (Pucic et al., 2011; Ruhaak et al., 2010; Parekh et al., 1988; Ruhaak et al., 2011; Knezevic et al., 2010). However, most studies were based on non-targeted approaches in single cohorts, making validation across studies impossible. Recently, Kristić and colleagues combined four European cohorts to study IgG glycosylation in aging (Kristic et al., 2014). A prediction model for age based on three individual glycans, the GlycanAge, was built in one cohort and replicated well in the others (among which TwinsUK was included). The GlycanAge index was associated with health variables such as fibrinogen, HbA1c, BMI, triglycerides and uric acid after correction for age and sex.
Likewise, individual studies investigating the effect of age on the proteome have been conducted in human plasma and cerebrospinal fluid (Zhang et al., 2005; Ignjatovic et al., 2011; Lu et al., 2012; Baird et al., 2012). The only attempt thus far to develop an age predictor was made by Menni and co-workers, who calculated a protein-derived age variable from four age-associated proteins found in plasma (PTN, CHRDL1, MMP12, and IGFP6) (Menni et al., 2015). The predictor (trained in TwinsUK data) was validated in independent cohorts, and one of the proteins, CHRDL1, was associated with low birth weight, the Framingham risk score and other cardiometabolic risk factors after adjustment for age. However, the protein-derived age variable itself was not tested for associations with health outcomes.
Metabolomics-based Predictors
Relatively few studies have analyzed associations of age with the metabolome (also referred to as the metabonome), and they were conducted using different measurement techniques (Ishikawa et al., 2014; Yu et al., 2012; Menni et al., 2013; Hertel et al., 2016; Collino et al., 2013; Lawton et al., 2008). Yu and colleagues used a targeted mass-spectrometry method identifying 131 metabolites in fasting serum, of which 11 were independently associated with age in females, both in discovery (KORA F4) and replication (TwinsUK), after BMI adjustment. Later, the same groups combined analyses of non-targeted mass-spectrometry data and age using the Metabolon platform (Menni et al., 2013). Here, TwinsUK was the discovery cohort, where 22 independent age-associated metabolites, mostly lipids and amino acids, were found. One selected metabolite, C-glyTrp, was further replicated in KORA F4 and associated with age-related traits such as lung function and hip bone mineral density after adjustment for age.
In a study from 2016 by Hertel and colleagues, a proton nuclear magnetic resonance (¹H NMR) spectroscopy investigation of human urine samples quantified 59 metabolites (Hertel et al., 2016). A Metabolic Age Score was constructed with all metabolites as predictors and age as the outcome. The score was validated and replicated in two independent cohorts, and found to associate with clinical outcomes independent of age, e.g., kidney malfunction, high HbA1c levels, and hyperglyceridemia. Importantly, survival analysis showed that individuals in the first tertile of the score (lower biological age) had higher all-cause survival rates, and that the prediction added value over commonly known risk factors.
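The tertile-based survival comparison described here can be sketched with a Kaplan-Meier analysis; the code below uses synthetic data and is purely illustrative, not the published analysis:

```python
# Schematic Kaplan-Meier comparison of survival across tertiles of a biological
# age score, illustrating the kind of tertile analysis described above.
# Synthetic data only; this is not the published analysis.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
n = 900
score = rng.normal(0, 1, n)                          # biological age score
time = rng.exponential(15.0 / np.exp(0.4 * score))   # higher score, shorter survival
event = (time < 10).astype(int)
time = np.minimum(time, 10)                          # censor at 10 years

tertile = pd.qcut(score, 3, labels=["low", "mid", "high"])
kmf = KaplanMeierFitter()
for t in ["low", "mid", "high"]:
    mask = np.asarray(tertile == t)
    kmf.fit(time[mask], event[mask], label=f"{t} tertile")
    print(t, "10-year survival:", round(kmf.survival_function_.iloc[-1, 0], 3))
```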
Composite Biomarker Predictors
Other attempts to identify age-related biomarkers focus on combining multiple biomarkers into a biological age predictor. In a study by Levine from 2013, ten biomarkers significantly associated with chronological age (CRP, serum creatinine, glycated hemoglobin, systolic blood pressure, serum albumin, total cholesterol, cytomegalovirus optical density, serum alkaline phosphatase, forced expiratory volume, and serum urea nitrogen) were combined into a biological age predictor in the NHANES III study (n = 9389) (Levine, 2013). Most of the biomarkers were also significant in sex-stratified analyses and no sex differences for the age predictor were reported. Using Cox proportional hazards, the final ten-biomarker age predictor was associated with mortality independent of chronological age. The same predictor model was further validated in the Dunedin study, which is a younger birth cohort (n = 1037) followed longitudinally (Belsky et al., 2015). In cross-sectional analyses at age 38, participants with higher biological age scored worse on IQ-tests and physical function measures such as balance, strength and motor coordination. Similar results were found for longitudinal changes of biological age, as measured over 12 years, and health outcomes. A somewhat different approach was recently presented by Sebastiani et al., where 19 biomarkers correlated with age were used to cluster 4704 participants of the family-based LLFS cohort into 26 different clusters independent of age and sex. Validation was done in the FHS study where classification of the 26 clusters had a sensitivity ranging from 36% to 100%, which was better than random. Moreover, correlations of the biomarker signatures with longitudinal changes in physical function and cognition, as well as proportional hazards for incident diseases and mortality, were found significant (Sebastiani et al., 2017).
A follow-up study by Belsky and co-authors included measures on the epigenetic clock and telomere length in combination with the composite biomarker predictor from Levine in the Dunedin study (Belsky et al., 2016). The correlations between the composite predictor and Horvath and Hannum DNAmAge were weak (r = 0.08 and r = 0.15 respectively) but significant. However, no correlation with telomere length was observed. Health outcomes such as IQ and physical function measured in the Dunedin participants seem to be best predicted by the composite biomarker age, then by DNAmAge and not at all by telomere length. Overall, one type of biological age predictor correlates weakly with other types of predictors, hence indicating that effects are mostly independent of each other, or at least not measurable with these methods.
Finally, a study using TwinsUK data applied a multi-omics approach to investigate relationships between different biomarkers of aging (Zierer et al., 2016). Several biological age predictors have been investigated in those data, as discussed above, and here epigenetic, metabolomic, transcriptomic and glycomic measures were combined into graphical models. Unfortunately, instead of using pre-defined age predictors, multiple single markers were inferred in the models, making comparisons to earlier studies and interpretations difficult. Nevertheless, linking many different data types and disentangling the relationship between different biological age predictors may shed light on the aging process and provide further understanding of what contributes to healthy aging.
Biological Age Predictors in Animals
At the outset, we limited our review to studies in humans. Although the AFAR criteria for aging biomarkers include working in animal model systems, this is not yet a reality. For the epigenetic clock, some evidence exists of its functionality in great apes (Horvath, 2013), but not in species with genomes more divergent from humans. For telomere length there have primarily been studies in rodents, but also in birds, wild animals and C. elegans, to mention a few. However, in mice the telomere maintenance system works differently than in humans, and mice have longer telomeres that do not reach the same critical lengths in spite of their short life span (Calado and Dumitriu, 2013). Transcriptomic studies have been performed extensively in mouse and rat models, some also with the objective of finding genes associated with aging (Yang et al., 2016). However, as far as we are aware, no biological age predictors have been trained and tested in those data. Further, comparisons between humans and rodents are always problematic because of the different arrays used and missing homologous gene sequences. Likewise, most omics-data analyses suffer from the same challenges. Thus, considering our selection criteria of independent validation and prediction of health outcomes, none of our proposed biological age predictors is valid in any type of animal model. Nevertheless, though not falling within our definition of biological age predictors, it should be noted that there is a wealth of evidence, for example, on markers of the insulin/insulin-like growth factor axis demonstrating utility as markers of longevity and healthy aging both in humans and across the animal kingdom.
Conclusions and Future Questions
In this review, we have summarized current knowledge on biological age predictors and discussed the different data types used. Several predictors exist, of which the most plausible candidates are the epigenetic clock and telomere length. Both have been tested in different tissues and validated in many independent cohorts, as indicated by the number of studies in Table 1. They work by providing additional evidence on individual aging independent of chronological age, and they successfully predict health outcomes such as physical function, cognition, morbidity and mortality. In an attempt to give an overview of the conclusions, we illustrate the number of studies versus the strength of mortality prediction by the biological age predictors, where applicable (Fig. 2). Briefly, telomere length is extensively validated but has low predictive power. The composite biomarker is not sufficiently validated but has the potential to be a stronger predictor than telomeres, as does the Metabolic Age Score. The epigenetic clock currently performs best considering both aspects. Other biological age predictors may prove useful but would require further independent validation. Yet others, such as miRNAs and ncRNAs, may emerge as a new class of aging markers once there is more research; current knowledge on these markers rests largely on their regulatory role in development.
There is no doubt about the escalated interest in predicting biological age. Given the global aging phenomenon, this trend is not going to end anytime soon. It is imperative that we find a validated set of markers that can predict the health span rather than only focus on mortality and lifespan. This could include marker combinations, e.g. a set of physiologic, genomic and blood-based determinants, that predict the years a person would spend being free from frailty before death. Ideally, this marker combination would be a useful indicator both in mid-and late-life. There are, however, a number of challenges in identifying biomarkers in humans, beyond the technical issues noted above and heterogeneity due to cell count. These include our access to longitudinal data, the vast variability we see among humans, and potential for confounding.
Nevertheless, we thus far have little clue about the mechanisms by which biological age predictors work. There have been discussions about the underlying aspects of the epigenetic clock, whether it describes cellular ticking or something else (see the earlier discussion under heading 1). However, we do not yet know what is cause or consequence of the epigenetic clock. A recent study suggested that adiposity causes epigenetic changes and not the contrary (Wahl et al., 2017). On the other hand, when it comes to other biological age predictors, telomere length has been suggested to have a causal effect on health outcomes, and not just to be a marker of such (see the discussion under heading 2). Additional longitudinal data are necessary to further confirm the causal nature. Moreover, the data also suggest that most of the biological age predictors we have discussed have little or no interaction with each other. Thus, their effects are independent of each other and may therefore describe different parts of the aging process. A combination of markers would increase the predictive power and should be further studied in larger samples. In summary, combinations of biological age predictors may be used to monitor the face of aging, with the overall goal of increasing the individual health span and decreasing the health care burden.

Fig. 2. Overview of the four biological age predictors telomere length (Rode et al., 2015), the epigenetic clock, the Metabolic Age Score (Hertel et al., 2016), and the composite biomarker (Levine, 2013), which have all been used in survival models. The hazard ratio per yearly change in biological age (de-)acceleration for each predictor is presented on the x-axis. The y-axis presents an approximation of the number of studies in which the predictor has been used, on a log scale.
Conflicts of Interest
The authors have no conflicts of interests to declare.
Author Contributions
Outline of the review: JJ, NLP, SH. Performed the search: JJ, SH. Wrote the manuscript: JJ, NLP, SH.
Animal Reservoirs for Visceral Leishmaniasis in Densely Populated Urban Areas
Leishmaniasis is a zoonotic disease of major public health and veterinary importance, affecting 88 countries with up to 2 million cases per year. This review emphasizes the animal reservoirs and spreading of visceral leishmaniasis (VL) in urban areas, particularly in two Brazilian metropolitan areas, namely São Luis and Belo Horizonte, where the disease has become endemic in the past few years. Urbanization of visceral leishmaniasis in Brazil during the last decades has created favorable epidemiological conditions for maintenance of the disease, with dense human populations sharing a tropical environment with abundant populations of the mammalian reservoir and the invertebrate vector, facilitating transmission of the disease.
Introduction
Leishmaniasis is a zoonotic disease of major public health and veterinary importance. According to the World Health Organization, the visceral form of the disease is endemic in 88 countries distributed over four continents, with 1.5 to 2 million cases per year and 350 million people at risk of infection (WHO; http://www.who.int/tdr/index.html). Human leishmaniasis in the Americas can be classified into two broad categories, namely American tegumentary leishmaniasis (TL) and American visceral leishmaniasis (VL), also known as "kala-azar". American TL may have several clinical manifestations, including (i) cutaneous leishmaniasis, characterized by skin lesions, which may heal spontaneously or progress to chronic lesions with severe scarring; (ii) mucocutaneous leishmaniasis, characterized by ulcerative and destructive lesions in mucous membranes and mucocutaneous junctions; and (iii) diffuse cutaneous leishmaniasis, characterized by non-ulcerative nodular lesions. Conversely, the visceral form of the disease is chronic and progressive, affecting several organs including the spleen, liver, bone marrow, lymph nodes, and skin.
Human VL has a wide distribution throughout Latin America, extending from Mexico to Argentina [1]. American VL is rapidly spreading in several regions of Brazil, and it is currently considered an urban disease. Migration of rural populations to periurban areas led to urbanization of the disease [2], since it created conditions in which the human population shared an environment with high densities of canine and invertebrate vector populations [3]. Environmental changes resulting in habitat destruction may have a significant impact on vector populations, with some species disappearing and others becoming more abundant, as some species can colonize and adapt to anthropogenic environmental conditions [2].
The distribution of American VL in Brazil includes the states of Alagoas, Bahia, Ceará, Distrito Federal, Espírito Santo, Goiás, Maranhão, Mato Grosso, Mato Grosso do Sul, Minas Gerais, Pará, Paraíba, Pernambuco, Piauí, Rio de Janeiro, Rio Grande do Norte, Roraima, Sergipe, São Paulo, and Tocantins [1] (Figure 1). There is broad variation in the prevalence of VL among different regions of Brazil, with higher prevalence in the Northeastern part of the country (Figure 1). According to the Brazilian Ministry of Health (Fundação Nacional de Saúde - FUNASA; www.funasa.gov.br), from 1980 to 2005 there were 60,954 cases of VL registered in Brazil, with the highest numbers of cases in the States of Bahia (16,635) and Maranhão (8,484). Official records also indicate over 1,000 new human cases of VL yearly in Brazil. In the State of Maranhão, human and canine cases concentrate on the Island of São Luis (Figure 2), where there is a large urban area. In the last five years, 239 human cases of VL, which led to 13 deaths, and 7,682 canine cases were recorded (Center for Zoonosis Control of São Luis, unpublished data). The metropolitan area of Belo Horizonte (Figure 2), with an estimated population of more than three million people and an area of approximately 6,000 km², had its first case of human VL in 1994. After that first case, the disease spread quickly, with 345 human cases of VL confirmed in that area between 1994 and 1999 [4]. Here we review the literature and present some unpublished data, with emphasis on animal reservoirs and the spread of VL in urban areas, particularly in two Brazilian metropolitan areas, namely São Luis and Belo Horizonte (Figure 2).
Etiology of visceral leishmaniasis
VL is caused by flagellated protozoa belonging to the order Kinetoplastida, family Trypanosomatidae and genus Leishmania. Human visceral leishmaniasis is caused by Leishmania donovani and Leishmania infantum in the Old World and Leishmania chagasi in the New World [reviewed by 5 and 6]; however, there is currently a tendency to consider L. infantum and L. chagasi as one single species [7].
Leishmania is heteroxenous, having two hosts during its life cycle: a vertebrate host (canids, rodents, and humans) and an invertebrate host, the sandfly (Order Diptera, Family Psychodidae, Sub-family Phlebotominae), belonging to the genus Phlebotomus in the Old World and the genus Lutzomyia in the New World, with one vector species, Lutzomyia longipalpis, in Brazil [1,8,9].
Leishmania has distinct forms during its life cycle. Promastigotes and paramastigotes are flagellated and mobile, and they are present in the digestive tract of the invertebrate host, whereas the non-flagellated amastigotes (Figure 3) are found within phagocytic cells in the vertebrate host [10]. The infective form, named the metacyclic promastigote, is smaller and has a longer flagellum than the procyclic, non-infective form. Upon infection of the vertebrate host, the promastigotes undergo phagocytosis and become amastigotes, multiplying intracellularly until the host cell ruptures, releasing the parasite, which then infects other macrophages [11,12]. The different clinical manifestations of the disease are determined by the species of the parasite as well as the immune condition of the host [11].
Natural hosts and reservoirs of visceral leishmaniasis in metropolitan areas
VL used to be restricted to rural and peri-urban areas in Brazil. However, with the emergence of foci of the disease in urban areas, visceral leishmaniasis has assumed a major role in public health [13]. The domestic dog is the most important reservoir of VL in metropolitan areas. The dog develops an intense cutaneous parasitism, favoring infection of sandflies and thereby playing an important role in the epidemiological chain of VL [14]. Importantly, most dogs remain asymptomatic for long periods of time, which contributes to the maintenance and transmission of the disease, since the absence of clinical signs delays diagnosis. Indeed, it has been demonstrated that there is a significant correlation between the spatial distribution of seropositive dogs and human cases of VL in the metropolitan area of Belo Horizonte [15]. Furthermore, the geographic distribution of the first cases of canine VL in Belo Horizonte from 1993 to 1997 coincided with the emergence of human VL. Apparently, under those conditions the establishment of human VL was preceded by infection of dogs in a given area [16].
There are no differences in predisposition among dogs of different ages or sexes.However, breed predisposition has been documented, with Cockers and Boxers having a higher risk when compared to other breeds [17].Furthermore, dogs with short hair are predisposed when compared to those with long hair.The role of other species as reservoirs is not clear.For instance, natural infection of domestic cats with Leishmania has been documented [18].In Europe, in addition to dogs, rodents are also identified as reservoirs, whereas humans and domestic cats are considered accidental hosts [19,20].
Environmental degradation associated with intensive migration of human populations from rural into urban areas favored urbanization of VL in Brazil.In the Island of São Luis, the first urban cases of human VL were identified in 1982 [21].Concomitantly, the first cases of VL in dogs were reported in that region [22].Initially, human and canine VL occurred in the periphery of the city of São Luis, but it gradually spread to other municipalities on the island (Paço do Lumiar, São José de Ribamar, and Raposa).The first cases of VL coincided with marked environmental changes and migration of rural human populations into the urban area.In addition, large industrial plants were established at that time, which stimulated influx of immigrants from other Northeastern Brazilian States where VL was endemic [23,24].Currently there are areas of the metropolitan region of São Luis with an extremely high seroprevalence in dogs, reaching 25% in one of the districts [25].These high seroprevalences are associated with poor sanitation and housing conditions in areas previously occupied by the natural vegetation [26].An ecoepidemiological study is currently underway in the Island of São Luis to identify risk factors for visceral leishmaniasis in that particular area.
Similarly, the first case of VL in the metropolitan area of Belo Horizonte was diagnosed in 1994, and was followed by a fast spreading of the disease with 345 confirmed human cases from 1994 to 1999 [4].In 2001, there were 31 cases of human VL reported, resulting in six deaths (Secretaria Municipal de Saúde de Belo Horizonte; http://www.pbh.gov.br/smsa/).In 2004, the number of human cases reached 128 with 21 deaths in the metropolitan area of Belo Horizonte, whereas in the whole State of Minas Gerais there were 587 cases with 49 deaths during the same period.Taking as an example Ribeirão das Neves, which is part of the Belo Horizonte metropolitan area, the first autochthonous case of VL was confirmed in 2000, with no cases reported in 2001, but with a marked increase in the number of cases after 2002 (Figure 4).Interestingly, the increase in human cases of VL coincided with increases in seroprevalence in the canine population in the same area (Figures 4 and 5).Although the dog is recognized as the major source of infection for the invertebrate vector and therefore potential transmission to humans [15,[27][28][29][30], several species of wild canids, marsupials, and rodents have been incriminated as reservoirs in endemic areas [5].However, the suitability of these species as reservoirs as well as their role in the epidemiology of VL remain to be investigated [1].Several wildlife species from the Amazon rain forest in Brazil, including rodents, marsupials, procyonids, canids, and edentates have been surveyed for Leishmania infection.The parasite was detected only in the crab-eating fox (Cerdocyon thous) by culturing samples of the spleen and liver, and by inoculation in hamsters [31].However, L. chagasi has been previously isolated from two opossums, Didelphis albiventris, captured in an endemic area in Jacobina (State of Bahia, Brazil), but the rate of infection was very low [32,33].Furthermore, L. chagasi has been cultured from spleen, liver, and skin of the opossum (Didelphis marsupialis) in Colombia [34,35].In addition, infection of the rodent Proechimys canicolus has been demonstrated by PCR [36].
For a species to be considered an effective reservoir of a parasite transmitted by a blood-sucking insect, it must not only play a role in the maintenance and dissemination of the parasite under natural conditions; it is also essential to demonstrate that the putative reservoir species effectively infects the invertebrate vector during the blood meal [1]. Considering these criteria, only Cerdocyon thous and Didelphis marsupialis are considered natural wild reservoirs of L. chagasi [35,37].
Didelphis sp. is present in both the São Luis and Belo Horizonte metropolitan areas, where canine seroprevalence is high [25,26]. However, although there is some evidence indicating that the presence of opossums is a predisposing factor for canine infection [38], the epidemiological role and impact of the opossum as a reservoir of VL in urban areas are still not clear.
Although the dog is the major reservoir for human VL, other animal species may play a significant role in the maintenance of the disease. These include chickens, pigs, cattle, and horses kept around residences in urban areas, which favor proliferation of the vector and therefore transmission of the disease [39,40]. Importantly, evaluation of the intestinal contents of L. longipalpis indicated that 24.4% (n=547) had fed on vertebrate blood, including avian (87.9%), rodent (47.2%), human (42.4%), canine (27.6%), opossum (26.6%), and equine (22.5%) blood [41].
Transmission
Transmission of VL generally takes place between the invertebrate vector and a mammalian host. A female phlebotomine sandfly vector is infected by ingesting amastigotes present in the dermis during the blood meal. The amastigotes transform into promastigotes within the digestive tract of the insect 24 to 48 hours after the blood meal. The promastigotes then proliferate in the intestine, migrate to the esophagus and pharynx, and are regurgitated into the mammalian host during the blood meal [42]. Soon after inoculation, the promastigotes are phagocytosed by macrophages and localize within a vacuole that fuses with lysosomes, named the parasitophorous vacuole [10,[43][44][45]. The promastigotes then transform into amastigotes that replicate intracellularly, eventually causing the rupture of the macrophage and thereby infecting other phagocytic cells [10,46]. Thus, control programs for VL must take into account the vectors, reservoirs, and susceptible hosts. Understanding the circumstances in which parasites are transmitted from dogs to sandflies is essential for implementing sound prophylactic measures [3].
The role of L. longipalpis as a vector of American VL was first suspected in the 1930s, when liver samples from patients suspected of yellow fever were diagnosed with VL. This first study, performed in the Northeastern Brazilian State of Sergipe, resulted in the description of the first clinical case of VL in a patient whose house had large numbers of L. longipalpis both indoors and outdoors [47]. Later on, other cases of human and canine VL were also linked to L. longipalpis [48]. However, the role of this sandfly as a vector for VL was initially characterized in the 1950s, when promastigotes of L. chagasi were detected in L. longipalpis and infection occurred after exposure of foxes to infected sandflies [49][50][51]. At that time, the dog was identified as the main reservoir of the disease, since L. longipalpis often became infected after feeding on infected dogs [27]. Subsequently, transmission of Leishmania was demonstrated in hamsters via the bite of experimentally infected, laboratory-bred L. longipalpis [52]. The role of L. longipalpis was definitively established after identification of this sandfly as the only species present in the environment where cases of human VL were diagnosed, with sandflies from this particular area being able to infect hamsters [53,54].
L. longipalpis was considered to be the only biological vector of L. chagasi for several years, but cases of VL began to be diagnosed in regions where this sandfly species was absent, indicating that other species could be transmitting the parasite to vertebrate hosts.Indeed, Lutzomyia evansi infected with L. chagasi was identified as the major vector for VL in certain areas of Colombia [55,56] and Venezuela [57].L. evansi is present in several Latin America countries including Costa Rica, Honduras, El Salvador, Guatemala [58] and Mexico [59], but it has not been detected in Brazil.Although there is evidence supporting the notion that Lutzomyia cruzi may act as an alternative vector for VL in Brazil [60], this view is still debatable [1].
L. longipalpis is the predominant sandfly species in São Luis, where it is abundant in the peridomestic environment [61][62][63][64].This behavioral feature of L. longipalpis may partially explain the higher rates of infection in dogs than in humans [1].Importantly, L. longipalpis has been captured indoors and outdoors in all areas with reported cases of human VL in São Luis.L. longipalpis is also the predominant species of sandfly in the metropolitan area of Belo Horizonte [65].
Although VL is largely recognized as an insect-borne disease, there are several reports of transmission in the absence of the vector. For instance, transmission through blood transfusions in human patients has been reported [66], as well as transmission through shared needles among drug addicts [67]. Vertical transmission of Leishmania has been documented in humans [68]. Although a previous study failed to demonstrate vertical transmission in dogs [72], amastigotes of Leishmania have been detected in the uterus of a bitch with evidence of vertical transmission [69], and vertical transmission in dogs has been suggested by other investigators [70,71]. Venereal transmission of VL has been documented between an infected man and his wife in an area completely free of the insect vector [73]. This form of transmission is also likely to occur in dogs [27]. We have recently demonstrated that dogs with VL develop specific genital lesions, namely epididymitis and balanoposthitis, with an abundance of intralesional amastigotes. These lesions are associated with shedding of Leishmania in the semen, further supporting the notion of venereal transmission in dogs [74]. We are currently investigating whether venereal transmission actually occurs in dogs. Although the epidemiological significance of venereal transmission in dogs is merely speculative at this point, this form of transmission may have a significant impact in densely populated urban areas, particularly in Brazil, where the vast majority of the canine population is not neutered.
Impact of vaccination in dogs
According to the World Health Organization, control measures for VL include reducing the population of the insect vector by massive application of insecticides, elimination of seropositive dogs, and early diagnosis and treatment of human cases [75]. In Brazil, the control of VL is heavily based on elimination of positive dogs, and although this measure does have some effect on the number of human cases, it is not sufficient for eradication of the disease [75,76]. In 2003, the Brazilian Ministry of Health made new recommendations for controlling VL, including control of the vector based on entomological surveys. Thus, insecticide application, which was previously performed within 300 meters of each confirmed human case, is now performed in areas classified as at moderate or high risk of transmission, whereas areas free of the vector are not treated with insecticide and remain under entomological surveillance [77].
The control of canine VL is based on the elimination of seropositive dogs; serological surveillance in areas with high or moderate risk of transmission; the use of collars impregnated with 4% deltamethrin; the use of kennels with screens; and control of the population of stray dogs.However, treatment of canine VL is not recommended since it results only in improvement of clinical signs, but treated dogs remain a reservoir of the disease [77,78].
A commercial vaccine against canine VL is currently available in Brazil [79].This vaccine has been approved by the Brazilian Ministry of Agriculture, which is responsible for evaluation and approval of therapeutic and prophylactic animal health products.However, the Brazilian Ministry of Health does not recommend vaccination of dogs for prevention of human VL, and proposes the elimination of seropositive dogs in areas with transmission of the disease even if the dog has been vaccinated ("Nota Técnica", May 09, 2005).
Recently, LiESAp-MDP, which is an experimental but not commercially available vaccine, had an efficacy of 92% in experimentally and naturally infected dogs in France, with protection lasting for 24 months [80].The vaccine commercialized in Brazil has 79% efficacy after 12 months of vaccination [79].In Iran, another experimental vaccine that is a precipitate of Leishmania major associated with aluminum hydroxide and the BCG vaccine resulted in an efficacy of 69% [81].
Although in theory vaccination of dogs may have a significant impact on the occurrence of human VL, a drawback of the vaccine currently available in Brazil is that it does not allow serological differentiation between infected and vaccinated dogs, which may eventually compromise the efficacy of the official programs for controlling VL.
Future perspectives
Human VL in Brazil affects mostly the low-income class. Therefore, improving socioeconomic conditions is an essential step for effectively controlling the disease. According to Link and Felan [82], socioeconomic deprivation might be a "fundamental cause" of disease because it favors multiple risk factors. In fact, the State of Maranhão, which has a low Human Development Index (HDI) and is the poorest state in Brazil (http://www.undp.org.br), has a high number of cases (Figure 1). VL has a complex correlation with poverty, involving factors such as poor housing and sanitation conditions, economically driven migration, poor nutrition, and the predisposition to other infectious diseases, which are features of the most heavily affected populations [reviewed by 83]. Poor sanitation is a factor that contributed to the maintenance and dissemination of VL in two districts of São José de Ribamar, in the metropolitan area of São Luis [25]. Cases of human VL tend to have the same spatial pattern as irregular land occupations due to migratory fluxes [24]. Generally, the habitations are built near preexisting forests or non-developed areas, where there is a tendency for an abundance of the vector and reservoirs [26].
Other factors that may have a significant impact in controlling VL in the future include the development of an insecticide with lower environmental impact and the development of new drugs that require a shorter therapeutic scheme and that are less toxic [84].In addition, improvement of the currently available serological tests reducing the occurrence of false-negative results may also influence the efficacy of control programs.This could be achieved by using immunodominant antigens for serological detection, which may favor standardization of the assay and improvement of its sensitivity and specificity, particularly during the early stages of canine VL.
Urbanization of VL in Brazil during the last decades has created a very favorable epidemiological condition for maintenance of the disease, with dense populations of reservoirs, vector and humans sharing the same tropical environment that is extremely favorable for transmission of the disease.Thus, controlling VL under these conditions is an enormous challenge.
Figure 1. Distribution of human visceral leishmaniasis in Brazil. Cumulative number of cases from 1980 to 2005. Source of data: Brazilian Ministry of Health (MS/SVS, SES and SINAN).
Figure 3. Bone marrow smear from a dog with a macrophage containing a myriad of amastigotes of Leishmania chagasi.
Figure 4. Number of confirmed cases and trend line of human visceral leishmaniasis in Ribeirão das Neves (metropolitan area of Belo Horizonte, State of Minas Gerais) from 1999 to 2006.
Figure 5. Number of samples of canine serum processed (black columns) and number of seropositive dogs (gray columns) from 2002 to 2006. The percentages indicate the frequency of seropositive dogs per year.
Increased right atrial volume measured with cardiac magnetic resonance is associated with worse clinical outcome in patients with pre‐capillary pulmonary hypertension
Abstract Aims Pre‐capillary pulmonary hypertension (PHpre‐cap) has a poor prognosis, especially when caused by pulmonary arterial hypertension (PAH) associated with systemic sclerosis (SSc‐PAH). Whether cardiac magnetic resonance (CMR)‐based quantification of atrial volumes in PHpre‐cap is beneficial in risk assessment is unknown. The aims were to investigate if (i) atrial volumes using CMR are associated with death or lung transplantation in PHpre‐cap, (ii) atrial volumes differ among four unmatched major PHpre‐cap subgroups, and (iii) atrial volumes differ between SSc‐PAH and idiopathic/familial PAH (IPAH/FPAH) when matched for pulmonary vascular resistance (PVR). Methods and results Seventy‐five PHpre‐cap patients (57 ± 19 years, 53 female, 43 de novo) with CMR and right heart catheterization were retrospectively included. Short‐axis stacks of cine images were analysed, and right and left atrial maximum (RAVmax and LAVmax) and minimum volume (RAVmin and LAVmin) were indexed for body surface area. Increased (mean + 2 SD) and reduced (mean – 2 SD) volumes were predefined from CMR normal values. Transplantation‐free survival was lower in patients with increased RAVmax than in those with normal [hazard ratio (HR) = 2.1, 95% confidence interval (CI) 1.1–4.0] but did not differ between those with reduced LAVmax and normal (HR 2.0, 95% CI 0.8–5.1). RAVmax and RAVmin showed no differences among unmatched or matched groups (P = ns). When matched for PVR, LAVmax, LAVmin, and pulmonary artery wedge pressure were reduced in SSc‐PAH compared with IPAH/FPAH (95% CI 0.3–21.4, 95% CI 0.8–19.6, and 95% CI 2–7, respectively). Conclusions Patients with PHpre‐cap and increased right atrial volume measured with CMR had worse clinical outcome. When matched for PVR, left atrial volume was lower in SSc‐PAH than in IPAH/FPAH, consistent with left‐sided underfilling, indicating a potential differentiator between the groups.
Introduction
Pre-capillary pulmonary hypertension (PH pre-cap ) is a severe condition with poor prognosis. 1 PH pre-cap is characterized by elevated pulmonary arterial pressure due to increased pulmonary vascular resistance (PVR), which ultimately leads to right heart failure and premature death unless lung transplantation is performed. 1 The causes of PH pre-cap are heterogeneous with several underlying clinical subgroups: pulmonary arterial hypertension (PAH), PH pre-cap due to lung disease, chronic thrombo-embolic PH pre-cap (CTEPH), and PH pre-cap with unclear or multifactorial mechanisms. 1 Survival rates differ among groups and even within groups with lowest survival in PAH associated with systemic sclerosis (SSc-PAH). 2,3 Findings on echocardiography are indicative of PH pre-cap . 1,4 To confirm the diagnosis, invasive right heart catheterization is required. Haemodynamic features differ among subgroups at the time of diagnosis. 5,6 Patients with idiopathic or familial PAH (IPAH/FPAH) often have higher mean pulmonary artery pressure than do SSc-PAH and other connective tissue disease (CTD) PAH patients. 6 On the other hand, SSc-PAH patients have higher mortality even when IPAH/FPAH and SSc-PAH have similar haemodynamic status at diagnosis. 2,6 The cardiac causes for the poor survival in SSc-PAH are likely diverse and not fully clarified.
Cardiac magnetic resonance (CMR) is gold standard for time-resolved volumetric cardiac assessment. While enlarged right atrial size measured by maximal end-systolic area using two-dimensional echocardiography has prognostic value for negative outcome in PH pre-cap , the accuracy of echocardiographic right atrial volume (RAV) measurement has been questioned. 7,8 Only modest association of RAV with threedimensional methods and RA size from echocardiography has been described. 9 Three-dimensional echocardiography is advancing as a RAV metric for right atrial pressure estimation. [10][11][12] Reduced left atrial volume (LAV) as a sign for underfilling of the left heart in PH pre-cap has been suggested and could serve as an indicator of poor prognosis. 13,14 So far, a few studies have targeted the prognostic value of three-dimensional-assessed atrial volume in PH pre-cap using CMR. 12,15 However, the long-term performance of atrial volumes as prognostic factors for survival in PH pre-cap is not fully elucidated.
The hypotheses tested in the present project are (i) CMRdetermined atrial volumes are associated with outcome in patients with PH pre-cap and (ii) atrial volumes differ among PH pre-cap subgroups. Therefore, we retrospectively examined (i) if CMR atrial volumes were outcome predictors for death or lung transplantation, (ii) if atrial volumes could differentiate among four major aetiologic PAH subgroups, and (iii) if RAV and LAV or survival differed between SSc-PAH and IPAH/FPAH groups when matched for PVR.
Study population
Patients with pulmonary hypertension (PH), who were examined with CMR between 2003 and 2015, were retrospectively identified at the Department of Clinical Physiology and Nuclear Imaging, Skåne University Hospital, Lund, Sweden. PH pre-cap inclusion criteria were mean pulmonary arterial pressure ≥ 25 mmHg and pulmonary artery wedge pressure ≤ 15 mmHg at normal or reduced cardiac output. 1 Exclusion criteria were clinically significant shunts or congenital heart disease, PH due to left heart disease with pulmonary artery wedge pressure > 15 mmHg, or PH due to idiopathic lung disease or hypoxia. 1 Patients in whom chronic obstructive pulmonary disease (COPD) or emphysema was defined as the cause of PH were not included. When there were mild signs of COPD that, by clinical decision, were not defined as the primary cause of PH, COPD was listed as a co-morbidity. Patients whose datasets lacked full atrial images in the short-axis stack were excluded. In total, 75 patients examined between 2005 and 2015 fulfilled the criteria and were included (Supporting Information, Data S1).
Patients were divided into subgroups as (i) IPAH/FPAH, (ii) SSc-PAH, (iii) CTD-PAH other than systemic sclerosis (SSc), and (iv) CTEPH. All patients had signed informed consent, and the regional ethics committee, Region Skåne, Sweden, approved the study.

Cardiac magnetic resonance imaging

CMR was performed with 1.5 tesla magnetic resonance imaging scanners (Philips Achieva, Best, The Netherlands, and Siemens MAGNETOM Aera, Erlangen, Germany) and with a cardiac coil. ECG-gated parallel short-axis stack, long-axis, and transversal cine steady-state free precession images were acquired at end-expiratory breath-hold covering the whole heart. Typical image parameters for Philips were a temporal resolution of 47 ms reconstructed to 30 time phases per cardiac cycle, 60° flip angle, 3 ms cycle repetition time, 1.4 ms echo time, and slice thickness 8 mm with no slice gap; for Siemens, the temporal resolution was 46 ms reconstructed to 25 time phases per cardiac cycle, 60° flip angle, 3 ms cycle repetition time, 1.4 ms echo time, and slice thickness 6 mm with a 2 mm slice gap.
Image analysis was performed using the freely available software Segment version 2.0 (http://segment.heiberg.se). 16 RAV and LAV were measured by manual tracing of the atrial endocardial maximal and minimal contours in the short-axis stack of images at ventricular end-systole and end-diastole, respectively. All CMR measures were indexed for body surface area to enable comparison among patients despite differences in body composition, and hence all volumes described in this study are indexed volumes. Atrial appendages were included in the volumes. For RAV, the inferior and superior venae cavae as well as the coronary sinus were excluded. Lung veins were excluded from LAV. Right and left atrial maximal and minimal volumes (RAV max , RAV min , LAV max , and LAV min ) were calculated from the short-axis stacks at both ventricular end-systole and end-diastole.
Increased RAV max was predefined from previously reported normal values as mean + 2 standard deviations (SD). 17 Therefore, RAV max > 74 mL/m 2 (54 + 2 × 10 mL/m 2 ) was considered increased. Reduced LAV max was predefined as the normal-value mean − 2 SD, so LAV max < 26 mL/m 2 (39 − 2 × 6.7 mL/m 2 ) was considered reduced. 17 As increased LAV max is an indicator of increased filling pressure and an established prognostic factor for poor outcome in various heart diseases, patients with increased LAV max (mean + 2 SD) > 52 mL/m 2 were included as a separate group in the Kaplan-Meier survival analysis. 18,19 In the compilation of normal values from CMR by Kawel-Boehm et al., biplane, non-indexed normal minimal values have been suggested. 17 However, reference values for RAV min and LAV min have not previously been reported with a three-dimensional volumetric assessment from CMR; therefore, the median in the specific group was used for defining increased and decreased minimal volume. 17 The atrial indices of right-to-left maximal and minimal volumes (AI max and AI min ) were calculated as RAV max /LAV max and RAV min /LAV min . 20
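As an illustration of how these predefined cutoffs and atrial indices translate into a calculation, a minimal Python sketch is given below. The normal-value means and SDs (54 ± 10 mL/m² for RAV max and 39 ± 6.7 mL/m² for LAV max) are taken from the text above; the example patient volumes are placeholders, not study data.

```python
# Sketch of the predefined cutoffs and atrial indices described above.
# Normal values (mean, SD) follow Kawel-Boehm et al. as quoted in the text;
# the example patient volumes are placeholders, not study data.

RAV_MAX_NORMAL = (54.0, 10.0)   # mL/m^2 (mean, SD)
LAV_MAX_NORMAL = (39.0, 6.7)    # mL/m^2 (mean, SD)

def increased(value, normal):
    """Increased = above mean + 2 SD of the normal population."""
    mean, sd = normal
    return value > mean + 2 * sd

def reduced(value, normal):
    """Reduced = below mean - 2 SD of the normal population."""
    mean, sd = normal
    return value < mean - 2 * sd

def atrial_indices(rav_max, rav_min, lav_max, lav_min):
    """Right-to-left atrial volume indices AImax and AImin."""
    return rav_max / lav_max, rav_min / lav_min

# Hypothetical patient, volumes already indexed for body surface area
rav_max, rav_min, lav_max, lav_min = 82.0, 60.0, 24.0, 15.0
print(increased(rav_max, RAV_MAX_NORMAL))   # True  (> 74 mL/m^2)
print(reduced(lav_max, LAV_MAX_NORMAL))     # True  (< ~25.6 mL/m^2)
print(atrial_indices(rav_max, rav_min, lav_max, lav_min))
```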
Right heart catheterization
All patients recruited were referred for right heart catheterization on the basis of clinical indications. Right heart catheterization was performed at rest in the supine position with local anaesthesia, via an 8 French sheath inserted in the right internal jugular vein using a triple-lumen 7.3 French balloon-tipped Swan-Ganz catheter. Pulsatile and mean right atrial pressures, pulmonary arterial pressures, and pulmonary artery wedge pressures were recorded. Cardiac output was calculated via thermodilution, with PVR expressed as (PA mean − wedge mean )/CO.
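The PVR definition above can be written directly as a one-line function; the haemodynamic values below are placeholders chosen only to illustrate the unit convention (mmHg and L/min yielding Wood units).

```python
# Pulmonary vascular resistance as defined in the text:
# PVR = (mean PA pressure - mean wedge pressure) / cardiac output,
# giving Wood units when pressures are in mmHg and CO is in L/min.
# The numbers below are placeholders for illustration only.

def pvr_wood_units(pa_mean_mmhg, wedge_mean_mmhg, cardiac_output_l_min):
    return (pa_mean_mmhg - wedge_mean_mmhg) / cardiac_output_l_min

print(pvr_wood_units(45.0, 10.0, 4.5))  # ~7.8 WU, similar to the matched IPAH/FPAH group
```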
Statistics
Statistical analyses were performed in GraphPad Prism 7 and SPSS 24. Continuous data were expressed as mean ± SD; categorical data were expressed as absolute numbers and per cent. Kaplan-Meier curves were used for transplantation-free survival analysis. Data were analysed with Cox regression and entered into a univariate regression analysis. Measures with P < 0.1 were then included in a multivariate analysis of imaging measures. Collinearity was examined for atrial volumes; variables with a correlation of r > 0.8 were considered closely related, and both would therefore not be included in the multivariate Cox regression analysis. Distribution of co-morbidities was investigated with Fisher's exact test. Comparisons among groups were performed using a log-rank (Mantel-Cox) test. The Kruskal-Wallis test was used for comparison among PVR-unmatched subgroups. For comparison of PVR-matched SSc-PAH and IPAH/FPAH subgroups, the Mann-Whitney U-test was performed. Hazard ratios (HR) and differences between matched groups were expressed with 95% confidence intervals (95% CI). Correlations were examined with Spearman's correlation coefficient. Normal distribution was not assumed and was tested with histograms. Intraobserver and interobserver variability was tested in 14 patients, with results expressed as intraclass correlation (ICC) and bias described according to the Bland-Altman method, with mean ± SD in per cent and volume in millilitres. 21 A two-sided P-value of < 0.05 was considered statistically significant.
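The study itself used GraphPad Prism 7 and SPSS 24; purely as an illustration, a comparable transplantation-free survival analysis could be sketched in Python with the lifelines package, as below. The dataframe, column names, and values are hypothetical.

```python
# Sketch of a comparable transplantation-free survival analysis in Python
# (the study itself used GraphPad Prism 7 and SPSS 24). The dataframe,
# column names, and values are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "years":     [3.1, 5.5, 2.0, 6.8, 1.4, 4.2],   # follow-up time
    "event":     [1, 0, 1, 0, 1, 1],               # death or lung transplantation
    "rav_max":   [82, 90, 95, 55, 110, 60],        # mL/m^2, indexed
    "rav_group": [1, 1, 1, 0, 1, 0],               # 1 = RAVmax > 74 mL/m^2
})

# Kaplan-Meier estimates and log-rank comparison between RAVmax groups
km = KaplanMeierFitter()
for label, grp in df.groupby("rav_group"):
    km.fit(grp["years"], grp["event"], label=f"RAVmax group {label}")
    print(label, km.median_survival_time_)
result = logrank_test(df.loc[df.rav_group == 1, "years"],
                      df.loc[df.rav_group == 0, "years"],
                      df.loc[df.rav_group == 1, "event"],
                      df.loc[df.rav_group == 0, "event"])
print(result.p_value)

# Univariate Cox regression of RAVmax (per mL/m^2), analogous to Table 2
cph = CoxPHFitter()
cph.fit(df[["years", "event", "rav_max"]], duration_col="years", event_col="event")
cph.print_summary()
```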
Patient characteristics
In total, 75 patients (age 57 ± 19 years, 53 female, 43 de novo) with PH pre-cap fulfilled the criteria. The included patients were examined between 2005 and 2015. The selection process for including patients is shown in the Supporting Information. Clinical characteristics are shown in Table 1. When atrial volumes were compared in SSc-PAH and IPAH/FPAH adjusted for PVR, 15 patients with SSc-PAH [PVR 7.3 ± 3.2 Wood units (WU)] had complete data including invasive haemodynamics and were matched one to one with 15 patients with IPAH/FPAH (PVR 7.8 ± 3.1 WU). The median time between CMR and right heart catheterization was 2 days. The median follow-up time was 2.3 years. The composite endpoint occurred in 36 patients, including 29 deaths and seven lung transplantations.
Survival
RAV max in all the 75 PH pre-cap patients was 76 ± 36 mL/m 2 . Thirty-four patients had an increased RAV max > 74 mL/m 2 , and 41 patients had a normal or low RAV max . Survival with increased RAV max was shorter than that without (3.1 vs. 5.5 years, HR 2.1, 95% CI 1.1-4.0) (Figure 1A). When comparing the prevalence of co-morbidities (diabetes mellitus, COPD/emphysema, ischaemic heart disease, systemic hypertension, thyroid disease, atrial fibrillation, and stroke; Table 1) between patients with increased and normal RAV max , atrial fibrillation was the only co-morbidity to differ between the groups. Patients with increased RAV max were more likely to suffer from atrial fibrillation than were patients with normal RAV max (P = 0.009). When the Cox regression analysis was performed for co-morbidities, stroke, atrial fibrillation, and thyroid disease had P < 0.1 for survival in a univariate analysis. These were included in a multivariate model with RAV, and only stroke (P = 0.008) and RAV (RAV max P = 0.02, RAV min P = 0.04) remained associated with survival.
RAV min in all PH pre-cap patients was 53 ± 35 mL/m 2 . The survival analysis showed that patients with RAV min above the median had significantly shorter survival than patients with RAV min below the median (Figure 1C). LAV max in all PH pre-cap patients was 41 ± 20 mL/m 2 . Twelve patients had reduced LAV max < 26 mL/m 2 , 51 patients had normal LAV max , and 12 patients had increased LAV max . Survival with reduced LAV max was shorter than that of patients with normal LAV max , but not significantly (4.2 vs. 6.8 years, HR 2.0, 95% CI 0.8-5.1) (Figure 1B). Survival for patients with increased LAV max was 3.1 years compared with 6.8 years in patients with normal LAV max (HR 1.6, 95% CI 0.6-4.3). The survival analysis showed no difference between patients with LAV min below the median and patients with LAV min above the median (HR 1.2, 95% CI 0.6-2.3) (Figure 1D).
In a univariate regression analysis of CMR atrial and ventricular volumes in all 75 PH pre-cap patients ( Table 2), RAV max and RAV min were associated with an increased risk for transplantation or death (RAV max HR 1.014, 95% CI 1.004-1.023 and RAV min HR 1.013, 95% CI 1.004-1.023). No other CMR measure was found to be significantly associated with risk for transplantation or death ( Table 2). RAV max , RAV min , and right ventricular ejection fraction (RVEF) (P = 0.005, P = 0.006, and P = 0.08, respectively) were included in a multivariate analysis. RAV max and RAV min showed a very strong correlation with each other (r = 0.96, P < 0.001) and were therefore analysed in relation to RVEF separately. In a multivariate analysis, RAV max (HR 1.014, 95% CI 1.002-1.023) and RAV min (HR 1.012, 95% CI 1.001-1.022) remained significant, but RVEF showed no significance ( Table 2).
Survival in all PH pre-cap patients was 5.0 years. In IPAH/FPAH, survival was 5.5 years, in SSc-PAH 2.8 years, and in CTD-PAH 6.2 years. Survival in CTEPH could not be calculated owing to the small sample size. No differences in survival time among the remaining groups were significant, when unmatched for PVR (P = 0.13).
Atrial volume comparison
Volumes within the aetiologic subgroups are shown in Table 3. There were no differences among aetiologic groups in RAV max , RAV min , LAV max , LAV min , AI max , or AI min (Table 3, Figure 2).
There was no difference between SSc-PAH patients and IPAH/FPAH patients in RAV max , RAV min , or right atrial pressure (Table 4, Figure 2C, 2D).
AI max and AI min did not differ between SSc-PAH and IPAH/FPAH ( Table 4).
Correlation with prognostic factors
RAV max significantly correlated with invasively measured right atrial pressure and cardiac index as well as N-terminal pro-brain natriuretic peptide (NT-proBNP) levels (Figure 3). RAV max did not differ among groups of New York Heart Association (NYHA) class (P = 0.50). LAV max did not correlate with right atrial pressure, cardiac index, or NT-proBNP levels. However, a correlation was seen between LAV max and cardiac index when excluding patients with atrial fibrillation from the analysis (Figure 3). LAV max did not differ among groups of NYHA class (P = 0.91).
Discussion
This study shows that in patients with PH pre-cap , an increased RAV was associated with worse clinical outcome. Also, there was no significant association between reduced LAV and survival in PH pre-cap . Furthermore, there were no differences in RAV or LAV among unmatched PH pre-cap subgroups. Lastly, when SSc-PAH and IPAH/FPAH were matched for PVR, SSc-PAH had reduced LAV, but RAV did not differ between the subgroups.

Our results of the association between RAV and clinical outcome are in concordance with previous studies with both two-dimensional and three-dimensional echocardiography in PH pre-cap patients. 12,22 Patel et al. and Fukuda et al. have shown that the right atrial maximal areas or volumes are of importance for poor outcome, although the association with invasively measured right atrial pressure is only moderate. 10,11,22 But echocardiography alone is insufficient for monitoring PH pre-cap and detecting disease progression. 5 For example, estimation of peak pulmonary systolic pressure by the tricuspid regurgitant gradient is useful in early PH pre-cap detection but underestimates severity of disease when cardiac output decreases in a later stage. 1,8 Therefore, evaluation of atrial volumes may be more appropriate to monitor disease progression. 11,12 Prognostic factors in PH pre-cap from CMR have focused on ventricular measures, of which right ventricular end-diastolic and stroke volumes as well as left ventricular end-diastolic volume are associated with poor outcome. 23 Our study showed that the association between survival and RAV also applies to CMR. Sato et al. showed that increased RAV min , defined as above the median within the group, was associated with clinical worsening in PH pre-cap such as hospitalization, death, or transplantation, but with no record of maximal RAVs or of their relation to normal values. 15 Darsaklis et al. have also targeted the subject with CMR-assessed right atrial function in patients with PH, but in both pre-capillary and post-capillary PH (Groups 1-5), 24 using a single-plane two-dimensional method in the four-chamber view, detecting RAVs with an area-length method. They showed that decreased right atrial emptying fraction is associated with poor survival. Our study supports previous data and furthermore shows that full volumetric, non-approximative assessment with CMR is relevant when using a cut-off value derived from normal values. To the best of our knowledge, our study is the first to use a cut-off from normal values with three-dimensional measures of atrial volume from CMR in PH pre-cap patients. Our findings are further supported by the significant correlation of RAV max to known prognostic markers such as right atrial pressure, cardiac index, and NT-proBNP. A cut-off from normal values is applicable to other studies instead of using the median within a specific study. This method could add prognostic information when performing CMR on PH pre-cap patients in a clinical setting.

Our study was not designed to perform a comparison with well-validated risk scores such as the REVEAL study, 25 but aimed to test a simple risk stratification strategy including the atrial volumes from CMR. From echocardiography, outcome in PAH is associated with right atrial size alone, as demonstrated by Bustamanta-Labarta et al. 26 and Raymond et al., 27 and, furthermore, with the expanded right heart score including systolic blood pressure with right atrial area and right ventricular function, as shown by Haddad et al. 28 Even if our study was retrospective and focused on prevalent cases of patients with PAH, a large proportion of patients was investigated de novo and was treatment naïve. In our univariate regression analysis, we found an increased HR of 1% for transplantation or death for each additional millilitre per square metre of RAV. This increased HR remained in a multivariate analysis when adjusting for RVEF. RVEF has been suggested as the strongest predictor of mortality from CMR on meta-analysis. 29 However, former studies have seldom included RAV. Of note, in the present study, an increased HR was not significantly shown for ventricular volumes. Our findings suggest that RAV may provide additive information to the ventricular volumes and the RVEF from CMR. In newly published studies on risk assessment based on the guideline risk score, right atrial area from CMR is used as equivalent to echocardiographic cut-off data. 1,[30][31][32] The prognostic use of right atrial area is not supported in CMR studies but builds on echocardiographic data. 1,[30][31][32] Our findings that RAV min and RAV max were associated with outcome support the view that these volumes are highly relevant measures when performing CMR in PAH patients. Therefore, RAV using CMR can be a new variable in risk assessment of patients with PH pre-cap . 30 To include the newly suggested right heart score in a prospective CMR study and to design a prospective study where the atrial volume is followed in different treatment groups are possible approaches for further studies. 28

Twelve patients in our study had reduced LAV. No significant association was found between reduced LAV and transplantation-free survival in this study; however, the results indicate an increased HR, which could possibly be of significance in a larger study with more statistical power. The origin of the left ventricular dysfunction that occurs in PH pre-cap is an issue of current debate. Underfilling of the left side related to reduced preload or reduced flow has been suggested, rather than a true diastolic dysfunction, which would result in enlarged LAV. [33][34][35][36] Increased LAV is a known correlate of left ventricular dysfunction and reflects increased left ventricular filling pressure. 8,18 In contrast, a small LAV could reflect underfilling, as Marston et al. showed in CTEPH patients. 13 Kopic et al. showed that left atrial pressure in pulmonary regurgitation is closely related to right ventricular dysfunction and decreased longitudinal pumping, suggesting that left-sided underfilling originates from the right ventricle. 37 Although LAV was not associated with outcome in our small retrospective study, the findings indicate that LAV may be associated with survival in a U-shaped way, with decreased survival in patients with both reduced and increased LAV compared with normal LAV.
Our study was not designed to perform a comparison with well-validated risk scores such as the REVEAL study, 25 but aimed to test a simple risk stratification strategy including the atrial volumes from CMR. From echocardiography, outcome in PAH is associated with right atrial size alone as demonstrated by Bustamanta-Labarta et al. 26 and Raymond et al. 27 and, furthermore, to the expanded right heart score including systolic blood pressure with right atrial area and right ventricular function as shown by Haddad et al. 28 Even if our study was retrospective and focused on prevalent cases of patients with PAH, a large proportion of patients was investigated de novo and was treatment naïve. In our univariate regression analysis, we found an increased HR of 1% for transplantation or death for each increased millilitre per square metre of RAV. This increased HR remained in a multivariate analysis when adjusting for RVEF. RVEF has been suggested as the strongest predictor of mortality from CMR on meta-analysis. 29 However, former studies have seldom included RAV. Of note, in the present study, increased HR was not significantly shown for ventricular volumes. Our findings suggest that RAV may provide additive information to the ventricular volumes and the RVEF from CMR. In the newly published studies on risk assessment from the risk score of guidelines, right atrial area from CMR is used equivalent to echocardiographic cut-off data. 1,[30][31][32] The prognostic use of right atrial area is not supported in CMR studies but builds on echocardiographic data. 1,[30][31][32] Our findings that RAV min and RAV max were associated with outcome support that these volumes are highly relevant measures, when performing CMR in PAH patients. Therefore, RAV using CMR can be a new variable in risk assessment of patients with PH pre-cap . 30 To include the newly suggested right heart score in a prospective CMR study and to design a prospective study where the atrial volume is followed in different treatment groups are possible approaches for further studies. 28 Twelve patients in our study had reduced LAV. No significant association was found between reduced LAV and transplantation-free survival in this study; however, the results indicate an increased HR, which could possibly be of significance in a larger study with more statistical power. The origin of the left ventricular dysfunction that occurs in PH pre-cap is an issue of current debate. Underfilling of the left side related to reduced preload or reduced flow has been suggested, rather than a true diastolic dysfunction, which would result in enlarged LAV. [33][34][35][36] Increased LAV is a known correlate of left ventricular dysfunction and reflects increased left ventricular filling pressure. 8,18 In contrast, a small LAV could therefore reflect underfilling, as Marston et al. showed in CTEPH patients. 13 Kopic et al. showed that left atrial pressure in pulmonary regurgitation is closely related to right ventricular dysfunction and decreased longitudinal pumping, suggesting that left-sided underfilling originates from the right ventricle. 37 Although LAV was not associated with outcome in our small retrospective study, the findings indicate that LAV may be associated with survival in a U-shaped way with decreased survival in patients with both reduced and increased LAV compared with normal LAV. 
Normal values from three-dimensional volumetric imaging for minimal LAV are needed and could help deepen the currently limited knowledge about left atrial haemodynamics in PH pre-cap . Left-sided underfilling and its pathophysiological significance merit further attention and should be investigated in a larger cohort.
In our study, we found smaller LAVs, lower left atrial pressure, and a reduced survival in patients with SSc-PAH compared with IPAH/FPAH, when matched for PVR. This could reflect a higher degree of left-sided underfilling in SSc-PAH. Another explanation could be higher heart rate in SSc-PAH (81 ± 13 b.p.m.) than in IPAH/FPAH (71 ± 12 b.p.m., P = 0.02). Higher heart rate consequently reduces diastolic filling time and reciprocally affects atrial filling. Altered diastolic filling time leads to the ventricle being not fully relaxed when contraction starts and consequently smaller enddiastolic atrial volumes. Atrial index was the same in both groups, reflecting that RAV was also smaller in the SSc-PAH group than in IPAH/FPAH; however, this difference was not statistically significant owing to the small sample size and the larger variation in RAV. Possible differences in right atrial indices should be investigated in a larger cohort. SSc is in itself a severe condition, and PAH is among the leading causes of mortality. 38 SSc-PAH has the poorest survival among subgroups of PH pre-cap . 2,6 To characterize cardiac pathophysiological differences between SSc and other causes of PH pre-cap would therefore be of particular interest for understanding the causes of this increased mortality, and atrial volumes could be a new approach. As haemodynamic status differs between the groups with higher mean pulmonary arterial pressure and PVR generally seen in IPAH/FPAH at diagnosis, matching the groups to be compared on the basis of (mean) pressure or resistance allows investigating group differences independent of haemodynamic status. 6 Atrial measures in SSc have received limited attention. D'Andrea et al. showed that SSc patients without PAH compared with controls have impaired right atrial function, with impairment more evident in patients with higher pulmonary arterial pressure at exercise, suggesting that the right atrial function may be altered even before PAH diagnosis. 39 To the best of our knowledge, the left atrium in SSc-PAH has not been previously investigated with CMR. The present data on differences in both pressure and volume measures of the left atrium may represent a former undescribed pathophysiological difference between IPAH/FPAH and SSc-PAH. This supports our hypothesis that left heart haemodynamics is of importance in PH pre-cap and justifies future studies.
Limitations
This study was a single-centre retrospective study of a rare condition. Numbers of recruited subjects limited the possibility of matching in larger groups for age and gender. But PH pre-cap subgroups are not phenotypically similar in age and gender with CTD being more common in women than in men. 6 This means that matching of gender and age remains a substantial challenge even in larger study populations.
For inclusion for comparison between SSc-PAH and IPAH/FPAH, right heart catheterization had to be performed within 2 months of CMR. Non-contemporaneousness of haemodynamic vs. CMR data acquisition allows for disease progression/regression and associated haemodynamic alterations. Nevertheless, the time between CMR and right heart catheterization was similar in subgroups with median time difference of 1 day in SSc-PAH and 2 days in IPAH/FPAH. Of note, in the REVEAL study, 1 year survival did not differ between patients enrolled within 3 days of right heart catheterization and patients enrolled within 3 months of right heart catheterization. 25 In the study by Haddad et al., there was an average time between CMR and diagnosis of 1.5 ± 1.5 years. 28 Therefore, our time difference of median 2 days between right heart catheterization and CMR and with a majority of cases de novo could be considered well within the time spans of both the latter studies on risk stratification. 25,28 Nine patients had atrial fibrillation and accounted for most of the patients with increased LAV max . By excluding these patients from the non-decreased LAV max group, atrial fibrillation as a confounder was minimized. Patients with atrial fibrillation were also presented separately in the correlation analysis.
Lastly, intraobserver bias was small, with a somewhat larger bias for interobserver variability. However, both intraobserver and interobserver variability had excellent ICC > 0.9, which suggests that the volumetric assessments are reliable.
Systems biology approach to identification of biomarkers for metastatic progression in cancer
Background Metastases are responsible for the majority of cancer fatalities. The molecular mechanisms governing metastasis are poorly understood, hindering early diagnosis and treatment. Previous studies of gene expression patterns in metastasis have concentrated on selection of a small number of "signature" biomarkers. Results We propose an alternative approach that puts into focus gene interaction networks and molecular pathways rather than separate genes. We have reanalyzed expression data from a large set of primary solid and metastatic tumors originating from different tissues using the latest available tools for normalization, identification of differentially expressed genes and pathway analysis. Our studies indicate that regardless of the tissue of origin, all metastatic tumors share a number of common features related to changes in basic energy metabolism, cell adhesion/cytoskeleton remodeling, antigen presentation and cell cycle regulation. Analysis of multiple independent datasets indicates significantly reduced oxidative phosphorylation in metastases compared to primary solid tumors. Conclusion Our methods allow identification of robust, although not necessarily highly expressed biomarkers. A systems approach relying on groups of interacting genes rather than single markers is also essential for understanding the cellular processes leading to metastatic progression. We have identified metabolic pathways associated with metastasis that may serve as novel targets for therapeutic intervention.
Introduction
Metastasis (originating from Greek μετισταναι, to change) is the single most important event changing the course of cancer from manageable to fatal. For metastasis to occur, tumor cells must acquire the ability to detach from the original tumor, relocate through the blood or lymphatic circulation and start a new colony in a different part of the organism [1]. In spite of intensive research [2][3][4][5][6][7][8][9][10][11] there is no consensus regarding the origin of metastases. According to one model, metastatic transformation can be triggered in primary solid tumors by certain conditions, while another model links metastatic potential to a very rare subtype of tumor cells, occurring on the order of one in many millions. Genetic background is also viewed as an important determining factor in metastatic transformation [10,11]. This difference is important for both diagnostic and prognostic purposes.
Early cancer is clinically heterogeneous, and many patients can have an "indolent" disease course that does not significantly affect their survival. Once metastatic disease is documented clinically, the majority of patients die from their tumors as opposed to other causes [12]. This has led some researchers to consider the disease as a series of "states" that include clinically localized tumors and those that have metastasized, as a framework to assess the clinical and biological factors associated with specific phenotypes and outcomes [13]. However, there are other plausible concepts. Analysis of relations between different molecular subtypes of cancer and identification of genes specific to such subtypes is important for understanding the biological basis for this clinical heterogeneity and thus is critical in assessing prognosis, selecting therapy, and evaluating treatment effects. Metastatic transformation is a multi-stage process involving complex interactions between tumor cells and the host [14]. Cells from primary tumors must detach, invade stromal tissue, and penetrate blood or lymphatic vessels by which they disseminate. They must survive in the circulation to reach a secondary site in which they lodge because of physical size or binding to specific tissues. To form clinically significant tumors, metastatic cells must also adjust their metabolism and signaling systems to proliferate in the new microenvironment. Tumor cells growing at metastatic sites are continually selected for their growth advantage. This is a complex and dynamic process that is expected to involve alterations in many genes and transcriptional programs.
Considerable amounts of gene expression data have been deposited in public databases and/or are available upon request from other investigators. Analysis of these data is generally limited to one set at a time. However, recent years have seen multiple attempts to conduct meta-analysis across independent data sets. Among the more successful of these is a study by Ramaswamy et al. of molecular signatures of metastasis in primary solid tumors aiming to elucidate the molecular nature of metastasis [7]. The authors analyzed the gene-expression profiles of 12 metastatic adenocarcinoma nodules of diverse origin (lung, breast, prostate, colorectal, uterus, ovary) and compared them with the expression profiles of 64 primary adenocarcinomas representing the same spectrum of tumor types obtained from different individuals. They identified 128 genes differentially expressed between primary and metastatic adenocarcinomas. A similar pattern was found in some primary tumors, which suggests that a gene expression program for metastatic transformation is present in some primary solid tumors. Further refining produced a subset of 17 unique genes that the authors presented as the most significant contributors to the difference between primary and metastatic tumors and thus the most likely diagnostic markers for the metastatic potential.
In this work, we present an alternative analysis of gene expression data based on a holistic approach integrating fragmented biological evidence and strengthening the unreliable conclusions by bringing more data rather than cutting straight to a few most consistent observations. We start with the analysis of the same meta-set of metastatic and primary tumors utilized by Ramaswamy et al., but supplement the analysis by algorithms, methodology and data not available to the original authors.
Data
The Ramaswamy et al. meta-set combines genes represented by different probes across multiple distinct microarray platforms (Affymetrix U95A, Hu6800 and Hu35K oligonucleotide microarrays as well as Rosetta inkjet microarrays) traced through by mapping to UniGene build #147. The data have been scaled to account for different microarray intensities in a given set. Each column (sample) has been multiplied in the data set by 1/slope of a least-squares linear fit of the sample versus a reference (the first sample in the data set) using only genes that had 'Present' calls in both the sample being re-scaled and the reference. A typical sample (that is, one with the closest number of 'Present' calls to the average over all samples in the data set) was chosen as reference.
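A minimal sketch of the rescaling step described above is given below, assuming the expression matrix is organized as genes × samples and that a boolean matrix of 'Present' calls is available; both are hypothetical stand-ins for the original data structures.

```python
# Sketch of the per-sample rescaling described above: each sample is divided
# by the slope of a least-squares linear fit of that sample against a
# reference sample, using only genes with 'Present' calls in both.
# Array shapes and the Present-call mask are hypothetical.
import numpy as np

def rescale_to_reference(expr, present, ref_idx):
    """expr: genes x samples matrix; present: boolean matrix of 'Present' calls."""
    scaled = expr.astype(float)
    ref = expr[:, ref_idx]
    for j in range(expr.shape[1]):
        mask = present[:, j] & present[:, ref_idx]
        # slope of a least-squares linear fit of sample j versus the reference
        slope = np.polyfit(ref[mask], expr[mask, j], 1)[0]
        scaled[:, j] = expr[:, j] / slope   # equivalent to multiplying by 1/slope
    return scaled
```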
The authors performed thresholding using a ceiling of 16,000 units and a floor of 20 units, then subjected gene-expression values to a variation filter that excluded genes with minimal variation across the samples being analyzed: for each gene, the fold-change (max/min) and the absolute variation (max − min) over samples were compared with predefined values, and genes not satisfying both conditions were excluded. The resulting data are available at http://wwwgenome.wi.mit.edu/cancer/solid_tumor_metastasis.
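The thresholding and variation filter could be expressed along the following lines; the fold-change and absolute-variation thresholds used here are placeholders, since the exact values are not restated in the text.

```python
# Sketch of the thresholding and variation filter described above.
# Genes are kept only if both the fold-change (max/min) and the absolute
# variation (max - min) across samples exceed chosen thresholds; the
# threshold values below are placeholders.
import numpy as np

def threshold_and_filter(expr, floor=20, ceiling=16000, min_fold=5, min_delta=500):
    clipped = np.clip(expr, floor, ceiling)
    gene_max = clipped.max(axis=1)
    gene_min = clipped.min(axis=1)
    keep = (gene_max / gene_min >= min_fold) & (gene_max - gene_min >= min_delta)
    return clipped[keep], keep
```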
Colorectal cancer data sets
The GDS756 dataset provided insight into the progression of cancer from primary tumor growth to metastasis by comparing gene expression in SW480, a primary tumor colon cancer cell line, to that in SW620, an isogenic metastatic colon cancer cell line. Both cell lines were derived from one individual. The GDS1780 set reflects a comparison of polysomal RNA from isogenic cell lines established from a colorectal cancer (CRC) patient [6]. The cell lines constitute a cellular model of the CRC transition from invasive carcinoma to metastasis. The RNA samples were submitted to microarray analysis using the HG-U133A chip from Affymetrix (Santa Clara, CA). Three biological replicates were carried out for each cell line and six hybridized arrays were obtained. Raw data were analyzed using two microarray analysis software packages, dChip (13) and R-Robust Microarray Analysis (R-RMA) (14). We have downloaded and used these data sets from GEO (GDS756 and GDS1780). Each data set contains 22283 features (probesets) and 6 columns (samples) representing two contrast classes, each with three replicate experiments.
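For readers who want to retrieve these records programmatically, one option is the GEOparse package, as sketched below; this is not how the original data were obtained, and the snippet requires network access.

```python
# One possible way to retrieve the GDS756 and GDS1780 sets programmatically,
# using the GEOparse package; the original analysis simply downloaded the
# records from the GEO website. Requires network access.
import GEOparse

for accession in ("GDS756", "GDS1780"):
    gds = GEOparse.get_GEO(geo=accession, destdir="./geo_cache")
    table = gds.table            # probe-level expression table (22283 rows)
    print(accession, table.shape)
```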
Breast cancer data
The data set we used in this study was downloaded from the GEO database (GDS2617); it contains 22283 probe sets. Tumorigenic and non-tumorigenic breast cancer cells were compared. Tumorigenic breast cancer cells were considered those expressing cell-surface proteins CD44 and CD24. Tumorigenic breast cancer cells isolated from 6 individuals were compared with normal breast epithelium derived from 3 individuals. In terms adopted by the authors of the original paper [4], tumorigenic cancer samples are those with invasive potential, resulting in metastatic progression.
Overview of the analysis pipeline
The general overview of the analysis pipeline is given in Supplemental Figure 1 (Additional File 1). Our pipeline includes most of the standard analysis steps, but has a few important differences. We extend the analysis to maximize the advantage of pathway analysis. The genes important for understanding the biological processes involved in metastatic transformation are selected not solely by the difference in signal emitted by microarray probes. Instead, we concentrate on the "group behavior" of genes, their ability to interact and pre-existing annotation placing the genes into the same biological pathway, linking to the same cellular function. Thus, the inference is done with very liberal selection criteria and not adjusted for multiple testing. We select a large list of potentially differential genes which may contain a large number of false-positives. We then select biological pathways, molecular function and GO terms which are over-represented in the initial intensity-based list. The benefits of the use of pathway and ontological analyses of microarray data have been presented previously [15,16]. More recent GSEA [17] and SAFE [18] methods can be very effective in highlighting the joint effect of a group of genes which may not be significantly differential if considered one by one. However, these methods require additional assumptions that may not be correct in every study. The significance of biological pathways is estimated through a variation of Fisher's exact test as implemented in Metacore or IPA and adjusted for multiple testing using Benjamini-Hochberg FDR analysis (which is a build-in function of GeneGo Metacore software). Single genes that do not map into any statistically significant pathway (i.e. missing all regulators, downstream targets, ligands and other components necessary for a functional molecular mechanism) may be still considered significant if reproducible and independently validated in additional experiments. However, in our analysis pipeline we leave such genes out regardless of their individual difference between primary and metastatic samples in a particular experiment. Our approach is based on collective effects of the groups of genes interlinked by functional relationships, which is inapplicable to some genes lacking information on function, regulation and interaction with other genes.
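The pathway over-representation step can be illustrated with a small Python sketch using Fisher's exact test and Benjamini-Hochberg adjustment; it is a generic re-implementation of the idea, not of the proprietary MetaCore or IPA algorithms, and the gene sets are hypothetical.

```python
# Sketch of the pathway over-representation test described above: for each
# pathway, a 2x2 contingency table (differential vs. not, in pathway vs. not)
# is tested with Fisher's exact test, and p-values are adjusted with the
# Benjamini-Hochberg FDR procedure. Gene sets here are hypothetical.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def pathway_enrichment(diff_genes, background, pathways):
    background = set(background)
    diff = set(diff_genes) & background
    names, pvals = [], []
    for name, members in pathways.items():
        members = set(members) & background
        a = len(diff & members)                      # differential, in pathway
        b = len(members) - a                         # not differential, in pathway
        c = len(diff) - a                            # differential, not in pathway
        d = len(background) - len(diff) - b          # neither
        _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        names.append(name)
        pvals.append(p)
    _, qvals, _, _ = multipletests(pvals, method="fdr_bh")
    return dict(zip(names, qvals))
```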
Figure 1. Biological pathways significantly overrepresented in the shortlist of genes differentially expressed between primary solid and metastatic tumors (Ramaswamy et al. data set).
Normalization
The data were normalized using a quantile algorithm similar to the one described by Bolstad et al. [19]. We applied our own C++ software for normalization, available from A. Ptitsyn upon request. Box-plots of the pre-normalized and normalized expression value distributions are shown in Supplemental Figure 2 (Additional File 1).
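For readers who want to reproduce this step without the C++ tool, a minimal numpy sketch of quantile normalization in the spirit of Bolstad et al. [19] is shown below; the authors' own software may differ in details such as tie handling.

```python
import numpy as np

def quantile_normalize(X):
    """X: genes x samples matrix. Each column is forced onto the same empirical
    distribution, namely the mean of the per-column sorted values."""
    order = np.argsort(X, axis=0)           # per-column sort order
    reference = np.sort(X, axis=0).mean(axis=1)   # mean distribution across arrays
    Xn = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        Xn[order[:, j], j] = reference      # map each column's ranks to the reference
    return Xn
```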
Preliminary selection of differentially expressed genes
A set of differentially expressed genes was selected using the University of Pittsburgh Gene Expression Data Analysis suite (GEDA, http://bioinformatics.upmc.edu/GE2/GEDA.html). For selection, we applied the standard J5 metric with threshold 4 and an optional 4 iterations of the Jackknife procedure to reduce the number of false-positive differential genes [20]. Both the J5 metric and the threshold parameter are standard pre-set values recommended by the developers. We did not attempt to estimate the confidence level of individual genes and used J5 not as a statistical test, but as a selection procedure providing a shortlist of genes deviating from the expected average value and enriched with differential genes. The MA plot showing the selected differential genes is presented in Supplemental Figure 3 (Additional File 1). Notably, the plot shows a balanced representation of moderately and highly expressed genes, i.e. the categories most appropriate for selection of diagnostic biomarkers. Application of selection procedures biased away from highly expressed genes may reveal truly differential genes, but yields fewer suitable biomarker candidates. We then applied the DAVID web-based tools to perform functional annotation of all potentially differential genes selected by GEDA. The complete annotated lists for the analyzed data sets are given in the Supplemental Materials (Supplemental Tables 2, 3, 4 and 5 in Additional File 1).
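As we understand the J5 statistic [20], it scores each gene by its between-class mean difference divided by the average absolute mean difference across all genes on the array; genes with |J5| above the threshold are short-listed. The sketch below is an illustrative reimplementation under that assumption (the optional Jackknife iterations are omitted), not the GEDA code itself.

```python
import numpy as np

def j5_scores(class1, class2):
    """class1, class2: genes x replicates arrays for the two contrast classes.
    J5 for a gene is its between-class mean difference scaled by the average
    absolute mean difference over all genes."""
    diff = class1.mean(axis=1) - class2.mean(axis=1)
    return diff / np.mean(np.abs(diff))

def j5_select(class1, class2, threshold=4.0):
    """Indices of genes whose |J5| meets the pre-set threshold of 4."""
    return np.where(np.abs(j5_scores(class1, class2)) >= threshold)[0]
```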
Functional annotation and pathway analysis
Analysis of biological pathways was performed using MetaCore software (GeneGo Inc.), Ingenuity Pathways Analysis (Ingenuity Systems Inc.) and the free DAVID tools [21].
Results and discussion
Analysis of the Ramaswamy et al. meta-set identified 741 genes differentially expressed between 64 primary solid tumor samples and 12 metastatic tumor samples. The complete list of these genes with functional annotation is given in Supplemental Table 1 (Additional File 1). As expected (see the explanation in the Methods section), this list is much larger than the original 128 genes identified by Ramaswamy et al. It is likely that some false-positive differential genes are mixed in; however, the exact number is not relevant to the analysis. Instead, we focused on the biological function of the genes on the selected shortlist. This function can be estimated through the analysis of the biological pathways, canonic interaction maps and gene ontology categories found within the shortlist. Analysis of statistically overrepresented pathways in the shortlist of differential genes revealed 19 canonic pathway maps (by the GeneGo MetaCore version) at a confidence level of p = 0.05 (adjusted for FDR). The chart of overrepresented metabolic maps is given in Figure 1. Analysis of the same shortlist of differential genes with the DAVID Functional Classification tool [21] also reveals 6 clusters of gene functions, each with 3 to 6 member functional categories (GO terms, PIR keywords, etc.), significantly overrepresented with p < 0.05 after FDR (Benjamini-Hochberg) adjustment. These results, presented in Supplemental Table 1 (Additional File 1), are based on algorithms and knowledge bases different from those of GeneGo MetaCore. However, scrutinizing the contents of these results allows reconstruction of the underlying biological processes, which are common, robust and reproducible across experiments.
The most remarkable among the pathways differentially represented between primary and metastatic tumors are extracellular matrix/cell adhesion/cytoskeleton remodeling and oxidative phosphorylation. The most common pathways fall into three classes: a) those related to remodeling of the internal cellular structure; b) those related to alterations in cellular metabolism; and c) those related to alterations in the cell surface, antigen presentation and cell adhesion. Pathways related to cell cycle regulation are also found among the differential genes.
Detailed analysis of each overrepresented pathway would be far beyond the scope of this study. However, it is appropriate to comment on key processes reflected by the metabolic maps.
One of the most strongly and consistently altered pathways in all evaluated datasets involves glucose utilization, specifically down-regulation of major components of oxidative phosphorylation (Figure 2) and up-regulation of genes in the glycolytic pathway (Figure 3). Down-regulated genes included mitochondrial ATPase pathway members, cytochrome oxidase, and NADH dehydrogenases. The phenomenon of preferential use of glycolysis for ATP generation in tumors was first observed by Otto Warburg in the first half of the 20th century (as early as 1925) [22-24]. However, more recent studies have demonstrated that exaggeration of the Warburg phenomenon through inhibition of mitochondrial function may promote metastasis via enhancement of tumor cell invasion and reduced sensitivity to apoptosis [25-27]. Furthermore, other groups have recently demonstrated a reduction in oxidative phosphorylation-related genes in metastases versus primary tumors using genomic methods [28,29], and a correlation of reduced ATP synthase function with outcome in patients with lung and colorectal cancer [30,31].
Figure 2. Genes differentially expressed between primary and metastatic cancers in the oxidative phosphorylation pathway. Relative change and direction of change in transcript abundance of differentially expressed genes are marked with color flags. Red designates higher and blue designates lower transcript abundance compared to the average between primary tumor (1) and metastatic samples (2). The legend for GeneGo pathway maps is given in Supplemental Figure 6 (Additional File 1).
In addition to providing a potential novel marker for metastatic potential, the broad conservation of alterations in bioenergetic pathways in metastatic tumors across tumor types and datasets suggests that interference with glycolytic pathways might be a viable therapeutic strategy for the prevention of metastasis. Glycolytic pathway analogs such as 2-deoxyglucose and 3-bromopyruvate are showing promise as therapeutic agents targeting hypoxic primary tumor cells [32], but have been poorly evaluated as antimetastatic drugs. However, a recent study demonstrated inhibition of pancreatic cancer metastasis in mice treated with 3-bromopyruvate when combined with a heat shock protein 90 inhibitor [33]. Furthermore, epigenetic therapies such as histone deacetylase and DNA methyltransferase inhibitors have been shown to reactivate expression of oxidative phosphorylation genes [34], conceivably reducing metastatic potential and suggesting that some alterations in this pathway may be epigenetically regulated.
Another critical pathway in our analysis that was robustly differentially expressed in primary versus metastatic tumors involves the extracellular matrix, cell adhesion, adhesion-mediated signal transduction and cytoskeletal organization, all of which are cooperatively important in the metastatic cascade.

Figure 3. Glycolysis pathway. In spite of the fragmentary nature of the composed meta-set, the Warburg effect is still reflected in the pathway map through the increased abundance of lactate dehydrogenase (LDHB). Relative change and direction of change in transcript abundance of differentially expressed genes are marked with color flags. Red designates higher and blue designates lower transcript abundance compared to the average between primary tumor (1) and metastatic samples (2). The legend for GeneGo pathway maps is given in Supplemental Figure 6 (Additional File 1).
Alterations in extracellular matrix proteins included reductions in collagen and fibronectin, and a shift in keratin isoform expression (Figure 4). These reductions in cell matrix proteins could theoretically facilitate cell motility and enhance extravasation. The cell adhesion molecules CD63 and CD151 were upregulated in metastatic tumors as well; experimental and clinical literature demonstrates a role for CD151 in metastasis [35,36]. Differential expression of some key proteins responsible for adhesion-mediated cell signaling (RhoA, talin, moesin, ezrin, SPARC) was also observed (Figure 5). Encouragingly, up-regulation of some well-characterized metastasis-associated genes such as RhoA and ezrin was observed. RhoA plays a key role in regulating the actin cytoskeleton and controlling cell motility, cell-cell interactions and intracellular trafficking [37]. Upregulation of RhoA has been associated with metastasis and/or negative outcome in carcinomas of the liver, kidney, esophagus and urinary tract [38-40]. Upregulation of ezrin has been implicated in metastasis of diverse tumor types, such as osteosarcoma, soft-tissue sarcomas, pancreatic carcinoma, and head and neck carcinoma, among others [41-46].
Significant upregulation of important cytoskeletal components such as actin, tubulin and vimentin was also observed. These proteins play a key role in cell motility, invasion, cell division and intracellular transport, and differential expression of these members has been implicated in human tumor progression as well [47,48]. Increased vimentin is a well-defined phenotypic indicator of the epithelial-mesenchymal transition, which has a known association with carcinoma aggressiveness [48].

Figure 4. Alterations in extracellular matrix and secreted proteins associated with metastatic cancer. Relative change and direction of change in transcript abundance of differentially expressed genes are marked with color flags. Red designates higher and blue designates lower transcript abundance compared to the average between primary tumor (1) and metastatic samples (2). The legend for GeneGo pathway maps is given in Supplemental Figure 6 (Additional File 1).

Figure 5. Alterations in adhesion-mediated signaling and cytoskeleton remodeling in metastatic cancer. Relative change and direction of change in transcript abundance of differentially expressed genes are marked with color flags. Red designates higher and blue designates lower transcript abundance compared to the average between primary tumor (1) and metastatic samples (2). The legend for GeneGo pathway maps is given in Supplemental Figure 6 (Additional File 1).
Several components of the extracellular matrix - cell adhesion - adhesion-mediated signaling - cytoskeleton pathway have the potential for "druggability". For example, small molecule inhibitors of RhoA are in development [49,50], and rapamycin and its analogs have been shown to inhibit the ezrin-associated metastatic phenotype through inhibition of downstream AKT-mTOR signaling [51].

Figure 6. Alterations in the antigen presentation pathway observed in metastatic tumors. Relative change and direction of change in transcript abundance of differentially expressed genes are marked with color flags. Red designates higher and blue designates lower transcript abundance compared to the average between primary tumor (1) and metastatic samples (2). The legend for GeneGo pathway maps is given in Supplemental Figure 6 (Additional File 1).
The antigen presentation pathway in Figure 6 also reflects, in part, cytoskeleton remodeling: metastatic samples show increased expression of beta-2-microglobulin in the endoplasmic reticulum. All other elements of the antigen presentation pathway found in the differential genes shortlist are down-regulated. Remarkably, the most downstream elements of the pathway, the final effectors, are the most down-regulated. Immune avoidance is thought to be another key component of successful metastasis; tumor cells must be able to survive in the circulation and avoid immune destruction upon arrest in the end-organ. Furthermore, evidence exists for epigenetic suppression of antigen presentation in tumor cells, and for potential reactivation of expression by drugs blocking histone deacetylase and/or DNA methyltransferase, leading to enhanced tumor cell immunogenicity [52-54].
How reproducible are the results of computational analysis of an artificial meta-set of primary and metastatic tumors? We cannot possibly repeat the sample collection, RNA extraction and hybridizations. However, since Ramaswamy et al. published their results, quite a few publications have reported microarray analyses of primary vs. metastatic tumors, and the data are available from public databases such as GEO (http://www.ncbi.nlm.nih.gov/geo). We have extracted and analyzed a few of these new data sets [4,6] using the same analysis pipeline. The final results of these analyses are lists of statistically significant pathways, molecular functions and GO terms within the shortlist of potentially differential genes. These lists show remarkable agreement across all studies. Comparison of the pathways represented in these lists reveals none unique to any of the 3 data sets, only common and similar ones (Supplemental Figure 4 in Additional File 1). Overall, the Ramaswamy et al. meta-set produces a shorter list of potentially differential genes, and further analysis yields fewer significant pathways. This result is not surprising, taking into account that many small differences reproducibly observed in each single-tissue experiment have been leveled out in composing the meta-set. However, the essential features reflecting the metabolic changes between primary and metastatic tumors are apparent in every analyzed data set. The oxidative phosphorylation pathways with most components down-regulated, cytoskeleton remodeling and cell adhesion-related pathways are always found among the longer lists of significant pathways in the specific colon and breast cancer datasets. Remarkably, the suppressed oxidative phosphorylation pathway is always near the top of the most statistically significant pathways.
Taken together, there are dramatic changes in gene expression between primary and metastatic tumors; some are quantitative whereas others reflect a new pattern of expression. But how consistently are those changes revealed by a loose non-parametric J5 procedure? This selection procedure gives no estimate of the confidence level for individual genes. In turn, estimation of significance for biological pathways is very approximate at best: it does not fully account for interdependencies in gene expression. Pathway maps include genes arbitrarily, and the database of gene interactions is filled manually by multiple experts scanning the peer-reviewed literature, i.e. it is prone to errors and contradictions. These databases and associated tools for pathway analysis have improved significantly in recent years, but quantitative estimation of pathway significance still needs additional validation. In order to select only the most reliably over-represented pathways, we performed a bootstrap analysis, randomly re-sampling 50% of the short-listed genes. A comparative analysis of over-represented pathways in the randomly re-sampled and original shortlists is given in Supplemental Table 6 and Supplemental Figure 5 (Additional File 1). The main pathways are remarkably robust. The diagram of genes (putative biomarkers) is dominated by "similar" genes, i.e. genes belonging to the same pathway map or involved in the same cellular function. There are also some "common" genes (i.e. genes representing the same pathway, which is still statistically significant in the randomly selected half-list) and no "unique" genes (i.e. genes representing unique, but statistically significant pathways). This observation leads to important conclusions: a) microarray experiments may yield extensive variation in the specific differentially expressed genes, but are robust and reproducible in elucidating differentially expressed pathways; b) random re-sampling of the large list of differentially expressed genes provides no proof of true difference for any single gene, but the list in general has few (if any) false-positive genes. The latter statement is controversial, since the common goal of inference in microarray analysis is to reduce the dimensionality of the feature space and select a small number of truly differential genes. After selection of a shortlist using a t-test or one of its variants, the number of differentially expressed genes is further reduced by application of a False Discovery Rate procedure (typically Benjamini-Hochberg) [55,56]. Some authors even claim that microarrays are not optimal for pathway analysis because of the poor reproducibility of the resulting pathways [57]. Our study suggests the opposite: the previously discussed problem of pathway reproducibility is caused by a misconceived methodology, more specifically by the strategy of microarray data analysis. Apparently, applying stricter criteria for selection of differentially expressed genes results in a very small number of candidates that are further reduced by FDR adjustment. The few remaining candidate genes have a much better chance of being successfully reproduced in another microarray experiment and validated by other techniques such as real-time RT-PCR or immunohistochemistry. On the other hand, a shorter list of candidate genes undermines the basis for pathway analysis, rendering overrepresentation statistics powerless. This may explain the poor reproducibility of pathway analysis in some studies [57].
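The bootstrap just described can be sketched as follows; this reuses the hypothetical pathway_enrichment() helper from the earlier sketch and is only a schematic of the procedure, not the exact analysis run against MetaCore.

```python
import random

def pathway_robustness(shortlist, pathways, background_size,
                       n_rounds=100, alpha=0.05):
    """Repeatedly re-sample 50% of the shortlist and count how often each
    pathway stays significant; pathway_enrichment() is the earlier sketch."""
    shortlist = list(shortlist)
    hits = {name: 0 for name in pathways}
    for _ in range(n_rounds):
        half = set(random.sample(shortlist, len(shortlist) // 2))
        for name, p, q in pathway_enrichment(half, pathways, background_size):
            if q < alpha:
                hits[name] += 1
    return {name: n / n_rounds for name, n in hits.items()}
```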
Such a stringent approach to biomarker selection relies entirely on signal intensity and the associated statistics. This approach can be very effective in cases of lethal mutations, congenital disorders and other diseases caused by a single factor or few factors. However, in complex multifactorial diseases, the most highly expressed genes and the most reproducible differences in gene expression often turn out to be non-specific final effectors, downstream of important switches and regulators in biological pathways. Cancer in general and metastasis in particular are examples of such multifactorial diseases. Application of a systems biology approach, considering not just the effect of single mutated/healthy genes but entire networks of interlinked and constantly interacting genes, is required not only for understanding the mechanism of disease, but also for the selection of diagnostic and prognostic markers, as well as potential therapeutic targets. As we have demonstrated, the pathways are sufficiently reproducible and robust to serve this purpose. The prevailing methodology in microarray analysis has an internal contradiction: it calls for a strict selection of candidate genes that can be independently verified one by one, while systems biology calls for analysis of large numbers of genes. Furthermore, the number of replicates affordable for a typical microarray study is usually insufficient for reliable reproduction of expression in low-expressed genes. However, important biological functions specific to disease are often performed by low-expressed genes. Pathway analysis has the power to identify such signal transducers and key transcription factors only if a large enough number of candidate genes is input. To resolve this contradiction, we propose an extension of the current prevailing methodology.
First, the analysis pipeline has to be extended to incorporate functional annotation and pathway analysis. Second, selection of the candidate genes cannot be performed based solely on the intensity of the signal and its change in the experiment. Instead, we propose to consider this step a pre-selection and relax the criteria for "differential" genes. Third, FDR correction should not be applied to the pre-selected "long list" of candidate genes. Combined with a relaxed selection threshold, this will inevitably create an influx of false-positive genes, which can be addressed subsequently. Fourth, the "long list" is analyzed in order to identify statistically overrepresented biological pathways, GO terms and molecular functions (as implemented in DAVID, IPA and MetaCore software) and gene set enrichment (for example, using the GSEA or SAFE methods [17,18]). It is at this stage of analysis that multiple testing adjustments (Bonferroni or, better, FDR) should be applied. Most available software, both free (the DAVID tools [21]) and commercial (such as IPA and MetaCore), has at least one method of false-positive control implemented. However, we still recommend additional techniques, such as the bootstrapping experiment described above, for computational validation of significant pathways. Finally, the discovered statistically significant pathways, gene sets and molecular functions should be used to reverse-engineer the molecular mechanism of disease and to select one or more potential biomarkers and drug targets. In our approach, it is important to combine numeric analysis with biological reasoning and deduction.
The proposed analysis strategy is not yet implemented in a single analysis tool, although all the components have been developed and some of the software packages (such as ArrayTrack [58]) offer partial integration; pathway analysis packages, although independent, can be easily invoked from within the microarray analysis software. In the future, we would like to unite all the tools used for systems biology analysis of biomarkers in a single automated software pipeline.
Systems biology approaches to analysis of existing public data reveal a large number of new features overlooked in the original analyses. Meta-analysis and cross-examination of a few data sets allows identification of prospective markers and drug targets. The present day databases available for systems biology empower the researchers beyond the dreams of only a few years ago. Now for each identified significant pathway, we may also correlate expression data with known conserved transcription factor binding sites, and employ siRNA-mediated gene knockdown and known pharmacologic inhibitors (pharmacoprobes) to interrogate the phenotypic effects of interference with identified pathways. The systems approach described here allows identification of a number of key pathways that may serve as therapeutic targets for controlling the metastatic transition of primary solid tumors.
Titanium Dioxide Nanotube Films for Electrochemical Supercapacitors: Biocompatibility and Operation in an Electrolyte Based on a Physiological Fluid
Growing interest in developing devices that can be implantable or wearable requires the identification of suitable materials for the components of these devices. Electrochemical supercapacitors are not the exception in this trend, and identifying electrode materials that can be not only suitable for the capacitive device but also biocompatible at the same time is important. In addition, it would be advantageous if physiological fluids could be used instead of more conventional (and often corrosive) electrolytes for implantable or wearable supercapacitors. In this study, we assess the biocompatibility of films of anodized TiO2 nanotubes subjected to subsequent annealing in Ar atmosphere and evaluate their capacitive performance in a physiological liquid. A biocompatibility test tracking cell proliferation on TiO2 nanotube electrodes and electrochemical tests in 0.01 M phosphate-buffered saline solution are discussed. It is expected that the study will stimulate further developments in this area.
Electrochemical capacitors (ECs), also called supercapacitors, are promising candidates for electrical energy storage devices with characteristics intermediate between traditional capacitors and batteries. 1,2 ECs are mainly based on two energy storage mechanisms, a non-faradaic process and faradaic processes. The non-faradaic process involves physical adsorption/desorption at the electrolyte/electrode interface. The negative electrode attracts cations of an electrolyte and the positive electrode attracts anions during the charging process to form two electric double layers at the two electrode/electrolyte interfaces; the cations and anions are then released back into the electrolyte when the EC discharges. This type of EC is commonly called an electrochemical double-layer capacitor (EDLC), and its electrodes are typically high surface area carbon materials. [3][4][5][6] The faradaic processes are similar to the mechanisms in batteries, as reversible redox reactions are involved. In supercapacitors, the redox reactions typically occur only on the surface of the electrodes. This type of supercapacitor, based on redox processes in the electrodes, is also commonly called a pseudocapacitor. 7 RuO2 is a classic example of an electrode material for supercapacitors operating via a pseudocapacitive (redox) mechanism. 1,[7][8][9][10] TiO2 is an interesting electrode material for state-of-the-art electrochemical energy storage systems and can be used in both lithium-ion batteries [11][12][13] and electrochemical supercapacitors. 12 Films of TiO2 nanotube arrays have particularly attracted attention for application in supercapacitors because of their high surface area and have been the subject of a number of studies. 14-28 Based on the ideal capacitive response of TiO2 (rectangular cyclic voltammetry curves) observed in some cases, researchers have previously assigned the storage mechanism in TiO2 nanotube electrodes predominantly to conventional electric double layer storage (see, for example, 17,23). Due to the low conductivity of TiO2 and the amorphous nature of the anodized nanotubes, the capacitance of the pristine, as-anodized layer of TiO2 nanotubes is quite low; therefore, a number of approaches have been developed to improve the capacitance and rate capability of TiO2 nanotube arrays via post-synthesis treatment and modification of their electronic conductivity, Ti3+/Ti4+ ratio, hydrogenation and induction of oxygen vacancies. For example, annealing in Ar atmosphere, 14,21-23 thermal treatment in H2 atmosphere, 15,19 annealing in NH3 atmosphere, 16 cathodic biasing of TiO2 in an ethylene glycol electrolyte, 18 second anodization with post-annealing in vacuum, 20 electrochemical hydrogenation or plasma treatment, 24,25 and cathodic polarization treatment 27,28 have been used. Capacitances between 1 and 20 mF/cm2 have been commonly reported after conducting these additional treatments.
New emerging applications of electrochemical supercapacitors include ECs as power sources for implantable and wearable devices (such as, for example, pacemakers, implanted chips, sensors and smart power bodysuits). 29,30 For such applications it is advantageous to use biocompatible electrode materials and other supercapacitor components. It is also beneficial to shift from conventional (often corrosive) electrolytes to electrolytes resembling physiological fluids. 29 It is also important to note that inorganic electrode materials in implantable or wearable supercapacitors have the advantage of possibly higher volumetric energy and power densities, and therefore may be preferred to carbon materials. In this article, biocompatibility and the possibility of operation in a physiological-type electrolyte are investigated for anodized films of titanium dioxide nanotubes post-annealed in Ar atmosphere. The phase composition and morphology of the nanotubes are assessed. Their encouraging electrochemical performance in neutral aqueous electrolytes (such as 3 M KCl) as well as in a phosphate-buffered saline is presented. Phosphate-buffered saline is a common buffer solution used in biological research and is non-toxic for cells (the consequences of leakage of such an electrolyte from a wearable or implantable device would be dramatically milder than for the leakage of conventional electrolytes). It is shown that the nanotube films have a suitable capacitive behavior in that type of electrolyte. The results of the biocompatibility test of the TiO2 nanotube films are also reported.
Experimental
Titanium foils (purity of 99.7%) were used for anodization, which was performed in accordance with the procedure outlined elsewhere. 31 A two-electrode electrochemical system was used at room temperature. The electrolyte was a mixture of glycerol, 0.5 wt% NH4F and 20 vol% H2O. A Ti foil was used as the working electrode in each anodization experiment and a Pt foil was used as the counter electrode. The two electrodes were separated by a distance of 30 mm. A direct current power supply was used to control the voltage. The titanium foils were anodized at 30 V for 2 h to produce films of TiO2 nanotubes on Ti foils. Three nanotube samples were subsequently annealed at 600°C, 650°C and 700°C in Ar atmosphere for one hour. The three types of sample used in the tests are described in Table I.

Table I.

Sample Name  Description
NT-600       Anodized nanotubes annealed at 600°C for one hour in Ar atmosphere
NT-650       Anodized nanotubes annealed at 650°C for one hour in Ar atmosphere
NT-700       Anodized nanotubes annealed at 700°C for one hour in Ar atmosphere

X-ray diffraction (XRD, PANalytical X-pert Pro MRD XL with Cu Kα radiation (λ = 1.5418 Å)) was used to analyze the phase composition of the samples. The scan rate and step angle used for the measurements were 2 s/step and 0.02°, respectively, and the measurements were performed over a range of 2θ from 20° to 50°. The HighScore Plus v.3.0x software (PANalytical B.V., Almelo, The Netherlands) was used to analyze the recorded data. The surface morphology of the titanium dioxide nanotubes was observed by scanning electron microscopy (SEM, Carl Zeiss SUPRA 55VP instrument).
The electrochemical properties were measured in a three-electrode cell including a working electrode (TiO2 nanotubes on Ti foil), a counter electrode (Pt wire) and a reference electrode (Ag/AgCl). The cells were filled with 3 M KCl solution or 0.01 M phosphate-buffered saline (PBS, pH 7.4, Sigma-Aldrich) solution under vacuum. The electrochemical properties were characterized by galvanostatic charge-discharge and cyclic voltammetry (CV) measurements using a Solartron Analytical 1470E instrument. A potential window between −0.1 and 0.6 V vs Ag/AgCl was used in the tests. The CV plots were recorded at scan rates from 1 to 500 mV/s. Galvanostatic charge-discharge curves were recorded at various current densities between 1.5 and 10 μA/cm2 (per square cm of Ti substrate). Electrochemical impedance spectroscopy (EIS) was performed at the open circuit potential (OCP) within a frequency range from 100 kHz to 0.01 Hz using an Ivium-n-stat electrochemical analyzer. The amplitude of the modulation signal was set to 5 mV. Experimental data were fitted with the ZView v.3.1 software (Scribner Associates, Inc., UK).
The biocompatibility of all samples was evaluated using osteoblast-like cells (SaOs-2, sarcoma osteogenic; Barwon Biomedical Research, Geelong Hospital, Victoria, Australia), a human osteosarcoma cell line. 32,33 All samples for cell culture were sterilized in a muffle furnace at 180°C for 1 h. The samples were placed in the wells of a 24-well cell culture plate. SaOs-2 cells were seeded on the surface of the samples, and on the control without any sample, at a density of 5 × 10³ cells per well. An MTS assay was used to measure the in vitro proliferation of the SaOs-2 cells after cell culture for 7 days. The control is considered biocompatible. In all cases, one-way analysis of variance was employed to evaluate significant differences in the data, and differences were considered statistically significant at p < 0.05.
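The significance test described above amounts to a one-way ANOVA across the culture groups at p < 0.05; a minimal sketch is given below, with purely hypothetical MTS absorbance values standing in for the measured data.

```python
from scipy.stats import f_oneway

control      = [0.52, 0.49, 0.55, 0.51]  # hypothetical MTS absorbance, control wells
as_anodized  = [0.50, 0.53, 0.48, 0.52]  # hypothetical, nanotubes before annealing
annealed_650 = [0.61, 0.64, 0.59, 0.62]  # hypothetical, nanotubes annealed at 650 C

f_stat, p_value = f_oneway(control, as_anodized, annealed_650)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}; significant at p < 0.05: {p_value < 0.05}")
```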
Results and Discussion
Characterization of nanotube films.- Figure 1 shows the XRD patterns of TiO2 nanotube films annealed at 600, 650 and 700°C for one hour. As can be seen in Figure 1, the anatase phase formed at the lower annealing temperature transforms to the more stable rutile phase at higher temperatures. At 700°C, the anatase phase almost disappears and the pattern is dominated by the peaks of rutile TiO2.
According to the SEM images in Figures 2a, 2b and 2c, the typical morphologies of the structures in the TiO2 films on Ti can be observed. Arrays of nanotubes can be seen for each sample. The morphology changes after annealing at a higher temperature, and the nanotubes form bundles with increased gaps between the bundles. An enlarged SEM image of the sample NT-700 is shown in the inset of Figure 2c. It can be concluded that the microstructure of the film changes when the annealing temperature is increased.
The electrochemical properties of the three samples of nanotube films (NT-600, NT-650 and NT-700) in 3 M KCl aqueous electrolyte are shown in Figure 3. It can be seen in Figures 3a, 3d and 3g that the cyclic voltammetry (CV) curves of the three samples recorded at slow scan rates (in the range between 1 and 20 mV/s) in a three-electrode system have an ideal rectangular shape, mirror-symmetric with respect to the horizontal axis. The CV curves of NT-600 and NT-650 remain close to ideal and mirror-symmetric at higher scan rates (50 to 500 mV/s), as shown in Figures 3b and 3e. In contrast, the CV curves of the sample NT-700 become elliptical at high scan rates (100 to 500 mV/s), as can be seen in Figure 3h. It appears that the sample NT-700 deviates from the ideal capacitive behavior at high scan rates.
The galvanostatic charge-discharge curves of the three samples recorded at various current densities (1.5, 2, 5 and 10 μA/cm2) are shown in Figures 3c, 3f and 3i. The shapes of all charge-discharge curves for the three samples are almost ideally triangular. The capacitances per unit of the electrode surface of NT-600, NT-650 and NT-700 can be calculated from the formula

C = I × t / V,

where C is the capacitance per unit of the electrode surface (μF/cm2), I is the current density per unit of the electrode surface (μA/cm2), t is the time of discharge (s), and V is the potential window (V). The capacitances of NT-600 and NT-650 can be up to 1700 μF/cm2 and 1360 μF/cm2, respectively, at a current density of 1.5 μA/cm2.
The sample NT-700 shows a sharply lower capacitance of only 620 μF/cm 2 at the same low current density.
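As a worked example, the formula above translates directly into code. The 0.7 V window corresponds to the −0.1 to 0.6 V range used in the tests; the ~793 s discharge time below is inferred from the reported values rather than quoted in the text.

```python
def areal_capacitance(current_density_uA_cm2, discharge_time_s, potential_window_V):
    """C = I * t / V, returning capacitance per unit electrode area in uF/cm^2."""
    return current_density_uA_cm2 * discharge_time_s / potential_window_V

# NT-600 example: 1.5 uA/cm^2 over the 0.7 V window (-0.1 to 0.6 V vs Ag/AgCl);
# a ~793 s discharge (inferred, not reported) reproduces the quoted ~1700 uF/cm^2.
print(areal_capacitance(1.5, 793, 0.7))   # ~1699 uF/cm^2
```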
The electronic conductivities of the three samples were evaluated by electrochemical impedance spectroscopy, and the Nyquist plots are shown in Figure 3j. The semicircle region of the plots for NT-600 and NT-650 is enlarged in Figure 3k. The diameter of the semicircle corresponds to the so-called charge-transfer resistance at the electrode-electrolyte interface, which is often correlated with the electronic conductivity of an electrode. The semicircle diameters for the samples NT-600 and NT-650 are quite small, around 20-25 Ω for both. The semicircle diameter of NT-700 is much larger and can be estimated to be about 2500 Ω. This is likely to be correlated with the structural damage in the nanotube films at the higher annealing temperature (700°C), resulting in the sharply lower electronic conductivity of the sample. This is consistent with the much lower value of capacitance calculated from the CV and charge-discharge curves of the sample NT-700. We conclude that the sample NT-700 has an overall inferior electrochemical performance to the other two samples, and we focus on the performance of NT-600 and NT-650 in the remainder of the study.
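A simplified Randles-type equivalent circuit is one common way to read such Nyquist semicircles, with the semicircle diameter equal to the charge-transfer resistance. The sketch below uses the resistances estimated above (about 25 Ω versus about 2500 Ω); the series resistance and double-layer capacitance values are placeholder assumptions, since they are not reported here.

```python
import numpy as np

def randles_impedance(freq_hz, R_s, R_ct, C_dl):
    """Impedance of a simplified Randles circuit: R_s in series with (R_ct || C_dl).
    On a Nyquist plot (Z.real vs -Z.imag) the semicircle diameter equals R_ct."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return R_s + R_ct / (1 + 1j * w * R_ct * C_dl)

freqs = np.logspace(5, -2, 200)   # 100 kHz down to 0.01 Hz, as in the EIS runs
Z_nt650 = randles_impedance(freqs, R_s=10.0, R_ct=25.0, C_dl=1e-3)     # NT-650-like
Z_nt700 = randles_impedance(freqs, R_s=10.0, R_ct=2500.0, C_dl=1e-3)   # NT-700-like
```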
The plots of capacitance as a function of current density for the three samples are shown in Figure 3l. The capacitance of NT-600 drops to about 1015 μF/cm2 at a high current density of 10 μA/cm2. A 30% drop in capacitance is observed for NT-650 at the same high current density. Surprisingly, the capacitance retention of NT-700 was probably the best, despite the somewhat compromised structural integrity of this sample. Its measured capacitance was, however, significantly lower than that of NT-600 and NT-650.
The measured electrochemical capacitances are generally in line with the values observed in the literature (1-20 mF/cm 2 ). [14][15][16][17][18][19][20][21][22][23][24][25][26][27][28] Our values are close to the bottom of the range. The higher values of capacitance can be obtained when the samples are subjected to additional treatments such as annealing in Ar atmosphere, 14,21-23 thermal treatment in H 2 atmosphere, 15,19 annealing in NH 3 atmosphere, 16 cathodic biasing of TiO 2 in an ethylene glycol electrolyte, 18 second anodization with post-annealing in vacuum, 20 electrochemical hydrogenation or plasma-treatment, 24,25 and cathodic polarization treatment. 27,28 Optimization of the electrochemical performance is, however, beyond the scope of this paper. Instead, we focus in the rest of the manuscript on biocompatibility of the TiO 2 nanotube films and the possibility of their operation in electrolytes based on physiological liquids.
A biocompatibility assessment was carried out using an in-vitro cell culture for samples of TiO2 nanotubes before and after heat-treatment in Ar at 650°C, while an empty well with cells was used as the control group. The cell numbers after cell culturing for 7 days are shown in Figure 4. It can be seen that the cell numbers on the sample before the heat-treatment were similar to those of the control. The cell numbers on the sample after the heat-treatment were higher than those of the control. This indicates that TiO2 nanotubes before and after annealing are biocompatible and that the heat-treatment increases the biocompatibility of the sample. Figure 5 shows the electrochemical performance of the NT-600 and NT-650 electrodes in 0.01 M PBS solution. The nanotube electrodes are characterized by CV and galvanostatic charge-discharge. The CV curves of NT-600 and NT-650 are close to the ideal, rectangular shape at low scan rates (Figures 5a, 5d). An elliptical shape appears in the CV curves at very high scan rates (500 mV/s), indicating that the limitations of transport phenomena are more pronounced in the three-electrode cell with 0.01 M PBS electrolyte than in the cell with the conventional 3 M KCl aqueous electrolyte. It can be concluded from the CV measurements that anodized titanium dioxide nanotubes after annealing are capable of displaying capacitive properties in physiological electrolytes.
Charge-discharge curves confirm these findings, showing approximately triangular curves (Figures 5c, 5f). The measured capacitance of NT-600 is 1040 μF/cm2 at a low current density (1.5 μA/cm2), and over 60% of the capacitance is preserved at a high current density (10 μA/cm2). Interestingly, the capacitance of NT-650 can be up to 2817 μF/cm2, which is much higher than that of NT-600. We can assume that the mixture of rutile and anatase phases is capable of providing a higher capacitance than the single anatase phase in the PBS solution. However, this type of material is not stable during long cycling, as shown in Figure 5h. Indeed, the NT-600 sample has noticeably better stability after 1000 cycles, although its capacitance is limited to only 640 μF/cm2 at a current density of 10 μA/cm2. In summary, it has been confirmed that the TiO2 nanotubes demonstrate biocompatible behavior while, at the same time, being capable of operating in a physiological liquid. We expect that these results will promote further research in this area.
Conclusions
The films of titanium dioxide nanotubes were produced by anodization and were annealed at various temperatures (600, 650 and 700°C). The XRD results indicate that the produced films of TiO2 nanotubes contain anatase and rutile phases. The contribution of the anatase phase decreases with the increase of the annealing temperature, and the rutile phase is dominant in the material annealed at 700°C. The SEM analysis confirms the morphology of the nanotubes and indicates changes at higher annealing temperatures, where the structural integrity of the nanotubes declines and the nanotubes tend to aggregate into bundles. In line with previous reports, the nanotube films possess capacitive properties, but the capacitance drops noticeably at the higher annealing temperature (700°C).

To conduct a preliminary evaluation of TiO2 nanotube electrodes for possible applications in implantable and wearable supercapacitors, a biocompatibility test and electrochemical characterization in a physiological liquid (0.01 M PBS solution) were conducted. It is shown that the nanotube films show biocompatible behavior, with the number of cells in contact with the material increasing over time. Nearly ideal capacitive behavior of TiO2 nanotube films can be demonstrated in a physiological liquid. A specific capacitance of 1040 μF/cm2 is recorded for the nanotube film annealed at 600°C, while the sample is also capable of demonstrating a stable cyclic behavior.
Phase II Study on Safety and Efficacy of Yadanzi® (Javanica oil emulsion injection) Combined with Chemotherapy for Patients with Gastric Cancer
Gastric cancer is one of the most common malignant tumors in China. It continues to be a fatal disease, with the majority of cases presenting with advanced disease (Blum et al., 2013). Chemotherapy is a main treatment option for patients with advanced gastric cancer. However, chemotherapy is reported to be associated with a series of adverse reactions, eg, bone marrow suppression, gastrointestinal toxicity, immunosuppression, etc (Yamamoto and Iwase, 2012). Traditional Chinese Medicine (TCM) is widely used in treating patients with gastric cancer in China (Liu et al., 2011). On this background, it is possible to develop a chemotherapy regimen containing TCM to enhance the response rate, reduce toxicity and improve quality of life for patients in this setting (Man et al., 2012). Yadanzi® (Javanica oil emulsion) is a product with Brucea javanica petroleum ether extracts as raw materials and purified soybean lecithin as emulsifier. The main active ingredient is not well clarified (Liu et al., 2012). Javanica oil emulsion is reported to be active in treating patients with lung cancer, and with brain metastases from lung and gastrointestinal cancer (Wang et al., 2012). Possible mechanisms include inhibition of topoisomerase II, blockage of the cell cycle in S phase and direct damage to the structure of the plasma membrane.
Introduction
Gastric cancer is one of the most common malignant tumors in China. It continues to be a fatal disease, with the majority of cases presenting with advanced disease (Blum et al., 2013). Chemotherapy is a main treatment option for patients with advanced gastric cancer. However, chemotherapy is reported to be associated with a series of adverse reactions, eg, bone marrow suppression, gastrointestinal toxicity, immunosuppression, etc (Yamamoto and Iwase, 2012). Traditional Chinese Medicine (TCM) is widely used in treating patients with gastric cancer in China (Liu et al., 2011). On this background, it is possible to develop a chemotherapy regimen containing TCM to enhance the response rate, reduce toxicity and improve quality of life for patients in this setting (Man et al., 2012).
Yadanzi® (Javanica oil emulsion) is a product with Brucea javanica petroleum ether extracts as raw materials and purified soybean lecithin as emulsifier. The main active ingredient is not well clarified (Liu et al., 2012). Javanica oil emulsion is reported to be active in treating patients with lung cancer, and with brain metastases from lung and gastrointestinal cancer (Wang et al., 2012). Possible mechanisms include inhibition of topoisomerase II, blockage of the cell cycle in S phase and direct damage to the structure of the plasma membrane. Our hypothesis is that the addition of Yadanzi® to chemotherapy could increase the response rate of chemotherapy and the quality of life of patients with advanced gastric cancer.
Patients
Patients were required: to be pathologically/cytologically diagnosed with gastric cancer in Jiangsu Cancer Hospital & Research Institute from January 2011 to December 2012; to sign an informed consent before treatment; to be exposed to long-term chemotherapy or supportive care; to have a Karnofsky performance status (KPS) score ≥ 70; and to be 25 to 75 years of age. Other eligibility criteria included adequate hematological (white blood cell count > 3.0×10⁹ and platelet count > 150×10⁹), liver (bilirubin and transaminases < 1.5 times the upper normal limit) and renal function (creatinine level < 1.5 times the upper normal limit). Patients were excluded from this study if they failed to complete two cycles of chemotherapy, or had any serious medical or psychiatric condition, or other malignancies. Pregnant or lactating women were also excluded from this study.
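For illustration, the eligibility screen above can be encoded as a single check. This is a hypothetical sketch with invented field names; the thresholds are those listed in the criteria.

```python
def is_eligible(p):
    """p: dict of patient data; field names are hypothetical, thresholds are
    those stated in the eligibility criteria."""
    return (25 <= p["age"] <= 75
            and p["kps"] >= 70
            and p["wbc_10e9"] > 3.0
            and p["platelets_10e9"] > 150
            and p["bilirubin_x_uln"] < 1.5
            and p["transaminases_x_uln"] < 1.5
            and p["creatinine_x_uln"] < 1.5
            and not p["pregnant_or_lactating"])
```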
Methods
Chemotherapy agents included oxaliplatin, irinotecan, paclitaxel, docetaxel, fluorouracil, tegafur, etc. The combinations were as follows: the DOC regimen, which consisted of docetaxel, oxaliplatin and capecitabine; Xeloda-containing regimens, which consisted of Xeloda and oxaliplatin; a tegafur-based regimen, which consisted of tegafur, taxol and leucovorin; and other regimens. Detailed information on these chemotherapy regimens was reported elsewhere (Li et al., 2011; Xu et al., 2011). Chemotherapy for all patients was combined with Javanica oil emulsion injection (Yadanzi®, made by ZheJiang Jiuxu Pharmaceutical Co., Ltd.). Yadanzi® 30 ml, dissolved in 250 ml of normal saline, was intravenously infused during chemotherapy, once daily, for 2 cycles. Routine blood tests, blood biochemistry and tumor markers were reviewed before, during and after chemotherapy. A CT scan was reviewed after two cycles of treatment to evaluate efficacy.
Efficacy evaluation
Before chemotherapy, all patients received a physical examination, routine blood tests and blood biochemical examination. Treatment efficacy was evaluated according to the RECIST criteria (Response Evaluation Criteria In Solid Tumors) (Sohaib, 2012) after more than two cycles of chemotherapy. Complete response (CR), partial response (PR), stable disease (SD) and progressive disease (PD) were defined separately. Quality of life was rated as improved if the KPS score increased by 10 after treatment, as decreased if the score fell by 10, and otherwise as stable. We have ample experience in conducting medical research and have published results elsewhere (Huang et al., 2004; Zhou et al., 2009; Jiang et al., 2010; Yan et al., 2010; Gao et al., 2011; Huang et al., 2011; Li et al., 2011; Li et al., 2011; Li et al., 2011; Xu et al., 2011; Xu et al., 2011; Xu et al., 2011; Yan et al., 2011; Zhang et al., 2011; Gong et al., 2012; Li et al., 2012; Yu et al., 2012).
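The quality-of-life rule stated above can be encoded literally, as in the sketch below.

```python
def qol_change(kps_before, kps_after):
    """Improved if KPS rose by 10 after treatment, decreased if it fell by 10,
    otherwise stable (the rule stated in the text)."""
    if kps_after - kps_before >= 10:
        return "improved"
    if kps_before - kps_after >= 10:
        return "decreased"
    return "stable"
```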
Toxicity Assessment and Safety:
All patients were assessed and graded for toxicities according to WHO criteria (De Angelis, 2004). During Yadanzi® (Javanica oil emulsion) injection and chemotherapy, all adverse reactions were documented.
Results
From January 2011 to December 2012, we recruited 75 patients with gastric cancer who satisfied all study criteria. General characteristics of the patients are listed in Table 1. Among them, 47 were male and 28 female; the average age was 61 years. All patients had complete medical records, including results of CT scans, endoscopy or pathology biopsy. Of all 75 patients, 39 were diagnosed with tubular adenocarcinoma, 22 with poorly differentiated adenocarcinoma, 4 with mucinous adenocarcinoma, 9 with signet ring cell carcinoma and 1 with squamous cell carcinoma. After 2 cycles of Yadanzi® and chemotherapy, bone marrow suppression was recorded for 44 patients: grade I in 24, grade II in 8, grade III in 6 and grade IV in 6 patients. Forty patients reported poor appetite and nausea; 5 cases of watery diarrhea, 4 of coprostasis and 8 of vomiting were recorded. One patient reported a red skin rash with fever, and 17 cases of hepatic function enzymes above normal were observed. One patient was diagnosed with an allergy. The main side effects are listed in Table 3.
Discussion
The principle of advanced cancer treatment is to prolong the survival time of patients and improve their quality of life. Combination chemotherapy can significantly prolong the survival of patients with advanced gastric cancer and improve their quality of life. However, in the treatment of advanced gastric cancer, chemotherapy produces toxicities such as gastrointestinal reactions and bone marrow depression that seriously affect the quality of life of patients, and many patients have to discontinue therapy due to inability to tolerate chemotherapy reactions. Javanica oil emulsion is a product that uses Brucea javanica petroleum ether extracts as raw materials and purified soybean lecithin as emulsifier (Liu et al., 2012). The main active ingredients are oleic acid and linoleic acid, and the antitumor activity of both has already been confirmed. Javanica oil emulsion is mainly used clinically in lung cancer, brain metastases from lung cancer and gastrointestinal cancer, and has achieved satisfactory results. Its possible mechanisms of action include specific inhibition of topoisomerase II activity, arrest of the cell cycle in S phase and direct damage to the structure of the plasma membrane. It can also kill tumor cells by regulating the body's immunity. Javanica oil emulsion can protect bone marrow hematopoietic stem cells and promote their proliferation. It can also enhance immunity, improve physical fitness and reduce chemotherapy side effects. The patients' quality of life is thereby improved; subjectively, it enhances their confidence in overcoming the disease, and objectively it creates the conditions for better treatment.

Our results show the following: after 2 cycles, the short-term effect was not obvious, probably because the combination therapy cycle was short (2 cycles), older patients respond more slowly to treatment (median age ≥ 60 years) and traditional Chinese medicine acts slowly. The rate of significant improvement in quality of life was 12%. In conclusion, in this study the effective rate of Yadanzi® combined with chemotherapy in treating patients with advanced gastric cancer was 85.3%, and the KPS score improvement rate was 12%. Thus, Javanica oil emulsion injection combined with chemotherapy could reduce the side effects caused by chemotherapy and improve the quality of life of patients. However, our results deserve to be further investigated in randomized controlled clinical trials.
What moderator characteristics are associated with better prognosis for depression?
A retrospective data analysis was conducted to evaluate the usefulness of baseline characteristics in predicting treatment response to antidepressant medication in 97 outpatients with nonpsychotic major depression treated for up to sixteen weeks with nefazodone. Baseline demographics (gender), illness features (symptom severity, length of illness, length of current episode, number of episodes, age of onset, longitudinal subtype, endogenicity, melancholia, family history of mood disorders), and social features (living status) were evaluated. Response to treatment was defined as a ≥50% reduction in the 17-item Hamilton Rating Scale for Depression (HRSD17) score. The results of a survival analysis indicated that patients with shorter histories of illness (< 4 years), a negative family history of depression, and those who were either married or were living with someone were more likely to have a positive outcome during the acute phase treatment of depression. The main findings are consistent with extensive previous literature indicating a better short-term outcome of depression where illness is shorter, where there is no family history, and where there is better social support.
Introduction
Studies attempting to identify moderators and mediators of treatment (Baron and Kenny 1986; Kraemer et al 2002), such as baseline/pretreatment predictors of response to antidepressants, have not yielded consistent findings (Bielski and Friedel 1976; Greenhouse et al 1987; Katz et al 1987; Croughan et al 1988; Hooley and Teasdale 1989; Joyce and Paykel 1989; Kocsis et al 1989, 1990; Brugha et al 1990; Danish University Antidepressant Group 1990; Vallejo et al 1991; Keller et al 1992; Goodwin 1993; Hoencamp et al 1994; Friedman et al 1995; Baldwin et al 1996; Cohn et al 1996; Aliapoulous and Zisook 1996; Nierenberg 2003). The identified clinical predictors have typically fallen into the following categories: (1) demographic characteristics (gender); (2) illness features (severity at baseline, length of illness, age of onset, length of current episode, number of episodes, longitudinal subtype, endogenicity, melancholia, family history of mood disorders); and (3) social factors (living status, social support).
Numerous illness characteristics have been found to be associated with a positive response to antidepressant medications, including severity of depressive symptoms, depressive subtypes, age at onset of illness, and past history of depressive episodes (AHCPR 1993). Several studies have shown that specific features of psychiatric history and of the current episode of depression influence treatment outcomes. Reimherr et al (1990) found that patients with only a single episode of depression responded slightly better to SSRI treatment than those with recurrent episodes (sertraline: 62% vs 52%; amitriptyline: 66% vs 62%). Additionally, patients with melancholic features responded slightly better than those without such features (sertraline: 56% vs 50%; amitriptyline: 63% vs 59%). Based on a sample of patients with recurrent depression, Greenhouse et al (1987) also found age, symptom severity, and number of previous episodes associated with time to sustained treatment response. The most extensively studied area of treatment predictors includes history of illness characteristics, such as number of previous episodes, length of illness, and the presence of either single episode or recurrent longitudinal subtypes. However, findings are not consistent across all studies. Hoencamp et al (1994) demonstrated that the baseline Hamilton Rating Scale for Depression (HRSD) score was of greater significance in predicting recovery than other variables, such as premorbid history, symptomatology, and endogenous features (Danish University Antidepressant Group 1986). Greenhouse et al (1987) found that patients with high HRSD scores tended to take longer to stabilize than patients with lower scores. However, Pande and Sayler (1993) were unable to replicate this finding. Conversely, patients with less severe depressive illness have been shown to be more likely to respond to amitriptyline or imipramine than those with severe depressive illness (Croughan et al 1988). Family history has also been found to be a prognostic indicator of outcomes. When comparing patients who experience full interepisodic recovery with partial and non-interepisodic recovery patients, Akiskal (1982) found that patients with a family history (first degree biological relative) of affective disorders were more likely to have an incomplete recovery when treated with tricyclic antidepressants.
In addition to illness features that predict treatment response, researchers have attempted to examine the effects of social or situational factors on treatment outcome. Direct measures of social support and functioning, i.e., the Social Adjustment Scale and the Social and Occupational Functioning Assessment Scale (Weissman and Bothwell 1976; APA 1994), and indirect measures such as living status have been found to be positively related to response. Vallejo and colleagues (1991) directly tested the effects of social support on response after 6 weeks of medication treatment. They found that patients treated with imipramine who had higher levels of social support showed a greater percentage decrease in HRSD scores than patients with lower levels of social support. Tomaszewska and colleagues (1996) obtained similar results in a population of patients with nonmelancholic depression, finding higher levels of social support in those patients who responded favorably to treatment.
The vast majority of research evaluating the relationship between social support and major depressive disorder (MDD) has largely focused on the role played by social support in improving clinical outcomes in general. Research has not focused as much on patients being treated specifically with antidepressant medications or psychotherapy. The relationship between social support and psychological wellbeing has long been documented (Myers et al 1975;Dean and Lin 1977;Andrews et al 1978;Cohen and Wills 1985). Studies have suggested that social support plays a significant role in mediating the development of depressive symptoms by buffering the effects of negative life events (Aneshensel and Stone 1982;Brown 1988;George et al 1989;Zlotnick et al 1996). Brugha et al (1997) found that patients with a longer course of illness, as manifested by multiple recurrent episodes, reported significantly lower levels of social support than patients with a shorter course of illness. Ezquiaga et al (1998) found that the psychological support provided by a spouse was significant, with high levels of support being associated with positive treatment outcomes. In a sample of 552 women aged 18-65, Costello (1982) found that lack of intimacy with a spouse or cohabitant and lack of a confidant were associated with clinically significant levels of depression. Hooley and Teasdale (1989) found, in a 9-month posthospitalization follow-up of patients with unipolar depression, that patients with higher levels of marital discord had significantly higher relapse rates. Coyne and colleagues (2002) found higher levels of marital distress in depressed females than in community controls. Interestingly, there appear to be gender differences in the reporting of marital distress, with depressed females reporting higher levels of marital distress than their partners (Ensel 1982;Crowther 1985). In their 1992 study, Goering et al found that in a population of women with MDD, few demographic or clinical factors were related to the course of symptoms over the 6-month study period. Recovery was predicted based on the patients' ratings of their current marital relationship and by the spousal rating of their premorbid relationship.
The purpose of this study was to identify baseline demographic, illness, and social features that predict response to an antidepressant medication, nefazodone, in patients with MDD. The demographic, illness, and social feature predictors evaluated were: gender, age of onset, severity at baseline, length of illness, length of current episode, number of episodes, longitudinal subtype, endogenicity, melancholia, family history of mood disorders, and living status.
Methods

Subjects
Participants in this study were 97 outpatients with MDD: 59 females and 38 males, aged 38.5 ± 10.0 years (mean ± SD). They were treated for up to sixteen weeks with nefazodone in the Department of Psychiatry, The University of Texas Southwestern Medical Center, Dallas, USA. This sample has been reported as part of a larger multicenter trial (Trivedi et al 2001). Prior to interview, all patients provided written informed consent. All participants met the Structured Clinical Interview for DSM-III-R (SCID) (Spitzer et al 1992) criteria for MDD and were 18 years of age or older. The duration of the current major depressive episode was ≥ 6 months. Patients with bipolar disorder, seasonal affective disorder, or substance abuse or dependence disorders, as well as those with delusions or hallucinations during the current episode, were excluded. Patients judged to be at serious risk of suicide and those with a concurrent diagnosis of organic mental syndrome, schizophrenia, or any other psychotic disorder were not included in the sample. All patients had a 17-item Hamilton Rating Scale for Depression (HRSD17) (Hamilton 1960, 1967) score ≥ 20 at the baseline evaluation. See Table 1.
Procedures
Patients were evaluated at baseline using the SCID to obtain demographic (age and gender), social (living status), and illness features (length of episode, depressive subtype, age at onset, recurrence, length of illness, number of episodes, family history of depression). Melancholia was characterized based on the SCID (Spitzer et al 1992), and endogenicity was defined using the Research Diagnostic Criteria (Spitzer et al 1977). Symptom severity was assessed at baseline using the HRSD17 and the 30-item Inventory of Depressive Symptomatology - Clinician Rated (IDS-C30) (Rush et al 1986, 1996), and again with the HRSD17 at weeks 1, 2, 3, 4, 6, 8, 10, 12, and 16. Nefazodone was administered twice daily for 12 weeks. Dosages were titrated up to 400 mg/day by the end of the second week. Patients not responding to the initial titration schedule were titrated to 500 or 600 mg/day after week 3. Patients were restricted from concomitant use of other drugs except lorazepam, temazepam, or oxazepam.
Statistical analysis
Cox's proportional hazard models (Cox 1972) were used to test for differences in treatment response for a series of demographic, social support, and illness features. The demographic characteristic and social support predictors evaluated were gender and living status, respectively. Illness features included length and severity of the current episode, length of illness, number of episodes, age at onset, longitudinal subtype, endogenicity (Spitzer et al 1977), melancholia (Spitzer et al 1992), and family history of mood disorders. For these analyses, response to treatment was defined as a ≥ 50% reduction in the baseline HRSD total score.

Results

Demographic and social features

The analysis of living status indicated a better response to treatment for the married patients (married or cohabiting) than for those living alone (single, engaged, divorced, widowed): 84% and 51%, respectively (χ2 = 6.9, df = 1, p < 0.009). While the response rates were similar in the first few weeks of treatment, after week 4 the married patients showed clearly higher rates of response than the unmarried patients (see Figure 1). This result is qualified by differences in the dropout rates (single, 34%; married, 14%). There was no statistically significant effect for gender. See Table 2.

Illness features

Four predictors were used to test for differences in response based on characteristics of the current depressive episode: severity (HRSD score 20-23 vs HRSD score ≥ 24); length of the current episode (0-48 months vs ≥ 49 months; this median split provided sample sizes adequate for a reasonable comparison between the two groups); endogenous vs nonendogenous; and melancholic vs nonmelancholic. Patients whose current episode had lasted less than 4 years were more likely to respond (70%) than those with longer current episodes (62%), an effect that was marginally significant (χ2 = 3.2, df = 1, p < 0.08). Dropout rates were about the same for these two groups (28% and 26%, respectively). Since chronic depression has been defined as episodes lasting 2 years or more, a secondary analysis was conducted using a 24-month threshold to distinguish short vs longer current episodes; no statistically significant differences were found. Baseline severity, endogenicity, and the presence of melancholia were not significantly related to treatment response.

Four measures were used to define strata based on history of illness (single vs recurrent; age at onset: 1-19 vs 20 or older; length of illness: 0-9 years vs ≥ 10 years; and number of episodes: 1 vs 2 vs 3 or more). Length of illness was significantly related to outcome (χ2 = 5.2, df = 1, p < 0.03). Patients with shorter illnesses were more likely to respond than those with longer illnesses (see Figure 2). Single versus recurrent depression, age of onset, and number of episodes were not significantly related to treatment response.

Strata were also created based on a positive versus negative family history of major mood disorders in first-degree relatives. There was a significant difference based on family history (χ2 = 4.6, df = 1, p < 0.04). Patients with a negative family history were more likely to respond (77%) than those with a positive family history (62%) (see Figure 2). Dropout rates before completing 16 weeks of treatment were similar (21% and 30%, respectively).

Analyses of combined predictors

To explore the relationships between the individual baseline features that predicted response, a survival analysis was conducted that included all three individual predictors: living status, length of illness, and family history. Once living status entered the model, no other predictors significantly improved it. Thus, there was substantial overlap between these individual predictors, as might be expected in this relatively small sample. Parallel analyses conducted using an HRSD score ≤ 10 response criterion produced the same results for living status (χ2 = 7.7, df = 1, p < 0.006) and length of illness (χ2 = 4.3, df = 1, p < 0.04). A significant difference, as opposed to a marginal difference, was found for length of episode (χ2 = 4.3, df = 1, p < 0.04). Only one major difference emerged for the single vs recurrent depression groups: a significant difference was found using this response criterion (χ2 = 4.6, df = 1, p < 0.04), with patients with a recurrent history responding better than those in their first episode.
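For readers who wish to reproduce this style of analysis, the following is a minimal sketch (not the authors' code; the data frame and its values are hypothetical) of a log-rank comparison of time-to-response curves and a Cox proportional hazards fit using the Python lifelines library:

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical data frame: one row per patient.
# weeks     = time to response or censoring (max 16)
# responded = 1 if >= 50% reduction in HRSD-17, else 0 (censored)
# married   = 1 if married/cohabiting, 0 if living alone
df = pd.DataFrame({
    "weeks":     [4, 8, 16, 6, 16, 10, 3, 12],
    "responded": [1, 1, 0, 1, 0, 1, 1, 0],
    "married":   [1, 1, 0, 1, 0, 0, 1, 0],
})

# Stratified comparison of time-to-response curves (log-rank test),
# analogous to the chi-square statistics reported above.
m, s = df[df.married == 1], df[df.married == 0]
lr = logrank_test(m.weeks, s.weeks,
                  event_observed_A=m.responded,
                  event_observed_B=s.responded)
print(f"chi2 = {lr.test_statistic:.2f}, p = {lr.p_value:.3f}")

# Cox proportional hazards model with the predictor as a covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="weeks", event_col="responded")
cph.print_summary()
```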
Discussion
The results of this study suggest that three easily discernible patient characteristics can help predict a patient's response to antidepressant medication, in this case nefazodone. The characteristics found to be most useful were living status, total length of illness, and family history of depression. The findings associated with illness features are consistent with previous studies reporting that longer current episodes are associated with a lower likelihood of response to antidepressants (Bielski and Friedel 1976; Rush et al 1983; Keller et al 1984, 1992). Although more elaborate methods of outcome prediction, such as receptor analysis or neuroimaging (Esposito and Goodnick 2003), may be useful, they have not yet led to consistent findings and are very likely not practical in routine clinical practice. The results of this study suggest that simple, easy-to-obtain data, such as marital status, show great promise in the prediction of response to antidepressant treatment. Interestingly, in our sample, being married or living together appeared to have a profound positive effect on the overall response rates, with married or cohabiting patients meeting criteria for treatment response with greater frequency than their single cohorts. Living status also had an impact on the patients' willingness to stay in treatment. The married/cohabiting group remained in treatment longer even in the absence of response, a finding consistent with Hagerty and Williams (1999), who found that patients living alone were more likely to drop out of treatment. Numerous drug utilization studies have shown that over 45%-50% of patients who start an antidepressant are not in treatment by 3 months, suggesting a strong need to preemptively identify predictors not only of response but also of dropout. While not all studies have found social support to be a significant predictor of treatment outcome (Hirshfeld et al 1986; George et al 1989), the majority of studies have suggested social support, and more specifically marital status, as a positive predictor of response. The quality of the marital relationship also provides a significant aid in predicting treatment response. Moreover, recent data from a number of studies (Hunkeler et al 2000; Unutzer et al 2002; Trivedi et al 2004) suggest there is a clear benefit to a disease management approach in the treatment of depression. These studies have emphasized more frequent patient contact as well as more robust psychosocial and educational support for patients to enhance adherence, improve patients' ability to self-monitor their symptoms, and increase patients' understanding of the chronic medical illness nature of their depressive disorders. These data thus provide another dimension of evidence that optimal outcome can be enhanced if pharmacotherapy is augmented with social support.
The current study is limited by several factors, including a moderate sample size, the use of only one antidepressant medication, and the lack of a placebo control group. It is also possible that factors indicating a chronic and/or severe disorder, like length of illness, may predict poorer outcome irrespective of the treatment(s) used. Moreover, it is also likely that social support, life stressors, and family history of psychiatric comorbidity are indicators of outcome for MDD across various treatment modalities and may also affect the chances of spontaneous response. Therefore, in the absence of a placebo or an active control, the demographic and clinical features identified may be thought of as good prognostic predictors of outcome independent of treatment modality. Despite these limitations, the current study does provide additional support for the hypothesis that easy-to-identify patient factors may be able to significantly improve the quality of patient care by increasing physician efficiency when prescribing antidepressant medications. These clinical predictors, although not prescriptive, can assist in treatment planning and aid with patient education regarding potential outcomes. Of the three clinical predictors identified (living status, total length of illness, and family history of depression), living status appears to be the most promising moderator of treatment outcome. Future studies of clinical predictors of treatment response for emerging antidepressant medications should include social features, such as living status and an assessment of the quality of relationships, in addition to the commonly used demographic and illness features in predictor analyses.
Multi-Channel Optoelectronic Measurement System for Soil Nutrients Analysis
To solve the problems that occur when farmers overuse chemical fertilizers, it is necessary to develop rapid and efficient portable measurement systems for the detection and quantification of nitrogen (N), phosphorus (P), and potassium (K) in soil. Challenges arise from the use of currently available portable instruments, which have only a few channels, namely the measurement and reference channels. We report on a home-built, multichannel, optoelectronic measurement system with automatically switching light sources for the detection of N, P, and K content in soil samples. This optoelectronic measurement system consists of joint LED light sources with peak emission wavelengths of 405 nm, 660 nm, and 515 nm, a photodiode array, a circuit board with a microcontroller unit (MCU), and a liquid-crystal display (LCD) touch screen. A straightforward principle for rapid detection of the extractable nutrients (N, P, K) was established, and the designed measurement system was characterized. Using this multi-channel measurement system, available nutrients extracted from six soil samples could be measured simultaneously. The absorbance compensation, concentration calibration, and nutrient measurements were performed automatically to achieve high consistency across the six channels. The experimental results showed that cumulative relative standard deviations of 1.22%, 1.27%, and 1.00% were obtained from the six channels with known concentrations of standard solutions at the three wavelengths, respectively. The coefficients of determination for the detection of extracted N, P, and K content in soil samples using both the proposed method and the conventional lab-based methods were 0.9010, 0.9471, and 0.8923, respectively. These results show that this optoelectronic measurement system can measure the N, P, K contents of six soil samples simultaneously and may serve as a practical tool for determining nutrients in soil samples with improved detection efficiency.
Introduction
There are more than 13 essential nutrients supplied by soil for crop growth. However, not all soils are sufficiently fertile to provide balanced nourishment [1]. For this reason, appropriate fertilizers should be applied when specific nutrients are deficient, since deficiencies can result in reduced crop yield. It is known that fertilizer application is one of the most effective ways to increase crop production [2][3][4]. The average amount of urea applied in China is reported to be 180 kg/ha to obtain a high yield in the majority of crops [5]. Fertilizer applications exceeding crop requirements can pollute groundwater and stream water [6]. Farmers, however, often overuse chemical fertilizers since information on insufficient or deficient nutrients, and on the concentrations that need to be added to the soil, is lacking [7]. Experimental statistics indicate that farmers can cut down on the amount of chemical fertilizer and save money if they choose chemical fertilizers based on scientific information gathered from their farm's nutritive needs [8]. Currently, the challenge that farmers face when deciding on the type and amount of chemical fertilizers is knowing exactly which soil nutrients are deficient in their farms' soils [9,10]. Therefore, without a doubt, rapid measurement and analysis of soil nutrients can lead to efficient use of fertilizers and resultant improved crop yield [11]. Efficient use of chemical fertilizers also prevents environmental pollution. As analytical technology and methods have progressed, the ability and speed of analysis of soil nutrients have increased, and more options have become available, including nondestructive measurement variants (remote sensing and soil sensor technology) [12][13][14]. The Nash colorimetric method, the Olsen sodium bicarbonate method, and ammonium acetate extraction-flame photometry have matured over the years to facilitate detection of nitrogen (N), phosphorus (P), and potassium (K) in soil [15]. These three methods, however, require complicated pretreatment processes and do not allow for rapid, simultaneous, multi-sample detection of soil nutrients. Since the early 1990s, there have been many reports on the use of sensor technology to rapidly measure soil nutrients [16][17][18]. For instance, both spectroscopic and electrochemical sensors have become important measurement methods for the detection of soil nutrients in precision agriculture. However, experimental results from these sensor technologies face reproducibility problems. Electrochemical sensors can assess the spatial variability of different soil chemical properties; Tully and Weil reported a good correlation, with R2 values of about 0.96 [19]. An automatic flow sampling system with ion electrodes offers opportunities for rapid soil nutrient measurements [20,21]. Advanced Vis-NIR spectroscopic measurement of soil nutrients using a diode array spectrophotometer with wavelengths ranging from 399 nm to 1697 nm has also been demonstrated. However, potassium is not spectrally active in the Vis/NIR range when a solid soil sample is measured using the spectral method [22]. Detection and quantification of soil nutrients with diffuse reflectance Fourier transform near-infrared spectroscopy (FT-NIRS) together with multivariate analysis has been demonstrated, where a concentration of total nitrogen (N) was obtained [23]. A miniaturized optoelectronic colorimetric measurement system with a microelectromechanical systems (MEMS) structure has been used to measure extractable nutrients [24].
In that MEMS system, different extractants and a complicated sample preparation were applied to obtain the NPK content, and highly precise measurements of soil nutrients were achieved. However, such a MEMS device cannot be produced by conventional fabrication and requires expensive equipment. For this reason, the development of an inexpensive, robust system capable of repeatable, multi-channel, optoelectronic measurement of soil nutrients plays a crucial role in practical applications. Here, we report on the development of a multi-channel, optoelectronic measurement system that is versatile enough to be used as a portable field instrument for on-site soil nutrient analysis.
Soil Sample Collection and Preparation
To confirm the characteristics of this optoelectronic measurement system, soil was randomly sampled at a depth of 20 cm from a wheat field in Tianchi town, Sanmenxia city, Henan province. The samples were taken at a position indicated by a differential GPS (34°40′5.59″ N, 111°51′9.73″ E). The soil was calcareous, with elevated levels of both magnesium carbonate and calcium. The disintegrated granules were sieved with a 1 mm mesh to remove undisintegrated granules prior to the experiment. A mass of 5 g of dry soil was measured with an accuracy of ±0.01 g, placed into a 100 mL triangle bottle, and mixed with 50 mL of universal extraction solution. These sample solutions were stirred for 30 min at a constant speed of 180 rpm (revolutions per minute). The extraction solution, containing 0.45 M NaHCO3 and 0.374 M Na2SO4, was provided by Henan Nongda Measurement and Testing Technology Co., Ltd. (Zhengzhou, China). After stirring, any undissolved granules were removed by filtration, and the filtrate was placed into another 100 mL triangle bottle for further experiments. The photodiode 2CR1227 (sensitive area 10 × 10 mm), used to convert optical signals in the spectral range of 340-1000 nm (peak wavelength 720 nm) into electronic signals, was obtained from Hangzhou Yuxuan Electronics Co., Ltd. (Hangzhou, China). Joint LED (light-emitting diode) light sources (product model 5PRG9VCA-A40-M660-A405-T515) with peak emission wavelengths of 405 nm, 660 nm, and 515 nm were purchased from Shenzhen Guanghui Sheng Electronics Co., Ltd.
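As a point of reference, converting a nutrient concentration measured in the extract back to a soil content relies on the standard mass-balance relation; with 5 g of soil in 50 mL of extractant the factor is 10. The short sketch below (the helper function is our illustration, not part of the paper) shows the arithmetic:

```python
def extract_to_soil(c_extract_mg_per_L, extract_volume_mL=50.0, soil_mass_g=5.0):
    """Convert extract concentration (mg/L) to soil content (mg/kg).

    mg/kg soil = (mg/L extract) * (L extract) / (kg soil)
    With 50 mL extractant per 5 g soil, the factor is 10.
    """
    return c_extract_mg_per_L * (extract_volume_mL / 1000.0) / (soil_mass_g / 1000.0)

print(extract_to_soil(25.0))  # 25 mg/L in the extract -> 250 mg/kg in soil
```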
Measurement Principle
The soil extractant solution was provided by Henan Nongda Measurement and Testing Technology Co., Ltd. (Zhengzhou, China) to produce a specific color for the measurements. The absorption spectra of the nutrient solutions obtained using the optoelectronic measurement system indicate the concentrations of nitrogen (N), potassium (K), and phosphorus (P) in the soil samples. The conventional lab-based reference methods were the Nash colorimetric method for nitrogen (NH4-N), the Olsen method for the determination of available phosphorus in calcareous soil (P), and flame photometry for potassium (K). The designed optoelectronic measurement system is equipped with joint LED sources, a cuvette holder, a PIC24FJ64 microcontroller chip (Microchip Technology Inc., USA), and the photodiode 2CR1227 (Hangzhou Yuxuan Electronics Co., Ltd., Hangzhou, China), with spectral response from 340 nm to 1000 nm and a high quantum efficiency of 75%. The cuvette holder is specially designed for square spectrophotometer cells with approximate outside dimensions of 12.5 × 12.5 × 46 mm, each with an optical path length of 10 mm. To account for the light attenuation caused by the square spectrophotometer cell, a blank solution without the analytes of interest is measured prior to commencement of the experiment. The standardized transmission spectrum of a measured solution was calculated as the ratio of the flux transmitted by the measured solution to that transmitted by the solution of known concentration under identical illumination conditions.
According to the Beer-Lambert law, the absorbance of a standard solution is directly proportional to its concentration, so the absorbance determined from standard solutions can be used to calculate the concentration of an unknown sample solution. The absorbance of a solution is proportional to the effective concentration of the sample solution, the path length (d), and the extinction coefficient:

A = log10(I0/I) = εcd,    (1)

where I0 is the initial light intensity; I is the transmitted light intensity; c is the concentration of the sample solution (in moles); d is the path length (mm), and ε is the extinction coefficient (M−1 cm−1).
After the power is switched on, the light beam from the LED light source is not stable for a period of approximately 20 min. In this experiment, a constant current source circuit with a three-terminal adjustable shunt regulator was therefore designed to drive the LEDs. All the square spectrophotometer cells had a reading error not exceeding 0.50% in absorbance when a blank solution was measured after thorough cleaning. The measurement procedure consisted of eight separate operations: a soil sample of mass 5 g was weighed in an aluminum box for the determination of soil water content; an appropriate wavelength for the nutrient of interest (based on the color of the solution) was chosen from the joint LED light sources using the included switching circuit; a cuvette containing a blank solution was inserted into one of the cells; the background light intensity was collected in a dark room; a solution of known concentration (in a cuvette) was inserted into the cells to calibrate the absorbance; the solutions to be measured (in cuvettes) were placed into the other cells; and the transmitted intensity was measured and then used to calculate the concentration value. With a solution of known concentration used as a standard, the concentration of a measured solution can be calculated from the following expression:

Cx = (Ax / As) · Cs,    (2)

where As is the absorbance obtained from the solution of known concentration; Ax is the absorbance obtained from the measured solution; and Cs is the concentration of the standard solution. The wavelength-dependent absorptivity coefficient k(λ) and the path length L cancel, since A = k(λ)LC for both solutions. Accordingly, the concentration Cx of a measured solution can be calculated from its absorbance.
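To make the two relations above concrete, here is a minimal sketch (ours, not the instrument firmware) of the absorbance computation and the single-standard concentration estimate of Equation 2, with illustrative intensity values:

```python
import math

def absorbance(i_transmitted, i_blank):
    """A = log10(I0 / I), taking the blank-solution intensity as I0."""
    return math.log10(i_blank / i_transmitted)

def concentration(a_sample, a_standard, c_standard):
    """Single-point calibration: Cx = (Ax / As) * Cs (Equation 2).

    The term k(lambda)*L cancels because both solutions are measured
    in identical cuvettes at the same wavelength.
    """
    return (a_sample / a_standard) * c_standard

a_s = absorbance(i_transmitted=420.0, i_blank=1000.0)  # standard solution
a_x = absorbance(i_transmitted=560.0, i_blank=1000.0)  # unknown sample
print(concentration(a_x, a_s, c_standard=8000.0))      # e.g. P standard, ug/L
```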
Principle Design
This multichannel optoelectronic system is designed to rapidly detect the concentration of N, P, and K in soil based on the well-known Beer-Lambert law. The scheme considered in the construction of the measurement system is shown in Figure 1. The system is mainly composed of joint LED light sources, photodiodes, a cuvette holder for square spectrophotometer cells with approximate outside dimensions of 12.5 × 12.5 × 46 mm and an optical path length of 10 mm, an electronic board including a microcontroller unit (MCU) with amplifiers, a multiple switcher, a 16-bit A/D converter ADS1100 (Texas Instruments Incorporated, Dallas, TX, USA), and a touch screen.
The LED driver circuit produces the current pulses for each LED as long as the electric power supply is switched on. Light passes through the square spectrophotometer cells directly and sequentially, and after the desired measurements of the N, P, K concentrations have been obtained, the power supply of the LEDs switches off. The power supply to the LEDs is controlled by the MCU to reach the desired levels automatically. The photodiode 2CR1227 (Hangzhou Yuxuan Electronics Co., Ltd., Hangzhou, China) then measures the transmitted light from the spectrophotometer cells, and these signals are amplified by the TLC2654C amplifier (Texas Instruments Incorporated, Dallas, TX, USA). The digital conversions depend on the output of the amplifier. The microcontroller unit (PIC24FJ64) offers multiple communication interfaces, including Universal Asynchronous Receiver Transmitter (UART), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C). An integrated touch-screen controller that drives the relatively high-resolution LCD is also provided by this MCU chip.
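The control flow just described can be summarized schematically. The sketch below is purely illustrative: select_led, select_channel, and read_adc stand in for the MCU's LED-switching, CD4051-multiplexer, and ADS1100 routines and are hypothetical placeholders (simulated here so the sketch runs), and the LED-to-nutrient pairing is our assumption, not stated explicitly in the text:

```python
import random

# Assumed LED-to-nutrient pairing (illustrative; the paper lists the
# wavelengths 405 nm, 660 nm, and 515 nm without an explicit mapping).
WAVELENGTH_NM = {"N": 405, "P": 660, "K": 515}

def select_led(wavelength_nm):
    """Stand-in for the MCU routine that switches the joint LED source."""
    pass

def select_channel(n):
    """Stand-in for driving the CD4051 multiplexer address lines."""
    pass

def read_adc():
    """Stand-in for one 16-bit ADS1100 conversion (simulated counts)."""
    return 20000 + random.randint(-50, 50)

def measure_all_channels(nutrient, n_channels=6):
    """Transmitted-intensity readings for all six cuvette positions."""
    select_led(WAVELENGTH_NM[nutrient])
    readings = []
    for ch in range(n_channels):
        select_channel(ch)
        readings.append(read_adc())
    return readings

print(measure_all_channels("P"))
```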
Construction of the Optoelectronic Measurement System
This optoelectronic measurement system requires a photodiode whose 3-dB bandwidth approaches 1 MHz, and a photocurrent of more than 2 mA can be expected from the photodiode 2CR1227, whose capacitance is 950 pF. The constant current source circuit with a three-terminal adjustable shunt regulator for driving the LED and the front-end circuit of the photodiode were designed as shown in Figure 2. The inverting input of the amplifier TLC2654C draws no current and always forces the voltage there to be close to zero. However, the voltage gain of the TLC2654C is not infinite, so the swing is not exactly zero. To produce an output voltage Vo, the amplifier requires an input voltage Vi = Vo/Av, where Av is the voltage gain of the amplifier, which rolls off at high frequency. The responding output voltages of the circuit were obtained with standard phosphorus concentrations of 0 (blank), 4000 µg/L (C1), 8000 µg/L (C2), 12,000 µg/L (C3), and 16,000 µg/L (C4) injected into the square spectrophotometer cells, respectively (see the inset in Figure 2a).
The resulting optoelectronic measurement system, after appropriately assembling the 6 joint LEDs, the cuvette holder with 6 cells, the 6 photodiodes, and an electronic board with a microcontroller (MCU, PIC24FJ64), the amplifier (TLC2654C), and the 16-bit A/D converter ADS1100, is shown diagrammatically in Figure 3a and photographically in Figure 3b. Once the solution of known concentration was placed in position, the multi-channel optoelectronic measurement system was set to obtain the transmitted light intensity. The measured solutions were placed into the other positions, and the multi-channel optoelectronic measurement system took the transmitted light intensity again. The concentration of the measured solution can then be calculated from Equation 2.
The joint LED light sources with peak emission wavelengths of 405 nm, 660 nm, and 515 nm were driven by a constant current to guarantee the stability of the light source. The cuvette holder met the need for precision sampling in absorbance applications and was crafted firmly to ensure repeatability. A unique positioning system guaranteed that the square spectrophotometer cells stood centered and upright in the holder without tilting. As shown in Figure 3a, cylinder-shaped circular holes for placing the joint LED sources were arranged facing the photodiodes. Both LED light sources and photodiodes were configured on each side of the cylinder-shaped cuvette holder, and 6 square spectrophotometer cells with corresponding photodiodes were involved in this measurement system, so that extractable nutrients (N, P, and K) from 6 samples could be measured simultaneously. Figure 3a shows the type of cuvette holder, which involves a manually driven structure with evenly distributed cuvettes. With this type of holder, a stable positioning system was designed to ensure repeatability of the measurement results together with high accuracy. The multisample cell explored in this optoelectronic measurement system consisted of joint LED light sources with peak emission wavelengths of 405 nm, 660 nm, and 515 nm and photodiodes firmly fixed onto the supporting clamp. However, due to the spatial chromatic nonuniformity of the LED light sources and photodiodes, the consistency of the measurement results should be calibrated prior to measuring the concentration of the soil nutrients. Figure 3b displays a prototype of this system, consisting of 6 cuvette holders, 6 photodiodes, and 6 joint LEDs, that was eventually used to build this portable optoelectronic measurement instrument without moving parts at a cost of 700 RMB.
Absorbance Compensations
For this multichannel optoelectronic measurement system, it is necessary to ensure the consistency of the absorbance from each channel, owing to errors that exist during absorbance measurement. These errors emanate from the joint LED light sources, the cuvettes, and the electronic components in the system. To ensure the measurement accuracy of this multichannel optoelectronic measurement system, a light intensity correction was performed prior to the absorbance compensation.

Firstly, the transmitted light intensity of the blank solution was recorded. The average transmitted light intensity Ibv of the blank solution over the 6 channels was calculated, and a correction factor βn for each channel was obtained as this average divided by the transmitted light intensity from that channel,

βn = Ibv / Ibn,    (3)

where Ibn is the transmitted light intensity of the blank solution from each channel. Consequently, the corrected intensity from each channel was obtained as the measured light intensity multiplied by the correction factor.

Secondly, the consistency of the absorbance across channels was evaluated by placing the soil standard solution into each cuvette position. The measured absorbance from each channel is expressed as

An = log10(Ibn / Isn),    (4)

where Isn is the transmitted light intensity of the solution of known concentration from each channel and Ibn is the transmitted light intensity of the blank solution from each channel. The average absorbance Av of the standard solution of known concentration was calculated over the 6 channels, and a correction factor σn was calculated as this average divided by the measured absorbance from each channel,

σn = Av / An.    (5)

After that, one cuvette with the blank solution was placed into channel 1 (CH1), and the corrected transmitted light intensities of the blank solution from the 6 channels were taken as βn·Ibn. Accordingly, one cuvette with the standard solution of known concentration was placed into channel 2, and the corrected absorbances of the solution of known concentration from the 6 channels were taken as σn·An. The correction coefficients of the transmitted light intensity of the blank solution and of the absorbance of the solution of known concentration from each channel were measured five times. The measurement results of the absorbance uncertainty from each channel with the different LEDs at the peak emission wavelengths of 405 nm, 660 nm, and 515 nm are shown in Table 1. From the experimental results, the cumulative relative standard deviations of the absorbance from all channels were 1.22%, 1.27%, and 1.00%, respectively. The standard solutions of known concentration were measured at the wavelengths of maximum absorption for the N, P, and K measurements, respectively.
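In code, the two correction stages amount to simple ratios against channel averages. A minimal sketch (ours, following the reading of the correction factors given above, with illustrative raw counts):

```python
import math

def intensity_corrections(blank_intensities):
    """beta_n = mean(I_b) / I_bn, one factor per channel (Equation 3)."""
    mean_i = sum(blank_intensities) / len(blank_intensities)
    return [mean_i / i for i in blank_intensities]

def absorbances(standard_intensities, blank_intensities):
    """A_n = log10(I_bn / I_sn) for each channel (Equation 4)."""
    return [math.log10(ib / isn)
            for ib, isn in zip(blank_intensities, standard_intensities)]

def absorbance_corrections(channel_absorbances):
    """sigma_n = mean(A) / A_n, one factor per channel (Equation 5)."""
    mean_a = sum(channel_absorbances) / len(channel_absorbances)
    return [mean_a / a for a in channel_absorbances]

i_blank = [9980, 10050, 10010, 9930, 10100, 9960]  # illustrative counts
i_std   = [4210, 4305, 4260, 4180, 4330, 4200]
beta  = intensity_corrections(i_blank)
sigma = absorbance_corrections(absorbances(i_std, i_blank))
print([round(b, 4) for b in beta], [round(s, 4) for s in sigma])
```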
Establishment of the Calibration Curve
In order to evaluate the performance of this measurement system, known concentrations of potassium (100, 150, 200, 250, 300, 350 mg/kg), phosphorus (10, 20, 40, 60, 80, 100 mg/kg), and nitrogen (NH4) (20, 30, 40, 50, 60, 70 mg/kg) were first measured sequentially to establish the measurement model prior to the sample measurements. The maximum relative standard deviations in absorbance were 5.24%, 5.64%, and 4.90% for the N, P, and K measurements with three repetitions, respectively. The relationship between the concentrations of N, P, and K and their corresponding absorbance values is displayed in Figure 4. From these results, it was observed that a linear relationship exists between absorbance and the concentration of the samples, as expected from the Beer-Lambert law. The sensitivity (change in absorbance per change of analyte concentration) of the optoelectronic device was found to be 7.17, 5.96, and 1.41 µM−1 for the N, P, and K measurements, respectively.
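Establishing such a calibration curve is an ordinary least-squares fit of absorbance against concentration. A minimal sketch (the concentrations are those listed above for potassium; the absorbance values are illustrative, not the paper's data):

```python
import numpy as np

# Known potassium standards (mg/kg) and illustrative measured absorbances.
conc = np.array([100, 150, 200, 250, 300, 350], dtype=float)
absb = np.array([0.141, 0.212, 0.282, 0.352, 0.424, 0.493])

slope, intercept = np.polyfit(conc, absb, deg=1)
r = np.corrcoef(conc, absb)[0, 1]
print(f"A = {slope:.5f} * C + {intercept:.5f},  R^2 = {r**2:.4f}")

# Invert the fitted line to read an unknown sample's concentration.
def to_concentration(a):
    return (a - intercept) / slope

print(to_concentration(0.300))  # mg/kg
```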
Sample Measurements
Sample measurements consisted of three steps: first, the known concentration of the nitrogen (N), phosphorus (P), or potassium (K) standard solution was entered into the operation software specially written for this measurement system; both the blank solution and the standard solution were then added into the first and second channels of the cuvette holder, with the solutions to be measured placed in the other channels; finally, the calibration of absorbance consistency for the channels was run accordingly. This calibration process of absorbance consistency was repeated before the daily sample solution measurements to ensure measurement accuracy. For validation purposes, the concentrations of the nutrients of interest (NH4-N, P, and K) in the tested soil samples were also determined using conventional lab-based methods: the Nash colorimetric method for nitrogen (NH4-N), the Olsen method for available phosphorus in calcareous soil (P), and flame photometry for potassium (K). The measurement results from both methods are shown in Table 2, and the relationship between the concentration results of NH4-N, P, and K as measured by our optoelectronic device and by the conventional methods is shown in Figure 5a-c.
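The agreement between the two methods reported in Table 2 and Figure 5 can be quantified with a simple regression of the device readings against the lab values. A minimal sketch with illustrative paired values (not the paper's data):

```python
from scipy import stats

# Paired results for one nutrient: lab method vs this device (illustrative).
lab    = [18.2, 25.4, 31.0, 40.5, 52.3, 61.7]
device = [17.5, 26.8, 29.9, 42.1, 50.6, 63.0]

fit = stats.linregress(lab, device)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.2f}, "
      f"R^2 = {fit.rvalue**2:.4f}")
```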
Conclusions
In this paper, we demonstrated that an optoelectronic measurement system based on joint LED light sources with a multichannel cuvette holder can be used to measure the concentrations of N, P, and K rapidly and conveniently. The concentration of the measured solutions conforms to the Beer-Lambert law, which was described together with the absorbance compensation. Prior to the practical measurement of soil samples, the absorbance compensation was applied to correct the absorbance across the different channels and ensure the measurement accuracy of the system. The absorbance was then found to vary linearly with the N, P, and K concentrations over the measurement range. The cumulative relative standard deviations of the absorbance from all channels were 1.22%, 1.27%, and 1.00%, respectively. The soil was randomly sampled at a depth of 20 cm from a wheat field in Tianchi town, Sanmenxia city, Henan province. The experimental results show high correlations (R2 = 0.9010, 0.9471, and 0.8923) between the absorbance at the peak emission wavelengths (405 nm, 660 nm, and 515 nm) and the respective nutrient concentrations in the soil samples. Based on these results, this multichannel measurement system, developed in-house to measure soil nutrients, provides a reliable and robust method for portable and rapid measurement of soil nutrients. It provides capabilities for more effective, continuous, and site-specific nutrient measurement for use in identifying the correct fertilizer for farms. In addition, this multichannel measurement system and the optoelectronic measurement methods can be further applied to the detection of pesticide residues and heavy metals, as well as other related applications involving biometrics.
Figure 1. Block diagram of the multichannel optoelectronic measurement system. This system is controlled by a microcontroller chip (PIC24FJ64). LED: light-emitting diode.
Figure 2. The circuits of the multichannel optoelectronic measurement system. (a) The diagram of the constant current source circuit with a three-terminal adjustable shunt regulator for driving the LED, the front-end circuit of an impedance amplifier for the photodiode, and the responding output voltages of the circuits displayed on the oscilloscope. (b) The prototype of the microcontroller unit (MCU) circuit board, including a PIC24FJ64 chip, a 16-bit A/D converter ADS1100, and a multiple switcher CD4051.
Figure 3. Multichannel optoelectronic measurement system: (a) components of the multichannel optoelectronic system, (b) prototype of the multichannel optoelectronic measurement system.
Figure 4. Calibration curves of nitrogen, phosphorus, and potassium obtained from the multichannel optoelectronic measurement system: (a) the calibration curve obtained from the known concentrations of nitrogen, (b) the calibration curve obtained from the known concentrations of phosphorus, (c) the calibration curve obtained from the known concentrations of potassium.
Figure 5. Comparison analysis of measurement results obtained from both the conventional lab-based methods and this multichannel optoelectronic measurement system: (a) the relationship for measured nitrogen established by the Nash colorimetric method, (b) the relationship for measured available phosphorus established by the Olsen method, (c) the relationship for measured potassium established by the flame photometer method.
Table 1. Results of absorbance uncertainty obtained from the 6 channels.
Table 2. Measurement results from both the conventional lab-based methods and this optoelectronic measurement system.
On language production principles and the form of language: a más cómo, menos por qué
The idea that the form of linguistic expressions is modulated by output requirements is widely accepted in current approaches in linguistics, including those taking "extremely nativist" positions. Thus, for instance, the Minimalist Program developed in recent years by Chomsky (1995) holds that language form is strongly determined by the requirements imposed on it by the two main interfaces it links: on the one hand the conceptual-intentional domain of meaning, and on the other, the sensorimotor domain by means of which linguistic expressions are externalized. Within psycholinguistics, consideration of the impact that processing requirements have on the form of sentences can be traced back to Yngve's Depth Hypothesis (Yngve, 1960), which identified an asymmetry in the incidence of left-branching and right-branching grammatical structures in English due to processing constraints. Thomas Bever (Bever, 1970), in a seminal paper generally acknowledged to pioneer modern psycholinguistics, put forth and discussed the hypothesis that the form of language reflects general cognitive laws, in such a way that mechanisms of language learning and processing partially determine the form of grammar. Fodor et al. (1974) integrated ideas in philosophy, psychology, and linguistics to explain what was called language performance, that is, language production and comprehension. At the time, the possibility that the form of grammatical structures might be determined by domain-general behavioral systems was thought of as an important challenge to linguistics. However, after decades of interdisciplinary language study, this view is now endorsed to varying degrees by most linguists and cognitive scientists, including those in "classic theories" which also hold the view that certain central architectural aspects of human language are not dependent on experience but rather imposed on it by organism-internal biases (Berwick et al., 2013).
The question, then, is not whether output/input factors related to the production and perception (i.e., externalization) of language modulate linguistic form, but rather how they do it and whether that is all there is to linguistic form. Put differently, what needs to be determined is whether externalization factors can account for linguistic categories like Noun or Preposition that are distinct from (though related to) conceptual entities; whether externalization conditions chisel discreteness into linguistic categories (phonemes, words, and phrases); and whether externalization forces a combinatorial and hierarchical structure on language.
MacDonald (2013) presents us with a thorough discussion of some principles operating on language production, and suggests a significant impact of those principles on the form of language and grammar. The discussion is adequately framed within the general theme of the relative weight that organism-internal factors and experience have in cognition. The paper opens with reference to findings on motion perception and object-face recognition, and it summarizes the state of the art by acknowledging that "While such accounts don't deny innate factors in perception, they are notable in ascribing a central role for experience in development and in adult performance." Two issues are under discussion here. One is whether externalization (production) demands can fully account for language form, or whether externalization demands modulate language form in concurrence with other factors, some of which are organism-internal and prior to experience, as in other aspects of cognition. If the first position were correct, then language would be truly distinct, and very much unlike motion perception, face recognition, and other cognitive functions, in not involving innate, organism-internal factors in its structure and development. The question is not whether experience plays a key role in development and adult performance, since no account of language denies that, but whether experience is all that is required to explain its nature, a more controversial view, particularly among scholars devoted to the study of language form.
The position defended by MacDonald does not appear to fall completely into this latter class, because the strong position at the start of the paper is softened into the claim that "the memory and planning demands of language production strongly affect the form of producers' utterances," a statement that views externalization/production demands as one molding factor, leaving space for others. Indeed, few language researchers would feel uncomfortable agreeing with MacDonald "not that all aspects of language form and comprehension can be traced to the computational demands of language production, but rather that production's impact in these areas is so pervasive that understanding production becomes essential to explaining why language is the way it is, and why language comprehension works the way it does." The interest of the current proposal lies in the details, that is, in showing how production can explain why language is the way it is. In scientific thinking, "a más cómo, menos por qué": the more we know about how, the less we need to ponder why (Wagensberg, 2006).
MacDonald argues that language form is significantly molded by two domain-general principles that seek to reduce the computational burden on the speaker: Easy First and Plan Reuse. Easier elements are put first in the sequence, and speakers tend to reuse the types of sentences they have just heard. In order to evaluate the predictive capabilities of these two principles, we need to know what types of elements are easy, and what a plan is. Regarding what is easy in language, it involves words that are easily retrieved, and since the reasons why a given word or phrase might be easy to retrieve can be many, the criteria for easiness are heterogeneous, including frequency, length (favoring shorter words and phrases), complexity (favoring syntactically less complex elements), importance for the speaker, previously mentioned material, and newly mentioned material. MacDonald is well aware of the possible circularity and lack of predictive power of this principle; it is one thing to argue for analogies in the navigation or action-planning system regarding the easy-first principle, and another to show that it is operational in accounting for language form.
As examples, let us consider some aspects of linguistic form and how principles seeking to minimize difficulty during production might relate to them. We will consider a simple sentence form from English, shown in (1):

(1) There are men in the room

This is an existential sentence, a very simple type of statement, which nevertheless involves a number of interesting form-related particularities, as existential sentences do across languages. This is one reason why they have been extensively studied in linguistics (from Milsark, 1979, to Moro, 1997, among many others). From the perspective of easy-first, it would appear that both there and men could appear first in the sentence, because they are short and frequent words, with men having the advantage of being animate, and therefore conceptually very salient. But only there can take the first place in an English existential sentence. Consider how other attempts fail:

(2) (a) * Men are there in the room
(b) * Men are in the room there
(c) * Men there are in the room

To account for these patterns, the Plan Reuse principle can be appealed to: English speakers are used to hearing existential sentences in that form, and that is why they keep using them in such a form. The question that lingers is why English should have chosen a plan that violates easy-first to begin with, particularly when it appears that nothing stands in the way of omitting that initial and apparently semantically redundant word there, as shown by the Spanish example in (3):

(3) Hay hombres en la habitación
are men in the room
"There are men in the room"

One could also wonder why, given the easy-first principle, this Spanish sentence does not turn into (4), placing the most salient piece of information at the start, particularly given the relative freedom of word order that Spanish displays in other regards:

(4) * Hombres hay en la habitación

A purported example of easy-first in action, as discussed by MacDonald, is the choice between active and passive in English, which speakers can allegedly use to place easier elements early. However, these subtle choices, even if they could account for the passive and active sentences produced in English, can hardly account for one central type of question that linguists try to answer, which revolves around the class of possible and impossible sentences, rather than preferred and dispreferred ones. Turning back to linguistic form, the central types of question linguists seek answers to involve contrasts like the one in (5) between Dutch and English. Why can Dutch create passive sentences out of intransitive (unergative) verbs as in (5a), whereas English or Spanish cannot (5b, c)?
(5) (a) Er wordt door Jan getelefoneerd
there was by Jan telephoned
(b) * It/There was telephoned by John
(c) * Fue telefoneado por Juan

Similarly, accounting for the form of language requires understanding how it is that passive constructions are characteristic of nominative languages like English, Spanish, and Dutch above, and how it is that they are missing in ergative languages, where instead of passives we find a different type of construction known as the antipassive, illustrated in (6) for Inuit (examples from Bobaljik, 1992):

(6) (a) Jaani-up tuktu tuqut-vaa
Jaani-Erg caribou kill-3sg-3sg
"Jaani killed a caribou"
(b) Jaani tuktu-mik tuqut-si-vuq
Jaani caribou-P kill-antp-3sg
"Jaani killed (by/at) caribous"

In recent decades, linguistics has gained a deeper understanding of the central issues about the form of languages, and a large part of the explanation does not involve principles like easy-first or plan reuse. This, of course, does not mean that avoidance of computational burden is not operational in language production. But it does mean that a persuasive approach to linguistic form, whatever its type, must present coherent accounts of these types of typological correlations. The times are ripe for a truly interdisciplinary quest to understand the complex nature of language, and there are many new outlooks that seek to find the underlying forces that make language so easy to use, but so hard to characterize (see Sanz et al., 2013 for a current forum). I thoroughly agree with MacDonald that the separation between psycholinguists who study the production and perception mechanisms of language and linguists who study language form is a real and quite unfortunate one, but we both probably agree that this gulf has been progressively bridged in recent years, at least in some realms. This will undoubtedly increase the knowledge of the types of problems different language researchers seek to solve, and of the often intricate details involved in the problems themselves.
ACKNOWLEDGMENTS
The author is grateful to Idoia Ros and Mikel Santesteban for comments and discussion. Research funding from the Spanish Ministry of Economy and Competitiveness (CSD2007-00012, FFI2012-31360) and the Basque Government (IT665-13) is gratefully acknowledged.
Percutaneous thoracic endovascular aortic repair for ascending aortic pseudoaneurysm after prosthetic aortic valve repair
Ascending aortic pseudoaneurysms are an uncommon and challenging surgical problem that requires intervention to avoid rupture and hemorrhage. Preceding cardiac procedures often compound the high rate of morbidity and mortality associated with open repair. A case is described of an iatrogenic pseudoaneurysm in a patient with a recently placed prosthetic aortic valve and a clinical course precluding repeat open operative procedure. An endovascular approach was used, with placement of a thoracic aorta endograft with temporary cardiac pacing and a double-curved Lunderquist wire to avoid instrumenting the prosthetic aortic valve. At 9 months of follow-up, the patient returned to his baseline activity status, and at 24 months, had no symptoms or signs of infection, and a computed tomography angiogram demonstrated pseudoaneurysm exclusion with no graft migration.
The mainstay of treatment for ascending aortic pathology is open repair. Cardiothoracic surgeons have long performed open procedures for ascending aortic aneurysms and type A aortic dissections, often with repair of concurrent aortic valve anomalies (bicuspid and unicuspid valves with stenosis or insufficiency). These procedures require cardiopulmonary bypass and may be unsuitable for some patients. Endovascular approaches for ascending aortic pathology have been described more frequently in recent years, often as case reports or limited case series.1-3 Most reports of endovascular approaches focus on treatment of ascending aortic aneurysmal disease as well as type A dissections, with less attention to other aortic pathology.
Pseudoaneurysms of the ascending aorta are seen in rare circumstances, typically after cardiac surgery or chest trauma. The potential for rupture or significant hemorrhage mandates surgical intervention. However, open operative procedures for aortic pseudoaneurysms carry a high risk of morbidity, with mortality rates of 41% to 60%.4,5 Endovascular approaches to ascending aortic pseudoaneurysms (AAPs) are varied. Septal occluder devices have been used in a few cases6,7; however, complications, including device embolization, have been reported.8 Endograft placement for AAPs has also been described in case reports9-11; yet complications such as graft migration,12 ventricular perforation, or pseudoaneurysm formation have been reported,3 and technical approaches vary significantly. We discuss a unique case of endograft exclusion of an iatrogenic AAP in a patient with a prosthetic mechanical aortic valve and the techniques that led to successful treatment. The patient presented in this case report consented to the publication of this information.
CASE REPORT
A 60-year-old obese man had undergone a minimally invasive sternotomy with prosthetic aortic valve replacement. His postoperative course was complicated by a cardiac arrest with mediastinal bleeding requiring re-exploration. An anterior ascending aorta injury was found and was repaired primarily. He was discharged home, but returned 10 days later after a syncopal episode and fall with recurrent mediastinal bleeding.
Repeat sternal exploration was performed after computed tomography diagnosed continued anterior aortic bleeding. Purulent fluid was also drained from the pericardium, and cultures were positive for Escherichia coli and Proteus mirabilis. He was subsequently discharged to a skilled nursing facility but returned in 1 week with wound dehiscence and recurrent bleeding. A sternotomy was once again performed, and the site of aortic bleeding was repaired with biologic glue because the tissue was felt to be unsuitable for suture retention. A pectoralis major flap was placed in the sternal wound bed. Three subsequent sternal site washouts followed during the next week, and anticoagulation was held. Unfortunately, he once again became hypotensive, with chest pain and evidence of active bleeding on examination, and underwent another mediastinal hematoma evacuation.
The patient was transferred to our hospital on broad-spectrum antibiotic therapy with concern for continued ascending aortic bleeding and infection. He arrived with imaging demonstrating a pseudoaneurysm on the anterior portion of the ascending aorta (Fig 1). Multiple cardiothoracic surgeon consultants judged that he was too severely deconditioned to tolerate a repeat open operative intervention requiring cardiopulmonary bypass and ascending aortic graft placement. He was managed initially with pharmacologic hypotension and no anticoagulation, despite the prosthetic aortic valve; however, he had a recurrent bleeding episode during medical management.
The decision was made to attempt endovascular repair with off-label endograft use for exclusion of the AAP. This was deemed a viable option based on a preoperative computed tomography angiogram showing the ascending aorta was at least 10 cm long. This was confirmed with catheter-based angiogram measurements of the outer wall length; therefore, we chose a 45-mm diameter by 10-cm length CTAG device (W. L. Gore & Associates, Flagstaff, Ariz).
A multidisciplinary team was organized, consisting of a cardiac anesthesiologist, echocardiographer, interventional cardiologist, cardiac surgeon, and vascular surgeons. A temporary pacemaker was placed to provide rapid ventricular pacing for accurate graft deployment. Percutaneous catheterization of the right common femoral artery was performed under ultrasound guidance. A Prostar 10F XL (Abbott, Abbott Park, Ill) device was inserted and deployed with the preclose technique.
After access into the ascending aorta and placement of a Lunderquist double-curved wire (OptiMed, Ettlingen, Germany), a 24F sheath was inserted. Thoracic arch aortography and transesophageal echocardiography showed the prosthetic aortic valve was intact. The Lunderquist wire was advanced and allowed to deflect off of the valve ring. The prosthetic valve was not crossed, reducing the possibility of wire entrapment on the valve and decreasing the risk of damage to the valve leaflets.
A pigtail catheter was passed retrograde from the right brachial artery into the aorta for arch aortography during graft deployment and to facilitate real-time localization of the innominate artery at the point where the brachial and transfemoral wires crossed. A transesophageal echocardiogram was used to confirm ascending aortic diameter measurements and to precisely locate the coronary ostia and the innominate artery orifice.
The CTAG device was advanced into position (Fig 2). A rescue wire was placed from the right brachial artery through the innominate artery and into the ascending aorta in case there was a need for snorkel stent placement. Rapid ventricular pacing was briefly induced to ensure a high rate of capture, while simultaneous confirmatory angiography and transesophageal echocardiographic monitoring were performed with apnea (ventilation held). The endograft was then deployed directly above the coronary ostia and at the innominate origin (Fig 3) during rapid pacing, which lasted for 4 to 5 seconds. A standard frame rate of 3 frames/s was used throughout the procedure, and contrast injection rates were 15 mL/s for 30 mL (2 seconds). Echocardiographic imaging allowed real-time evaluation of the coronary arteries to confirm our positioning, which was based primarily on angiography. The sheath was withdrawn, and access site closure was performed with the closure device.
Postoperatively, the patient was restarted on systemic anticoagulation with heparin. He was extubated on postoperative day 4 and transferred out of the intensive care unit on postoperative day 11. He was discharged 22 days after the procedure. Given the placement of a prosthetic endograft in an infected field with Escherichia coli and Proteus mirabilis returning from cultures, intravenous vancomycin, ceftriaxone, and micafungin were used for a total of 4 weeks.
The patient has been seen in follow-up at 5, 9, 12, and 24 months from surgery with repeat computed tomography. He has no signs or symptoms of infection, has returned to his usual activities, and imaging shows an excluded AAP (Fig 4).
DISCUSSION
Endovascular treatment of ascending aortic pathology is a novel option. One device specifically developed for the ascending aorta has been deployed successfully 13 ; however, infrarenal and off-label descending thoracic aortic devices are currently the mainstay of endovascular treatment. Case reports and small case series describe treatment of common ascending aorta pathologies 1-3 ; yet, in patients with more uncommon conditions and complicated anatomy, such as this patient with an AAP and a prosthetic aortic valve, strategies for safe device deployment require further exploration.
The nuances of endovascular device placement in ascending aortic pathology are important. Rapid ventricular pacing is used by some, 14 and atrial occlusion for controlled hypotension is used by others. 9 Rapid right ventricular pacing through a pulmonary artery catheter has also been described. 15 Rapid transvenous ventricular pacing is our preferred method and resulted in adequate flow arrest. Using this technique, a 10-cm device was landed precisely between the coronary ostia and the innominate origin.
Techniques to protect previously placed prosthetic aortic valves while placing an ascending aortic endograft are less well known and deserve focused attention. Manipulating a large-caliber endovascular deployment system in the ascending aorta requires a stiff wire for tracking while in close vicinity to the aortic valve. Wire entrapment is therefore a major concern. Techniques to avoid wire entrapment have been described in the interventional literature, specifically when passing across prosthetic valves for left ventricular pressure measurement.16 Wire entrapment has been reported when passing through native valves while placing hemodialysis catheters17,18 and during attempts at paravalvular leak repair after prosthetic valve placement.19 Ventricular complications, such as cardiac perforation or ventricular pseudoaneurysm, have also been reported after endovascular wire manipulation within the heart.3 In this case, a double-curved Lunderquist wire provided adequate stiffness while allowing deflection off of the prosthetic valve in a limited working zone. Without crossing the valve, the chances of wire-related valve dysfunction and ventricular complications were reduced.
Other techniques used in this case included percutaneous access to avoid the sternum and obtaining a second access for arteriography from the right brachial artery. Transesophageal echocardiography was essential for real-time assessment of guidewire location, endograft positioning, and prosthetic valve function. Careful selection of a device with an adequate working length and a short nosecone is necessary. In this case, the short tip of the W. L. Gore delivery system offered a distinct advantage over other devices, given the mechanical valve anatomy.
The skills of many subspecialties were used, with involvement of vascular surgery, echocardiography, interventional cardiology, cardiac anesthesia, and cardiac surgery. This is the same team approach that has been successful in transcatheter aortic valve implantation in the Placement of Aortic Transcatheter Valves (PARTNER) study. 20 Extended antibiotic therapy, with guidance from infectious disease specialists, was also a key component to achieving adequate long-term results, with the caveat that close follow-up will continue to be essential given placement of a prosthetic material in an infected field.
CONCLUSIONS
AAPs pose significant treatment challenges, particularly in patients with clinical comorbidities limiting open intervention. Certain technical considerations can guide safe deployment of ascending aortic stent grafts in these challenging patients.
Cost of oral cholera vaccine delivery in a mass immunization program for children in urban Bangladesh
Highlights

• Estimation of the cost of an OCV campaign targeting children is important for policy implications.
• Vaccine price was the highest cost driver among the variable costs.
• Cost differences can be attributed to the scale and location of vaccination.
Introduction
Cholera is an acute diarrheal disease that is responsible for a substantial health burden in the developing world and is endemic in Asia and Africa [1]. It is caused by toxigenic Vibrio cholerae and is closely linked to poverty, poor sanitation, and lack of safe water [2,3]. The disease is a significant cause of mortality and morbidity, particularly in children [4,5]. It is estimated that approximately 2.86 million cholera cases and 95,000 related deaths occur annually in endemic countries, where 1.30 billion people are at risk [1]. A large part of these deaths occur in resource-limited countries in sub-Saharan Africa and Asia, where access to improved drinking water sources remains a significant challenge [6]. Cholera is endemic in Bangladesh, which ranks among the countries with the highest number of people at risk for cholera [1,7]. A recent nationwide surveillance network found that cholera is pervasive throughout the country, with substantial heterogeneities within and between geographic areas [8].
WHO has prequalified several oral cholera vaccines (OCVs) and coordinates a Gavi-funded global stockpile of OCV, established in 2013 and managed by the International Coordinating Group (ICG) as well as by the Global Task Force on Cholera Control, to ensure rapid access to vaccines in outbreak and humanitarian emergency situations, as well as for control of cholera in endemic hotspots [5,9,10]. Different randomized trials conducted in a variety of settings (Bangladesh, India) have demonstrated that the OCV Shanchol confers substantial protective effectiveness, lasting for three to five years [11][12][13][14]. The Expanded Program on Immunization (EPI) in Bangladesh is one of the most successful programs in the health sector in reducing childhood morbidity and mortality from vaccine-preventable diseases [15]. The decision to introduce new vaccines into the EPI program should be supported by public health impact evaluations as well as economic analysis. It is crucial to estimate the cost of vaccination, including the cost of delivery and other sundry costs, for policymakers to prioritize effective nationwide vaccination programs. Although a number of studies have evaluated the cost of cholera vaccine delivery targeting all age groups, no studies have focused on campaigns targeted at children only [1][2][3][4][5][6][7]. In cholera-endemic areas, OCV programs for children ages 1 to 14 years old are considerably more cost-effective than vaccination of all age groups [8]. The greater cost-effectiveness of vaccinating children is due to both the higher incidence rates among children and the herd immunity conferred on adults by vaccinating children. The aim of the study was to estimate the cost of an OCV campaign targeting children in a lower-middle-income country using empirical data. Furthermore, identifying the relative contribution of different cost components to the total cost and estimating the cost per fully vaccinated individual were other objectives of this study. This analysis provides a comprehensive account of the cost components involved in deploying cholera vaccines to children in high-risk, urban areas.
Study site
The study featured a mass vaccination campaign carried out in different administrative wards of Dhaka South City Corporation (DSCC), including Kamrangirchar, Hazaribag, and part of Rayerbazar. These areas are densely populated and lack adequate housing, which further exacerbates their vulnerability to waterlogging and disease outbreaks associated with poor drainage and sanitation systems [16].
Social mobilization activities involved interpersonal communication by the field workers and focal advocacy meetings in the catchment area. Prior to the vaccination campaign, a group of trained workers and volunteers visited each target site to distribute vaccination cards and present messages related to cholera, OCV, and vaccination activities. Communities were reminded by the volunteers about their vaccination date and time and to bring their ID cards to the vaccination site. Targeted cell-phone messages and banners at vaccination sites were used to create awareness and encourage participation in the vaccination program. Community leaders, both elected and appointed, were sought out and enlisted to help spread the word about the importance of immunization.
Vaccination campaign
The oral cholera vaccination program was conducted in the study areas with a two-dose regimen of the Shanchol vaccine between January and February 2016. The single-dose vials of Shanchol were stored at the recommended temperature of 2-8 °C at EPI headquarters for this study. The target population was children aged 1 to 14 years living in the urban communities. The first-dose campaign was conducted from 23 January to 4 February, followed by the second dose from 6 February to 18 February. Before the vaccination campaign, a baseline census was carried out to determine the target population in the study areas. A group of trained staff made house-to-house visits and collected demographic and social information on each household member. Unique household and member identification numbers were generated, and laminated identity (ID) cards were distributed to household members. This ID card contained the household ID, member ID, individual name, age, and household location. Individuals were asked to show their study ID cards when receiving the vaccine for verification and record-keeping during the program. All registered household members aged 1 to 14 years living in the study areas were invited to participate in the vaccination campaign.
To implement vaccine delivery, the program chose feasible sites such as school/college campuses, government/non-government institutions, ground floor/parking spaces of houses, and other open spaces in each study area, keeping in mind the maximum accessibility of the study population. Before and during the vaccination campaign, promotional activities in the form of media announcements, leaflets, and posters distribution were carried out. Field workers and volunteers also visited targeted households in selected areas to inform residents about the vaccination program and the time and place of vaccination. Local health facilities, pharmacies, and community residents were also involved to encourage attendance.
Informed written consent from guardians, as well as assent from children aged 11-14 years, was obtained before vaccinating individuals. Each vaccination site was equipped with first aid boxes and drinking water for vaccine recipients. In case of any adverse event due to vaccination, a trained medical team was assigned to each vaccination site for emergency treatment. Participants who received the first dose of vaccine were eligible to receive the 2nd dose, with an interval of at least 14 days between doses in this study. After vaccination, empty vials and aluminum foils were kept separate from other general waste in biohazard waste-collection bags. All unused vials in good condition from all vaccine sites were collected at the end of the day and returned to central cold storage for use the next day. To maintain the cold chain and to manage the packing, transportation, and distribution of vaccine, logistics, and waste, 12 people were engaged throughout the period. Although the campaign essentially took place at fixed locations, certain teams functioned as mobile teams, and remuneration differed across roles according to their importance for the successful vaccination program.
Perspective of this study
Vaccination cost was estimated from the societal perspective, which incorporates costs from both the provider and household perspectives. The societal cost arises from three main elements: the cost of acquiring the vaccines from the manufacturer, the cost of vaccine delivery and administration, and the cost incurred by the household to receive the vaccine [17]. Thus, the societal cost can be obtained by adding up these three cost components.
Data collection
The study employed quantitative techniques to collect the cost data based on the financial records of purchasing, transporting, and administering the vaccine. Other related programmatic costs were gathered qualitatively through document review, observational checklists, and interviews with program managers and related authorities. Vaccinated individuals also incur time and monetary costs for traveling to the vaccination sites, potentially queuing for the vaccine, and the time requirements associated with the vaccination protocol. We have included these costs as well.
Data analysis
All resource items used for vaccine delivery activities were captured using an "ingredients" approach of micro-costing. In the ingredients approach, all types of inputs were listed with their respective quantities and costs by activity [18]. Fixed and variable costs were captured through a comprehensive list of activities during the time of vaccination. Fixed costs were those essential for setting up and running the vaccination campaign that do not vary with the number of vaccinated people. Variable costs, on the other hand, vary with the number of people being vaccinated. The major activities of this mass vaccination campaign were vaccine procurement from the manufacturer, storage and cold chain management, training, social mobilization, vaccine delivery to the selected population, and waste management. During this campaign, cold chain management related activities were performed at no cost by the Government of Bangladesh. Although the actual expenditure for these items was zero, the shadow prices for each item were obtained from market prices and included in our analysis.
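To make the ingredients approach concrete, the following minimal sketch (in Python) tallies cost as quantity × unit price for each listed input and splits fixed from variable components; the line items, quantities, and unit prices are illustrative assumptions loosely based on figures reported later in this paper, not the study's actual ledger.

# Minimal "ingredients" micro-costing sketch: each input carries its
# activity, cost type, quantity, and unit price; totals are sums of q * p.
items = [
    # (activity, cost_type, quantity, unit_price_usd) -- illustrative values
    ("vaccine procurement", "variable", 141852, 1.90),
    ("social mobilization", "fixed",         1, 5787.00),
    ("waste management",    "variable",      1, 3185.00),
]

def total_cost(items, cost_type=None):
    """Total cost overall, or restricted to 'fixed' or 'variable' items."""
    return sum(q * p for _, t, q, p in items if cost_type in (None, t))

print(total_cost(items))            # overall cost of the listed inputs
print(total_cost(items, "fixed"))   # fixed component only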
Prices of all capital items such as vaccine cold boxes, vaccine carriers, and dial thermometers were discounted using an inflation adjusted rate and annualized using their respective functional lifetime to calculate the allocated cost for the program. For this purpose, we applied a 3 % discounting rate [17]. This rate was then adjusted according to the average inflation rate, which was 7.01 % for the period 2011-2015 as reported by the Bangladesh Central Bank. Shared cost items (cold chain storage, refrigerator) were apportioned according to the proportion of time usage of the relevant item or activity. The rental prices of vehicles used during the vaccination campaign were also included in the analysis, apportioned in a similar approach.
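The annualization step can be illustrated with the standard annuity formula; the short Python sketch below assumes the common annuity-factor convention, and the example price and lifetime are hypothetical values rather than actual item costs from the study. Since the text does not fully specify how the 3 % discount rate was combined with the 7.01 % inflation rate, the rate argument should be read as the (assumed) inflation-adjusted rate.

def annualized_capital_cost(price, lifetime_years, rate=0.03):
    # Present value of an annuity of 1 per year over the item's lifetime
    annuity_factor = (1 - (1 + rate) ** -lifetime_years) / rate
    # Spreading the purchase price over the lifetime yields the annual cost
    return price / annuity_factor

# Hypothetical example: a US$ 100 cold box with a 5-year functional lifetime
print(round(annualized_capital_cost(100.0, 5), 2))   # about 21.84 per year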
The monetary value of time spent by some senior management staff of vaccine project were obtained after discussion with program management team and added to our estimation by calculating each staff member's time involvement in the study and their respective salaries.
To obtain the indirect costs incurred by the vaccine recipients, such as the travel and waiting time taken to receive the vaccine, we used an age-specific human capital approach. For this calculation, vaccine recipients were categorized into two age groups: children < 5 years of age, and children between 5 and 14 years of age. We considered the cost of travel and waiting time as zero for the younger group. For the older age group, the cost of time lost by vaccine recipients and attendants (e.g., parents, guardians) was calculated by generating per minute income of the attendants, and multiplying that measure by the time spent for vaccination. When income of the attendant was not available, such as for those not directly involved in economic activities, the monthly income was replaced by the minimum wage rate of Bangladesh.
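A minimal sketch of this age-specific human capital calculation is given below (Python). The working-days, hours-per-day, and minimum-wage figures are hypothetical conventions introduced purely for illustration, not values from the study; the minimum-wage fallback mirrors the substitution described above, and time for children under 5 years is simply valued at zero.

def attendant_time_cost(minutes_spent, monthly_income=None,
                        min_wage_monthly=75.0,   # hypothetical US$ figure
                        working_days=22, hours_per_day=8):
    # Value travel and waiting time at the attendant's per-minute income,
    # falling back to the minimum wage when income is unavailable
    income = monthly_income if monthly_income is not None else min_wage_monthly
    per_minute = income / (working_days * hours_per_day * 60)
    return per_minute * minutes_spent

# e.g., 45 minutes of travel and waiting for an attendant earning US$ 150/month
print(round(attendant_time_cost(45, monthly_income=150.0), 3))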
Sensitivity analyses were conducted to determine the range of cost estimates under different scenarios pertaining to delivery activities. We examined the effect of changes in the price of vaccines and in staff salary levels; salary level was of interest because relatively lower salary levels may be more appropriate for Bangladesh than those of the icddr,b project staff. A univariate analysis was conducted to observe the possible effects of the main cost drivers on the cost of full vaccination.
Results
During the vaccination campaign, a total of 141,852 doses of vaccine were used, with 75,170 administered as the 1st dose and 66,487 administered as the 2nd dose; 195 vaccine vials were wasted because they were broken, soiled, or damaged. In total, 66,311 individuals received full vaccination (both 1st and 2nd dose) and the remaining 9,035 individuals received incomplete vaccination (only one of the two doses) (Fig. 1). The dropout rate between the 1st and 2nd rounds was 12.02 %.
Cost of vaccination program
The total cost of the vaccination campaign was US$ 405,445. Fig. 2 shows that the vaccine procurement cost was US$ 269,519, which included freight charges, insurance, and international and domestic transport. The vaccine delivery-related cost was US$ 129,159, and the cost incurred by the vaccine recipients was US$ 6,767 (Fig. 2).
Fixed costs accounted for 1.92 % of the total cost of vaccination, while variable costs accounted for 98.08 %. Considering all cost components, vaccine price was the main cost driver, constituting 66.47 % of the total cost, followed by staff salaries, which comprised 18.09 % of the total cost. Fixed costs of the vaccination campaign amounted to US$ 7,804, and the cost of social mobilization and promotion was the largest component within the fixed costs, at US$ 5,787. Social mobilization activities featured a variety of advocacy meetings arranged before and throughout the vaccination campaign, including meetings with local government representatives, NGOs, and pediatric associations. The cost of social mobilization and promotion included the cost of promotional banners, leaflets, meeting-related costs such as per-diems, and other promotional activities. Moreover, six different long-term training sessions were conducted with the supervisors and volunteers to demonstrate vaccine administration. For this purpose, US$ 175 was spent on training-related activities.
Regarding the total cost distribution, vaccine price was the highest cost driver among the variable costs and in the total vaccination campaign cost, accounting for 66.47 % of total costs. Vaccine vials were imported at a special rate of US$ 1.85 per dose, and after adjusting for freight charges, the procurement cost amounted to US$ 269,519 in total (about US$ 1.90 per dose).
Cold chain management and stationery design
Cold chain-related costs were another important contributor to fixed costs. When the vaccines were shipped to Bangladesh, the vials were temporarily stored at the central EPI cold storage and later sent to field sites on demand. A total of US$ 1,164 was spent on cold-chain and management-related activities, including the costs of cold boxes, vaccine carriers, thermometers, and cold storage at field sites. Additionally, a total of US$ 678 was used to develop the micro-plan guide for the vaccination campaign, outline strategic guidelines, and produce other manuals for managers, supervisors, vaccinators, and volunteers. This cost also covered capacity building of human resources; advocacy, communication, and social mobilization; program support costs; recruitment of health professionals, community health workers, and volunteers; support for creating and scaling up new performance-based incentive systems; and reward/incentive payments to health workers, volunteers, or community health workers.
Staff salaries, transportation, and other variable costs
During the mass vaccination campaign, a total of US$ 73,329 was paid as honoraria to all staff related to the vaccination program, including personnel salaries and communication allowances for icddr,b staff members and small allowances for non-staff personnel such as cold chain packers, supervisors, and volunteers. After administering the vaccines, empty vials and other supplies were disposed of according to local guidelines. A total of US$ 3,185 was spent on waste management, including US$ 1,332 for incineration and US$ 1,853 for the purchase of biohazard bags and waste baskets.
Recipient cost for vaccination
The total cost incurred by participants to receive the vaccine was estimated to be US$ 6,767. This cost included all direct costs to the participants such as travel and food, and indirect costs such as time lost by participants to receive the vaccine, including time for travel and attendance.
Sensitivity analysis
To assess the impact of key cost components, such as vaccine price, staff salary, social mobilization cost, recipient cost for vaccination, and printing cost, on the overall cost of full-dose vaccination, a univariate sensitivity analysis was performed. In Fig. 3, lines representing different cost components demonstrate their effects on the full-dose vaccination cost as the estimated costs change by a given percentage; steeper lines indicate stronger responses. At the existing set point, with the true estimated cost of items (Table 1), the cost of a full regimen of vaccination was US$ 6.11. A 10 % increase in the vaccine price raised this to US$ 6.52; in this scenario, the total cost per fully vaccinated person would increase by 6.65 %. Conversely, a 20 % decrease in staff salary would make the cost per fully vaccinated individual US$ 5.89, a 3.62 % decrease from the initial cost (see Table 2).
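The quoted figures can be reproduced directly from the cost shares reported above (vaccine price 66.47 % and staff salary 18.09 % of the total); the Python snippet below is an illustrative recomputation of the univariate scenarios, not the study's analysis code.

base_cost_fvp = 6.11     # US$ per fully vaccinated person (Table 1)
vaccine_share = 0.6647   # vaccine price share of total cost
salary_share = 0.1809    # staff salary share of total cost

def adjusted_cost(share, pct_change):
    # Cost per fully vaccinated person after changing one component
    return base_cost_fvp * (1 + share * pct_change)

print(round(adjusted_cost(vaccine_share, 0.10), 2))   # 6.52, i.e., +6.65 %
print(round(adjusted_cost(salary_share, -0.20), 2))   # 5.89, i.e., -3.62 %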
Discussion
This analysis is the first empirical evaluation of OCV costs in a program exclusively targeting children. We estimated the total societal cost of delivering a two-dose regimen of OCV to children living in urban communities by comprehensively costing a mass vaccination campaign that reached 75,346 children in Dhaka, Bangladesh.
With the increased interest in controlling cholera through vaccination, research to investigate and analyze the cost of immunization programs has been gaining traction. While a number of analyses on the delivery cost of OCV have been published, few of these reports feature sufficient detail regarding the data collection of cost components, and none are focused on delivery only to children [19][20][21][22].
Our comprehensive evaluation of OCV costs took into account the indirect costs of vaccination, such as the loss of income and transportation costs of vaccine recipients, which are critical elements often overlooked in other cost estimation research. We assessed these indirect costs using an age-specific human capital approach, which fully considered the lost time and income of vaccine recipients and their adult attendants. Incorporating these cost dimensions is crucial for a better understanding of the total costs of vaccination from a broader societal perspective [19]. In our study, we intended to explore the distribution of costs and the cost drivers for a large-scale vaccination intervention. We assumed that the vaccine does not cause any adverse effects that would incur treatment costs, an assumption supported by evidence from previous trials [23,24]. There are some limitations to this study. Firstly, the vaccination campaign was conducted in a few selected areas within Dhaka. As such, the cost of vaccination may vary if the campaign were conducted in different geographic areas or during different seasons. Furthermore, foregone time due to vaccination could not be exactly valued for school-aged children, leading to imprecision regardless of the method used to value their time. There may also be some reporting bias in the financing data, which were extracted from a variety of source documents and from interviews with the respective authorities. Despite these limitations, the analysis demonstrates the evaluation of the societal cost of the cholera vaccination program using empirical data. The findings of this analysis are encouraging and also demonstrate a methodological approach for estimating costs which can be applied in rural Bangladesh as well as in routine public health practice.
Overall, the cost estimations found in this analysis are consistent with those determined in other studies of OCV delivery, with the crucial caveat that those studies feature OCV programs aimed at all age groups rather than exclusively children [19][20][21][22]. Various modeling-based studies indicated that a vaccination strategy targeting children appears more cost-effective than a strategy that includes vaccinating adults, owing to the higher incidence rates of endemic cholera in children than in adults [25][26][27][28]. However, none of those studies analyzed the cost of vaccination for children using empirical data such as that obtained in this study. Several studies indicated that low-cost vaccines are often the most cost-effective strategy for controlling infectious diseases, especially in resource-poor countries like Bangladesh [29,30]. Similar to other studies, we observed that vaccine procurement price was the largest cost driver (66 % of total cost) [21,31]. The second largest cost driver found in our analysis was staff salary (18 %). Although the campaign essentially took place at fixed locations, certain teams functioned as mobile teams to vaccinate persons who did not present for vaccination at the fixed sites. Staff salary was also found to be the second highest contributor in a study in Haiti, where it accounted for 14 % of total cost [32]. The relatively higher proportion of staff cost in the present study can be attributed to the relatively high staff salary level at icddr,b compared to that of government employees who typically conduct such mass immunization programs.
This study observed that the cost per single dose of OCV vaccination was US$ 2.86, while the vaccine delivery cost was US$ 0.91, which was relatively lower than in the vaccination campaign in Malawi, where the total economic cost per partially immunized person was US$ 5.43 [21]. A study in Haiti observed an estimated per-dose vaccination cost of US$ 2.90, including US$ 0.70 for vaccine delivery, using the Shanchol vaccine [32]. In a 2-dose OCV vaccination campaign with Dukoral, a more expensive vaccine, in an urban setting in Mozambique, the estimated delivery cost was US$ 2.09 per fully vaccinated person [33]. This is similar to our calculated delivery cost of US$ 1.95 per fully vaccinated person. Moreover, the delivery costs of OCV through a mass vaccination campaign may differ depending on the local context and geographic terrain, as well as on the availability of vaccine delivery infrastructure. For instance, the delivery cost of Shanchol was estimated at US$ 1.14 per fully vaccinated person in India and US$ 3.05 in South Sudan [19]. This cost difference could be partially attributed to the scale of vaccination and the nature of the health care setting where vaccination was conducted.
Regarding vaccination cost using the Shanchol vaccine, we found that the societal cost per fully vaccinated individual was estimated at US$ 6.11, which was higher than what was found in a previous study conducted in urban Bangladesh [20]. This differential is largely due to the significantly different study contexts and target populations of the vaccination campaigns: in the earlier campaign, all high-risk people irrespective of age were targeted and approximately 123,661 were fully vaccinated, almost twice as many vaccine recipients as in the current study, in which only children up to 14 years old were included [20]. If the total output (here, the number of vaccine recipients) is higher, the average cost will be lower in the short run. We also observed that the cost for a complete two-dose regimen was higher than the cost estimated by adding together the costs of two single-dose regimens (US$ 6.11 vs US$ 5.72 (2.86 × 2)). This disparity is due to the additional cost incurred from incomplete regimens (either only the 1st dose or only the 2nd dose).
Overall, the cost of full oral cholera vaccination estimated in this analysis was relatively similar to the costs reported from Shanchol OCV campaigns that target all age groups. The cost per fully vaccinated person of a two-dose OCV campaign in Haiti was US$ 5.8, excluding the household cost [32]. Similarly, an Ethiopian study estimated a cost of US$ 5.85 per fully vaccinated person [34]. In the current analysis, the estimated fixed cost was only 1.92 % of the total cost; as this fixed cost was not extensive, the cost per fully vaccinated person may vary with sample size and vaccine coverage rate.
This study has several limitations. The estimation was performed using the costs incurred during the vaccination program at the time, and we did not analyze the impact of inflation or of possible reductions in vaccine costs in the near future. We also did not analyze how the cost of vaccination would change if the vaccine coverage rate had varied or if the campaign had been conducted outside Dhaka or in other geographic locations. We did not include the cost of census-related activities, as the coverage was estimated from data already available in the OCV project. Furthermore, generalizing this costing exercise to other settings in sub-Saharan Africa and/or humanitarian settings, where OCV campaigns are very common, is limited.
Conclusion
In this analysis, we found that the total societal cost was relatively greater in an OCV campaign exclusive to children compared to ones reported elsewhere for all ages. While this higher cost may be unexpected, it should also be noted that vaccinating children with OCV may have greater impact on cholera than vaccinating adults. Multiple cost-effectiveness models have found that vaccinating children, rather than all age-groups, is more cost-effective in cholera endemic countries [26]. Vaccinating children with OCV is of special interest to policymakers, and greater research is needed to fully elucidate the empiric costs and gains from child-focused OCV strategies.
Data availability
Data will be made available on request.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Kocuria rosea Bacteremia in Chronic Kidney Disease Patient: A Rare Case Report
Kocuria sp. may cause bacteremia, peritonitis, brain abscesses, meningitis, endocarditis, and acute cholecystitis in immunocompromised individuals. Recent reports have identified Kocuria rosea in bacteremia associated with indwelling intravenous lines, continuous dialysis fluids, etc. We report a case of bacteremia caused by K. rosea, a gram-positive microorganism, in a 65-year-old female with end-stage renal disease on hemodialysis. After treatment with piperacillin and tazobactam, the patient's fever and infection resolved. This report presents a rare case of K. rosea bacteremia successfully treated with common antibiotics. Proper identification systems should be in place to determine the cause of bacteremia, and bacteremia cases involving rare organisms should not be ignored.
INTRODUCTION
Sepsis and bacteremia are severe conditions that are associated with high mortality. Bacteremia can be caused by many gram-positive as well as gram-negative organisms 1 . Bacteremia from Kocuria sp. is very rare, and only a limited number of case reports of infections by Kocuria sp. are available. We therefore performed a Medline search using the keywords "Kocuria rosea," "bacteremia," and "hemodialysis" at https://www.ncbi.nlm.nih.gov/pmc/?term=Kocuria+rosea+bacteremia+hemodialysis and obtained 18 results as of 23 Jul 2021. Of these, very few pertain to Kocuria rosea-related infections.
The genus Kocuria comprises aerobic gram-positive bacteria belonging to the family Micrococcaceae. These are found in the environment as saprophytes 2 . About 5 species of the genus Kocuria have been found to be rare pathogens in cases of catheter-related bacteremia, peritonitis, brain abscesses, meningitis, endocarditis, and acute cholecystitis in immunocompromised individuals 3 .
Here we report a patient with chronic renal disease and a bloodstream infection that, on culture, proved to be caused by Kocuria rosea.
Case Report
A 65-year-old female with end-stage chronic kidney disease, on hemodialysis for the past three years, was admitted with complaints of dyspnea and altered sensorium to the Department of Medicine, MMIMSR Hospital, Mullana. The patient was a known case of type II diabetes mellitus and hypertension. Clinical, radiological, and laboratory diagnostic procedures were performed.
Laboratory findings
Complete blood count showed anemia (Hb = 6.6 gm%) and TLC = 10,540/cumm; liver function tests and blood chemistry were within normal limits. ABG showed mild metabolic acidosis, and renal function tests were markedly deranged.
The chest radiograph showed mild pulmonary edema.
On the third day of her admission to the hospital, she developed a fever of 102 ˚F. RT-PCR for COVID-19 was negative (as the COVID-19 pandemic was ongoing, the test was done to rule out COVID as the cause of fever). Two sets of blood samples were drawn from two different peripheral veins and one urine sample was collected under aseptic precautions. The samples were examined for microbial growth using suitable media as per standard protocol. The urine culture was negative. However, both blood cultures showed growth of gram-positive cocci (Fig. 1), which on identification using the VITEK 2 system (bioMerieux Inc., France) proved to be Kocuria rosea. Antibiotic sensitivity testing using the VITEK 2 system reported the organism as sensitive to all the antibiotics tested (penicillins, amoxyclav, piperacillin+tazobactam, amikacin, ciprofloxacin, levofloxacin, erythromycin, clindamycin, linezolid, vancomycin, trimethoprim/sulfamethoxazole, etc.). The patient was empirically given injectable piperacillin+tazobactam (2.25 g every 8 h for 14 days) after the samples for culture were taken.
A repeat blood culture was taken after the first blood culture reports were obtained; it was negative, as the patient was already receiving antibiotics. The fever spikes started reducing on the third day of antibiotics and resolved entirely by the fifth day. The patient was discharged from the hospital in satisfactory condition after 14 days of intravenous piperacillin+tazobactam.
DISCUSSION
About 18 different species of the genus Kocuria are known. Kocuria sp. belong to the family Micrococcaceae and are gram-positive cocci 2 . They are present as saprophytes in the environment or as normal flora on the skin. However, some species can be opportunistic pathogens in humans, and rare infections by Kocuria sp. have been documented. These are generally of low virulence. There may be certain risk factors predisposing an individual to infections by Kocuria species, such as diabetes, metabolic disorders, neoplasms, and end-stage renal disease [3][4][5][6][7] . We could find only limited literature on K. rosea bacteremia. The low number of cases might be due to misidentification in laboratories as contaminants, namely as coagulase-negative Staphylococci (CoNS). Antibiotic therapy for Kocuria is based on case reports, as no clinical guidelines are available for treating such infections. From the available case reports, we found that most Kocuria isolates were susceptible to commonly used antibiotics such as penicillins, tetracycline, erythromycin, vancomycin, streptomycin, gentamicin, ceftriaxone, and chloramphenicol 5,8 .
In conclusion, although Kocuria is an uncommon cause of bacteremia among immunocompromised patients, unusual pathogens isolated from hospitalized patients should not be dismissed as contaminants/commensals. Such ignored strains can be vectors of drug-resistance transfer, or resistant strains themselves, and can cause treatment failure with the usual antibiotics. Therefore, accurate identification and susceptibility testing with automated identification systems and molecular methods are imperative for diagnosing and treating unusual pathogens.
Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems
Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
Introduction
Distribution networks deliver electrical energy from transmission systems to consumers and are an important and integral part of all power systems. Once an electrical fault occurs in any distribution feeder, immediate fault classification plays an important role in post-fault analysis and power supply restoration. The accuracy of the fault type information assists the fault diagnosis system not only in locating electrical faults promptly but also in ensuring the power quality and reliability of the system [1,2].
A variety of approaches have been developed to build an effective fault classifier for electrical distribution feeders. As the amount of power delivered by a distribution system significantly increases, it is essential to focus on fault classification schemes. Studies of fault classification in distribution feeders can be divided into three separate categories: (1) impedance-based methods [3,4], (2) travelling-wave-based methods [5,6], and (3) artificial-intelligence-based methods [7,8]. The most common method for fault classification in power systems is known as time-domain reflectometry (TDR) [9][10][11].
TDR is rather simple to implement; however, it is not a perfect fault-location method since any single pulse stimulus injected into the electrical line is quickly attenuated along that line, causing fault location and classification to become inaccurate. To overcome this problem, an improved TDR method using incident pseudorandom binary sequence (PRBS) excitation is proposed to locate such faults in [12]; however, it should be noted that it is only applied to high-power transmission lines. Actually, it is quite difficult to apply the TDR method to find faults in distribution feeders because of the various junctions and ends of the branched network involved. As a result, various reflected responses may occur in the reflectometry trace [13]. Therefore, an intelligent algorithm is required to extract fault location information on a multiple-branched network from the reflectometry trace provided. SVM has been used successfully to resolve classification issues for a wide range of applications because of its strongly regularized characteristic and rapid training speed [14][15][16].
To build an SVM classifier, feature subset selection plays an important role in detecting relevant variables in classification spaces. Principal component analysis (PCA) [17] and multidimensional scaling (MDS) [18] are two traditional methods applied to remove redundant variables in the original feature vectors. The authors in [19] proposed a Hadoop scheme to extract features in parallel, using hundreds of mappers. In a recent paper [20], Ma and Niu used the firework algorithm to select input features by removing redundant influence in order to improve the icing forecasting of high voltage transmission lines.
In addition to feature subset selection, the optimal set of SVM parameters also plays an important role in the distribution of samples in a given search space. Vapnik showed that the penalty parameter and kernel function parameters such as gamma for the radial basis function (RBF) significantly affect the performance of SVM [21]. Various approaches have been proposed to select these two parameters, but there is no general consensus on their settings [22]. The grid search method (GSM) determines optimal parameters by attempting different values and selecting those values possessing the least amount of testing error [23]. Because of the computational complexity involved with GSM, the genetic algorithm (GA) has been developed to improve classification accuracy and reduce training time by using a minimal number of features [24]. However, it takes significant amounts of calculation time due to the complex operational process, including inheritance, selection, recombination, and mutation. To overcome this problem, Kennedy and Eberhart proposed a population-based search technique known as particle swarm optimization (PSO) [25]. The primary advantage of the PSO based encoding technique is its capacity to reduce the chance of becoming trapped in local optima and to increase both classification accuracy and training speed.
In this paper, a novel method based upon PSO techniques is developed to simultaneously optimize input features and SVM parameters in order to classify the fault types found in the distribution network. These fault types can be divided into ten classes, including single phase-to-ground faults (AG, BG, and CG), line-to-line faults (AB, AC, and BC), double line-to-ground faults (ABG, ACG, and BCG), and three-phase short-circuit faults (ABC). Further, this PSO-SVM classifier uses a dataset obtained from TDR analysis with PRBS excitation. Not only is the proposed PSO based encoding technique easy to use, but it also helps to significantly increase the success rate of the SVM classifier.
The remainder of this paper is constructed as follows. In Section 2, the theory of the proposed method is discussed, including TDR, SVM, and PSO. Section 3 presents the modeling of a typical two-branched distribution feeder. The developed PSO based SVM fault diagnosis approach is given in Section 4. In Section 5, experimental simulation results and discussions are presented. Finally, a conclusion is presented in Section 6.
TDR is based on a single pulse being injected into the given line or cable to be examined. Afterwards, some of the pulse energy is reflected back to the source whenever it reaches the point of any discontinuity, such as an electrical fault, tee joint, or line terminal. Since the propagation velocity is assumed to be constant, the fault distance can be measured based on the expected pulse transit time. Hence, the reflectometry trace will not only display the desired information on the fault type, but also determine the fault location. Assume a distribution line is modeled by a lumped-parameter equivalent circuit as shown in Figure 1, with a distributed series inductance L, resistance R, capacitance C, and conductance G.
A voltage introduced at the generator will require a certain amount of time to propagate along the line, as represented by the telegrapher's equations
$$-\frac{\partial V(x,t)}{\partial x} = R\,I(x,t) + L\,\frac{\partial I(x,t)}{\partial t}, \qquad -\frac{\partial I(x,t)}{\partial x} = G\,V(x,t) + C\,\frac{\partial V(x,t)}{\partial t}, \quad (1)$$
where $V(x,t)$ and $I(x,t)$ are the travelling voltage and current waves, respectively. The amplitude of the incident pulse will be attenuated along the line, and the phase of the voltage travelling along the line will be distorted as a result of varying frequency [26]. The attenuation and phase shift are determined by the propagation coefficient
$$\gamma = \sqrt{(R + j\omega L)(G + j\omega C)} = \alpha + j\beta, \quad (2)$$
where $\alpha$ and $\beta$ are the attenuation coefficient and the phase change coefficient, respectively. The velocity at which the voltage moves down the line is
$$v = \frac{\omega}{\beta}. \quad (3)$$
From (1), using the Laplace transform and the differential equations, we can obtain
$$V(x,t) = V^{+}\!\left(t - \frac{x}{v}\right) + V^{-}\!\left(t + \frac{x}{v}\right), \qquad I(x,t) = I^{+}\!\left(t - \frac{x}{v}\right) + I^{-}\!\left(t + \frac{x}{v}\right), \quad (4)$$
where $V^{+}(t - x/v)$ and $I^{+}(t - x/v)$ are the forward travelling voltage and current waves, respectively; $V^{-}(t + x/v)$ and $I^{-}(t + x/v)$ are the backward travelling voltage and current waves, respectively. Equating the coefficients of the forward and backward components, (4) can be rewritten as
$$Z_{0} = \frac{V^{+}}{I^{+}} = -\frac{V^{-}}{I^{-}} = \sqrt{\frac{R + j\omega L}{G + j\omega C}}, \quad (5)$$
where $Z_{0}$ is called the characteristic impedance. When the line is terminated with any load whose impedance value is other than the characteristic impedance, a reflected wave will occur at the load and then propagate back toward the source. The voltage at the termination in this case is given by
$$V_{L} = V^{+} + V^{-} = Z_{L}\, I_{L}, \quad (6)$$
where $Z_{L}$ is called the load impedance. The reflected wave is related to the incident wave by
$$\Gamma = \frac{Z_{L} - Z_{0}}{Z_{L} + Z_{0}}, \qquad V^{-}(t) = \Gamma\, V^{+}(t - 2\tau), \quad (7)$$
where $\Gamma$ is called the receiving-end voltage reflection coefficient and $\tau$ is called the transit time. TDR is quite simple to implement, but it is not a perfect technique since the single pulse excitation is quickly attenuated along the line. In addition, the pulse width is one of the factors that affect the accuracy of the reflectometry method. The TDR method using incident pseudorandom binary sequence (PRBS) excitation can solve these problems by using the cross-correlation (CCR) function between the reflected wave and the incident wave,
$$R_{xy}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} x(t)\, y(t + \tau)\, dt, \quad (8)$$
for fault diagnosis in distribution feeders, where $R_{xy}$ is the cross-correlation (CCR) function between the reflected wave and the incident wave, $x$ is the forward signal, and $y$ is the feedback signal.
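As an illustration of this correlation step (not taken from the paper's implementation), the short Python sketch below cross-correlates a PRBS-like excitation with a simulated, delayed echo and reads off the round-trip delay from the CCR peak; the PRBS length, delay, reflection coefficient, and noise level are all assumed placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 127-bit PRBS-like excitation (+/-1 levels), repeated over time.
prbs = rng.choice([-1.0, 1.0], size=127)
incident = np.tile(prbs, 10)

# Simulated feedback: delayed, attenuated echo of the incident wave plus noise.
delay_samples = 40          # assumed round-trip delay to the fault
reflection_coeff = -0.3     # assumed receiving-end reflection coefficient
reflected = reflection_coeff * np.roll(incident, delay_samples)
reflected += 0.05 * rng.standard_normal(incident.size)

# Cross-correlation (CCR) between forward signal x and feedback signal y;
# the lag of the dominant peak estimates the round-trip transit time.
ccr = np.correlate(reflected, incident, mode="full")
lags = np.arange(-incident.size + 1, incident.size)
peak_lag = lags[np.argmax(np.abs(ccr))]
print("estimated delay (samples):", peak_lag)   # ~40 for this toy example
```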
As previously mentioned, a variety of different components exist along electrical distribution lines, such as transformers, capacitors, tap changers, phase splitters, and so forth, so it is not easy to extract fault locations from the various reflections observed in the reflectometry trace. In this study, a multilayer SVM classifier is proposed as a supporting technique for the TDR method to provide fault diagnosis in multibranch distribution networks, including single phase-to-ground faults (AG, BG, and CG), line-to-line faults (AB, AC, and BC), double line-to-ground faults (ABG, ACG, and BCG), and three-phase faults (ABC).
Support Vector Machine.
A support vector machine (SVM) was first proposed by Vapnik in 1995, and it has become one of the most effective techniques for data classification. It has a solid theoretical foundation based on a combination of the structural risk minimization principle and statistical learning theory. The main advantages of SVM are its global optimization and high generalization ability. Further, it overcomes overfitting problems and provides sparse solutions in comparison to existing methods such as the artificial neural network (ANN) and the refined genetic algorithm (RGA) in fault classification.
In a standard linear classification problem, for example, one must separate a set of training data $(x_i, y_i)$, $i = 1, 2, \ldots, N$, where $N$ is the number of given observations, $x_i \in \mathbb{R}^n$ are feature vectors, and $y_i \in \{-1, +1\}$ are label vectors. A binary classification problem can be posed as an optimization problem in the following way:
$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^{2} + C \sum_{i=1}^{N} \xi_i$$
subject to
$$y_i\left(w^{T}\phi(x_i) + b\right) \ge 1 - \xi_i, \qquad \xi_i \ge 0,$$
where $C$ is the regularization parameter and $\xi_i$ are the penalizing relaxation (slack) variables. It is to be noted that the nonlinear classifier may be denoted in the input space as
$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{N} \alpha_i y_i K(x_i, x) + b^{*}\right),$$
where $f(x)$ is the decision function, the bias $b^{*}$ is calculated by the Karush-Kuhn-Tucker (KKT) conditions, and $K(x_i, x)$ is the kernel function that produces the inner product for this feature space. In this paper, the following radial basis function (RBF) is used:
$$K(x_i, x_j) = \exp\left(-\gamma \left\|x_i - x_j\right\|^{2}\right),$$
where $\gamma$ is the kernel parameter.
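For readers who wish to experiment with this formulation, the following sketch trains a soft-margin RBF-kernel SVM with scikit-learn on synthetic two-class data; the dataset, feature count, and parameter values are placeholders and are not the TDR fault data described later.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in for a 12-feature, two-class problem (not the TDR dataset).
X = rng.standard_normal((600, 12))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale features, then fit the soft-margin SVM; C is the regularization
# parameter and gamma the RBF kernel parameter from the formulation above.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=10.0, gamma=0.1)
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```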
To obtain optimum performance, some SVM parameters need to be selected properly, including the regularization parameter $C$ and the kernel parameter $\gamma$. In this work, the PSO technique is applied to optimize these two parameters.
Particle Swarm Optimization.
Particle swarm optimization (PSO) is inspired by the social and cooperative behavior displayed by various species to fill their needs in the search space. The algorithm is guided by the personal experience ($pbest$), the overall experience ($gbest$), and the present movement of the particles to decide their next positions in the search space. Figure 2 illustrates the PSO search mechanism of the $i$th particle at the $k$th iteration.
Further, the experiences are accelerated by two factors $c_1$ and $c_2$ and two random numbers $r_1$ and $r_2$ generated in the range [0, 1], whereas the present movement is multiplied by an inertia factor $w$. Mathematically, the updated velocity and position of each particle in the search space can be expressed using the two equations discussed below.
The initial population (swarm) of size $N$ and dimension $D$ is denoted as $X^{k} = [X_1^{k}, X_2^{k}, \ldots, X_N^{k}]$, with $X_i^{k} = [x_{i,1}^{k}, x_{i,2}^{k}, \ldots, x_{i,D}^{k}]$, and each particle is updated as
$$v_{i,d}^{k+1} = w\, v_{i,d}^{k} + c_1 r_1 \left(pbest_{i,d}^{k} - x_{i,d}^{k}\right) + c_2 r_2 \left(gbest_{d}^{k} - x_{i,d}^{k}\right),$$
$$x_{i,d}^{k+1} = x_{i,d}^{k} + v_{i,d}^{k+1},$$
where $pbest_{i,d}$ represents the personal best $d$th component of the $i$th individual, whereas $gbest_{d}$ represents the $d$th component of the best individual of the population up to iteration $k$. Figure 2 shows the search mechanism of PSO in a multidimensional search space. The initial $pbest$ of each particle is its initial position, whereas the initial $gbest$ is the best particle position among the randomly initialized population. The $pbest$ and $gbest$ of each particle are updated as follows.
At iteration $k$,
$$pbest_i^{k} = \begin{cases} pbest_i^{k-1}, & \text{if } f\!\left(X_i^{k}\right) \ge f\!\left(pbest_i^{k-1}\right), \\ X_i^{k}, & \text{if } f\!\left(X_i^{k}\right) < f\!\left(pbest_i^{k-1}\right), \end{cases} \qquad gbest^{k} = \underset{pbest_i^{k}}{\arg\min}\, f\!\left(pbest_i^{k}\right),$$
where $f(\cdot)$ is the objective function subject to minimization. The updating procedure should be repeated until a stop condition is reached, such as a prespecified number of iterations.
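The update rules above translate directly into a short loop. The sketch below is a minimal, generic PSO implementation applied to a toy objective; the swarm size, coefficients, bounds, and objective are illustrative choices rather than the settings used in this paper.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, w=0.5, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(2)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)  # personal best values
    gbest = pbest[np.argmin(pbest_val)].copy()        # global best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy usage: minimize the sphere function.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_x, best_f)
```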
System Modeling
An equivalent model has to be constructed using Simulink software and the MATLAB Toolbox to simulate the typical two-branched distribution feeder shown in Figure 3, in which dots represent the distribution transformers and their loads.
Two distribution transformers in the sample system are used to reduce the voltage on the distribution line to the level of the customers that are distributed along a feeder. Their parameters and connection phases are shown in Table 1 [31]. It is noted that these distribution transformers are operated in a full-load condition with a 0.8 lagging power factor; as a result, the sample distribution system is operated under unbalanced conditions. The main feeder and laterals are constructed by means of overhead lines whose positive-sequence impedance is 0.131 + j0.364 Ω/km [31].
Developed PSO Based SVM Fault Diagnosis Approach
Since the TDR technique does not easily diagnose faults in distribution networks, it needs to be supported by other intelligent techniques in order to obtain the best results. This paper proposes a PSO based SVM classifier to improve the performance of the TDR method for fault classification in electrical distribution feeders. The overall structure of the SVM short-circuit classifier is shown in Figure 4, in which PSO is performed to optimize the feature subset and the SVM parameters. The data acquisition used for data preprocessing is described first.
Data Acquisition.
To obtain a suitable dataset for the classification process, the PRBS disturbance is injected directly into the secondary circuit of a 200/5 A current transformer (CT) placed at the beginning of the line under test. The primary circuit of the CT is connected to the main feeder; thus the amplified PRBS is propagated along the line to diagnose any faults which may occur. Once a fault occurs in the distribution feeder, it produces a reflected signal that travels between the fault location and the substation. These reflected responses are then cross-correlated with the incident impulse using (8) in order to reduce the impact of noise as well as to surmount amplitude attenuation. It is worth noting that, for each of the fault types specified, the magnitudes of the feedback waves are different at the time of the short circuit; as a result, the peaks of the CCR are not the same. Hence, the reflected responses and the CCR between the reflected wave and the incident wave are used as input feature vectors for the training phase. The total number of features is 12, and they comprise a feature vector $V = [v_1, v_2, \ldots, v_{12}]$, in which $v_1$-$v_6$ are the reflected voltages and currents obtained at the substation and $v_7$-$v_{12}$ are the peaks of the CCR between the reflected and the incident waves.
Feature Extraction.
For utilization of the reflectometry method, various echo responses are collected, in which some irrelevant data may confuse the SVM classifier and subsequently increase the training time. Feature extraction is an effective method to select appropriate input features in order to improve the speed of training as well as to ensure the success rate of classification. For optimum feature selection in this work, PSO is employed to improve the performance of the SVM classifier. To select the optimum features of the given dataset, a binary string is optimized using PSO, where each bit represents a given feature of the dataset. In the binary string, a "0" represents an ignored feature, whereas a "1" represents a selected feature of the dataset. The optimum features are those features of the given dataset which correspond to the bits of the optimized binary string equal to "1." For this, a set of predefined SVM parameters is used while the features of the given dataset are selected using PSO. At the end of the feature selection stage, the selected strings provide the information regarding the features needed for optimizing the SVM parameters.
Optimum SVM Parameters.
The performance of the SVM is sensitive to the kernel function parameter $\gamma$ and the regularization parameter $C$, so these parameters must be carefully selected to increase the classification accuracy. In this paper, the PSO technique is used to select the parameters of the SVM classifier. Performance is measured according to the classification accuracy on unseen testing data. In the learning stage, the PSO based encoding SVM model is trained based on structural risk minimization to minimize the training error. While the training error improves, the penalty parameter $C$ and the kernel function parameter $\gamma$ are adjusted by means of PSO. The adjusted parameters with minimal error are reported as the most suitable parameters. As a result, the optimal parameters ($C$ and $\gamma$) are obtained.
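A minimal sketch of how one PSO particle can jointly encode the feature mask and the SVM parameters is given below: the particle is decoded into a binary feature subset plus (C, γ) and scored by cross-validated accuracy. The encoding (thresholding at 0.5, log-scaled parameter ranges) and the placeholder data are assumptions made for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(particle, X, y, n_features=12):
    """Higher is better: decode one particle and return CV accuracy."""
    # First n_features entries: continuous values thresholded at 0.5 -> feature mask.
    mask = particle[:n_features] > 0.5
    if not mask.any():                        # reject empty feature subsets
        return 0.0
    # Last two entries: log-scaled C and gamma (assumed search ranges).
    C = 10.0 ** particle[n_features]          # e.g. exponent in [-2, 3]
    gamma = 10.0 ** particle[n_features + 1]  # e.g. exponent in [-4, 1]
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# Example evaluation on random placeholder data.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 12))
y = rng.integers(0, 2, 200)
p = np.concatenate([rng.random(12), [1.0, -1.0]])   # one candidate particle
print("CV accuracy:", fitness(p, X, y))
```

In a full PSO-SVM run, this fitness function would simply replace the toy objective in the PSO loop sketched earlier.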
Once the optimized parameters of the SVM are obtained, then it is used for the retraining of the SVM model. After the training phase, the SVM classifier is ready to identify new samples in the testing phase. The testing set is also chosen by means of the above feature selection from the original dataset obtained by the TDR trace. Then, testing patterns are inputted to the trained multilayer SVM classifier which can identify all the 10 types of faults, including single-phase-to-ground faults (AG, BG, and CG), line-to-line faults (AB, AC, and BC), double-line-to-ground faults (ABG, ACG, and BCG), and three-phase faults (ABC).
The detailed experimental procedure for feature extraction and SVM parameter selection using the PSO algorithm can be expressed using the following steps: (1) Read the complete data and set the $w$, $c_1$, and $c_2$ parameters.
(2) Initialize the positions X and velocities V of each particle of the population. (3) Initialize sets of SVM parameters within their ranges as particle positions and velocities. (4) Form the SVM using the training dataset and the initialized positions of each particle.
(13) If the maximum number of iterations has not been reached, then set $k = k + 1$ and go to step (6); else go to step (14). (14) Optimum solution obtained: print the results of the optimum generation. (15) Retrain the SVM with the optimum features and parameters; then identify unknown samples in the testing dataset.
The experiment procedure can be visualized in Figure 5.
Table 2: Dataset of ten fault types located at distances of 3 km and 4 km from the substation. $V_a$, $V_b$, and $V_c$ ($I_a$, $I_b$, and $I_c$) are the magnitudes of the reflected voltages (currents), respectively; cc-$V_a$, cc-$V_b$, cc-$V_c$, cc-$I_a$, cc-$I_b$, and cc-$I_c$ are the CCR between the reflected signal and the incident signal.
Test Results and Discussion
In this paper, the fault types are considered by using a 127-bit PRBS stimulus with a frequency of 1 MHz and a propagation velocity of 198,000 km/s along the sample system given in Figure 3. The dataset used in this study was obtained at the substation end by TDR analysis, with the number of features being 12, in which six features are the magnitudes of the reflected signals and the six remaining features are extracted from the peaks of the CCR between the feedback wave and the forward wave. This dataset comprises 5700 samples generated by creating each type of fault at different locations on the two laterals with varying fault impedance values.
Note that training and test sets are randomly divided from the original dataset, in which 4500 and 1200 are used for training and testing set, respectively. Table 2 only gives a few portions of the dataset for purposes of brevity, which were created by a simulation of the ten types of short-circuit fault on the first lateral, located at distances of 3 km and 4 km from the substation.
In this paper, the PSO technique is used to select the features and parameters of the SVM classifier. Preliminary experiments led this study to set the population size to 10; the inertia weight was taken between 0.1 and 0.5 (chosen randomly at each iteration); and the acceleration factors ($c_1$ and $c_2$) were both set equal to 2, with the maximum number of iterations set to 1000. Table 3 gives the classification accuracy of the SVM algorithm using the dataset both with and without PSO optimization. The optimum values of $C$ and $\gamma$ for the SVM classifier are 181.0193 and 1.1212 without PSO and 15.0381 and 0.0334 with PSO. From this table, it is observed that the classification accuracy when using the entire feature set is 93%, whereas the classification accuracy when using the PSO based encoding technique is 97.15%. This demonstrates the efficiency of the proposed method in which PSO optimization is applied. The features are auto-selected from the 12 corresponding inputs, and the testing success rate is improved significantly; the selected features number 8, namely features 1-7 and 9. Furthermore, Table 3 provides the computational times for training the SVM classifier. The overall simulation time taken by the SVM classifier without PSO is 134.8 seconds, whereas with PSO it is 83.54 seconds. It can be concluded that the PSO technique takes a relatively shorter computational time for training. The convergence characteristic of the proposed PSO is shown in Figure 6. From this figure, it can be observed that the MSE no longer decreases beyond 15 iterations; thus the optimized SVM parameters can be obtained before the total training time (83.54 s) has elapsed.
Conclusions
In this paper, a multilayer support vector machine (SVM) classifier based on parameter optimization and feature selection has been developed to classify ten types of faults in radial distribution feeders. Particle swarm optimization (PSO) has been used as an optimizer to improve the performance of the SVM classifier by selecting an appropriate feature subset and kernel parameters. Further, time-domain reflectometry (TDR) with a pseudorandom binary sequence (PRBS) stimulus has been utilized to generate a fault dataset. In the proposed technique, not only does using PRBS injection overcome the stimulus distortion problem, but it also surmounts the impact of noise to provide a reliable dataset for the SVM classifier. The proposed PSO based SVM classifier has been successfully applied to identify all ten types of short-circuit faults in the radial distribution network studied. The high accuracy achieved in classifying fault types (over 97%) demonstrates its effectiveness compared with existing fault identifiers.
Representation by degenerate Frobenius-Euler polynomials
The aim of this paper is to represent any polynomial in terms of the degenerate Frobenius-Euler polynomials and more generally of the higher-order degenerate Frobenius-Euler polynomials. We derive explicit formulas with the help of umbral calculus and illustrate our results with some examples.
Introduction and preliminaries
The aim of this paper is to derive formulas expressing any polynomial in terms of the degenerate Frobenius-Euler polynomials (see (1.9)) with the help of umbral calculus (see Theorem 3.1) and to illustrate our results with some examples (see Section 5). This can be generalized to the higher-order degenerate Frobenius-Euler polynomials (see (1.10)). Indeed, we deduce formulas for representing any polynomial in terms of the higher-order degenerate Frobenius-Euler polynomials again by using umbral calculus (see Theorem 4.1). As corollaries to these theorems, we obtain formulas for expressing any polynomial by the degenerate Euler (see Theorem 3.2) and the higher-order degenerate Euler polynomials (see Theorem 4.2). The contribution of this paper is the derivation of such formulas, which have potential applications to finding many interesting polynomial identities, as illustrated in Section 5.
The following identity is a slight modification of the formulas obtained in [12], which express any polynomial in terms of the Bernoulli polynomials $B_n(x)$ defined by $\frac{t}{e^{t}-1}e^{xt} = \sum_{n=0}^{\infty} B_n(x)\frac{t^n}{n!}$. In (1.1), $n \ge 2$ and $H_n = 1 + \frac{1}{2} + \cdots + \frac{1}{n}$ denotes the harmonic number. Letting $x = 0$ and $x = \frac{1}{2}$ in (1.1) respectively give a slight variant of Miki's identity and the Faber-Pandharipande-Zagier (FPZ) identity.
In 1998, Faber and Pandharipande found that the FPZ identity must be valid for certain conjectural relations between Hodge integrals in Gromov-Witten theory. In the appendix of [5], Zagier gave a proof of the FPZ identity. Another proof of the FPZ identity is given in Dunne-Schubert [4] by using the asymptotic expansion of some special polynomials coming from quantum field theory computations. As to Miki's identity, Miki [17] uses a formula for the Fermat quotient $\frac{a^{p}-a}{p}$ modulo $p^{2}$, Shiratani-Yokoyama [21] utilizes $p$-adic analysis, and Gessel [6] exploits two different expressions for the Stirling numbers of the second kind $S_2(n,k)$. Here it should be stressed that these proofs of Miki's and FPZ identities are quite involved, while our proofs of Miki's and Faber-Pandharipande-Zagier identities follow from the simple formulas in [12], involving only derivatives and integrals of the given polynomials.
Many interesting identities have been derived by using these formulas and similar ones for Euler and Frobenius-Euler polynomials (see [8][9][10][11][12][13][15]). Some convolution identities for Frobenius-Euler polynomials are found in [7,19] by exploiting generating function methods and summation transform techniques. The list in the References is far from exhaustive. However, the interested reader can easily find more related papers in the literature. Also, we should mention here that there are other ways of obtaining the same result as the one in (1.1). One of them is to use the Fourier series expansion of the function obtained by extending, by periodicity of period 1, the polynomial function restricted to the interval [0, 1) (see [16]).
The outline of this paper is as follows. In Section 1, we recall some necessary facts that are needed throughout this paper. In Section 2, we go over umbral calculus briefly. In Section 3, we derive formulas expressing any polynomial in terms of the degenerate Frobenius-Euler polynomials. In Section 4, we derive formulas representing any polynomial in terms of the higher-order degenerate Frobenius-Euler polynomials. In Section 5, we illustrate our results with some examples. Finally, we conclude our paper in Section 6.
The Euler polynomials $E_n(x)$ are defined by $\frac{2}{e^{t}+1}e^{xt} = \sum_{n=0}^{\infty} E_n(x)\frac{t^n}{n!}$. When $x = 0$, $E_n = E_n(0)$ are called the Euler numbers. We observe that $E_n(x) = \sum_{j=0}^{n}\binom{n}{j}E_{n-j}x^{j}$, $\frac{d}{dx}E_n(x) = nE_{n-1}(x)$, and $E_n(x+1) + E_n(x) = 2x^{n}$. The first few terms of $E_n$ are $E_0 = 1$, $E_1 = -\frac{1}{2}$, $E_2 = 0$, $E_3 = \frac{1}{4}$. More generally, for any nonnegative integer $r$, the Euler polynomials $E_n^{(r)}(x)$ of order $r$ are given by $\left(\frac{2}{e^{t}+1}\right)^{r}e^{xt} = \sum_{n=0}^{\infty} E_n^{(r)}(x)\frac{t^n}{n!}$. The Frobenius-Euler polynomials $H_n(x|u)$ ($u \neq 1$) are defined by $\frac{1-u}{e^{t}-u}e^{xt} = \sum_{n=0}^{\infty} H_n(x|u)\frac{t^n}{n!}$. When $x = 0$, $H_n(u) = H_n(0|u)$ are called the Frobenius-Euler numbers. We observe that $H_n(x|u) = \sum_{j=0}^{n}\binom{n}{j}H_{n-j}(u)x^{j}$, $\frac{d}{dx}H_n(x|u) = nH_{n-1}(x|u)$, and $H_n(x+1|u) - uH_n(x|u) = (1-u)x^{n}$. The first few terms of $H_n(u)$ are $H_0(u) = 1$ and $H_1(u) = \frac{1}{u-1}$. More generally, for any nonnegative integer $r$, the Frobenius-Euler polynomials $H_n^{(r)}(x|u)$ ($u \neq 1$) of order $r$ are given by $\left(\frac{1-u}{e^{t}-u}\right)^{r}e^{xt} = \sum_{n=0}^{\infty} H_n^{(r)}(x|u)\frac{t^n}{n!}$. For any nonzero real number $\lambda$, the degenerate exponentials are given by $e_{\lambda}^{x}(t) = (1+\lambda t)^{x/\lambda} = \sum_{n=0}^{\infty}(x)_{n,\lambda}\frac{t^n}{n!}$ and $e_{\lambda}(t) = e_{\lambda}^{1}(t) = \sum_{n=0}^{\infty}(1)_{n,\lambda}\frac{t^n}{n!}$, where $(x)_{0,\lambda} = 1$ and $(x)_{n,\lambda} = x(x-\lambda)(x-2\lambda)\cdots(x-(n-1)\lambda)$ for $n \ge 1$.
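The first few of these polynomials can be checked by expanding the generating functions symbolically. The SymPy sketch below does this for the Euler and Frobenius-Euler cases using the standard definitions recalled above; the truncation order is an arbitrary choice.

```python
from sympy import symbols, exp, series, factorial

t, x, u = symbols('t x u')
N = 5  # truncation order (arbitrary)

def coeffs(gen, n_max):
    """Return p_0(x), ..., p_{n_max-1}(x) from sum_n p_n(x) t^n / n!."""
    s = series(gen, t, 0, n_max).removeO()
    return [factorial(n) * s.coeff(t, n) for n in range(n_max)]

euler = coeffs(2 / (exp(t) + 1) * exp(x * t), N)
frobenius = coeffs((1 - u) / (exp(t) - u) * exp(x * t), N)

print(euler[1])                    # x - 1/2
print(frobenius[1].simplify())     # x - 1/(1 - u), i.e. x + 1/(u - 1)
```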
Carlitz [1] introduced a degenerate version of the Euler polynomials $E_n(x)$, called the degenerate Euler polynomials and denoted by $E_{n,\lambda}(x)$, which are given by $\frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t) = \sum_{n=0}^{\infty} E_{n,\lambda}(x)\frac{t^n}{n!}$. For $x = 0$, $E_{n,\lambda} = E_{n,\lambda}(0)$ are called the degenerate Euler numbers.
More generally, for any nonnegative integer $r$, the degenerate Frobenius-Euler polynomials $h_{n,\lambda}^{(r)}(x|u)$ ($u \neq 1$) of order $r$ are given by (see [14]) $\left(\frac{1-u}{e_{\lambda}(t)-u}\right)^{r} e_{\lambda}^{x}(t) = \sum_{n=0}^{\infty} h_{n,\lambda}^{(r)}(x|u)\frac{t^n}{n!}$, and $h_{n,\lambda}^{(r)}(u) = h_{n,\lambda}^{(r)}(0|u)$ are called the degenerate Frobenius-Euler numbers of order $r$. Obviously, $h_{n,\lambda}^{(r)}(x|u)$ tends to $H_n^{(r)}(x|u)$, as $\lambda$ tends to 0. We recall some notations and facts about forward differences. Let $f$ be any complex-valued function of the real variable $x$. Then, for any real number $a$, the forward difference $\Delta_a$ is given by $\Delta_a f(x) = f(x+a) - f(x)$. If $a = 1$, then we let $\Delta = \Delta_1$. In general, the $n$th order forward differences are given by $\Delta_a^{n} f(x) = \sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k} f(x+ka)$; for $a = 1$, we have $\Delta^{n} f(x) = \sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k} f(x+k)$. Finally, we recall that the Stirling numbers of the second kind $S_2(n,k)$ are given by $\frac{1}{k!}\left(e^{t}-1\right)^{k} = \sum_{n=k}^{\infty} S_2(n,k)\frac{t^n}{n!}$.
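One consequence of the forward-difference formulas above is the classical identity $S_2(n,k) = \frac{1}{k!}\Delta^{k} x^{n}\big|_{x=0}$. The small SymPy sketch below verifies this identity against SymPy's built-in Stirling numbers for a few values of n and k (the chosen range is arbitrary).

```python
from sympy import binomial, factorial
from sympy.functions.combinatorial.numbers import stirling

def S2_via_forward_difference(n, k):
    # (1/k!) * Delta^k x^n evaluated at x = 0, using the formula recalled above.
    return sum((-1) ** (k - j) * binomial(k, j) * j ** n for j in range(k + 1)) / factorial(k)

for n in range(6):
    for k in range(n + 1):
        assert S2_via_forward_difference(n, k) == stirling(n, k)
print("forward-difference formula matches sympy's Stirling numbers for n <= 5")
```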
Review of umbral calculus
Here we will briefly go over very basic facts about umbral calculus. For more details on this, we recommend the reader to refer to [3,20,22]. Let C be the field of complex numbers. Then F denotes the algebra of formal power series in t over C, given by $F = \left\{ f(t) = \sum_{k=0}^{\infty} a_k \frac{t^k}{k!} \;\middle|\; a_k \in C \right\}$, and P = C[x] indicates the algebra of polynomials in x with coefficients in C.
The set of all linear functionals on P is a vector space as usual and denoted by P * . Let L|p(x) denote the action of the linear functional L on the polynomial p(x).
For $f(t) = \sum_{k=0}^{\infty} a_k \frac{t^k}{k!} \in F$, we define the linear functional on P by $\langle f(t) \mid x^n \rangle = a_n$ (2.1). From (2.1), we note that $\langle t^k \mid x^n \rangle = n!\,\delta_{n,k}$, where $\delta_{n,k}$ is the Kronecker symbol.
A remarkable example is the evaluation functional $\langle e^{yt} \mid p(x) \rangle = p(y)$. For each linear functional L on P, let $f_L(t) = \sum_{k=0}^{\infty} \langle L \mid x^k \rangle \frac{t^k}{k!}$ (2.3). Then, by (2.1) and (2.3), we get $\langle f_L(t) \mid x^n \rangle = \langle L \mid x^n \rangle$. That is, $f_L(t) = L$. Additionally, the map $L \longrightarrow f_L(t)$ is a vector space isomorphism from P* onto F.
Henceforth, F denotes both the algebra of formal power series in t and the vector space of all linear functionals on P. F is called the umbral algebra, and umbral calculus is the study of the umbral algebra. For each nonnegative integer k, the differential operator $t^k$ on P is defined by $t^k x^n = \frac{n!}{(n-k)!}\,x^{n-k}$ if $k \le n$, and $t^k x^n = 0$ if $k > n$ (2.4). Extending (2.4) linearly, any power series $f(t) = \sum_{k=0}^{\infty} a_k \frac{t^k}{k!}$ gives the differential operator on P defined by $f(t)\,x^n = \sum_{k=0}^{n} \binom{n}{k} a_k\, x^{n-k}$. It should be observed that, for any formal power series f(t) and any polynomial p(x), we have $\langle f(t) \mid p(x) \rangle = \left[ f(t)\,p(x) \right]_{x=0}$ (2.7). Here we note that an element f(t) of F is a formal power series, a linear functional, and a differential operator. A notable example of a differential operator is $e^{yt} p(x) = p(x+y)$. The order $o(f(t))$ of a power series $f(t)\,(\neq 0)$ is the smallest integer k for which $a_k$ does not vanish.
If o(f (t)) = 0, then f (t) is called an invertible series.
If o(f (t)) = 1, then f (t) is called a delta series.
For $f(t), g(t) \in F$ with $o(f(t)) = 1$ and $o(g(t)) = 0$, there exists a unique sequence $s_n(x)$ (deg $s_n(x) = n$) of polynomials such that $\langle g(t) f(t)^{k} \mid s_n(x) \rangle = n!\,\delta_{n,k}$ for $n, k \ge 0$ (2.8). The sequence $s_n(x)$ is said to be the Sheffer sequence for $(g(t), f(t))$, which is denoted by $s_n(x) \sim (g(t), f(t))$. We observe from (2.8) that $p_n(x) = g(t) s_n(x)$ is the associated sequence for $f(t)$, i.e., $p_n(x) \sim (1, f(t))$. In particular, if $s_n(x) \sim (g(t), t)$, then $p_n(x) = x^n$, and hence (2.10) $s_n(x) = \frac{1}{g(t)} x^n$.
It is well known that $s_n(x) \sim (g(t), f(t))$ if and only if $\frac{1}{g(\bar{f}(t))} e^{x \bar{f}(t)} = \sum_{n=0}^{\infty} s_n(x) \frac{t^n}{n!}$, where $\bar{f}(t)$ is the compositional inverse of $f(t)$ satisfying $f(\bar{f}(t)) = \bar{f}(f(t)) = t$. Several further characterizations, (2.12), (2.13), and (2.14), are equivalent to the fact that $s_n(x)$ is Sheffer for $(g(t), f(t))$, for some invertible $g(t)$. For $s_n(x) \sim (g(t), f(t))$ and $r_n(x) \sim (h(t), l(t))$, we have $s_n(x) = \sum_{m=0}^{n} c_{n,m}\, r_m(x)$, where $c_{n,m} = \frac{1}{m!} \left\langle \frac{h(\bar{f}(t))}{g(\bar{f}(t))}\, l(\bar{f}(t))^{m} \,\middle|\, x^{n} \right\rangle$.
Representation by degenerate Frobenius-Euler polynomials
Our goal here is to find formulas expressing any polynomial in terms of the degenerate Frobenius-Euler polynomials h n,λ (x|u).
As (4.2) is equivalent to a statement about $g(t)\,h_{n,\lambda}(x|u)$, we have the corresponding expansion. Now, we assume that $p(x) \in \mathbb{C}[x]$ has degree $n$, and write $p(x) = \sum_{k=0}^{n} a_k\, h_{k,\lambda}^{(r)}(x|u)$. Then, by using (4.4) and (3.4) and evaluating (4.5) at $x = 0$, we obtain the coefficients $a_k$. This also follows from the observation $\left\langle g(t)^{r} f(t)^{k} \,\middle|\, h_{l,\lambda}^{(r)}(x|u) \right\rangle = l!\,\delta_{l,k}$. The equations (1.13) and (2.7) give some alternative expressions of (4.6), and another expression for $a_k$ follows from (1.16). Summarizing the results so far, from (4.7) and (4.8) we obtain the following theorem.
We get the next result as a corollary to Theorem 4.1 by letting $u = -1$. (a) Write $p(x) = \sum_{k=0}^{n} a_k H_k^{(r)}(x|u)$. As $\lambda$ tends to 0, $f(t) \to t$. Thus, from Theorem 4.1, we recover the following result obtained in [9,10], where $g(t) = \frac{e^{t}-u}{1-u}$ and $f(t) = t$.
One of the examples in Section 5 expresses the product $B_m(x) H_n(x|u)$ in terms of these polynomial families. It is one of our future research projects to continue to find formulas for representing polynomials in terms of some specific special polynomials and to apply those to discovering some interesting identities.
Impact of wavefront distortion and scattering on 2-photon microscopy in mammalian brain tissue
Two-photon (2P) microscopy is widely used in neuroscience, but the optical properties of brain tissue are poorly understood. We have investigated the effect of brain tissue on the 2P point spread function (PSF2P) by imaging fluorescent beads through living cortical slices. By combining this with measurements of the mean free path of the excitation light, adaptive optics and vector-based modeling that includes phase modulation and scattering, we show that tissue-induced wavefront distortions are the main determinant of enlargement and distortion of the PSF2P at intermediate imaging depths. Furthermore, they generate surrounding lobes that contain more than half of the 2P excitation. These effects reduce the resolution of fine structures and contrast and they, together with scattering, limit 2P excitation. Our results disentangle the contributions of scattering and wavefront distortion in shaping the cortical PSF2P, thereby providing a basis for improved 2P microscopy.
Introduction
2P microscopy [1,2] is widely used in neuroscience because it allows structural and functional imaging, photolysis and photoactivation, deep within brain tissue at submicron spatial scales [3]. Despite the advantages provided by the 2P excitation process, the resolution and depth penetration are ultimately limited by the optical properties of brain tissue [4]. However, surprisingly little is known about how tissue affects the 2P excitation point spread function (PSF 2P ), which sets the image resolution in conventional 2P microscopy (as emitted fluorescence is not reimaged). Since the theoretical PSF 2P dimensions are similar to some small structures of interest, such as synapses and spines, any tissue-induced changes in the PSF 2P shape or size will have a large effect on the final image quality, blurring images and limiting depth penetration. Changes in PSF 2P shape will also compromise interpretation of time varying functional signals and photolysis. Recent work has shown that the size of the PSF 2P is enlarged in acute slices of hippocampus [5], while other studies show that the PSF 2P is distorted in fixed cortical slices [6,7], but tissue-induced changes in PSF 2P shape have not been investigated in living slices.
The PSF 2P can be adversely affected by several physical processes when imaging biological tissue, including absorption, statistically homogeneous scattering [8] and optical aberrations [9]. In brain tissue, absorption is usually negligible [10,11]. Statistically homogeneous scattering and optical aberrations due to wavefront distortions both arise from local variations of refractive index, which disperse and delay a fraction of the ballistic photons. However, the relative contributions of these effects on light propagation depend on the size of the structures within the sample (Fig. 1). Although scattering and wavefront distortion both occur in brain tissue, light propagation in 2P microscopy is often modeled as purely scattering, and is described in terms of modulation of the power of ballistic photons [12]. This predicts the attenuation of the ballistic laser power with tissue depth, enlargement of the PSF 2P and generation of out of focus 2P excitation [12], which are all circularly symmetric around the optical axis. These effects set a fundamental limit to the depth of 2P microscopy [12]. The wavefront of excitation light also becomes distorted while propagating through the sample (Fig. 1), which can be modeled using phase modulation of the ballistic photons [9]. Tissue-induced wavefront distortions have been shown to cause enlargement of the PSF 2P in fixed brain slices and the appearance of a speckled pattern [6,7]. Wavefront distortions have also been partially characterized in fixed brain slices [13] but as fixation (paraformaldehyde) modifies the brain refractive indexes, it is unclear how these findings relate to living brain tissue. Furthermore, the relative contribution that scattering and wavefront distortion make to tissue-induced changes in the PSF 2P , image quality and depth penetration, is poorly understood. Fig. 1. Comparing the effects of static, statistically homogeneous scattering and wavefront distortion on excitation photons in 2P microscopy. Brain tissue is made of particles of a wide range of size and refractive index. Particles that are smaller than the wavelength of light create a statistically homogeneous effect. This decreases the power of ballistic photons while leaving the wavefront undistorted. Particles whose size is larger than the wavelength of light induce wavefront distortion.
Wavefront distortion can be counteracted with wavefront shaping using adaptive optics devices, such as liquid crystal spatial light modulators (SLMs) and deformable membrane mirrors (DMMs) [14][15][16]. In 2P microscopy these technologies have been used to compensate for optical system induced aberrations, spherical aberrations induced by a gross mismatch in refractive index [6,[17][18][19][20], and to enhance 2P imaging in both fixed tissue [6,7,19] and living samples [20][21][22][23]. DMMs have also been used to reject background noise [24] in living brain tissue. However, adaptive optics have not been used to quantify the contribution that wavefront distortion makes to the PSF 2P shape in living brain tissue.
We have investigated the respective effects of statistically homogeneous scattering and wavefront distortion at intermediate depth in acute slices of barrel cortex, under conditions similar to those used to study neuronal activity. To do this we examined how the fluorescence emitted by objects and the shape of the PSF 2P changed when imaging through living tissue. We then combined DMM-based adaptive optics, wavefront analysis and computer modeling to disentangle how wavefront distortions and scattering set the PSF 2P characteristics and depth penetration. Our results establish that brain tissue introduces substantial distortions in the PSF 2P shape, including surrounding lobes (speckle patterns) that mediate a large fraction of the 2P excitation. Tissue-induced wavefront distortion therefore reduces the image quality, 2P excitation and signal to noise ratio (SNR) of fluorescence signals.
Optical properties of the 2P microscope
To investigate the effects of brain tissue on the PSF 2P we first quantified the properties of our 2P microscope, which consisted of a femtosecond tuneable Laser (Tsunami, Newport-Spectra Physics), scanhead (Ultima, Prairie Technologies), upright microscope (BX51, Olympus), and IR antireflection coated water-immersion objective (Olympus LumPLanFL/IR 60x/0.90W). Green fluorescence light was collected selectively using an emission filter (HQ 525/70m-2P Chroma Technology) and detected using GaAsP photomultipliers (Hamamatsu H7422). Images were acquired with PrairieView acquisition software. The laser intensity was controlled using a Pockels-cell (Conoptics Model 302CE) and neutral density filter (NDC-50S-3M, Thorlabs) when necessary. A single layer of 200 nm diameter green fluorescent beads (FluoSpheres, Invitrogen) was fixed to the bottom of the recording chamber ( Fig. 2 (a)) and both epi-and trans-fluorescence signals were collected during imaging. All bead images ( Fig. 2 (a)) were acquired at a high magnification factor using 8 μs dwell time and were averages of 2 or 4 single images. Acquisition of the bead images required a laser power after the objective of 3.7 ± 1.6 mW (n = 24).
To quantify the PSF 2P shape and fluorescence we established a protocol that could account for distortion and tilt of a 3D Gaussian [2] from the universal frame of reference (x, y, z) to its eigenframe of reference (x e , y e , z e ) using only 2 images: the focal plane (xy), and the plane containing the PSF 2P axis (z e ) and the optical axis (z). These 2 images were processed using IgorPro (Wavemetrics) as follows. They were filtered using a 3 × 3 median filter (4 repetitions) and fitted with 2D Gaussians. This enabled calculation of the angle (ε) between the (z e ) and (z) axes, and the full width half maximum (FWHM) in each axis of the PSF 2P . The spatial extent of the bead was defined using a fluorescence threshold equal to the fluorescence of the image of the bead at 3 standard deviations away from its center. This mask was applied to the image and the mean value was used as the bead fluorescence. The background fluorescence was calculated from the pixel distribution of the image after exclusion of the bead mask area. The SNR was calculated as the ratio of the mean fluorescence of the bead divided by the background fluorescence. As the 2P images of the beads are the convolution product of the PSF 2P and the bead (both represented by Gaussian distributions), the FWHM of the PSF 2P for each axis (FWHM 2P ) was calculated as FWHM 2P = √(FWHM measured ² − FWHM Bead ²), where FWHM Bead is the FWHM of a Gaussian fit of a 2D projection of a sphere, while the volume of the 3D Gaussian component of the PSF 2P at the focal plane (FV 2P ) was computed from the fitted FWHM values along the three axes. The results in Table 1 and Fig. 2 show that the dimensions of the PSF 2P are slightly larger than those predicted theoretically for a fully back-filled diffraction limited 0.9 NA objective at a wavelength of 725 nm, resulting in an FV 2P about fifty per cent larger than the theoretical minimum.
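As a hedged illustration of this procedure, the sketch below fits a 1D Gaussian to a bead intensity profile and applies the quadrature subtraction of the bead size that follows from the Gaussian-convolution assumption stated above; the example profile and its values are made up, and only the 200 nm bead diameter is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) + offset

def fwhm_from_profile(positions, intensities):
    """Fit a 1D Gaussian to a bead intensity profile and return its FWHM (same units as positions)."""
    p0 = [intensities.max() - intensities.min(), positions[np.argmax(intensities)], 0.2, intensities.min()]
    popt, _ = curve_fit(gaussian, positions, intensities, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])

def fwhm_psf(fwhm_image, fwhm_bead=0.2):
    # Quadrature subtraction of the bead size (Gaussian-on-Gaussian blurring assumption).
    return np.sqrt(fwhm_image ** 2 - fwhm_bead ** 2)

# Toy profile: a ~0.45 um FWHM image of a 0.2 um bead.
xs = np.linspace(-1.5, 1.5, 121)
profile = gaussian(xs, 1.0, 0.0, 0.45 / 2.355, 0.02)
print(fwhm_psf(fwhm_from_profile(xs, profile)))   # ~0.40 um
```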
Measurement of the PSF 2P in acute cortical slices
We next examined the properties of the PSF 2P in acute slices of mouse 'barrel' cortex. This widely studied region of the somatosensory cortex processes information arising from the whiskers. Cortical slices were cut in two different orientations: tangential, which is a good model for in vivo imaging (from the surface of the brain; Fig. 2 (b)), and thalamocortical, which allows investigation of the individual cortical layers (Fig. 6 (a)). We investigated the effects of brain tissue on the PSF 2P by imaging 200 nm diameter beads through cortical slices ( Fig. 2 (b)). This required an average laser power of 86 ± 29 mW (n = 38; all data expressed as mean ± standard deviation, unless stated otherwise) and illuminated an area of 5 × 10 4 μm 2 at the brain surface. The PSF 2P imaged through cortical tissue was enlarged in comparison to the microscope PSF 2P (Table 1 and Figs. 2 (a-c)). The x-y and z dimensions of the PSF 2P and the FV 2P were significantly larger in the slice than for the microscope (p < 0.006). Indeed the cortical PSF 2P dimensions correspond to an apparent NA of around 0.6, 33% lower than the microscope. The PSF 2P was also distorted, tilted and fragmented into multiple lobes (or speckles; Fig. 2 (b)). Comparison with the near-diffraction limited PSF 2P of the microscope ( Fig. 2 (a)) shows that the PSF 2P in tissue ( Fig. 2 (b)) has many more features. The PSF 2P shape and anisotropy were highly variable, but there was no significant difference in the PSF 2P dimensions in different cortical layers (p > 0.08). However, variability was more pronounced in thalamocortical slices than in the tangential slices (Table 1). Analysis of the cortical PSF 2P suggests that 82 ± 18% (n = 8) of the 2P excitation (integral of the distribution of the squared intensity of excitation light in the PSF 2P ), which was estimated here from fluorescence, was generated outside of the 3D Gaussian core in the subset of data where it could be measured. In comparison, only 13 ± 3% (n = 6) of the 2P excitation was generated outside of the 3D Gaussian core for the microscope alone. However, although we selected regions where beads were sparse (at least 3 μm apart), we cannot exclude the possibility that the integration included some contaminating 2P excitation from neighboring beads or out-of-focus 2P excitation [12]. Nevertheless, this first order analysis (see §5.1 for refinement) establishes that a substantial fraction of the 2P excitation is carried by the surrounding lobes.
While part of the PSF 2P anisotropy, as well as the PSF 2P tilt, can be explained by refractive index mismatch at the sloping surface of the brain slice [27], the pronounced distortions observed in the PSF 2P shape (surrounding lobes/speckle pattern) indicate that cortex-induced wavefront distortions make a major contribution to the optical properties of cortex.
Excitation wavelength dependence of PSF 2P in acute cortical slices
We also examined the dependence of the PSF 2P characteristics on the excitation wavelength by comparing images of beads acquired through cortical slices at wavelengths between 725 nm and 950 nm. Aberrant lobes were present in the PSF 2P across this range (17 beads; data not shown). Gross distortions of the PSF 2P induced by brain tissue are therefore present at both the shorter excitation wavelengths typically used for photolysis and the longer wavelengths often used for structural and functional imaging.
The contribution of static scattering of ballistic photons to PSF 2P enlargement
In scattering samples the power of ballistic photons is exponentially attenuated with depth [10] and this decay is characterized by the excitation mean free path or scattering length (L se ). In 2P microscopy, the brain is often modeled as an homogeneous scattering sample [4,12], where the fluorescence power decreases exponentially with depth, twice as fast as the power of ballistic photons, enabling L se to be determined [28,29].
Measurement of mean free path
The fine processes (dendrites) of an individual neuron can span focal depths of hundreds of micrometers. We used this property to determine L se , by imaging these small structures at various depths. To do this, single cortical neurons in layer II / III (thalamocortical slices) were whole-cell patch-clamped and filled with an internal solution containing (in millimoles of compound used to make a liter of solution): 130 KMeSO 3 , 10 Na 2 Phosphocreatine, 10 HEPES, 4 MgCl 2 , 0.1 EGTA, 0.3 NaGTP, 4 Na 2 ATP, and a green emitting fluorescent dye, 0.2 mM fluo-4 (Invitrogen) or Alexa 488 (Invitrogen). After the dye had reached diffusion equilibrium within the cell, a z-stack of images of the dendritic tree was acquired (using epifluorescence collection) at a range of excitation wavelengths (725 ≤ λ ≤ 950 nm in n = 8 cells), using 4 μs dwell time and averages of 2 images ( Fig. 3 (a)). The relationship between the fluorescence normalized by the square of the illumination power (F(z) / P 0 ²) and the depth (z) was fitted by an exponential, F(z) / P 0 ² = α exp(−2z / L se ), giving an estimate of L se (Fig. 3), where α is a proportionality constant. These measurements yielded L se = 77 ± 11 μm (n = 4) at 725 nm. Therefore the depth at which we studied PSF 2P characteristics corresponds to 2 L se . Previous work showed that the excitation transport mean free path (L t ), which is directly related to L se (L t = L se / (1 − g), where g is the anisotropy factor [10]), increases with the excitation wavelength (λ) [28]. Our measurements showed that L se increased with λ, by a factor of two from 725 nm to 950 nm ( Fig. 3 (c)), thus providing larger depth penetration at longer excitation wavelengths.
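The exponential fit used to extract L se can be reproduced in a few lines; the sketch below fits the stated model F(z)/P0² = α exp(−2z/L se ) to synthetic depth data (the depth values and noise level are invented for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def attenuation(z, alpha, L_se):
    # F(z) / P0^2 = alpha * exp(-2 z / L_se): 2P fluorescence falls twice as
    # fast with depth as the ballistic excitation power.
    return alpha * np.exp(-2.0 * z / L_se)

# Made-up depth series mimicking the measurement (true L_se = 77 um here).
depths = np.linspace(10, 150, 15)
rng = np.random.default_rng(4)
f_norm = attenuation(depths, 1.0, 77.0) * (1 + 0.05 * rng.standard_normal(depths.size))

popt, pcov = curve_fit(attenuation, depths, f_norm, p0=[1.0, 50.0])
print("estimated L_se (um):", popt[1])
```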
Comparison of the dimensions of the experimental PSF 2P in the cortex and the PSF 2P obtained from a scattering model
In a purely scattering sample, attenuation of ballistic excitation light due to scattering leads to reduction of the effective NA and enlargement of the PSF 2P [12]. To examine the contribution of statistically homogeneous (static) scattering in cortical tissue we compared the dimensions of the measured PSF 2P with those predicted using a vector-based model of the PSF 2P that incorporated scattering. We determined the PSF 2P by calculating the fluorescence at each image point (x p , y p , z p ) using Richards and Wolf's normalized diffraction integral [30] expressed in cylindrical coordinates (r p , ω p , z p ). We introduced space dependent amplitude A(θ,ω) and phase Φ(θ,ω) factors of the field in the back aperture of the objective, where (r, θ, ω) are spherical polar coordinates for the reference sphere with polar axis θ = 0 in the z direction and θ NA is the maximal acceptance angle of the objective ( Fig. 4 (a)). The 2P excitation distribution (distribution of the squared intensity of excitation light) was then calculated from the square of the focal intensity. Exponential attenuation with depth of ballistic excitation photons due to scattering, as well as illumination of the back aperture of the objective by a Gaussian shaped profile with a lens fill factor of 1, were taken into account through the substitution for A(θ,ω), with the phase factor set to Φ(θ,ω) = 0 in this scattering-only model. To model the PSF 2P we assumed that ballistic photons form a cone that can be decomposed into beamlets of polar coordinates (r, ω) ( Fig. 4 (a)). The length of the optical path of each beamlet is inversely proportional to cos θ. Since the large angle beamlets trace a longer path, their power decreases as θ increases, hence reducing the apparent NA. For L se = 77 μm, an NA of 0.9 and λ = 725 nm, our model predicts that at 150 μm in layer II / III, the PSF 2P FWHM is 0.356 ± 0.002 μm (n = 4) in the focal plane and 1.50 ± 0.06 μm (n = 4) along the optical axis ( Fig. 4 (b)). This is significantly less than the dimensions of the measured PSF 2P in the cortical layer II / III (p < 0.05). Furthermore, since L se increases with the excitation wavelength, its effects on the effective NA will be smaller at longer wavelengths. This indicates that the decrease in the apparent NA due to excitation light scattering is small compared to the PSF 2P enlargement measured in the cortex, suggesting that wavefront distortions contribute significantly.
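To give a numerical feel for the cone-of-beamlets argument, the sketch below evaluates the relative survival of ballistic power for beamlets at different aperture angles using the slant-path scaling z/cos θ described above. It is a simplified scalar illustration with placeholder numbers, not the full vectorial Richards-Wolf calculation used in the paper.

```python
import numpy as np

L_se = 77.0          # excitation mean free path at 725 nm (um), from the text
z = 150.0            # imaging depth (um)
NA, n_medium = 0.9, 1.33
theta_max = np.arcsin(NA / n_medium)

theta = np.linspace(0.0, theta_max, 200)
# Ballistic power surviving along the slant path z / cos(theta) of each beamlet.
survival = np.exp(-z / (L_se * np.cos(theta)))

# High-angle beamlets are attenuated more, which lowers the effective aperture.
print("axial survival:    %.3f" % survival[0])
print("marginal survival: %.3f" % survival[-1])
print("marginal / axial:  %.3f" % (survival[-1] / survival[0]))
```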
Correcting for wavefront distortions in the cortex with adaptive optics
To investigate the properties of brain tissue-induced wavefront distortion we used adaptive optics. To do this a DMM (gold coated mini DMM, Boston Micromachines, with 32 electrostatic actuators) was added to the excitation path of our commercial 2-photon microscope.
Wavefront shaping using a conventional optical configuration
The DMM was inserted in the excitation path in a conventional 4f configuration, ensuring conjugation of the DMM and the galvanometers mirrors, which were reimaged onto the objective backaperture [19,21] (Fig. 5 (a)). The DMM was set up with the actuators at mid voltage (control conditions, CC), to enable positive and negative wavefront shaping. As we were using a commercial microscope there were physical constraints that prevented us fully backfilling the objective, thereby reducing the effective NA of the system. The PSF 2P dimensions of the microscope including the DMM in control conditions were 0.47 ± 0.02 μm (n = 9) in the x-y plane and 3.1 ± 0.6 μm (n = 9) along the optical axis ( Fig. 5 (b)). Thus, the dimensions of the PSF 2P of the optical system including the DMM were close to the theoretical values for a 0.65 NA objective. To correct for aberrations the DMM shape was adjusted from the control conditions to maximize the fluorescence from 200 nm diameter beads. To do this the scanning mirrors were parked at the center of a bead and the resulting fluorescence signal was used as fitness parameter [31] and the DMM shape was adjusted using a random search optimization algorithm (via an in-house developed computer program; LabVIEW, National Instruments) [32]. The output of the photomultiplier was integrated for a period of 10 ms and this measurement was averaged 5 times during each DMM shape optimization. As 200 nm diameter beads are sensitive to bleaching, bleaching compensation was used to avoid bias. The optimized mirror shape (OMS) was defined as the setting when the bead fluorescence reached a steady maximum after several hundred iterations, which took approximately 2-5 min.
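A random-search optimization of this kind can be sketched as a simple accept-if-better loop. In the code below, set_actuators and measure_fluorescence are hypothetical stand-ins for the instrument-specific DMM and PMT calls, and the actuator count, perturbation size, and iteration count are arbitrary; bleaching compensation and other practical details are omitted.

```python
import numpy as np

N_ACTUATORS = 32
rng = np.random.default_rng(5)

def set_actuators(voltages):
    """Hypothetical placeholder for sending normalized voltages to the DMM driver."""
    pass

def measure_fluorescence(n_avg=5):
    """Hypothetical placeholder for the averaged, integrated PMT signal."""
    return rng.random()   # replace with the real readout

def optimize_dmm(iters=500, step=0.05):
    best = np.full(N_ACTUATORS, 0.5)      # start from mid-voltage (control conditions)
    set_actuators(best)
    best_signal = measure_fluorescence()
    for _ in range(iters):
        trial = np.clip(best + step * rng.standard_normal(N_ACTUATORS), 0.0, 1.0)
        set_actuators(trial)
        signal = measure_fluorescence()
        if signal > best_signal:          # keep shapes that increase bead fluorescence
            best, best_signal = trial, signal
    set_actuators(best)
    return best, best_signal

shape, signal = optimize_dmm()
print("best fitness:", signal)
```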
4.1.1. Wavefront shaping to correct for wavefront distortions arising from the microscope
The optimized DMM shape for the microscope alone (OMSm) increased the fluorescence and the SNR of all beads in the field of view by 25 ± 19% (n = 9, p < 0.001, paired t-test) and 23 ± 14% (n = 9, p < 0.01, paired t-test), respectively, in comparison to control conditions. The fluorescence and SNR improvements resulted from both a significant decrease in the volume of the main lobe of the PSF 2P by 12 ± 11% (n = 9, p < 0.01, paired t-test) and a decrease of the background fluorescence by 14 ± 9% (n = 9, p < 0.05, paired t-test).
Wavefront shaping to correct of cortex-induced wavefront distortions
We then corrected for the wavefront distortions introduced by the cortex by optimizing the DMM shape using beads imaged through cortical slices (Fig. 6 (a)). Using this optimized mirror shape for the cortex (OMSc) resulted in increased fluorescence and SNR of 35 ± 37% (n = 36, p < 0.001, paired t-test) and 66 ± 55% (n = 36, p < 0.001, paired t-test), respectively. Alternatively, the laser power could be decreased by 21 ± 10% (n = 36, p < 0.001, paired t-test) while maintaining similar fluorescence levels and significantly decreasing the background fluorescence by 42 ± 17% (n = 36, p < 0.001, paired t-test). These improvements arise from compensation of optical aberrations introduced by the cortex, because using the settings for OMSm did not result in significant fluorescence enhancement when beads were imaged through the cortex (n = 18, p = 0.7, paired t-test). The OMSc resulted in a decrease in the size of the main lobe of the PSF 2P by 16 ± 37% (n = 36, p < 0.005, paired t-test; Figs. 6 (b-d)) and a reduction of the surrounding lobes. This decreased the background and increased the fluorescence of the main PSF 2P (Fig. 6 (d)). To quantify the spatial dependence of cortical wavefront correction with the DMM, we first investigated the field of view over which an OMSc provided fluorescence and SNR enhancement. To do this we optimized the DMM shape on a bead at the center of the field of view (Fig. 7 (a)) and then translated the sample by 50 or 100 μm in the x-y plane (arbitrary direction) and compared the same bead, or group of beads, imaged using the OMSc settings obtained at the center of the field of view and the DMM under control conditions. There was no significant change in fluorescence and SNR with distance from the optical axis (Figs. 7 (b-c)), showing that a particular cortical area is corrected with an OMSc no matter where it is in the field of view. Fig. 7. (a) The DMM shape was optimized for a bead at the center of the field of view, position (1), at a particular cortical location. The cortical location was moved across the field of view at distances of 50 μm (position 2) or 100 μm (position 3) away from the optical axis and the fluorescence and SNR obtained using the optimized mirror shape (OMSc) and in control conditions (CC) were measured. The change in fluorescence (b) and SNR (c) across the field of view using the OMSc performed at position 1. There was no significant change in these parameters with distance to the optical axis (p > 0.08, paired t-test). Grey symbols: individual experiments, colored symbols: mean, black bars: sem. Fig. 8. (a) The DMM shape was optimized on the central bead giving the optimized mirror shape in the cortex (OMSc). (Right) Using OMSc improved bead definition and fluorescence intensity in the lower half of the image but not in the upper half of the image, illustrating that wavefront distortions vary across a cortical slice. (b) Protocol used to examine the variability of wavefront distortions in the cortex: a first cortical column (C1) was positioned at the center of the field of view (indicated by the square box) and a first DMM shape optimization was performed there, giving OMSc (1). Then a neighboring cortical column (C2) was positioned at the center of the field of view, and a second DMM shape optimization was performed, giving OMSc (2). Last, a bead at the center of C2 was imaged using OMSc (1), OMSc (2) and CC. (c) Bead fluorescence using OMSc (1) was significantly smaller than using OMSc (2) (*) (p < 0.011, n = 8, paired t-test). Grey symbols: individual experiments, colored symbols: mean. The sem is smaller than the colored symbols.
We then tested whether wavefront correction at a particular cortical location was effective in correcting for wavefront distortions introduced by the surrounding regions. Figure 8 (a) shows that an OMSc obtained for a bead at the center of the field of view was effective in enhancing the fluorescence and SNR of beads in the lower but not the upper regions of the field of view. To investigate further this regional variation we positioned a cortical column in the field of view and performed a DMM optimization. We then moved to a second cortical column and performed a second DMM optimization (Fig. 8 (b)). Lastly, we imaged a bead in the second cortical column using both of the OMSc's and under control conditions. Fluorescence levels of bead images acquired with the local OMSc were significantly higher than those obtained with the non-local OMSc, confirming that wavefront distortion varies with location in the cortex. These results suggest that optimization of the DMM shape at a particular location can correct for the same aberrations if the sample is moved across the microscope field of view. However, optical aberrations introduced by cortical tissue appear to vary from region to region. Therefore, DMM optimizations have to be repeated for different cortical regions.
Using a light-efficient DMM configuration for wavefront correction
The conventional DMM configuration used so far resulted in the loss of about 50% of excitation light, due to the incoming and outgoing beams passing through a polarization beam splitter and quarter-wave plate. Since both the power of the excitation light and the effective NA are limiting for deep tissue imaging and for performing photolysis, we explored the possibility of using an alternative DMM implementation ( Fig. 9 (a)) with a better optical transmission efficiency. This involved positioning the DMM at 45° to the direction of the beam propagation, thereby avoiding the use of polarization optics (this configuration is related to configurations used previously [22,24]). This increased the available light by 64% and reduced PSF 2P dimensions with the DMM in control conditions to 0.40 ± 0.04 μm (n = 9) in the x-y plane and 2.8 ± 0.2 μm (n = 7) along the optical axis ( Fig. 9 (b)). Thus, the dimensions of the PSF 2P of the optical system including the DMM were close to the theoretical values for a 0.7 NA objective.
Using the 45° light-efficient configuration, DMM optimization of the microscope alone enhanced fluorescence and SNR of beads across the field of view by 19 ± 15% (n = 9, p < 0.001, paired t-test) and 25 ± 18% (n = 7, p < 0.003, paired t-test) in comparison to control conditions. There was no significant decrease in the size of the main lobe of PSF 2P (p = 0.44, paired t-test); instead, the enhanced efficiency arose from a decrease in the surrounding lobes, which significantly decreased background fluorescence by 22 ± 14% (n = 7, p < 0.03, paired t-test) and increased the fluorescence of the main PSF 2P .
Wavefront shaping to correct for cortical wavefront distortions using the light-efficient DMM configuration
Optimization of the DMM shape in the cortex resulted in increased bead fluorescence and SNR of 83 ± 108% (n = 44, p < 0.001, paired t-test) and 87 ± 100% (n = 44, p < 0.001, paired t-test), respectively (Figs. 9 (c-h)). Alternatively, the laser power could be decreased by 41 ± 21% (n = 44, p < 0.001, paired t-test) while maintaining similar fluorescence levels and significantly decreasing the background fluorescence by 39 ± 31% (n = 44; p < 0.001, paired t-test). The OMSc reduced the speckle pattern, significantly decreasing background levels and increasing fluorescence levels of the main PSF 2P . We also used the PSF 2P data acquired using the DMM in control conditions to estimate the 2P excitation in the surrounding lobes at NA 0.7. Analysis of the cortical PSF 2P as in § 2.2 suggests that 74 ± 17% (n = 10) of the 2P excitation was generated outside of the 3D Gaussian core in the subset of data we could analyze (Fig. 12 (c) below). In comparison, only 20 ± 7% (n = 7) of the 2P excitation was generated outside of the 3D Gaussian core for the microscope alone.
To quantify the spatial dependence of cortical wavefront corrections with this configuration, we investigated the field of view over which a DMM OMSc provided fluorescence and SNR enhancement (as described in 4.1.1). There was no significant change in the fluorescence and SNR enhancements with distance from the optical axis ( Fig. 10 (a-c)).
These results suggest that optimization of the DMM shape at a particular cortical location can correct for the same aberrations across the microscope field of view for the light-efficient configuration. The potential disadvantage of this optical configuration, namely that only the center of the mirror is fully conjugated to the back aperture of the objective, is therefore not a problem in practice. Moreover, the reduction in the effective stroke of the DMM in this configuration was also not an issue, as the actuators never reached their maximal value. These results show that it is possible to use a light-efficient optical configuration, where the DMM is positioned at 45° to the direction of the beam propagation, for wavefront shaping in the living brain. Such a configuration is simple to implement in an existing commercial microscope and transmits most of the laser power.
Optimizing the DMM shape using a cellular element
Since it is difficult to distribute beads in the living brain, we examined the feasibility of optimizing the DMM shape on fine neuronal processes. DMM shape optimizations could be successfully achieved on small dendrites or spines of fluorescently labeled neurons located at an average depth of 85 ± 24 μm (n = 9) below the surface of the brain slice, provided that the laser intensity used during the optimization was kept low (Fig. 11). Using the OMSc, the collected fluorescence increased significantly by 60 ± 56% (n = 10, p < 0.001, paired t-test). Alternatively, the laser power could be significantly lowered by 32 ± 18% (n = 9, p < 0.01, paired t-test) using the OMSc while maintaining fluorescence constant.
These results show that it is possible to compensate for brain-induced aberrations by optimizing the DMM shape using dye filled living subcellular elements, thereby improving optical efficiency and contrast of fine objects in the mammalian cortex.
Contribution of brain-induced wavefront distortion and scattering to 2P microscopy
To quantify the relative contributions of wavefront distortion and scattering to image quality in the cortex, we measured the wavefront corrections introduced by the DMM and used modeling to examine their effect on the PSF 2P and on the resolution of small objects.
Effect of wavefront distortions on the PSF 2P
The theoretical PSF 2P can be calculated in the presence of wavefront distortions by expressing the wave aberration function Φ(θ,ω) as a linear combination of Zernike polynomials in Wolf's integral representation of the image field [33], as in § 3.1. We used this approach to model the aberrated PSF 2P by introducing the conjugate Zernike modes determined from the OMSc measured in the cortex; Φ(θ,ω) and A(θ,ω) were substituted in Eqs. (7)-(9) accordingly.

To measure the optical wavefront, the DMM was set up in the same configuration as that used for correcting cortical wavefront distortion. We used the 45° configuration for this analysis since the NA was larger and the wavefront correction was more effective. To do this, the DMM was illuminated with a Ti:sapphire laser and reimaged onto a Shack-Hartmann wavefront sensor (WFS150-7AR, Thorlabs) using two lenses. A rolling average (10 time points) was used to reduce the noise and the scale was set to micrometers. The Zernike coefficients were determined for a particular OMSc measured in cortex after subtraction of the coefficients corresponding to the DMM control conditions. Only modes up to the 5th Zernike order were considered, as higher order modes either could not be corrected or their contribution was small [34]. As the insertion of the DMM in the optical path resulted in a decrease of the NA to 0.70, this value was used for modeling the PSF 2P . The refractive index of water (1.34) was used for the microscope alone and a refractive index of 1.4 was used in the brain [35]. Modeling was performed in square cuboids of 4 μm × 4 μm × 14 μm centered on the center of each PSF 2P .

Figures 12 (a) and (b) compare a model of an ideal PSF 2P (without aberrations) and a model PSF 2P that includes the wavefront distortions determined from OMSc's in cortex, respectively. The main core region of each of 18 modeled cortical PSF 2P 's was fitted with a 3D Gaussian function and the average FWHM was 0.5 ± 0.1 μm along the x axis, 0.8 ± 0.1 μm along the y axis and 4.5 ± 1.3 μm along the optical axis. The modeled cortical PSF 2P was significantly enlarged compared to the ideal PSF 2P (p < 0.001, z-test) and the extent of this enlargement was not significantly different from that measured in the cortex (p > 0.15, paired t-test). Wavefront distortions can therefore account for the PSF 2P enlargement observed in the cortex. The modeled PSF 2P also showed a speckle pattern with clear surrounding lobes, as observed experimentally (Figs. 12 (b) and 2 (b), respectively). We quantified the fraction of 2P excitation mediated by the surrounding lobes by using the fit of a 3D Gaussian function to define the central core of the PSF 2P . Analysis of the modeled microscope PSF 2P (0.7 NA objective), using a step size of 0.02 μm in the x and y dimensions and 0.1 μm in the z dimension, showed that 12.1 ± 0.4% of the 2P excitation is not carried within this 3D Gaussian region of the central core, as predicted analytically. In contrast, analysis of the cortical PSF 2P showed that 54 ± 13% (n = 18) of 2P excitation was generated outside of the 3D Gaussian core in the subset of data we modeled (Fig. 12 (c)). Thus, by compensating for the optical aberrations that generate the surrounding lobes, the fluorescence of the Gaussian PSF 2P should increase by approximately a factor of two. These predictions are consistent with our experimental observations that wavefront correction with the DMM increased the fluorescence of beads by a factor of 1.83 ± 1.08 (n = 44).
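To illustrate how Zernike phase in the pupil reshapes the two-photon PSF, the following Python sketch uses a simplified scalar Fourier-optics model rather than the vectorial Wolf integral used above; the grid size and the Zernike coefficients are arbitrary illustrative choices, not values from the measured wavefronts:

```python
import numpy as np

# Qualitative scalar stand-in for the vectorial PSF model: Zernike phase is
# added to a circular pupil, and the two-photon PSF is taken as the squared
# intensity PSF. Grid size and coefficients are arbitrary.
N = 512
u, v = np.meshgrid(np.linspace(-2, 2, N), np.linspace(-2, 2, N))
rho, theta = np.hypot(u, v), np.arctan2(v, u)   # normalized pupil coordinates
pupil = (rho <= 1.0).astype(float)

phase = (1.5 * rho**2 * np.cos(2 * theta)                # astigmatism Z(2,2)
         + 1.0 * (3 * rho**3 - 2 * rho) * np.cos(theta)  # coma Z(3,1)
         + 0.8 * (6 * rho**4 - 6 * rho**2 + 1))          # spherical Z(4,0)

h = np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase)))  # amplitude PSF
h0 = np.fft.fftshift(np.fft.fft2(pupil))                      # aberration-free
psf_2p, psf_2p_0 = np.abs(h) ** 4, np.abs(h0) ** 4            # 2P ~ intensity^2

# Aberrations redistribute 2P excitation into surrounding lobes, lowering
# the peak signal relative to the aberration-free case (a Strehl-like ratio).
print(f"relative peak 2P signal: {psf_2p.max() / psf_2p_0.max():.2f}")
```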
Moreover, they refine our analysis of the 2P excitation in the surrounding lobes of the measured cortical PSF 2P in § 2.2 and 4.2.1 by providing an estimate that does not contain contamination from neighboring beads or from out-of-focus 2P excitation related to scattering. However, it is likely to represent a lower limit for the effect of the surrounding lobes given that wavefront shaping did not compensate for all the brain-induced wavefront distortions.
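The "fraction of 2P excitation outside the 3D Gaussian core" statistic used above can be computed directly from a sampled PSF volume. A sketch of one way to do this in Python; the moment-based fit and the core radius of √6 standard deviations (chosen so that an ideal 3D Gaussian keeps roughly 88% of its mass inside the core, consistent with the ~12% outside-core figure quoted above for the unaberrated PSF) are our assumptions, not necessarily the paper's exact procedure:

```python
import numpy as np

def fraction_outside_core(psf, box=7, n_sigma=6 ** 0.5):
    """Fraction of total signal outside the central Gaussian core of a PSF.

    A 3D Gaussian is fitted by intensity-weighted moments inside a small box
    around the peak (so surrounding lobes do not bias the fit). The core is
    the n_sigma ellipsoid; n_sigma = sqrt(6) leaves ~12% of an ideal 3D
    Gaussian outside, matching the aberration-free figure quoted above.
    """
    psf = psf / psf.sum()
    peak = np.array(np.unravel_index(np.argmax(psf), psf.shape))
    lo = np.maximum(peak - box, 0)
    hi = np.minimum(peak + box + 1, psf.shape)
    sub = psf[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    grid = np.indices(sub.shape).astype(float)
    w = sub / sub.sum()
    mu = np.array([(g * w).sum() for g in grid])
    sigma = np.sqrt([(((g - m) ** 2) * w).sum() for g, m in zip(grid, mu)])
    full = np.indices(psf.shape).astype(float)
    d2 = sum(((g - (l + m)) / s) ** 2
             for g, l, m, s in zip(full, lo, mu, sigma))
    return 1.0 - psf[d2 <= n_sigma ** 2].sum()

# Toy example: a Gaussian core plus one dim off-center "surrounding lobe".
zz, yy, xx = np.indices((41, 41, 41)).astype(float)
core = np.exp(-(((zz - 20) / 4) ** 2 + ((yy - 20) / 2) ** 2 + ((xx - 20) / 2) ** 2))
lobe = 0.3 * np.exp(-(((zz - 20) / 4) ** 2 + ((yy - 31) / 2) ** 2 + ((xx - 20) / 2) ** 2))
print(f"fraction outside core: {fraction_outside_core(core + lobe):.2f}")
```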
Relative contribution of scattering and wavefront distortions in the cortex
Both scattering and wavefront distortions contribute to decreased 2P-excitation in tissue. To investigate their respective contributions, we modeled the fluorescent emission from spherical objects of 0-7 μm diameter (Fig. 12 (d)). We compared the predicted 2P fluorescence from these objects for an undistorted wavefront, in the presence of statistically homogeneous scattering, and in the presence of the wavefront distortions that we had compensated for with the DMM in cortex. Figure 12 (d) shows that homogeneous scattering reduces the 2P excitation fluorescence by a factor of fifty at a depth of 150 μm, and this factor is independent of the size of the object. Wavefront distortions reduce 2P excitation fluorescence by a factor of four for the largest objects and a factor of ten for the smallest ones, and thus lead to decreased contrast of fine objects. Normalizing the plots by the maximal fluorescence emission reveals that scattering has little effect on the relationship between 2P fluorescence and object size. In contrast, wavefront distortion substantially reduced the emission from objects below about 7 μm in diameter, introducing significant low-pass spatial filtering (Fig. 12 (d)). Furthermore, comparison of the effects of the main 3D Gaussian core to the full aberrated PSF 2P shows that low-pass filtering results mainly from the surrounding lobes (Fig. 12 (d)). These models show that cortical tissue-induced wavefront distortions have a marked effect on the PSF 2P and that their effects are largest when imaging small objects. Correction of tissue-induced aberrations is therefore particularly useful to improve 2P imaging of fine structures. Correction of wavefront distortions will also reduce the 2P photolysis and photostimulation volume in cortex, allowing more localized uncaging and activation to be achieved.
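The low-pass-filtering argument can be reproduced qualitatively by convolving spherical objects with PSFs of different sizes and comparing the peak 2P fluorescence. The Python sketch below uses Gaussian PSFs as rough stand-ins for the ideal and enlarged cortical core PSFs; it does not model the surrounding lobes (which the full model shows dominate the filtering), and all widths are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Peak 2P fluorescence of spheres imaged with a narrow vs. an enlarged PSF.
# Gaussian sigmas are in voxels of 0.1 um, ordered (z, y, x); values chosen
# to roughly mimic the core FWHMs reported above, not the paper's model.
vox = 0.1                          # um per voxel
sigma_ideal = (12.0, 2.0, 2.0)     # ~2.8 um axial / ~0.47 um lateral FWHM
sigma_aberr = (19.0, 4.0, 4.0)     # enlarged cortical core

z, y, x = np.ogrid[-80:81, -80:81, -80:81]
r2 = (x ** 2 + y ** 2 + z ** 2) * vox ** 2
for diameter in (0.5, 1.0, 2.0, 4.0, 7.0):          # um
    sphere = (r2 <= (diameter / 2) ** 2).astype(float)
    ratio = (gaussian_filter(sphere, sigma_aberr).max()
             / gaussian_filter(sphere, sigma_ideal).max())
    print(f"{diameter:3.1f} um sphere: enlarged/ideal peak signal = {ratio:.2f}")
```

Small spheres are suppressed far more than large ones, which is the low-pass filtering described in the text.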
Discussion
We have measured the properties of the PSF 2P in living slices of mammalian cortex. Our results show that in cortex the PSF 2P decomposes into a central Gaussian region and a speckle pattern consisting of multiple surrounding lobes that can carry more than half the 2P excitation at a tissue depth of 150 μm. The central Gaussian region is enlarged in comparison to the microscope PSF 2P and this arises mainly from brain-induced wavefront distortions rather than scattering, which has a relatively small effect on PSF 2P size. Scattering of ballistic photons is, however, the dominant process in reducing the 2P-excitation with depth, although wavefront distortion also contributes significantly. By combining adaptive optics and modeling, we establish that the 3D speckle pattern of the PSF 2P arises from brain-induced wavefront distortions and this together with the enlarged central region causes low-pass spatial filtering when imaging cortex. Lastly, we show that the tissue-induced surrounding lobes of the PSF 2P can be reduced by wavefront shaping using a DMM, thereby improving the efficiency of excitation and the resolution of fine structures. Our results provide a quantitative basis for understanding the processes that set the resolution and limit 2P excitation of fluorescent structures in brain tissue and identify features of the PSF 2P that can be improved with wavefront shaping.
Experimental measurement of the PSF 2P in the mammalian brain
We have shown that the PSF 2P is distorted and enlarged in acute cortical slices, resulting in loss of 2P excitation efficiency, decreased image contrast and reduced spatial resolution. Similar enlargement of the core of the PSF 2P with depth has recently been reported in hippocampal slices [5] and in fixed cortical slices [6] and attributed to scattering or wavefront distortions, respectively. Moreover, recent work using fixed cortical slices has reported that the PSF 2P has a 3D speckle pattern [6,7]. Our work, in living tissue, shows that the 3D speckle pattern arises from tissue-induced wavefront distortions and accounts for up to 80% of 2P excitation at 150 μm for an NA of 0.9. This has important implications for the resolution of fine structures, since they are spatially low-pass filtered by the enlarged and distributed PSF 2P . It is also detrimental for 2P photolysis since the uncaged/photoactivated molecules will be spatially distributed. Our experimental results also show that there is a large spatial variability in wavefront distortion in the cortex and thus in the PSF 2P shape from location to location. This variability, which is manifest mainly in the extent of the 3D speckle pattern (compare Fig. 2 (b), Fig. 6 (b) and Fig. 9 (e)) rather than the core of the PSF 2P , will produce variability in image distortions and in the number and spatial distribution of uncaged/photoactivated molecules.
Modeling the effects of wavefront distortion and scattering on the PSF 2P to disentangle their respective roles in the cortex
By combining measurements of the excitation mean free path (L se = 77-140 μm, for 725-950 nm) and the wavefront distortions introduced in living cortical tissue with a vector-based model of the PSF 2P that includes both phase modulation and static, statistically homogeneous scattering, we show that both wavefront distortions and the NA reduction induced by attenuation of ballistic excitation photons contribute to the enlargement of the core of the PSF 2P . Our model predictions establish that wavefront distortion is the major determinant of the PSF 2P enlargement. This extends previous work using a homogeneous scattering model, which showed that the PSF 2P is enlarged in scattering samples [12]. Previous work also showed that scattering sets a fundamental imaging limit (i.e. 5 L se ) due to out-of-focus 2P excitation [12]. Our modeling did not include 2P excitation generated by scattered photons or out-of-focus 2P excitation at large distances from the focal point, as these are small at the imaging depths studied here, but it could easily be extended to include them. Nevertheless, our modeling showed that 2P excitation due to the tissue-induced speckle pattern accounts for more than half of 2P-excitation at 150 μm for an NA of 0.7. Moreover, the fraction of the 2P excitation carried by the surrounding lobes is expected to be larger for higher NA configurations, since they are more prone to aberration. Indeed, our modeling shows that the 3D speckle pattern is responsible for most of the loss of spatial resolution. It has been suggested recently from adaptive optics experiments on various fixed tissues that scattering makes a greater contribution to the decrease in 2P excitation than aberrations arising from wavefront distortion [36]. While our results are consistent with this conclusion, our modeling shows that in acute brain slices, wavefront distortions contribute significantly to the decrease of 2P-excitation in brain tissue, and that their effect is particularly pronounced for small fluorescent structures.
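The depth scaling discussed here follows from simple ballistic attenuation: the excitation power reaching the focus falls as exp(-z/L se), and 2P excitation scales with its square. A worked version using the short end of the measured range (this back-of-the-envelope form is our simplification, not the full model):

```latex
% Ballistic attenuation of 2P excitation with depth (simplified form):
\[
  S_{2P}(z) \propto \left( e^{-z/L_{se}} \right)^{2} = e^{-2z/L_{se}},
  \qquad
  S_{2P}(150\,\mu\mathrm{m}) \approx e^{-300/77} \approx \frac{1}{49}
  \quad \text{for } L_{se} = 77\,\mu\mathrm{m}.
\]
```

This matches the roughly fifty-fold reduction at 150 μm quoted above for the homogeneous-scattering model.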
Correction of brain-induced optical aberrations
Our results show that wavefront shaping using a DMM increased fluorescence by almost a factor of two at a depth of 150 μm by reducing the speckle pattern and improving the SNR. Our wavefront shaping usually reduced the speckle pattern but did not always reduce the size of the main lobe of the PSF 2P . At first glance this result seems in contrast with previous reports showing a decrease in the size of different objects in fixed brain slices [6] or living brain [23] after wavefront shaping. However, as the size of the objects imaged in previous reports ranged from one micrometer to several tens of micrometers, the dimensions of their images (convolution product of the object and the PSF 2P ) will depend on both the main lobe of the PSF 2P and the extent and intensity of the surrounding lobes. The decrease in the size of the objects observed could therefore have resulted from a decrease in the speckle pattern as well as from a reduction in the size of the core of the PSF 2P . The brain-induced wavefront distortions we corrected for with the DMM did not account for all cortical aberrations. Nevertheless, the fluorescence improvement we obtained is within the same range as that obtained with a spatial light modulator [6], which can correct for higher spatial frequencies than the DMM used here. Full correction may not be possible, because optical aberrations originating from micro-lensing effects of cellular bodies and vessels [37] are difficult to correct for due to their local nature and high spatial frequencies. Our results show that a key difficulty in correcting for optical aberrations in living brain tissue is the lack of spatial homogeneity. The ability of the DMM to correct for optical aberrations across the full field of view is dependent on the resolution and the sample [38,39]. A potential solution to this problem has been proposed [7,40], but unfortunately it relies on a large distance between the imaged object and the turbid layers and is therefore of limited utility for imaging brain tissue where such distances are short. Despite these limitations, our results show that implementation of a DMM allows a significant improvement of image quality in acute cortical slices.
We show that wavefront correction can be implemented with a commercially available microscope using both a conventional DMM configuration and a simpler configuration that improved light transmission efficiency by 60%. The simpler configuration allowed us to achieve a larger fluorescence improvement thanks to a higher NA (which is more prone to wavefront distortion). Moreover, we show that DMM optimization can be performed on a subcellular element in a brain slice, making its application to living tissues much more straightforward than using beads. This is likely to be useful for in vivo imaging for two reasons: it can firstly improve tissue penetration, allowing deeper layers in the cortex to be reached or less laser power to be used at similar depth, and secondly it can improve the accuracy of anatomical information. These features could be particularly useful for chronic imaging of neuronal growth and plasticity where the resolution of 1 μm spine structures is crucial and photodamage particularly problematic.
Conclusion
We have measured the optical properties of the mammalian cortex and have disentangled the contributions made by wavefront distortion and statistically homogeneous scattering. By identifying key determinants of 2P excitation our results provide a basis for improved adaptive optics-based correction strategies and 2P imaging of brain structure and function in the future.
Comparative analysis of temperature preference behavior and effects of temperature on daily behavior in 11 Drosophila species
Temperature is one of the most critical environmental factors that influence various biological processes. Species distributed in different temperature regions are considered to have different optimal temperatures for daily life activities. However, how organisms have acquired various features to cope with particular temperature environments remains to be elucidated. In this study, we have systematically analyzed the temperature preference behavior and effects of temperatures on daily locomotor activity and sleep using 11 Drosophila species. We also investigated the function of antennae in the temperature preference behavior of these species. We found that, (1) an optimal temperature for daily locomotor activity and sleep of each species approximately matches with temperatures it frequently encounters in its habitat, (2) effects of temperature on locomotor activity and sleep are diverse among species, but each species maintains its daily activity and sleep pattern even at different temperatures, and (3) each species has a unique temperature preference behavior, and the contribution of antennae to this behavior is diverse among species. These results suggest that Drosophila species inhabiting different climatic environments have acquired species-specific temperature response systems according to their life strategies. This study provides fundamental information for understanding the mechanisms underlying their temperature adaptation and lifestyle diversification.
Introduction
Temperature has been identified as one of the most critical environmental factors affecting various biological processes and species distribution [1]. Each species adapts to the temperature conditions of its habitat by tuning its lifestyle and life history. For instance, species living at higher latitudes or altitudes are known to tolerate cold and have evolved a higher capacity to maintain their daily life activities at lower temperatures [2][3][4]. However, the mechanism by which organisms have acquired such features remains unclear. Elucidating the mechanism of temperature adaptation and its diversification will significantly contribute to shedding light on environmental adaptation and evolution.
The genus Drosophila includes about 1300 described species and is widely distributed from the tropical to subarctic regions [5,6]. Several studies have explored their ecology and environmental adaptations such as food habits, temperature resistance, and diapause [7][8][9][10][11][12]. In particular, D. melanogaster is a powerful and important model organism for biological research and has contributed to our understanding of the molecular and cellular mechanisms underlying various biological processes. Hence, Drosophila species are potentially useful for analyzing temperature adaptation and its diversification at the molecular and cellular levels.
At a certain range of temperature, an animal exhibits active foraging and reproductive behaviors. In general, the temperature range at which animals remain active is dependent on the temperatures of their habitats [13]. For instance, animals living in cool climates generally exhibit higher activities even at low temperatures than animals living in tropical regions. However, animals are not always active; they can often change their activity level due to the effects of various factors even when ambient temperature is favorable. The activity level displays circadian rhythmicity. For example, D. melanogaster, as well as several other animals, exhibits circadian rhythmicity in locomotor activity; its inactive phase is often interpreted as "sleep" because it displays not only low locomotor activity but also low sensitivity to environmental signals [14]. Temperature has also been suggested to affect circadian rhythmicity; high temperature causes reduced daytime activity and increased nighttime activity [15], and total sleep length increases with increasing average temperature of habitats [16]. Nevertheless, there is limited research on the effects of temperature on the locomotor activity and sleep of other Drosophila species from different climatic regions.
Ectothermic animals, whose body temperature is highly dependent on ambient temperature, require behavioral thermoregulation in response to temperature fluctuations. One of the behavioral thermoregulations is temperature preference behavior, by which animals can maintain body temperature within a physiologically permissive range [17]. The genetic and cellular mechanisms underlying apparent temperature preference behavior in D. melanogaster have been extensively investigated [18][19][20][21]. Previous studies have shown that the transient receptor potential (TRP) ion channel A1 in the brain and Gustatory Receptor (GR) 28b in the antennae mediate warmth-sensing, and a trio of Ionotropic Receptors (IRs), IR21a, IR25a, and IR93a in the antennae mediate cooling [19,22,23]. Therefore, it is possible that the temperature preference of D. melanogaster is regulated by the balance between heat and cold or cooling avoidances through the activation of the TRP channel, GR and IRs. Hence, it would be useful to understand whether other Drosophila species have species-specific preferred temperatures and whether their preferred temperatures correlate with the optimal temperatures required for their activities and are determined by the thermal sensing system as in D. melanogaster [19,22,23].
Following the sequencing of the whole genome of D. melanogaster, the genome was also sequenced in 11 other Drosophila species that have been used for research in various fields to date [24]. This allows the analysis of the genetic mechanisms underlying unexplored biological processes in these species.

In D. simulans and D. persimilis, the amount of activity was generally higher at ≤ 23°C than at > 23°C. In D. erecta, the peak temperature for the amount of activity was 23°C. In D. virilis, the amount of activity was higher at ≤ 20°C than at > 20°C. To estimate the optimal temperature and maximum performance for daily locomotor activity, we fitted thermal performance curve (TPC) models for each species (Supplementary Fig. 2). These results, in addition to the effects of temperature on the amount of activity and the optimal temperatures estimated by the TPC models (with some exceptions), indicate that tropical and temperate Drosophila species tend to exhibit higher activity levels at high and low temperatures, respectively.
Effects of temperature on the daily activity pattern
We examined how temperature affects the daily activity pattern in these Drosophila species. The daily patterns of locomotor activity were investigated at different temperatures (Fig. 2, Supplementary Fig. 3). Each Drosophila species exhibited a unique daily activity pattern. Most species have bimodal morning and evening peaks of activity in a day, as also known in D. melanogaster [26]. However, the height, shape, and ratio of the two peaks were noted to vary among species. To investigate the effect of temperature on activity peaks, we focused on the amount of activity in the first (2:00 to 14:00) and second (14:00 to 2:00) half of a day, which include the morning and evening peaks, respectively (Supplementary Fig. 4). In all the examined species, either or both of the amounts of locomotor activity in the first and second half of the day were affected by temperature (Kruskal-Wallis test: p < 0.05, Supplementary Fig. 4). However, the ratio of the activity level in the first and second half of the day was almost constant at all experimental temperatures in most species (Supplementary Fig. 5). In fact, the overall activity peaks, for example, the higher morning peak in D. yakuba and the higher evening peak in D. simulans, were maintained in most species (Fig. 2 and Supplementary Fig. 3). Remarkably, although the ratio of activity was different between low and high temperatures in D. willistoni, this was caused by the disappearance of the morning and evening peaks at low temperature (≤ 20°C) rather than an inversion of the ratio. These results illustrate that the species-specific ratio of the morning and evening activity peaks is less affected by temperature, suggesting that each species maintains its daily activity pattern even at different temperatures.
To further explore the effect of temperature on the daily activity pattern, we examined how light conditions affect daily activity patterns (Fig. 3). All the examined species showed significant differences in both daytime (light on) and nighttime (light off) activity levels across the experimental temperatures.
Effects of temperature on sleep
Environmental temperature is known to influence sleep in several animals, including D. melanogaster [27][28]. However, how temperature affects sleep in other Drosophila species has not been well investigated. Thus, in this study, we examined the effect of temperature on sleep in the different Drosophila species (Fig. 4 and Supplementary Fig. 7). First, we compared the range of total length of sleep in a day and found that the ranges of daily sleep length were quite different (Kruskal-Wallis test: χ2 = 747.72, p < 0.001, Supplementary Fig. 8). This result suggests that, as observed in the locomotor activity, each species has its own range of daily sleep length, and this range is diverse among Drosophila species. The daily activity level and sleep length showed a significant negative correlation (Spearman's rank correlation = -0.936, p < 0.05), indicating that long-sleep species are less active and short-sleep species are more active.
Next, we compared the daily sleep length and patterns among different temperatures.
Temperature preference behavior
We next investigated the behavioral thermoregulation in these Drosophila species. First, we examined the distribution of flies on the temperature gradient field (Fig. 5). Each species exhibited a species-specific distribution on the field. Of the 11 species, 7 showed prominent distribution peaks on the field, while D. melanogaster showed a distribution peak at 25°C, as reported in previous studies [19,21]. Then, we determined the preferred temperature (see Methods) for each species (Fig. 6) and compared it with the optimal temperature required for locomotor activity (the temperature at which the species showed the highest activity) (Fig. 1, Supplementary Fig. 2 and Fig. 6). The comparison revealed that the preferred temperatures almost coincided with the optimal temperature required for locomotor activity in D. melanogaster, D. mojavensis, D. persimilis, and D. pseudoobscura. However, the preferred temperature was higher than the optimal temperature in D. simulans and D. virilis, and lower than the optimal temperature in D. ananassae, D. yakuba, D. willistoni, and D. sechellia. These results indicated that Drosophila species do not always prefer the temperature at which their locomotor activity levels are high.
Function of antennae in temperature preference behavior
In D. melanogaster, studies have experimentally demonstrated that antennae sense the external cold temperature and are indispensable for cold avoidance and temperature preference behavior [23,29].
Therefore, we investigated the function of antennae in the temperature preference behavior of the 11 species using antenna ablation experiments (Fig. 7). Consistent with previous studies, D. melanogaster with antenna ablation was broadly distributed to cooler temperatures at ≤ 23°C. However, intriguingly, antenna ablation had no such effects in the other species. In D. erecta and D. virilis, the distribution peaks became smoother without large shifts of the peak temperatures. In contrast, the peaks became prominent in D. willistoni and D. persimilis. However, heat avoidance at higher temperature (31°C) appeared to be lost in D. mojavensis. The other species showed no drastic changes upon antenna ablation. These results suggest that the function of antennae in the temperature preference behavior can be diverse among Drosophila species.

To compare the effect of the antennae quantitatively, we determined the preferred temperature in flies with antenna ablation (Fig. 8); these comparisons suggested that the antenna-dependent thermosensing of D. melanogaster is not the typical system.
Discussion
We investigated the effects of temperature on locomotor activity and temperature preference behavior in 11 Drosophila species. We found that these species exhibited species-specific activity levels and sleep durations. Most species exhibited higher locomotor activities at temperatures they encounter in their habitats. Temperature had a unique impact on the amount of total daily activity and sleep in a species-specific manner. All species exhibited high activities around morning and/or evening, irrespective of temperature. Some species changed the amount of daily activity while maintaining the daytime/nighttime activity ratio, whereas others changed the total activity while changing the daytime/nighttime activity ratio. With regard to temperature preference behavior, all species avoided higher temperatures; however, the degree of avoidance of, or attraction to, cold temperature was diverse among species, resulting in species-specific responses to the temperature gradient. Furthermore, we experimentally found that the function of the antenna in the temperature preference behavior was not common among Drosophila species, and some species identified the preferred temperature without antennae. These results suggest that these Drosophila species have acquired species-specific temperature response systems based on their own life strategies to cope with ambient temperatures.
Previous studies have shown that some characteristics of the life history of D. melanogaster become adapted to laboratory conditions through laboratory culture [30,31]. We used laboratory strains for all analyses in this study. Therefore, it is uncertain how much the observed species-specific characteristics in locomotor activity and temperature-preference behavior can explain the species-specific adaptation to the natural thermal environment. On the other hand, it has also been reported that laboratory maintenance did not eclipse fundamental species-specific ecological and physiological characteristics in Drosophila species [32]. It is reasonable to assume that some characteristics are altered while others are maintained by laboratory maintenance. To understand the extent to which species-specific characteristics are maintained in laboratory strains and the presence of intraspecific variation in certain Drosophila species, we need to comparatively analyze many different strains, including freshly established local strains and other laboratory-maintained strains. Previously, the laboratory strains of 11 Drosophila species have been used for comparative studies of embryonic development and larval locomotor behavior [33,34].
Interestingly, the correlation between species-specific characteristics and their natural habitats has been reported with the laboratory strains [34]. Therefore, the experimental results using laboratory strains, including our study, should be valuable benchmarks for future comparative analyses in understanding the exact species-specific characteristics and adaptations.
Increases in locomotor activity would be related to opportunities for mating and foraging.
Therefore, the temperature at which the species is most active is considered to be optimal for activity in the species. In this study, excluding D. melanogaster, all the other species exhibited significant or remarkable differences in total activity among the analyzed temperatures, and the temperature at which high total activity was observed matched the temperature in the habitat. This result suggests that these species are adapted to the temperature zone of their habitats. On the other hand, D. melanogaster could maintain the same amount of daily activity at various temperatures. A previous study showed that the effect of temperature on locomotor activity differed between D. melanogaster strains established from different climate zones; activity levels from 17°C to 29°C were about the same for the strain established from the tropical climate zone, though activity was decreased at 17°C compared with 25°C and 29°C in the temperate strain [13].
Additionally, it has also been reported that spontaneous activity was higher at 24°C than at 17°C in another temperate strain [35]. Furthermore, it has also been shown that the temperature of the developmental period significantly affects the activity level in adults [36]. Therefore, it is worth investigating whether such plastic features are specific to D. melanogaster, and whether they are related to the fact that this species is cosmopolitan.
Regarding the daily locomotor activity pattern in these species, we noted a species-specific effect of temperature on the daily activity pattern. It is well known that D. melanogaster has two daily activity surges, known as morning and evening peaks [37]. As in D. melanogaster, we found that each species has a morning and an evening peak of activity at the optimal temperature, with a species-specific pattern. Some species were more active in the evening, whereas some species were more active in the morning. These differences might correlate with the niche in time, which is generally observed in animals [38]. Interestingly, the shape of each peak and the ratio of surge magnitudes are species-specific, and they retain their characteristics even at different temperatures in most cases. Moreover, each species has its unique range of total daily activity, which is diverse among species (Supplementary Fig. 1). Because the species-specific daily activity pattern and level might be tightly correlated with the species-specific lifestyle and life strategy, they would be robustly maintained even at different temperatures.
In addition to locomotor activity, we observed that sleep was affected by temperature. In most species, total daily sleep length was shorter at the temperature at which total daily activity was higher. In this study, the definition of sleep for D. melanogaster was applied to all other species, i.e., an immobile state for ≥ 5 minutes [39]. However, the total locomotor activity was significantly different among species. Several species, such as D. erecta and D. mojavensis, were much less active compared with D. melanogaster. Therefore, it should be considered whether applying the definition for D. melanogaster to all other species is suitable.
If the optimal temperature for activity is the most adequate temperature for a certain species, the preferred temperature should match the optimal temperature for activity. As anticipated, four species, namely, D. melanogaster, D. mojavensis, D. persimilis, and D. pseudoobscura, preferred the optimal activity temperature or nearby temperatures in the temperature preference experiments. However, there were discrepancies between the preferred and optimal temperatures for the remaining species. Four tropical or subtropical species, i.e., D. ananassae, D. yakuba, D. sechellia, and D. willistoni, preferred temperatures lower than the optimal activity temperature, whereas a cosmopolitan species, D. simulans, and a temperate species, D. virilis, preferred temperatures higher than the optimal activity temperature. It can be assumed that there are several factors affecting the temperature preference behavior. Temperature can influence various behaviors and physiological phenomena such as courtship, egg-laying, energy storage, and energy efficiency [40][41][42][43]. Additionally, it has been reported that rearing temperature and age affect the adult temperature preference in several Drosophila species [44,45]. If the optimal temperature differs among these traits, the temperature that flies prefer might differ depending on the situation in which they are placed and on which behavior or phenomenon takes precedence. Moreover, their temperature preference may be affected by noxious stress [17]. Because tropical and subtropical species have increased chances of being exposed to higher temperatures in their habitats, they might have a higher adaptive plasticity for heat avoidance and/or cold attraction. For the same reason, some temperate species might be adapted toward cold-stress avoidance and/or heat attraction. The preference behavior of a species would be determined by the balance of these factors, and the priority factor would also vary among species.
Previous studies have shown that antennal thermosensing regulates cold avoidance in D. melanogaster [19,22,23]. Therefore, it is expected that the temperature preference behavior of Drosophila species is also largely dependent on antennal thermosensing. However, in this study, we found that antenna ablation affected temperature sensing in only D. simulans and D. willistoni, in addition to D. melanogaster. The effects of ablation were diverse among these species. For example, in D. willistoni, antenna-ablated flies showed cold avoidance, unlike D. melanogaster, suggesting that antennal thermosensing functions to attract flies to cold. The temperature preference behavior is a type of decision-making behavior made by specific brain neurons to stay at or leave the current temperature regime [46]. In D. melanogaster, manipulating subsets of dopaminergic neurons has altered the temperature preference behavior [47]. Hence, it is possible that the difference in the effect of antenna ablation between these two species is caused by a difference in the manner of utilizing temperature information from the antenna for the decision-making behavior rather than a difference in the properties of antennae as temperature sensors.
Unexpectedly, we found no significant contribution of antennae to the temperature preference behavior in 6 of the 11 examined species. This result indicates that these species do not use antennae for regulating the temperature preference behavior but utilize other temperature sensors. Previous studies have shown that the anterior cell (AC) neurons inside the head sense innocuous warmth above 25°C and exert an essential function in the temperature preference behavior in D. melanogaster [19]. In addition, the multiple dendritic (md) neurons in the adult body wall are candidates to sense noxious warmth for the escape behavior in adults, although this has not yet been tested [18]. Therefore, in addition to adult AC cells, md neurons can be considered potential candidates to sense temperature for the temperature preference behavior in these species. As insects commonly have thermosensors in antennae with specific morphologies [48], it is difficult to believe that the antennae of these species do not sense temperature. In these species, the decision-making for temperature preference would not be largely dependent on thermal information from the antennal thermosensors. Drosophila species would make a species-specific decision using multiple sources of thermal information depending on the type of behavior, resulting in their unique life strategy for adaptation to the temperature environment in their habitats.
In this investigation, by conducting comparative studies on 11 Drosophila species, we found that the effects of temperature on behavior are diverse and that there is an interspecific variation in the responses to temperature. This finding suggests that even at the same environmental temperature, different species respond differently to temperature, and it is favorable for some species but unfavorable for others. This phenomenon would be associated with the difference in behavioral output following the sensing of temperature information. Moreover, these differences may be associated with strategies for adaptation to habitat temperature conditions. Understanding the mechanisms responsible for these differences would shed light on the adaptation strategy and evolution of species. This study was able to elucidate the diversity of temperature adaptation in Drosophila species and opened avenues for understanding the underlying mechanisms behind their diverse temperature adaptations.
Fly strains
The following fly strains were used in this study: D. melanogaster (Canton-S, k-aba006), D.
Measurements of locomotor activity and sleep
To measure daily locomotor activities, male adult flies were collected within 4 h after eclosion.
Before being placed into the activity tubes, the collected flies were maintained on standard medium for 1-2 days. Under ice anesthetization, each individual fly was loaded into a monitoring tube (7 mm diameter for D. virilis and 5 mm diameter for the others) filled on one side with 5% sucrose (Wako) and 2% Bacto agar (BD), and activity was evaluated using the Drosophila Activity Monitor (DAM) system (TriKinetics Inc.) under a 12-h/12-h LD cycle (lights on at 8 AM and off at 8 PM every day) at five different temperatures (17°C, 20°C, 23°C, 26°C, and 29°C). After 2 days of acclimation at each recording temperature, locomotor activities were recorded every 1 min for 3 days. The average total daily locomotor activity of an individual fly was calculated from the total counts over the 3 days of each recording. Each sample size is shown in Supplementary Fig. 3. The median value of total daily locomotor activity in each species was calculated from the total daily activities at all experimental temperatures (Supplementary Fig. 1). The daily activity pattern was estimated by summarizing activity counts every 30 min (Fig. 2 and Supplementary Fig. 3). The total activities of the first and second half of the 24 h and of daytime and nighttime were analyzed by summing the counts for the 12 h from 2 AM to 2 PM and from 2 PM to 2 AM (Supplementary Fig. 10), and from 8 AM to 8 PM and from 8 PM to 8 AM (Fig. 3), respectively.
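A minimal Python sketch of these summaries (the study does not publish code; the layout of the `counts` array, its 2 AM starting point, and the synthetic values are our assumptions):

```python
import numpy as np

# One fly's DAM beam-break counts at 1-min resolution for 3 recorded days
# (3 * 1440 values), assumed to start at 2 AM. Values are synthetic.
rng = np.random.default_rng(1)
counts = rng.poisson(1.0, size=3 * 1440)

days = counts.reshape(3, 1440)                # one row per day
daily_total = days.sum(axis=1).mean()         # average total daily activity
pattern_30min = days.reshape(3, 48, 30).sum(axis=2).mean(axis=0)  # 48 bins/day

first_half = days[:, :720].sum(axis=1).mean()    # 2 AM - 2 PM
second_half = days[:, 720:].sum(axis=1).mean()   # 2 PM - 2 AM
print(daily_total, first_half, second_half)
```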
The definition of sleep in D. melanogaster, i.e., an immobile state lasting 5 or more minutes [26], was applied in this analysis (Fig. 4 and Supplementary Fig. 7). We determined sleep behavior based on the activity data recorded using the DAM system. The average total daily sleep duration was calculated from the total sleep duration over the 3 days of each recording. The number of sleep events per day and the average length of a single sleep bout were also calculated using the same datasets.
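Applying that definition to DAM data reduces to finding runs of at least five consecutive zero-count minutes. A sketch of one way to extract sleep bouts (the function and variable names are hypothetical):

```python
import numpy as np

def sleep_bouts(counts, min_len=5):
    """Return the lengths (in minutes) of all sleep bouts, where sleep is
    defined as >= min_len consecutive minutes with zero activity counts
    (the D. melanogaster convention applied to all species here)."""
    immobile = np.asarray(counts) == 0
    # Pad with False so run starts/ends show up as +1/-1 transitions.
    padded = np.concatenate(([False], immobile, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    starts, ends = edges[::2], edges[1::2]
    lengths = ends - starts
    return lengths[lengths >= min_len]

counts = [3, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 0, 0, 1]   # toy 1-min counts
bouts = sleep_bouts(counts)
print("total sleep (min):", bouts.sum(), "| bouts:", len(bouts),
      "| mean bout length:", bouts.mean() if len(bouts) else 0)
```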
Temperature preference assay and antenna ablation experiment
To analyze the temperature preference behavior, we designed the temperature gradient field according to a previous study, with some modifications [49]. In this assay, the air temperature between a plexiglass cover and an aluminum plate was monitored using a FLUKE 52II thermometer with multiple temperature probes (FLUKE 80PK-1) at 4 or 6 points along the temperature gradient field. Before applying the flies, the plexiglass cover was coated with super… To analyze the distribution of flies on the temperature gradient field, the flies were classified into nine temperature ranges based on a method described in previous studies [19,21] (Table 1). To estimate the optimal temperatures and maximum performance of locomotor activity, we fitted 3 or 13 TPC models using the rTPC package for R [51] and chose the best-fitting one based on the lowest Akaike Information Criterion (AIC) value (Supplementary Fig. 2).
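A Python analogue of the TPC-fitting step may help make it concrete (the study itself used the rTPC package in R; the Gaussian model form and all data values below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit one simple thermal performance curve (TPC) to activity-vs-temperature
# data and compute its AIC, which can then be compared across model forms.
def gaussian_tpc(temp, r_max, t_opt, width):
    """Performance = r_max at t_opt, falling off as a Gaussian."""
    return r_max * np.exp(-0.5 * ((temp - t_opt) / width) ** 2)

temps = np.array([17.0, 20.0, 23.0, 26.0, 29.0])
activity = np.array([410.0, 620.0, 700.0, 560.0, 300.0])  # hypothetical

params, _ = curve_fit(gaussian_tpc, temps, activity, p0=[700, 23, 4])
resid = activity - gaussian_tpc(temps, *params)
n, k = len(temps), len(params)
aic = n * np.log((resid ** 2).sum() / n) + 2 * k   # least-squares AIC

print(f"t_opt = {params[1]:.1f} C, r_max = {params[0]:.0f}, AIC = {aic:.1f}")
```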
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Risk of Band Keratopathy in Patients with End-Stage Renal Disease
This study is a retrospective, nationwide, matched cohort study to investigate the risk of band keratopathy following end-stage renal disease (ESRD). The study cohort included 94,039 ESRD on-dialysis patients identified by the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), code 585 and registered between January 2000 and December 2009 at the Taiwan National Health Insurance Research Database. An age- and sex-matched control group comprised 94,039 patients selected from the Taiwan Longitudinal Health Insurance Database 2000. Information for each patient was collected from the index date until December 2011. In total, 230 ESRD patients and 26 controls had band keratopathy (P < 0.0001) during the follow-up period, indicating a significantly elevated risk of band keratopathy in the ESRD patients compared with controls (incidence rate ratio = 12.21, 95% confidence interval [CI] = 8.14–18.32). After adjustment for potential confounders including sarcoidosis, hyperparathyroidism, iridocyclitis, and phthisis bulbi, ESRD patients were 11.56 times more likely to develop band keratopathy in the full cohort (adjusted HR = 11.56, 95% CI = 7.70–17.35). In conclusion, ESRD increases the risk of band keratopathy. Close interdisciplinary collaboration between nephrologists and ophthalmologists is important to deal with band keratopathy following ESRD and prevent visual acuity impairments.
consequent elevated serum calcium levels have long been associated with ESRD 13,14. Additionally, ESRD patients are at a higher risk for ocular problems such as increased intraocular pressure [15][16][17][18] and irritated red eyes 9,19,20, which necessitate long-term use of eye drops associated with band keratopathy. Therefore, it is clinically relevant to determine whether ESRD is a predictor of band keratopathy.
Several previous studies have discussed the association between ESRD and red eyes related to calcific deposits in the conjunctivae or corneas 9,19,20 , but the results of published studies were limited by the small number of patients or the absence of comparative control data. Using a nationwide population-based dataset, we designed a cohort study to investigate the risk of band keratopathy following ESRD in Taiwan. In our cohort study, the ESRD patients were all under dialysis treatment.
Methods
Database. On March 1, 1995, a single-payer National Health Insurance (NHI) scheme was launched in Taiwan, which provides extensive medical care coverage for all residents of Taiwan. About 22.60 million individuals (>98%) of the total Taiwanese population of 22.96 million were enrolled in this program as of 2007. The data for our cohort study were obtained from the Taiwan National Health Insurance Research Database (NHIRD). The NHIRD supplies enciphered patient identification numbers as well as information regarding patient birth date, sex, and admission and discharge dates. It also includes the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis and procedure codes, prescription details, and costs covered and paid by the NHI. Ethical approval and the requirement of informed consent were waived by the Institutional Review Board of Chi-Mei Medical Center, because analysis of this public database does not use identifiable personal information.
Study Design. For each ESRD case, one control without ESRD was selected from the Longitudinal Health Insurance Database 2000 (LHID2000). The LHID2000 is a data subset of the NHIRD that contains the entire claims data of one million beneficiaries (4.34% of the total population) randomly selected in 2000. There was no significant difference in age, sex, or health care costs between this sample group and all national health insurance enrollees. The 94,039 controls were matched by age, sex, and index date. The index date for the ESRD patients was the date of their first dialysis, and the index date for each control was created by matching it with the corresponding ESRD subject's index date. Moreover, controls diagnosed with band keratopathy before the index date were excluded. Each patient was followed up to determine the incidence of band keratopathy until the end of 2011 or censored because of death.
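A toy Python sketch of the 1:1 matching described above (column names, IDs, and the greedy first-match rule are all hypothetical; the actual selection used the NHIRD claims data):

```python
import pandas as pd

# For each ESRD case, draw one unused control with the same age and sex;
# the control inherits the case's index date, as described above.
cases = pd.DataFrame({"id": [1, 2], "age": [55, 62], "sex": ["F", "M"],
                      "index_date": ["2001-03-01", "2004-07-15"]})
pool = pd.DataFrame({"id": [10, 11, 12], "age": [55, 62, 62],
                     "sex": ["F", "M", "M"]})

matched, used = [], set()
for _, c in cases.iterrows():
    hit = pool[(pool.age == c.age) & (pool.sex == c.sex) & ~pool.id.isin(used)]
    if not hit.empty:
        ctrl = hit.iloc[0]
        used.add(ctrl.id)
        matched.append({"case_id": c.id, "control_id": ctrl.id,
                        "index_date": c.index_date})
print(pd.DataFrame(matched))
```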
To identify all patients who developed band keratopathy (ICD-9-CM code 371.43), we tracked every patient from his or her index outpatient visit or hospitalization through December 2011. Demographic data (e.g., age and sex) were recorded. Furthermore, we collected information regarding comorbidities including sarcoidosis (ICD-9-CM code 135), hyperparathyroidism (ICD-9-CM code 252.0, which excludes secondary hyperparathyroidism due to renal disease), iridocyclitis (ICD-9-CM codes 364.0 to 364.3), and phthisis bulbi or degenerated eye (ICD-9-CM codes 360.40, 360.41), because these conditions are critical factors that increase the risk of band keratopathy. In this study, the inclusion criteria for sarcoidosis, hyperparathyroidism, iridocyclitis, and phthisis bulbi were documentation of the condition at least once in the inpatient setting or ≥ 3 times in the ambulatory setting within 1 year before the initial ESRD-on-dialysis medical service date.
Statistical Analysis. SAS 9.4 for Windows (SAS Institute, Inc., Cary, NC, USA) was used in this study.
Pearson chi-square tests were used to compare the demographic characteristics and comorbid disorders between the ESRD and control groups. The incidence rate was calculated as the number of band keratopathy cases identified during follow-up divided by the total person-years (PY) for each group by age, sex, and select comorbidities. Poisson regression analysis was performed to calculate the incidence rate ratio (IRR), which compares the risk of developing band keratopathy between the ESRD and control groups. The adjusted hazard ratio (HR) for developing band keratopathy was calculated using Cox proportional hazards regression analysis. Cumulative incidence rates of band keratopathy were evaluated by Kaplan-Meier analysis, and differences in cumulative incidence curves were analyzed using the log-rank test. Additionally, we subdivided the patients into three age subgroups for further analysis: < 50 years, 50-64 years, and ≥ 65 years. Data are presented as mean ± standard deviation (SD), and 95% confidence intervals (CIs) are provided when applicable. Statistical significance was defined as P < 0.05. These statistical assessments were performed in consultation with a statistical expert.
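A minimal Python sketch of the incidence-rate-ratio step, using the study's headline counts and rates as example inputs. The person-year totals are back-calculated from the reported rates, and the simple Wald interval is our stand-in for the Poisson-regression CI; it approximately reproduces the reported 8.14-18.32:

```python
import numpy as np
from scipy import stats

events_esrd, events_ctrl = 230, 26          # band keratopathy cases
py_esrd = events_esrd / (5.20 / 10000)      # ~442,000 person-years (derived)
py_ctrl = events_ctrl / (0.43 / 10000)      # ~605,000 person-years (derived)

irr = (events_esrd / py_esrd) / (events_ctrl / py_ctrl)
# Wald 95% CI on log(IRR): var = 1/a + 1/b for two Poisson counts.
se_log = np.sqrt(1 / events_esrd + 1 / events_ctrl)
z = stats.norm.ppf(0.975)
ci = np.exp(np.log(irr) + np.array([-z, z]) * se_log)

print(f"IRR = {irr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```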
Results
Demographic Data. Between … (Table 2). In addition, there was a significant difference in the incidence of band keratopathy between the groups (ESRD patients = 5.20/10000 PY; controls = 0.43/10000 PY), and the IRR between the ESRD and control groups was statistically significant (12.21, 95% CI = 8.14-18.32, P < 0.0001; Table 2).
After the two groups were divided by age, we found that ESRD patients < 50 years old had the highest incidence rate (8.37/10000 PY), followed by patients aged 50 to 64 years, and patients ≥ 65 years old. We found significantly higher IRRs for all ESRD age groups compared with their age-matched controls (Table 2).
In the ESRD group, the incidence rates of band keratopathy, from the highest to the lowest, were in the order of patients with phthisis bulbi (61.41/10000 PY), iridocyclitis (25.54/10000 PY), and hyperparathyroidism (3.09/10000 PY). However, the IRR for band keratopathy associated with comorbidities could not be determined because no band keratopathy was observed in patients with sarcoidosis, hyperparathyroidism, iridocyclitis, or phthisis bulbi in the control group (Table 2). Table 3 provides the crude and adjusted HRs for band keratopathy, by cohort, during the follow-up period. After adjusting for age, sex, and select comorbid conditions, ESRD remained an independent risk factor for band keratopathy (adjusted HR = 11.56, 95% CI = 7.70-17.36). The comorbidities that were significant risk factors for band keratopathy in both groups included iridocyclitis (adjusted HR = 4.31, 95% CI = 1.07-17.39, P < 0.05) and phthisis bulbi (adjusted HR = 10.02, 95% CI = 2.48-40.49, P < 0.05) after adjusting for age, sex, and select comorbid conditions.
The Kaplan-Meier survival analyses revealed higher band keratopathy cumulative incidence rates in the ESRD patients than in the control patients, and the log-rank test was also significant (P < 0.001; Fig. 1).
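For readers unfamiliar with the estimator behind Fig. 1, a hand-rolled Python sketch of the Kaplan-Meier curve follows (toy durations and event flags; the study's own analysis was done in SAS):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of event-free survival.
    times: follow-up durations; events: 1 if band keratopathy occurred,
    0 if censored (death or end of follow-up)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    curve, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = (times >= t).sum()
        d = ((times == t) & (events == 1)).sum()
        s *= 1.0 - d / at_risk          # multiply survival by (1 - d/n) at t
        curve.append((t, s))
    return curve

times  = [1.0, 2.5, 3.0, 4.2, 5.0, 6.1, 7.0, 8.0]   # toy years of follow-up
events = [0,   1,   0,   1,   0,   0,   1,   0]      # 1 = band keratopathy
for t, s in kaplan_meier(times, events):
    print(f"t = {t:.1f} y: event-free probability = {s:.3f}")
```

The cumulative incidence plotted in Fig. 1 corresponds to 1 minus this event-free probability.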
Discussion
To the best of our knowledge, our study is the largest-scale population-based study that has been conducted to explore the relationship between ESRD and subsequent band keratopathy. We analyzed 94,039 ESRD patients and 94,039 control subjects. We found that the incidence rate of band keratopathy in ESRD patients was 12.21 times higher than that in controls, and that the relative risk of band keratopathy for patients with ESRD was 11.56 times higher in the full cohort after adjusting for age, sex, sarcoidosis, hyperparathyroidism, iridocyclitis, and phthisis bulbi.
Band keratopathy is a frequent chronic degenerative condition that presents with deposition of grayish to whitish opacities on the corneal surface, most commonly in the interpalpebral zone 11. These opacities are the result of precipitation of calcium hydroxyapatite crystals in the superficial layers of the cornea, including the epithelial basement membrane, basal epithelium, and Bowman's membrane 10. The pathophysiology is multifactorial; besides the well-known chronic ocular conditions such as uveitis and phthisis bulbi, other contributing factors may include elevated serum phosphate levels and increased serum calcium levels that are possibly related to hyperparathyroidism and long-term eye drop use 11. Although many previous reports drew attention to the link between ESRD and red eyes resulting from calcific deposits in the conjunctivae or corneas 9,19,20, there are few studies that directly evaluated the association of band keratopathy and ESRD. Mullaem suggested that band keratopathy, which typically involves amorphous, white, crystalline, and sub-epithelial calcific deposits, is one of the most frequent ocular problems in ESRD patients 21. Our study is the largest nationwide population-based cohort study to investigate the risk of band keratopathy following ESRD in Taiwan. Our findings demonstrate an association between band keratopathy and ESRD. The common pathogenic mechanisms for band keratopathy and ESRD include elevated serum phosphate levels, increased serum calcium levels that are possibly related to hyperparathyroidism, and long-term eye drop use in cases involving elevated intraocular pressure or irritated red eyes; these three conditions are discussed separately as follows. The most well-known pathogenic mechanism common to both conditions is increased serum phosphate and calcium levels. An elevated serum phosphate level is a serious complication affecting ESRD patients receiving hemodialysis 12. Increased serum calcium levels occur in conjunction with secondary renal hyperparathyroidism 13,14. When elevated serum phosphate and calcium levels result in calcium phosphate salt precipitation, ESRD patients are susceptible to band keratopathy associated with the deposition of calcium phosphate salts in the form of microcrystalline hydroxyapatite [20][21][22]. To prevent ongoing hydroxyapatite deposition, elevated levels of phosphate or calcium have to be aggressively controlled.
Another possible pathogenic link between band keratopathy and ESRD is the greater frequency of long-standing eye drop use in ESRD patients with ocular hypertension and irritable red eyes. There is controversy regarding the effect of hemodialysis on intraocular pressure (IOP): although some studies have demonstrated that IOP may decrease or remain stable after hemodialysis 23 , most studies describe an increase in IOP during hemodialysis 15-18 . Various theories about the relationship between elevated IOP and hemodialysis have been postulated; the most widely accepted suggests an influx of volume into the posterior chamber via the ciliary body due to an imbalance in osmolality between the ocular chamber and the blood during hemodialysis 24 . When the intraocular pressure is elevated, ESRD patients usually need long-standing eye drop treatment. Band keratopathy might result from the chronic or excessive use of glaucoma medications with mercury-containing preservatives or from the use of pilocarpine, an ocular antihypertensive agent 11,25 . Another condition related to long-standing eye drop use in ESRD patients is dryness and irritable red eyes related to inflammation from calcific deposits in the conjunctiva 9,19,20 . Band keratopathy might be associated with the chronic use of symptom-relieving medications manufactured with phosphate-containing preservatives 26 .
We found that the incidence of band keratopathy was greater in younger ESRD patients: patients aged ≥ 65 years exhibited the lowest incidence in the ESRD group (Table 2), and age ≥ 65 years was an independent protective factor after adjusting for other confounding factors in both groups (adjusted HR = 0.30, 95% CI = 0.21-0.43, P < 0.05, Table 3). An age-dependent incidence trend was observed in the control group; paradoxically, however, the incidence in ESRD patients aged ≥ 65 years was the lowest. Death censoring may partly explain the higher incidence rate in younger ESRD patients and the low incidence rate in those aged ≥ 65 years: among elderly ESRD patients, a higher proportion may have died before band keratopathy could develop than in the control group.
Band keratopathy is a common and vision-threatening corneal disorder characterized by the deposition of gray to white opacities on the surface of the cornea. Many comorbidities have been associated with band keratopathy, including sarcoidosis, hyperparathyroidism, iridocyclitis, and phthisis bulbi. In this cohort study, we evaluated these comorbidities in ESRD patients and controls and found that iridocyclitis and phthisis bulbi are associated with higher incidences of band keratopathy in ESRD patients than in controls (Table 2) and are significant risk factors for band keratopathy in the cohort (Table 3). Many reports have demonstrated the link between iridocyclitis and band keratopathy 11,27,28 . They suggested that, although the exact mechanism of calcium-phosphate precipitation in the superficial layers of the cornea is unknown, it may result from deposits left by the degeneration and necrosis that accompany chronic inflammation related to iridocyclitis 11,27,28 . ESRD patients with iridocyclitis should be advised to control their inflammation through regular follow-up and treatment by ophthalmologists because of the significant association with subsequent band keratopathy.
Band keratopathy in ESRD is an important interdisciplinary issue, and close collaboration between nephrologists and ophthalmologists is essential for its management. Nephrologists should be aware of the potential for irritation and visual impairment, which typically presents as white, amorphous, subepithelial hydroxyapatite crystalline deposition, in ESRD patients under chronic dialysis. The most important concern for ophthalmologists is evaluating the necessity of treatment in the various presentations of band keratopathy. Although band keratopathy usually does not impair visual acuity or induce irritation, especially in the early stages, there are some indications for intervention if the condition has progressed. The two major indications for treatment are decreased vision, which occurs as the calcific precipitation spreads centrally, and mechanical irritation, which occurs because of broken epithelium or a disrupted corneal surface related to calcium accumulation 11,21 . Multiple therapies have been attempted for band keratopathy, including mechanical debridement to remove the calcific deposits, EDTA chelation to remove the calcium only and keep the corneal surface smooth 29,30 , and excimer laser phototherapeutic keratectomy to remove wide areas of cornea precisely while avoiding trauma to adjacent tissue 31-33 . To avoid additional calcium phosphate deposition, ophthalmologists should exercise caution when prescribing phosphate-containing eye drops to treat irritation 26 , and older mercury-containing eye drops to control elevated intraocular pressure, in ESRD patients with band keratopathy 11,25 . Once the diagnosis of band keratopathy is confirmed by an ophthalmologist, prevention of ongoing hydroxyapatite crystalline deposition by aggressive treatment of the increased serum calcium or phosphate levels in ESRD patients on dialysis is of utmost importance for nephrologists. These modalities include dietary recommendations such as the substitution of animal protein with vegetarian sources 12,34,35 , more frequent and more prolonged dialysis sessions 36,37 , dual binder therapy - the use of two phosphorus-binding medications 38 , drug treatment in patients with secondary renal hyperparathyroidism with agents such as calcitriol analogues 13 or calcimimetic agents 14,39 , and recommending that patients with secondary renal hyperparathyroidism undergo parathyroidectomy. When dealing with band keratopathy in ESRD patients on dialysis, close cooperation between nephrologists and ophthalmologists is important to reduce the risk of further irritation and visual impairment.
There are several strengths in our study. Because it is based on a nationwide, population-based dataset including a large sample of ESRD patients, the study offers increased precision in risk appraisal and high statistical power. In addition, because patients with visual disturbances visit an ophthalmologist rather than a general practitioner in Taiwan, selection bias in referral centers and the chance of misdiagnosis are reduced. Furthermore, this is a cohort study monitoring band keratopathy incidence in the ESRD and comparison cohorts with up to 10 years of longitudinal data. Finally, because sarcoidosis, hyperparathyroidism, iridocyclitis, and phthisis bulbi were taken into account as confounding factors when adjusting the hazard ratio of band keratopathy in ESRD patients, our results are reliable.
There are some limitations in our study. Because the sampled patients' medical history can only be traced back to 1996, we cannot confirm that the controls had no ESRD history before January 1996. Additionally, the diagnoses of ESRD, band keratopathy, and the other comorbid disorders relied on ICD-9 codes, which may lead to disease misclassification. Furthermore, some bias may have been introduced because the insurance claims data did not include laboratory data on serum calcium or phosphate levels, or information regarding vitamin D treatment. Moreover, intraocular pressure changes after hemodialysis remain controversial, since pressure may increase, remain stable, or decrease; the proposed mechanism attributing the increased incidence of band keratopathy to increased use of eye drops for elevated intraocular pressure in ESRD patients is therefore not firmly established. Finally, because sarcoidosis, hyperparathyroidism, iridocyclitis, and phthisis bulbi were absent in the control group, the incidence rate ratios for these comorbidities between ESRD and control patients could not be reliably estimated.
In summary, our study showed that, after adjusting for age, sex, sarcoidosis, hyperparathyroidism, iridocyclitis, and phthisis bulbi, ESRD patients had a significantly higher risk of developing band keratopathy during the follow-up period. The association between ESRD and band keratopathy is plausible given the elevated serum phosphate and increased serum calcium levels related to secondary renal hyperparathyroidism, and the long-term eye drop use in patients with ocular hypertension or irritable red eyes. We recommend that ophthalmologists provide adequate treatment modalities for ESRD patients with band keratopathy, including observation only, avoidance of phosphate-containing or mercury-containing eye drops, mechanical removal, chelation treatment, and phototherapeutic keratectomy. Nephrologists should be aware of the link between band keratopathy and the elevated serum phosphate and increased serum calcium levels related to secondary renal hyperparathyroidism, and should aggressively control elevated phosphate or calcium levels through dietary recommendations, more frequent and more prolonged hemodialysis, dual binder therapy, and medical control (e.g., calcitriol analogues or calcimimetic agents) or recommendations for surgery (e.g., subtotal or total parathyroidectomy) in patients with secondary renal hyperparathyroidism. Close cooperation between nephrologists and ophthalmologists is necessary when dealing with band keratopathy following ESRD to reduce irritation and the development of visual impairment.
Shock Acceleration of High-Energy Cosmic Rays: The Importance of the Magnetic-Field Angle
The physics of particle acceleration by collisionless shocks is addressed using analytic theory and numerical simulations. In this paper we focus on the importance of the angle between the shock normal and the upstream mean magnetic field, θ_Bn, in determining the energy spectrum of the accelerated particles. We show that the acceleration rate is strongly dependent on θ_Bn and is a maximum at perpendicular shocks. Moreover, we demonstrate that for a wide range of reasonable parameters, the acceleration efficiency is weakly dependent on the shock-normal angle. When applied to acceleration at supernova blast waves, we find, therefore, that for any given time interval the highest-energy cosmic rays originate from regions in which the shock moves normal to the mean magnetic field. We also find that the maximum energy is larger than that obtained using the well-known Bohm limit.
Introduction
The origin of high-energy cosmic rays remains an important unsolved problem in astrophysics. It is currently thought that Galactic cosmic rays up to about 10^15 eV are accelerated at shocks driven by supernova explosions via the mechanism of diffusive shock acceleration. In this theory, charged particles are accelerated as they scatter within the converging plasma flow across the shock [1,2,3,4]. The close association of energetic particles with collisionless shocks observed in interplanetary space, as well as at the Earth's bow shock, provides convincing direct evidence that astrophysical shocks accelerate particles to high energies.
For a supernova blast wave moving into an interstellar plasma containing a magnetic field, B_IS, the angle between the unit normal to the shock and B_IS (θ_Bn) can vary along the shock surface, as shown in Figure 1. This can lead to intensity variations of cosmic rays along the shock because, in general, particle transport differs in the directions perpendicular and parallel to the magnetic field. Because the acceleration depends critically on the particle transport normal to the shock front, θ_Bn plays a critical role in determining the resulting energy spectrum of the accelerated particles. Jokipii [5,6] showed that, in general, the rate of energy gain is highest for perpendicular shocks.
The physics of particle acceleration at nearly perpendicular shocks is not as well developed as that for parallel shocks; nevertheless, it is clearly different, as shown in Figure 2. In particular, an important issue has been the well-known injection threshold problem. The problem arises because, until recently, it was assumed that particles move essentially along the lines of force that are convecting through the shock. Therefore, it was thought that there was no means by which low-energy, or suprathermal, particles could encounter the shock several times, which is required for efficient particle acceleration. Here we show that there is actually no such injection problem and, in fact, that injection does not depend strongly on the shock-normal angle. This can be understood in terms of the increased cross-field transport arising from so-called field-line random walk due to the large-scale (of the order of a parsec) turbulent interstellar magnetic field.

Figure 1. Illustration of the supernova shock and the incident magnetic field B_IS.
Below we discuss the physics of particle acceleration at shocks with arbitrary obliquity using simple arguments based on the theory of diffusive shock acceleration. In the following section, we present results from recent test-particle numerical simulations that reveal the weak dependence of the injection efficiency on the shock-normal angle.
The Parker Equation
For charged particles with speed w moving in a plasma with a bulk flow speed U, the particle transport, to first order in U/w, can be well described by the well-known Parker transport equation, given by

$$\frac{\partial f}{\partial t} = \frac{\partial}{\partial x_i}\left(\kappa_{ij}\frac{\partial f}{\partial x_j}\right) - U_i\frac{\partial f}{\partial x_i} + \frac{1}{3}\frac{\partial U_i}{\partial x_i}\frac{\partial f}{\partial \ln p} + Q, \qquad (1)$$

where f(x_i, p, t) is the phase-space distribution of cosmic rays, x_i is the position vector, p is the particle momentum, and t is time. Here κ_ij is the diffusion tensor, U_i is the plasma velocity, and Q is any source term. Written in this form, the effects of gradient and curvature drifts associated with the large-scale magnetic field are contained in the off-diagonal elements of the diffusion tensor (the first term on the right-hand side of (1)).
In (1) we have, in addition to spatial diffusion, convection and acceleration/deceleration caused by the U × B electric field. Note that the electric field does not appear explicitly; it is nonetheless contained in the terms involving the flow velocity U. Equation (1) was first written down by Parker [7] and is the basis of most current work on cosmic-ray transport and acceleration. The equation is a good approximation for energetic particles (U/w ≪ 1) if there is enough scattering by magnetic irregularities that the scattering time is much smaller than the macroscopic time scales of the system and the distribution is nearly isotropic. It applies at shocks (discontinuities in U), and the entire theory of diffusive shock acceleration [1,2,3,4,5] may be obtained from the equation by simply inserting a step-function flow velocity U. From this we see that it applies equally to parallel and perpendicular shocks [5,6]. If the divergence of the flow velocity U is zero, there is no energy change to this order in U/w.
Diffusive Shock Acceleration
The basic physics of shock acceleration is that particles gain energy as they move back and forth across the shock. The energy gain comes from moving against the motional electric field E = −U × B/c. For the case of a parallel shock, the particles move back and forth across the shock and either gain energy as they are scattered by an approaching magnetic irregularity in the upstream flow, or lose energy as they scatter off a retreating fluctuation in the downstream flow. There is a net gain in energy because the gains are larger than the losses, owing to the difference in flow speeds across the shock. For a perpendicular shock, the particle drifts along the shock front due to the gradient in the magnetic field at the thin shock and crosses it many times in its gyromotion. This drift is in the same direction as the electric field, and thus the particles gain energy. Another way of seeing this is to note that at the perpendicular shock, due to its gyromotion, the particle crosses the shock many times, gaining an energy increment each time, within one parallel mean free path. These differences are illustrated in Figure 2.
A quantitative solution can be obtained by solving (1) for a shock-like discontinuity. The usual approach is to solve (1) in the upstream and downstream plasma separately, and then match the solutions at the shock. Assuming that the shock is located at x = 0, it is readily found that the steady solution to (1) for a one-dimensional planar shock is given by

$$f(p) \propto p^{-3r/(r-1)}\, H(p-p_0), \qquad (2)$$

where r is the ratio of the downstream to upstream plasma density, p_0 is the injection momentum, and H(p) is the Heaviside step function; downstream of the shock the distribution is spatially uniform, while upstream it decays as

$$f(x,p) = f(0,p)\,\exp\!\left(\frac{U_1 x}{\kappa_{xx}}\right), \qquad x<0.$$

Here κ_xx is the component of the diffusion tensor normal to the shock. In terms of θ_Bn, the acute angle between the shock normal and the mean magnetic field, $\kappa_{xx} = \kappa_\perp \sin^2\theta_{Bn} + \kappa_\parallel \cos^2\theta_{Bn}$.
The key result is that the downstream spectrum depends only on the shock compression ratio! In the limit of a strong shock, where r → 4, the momentum dependence becomes f(p) ∝ p^−4, which corresponds to an energy spectrum j = p^2 f ∝ p^−2, not far from the observed Galactic cosmic-ray spectrum at relativistic energies.
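As a quick numerical companion to these relations - our own illustration, not code from the paper - the sketch below evaluates the spectral index q = 3r/(r − 1) and the obliquity-dependent diffusion coefficient κ_xx defined above.

```python
# Minimal sketch (ours) of the standard DSA relations quoted above: the
# downstream spectral index depends only on the compression ratio r, and the
# diffusion coefficient normal to the shock depends on the obliquity theta_Bn.
import numpy as np

def spectral_index(r: float) -> float:
    """Power-law index q in f(p) ~ p^-q for compression ratio r."""
    return 3.0 * r / (r - 1.0)

def kappa_xx(kappa_par: float, kappa_perp: float, theta_Bn: float) -> float:
    """Diffusion coefficient normal to the shock; theta_Bn in radians."""
    return kappa_perp * np.sin(theta_Bn) ** 2 + kappa_par * np.cos(theta_Bn) ** 2

print(spectral_index(4.0))               # 4.0: f(p) ~ p^-4 for a strong shock
print(kappa_xx(1.0, 0.02, np.pi / 2))    # at a perpendicular shock, kappa_xx = kappa_perp
```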
Acceleration Rate
Of course, the spectra observed in space are power laws only up to a particular energy. At higher energies they roll over, and their dependence on energy is more complicated. There are a number of causes of the spectral rollover, including free-escape losses, the finite size of the shock (see below), and time dependence. Of course, the power law does not arise instantaneously at the shock; acceleration to high energies takes time. Thus, one needs to solve the time-dependent transport equation (1) for particles injected at a momentum p_0 at t = 0. The result reveals that the spectrum above p_0 is the same power law given in (2), but with a rollover at a cutoff momentum p_c. This cutoff increases with time at the acceleration rate ν_acc ∝ U_1^2/κ_xx. Thus, taking ν_acc,∥ to be the acceleration rate at a parallel shock (θ_Bn = 0), and ǫ = κ_⊥/κ_∥, we obtain

$$\frac{\nu_{\rm acc}}{\nu_{{\rm acc},\parallel}} = \frac{1}{\cos^2\theta_{Bn} + \epsilon\,\sin^2\theta_{Bn}}. \qquad (4)$$

Equation (4) is plotted as a function of θ_Bn for the case ǫ = 0.02 in the left panel of Figure 3(A). Clearly the acceleration rate is a maximum at a perpendicular shock.
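The dependence in (4) is easy to evaluate; the short sketch below (ours) reproduces the behavior plotted in Figure 3(A) for ǫ = 0.02, confirming the 1/ǫ = 50-fold enhancement of the acceleration rate at a perpendicular shock.

```python
# Sketch (ours) of Eq. (4): acceleration rate relative to a parallel shock
# as a function of the shock-normal angle, for epsilon = kappa_perp/kappa_par.
import numpy as np

def rate_ratio(theta_Bn, eps=0.02):
    return 1.0 / (np.cos(theta_Bn) ** 2 + eps * np.sin(theta_Bn) ** 2)

theta = np.linspace(0.0, np.pi / 2, 91)   # 0 (parallel) to 90 deg (perpendicular)
ratio = rate_ratio(theta)
print(ratio[0], ratio[-1])                # 1.0 at parallel, 1/eps = 50 at perpendicular
```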
It is interesting to note that the acceleration rate at a perpendicular shock is larger than the result for Bohm diffusion, which was previously thought to correspond to the smallest possible diffusion coefficient [8]. This can be shown by comparing the acceleration time for Bohm diffusion with that at a perpendicular shock: writing κ_∥ = η κ_Bohm, with η = λ_∥/r_g, and κ_⊥ = ǫκ_∥, their ratio is

$$\frac{\tau_{\rm Bohm}}{\tau_{{\rm acc},\perp}} = \frac{\kappa_{\rm Bohm}}{\kappa_\perp} = \frac{1}{\eta\,\epsilon}.$$

For reasonable parameters, this quantity is larger than unity for the highest-energy particles (note that η is expected to decrease with energy according to quasi-linear theory, while ǫ is independent of energy [9]). Thus, for a given time interval, higher energies can be achieved by acceleration at a perpendicular shock than predicted using Bohm diffusion.
Effects of Spherical Geometry
The previous discussion has considered a planar shock. This is a good approximation if the scale of variation of the energetic particles upstream of the shock, L_cr ≈ κ_nn/U_1 (where κ_nn is the component of the diffusion tensor in the direction of the unit normal to the shock), is significantly less than the shock radius, r_sh. If L_cr is of the order of r_sh or greater, the relevant scale becomes approximately r_sh. Also, the accelerated particles may escape from the shock along the magnetic field with diffusion coefficient κ_∥, the relevant scale length again being r_sh. This gives a loss time

$$\tau_{\rm loss} \sim \frac{r_{\rm sh}^2}{\kappa_\parallel}.$$

But in the quasi-perpendicular part of the shock the particles are accelerated on a time scale τ_acc = 1/ν_acc,⊥ given, to order of magnitude, by

$$\tau_{\rm acc} \sim \frac{\kappa_\perp}{U_{\rm sh}^2}.$$

For acceleration to proceed, clearly we must satisfy τ_acc < τ_loss (8), or, putting in numbers, setting κ_⊥ = ǫκ_∥, and writing U_sh = 10^9 cm/s and r_sh = r⋆_pc parsecs, we obtain

$$\sqrt{\epsilon}\,\kappa_\parallel < 6.2\times10^{27}\, r^\star_{\rm pc}\ {\rm cm^2/s}.$$

For ǫ = 0.02 (as before) and r⋆_pc = 1, this condition gives κ_∥ < 4.4 × 10^28 cm^2/s, which can be satisfied even with the ambient value of κ_∥ estimated from secondary cosmic rays, of the order of 10^28 cm^2/s [10].
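As a quick check of the numbers - our own arithmetic, with the O(1) factors absorbed into the order-of-magnitude estimates above - the sketch below evaluates the bound on κ_∥ for U_sh = 10^9 cm/s.

```python
# Sketch (ours): order-of-magnitude check of sqrt(eps) * kappa_par < U_sh * r_sh.
# The text's coefficient (6.2e27 r_pc cm^2/s) absorbs O(1) factors that the
# ~ estimates above do not fix, so we expect agreement only to a factor of a few.
PC_CM = 3.086e18  # one parsec in cm

def kappa_par_bound(eps: float, u_sh: float = 1e9, r_pc: float = 1.0) -> float:
    """Upper bound on kappa_par (cm^2/s) from tau_acc < tau_loss, up to O(1) factors."""
    return u_sh * r_pc * PC_CM / eps ** 0.5

print(f"{kappa_par_bound(0.02):.1e} cm^2/s")  # ~2.2e28, same order as the text's 4.4e28
```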
Injection Problem
We have shown that perpendicular shocks are more rapid accelerators of particles and can probably account for acceleration of cosmic rays to 10 15 eV, and perhaps a bit beyond this, at a supernova blast wave. However, it is widely thought that perpendicular shocks are inefficient accelerators because of the well-known injection threshold problem. We now show that this is not the case.
The main assumption in diffusive shock acceleration is that the pitch-angle distribution is nearly isotropic. By requiring the diffusive streaming anisotropy to be small, one can readily derive an expression for the "injection velocity," w_inj (cf. [9]). The most general expression involves κ_⊥ and κ_∥, the components of the diffusion tensor perpendicular and parallel to the mean magnetic field, respectively, as well as the antisymmetric component of the diffusion tensor. For the case in which the correlation scale of the turbulent magnetic field is much larger than the gyroradius of the particles of interest, it has been shown from numerical simulations that κ_⊥/κ_∥ is independent of energy [9]. Thus, taking ǫ = κ_⊥/κ_∥ ≪ 1 and η = λ_∥/r_g, where λ_∥ = 3κ_∥/w is the parallel mean free path, (10) can be rewritten in terms of ǫ, η, and θ_Bn alone (Eq. 11), with w_inj,∥ = 3U_1 the injection velocity for a parallel shock. Shown in the right panel of Figure 3(B) is the solution to (11) for η = 100 and ǫ = 0.02. The dashed curve is sec θ_Bn, the scatter-free approximation, which is clearly invalid for the case of a turbulent magnetic field. Note that at low energies the injection velocity at a perpendicular shock approaches 3U_1, which is the same as that obtained for a parallel shock [11]! Thus, we can conclude that enhanced motion normal to the mean field due to field-line random walk significantly decreases the injection-velocity threshold for acceleration, and the theory predicts that there should be no injection problem at nearly perpendicular shocks. We therefore conclude that perpendicular shocks are both efficient and rapid accelerators of charged particles, and are important in producing high-energy cosmic rays in a wide variety of astrophysical plasmas.
Test-Particle Simulations
We now consider non-diffusive test-particle numerical simulations to better address the physics of acceleration at low energies. This work has recently appeared in the Astrophysical Journal [12]. In these calculations, the trajectories of an ensemble of test particles are integrated by numerically solving the Lorentz force on each particle using pre-specified electric and magnetic fields. The mean magnetic field makes an angle θ_Bn with respect to the shock-normal direction. Superimposed on this is a fluctuating component drawn from a pre-specified power spectrum resembling the usual Kolmogorov spectrum. The correlation scale of the turbulent magnetic field is taken to be 2000 U_1/Ω_i, where U_1 is the upstream flow speed and Ω_i is the ion cyclotron frequency. Both components satisfy Maxwell's equations. Test particles (protons) are released with an energy of 3 times the plasma-ram energy in the local fluid frame just behind the shock front. Each particle's trajectory is integrated until it escapes downstream by convection (based on a probability-of-return criterion) or reaches an arbitrary high-energy cutoff (taken to be 2 × 10^5 times the plasma-ram energy). The results shown in Figure 4 indicate that the injection energy, and therefore the acceleration efficiency, does not have a strong dependence on the shock-normal angle. However, as shown in the right panel of Figure 4, for any given time interval available to accelerate the particles, perpendicular shocks produce the highest-energy particles. This is because, as discussed above, the acceleration rate is strongly dependent on the shock-normal angle, provided κ_⊥ ≪ κ_∥. This is discussed further in Giacalone [12].
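To give a concrete flavor of such test-particle integrations, the following minimal sketch - ours, with illustrative code units and uniform fields, far simpler than the turbulent fields of Giacalone [12] - advances a single proton with the standard Boris scheme in prescribed E and B fields.

```python
# Minimal Boris-push test-particle integrator (ours, illustrative).
# A proton gyrates in a uniform B field with the motional field E = -U x B;
# code units with charge-to-mass ratio 1.
import numpy as np

Q_M = 1.0  # charge-to-mass ratio in code units

def boris_step(x, v, E, B, dt):
    """Advance position x and velocity v by one time step dt (Boris scheme)."""
    v_minus = v + 0.5 * Q_M * E * dt            # first half electric kick
    t = 0.5 * Q_M * B * dt                      # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)    # rotate around B
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * Q_M * E * dt         # second half electric kick
    return x + v_new * dt, v_new

B = np.array([0.0, 0.0, 1.0])                   # uniform field along z
U = np.array([0.1, 0.0, 0.0])                   # plasma flow along x
E = -np.cross(U, B)                             # motional electric field

x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(2000):
    x, v = boris_step(x, v, E, B, dt=0.05)
print(x, np.linalg.norm(v))                     # gyration superposed on E x B drift
```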
Self-Consistent Hybrid Simulations
Recently, Giacalone [13] performed massive-scale two-dimensional hybrid simulations of perpendicular shocks propagating into a turbulent upstream magnetic field. It was shown that a fraction of the thermal particles encountering the shock are accelerated to high energies. The physics of this process is similar to that described above; however, the source of the high-energy particles is the thermal population directly, which had not been seen in previous self-consistent plasma simulations. It has long been known that a fraction of thermal ions are specularly reflected by the shock and begin to gyrate within the shock ramp before becoming thermalized downstream. For the case in which the shock moves into an upstream region containing large-scale magnetic fluctuations, some of these ions can move upstream along the lines of force before returning to the shock. These ions can gain considerable energy because they achieve multiple interactions with the shock.
The acceleration efficiency in these large-scale hybrid simulations is difficult to estimate because the spatial domain is still rather limited by computational resources. However, it was estimated that the efficiency is probably comparable to that obtained for a parallel shock, or about 10-20% [14].
Summary
We have addressed the physics of charged-particle acceleration by shocks. We have shown that, for reasonable parameters, perpendicular shocks are as efficient as parallel shocks in accelerating particles to high energies, and that for these same parameters perpendicular shocks are much more rapid accelerators. Thus, we conclude that perpendicular shocks are important sites of acceleration and can produce high-energy cosmic rays in a wide variety of astrophysical plasmas. Application of these ideas to a supernova shock suggests that the standard energy limits obtained by considering acceleration at a parallel shock must be reconsidered. It seems likely that energies up to the Galactic cosmic-ray knee at ∼ 10^15 eV are attainable.
Acknowledgements
This work was supported in part by NASA under grants NAG5-7793, NAG5-12919, and NNG04GA79G, and by NSF under grants ATM0327773 and ATM0447354.
The trickle down from environmental innovation to productive complexity
We study the empirical relationship between green technologies and industrial production at very fine-grained levels by employing Economic Complexity techniques. Firstly, we use patent data on green technology domains as a proxy for competitive green innovation and data on exported products as a proxy for competitive industrial production. Secondly, with the aim of observing how green technological development trickles down into industrial production, we build a bipartite directed network linking single green technologies at time $t_1$ to single products at time $t_2 \ge t_1$ on the basis of their time-lagged co-occurrences in the technological and industrial specialization profiles of countries. Thirdly, we filter the links in the network by employing a maximum-entropy null model. In particular, we find that the industrial sectors most connected to green technologies are related to the processing of raw materials, which we know to be crucial for the development of clean energy innovations. Furthermore, by looking at the evolution of the network over time, we observe that more complex green technological know-how requires more time to be transmitted to industrial production, and is also linked to more complex products.
Introduction
The impact of human systems of production and consumption on the environment is increasingly at the center of public debate [1,2,3]. As countries face the transition to a more sustainable economy, they will need to take advantage of new business opportunities and identify profitable entry points in which they can compete in emerging green markets. In particular, there is broad consensus that green technology development will play a crucial role in sustaining this process as well as in addressing climate change [4,5]. These are complex and multi-faceted phenomena, and the reductionist view of general-equilibrium economics will not be able to disentangle the underlying mechanisms and configurations [6]. We thus argue that a framework rooted in the Economic Complexity (EC) literature is better suited to account for the dynamic nature of the socio-economic transformations and structural change that the sustainability transition will entail, as some exploratory but promising attempts have shown [7,8,9,10]. In the present paper we propose a novel application of the EC toolbox that allows us to provide a multilevel analysis of the trickle down from single green technological innovations, as proxied by patenting activity in climate change adaptation and mitigation technologies (CCMTs), to industrial production at the level of single products, as proxied by export data [11]. In order to do so, we draw from studies on the coherence of firm-level patenting [12,13], on the product space and multi-layer networks [14,15,16], and from the work of Sbardella et al. [10], Napolitano et al. [9] and Barbieri et al. [7], who proposed measures of Economic Fitness and technology space based on green technologies. In more detail, we build a network linking single CCMTs, identified through the Y02 Cooperative Patent Classification (CPC) technology class, to single exported products by contracting over the geographical dimension two bipartite networks, connecting countries with green technologies at time t_1 and countries with exported products at time t_2 ≥ t_1 respectively, with a time lag between these two layers of ∆T ≡ t_2 − t_1 (which can also be zero). This enables us, firstly, to identify the co-occurrences in the same country of competitive patenting and export; secondly, to assess the statistical significance of the co-occurrences via an ad hoc maximum-entropy null model [17]; and finally, to define a green technology-product bipartite network, where each link represents the (statistically significant) probability that being proficient in a green technology τ at time t_1 will lead to the successful export of product π at time t_2. An important feature of the network is its time dependency: the direction and magnitude of the information flow can change in time, especially when considering different lags between the two original bipartite networks. Each link from a green technology to an exported product highlights the fact that they share similar underlying technological and productive capabilities, therefore indicating a high probability of jumping from the green technology to the linked product. Focusing on the complementarity and interrelation between green technological development and specific production lines allows us to identify the green footprint of each product and to target specific areas of potential in the green race, providing a valuable external validation for the connection of single products to environmentally relevant processes that would otherwise not be detectable.
As mentioned above, the methodology we propose draws from the EC literature and in particular is based on the Economic Fitness and Complexity (EFC) approach [18,19,15]. EFC is part of the burgeoning literature on EC [20,21] and is a multidisciplinary approach to economic big data in which the informational content of different types of empirical networks is maximized by using ad hoc algorithms that optimize the signal-to-noise ratio. It has proved highly successful in forecasting [22] as well as explaining [23] economic growth, and was recently adopted by the World Bank [24] and the European Commission [25]. By combining insights from the evolutionary [26,27] and structuralist approaches [28,29] in economics, EC describes the economy as an evolutionary process of globally interconnected ecosystems and, in a departure from standard economic views, goes beyond aggregate indicators and measures of productive inputs. It considers instead a more granular and structural view of the productive capabilities of an economy, emphasizing the importance of specialization patterns for long-run growth [30,31,20]. One of the most successful fields of application of the EC framework has been the study of local or national innovation systems. Looking through the lens of EC at the geographical distribution, quality and relatedness of the innovative activities in which economic actors specialize, as proxied by patent data, allows one to characterize firms [12,32,33], regions or cities [13,34,35,36], as well as to uncover emergent technology patterns at different scales of analysis [37]. Recently, some promising attempts to draw insights from the EC literature to analyse environmental issues have been put forth, with a special focus on environmental products [38,39,8] and technologies [7,9,40,10], setting the basis for a study of the productive and technological capabilities that are relevant to the green economy. Bearing in mind the benefits and the shortcomings of using patent data to study technological innovation [41,42,43], the choice to study green patenting is motivated by the broad consensus among academics and policy makers that accelerating the development of far-reaching green technologies and promoting their global application are crucial steps, albeit not the only ones, towards containing and preventing greenhouse gas (GHG) emissions and implementing the sustainability transition [44,4,5]. Despite being a relatively recent phenomenon still at an early stage of the life-cycle [45,46], over recent years we have witnessed a great acceleration in the development of green technologies, especially in the energy and transport areas [44]. These technologies show distinctive features with respect to non-green ones: they are heterogeneous, encompassing many domains of know-how [47] across different geographical areas [10], but are linked in non-trivial ways to the pre-existing knowledge base [45,7]. However, it is important not to disregard the intrinsic limits and difficulties of a "big technological fix" [48,49]: science and technology can indeed provide effective tools to tackle the climate crisis, but they will be the more effective the more they are accompanied by a project of radical transformation of current production and development models [50,1]. Within this context, our findings allow several considerations to be made.
First, and somewhat surprisingly, the products with the highest green technology footprint - i.e., those most connected to green technologies in the network - concern the export of raw material products, such as mineral, metal and chemical products. Their persistent presence and importance in our network resonate with the literature on the raw material requirements that the green transition will entail [51,52,53,54]. In fact, materials like lithium, cobalt, indium, nickel and many others are key inputs for several green technologies, in particular for those related to renewable energy and electric mobility. Hence, to deal with the climate and environmental crisis, the extent to which an increase in the development of green technologies could affect mineral demand will need to be carefully taken into consideration when countries or international organisations [55,1] take action or reflect upon future scenarios. Among the goods that according to our analysis are significantly related to green technologies, we also find various products related to the export of animal and vegetable products - which are mostly connected to technologies for GHG capture and storage - and machinery and electrical products - which mainly show connections with CCMTs in information and communication technologies. Finally, another key result of our analysis is that the links in the green technology-exported product network change when we increase the time lag between green patenting and product export. When passing from the simultaneous observation of the two network layers to a 10-year time lag between them, we observe an increasing number of links between complex green technologies and complex products, suggesting that more complex green technological know-how requires longer to unfold into industrial production and to enter into connection with more sophisticated production lines.
Results
As mentioned above, the aim of this paper is to leverage statistically validated networks to explore the connections between green technologies and exported products, i.e. the trickle down from green technology innovation to industrial production. Each link between a green technology and a product not only indicates that similar capabilities are required to be competitive in both, but also that having a comparative advantage in the green technology is a good predictor for the development and export of the product. We compute the validated links for two different aggregations of the data on exported products, moving from a broader level of description - consisting of 97 so-called product chapters, labeled with 2-digit codes - to a more detailed one - consisting of 5053 product subheadings, labeled with 6-digit codes. Moreover, we are able to assess the evolution of the green technology-product network by taking into account the effect of a time lag of 10 years between the development of green technologies and the export of products.
Aggregated analysis
In order to build the multi-layer network in which green technologies are linked to exported products, we start by considering two binary networks: the first connects countries to the green technologies they patent competitively, the second connects countries to the products they export competitively. By summing over the geographical dimension we then build the so-called Assist Matrix [16,15], which is the adjacency matrix of the multi-layer network connecting green technologies to exported products, in the following way:

$$B_{\tau\pi}(t_1,t_2) = \frac{1}{u_\tau(t_1)} \sum_c \frac{M_{c\tau}(t_1)\,M_{c\pi}(t_2)}{d_c(t_2)},$$

where the M matrices define the bipartite networks in which countries are linked to the green technologies or exported products in which they have a comparative advantage (see Methods). That is, we are counting suitably normalized co-occurrences, the normalization factors being the product diversification d_c(t_2) of country c at year t_2 - i.e. the number of products included in the export basket of that specific country - and the ubiquity u_τ(t_1) of the green technology τ at year t_1 - i.e. the number of countries that are patenting in that specific technological sector. The resulting green technology-product links are then statistically validated by using the Bipartite Configuration Model [56,17]; the details of the procedure can be found in the Methods section. We start our analysis by considering simultaneous normalized co-occurrences, that is, with a time lag ∆T ≡ t_2 − t_1 = 0 between the two network layers. Firstly, we investigate the links between green technologies and exported products at the 2-digit aggregation level. Figure 1 represents the adjacency matrix of the green technology-product network at a 95% statistical significance, where we find 46 significant links in total (i.e. 46 green rectangles in the figure). This figure allows us to provide some initial qualitative insights into which green technologies and exported products are connected and which are not. As regards green technologies we note that, although not uniformly, all technology sub-classes (see Table 1 for CPC Y02 code descriptions) have some links to products and are present in the network. The same cannot be said for the exported product layer: some 2-digit product sections are almost completely disconnected, including e.g. Foodstuffs, Plastics/Rubbers, Leather and Textiles, while others have a considerable number of links. In particular, products like Mineral fuels, Nickel, Lead, and Organic and Inorganic chemicals are highly connected with green technologies such as technologies for adaptation to climate change (Y02A) and CCMTs in information and communication technologies (Y02D), indicating that a relatively high number of countries are active in both. This hints at an overlap between the green technological know-how and the productive capabilities needed for being proficient in both, suggesting not only that countries that patent in technology sub-classes such as Y02A and Y02D are more likely to export raw material products, but also that different types of metals and chemicals are highly connected to R&D in CCMTs, so that new sustainable avenues in their production could be explored. The topic of raw material products and a specific case study will be discussed in more detail below. In Fig. 2 we offer an alternative representation showing the directed network between green technologies and exported products, with node size proportional to node degree.
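A minimal sketch of this construction is shown below, assuming binary country × technology and country × product matrices with aligned country rows; the variable names and random test data are ours, and the subsequent statistical validation with the Bipartite Configuration Model is not included.

```python
# Sketch (ours) of the time-lagged Assist Matrix: normalized co-occurrences of
# competitive green patenting (at t1) and competitive export (at t2).
import numpy as np

def assist_matrix(M_g: np.ndarray, M_p: np.ndarray) -> np.ndarray:
    """B[tau, pi] = (1/u_tau) * sum_c M_g[c, tau] * M_p[c, pi] / d_c."""
    d = M_p.sum(axis=1, keepdims=True)   # diversification of each country at t2
    u = M_g.sum(axis=0, keepdims=True)   # ubiquity of each green technology at t1
    d[d == 0] = 1.0                      # guard against empty export baskets
    u[u == 0] = 1.0                      # guard against unpatented technologies
    return (M_g / u).T @ (M_p / d)

rng = np.random.default_rng(0)
M_g = (rng.random((47, 44)) > 0.7).astype(float)   # 47 countries x 44 technologies
M_p = (rng.random((47, 97)) > 0.7).astype(float)   # 47 countries x 97 2-digit products
print(assist_matrix(M_g, M_p).shape)               # (44, 97): technology x product
```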
The size of the edges also varies between links: each edge is thicker or thinner according to the corresponding Assist Matrix entry. The network representation permits a clear distinction between the disconnected components (such as the two nodes relative to air transport at the bottom left) and the large connected component in the center. For instance, it is interesting to notice the energy-related cluster in the left portion of the plot, where green technologies aimed at improving efficiency in computing, in wire-line and wireless communication networks, and in electric power management are linked to the export of raw material products and of optical and electrical products, which are important inputs for this kind of technology.

Table 1. Y02 technology sub-classes [57]. The first column reports the CPC code identifying the sub-class; the second column reports the corresponding description.
Fine-grained connections
We move forward in the analysis by considering the 5053 exported products present in the classification at the 6-digit aggregation level. Increasing the level of data breakdown reveals the potential of our methodology, which can be easily applied at any level of data aggregation and, when applied to fine-grained information, can provide very specific insights. Figure 3 represents the entire bipartite green technology-product network. The dimension of the nodes is proportional to their degree; the green ones correspond to green technologies, while all the others correspond to exported products and are coloured according to the product sections they belong to (see Table 2). We notice that, in line with the 2-digit product case, almost all green technologies are present: indeed, 39 (out of the total 44) appear in the network. This means that almost all green technologies are connected to the production of at least one product. However, depending on where the green nodes are placed in the network, a green technology may be more or less integrated into the production system as a whole. More specifically, we can see that the periphery of the network is dominated by technologies related to services and transport, while the core of the network contains technologies belonging to sub-classes like Y02A, which covers technologies for adaptation to the adverse effects of climate change in human, industrial (including agriculture and livestock) and economic activities, and Y02W, which covers CCMTs related to waste management.
In Table 2 we collect some descriptive information on the distribution of product nodes and edges in the network. In more detail, products belonging to primary sectors such as animal and vegetable products show many connections with green technologies. In particular, the links we observe are between green technologies and the export of meat, fish, milling-industry products and miscellaneous grains. All of these are largely connected with Y02A - especially with Y02A 40, adaptation technologies in agriculture, forestry, livestock or agroalimentary production, and Y02A 50, adaptation technologies in human health protection - and with technologies for the capture, storage, sequestration or disposal of GHG, i.e. Y02C. This is consistent with the high level of pollution and emissions for which the agricultural sector is accountable [58]. Finally, in line with the results obtained in the 2-digit product case, the subheadings belonging to the minerals, chemicals and metals product sections are confirmed to be highly connected to green technologies. We elaborate on this by focusing on the export of a specific product in the following.
A case study: cobalt
An interesting example in our green technology-product network is the export of cobalt and other intermediate products of cobalt metallurgy. The layout of Figure 4 highlights which technologies are significant requirements for the successful export of cobalt, with a level of confidence larger than 95%. In the figure, three red concentric circles delimit the 99.9%, 99% and 95% levels of significance; the blue peaks exceeding one of these circles denote that the export of cobalt is linked, at the corresponding level of significance, with the green technology labeled around the circular border. In particular, cobalt export is linked with technologies for adaptation to climate change (Y02A), technologies related to transportation (Y02T) and waste treatment (Y02W), technologies for energy generation, transmission and distribution (Y02E), and CCMTs in information and communication technologies (Y02D) and in the production or processing of goods (Y02P). The cobalt example further reinforces what we clearly observe across all the results of the analysis, namely a consistent presence of raw materials among the exported products most linked to green technologies. This is not surprising: in fact, an emerging literature on the topic is trying to estimate the mineral intensity of green technologies and to forecast how their spread will shape mineral demand in the years to come [52,53,59,54]. In particular, cobalt is considered a high-impact mineral for the clean energy transition; indeed, to meet expected future demand its production needs to increase to nearly 500% of 2018 levels by 2050 [52]. Cobalt is a key element in energy storage technologies, which are crucial to a low-carbon transition for two main reasons: they are used in the transport sector to power electric vehicles, and they are needed to store energy from intermittent renewable sources, such as solar photovoltaics and wind. Given that 64% of the global cobalt supply comes from the Democratic Republic of Congo [60], the risks associated with meeting its demand - which will rise if certain climate targets are to be met - and the cross-cutting way in which it is used in green technologies have led to cobalt being placed on the European Commission's list of critical raw materials [51], which includes materials considered critical for their supply risk and economic importance. The list is updated every three years, and cobalt has been in it since its first version, published in 2011 [61]. It is worth noting that we do not have data on green patents for the Democratic Republic of Congo. However, even though the main world supplier is missing, we still observe many connections between cobalt and cobalt metallurgy products and green technologies. In particular, these connections arise from the co-occurrences of several green technologies and cobalt product exports in countries like Australia, Belgium, Canada, Finland, Norway, Russia and South Africa, which are all important producers of cobalt and refined cobalt [62].
Connections over a 10-year horizon
With the aim of analysing whether the spectrum of green technologies needed to gain a comparative advantage in a variety of productive sectors changes over time, here we explore how the links between green technologies and exported products change, both in qualitative and quantitative terms, moving from a time lag between the green technology and exported product layers of ∆T ≡ t 2 − t 1 = 0 to ∆T = 10.
In fact, our analysis can also be conducted by considering different values of ∆T, allowing for a dynamic perspective on the green technology-production nexus. When considering ∆T = 10, from a quantitative point of view we observe a slight increase in the total number of links, for both 2-digit and 6-digit products (from 46 to 60 links in the 2-digit case and from 2166 to 2354 links in the 6-digit case). This finding is coherent with the results presented in Pugliese et al. [16], in which the authors show that technological advancements on average anticipate export. The increase of roughly 10% in the number of links suggests that green technologies become better integrated into the production process after a ten-year digestion period.
Regarding possible differences in the properties of the linked technologies and products for the two time lags, in Fig. 5 we plot the cumulative increment in the number of links for both green technologies and exported products. In particular, on the x-axis of the two plots we rank green technologies and exported products by increasing complexity, computed through the Economic Fitness & Complexity (EFC) algorithm [18] (see the Supplementary Information [Economic Fitness & Complexity algorithm]). The blue lines plot the cumulative difference between the number of links that each activity has for ∆T = 10 and for ∆T = 0. What emerges from the two plots is particularly significant: the new links that appear when the time lag is increased involve more complex products and also more complex green technologies. Therefore, it is likely that the more complex potential spillover effects on economic production deriving from the development of a green technology will arise at a later stage. This is in line with the idea that more complex green technological know-how requires more time to be transmitted to the productive sectors. Moreover, this finding is in agreement with Barbieri et al. [45,47], who study the relationship between the green and non-green knowledge bases and argue that green technologies are generally complex and have a heterogeneous development process involving different domains of know-how.

Figure 5. In each panel, green technologies and exported products are ordered (not labeled) in ascending order of complexity ranking. The labels "25%, 50% and 75%" delimit the first, second and third quartiles of the complexity ranking (moving from the last position to the first). If the y-value is below 0 (dashed red line), the cumulative number of links up to the corresponding green technology or product on the x-axis is higher for ∆T = 0; if it is above 0, the cumulative number of links is higher for ∆T = 10.
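A minimal sketch of this cumulative comparison is given below; it assumes two validated binary technology × product adjacency matrices whose rows are pre-sorted by increasing complexity, a data layout of our own choosing.

```python
# Sketch (ours) of the cumulative link-count comparison of Fig. 5, given two
# validated binary adjacency matrices A0 (dT = 0) and A10 (dT = 10) whose rows
# are already sorted by increasing technology complexity.
import numpy as np

def cumulative_link_difference(A10: np.ndarray, A0: np.ndarray) -> np.ndarray:
    """Cumulative (dT = 10 minus dT = 0) number of links along the ranking."""
    return np.cumsum(A10.sum(axis=1) - A0.sum(axis=1))

# Values above 0 mean that, up to that complexity rank, the dT = 10 network
# has accumulated more links than the dT = 0 network.
```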
Discussion
To address the climate crisis, it is necessary to change the way economies have grown and developed in recent decades, as this has led to an overall ecological overshoot. To overshoot means that humanity is beyond the limits that the planet imposes on our economy: we are using more resources and producing more waste than can be regenerated and absorbed without consequences [63]. Different approaches can be adopted to steer economies onto a more sustainable path. For example, one strand of research focuses on the analysis and development of green growth policies, which aim at promoting economic growth while mitigating its environmental impact through the decoupling of growth and greenhouse gas (GHG) emissions [44]. In contrast, the literature on degrowth argues that such decoupling is not feasible, and that a rethinking of consumption and production patterns is necessary to address the climate crisis [64]. Therefore, the policies that characterise these approaches are not always complementary; on the contrary, they often arise in opposition to each other. However, even across different approaches there is agreement on some essential actions that should be undertaken in any case, and among these is certainly the development of environmental innovations aimed at reducing GHG emissions. This is where our article comes in. Through our work we are able to establish, at a very detailed level, which productive activities benefit the most from developments in green innovation. Our focus is therefore on possible industrial scenarios resulting from the development of green technologies. In particular, we discuss how green technological know-how is transmitted to industrial production at the product level, even years later. However, we are in no way arguing that there is a causal relationship linking green patenting to subsequent product export; we are simply observing statistically significant probabilities that having a comparative advantage in a green technology will lead to the export of a specific product. Among our main findings, we emphasised the presence of many links between green technologies and the export of raw materials, especially mineral and metal products. In addition, we provided evidence of significant connections between products belonging to the agricultural sector, like Animal & Animal products, and green technologies aimed at the capture and storage of GHG emissions. Finally, we observed that as the years between the filing of a green patent and the export of a product increase, so does the complexity - and therefore the skills and expertise required - of the products and technologies that are linked together. Throughout the paper we have argued that raw materials are necessary for the development of environmental technologies: in recent years, several reports analysing this issue have been published by international organisations and institutions [52,53,61]. Our paper is firmly situated in this context: we claim that, in order to spread the development of green technologies and to increase their use with the objective of achieving a sustainable transition of the economies, the raw material intensity of these technologies is a core issue to be explored in depth. Indeed, it is important to plan appropriate strategies to meet the expected surge in raw material demand, or to reduce the supply dependency on individual countries that could undermine the stability of the overall raw materials value chain.
Although these materials are considered necessary inputs for the realisation of green technologies, thus suggesting an inverse relationship to the one we study, we nevertheless believe that the links we observe from green technologies to mineral product exports are highly relevant. Future research could explore this issue further, for instance by looking at the connections from exports to green patenting, or by considering import data. We believe that our results are particularly relevant for a number of reasons. First of all, being able to go into such detail in assessing the implications that emerge from the development of green technologies - not only evaluating their collective impact on industrial production, but discussing individual product exports and individual technology domains on a case-by-case basis - has very strong policy implications.
For example, such an analysis could provide support for the industrial policies of a given country, even in the long term, by looking at the patent portfolio in which it is currently competitive. In addition, possible contributions could be made to the classification of environmental products: some products could be linked to green technologies because of the low environmental impact of their production processes. Being able to monitor the export and import of environmentally sensitive products is a central objective on the global policy agenda. For example, the Harmonized System under which exported products are classified is about to be updated [65] with the main changes being the introduction of several 6-digit subheadings that include environmental goods in order to facilitate their trade.
For future developments of the analysis, we believe it would be important to take into consideration time intervals spanning more years than those covered by our dataset. The export data update just mentioned could be very useful in this respect, as it would add more annual collections to the product dataset, which in turn would allow us to increase the time lag of our analysis even beyond 10 years. We expect that this would lead to an increase in the signal from green technologies to products, as previous analysis shows that the peak of the technological impact on industrial production is reached after about 20 years [16]. Finally, another important development could extend the layers of activities, and consequently the types of data, taken into account in the analysis: for example, by also including data on employment and wages at a sector or occupational level, or on the scientific production of countries, we could broaden our understanding of how the production and technological structure of a country or a region can make the transition to new green sectors.
Methods
Data
We use data on patent applications in environment-related domains as a proxy for environment-related innovation and on exported products as a proxy for economic production [11]. Both datasets consist of single data collections recorded annually at the country level. In particular, we have information on patent applications in 44 green technological fields - corresponding to the CPC groups listed in the Supplementary Information [Table S2: CPC detailed descriptions] - for 48 countries between 1995 and 2019, and on product exports - whose number depends on the level of aggregation considered: 97 in the 2-digit case, 5053 in the 6-digit one - measured in US dollars for 169 countries between 2007 and 2017. As explained in detail in the next section, our methodology requires selecting the countries in common between the two data collections, which turn out to be 47. All data can be represented as matrices: in particular, we denote by W(t) and V(t) the matrices corresponding respectively to the data of green patents and exported products in year t. Each matrix has a number of rows and columns equal to the number of countries c and activities a respectively, where the latter refer to both green technologies τ and exported products π. We report a more detailed description of the two datasets we use, including a complete list of all countries at our disposal, in the Supplementary Information [Data features].
Temporal aggregation
Both the export of products and the patenting activity are collected yearly: it is therefore possible to investigate the connections on different time scales. While annual data can offer more detailed results, i.e. distinct for each year considered, it may also make them noisier, since data can fluctuate from one year to another. In order to minimize the possibility that the green technology-product connections arise from data fluctuations, we consider the total volume of products and patents produced in given time intervals. For our analysis, we compute the matrices W(δ, t) and V(δ, t), corresponding to the time interval of δ years ending in the year t. To this aim, we sum the yearly matrices V(t) and W(t) over the year range δ:

W(δ, t) = Σ_{t'=t−δ+1}^{t} W(t'),    V(δ, t) = Σ_{t'=t−δ+1}^{t} V(t').    (2)

Summing data over a time window of δ years reduces the noise in our results, giving more weight to patents and exports that are consistently registered several times in nearby years. Given the years of the datasets at our disposal, we decide to sum the matrices over 5 years (δ = 5). Starting from the layer of exported products, we select the two most recent 5-year aggregate matrices available to us, with the condition that the years included do not overlap: these matrices are V(δ, t) = {V(5, 2012); V(5, 2017)}. Then, depending on which time lag ∆T we consider between the two layers, we select the green patents matrices. So, for a time lag ∆T = 0, the corresponding matrices are W(δ, t) = {W(5, 2012); W(5, 2017)}, while for ∆T = 10 they are W(δ, t) = {W(5, 2002); W(5, 2007)}, so that the green technologies "anticipate" the exports. To ease the notation, from now on we do not express the δ dependency of the data matrices, and all our results are produced from the analysis of the aggregated 5-year data collections just mentioned. We have conducted robustness tests of the links we found with respect to changes in both the aggregation time interval δ and the final year t. We report these tests in the Supplementary Information [Robustness test]. The green technology-product links we find are robust to such changes in the parameters.
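As an illustration of the aggregation step in equation (2), the following Python sketch sums hypothetical yearly country-by-activity matrices over a δ-year window ending in year t (the variable names and the dictionary layout are illustrative assumptions, not part of the original data pipeline).

```python
import numpy as np

def aggregate(yearly, t, delta=5):
    """W(delta, t): sum of the yearly matrices over the delta-year window ending in t."""
    return sum(yearly[year] for year in range(t - delta + 1, t + 1))

# Illustrative use with random stand-in data (47 countries, 44 technologies):
rng = np.random.default_rng(0)
W_yearly = {year: rng.poisson(2.0, size=(47, 44)) for year in range(1995, 2020)}
W_2012 = aggregate(W_yearly, 2012)   # W(5, 2012)
W_2017 = aggregate(W_yearly, 2017)   # W(5, 2017), non-overlapping with W(5, 2012)
```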
Revealed Comparative Advantage
Both the exports and patents matrices strongly depend on the total size of the economy or sector. In order to remove this correlation, which hides the capability content of these activities, we compute the revealed comparative advantage (RCA) [66]. The RCA is computed as the ratio between the weight of activity a (be it a patent in a technology field τ or the export of a product π) in the activity basket of the country c and the weight of that same activity with respect to the world volume, as reported in the following equation:

RCA_ca = (X_ca / Σ_{a'} X_{ca'}) / (Σ_{c'} X_{c'a} / Σ_{c'a'} X_{c'a'}),    (3)

where the element X_ca refers to both W_cτ and V_cπ, i.e. the elements of the country-green technology and country-exported product matrices (for a more detailed description of how the matrices are built, we refer to the Supplementary Information [Data Features]). The next step is the computation of the binary matrices M = M_ca = {M_cτ; M_cπ}, whose elements are 1 or 0 depending on whether RCA_ca ≥ 1, meaning that the country c is or is not competitive in activity a. The RCA metric is frequently used in the Economic Complexity framework to assess whether a country is a significant exporter of a product [14,21]. The extension of its use to the patent layer [16] allows us to compare patent and export data in a coherent way in the methods we present in the following sections.
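A minimal sketch of the RCA computation in equation (3) and of the binarization step follows, assuming X is a dense country-by-activity numpy array with no all-zero rows or columns (a hypothetical simplification of the real data).

```python
import numpy as np

def rca(X):
    """RCA_ca = (X_ca / sum_a' X_ca') / (sum_c' X_c'a / sum_c'a' X_c'a')."""
    X = np.asarray(X, dtype=float)
    country_share = X / X.sum(axis=1, keepdims=True)      # weight of activity a in country c's basket
    world_share = X.sum(axis=0, keepdims=True) / X.sum()  # weight of activity a in the world volume
    return country_share / world_share

def binarize(X):
    """M_ca = 1 if RCA_ca >= 1 (country c competitive in activity a), else 0."""
    return (rca(X) >= 1.0).astype(int)
```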
Full technology-product network
Starting from the binary matrices M described above, which summarise the comparative advantage of different countries in products and technologies, a network linking green technologies to products can be derived. The method adopted here is already present in the Economic Complexity framework [14,16]: the idea is to count how many countries have developed a given green technology and at the same time are competitive in the export of a product. This number is called co-occurrences [67]. In practice, however, the co-occurrences should be suitably normalized to take into account the nested structure of the bipartite networks; the result of this process is the so-called Assist Matrix [16,15]. This matrix A is obtained from the contraction of the binary country-technology and country-product matrices. The matrix element A_τπ depends on both the year t_1 relative to the patenting of the technology τ and the year t_2 of the subsequent export of the product π. In formula:

A_τπ(t_1, t_2) = (1 / u_τ(t_1)) Σ_c M_cτ(t_1) M_cπ(t_2) / d_c(t_2).    (4)

By counting the co-occurrences between green technologies and exported products - while weighing them with the degree (or ubiquity) of the technology u_τ and the country degree (or diversification) in the exports d_c - each element of the matrix A_τπ(t_1, t_2) offers a quantitative measure of how likely it is to have a comparative advantage in exporting the product π in the year t_2, conditional on having a comparative advantage in the technology τ in the year t_1. Therefore, t_1 and t_2 indicate that the links may couple patents developed in a given year with products exported in a different year. After the computation of the Assist Matrix, we statistically validate the empirical value of each element A_τπ(t_1, t_2) through the implementation of a null model, which we present in the following section.
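The contraction in equation (4) can be sketched as follows; M_tech and M_prod are the binary country-technology and country-product matrices (for the years t_1 and t_2 respectively), and the sketch assumes every country has at least one competitive product and every technology at least one competitive country, so that the degrees never vanish.

```python
import numpy as np

def assist_matrix(M_tech, M_prod):
    """A_{tau,pi} = (1/u_tau) * sum_c M_{c,tau} * M_{c,pi} / d_c  (equation (4))."""
    d_c = M_prod.sum(axis=1).astype(float)     # country diversification in exports
    u_tau = M_tech.sum(axis=0).astype(float)   # technology ubiquity
    A = (M_tech / d_c[:, None]).T @ M_prod     # contract over the country dimension
    return A / u_tau[:, None]
```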
Comparison with a null model
The matrix elements computed in Eq. (4) need to be validated by a statistical test able to distinguish meaningful links from the noise and to supply a confidence level for assessing the probability that two nodes share a statistically significant number of co-occurrences. In particular, here we rely on the filtering procedure based on the Bipartite Configuration Model (BiCM) [56] developed by Saracco et al. [17] for the projection of bipartite onto monopartite networks, and subsequently adapted to a similar multi-partite network by Pugliese et al. [16]. It must however be noted that no absolute criterion exists for the choice of the model, and that different null models can yield different outcomes [68]. Here, we use a null model for the binary matrices M, in which the matrices are randomised except for some constraints we impose [69] - in this case the average degrees. The use of the BiCM allows for a stricter filtering procedure with respect to other null models [68] and takes into account the possible noise present in the input data [69,17,68]. This class of models is based on the maximum entropy principle [70], which leads to the realisation of an ensemble Ω of bipartite networks M̃, where links are random but maximize the number of possible configurations satisfying the imposed constraints. In the present case the entropy function

S = − Σ_{M̃∈Ω} P(M̃) ln P(M̃)

is maximized under the constraint that the ensemble averages ⟨...⟩_Ω of the ubiquity of activities (i.e. of green technologies and exported products) and of the diversification of countries in the random networks must equal the observed ones (denoted without the tilde symbol): ⟨ũ_a(t)⟩_Ω = u_a(t) and ⟨d̃_c(t)⟩_Ω = d_c(t). The maximization procedure yields the probability for each possible country-activity couple of nodes to be linked. Then, we use these probabilities to perform a direct sampling of the ensemble Ω. The ensemble is composed of a number of realisations of the null model which necessarily depends on the threshold p-value with which we want to validate the links in the technology-product space. In particular, since our results are mostly set to a statistical significance of 95%, we construct ensembles consisting of 10000 realisations of the null model. In such a way a rough but conservative estimate yields a sampling error of 5 ‰. For each couple of null model realizations {M̃_cτ(t_1); M̃_cπ(t_2)} related to the green technology and exported product layers, we compute the corresponding null Assist Matrix of elements Ã_τπ(t_1, t_2) through a contraction as in equation (4). By doing so, we build an ensemble of 10000 realizations of the null Assist Matrix. Finally, for each possible green technology-product link τ-π we compare the empirical value A_τπ(t_1, t_2) with the 10000 null values of that same link. We are thus able to assess the statistical significance of our results: for example, if we want to select only the links that are 95% significant, we consider those with the empirical value higher than the corresponding null one in at least 9500 cases out of 10000.
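A sketch of the sampling-and-validation step is given below, reusing assist_matrix from the previous sketch. The BiCM entropy maximisation itself is not implemented here: P_tech and P_prod are assumed to be the country-activity link probabilities it returns, and, for brevity, every sampled realisation is assumed to keep all ubiquities and diversifications positive.

```python
import numpy as np

rng = np.random.default_rng(0)

def validated_links(A_emp, P_tech, P_prod, n_samples=10000, significance=0.95):
    """Boolean matrix of technology-product links whose empirical assist value exceeds
    the null value in at least `significance` of the sampled BiCM realisations."""
    exceed = np.zeros_like(A_emp, dtype=float)
    for _ in range(n_samples):
        M_tech_null = (rng.random(P_tech.shape) < P_tech).astype(int)
        M_prod_null = (rng.random(P_prod.shape) < P_prod).astype(int)
        exceed += A_emp > assist_matrix(M_tech_null, M_prod_null)
    return exceed / n_samples >= significance
```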
Validation of the results for a specific time lag
As we already stressed, the methodology at our disposal allows us to build different networks linking green technologies to exported products by varying the temporal dimension. We express the temporal dependence of the analysis through the time lag ∆T, given by the difference between the year t_2 of the country-product matrix and the year t_1 of the country-green technology matrix. In particular, given the years at our disposal for the two data collections, we consider both ∆T = 0 and a time lag of ten years (i.e. ∆T = 10). We recall that our matrices refer to sums over 5-year intervals. To each of the two time lags considered we associate two different pairs of 5-year aggregate technology-product matrices: these are W(2012) − V(2012) and W(2017) − V(2017) for ∆T = 0, and W(2002) − V(2012) and W(2007) − V(2017) for ∆T = 10, where, following equation (2), the year is the last of the five-year interval. For each couple of matrices we follow all the steps described above - i.e. RCA, computation of the Assist Matrix, and statistical validation through the null model at a chosen p-value - and we consider only those links that are statistically significant in both of them. Therefore, for instance, the links represented in Fig. 2 are those with a 95% statistical significance in both the networks W(2012) − V(2012) and W(2017) − V(2017). We believe that this is an important step in order to be able to argue that the know-how of a specific technology is transmitted to a product immediately or requires a time lag of 10 years, regardless of the specific years we are considering. Moreover, it gives additional robustness to our analysis of the multi-network beyond the adoption of the null model.
SUPPLEMENTARY INFORMATION
Data Features
Green Patents
As a response to the increasing attention and concern about climate change and renewable energy generation, we are witnessing a large increase in patent applications in environment-related domains: according to the European Patent Office (EPO), in recent years there have been around 1.5 million patent applications in sustainable technologies [71]. Searching for environment-related patent documents has therefore been a challenge, especially because in the past documents relating to sustainable technologies did not fall into a single classification. In 2013 the EPO and the United States Patent and Trademark Office (USPTO) agreed to harmonise their patent classification practices and developed the Cooperative Patent Classification (CPC) system, which encompasses five hierarchical levels spanning from 9 sections to around 250000 subgroups, where codes starting with the letters A to H represent a traditional classification of innovative activity in technological fields, while the Y section [72] tags cross-sectional technologies. Here in particular we employ the Y02 class - Technologies or applications for mitigation or adaptation against climate change - retrieved from the OECD REGPAT database [73]. The Y02 class consists of more than 1000 tags organised in 9 sub-classes and includes patents related to climate change adaptation and mitigation (CCMT) technologies concerning a wide range of sustainability objectives, such as energy efficiency in buildings, energy generation from renewable sources, sustainable mobility, smart grids and many others, the details of which can be found in Table 1 of the manuscript. Following the notation given in the manuscript, we have matrices W(t) from 1995 to 2019. The number of countries (i.e. the number of rows in each matrix) is 48 (see Table S1). The number of columns is 44, i.e. the technological fields corresponding to the CPC groups listed in Table S2. To build such matrices, each patent family - i.e. each collection of patent applications covering the same or similar technical content - counting as a unit and recorded in REGPAT, is divided between all technology codes τ and all countries c with which it is associated, following the procedure adopted in Napolitano et al. [9] and Barbieri et al. [7]. Therefore, each element W_cτ(t) of the matrix represents the fraction of patent families associated with the country-technology pair c − τ in year t.
Exported products
For the exports we resort to the UN-COMTRADE database [75], which provides yearly trade flows between countries in US dollars. This information is provided at the product level, so that it is possible to study in detail which countries export a given amount of a given product in a chosen year. The products in the dataset are classified according to the Harmonized System, a hierarchical classification that allows one to go from two-digit codes (about 100 different product chapters) up to six-digit codes (about 5000 different product subheadings). This degree of freedom is key to investigating the effect of technological innovations at different levels of detail: in fact, we move from the links that green technologies have with the export of entire product categories, such as those related to the Machinery/Electrical sector, to those that they have with the export of detailed single products, such as electric motors. We point out that since importers' and exporters' declarations do not precisely coincide, suitable reconstruction algorithms are needed in order to achieve a coherent and cleaned dataset. To do so, we adopt a global Bayesian optimization approach to obtain a denoised dataset, as proposed by Mazzilli et al. [76]. The validity of this procedure is empirically confirmed by Tacchella et al. [22], who, by employing the denoised dataset, obtained a sizeable increase in GDP forecasting performance. Finally, following the notation given in the manuscript, we have matrices V(t) from 2007 to 2017: the number of rows, corresponding to the number of countries, is equal to 169 (see Table S1), while the number of columns, corresponding to the exported products, depends on the level of aggregation considered (97 in the 2-digit case, 5053 in the 6-digit one). Thus, each element V_cπ(t) represents the volume of exports of the product π, expressed in thousands of dollars, by the country c in year t.
Country list
Depending on which step of our analysis we deal with, we consider either all countries included in each collection or only those in common. In particular, the computation of the Revealed Comparative Advantage (RCA) is done separately for patents and exports, thus including all countries in the respective datasets. On the contrary, the calculation of the assist matrix is done by contracting the patent and export data over the geographical dimension, and therefore we only consider the countries in common. In Table S1 we collect all the countries included in both datasets, writing their names in different colours depending on whether they are part of the 47 common countries or are only present in one of the two datasets. As regards the green patents data, we have information on patent applications in 44 green technology groups. These are in turn grouped into 8 subclasses, which are reported in Table 1 of the manuscript. In Table S2 we report the codes and descriptions at the group aggregation level.
Economic Fitness & Complexity algorithm
In Fig. 5 of the manuscript we order the codes related to green technologies and exported products according to their level of complexity. The latter is intended as an algorithmic assessment of the number and the sophistication of the capabilities needed to be competitive in a given activity. To compute it, we use the Economic Fitness & Complexity (EFC) algorithm [18,77], originally introduced for exports but also applied to green patents [10]. More in detail, it consists of a non-linear iterative algorithm that, starting from the binary matrices M_ca(t) obtained through the implementation of the RCA detailed in the Methods section of the manuscript, allows one to quantify the complexity Q_a of the activities and the competitiveness of the countries that perform them, namely their fitness F_c. The mathematical formulation of the algorithm at each iteration n is as follows:

F̃_c^(n) = Σ_a M_ca Q_a^(n−1),    Q̃_a^(n) = 1 / [ Σ_c M_ca (1 / F_c^(n−1)) ];
F_c^(n) = F̃_c^(n) / ⟨F̃_c^(n)⟩_c,    Q_a^(n) = Q̃_a^(n) / ⟨Q̃_a^(n)⟩_a,

where the first pair of equations is the calculation of the fitness and complexity parameters for all countries and activities, and the second pair is the subsequent normalisation step. The non-linear structure of the algorithm causes the activities in the baskets of less competitive countries (i.e. with low fitness) to be assigned a low level of complexity. The most competitive countries turn out to be those with more diversified activity baskets. Given the convergence properties of the algorithm, discussed in Pugliese et al. [78], we do not consider the complexity values but their rankings. In particular, the rankings are computed using the most recent 5-year aggregate matrices given the years of the data we considered in the analysis: thus, we use M_cτ(5, 2017) and M_cπ(5, 2017).
Table S1: All country list. Legend: "Red-labelled country": included in both datasets (47 in total); "Green-labelled country": included in green patents dataset only (1 in total); "Black-labelled country": included in exported products dataset only (122 in total).
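Returning to the EFC algorithm described above, a minimal sketch of the iteration is given below; it assumes the binary country-activity matrix M has no all-zero rows or columns, and the returned complexities would then be converted into rankings as discussed in the text.

```python
import numpy as np

def fitness_complexity(M, n_iter=500):
    """EFC iteration: F_c ~ sum_a M_ca Q_a,  Q_a ~ 1 / sum_c M_ca (1/F_c),
    with both vectors renormalised to unit mean at every step."""
    M = np.asarray(M, dtype=float)
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(n_iter):
        F_new = M @ Q
        Q_new = 1.0 / (M.T @ (1.0 / F))
        F = F_new / F_new.mean()
        Q = Q_new / Q_new.mean()
    return F, Q

# complexity_ranking = np.argsort(np.argsort(-Q))   # rank 0 = most complex activity
```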
Table S2: CPC detailed descriptions (CPC subclass, group code and description).
Y02A: 10 Adaptation to climate change at coastal zones; 20 Water conservation; 30 Adapting infrastructure; 40 Adaptation technologies in agriculture; 50 in human health protection; 90 Indirect contribution to adaptation to climate change
Y02B: 10 Integration of renewable energy sources in buildings; 20 Energy efficient lighting technologies; 30 Energy efficient heating; 40 Improving the efficiency of home appliances; 50 Energy efficient technologies in elevators; 60 ICT aiming at the reduction of own energy use; 70 Efficient end-user side electric power management; 80 Improving the thermal performance of buildings; 90 GHG emissions mitigation [Buildings]
Y02C: 10 CO2 capture or storage; 20 Capture or disposal of greenhouse gases
Y02D: 10 Energy efficient computing; 30 Reducing energy consumption in communication networks; 50 Reducing energy consumption in wire-line communication networks; 70 Reducing energy consumption in wireless communication networks
Y02E: 10 Energy generation through renewable energy sources; 20 Combustion technologies with mitigation potential; 30 Energy generation of nuclear origin; 40 Technologies for an efficient electrical power generation; 50 Technologies for the production of fuel of non-fossil origin; 60 Enabling technologies; 70 Other energy conversion systems reducing GHG emissions
Y02P: 10 Metal processing; 20 Chemical industry; 30 Oil refining and petrochemical industry; 40 Processing of minerals; 60 Agriculture; 70 CCMT in the production process for final products; 80 CCMT
Robustness test
In the manuscript we build the green technology-product bipartite network starting with two important preliminary steps: first, we sum the yearly data collections at our disposal over 5 years; second, depending on the time lag ∆T we consider, we select specific 5-year aggregate matrices. In particular, we select the two most recent exported product matrices available to us that do not overlap each other. In this section we want to show that our results depend neither on the choice of the years considered nor on the parameter δ. To this aim, we conduct a robustness test in which we repeat our analysis for different values of δ and of the years considered. In particular, we replicate our results for a 2-digit level of product aggregation and for the time lag ∆T = 0. Considering the 10 years covered by the two 5-year summed data collections used in the analysis for ∆T = 0 - i.e. from 2008 to 2017 - we create a dataset composed of 32 matrices (16 for green patents and 16 for exported products) aggregated over 3, 4 and 10 years, so that δ = {3, 4, 10}. The dataset is reported in Table S3: each M(δ, t) in the table stands for a corresponding couple of technology-product matrices W(δ, t) − V(δ, t) for which we run the full analysis, meaning RCA, assist matrix and null model computations. We consider as a benchmark of this test the 46 links validated at a 95% level of significance in the manuscript. The results we obtain can be summarized as follows:
• Considering only the aggregation over 3-year intervals, on average 73% of the 46 links are present at a 95% significance level. This percentage increases to 87% if we consider a 90% level of significance for the 3-year results.
• Considering only the aggregation over 4-year intervals, on average 80% of the 46 links are present at a 95% significance level. This percentage increases to 92% if we consider a 90% level of significance for the 4-year results.
• 85% of the 46 links are present at a 95% significance level for the unique pair of technology-product matrices with the 10-year time aggregation. This percentage increases to 98% (45 links out of 46) if we consider a 90% level of significance for the 10-year result.
Based on the above summary, we consider the robustness test successful. Therefore, we interpret the results reported in the manuscript as showing a real link of interdependence between the acquisition of green technological capabilities and the development of productive ones.
Table S3: Composition of the dataset we use for the robustness test of our results. Since we consider the time lag ∆T = 0, the data collections refer to both green patents and exported products.
Sensorless Neutral Point Voltage Stabilization in Three-Phase Four-Wire Converters
This paper presents a midpoint voltage balancer (MVB) which provides neutral point voltage stabilization to three-phase four-wire converters. The MVB consists of dual switching legs, two neutral inductors, and two split capacitors. A sensorless approach with open-loop control is adopted. It removes current/voltage sensors and alleviates computational demand. It is a cost-effective and robust alternative to the closed-loop MVB. Due to the zero-crossing of the neutral inductor current, all switches of the MVB operate in zero-voltage switching mode when the neutral current is smaller than the nominal phase current of the three-phase four-wire converter. Interleaving operation of the MVB minimizes the high-frequency current circulation in the split capacitors. In addition, the two neutral inductors are magnetically coupled to decrease the inductor current ripple. As a result, the size of the passive components of the MVB is reduced. The proposed sensorless approach is verified by a 20 kVA prototype.
Introduction
Three-phase four-wire (3P4W) converters, whether standalone or grid-connected, provide a path for neutral current. While standalone converters can support various single-phase loads in an isolated micro-grid [1], most grid-connected converters inject energy from distributed resources into the utility grid. A 3P4W grid-connected converter can offer zero-sequence voltage imbalance compensation at the point of common connection (PCC) in a low-voltage distribution network [2]. Nowadays, the adoption of single-phase rooftop photovoltaic systems and electric cars is increasing. These may lead to a degradation of voltage balance in the utility grid. Therefore, research and development of 3P4W grid-connected converters that offer voltage imbalance compensation is necessary.
Popular configurations of 3P4W converters are the split dc bus (2-C) [3], the four-leg converter (4-Leg) [4], the actively controlled split dc bus (ACSB) [5], and the midpoint voltage balancer (MVB) [6]. The 2-C configuration is easy to implement. However, it suffers from the bulky size of its electrolytic capacitors and the inability to process neutral current with a dc component. The 4-Leg configuration adopts an extra switching leg to form a neutral point for the 3P4W GCC. Since the control of the fourth switching leg and the control of the three-phase inverter are not decoupled, it requires more development effort. The ACSB configuration combines the ideas of the 2-C and the 4-Leg, and it is easy to control. In addition, the split dc bus of an ACSB can be implemented using small capacitors.
The MVB configuration proposed in [6] doubles the neutral current handling capability of the 3P4W GCC by adopting dual switching legs, as shown in Fig. 1. Dual-loop control is implemented, and the neutral point voltage ripple is less than 1.5% under severe neutral current transient. It offers zero-voltage switching (ZVS) operation to the dual switching legs under a wide range of neutral current injection.
The ZVS operation is achieved by adopting a small neutral inductance so that the inductor current crosses zero in every switching cycle. Interleaved control is implemented to prevent high-frequency current from circulating into the split capacitor branch. Because of the dual-loop control, the use of multiple sensors and a high computational demand are inevitable.
In this paper, the control strategy of the MVB is redesigned to allow sensorless neutral point voltage stabilization in the 3P4W converter. This approach is considered a cost-effective alternative to the MVB proposed in [6]. It provides the majority of the advantages of its predecessor, while requiring no information on the neutral inductor currents, the split dc bus voltages, or the neutral current. The MVB operates in open loop with a fixed duty cycle and a fixed switching frequency. The neutral inductance is half of that in [6]. Nevertheless, the same inductor current ripple is maintained due to coupling. In addition, the capacitance of the split dc bus is also reduced. Moreover, the neutral point voltage is tolerant to capacitance mismatch, which might occur due to parameter deviation and component degradation.
Description of MVB
The MVB is connected to a 3P4W grid-connected converter, as shown in Fig. 1. It consists of dual IGBT switching legs with S1, S2, S3, S4, two neutral inductors LN1, LN2, and two split capacitors CN1, CN2. Being connected to the common end of the neutral inductors, the midpoint of the split dc bus forms the neutral point N. Table 1 presents the specifications of the 20 kVA 3P4W converter, which adopts the MVB to stabilize the neutral point. This work focuses on the MVB; the design of the three-phase inverter and how it is controlled to offer voltage imbalance compensation is not covered. The MVB operates in open loop mode, requiring no current or voltage measurements. The PWM signals for the two switching legs are 180° phase shifted so that the current ripples of the two neutral inductors are interleaved. As a result, the inductor output current iLN is, theoretically, ripple free [7].
A three-phase voltage signal can be decomposed into positive-sequence, negative-sequence, and zero-sequence components. An unbalanced three-phase voltage measured at a PCC may contain negative-sequence components. During voltage imbalance compensation, the 3P4W converter injects unbalanced currents into the PCC. The injected currents contain a suitable amount of zero-sequence currents with polarity reversed with respect to the voltage sequence components. The zero-sequence currents, i.e. the neutral current, flow directly into the midpoint of the dc bus. This current may contain components at the fundamental frequency and at certain harmonic frequencies, depending on the power quality compensation mode of the 3P4W converter.
The grid standard EN 50160-2010 limits the voltage total harmonic distortion (THD) at the PCC to 8%. To meet the harmonic requirements of grid regulations, the adoption of a 3P4W grid-connected converter is a good approach to mitigating harmonics and compensating voltage imbalance. Since low-order voltage harmonics are dominant in the utility grid, the 3P4W converter discussed in this paper considers harmonic orders up to the 13th. This means that the neutral current could contain harmonic components from the 3rd up to the 13th order with various amplitudes.
With the proposed MVB, this harmonic-abundant neutral current is redirected to the neutral inductor path, keeping the split capacitor free from the neutral current. Therefore, the voltage of the neutral point is stabilized.
Operation principle
The MVB either operates in closed loop [6,8] or open loop. In closed loop, current and voltage signals are fed back to the loop, which modifies duty cycle to alter the neutral point voltage to a desired condition. Fig. 2 (a) shows the control implementation of the MVB, where dual loop is implemented [8]. The voltages of the upper and bottom split capacitor vCN1, vCN2, currents of the neutral inductors iLN1, iLN2, and the neutral current iN are measured. Due to the existence of LC resonance in the MVB circuit, active damping is typically adopted. Two control commands uk_LN1 and uk_LN2 are compared with the 180° phase shifted carrier signals, from which the gate signals G1, G2, G3, and G4 are derived.
The objective here is to keep the neutral point voltage as stable as possible. Since the neutral point voltage should be half of the total dc bus voltage, the duty cycle D of the switches is kept at 50%. Therefore, in open loop, control commands uk_LN1 and uk_LN2 are fixed values, which equal half of the carrier amplitude. As shown in Fig. 2 (b), same to the closed loop, the control commands are compared with the carrier signals, from which gate signals are derived.
According to Fig. 2, and assuming that the dc bus voltage is constant, so that dVdc/dt = 0, (1) becomes

CN dvCN/dt = iN − iLN    (2)

and CN = CN1 + CN2. Therefore, the MVB can be simplified into a buck converter with dual switching legs, as shown in Fig. 3 (a) [9]. This buck converter is connected to a capacitor CN and a current source IN.
As mentioned before, the duty cycle is 50% with the switching pattern interleaved by a 180° phase shift. Neglecting dead-time effects, there are only two switching states. In state 1, S1 and S4 are switched on while S2 and S3 are switched off; in state 2, the switches work in the opposite way. Both states last half a switching period Ts. Fig. 3 (b) shows some key waveforms of the MVB under the different switching states. Note that the dotted lines in Fig. 3 (b) refer to the inductor currents of the MVB with the coupled neutral inductor, which is covered in the next section.
In state 1, the inductor voltages are

vLN1 = Vdc − VCN,    (3)
vLN2 = −VCN,    (4)

so the current through LN1 increases, while the current through LN2 decreases. Assume that the inductance of both inductors is equal, LN1 = LN2 = LN. Since the duty cycle is 50%, VCN = Vdc/2. The inductor current changes during each switching cycle are then expressed as

ΔiLN1 = (Vdc − VCN) Ts / (2 LN) = Vdc Ts / (4 LN),    (5)
ΔiLN2 = −VCN Ts / (2 LN) = −Vdc Ts / (4 LN).    (6)

Similarly, in state 2, the inductor current changes during each switching cycle have the same magnitudes with opposite signs. Based on (5) and (6), the increment and decrement of the neutral inductor current in each switching cycle are equal. Therefore, the current ripples of the two neutral inductors counteract each other. Ideally, the total inductor current iLN is a straight line and ripple-free, as shown in Fig. 3 (b).
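A short numerical sketch of equations (5) and (6) and of the interleaving effect follows; the dc bus voltage and switching frequency used below are illustrative assumptions and are not taken from Table 1.

```python
import numpy as np

Vdc, fs, LN = 700.0, 20e3, 220e-6        # assumed example values
Ts = 1.0 / fs

# Per-leg peak-to-peak ripple with D = 0.5: each inductor sees +/- Vdc/2 for Ts/2.
delta_i = (Vdc / 2) * (Ts / 2) / LN
print(f"per-leg inductor current ripple: {delta_i:.1f} A peak-to-peak")

# Triangular inductor currents over two switching periods; the 180-degree phase
# shift makes the ripple of iLN1 cancel that of iLN2 in iLN = iLN1 + iLN2.
t = np.linspace(0.0, 2 * Ts, 2001)
def triangle(phase):
    u = (t / Ts + phase) % 1.0
    return delta_i * (2 * np.abs(u - 0.5) - 0.5)
iLN1, iLN2 = triangle(0.0), triangle(0.5)
print("residual ripple of iLN1 + iLN2:", np.ptp(iLN1 + iLN2))  # ~0 in the ideal case
```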
Passive component design
Because of the sensorless approach, the design of the neutral inductors and dc split capacitors is particularly crucial. According to (2), if iLN counteracts iN, no current flows into CN, which stabilizes the neutral voltage vCN. This requires the impedance of the neutral inductor branch to be much smaller than that of the split capacitor branch at the frequencies of interest, which range from the grid fundamental up to its 13th harmonic, as indicated in Table 1. In closed loop, the impedance reduction of the neutral inductor branch is achieved by paralleling a virtual impedance with the existing inductor [9,10]. However, this requires multiple sensors and a multi-loop control implementation. The sensorless approach proposed in this paper removes the need for control loops and keeps the neutral point voltage stable by selecting appropriate neutral inductors and split capacitors.
Split capacitor
As mentioned before, the MVB only has two switching states. At any given time, one of the neutral inductors is connected to the positive potential of the dc bus, while the other is connected to the negative. The circuit diagram of an impedance model of the MVB is shown in Fig. 4, where the neutral inductors LN1, LN2 and the split capacitors CN1, CN2 are represented by ZLN1, ZLN2, ZCN1, ZCN2, respectively. Based on the assumption that LN1 equals LN2, ZLN1 = ZLN2 = ZLN. Therefore, the voltage of the neutral point N can be expressed as

VN = Vdc (ZLN || ZCN2) / [ (ZLN || ZCN1) + (ZLN || ZCN2) ].    (7)

Given the premise that, at the frequencies of interest,

|ZLN| << |ZCN1|, |ZCN2|,    (8)

equation (7) becomes VN ≈ Vdc/2. Therefore, as long as (8) is satisfied, the neutral point voltage VN equals half of the dc bus voltage, even if the split capacitors do not match in capacitance.
In [6], the neutral inductance is designed to be LN = 220 µH, such that the IGBTs of the MVB operate in ZVS mode over a wide neutral current injection range (IN ≤ 29 Arms). Under this circumstance, the neutral inductor currents iLN1, iLN2 cross zero in every switching cycle when the neutral current iN is less than the nominal phase current Iabc. Re-arrangement of (8) yields

2π fN LN << 1 / (2π fN CN1,2)    (9)

and

CN1, CN2 << 1 / ((2π fN)^2 LN),    (10)

where fN is the frequency of the neutral current, ranging from 50 Hz to 650 Hz. Therefore, two split capacitors of 10 µF are selected to meet the requirement of (10). Note that CN1 and CN2 can be different without deteriorating the neutral point voltage stabilization of the 3P4W converter. In addition, the tolerance of the neutral point voltage stability to split capacitance mismatch is verified through experiments.
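The design condition (8) can be checked numerically over the frequency range of interest; the following sketch uses the component values quoted above (LN = 220 µH from [6] and CN = 10 µF) and the 50 Hz fundamental with harmonics up to the 13th.

```python
import numpy as np

LN, CN = 220e-6, 10e-6
freqs = 50.0 * np.arange(1, 14)            # 50 Hz fundamental up to the 13th harmonic
Z_LN = 2 * np.pi * freqs * LN              # |Z_LN| = omega * L
Z_CN = 1.0 / (2 * np.pi * freqs * CN)      # |Z_CN| = 1 / (omega * C)
for f, zl, zc in zip(freqs, Z_LN, Z_CN):
    print(f"{f:5.0f} Hz  |Z_LN| = {zl:5.3f} ohm  |Z_CN| = {zc:7.1f} ohm  ratio = {zc / zl:6.1f}")
# Even at 650 Hz the capacitor-branch impedance is well above the inductor-branch
# impedance, so condition (8) holds across the whole range.
```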
Neutral inductor
Interleaved control of two switching legs significantly reduces the output current ripple of iLN. However, the inductor current ripple remains unchanged. On the one hand, inductor current ripple is deliberately increased to facilitate ZVS of the MVB. On the other hand, less inductor current ripple is desired to reduce copper loss. In this paper, neutral inductor coupling is employed to address the trade-off between current ripple and power loss.
By coupling the neutral inductors, as shown in Fig. 5, such that the derived ideal transformer is connected out of phase with the polarity dots on opposite ends, one obtains the MVB with coupled neutral inductors. The coupled inductor is represented as an ideal 1:1 transformer, two leakage inductors Llk1, Llk2, and a magnetizing inductor Lm [11]. As with the MVB with independent inductors, there are only two operational states of the MVB with the coupled inductors.
In state 1, the inductor voltages are again given by (3) and (4). As elaborated in [11], the current changes through the neutral inductors in this state depend on the coupling coefficient k of the coupled inductor, its self-inductance Ls, and the duty cycle D. Since the duty cycle D is always 50%, the current changes in state 1 and state 2 have equal magnitude and opposite sign, as in the independent-inductor case. For a transformer with perfect coupling, the leakage inductance Llk is zero, and the coupling coefficient k is then equal to 1. Compared to (5) and (6), it is therefore beneficial to build a coupled inductor with k ≈ 1: the current ripple through the coupled neutral inductor, ΔiLc, is then half of that through the independent neutral inductors, ΔiLi, i.e. ΔiLc = ΔiLi / 2, as shown in Fig. 3 (b). However, as mentioned before, the neutral inductor current ripple is deliberately kept large to maintain ZVS operation over a wide neutral current injection range. To preserve the same amount of inductor current ripple, a 50% reduction of the neutral inductance becomes a good alternative, hence LN1 = LN2 = 110 µH.
Experimental verification
In order to verify the effectiveness of the proposed MVB, a 20 kVA 3P4W grid-connected converter prototype was built. The main components of the converter are shown in Fig. 6, and its key specifications have already been presented in Table 1. The converter provides harmonic mitigation and voltage unbalance compensation at the PCC. Two three-phase IGBT modules are adopted in the prototype. A dSPACE MicroLabBox is employed to handle current and voltage signal acquisition and PWM signal generation.
Key waveforms of one switching leg during switching operation are shown in Fig. 7. These waveforms include the gate signal of S2, the neutral inductor current iLN1, the current through S2, and the voltage across S2. The conventions for these signals are defined in Fig. 5. The inductor current has a high current ripple, and it crosses zero in every switching cycle. During switching period T1, gate signal G2 is positive; however, iS2 ≤ 0, so diode D2 conducts until the current reverses its polarity. During switching period T2, G2 is still positive and iS2 > 0, so S2 conducts until G2 goes negative. After that, in T3, D1 conducts, and in T4, S1 conducts. S2 undergoes hard turn-off at the transition between T2 and T3, as shown in Fig. 7. However, since the anti-paralleled diode D2 conducts before S2, there is only a diode forward voltage across the collector and the emitter during the turn-on of S2. As a result, the turn-on loss of S2 is almost zero. This operational principle also applies to S1, S3 and S4. The ZVS turn-on is therefore a great advantage of the MVB, since the turn-on loss of an IGBT is normally higher than its turn-off loss due to diode reverse recovery.
The 3P4W converter injects unbalanced currents ia, ib, ic into the PCC, such that a certain amount of iN is injected into the neutral point. Due to component availability, a coupled inductor with a self-inductance of 85 µH is used instead of 110 µH. To verify the response of the proposed MVB, experimental tests under four different cases were performed, as described in the following.
• Case 2: Closed loop based on independent inductors with CN1 = CN2 = 100 µF, LN1 = LN2 = 220 µH. The recorded waveforms contain iN, vCN, iLN1, iLN2, and iLN. Similarly, a neutral current transient of 60 Apeak is created at 2.44 s to verify the dynamic response of the MVB. It takes the MVB around two grid cycles to fully react to the neutral current transient and to keep the neutral point voltage regulated. Even though the self-inductance of the coupled inductor is 39% of that in case 1, the average current ripple across the coupled inductor is just 47% higher than that in case 1. Due to practical issues, the coupling coefficient of the inductor is never exactly 1, and the leakage inductances Llk1 and Llk2 are not the same. As shown in Fig. 10 (f), the currents iLN1 and iLN2 do not cancel each other fully, which results in a 9.4 Apeak current ripple in iLN. Nevertheless, the coupled implementation of the neutral inductors is a way to reduce the core and copper use of the proposed MVB.
In order to show the tolerance of the voltage stability to split capacitance mismatch, CN1 is chosen to be 100% larger than CN2. The test results of case 4 are shown in Fig. 11, which includes the waveforms of iN and vCN. The MVB rapidly regulates the neutral point voltage to half of the dc bus voltage with a ripple of around 15 V. It can be concluded that the dynamic performance of the MVB is not compromised under split capacitance mismatch.
The comparison between the open-loop regulated MVB with the sensorless approach and the closed-loop controlled MVB is summarized in Table 2. The open-loop regulated MVB is inferior in terms of neutral current handling capacity and tolerance to neutral inductance asymmetry. However, considering the fast dynamic performance, the absence of control loops and sensors, as well as the small passive components, the proposed MVB is a cost-effective and robust alternative to the closed-loop controlled MVB.
Conclusion
A voltage stable neutral point is essential to normal operation of 3P4W converters, especially those in voltage imbalance compensation applications. An unstable neutral point could lead to undesired output current distortion, which degrades the compensation performance of the converter at PCC. This paper proposes a cost-effective and robust MVB for neutral point voltage stabilization in 3P4W converters.
The MVB requires no current or voltage measurements, and it operates in open loop mode. The passive components (110 µH and 10 µF, respectively) are carefully selected so that most of the neutral current is re-directed into the inductor branch, keeping the neutral voltage stable. A coupled implementation of the neutral inductor is adopted, which cuts the required inductance in half. In order to achieve the best result, a high coupling coefficient and a symmetrical inductor construction are desired. Due to the interleaving operation, high-frequency current circulation inside the split capacitor branch is alleviated. All IGBT switches of the MVB operate under ZVS when the neutral current is less than the nominal phase current of the 3P4W converter.
Compared to the closed-loop counterpart, the sensorless approach does reduce the neutral current handling capability of the MVB by about 33%. However, it stabilizes the neutral point voltage roughly three times faster. In addition, the performance of the proposed MVB is not compromised even under a 100% split capacitance mismatch.
Universal Filtering via Hidden Markov Modeling
The problem of discrete universal filtering, in which the components of a discrete signal emitted by an unknown source and corrupted by a known DMC are to be causally estimated, is considered. A family of filters are derived, and are shown to be universally asymptotically optimal in the sense of achieving the optimum filtering performance when the clean signal is stationary, ergodic, and satisfies an additional mild positivity condition. Our schemes are comprised of approximating the noisy signal using a hidden Markov process (HMP) via maximum-likelihood (ML) estimation, followed by the use of the forward recursions for HMP state estimation. It is shown that as the data length increases, and as the number of states in the HMP approximation increases, our family of filters attain the performance of the optimal distribution-dependent filter.
of the HMP-based universal filtering scheme.
The remainder of the paper is organized as follows. Section 2 introduces some notation and preliminaries that are needed for setting up the problem. In Section 3, the universal filtering problem is defined explicitly. In Section 4, our universal filtering scheme is devised, the main theorem is stated, and proved. Section 5 extends our approach to the case where the channel has memory. Section 6 concludes the paper and lists some related future directions.
Detailed technical proofs that are needed in the course of proving our main results are given in the Appendix.
Notation and preliminaries
2-A General notation
We assume that the clean, noisy and reconstruction signal components take their values in the same finite M -ary alphabet A = {0, · · · , M − 1}. The simplex of M -dimensional column probability vectors will be denoted as M.
The DMC is known to the filter and is denoted by its transition probability matrix Π = {Π(i, j)} i,j∈A . Here, Π(i, j) denotes the probability of channel output symbol j when the input is i. We assume Π(i, j) > 0 ∀i, j, and let Π min = min i,j Π(i, j). We assume this channel matrix is invertible and denote the inverse as Π −1 . Let Π −1 i denote the i-th column of Π −1 . We also assume a given loss function (fidelity criterion) Λ : A 2 → [0, ∞), represented by the loss matrix Λ = {Λ(i, j)} i,j∈A , where Λ(i, j) denotes the loss incurred when estimating the symbol i with the symbol j. The maximum single-letter loss will be denoted by Λ max = max i,j∈A Λ(i, j), and λ j will denote the j-th column of Λ.
As in [21], we define the extended Bayes response associated with the loss matrix Λ to any column vector V ∈ R^M as

B(V) = arg min_{x̂∈A} λ_x̂^T V,

where arg min_{x̂∈A} denotes the minimizing argument, resolving ties by taking the letter in the alphabet with the lowest index.
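A minimal sketch of the extended Bayes response follows, with a Hamming-loss example; the function assumes Λ is given as an M × M numpy array with Λ[i, j] the loss of estimating symbol i by symbol j.

```python
import numpy as np

def bayes_response(Lam, V):
    """B(V) = argmin over xhat of lambda_xhat^T V; np.argmin returns the lowest index on ties."""
    return int(np.argmin(Lam.T @ V))

# Example: Hamming loss over a ternary alphabet and a posterior-like vector V.
M = 3
Lam = 1.0 - np.eye(M)
V = np.array([0.2, 0.5, 0.3])
print(bayes_response(Lam, V))   # 1, the symbol with the largest mass under Hamming loss
```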
We let P denote the true joint probability law of the clean and noisy signal, and E(·) denote expectation with respect to P . Also, every almost sure convergence is with respect to P . If we need to refer to the probability law of clean or noisy signal induced by P , we denote P X and P Z , respectively. If P is written in a bold face, P, with a subscript, it stands for a simplex vector in M for the corresponding distribution of the subscript. For example, P Xt|z t is a column M -vector whose i-th component is P (X t = i|Z t = z t ).
When we have some other probability law denoted as Q, and want to measure its difference from P, a natural choice of such a measure is the relative entropy rate. First, denote the n-th order relative entropy between P and Q as

D_n(P||Q) = Σ_{z^n} P(z^n) log [ P(z^n) / Q(z^n) ] = E[ log ( P(Z^n) / Q(Z^n) ) ].

Then, the relative entropy rate (also known as the Kullback-Leibler divergence rate) is defined as

D(P||Q) ≜ lim_{n→∞} (1/n) D_n(P||Q),

if the limit exists. When Q is a probability law in a certain class of HMPs, this limit always exists and the relative entropy rate is well defined. A more detailed discussion of this limit will be given in Lemma 2. This relative entropy rate will play a central role in analyzing our universal filtering scheme.
2-B.1 Definition
As stated in the Introduction, the HMPs are generally defined as a family of stochastic processes that are outputs of a memoryless channel whose inputs are finite state Markov chains. Throughout the paper, we will only consider the case in which the alphabet of HMP, Z, and underlying Markov chain, X , are finite and equal, i.e., Z = X = A, and the channel is DMC and invertible.
There are three parameters that determine the probability law of a HMP: π_θ, the initial distribution of the finite-state Markov chain; A_θ, its transition probability matrix; and B_θ, the emission matrix induced by the memoryless channel. The likelihood of an observation z^n under the parameter θ can then be written in matrix form as

Q_θ(z^n) = π_θ^T [ ∏_{t=1}^{n} B̃_θ,t A_θ ] 1,

where B̃_θ,t is the M × M diagonal matrix whose (j, j)-th entry is the (j, z_t)-th entry of B_θ, and 1 is the M × 1 vector with all entries equal to 1.
Now, let Θ_k ⊂ Θ be the set of θ's such that the order of the underlying Markov chain of the HMP is k. Furthermore, for some δ > 0, define Θ_k^δ ⊂ Θ_k as the set of θ ∈ Θ_k satisfying:
• a_ij,θ ≥ δ, if the first k − 1 components of the k-tuple state j are equal to the last k − 1 components of the k-tuple state i;
• a_ij,θ = 0, otherwise;
where a_ij,θ is the (i, j)-th entry of A_θ, and b_ij,θ is the (i, j)-th entry of B_θ. In particular, if θ ∈ Θ_k^δ then: 1) the stochastic matrix A_θ is irreducible and aperiodic; thus, if the Markov chain is stationary, π_θ is the stationary distribution of the Markov chain, and is uniquely determined from A_θ; 2) B_θ = Π for all θ, and, therefore, θ is completely specified by A_θ. For notational brevity, we omit the subscript θ and write Q ∈ Θ_k^δ if Q = Q_θ and θ ∈ Θ_k^δ.
2-B.2 Maximum likelihood (ML) estimation
Generally, suppose a probability law Q is in a certain class Ω. Then, the n-th order maximum likelihood (ML) estimator in Ω for the observed sequence z^n is defined as

Q̂[z^n] = arg max_{Q∈Ω} Q(z^n),

resolving ties arbitrarily. Now, if Q ∈ Θ_k^δ, then there is an algorithm called expectation-maximization (EM) [4] that iteratively updates the parameter estimates to maximize the likelihood. Thus, when Q is in the class of probability laws of a HMP, the maximum likelihood estimate can be efficiently attained.¹ We denote the ML estimator in Θ_k^δ based on z^n by

Q̂_{k,δ}[z^n] = arg max_{Q∈Θ_k^δ} Q(z^n).

Obviously, when the n-tuple Z^n is random, Q̂_{k,δ}[Z^n] is also a random probability law that is a function of Z^n. (¹ We neglect issues of convergence of the EM algorithm and assume that the ML estimation is performed perfectly.)
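A minimal sketch of one EM (Baum-Welch) iteration is given below for the first-order case (k = 1), in which the hidden-state alphabet coincides with A, the emission matrix is fixed to the known channel Π, and only the transition matrix is re-estimated. Flooring the updated entries at δ and renormalising is one simple way, assumed here and not prescribed by the text, to keep the estimate inside Θ_1^δ. Iterating this step until the log-likelihood stops increasing approximates the ML estimate used above, up to the convergence caveat of the footnote.

```python
import numpy as np

def em_step(z, A, Pi, pi0, delta):
    """One Baum-Welch iteration re-estimating only the transition matrix A.
    Returns the updated A and log Q_theta(z^n) under the *current* parameters."""
    n, M = len(z), A.shape[0]
    alpha, beta, c = np.zeros((n, M)), np.zeros((n, M)), np.zeros(n)
    alpha[0] = pi0 * Pi[:, z[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, n):                                   # scaled forward recursion
        alpha[t] = (alpha[t - 1] @ A) * Pi[:, z[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(n - 2, -1, -1):                          # scaled backward recursion
        beta[t] = A @ (Pi[:, z[t + 1]] * beta[t + 1]) / c[t + 1]
    gamma = alpha * beta                                    # P(X_t = i | z^n)
    xi_sum = np.zeros((M, M))                               # expected transition counts
    for t in range(n - 1):
        xi_sum += alpha[t][:, None] * A * (Pi[:, z[t + 1]] * beta[t + 1])[None, :] / c[t + 1]
    denom = np.maximum(gamma[:-1].sum(axis=0), 1e-12)
    A_new = xi_sum / denom[:, None]
    A_new = np.maximum(A_new, delta)                        # crude projection onto Theta_1^delta
    A_new /= A_new.sum(axis=1, keepdims=True)
    return A_new, np.log(c).sum()
```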
2-B.3 Consistency of ML estimator
When P_Z ∈ Θ_k^δ, an ML estimator Q̂_{k,δ}[Z^n] is said to be strongly consistent if it converges to the true law P_Z almost surely as n → ∞. The strong consistency of the ML estimator Q̂_{k,δ}[Z^n] of the parameter of a finite-alphabet stationary ergodic HMP was proved in [1]. For the case of a general stationary ergodic HMP, the strong consistency was proved in [12].
We also have a sense of strong consistency for the case where P_Z is a general stationary and ergodic process. By an argument similar to that in [8, Theorem 2.2.1], we have consistency in the sense that if the observed noisy signal is not necessarily a HMP, and we still perform the ML estimation in Θ_k^δ, then we get

lim_{n→∞} D(P_Z||Q̂_{k,δ}[Z^n]) = min_{Q∈Θ_k^δ} D(P_Z||Q),   almost surely.    (1)

This second consistency result is the key result that we will use in devising and analyzing our universal filtering scheme.
The universal filtering problem
As mentioned in the Introduction, we will assume a stochastic setting; that is, the underlying clean signal is the output of some stationary and ergodic process whose probability law is P_X. From P_X and Π, we can get the true joint probability law P and the corresponding probability law of the noisy observed signal, P_Z. That is,

P(x^n, z^n) = P_X(x^n) ∏_{i=1}^{n} Π(x_i, z_i),   and   P_Z(z^n) = Σ_{x^n} P(x^n, z^n).

A filter is a sequence of probability distributions X̂ = {X̂_t}, where X̂_t : A^t → M. The interpretation is that, upon observing z^t, the reconstruction for the underlying, unobserved x_t is represented by the symbol x̂ with probability X̂_t(z^t)[x̂]. A filter is called deterministic if X̂_t(z^t) is a unit vector in R^M for all t and z^t, and randomized if X̂_t(z^t) can be a simplex vector in M other than a unit vector for some t and z^t. The normalized cumulative loss of the scheme X̂ on the individual pair (x^n, z^n) is defined by

L_X̂(x^n, z^n) = (1/n) Σ_{t=1}^{n} Σ_{x̂∈A} X̂_t(z^t)[x̂] Λ(x_t, x̂).

Then, the goal of a filter is to minimize the expected normalized cumulative loss E[L_X̂(X^n, Z^n)].
The optimal performance of the n-th order filter is defined as

φ_n(P_X, Π) = min_{X̂∈F} E[L_X̂(X^n, Z^n)],

where F denotes the class of all filters. Sub-additivity arguments similar to those in [21] imply

lim_{n→∞} φ_n(P_X, Π) = inf_{n≥1} φ_n(P_X, Π) ≜ Φ(P_X, Π).

By definition, Φ(P_X, Π) is the (distribution-dependent) optimal asymptotic filtering performance attainable when the clean signal is generated by the law P_X and corrupted by Π. This Φ(P_X, Π) can be achieved by the optimal filter X̂_P = {X̂_P,t}, where X̂_P,t(z^t)[x̂] = Pr(B(P_{X_t|z^t}) = x̂).
For brevity of notation, we denote X̂_P(z^t) = X̂_P,t(z^t). Note that this is a deterministic filter, i.e., for a given z^t, the filter output is a unit vector in R^M for all t. We can easily see that this filter is optimal since it minimizes E[ℓ(X_t, X̂(Z^t))] for all t, and thus it minimizes E[L_X̂(X^n, Z^n)] for all n.
As can be seen, X̂_P(z^t) needs the exact knowledge of P_{X_t|z^t} and is thus dependent on the distribution of the underlying clean signal. The universal filtering problem is to construct (possibly a sequence of) filter(s), X̂_univ, that is independent of the distribution of the underlying clean signal, P_X, and yet asymptotically achieves Φ(P_X, Π). We describe our sequence of universal filters in the next section.
Universal filtering based on hidden Markov modeling
4-A Description of the filter
Before describing our sequence of universal filters, we make the following assumption on the source.
Assumption 1 There exists a sequence of positive reals {δ_k}, such that δ_k ↓ 0 as k → ∞, and P_X satisfies the positivity condition (2).
For any probability law Q, we construct a randomized filter as follows. For ǫ > 0, denote the L2 ǫ-ball in R^M as B_ǫ = {u ∈ R^M : ||u||_2 ≤ ǫ}. Then, we define a filter for fixed ǫ as

X̂^ǫ_{Q,t}(z^t)[x̂] = Pr( B( Q_{X_t|z^t} + U ) = x̂ ),

where U ∈ R^M is a random vector, uniformly distributed in B_ǫ. For brevity of notation, we denote X̂^ǫ_Q(z^t) = X̂^ǫ_{Q,t}(z^t). This filter is randomized since, depending on Q and z^t, X̂^ǫ_Q(z^t) can be a probability simplex vector in M that is not a unit vector. The reason we need this randomization will be explained in the proof of Lemma 3.
To devise our filter, let us first consider an increasing sequence of positive integers, {m_i}_{i≥1}, satisfying, in particular, m_{i−1}/m_i → 0 as i → ∞. Now, for each time t, let i(t) denote the index of the sub-block containing t, i.e. the largest i such that m_i < t. Then, given that our source distribution satisfies (2), and for fixed k, define a random probability law

Q_k^t ≜ Q̂_{k,δ_k}[Z^{m_{i(t)}}].

That is, Q_k^t is the ML estimator in Θ_k^{δ_k} based on Z^{m_{i(t)}}. As discussed in Section 2-B.1, we only need to estimate the state transition probabilities of the underlying Markov chain to obtain this ML estimator, and this can be efficiently done by the Expectation-Maximization (EM) algorithm. Once we get Q_k^t, we can then calculate the conditional distribution of X_t given z^t under Q_k^t, denoted Q^t_{k,X_t|z^t}, using the forward-recursion formula, which is described in detail in [4]. Note that we obtain this conditional distribution directly, not by first estimating the output distribution and then inverting the channel, as was done in [18][19][21].
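A sketch of the forward recursion that yields the conditional distribution of X_t given z^t under an HMP with transition matrix A and the known channel Π follows; this is the quantity the universal filter feeds (after the random perturbation U) into the Bayes response.

```python
import numpy as np

def filtering_distributions(z, A, Pi, pi0):
    """Return the M-vectors Q_{X_t | z^t} for t = 1, ..., n via the forward recursion:
    p_t is proportional to (p_{t-1} A) * Pi[:, z_t], renormalised at every step."""
    out = np.zeros((len(z), A.shape[0]))
    p = pi0.copy()
    for t, zt in enumerate(z):
        if t > 0:
            p = p @ A                 # predict the next hidden state
        p = p * Pi[:, zt]             # update with the channel likelihood of z_t
        p /= p.sum()
        out[t] = p
    return out
```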
Finally, we take as our sequence of universal filtering schemes, indexed by k and ǫ,

X̂^ǫ_{univ,k,t}(z^t) = X̂^ǫ_{Q_k^t, t}(z^t),   t = 1, 2, . . .    (5)

The following theorem states the main result of this paper.
Theorem 1 Let X^∞ ∈ A^∞ be a stationary, ergodic process emitted by the source P_X which satisfies Assumption 1, and let Z^∞ ∈ A^∞ be the output of the DMC Π whose input is X^∞. Then:
(a) lim_{ǫ↓0} lim_{k→∞} lim sup_{n→∞} L_{X̂^ǫ_{univ,k}}(X^n, Z^n) ≤ Φ(P_X, Π), almost surely;
(b) lim_{ǫ↓0} lim_{k→∞} lim sup_{n→∞} E[ L_{X̂^ǫ_{univ,k}}(X^n, Z^n) ] ≤ Φ(P_X, Π).
4-B Intuition behind the scheme and proof sketch
The intuition behind our scheme parallels that of the universal compression and universal prediction problems in the stochastic setting. In the n-th order problem of both cases [5][14], the excess expected codeword length per symbol and the excess expected normalized cumulative loss incurred by using the wrong probability law Q in place of the true probability law P can be upper bounded by the normalized n-th order relative entropy (1/n) D_n(P||Q). Then, to achieve the asymptotically optimum performance, the compressor and the predictor try to find and use some data-dependent Q that makes (1/n) D_n(P||Q) → 0 as n → ∞, that is, makes D(P||Q) zero. We follow the same intuition in our universal filtering problem. For fixed k and ǫ, our scheme, as can be seen from (5), divides the noisy observed signal into sub-blocks of length (m_i − m_{i−1}). Since m_{i−1}/m_i tends to zero as i → ∞, the length of each sub-block grows faster than exponentially. Now, to filter each sub-block, it plugs in the ML estimator in Θ_k^{δ_k} obtained from the entire observation of the noisy signal up to the previous sub-block. From (1), we know that as the observation length n increases, this ML estimator converges to the parameter that minimizes the relative entropy rate to the true output probability law P_Z. Then, to show that this scheme achieves the asymptotically optimum performance, we bound the excess expected normalized cumulative loss with this relative entropy rate, and show that the bound goes to zero as the HMP parameter set becomes richer, that is, as k increases.
To be more specific, we briefly sketch the proof of our main theorem. Part (b) of Theorem 1 states that our scheme is asymptotically optimal. We can easily see that this follows directly from Part (a) and Reverse Fatou's Lemma. Therefore, proving Part (a) is the key in proving the theorem. Part (a) states that in the limit, the normalized cumulative loss of our scheme, for almost every realization, is less than or equal to the asymptotically optimum performance.
To prove Part (a), we first fix k and ǫ, and establish an inequality of the form of (6), where F(x, y) is some function such that F(x, y) → 0 as x ↓ 0 and then y ↓ 0. 3 There are two keys to getting this inequality. The first is to show the concentration of L_{X̂ǫ_univ,k}(X^n, Z^n) around its expectation, which will be shown in Lemma 3 and Corollary 1. The second is to obtain the explicit upper bound function F(x, y), which will be based on Lemma 4. Once this inequality is established, we show (7) from Lemma 5, and then send ǫ ↓ 0 to get Part (a). Keeping this proof sketch in mind, let us move on to the detailed proof in the next section.
4-C Proof of the theorem
Before proving the theorem, we introduce several lemmas as building blocks. Lemma 1 and Lemma 2 are adaptations of known results, which assumed that all the parameters are lower bounded by δ > 0, whereas in Θ^δ_k some parameters can be zero; we take this into account in proving Lemma 1 and Lemma 2. Lemma 3 shows the uniform concentration property of the normalized cumulative loss on Θ^δ_k, which is an important property that we need to prove the main theorem. Lemma 4 provides a key step for the upper bound described in (6), and Lemma 5, which needs three additional definitions, enables us to show (7). After building up the lemmas, we give the proof of the main theorem, which is merely an application of the lemmas.
Proof: To prove this lemma, we need three more lemmas in Appendix 1, which are variations on known results. The strategy is to show that the k subsequences {f_{jk+l}}, l = 0, …, k − 1, converge uniformly on Θ^δ_k and have the same limit. First, the uniform convergence of each subsequence {f_{jk+l}} can be shown by showing that the corresponding series of increments converges. Since ρ_{δ,k,k} < 1, M < ∞, and ρ_{δ,k,k} does not depend on Q, ω, and l, we conclude that all k subsequences converge uniformly on Θ^δ_k. Now, to show that the k subsequences have the same limit, construct another subsequence, {f_{j(k+1)+1}, j = 0, 1, 2, ⋯}. Since this subsequence contains infinitely many terms from all k subsequences, if this subsequence converges uniformly on Θ^δ_k, we can conclude that the k subsequences have the same limit. The derivation of the uniform convergence of this subsequence is the same as that described above, but setting m = k + 1 in Lemma 8.
Therefore, the original sequence {f_t} converges to its limit uniformly on Θ^δ_k. A remarkable fact about this lemma is that the convergence is uniform not only on Θ^δ_k but also in ω; that is, the convergence holds uniformly over every realization of z^0_{−∞}.
Lemma 2 For the distribution of the observed noisy process {Z_t}, P_Z, and every Q ∈ Θ^δ_k,

lim_{n→∞} (1/n) log( P(Z^n) / Q(Z^n) ) = D(P_Z‖Q)  P_Z-a.s.

Moreover, the convergence is uniform on Θ^δ_k.
Proof: This lemma consists of three parts. The first part is to show the existence of the first limit in the lemma so that the definition of D(P_Z‖Q) is valid. The second part is to show that the value of the limit is indeed D(P_Z‖Q). Finally, the last part is to show the uniform convergence of the normalized log-likelihood ratio to the relative entropy rate. The first two parts and the pointwise convergence of the third part are a generalization of the Shannon-McMillan-Breiman theorem, and their proofs are identical to existing ones. Since the pointwise convergence can be shown and the parameter set Θ^δ_k is compact, it is enough to show that (1/n) log Q(Z^n) is an equicontinuous sequence, by Ascoli's Theorem. That is, we need to show that for all ǫ > 0 there exists δ(ǫ) > 0 such that ‖Q − Q′‖_1 < δ(ǫ) implies |(1/n) log Q(Z^n) − (1/n) log Q′(Z^n)| < ǫ for all n, where ‖Q − Q′‖_1 = Σ_{i,j} |a_{ij} − a′_{ij}| is the L1 distance between the two parameters defining Q and Q′.
This equicontinuity can be proved by observing an auxiliary process {S_t} with transition matrix T built from the state and output pairs: since the Markov chains in Θ^δ_k are irreducible and aperiodic and Π(x_k(j), z) > 0 for all x_k(j), z, T is also irreducible and aperiodic. Hence, T has a unique stationary distribution τ. Although there are zeros in T, by the construction, any n-tuple s^n has positive probability. Since {S_t} is also stationary, we can express (1/n) log Q(Z^n) through the empirical counts of transitions in S^n. For another probability law Q′ ∈ Θ^δ_k, the difference of the two normalized log-likelihoods reduces to a sum over transition pairs, where (8) is from the fact that 1/n ≤ 1 and n_{ss′}/n ≤ 1, and (9) is from the fact that the DMC, Π, is the same for Q and Q′. The summations are over the pairs that have nonzero transition probabilities.
Since the function f(x) = log x is uniformly continuous for δ ≤ x < 1, and the a_{ij} that occur in the summation satisfy a_{ij} ≥ δ, for ǫ > 0 we obtain a corresponding δ_1(ǫ). Also, we know that all the elements of the stationary distribution of T are bounded away from zero, since the largest element of the stationary distribution of T is lower bounded by 1/M^{k+1}, and any state can be reached in a finite number of steps whose transition probabilities are bounded away from zero. Therefore, the relevant terms are bounded by some C_1 < ∞. Then, from the result on the sensitivity of the stationary distribution of a Markov chain [10], we obtain, for some C_2 < ∞ and for ǫ > 0, a corresponding δ_2(ǫ). Therefore, by letting δ(ǫ) = min(δ_1(ǫ), δ_2(ǫ)), we obtain the required bound. Let us now go back to the original process Z, where the summations are again over the sequences that have nonzero probabilities. By exchanging the roles of Q and Q′, we get the result that (1/n) log Q(Z^n) is an equicontinuous sequence. Therefore, we have the uniform convergence of the lemma.

Lemma 4 (Continuity) Consider a single-letter filtering setting with joint probability law P of X and Z, and suppose Q is some other joint probability law of X and Z. Define single-letter filters X̂_P(z) and X̂ǫ_Q(z) as

X̂_P(z) = B(P_{X|z}),  X̂ǫ_Q(z) = B(Q_{X|z} + U),

where U ∈ R^M is a uniform random vector in B_ǫ as before. Then the excess expected loss of X̂ǫ_Q over X̂_P admits the upper bound stated below, where the expectations on the left-hand side of the inequality are under P and the randomization U. This lemma states that the excess expected loss of a randomized filter optimized for a mismatched probability law can be upper bounded by the L1 difference between the true and the mismatched probability laws of the output symbol, plus a small constant term which diminishes with the randomization. This is somewhat analogous to a continuity bound for prediction which was derived in [14, (20)].
where (13) is from the fact that Σ_z Π(x, z) = 1, (14) is from the Cauchy-Schwarz inequality, and (15) is from the fact that the L2-norm is less than or equal to the L1-norm.
The second term in (12) can be expanded as in (16). It is easy to see that the inner summation in (16) is always nonnegative since, by definition, X̂_Q(z) assigns probability 1 to B(Q_{X|z}). Now, for a given Q, define the corresponding maximizer as in (17), resolving ties arbitrarily. Then we have the chain (18)-(20), where (18) follows from (17), (19) follows from the definition of the maximizer, and (20) follows from the Cauchy-Schwarz inequality. Note that, depending on Q and z, (18) and (19) can both be zero and hold with equality. Together with (15), the lemma is proved.
Before moving on to Lemma 5, we need the following three definitions. In Lemma 2, we have seen that for Q ∈ Θ^δ_k, D(P_Z‖Q) is well-defined. Now, let's consider the case where Q ∈ Θ^δ_k is some function of the noisy observation Z^n (denoted as Q[Z^n]). As mentioned in the footnote of Section 4-B, the notion of the relative entropy rate between P_Z and such a random Q[Z^n] is defined in Definition 2 using Definition 1. Definition 3 is also needed for the inequality in Lemma 5.
Definition 1 For a function of (X^n, Z^n, Q[Z^n]), define Ê(·) as the expectation in which Q[Z^n] is treated as fixed; that is, the Lebesgue integration with respect to the randomness of Q[Z^n] is excluded.
Definition 2 Suppose Q[Z^n] ∈ Θ^δ_k. Then, the relative entropy rate between P_Z and Q[Z^n] is defined as

D(P_Z‖Q[Z^n]) = lim_{m→∞} (1/m) Ê log( P(Z^m) / Q[Z^n](Z^m) ). 4
Definition 3 Define the k-th order Markov approximation of P_X for n ≥ k as

P^(k)_X(x^n) = P(x^k) ∏_{t=k+1}^{n} P(x_t | x_{t−k}^{t−1}).

Furthermore, denote P_Z and P^(k)_Z as the probability laws of the output of the DMC, Π, when the probability law of the input is P_X and P^(k)_X, respectively. 5 Now, we give the following lemma, which upper bounds the relative entropy rate between P_Z and the ML estimator.
Lemma 5 For the given sequence {δ_k} defined in Section 4-A and for fixed k, we have

lim sup_{n→∞} D(P_Z‖Q̂_{k,δ_k}[Z^n]) ≤ D(P_X‖P^(k)_X)  a.s.
Proof: Recall that Q̂_{k,δ_k}[Z^n] is an ML estimator in Θ^{δ_k}_k based on the observation Z^n. From (1), we know that Q̂_{k,δ_k}[Z^n] converges to the parameter minimizing D(P_Z‖Q) over Θ^{δ_k}_k. Also, (2) and Definition 3 assure that this minimum is at most D(P_Z‖P^(k)_Z) a.s. 4 Note that D(P_Z‖Q[Z^n]) is a function of Z^n, and is still a random variable. 5 Here, P^(k)_Z is not the k-th order Markov approximation of P_Z, but is the distribution of the channel output whose input is P^(k)_X, the k-th order Markov approximation of the original input distribution P_X. This is the link where we needed Assumption 1. Now, let's denote P^(k) as the joint probability law of (X^n, Z^n) when the probability law of the input process is P^(k)_X. Then, by the chain rule of relative entropy [5, (2.67)], we can decompose the joint divergence into an input term and a channel term. Since the DMC is fixed, we have E log [P(Z^n|X^n)/P^(k)(Z^n|X^n)] = 0. Moreover, by the nonnegativity of relative entropy, D(P_Z‖P^(k)_Z) ≤ D(P_X‖P^(k)_X), and since D(P_X‖P^(k)_X) always exists by ergodicity, we have the stated bound and the lemma is proved.
Proof of Theorem 1
We are now finally in a position to prove our main theorem. As mentioned in Section 4-B, we first fix k and ǫ, and aim at an inequality of the form of (6) to prove Part (a), which we restate here for convenience.
From the definition of L_{X̂ǫ_univ,k}(X^n, Z^n) and from (5), we know that Q^t_k is a function of Z^{m_{i(t)}}. Since ℓ(X_t, X̂ǫ_{Q^t_k}(Z^t)) is a function of (X_t, Z^t, Q[Z^{m_{i(t)}}]), we can define the quantity Ê(ℓ(X_t, X̂ǫ_{Q^t_k}(Z^t))) from Definition 1. From this, we also define

Ê L_{X̂ǫ_univ,k}(X^n, Z^n) = (1/n) Σ_{t=1}^{n} Ê ℓ(X_t, X̂ǫ_{Q^t_k}(Z^t)).

Now, we have the following Corollary 1 from Lemma 3, whose proof is given in Appendix 3. This corollary is a key step in proving the main theorem, since it provides the crucial link that enables the inequality in (6). Therefore, to get the inequality of the form of (6), we can equivalently bound Ê L_{X̂ǫ_univ,k}(X^n, Z^n). Now, let's consider the chain of inequalities leading to (27); since the square root function is continuous, it suffices to evaluate the lim sup of the expression inside the square root of the right-hand side, where (24) is from Cesàro's mean convergence theorem and (25) is from the properties of P^(k) established above, which finally puts the bound in the form of (6). Now, we need to check whether the right-hand side of (27) goes to zero if we let k → ∞ and ǫ ↓ 0. To see this, consider the following further upper bounds.
where (28) follows from the preceding bounds. Note that the expectation here is with respect to the randomness of the probability law within the parentheses, too. By sending ǫ to zero, Part (b) is proved.
Extension: Universal filtering for channels with memory

Now, let's extend our result to the case where the channel has memory. With the identical assumption on {X_t}, now suppose {Z_t} is expressed as

Z_t = X_t ⊕ N_t,

where ⊕ denotes modulo-M addition, and {N_t} is an A-valued noise process which is not necessarily memoryless.
We assume we have complete knowledge of the probability law of {N_t}. Specifically, let's consider the case where {N_t} is an FS-HMP; that is, it is the output of an invertible memoryless channel Γ = {Γ(i, j)}_{i,j∈A} whose input is an irreducible, aperiodic ℓ-th order Markov chain, {S_t}, which is independent of {X_t}. Let Γ_min = min_{i,j∈A} Γ(i, j), and suppose Γ_min > 0. For simplicity, assume that the alphabet of {S_t} is also A.
In this model, the channel between X_t and Z_t at time t is an M-ary symmetric channel, which is specified by the S_t-th row of Γ. Let's define an M × M matrix Π_t whose (x_t, z_t)-th element is

Π_t(x_t, z_t) = Γ(S_t, z_t ⊖ x_t),

where ⊖ denotes modulo-M subtraction. Now, let's make the following assumptions on the noise process.
• {N_t} is stationary, i.e., Π_t is identical for all t.
• {N_t} satisfies [22, Assumption 1] with k = 0.
• The transition probabilities of {S_t} are lower bounded by some α > 0.

As stated in [22, 2-A], the first and the second assumptions are rather benign. Especially, for the second assumption, it can be shown that under benign conditions on the parametrization, almost all parameter values, except for those in a set of Lebesgue measure zero, give rise to a process satisfying this assumption. Also, since it only corresponds to the case k = 0 in [22, Assumption 1], it is a much weaker assumption. The third assumption is a positivity assumption similar to Assumption 1, which enables our universal filtering scheme.
Under these assumptions on the noise process, we can extend our scheme to do universal filtering for this channel. First, we can convert this channel to an equivalent memoryless channel, Ξ = {ξ((i, j), h)}_{i,j,h∈A}, where the input process is {(X_t, S_t)} and the output is {Z_t}. Here, Ξ is an M^2 × M matrix, and the channel transition probability is

ξ((i, j), h) = Γ(j, h ⊖ i).

To do the filtering, we apply our scheme to this equivalent memoryless channel. For fixed k ≥ ℓ, as in Section 2-B.1, define a parameter set of HMPs, Θ_k, whose Markov chain has M^{k+ℓ} states and whose memoryless channel is Ξ. The k-th order conditional probability of our new input process factorizes, by the independence of {X_t} and {S_t}, as

P(x_t, s_t | x_{t−k}^{t−1}, s_{t−k}^{t−1}) = P(x_t | x_{t−k}^{t−1}) P(s_t | s_{t−k}^{t−1}) ≥ δ_k · α, (31)

where (31) is from Assumption 1 and the third condition on the noise process. Let γ_k = δ_k · α. Then, we can model {Z_t} in Θ^{γ_k}_k, or equivalently, model (X_t, S_t) as a k-th order Markov chain, and obtain Q^t_k, the ML estimator in Θ^{γ_k}_k based on Z^{m_{i(t)}}. By forward recursion, we can get Q^t_k(X_t, S_t | Z^t), and by summing over the S_t's we can calculate Q^t_k(X_t | Z^t). Then, finally, we define our sequence of universal filtering schemes as X̂ǫ_univ,k = {X̂ǫ_{Q^t_k,t}}, exactly the same as we proposed in Section 4-A.
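As a minimal sketch of the conversion just described, assuming the reconstructed transition probability ξ((i, j), h) = Γ(j, h ⊖ i), the equivalent M^2 × M channel matrix can be assembled as follows; the function name and the state ordering are illustrative.

```python
import numpy as np

def equivalent_channel(Gamma, M):
    """Build the M^2 x M memoryless channel Xi for Z = X (+) N (mod M),
    where N is emitted through Gamma given the noise state S.

    Gamma : (M, M) matrix, Gamma[j, n] = P(N = n | S = j)
    Row index of Xi enumerates the input pair (i, j) = (X, S) as i * M + j.
    """
    Xi = np.zeros((M * M, M))
    for i in range(M):          # channel input symbol X = i
        for j in range(M):      # noise state S = j
            for h in range(M):  # channel output symbol Z = h
                Xi[i * M + j, h] = Gamma[j, (h - i) % M]
    return Xi
```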
The analysis of this scheme is identical to the one given in the proof of the main theorem. The analogue of (21), which is the only place where the invertibility of Π is used, can also be obtained in this case, thanks to the second assumption on the noise process. Thus, by the same argument as Lemma 5, we have the same result as Theorem 1, and we can successfully extend our scheme to the case where the channel noise is an FS-HMP, under some mild assumptions.
Conclusion and future work
In this paper, we proved that, for a known, invertible DMC, a family of filters based on HMPs is universally asymptotically optimal for any general stationary and ergodic {X_t} satisfying a mild positivity condition. That is, we showed that our sequence of schemes indexed by k and ǫ achieves the asymptotically optimal performance regardless of the clean source distribution. We could also extend this scheme to the case where the channel has memory, specifically where the channel noise process is an FS-HMP. A future direction of this work would be to ascertain the relationship between k and n, so that we can devise a single scheme that grows k at some rate related to n. Attempting to loosen the positivity assumption that we made in our main theorem, and extending our discrete universal filtering schemes to discrete universal denoising schemes, are additional future directions of our research.

Appendix 1

Proof of Lemma 6: Let's bound the terms in (32). First, consider the first term. Note that a^m_{ij} ≥ δ^m and a^k_{jj_0} ≥ δ^k for all i, j, j_0, from the assumption of Θ^δ_k. Let Q(Z^∞_{t+m+k+1} | X_{t+m+k} = j_0) = α_{j_0}; then the last expression can be bounded accordingly. Now let's look at the second term in (32), where T = {t + 1, ⋯, t + m + k}\{t, t + m}. Thus, from (34) and (35), (1 − ρ_j)/ρ_j ≤ (M − 1)/(δ·Π_min)^{m+k}, and thus ρ_j ≥ (1 + (M − 1)/(δ·Π_min)^{m+k})^{−1}, which proves the lemma.
Proof of Lemma 7: From the argument of Lemma 6, the corresponding lower bound is easy to see. Since δ, k, and m are fixed, let's simply denote µ = µ_{δ,k,m}. Also, let's omit d and the parentheses for the above four quantities to simplify notation. Then (36) follows from Lemma 6, since β_{ij} ≥ µ for all i, j.
Lemma 8 holds for all p, all d ≥ 1, and 0 ≤ l ≤ m − 1. Proof: One direction yields an upper bound; on the other hand, the reverse direction yields a matching bound. Therefore, from Lemma 7, we obtain the claim. Note that the result does not depend on either Q or l.
Appendix 2
Before proving Lemma 3, we need the following lemma first. Parts (b), (c), and (d) are crucial for Lemma 3, and Part (a) enables Part (b). Part (a) is the reason why we need a randomization of the filter.
(a) We have the stated bound, where t_1, t_2 > 0 are arbitrary integers; that is, it holds for any integer t > 0 and any individual sequence. Proof of (a): For a given simplex vector Q, a fixed x̂, and B_ǫ defined as in Section 4-A, we define the following.
• dist(Q, c^T y = 0): the shortest L2 distance from a simplex vector Q to the plane c^T y = 0.

That is, S_x̂(Q) is the set of vectors in the ǫ-ball, B_ǫ, that make the Bayes response B(Q + W) equal to x̂. Also, DP(x̂) is the set of decision planes that separate the decision region for the reconstruction symbol x̂ from the other symbols. Then, for some fixed t, by definition, the probability in question is a ratio of volumes, where Vol(·) is the volume of a set. Since Vol(B_ǫ) is a constant, for any t_1 and t_2 we can compare the two probabilities via their numerators. For the numerator, as a crude bound, we get (40)-(43), where (41) is from the triangle inequality, (42) is from the Cauchy-Schwarz inequality, and (43) is from the fact that the L2-norm is less than or equal to the L1-norm. Therefore, (40), and in turn (39), can be bounded, and Part (a) is proved.
(b) By the exact same argument as in the proof of Lemma 1, we obtain the uniform convergence.
(d) We know that for the individual sequence pair (x_0, z^0_{−t}), the conditional probability Q(x_0 | z^0_{−t}) can be written explicitly. For Q ∈ Θ^δ_k, Π is fixed and we can think of ∏^0_{i=−t} Π(x_i, z_i) as a constant for the individual sequence pair; hence Q(x_0 | z^0_{−t}) is the ratio of two finite-order polynomials of {a_{ij}}, and as Θ^δ_k is closed and bounded, Q(x_0 | z^0_{−t}) is a uniformly continuous function of {a_{ij}}. Therefore, for a given η, there exists ǫ(η) such that ‖Q − Q′‖_1 < ǫ(η) implies that the corresponding conditional probabilities differ by less than η, since there is only a finite number of possible (x_0, z^0_{−t}) pairs. Also, since Θ^δ_k is compact, we can always find a finite set F_k(t, η) such that for any Q ∈ Θ^δ_k there exists at least one Q′ ∈ F_k(t, η) satisfying ‖Q − Q′‖_1 < ǫ(η). Therefore, Part (d) is proved.
Proof of Lemma 3:
To prove Lemma 3, first consider the following limit; the ergodic theorem gives

lim_{n→∞} (1/n) Σ_{t=1}^{n} g_Q(T^t(X, Z)) = E[g_Q(X, Z)]  a.s.,

from the proof of pointwise convergence above. As in (3), this convergence is also uniform on F_k(t, η).
Therefore, the lemma is proved.
For the second term, since n ≥ m_{i(n)} ≥ N, the bound holds as well. Therefore, for any n ≥ m_{I_0}, with m_{i(n)} ≤ n ≤ m_{i(n)+1}, we have |L_{X̂ǫ_univ,k}(X^n, Z^n) − Ê L_{X̂ǫ_univ,k}(X^n, Z^n)| < δ, and since δ was arbitrary, we have the corollary.
T-cell receptor gene therapy targeting melanoma-associated antigen-A4 by silencing of endogenous TCR inhibits tumor growth in mice and human
Genetically engineered T cells expressing a T-cell receptor (TCR) are powerful tools for cancer treatment and have shown significant clinical effects in sarcoma patients. However, mismatch of the introduced TCR α/β chains with endogenous TCR may impair the expression of transduced TCR, resulting in an insufficient antitumor capacity of modified T cells. Here, we report the development of immunotherapy using human lymphocytes transduced with a codon-optimized melanoma-associated antigen (MAGE)-A4 and HLA-A*2402-restricted TCR, which specifically downregulate endogenous TCR by small interfering RNA (si-TCR). We evaluated the efficacy of this immunotherapy in both NOD-SCID mice and uterine leiomyosarcoma patients. Our results revealed that transduced human lymphocytes exhibited high surface expression of the introduced tumor-specific TCR, enhanced cytotoxic activity against antigen-expressing tumor cells, and increased interferon-γ production by specific MAGE-A4 peptide stimulation. Retarded tumor growth was also observed in NOD-SCID mice inoculated with human tumor cell lines expressing both MAGE-A4 and HLA-A*2402. Furthermore, we report the successful management of a case of uterine leiomyosarcoma treated with MAGE-A4 si-TCR/HLA-A*2402 gene-modified T cells. Our results indicate that the TCR-modified T cell therapy is a promising novel strategy for cancer treatment.
Background
In recent years, the development of immune checkpoint-based immunotherapy, such as PD-1/PD-L1 monoclonal antibodies, has been applied in a variety of tumors and shown good clinical results. However, immune checkpoint treatments are only effective in a small number of cancer patients, and thus new options and methods are needed 1 . Adoptive T cell therapy is a rapidly developing method of tumor immunotherapy. The principle is to introduce large numbers of in vitro amplified effector cells into the patient to produce a direct killing effect on the tumor cells. Two types of genetically modified specific T cell adoptive immunotherapies, chimeric antigen receptor T cells (CAR-T) and T cell receptor-engineered T cells (TCR-T), have been shown to be attractive for treating patients with malignancies 2,3 . CAR-T cell therapy produces significant clinical results in hematological tumors, but it is only specific for surface antigens and shows limited applications in solid tumors 4,5 . In contrast, TCR-T cells recognize fragments of antigen as peptides bound to major histocompatibility complex (MHC) molecules and display good clinical effects in the treatment of solid tumors. Three NY-ESO-1/HLA-A2-specific TCR-T clinical trials in 2011 and 2015 showed that more than 50% of patients with synovial sarcoma, malignant melanoma, and multiple myeloma exhibited an objective clinical response, which encouraged the development of novel TCR-T cell immunotherapies 6-8 .
However, on-target adverse events have been reported in TCR gene therapy targeting melanocyte differentiation antigens, such as melanoma antigen (MART)-1 and gp100. Normal tissues such as the skin and brain, which express similar antigens, are cross-recognized by TCR-T cells and can be severely damaged, particularly when high-affinity TCRs are used. Thus, optimal antigen selection is crucial for TCR-T treatment 9-11 .
Cancer/testis antigens are particularly attractive targets for immunotherapy because they are highly expressed on adult male germ cells or tumor cells, but not in normal adult tissues 12,13 . Melanoma-associated antigen (MAGE) family antigens are mainly expressed in many malignant tumors such as melanoma, but show low expression in normal tissues. Immunotherapy strategies targeting these antigens have been well-studied 14,15 . High expression of MAGE-A4, a member of the MAGE-A family, has been reported in ovarian cancer, melanoma, non-small cell lung cancer, and esophageal squamous cell carcinoma 16-18 . This suggests that TCR-T cell therapy targeting MAGE-A4 is a feasible and promising treatment for malignant tumors.
During the introduction of exogenous TCR into T cells, the presence of endogenous TCR leads to mismatch between the two types of TCRs, which may result in recognition of unknown antigens expressed on normal tissues and cause tissue damage. Several strategies are being developed to minimize the risk of mixed TCR dimers and improve the expression of the introduced TCR, such as zinc-finger nucleases, meganucleases, non-viral genome targeting, TALEN technology, CRISPR technology, TCR inhibitory molecule (TIM) peptides, and RNAi-mediated TCR knockdown 19-26 . Gene editing by zinc-finger nucleases is an appealing approach for shutting down TCR expression, but it requires a long T cell culture time and multiple sorting steps, which is inconvenient for clinical applications 19 . Meganucleases have only a single domain, which makes them more difficult to engineer; moreover, the enzyme is too expensive to develop for clinical use 25 . For non-viral T cell genome targeting, the major barrier is the toxicity of DNA at high concentrations 20 . TALEN technology has high precision and efficiency for targeting long gene sequences, but plasmid construction is complicated and off-target effects also exist 23 . CRISPR technology is simple, low cost, easy to construct, and very efficient; however, both cytotoxicity and off-target effects have been reported 26 . Among other exploratory approaches to prevent GVHD caused by alloreactive T cells, the application of TIM peptides is an appealing strategy that has been designed into CAR-T treatment, and a clinical trial is ongoing 27 . RNAi-mediated gene editing is also fast and easy, reducing the number of experimental steps and saving time, although this method cannot completely remove gene function. We and our collaborators chose RNAi-mediated TCR editing by retroviral transduction, which uses a one-step transduction protocol and has been shown to reduce endogenous TCR efficiently. This approach meets our requirements for avoiding mismatches between exogenous and endogenous TCR and showed better T cell recognition and killing activity 28 .
In this study, we further evaluated the application of this RNAi-mediated TCR gene editing method in mice and humans. We manufactured MAGE-A4 si-TCR gene-modified T cells using human peripheral blood mononuclear cells (PBMCs) and examined the in vitro activity of the T cells. Next, we introduced the gene-modified T cells into NOD-SCID mice bearing tumors with or without HLA-A*2402 to assess their in vivo antitumor capacity. Moreover, we present clinical evidence from a case of uterine leiomyosarcoma treated by adoptive transfer of MAGE-A4 si-TCR gene-modified T cells.
Isolation and cultivation of peripheral blood mononuclear cells
Peripheral blood mononuclear cells (PBMCs) were derived from healthy donors (for in vitro and in vivo experiments) or patients (for adoptive immunotherapy) who signed informed consent. PBMCs isolated using Ficoll were cultured in GTT551 media (Takara Bio) supplemented with 1% autologous plasma, 0.2% human serum albumin (Shuyang Bio, Sichuan, China), and 1000 IU/mL interleukin-2 (Bailu Bio, Beijing, China).
Preparation of gene-modified cells
The MAGE-A4 si-TCR retroviral vector was developed and provided by Takara Bio and shows high surface expression of the introduced tumor-specific TCR and reduced expression of endogenous TCRs 28 . Gene-modified cells were manufactured as described previously 28,29 . Briefly, PBMCs were stimulated with 30 ng/mL OKT-3 (eBioscience, San Diego, CA, USA) and 1000 IU/mL interleukin-2 prior to transduction; retroviral vector solutions were then applied onto Retronectin (Takara Bio, Beijing)-preloaded bags for 16 h at 4°C. Stimulated cells were then cultured in vector-coated bags and expanded for 3-5 days. Control PBMCs (non-gene-modified cells, NGMCs) were cultured simultaneously.
Flow cytometric analysis and tetramer assays
Fluorescein isothiocyanate (FITC)-conjugated anti-human CD2, CD3, CD8, and CD45 and phycoerythrin (PE)-conjugated anti-human CD4, CD14, CD19, and CD56 monoclonal antibodies were all from BD Biosciences (Franklin Lakes, NJ, USA) and used to detect the immunophenotypes of transduced T cells. PE-conjugated MAGE-A4 143-151 /HLA-A*2402 tetramer (from TCMetrix, Epalinges, Switzerland) was also used to detect TCR expression in gene-modified cells and in PBMCs from the patient's blood. The results of the tetramer analysis were expressed as the percentage of tetramer-positive cells among CD8-positive cells. Data were acquired with a FACS CantoII flow cytometer (BD Biosciences) and analyzed using FACSDiva and FlowJo software (Ashland, OR, USA).
Cell killing experiment
CFSE (Thermo Fisher Scientific, USA)-labeled target cells (50,000-100,000 per well) were seeded in triplicate in 12-well plates for 1 day and then cocultured with effector cells at an effector/target ratio of 10:1. Four hours later, all cells were collected and stained with NIR-Zombie dye (Biolegend, USA) for the cell viability assay. The results were analyzed by flow cytometry, and the percentage of dead cells was taken as the cell killing rate.
ELISPOT assay
Human ELISPOT assays (Dakewe, Nanshan, China) were used to detect the secretion of IFN-γ after specific peptide stimulation. Briefly, cells with or without MAGE-A4 peptide were seeded into a 96-well plate that had been precoated with a capture antibody specific for human IFN-γ. After 24 h of incubation in a humidified 37°C, 5% CO 2 incubator, the wells were washed and biotinylated detection antibody specific for human IFN-γ was added to the wells. Unbound biotinylated antibody was washed away and horseradish peroxidase-conjugated antibody was added. Following four washes to remove unbound enzyme, substrate solution was added. A colored precipitate formed and appeared as spots at the sites of cytokine localization. The spots were counted by computer-assisted image analysis (ImmunoSpot Series 3 Analyzer: CTL, Shaker Heights, OH). PMA and Ionomycin (Sigma-Aldrich) were used to stimulate the cells as positive control.
IFN-γ assay
Target cells (T2A24) were pulsed with 4 μg/μL MAGE-A4 143-151 peptide for 3 h and then cocultured with effector cells at an effector/target ratio of 1:1. Two hours later, GolgiPlug (BD Biosciences) was added to the samples at 1 μL/mL, and the cells were incubated for another 20 h. For staining, the cells were first stained with FITC-conjugated anti-CD8 and then permeabilized using an IntraPrep kit (Beckman Coulter, Brea, CA, USA) following the manufacturer's instructions. The cells were further stained intracellularly with PE-conjugated anti-IFN-γ (Beckman Coulter) and analyzed by flow cytometry.
Mice
Six- to eight-week-old female NOD-SCID mice (Beijing Vital River Laboratory Animal Technology Co., Ltd., Beijing, China) were used in this study. All experimental protocols were approved by the Ethics Review Committee for Animal Experimentation (Tianjin Medical University Cancer Institute and Hospital).
Tumor inoculation and adoptive cell transfer

KE4 tumor cells (1 × 10 7 in 0.2 mL PBS) and QG56 tumor cells (1 × 10 6 in 0.2 mL PBS) were subcutaneously injected into the right flank of NOD-SCID mice. Mice were randomly divided into three groups and administered different treatments through the tail vein: normal saline group (NS), NGMC group (1 × 10 8 cells), and GMC group (1 × 10 8 cells). The condition of the mice was monitored every day, and tumor size was measured with calipers via perpendicular diameters twice per week. Safety was evaluated by observing the general status, response, and body weight of the mice, and efficacy was evaluated by observing the tumor growth of KE4 and QG56.
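Although the paper does not state the formula used to convert the two perpendicular caliper diameters into a tumor size, the modified-ellipsoid convention below is a common choice for subcutaneous xenografts; the formula and function name are assumptions, shown only for illustration.

```python
def tumor_volume_mm3(length_mm, width_mm):
    """Modified-ellipsoid estimate V = (L * W^2) / 2 for a subcutaneous
    tumor measured by two perpendicular caliper diameters (L >= W).
    This is a common convention, not necessarily the one used here."""
    longer, shorter = max(length_mm, width_mm), min(length_mm, width_mm)
    return longer * shorter ** 2 / 2.0
```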
Detection of MAGE-A4 antigen expression
Expression of MAGE-A4 was assessed by immunohistochemistry (IHC) using the monoclonal antibodies MCV-1 and MCV-4, which were kindly provided by Mie University, Japan 29 .
Treatment protocol
The patient was both MAGE-A4 and HLA-A*2402 positive and received a preconditioning regimen (Cyclophosphamide 10 mg/kg for 2 days and Fludarabine 25 mg/m 2 for 2 days), followed by intravenous infusion of the manufactured gene-modified lymphocytes on days 0 and 14. Interleukin-2 (1 million/m 2 for 2 days and 2 million/m 2 for 5 days) infusion was conducted for 7 days. On days 21 and 28, the patient was subcutaneously administered 300 mg of MAGE-A4 peptide (NYKRCFPVI; PolyPeptide Laboratories, Torrance, CA, USA) emulsified with incomplete Freund adjuvant (Montanide ISA-51VG; SEPPIC, Paris, France). On day 56 and thereafter, safety and clinical responses were assessed, and the patient was followed up until March 2019. This study was approved by the Research Ethics Committee of Tianjin Medical University Cancer Institute and Hospital, China. Written informed consent was obtained from the patient before participation in the study.
Measurement of proviral copy number for cell kinetics analysis
Genomic DNA was isolated from the patient's blood using a DNA extraction kit (Qiagen). Primers for the proviral DNA sequence (the retroviral packaging signal region, present in TCR-transduced cells) and for human IFN-γ DNA (present in all T cells) were from the Provirus Copy Number Detection Primer Set (6167, Takara Bio Inc.). qPCR assays were performed using the Cycleave PCR Core Kit (CY501, Takara Bio Inc.). The DNA samples were amplified by 50 cycles of three-step PCR. Serially diluted control templates for the provirus and IFN-γ were amplified to generate standard curves. The DNA concentration of IFN-γ or of the proviral vector for MAGE-A4 si-TCR expression was calculated from the corresponding standard curve, and the copy number was expressed as the ratio of the proviral DNA to the IFN-γ DNA values.
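As an illustration of the standard-curve calculation described above, the following is a minimal sketch assuming the usual log-linear relation between Ct value and template concentration; the Ct numbers shown are hypothetical, not data from this study.

```python
import numpy as np

def standard_curve(ct_values, log10_conc):
    """Fit the usual log-linear qPCR standard curve: Ct = m*log10(conc) + b."""
    m, b = np.polyfit(log10_conc, ct_values, 1)
    return m, b

def concentration(ct, m, b):
    """Invert the standard curve to estimate template concentration from a Ct."""
    return 10 ** ((ct - b) / m)

# Hypothetical serial-dilution Ct readings (4 ten-fold dilutions each):
m_pro, b_pro = standard_curve([34.1, 30.8, 27.4, 24.0], [1, 2, 3, 4])  # provirus
m_ifn, b_ifn = standard_curve([33.5, 30.1, 26.8, 23.5], [1, 2, 3, 4])  # IFN-gamma

proviral = concentration(29.2, m_pro, b_pro)
ifng = concentration(25.6, m_ifn, b_ifn)
copy_number = proviral / ifng  # proviral copies per T-cell genome equivalent
```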
Statistical analysis
Data were analyzed using GraphPad Prism version 5 (GraphPad Software, San Diego, CA). Differences between cell and tumor growth were determined by repeated measures ANOVA. The Mann-Whitney U-test and Student's t-test were used for parameter comparison between two subject groups. A P-value less than 0.05 was considered statistically significant.
Manufacture of lymphocytes transduced with MAGE-A4 si-TCR
The generation of retroviral vectors encoding both small interfering RNA constructs that specifically downregulate endogenous TCR and a codon-optimized, small interfering RNA-resistant TCR specific for the human tumor antigen MAGE-A4 (termed si-TCR) was described previously 28 . Lymphocytes from at least three different healthy donors were isolated and transduced with retrovirus carrying MAGE-A4 si-TCR. The gene-modified T cells (GMCs) were manufactured on a large scale in vitro, and the cell growth curve was drawn from cell counts. There was no significant difference in cell growth between GMCs and non-gene-modified cells (NGMCs), indicating that the retroviral transduction method did not influence the cells (Fig. 1a). Flow cytometry was performed to detect the immunophenotype of the modified cells, evaluated as the expression of cell surface markers, including CD3, CD4, CD8, CD2, CD19, CD56, CD14, and CD45. The results revealed that the immunophenotypes of GMCs and NGMCs were similar and mainly exhibited the T cell phenotype, particularly CD8 + killer cells (Fig. 1b, c).
To evaluate the expression of the introduced codon-optimized TCR, cells were stained with MAGE-A4 143-151 /HLA-A*2402 tetramer and anti-human CD8 mAb for flow cytometric analysis. The results showed that cell surface MAGE-A4-specific TCR expression was much higher in si-TCR cells than in control NGMCs (Fig. 1d, P < 0.01). To further validate the cytotoxic ability of GMCs toward cell lines with and without MAGE-A4/HLA-A*2402, KE4 (MAGE-A4 + and HLA-A*2402 + ) and QG56 (MAGE-A4 + and HLA-A*2402 − ) cells were used for the cell killing assay. After coculture for 4 h, the killing effect on KE4 cells was significantly greater for si-TCR cells than for NGMCs, while there was no significant difference in the killing of QG56 cells (Fig. 1e, P < 0.05). All experiments were performed at least three independent times, and representative data are shown.
Si-TCR gene-modified cells produced more interferon-γ after specific MAGE-A4 peptide stimulation in vitro
To further investigate the function of GMCs, an ELISPOT interferon (IFN)-γ assay was conducted after MAGE-A4 143-151 peptide stimulation. As a positive control, we stimulated both GMCs and NGMCs with PMA/Ionomycin. The number of spots developed in GMCs after peptide-specific stimulation was significantly higher than that in NGMCs following the same stimulation (Fig. 2a, left panel). A scatter plot is shown in the right panel (P < 0.05). Moreover, to verify that the GMCs react specifically with HLA-A*2402-restricted MAGE-A4, we measured intracellular IFN-γ secretion by flow cytometry, which also reflected the GMC gene transduction efficiency (ratio of IFN-γ-positive cells among CD8 + cells). Peptide-loaded T2-A*2402 cells (T2A24) were used as target cells. The results revealed that GMC cells produced IFN-γ specifically after MAGE-A4 peptide stimulation (Fig. 2b, left panel); the histogram is shown in the right panel.

Fig. 1 Growth curve, immunophenotype, tetramer, and cell killing analysis of manufactured melanoma-associated antigen (MAGE)-A4 si-TCR gene-modified cells. a Transduction of si-TCR in human lymphocytes. Peripheral blood mononuclear cells from healthy donors were stimulated with anti-CD3 mAb and interleukin-2. Cells were cultured with or without retroviral transduction. Cell growth rates of si-TCR T cells and NGMC cells were monitored and the growth curve is shown. b Immunophenotype of si-TCR GMCs and NGMCs detected by flow cytometric analysis; the histogram is shown. c Representative staining of gene-modified and unmodified cells with antibodies for cell surface markers. d GMCs and NGMCs were stained with MAGE-A4 tetramer and anti-CD8 mAb; the percentage of tetramer-positive cells among CD8 + cells is indicated. e Cell killing assay performed with GMCs and NGMCs cocultured with KE4 and QG56 cell lines. All experiments were performed with PBMCs from at least 3 donors, and representative data are shown. Repeated measures ANOVA was used for cell growth rate analysis and the Mann-Whitney U-test for the others. Data represent mean ± SEM, *P < 0.05, **P < 0.01.

Fig. 2 In vitro functional analysis of MAGE-A4 si-TCR gene-modified cells. a si-TCR GMCs and NGMCs were stimulated in vitro with MAGE-A4 peptide, and then subjected to ELISPOT assays. PMA/Ionomycin stimulation was used as a positive control. The right panel shows the scatter plot. b si-TCR gene-modified cells and unmodified cells were stimulated with T2A24 cells pulsed with the MAGE-A4 143-151 peptide and subjected to an IFN-γ assay by flow cytometry. Representative staining is shown in the left panel. The Mann-Whitney U-test was used to determine the significance of differences between two groups. Data represent mean ± SEM, *P < 0.05, **P < 0.01.

Adoptive transfer of si-TCR gene-modified cells (manufactured from healthy donors) into NOD-SCID mice was conducted to confirm the antitumor response. Eight-week-old female NOD-SCID immunodeficient mice were inoculated subcutaneously with KE4 (1 × 10 7 cells/mouse) or QG56 (1 × 10 6 cells/mouse) into the right flank. After inoculation with tumor cells, mice were randomly divided into three groups and administered three different treatments: normal saline group (NS), NGMC group, and si-TCR GMC group. We evaluated the safety of these treatments by observing the general status, response, and weight of the mice. Anti-tumor effectiveness was evaluated by measuring the tumor growth of KE4 and QG56. At least three independent experiments were performed, and representative data are shown.
The results revealed no significant side effects or toxicity in the three treatment groups (Fig. 3a). Compared to the NS and NGMC groups, si-TCR GMCs transferred into KE4 tumor-bearing mice specifically inhibited tumor growth, while this effect was not observed in QG56 tumor-bearing mice (Fig. 3b, c, P < 0.05), indicating that administration of si-TCR gene-modified lymphocytes inhibited human tumor growth in NOD-SCID mice in a manner specific for MAGE-A4 + HLA-A*2402 + KE4 tumors.
Uterine leiomyosarcoma patient treated with MAGE-A4 si-TCR/HLA-A*2402 gene-modified T cells after chemotherapy obtained a stable status for 41 months: a pilot trial

To evaluate the clinical efficacy of MAGE-A4 si-TCR GMCs, we conducted a pilot trial in an HLA-A*2402-positive uterine leiomyosarcoma patient. MAGE-A4 expression was assessed by immunohistochemistry (IHC) and judged as positive (Fig. S1). The female patient was diagnosed with uterine leiomyosarcoma on August 14, 2014 (pathology: stage IIA) and underwent operation. Three cycles of adjuvant chemotherapy with L-OHP + IFO + EPI were performed from September 18, 2014 to November 11, 2014, and pelvic field-assisted radiotherapy was performed in December 2014. After 5 months, computed tomography (CT) revealed bilateral lung metastases (pathology: malignant tumor, sarcoma), and 1 round of chemotherapy with bevacizumab + CBP + DTIC was performed on June 15, 2015. One month later, CT scanning showed disease progression, and 4 rounds of second-line chemotherapy with ENDOSTAR + DOX + GEM were performed from July 2015 to October 2015. After chemotherapy, the treatment efficacy showed a complete response (CR). Because the tumor burden was at a low level, we administered TCR gene-modified cell treatment. PBMCs from the patient were separated by apheresis. Cells were stimulated with IL-2 and anti-CD3 and then transduced with the si-TCR vector. After 7-10 days of culture, cells were harvested and frozen until use. Quality testing of the si-TCR T cell products was performed as described above (Fig. S2). The treatment schema is shown in Fig. 4a. As depicted, MAGE-A4 si-TCR cells were infused into the patient for 1 cycle (two infusions) according to the clinical protocol, on Day 0 and Day 14, after low-dose lymphodepletion (Cyclophosphamide 10 mg/kg for 2 days and Fludarabine 25 mg/m 2 for 2 days). Interleukin-2 (1 million/m 2 for 2 days and 2 million/m 2 for 5 days) infusion was conducted for 7 days after the second infusion. On Days 21 and 28, the patient was subcutaneously administered 300 mg of MAGE-A4 peptide. On Day 56 and thereafter, safety and clinical responses were assessed. The treatment was well-tolerated with no treatment-related morbidity, life-threatening complications, or side effects. The patient has been followed up until March 2019, i.e., for more than 3 years, and the efficacy remains CR (Fig. 4b). This evidence supports the application of si-TCR GMCs in clinical treatment.
In general, patients with sarcoma who do not receive any other treatment relapse soon after chemotherapy. We speculate that the reason our patient maintained long-term CR might be the si-TCR immunotherapy. Hence, we measured both the MAGE-A4 tetramer and the transgene copy number in the patient's blood to investigate the long-term persistence of the transduced T cells. Over more than 600 days of observation, tetramer + CD8 + T cells were detected, and TCR transgene copies were also observed until Day 233 (Fig. 5a, b). Both indicators peaked between Day 20 and Day 30, suggesting an effect of the MAGE-A4 peptide vaccination. These results indicate the long-term persistence of si-TCR T cells in the patient.
Discussion
In this study, we manufactured human lymphocytes that simultaneously express a codon-optimized MAGE-A4 TCR and siRNAs to silence endogenous TCR. The modified T cells showed high surface expression of the introduced tumor-specific TCR and enhanced antigen-specific cytotoxicity against target cells. NOD-SCID mice inoculated with human tumor cell lines expressing both MAGE-A4 and HLA-A*2402 exhibited decreased tumor growth after si-TCR T cell treatment. Clinical evidence also showed that the patient tolerated si-TCR T cell therapy well and maintained a stable status.
TCR-T cell therapy using a retrovirus as the vector was first implemented in melanoma more than 10 years ago 10 . TCR-T therapy has since shown good clinical effects in the treatment of solid tumors such as multiple myeloma, synovial sarcoma, and melanoma 6,8,30,31 . On February 9, 2016, the FDA granted breakthrough therapy designation to Adaptimmune's affinity-enhanced T-cell therapy targeting NY-ESO-1 in synovial sarcoma. However, one problem associated with the introduction of exogenous TCR is the mismatch with endogenous TCR, which can lead to T cell recognition of normal tissues expressing unknown antigens and cause tissue damage 11,32-34 . In our study, the introduction of small interfering RNA against endogenous TCR allowed T cells to express a TCR for a specific epitope while interfering with the expression of the inherent TCR. This design avoids the mismatch of exogenous and endogenous TCR and enhances T cell recognition and killing activity.
High expression of MAGE-A4 has been reported in several solid tumors such as ovarian cancer, melanoma, non-small cell lung cancer, and esophageal squamous cell carcinoma 17,35 . In this study, we used MAGE-A4 as the target antigen and developed a vector carrying both the MAGE-A4 tumor antigen TCR gene and the small interfering RNA construct (MAGE-A4 si-TCR). We manufactured large numbers of gene-modified T cells and assessed their anti-tumor activity in vitro and in vivo. Compared to non-transduced cells, genetically modified T cells showed no significant differences in cell morphology or growth rate, indicating that the transduction did not affect the growth characteristics of the cells. Simultaneously, the surface markers of the cells were detected by flow cytometry. Both genetically modified and unmodified cells showed phenotypes of T lymphocytes, mainly CD8 + cytotoxic T cells. Tetramer assay showed that cell surface MAGE-A4-specific TCR expression was much higher in si-TCR cells than in control NGMCs, and the cytotoxicity assay indicated specific killing of MAGE-A4 + , HLA-A*2402 + cell lines. Moreover, functional analysis showed that the genetically modified cells produced more IFN-γ following stimulation with the specific MAGE-A4 peptide. For the in vivo experiments, we observed the general state and responses of NOD-SCID mice infused with si-TCR T cells; no adverse reactions or side effects were detected. After infusion, the growth of KE4 tumors (MAGE-A4 + , HLA-A*2402 + ) was specifically inhibited, while the size of QG56 tumors (MAGE-A4 + , HLA-A*2402 − ) was not changed, indicating that the modified T cells specifically killed HLA-matched KE4 tumors.
Interestingly, we found clinical evidence of a uterine leiomyosarcoma patient who showed a long-lasting CR after chemotherapy followed by TCR-T cell treatment. Although the patient had a CR following the last round of chemotherapy, it is very rare to maintain CR for more than 3 years without any other treatment. Our tetramer and copy number results indicated the persistence of si-TCR cells in the patient's blood. All these findings lead us to believe that si-TCR immunotherapy is the main reason the patient maintained long-term CR.
In summary, we successfully manufactured MAGE-A4 si-TCR gene-modified T cells, and both in vitro and in vivo tests indicated their specific activity toward MAGE-A4- and HLA-A*2402-positive tumor cells. Clinical evidence from one patient also suggested that si-TCR T cell therapy may contribute to maintaining a CR status. Our data suggest that adoptive cell therapy with human lymphocytes engineered to express MAGE-A4 si-TCR is a promising strategy to treat patients with MAGE-A4-expressing tumors. Further studies are being conducted at our institution to validate the clinical application of TCR-T therapy.
Review: Use of Electrophysiological Techniques to Study Visual Functions of Aquatic Organisms
The light environments of natural water sources have specific characteristics. For the majority of aquatic organisms, vision is crucial for predation, hiding from predators, communicating information, and reproduction. Electroretinography (ERG) is a diagnostic method used for assessing visual function. An electroretinogram records the comprehensive potential response of retinal cells under light stimuli and divides it into several components. Unique wave components are derived from different retinal cells, thus retinal function can be determined by analyzing these components. This review provides an overview of the milestones of ERG technology, describing how ERG is used to study visual sensitivity (e.g., spectral sensitivity, luminous sensitivity, and temporal resolution) of fish, crustaceans, mollusks, and other aquatic organisms (seals, sea lions, sea turtles, horseshoe crabs, and jellyfish). In addition, it describes the correlations between visual sensitivity and habitat, the variation of visual sensitivity as a function of individual growth, and the diel cycle changes of visual sensitivity. Efforts to identify the visual sensitivity of different aquatic organisms are vital to understanding the environmental plasticity of biological evolution and for directing aquaculture, marine fishery, and ecosystem management.
In response to light stimulation, the ERG first shows a small negative wave (also referred to as the a-wave). If the intensity of the light stimulus is large enough, an early receptor potential (ERP), composed of two oppositely polarized waves, will appear before the a-wave. This is followed by a positive b-wave and a set of rhythmic wavelets with higher frequencies and lower amplitudes superimposed on the rising branch of the b-wave (also referred to as oscillatory potentials). A positive wave with a slower rise, called the c-wave, also occurs. As the light stimulus ends, a positive upward deflection, the d-wave, can be detected. All these waves are derived from different components (i.e., cells) of the retina (Figure 1A). The a-wave is mainly derived from photoreceptor cells; light absorption by these cells triggers conformational changes of photosensitive pigments, thereby triggering the G-protein cascade and closure of the cGMP-gated cation channel, which reduces the influx of Na + and Ca 2+ and leads to hyperpolarization of the membrane (Graham and Klistorner, 1998). The b-wave is primarily associated with the activity of depolarizing bipolar cells (Pardue and Peachey, 2014). The c-wave consists of the superposition of two oppositely polarized potentials: one is the positive potential resulting from hyperpolarization of the apical membrane of the retinal pigment epithelium, and the other is the negative potential resulting from Müller glial cells (Tarchick et al., 2016). The d-wave is an off-response associated with the activity of photoreceptors and Müller cells (Ueno et al., 2006). The ERP originates from the outer segment of the photoreceptor and results from intramolecular electron transfer, which is triggered by changes in the configuration of visual pigment molecules during light stimulation (Fei and Fang, 1989). Oscillatory potentials are related to the interactions among bipolar cells, amacrine cells, and ganglion cells in the inner retina. The amplitudes of the a-d waves vary with the duration and intensity of the light stimulus and the adaptive state of the retina. However, the b-wave is a positive-phase response that is most sensitive to variations of external factors, and thus it can indirectly reflect the activity of photoreceptors. Therefore, the amplitude of the b-wave is often treated as the response amplitude to a given light stimulus (Brown, 1969; Rager, 1979; Gacic et al., 2014; Hayasaka et al., 2021).
MILESTONES
The use of ERG dates back more than 100 years. Holmgren (1865) was the first researcher to use a galvanometer to connect electrodes to the front and rear of the eyeball. He was able to measure a rapid current change when an eyeball isolated from a frog was subjected to light stimuli, with the current appearing positive on the corneal side. Through subsequent experiments, Holmgren (1870) found that after an eyeball isolated from an animal was transversely cut, the potential difference was also measurable from the vitreous side to the posterior sclera, and a change in potential also occurred during light stimuli. This result indicated that the retina was the source of the electrostatic potential of the eye. Kühne and Steiner (1880) detected a change in potential in the retina isolated from the eyeballs of animals, but not in the eyeball without the retina, supporting the premise that the retina was the source of the signal.
The components and origins of ERG signals have been studied extensively. Building on these earlier observations, Gotch (1903) reported that a more rapid negative potential occurred before the positive potential. Ishihara (1906) described a slowly rising positive wave following the appearance of the positive potential on the ERG. Einthoven and Jolly (1908) referred to the first negative wave as the a-wave, the positive wave that occurred subsequently as the b-wave, and the wave that occurred after the cessation of light stimuli as the d-wave. They further indicated that these waves might originate from the superposition of two components. Referring to all components of the ERG profile, however, Granit (1933) proposed a three-component analysis. He argued that the ERG signal originated from the superimposition of three components (PI, PII, and PIII) and inferred that PI corresponded to the c-wave, PII to the b-wave, and PIII was associated with both the a- and b-waves. Adrian (1946) reported double b-waves and referred to the first as the photopic b-wave and the latter as the scotopic b-wave. He showed that the photopic b-wave was associated with cone cells and the scotopic b-wave with rod cells. Riggs and Johnson (1949) found that some small-amplitude waves were superimposed on the b-wave as measured by ERG. Heck and Rendahl (1957) identified four peaks of these small-amplitude waves, and Yonemura et al. (1962) referred to them as oscillatory potentials. Dawson et al. (1968) reported that the shape and number of oscillatory potentials depended on the experimental conditions. Brown and Murakami (1964) described a wave with almost no latent period under strong light, which they referred to as the ERP.
The progress of microelectrode intracellular recording techniques promoted further studies of the origin of ERG components. Murakami and Kaneko (1967) recorded two PIII components using a step-by-step recording technique and named them distal PIII and proximal PIII, respectively. They also proposed that a-waves originated from the synapses of the photoreceptor cells. Based on microelectrode recordings in cells, Miller and Dowling (1970) found that the light-induced action potential of Müller cells mostly coincided with b-waves. Subsequent studies revealed that the potential changes of Müller cells were triggered by an increase in the concentration of extracellular potassium ions, due to the depolarization of retinal neurons, instead of being directly induced by light, and that the generation of b-waves reflected the light-induced depolarization of ON bipolar cells (Stockton and Slaughter, 1989;Gurevich and Slaughter, 1993;Hood and Birch, 1996). In addition, the activation of Nav channels on cone bipolar cells and rod bipolar cells affects b-waves measured using ERG (Mojumder et al., 2008;Smith et al., 2013).
The progress of ERG research is closely related to the development of instruments. The ERG profiles of aquatic animals are usually recorded with a metal wire placed on the corneal surface or a glass electrode penetrating the eye. Cotton core electrodes (Northmore and Muntz, 1970; Matsuda and Wilder, 2010) and metal electrodes (Kopperud and Grace, 2017; Warrington et al., 2017; Kingston et al., 2019) can also be used. After the 1960s, glass microelectrodes began to be used in numerous applications (Karita et al., 1973; Clark, 1975; Mccormick et al., 2019; Venkatraman et al., 2020; Hayasaka et al., 2021). Metal electrodes are generally made of platinum or tungsten rods, silver wires, or silver-silver chloride (Hanyu and Ali, 1964; Yang et al., 1990), whereas glass microelectrodes are generally filled with ionic solutions (e.g., KCl solution) and then connected to an external metal wire. Other kinds of electrodes, such as stainless steel electrodes (Fuentespardo et al., 1997) and polyethylene pipe steel electrodes (Weber, 1982), are less commonly used. To date, tungsten halogen lamps (Weber, 1982), xenon lamps (Karita et al., 1973; Clark, 1975; King-Smith and Cronin, 1996), and light-emitting diodes (LEDs) (Vetter et al., 2019; Hasenei et al., 2020) are the main light sources used in ERG studies, and the light intensity and wavelength range are controlled by neutral density filters and interference filters.

FIGURE 1 | (A) Schematic representation of a flash electroretinogram (ERG) and the retinal events contributing to it (Cameron et al., 2008). (B) Primate photopic ERG responses to 200 ms stimuli (Bush and Sieving, 1996).
APPLICATION OF ELECTRORETINOGRAPHY IN STUDIES OF VISION IN AQUATIC ORGANISMS
For the majority of aquatic organisms, vision is crucial for predation, hiding from predators, communicating information, and reproduction (Johnsen, 2005; Mccormick and Allan, 2016; Butler et al., 2019). Even slight changes of light intensity and spectral composition may impact the feeding, survival, and growth of aquatic organisms (Villamizar et al., 2011). Efforts to measure the visual sensitivity of aquatic organisms help us to assess the effects of light conditions on the growth and development of cultured species. The reliance of aquatic animals on color vision and visual sensitivity must be considered when constructing fish facilities and improving fish yield efficiency (Browman, 2005). Therefore, electrophysiological data can be used to describe the visual sensitivity of aquatic organisms (Figure 2), which in turn provides direct data relevant to aquaculture, marine fishery, and ecosystem management (Hasenei et al., 2020).
Visual sensitivity includes spectral sensitivity, luminous sensitivity, and temporal resolution, as well as contrast and polarization sensitivities. The b-wave amplitude of an electroretinogram indicates the response of the retina to a variety of light stimuli, and luminous sensitivity can be expressed by two indicators: K_50 (the irradiance required to generate 50% of the peak amplitude, V_max) and the dynamic range (the difference in irradiance required to generate 5% and 95% of V_max). Spectral sensitivity can be expressed as the reciprocal of the irradiance required to generate a standard criterion response. Temporal resolution is a measure of the integration time of the eye, which reflects the ability of an organism to track moving objects. The highest frequency at which the signal produced by the eye remains in phase with flicker stimuli of a set irradiance is defined as the critical flicker-fusion frequency (CFF) (Hayasaka et al., 2021). Studies of visual sensitivity often involve separating cone signals from rod signals, and rods and cones can be distinguished according to their characteristics: rods are sensitive to dim light and to flicker at low frequencies, whereas cones are sensitive to bright light and to flicker at high frequencies.
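As a concrete illustration of these indicators, the sketch below fits an irradiance-response curve and extracts K_50 and the dynamic range. It is a minimal example only: the hyperbolic (Naka-Rushton) response function and all data values are illustrative assumptions, not taken from any of the studies cited here.

```python
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(I, v_max, k50, n):
    """Hyperbolic irradiance-response function: V = v_max * I^n / (I^n + k50^n)."""
    return v_max * I**n / (I**n + k50**n)

# Illustrative b-wave amplitudes (uV) at log-spaced irradiances (quanta cm^-2 s^-1)
irradiance = np.logspace(8, 13, 11)
amplitude = np.array([2, 5, 12, 28, 55, 90, 120, 140, 150, 154, 156], dtype=float)

# Fit the curve; p0 gives rough starting guesses for v_max, K_50, and the slope n
(v_max, k50, n), _ = curve_fit(naka_rushton, irradiance, amplitude,
                               p0=(amplitude.max(), np.median(irradiance), 1.0))

def criterion_irradiance(frac):
    # Invert the Naka-Rushton function for a criterion response frac * v_max
    return k50 * (frac / (1.0 - frac))**(1.0 / n)

# Dynamic range: log-irradiance span between 5% and 95% of V_max
dyn_range = np.log10(criterion_irradiance(0.95)) - np.log10(criterion_irradiance(0.05))
print(f"K_50 = {k50:.3g} quanta cm^-2 s^-1, dynamic range = {dyn_range:.2f} log units")
```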
During dark adaptation, the sensitivity of rods to light is 1,000 times greater than that of cones. Rods can be bleached by bright light, so the use of dark-adapted blue-light stimuli can separate rod signals from cone signals. In addition, adaptation to light of different spectral composition diminishes the function of particular cones, which allows differentiation of different cone types. As an alternative method, the two types of photoreceptor signals can be separated by temporal resolution, since high-frequency flicker stimuli can accurately isolate cone signals (Fei and Fang, 1989; Cameron et al., 2008).
FIGURE 2 | Part of the device diagram for electroretinogram testing of aquatic animals discussed in this review (Coates et al., 2006; Mccomb et al., 2013; Kingston et al., 2017, 2019; Liu et al., 2021).
Types of Photoreceptors
Photoreceptors are the retinal cells that convert light (photon) signals into electrical signals through phototransduction. Vertebrate photoreceptors are classified into cone and rod cells. The outer segments of rod cells are rod-shaped and contain rhodopsin, which is highly sensitive to light and mainly responsible for scotopic (dim-light) vision. The outer segments of cone cells are conical and are responsible for photopic vision and color vision, and different cone types contain opsins that are sensitive to different wavelengths.
The zebrafish, Danio rerio, has four types of cone cells: ultraviolet-, short-, medium-, and long-wavelength-sensitive. When the CFF of D. rerio was measured by ERG, no signal was detectable at 3 days post-fertilization (dpf); signals appeared at 4-12 dpf, but the amplitude was lower than that of adults and there was no response under low light intensity. The CFF-light intensity function curve at 15-24 dpf was similar to that of adult zebrafish, which was consistent with morphological data. Thus, these results suggested that photoreceptors began to develop at 2 dpf and that complete rod cells first appeared at 12 dpf (Branchek, 1984; Branchek and Bremiller, 1984). Variations of spectral sensitivity were observed in D. rerio under dark adaptation during development: the spectral sensitivities at 6-8 and 13-15 dpf were due to cone cells, with very little contribution from rod cells, whereas measurements at 21-24 and 27-29 dpf showed that both rod and cone cells were functioning. The spectral sensitivity of D. rerio adults was mostly due to rod cells and ultraviolet-type cone cells.
Using the b-wave amplitude of the electroretinogram as an indicator of the response to light stimuli, Lu et al. (2021) analyzed the spectral sensitivity of D. rerio adults under dark adaptation. They identified spectral sensitivity peaks at 500 and 365 nm. After cone and rod cells were separated under photopic and scotopic conditions, the b-wave amplitude of D. rerio increased with age under dark adaptation with the same light stimuli. However, under light adaptation, no significant change in the b-wave amplitude was detected in any age group (Nadolski et al., 2020). Brockerhoff et al. (2003) discovered a cone mutant of D. rerio called no optokinetic response f, which had impaired cone function and normal rods. Electroretinography results confirmed the lack of an ERG response when the mutant was under light adaptation. Under dark adaptation, the b-wave amplitude was consistent with that of wild-type fish. Compared with wild-type D. rerio juveniles at 6 dpf, the mutant had smaller visual-motor and optokinetic responses under light adaptation. Subsequent to dark adaptation, the two responses of mutant and wild-type fish were consistent, which suggested that rod cells were involved in the visual behavior response, even though they are immature in the early developmental stage (Venkatraman et al., 2020).
Variations of Visual Sensitivities
In different aquatic zones, the spectral composition (color) and intensity (brightness) of light usually vary with depth due to refraction, scattering, and absorption. In clear coastal waters, green light has a strong penetrating ability, while blue light penetrates deepest in the open water column. Dissolved organic matter and suspended particles in estuaries and freshwater areas scatter and absorb light at shorter wavelengths, and yellow and red light is also absorbed quickly, which is why coastal light typically appears greenish. Given differences in the habitats and behaviors of aquatic organisms, visual sensitivity also varies among different types of animals.
Generally, nearshore benthic fishes are more sensitive to light intensity than pelagic fishes, but less sensitive than deep-sea species. For instance, the benthic-dwelling flounder, Paralichthys dentatus, has greater luminous sensitivity than pelagic fishes such as Morone saxatilis, Pomatomus saltatrix, and Rachycentron canadum (Horodysky et al., 2010, 2013). The spectral sensitivity and dynamic range of pelagic fishes are lower than those of nearshore fishes. Fish living at different depths in the same body of water may also have different spectral sensitivities, which are closely associated with the spectral composition of the marine environment. Several groups (Matsumoto et al., 2009; Horodysky et al., 2013; Mccomb et al., 2013) reported that the range of spectral sensitivities of Chaetodipterus faber, Tautoga onitis, and Centropristis striata living in temperate reefs was 400-600 nm. Chaetodipterus faber, which inhabits shallow water, tends to be more sensitive to longer-wavelength (green) light, whereas T. onitis and C. striata, which live in deeper water, tend to be more sensitive to shorter-wavelength (blue) light.
Visual sensitivity is also associated with lifestyle. In the same habitat, species that are active at night generally have lower temporal resolution, while species that are active during the day have higher temporal resolution (Matsumoto et al., 2009; Mccomb et al., 2010). In low-visibility environments, organisms may exhibit prolonged retinal integration times, meaning that temporal resolution is reduced so that light can be captured to the greatest extent possible (Warrant, 1999; Kalinoski et al., 2014; Ryan et al., 2017). Matsumoto et al. (2009) fitted an exponential function to flicker-stimulation frequency versus ERG amplitude, treated the slope of the exponential function as the temporal resolution, and found that the dark-adapted temporal resolution of the tuna, Thunnus orientalis, was significantly lower than that of the mackerel, Scomber japonicus. The tuna tends to be more active during the day and is relatively insensitive to light intensity, with the irradiance required to generate a half response of the maximum ERG amplitude being 1.38 quanta cm⁻² s⁻¹. The perch, Siniperca chuatsi, which habitually preys at night, has a much lower luminous sensitivity under light adaptation than under dark adaptation; the irradiance at which the ERG signal appears under light adaptation is 1,000 times greater than that under dark adaptation (Liang et al., 1994).
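To make the fitting procedure attributed to Matsumoto et al. (2009) concrete, the sketch below regresses ERG amplitude against flicker frequency with an exponential decay and reads off the slope as a temporal-resolution index. The functional form and all numbers are illustrative assumptions rather than the published data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative ERG amplitudes (uV) at increasing flicker frequencies (Hz)
freq = np.array([2, 5, 10, 15, 20, 30, 40, 50], dtype=float)
amp = np.array([120, 95, 60, 40, 26, 11, 5, 2], dtype=float)

def exp_decay(f, a, b):
    """Exponential decay: amplitude = a * exp(-b * frequency)."""
    return a * np.exp(-b * f)

# The decay slope b (per Hz) serves as the temporal-resolution index:
# a shallower slope means amplitude is retained at high frequencies,
# i.e., higher temporal resolution
(a, b), _ = curve_fit(exp_decay, freq, amp, p0=(amp[0], 0.1))
print(f"decay slope b = {b:.3f} per Hz (shallower slope -> higher temporal resolution)")
```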
Aquatic organisms also adapt to the environment by optimizing their visual systems. Mccomb et al. (2010) found that three species of coastal sharks (Sphyrna tiburo, Sphyrna lewini, and Carcharhinus acronotus) had scotopic spectral sensitivity peaks that fit the spectral range of their environment during the high-predation period at dusk. The temporal resolution of T. onitis and the diel-cycle invariance of its luminous and spectral sensitivities are consistent with this species' dormant lifestyle at night. Lampreys are usually active at night, and their temporal resolution and contrast sensitivity correspond to their lifestyles and habitats. For example, the temporal resolutions of Mordacia mordax and M. praecox are lower than 50 Hz. Moreover, the temporal resolution of parasitic species at all temperatures and light intensities is higher than that of non-parasitic species, and their response to flickering square-wave white light stimuli at a frequency of 5 Hz decreases with decreasing contrast levels. As temperature increases, the contrast sensitivity of lampreys also increases, which suggests that ambient temperature restricts the visual function of poikilotherms (Warrington et al., 2017).
Most fish species exhibit a diel rhythm in their visual sensitivity (Bassi and Powers, 1987; Li and Dowling, 1998; Kunz, 2006). The b-wave and d-wave amplitudes of D. rerio juveniles (5 dpf) are normal during the day but largely disappear at night. Even under total darkness, the amplitude is high during subjective days and low during subjective nights, which can be attributed to the fact that the activity of the photoreceptor outer segment decreases and the synaptic ribbons in the cone pedicles disassemble at night (Emran et al., 2010). The ERG amplitude of the tarpon, Megalops atlanticus, also shows a diel rhythm, with the highest visual sensitivity at midnight and the lowest at noon. Under continuous darkness, the visual sensitivity during subjective nights is significantly higher than during subjective days, which suggests that it may be directly regulated by an endogenous biological clock (Kopperud and Grace, 2017).
The visual sensitivity of fish also varies among developmental stages; sensitivity in the juvenile stage is generally lower than in the adult stage. The peak wavelengths of dark-adapted spectral sensitivity in juvenile T. orientalis are 474-494 nm, and luminous sensitivity tends to increase with growth stage (Matsumoto et al., 2011). Sexually mature three-spined sticklebacks (Gasterosteus aculeatus) show higher sensitivity to the red part of the spectrum (Shao et al., 2014). The male cichlid, Astatotilapia burtoni, exhibits a change in body color during courtship, and compared with non-sexually mature females, sexually mature female A. burtoni are more sensitive to wavelengths similar to those of the male's courtship coloration. Subsequent to hormonally stimulated ovulation, females also have higher visual sensitivity compared with pre-injection measurements (Butler et al., 2019).
Effects of Environmental Factors on Visual Sensitivity
Variation in environmental factors also impacts vision. Hypoxia reduces visual sensitivity: Stenslokken et al. (2008) reported that severe hypoxia irreversibly impaired the vision of the shark, Hemiscyllium ocellatum. Acute hypoxia affects the rods and cones of the goldfish, Carassius auratus, to varying extents; b-waves under light adaptation are affected faster than b-waves under dark adaptation, which suggests that the cone signaling pathway is more sensitive to hypoxia than the rod signaling pathway (Wei et al., 1995). Temperature can also affect retinal sensitivity. For example, rapid cooling may damage the retina of the catshark, Scyliorhinus canicula, because ERG waves do not reappear after timely rewarming. The b-wave amplitudes of the carp, Carassius gibelio, increased significantly after rewarming but did not fully recover to the original value. In contrast, the visual sensitivity of the eel, Anguilla, did not differ significantly as a function of temperature, and subsequent to rewarming the b-wave amplitude exceeded the initial level (Gacic et al., 2015). The composition of feed also affects visual sensitivity. Compared with groups of bass (Dicentrarchus labrax) fed 0% or 1.5% taurine, the peak of spectral sensitivity of D. labrax fed 5% taurine gradually shifted to longer wavelengths, and the light intensity required to reach 75% of the peak ERG amplitude differed significantly among fish fed different amounts of taurine (Brill et al., 2019).
In aquaculture and fisheries, the effects of anomalous light on the visual sensitivities of aquatic organisms need to be considered. Strong light may affect the visual sensitivity of fish, and different photoperiods and wavelengths during rearing may also affect fish growth. ERG can be used to assess trends in the visual sensitivity of fish as they adapt to different background light conditions. Dixon et al. (2004) found that, compared with natural light, continuous acclimation under blue, green, or orange light reduced the sensitivity of D. rerio juveniles (6-10 dpf) to ultraviolet stimuli. Continued rearing in darkness also reduced the visual sensitivity of D. rerio and affected early visual development (Saszik and Bilotta, 1999); however, these negative effects of dark rearing were reversed after D. rerio were returned to normal light. The retina is susceptible to the light environment during development, but its structure and function show a certain plasticity. The peak of spectral sensitivity of M. atlanticus adapted to long-wavelength red light for 2-4 months was significantly different from that of fish adapted to short-wavelength blue light; subsequent to dark adaptation, the irradiance required for light stimuli to elicit a response was less than that under light adaptation (Schweikert and Grace, 2018). The frequency and intensity of light stimuli also affect fish behavior. After strobe light stimuli, the visual sensitivity of the carps Hypophthalmichthys molitrix and H. nobilis was reduced (Vetter et al., 2019). After exposure to strong light (approximately 2,000 µmol·m⁻²·s⁻¹) for 15 min, the visual sensitivity of the halibut, Hippoglossus stenolepis, decreased (Brill et al., 2008), and the decrease may have been irreversible (Magel et al., 2017).
Types of Photoreceptors
Photoreceptors are evolutionarily classified into ciliary and rhabdomeric types according to the structural characteristics of their membranes. There are various types of invertebrate photoreceptor systems; primitive types include the ocellus of protozoa, and more developed types include the camera eyes of cephalopods. Despite structural differences, these cells consist of three parts: a photosensory part, a cell body, and an axon (Huang, 1988). Crustaceans possess both simple (median) eyes and compound eyes. Compound eyes are composed of ommatidia, whose structure from outside to inside consists of a refractive system, a photosensitive system, and pigment cells. In the photosensitive system, the protruding microvilli of the retinula cells form the rhabdom. The structure of crustacean eyes is also associated with light intensity: light and dark adaptation trigger changes in the ultrastructure of the eyes, most obviously the migration of pigments and changes in the position, size, and shape of the rhabdom. After light adaptation, pigment particles are distributed across the photoreceptor cells, the diameter of the rhabdom shrinks, and the microvilli arrangement is disordered; after dark adaptation, the diameter of the rhabdom enlarges, the microvilli arrangement becomes regular, and pigment particles are distributed only at the distal and proximal ends of the cells (Meyer-Rochow, 1999, 2001; Matsuda and Wilder, 2014).
Factors Impacting Visual Sensitivity
The visual sensitivity of crustaceans is closely associated with habitat. In addition to the absorbance characteristics of rhodopsin, crustacean spectral sensitivity is modified by the attenuation of light by the dioptric apparatus and by the length and absorption coefficient of the rhabdom (Johnson et al., 2002). Unlike direct measurement of the absorbance of the visual pigment, the total response of the retinal photosensitive layer to light stimulation, as measured by ERG, accounts for the light filtering of other parts of the eye (Bryceson, 1986). Nearshore species are generally more sensitive to light at long wavelengths than oceanic species, and the spectral sensitivity peaks of shallow-water species differ from those of deep-sea species, coinciding with the spectral composition of the habitat. The shrimp species Neomysis integer, Praunus flexuosus, and P. inermis are nearshore species, and their peak wavelengths of spectral sensitivity are 515, 530, and 535 nm, respectively; Mysis mixta and M. relicta sp. II are pelagic species, and their peak values are 510 and 520 nm, respectively (Lindstrom, 2000). Three shallow-water decapods (Crangon allmani, Pandalus montagui, and Nephrops norvegicus) are most sensitive to light in the range of 510 to 525 nm, whereas the spectral sensitivity peaks of two deep-sea decapods [Paromola cuvieri and Chaceon (Geryon) affinis] are below 500 nm (Johnson et al., 2002). The spectral sensitivity peak of deep-sea creatures is generally in the blue region of the visible spectrum. Frank et al. (2012) used ERG to identify the spectral sensitivity of eight benthic crustaceans and found that the spectral sensitivity peaks ranged from 470 to 497 nm. The squat lobsters, Eumunida picta and Gastroptychus spinifer, are also sensitive to light at ultraviolet wavelengths, which may be associated with deep-sea bioluminescence.
Differences in spectral sensitivity are also closely associated with phylogeny. Under the same spectral distribution, Audzijonyte et al. (2005) found that the peak wavelength of spectral sensitivity of the lake species Mysis relicta (564 nm) was higher than that of the bay species Mysis salemaai (545 nm). The spectral sensitivity of M. relicta collected from four geographically isolated areas also differed, and the difference was related not to the spectral distribution but to the geographic location (saltwater/freshwater). Donner et al. (2016) examined the visual sensitivities of three species of mysids (M. relicta, M. salemaai, and Mysis segerstralei) and reported that the peak spectral sensitivity of lake populations was greater than that of brackish-water populations, and that this difference was minimally affected by the light environment. As the light intensity in a habitat weakens with increasing depth, the luminous sensitivity of crustaceans increases and the temporal resolution tends to decrease, but there are exceptions. Some species distributed in deeper zones have much higher temporal resolution; for example, the CFFs of the krill Nematobrachion flexipes and N. sexspinosus, distributed at 400-600 m, were 44 and 56 Hz, respectively, which may be due to the bioluminescence of most of their live prey (Frank, 1999, 2000; Johnson et al., 2000). The temporal resolution of crabs is generally lower than that of shrimp. Among the benthic crustaceans collected in their study, Frank et al. (2012) found that the CFFs of two shrimp (Heterocarpus ensifer and Eugonatonotus crassus) were within the range of 16 to 24 Hz, whereas the CFFs of four crab species (G. spinifer, E. picta, Munidopsis erinacea, and Bathynectes longipes) were slightly lower (10-14 Hz), presumably due to the less mobile lifestyle of crabs.
As in most fish species, strong light stimuli reduce the saturated ERG amplitude of crustaceans, an effect associated with the intensity and duration of the stimuli. However, an animal's visual sensitivity can recover to a certain extent. A gradual increase in adaptation to strong light over a long period relieves the damage caused by strong light stimuli, and this process depends on the rate at which light intensity increases (Viljanen et al., 2017). For two populations of M. relicta, the peak ERG amplitude decreased after 2-3 weeks of strong light exposure, and sensitivity to light intensity recovered slowly under dark adaptation. Based on these results, the researchers suggested that recovery of retinal sensitivity in marine populations is limited by the regeneration rate of 11-cis retinal, whereas that of lake populations is limited by photoreceptor membrane turnover (Feldman et al., 2020).
Diel Cycle Changes in Visual Sensitivity
The visual sensitivity of crustaceans also follows a circadian rhythm. For example, Cai and Zheng (1990) found that the luminous sensitivity of the crab, Orithyia sinica, followed a circadian rhythm, whereas the spectral sensitivity curves and peak values during the day and at night generally remained the same. The luminous sensitivity of the crab, Portunus trituberculatus, and the prawn, Penaeus japonicus, showed no circadian change, and the spectral sensitivity curves and peak values during the day and at night matched closely. These differing results may be associated with changes in light conditions in the animals' habitats. In another study, ERG analyses were performed on the retina and the lamina ganglionaris of the eyestalk of the crayfish, Procambarus clarkii. The luminous sensitivity of the two tissues during the day was lower than that at night, with a cycle of approximately 22-23 h. When day was treated as a dark condition and night as a light condition, the ERG waves at night remained higher than those during the day (Aréchiga and Rodríguez-Sosa, 1998). Pigment dispersing hormone (PDH), which functions as a non-photic synchronizer, can synchronize the circadian rhythm of P. clarkii. Studies showed that injection of PDH into an isolated eyestalk could advance or delay the circadian response of its visual photoreceptors depending on the time of treatment (Verde et al., 2007; Solis-Chagoyan et al., 2012a). Hormones can depolarize the membrane potential of the photoreceptors, and within a diel cycle the light sensitivity varies from photoreceptor to photoreceptor. When an eyestalk isolated from P. clarkii was injected with PDH during the subjective day, ERG activation was slow, with a longer latency (the duration from the light stimulus to the time when the ERG signal reached 10% of the peak amplitude) and half-time of activation (the duration from the time when the ERG signal reached 10% of the peak amplitude to the time when it reached 50%) compared with the results for the subjective night (Barriga-Montoya et al., 2010). Barriga-Montoya et al. (2017) reported that, under light stimuli, the recovery of visual sensitivity was associated with the circadian rhythm, and that recovery in the presence of PDH depended on the phase of the circadian cycle.
The amplitude of the photoreceptors' electrical response to light is also affected by melatonin, whose release generally shows a circadian rhythm. At low concentrations of endogenous melatonin, the ERG amplitude increased when P. clarkii was injected with melatonin, and this induced effect was similar to the ERG response observed at high concentrations of endogenous melatonin (Solis-Chagoyan et al., 2012b). Melatonin synchronizes the cycle and phase of the circadian rhythm; thus the use of exogenous melatonin at different times may have phase-specific effects on the rhythm. The phase advances or delays during the subjective day, but there is no phase change during the subjective night (Solis-Chagoyan et al., 2008).
Types of Photoreceptors
Most photosensitive organs of mollusks are structurally simple, but cephalopods have highly developed camera-type eyes. With a simpler retinal structure than vertebrates, the retina of cephalopods consists of a rhabdom layer, a melanin layer, an optic cell nuclear layer, and a nerve fiber layer. There is only one type of photoreceptor cell (rod cells), and two visual pigments are present in the cephalopod retina: rhodopsin in the outer segment of the photoreceptor and retinochrome in the membranes of the myeloid bodies of the inner segment. Due to the lack of color-differentiating cone cells, cephalopods are generally considered to be color blind. Scallops also have complex photosensitive organs. In their typical mirror eye, the retina is divided into a distal and a proximal retina: the distal retina consists of ciliary photoreceptor cells, which are homologous to the rods and cones of vertebrates, whereas the proximal retina consists of microvillar photoreceptor cells, which are homologous to those of other invertebrates (Speiser et al., 2011; Sun, 2015). The retina of gastropods contains photoreceptor cells bearing both microvilli and cilia. Electrophysiological measurements revealed that the spectral sensitivity of various gastropods has only a single peak, in the wavelength range of 475 to 496 nm (Gillary, 1974; Chernorizov et al., 1992, 1994; Zhukov et al., 2006; Shepeleva, 2013). Gao et al. (2016c) used histological methods to divide the eye tissue of the abalone, Haliotis discus hannai, from outside to inside into a retinal pigment epithelial cell layer, an outer nuclear layer, the photoreceptor inner segments, an inner nuclear layer, a melanin particle deposition layer, and an optic fiber layer. Hamasaki (1968a) anesthetized octopus (Octopus vulgaris and O. briareus) for ERG analysis and found a continuous negative wave in the recorded electroretinogram. The amplitude and light adaptation state were associated with the intensity of the light stimuli; as the stimulus intensity decreased, the negative wave amplitude decreased and the latency of the response increased. Under dark adaptation or with adaptation to different spectral components, the peak of spectral sensitivity was approximately 480 nm, which suggested that there is only one visual pigment in octopus (Hamasaki, 1968b). The peak of spectral sensitivity of the cuttlefish, Sepiella maindroni de Rochebrune, appeared at 490 nm when ERG recordings were performed under adaptation to background light of different intensities and wavelengths, and the peak did not shift and the curve shape stayed the same (Zheng and Cai, 1981). Yamamoto et al. (1985) classified the development of Sepiella japonica from egg laying to hatching into 40 stages. ERG signals were first observed at Stage 34, when the photoreceptor cells began to differentiate into inner and outer segments. From Stages 35 to 36, the density and arrangement of the microvilli became more regular, leading to the formation of the rhabdom and an increase in ERG amplitude; by Stage 39, the ERG amplitude reached the adult level. In another study, the ERG amplitude of Sepia officinalis decreased as body size increased, and its sensitivity to blue light was 100 times greater than its sensitivity to yellow light (Groeger et al., 2006).
Measurement of the Spectral Sensitivities of Mollusks
The squid, Euprymna scolopes, has a bioluminescent organ containing symbiotic photobacteria (Vibrio fischeri), and this organ itself is photosensitive. In an ERG test, the organ generated a positive amplitude signal under light stimuli, opposite in sign to the negative amplitude generated by the eyes under light stimuli. This difference was not associated with the presence or absence of photobacteria; the ERG amplitude of the photogenic tissue was associated only with the size of the bioluminescent organ: the larger the organ, the greater the response amplitude (Tong et al., 2009). Spectral responses of the scallop Amusium japonicum spanned 433-700 nm, with a peak at approximately 470-520 nm, indicating that this organism is well adapted to the light environment of its shallow-sea habitat. Its CFF was 1.3-1.5 Hz, indicating that only slow-moving objects can be detected (Kanmizutaru et al., 2005). The range of spectral sensitivities of the larvae of the oyster, Magallana gigas, is 500 to 650 nm, with a peak at 620 nm (Kim et al., 2021). As a typically nocturnal creature, the abalone shows movement and feeding behaviors that follow an obvious circadian rhythm (Gao et al., 2020). Gao et al. (2016d) found that abalone showed phototaxis toward darkness and long-wavelength red and orange light but avoided short-wavelength blue and green light. Under blue and green light, the growth, survival, and feed conversion ratio of abalone were significantly lower than under red and orange light (Gao et al., 2016b), whereas the larval metamorphosis and survival rates of H. discus hannai were significantly higher under short-wavelength blue and green light than under red and orange light (Gao et al., 2016a, 2017). This suggests that abalone at different stages of growth and development are sensitive to different spectral components to varying extents, and therefore electrophysiological techniques may provide the best way to assess the spectral sensitivity of abalone. Certain gastropods are able to regenerate their eyes. For example, the ERG response of the retina of a regenerated eye of the snail, Achatina fulica, with or without morphological abnormality, was similar to that of a normal eye. In different age groups, the ERG signals of regenerated eyes resembled, but were slightly lower than, those of normal eyes. Under repeated stimuli, the amplitudes of the signals from regenerated eyes decreased, and the extent of the decrease was greater than that of a normal eye. Additionally, during the recovery period, the amplitude of the response decreased with age (Flores et al., 1983; Tartakovskaya et al., 2003).
Application of Electroretinography in Studies of Visual Functions of Other Aquatic Animals
When spectral sensitivity is measured by flash ERG, recording usually takes a long time, during which the amplitude of ERG components may be affected by various factors, such as changes in the eye, the electrode position, or the metabolic state of the retina. An alternative is flicker ERG, in which the intensity of a monochromatic test light is adjusted to generate an ERG amplitude identical to that evoked by a fixed reference light. This alternative has some strengths for spectral sensitivity measurements: any ERG change that occurs during the test is reflected equally in the responses to the test and reference lights, and only one measurement is required for each test wavelength, thereby shortening the test duration (Neitz and Jacobs, 1984; Jacobs et al., 1996).
When the spectral sensitivity of sea turtles was measured by flicker ERG, at least two visual pigments were found in adult Dermochelys coriacea, Caretta caretta, and Chelonia mydas, which were sensitive to longer wavelengths, with peaks at 580 nm. Dermochelys coriacea showed higher sensitivity in the short-wavelength region, with a peak at 500 nm, whereas the ERG results of C. mydas and C. caretta had a secondary peak at approximately 520 nm. When C. caretta and D. coriacea hatchlings were assessed using flicker ERG, both species had peak sensitivities between 520 and 540 nm and a secondary peak at ultraviolet wavelengths, but the sensitivity of D. coriacea to light at wavelengths above 520 nm was significantly lower than that of C. caretta (Levenson et al., 2004; Crognale et al., 2008; Horch et al., 2008).
Color vision is based on the extent to which the visual system can respond differentially to light at different wavelengths, and it requires two or more photoreceptor types. Most marine mammals have no color vision. For example, the seal, Phoca vitulina, has mostly rods and only a few cones, so it is likely color blind. Compared with terrestrial mammals, marine mammals lack cones sensitive to blue light, which may have resulted from prolonged adaptation to the marine environment (Peichl and Moutairou, 1998; Griebel and Peichl, 2003). This coincides with the electrophysiological results for P. vitulina: regardless of the spectral adaptation condition, the flicker-ERG amplitude showed only one peak, at 510 nm (Crognale et al., 1998). Levenson et al. (2006) used flicker ERG at different frequencies and found a peak of spectral sensitivity at 502.5 nm, where rod cells play a dominant role. Early behavioral studies suggested that the sea lion, Zalophus californianus, could identify color, which was supported by comparison of cone and rod signals (Griebel and Schmid, 1992; Griebel and Peichl, 2003). However, Scholtyssek et al. (2015) conducted follow-up behavioral studies and maintained that the previous experiments may have neglected the effects of brightness and contrast, so Z. californianus may identify color based on brightness. Oppermann et al. (2016) conducted behavioral studies showing that Z. californianus could identify different colors, and further inferred that the low light intensity used by Scholtyssek et al. (2015), at which cones cannot function, may explain their conclusion that Z. californianus is color blind.
Electroretinography has also been used to assess the visual sensitivity of jellyfish. The crystalline lens eyes of Tripedalia cystophora resemble in form those of vertebrates and cephalopods. They include lower and upper lens eyes, and both have similar spectral sensitivities, with peaks corresponding to blue-green light at a wavelength of about 500 nm (Coates et al., 2006). The visual system of the horseshoe crab, Limulus polyphemus, exhibits a circadian rhythm. Under a natural light cycle, ERG results showed that the visual sensitivity rhythm of L. polyphemus coincided with the circadian cycle, and even when L. polyphemus was kept in darkness for 1 year, its visual sensitivity still followed a circadian rhythm, with sensitivity during subjective nights higher than during subjective days; the rhythmic phase of the electroretinogram advanced or delayed depending on exposure to short periods of different lighting conditions (Barlow, 1983; Watson et al., 2008). Information about the components of the visual systems of an increasing number of aquatic organisms is becoming available, and ERG can be used to further identify retinal functions, study how aquatic organisms adapt to different habitats, and help determine how to protect endangered species.
PROSPECTS
This review focused on the use of ERG in studies of the visual sensitivities of aquatic organisms. ERG can accurately reflect the response of the retina to light, and data acquisition is quick and precise. The variations in ERG waveforms caused by different light stimuli are extremely useful for characterizing the visual functions of aquatic organisms, including spectral sensitivity, luminous sensitivity, and temporal resolution. Considering the enormous differences in body size, eye size, and form among aquatic organisms, the first step when studying a species is to account properly for the characteristics of its eyes; for example, electrodes that fit the eyes tightly and record data with a high signal-to-noise ratio should be used.
Second, information about the general composition of the visual organs and retinal structures of aquatic organisms is incomplete, so there is an urgent need to describe basic parameters, such as the stages of visual development, retinal structure, and cell types and functions, using anatomy, histology, and scanning or transmission electron microscopy. These parameters are crucial elements of ERG analysis.
Third, the parameters of the ERG light stimuli should be adjustable according to the species being studied. The International Society for Clinical Electrophysiology of Vision has published a set of standards for clinical ERG, but they are derived from human clinical research, and limited data are available for aquatic animals. To effectively assess the function of cone or rod cells, the durations of light and dark adaptation must be considered, the duration of flicker stimuli should be shorter than the integration time of the photoreceptors, and the interval between stimuli must not alter the current state of retinal light adaptation.
Fourth, ERG records the aggregate response of retinal cells to light, and it remains difficult to associate changes in a given ERG pattern with specific changes in retinal light responses. Each waveform of the electroretinogram reflects the activity of different retinal cells, and the wave components may overlap to some extent, undermining attempts to analyze the functions of individual cell types from their wave components. To investigate the functions of specific retinal cells, efforts should be made to block particular neuronal functions pharmacologically, or to use loss-of-function or mutant models to isolate the corresponding wave components. Techniques for accurately decomposing the ERG waveform and associating the derived parameters with the activity of retinal cells are also needed to better describe retinal function.
Finally, based on the retinal structures identified in aquatic organisms, the visual acuity of species dwelling in the same habitat could be inferred from ERG curves and from comparisons of the ERG characteristics of sibling species. This will, in turn, provide immediate guidance for the optimization of light conditions in aquaculture practice and the selection of lamps in the marine fishing industry.
AUTHOR CONTRIBUTIONS
XG and CK conceptualized the study. SL, MZ, and ML conducted research and collected the data. YL, WY, and XL provided the materials and the device. SL and XG wrote the manuscript. CK had primary responsibility for the final content. All authors read and approved the final manuscript.
Assessment Report on Anti-Rabies Dogs Vaccination Performance in Four Sub-Cities of the Addis Ababa City Administration
Rabies is a deadly zoonotic disease with worldwide occurrence, transmitted through bites. Many of the victims are children, who are more likely to play with or approach free-roaming dogs. Despite this, rabies has remained neglected by the relevant veterinary, medical, and public authorities. Addis Ababa is the capital city of Ethiopia. The large number of free-roaming dogs and the apparently loose management of owned dogs contribute to the high endemicity of canine rabies in the capital. An anti-rabies mass vaccination was carried out from November 2007 to January 2010 on a total of 1,070 owned dogs in four major sub-cities, covering Gulele (70%), Addis Ketema (12%), Arada (10%), and Kolfe-Keranio (8%), within the Addis Ababa City Administration, which encompasses 116 Woredas. Of the dogs vaccinated during this period, 700 (65%) were male and 370 (35%) were female. The age structure ranged from 1 month to 19 years: 367 dogs (34.3%) were ≤1 year old, 623 (58.2%) were 1-10 years old, and the remaining 80 (7.5%) were over 10 years old, with an average age of 4 years and 5 months. From the vaccination data it was interesting to observe that the dominant coat colors among the 1,070 immunized dogs were brown (460; 43%) and black (265; 24.8%), while the remainder were white (110; 10.3%), grey (70; 6.3%), and various other colors (165; 15.4%). About 80% of the vaccinated dogs were indigenous, while the remainder were mongrels or hybrids of unidentified breeds, indicating the absence of a breeding policy. The vaccination is being carried out as a regular annual activity to minimize transmission of this fatal disease for the benefit of public health.
INTRODUCTION
Rabies is a deadly zoonotic disease with world-wide occurrence, caused by a virus of the family Rhabdoviridae and the genus Lyssavirus. The antigenic structures of Rabies Virus biotypes are stable in nature and not easily affected by passage in laboratory hosts or cell cultures.
Rabies is responsible for an estimated 31,000 and 24,000 annual human deaths in Asia and Africa, respectively, with the people most at risk of dying from rabies being those who live in rural areas of these continents (JVMAH, 2014). Dog vaccination coverage is far below the 70 percent needed to halt the transmission of canine rabies in Ethiopia. Rabies, despite its social and economic importance, has not been given due consideration by the relevant veterinary, medical, and public authorities. WHO records indicate that 40% of the people bitten by suspected rabid animals are children between the ages of 5 and 14 years, and in up to 99% of cases domestic dogs are responsible for rabies virus transmission. Treating a rabies exposure, where the average cost of rabies post-exposure prophylaxis (PEP) is US$40 in Africa and US$49 in Asia, can be a catastrophic financial burden on affected families whose average daily income is around US$1-2 per person.
Data recorded at the Ethiopian Health and Nutrition Research Institute (EHNRI, 2001) suggest that the occurrence of rabies in other domestic animals and wild fauna could be due to spillover of infections from canine rabies. In Ethiopia, the national annual estimates from official reports indicate 12 exposure cases per 100,000 population and 1.6 rabies deaths per 100,000 population. However, the actual numbers are expected to be higher, as many cases are not reported. According to the Ethiopian Health and Nutrition Research Institute (EHNRI, 2011), of the total of 2,667 brain samples examined during 1999-2000, 1,951 (73.2%) were positive for rabies. Dogs accounted for 96.2% of the total animals examined and represented 89.83% of the laboratory-confirmed positive rabies cases. During the same period, cats accounted for 5.35% of the total confirmed rabies cases and represented 2.62% of the total brain samples. These facts illustrate that rabies is well established in Ethiopia, and in Addis Ababa in particular.
Although rabies has been recognized as a public health issue, people's awareness of what to do if bitten by a dog is low, and most people do not seek medical help when bitten. According to US CDC information (2017), each year thousands of people are infected with rabies in Ethiopia and an estimated 2,700 people die, one of the highest rabies death rates in the world. However, the true number of deaths caused by rabies is unknown because the disease is underreported and rabies diagnostic laboratories are not well established.
To the author's knowledge, the human risk of rabies is directly linked to the high dog population, and, as in many other countries with high human rabies death rates, rabies vaccination coverage among dogs is very low in Ethiopia.
Rabies remains endemic in Addis Ababa, and in Ethiopia as a whole, and it represents a serious veterinary and public health problem. Therefore, this report aims to seize the opportunity to address the importance of the disease and to express the professional view that the concerned stakeholders should take appropriate measures in developing a clear policy and strategy to control rabies in Addis Ababa and in Ethiopia as a whole.
General Objectives
The report attempts to describe the outcome of the vaccination program, the importance of the disease, and its prevalence.
Specific objective
• Assessment and documentation of the outcome of the vaccination program and the potential risk factors of canine rabies, based on evidence. • Sharing of professional views with the major actors and making recommendations for policy intervention.
Description of Program Coverage Sites
The sub-cities lie at an altitude of between 2,300 and 2,500 meters above sea level, with an average temperature ranging between 8. The anti-rabies mass vaccination was carried out in four major sub-cities, covering Gulele (75%), Addis Ketema (12%), Arada (10.5%), and Kolfe-Keranio (8%), within the Addis Ababa City Administration, which encompasses 116 Woredas. The vaccination was carried out from November 2007 to January 2010.
Demographic Characteristics of Vaccinated Dogs
The number of dogs vaccinated during the two-year period (November 2007 to January 2010), as shown in Table 1, totals 1,070, of which 700 (65%) were male and 370 (35%) were female.
The age structure, depicted in Table 1, ranged from 1 month to 19 years: 367 dogs (34.3%) were ≤1 year old, 623 (58.2%) were 1-10 years old, and the remaining 80 (7.5%) were over 10 years old, with an average age of 4 years and 5 months. Among the vaccinated dogs, about 80% were indigenous, while the remainder were mongrels or hybrids of unidentified breeds (Table 1). It is also interesting to observe that the dominant coat colors were brown (460 or 43%), followed by black (265 or 24.8%), white (110 or 10.3%), and grey (70 or 6.5%), with the remaining dogs (165 or 15.4%) having various other colors.
DISCUSSION
Rabies is a preventable viral disease most often transmitted through the bite of a rabid animal. The rabies virus infects the central nervous system (CNS) of mammals, ultimately causing disease in the brain and death.
In Ethiopia, priority has been given to diseases such as malaria, HIV/AIDS, and TB, which attract funding for achieving the Millennium Development Goals (MDGs), and these diseases are rated as top priorities in the current Ethiopian Growth and Transformation Plan (GTP). According to the WHO recommendation, vaccinating 70% of the dog population helps to control rabies and thus prevents the rabies virus from circulating among susceptible animals. Vaccination of domestic dogs is a highly recommended strategy to prevent and control rabies. However, culling, which is an immediate and visible response to public concerns about rabies, is still frequently carried out in response to rabies outbreaks, an indication of inadequate uptake of dog vaccination as a control strategy. Eshetu et al. (2003) reported that the total number of owned dogs in Addis Ababa was estimated at 225,078, which may be an underestimate of the actual population as it only considered four months of the year. That study showed that about 33% of dog owners brought their dogs for vaccination, and 67.6% of owners kept their dogs in well-fenced compounds. In the present program, 1,070 dogs were vaccinated against rabies from November 2007 to January 2010 in the 116 Woredas of the Gulele, Addis Ketema, Kolfe-Keranio, and Arada sub-cities of the Addis Ababa City Administration. As depicted in Table 1, the age stratification ranged from 1 month to 19 years: 367 dogs (34.3%) were ≤1 year old, 623 (58.2%) were 1-10 years old, and the remaining 80 dogs (7.5%) were over 10 years old. About 80% of the vaccinated dogs were indigenous, while about 20% were mongrels or hybrids of unidentified breeds (Table 1). Although rabies is not comparable with the above diseases in overall health and economic burden, it causes economic hardship and carries a high likelihood of fatality for exposed rural inhabitants, calling for national and global attention.
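To put these figures in perspective, a back-of-the-envelope calculation (a rough sketch only, since the campaign covered just four sub-cities while the dog-population estimate is city-wide) contrasts the reported vaccination numbers with the WHO 70% coverage threshold:

```python
owned_dogs = 225_078   # city-wide estimate of owned dogs (Eshetu et al., 2003)
vaccinated = 1_070     # dogs immunized in this campaign, Nov 2007 - Jan 2010
who_target = 0.70      # WHO coverage threshold for interrupting canine rabies

coverage = vaccinated / owned_dogs
shortfall = int(who_target * owned_dogs) - vaccinated

print(f"achieved coverage: {coverage:.2%}")              # roughly 0.48%
print(f"additional dogs needed for 70%: {shortfall:,}")  # roughly 156,000
```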
Considering the goal set by the WHO to eliminate dog-mediated human rabies by 2030, and the Millennium Development Goals of combating the disease burden and reducing extreme poverty, incorporating rabies control into the Ethiopian Growth and Transformation Plan (GTP) would be crucial.
CONCLUSIONS AND RECOMMENDATIONS
The present report revealed that dog owners are willing to bring their dogs for vaccination if conditions are conducive. Based on the findings of the report, the human risk of rabies is directly linked to the high dog population, which is reflected in the high human rabies death rate.
The age structure ranged from 1 month to 19 years: 34.3% of dogs were less than 1 year old, 58.2% were between 1 and 10 years old, and the remaining 7.5% were over 10 years old, with an average age of 4 years and 5 months.
Among the vaccinated dogs, about 80% were indigenous, while the remainder were mongrels or hybrids of unidentified breeds. According to the World Health Organization (WHO), vaccinating 70% of the dog population in any given area helps to control rabies and prevent the virus from reaching the human population.
Based on the above conclusions, the following recommendations are put forward:
• Addis Ababa being the capital of Ethiopia and of Africa, there is an urgent need for coordinated rabies control activities through a sustainable standing committee or other relevant body. A successful pilot rabies control program in Addis Ababa could serve as a model for other urban and rural areas in Ethiopia;
• The role of the media was found to be low in disseminating information to the public about this deadly but completely preventable disease;
• Therefore, strong extension services should be put in place, and regular vaccination campaigns, targeting coverage of 70 percent and above, should be combined with a continuing vaccination scheme for young dogs;
• Although animal health service is a public good, government involvement in service delivery should be reduced as far as possible, since the private sector is apparently more effective than the government in providing animal health services;
• Further epidemiological investigation is required to fully understand the extent of the distribution of the disease in other parts of the country;
• It is indeed time to initiate and develop a clear policy and legislation, with adequate enforcement, requiring the registration, licensing, and taxation of dogs, a measure which is often considered the basis for mass immunization and dog population control.
Polycentric Clustering and Structural Regularization for Source-free Unsupervised Domain Adaptation
Source-Free Domain Adaptation (SFDA) aims to solve the domain adaptation problem by transferring the knowledge learned from a pre-trained source model to an unseen target domain. Most existing methods assign pseudo-labels to the target data by generating feature prototypes. However, due to the discrepancy between the source and target data distributions and the category imbalance in the target domain, the generated feature prototypes suffer from severe class bias, leading to noisy pseudo-labels. Besides, the data structure of the target domain, which is crucial for clustering, is often ignored. In this paper, a novel framework named PCSR is proposed to tackle SFDA via an intra-class Polycentric Clustering and Structural Regularization strategy. First, an inter-class-balanced sampling strategy is proposed to generate representative feature prototypes for each class. Furthermore, k-means clustering is introduced to generate multiple clustering centers for each class in the target domain to obtain robust pseudo-labels. Finally, to enhance the model's generalization, structural regularization is introduced for the target domain. Extensive experiments on three UDA benchmark datasets show that our method performs better than or comparably to other state-of-the-art methods, demonstrating the superiority of our approach for visual domain adaptation problems.
Introduction
In recent years, unsupervised domain adaptation [9] has been developed to reduce the domain shift by transferring knowledge from a labeled source domain to a target domain, and has achieved promising results in object detection [3,35], semantic segmentation [39,52] and person re-identification [7,42]. The main research directions of existing UDA methods include (i) minimizing the distribution discrepancy by matching statistical distribution moments [24,30]; (ii) applying adversarial training to learn domain-invariant feature representations [45,51]; and (iii) bridging the domain gap by using clustering [8] or pseudo-labeling [1]. It is worth noting that all of them assume that both well-trained source models and labeled source data are available. However, with increasing concerns about data privacy and the intellectual property of users, the accessibility of well-labeled source data cannot be guaranteed for many real-world tasks.
To overcome the above problem, some recent works [14,15,18,19,33,47] have explored domain adaptation without source data. Source-free domain adaptation (SFDA) is a new unsupervised learning setup for the domain adaptation task. Recently, image generation [18], class prototypes [19,47], and pseudo-labeling [19] have been widely utilized in existing SFDA approaches. However, generative models require a large computational capacity for generating target-style images, and although class-prototype and pseudo-labeling methods have shown competitive results, they introduce noisy labels due to category biases between the source and target domains. We argue that a single clustering center per category is insufficient to avoid the negative transfer caused by hard-to-transfer data in the target domain. Furthermore, the structural information of the target data in the feature space, which is very helpful for reducing noisy labels, is often ignored.
Based on the ideas presented above, a simple yet effective polycentric clustering and structural regularization strategy is proposed in this paper. Specifically, because of class imbalance in the target domain, an inter-class-balanced sampling strategy is designed to aggregate representative samples of each class and prevent easy data from dominating the target model. To assign more accurate pseudo-labels, polycentric clustering is proposed to generate multiple feature clustering centers within each class of the target data. In addition, to alleviate noisy labels, a mixup structural regularization term is introduced into our framework, encouraging the predictions on interpolated samples to be consistent with the interpolated predictions. Under the guidance of this structural regularization, the model is encouraged to maintain consistency, improving its robustness against noisy labels.
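For concreteness, a minimal PyTorch-style sketch of such a mixup consistency term is given below. It follows the standard mixup formulation and is only an illustration of the idea; the function name, the Beta-distribution parameter, and the KL-divergence form are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def mixup_consistency_loss(model, x, alpha=0.3):
    """Encourage predictions on mixed inputs to match mixed predictions."""
    lam = Beta(alpha, alpha).sample().item()           # mixing coefficient
    perm = torch.randperm(x.size(0), device=x.device)  # random pairing of samples
    x_mix = lam * x + (1 - lam) * x[perm]              # interpolated samples
    with torch.no_grad():                              # interpolated soft targets
        p = F.softmax(model(x), dim=1)
        p_mix = lam * p + (1 - lam) * p[perm]
    log_q = F.log_softmax(model(x_mix), dim=1)
    return F.kl_div(log_q, p_mix, reduction="batchmean")
```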
To evaluate the effectiveness of our model, we conduct extensive experiments on three benchmark datasets, and the experimental results show the significant superiority of our method in SFDA. The main contributions of our work are summarized as follows: • We propose a novel framework, Polycentric Clustering and Structural Regularization (PCSR), for SFDA tasks, which aims to protect data privacy and maintain model performance without access to the source data.
• To prevent easy-to-transfer data from dominating the target model, an inter-class-balanced sampling strategy is designed to address the challenge of class imbalance, and a polycentric clustering approach is proposed for each class to reduce the noisy labels assigned to hard samples.
• To reduce the noisy labels, the mixup regularization module is introduced to interpolate the target data for consistent training, leading to more robust pseudo labels.
• Extensive experiments on three benchmark datasets validate the superiority of our PCSR strategy. The results show that our strategy is comparable to or significantly better than existing methods.
2 Related Work
Unsupervised Domain Adaptation
As a classic example of transfer learning [28], in recent years, UDA methods for image classification tasks have aimed to align the source and target distributions in an attempt to minimize the domain gap in knowledge transfer. There are currently three main classes of UDA methods: discrepancy-, adversarial learning-, and reconstruction-based. The feature distributions of the source and target domains are aligned by minimizing Maximum Mean Discrepancy (MMD) [21,22,23] in the discrepancy-based methods. In the adversarial learning-based approaches, the network is trained to learn domain invariant features by adding a feature discriminator [10,25,34]. Different from the previous two methods, the network is guided to extract domain-invariant features by an auxiliary image reconstruction module in the reconstruction-based approaches [2,27]. Although these UDA methods are effective, they require access to both the source and target data. In the real world, this is impractical due to data privacy or security concerns. In contrast, our method proposed in this paper does not require source data when performing adaptation, making it more suitable for real-world applications.
Unsupervised Source-Free Domain Adaptation
In most realistic scenarios, only source models and unlabeled target data are available. As a result, some recent work on source-free domain adaptation has emerged [15,18,19,20,31,38,44,49]. Specifically, SHOT [19] proposed freezing the source classifier to maximize mutual information and minimize entropy, while using a pseudo-labeling strategy to obtain extra supervision. In 3C-GAN [18], labeled target-style training images were generated based on a conditional GAN to improve model performance on the target domain. In G-SFDA [49], the neighborhood structure of the target data was exploited to effectively enhance the predictive consistency of local neighborhood features. In CPGA [31], source avatar prototypes were generated via contrastive learning to mine the hidden knowledge in the source model. Many of the methods described above freeze the source classifier during adaptation to preserve class information and assign pseudo-labels based on the classifier's output. They mainly rely on a single feature prototype to align the two domains, which often causes negative transfer and noisy labels. Instead, we introduce polycentric clustering for each class to reduce noisy labels. In addition, a consistent training strategy is introduced to enhance the target domain for source-free domain adaptation.
Method
In this section, we first formally define the problem and the notation used for source-free domain adaptation followed by an overview of our framework. Later, a detailed description of our proposed strategy to solve the SFDA problem is presented.
Preliminaries and notations
We denote the source domain with n_s labeled samples as D_s = {(x_i^s, y_i^s)}_{i=1}^{n_s}, and the target domain with n_t unlabeled samples as D_t = {x_i^t}_{i=1}^{n_t}, which shares the same label set C as D_s. Under the SFDA setting, only the model f_s trained on the source data is accessible, which consists of two parts: a feature extractor g_s : X_s → R^d and a classifier h_s : R^d → R^K, where d denotes the dimension of the feature space and K the number of classes. In this work, with only the source model f_s and the unlabeled data {x_i^t}_{i=1}^{n_t} available, our goal is to learn a target model f_t : X_t → Y_t and to infer the target labels {y_i^t}_{i=1}^{n_t}.
Overall framework
An overview of our proposed framework is presented in Figure 1. The target model f_t is initialized by the source model f_s, which consists of two modules: the feature encoding module g_s and the classifier module h_s. The target model uses the same classifier module, namely h_t = h_s, and two new modules, polycentric clustering (PCC) and mixup, are introduced. Note that PCSR learns f_t in an epoch-wise manner. In each epoch, a balanced set of feature instances representing each class is first obtained using inter-class balanced sampling. Then polycentric clustering is applied to obtain accurate pseudo-labels. After that, the information maximization loss is used to reduce the gap between the feature distributions of the source and target domains. Meanwhile, the mixup operation is introduced to augment the target domain with interpolated samples.
Information maximization
We update the feature extractor g_t using the information maximization (IM) loss [12], which reduces the discrepancy between the source and target feature distributions so that the classification outputs of the target features are individually certain and globally diverse. The IM loss consists of a conditional entropy term and a diversity term:

L_IM = -E_{x_t∈X_t} Σ_{k=1}^{K} δ_k(f_t(x_t)) log δ_k(f_t(x_t)) + Σ_{k=1}^{K} p̂_k log p̂_k,  (1)

where δ_k(a) denotes the k-th element in the softmax output of the K-dimensional vector a, and p̂ = E_{x_t∈X_t}[δ(f_t(x_t))] is the average of the current batch's softmax output.
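A minimal PyTorch sketch of Eq. (1) follows; it is illustrative only, and the tensor names and epsilon guard are our own, not from the paper:

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """IM loss sketch: per-sample entropy (certainty) + batch diversity term."""
    probs = F.softmax(logits, dim=1)                      # (B, K) softmax outputs
    # Conditional entropy term: encourages confident per-sample predictions.
    ent = -(probs * torch.log(probs + eps)).sum(dim=1).mean()
    # Diversity term: the negative entropy of the mean batch prediction is
    # minimized, pushing the marginal prediction toward uniformity.
    mean_probs = probs.mean(dim=0)                        # (K,)
    div = (mean_probs * torch.log(mean_probs + eps)).sum()
    return ent + div
```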
Intra-class polycentric clustering
To reduce the gap between the source and target domains, a simple existing solution is to eliminate the noisy labels by selecting pseudo-labels with high confidence. However, this will bias the model toward majority classes and ignore minority classes, resulting in noisy labels for those hard data in the target domain. To reduce the noisy labels, an intra-class polycentric clustering strategy is proposed, which contains two steps.
Inter-class balanced sampling. Due to class imbalance in the target domain, instead of using the existing prediction results based on argmax operations, we adopt an inter-class balanced sampling strategy to construct each class of the target domain. Specifically, each sample in the target domain is represented by a feature vector ĝ_t(x_t) and a classification result p(x_t) = δ(f̂_t(x_t)). Instead of choosing only the top-1 feature, the samples with the top-M values of p_k(x_t) for the k-th class on the target domain D_t are selected as potential representative features for aggregation. These top-M features are then averaged to form an inter-class balanced feature clustering center c_k, and the initial pseudo-label ŷ_t is obtained from the nearest centroid classifier:

c_k = (1/M) Σ_{x_t∈top-M(k)} ĝ_t(x_t),  ŷ_t = arg min_k D_f(ĝ_t(x_t), c_k),  (2)

where f̂_t = h_t ∘ ĝ_t denotes the previously learned target hypothesis, M = max(1, n_t/(r×K)), r is the hyperparameter of the top-M selection ratio, K is the number of classes in the target domain, and D_f(a, b) measures the cosine distance between a and b. Based on the above strategy, we obtain balanced sampled feature instances for each class. Similar to SHOT [19], we then perform iterative computation to obtain more robust clustering centers c_k and pseudo-labels ŷ_t:

c_k = Σ_{x_t} 1(ŷ_t = k) ĝ_t(x_t) / Σ_{x_t} 1(ŷ_t = k),  ŷ_t = arg min_k D_f(ĝ_t(x_t), c_k).  (3)

Although the pseudo-labels and the centroids can be updated by Eq. (3) multiple times, we find that two rounds of updating are sufficient.
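The following NumPy sketch illustrates one possible reading of Eqs. (2)-(3); the function name, the feature-normalization details, and the top-M formula are assumptions rather than the authors' exact implementation:

```python
import numpy as np

def balanced_centroids_and_pseudo_labels(feats, probs, r=3, rounds=2):
    """Sketch of inter-class balanced sampling (Eqs. 2-3).

    feats: (N, d) target features g_t(x).
    probs: (N, K) softmax predictions of the current target model.
    """
    n, k = probs.shape
    m = max(1, n // (r * k))                      # top-M per class (assumed formula)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    # Initial centroids: average the top-M most confident features per class.
    centroids = np.stack(
        [feats[np.argsort(-probs[:, c])[:m]].mean(axis=0) for c in range(k)]
    )
    for _ in range(rounds):
        centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
        # Max cosine similarity == min cosine distance (Eq. 2).
        labels = np.argmax(feats @ centroids.T, axis=1)
        # Recompute each centroid from the current assignment (Eq. 3).
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = feats[labels == c].mean(axis=0)
    return centroids, labels
```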
Polycentric clustering. With the above strategy, class-balanced prototypes and robust pseudo-labels can be obtained. However, ambiguous data located near the decision boundary may not be represented effectively by a coarse monocentric prototype. In this paper, polycentric clustering is proposed to obtain more accurate pseudo-labels with a predefined number of clustering centers. Specifically, the classical k-means algorithm [26] is introduced to achieve intra-class clustering of the target domain. Assuming the number of clustering centers per class is P, we define {c_k^i}_{i=1}^{P} as the multiple clustering centers of the k-th class. The k-means algorithm is used to obtain the multiple clustering centers of each class, and more robust pseudo-labels ŷ_t are assigned according to the nearest sub-center:

ŷ_t = arg min_k min_{i=1,...,P} D_f(ĝ_t(x_t), c_k^i).  (4)

Empirically, we find that iterating this process for two rounds is sufficient. Given the generated pseudo-labels, the intra-class polycentric clustering pseudo-labeling loss is computed as the cross-entropy between the model predictions and the pseudo-labels:

L_pc = -E_{x_t∈X_t} Σ_{k=1}^{K} 1(ŷ_t = k) log δ_k(f_t(x_t)).  (5)
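A possible realization of the per-class k-means step behind Eq. (4), sketched with scikit-learn; the guard for tiny or empty classes is our addition, not from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def polycentric_pseudo_labels(feats, labels, num_classes, p=3):
    """Sketch of intra-class polycentric clustering (Eq. 4).

    feats:  (N, d) normalized target features.
    labels: (N,) initial pseudo-labels from the balanced centroids.
    p:      number of sub-centers per class (P in the paper).
    """
    centers, owners = [], []
    for c in range(num_classes):
        class_feats = feats[labels == c]
        if len(class_feats) == 0:                # skip empty classes (our guard)
            continue
        n_clusters = min(p, len(class_feats))    # avoid more centers than samples
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(class_feats)
        centers.append(km.cluster_centers_)
        owners.extend([c] * n_clusters)
    centers = np.concatenate(centers)            # (total sub-centers, d)
    owners = np.asarray(owners)
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    # Assign each sample to the class that owns its nearest sub-center.
    nearest = np.argmax(feats @ centers.T, axis=1)
    return owners[nearest]
```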
Structural regularization by mixup training
As mentioned above, intra-class polycentric clustering mitigates negative transfer, but it ignores the data structure of the target domain and still suffers from noisy labels. According to [48], even though the target data is shifted in the feature space, target data of the same class is still expected to form a cluster in the embedding space. Therefore, we exploit paired target structure information via MixUp [50] to reduce intra-domain variation. The new instance (x̃, ỹ) generated by the MixUp operation Mix_λ((x_1, y_1), (x_2, y_2)) is defined as:

x̃ = λ x_1 + (1 − λ) x_2,  ỹ = λ y_1 + (1 − λ) y_2,  (6)

where λ denotes the mixup coefficient. The structural loss is optimized using interpolation consistency training [41]:

L_mix = E_{x_i, x_j ∈ X_t} l_ce( f_t(Mix_λ(x_i, x_j)), Mix_λ(f̄_t(x_i), f̄_t(x_j)) ),  (7)

where λ obeys a Beta distribution, λ ∼ Beta(α, α), with the hyperparameter α empirically set to 0.3 following the setup of [50]; l_ce represents the cross-entropy loss; and f̄_t indicates that no gradient is computed, i.e., only the value of f_t is used. This loss function supplies more augmented samples for the target domain, allowing for better generalization ability. Integrating all the loss functions introduced above, we derive the final objective as follows.
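A compact PyTorch sketch of Eqs. (6)-(7); the sample pairing and the soft-target cross-entropy form reflect our reading of interpolation consistency training, not a verified implementation:

```python
import torch
import torch.nn.functional as F

def mixup_consistency_loss(model, x1, x2, alpha=0.3):
    """Sketch of the interpolation-consistency term (Eq. 7)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1.0 - lam) * x2               # mixed input (Eq. 6)
    with torch.no_grad():                             # f̄_t: targets without gradients
        p_mix = lam * F.softmax(model(x1), dim=1) \
              + (1.0 - lam) * F.softmax(model(x2), dim=1)
    log_q = F.log_softmax(model(x_mix), dim=1)
    # Cross-entropy between the mixed soft targets and the prediction
    # on the mixed input.
    return -(p_mix * log_q).sum(dim=1).mean()
```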
L_total = L_IM + L_pc + β L_mix,  (8)

where β is a hyperparameter experimentally set to 1.0.
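Putting the pieces together, one target-adaptation step might look like the following sketch, reusing the helper functions from the previous snippets; the batch layout and pseudo-label storage are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def training_step(model, batch, pseudo_labels, beta=1.0):
    """Sketch of one optimization step combining the three losses (Eq. 8)."""
    x, idx = batch                                    # images and dataset indices
    logits = model(x)
    loss_im = information_maximization_loss(logits)   # Eq. (1)
    loss_pc = F.cross_entropy(logits, pseudo_labels[idx])  # Eq. (5)
    # Pair each sample with a shuffled partner for the mixup consistency term.
    perm = torch.randperm(x.size(0))
    loss_mix = mixup_consistency_loss(model, x, x[perm])   # Eq. (7)
    return loss_im + loss_pc + beta * loss_mix             # Eq. (8)
```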
Algorithm 1 summarizes our method's training process.

Algorithm 1: Polycentric Clustering and Structural Regularization for SFDA
Input: pre-trained source model f_s(h_s, g_s); target data X_t; max epoch number T_max; number of clustering centers P.
Initialization: initialize f_t(h_t, g_t) using f_s(h_s, g_s).
1: for epoch = 1 to T_max do
2:   Compute the inter-class balanced centroids and initial pseudo-labels using Eqs. (2)-(3).
3:   Compute multiple clustering centers and pseudo-labels based on P and the k-means algorithm using Eq. (4).
4:   for iter_idx = 1 to the number of target batches N_b do
5:     Calculate the IM loss according to Eq. (1).
6:     Apply MixUp to perform the structural regularization operation by Eq. (7).
7:     Update f_t by minimizing the total loss in Eq. (8).
8:   end for
9: end for

Experiments

Datasets. We conduct experiments on three datasets: Office-31 [32], Office-Home [40], and VisDA-C [29]. Office-31 is divided into three domains, Amazon (A), Webcam (W) and DSLR (D), with 31 categories. Office-Home contains 65 categories and consists of four domains: Artistic images (A), Clip Art (C), Product images (P) and Real-World images (R). VisDA-C is a more challenging dataset: the source domain contains 152K synthetic images generated by rendering 3D models, while the target domain has 55K real object images, divided into 12 shared classes.

Implementation details. To ensure a fair comparison with related approaches, we employ ResNet-50 [11] pre-trained on ImageNet [6] as the backbone for Office-31 and Office-Home, and ResNet-101 [11] as the backbone for VisDA-C. As in previous work, for all datasets we apply the stochastic gradient descent (SGD) optimizer with momentum 0.9 and weight decay 1e-3; the batch size is set to 64, and the input images are resized to 224×224. The learning rate is set to 1e-2 for Office-31 and Office-Home and 1e-3 for VisDA-C, and 30 epochs are trained for all settings. For the hyperparameters, we set the selection ratio r to 3 on all datasets, and the predefined number of clustering centers P to 3 for Office-31 and Office-Home and 4 for VisDA-C. All experiments are built on a TITAN Xp with PyTorch. The source code of the proposed algorithm is available at https://github.com/Gxinuu/PCSR.

Quantitative comparison

[Table 1: Results on Office-31; columns A→D, A→W, D→A, D→W, W→A, W→D, Avg.]

Tables 1-3 show the experimental results on the three datasets mentioned above. The best results in SFDA are shown in bold and the sub-optimal results are underlined. In Table 1, our method achieves results comparable to 3C-GAN on Office-31 and even obtains more competitive performance. Note that 3C-GAN relies heavily on extra synthesized data, and that Office-31 is a small-scale dataset in which each class contains around 40 images on average, which makes it hard for our method to aggregate valid polycentric clusters. Even so, we still achieve the best results on 3 of the 6 tasks. As shown in Table 2, our method achieves state-of-the-art performance (72.8%) on Office-Home, higher than the second-best NRC by a margin of 0.6%, and obtains the best or second-best results on 10 out of 12 individual tasks. Our method is even superior to some traditional domain adaptation methods that require source data. This can be attributed to the fact that, with the increased amount of data, finer polycentric clustering and more comprehensive structural information are available to support our approach.
To further demonstrate the effectiveness of our proposed PCSR, we conduct evaluation experiments on the large-scale VisDA-C dataset and report the results in Table 3. Our method significantly outperforms SHOT, surpassing it by 2.7%. We also find that per-class performance is more balanced with our strategy. In particular, for the challenging class 'truck', our method achieves 66.4%, outperforming SHOT, which applies monocentric clustering, by 23.7%. The reason is that the polycentric clustering strategy introduces more fine-grained feature clustering centers, and the generalization ability of the target model is improved by structural regularization. The results demonstrate the effectiveness of our approach; our method also outperforms domain adaptation methods with access to source data on both Office-Home and VisDA-C.
Ablation studies
[Table 3: Classification accuracies (%) on VisDA-C for ResNet101-based methods.]

Number of clustering centers P. In Figure 3, we show the results using different P ∈ {1, 2, 3, 4, 5, 6} on the three datasets. When P is set to 1, the method degenerates to a single-centroid clustering strategy. Accuracy reaches its best value on Office-31 and Office-Home when P is set to 3, whereas the best result for VisDA-C is obtained when P is set to 4. The data size of VisDA-C is larger than that of Office-31 and Office-Home, and the adaptation on VisDA-C is performed from synthetic to real images, so there is a great discrepancy in the feature distribution between the two domains; therefore, more clustering centers should be set to achieve better performance. It can be seen that the polycentric clustering strategy introduces more fine-grained feature clustering centers for each class, which allows the model to assign more accurate pseudo-labels to hard transfer data. This demonstrates that it is necessary to implement intra-class polycentric clustering.
Ablation study of hyper-parameter β. In Eq. (8), β is an empirical hyper-parameter, and we conduct an ablation experiment, shown in Fig. 2, with different values of β in [0.5, 1.5]. Performance is insensitive to changes in β, so we select β = 1 in this paper.

[Table 4: Ablation of the losses on Office-Home.]

Ablation study on losses. We validate the effectiveness of our methods on Office-Home; the results are shown in Table 4. The classification accuracy is 59.6% when the source-only model is used. We start by applying the information maximization loss, which makes the classification outputs of the target features more certain and more globally diverse; this achieves 70.5% accuracy. In the third row, adding the intra-class polycentric feature clustering on top of the information maximization loss yields more accurate pseudo-labels, and the performance increases to 72.1%, 12.5 points above the source-only baseline. Using structural regularization, the model is enforced to maintain consistency, and the performance reaches 72.5%. The model's performance is best when all three are used simultaneously. This demonstrates the validity of each loss function.
t-SNE visualization. To visually demonstrate the effectiveness of our method, we compare the t-SNE embeddings of the features extracted by ResNet-50, SHOT and our method on the Ar→Cl task of Office-Home. As shown in Figure 4, the features in the target domain become more structured after adaptation, and the source and target domains are better aligned by our method. This result clearly demonstrates that it is possible to reduce the discrepancy between two different domains even without accessing the source data.
Conclusion
In this paper, we have proposed a polycentric clustering and structural regularization (PCSR) strategy for source-free domain adaptation. Specifically, differently from previous monocentric clustering, our PCSR strategy reduces the negative transfer of hard data in the target domain by applying intra-class polycentric clustering on top of inter-class balanced sampling. In addition, structural regularization of the target domain interpolates the target data for consistent training and improves the model's robustness. The experimental results on three benchmark datasets have demonstrated the effectiveness of our approach. For future work, we intend to apply the method to other vision tasks, such as semantic segmentation and object detection.
Anemia and the frailty syndrome amongst the elderly living in the community: a systematic review
1 Instituto de Ensino e Pesquisa da Santa Casa de Belo Horizonte, Programa de Pós-graduação Stricto Sensu em Medicina e Biomedicina, Laboratório de Epidemiologia. Belo Horizonte, Minas Gerais, Brasil. 2 Fundação Hospitalar do Estado de Minas Gerais (FHEMIG), Hospital Regional de Barbacena Dr. José Américo, Fisioterapia Respiratória. Barbacena, Minas Gerais, Brasil. 3 Instituto da Previdência dos Servidores do Estado de Minas Gerais (IPSEMG), Departamento de Fisioterapia. Belo Horizonte, Minas Gerais, Brasil.
INTRODUCTION
Aging is accompanied by physiological factors that can cause reduced functional capacity. When combined with chronic degenerative diseases, functional dependence can be a determinant of the deterioration of quality of life 1,2. The gastrointestinal system and bone marrow also suffer during aging, leading to a greater frequency of anemia in this population 1.
According to the World Health Organization, anemia is defined as a concentration of hemoglobin of <120 g/l for women and <130 g/l for men 3. The prevalence of anemia increases with age and has been reported in more than 20% of elderly persons aged 85 years or more, in over 10% of elderly persons living in the community, and in around 50% of institutionalized elderly persons 4. The reduction of hemoglobin may be due to nutritional deficiency, chronic inflammation or unexplained factors 5. Anemia is associated with reduced mobility, cognitive ability and quality of life, and with increased mortality. Some studies have associated the reduction of hemoglobin levels with the development of the frailty syndrome 6,7.
Frailty is a clinical syndrome that leads to multisystem decline and reduced energy reserves and homeostatic balancing ability following a destabilizing event. It is multifactorial and associated with immunosenescence and inflammatory processes 3,8. Immunosenescence is accompanied by dysregulation of the immune system and an increase in the production of inflammatory cytokines (IL-6, TNF-alpha, IL-1), producing a chronic low-grade inflammatory state. The mechanism by which the increase of inflammatory cytokines leads to the development of the frailty syndrome is still uncertain, but evidence indicates the catabolic action of these mediators 9. The criteria that define frailty are controversial; the most used are decreased muscle strength, exhaustion, reduced gait speed, reduced physical activity and unintentional weight loss 9. However, some authors suggest the inclusion of other criteria such as nutrition, comorbidities and socioeconomic aspects 3,8. The establishment of precise criteria and the standardization of instruments that evaluate frailty are important for the diagnosis of the condition and for preventive interventions that can delay or prevent the progression of the syndrome, preserving functional independence for longer 10,11.
The probable association between the frailty syndrome and anemia is of great importance to the area of geriatrics, as these are common phenomena in this population and, when combined, may present a more serious clinical outcome 3.
The aim of this study was to evaluate, through a systematic literature review, the association between anemia and the frailty syndrome in elderly persons living in the community.
METHOD
The literature review used the MEDLINE and LILACS databases to search for articles in English, Spanish and Portuguese. As the concept of frailty has been established in the literature in the last decade, the review was limited to publications from the last 10 years. The selected descriptors were: frail, elderly, anemia, sarcopenia, motor activity, muscle strength, mobility limitation, walking. Articles with the key words in the titles or abstracts and published by September 2016 were sought. The search strategy, with its descriptors and Boolean operators, was as follows: tw: [anemia AND ("Capacidade funcional" OR funcionalidade OR "Independencia funcional" OR "Atividade funcional" OR "capacidad funcional" OR funcionalidad OR "independencia funcional" OR "actividad funcional" OR "Functional capacity" OR functionality OR "functional independence" OR "functional activity" OR sarcopenia OR "Motor Activity" OR "Actividad Motora" OR "Atividade Motora" OR "Muscle Strength" OR "Fuerza Muscular" OR "Força Muscular" OR "Mobility Limitation" OR "Limitación de la Movilidad" OR "Limitação da Mobilidade" OR "walking" OR "caminata" OR "caminhada" OR "frail elderly")] AND (instance:"regional") AND [limit:("aged") AND la:("en" OR "es" OR "pt") AND year_cluster:("2013" OR "2014" OR "2009" OR "2008" OR "2012" OR "2015" OR "2007" OR "2010" OR "2006" OR "2011" OR "2016")].
Observational studies addressing anemia, frailty and/or functional capacity in the elderly living in the community were adopted as inclusion criteria. Articles addressing hospitalized or institutionalized elderly people, patients undergoing cancer treatment or in the postoperative period, and those with serious diseases such as rheumatologic disease and renal, cardiac or pulmonary insufficiency were excluded. The selection and qualification of articles were carried out by two independent reviewers following the inclusion criteria. In cases of disagreement, the articles were read and discussed together. The review followed the specific methodological guidelines for observational studies 12. To select the data from the articles, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria were applied, and for the analysis of the selected articles an instrument was elaborated based on the population, exposure/intervention, control and outcome (PECO) domains 13.
The assessment of the risk of bias in the articles included in the analysis was performed using an adapted version of the Newcastle-Ottawa Scale (Chart 1). The original scale evaluates the quality of observational studies and contains eight items that analyze three dimensions, with several options for each item. In this review, the questions were adjusted to investigate exposure and outcome (the frailty syndrome), and the risk of bias was classified as low, uncertain, or high 14.
This review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) under number CRD42017057567.
Chart 1. Adaptation of the Newcastle-Ottawa Scale for evaluation of the quality of studies. Belo Horizonte, Minas Gerais, 2017.
RESULTS
This review identified 193 articles in the MEDLINE (96) and LILACS (97) databases. After the elimination of eight duplicate articles, 32 articles were selected according to the inclusion criteria. Of the 32 eligible articles, 25 were excluded for being review articles, clinical trials or treatment studies, or other studies in which frailty was not the primary outcome. In the end, only seven studies met all the inclusion criteria.
Figure 1 shows the flow chart of the identification and selection of articles for the systematic review.
The descriptions and evaluations of the selected studies are presented in Charts 2 and 3. The articles were separated according to the definitions of the frailty syndrome employed. In brief, the charted entries describe the following studies.

Corona et al.: the objective was to evaluate the association between anemia, hemoglobin and frailty. The complete frailty criteria (Fried) were not used. Positive association between anemia and frailty.

Silva et al. (2014): cross-sectional study based on the FIBRA network. The objective was to evaluate the association between frailty, inflammatory markers and hemoglobin, with frailty assessed using the WHAS criteria. Reduction of hemoglobin was associated with frailty, sarcopenia and weight loss; positive for the association between anemia and frailty in women.

Llibre et al. (2014): longitudinal study based on the Cuban cohort. The objective was to identify the prevalence and incidence of frailty, risk factors (including anemia) and the incidence of functional dependence. Frailty was greater in women, with educational level and marital status acting as protective factors. Anemia was a risk factor for the prevalence of frailty (prevalence rate of 1.64, confidence interval 1.23-2.20).

Octabaix study: longitudinal study based on the Octabaix cohort. The objective was to analyze the prevalence of anemia in the elderly and its association with mortality after 3 years of follow-up. Mortality was higher in anemic elderly persons, who also had a worse perception of quality of life; an association between anemia and physical function was present.

Study of elderly persons aged 95 years or older: the objective was to evaluate the association between anemia and mortality in individuals aged 95 years or older. Tests included basic and instrumental activities of daily living, squatting, and the ability to raise the hands above head level (non-detailed evaluation tools). Anemia was associated with all causes of mortality, and women with anemia were more physically dependent than men; an association between anemia and physical function was present.
The concepts and criteria of the frailty syndrome diverged between the studies. Of the seven articles included, three used standardized criteria for the definition of frailty [15][16][17], while four studies evaluated functional capacity as a synonym for the frailty syndrome [18][19][20][21].
The review found a consensus that anemia is independently related to worsening functional capacity and frailty syndrome.However, there was great variability in relation to the criteria and instruments used for functional evaluation and characterization of this syndrome.
DISCUSSION
Despite the importance of the theme, this systematic review identified few studies relating anemia and the frailty syndrome in elderly persons living in the community.
All seven observational articles included in this systematic review found an association between anemia and the development of frailty or the worsening of functional capacity. However, a high risk of bias in the evaluation and definition of the frailty syndrome was found using the adapted Newcastle-Ottawa scale. Of the seven studies included, only one presented a low risk of bias in the evaluation criterion for the identification of this syndrome. This fact is explained by the variety and non-standardization of the criteria and instruments used to define frailty, and may limit the interpretation of the association between anemia and this syndrome (Chart 4). Another important issue is the fact that many authors consider functional capacity and frailty to be synonyms, even though both can occur in isolation [18][19][20][21].
Chart 4. Assessment of bias risk according to the adaptation of the Newcastle-Ottawa Scale. Belo Horizonte, Minas Gerais, 2017.

The study conducted by Corona et al. 15 evaluated the association between anemia, hemoglobin and the frailty syndrome. Anemic individuals were 2.5 times more likely to develop frailty; however, the criteria proposed by Fried were not fully complied with 15. The study from the FIBRA network, meanwhile, identified a relationship between hemoglobin reduction, frailty, sarcopenia and weight loss, using the Women's Health and Aging Study (WHAS) criteria for frailty 16.
In a sample of 2,813 Cuban elderly persons, the assessment of cognitive status was added to the frailty criteria proposed by Fried. Anemia was an important risk factor for the prevalence of frailty, with a prevalence rate of 1.64 (confidence interval 1.23-2.20) 17.
In a longitudinal study of 2,601 elderly people, Patel et al. evaluated whether functional changes (walking, climbing stairs and activities of daily living) varied in the presence of anemia among black- and white-skinned people. A decline in functional performance was observed in the presence of anemia. Considering the cutoff point for anemia proposed by the WHO, white anemic elderly individuals presented more functional alterations than black elderly persons. The results suggest adapting the cutoff point for the diagnosis of anemia to each skin color 18.
A longitudinal study with three years of follow-up evaluated functional capacity in anemic elderly persons using the Barthel and Lawton Indexes and the Performance Oriented Mobility Assessment (POMA). The authors found a higher mortality rate and a worse perception of quality of life and functional capacity in anemic elderly persons 19.
Two studies with centenarian elderly persons found an association between anemia and the decline of functional capacity. The studies used different assessment instruments and did not define criteria for the frailty syndrome 20,21. The number of nonagenarian or older elderly persons is increasing rapidly around the world, but longitudinal studies in this age group are rare due to the greater number of comorbidities and the high mortality rate 20,21. Decreased functional capacity in the elderly can be associated with numerous multidimensional factors, which determine the degree of dependence of this population and can lead to frailty. Various instruments exist for the measurement of muscle strength, balance and activities of daily living. The studies in this review used tests and instruments for assessing functional capacity such as the Lawton scale; the step climbing, gait speed, squat, handgrip, lower limb strength and balance tests; and the Short Physical Performance Battery (SPPB).
The frailty syndrome is complex and involves deterioration in multiple physiological domains, including strength and muscle mass, flexibility, balance, coordination and cardiovascular function 22. The inclusion of other criteria in the definition of the frailty syndrome, such as socioeconomic factors, nutritional factors and comorbidities, including the presence of anemia, has been previously discussed 23. However, there is still no explicit agreement on how to diagnose the syndrome, nor an instrument to assist in the prior identification of adverse events. The model most widely used around the world is that proposed by Fried et al. It is composed of five biological items: unintentional weight loss in the last year, reduced handgrip strength, slow gait, exhaustion and low physical activity. Elderly persons are considered frail when they present three or more of these criteria 10. Therefore, the lack of conceptual and methodological criteria to define a frail elderly person makes it difficult to evaluate and compare the studies included.
Only one study in this review associated the increase of inflammatory markers, the presence of anemia and the frailty syndrome 16. The inflammatory state is part of the immunosenescence process and is directly related to age. This process is characterized by an increase in inflammatory cytokines such as IL-6, IL-1, TNF-alpha and IFN-gamma. These cytokines are directly related to increasing age and to the development of inflammatory anemia. Anemia and the frailty syndrome may share this pathophysiological mechanism of the inflammatory process, and so anemia may trigger the frailty syndrome and vice versa 23,24.
CONCLUSION
Anemia was related to a decline in functional capacity and to the presence of the frailty syndrome in elderly persons living in the community. However, the definitions and criteria used to assess frailty differed between studies, and caution is required in the interpretation of these results.
The heterogeneity of the studies makes it difficult to verify evidence and generalize the data of the association between anemia and the frailty syndrome in elderly residents of the community.
New studies should be carried out with greater methodological rigor and standardization of the instruments and criteria used to allow a statistical comparison between anemia and frailty.
Early identification of anemic and/or frail elders will allow interventions to prevent or delay the "anemia-frailty" interaction, leading to an improvement in the quality of life of these elderly people.
Exposure (obtaining the independent variables): a) secure record + primary measures* (low risk of bias); b) structured interview + primary measures, without knowledge of the outcome* (low risk of bias); c) interview with knowledge of the outcome (high risk of bias); d) non-secure sources and self-assessment (high risk of bias); e) does not describe clearly (uncertain risk of bias).
Outcome (is the assessment of frailty adequate?): a) yes (low risk of bias); b) yes, according to Fried et al., with some modifications (1 or 2 components) (uncertain risk of bias); c) yes, according to Fried et al., with many modifications (3 or more components) (high risk of bias); d) no, describes it as functional capacity (high risk of bias).
Sample representativeness: a) representative of the local population* (low risk of bias); b) possibility of selection bias (high risk of bias); c) does not clearly describe (uncertain risk of bias).
Selection of participants: a) community* (low risk of bias); b) does not clearly describe (uncertain risk of bias).
Definition of the control group: a) no previous history of the syndrome* (low risk of bias); b) does not clearly describe (uncertain risk of bias).
* Represents an item with a classification of low risk of bias.
Legend: H - high risk of bias; B - low risk of bias; I - uncertain risk of bias.
Prevalence and characteristics of avoidant/restrictive food intake disorder in a cohort of young patients in day treatment for eating disorders
Background Avoidant/Restrictive Food Intake Disorder (ARFID) is a “new” diagnosis in the recently published DSM-5, but there is very little literature on patients with ARFID. Our objectives were to determine the prevalence of ARFID in children and adolescents undergoing day treatment for an eating disorder, and to compare ARFID patients to other eating disorder patients in the same cohort. Methods A retrospective chart review of 7-17 year olds admitted to a day program for younger patients with eating disorders between 2008 and 2012 was performed. Patients with ARFID were compared to those with anorexia nervosa, bulimia nervosa, and other specified feeding or eating disorder/unspecified feeding or eating disorder with respect to demographics, anthropometrics, clinical symptoms, and psychometric testing, using Chi-square, ANOVA, and post-hoc analysis. Results 39/173 (22.5%) patients met ARFID criteria. The ARFID group was younger than the non-ARFID group and had a greater proportion of males. Similar degrees of weight loss and malnutrition were found between groups. Patients with ARFID reported greater fears of vomiting and/or choking and food texture issues than those with other eating disorders, as well as greater dependency on nutritional supplements at intake. Children’s Eating Attitudes Test scores were lower for children with than without ARFID. A higher comorbidity of anxiety disorders, pervasive developmental disorder, and learning disorders, and a lower comorbidity of depression, were found in those with ARFID. Conclusions This study demonstrates that there are significant demographic and clinical characteristics that differentiate children with ARFID from those with other eating disorders in a day treatment program, and helps substantiate the recognition of ARFID as a distinct eating disorder diagnosis in the DSM-5.
Background
Historically, children and adolescents have not been easily diagnosed with eating disorders (EDs) based on past versions of the Diagnostic and Statistical Manual of Mental Disorders (DSM), including the 4th edition. In fact, over 50% of these patients met criteria for Eating Disorder Not Otherwise Specified (EDNOS), likely leading to missed diagnoses and difficulty obtaining appropriate and timely treatment [1][2][3]. With the preparations for publication of the 5th edition of the DSM (DSM-5), the Eating Disorders Work Group was assigned the tasks of improving the clinical utility of the diagnostic categories and reducing the frequency of EDNOS. One of the imperatives was to recognize new disorders and eliminate others by exploring the clinical profiles of patients who fell under the heterogeneous EDNOS category. In addition, the DSM-5 as a whole has attempted to take a developmental, or life-span, approach to all disorders.
Feeding Disorder of Infancy or Early Childhood, a diagnosis in the DSM-IV, delineated a persistent eating dysfunction leading to weight loss or failure to gain weight, with the requirement that patients be less than six years of age. This was a non-specific diagnostic category that was rarely used in practice and for which there was insufficient literature [4]. A great number of patients are over six years old at the time of initial ED evaluation, even if some have had symptoms from an early age, and have been necessarily given the diagnosis EDNOS in the past. Feeding Disorder of Infancy or Early Childhood also excluded those children with abnormal eating patterns or nutritionally deficient or limited diets, but who were growing normally secondary to sufficient caloric intake, possibly due to the use of nutritional supplements. The inability of DSM-IV to capture such patients was significant, as they often presented with considerable impairment, both physically and functionally [5].
Clinicians and researchers have long recognized specific types of EDs that fall under the umbrella of EDNOS. The Great Ormond Street (GOS) classification system captured a way to describe these types of patients, and was often utilized by clinicians for descriptive purposes. These criteria were actually found to have a higher inter-rater reliability for younger patients than the DSM-IV [1]. The GOS categories include: Food Avoidant Emotional Disorder (FAED), Selective Eating, and Functional Dysphagia, as well as Anorexia Nervosa (AN) and Bulimia Nervosa (BN).
FAED was first described as a combination of inadequate food intake and emotional disturbance; these young people knew that they were underweight and wanted to be heavier, but found this difficult to achieve [6]. The GOS system further clarified this group, and differentiated their presentation by the absence of weight and shape concerns in the presence of significant food restriction. Somatic complaints were frequent as well as more general psychopathology, e.g. generalized anxiety [4,5].
Selective eating, also known as "picky eating", is a common problem of childhood, with anywhere between 13 to 22% of children between 3 and 11 years of age being reported to be picky eaters at any given time [7]. While young children are typically thought to "grow out of" their pickiness, studies have shown that between 18 and 40% of the rigidity concerning food persists into adolescence [8][9][10]. Patients with selective eating are usually not underweight, as they take in adequate calories from preferred foods, but their diets may be lacking in micronutrients. Some selective eaters have sensory concerns related to the taste, smell, color, or texture of foods, which may limit their intake to such a narrow range of acceptable foods that weight loss, or failure to gain appropriate weight, may occur. Studies have shown a higher prevalence of boys with selective eating, as well as a high degree of co-morbid anxiety [11,12].
Functional dysphagia is a fear of swallowing or an inability to eat or swallow food, especially solid or lumpy foods. There is generally a fear of gagging, choking, or vomiting, often subsequent to actual traumatic episodes or witnessed episodes. Sometimes an illogical connection in the child's mind leads to development of the phobia. Some children present with food refusal specifically out of fears of vomiting, contamination, poisoning, or defecation as well. Many cases of acute food refusal due to specific fears present clinically malnourished and ill, as they often lose weight rapidly. They can easily be mistaken with AN on initial presentation due to the severity of the restriction; however, they are not concerned with weight or shape [4,5].
The DSM-5 has subsumed and expanded Feeding Disorder of Infancy or Early Childhood to capture a greater number of patients who present with avoidant or restrictive eating, but are clearly different from those with AN in that there are no disturbed cognitions about weight and/or shape, or a wish to lose weight. It has been renamed Avoidant/Restrictive Food Intake Disorder (ARFID) and includes those types of patients recognized in the GOS system. Patients with ARFID may present with clinically significant restrictive eating leading to weight loss or lack of weight gain, nutritional deficiencies, reliance on tube feeding or oral nutritional supplements and/or disturbances in psychosocial functioning (see Table 1) [13]. Additionally, they may exhibit similar physical signs and symptoms as patients with AN due to semi-starvation.
Very little has been published on patients with ARFID. Recently, a large multicenter study of children and adolescents presenting as new patients to adolescent medicine ED programs, revealed a 14% prevalence of ARFID, with unique clinical characteristics, including younger age and a greater number of males [14,15]. An 11-year retrospective chart review of adolescent ED patients in Canada reported a 5% prevalence of ARFID [16]. These patients were compared to a matched sample of AN patients, and demonstrated a younger age at presentation, and a higher likelihood of being male. There were specific behaviors and symptoms in the ARFID group, including food avoidance, decreased appetite, abdominal pain, and emetophobia. Both of these studies included all new patients presenting for initial assessments to tertiary care ED programs.
Due to the dearth of literature on ARFID, we sought to determine the prevalence and clinical characteristics of ARFID in young patients admitted to a day treatment program for EDs, and to compare patients with ARFID to those with AN, BN, and Other Specified Feeding or Eating Disorders/Unspecified Feeding or Eating Disorder (OSFED/UFED) in the same cohort.
Participants
A retrospective chart review was conducted on 177 patients admitted to a day program for children and adolescents with EDs between August 4 th , 2008 and May 1 st , 2012. This program treats female and male patients, ages 7 to 17 years, with EDs and co-morbid psychopathology. The majority of patients in the program have restrictive EDs, based mostly on the younger average age. However, patients with purging disorders are treated as well. While we treat some patients with sensory features related to food, who may or may not also have an autism spectrum disorder diagnosis, it is important to clarify that patients with longstanding feeding issues and autism are not typically admitted to our program, and are usually managed in the Feeding Disorders Program at our institution.
Initial ED and co-morbid psychiatric diagnoses were made upon admission to the program based on a comprehensive diagnostic psychiatric evaluation, by both a trained child and adolescent psychiatrist and either an experienced clinical psychologist or a licensed social worker/clinical psychiatric specialist, using DSM-IV-TR criteria. Some of the co-morbid diagnoses were based on history conveyed by the parent to the health care provider. DSM-5 ED diagnoses were determined retrospectively and agreed upon together through careful discussion by two of the psychiatric specialists and an adolescent medicine physician, all of whom were personally involved with the cases, using a checklist based on the proposed DSM-5 diagnostic criteria, which were almost identical to the published criteria. Therefore, these diagnoses were not made in a blinded fashion.
Of the 177 eligible subjects, a total of four participants were excluded from the study. Two were excluded for having medical conditions that were retrospectively determined to fully account for their disordered eating behaviors. Two subjects were excluded for having Binge Eating Disorder and composed too small a distinct group for data analysis.
Demographics, historical and clinical features
Data collected at intake included age, gender, and ethnicity. Historical information included past history of ED and/or other mental health treatment, other medical disorders and consultations by other medical specialists, presence of weight loss, percentage of body weight lost, length of illness, use of nutritional supplements, presence of purging behaviors, excessive exercise, history of food allergies, fears of choking and/or vomiting, and sensory issues related to food. This information was gathered from the initial evaluations by the adolescent medicine physician, the psychiatrist, and the psychologist or clinical social worker.
Anthropometrics
Weight and height were measured by trained staff at initial presentation. Gowned weights were obtained on a hospital-grade SECA digital scale and recorded to the nearest tenth of a kg. Heights were measured in bare feet using a fixed stadiometer with a right angle headpiece and recorded to the nearest tenth of a cm. BMI was calculated using the standard formula (kg/m^2) and the % Median Body Weight (%MBW) was determined based on the 50th percentile BMI-for-age.
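For illustration, BMI and %MBW can be computed as in the sketch below; the 50th-percentile BMI-for-age value is a placeholder that would in practice be read from standard growth-chart tables, and the example numbers are invented rather than taken from the study:

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

def percent_median_body_weight(weight_kg, height_cm, median_bmi_for_age):
    """%MBW relative to the weight implied by the 50th-percentile BMI-for-age."""
    height_m = height_cm / 100.0
    median_weight = median_bmi_for_age * (height_m ** 2)
    return 100.0 * weight_kg / median_weight

# Example: a patient at 40.1 kg and 160.0 cm, with a hypothetical
# 50th-percentile BMI-for-age of 19.4 kg/m^2 taken from growth charts.
print(round(bmi(40.1, 160.0), 1))                               # ~15.7
print(round(percent_median_body_weight(40.1, 160.0, 19.4), 1))  # ~80.7
```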
Psychometric measures
The Children's Eating Attitudes Test (ChEAT) [17]. The ChEAT is a 26-item scale assessing attitudes and behaviors associated with food and eating, validated in patients as young as 8 years old, adapted from the original EAT-26 [18]. A score of ≥ 20 is considered clinically significant relative to the normative population. The three subscales reflecting varying types of eating pathology include: Dieting, Bulimia/Food Preoccupation, and Oral Control [18].

Table 1. WHAT ARFID IS:
• A problem with eating or feeding (e.g. seeming disinterest in food or eating; repulsion to certain foods based on their sensory qualities; fears about aversive effects of eating) leading to recurrent inability to take in adequate nutrition and/or energy, coupled with one (or more) of the following:
○ Substantial weight loss (or lack of weight gain).
○ Reliance on nasogastric or gastric tube feeding or oral nutrition supplements.
○ Impaired psychosocial function.
WHAT ARFID IS NOT:
• The eating problems are not due to body image disturbance, and anorexia nervosa or bulimia nervosa cannot be diagnosed instead.
• Feeding or eating problems are not the result of scarcity of food or a culturally endorsed tradition.
• The disordered eating is not due to a concomitant medical problem or another psychiatric disorder, such that if the medical or psychiatric disorder is treated, the eating problem resolves.
Children's Depression Inventory (CDI) [19]. The CDI is a 27-item self-report inventory for assessing depression in children between the ages of 7 and 17 years. The measure yields a Total score (M = 50; SD = 10) and five factors: Negative Mood, Interpersonal Problems, Ineffectiveness, Anhedonia, and Negative Self-Esteem (M = 10; SD = 3).
Revised Children's Manifest Anxiety Scale (RCMAS) [20]. The RCMAS is a 37 item self-report instrument designed to measure anxiety for children and adolescents ages 6 to 17 years. The measure yields a Total Anxiety score based upon 28 items, with 9 items comprising the Lie Scale which is designed to detect responses that are socially desirable. The Total Anxiety Score is expressed as a T-score (M = 50, SD = 10) and there are three factorbased subscales, expressed as scaled scores (M = 10, SD = 3): Physiological Anxiety, Worry/Oversensitivity, and Social Concerns/Concentration.
The Child Behavior Checklist (CBCL) [21]. The CBCL provides three global measurements which are expressed as T-scores (M = 50; SD = 10): the Total Score and the Internalizing and Externalizing Scales. In addition, the measure includes 14 syndrome scores which reflect clusters of psychiatric symptoms. These scales are also expressed as T-scores (M = 50; SD = 10) and include the following: Anxious/Depressed, Withdrawn/Depressed, Somatic Complaints, Social Problems, Thought Problems, Attention Problems, Rule-Breaking Problems, Aggressive Behavior, Affective Problems, Anxiety Problems, Somatic Problems, ADHD Problems, Oppositional Defiant Problems, and Conduct Problems. It is completed by parents and/or other caregivers.
Statistical analysis
Analysis included descriptive statistics, chi-square, analysis of variance (ANOVA), and Pearson's correlation. Bonferroni correction was used to adjust for Type I error, with thresholds set at p < 0.01 for patient characteristics, p < 0.007 for ED symptoms and features, and p < 0.008 for psychiatric co-morbidities. Post-hoc testing to examine between-groups effects was performed with the Hochberg GT2 test. Data were entered and analyzed using SPSS (version 17.0, SPSS Inc., Chicago, Illinois).
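Although the study used SPSS, the same style of analysis can be sketched in Python with SciPy; the counts and group parameters below are toy values loosely modeled on the reported summary statistics, not the study's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Chi-square test of independence, e.g. diagnosis group vs. sex (toy counts).
table = np.array([[8, 31],    # ARFID: male, female
                  [6, 128]])  # non-ARFID: male, female
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# One-way ANOVA across diagnosis groups, e.g. age at intake (toy samples).
arfid = rng.normal(11.1, 1.7, 39)
an = rng.normal(14.2, 1.5, 93)
bn = rng.normal(14.5, 1.4, 20)
f_stat, p_anova = stats.f_oneway(arfid, an, bn)

# Bonferroni-style correction: compare each p-value to an adjusted threshold.
alpha, n_tests = 0.05, 5
print(p_chi < alpha / n_tests, p_anova < alpha / n_tests)
```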
This study was approved by the Institutional Review Board of the Penn State Hershey Medical Center/College of Medicine.
Demographics and anthropometrics
Using the proposed DSM-5 criteria, 39 (22.5%) patients met criteria for ARFID, 93 (53.8%) for AN, 20 (11.6%) for BN, and 21 (12.1%) for OSFED/UFED. Notably, all patients diagnosed with ARFID carried a DSM-IV diagnosis of EDNOS. None were diagnosed with DSM-IV Feeding Disorder of Infancy or Early Childhood, as all were over six years old at intake. Of the 173 participants included, 92% were female with a mean age of 13.5 years (SD = 2.03) (range 7.2 -16.9 years). The cohort was predominantly Caucasian (95%), reflecting the ethnic/racial makeup of the geographic area. There was no significant difference in duration of illness between those patients with ARFID and the other ED groups.
Patients with ARFID were found to be younger than those with other EDs (11.1 years, SD = 1.7 vs. 14.2 years, SD = 1.5; p < 0.0001) and to have a greater percentage of males (20.5% vs 4.5%; p = 0.008). Of the patients who had lost weight as part of their ED, those with AN lost a greater percentage of their premorbid weight than the other ED groups, including those with ARFID (Table 2). There was a significant difference found in %MBW between those with ARFID and BN, but not between ARFID and AN, or OSFED/UFED (Table 2). While the degree of malnutrition was similar to that of patients with AN, those with ARFID were found to have a greater dependence on nutritional supplements, fears of vomiting and/or choking, and texture/sensory issues pertaining to food (all p < 0.0001).
Psychometric assessment and psychiatric co-morbidities
Patients with ARFID were less likely to report typical ED symptoms, e.g. purging behaviors and excessive exercise, during the intake interview (all p < 0.0001). In addition, they had significantly lower total scores on the ChEAT (14.86, SD = 2.10) than the remaining patients overall (27.51, SD = 17.28) (p < 0.0001) (Figure 1). Post-hoc analysis revealed significant differences between patients with ARFID and all other groups for the total ChEAT score. While patients with ARFID also had significantly lower scores on both the Dieting and Bulimia Nervosa/Food Preoccupation subscales (p < 0.0001), there was no significant difference between groups on the Oral Control subscale. An interesting finding on chart review was that while patients with ARFID did not have true body image distortion, as seen in AN, 21% exhibited body preoccupation with somatic concerns. For example, some children were fixated on fears of physical illness due to issues related to shape/weight, e.g. high cholesterol and/or obesity leading to heart disease, either because of personal experiences with relatives or information in their school curriculum. Others who were chronically underweight due to their feeding and eating disturbance had suffered teasing by their peers because of their low weight, which may have led to body image concerns, although of a different nature than typically seen in AN and BN. There was a significantly higher comorbidity of anxiety disorders in patients with ARFID (72%) than in the other ED groups (31%), as determined by clinician diagnosis (p < 0.0001). Furthermore, this was supported by parental report on the CBCL (p = 0.005). However, there were no significant differences between groups on the total RCMAS score. Autism spectrum disorder (p = 0.001), learning disorders (p < 0.0001), and cognitive impairment (p < 0.0001) were also seen more frequently in the patients with ARFID, based on past history reported at initial assessment (Table 2). On the CBCL, children with ARFID had significantly more social problems (p = 0.001) and attention problems (p < 0.0001) than those with AN. There was a lower comorbidity of depression diagnosed in children with ARFID (23%) than in the other EDs (57%) (p < 0.0001), and total CDI scores were lower in this group as well (54.4 vs. 60.0, p = 0.05). Additionally, children with ARFID were found to have significantly lower scores on the CDI subscales Negative Mood (p = 0.02) and Negative Self Esteem (p < 0.0001). There were no significant differences between the groups on the Interpersonal Problems, Ineffectiveness, or Anhedonia subscales, however.
A smaller percentage of children with ARFID (35%) sought outpatient psychotherapy before coming to the program, compared to patients in the other ED groups (AN = 60.22%, EDNOS = 75%, BN = 80%; p = 0.002). However, there were no differences in the past history of higher levels of psychiatric care, e.g. inpatient, residential, or day treatment. In contrast, more children with ARFID (46.2%) had seen other medical specialists for consultation (e.g. gastroenterology, endocrinology) before coming to program than those with other EDs (26.1%), although this did not reach statistical significance with the Bonferroni correction (p = 0.02).
Discussion
This study adds to the literature on ARFID by comparing a cohort of children and adolescents undergoing day treatment for EDs, including patients with this "new" diagnosis. Notably, almost a quarter of our patients were diagnosed with ARFID, which illustrates the significant prevalence of this disorder amongst children and adolescents requiring an intensive level of ED treatment in a tertiary care setting. This was a higher prevalence than that found in the multicenter studies [14,15], which might be accounted for by the fact that our patients were encountered over four years in a day treatment setting, as opposed to all ED patients presenting for initial evaluation over a one-year period. The prevalence rate was in even starker contrast to the 11-year retrospective review from Canada, where the prevalence was only found to be 5% [16]. There is no mention of age range in that study, only that the patients were adolescent ED patients assessed in a pediatric tertiary care hospital program. Our study included children and adolescents between 7 and 17 years, which may have been a slightly lower range than the Canadian study; this might also justify the higher prevalence of ARFID found in our cohort. Another possible explanation for the discrepancy in prevalence rates across studies is that younger patients with atypical EDs, like ARFID, may be increasingly referred to adolescent medicine ED programs in more recent years, as there has been greater recognition of these presentations as true EDs. The Canadian study reviewed records starting in 2000 and it would be interesting to know whether the prevalence increased annually over the 11 years. In our experience, referrals from primary care providers tend to generate more referrals once they are successfully managed. Lastly, the higher prevalence in our cohort may reflect the fact that many children and adolescents with ARFID present acutely and significantly malnourished, requiring a higher level of care, such as day treatment.
Similar to the multicenter and Canadian studies [15,16], our results demonstrate that there are significant demographic and diagnostic characteristics that differentiate children with ARFID from those with other EDs. First, while female patients remain the majority, there was a higher preponderance of male patients in the ARFID group than in the other ED groups. Children and adolescents with ARFID were more likely to present at a younger age with significant weight loss or failure to gain appropriate weight, were more dependent on oral or enteral nutritional supplementation, and had significantly more fears of choking and/or vomiting, and texture and/ or sensitivity issues regarding food. These findings are consistent with those in studies of early-onset EDs [2,22,23], as well as in the recent multicenter study [15], and many are relevant and important features in making the diagnosis of ARFID [24].
Based on DSM-5 criteria, a patient cannot have body image distortion and be diagnosed with ARFID. However, our data revealed that 21% of patients diagnosed with ARFID had body preoccupation with somatic concerns. It is important to reiterate that none of the patients with ARFID had been diagnosed with AN using DSM-IV criteria, which underscores the absence of true body image distortion. During evaluation of a young patient with possible ARFID versus AN, it is critical to probe about body concerns that need to be distinguished from body image distortion. For example, if a patient has worries about becoming fat, this may have something to do with events in the family's medical history, e.g. an overweight parent or grandparent with a recent myocardial infarction or diabetes diagnosis. Children and adolescents are often privy to this information, but may make illogical associations based on their cognitive developmental stage. This knowledge may then trigger restrictive eating behaviors. Thorough history-taking can often elicit this information.
As has been documented in other studies of patients with acute food avoidance without weight/shape concerns [2,15,22,25,26], there were no significant differences in our study between % MBW in patients with ARFID and AN; however, patients with AN lost a significantly greater percentage of their premorbid body weight. This may be explained by the fact that our patients with ARFID, notably those with the acute food refusal seen in functional dysphagia, may have presented sooner after the onset of illness than those with AN. The data may not fully bear this out due to the heterogeneity of the ARFID category (e.g. more chronic selective eaters vs more acute food refusal), which might balance out the length of illness data. Furthermore, young patients may present relatively early in the course of their illness, based on their age alone.
Based on both clinician and parental report, patients with ARFID had significantly more anxiety and less depression than patients with other EDs, which is similar to findings in the large multicenter study on ARFID [15]. However, our study is the first of patients with ARFID to use standardized measures obtained from parents to aid in evaluation. There were no self-reported significant differences found between children with ARFID and those with other EDs on the RCMAS or any of its subscales, which could be due to the generally high comorbidity of anxiety symptoms in EDs. Alternatively, younger patients (those more likely to be diagnosed with ARFID) may have had a harder time filling out the questionnaire than older subjects, perhaps in understanding the questions or acknowledging symptoms of anxiety, due to cognitive developmental stage. It is important to clarify that ARFID is not simply a type of anxiety disorder, as the severity of the eating disturbance exceeds that which might be seen in an anxiety disorder and necessitates further clinical attention (see Table 1) [13].
Other than the use of outpatient psychotherapy, there were no significant differences between the groups in terms of prior mental health treatment, including hospitalizations for EDs or other mental health issues, admissions to day treatment programs, intensive outpatient programs, or residential treatment facilities. It should be taken into consideration, however, that ours is a young, relatively treatment-naïve population, and that the rate of past mental health admissions would be very different when looking at an older population of patients. Additionally, children with ARFID may be more likely seen as medically ill initially, and the early referrals may tend to gravitate toward the medical as opposed to mental health arena, as a trend in our data revealed, although it was not significant.
There were several strengths to this study, including the large sample size and the use of both clinical and standardized psychometric measures for patient assessment. Additionally, the use of multiple informants (patients, parents, and clinicians) adds to the validity of the findings. Furthermore, experienced clinicians completed all assessments and the adolescent medicine physician involved in deciding on the retrospective DSM-5 diagnoses was integrally involved in the efforts leading up to the inclusion of ARFID in the DSM-5. As ARFID is still a relatively "new" diagnosis, there are no formalized assessment tools available yet. However, instruments will likely be developed, capturing the clinical features and diagnostic criteria which will help standardize diagnosis. There are some available resources to help guide the clinician in evaluation [24,27].
However, there are several limitations that deserve mention. The retrospective nature of this study, and the fact that diagnoses were made on DSM-5 criteria that had not yet been formalized by the time of its completion, need to be taken into consideration. However, as previously mentioned, the published DSM-5 criteria were essentially the same as the proposed criteria used for this study. Careful discussion amongst experienced clinicians very familiar with all of the cases was undertaken to decide upon the appropriate DSM-5 diagnosis for each patient; this did not allow for direct assessment of inter-rater reliability. The absence of blinding of the clinicians may have introduced bias to the outcome of the study, possibly leading to a higher prevalence of ARFID than previously seen in other studies. Lastly, our patients were undergoing day treatment, which implies a certain severity of illness, and may limit the generalizability to patients in other settings, or nonclinical populations. Despite these limitations, this study provides support for ARFID as a separate diagnostic category.
Conclusions
This is the first study to examine patients with the diagnosis of ARFID in a cohort of patients undergoing day treatment and adds to the limited literature available on this new diagnosis. The inclusion of psychometric measures from both patients and parents has not been documented to date. Children and adolescents with ARFID are clearly distinct from those with other EDs and can now be identified and labeled more specifically and accurately. Ideally, this will enable more timely recognition and access to care. The degree of both physical and psychosocial dysfunction with which these patients present indicates the need for prompt and appropriate treatment. The relatively high prevalence of patients with
Influence of Body Mass Index on Mechanical Properties in People With Obesity
Background: The study aimed to determine the influence of body mass index (BMI) on muscular mechanical properties in people with obesity. Methods: A total of 300 individuals (mean age: 27.31±7.21 years) participated. The participants were assigned to groups based on BMI classification (Group 1 (BMI = 18.50-24.99 kg/m²), Group 2 (BMI = 25.00-29.99 kg/m²), and Group 3 (BMI ≥ 30 kg/m²)). The biceps brachii (BB) and biceps femoris (BF) were measured bilaterally using the "MyotonPRO" device. Results: All mechanical properties of the right and left BB muscle, as well as the left BF tone and stiffness, differed significantly between groups (p < 0.05). The bilateral BB tone in Group 3 was lower than in the other two groups. The right BB stiffness of Group 2 was higher than in the other two groups (p < 0.05). While the right and left BB elasticity was similar in Groups 2 and 3, it was lower compared to Group 1 (p < 0.05). The left BF tone and stiffness of Group 3 were significantly higher than in Groups 1 and 2 (p < 0.05). The right BB tone showed a weak negative correlation with BMI in females, and the left BB tone in males. A weak positive correlation was found between the right and left BB elasticity and BMI in males and females. The right and left BF tone and the left BF stiffness showed a weak positive correlation in males. Conclusions: The bilateral BB tone and elasticity decreased, and the left BF stiffness increased, as BMI increased. Different mechanical properties were observed when the sexes were compared across BMI classes, and the BB and BF mechanical properties were affected more in males than in females. Muscular action and efficiency can adapt to inactivity, and obesity or higher BMI appears to lead the body to develop different muscular adaptations, with the upper and lower extremity muscles acting differently during daily living activities and functions. The increased mechanical load on the lower leg tends to raise muscle stiffness, whereas the upper extremity muscles, which lack such mechanical loading, may be adversely affected. Overall, increased BMI changes the mechanical properties of the muscles, and the decrease in the muscular performance of people with obesity may indicate why physical activity is reduced, or vice versa.
Introduction
Obesity is a significant health problem that is gradually increasing. It can be defined as excessive fat accumulation in a way that can disrupt health, and it predisposes to chronic diseases (1). The calculation of body mass index (BMI) is the simplest indicator of an increase in adipose tissue in the body and the most frequently used method (2). According to BMI, individuals are evaluated as underweight (< 18.5 kg/m²), normal (18.5-24.9 kg/m²), overweight (25-29.9 kg/m²), first-degree obese (30-34.9 kg/m²), second-degree obese (35-39.9 kg/m²), and third-degree morbidly obese (≥ 40 kg/m²) (3). A decrease or increase in BMI may be a factor in the formation of chronic diseases (4).
Obesity is closely related to adipose tissue, and it may have direct or indirect effects on physical activity and the musculoskeletal system (5). While the relationship of obesity with cardiovascular diseases and type 2 diabetes draws attention, its effects on the musculoskeletal system are less questioned.
Disorders that can affect bone health, such as osteoarthritis or osteoporosis, which threaten joint health due to overload, are the most well-known. There are few studies on the effects of increased BMI on the mechanical properties of muscles or on the methods used to evaluate these properties (6).

and movements were excluded from the study. Participants were instructed not to consume alcohol for at least 24 hours and not to engage in strenuous physical activity for at least 48 hours before the test (17). Individuals were divided into three subcategories according to sex and BMI range: Group 1 (BMI = 18.50-24.99 kg/m²) (n = 100), Group 2 (BMI = 25.00-29.99 kg/m²) (n = 100), and Group 3 (BMI ≥ 30 kg/m²) (n = 100).
Procedures
The ethics committee approval numbered 2020/101 and dated 16.12.2020 was obtained from the non-invasive research ethics committee of Hasan Kalyoncu University, Faculty of Health Sciences. All participants were involved voluntarily; they were informed about the content and purpose of the study and signed the consent form. The physical characteristics and demographic information of the individuals were recorded prior to the test. Weight was evaluated using an electronic scale GSE 450 (GSE Scale Systems, Novi, Michigan), and height was evaluated using a standard stadiometer. BMI was calculated by dividing the weight in kilograms by the square of height in meters.
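To make the grouping procedure concrete, a minimal Python sketch is given below; it computes BMI from weight and height and assigns the three study groups defined above. The function names and the example values are illustrative assumptions, not part of the original study.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / (height_m ** 2)

def study_group(bmi_value: float) -> str:
    """Assign the three BMI groups used in this study (hypothetical helper)."""
    if 18.50 <= bmi_value <= 24.99:
        return "Group 1 (normal weight)"
    if 25.00 <= bmi_value <= 29.99:
        return "Group 2 (overweight)"
    if bmi_value >= 30.00:
        return "Group 3 (obese)"
    return "below study range"  # underweight individuals fall outside the three groups

# Example with made-up values: BMI of about 31.0 falls into Group 3
print(study_group(bmi(95.0, 1.75)))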
The tone and viscoelastic properties of the biceps brachii (BB) and biceps femoris (BF) muscles were evaluated bilaterally using a Myoton Pro (Müomeetria Ltd., Tallinn, Estonia) device. This device is known to have good to excellent reliability in healthy individuals (18,19). It can be used for objective diagnosis and monitoring in soft tissues in terms of validity and inter-user reliability (20,21).
The BB mechanical properties were evaluated by palpating the lateral end of the acromion and the cubital fossa in the middle from the ¾ of the distance between them with the individual in the resting supine position (22). Concerning the BF, the individual lay in the prone position and was asked to contract the hamstring muscle after placing a pillow under the ankle. The muscle was palpated while the individual was contracting it. Along with the contraction, the most prominent part of the muscle was marked and measured in muscle contraction, as suggested by Gavronski et al. (23). These muscles were preferred since they had been studied previously in many studies (24,25). For each measurement, the mean deviation, median, and 95% confidence interval were given, and mean values obtained from three consecutive measurements from the reference points were used in statistical analysis.
Statistical Analysis
Descriptive statistics were presented as mean ± standard deviation. The Shapiro-Wilk test was used to check whether the data were normally distributed. The Mann-Whitney U test was used to compare differences between males and females (sex), and the Kruskal-Wallis test was used to compare differences in three groups (according to BMI range) for non-normally distributed data. Post-hoc binary comparisons (after Dunn's correction) were used to determine the source of the difference. The relationship between numerical variables was evaluated by Spearman correlation. As Spearman's rank correlation coefficient, 0.00-0.10 was interpreted as very weak correlation or no correlation, 0.10-0.39 as weak correlation, 0.40-0.69 as moderate correlation, 0.70-0.89 as high correlation, and 0.90-1.00 was interpreted as very strong correlation (27).
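As an illustration of the analysis pipeline described above, the following is a minimal sketch using SciPy; the array names and the small toy datasets are assumptions made for demonstration and are not the study data.

import numpy as np
from scipy import stats

# Toy data: one mechanical property (e.g., BB tone) per BMI group (made-up values)
group1 = np.array([14.8, 15.2, 15.0, 14.6, 15.4])
group2 = np.array([14.9, 14.5, 15.1, 14.7, 14.4])
group3 = np.array([13.9, 14.1, 13.8, 14.3, 14.0])

# Normality check (Shapiro-Wilk) for each group
for g in (group1, group2, group3):
    w, p = stats.shapiro(g)
    print("Shapiro-Wilk p =", p)

# Kruskal-Wallis test across the three BMI groups
# (stats.mannwhitneyu would be used analogously for the male/female comparison)
h_stat, p_kw = stats.kruskal(group1, group2, group3)
print("Kruskal-Wallis H =", h_stat, "p =", p_kw)

# Spearman correlation between BMI and the property, interpreted with the
# thresholds quoted in the text (0.10-0.39 weak, 0.40-0.69 moderate, ...)
bmi_values = np.array([22.1, 23.5, 27.2, 28.8, 31.5, 33.0])
tone = np.array([15.1, 15.0, 14.7, 14.5, 14.0, 13.8])
rho, p_rho = stats.spearmanr(bmi_values, tone)
print("Spearman rho =", rho, "p =", p_rho)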
Statistical analysis was conducted using SPSS version 24.0 for Windows (IBM Corp., Armonk, NY), and p < 0.05 was considered statistically significant. The minimum number of participants required for each group was determined to be 44 (α = 0.01) in order to detect a significant difference between the three BMI groups at a large effect size (f = 0.75), taken from a previously published article, with a power of 0.90. The G*Power program, version 3.9.1.7, was used for the power analysis (28).
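A rough cross-check of the a priori sample-size calculation could be sketched with an ANOVA-based power routine, as below. The use of statsmodels here is an assumption (the authors used G*Power), and the sketch only illustrates how such a computation might be set up rather than reproducing the reported figure.

# Sketch of an a priori sample-size calculation for a three-group comparison,
# using the effect size (f = 0.75), alpha (0.01), and power (0.90) quoted above.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.75, nobs=None,
                               alpha=0.01, power=0.90, k_groups=3)
print(f"Estimated total sample size across the three groups: {n_total:.1f}")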
Correlation between the BB and BF mechanical properties and BMI

All individuals

A weak negative correlation was found between the right and left BB tone and BMI (r = -0.177, p = 0.002 and r = -0.157, p = 0.006, respectively). A weak positive correlation was revealed between the right and left BB elasticity and BMI (r = 0.258, p = 0.000 and r = 0.211, p = 0.000, respectively). No correlation was determined in the bilateral BB stiffness (p > 0.05). A weak positive correlation was found between the left BF stiffness and tone and BMI (r = 0.164, p = 0.004 and r = 0.143, p = 0.013, respectively). No correlation was detected in other mechanical parameters of the BF (p > 0.05) (Table 1).
Males
A weak positive correlation was revealed between the right and left BB elasticity and BMI (r = 0.285, p = 0.000 and r = 0.199, p = 0.015, respectively). No correlation was found in the other parameters of the BB (p > 0.05).
A weak positive correlation was observed between the left BF stiffness and tone and BMI (r = 0.284, p = 0.000 and r = 0.301, p = 0.000, respectively). No correlation was found between the bilateral BF elasticity and BMI (p > 0.05) (Table 1).
All individuals
Statistically significant differences were found in all mechanical properties of the right and left BB muscle and in the left BF tone and stiffness (p < 0.05) (Table 2). In post-hoc comparisons, the bilateral BB tone in Group 3 was lower than in the other two groups (p < 0.05). The right BB stiffness of Group 2 was higher than in the other two groups (p < 0.05). The right and left BB elasticity was similar in Groups 2 and 3 and higher than in Group 1 (p < 0.05). The left BF tone and stiffness of Group 3 were significantly higher than in Groups 1 and 2 (p < 0.05) (Table 3).
Males
Statistically significant differences were found, based on the BMI group comparison, in all mechanical properties of the right BB (stiffness, tone, elasticity), the right BF stiffness, the left BB stiffness and elasticity, and the left BF tone and stiffness (p < 0.05) (Table 2). The right BB tone of Group 3 was lower than in the other two groups (p < 0.05). The right and left BB stiffness of Group 2 was higher than that of Groups 1 and 3 (p < 0.05). The left BF tone of Group 3 was higher than that of Group 1 and Group 2 (p < 0.05). The right and left BF stiffness increased as BMI increased, and it was higher in Group 2 than in Group 1 (p < 0.05). The right and left BB elasticity in Group 1 was higher than in the other two groups (p < 0.05) (Table 3).
Females
Statistically significant differences were found in the right and left BB elasticity (p < 0.05) (Table 2). The right BB elasticity was higher in Group 1 compared to the other groups (p < 0.05); the right BB elasticity in Group 2 was better than in Group 3, but the difference between Groups 2 and 3 was not statistically significant (p > 0.05). Although the left BB elasticity was higher in Group 1 compared to the other groups (p < 0.05), no statistical difference was found between Groups 1-2 or Groups 2-3 (p > 0.05) (Table 3).
Discussion
This study was conducted to determine the influence of body mass index on muscular mechanical properties in people with obesity. A weak relationship was found between BMI and the mechanical properties of the BB and BF muscles. The bilateral BB tone and elasticity decreased as BMI increased, and the left BF stiffness increased. Different mechanical properties were observed when the sexes were compared across BMI classes, and the BB and BF mechanical properties were affected more in males than in females.
Resting muscle tone is classified into two categories, neural and non-neural. If there is no neural activation, muscle tone reflects passive stiffness and viscoelastic properties (22). When all individuals were examined, a weak negative correlation was observed between BMI and the bilateral BB tone, whereas the left BF tone showed a positive correlation. The left BF tone was positively correlated in males, while the bilateral BB tone was weakly negatively correlated in females. This pattern of correlations in tone suggests that people with obesity develop different neural and muscular adaptations in the lower and upper extremities. In a study conducted with 12 adolescent girls with obesity (BMI > 27) and 12 healthy girls, it was reported that, with the increased mechanical load in obesity, adaptation would occur in muscles and nerves; as a result, people with obesity might have a larger pennation angle, anatomical cross-sectional area, and muscle thickness (29). While this loading-related advantage is observed in the positive direction in the lower extremity as weight increases, the effect may be in the negative direction in the upper extremity. The upper extremity, which lacks such mechanical loading, combined with reduced activity, may bring a disadvantage that results in loss of cross-sectional area and contractile components. In studies comparing athletes and sedentary individuals, it is stated that sedentary individuals have a smaller cross-sectional area (30). This opposing relationship suggests that muscular and neural structures develop different adaptations in the upper and lower extremities.
In the evaluations we performed in males and females, the right and left BB elasticity showed a similar weak positive correlation in all three groups. In the study in which Kocur et al. evaluated the relationship between SCM muscle stiffness and elasticity and BMI, BMI was indicated to be highly correlated with elasticity and moderately correlated with stiffness (31). In a study comparing mechanical properties, it was reported that males with high BMI had lower biceps brachii elasticity than females (28). Interestingly, in our findings, the weak correlation with the bilateral BB elasticity in the upper extremity in all three groups (all individuals, males, and females) was not observed for the bilateral BF elasticity in the lower extremity. Furthermore, no correlation was observed in the bilateral BB stiffness. A weak positive correlation was found between the left BF stiffness and BMI only in males. Fat infiltration into skeletal muscles in people with obesity can create higher muscle stiffness and reduce flexibility compared to people without obesity, due to the limitation of range of motion and stable posture (28). Moreover, the increase in adipokines, which regulate the production of metalloproteinases, prostanoids, and cytokines in adipose tissue, can affect stiffness and flexibility in people who are overweight or obese (32). The different elasticity relationship in the lower and upper extremities suggests that it may be caused by changes in adipose tissue according to sex.
Torpid Diabetic Wound Healing: Evidence on the Role of Epigenetic Forces
The increasing number of diabetes patients represents a health challenge due to disease-related, end-organ complications. Hyperglycemia is considered the proximal trigger of an intricate cascade of molecular processes that progressively deteriorate tissues and organs, leading to the onset of clinical complications. Lower extremity ulcerations and their ensuing refractoriness to heal can potentially result in amputation and disability, and remain the second most feared diabetic complication. We have identified particular morphogenetic traits in diabetic foot ulcer granulation tissues and its cultured fibroblasts. Diabetic ulcer-derived fibroblasts conserve a sort of memory, as their in vitro traits very much recapitulate the in vivo behavior in terms of proliferative disabilities and transcriptional and post-translational modifications of genes involved in proliferation, migration, and ECM dynamics. Furthermore, the acute, in vivo morphologic recreation of a microangiopathy in a neo-formed granulation tissue is worth mentioning. All these elements suggest that "metabolic memory," in which chromatin remodeling and long-lasting epigenetic changes play important roles, could contribute to the persistence of diabetic complications. Metabolic memory is largely responsible for the onset/perpetuation of the ulcer chronicity phenotype. The comprehensive understanding of the chromatin choreography underlying this pathogenic stream, and its potential pharmacologic manipulation, would allow for future innovative therapies for diabetic complications, including wound healing refractoriness.
Introduction
Type 2-diabetes mellitus (T2DM) is a group of metabolic disorders that is currently expanding in a pandemic magnitude [1].
Hyperglycemia has been defined by the World Health Organization as a condition in which fasting blood glucose levels are greater than 7.0 mmol/L (126 mg/dL), or greater than 11.0 mmol/L (200 mg/dL) 2 hours after meals [2]. A sustained hyperglycemic state is invoked as the proximal trigger for diabetes-associated biochemical disturbances and the ensuing end-organ complications [3]. The hyperglycemic condition is associated with systemic endothelial dysfunction, a central factor for the onset and progression of macro- and microvascular complications which eventually undermine whole organ systems [4].
Although building the molecular bridge between the trivial episodes of hyperglycemia and the gene transcription machinery still stands as a challenge, mounting evidence sustains the pathogenic role of epigenetic mechanisms in hyperglycemia-induced organ complications. The experimental and clinical evidence which supports the concept of "metabolic memory," understood as the cellular ability to remember hyperglycemic experiences and thus perpetuate diabetic complications even under normal blood glucose levels [5,6], has offered an explanation for the defective wound healing traits of diabetic ulcer cells.
Chronic wounds are commonly defined as wounds that do not follow the well-defined stepwise process of physiological healing but are trapped in an uncoordinated and self-perpetuated phase of inflammation. As a result, the healing process is delayed, incomplete, and/or asynchronous, thus resulting in poor anatomical and functional outcomes [7]. Among chronic wounds, lower extremity ulceration is outlined as one of the most complex to treat. Diabetic foot ulcer, with the subsequent healing failure, is associated with amputation, disability, morbidity, and mortality [8]. Irrespective of the research and financial efforts put forward for years, current figures of amputations among the diabetic population are alarming, since an individual undergoes an amputation every 20 seconds [9].
There is no single, universal mechanism to explain why cutaneous wounds fail to heal in diabetic subjects. Rather, it is a multifactorial event in which diverse cellular and humoral factors interact to disrupt more than one of the phases of the healing process. Aside from the local and systemic functional predisposing factors [10], the evolution to chronicity appears to be influenced by protracted inflammation [11]. Besides, toxic effects induced by the dermal accumulation of Advanced Glycation End Products (AGEs), Reactive Oxygen Species (ROS) overproduction, and an actively recurrent biofilm are also responsible for this hard-to-heal phenotype [11,12].
Here we provide a brief overview and the authors' considerations about particular experimental observations which can only be explained by virtue of the cellular ability to retain past metabolic experiences. These views and thoughts render further substantiation to this controversial concept.
Wound Healing Phases and Epigenetics
Most chronic wounds show similar behavior and evolution despite etiological differences, which indicates that their development is a heterogeneous and multifactorial process [13]. Stagnancy in granulation, failure of contraction, and delayed re-epithelialization are clinical hallmarks of diabetic lower extremity wounds. Although it is known that multiple driving forces disrupt the cutaneous repair machinery in diabetes-affected individuals, it is intriguing to notice that granulation tissue exhibits malformed vessels that recreate the typical long-term diabetic microvascular damage [14]. The question is: what are the mechanisms operating for these vascular morphologic abnormalities, occurring even in a young granulation tissue from a diabetic patient with controlled glucose levels?
Growing data fuel the concept that epigenetic changes are instrumental players in the diabetics' wound healing failure. The inflammatory process in diabetic ulcers is more a condition than a physiological reaction. Pro-inflammatory cytokines reduce fibroblast and vascular progenitor cell migration, anchorage, and activation, and most importantly "entice" these cells to commit suicide [15]. Via nuclear factor-kappaB (NFκB p65) and c-Jun N-terminal kinase signaling pathways [16], inflammation also disrupts extracellular matrix synthesis, not only by up-regulating matrix proteases [17] but also by dismantling anabolic pathways usually activated by the agonistic occupation of tyrosine kinase receptors (including the insulin receptor), which in the end would activate the PIK3CA/AKT1-MTOR axis [18]. Under conditions of poor growth factor availability and consequently reduced Akt activity, FOXO is activated and retained in the nuclear compartment, hence shutting down MTOR anabolic activities [19]. The contributions by El-Osta's laboratory are pivotal to understanding the perpetuation of inflammation in diabetes. They demonstrated that NFκB-p65 transcriptional up-regulation resulted from epigenetic marks which modified the nature of the histone methylation on the gene promoter region [20].
Furthermore, particular epigenetic changes have been shown to impact the angiogenesis and re-epithelialization processes of diabetic models, as excellently reviewed by Rafehi and co-workers [21]. The reviewed studies converge to implicate an epigenetic-based metabolic memory in the onset of the phenotype that characterizes diabetic wounds. Thus, it would be attractive to dissect out the possible contributions of ischemia and neuropathy to the shaping of a specific chromatin remodeling and its input into the ulcer phenotype. Cumulative epigenetic modifications appear to be the driving force en route to the point-of-no-return in end-organ complications, or simply to irreversible insulin resistance [22].
Cells Involved in Cutaneous Healing Recall their Metabolic Experiences
Acute exposure to high glucose concentrations exerts a detrimental metabolic and bioenergetic effect on cutaneous fibroblasts [23]. Early observations suggested the existence of an intrinsic or imprinted behavioral pattern in cutaneous fibroblasts, since replication did not appear to be impaired solely in cells harvested from diabetics, but also in cells from diabetes-genetically predisposed subjects [24]. Although the study by Engerman and Kern in 1987 [25] was seminal in shaping the future concept of metabolic memory relevant to cutaneous wound healing, the first evidence supporting the metabolic memory concept emerged from the classic experiment developed by Vracko and Benditt. In 1975, they showed that diabetic patient-derived cells exhibited about half the number of population doublings compared to cells from non-diabetic donors, indicating a reduced replicative lifespan for diabetic fibroblasts, even under normoglycemic culture conditions [26]. This and other subsequent findings based on cultured fibroblasts suggested that explanted cells from diabetic patients conserve a memory, since their in vitro traits recapitulate their in vivo behavior. It was perhaps the primary intuition of the existence of a sort of genetic or epigenetic predisposition for a certain trait, even when the cells no longer remained in the diseased organism.
As mentioned before, only by virtue of the existence of an epigenetic-mediated mechanism is it possible to explain the proliferative refractoriness and the propensity to culture senescence of diabetic ulcer-derived fibroblasts. Analogous to fibroblasts [26], endothelial cells also preserve the imprinting imposed by a high glucose burden, as they exhibit elevated transcriptional expression rates for both fibronectin and collagen IV even after being switched to a normal glucose environment [27]. As a matter of fact, the authors of this study were those who coined the term metabolic memory in 1990.
Molecules and Epigenetics
Animal studies have illustrated the responsibility of oxidative and nitrosative stress as instrumental for the onset of the point-of-no-return [28]. In line with these observations is the fact that the diabetic wound is a rich source of reactive oxygen species, which generates an intensely toxic microenvironment for fibroblasts and endothelial cells [29]. However, it remains to be answered what the echoes of the excess of free radicals could be for local and peripheral cutaneous cells once they were exposed. In situ ulcer recurrence documented upon short follow-up periods [30] represents a serious problem that entails frustration for the patient and the clinician. Considering the above-described evidence, it is plausible to hypothesize that ulcer recurrence, irrespective of other predisposing factors, could be a clinical expression of the point-of-no-return. In other words, the early metabolic stress is "remembered" by the cutaneous cells and translated into a permanent, progressive, and perpetuated harmful imprinting [31].
The fact that mitochondrial ROS overproduction was identified as proximal in the pathogenic cascade of diabetes complications [32] paved the way for the concept that ROS operates as the putative link between glycemia and the cells' chromatin structure. Mitochondrial excessive superoxide, with the ensuing ROS spillover, stood as the pathogenic core of the "hyperglycemic memory," in which chromatin remodeling and long-lasting epigenetic changes ensure the persistence of inflammation and other molecular disorders involved in diabetes complications. Figure 1 is a diagrammatic representation of the major actors performing in the cascade.
Our Experience in Metabolic Memory
In an attempt to understand the molecular basis of diabetic wound healing refractoriness, our group has systematically cultured primary granulation tissue fibroblasts from ischemic and neuropathic ulcers. We have confirmed the reduced replicative potential compared to age-matched cells from non-diabetic, burn-injured donors. Again, the improvement of the culture microenvironment, including adequate oxygen availability and physiological glucose levels, does not ameliorate the proliferative arrest. We observed that this phenotype of arrest appears to be associated with the overexpression of activated forms of TP53 and CDKN1A (p21), along with a down-regulation of the AKT1/MTOR/CCND1 axis [33]. We have yet to learn how long these post-translational modifications last in those biopsy-derived cells following culture passages.
Recent observations from our group deserve special comment, not only because they are unprecedented, but particularly because they can only be explained when viewed through the prism of transmissible epigenetic events [14]. (I) Through the systematic histological analysis of granulation tissue biopsies from diabetic ischemic and neuropathic ulcers, we observed that the ulcer's major etiopathogenic component imposes a particular histological pattern on the granulation tissue, which is largely similar within, and distinctive for, the ischemic and neuropathic classes. (II) Microvascular damage such as fibro-hyaline and proliferative arteriolar sclerosis, ordinarily of long-term evolution, is found and completely recreated in neo-formed vessels within granulation tissues no older than two weeks. These observations invite speculation that an aberrant driving force imposes on and impinges upon the organizational process during fibroangiogenesis. (III) Following comparative RT-PCR studies using clinical biopsies of granulation tissue from pressure ulcers and from diabetics' ischemic and neuropathic ulcers, we detected a significant derangement in a group of well-characterized glucose-metabolism-related genes in the diabetic ulcers [14]. Diabetic ulcer cells express far less insulin receptor, hexokinase (isoforms 1 and 2), phosphofructokinase, pyruvate kinase (isoforms 1 and 2), and pyruvate dehydrogenase, and significantly more of its inhibitor enzyme, pyruvate dehydrogenase kinase (isoform 4). We note with interest that granulation tissue, which can be considered a transient organ made up of "de novo" cells, reproduces the same transcriptional profile of those genes considered insulin resistance/glucose intolerance markers, and predictors of type-2 diabetes onset, in the liver, skeletal muscle, and adipose tissue, as has been broadly described [34][35][36]. This raises the question of whether the granulation tissue is an additional insulin-resistant organ.
Experimental evidence supporting the relevance of metabolic memory in tissue repair has been extended to the adult zebrafish. This fish has been reliably used as a wound healing model through caudal fin regeneration, under normal or diabetic circumstances, based on fin amputation [37]. Since the zebrafish is capable of regenerating its damaged pancreas and restoring a euglycemic state similar to what would be expected in post-transplant human patients, multiple rounds of caudal fin amputation allow for the separation and study of pure epigenetic effects. Although euglycemia is achieved following pancreatic regeneration, the impaired fin regeneration is retained, even after multiple rounds of regeneration, in the daughter fin tissues. These elegant experiments conducted in a small aquatic organism converge with in vitro, rodent, and clinical evidence to support an underlying epigenetic process based on a wrong metabolic experience [38]. The latter was subsequently confirmed through microarray and bioinformatic studies demonstrating the aberrant expression of 71 key regulatory genes involved in tissue repair in the diabetic state [39]. Above all, the growing zebrafish experience has left no room to doubt that metabolic memory is a phenomenon broadly represented across animal species.
Concluding Remarks
Diabetes is a singular disease that imposes a variety of distal organ complications, as well as some unmatched traits in some cell types of the affected patients. Epigenetics, as the resultant governor of the cell's phenotype arising from the interaction of its genome with the environment, has provided an innovative research field and set out an era of hope for the control of diabetic complications, including an effective and lasting wound healing. Now we stand before the challenge to fully understand the phenomenon, since a myriad of questions still remain to be answered and controversies continue to exist. As we have learned during the last 20 years that benefits are not the best if glucose control is delayed, it is likely that over the next 20 years we could teach the cells to keep a healthier metabolic memory so as to preserve the native chromatin structure.
Figure 1: Hyperglycemia strikes mitochondrial functions, leading to an auto-perpetuating, everlasting metabolic disorder known as "hyperglycemic metabolic memory". Hyperglycemia increases AGE production, thus leading to direct ROS generation via the AGE/RAGE interaction, but also via AGE generation from mitochondrial proteins involved in oxidative phosphorylation. Increased oxidative stress as a consequence of mitochondrial ROS generation diminishes glycolytic flux by inhibiting GAPDH, thus causing a stimulation of the polyol and hexosamine pathways, together with augmented PKC expression, which pivots on AGE and ROS generation in a continuous vicious cycle (represented with an elliptical arrow). This perpetual signaling network is supported by persistent epigenetic changes caused by ROS- and AGE-driven chromatin remodeling through histone modification, thus leading to altered gene expression.
Effect of point mutations on Herbaspirillum seropedicae NifA activity
NifA is the transcriptional activator of the nif genes in Proteobacteria. It is usually regulated by nitrogen and oxygen, allowing biological nitrogen fixation to occur under appropriate conditions. NifA proteins have a typical three-domain structure, including a regulatory N-terminal GAF domain, which is involved in control by fixed nitrogen and not strictly required for activity, a catalytic AAA+ central domain, which catalyzes open complex formation, and a C-terminal domain involved in DNA-binding. In Herbaspirillum seropedicae, a β-proteobacterium capable of colonizing Graminae of agricultural importance, NifA regulation by ammonium involves its N-terminal GAF domain and the signal transduction protein GlnK. When the GAF domain is removed, the protein can still activate nif genes transcription; however, ammonium regulation is lost. In this work, we generated eight constructs resulting in point mutations in H. seropedicae NifA and analyzed their effect on nifH transcription in Escherichia coli and H. seropedicae. Mutations K22V, T160E, M161V, L172R, and A215D resulted in inactive proteins. Mutations Q216I and S220I produced partially active proteins with activity control similar to wild-type NifA. However, mutation G25E, located in the GAF domain, resulted in an active protein that did not require GlnK for activity and was partially sensitive to ammonium. This suggested that G25E may affect the negative interaction between the N-terminal GAF domain and the catalytic central domain under high ammonium concentrations, thus rendering the protein constitutively active, or that G25E could lead to a conformational change comparable with that when GlnK interacts with the GAF domain.
Introduction
Biological nitrogen fixation is a process carried out by some prokaryotes that reduces dinitrogen (N2) to ammonia (NH3) in a reaction catalyzed by the nitrogenase complex. It is highly energy-demanding and is thus controlled at both transcriptional and translational levels (1). Transcription of the nif genes, which encode the nitrogenase complex and all gene products necessary to assemble an active enzyme, is controlled by NifA in response to ammonium and oxygen levels. NifA is a σ54-dependent transcriptional activator that shows a typical three-domain structure. The N-terminal GAF domain shows the lowest similarity among NifA homologs, and is involved in ammonium control. The central AAA+ domain interacts with the σ54-RNA polymerase and possesses ATPase activity, while the C-terminal domain shows a helix-turn-helix motif involved in DNA-binding. Two linkers connect these domains: the Q-linker connects the GAF and central domains, and the ID-linker connects the central and C-terminal domains.
NifA proteins are separated into two classes based on their regulation by ammonium and oxygen (1). One class occurs in γ-Proteobacteria and is regulated by the antiactivator NifL, while the second class is observed in α-Proteobacteria, where NifL is absent. Nitrogen regulation by both mechanisms involves a PII-like protein and interaction with either NifL or NifA (2). In contrast, oxygen control differs between these two mechanisms. In γ-Proteobacteria, NifL senses oxygen levels through a flavin moiety (3), whereas in NifL-independent regulation, oxygen control is hypothesized to involve a putative Fe-S cluster associated with a cysteine tetrad located at the end of the central domain and ID-linker (4).
Herbaspirillum seropedicae is a nitrogen-fixing β-proteobacterium associated with important agricultural Gramineae, such as rice, wheat, sorghum, and sugarcane (5). Transcriptional control of nitrogen fixation in H. seropedicae relies on a NifL-independent NifA system that is controlled by both nitrogen and oxygen levels (6).
Recently the regulation of nitrogen fixation in this organism has been reviewed (7).
The H. seropedicae NifA N-terminal GAF domain comprises the first 184 amino acids (Figure 1), and although it is involved in negative regulation by ammonium, it is not strictly required for activation of nif gene transcription (6,8). This N-terminal GAF domain interacts with GlnK in response to the fixed nitrogen concentration (9). The N-terminal GAF domain is linked to the central domain by the 18 amino acids of the Q-linker. The H. seropedicae NifA central domain comprises 236 amino acids and contains the catalytic site. It also interacts with the σ54 RNA polymerase holoenzyme (4). The central domain is linked to the C-terminal domain by a 58-amino acid region named the ID-linker. A conserved cysteine motif located at the end of the central domain and the ID-linker (positions 414, 426, 446, and 451) is suggested to be involved in the regulation of NifA by O2. Mutation of these cysteine residues produces inactive proteins (10). Finally, the last 43 amino acids of the NifA primary sequence form the C-terminal domain, which is responsible for DNA binding (11).
In this work, site-directed mutagenesis was used to determine amino acid residues in H. seropedicae NifA that are important for its control.
Reagents
All chemicals were analytical or molecular biology grade and were purchased from Merck (Germany), Sigma (USA), J.T. Baker (Netherlands), or Invitrogen (USA). Restriction enzymes were obtained from Fermentas (Lithuania) or Invitrogen. Oligonucleotides were synthesized by IDT (USA).
Site-directed mutagenesis
Point mutations were introduced into the H. seropedicae nifA gene using mutagenic primers (Supplementary Table S1) as described previously (15). The mutated genes were then cloned into pET-29a, using NdeI and BamHI restriction sites, or into pLAFR3.18 (XbaI/HindIII restriction sites) for activity analyses in E. coli or H. seropedicae, respectively. These plasmids are listed in Table 1.
Construction of H. seropedicae mutants
The nifA gene, cloned into pRAM1T7 as an NdeI/BamHI fragment, was digested with EcoRI to remove 750 bp from the central region of nifA. Following re-ligation, the resulting plasmid (pBA3) was digested with BamHI, and a sacB::Km cassette, obtained as a BamHI fragment from pMH1701, was introduced into pBA3, producing pBA4. This plasmid was then introduced into H. seropedicae SmR1 (wild type) and LNglnKdel, a glnK mutant (16), by electroporation (10 kV/cm, 4 kΩ, 330 μF, using a Gibco Cell-Porator, USA). Transformed cells were first selected by growth in NFbHP medium with 1 mg/mL kanamycin, and then by survival in NFbHP medium with 15% (w/v) sucrose to obtain mutants with a second recombination. The nifA mutation was confirmed by DNA amplification using 1 U Taq DNA Polymerase (Fermentas) in Taq buffer with (NH4)2SO4, 3 mM MgCl2, 0.8 mM dNTPs, and 0.4 μM of primers HSNifA1 and HSNifA2a (Supplementary Table S1), with the following parameters: one step of 5 min at 95°C and 30 cycles of 30 s at 95°C, 30 s at 45°C, and 2 min at 72°C. These strains were named NifAdel and NifAdel/GlnKdel, respectively.
H. seropedicae mutants carrying a chromosomal nifH::lacZ fusion were generated by introducing the pTZnifH::lacZ plasmid by electroporation, followed by selection for kanamycin resistance. The pTZnifH::lacZ plasmid was constructed by cloning a lacZ::Km cassette at the BamHI site located downstream from the 5.3-kb fragment containing part of H. seropedicae nifH and its promoter region in the pLAU1 plasmid. The lacZ::Km cassette was obtained as a BamHI fragment from the pKOK6.1 plasmid.
Results
In this work, we generated eight constructs that introduced NifA point mutations (K22V, G25E, T160E, M161V, L172R, A215D, Q216I, and S220I) and analyzed their effects on transcriptional activation activity. Four mutations were based on described NifA mutations from Rhodospirillum rubrum (19) and Sinorhizobium meliloti (20), while the other four amino acids were chosen from among conserved residues (Supplementary Table S2). Residues were selected for mutagenesis by aligning NifA proteins from K. pneumoniae, Azoarcus sp., and Azotobacter vinelandii, which are regulated by NifL, and from H. seropedicae, R. rubrum, S. meliloti, Rhodobacter capsulatus, Bradyrhizobium japonicum, and Azospirillum brasilense, which are regulated in a NifL-independent manner (Supplementary Table S2). The mutations were located in the N-terminal GAF domain (K22V, G25E, T160E, M161V, and L172R) and the central domain (A215D, Q216I, and S220I) of H. seropedicae NifA (Figure 1). The secondary structure of each mutant protein was predicted using the Psipred tool (21), which indicated no major differences in secondary structure between the mutants and wild-type NifA (data not shown).
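Since candidate residues were chosen by comparing aligned NifA sequences, the short Python sketch below illustrates how fully conserved columns can be flagged in a pre-computed alignment. The sequence fragments are invented placeholders rather than real NifA sequences, and an actual analysis would start from a full multiple sequence alignment of the proteins listed above.

# Flag fully conserved columns in a toy, pre-aligned set of protein fragments.
# The sequences below are invented placeholders, not actual NifA sequences.
aligned = {
    "organism_A": "MKGLTQA-LRS",
    "organism_B": "MKGLSQA-LRS",
    "organism_C": "MKGLTQAGLRS",
}

length = len(next(iter(aligned.values())))
conserved_positions = []
for i in range(length):
    column = {seq[i] for seq in aligned.values()}
    if "-" not in column and len(column) == 1:   # same residue in every sequence, no gaps
        conserved_positions.append(i + 1)        # report 1-based alignment positions

print("Fully conserved alignment columns:", conserved_positions)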
The ability of the H. seropedicae NifA mutants to activate nif promoters was determined in E. coli JM109 (DE3) carrying plasmid pRT22 (K. pneumoniae nifH::lacZ fusion) (Figure 2). Full-length NifA, expressed from plasmid pRAM1, showed no β-galactosidase activity, consistent with previous descriptions, mainly because of lower expression of endogenous E. coli PII, which is necessary to relieve the negative control of the N-terminal GAF domain on the catalytic domain of NifA (7). In contrast, the N-terminal truncated NifA protein (ΔN-NifA) expressed from pRAM2 was fully functional in E. coli regardless of the ammonium concentration (22). These results confirmed the regulatory role of the N-terminal GAF domain on H. seropedicae NifA that has been described previously: in the presence of ammonium or the absence of PII, the N-terminal GAF inhibits NifA transcriptional activity (6,23). The constructed NifA point mutants were analyzed under the same conditions and showed no activity, except for NifA G25E, which partially activated nifH transcription in E. coli. G25E also appeared to retain some nitrogen control, as activation of transcription was higher under low ammonium concentrations. This result indicated that the G25E substitution affected the need for PII for NifA activity in H. seropedicae.
Considering that A215D, Q216I, and S220I are located in the central domain of NifA, and that the full-length protein is inactive in E. coli (6) (Figure 2), these three mutations were also tested using an N-truncated form (GAF-truncated protein) (Figure 3). The removal of the first 203 amino acids of NifA yields an active protein in E. coli (8), as shown using protein expressed from plasmid pRAM2. The N-truncated protein (ΔN-NifA) was active regardless of the nitrogen level, but only under low O2, reinforcing its sensitivity toward O2. The N-truncated ΔN-Q216I and ΔN-S220I mutants showed lower β-galactosidase activity than that expressed by pRAM2, indicating that these mutations negatively affect transcriptional activity, while retaining O2 responsiveness. In contrast, ΔN-A215D was inactive under all tested conditions, suggesting that a negatively-charged amino acid at position 215 affects the catalytic activity of the protein. These proteins were expressed under all conditions tested, as determined by gel electrophoresis (data not shown). The three disrupted amino acids are close to the ATP-binding site, which is located at positions 231-238.
In contrast to H. seropedicae, a NifA strain with a mutation in this region (M217I) in S. meliloti was oxygen tolerant (20).
The point mutations were also analyzed in an H. seropedicae background. Assuming that a functional NifA variant leads to nif gene transcription, nitrogenase activity can be determined. However, for these assays it was necessary to construct two H. seropedicae mutant strains: a nifA mutant strain and a double mutant nifA/glnK, both of which were obtained by partial gene deletion. These strains were named NifAdel and NifAdel/GlnKdel, respectively. The nifA/glnK double mutant allowed detection of a NifA mutant that does not require GlnK for activity, as this PII protein is responsible for relieving the nitrogen-regulated negative control of NifA (16). These H. seropedicae mutant strains showed no nitrogenase activity (acetylene reduction method; Table 2) (24,25). However, nitrogenase activity was restored in the pRAMM1-carrying NifAdel strain, which expresses the full-length NifA, and in the NifAdel/GlnKdel strain carrying ppnifAN185, which expressed an N-truncated form of NifA (Table 2).
To analyze the effect of NifA mutations on nitrogenase activity, each construct was cloned into the pLAFR3.18 vector, which is stable in H. seropedicae, and transformed into both the NifAdel and NifAdel/GlnKdel strains. Assays performed with NifAdel showed that the G25E mutant was fully active, Q216I was partially active, and the other mutants showed no significant nitrogenase activity (Table 2). However, NifA levels similar to those of the wild type were expressed from pRAMM1, while the Q216I mutant did not show any nitrogenase activity in the absence of GlnK (assay in NifAdel/GlnKdel strain). In contrast, the G25E mutant demonstrated nitrogenase activity, which implied that G25E was active and does not require GlnK for activity.
The G25E mutation was also tested in the NifAdel and NifAdel/GlnKdel strains carrying a nifH::lacZ chromosomal fusion, which allowed assessment of transcriptional NifA activity in the presence of high ammonium concentrations (Figure 4). The wild-type H. seropedicae strain (SmR1) carrying the nifH::lacZ fusion only showed β-galactosidase activity at low ammonium concentrations. Conversely, the G25E mutant showed nifH::lacZ transcription in both the NifAdel and NifAdel/GlnKdel strains, regardless of ammonium concentration. However, comparison of β-galactosidase activity at both low and high ammonium concentrations indicated that the G25E mutant protein was partially regulated by ammonium, as transcriptional activity was higher at low ammonium concentrations. This result suggested that the G25E mutant did not depend on GlnK for activity, but could still detect ammonium concentration.
Discussion
H. seropedicae NifA is regulated by both ammonium and oxygen (6). The effect of O2 on the NifA protein is related to a putative Fe-S cluster involving four cysteine residues located at the end of the central domain and the ID-linker. These conserved cysteine residues are found in all NifA proteins that are directly sensitive to oxygen, but absent in NifA proteins that depend on NifL for oxygen control (4). In H. seropedicae, mutation of the conserved cysteine residues rendered inactive proteins (10). Conversely, Krey et al. (20) obtained a S. meliloti NifA mutant (M217I) that was active even under high O2 levels. Using sequence alignment, we determined the corresponding amino acid residue in H. seropedicae to be serine 220. The NifA S220I mutation resulted in lower activity in H. seropedicae (Table 2) and partial activity in E. coli (Figure 3), but retained sensitivity toward O2, indicating a difference in behavior compared with S. meliloti NifA M217I. Two further amino acid residues close to S220 in H. seropedicae NifA were also mutated and analyzed. The A215D mutation was deleterious in both H. seropedicae and E. coli, even when an N-truncated form was used. However, the N-truncated protein form carrying the Q216I mutation showed transcriptional activity dependent on O2. This mutant Q216I H. seropedicae strain also produced an active nitrogenase complex dependent on the GlnK protein, similar to the wild type. These results indicate that Q216I and S220I retained regulatory activities similar to the wild type, although with lower activity. Conversely, because no transcription from strains carrying the A215D mutation was observed under any of the conditions tested, the alanine residue at position 215 is likely to be essential for activity. Alternatively, a negative charge at position 215 may be more deleterious for H. seropedicae NifA compared with the previous substitutions.
Mutations M161V and L172R in H. seropedicae NifA correspond to mutations M173V and L184R described previously in R. rubrum (19). Analysis carried out using a yeast two-hybrid system showed that RrM173V produced a protein with stronger GlnB interaction, whereas the RrL184R mutant did not require GlnB for activity. In R. rubrum, GlnB is the PII protein responsible for controlling NifA activity (26). The H. seropedicae NifA M161V mutant was inactive in all conditions tested, while the L172R mutant showed very low nitrogenase activity, indicating that these amino acids are important for the overall NifA activity.
H. seropedicae differs from R. rubrum in that nitrogen regulation depends on GlnK (16). Among the eight H. seropedicae NifA mutations investigated in this work, the G25E mutation rendered an active NifA protein that did not require GlnK. Mutation G25E in H. seropedicae corresponds to G36E in R. rubrum, which also produced a partial active NifA independent of GlnB (19). The G25E mutation may affect the negative regulatory interaction between the N-terminal GAF domain and the catalytic central domain under high ammonium concentrations, resulting in a constitutively active protein. Furthermore, the glutamate residue at position 25 could lead to a conformational change comparable with that produced when GlnK interacts with the N-terminal GAF domain. The G25E mutant was also analyzed using a nifH:lacZ chromosomal fusion in H. seropedicae (Figure 4). This allowed us to determine the NifA transcriptional activity even in the presence of high ammonium concentrations, a condition where nitrogenase activity is not observed (16). The assay showed that the G25E mutant is active in the absence of GlnK, as observed by nitrogenase activity, but also showed partial regulation by fixed nitrogen, with higher transcriptional activity under low ammonium concentrations than in the presence of high ammonium concentrations (Figure 4). The partial regulation by fixed nitrogen was also observed in assays performed in E. coli (Figure 2).
The observed ammonium regulation could be related to the GAF domain, which has been shown to bind small molecules such as cyclic nucleotides and 2-oxoglutarate (27). In A. vinelandii, formation of the NifL-NifA complex is prevented by the binding of 2-oxoglutarate to the NifA GAF domain (28). Although it has not been confirmed that the N-terminal GAF domain of H. seropedicae NifA binds small molecules, the possibility that a small molecule such as 2-oxoglutarate may interact with the protein, signaling a cellular deficit of fixed nitrogen, cannot be ruled out.
Supplementary material
|
2016-05-10T15:56:32.711Z
|
2015-07-10T00:00:00.000
|
{
"year": 2015,
"sha1": "0d2357f5869197ab59b1ea670231ff285c7bc078",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/bjmbr/v48n8/1414-431X-bjmbr-1414-431X20154522.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6db748e681edb81f2d1a852085eb2fd53eb1bf9f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
235270901
|
pes2o/s2orc
|
v3-fos-license
|
Bodily Information and Top-Down Affective Priming Jointly Affect the Processing of Fearful Faces
According to embodied theories, the processing of emotions such as happiness or fear is grounded in emotion-specific perceptual, bodily, and physiological processes. Under these views, perceiving an emotional stimulus (e.g., a fearful face) re-enacts interoceptive and bodily states congruent with that emotion (e.g., increases heart rate); and in turn, interoceptive and bodily changes (e.g., increases of heart rate) influence the processing of congruent emotional content. A previous study by Pezzulo et al. (2018) provided evidence for this embodied congruence, reporting that experimentally increasing heart rate with physical exercise facilitated the processing of facial expressions congruent with that interoception (fear), but not those conveying incongruent states (disgust or neutrality). Here, we investigated whether the above (bottom-up) interoceptive manipulation and the (top-down) priming of affective content may jointly influence the processing of happy and fearful faces. The fact that happiness and fear are both associated with high heart rate but have different (positive and negative) valence permits testing the hypothesis that their processing might be facilitated by the same interoceptive manipulation (the increase of heart rate) but two opposite (positive and negative) affective primes. To test this hypothesis, we asked participants to perform a gender-categorization task of happy, fearful, and neutral faces, which were preceded by positive, negative, and neutral primes. Participants performed the same task in two sessions (after rest, with normal heart rate, or exercise, with faster heart rate) and we recorded their response times and mouse movements during the choices. We replicated the finding that when participants were in the exercise condition, they processed fearful faces faster than when they were in the rest condition. However, we did not find the same reduction in response time for happy (or neutral) faces. Furthermore, we found that when participants were in the exercise condition, they processed fearful faces faster in the presence of negative compared to positive or neutral primes; but we found no equivalent facilitation of positive (or neutral) primes during the processing of happy (or neutral) faces. While the asymmetries between the processing of fearful and happy faces require further investigation, our findings promisingly indicate that the processing of fearful faces is jointly influenced by both bottom-up interoceptive states and top-down affective primes that are congruent with the emotion.
INTRODUCTION
Embodied theories of emotion suggest that emotional processing is at least in part grounded in bodily, interoceptive, and motor processes (Barsalou, 2008; Wilson-Mendenhall et al., 2013). Accordingly, every time an emotion is felt, thought about, or even recognized on someone else's face, an emotion simulation would occur, which re-enacts perceptual, interoceptive, and motor streams congruent with (e.g., corresponding to past experiences of) the emotion in both the brain and the body (Oosterwijk et al., 2012). For example, the sight of fearful stimuli activates cortical areas involved in pain experience, including anterior cingulate cortex and insula (Botvinick et al., 2005) and amygdala (LeDoux, 2003; Garfinkel et al., 2014). In turn, the amygdala (via hypothalamus and brainstem circuits) engages autonomic processes and produces changes in heart rate (LeDoux, 2000).
Importantly, the re-enactment of interoceptive and bodily processes is not just a side-effect of emotional processing, but part and parcel of the emotional experience. Indeed, according to embodied theories, feelings and emotions may derive from a central representation of the physiological state of the body and its changes (James, 1884;Damasio and Carvalho, 2013). Embodied theories, therefore, predict that the momentary physiological condition of the body should influence the processing of emotionally charged stimuli. Early studies that experimentally induced states of high arousal and anxiety (e.g., by conducting the experiment on a fear-arousing bridge) reported that these physiological changes influenced subsequent ratings of attractiveness (Schachter and Singer, 1962;White et al., 1981). Additionally, the ingestion of different types of food has been found to influence the recognition of faces with different emotional expressions (Pandolfi et al., 2016). Other studies showed that manipulating facial expressions to render them congruent with specific emotions influenced the subsequent recognition of emotionally charged stimuli (McCanne and Anderson, 1987;Havas et al., 2010;Kever et al., 2015;Marmolejo-Ramos et al., 2020).
More recently, a study by Pezzulo and his colleagues manipulated (increased) heart rate before participants performed a gender-categorization task with (male and female) faces expressing an emotion congruent with the interoception of an increased heart rate (fear), an incongruent emotion (disgust), or neutrality (Pezzulo et al., 2018). Participants performed the task in two conditions: in a rest condition, wherein they remained at rest to maintain their baseline heart rate; and in an exercise condition, wherein they performed physical exercise to build and sustain an increased heart rate. Response time and kinematic variables were recorded using mouse-tracking software (Freeman et al., 2011). Pezzulo and his colleagues found that, despite the implicit nature of the gender-categorization task, an increased heart rate facilitated the participants' processing of fearful faces, and not disgusted or neutral ones. These findings indicated that, when a certain kind of physiological condition (e.g., increased heart rate) is an interoceptive signature of a particular emotion simulation (e.g., fear), inducing the former facilitates the exteroceptive processing of stimuli displaying that emotion. This can be described as a form of embodied congruence between interoceptive state and emotional processing.
Such influence of the body's physiological condition on emotional processing could be explained from the perspective of interoceptive inference, which posits that the brain uses interoceptive streams [e.g., signals from internal organs, such as heart rate, respiration, and gut signals (Craig, 2003)] to estimate the physiological condition of the body and to control corrective autonomic responses (Seth et al., 2012; Pezzulo, 2013; Seth, 2013; Barrett and Simmons, 2015; Pezzulo et al., 2015; Barrett et al., 2016; Seth and Friston, 2016; Gu et al., 2019). According to this framework, the resulting estimate of bodily and physiological parameters that culminates in the insula constitutes the central representation of the body's state and its changes which, from an embodied perspective, is responsible for experiences of feeling and emotion (James, 1884; Damasio and Carvalho, 2013). Hence, an experimental manipulation such as physical exercise that increases heart rate (along with other interoceptive variables) would induce feelings and bodily responses that are congruent with the processing of fearful stimuli-therefore potentially facilitating the processing of these stimuli.
Pezzulo and his colleagues' study, however, only addresses one aspect of interoceptive inference: the effect of (bottom-up) interoceptive signals such as heart rate in the processing of emotional states. Inferential theories of perception (Helmholtz, 1866;Gregory, 1980;Friston, 2005), including the theory of interoceptive inference, would predict bottom-up information (which, here, includes interoceptive streams such as heart rate) to be integrated with other top-down sources of information, such as the prior affective context (i.e., positive or negative), during the perception of emotionally charged stimuli. In other words, if emotional processing is the result of an inference, it should integrate multiple sources of bottom-up and top-down information and, thus, be influenced by both the physiological condition of the body-accessible via interoceptive streams-as well as the affective context in which the processing takes place. Whether and in which ways such integration of bottom-up and top-down information occurs during the processing of emotional stimuli has not been addressed thus far.
To address these questions, we designed the present study as an extension of Pezzulo and his colleagues' experiment (Pezzulo et al., 2018), wherein we manipulated not only, as in the original study, participants' (bottom-up) physiological condition during a gender-categorization task, but also the (top-down) affective context, by using affective priming.
Participants completed a gender-categorization task similar to Pezzulo and colleagues' experiment (Pezzulo et al., 2018). Participants were presented with static pictures of faces showing positive (happy), neutral, and negative (fearful) facial expressions that they were asked to categorize as male or female. They performed the experiment in two conditions: in a rest condition, when they had remained at rest to maintain their baseline heart rate; and in an exercise condition, when they had performed physical exercise to achieve and sustain an increased heart rate. Similar to Pezzulo et al. (2018), mouse-tracking software was used to record response time and kinematic measures of the participants' mouse trajectories while they completed the gender-categorization task. These recordings of the mouse movements provide more fine-grained information on the decision-making process than the response time of button-pressing alone. In particular, kinematic measures such as the trajectory of the movement provide information on the process of choice-revision, while others, such as the x-flips or the smoothness of the trajectories across the horizontal axis, provide information on the participants' uncertainty in making the choice and any "changes of mind" they might have had (Pezzulo, 2012, 2015; Flumini et al., 2015; Quétard et al., 2015, 2016; Barca et al., 2017; Iodice et al., 2017).
The main difference from the original study is that participants performed the task in three blocks (and twice, because they were tested both after rest and after exercise). For each trial within the task, before seeing an image of a face with a fearful, neutral, or happy expression, they were shown an affective prime image, which was positive, neutral, or negative in the three different blocks. We chose affective priming as our prior context manipulation because previous studies have shown that manipulated emotional situations can be used to influence participants' bottom-up processing of ambiguous stimuli (Dror et al., 2005). Moreover, affective stimuli, such as images or words, have been found to prime positivity and negativity in a subsequent task. An example of this phenomenon is that when the valence of an affective prime matches the target of a task, bottom-up processing of that target is quicker, even when the valence is irrelevant to the goal of the task, such as evaluating whether a string of letters is a real word or made-up (Fazio et al., 1986; Ferré and Sánchez-Casas, 2014).
Another difference from the original study is that we selected fearful, happy, and neutral faces, as opposed to fearful, disgusted, and neutral ones. We did this to maintain symmetry with the three levels of our affective prime variable (i.e., fearful face and negative prime, happy face and positive prime, neutral face and neutral prime). Importantly, we selected positive (happiness) and negative (fear) emotions which, despite their differing valences, are both associated with high heart rate, the physiological domain we are interested in. Indeed, despite inconsistent research on the interoceptive signatures of emotions, fear is widely associated with a number of physiological changes, including increased heart rate (Kreibig, 2010). Similarly, while the findings are not quite as unanimous as those on fear, there is a consensus that happiness is also associated with increased heart rate, especially when the stimuli are images (Dimberg and Thunberg, 2007).
This configuration of stimuli-i.e., emotional facial expressions with opposite affective valences (positive and negative), but a shared physiological signature (increased heart rate)-allows us to test whether and how bottom-up and top-down processes are integrated. In line with previous findings (Pezzulo et al., 2018), we predicted the bottom-up manipulation (i.e., the increase of heart rate and other physiological signals via the exercise condition) to facilitate the processing of emotional (fearful and happy) faces compared to neutral faces -because fear and happiness are associated with similar bodily states (e.g., high heart rate) as the exercise condition. Furthermore, we predicted the top-down modulation (i.e., positive and negative primes) to further facilitate the processing of happy and fearful faces, respectively, when participants were in the exercise condition (and hence had high heart rate). This latter finding would imply that emotional processing could integrate bottom-up and top-down streams (here, changes of physiological state and emotional priming, respectively).
Ethics Statement
The study protocol conformed to the ethical guidelines of the Declaration of Helsinki (BMJ 1991;302: 1194) as reflected in prior approval by the Institution of Cognitive Sciences and Technology's human research committee (ISTC-CNR, Rome -N.0003971/04/12/2015).
Participants
Thirty-six participants were recruited (M age = 20 years old, SD = 0.90; 50% females, 50% males) from University of Rouen Normandie, France. Participants were members of the same cultural group (i.e., Caucasian) and all were of French nationality. All participants were right-handed with normal or corrected-tonormal vision. All were volunteers and provided their informed consent before the experiment.
Stimuli of Affective Primes. Participants were presented with positive, neutral, and negative images taken from the International Affective Picture System (IAPS) (Lang and Bradley, 2007). The images have normative ratings for valence, arousal, and dominance. Positive images have a valence range of 6.00-7.99, neutral images a range of 4.00-5.99, and negative images a range of 2.00-3.99. Five hundred and forty images were used in total (i.e., 180 images for each valence level). Images from the IAPS were chosen because the picture system is standardized and its validity has been tested (Verschuere et al., 2001).
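To make the stimulus selection concrete, the sketch below (Python, not taken from the study's materials) shows how an image's normative valence rating could be mapped onto the three prime categories using the ranges given above; the function name and the example rating are illustrative only.

```python
# Illustrative sketch (not the authors' code): binning IAPS images into
# prime categories by the normative valence ranges stated above.

def prime_category(valence: float) -> str:
    """Assign an image to a prime category from its mean valence rating."""
    if 6.00 <= valence <= 7.99:
        return "positive"
    if 4.00 <= valence <= 5.99:
        return "neutral"
    if 2.00 <= valence <= 3.99:
        return "negative"
    return "excluded"  # ratings outside the ranges used in the study

# Example: a hypothetical normative rating of 6.8 would be used as a positive prime.
print(prime_category(6.8))  # -> "positive"
```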
Stimuli of Faces with Emotional Expressions. Experimental stimuli comprised 45 male faces (15 happy, 15 neutral, 15 fearful) and 45 female faces (15 happy, 15 neutral, 15 fearful) (Lundqvist et al., 1998; Goeleven et al., 2008). The luminance of the images was normalized, and a Hann window was applied to remove hair and the peripheral information of the faces (Buck et al., 1972; Schwartz et al., 1980; Greenwald et al., 1989; Mermillod et al., 2009) and to avoid gender-categorization based on peripheral facial hair (Kaminski et al., 2011).
Experimental Procedure
At the beginning of each trial, participants were instructed to click on the /START/ button located at the bottom-center of the screen. Then, a sequence of two stimuli was presented centrally: an affective prime (positive, neutral, or negative) for 200 ms, followed by a target face (a male or female face, with either a happy, neutral, or fearful expression) for 500 ms. Participants were instructed that their task was to perform a gender-categorization task: they had to categorize the target face as "Male" or "Female" by clicking with the mouse on one of two response buttons, which appeared in the top-left and top-right corners of the screen. The left/right positions of the "Male" and "Female" responses were counterbalanced. In each trial, the mouse could not be moved before the presentation of the target face was completed.
While the participants were responding by moving their mouse, the x- and y-coordinates of the mouse trajectories were recorded (sampling rate of approximately 70 Hz) using the MouseTracker software (Freeman and Ambady, 2010). Given that each trial consists of a sequence of multiple stimuli (i.e., the prime for 200 ms and the face stimulus for 500 ms), we used a software command that prevents mouse movements while the sequence of stimuli unfolds. Thus, the mouse "freezes" when the participant clicks on the /START/ button, and "unfreezes" after the last stimulus is presented.
Before the experimental block, participants received a block of 5 practice trials. Then, for the experimental task, they received 270 experimental trials in total, with 3 blocks of 90 trials each: one block with positive primes, one with neutral primes, and one with negative primes. Within each block, they received 90 primes and 90 face stimuli (30 happy faces, 30 neutral faces, and 30 negative faces, with 15 male and 15 female faces per emotion). They received the same set of face and prime stimuli for each block, in accordance with the within-subjects design. The blocks were counterbalanced among participants, and interleaved by a pause of one minute. Within each block, the trials were randomized.
All participants performed the gender-categorization task in two conditions on two different days at least one week apart from each other: an exercise condition session and a rest condition session. In the exercise condition, they were asked to bike on a cycle ergometer at their own rhythm for 3 min, both before the experimental task and during the pauses between blocks to ensure that the participants' heart rates remained accelerated during the duration of the experimental task. In the rest condition, participants completed the experimental task without any prior physical activity. Instead of performing physical activity, they were asked to rest on their chair for 3 min before the task and for one minute during the pauses to maintain the timing across conditions. The order of rest and exercise conditions was counterbalanced across subjects.
A Polar RS800CX was used to check the participants' heart rate after the rest or exercise period and before the task in each condition. When in the exercise condition, participants had an average of 122 ± 6 bpm, while when in the rest condition, they had an average of 69 ± 8 bpm.
Mouse Tracker Data
Response time (RT) and accuracy were collected, together with parameters measuring mouse movement trajectories (i.e., Maximum Deviation and Area Under the Curve, described below). Participants' response time (RT) in milliseconds was recorded from the end of the presentation of the faces (i.e., when the mouse "unfroze") to the time they clicked on the selected gender option. The software was set to block mouse movement during the unfolding of the stimulus sequence; therefore, the mouse was frozen from the moment the participant pressed the /START/ button and only unfroze once the face stimulus presentation was completed. This procedure prevented the recording of accidental micro-movements of the mouse that may occur before the face stimulus appears or can be processed.
The measure of Maximum Deviation (MD), calculated from the mouse coordinates, is the most substantial perpendicular deviation between the actual mouse trajectory and the ideal mouse trajectory (the straightest line from the start point to the endpoint of the trajectory), where positive scores indicate a trajectory in the direction of the unselected choice. Therefore, higher maximum deviation scores reflect participants' maximal attraction to the unselected choice, while negative maximum deviation scores reflect the participant's maximum attraction to the selected option.
The Area Under the Curve (AUC) is the geometric area between the actual trajectory and the ideal trajectory, where positive scores indicate a bend in the direction of the unselected choice. Therefore, higher scores reflect participants' overall attraction to the unselected choice across all time-steps, while negative scores reflect the participant's overall attraction to the selected option.
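As an illustration of the two trajectory measures defined above, the following minimal Python sketch computes a signed maximum deviation and a signed area relative to the straight line connecting the first and last samples of a trajectory. It is not the MouseTracker implementation: the function name is ours, and MouseTracker's remapping of all trajectories to one side (so that positive always means "toward the unselected response") is only noted in a comment rather than reproduced.

```python
import numpy as np

def md_auc(x: np.ndarray, y: np.ndarray):
    """Signed Maximum Deviation (MD) and Area Under the Curve (AUC).

    Deviations are measured perpendicular to the ideal (straight) line from the
    first to the last recorded point. Here positive values mean the trajectory
    bends to the left of that line; MouseTracker additionally remaps trajectories
    to one side so that positive always means "toward the unselected response"
    (not reproduced in this sketch).
    """
    start, end = np.array([x[0], y[0]]), np.array([x[-1], y[-1]])
    line = end - start
    length = np.linalg.norm(line)          # assumes start and end points differ
    pts = np.stack([x, y], axis=1) - start

    # Signed perpendicular distance of every sample from the ideal line
    # (2-D cross product), and its projection along the line (0..length).
    deviation = (line[0] * pts[:, 1] - line[1] * pts[:, 0]) / length
    along = pts @ line / length

    md = deviation[np.argmax(np.abs(deviation))]  # deviation of largest magnitude (signed)
    auc = np.trapz(deviation, along)              # signed area between curve and ideal line
    return md, auc
```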
Data Processing
Incorrect responses were removed from the data analysis (0.03% of total data points). Response time outliers were detected and removed with the R function "outlierKD", which uses Tukey's method to identify outliers lying more than 1.5 times the interquartile range above the third quartile or below the first quartile (Dhana, 2018).
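For readers unfamiliar with the outlierKD routine, the following Python sketch is a generic equivalent of the Tukey fence it applies (1.5 times the interquartile range beyond the first and third quartiles); it is not the R code used in the study, and the function name is ours.

```python
# Not the outlierKD code itself: a minimal Python equivalent of Tukey's
# 1.5 * IQR fence used to flag response-time outliers.
import numpy as np

def tukey_filter(rt, k: float = 1.5):
    """Return the response times inside the Tukey fences and a mask of removed outliers."""
    rt = np.asarray(rt, dtype=float)
    q1, q3 = np.percentile(rt, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    keep = (rt >= lower) & (rt <= upper)
    return rt[keep], ~keep
```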
Estimation statistics were used to test and plot the effect size of the significant interactions. We computed and plotted Cohen's d using the tools available at https://www.estimationstats.com/, determining the distribution with bootstrap sampling (5,000 samples) and using bias-corrected and accelerated confidence intervals (Ho et al., 2019). An effect size of d = 0.2 is considered a "small" effect, d = 0.5 a "medium" effect, and d = 0.8 a "large" effect. To ensure the robustness of our analyses, we (conservatively) only report large or medium effects.
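The sketch below illustrates the kind of computation behind these estimation statistics: an unpaired Cohen's d with a bootstrap distribution of 5,000 resamples. It is a simplified stand-in for the estimationstats.com tools; in particular, it uses plain percentile intervals rather than the bias-corrected and accelerated intervals reported in the study, and the function names are ours.

```python
import numpy as np

def cohens_d(a, b):
    """Unpaired Cohen's d (pooled standard deviation) for two samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (b.mean() - a.mean()) / np.sqrt(pooled_var)

def bootstrap_d(a, b, n_boot: int = 5000, seed: int = 0):
    """Cohen's d with a simple percentile bootstrap interval (the web tool
    instead uses bias-corrected and accelerated intervals)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    boots = np.array([
        cohens_d(rng.choice(a, size=len(a), replace=True),
                 rng.choice(b, size=len(b), replace=True))
        for _ in range(n_boot)
    ])
    low, high = np.percentile(boots, [2.5, 97.5])
    return cohens_d(a, b), (low, high)
```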
Response Time (RT)
The overall response times, with the data from the three blocks aggregated, are displayed in Figure 1. The left part of Figure 1 shows raincloud plots [generated with the cowplot R script (Allen et al., 2018)], which highlight a difference in the distributions of response time between the exercise and rest conditions for faces with a fearful expression. The right half of Figure 1, which depicts Cumming estimation plots and estimation statistics (i.e., Cohen's d) on the same data, allows one to appreciate the result. The analysis shows that the exercise condition significantly speeds up the processing time for faces with fearful expressions (Cohen's d = 3.78), but not happy (d = −0.3) or neutral ones (d = −0.2). Additionally, in the exercise condition, fearful faces were responded to significantly faster than faces with happy (d = 3.6) or neutral expressions (d = 3.5) (and d Happy-Neutral = −0.12). In the rest condition, the differences are negligible (d Fear-Happy = 0.14; d Fear-Neutral = 0.15; d Happy-Neutral = 0.01). No other comparisons appeared to be relevant. The facilitatory effect of the exercise condition on the processing of fearful but not neutral faces replicates the findings of Pezzulo et al. (2018). Contrary to our prediction, we did not find a facilitatory effect of the exercise condition on the processing of happy faces, despite happiness (similar to fear) usually being associated with increased heart rate.
After replicating the prior study's results, we considered possible effects of the affective primes on gender-categorization. Figure 2 shows raincloud plots of the response times in the different conditions. The figure highlights a difference in the distributions of response time for faces with a fearful expression with respect to faces with a happy or neutral expression, but only in the exercise condition. In the exercise condition (panel A), participants responded faster to fearful faces when they were preceded by a negative prime compared to a positive (d = 0.52, medium effect) or neutral (d = 0.47, small effect) prime. No difference emerged between positive and neutral primes (d = −0.13, trivial effect). In other words, when participants performed the experiment in the exercise condition, they responded to fearful faces significantly faster when the faces were preceded by a negative prime than by a positive or neutral prime. The same comparisons in the rest condition (panel B) did not yield comparable results, with only a small effect (d = 0.40) for fearful faces preceded by negative vs. neutral primes. In keeping with this, a Mann-Whitney test indicates that the only two comparisons that reached statistical significance (P value < 0.05) were negative vs. positive primes and negative vs. neutral primes, but only when participants processed fearful faces in the exercise condition (the same comparisons failed to reach significance when participants were in the rest condition). For the sake of completeness, all the results of the estimation statistics (i.e., Cohen's d) and Mann-Whitney tests are reported in Table 1. These results show that no comparisons apart from those discussed above appeared to be relevant.
Finally, we conducted a further analysis to confirm that the effects of the negative prime are specific to fearful faces in the exercise condition. We found that in the presence of a negative prime, response times for fearful faces were faster in the exercise condition than in the rest condition (d = 3.41). However, no such effect was present for happy (d Exercise-Rest = −0.26) or neutral faces (d Exercise-Rest = 0.02).
Movement Trajectories
Mouse trajectories were rescaled into a standard coordinate space. The top-left corner of the screen corresponded to (−1, 1.5) and the bottom-right corner to (1, 0), with the start location of the mouse (the bottom-center) at coordinates (0, 0). Then, the duration of the trajectory movements was normalized by re-sampling the time vector into 101 time-steps using linear interpolation, to allow averaging across multiple trials.
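A minimal Python sketch of this preprocessing is shown below, assuming raw trajectories recorded in screen pixels with the origin at the top-left corner; the mapping constants follow the standard space described above, while the function names and the pixel-origin assumption are ours rather than taken from the MouseTracker documentation.

```python
import numpy as np

def to_standard_space(x_px, y_px, screen_w: int, screen_h: int):
    """Map raw pixel coordinates (origin at the screen's top-left) into the
    standard space: top-left (-1, 1.5), bottom-right (1, 0), so that the start
    button at the bottom-center of the screen becomes (0, 0)."""
    x = np.asarray(x_px, float) / screen_w * 2.0 - 1.0    # [0, W] -> [-1, 1]
    y = (1.0 - np.asarray(y_px, float) / screen_h) * 1.5  # [0, H] -> [1.5, 0]
    return x, y

def time_normalize(x, y, n_steps: int = 101):
    """Resample a trajectory to a fixed number of time steps with linear
    interpolation, so trajectories of different durations can be averaged."""
    t_old = np.linspace(0.0, 1.0, num=len(x))
    t_new = np.linspace(0.0, 1.0, num=n_steps)
    return np.interp(t_new, t_old, x), np.interp(t_new, t_old, y)
```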
Area Under the Curve (AUC)
Estimation statistics on the Area Under the Curve (AUC) for Physical Condition and Facial Expression reveal that in the exercise condition AUC values were smaller in response to faces with fearful expressions (AUC value = −0.52) compared to happy (AUC value = 0.21, d = 3.87) or neutral ones (AUC value = 0.25, d = 3.96), but were not different between happy and neutral faces (d = 0.29). On the contrary, in the rest condition, AUC values were larger in response to faces with fearful expressions (AUC value = −0.08) compared to happy (AUC value = −0.0005, d = 0.93) or neutral ones (AUC value = −0.003, d = 0.92), but were not different between happy and neutral faces (d = −0.2). Note that the rest of the effect sizes were lower in the rest condition compared to the exercise one.
Maximum Deviation (MD)
Table 3 reports the estimation statistics on Maximum Deviation (MD) for Condition, Facial Expression, and Affective Prime. The analysis reveals no medium or large effects.
DISCUSSION
Embodied theories of emotion posit that the processing of emotionally charged stimuli consists of a situated simulation that re-enacts emotion-congruent physiological, interoceptive, and affective states (Wilson-Mendenhall et al., 2013;Barrett, 2017). Accordingly, not only should processing emotions with reliable somatic components, such as fear, produce significant physiological changes, such as heart rate acceleration (Cacioppo et al., 2000), but the converse should be true as well. For example, an accelerated heart rate should facilitate the processing of fearful faces-potentially via interoceptive channels that are key to the processing of bodily sensations and emotion (Kreibig, 2010;Barrett and Simmons, 2015).
This latter prediction was tested in a previous study by Pezzulo et al. (2018), which reported that after exercise (which increased heart rate), participants' processing of faces expressing fear in a gender-categorization task was facilitated compared to that of faces expressing neutrality or disgust (Pezzulo et al., 2018). These results provided evidence for the bottom-up influence of both physiological and interoceptive states on emotional processing, specifically for affectively congruent pairs (increased heart rate and fear). That this effect was obtained in an incidental gender-categorization task is noteworthy, as it did not require participants to explicitly attend to the emotional content of the stimuli. This suggests that emotional content is nevertheless implicitly attended to.
In the present study, we aimed to replicate this finding and extend it to study the joint effect of the above (bottom-up) interoceptive manipulation of heart rate and a (top-down) manipulation of the context of emotional processing via affective priming. To this end, we tested participants after rest and exercise in the gender-categorization of faces with three types of emotional expressions: two (happiness and fear) both congruent with high heart rate, but with opposite valence (positive and negative), and the third, neutral. Participants performed the gender-categorization in three blocks (one for each affective priming condition), during which emotional faces were preceded by positive or negative primes (congruent with happy and fearful faces, respectively) or neutral primes.
Theories of interoceptive or embodied inference would interpret such emotional processing as the result of a principled (Bayesian) inferential process that integrates (top-down) affective primes and (bottom-up) interoceptive streams (Seth et al., 2012; Pezzulo, 2013; Seth, 2013; Barrett and Simmons, 2015; Pezzulo et al., 2015; Barrett et al., 2016; Seth and Friston, 2016; Gu et al., 2019). From these perspectives, positive and negative primes should have facilitated the processing of happy and fearful faces, respectively, when participants were in an emotion-congruent bodily state (i.e., increased heart rate). This would be explained by both the interoceptive information of increased heart rate and the priming of positive (or negative) emotional content anticipating the incoming perception of happy (or fearful) faces. This anticipation would facilitate the processing of those interoceptively and prior affectively congruent stimuli. In more ecological contexts, interoception and prior affective context could be used to prepare to deal with behaviorally relevant positive or negative events (Seth et al., 2012; Pezzulo, 2013; Seth, 2013; Barrett and Simmons, 2015; Pezzulo et al., 2015; Barrett et al., 2016; Seth and Friston, 2016; Friston et al., 2017; Gu et al., 2019). Our results replicated the embodied congruence between an interoceptive state of high heart rate and emotional target reported in Pezzulo et al. (2018). Participants in the exercise condition processed fearful faces faster than the other faces (see Figure 1). While the response time values reported in the present study seem shorter than those in the original one, the response times between these two studies are comparable because, in this study, the mouse was frozen until the end of the presentation of the face, while this was not the case in the previous one. Furthermore, akin to Pezzulo et al. (2018), trajectories toward fearful faces in the exercise condition had lower MD and AUC values compared to those in response to other facial expressions (see Figure 3). Surprisingly, trajectories toward fearful faces had higher MD and AUC values in the rest condition compared to the other faces. While this difference was not significant in Pezzulo et al., it trended in that direction (Pezzulo et al., 2018).
Our results extend the findings of Pezzulo et al. (2018) by showing that, in accordance with our hypothesis, participants in the exercise condition processed fearful faces even faster when they were preceded by a negative prime compared to the other primes (see Figure 2). However, contrary to our hypothesis, participants did not process happy faces faster when they were in the exercise compared to the rest condition. Additionally, participants in the exercise condition did not process happy faces faster when they were preceded by positive primes. Thus, our findings suggest that emotional processing integrates (top-down) affective context and (bottom-up) interoceptive condition of the body, but only in the processing of fearful faces.
These asymmetric results-and the fact that neither exercise nor positive primes alone, nor their combination, facilitated the processing of happy faces-may be due to a number of factors. First, both fearful and happy faces are associated with high heart rate, but the latter less consistently, which may explain why the high heart rate induced by physical exercise facilitates the processing of fearful, but not happy faces. A further (not mutually exclusive) possibility is that the positive IAPS primes (Lang and Bradley, 2007) exerted a weaker effect than the negative primes on the processing of emotional stimuli. The prominence of negative primes is supported by the analysis of kinematic measures (maximum deviation, MD), which shows that in the increased heart rate condition, negative primes influenced the processing of happy and, to some extent, neutral faces, reducing the attractiveness of competing alternatives. The lack of an effect on fearful faces could plausibly be due to a ceiling effect, given that the MD of trajectories toward fearful faces is already extremely low. Future studies using a wider range of emotional expressions and primes may help establish to what extent the asymmetry of results between the positive and negative domains-and the stronger effect of negative vs. positive primes-depends on the materials we used or on fundamental differences between emotional domains.
In summary, the analyses of response times and movement kinematics suggest that the congruence between negative primes and increased heart rate influences the emotional processing in two ways. The first is valence-specific: as reported above, it selectively speeds up the processing of fearful stimuli, which are congruent with both high heart rate and negative primes. This embodied congruence between interoceptive state, affective priming, and emotional expression processing is compatible with Pezzulo et al. (2018) as well as theories of interoceptive or embodied inference which predict that (top-down) negative affective primes and (bottom-up) interoceptive streams signaling high heart rate would selectively influence the processing of fearful faces (Seth et al., 2012;Pezzulo, 2013;Seth, 2013;Barrett and Simmons, 2015;Pezzulo et al., 2015;Barrett et al., 2016;Seth and Friston, 2016;Gu et al., 2019). The second may consist of a broader form of arousal, which non-selectively influences the processing of happy and neutral faces, too. Previous research has reported that arousal from exercise can facilitate emotional processing, improving vigilance and performance in cognitive tasks (Poulton, 1977). Our results suggest that valence-specific and broader effects are not mutually exclusive, but can coexist. However, the latter, broader effects seem to be more limited and confined to some subtle kinematic aspects of movement.
There are some limitations in the present study that can be corrected or expanded on in further studies. First, as noted above, it is possible that our materials of happy and fearful faces, or positive and negative primes, were not completely balanced. This is something that future studies should carefully control. Second, we used a block design, and this may have limited the effect of affective priming. Future experiments using an event-related rather than a block design may be more effective in testing significant effects of the prime on emotion processing. Third, the limited sample of participants prevented us from analyzing the gender variable across participants. In addition, we did not design the study to test gender differences and we do not have a priori hypotheses about this comparison (especially in this task, which is a gender classification task that probes emotional processing indirectly). Fourth, a wider variety of emotions should be tested. Further categories of facial expression should be used to expand our repertoire of potential interoceptive signatures of different emotions. Different emotional target stimuli could also be used to investigate more granular types of emotion, such as nostalgia, annoyance, or sexual arousal. Such target stimuli could consist of images, videos, or stories of specifically emotionally salient situations. Fifth, a greater variety of ethnicities should also be introduced in further experimentation, both among the participants and in the face stimuli. Finally, future studies may consider applying this paradigm to clinical populations, given the emerging link between interoceptive dysfunctions and several psychopathological conditions such as anorexia nervosa (Barca and Pezzulo, 2020), bulimia nervosa (Klabunde et al., 2017), anxiety and addiction (Paulus et al., 2019), somatic symptom perception (Pezzulo et al., 2019), and panic disorders (Maisto et al., 2021).
Despite these limitations, the present study provides promising evidence that the implicit processing of fearful facial expressions is facilitated by both congruent interoceptive states and affective primes within the same paradigm. It shows that salient interoceptive signals, such as a high heart rate, and a negative affective prime jointly create the appropriate context-or expectation-to process subsequent congruent stimuli such as a fearful face.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Institution of Cognitive Sciences and Technology's Human Research Committee (ISTC-CNR, Rome -N.0003971/04/12/2015). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
ANCY, PI, GP, and LB designed the research. PI performed the research. LB analyzed the data. GP and LB wrote the article in consultation with ANCY and PI. All authors contributed to the article and approved the submitted version.
|
2021-06-02T13:17:12.959Z
|
2020-01-29T00:00:00.000
|
{
"year": 2021,
"sha1": "3e8208131e55d6f6f5aa86f0d4d7227f6ea8726a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.625986/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e8208131e55d6f6f5aa86f0d4d7227f6ea8726a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
269770181
|
pes2o/s2orc
|
v3-fos-license
|
Why should Medical Societies increasingly attend to their Specialist Title exams and why should medical professionals obtain it?
ABSTRACT
Medical societies must maintain high standards of competence and quality when awarding specialist titles, defining the certification criteria while taking into account the needs and realities of the health system and medical practice.
Keywords: Societies, Medical; Professional Training; Quality Assurance, Health Care; Academic Performance.
EDITORIAL
Competence and professionalism are nouns present in a doctor's career with varied meanings, and they are often used as opposites. Competence refers to knowing how to do things with discernment. Concurrently, professionalism combines technical competence with a continuous reflection on know-how, imposing an ethical sense on the physician's actions.
Every competence happens in a certain context. Just as there is no authority for every field, it is also not conceivable to have a competence for every conceivable context. Thus, a physician recognized as technically competent must adhere to the principles of bioethics, especially the principles of beneficence and non-maleficence.
However, a lack of technical competence precedes a lack of professionalism, whereas the existence of competence alone is not enough to denote professionalism.
Within this ambiguous scenario stands the medical specialist who, in theory, represents to society a doctor whose professionalism is preceded by recognized technical competence relevant to his or her specialty.
The Specialist Title exam plays a vital role in promoting excellence and protecting patient health while contributing to the continuous advancement of the medical specialty.
These examinations should be designed to ensure high standards of competence and professionalism among physicians seeking to become specialists.
Societies must ensure the quality of the specialist title exam so that they can guarantee to the population that, by obtaining the specialist title, the physician has been entitled to the recognition of his technical competence, which is one of the highest and most important levels of medical training.
Table 1 summarizes the advantages of the Specialist Title for Medical Societies and for the population.
Professional Competency Guarantee
It is one of the ways to ensure that surgeons who obtain certification possess the knowledge, skills, and competencies necessary to practice the specialty effectively and safely.
Patient Health Protection
By setting rigorous standards for certification, the surgical specialty society contributes to the safety and quality of patient care. Board-certified surgeons are recognized as professionals capable of delivering high-quality care.
Encourage the Ethical Practice of the Medical Act
By establishing the difference between moral conduct (right and wrong) and the reflection on one's actions, which is ethics (good and evil). All of this refers the surgeon to the fundamental principles of our Code of Medical Ethics, especially principle II: "The target of all the physician's attention is the health of the human being, for the benefit of which he/she must act with the utmost zeal and the best of his professional capacity."
Constant Update
The exam frequently covers recent advances in the surgical field, encouraging practitioners to stay up to date with the latest practices, technologies, and scientific evidence.
Specialty Standardization
Certification through the specialist title exam carried out by peers of the same specialty contributes to the standardization of practice in the specialty. This creates a common set of standards that benefits both practitioners and patients.
Recognition of the Specialization
It confers a formal recognition of the surgeon's ability in a specific area. This is valuable for the professional in terms of credibility and for society in ensuring that experts are properly recognized.
Continuous Professional Enhancement (Re-Certification) - not a practice in Brazil
This requirement encourages surgeons to engage in an ongoing process of learning and professional development, promoting excellence in the specialty over time.
Credibility of the Society
The surgical specialty society gains credibility by certifying professionals through a rigorous evaluation process. This reinforces the trust of the medical community and the public in the society and its members. Even more so if a periodic recertification process is implemented that guarantees patients that the specialist surgeon's capacity remains proven and attested.
In this sense, a specialist title from a medical society offers a variety of benefits, ranging from legal and ethical training, through professional recognition to continuous development, and participation in professional communities.
These advantages can have a positive impact on both the surgeon's career and satisfaction in his/her professional practice (Table 2). Obtaining a specialist title from a surgical society offers several advantages for a surgeon, both in terms of professional development and recognition in the medical field.
Here are some of the main advantages:
Professional Recognition
It confers formal recognition of the physician's experience and competence in the specific area of expertise. This is valuable for building a solid reputation in the surgical field.
Credibility and Trust
It is a seal of approval that indicates to colleagues, patients, and healthcare institutions that the physician meets high standards of knowledge, skill, and professional ethics.
Market Differentiation
In a competitive professional landscape, having a specialist title makes the doctor stand out, providing a competitive advantage and potentially opening doors to professional opportunities.
Continuous Development
The process of obtaining the specialist title typically involves a commitment to continuous learning, encouraging the surgeon to stay up to date with the latest practices, technologies, and research in their field.
Access to Professional Networks
Many surgical societies provide opportunities for members to connect and collaborate with other professionals in the field. This can lead to valuable collaborations, mentorships, and networking.
Participation in Scientific Activities
Many societies promote conferences, workshops, and scientific activities that provide opportunities for presenting research, exchanging knowledge, and updating professionals.
Leadership Engagement
Skilled surgeons can often engage in leadership within the society, participating in committees and boards and contributing to the advancement of the specialty.
Ethical and Professional Standards
The certification process often encompasses both ethical and professional aspects, reinforcing the surgeon's commitment to high ethical standards in surgical practice. It also allows the qualified surgeon to present himself as a specialist in possession of a Specialist Qualification Registration (SQR), in view of his/her training and professionalism.
Best Job Opportunities
Having a specialist title can increase the chances of landing prominent positions in healthcare institutions, hospitals, or clinics that value specialization and certification.
Personal and Professional Satisfaction
In addition to the external benefits, obtaining the specialist title can provide a sense of personal and professional fulfillment, highlighting the doctor's dedication to their specialty.
Worldwide Recognition
The Brazilian Medical Association is affiliated to the World Medical Association (WMA), an international organization that represents physicians from all over the world and recognizes the titles generated by the specialties recognized by the Brazilian Medical Association (AMB).
For any of the 54 medical specialties recognized in Brazil, the Federal Council of Medicine, through its Regional Councils (CRM), can only register as specialists (granting the SQR) physicians who present at least one of these two documents: a Certificate of Completion of Medical Residency accredited by the National Commission for Medical Residency (CNRM); or a Specialist Title granted by the Brazilian Association or Society of the respective specialty, which is affiliated to the Brazilian Medical Association (AMB) and whose notice of the specialist title exam followed the rules of the AMB and is approved by it (FALK, 2006; AMB, 2021). Residency and Specialist Title are certificates of a different nature, being independent. A doctor may have one or both, but either one entitles the specialist to register as such in a CRM. By determination of the AMB, granting a Specialist Title solely on the basis of an excellent curriculum or of proof of completion of Medical Residency is no longer allowed (FALK, 2006). Currently, it is always necessary to pass exams that involve a written test (cognitive assessment) and a practical test (assessment of technical skills and communication skills).
On the other hand, the certificates of completion of specialization courses have their recognition in the Academy, as well as a degree of importance in the labor market and in the curriculum. However, these certificates are not sufficient for the physician to be registered as a specialist in the Medical Councils (FALK, 2006).
Due to information disseminated in the press regarding Medical Specialist Titles issued by an entity not affiliated with the Brazilian Medical Association, it was clarified that:
• According to Decree No. 8,516/15, which establishes the rules for the recognition of medical specialties, and CFM Resolution No. 1,974/11, which regulates medical advertising, only physicians with titles granted by the CNRM, or by the AMB after approval in the specific specialty exams of the societies affiliated to the Brazilian Medical Association, can be recognized as medical specialists, and, in both cases, they must be registered with the Regional Council of Medicine.
• On February 22, 2023, the Federal Regional Court of the 1st Region judged action number 1026344-20.2020.4.01.3400, accepting the appeal filed by the Federal Council of Medicine and the Brazilian Medical Association and suspending the decision given in the first instance in favor of an association of doctors who offer postgraduate courses.
• In the ruling, the magistrate argued that Brazilian law does not authorize doctors to disclose postgraduate degrees because, if they do so, they can "deceive eventual patients claiming to be specialists".
And it is the role of the judiciary to protect the "collective right of the people not to be deceived by false medical experts".
• With that decision, it was once again forbidden for doctors with lato sensu postgraduate degrees to advertise themselves as specialists.
Thus, the Code of Medical Ethics, the rules of the Federal Council of Medicine (CFM), and now the courts prohibit the physician from claiming to be a specialist, through business cards, prescriptions, office signs, health plans, etc., without having the Specialist Qualification Registration issued by a CRM (FALK, 2006).
Article 17 of Law No. 3,268/57, which regulates the Councils of Medicine and gives other provisions, states that "The legally registered physician may practice his profession in any of its branches or specialties, assuming, of course, responsibility for his acts". However, Article 11 of the CFM Resolution prescribes that "It is forbidden for physicians and, where applicable, legal entities, trade unions, and medical associations to disclose, when not specialists, that they deal with organic systems, organs or specific diseases, by inducing confusion with alleged specialties".
Since 1958, the AMB has granted the Specialist Title to physicians who have passed rigorous theoretical and practical evaluations, with the objective of seeking scientific improvement and the professional appreciation of physicians. Through its National Accreditation Commission, the AMB works on updating the titles, administering the necessary credits (AMB, 2021). The criteria for the public notices of the specialist title exams must be in accordance with the requirements established in the agreement signed between the Federal Council of Medicine (CFM), the Brazilian Medical Association (AMB), and the National Commission for Medical Residency (CNRM, 2002), and with the Sufficiency Exam Regulation for Specialist Title or Certification of Area of Expertise of the AMB (2021).
Monitoring trends and understanding individual motivations are essential to better understand the panorama of medical education in Brazil. Undergraduate medical studies in Brazil are characterized by terminality: the student receives a full license to practice medicine upon completing the course at one of the country's medical schools (BICA & KORNIS, 2020). A bottleneck was formed by the disproportionate growth of undergraduate vacancies in relation to medical residency (MR) vacancies, as shown in Medical Demography 2023 (SCHEFFER, 2023), with 41,805 undergraduate vacancies (77% in private courses) and 4,950 MR programs accredited in Brazil, authorized to train physicians in 55 specialties and 59 areas of expertise, recognized by the Joint Committee of Specialties (CME), composed of representatives of the CNRM, CFM, and AMB. At the end of 2023, the opening of another 10,000 undergraduate medical vacancies was authorized jointly by the Ministries of Education and Health (BRASIL, 2023); these were not computed in the current study of Medical Demography, which will make the bottleneck for medical residency vacancies even greater. To the great astonishment of many medical professionals who had already graduated, and contrary to what happened in previous years, not all graduates were interested in going through MR. Even with the large increase in undergraduate vacancies across the country, there has been an annual decrease in the demand for first-year residency vacancies (R1) in the programs, which in 2021 had 16,848 vacancies available. On the one hand, there has been a huge expansion in the offer of lato sensu specialization courses and an increasing number of doctors advertising their work through social networks. On the other hand, we have seen a great deal of bad news, in the various types of media, about errors associated with harmful practices against patients, both by doctors and by professionals from various other areas, even outside health care. In this scenario, the Specialist Titles awarded by Medical Societies serve as an indicator of competence, quality, and professional ethics in medical practice. By establishing rigorous criteria for certification, requiring ongoing professional updating, and monitoring the practice of members, Medical Societies play a key role in protecting against professional malpractice and promoting high standards of health care. While it is important for Medical Societies to maintain high standards of competence and quality in granting Specialist Titles, this requires a careful and thoughtful approach in defining the criteria for certification, considering the needs and realities of the healthcare system and medical practice. There is no justification for lowering the rigor of the evaluation for the specialist title granted by the AMB and its affiliated specialty societies. Specialists' capacity and patients' safety must always come first. Finally, faced with the uncontrolled dilemma of the number of physicians trained per year, we see the granting of the specialist title by Medical Societies as one of the last ethical and efficient barriers in the protection of the population. The Commission for the Specialist Title in General Surgery of the Brazilian College of Surgeons (COTECIG) has sought to improve its techniques for the preparation and review of items, the selection of clinical cases for the oral test, as well as the simulated stations, progressively improving the training of face-to-face and online evaluators.
the constant search for improvement are essential to ensure that obtaining the Specialist Title by the Brazilian College of Surgeons is recognized as an indication of proficiency and competence in the surgical area.www.Gov.Br/saude/pt-br/assuntos/noticias/2023/ outubro/lancado-edital-com-regras-para-novoscursos-de-medicina-e-desconcentracao-das-vagas-deformacao-dos-profissionais.4. Conselho federal de medicina.Resolução cfm nº.1785/2006.D.O.U.22 De junho de 2006, seção i, p.127.Disponível em: http://www.Portalmedico.Org.Br/resolucoes/cfm/2006/1785_2006.Htm. 5. Falk jw.Os títulos de especialista.Rev bras med fam comunidade.2006;2(7):162-4.Doi: 10.5712/ Rbmfc2(7)50.6. Scheffer m.Demografia médica no brasil 2023.São paulo: departamento de medicina preventiva, fmusp, cremesp, cfm; 2023.7. Tribunal regional federal da 1ª região.Processo: 1026344-20.2020.4.01.3400 -"Divulgar e
Table 1 - Advantages of the Specialist Title for Medical Societies and the population.
Table 2 - Advantages of obtaining the Specialist Title by medical professionals.
Lyme Disease, Virginia, USA, 2000–2011
Geographic expansion of Ixodes scapularis ticks has increased human exposure to Borrelia burgdorferi.
Lyme disease, caused by the bacterium Borrelia burgdorferi and transmitted in the eastern United States by the black-legged tick (Ixodes scapularis), is increasing in incidence and expanding geographically. Recent environmental modeling based on extensive field collections of host-seeking I. scapularis ticks predicted a coastal distribution of ticks in mid-Atlantic states and an elevational limit of 510 m. However, human Lyme disease cases are increasing most dramatically at higher elevations in Virginia, a state where Lyme disease is rapidly emerging. Our goal was to explore the apparent incongruity, during 2000-2011, between human Lyme disease data and predicted and observed I. scapularis distribution. We found significantly higher densities of infected ticks at our highest elevation site than at lower elevation sites. We also found that I. scapularis ticks in Virginia are more closely related to northern than to southern tick populations. Clinicians and epidemiologists should be vigilant in light of the changing spatial distributions of risk.
Lyme disease (LD), caused by the bacterium Borrelia burgdorferi and transmitted in the eastern United States by the black-legged tick (Ixodes scapularis), is the most common vector-transmitted disease in North America (1). Maintained in an enzootic cycle comprising competent vertebrate reservoir host species, B. burgdorferi is transmitted to humans by the bite of an I. scapularis nymph or adult that acquired infection during a blood feeding as a nymph or larva (2). Although the principal reservoir host for this pathogen, the white-footed deer mouse, Peromyscus leucopus, is widely distributed throughout North America, LD is generally confined to 2 geographic foci in the eastern United States: 1 in the upper Midwest and 1 in the Northeast (2-5). Densities of host-seeking I. scapularis nymphs correlate significantly with cases of human LD (3), but this species has been reported throughout much of eastern North America (6-9). Nationally, LD incidence increased during 1992-2002, but overall numbers of confirmed cases have since remained relatively stable (1,10).
In some locations, LD incidence recently has increased dramatically; in Virginia, the number of confirmed cases nearly tripled from 2006 to 2007 (http://www.vdh.virginia.gov/epidemiology/surveillance/surveillancedata/index.htm) to ≈12.4 cases per 100,000 residents, well above the 1998-2006 average of 2.2 per 100,000 (1). A 1990 report of LD cases in Virginia noted that the disease was rare in the early 1980s but apparently increased in incidence and geographic distribution through the late 1980s, leading the authors to conclude that the disease was expanding southward (11). Before 2006, most studies of I. scapularis ticks in Virginia focused on the eastern and southeastern parts of the state and found that densities of I. scapularis ticks declined, as did their rate of infection with B. burgdorferi, with distance from the coast (12,13) (15,16).
Although I. scapularis ticks exist in the southeastern United States (6-9), they are most easily detected by drag sampling, a method used as a proxy for risk of tick exposure (5), in areas associated with the highest LD incidence, i.e., the Northeast (New Jersey through Massachusetts) and upper Midwest (Wisconsin and Minnesota) (3,5,17). The difference in apparent abundance of I. scapularis ticks and risk for LD between the northern and southeastern United States has been the subject of much discussion and debate (18) and might be related, either through behavioral or physiologic mechanisms, to genetic differences between I. scapularis populations in these regions (7,19-22). Analyses of the population genetic structure of I. scapularis ticks have shown that dynamic range shifts are likely to have occurred in recent evolutionary history (19-22) and that 2 distinct lineages within this species can be identified; a relatively genetically uniform "American clade" exists in the northern United States (although this lineage has also been detected in the South), and a genetically diverse "southern clade," members of which have been found only in the South (20). Although other nomenclatures have been proposed for these 2 lineages (e.g., clades A and B for northern and southern lineages, respectively [19]), we follow the terminology established by Norris et al.: "American" describes the widely distributed yet less diverse clade and "southern" describes the geographically restricted yet more diverse mtDNA clade of I. scapularis ticks (20).
Range expansion of I. scapularis ticks over relatively short periods has been observed (23,24). Moreover, recent environmental modeling, based on extensive field collections of host-seeking I. scapularis ticks, suggests that the range of this species is expanding widely and that its occurrence in a given area depends on abiotic drivers, vapor pressure deficit and elevation (5,25). In Virginia, studies found that I. scapularis ticks were concentrated in northern sites; very few ticks were reported in other parts of the state (5,17,25). In contrast, human LD cases at inland, higher-elevation locations have increased in recent years in Virginia (http://www.vdh.virginia.gov/epidemiology/surveillance/surveillancedata/index.htm). The incongruity between the human case and vector abundance datasets might be explained by recent (i.e., since 2007) spatial and/or numerical expansion of I. scapularis populations. We hypothesized that the density of B. burgdorferi-infected ticks would be highest in counties associated with high incidence of human disease if the epidemiologic data represent cases in tick-endemic areas. In contrast, low numbers of infected ticks in areas of high human disease might indicate either misdiagnosis or allochthonous exposure.
Data from Cases in Humans
We compiled LD cases reported to the Virginia Department of Health (http://www.vdh.virginia.gov/epidemiology/surveillance/surveillancedata/index.htm) directly by physicians or identified through follow-up of positive laboratory results by county public health department personnel. Before 2008, the surveillance case definition (SCD) was less restrictive about what constituted laboratory evidence of infection: an IgM-positive Western immunoblot (WB IgM) test result could be counted as laboratory evidence of infection even though a more specific 2-tier test that used an enzyme-linked immunoassay (EIA) and the WB IgM was recommended. The Virginia Department of Health used the less restrictive interpretation of laboratory evidence in its LD surveillance from 1996 through the end of 2007. However, given that single-tier positive results from either the EIA or the WB IgM are less specific than a positive 2-tier result from both the EIA and the WB IgM (26,27), laboratory evidence of infection in the 2008 SCD required, at a minimum, a positive 2-tier test result on blood collected during the acute phase of illness (i.e., within 30 days after illness onset). The more stringent laboratory criteria adopted in the 2008 SCD were designed to minimize the number of false cases counted by state surveillance programs.
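The two case-definition regimes just described can be summarized as a small decision rule. The sketch below is purely illustrative: the surveillance program's actual implementation is not described here, and the function name and arguments are hypothetical; only the pre-2008 single-tier rule, the 2008 two-tier requirement, and the 30-day acute-phase window come from the text.

```python
# Illustrative sketch only (not the surveillance system's code): encode the
# pre-2008 vs. 2008 laboratory-evidence logic described above.
def laboratory_evidence(eia_positive: bool, wb_igm_positive: bool,
                        days_since_onset: int, scd_year: int) -> bool:
    """Return True if a specimen meets the laboratory-evidence criterion."""
    if scd_year < 2008:
        # Less restrictive interpretation: a single positive test sufficed.
        return eia_positive or wb_igm_positive
    # 2008 SCD: positive 2-tier result (EIA and WB IgM) on acute-phase
    # blood, i.e., collected within 30 days after illness onset.
    return eia_positive and wb_igm_positive and days_since_onset <= 30
```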
We analyzed all data at the county level, which required us to reclassify cases reported in cities to the counties in which they are situated because cities and counties in Virginia are often separate administrative entities. We estimated LD incidence per county for each year during 2000-2011 by dividing the annual number of counted cases by the estimated population size in 2007 (28). To characterize annual change in incidence per county, we calculated the difference in cases between successive years and then averaged these values across years. We analyzed the spatial distribution of human LD cases at the state level by identifying the centroid, or geometric center, of county-level LD incidence for each year, starting in 2000 using ArcMAP 10.0 (ESRI, Redlands, CA, USA). We then used weighted linear regression to determine the effect of year on latitude and longitude of that year's centroid position weighted by annual number of cases.
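As a concrete illustration of these steps, the sketch below computes county incidence per 100,000 (with the fixed 2007 population denominator), an incidence-weighted annual centroid, and a case-weighted regression of centroid longitude on year. It assumes Python with NumPy rather than the ArcMAP workflow used by the authors, and all county names, coordinates, and counts are invented.

```python
# Minimal sketch (not the authors' ArcMAP workflow) using hypothetical data.
import numpy as np

cases = {2000: {"A": 10, "B": 4}, 2001: {"A": 14, "B": 9}}  # cases[year][county]
pop2007 = {"A": 50_000, "B": 80_000}                         # 2007 estimates
county_xy = {"A": (-78.5, 38.0), "B": (-77.2, 38.6)}         # (lon, lat)

years, lons, n_cases = [], [], []
for year in sorted(cases):
    counts = cases[year]
    # Incidence per 100,000, with the 2007 population as denominator.
    incidence = {c: 1e5 * n / pop2007[c] for c, n in counts.items()}
    # Incidence-weighted geometric center of that year's cases.
    w = np.array([incidence[c] for c in counts])
    xy = np.array([county_xy[c] for c in counts])
    lon, lat = (w[:, None] * xy).sum(axis=0) / w.sum()
    years.append(year); lons.append(lon); n_cases.append(sum(counts.values()))

# Weighted least squares of longitude on year. np.polyfit weights residuals
# by w, so pass sqrt(case counts) to weight squared errors by case counts.
slope, intercept = np.polyfit(years, lons, 1, w=np.sqrt(n_cases))
print(f"centroid longitude drift: {slope:.4f} degrees/year")
```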
Study Sites and Field Collections
In May and June 2011, we sampled ticks at 4 closed-canopy deciduous forest sites along an east-west elevational gradient (Figure 1). We collected ticks at all sites by drag sampling (29), whereby a 1-m² piece of corduroy was dragged along both sides of 5 haphazardly selected 100-m transects (1,000 m² total), stopping every 20 m to remove ticks (17,25). We visited each site 4 times during May-July 2011 with at least 10 days separating visits. All ticks were speciated by light microscopy using dichotomous keys (30), and density of I. scapularis ticks was calculated as the average number of ticks collected per transect. Differences in density of I. scapularis nymphs among sites and visits were determined by analysis of variance of square root-transformed count data. We compared infection prevalence in ticks among sites by G-test and by creating log-likelihood estimates of 95% CIs with a binomial probability function (31).
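The density and prevalence comparisons above can be sketched as follows. This is an illustrative Python example, not the authors' code: the counts are invented, and SciPy's chi2_contingency with lambda_="log-likelihood" stands in for the G-test.

```python
# Hedged sketch of the density and infection-prevalence comparisons; the
# G-test is the log-likelihood-ratio variant of the chi-square test.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical nymph counts per 100-m transect at one site (5 transects).
transect_counts = np.array([12, 7, 9, 11, 8])
density = transect_counts.mean()         # average nymphs per transect
sqrt_counts = np.sqrt(transect_counts)   # transform used before the ANOVA

# Hypothetical infected vs. uninfected nymphs at two sites (rows).
table = np.array([[9, 36],    # site 1: 9 of 45 infected
                  [1, 24]])   # site 2: 1 of 25 infected
G, p, dof, _ = chi2_contingency(table, correction=False,
                                lambda_="log-likelihood")
print(f"density={density:.2f} nymphs/transect, G={G:.2f}, p={p:.4f}")
```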
Molecular and Phylogenetic Methods
To extract total DNA, individual ticks were dried and flash-frozen by using liquid nitrogen, crushed by using a sterilized pestle, and processed with Qiagen DNeasy Blood and Animal Tissue Kit (QIAGEN, Valencia, CA, USA) by using manufacturer's protocols. We tested for B. burgdorferi DNA by PCR amplification of the outer surface protein C (ospC) gene and the intergenic spacer region of 16S-23S rRNA genes (32). Presence of amplified DNA was determined by gel electrophoresis, and samples that produced amplicons were purified with a QIAquick PCR Purification Kit (QIAGEN) and submitted for sequencing at the Nucleic Acids Research Facility at Virginia Commonwealth University (Richmond, VA, USA). We also performed PCR to amplify and subsequently sequence an ≈460-bp portion of the I. scapularis 16S rRNA gene using primers 16S +1 and 16S -1 (20). Bidirectional chromatograms from all sequence data were assembled and initially analyzed with Sequencher 4.10.1 (Gene Codes, Ann Arbor, MI, USA). B. burgdorferi sequences were blasted by using GenBank (http://blast.ncbi.nlm.nih.gov/Blast.cgi) to confirm species identification. Sequence data from I. scapularis 16S samples were aligned with reference sequences (33) by using ClustalW (http://www.clustal.org) implemented in MEGA 5.0 (http://www.megasoftware.net/), which was also used to select among models of evolution and to reconstruct phylogeny.
Results
During 1995-1998, the Virginia Department of Health counted 55-73 LD cases per year. The number increased to 122 cases in 1999, and cases continued to increase through the early 2000s. Although Virginia's LD activity during 2000-2005 was focused primarily on northern Virginia and the Eastern Shore of Virginia (a peninsula extending south from Maryland on the eastern side of the Chesapeake Bay), small numbers of LD cases were recorded in counties across Virginia, including counties in the most southern and southwestern parts (Figure 2). During 2006-2007 the incidence of LD increased substantially in counties throughout the Appalachian Mountains (Figure 2). After the change in the SCD in 2008, many of the most southern and southwestern counties that had recorded LD cases before 2008 ceased to report cases, and the geographic progression of LD appeared as a compact front that progressed from county to county from northeast to southwest. LD cases were not observed again in any of the far southwestern counties until 2011, by which time LD was considered endemic to many of the counties immediately to their northwest (Figure 2).
We collected 2,549 ticks from the field: 2,192 Amblyomma americanum (1 larva, 1,917 nymphs, 274 adults), 306 I. scapularis (304 nymphs, 2 adults), 50 Dermacentor variabilis (all adults), and 1 I. dentatus (nymph). Sampling site was a major determinant of I. scapularis density (F = 71.07, p<0.0001, degrees of freedom [df] = 3), as was sampling date (F = 6.85, p=0.024, df = 1). Post hoc comparisons indicated that tick density at the highest elevation site (9.55 nymphs/200 m²) was significantly greater than at any other site and that tick density at GR (1.66 nymphs/200 m²) was significantly higher than at site AB (0.25 nymphs/200 m²) (Table). We detected B. burgdorferi DNA in 48 I. scapularis nymphs, 45 of which produced unambiguous sequence reads for at least 1 locus (ospC or intergenic spacer region). Infection prevalence varied significantly among sites (likelihood ratio test, G = 16.3, p<0.0001, df = 3); the prevalence of infection was significantly higher at site LE (0.2) than at sites CR (0.00) and GR (0.04). Because of low sample size, site AB did not yield a reliable estimate of infection prevalence (Figure 3).
Analysis of I. scapularis 16S sequences yielded 17 haplotypes (GenBank accession nos. KF146631-47) from 85 individual nymphs (14 haplotypes from 44 ticks at LE, 1 from 2 ticks at AB, 6 from 21 ticks at GR, and 4 from 18 ticks at CR). Maximum-likelihood phylogenetic reconstruction using the Tamura 3-parameter model (34) indicated that all haplotypes detected fall within the American clade; none of the ticks we sampled were phylogenetically identified as southern-clade I. scapularis (Figure 4). In addition to an overall increase in human LD cases (from 136 in 2000 to an average of >1,000 in 2010 and 2011), we observed a significant spatial shift of the geometric center of LD incidence in Virginia. The longitude value associated with the centroid of each year's LD incidence depended significantly on year from 2000 to 2011 (F = 12.48, p = 0.005, r² = 0.56) (Figure 5). Latitude values did not change significantly over time (F = 0.14, p = 0.71, r² = 0.01). We also calculated the average LD incidence per county for 2000-2006 (before the dramatic spike in cases in Virginia) and for 2007-2011 to identify counties in which the largest increases in cases occurred (Table).
Discussion
Our results indicate that 1) human LD incidence in Virginia has increased since 2000 and the spatial distribution of cases has changed significantly, 2) abundance of I. scapularis nymphs and prevalence of B. burgdorferi infection are consistent with recent changes in human disease data, and 3) I. scapularis populations detected in central and western Virginia are dominated by American-clade haplotypes. Taken together, these results suggest recent spatial and/or demographic expansion of I. scapularis ticks in Virginia, resulting in increased human exposure to B. burgdorferi; the most notable increases in ticks and disease risk are at higher elevations in the western part of Virginia. More generally, our results indicate a dynamic pattern of LD risk. The spatial trends we identified through acarologic sampling are consistent with observed changes in disease incidence and are of paramount public health importance; the observed changes in LD epidemiology in Virginia most likely reflect a spatial increase in disease endemicity (Table). We propose that the increase in LD in Virginia is caused by increasing abundance of I. scapularis ticks, increasing prevalence of B. burgdorferi infection in the vector, or both. Our data suggest that this vector species may be more abundant than it was before 2007; during widespread collections in 2004-2007, I. scapularis ticks existed throughout most of Virginia, but no infected I. scapularis ticks were detected in central or western Virginia (17,25). Similar range expansion of I. scapularis ticks has been described in Wisconsin and Michigan (23,24).
The extent to which the spatial distribution of LD cases in Virginia will continue to change is unclear. Environmental variables previously identified as important drivers of I. scapularis abundance may not have uniform effects throughout the range of this species. For example, on the basis of extensive sampling in the eastern United States over several years, Diuk-Wasser et al. estimated an elevational threshold of 510 m for this species (25), and Rosen et al. detected more I. scapularis on deer at low-elevation than at high-elevation sites in Tennessee (35). However, our sampling showed the highest density of host-seeking I. scapularis nymphs at elevations approaching this threshold, and we have subsequently collected host-seeking nymphs at >1,000 m in Nelson County in west-central Virginia (R.J. Brinkerhoff, unpub. data). In 2007, a growing focus of LD incidence was observed in southwestern Virginia in Pulaski, Floyd, and Montgomery Counties. These counties have continued to have that region's highest incidence of LD through 2011 (http://www.vdh.virginia.gov/epidemiology/surveillance/surveillancedata/index.htm) and mostly occupy high mountain valleys with average elevations of 584-762 m. An elevational threshold that limits tick populations at northern latitudes, where high-elevation sites experience extreme cold during winter months, would not be expected where equivalent elevations are associated with more moderate climatic conditions. Our analysis of LD data from humans indicates that the largest increases in LD incidence since 2007 have occurred in higher-elevation counties in western Virginia; the correspondence between these data and acarologic sampling suggests that the cases reported in these locations most likely are locally acquired and indicate recent spatial and/or numerical expansion of human disease. Our results are notably inconsistent with the findings of surveys of I. scapularis ticks on hunter-killed deer in North Carolina and Maryland during 1987-1992, which indicated that I. scapularis ticks were most abundant on the Coastal Plain and absent or uncommon in the Appalachian Mountains (14-16). When human LD surveillance began in Virginia in 1989, the highest incidence was on the state's Eastern Shore (http://www.vdh.virginia.gov/epidemiology/surveillance/surveillancedata/index.htm). This finding was consistent with early surveys of ticks indicating that I. scapularis was the most common species in the Coastal Plain and much less common at higher elevations to the west (14). A logical conclusion at that time was that LD would continue to spread southward along the state's Coastal Plain. However, during 2000-2011, LD became more prevalent in Virginia's upper Piedmont and Appalachian Mountain zones than in the lower Piedmont and Coastal Plain. The results of older surveys of ticks and recent environmental models are not consistent with the current geographic incidence of LD or our field data. This discrepancy suggests a southwestward spatial expansion of northern tick populations into the upper Piedmont and mountain regions of Virginia or demographic expansion of persons into areas of previously low tick density in western localities. We do not have acarologic data from each county in which LD incidence has increased, nor do we have long-term systematic sampling data, and thus we cannot directly attribute local changes in LD to changes in tick densities.
Analysis of single-nucleotide polymorphisms in the I. scapularis genome reinforces the hypothesis that these ticks recolonized northern North America after the most recent glaciation event and that northern populations are genetically less diverse than southern populations (21). Moreover, analyses of single-nucleotide polymorphism data are consistent with south-to-north postglaciation gene flow, whereby northern American-clade populations are a subset of the genetic variation found in southern-clade populations (21), resulting from founder effects when ticks recolonized northern latitudes (22). Tick populations within both LD-endemic foci show signs of genetic isolation from one another and from southern populations (22), and evidence exists for a similar lack of gene flow among populations within regions (19). Identification of American-clade I. scapularis ticks in the southeastern United States (19,33) might reflect remnant American-clade lineages in the South or might indicate southward dispersal of American-clade ticks. Qiu et al. noted that coastal sites in southern states were associated with strictly American-clade populations, whereas a mix of American- and southern-clade ticks was detected at inland sites (19). With respect to our study, we point to the recent lack of detection of I. scapularis ticks at high-elevation sites in western or central Virginia (17,25) and the presence of exclusively American-clade I. scapularis ticks in the current study as possible evidence consistent with the population expansion of American-clade ticks from northern population foci. However, we cannot exclude the possibility that the distribution of endemic American-clade ticks simply has expanded in Virginia.
Although American- and southern-clade I. scapularis ticks are now considered 1 species, apparent differences exist in host-seeking behavior, biting behavior, and duration of attachment to different host types (9,36,37). Genetic differences between the major I. scapularis lineages have been well documented (7,19-22), and if American-clade ticks are more likely to feed on humans, the emergence of LD in Virginia would be consistent with increased relative abundance of this variant. In the South, immature I. scapularis ticks feed predominantly on low-competence or noncompetent lizard species and are relatively uncommon on rodents (8,36-38). Southern-clade nymphs may have questing behavior that makes them unlikely to be collected on cloth drags or to bite humans (9); thus, nymphal ticks are difficult to collect, even in places where adult ticks are common. LD risk should be very low in areas where I. scapularis nymphs are unlikely to bite humans and immature ticks are more likely to feed on reptiles than on competent vertebrate reservoirs. However, data from a single mitochondrial gene, albeit one that has been widely characterized for this species, do not necessarily reflect patterns of differentiation found in nuclear markers (21) and probably are not useful for delineating among behavioral phenotypes. Moreover, we sampled in daytime hours during the presumed peak period of nymphal activity (late spring, early summer) and thus would not have detected ticks exhibiting different host-seeking behaviors. It is possible that multilocus genomic analysis or year-round sampling would yield different insights from those reached in this study.

Figure 4. Maximum-likelihood phylogenetic reconstruction of Ixodes scapularis lineages based on 16S rRNA gene sequences using the Tamura 3-parameter model (35). All samples beginning with IS were collected during this study; reference sequence GenBank accession numbers are indicated, as were sampling locations (2-letter state abbreviation). The clade containing samples collected in GA, FL, NC, OK, and SC is known as the southern clade (sensu Norris et al. [20]); the clade containing all samples from this study, indicated by the prefix IS, represents the American clade (a more complete explanation of these terms is provided in the text). Bootstrap values at nodes are based on 500 replicates.
The latitudinal gradient in LD risk in the eastern United States is not easily explained and probably is driven by demographic and environmental factors (5,26,39). However, our data suggest that the boundary between regions to which I. scapularis ticks are and are not endemic is moving and that B. burgdorferi-infected ticks might be expanding in or into areas from which they historically have been absent. As a result, clinicians and epidemiologists need to be vigilant in the face of changing spatial distributions of risk, especially in transition zones where patterns of disease are rapidly changing (40).
Microbiological profile and antibiotic vulnerability of bacterial isolates from cancer patients.
The development of multiple types of infections in patients admitted to the oncology ward is to be expected. Infection-associated mortality in cancer patients is attributed majorly to bacteria and then to fungi. Treatment of infections can be successful if an appropriate antibiotic is used, based on knowledge of sensitivity patterns and of the commonly occurring bacteria. A retrospective study was designed to assess the bacteria isolated from infections in cancer patients reported to oncology centers of tertiary care hospitals in the Makkah region, Saudi Arabia. In total, 678 cancer patients were enrolled during this study. The clinical isolates were obtained from urine, blood, respiratory samples, soft tissues and skin areas. The processing of the samples was done in accordance with the "Standard Microbiology Laboratory Operating Procedures". The isolates were identified to species level, and vulnerability tests were done as per the guidelines of the "Clinical Laboratory Standards Institute". During this study, 300 samples were acquired from both medical and surgical oncology wards and were cultured during the study period. Klebsiella pneumoniae, Staphylococcus aureus, Acinetobacter species, Escherichia coli and Pseudomonas aeruginosa were the microbes most often encountered. Acinetobacter species showed resistance against various antibiotics, whereas K. pneumoniae showed >50% resistance against fluoroquinolones, cephalosporins and carbapenems. Methicillin resistance was found in 43.80% of the Staph. aureus isolates. This study concludes that enhanced antibiotic resistance was found among gram-negative bacilli, specifically E. coli, K. pneumoniae and Acinetobacter species. The resistance pattern was not remarkable in gram-positive strains, although MRSA frequency was found to be rising.
Introduction
Research in the field of cancer has increased in the past few years. Despite newer approaches to treating cancer, an important source of morbidity and mortality is the onset of numerous types of infection in cancer patients. The development of multiple types of infections in patients admitted to the oncology ward is to be expected. The major reason is that chemotherapy and the cancer itself leave the immunity of these admitted patients compromised (1). The immunity of the cancer patient remains compromised due to the particular nature of the disease as well as the interruption caused by chemotherapy. Many other factors also contribute significantly to the onset of bacterial infections in cancer patients. The infections that emerge in cancer patients disrupt the treatment pattern, prolong the hospital stay, increase treatment costs, and reduce the survival rate of patients. Infection-associated mortality in cancer patients is attributed majorly to bacteria and then to fungi (2). Successful treatment of infection is possible only if antibiotic therapy is selected appropriately using good knowledge of sensitivity patterns as well as of the commonly occurring bacteria. This is the reason for a decline in bacterial infections caused by gram-negative (-ve) pathogens over the past two decades (3). The identification, treatment and prevention of infections need a striking knowledge of the ever-altering spectrum of infections. In most cases, infection management is a main concern among patients, since infections are considered one of the main causes of patient illness, which could lead to mortality if not handled properly (4). The epidemiology of bacterial infections among cancer patients has changed significantly over recent decades and has shifted from gram -ve to gram +ve pathogens. In numerous countries of the world, gram -ve pathogens have dominated the scene as a principal cause of infections among cancer patients (5,6). Not only has the mortality rate among patients been reduced with the appropriate use of antibiotics, but the danger of multidrug resistance has also been diminished (7). The incidence of multidrug-resistant bacteria is directly associated with patients having compromised immunity. The members of the Enterobacteriaceae group are the culprits of a handful of such infections. In many Asian countries including India, Pakistan and Bangladesh, the pattern of resistance against cephalosporins is already remarkable (8). The resistance against carbapenems is due to extensive use of this antibiotic against infections in which the microbes produce carbapenemases (9). In India, the incidence of metallo-β-lactamase producing microbes is on the rise (10).
Another matter of great concern is the rising resistance against antibiotics in gram-positive strains. An important result was published in a study from Northern India, in which a higher incidence of MRSA (methicillin-resistant Staph. aureus) in clinical samples was found (11). In the same way, resistance among enterococci isolates has also been increasing for glycopeptide antibiotics (12). Broad-spectrum antibiotics are tried empirically to treat infections in cancer patients until the culture test reports are received. Clinicians need to know the pattern of antibacterial susceptibility, which might differ from one geographical zone to another. The study was designed to investigate the bacteria isolated from infections in cancer patients reported to a tertiary care cancer hospital at the Medical City, Makkah, Saudi Arabia.
Ethical approval
The study was conducted at the Laboratory Medicine Department, Faculty of Applied Medical Sciences, Umm Al-Qura University, Makkah, Saudi Arabia, in accordance with the declaration of Helsinki and its amendments. The studies involving human participants were reviewed and approved by the Internal Review Board of the local Human Research Ethics Committee of Security Forces Hospital Makkah (SFHM) (Reference No. 0430-140621). The samples were taken from the patients only for this current investigation and for no other study. Written informed consent was obtained from all the study patients after the elucidation of the purpose and objective of the study to them. All obtained information was kept confidential by giving a unique code and evaluated only by the PI of this study. The lab results were communicated to doctors and nurses for better patient management.
Study design, area, and duration
This was a retrospective study that was conducted at the Laboratory Medicine Department, Faculty of Applied Medical Sciences, Umm Al-Qura University, Makkah, Saudi Arabia for a duration of one year from June 2020 to May 2021.
Demographic data of the patients
Sociodemographic variables including patient's age, education, profession, residence, marital status, and other relevant clinical data were collected using a predesigned questionnaire. Each sample was obtained for bacteriological lab analysis by a trained health care professional.
Sample size and sampling technique
During this study, 678 patients were enrolled at the early stage. Among those enrolled, 300 were available for microbiology workup, and samples were obtained. All surgical (solid tumors) and medical oncology (hemato-lymphoid malignancies) patients were enrolled in the study. The clinical isolates were obtained from urine, blood, respiratory samples, soft tissues and skin areas. The processing of the samples was done in accordance with the "Standard Microbiology Laboratory Operating Procedures". The isolates were identified to species level, and vulnerability tests were done as per the guidelines of the "Clinical Laboratory Standards Institute" (13,14).
Microbiological studies
The clinical samples were collected from suspected cases of infection. Samples were Gram stained and then inoculated onto chocolate agar, blood agar and MacConkey agar (HiMedia). Prepared samples were incubated in air at 35°C for 15 hours. Blood culturing was performed with the BacT/ALERT system (BioMerieux, USA). Positive cultures were sub-cultured onto chocolate agar, blood agar, and MacConkey agar (HiMedia) and then incubated at 35°C for 15 hours in air. Identification of the pathogenic growth and antimicrobial vulnerability assays of the isolates were performed using the VITEK 2 system (BioMerieux, France), and minimum inhibitory concentration values were interpreted as sensitive or resistant using the CLSI (Clinical and Laboratory Standards Institute) guidelines.
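To make the MIC-interpretation step concrete, the sketch below maps MIC values to susceptible/intermediate/resistant calls against breakpoint thresholds. The breakpoints shown are hypothetical placeholders, not actual CLSI values, and the code is illustrative rather than part of the VITEK 2 workflow.

```python
# Illustrative sketch only: classify an MIC against breakpoint thresholds.
# The values below are hypothetical placeholders, not real CLSI breakpoints.
BREAKPOINTS_UG_ML = {          # drug -> (susceptible if <=, resistant if >=)
    "gentamicin": (4, 16),
    "ceftazidime": (8, 32),
}

def interpret_mic(drug: str, mic_ug_ml: float) -> str:
    s_max, r_min = BREAKPOINTS_UG_ML[drug]
    if mic_ug_ml <= s_max:
        return "S"
    if mic_ug_ml >= r_min:
        return "R"
    return "I"  # intermediate

print(interpret_mic("gentamicin", 2))    # -> "S"
print(interpret_mic("ceftazidime", 64))  # -> "R"
```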
Antibiotics used for gram-positive pathogens
The following antibiotics were chosen for gram-positive pathogens:
Statistical analysis
Descriptive statistical methods were used for the analysis of the data, which was entered into Microsoft Excel 2010. The chi-square test and odds ratio were used to establish associations. The results are characterized in the form of frequency tables and graphs.
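A minimal sketch of this analysis on a hypothetical 2x2 table (ward by resistance status) is shown below; the counts are invented, and the odds ratio is computed directly from the cell counts.

```python
# Sketch of a chi-square test and odds ratio on an invented 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: medical vs. surgical ward; columns: resistant vs. susceptible.
table = np.array([[30, 70],
                  [18, 82]])
chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio from the 2x2 cell counts: (a*d) / (b*c).
(a, b), (c, d) = table
odds_ratio = (a * d) / (b * c)
print(f"chi2={chi2:.2f}, p={p:.3f}, OR={odds_ratio:.2f}")
```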
Results
A total of 678 patients were admitted to both medical and surgical oncology wards during the study period. Of these patients, 442 were diagnosed with a bacterial infection. Figure 1 displays the age distribution of all the admitted patients.
Patients having solid organ tumors had an average age of 50 years. The majority of the patients were in the 55-60 age group. The maximum patient age recorded was 72 years, whereas the minimum was 4 years. Patients with hematological malignancy had an average age of 32 years. The results are shown in Figure 2.

A total of 300 samples were collected from both medical (n=150) and surgical (n=150) oncology wards and were cultured during the study period. The details are displayed in Table 1.

The microbial profile for both medical and surgical cancer patients is given in Table 2.

The susceptibility pattern of the various microbes against antibiotics is shown in Figures 3-5.

Acinetobacter species showed resistance against various antibiotics, whereas K. pneumoniae showed >50% resistance against fluoroquinolones, cephalosporins and carbapenems. Methicillin resistance was found in 43.80% of the Staph. aureus isolates.

Discussion

The epidemiology of infections associated with cancer patients has shifted from gram-negative to gram-positive, and this is the observation of researchers around the globe. Bacteremia and pneumonia are the infections seen most often in cancer patients (15). The results of our study showed relatively more patients with skin and soft tissue infections in comparison to those with UTIs (Table 1). In our investigation, a broad range of both gram-positive and gram-negative organisms was isolated, as shown in Table 2. The results displayed a higher resistance of E. coli and K. pneumoniae against third-generation cephalosporins, including cefotaxime and ceftazidime, as well as beta-lactamase inhibitors. These results are consistent with studies conducted in Karnataka (16) and Bhopal (17). Roughly fifty percent of the strains of both E. coli and K. pneumoniae were found to be extended-spectrum beta-lactamase producers. Susceptibility to aminoglycosides, including gentamycin and amikacin, was retained by 50% of the P. aeruginosa and E. coli isolates, though >50% of Acinetobacter and K. pneumoniae isolates were resistant to gentamicin.

A Chinese study showed a total resistance of 6.6%, with less than half of the isolates producing the carbapenemases KPC-2, IMP-4, and NDM-1 (18). Another Indian study indicated NDM-1 from numerous sites, majorly among E. coli and K. pneumoniae, which were found to be highly prevalent and resistant to all antibiotics used in the study with the exception of tigecycline and colistin (19). In our study, molecular assays to identify carbapenemases were not done, and we therefore failed to characterize them (20,21). A study done in Odisha displayed a higher resistance rate of Acinetobacter species to various antibiotics, including ceftazidime (9%), meropenem (22%) and gentamycin (76%) (22). Very few isolates of Acinetobacter species were found to be colistin-resistant. Antibiotic resistance was not significant among the gram-positive bacteria: during the current study, it was observed that gram-positive organisms were not that resistant to the antibiotics used. There is a scarcity of reports on the antibiotic vulnerability of the different bacteria and other microorganisms encountered in patients admitted to the oncology department for the treatment of various types of cancer. With the increase in the number of cancer patients, more information is needed to understand and explore the vulnerability of the infection-causing bacteria to the antibiotics used in their treatment. This information will not only reduce the duration of healing, and thus the suffering of cancer patients, but will save therapy costs as well if the right antibiotic is used for the treatment of a specific infection. There is a lot of data available on how microorganisms develop resistance to antibiotics, and this is a matter of great concern to physicians around the world, especially in countries with a poor economy. Some of the key factors considered responsible for developing antibiotic resistance include antibiotic misuse and the use of antibiotics in different sources of food such as meat, poultry and dairy products. It has been established through many studies that antibiotics are used in the production process of these food sources (23-26). Health regulatory authorities and the WHO itself are striving to make policies about how and when to use antibiotics at the national and international levels. One such effort is the "Chennai declaration", an initiative that provides advice and recommendations on this issue (27-29). Hopefully, the information presented in this manuscript will help address the problem of antibiotic resistance, since with this information a patient can be treated effectively using the right antibiotics.

This study revealed enhanced antibiotic resistance among gram-negative bacilli, specifically E. coli, K. pneumoniae and Acinetobacter species. The resistance pattern was not remarkable in gram-positive strains, although MRSA frequency was found to be rising.
Wetland microhabitats support distinct communities of aquatic macroinvertebrates
ABSTRACT The drivers of aquatic macroinvertebrate distribution in Prairie Pothole Region wetlands are not as well understood as in other aquatic ecosystems (e.g. rivers or lakes). We collected aquatic macroinvertebrates from 35 fishless prairie pothole wetlands in Alberta, including two habitat zones: the emergent zone and the open-water zone. Within each zone, we collected a vegetation sample and a water column sample, thus capturing four distinct microhabitats. We tested for community differences among these microhabitats with nested ANOVAs, looking at macroinvertebrate abundance, taxa richness, and evenness. We also visualized trends in community composition among the microhabitats with nonmetric multidimensional scaling ordination. Interestingly, we observed no difference in macroinvertebrate communities between the open-water and the emergent habitat zones. However, we found significant differences in richness and evenness between water column and vegetation sample types nested within habitat zones. Additionally, we observed high taxonomic turnover between sample types. Our results emphasize the importance of within-zone microhabitats in structuring aquatic macroinvertebrate communities in prairie pothole wetlands, and the relative insignificance of emergent and open-water habitat zone distinctions. Future analyses of macroinvertebrates in wetlands should sample both the vegetation and the water column, regardless of habitat zone, to prevent biased surveys of macroinvertebrate communities.
Introduction
The Prairie Pothole Region (PPR) spans over 700,000 km², crossing three central provinces in Canada and five US states. The region is characterized by small, relatively isolated depressional wetlands known as prairie potholes. The prairie potholes of the PPR do not have well-developed surface inflows and outflows (LaBaugh et al. 1998) and consequently are mainly influenced by precipitation (Euliss et al. 1999). Prairie pothole wetlands flood with snowmelt in the spring and most dry out within weeks or months due to high levels of evapotranspiration (Hayashi et al. 2016). Aquatic macroinvertebrates inhabit these productive wetland ecosystems, providing an important trophic link between macrophytes and vertebrates (Zimmer et al. 2003; Wrubleski and Ross 2011). The importance of macroinvertebrates as a food source to nesting waterfowl in the PPR (Wrubleski and Ross 2011) has prompted research into the drivers of macroinvertebrate community composition in these wetlands. Nearly two decades have passed since Euliss et al. (1999) suggested that macroinvertebrate communities in prairie pothole wetlands would be dominated by generalist taxa that can tolerate the dynamic environment created by natural draw-down cycles. However, since then, results of wetland macroinvertebrate studies have often been contradictory; a review by Batzer (2013) concludes that the determinants of wetland macroinvertebrate distributions remain uncertain.
One factor often cited as driving variation in wetland macroinvertebrate communities is vegetation structure (e.g. Zimmer et al. 2000). The composition and structure of vegetation influences wetland invertebrate community structure, as macrophytes serve as a source of food, a refuge from predators, and an egg-laying substrate (Batzer and Wissinger 1996; Keddy 2010). For example, Hanson et al. (2005) observed that aquatic invertebrates increased in abundance in shallow, heavily vegetated wetlands compared to less vegetated sites. Christensen and Crumpton (2010) agree that the presence of emergent vegetation has a significant role in determining invertebrate community structure. However, there is the potential for vegetation presence to be confounded with water depth (Zimmer et al. 2000) or fish presence (Maurer et al. 2014), and so the importance of vegetation in structuring macroinvertebrate communities is not clear.
In prairie pothole wetlands, vegetation-based habitat zones assemble along a moisture gradient, with submersed and floating vegetation typically at the wetland center and emergent vegetation surrounding it (Stewart and Kantrud 1971). The physical structure of these zones is highly distinct, with emergent cattails, sedges, and grasses providing mainly vertically oriented leaves and stems that connect the sediment to the water surface and protrude beyond the water surface, whereas submersed aquatic vegetation is typically delicate with finely divided buoyant leaves that fill the water column. These different habitat zones presumably create different microhabitats for feeding, emergence, and oviposition. Thus, we expect distinct invertebrate communities to occupy these two habitat zones.
Despite extensive evidence that microhabitat structure exerts an important influence on macroinvertebrate communities in rivers (e.g. Gregory 2005; Henshall et al. 2011; Verdonschot et al. 2016) and lakes (e.g. Weatherhead and James 2001; Bazzanti et al. 2009; Sychra et al. 2010; Cai et al. 2011), the associations between wetland invertebrates and vegetation zonation in wetlands of the Northern Prairie Pothole Region (NPPR) have received relatively little study. In lakes, for example, macroinvertebrates exhibit taxonomic turnover between the deeper open-water and the littoral zone (Sychra et al. 2010), and different lake microhabitats (benthic, macrophyte patches, open-water) are known to support distinct macroinvertebrate communities (Weatherhead and James 2001). These community differences are often attributed to the life history of particular taxonomic groups, as invertebrates with similar life histories will often exploit the same microhabitats (Vannote et al. 1980; Bazzanti et al. 2009). In lotic environments, for example, shredders will usually prefer the shelter of vegetation, whereas filter feeders and predatory invertebrates will occur in higher abundance in the water column to take advantage of pelagic resources (Wallace and Anderson 1996).
Our goal was to determine whether similar habitat partitioning of macroinvertebrate taxa among microhabitats is evident in prairie pothole wetlands. We hypothesized that there would be a significant difference in macroinvertebrate abundance, taxa richness, evenness, and community composition between the two primary habitat zones that characterize prairie potholes in the NPPR, open-water and emergent vegetation, based on macroinvertebrate functional traits. Further, nested within those zones, we expected to detect differences between invertebrate taxa using the water column and those taking refuge in or feeding on wetland vegetation. We anticipated that agile swimmers like Corixidae would occupy the open-water zone and the water column, whereas climbers like Zygoptera or grazers like Lymnaeidae would reside among the emergent vegetation (e.g. Sychra et al. 2010). Thus, we tested for differences in macroinvertebrate community structure among four distinct microhabitats typical of prairie pothole wetlands. We sought to control the potentially confounding effect of fish predation by sampling only fishless wetlands, which dominate Alberta's NPPR. Further, we sought to control the effect of water depth by sampling in both open-water and emergent vegetation zones across the same gradient in water depth. Our results should inform sampling protocols to ensure comprehensive and representative sample collection of wetland macroinvertebrates.
Wetland sites and microhabitat distinction
We selected 35 prairie pothole wetlands for our study from the sample frame created by Alberta's provincial wetland inventory (Figure 1; Government of Alberta 2014), following a protocol described in detail in Gleason and Rooney (2017). The wetlands were all situated within either the Parkland or Grassland Natural Regions of Alberta and each included both an open-water zone and an emergent vegetation zone. Collectively, the sites spanned a range in May water depth from 0.29 to 1.02 m (see Appendix 1 (Supplemental material)).
At each wetland, we first delineated the areal extent of the emergent vegetation zone, where rooted macrophytes grow and protrude beyond the surface of the water. These macrophytes were typically cattail, bulrushes, or robust Carex spp.
Macroinvertebrate collection and identification
Sampling for aquatic macroinvertebrates took place during early May of 2014 and 2015. Macroinvertebrate sampling followed the quadrat-column-core (QCC) method described by Meyer et al. (2013) and modified by Gleason and Rooney (2017) for use in prairie pothole wetlands. In brief, we used a floating 0.25-m² quadrat to collect emergent or submerged vegetation. In the emergent zone, vegetation within the floating quadrat was collected by clipping within 2 cm of the substrate, whereas in the open-water zone, submersed or floating vegetation was collected with a rake into a bucket. The collected vegetation was then rinsed repeatedly in buckets of filtered water to dislodge clinging invertebrates. The rinse water was filtered through a 500-μm mesh sieve and the collected residues preserved in 90% ethanol. A Marchant box was used to randomly sub-sample invertebrates in vegetation samples to an enumeration total of 300, based on our initial collector's curves. Water column samples were collected in clear acrylic tubes of 10 cm inner diameter to integrate across water depth. The tube was inserted vertically into the water to a depth just above the sediment. The entire contents of the tubes were emptied into a 500-μm mesh sieve, and the residues preserved in 90% ethanol. The water column samples were enumerated in their entirety.
Collected macroinvertebrates were identified to family level for most taxa using keys by Clifford (1991) and Merritt et al. (2008). The total number of individuals in each sample was recorded for each taxon. See Appendix 2 (Supplemental material) for details on taxonomic resolution, by order.
Data analysis
Because there was no significant difference in richness or abundance of invertebrates between samples collected in 2014 and 2015 (Mann-Whitney U tests, p > 0.05), the data from the two years were combined and analyzed jointly. Macroinvertebrate abundances were converted to counts per m² to relativize the different areas captured by the two sample types. Taxa richness was then a count of all taxa observed within each microhabitat at the wetland. We calculated Simpson's dominance to measure community evenness using the following formula: D = Σ (n/N)², where n is the number of individuals of a given taxon and N is the total number of individuals (Magurran 2004).
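A small sketch of these two computations, per-m² standardization and Simpson's dominance, is given below. The counts are invented; the quadrat and tube areas follow the dimensions given in the methods (0.25-m² quadrat; 10-cm-diameter column sampler).

```python
# Illustrative sketch: standardize counts to per-m² and compute Simpson's
# dominance D = sum((n_i / N)^2); higher D means a less even community.
import math

def per_m2(count: float, area_m2: float) -> float:
    return count / area_m2

def simpsons_dominance(counts) -> float:
    n_total = sum(counts)
    return sum((n / n_total) ** 2 for n in counts)

quadrat_area = 0.25                  # m², floating quadrat
tube_area = math.pi * 0.05 ** 2      # m², 10-cm-diameter column (~0.00785)

abundance = per_m2(30, quadrat_area)          # 120 individuals per m²
D = simpsons_dominance([120, 40, 25, 10, 5])  # invented taxon counts
print(f"{abundance:.0f} ind/m², D = {D:.3f}")
```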
To determine if there was a significant difference in (1) abundance (individuals per m 2 ), (2) taxa richness (number of taxa observed), and (3) evenness among the four microhabitats sampled, we performed three nested ANOVAs using SYSTAT version 13.0 (SYSTAT Software, San Jose, CA). We used a nested model because the sample types (vegetation sample and water column sample) are embedded within the habitat zones (emergent zone and open-water zone). Prior to analysis, values for total invertebrate abundance and taxa richness were log transformed and square-root transformed, respectively, to achieve a normal distribution.
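For readers who prefer a scriptable equivalent of the SYSTAT analysis, the sketch below fits a nested model in Python with statsmodels, using the patsy "/" operator to nest sample type within habitat zone; the data frame is randomly generated and stands in for the field data.

```python
# Hedged sketch (not the authors' SYSTAT code) of a nested ANOVA in Python.
# C(zone)/C(stype) expands to C(zone) + C(zone):C(stype), nesting sample
# type within habitat zone.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "zone": np.repeat(["emergent", "open_water"], 70),
    "stype": np.tile(np.repeat(["vegetation", "water_column"], 35), 2),
    "richness": rng.poisson(8, 140),
})
df["sqrt_rich"] = np.sqrt(df["richness"])  # transform used in the paper

model = smf.ols("sqrt_rich ~ C(zone) / C(stype)", data=df).fit()
print(anova_lm(model))
```

Note that the paper tests the zone effect against the nested-factor mean square (hence the 2 denominator df reported below), whereas anova_lm's default table tests every term against the residual error; that F-ratio would need to be formed manually from the table's mean squares.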
To characterize differences in macroinvertebrate community composition among microhabitats, we performed a nonmetric multidimensional scaling (NMDS) ordination on the Bray-Curtis distance measure. To do so, we used the 'metaMDS' function in the vegan package (Oksanen et al. 2016) in R statistical software (R Core Team 2016). Prior to ordination analysis, rare taxa (detected in fewer than five sites) were excluded from the dataset to reduce sparsity. Taxon abundances were relativized by the maximum abundance of each taxon. Ninety percent confidence ellipses were delineated in order to visualize trends in the data, and taxa whose abundance was reasonably correlated (r² > 0.15; using the function 'envfit' in the vegan package) to at least one ordination axis were overlaid as vectors. All graphing was performed in R statistical software (R Core Team 2016) using the package ggplot2 (Wickham 2009).
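The ordination step can be approximated outside R as follows; this sketch uses SciPy's Bray-Curtis distance and scikit-learn's non-metric MDS in place of vegan::metaMDS, with a randomly generated community matrix relativized by each taxon's maximum, as described above.

```python
# Sketch of NMDS on Bray-Curtis distances; the community matrix is invented.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
comm = rng.poisson(3, size=(20, 12)).astype(float)  # 20 samples x 12 taxa
comm /= comm.max(axis=0)          # relativize by each taxon's maximum

dist = squareform(pdist(comm, metric="braycurtis"))
nmds = MDS(n_components=3, metric=False, dissimilarity="precomputed",
           n_init=10, random_state=0)
scores = nmds.fit_transform(dist)  # 3-dimensional sample scores
print("stress:", round(nmds.stress_, 3))
```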
Results
We collected 56 macroinvertebrate taxa from 35 prairie pothole wetlands in Alberta. The most abundant taxa in all microhabitats were Chironomidae and Ostracoda, which were present in every sample. In water column samples, Conchostraca and Lestidae were consistently abundant, regardless of habitat zone. Gastropod families (Planorbidae and Lymnaeidae) were abundant in the vegetation samples (Appendices 2 and 3 (Supplemental material)).
Macroinvertebrate abundance (F1,2 = 1.72, p = 0.319), taxa richness (F1,2 = 0.602, p = 0.519), and evenness (F1,2 = 2.44, p = 0.259) did not differ significantly between open-water and emergent vegetation habitat zones (Figure 3). There was no significant difference in abundance (F2,136 = 0.75, p = 0.476) between water column and vegetation sample types nested within habitat zones, but there was a strongly significant difference in taxa richness (F2,136 = 92.9, p < 0.0001) and a slight but significant difference in evenness (F2,136 = 3.92, p = 0.022) between water column and vegetation subsamples nested within the two habitat zones (Figure 3). Taxa richness was higher in vegetation samples than in water column samples, regardless of wetland zone.
The optimal NMDS solution for macroinvertebrate community composition was three dimensions after 146 iterations, with a final stress of 21.71 (Procrustes RMSE = 0.0007, max residual = 0.004). Axis 1 of the ordination segregated samples of the water column from vegetation samples, regardless of habitat zone (Figure 4). Axis 2 did not differentiate among the four microhabitats. However, axis 3 provided some segregation of vegetation samples collected from the emergent and open-water zones, though water column samples overlapped substantially. Indeed, the water column samples from the emergent zone reflect a nested subset of the water column samples from the open-water zone.
As suggested by our ANOVAs and multivariate analysis, we conclude that aquatic macroinvertebrate taxa richness, evenness, and diversity differ between water column and vegetation samples, regardless of wetland habitat zone. In contrast, we see limited differentiation between habitat zones. The invertebrates collected from the water column are particularly indistinguishable between openwater and emergent habitat zones.
Discussion
Though we observed no difference in abundance, taxa richness, or evenness between habitat zones (emergent and open-water), we detected differences in the richness and evenness of macroinvertebrates collected from the water column and vegetation microhabitats nested within habitat zones. Taxa richness was higher in vegetation samples than water column samples, regardless of habitat zone. Similarly, macroinvertebrate community composition differed significantly between water column and vegetation samples. Though differences between the open-water and emergent vegetation zone were not detected, we did observe some differentiation between macroinvertebrate communities in these zones when only the vegetation quadrat samples are considered. We conclude that vegetation exerts a strong influence on macroinvertebrate community structure, regardless of zone (e.g. emergent or submerged/floating vegetation).
There is little published work comparing macroinvertebrate distributions among wetland microhabitats, though research into the segregation of macroinvertebrates among different microhabitats in shallow lakes offers some grounds for comparison. For example, in shallow Polish lakes, invertebrate abundance and taxa richness were positively correlated with macrophyte presence and richness (Żbikowski and Kobak 2007). Similarly, research from a shallow lake in China concluded that macroinvertebrate diversity and community evenness were higher in vegetated areas compared to open-water (Cai et al. 2011). Like these studies in lakes, we observed higher taxa richness in vegetated samples; however, we also detected greater community dominance in vegetated samples, with water column samples yielding more even communities. Our vegetation samples were dominated by large numbers of Chironomidae, which led to the reduced evenness. Because richness and evenness displayed opposing relationships to sample type in our study, the difference in biodiversity between vegetated and water column microhabitats is not clear. It is possible that greater taxonomic resolution would reveal a different diversity pattern, but our study was limited to primarily family-level identifications (Appendix 2 (Supplemental material)).
Microhabitats support functional diversity
We observed high taxonomic turnover between water column and vegetation quadrat sample types, regardless of which habitat zone they were collected from. We believe this is because water column and vegetation microhabitats support taxa of differing functional groups that are able to take advantage of distinct ecological niches. Sychra et al. (2010) reported that free-swimming taxa preferred open-water habitat, whereas grazing macroinvertebrates such as snails were associated with areas possessing dense macrophytes. In support of this, we observed a positive association between free-swimming taxa and water column samples. Water column samples were characterized by abundant Culicidae and Lestidae larvae, more Ostracoda, and more adult and larval Dytiscidae. Adult Dytiscidae are free-swimming predaceous diving beetles that we expect to spend more time foraging in the water column than hiding in vegetation. Both Culicidae larvae and Ostracoda are free-swimming filter feeders, likely also achieving greater foraging success in more pelagic habitat. In contrast, Limnephilidae trichopterans dominated the vegetation samples from both open-water and emergent vegetation zones. Limnephilidae are mainly detritivores that feed on decaying plant matter, which would be in abundance within the vegetation quadrats we sampled.
The association between Lestidae and water column samples challenges our functional guild framework, as these predaceous odonate nymphs are typically described as vegetation climbers; however, some species within the family are categorized as climber-swimmers and will swim to hunt (Tennessen 2008). Both Weatherhead and James (2001) and Hinden et al. (2005) reported that odonate richness increased with macrophyte biomass, yet odonate families were not more common in vegetation samples from our wetlands. Interestingly, though both Lestidae damselflies and Ostracoda were more abundant in water column samples than vegetation samples, we did collect some of both taxa in our vegetation samples. Where this occurred, ostracods were more common in vegetation samples collected from the open-water zone, whereas Lestidae nymphs were more common in vegetation samples collected from the emergent vegetation zone. This suggests that ostracods may swim among the submersed macrophytes but appear to avoid emergent ones. Lestidae, in contrast, will more commonly seek refuge among the robust vertical stalks of emergent plants, but avoid the thinly divided foliage of submersed aquatic vegetation. Perhaps this association with emergent macrophytes explains the discrepancy between our observations and the general descriptions of Lestidae habitat use.
Implications
We highlight the great similarity in the wetland macroinvertebrate communities occupying the open-water and emergent vegetation zones in prairie pothole wetlands. This similarity is surprising in light of the relatively large and consistent differences in richness, evenness, and taxonomic composition of macroinvertebrates collected from water column samples as opposed to vegetation quadrat samples, nested within those zones. Though we were able to explain many of the patterns in community composition by applying a functional guild framework, there remains unexplained variation in macroinvertebrate community composition, and not all differences between invertebrates occupying the water column versus those residing in vegetation are attributable to feeding strategy or behavioral guild, perhaps due to limited taxonomic resolution.
Importantly, our results have implications for those planning to sample macroinvertebrates in wetland habitats. Whereas sampling to obtain representation from different habitat zones is often stressed in invertebrate sampling protocols, our results suggest that a comprehensive sampling of macroinvertebrate diversity depends more on the collection of different sample types than on sampling different habitat zones. For future sampling of macroinvertebrates in prairie pothole wetlands, it is integral that samples are taken from both the vegetation and the water column to adequately reflect the diversity of macroinvertebrates present. Finally, our work highlights the complexity of wetland microhabitats and the macroinvertebrate taxa they support. This may be informative for studies on waterfowl and other vertebrates that prey on invertebrates, as well as informing management practices.
HYBRID FORECAST SYSTEM OF OVERTOPPING WITH INFRAGRAVITY WAVE
INTRODUCTION
Overtopping is defined as flow exceeding the freeboard of coastal structures, mainly due to oscillating wave action. This is why the overtopping of structures is a process inherited from the run-up, which depends on various factors such as waves, infragravity waves, instantaneous sea level, and physical characteristics of the structure (e.g. slope, material of protective layers, porosity, apparent friction, and depth at the foot of the structure). However, when waves reach the high energies associated with storms, infragravity fluctuations are known to be particularly significant. Therefore, the resulting infragravity wave associated with the swell also oscillates over tens of centimeters.
Traditionally, the evaluation of overtopping on coastal structures is carried out by combining meteo-oceanographic information with the analytical evaluation of the variable through semi-empirical formulae (EurOtop, 2018). Nevertheless, this approach is only associated with waves and sea level, with no consideration given to, e.g., wind and infragravity wave effects. Wind and infragravity waves can dramatically modify run-up and overtopping values, especially under high waves and on reflective beaches (de Beer et al., 2021; Roelvink et al., 2018; McCall et al., 2014). Furthermore, the overtopping may behave differently than predicted by semi-empirical tools due to changes in both flow rate and geometry.
Nowadays, phase-resolving wave propagation models can be used as approximation methods to calculate run-up and overtopping processes. They take into account wave dispersion and non-linear interactions between spectral components, allowing the transfer of incident energy to super- and sub-harmonics; this makes them suitable for modelling coastal run-up processes caused by waves and infragravity waves. These models solve the depth-integrated hydrodynamic equations of flow motion, with time evolution based on long-wave theory. Examples include models based on the Boussinesq equations, models solving the Reynolds-Averaged Navier-Stokes (RANS) equations in simplified form, and models solving the Non-Linear Shallow Water (NLSW) equations.
This contribution does not concentrate on the effect of wind. However, it takes advantage of the SO3 prediction system (Alfaro, 2017), which combines waves and infragravity waves, coupling it to a model that solves the NLSW equations, namely the non-hydrostatic version of the XBeach model (Roelvink et al., 2009), to determine the overtopping. The availability of joint wave and infragravity wave variables provides an opportunity to better understand the processes resulting from the interaction of waves with the coast. It can also provide an improved method for the functional analysis of coastal structures, yielding significantly higher, and more realistic, overtopping flow rates, thereby allowing more realistic studies for the design of protective structures.
The study site is a section of Caldera beach located in the Golfo de Nicoya, on the Pacific coast of Costa Rica (see Figure 1A). Over the years, this site has experienced frequent overtopping events over the perimeter of the rubble mound structure that protects a segment of the beach (see Figure 1B). This situation endangers people, a major road running parallel to the beach, and access to the main commercial port on the Pacific coast of Costa Rica.
METHODOLOGY
The XBeach model (Roelvink et al., 2009), in its non-hydrostatic module and its one-dimensional (1D) scheme, was used to calculate the overtopping events, specifically along the profile located in front of the Caldera beach section where overtopping of the beach protection structure occurs (Figure 1A). This model was preferred over others due to its low computational cost.
Before the overtopping calculations, the ability of the 1D model to properly generate and propagate both short waves and infragravity waves along the profile was tested; the 1D model was calibrated and validated with field measurements carried out during different months in 2019, 2021 and 2022 (Camacho, 2022; Alfaro-Chavarria & Govaere-Vicarioli, n.d.). The calibrated parameters were the maximum wave steepness parameter (maxbrsteep) and the wave steepness criterion to reform after breaking (reformsteep), which were assigned values of 0.7 and 0.6 based on the reflective character of the beach; in addition, the White-Colebrook method with a grain size of D90 = 9×10⁻⁴ m, as estimated by Govaere-Vicarioli et al. (2013), was used to calculate bed friction in the model. Once the hydrodynamic capability of the model was verified, a series of numerical experiments was performed based on three different wave input configurations: the first (TS) corresponded to forcing the model with the free-surface time series recorded by pressure gauge 1 (Figure 1C); the second (spcSW) forced the model with wave parameters (Hs and Tp) associated with the same time series, which the model uses to generate a JONSWAP-type spectrum and then a random free-surface time series; the third configuration (spcSW+IGW) consisted of forcing XBeach with a directional spectrum generated by the operational system SO3 (Alfaro, 2017). All configurations used the same bathymetric-topographic profile.
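The spcSW idea can be illustrated with a short sketch: build a JONSWAP spectrum from Hs and Tp, then synthesize a random-phase free-surface series. This is a minimal illustration under stated assumptions, not XBeach's internal generator: the peak-enhancement factor (gamma = 3.3), the frequency grid and the example sea state are standard assumptions, not values taken from the paper.

```python
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """JONSWAP variance density spectrum, rescaled so that 4*sqrt(m0) = hs."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)            # spectral width parameters
    peak = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    s = f ** -5.0 * np.exp(-1.25 * (fp / f) ** 4) * gamma**peak
    m0 = np.sum(s) * (f[1] - f[0])                   # zeroth spectral moment
    return s * (hs / (4.0 * np.sqrt(m0))) ** 2

f = np.linspace(0.01, 1.0, 500)                      # frequency axis [Hz]
df = f[1] - f[0]
S = jonswap(f, hs=2.5, tp=14.0)                      # hypothetical storm sea state

# Random-phase synthesis; each repetition yields a different realization,
# which is why the spcSW runs are repeated 100 times in the paper.
rng = np.random.default_rng()
phases = rng.uniform(0.0, 2.0 * np.pi, f.size)
t = np.arange(0.0, 3600.0, 0.5)                      # one-hour sea state, dt = 0.5 s
eta = (np.sqrt(2.0 * S * df)[:, None]
       * np.cos(2.0 * np.pi * f[:, None] * t[None, :] + phases[:, None])).sum(axis=0)
print("synthesized Hs ~", 4.0 * np.sqrt(eta.var()))
```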
Beach profile and free-surface data were collected during a 7-day field survey between August 31 and September 6, 2023, which overlapped with one of the major storms that caused overtopping at the study site during 2023 (Piña, 2023) (Figure 1B). The profile was measured with an echo sounder in the water and with a topographic survey on the dry beach, extending from 15 m depth to the surf and swash zones (Figure 1C). Free-surface data were measured by three pressure gauges placed along the profile, which were set at 1 Hz to measure 1048 samples hourly to represent the sea states; significant wave height and peak period (Hs and Tp) were calculated for each sea state.
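A minimal sketch of the spectral estimation of Hs and Tp from an hourly free-surface record is shown below; the Welch settings are illustrative assumptions rather than the authors' processing choices, and the synthetic record stands in for a pressure-gauge signal.

```python
import numpy as np
from scipy.signal import welch

def sea_state_parameters(eta, fs=1.0):
    f, psd = welch(eta, fs=fs, nperseg=256)      # variance density spectrum
    m0 = np.sum(psd) * (f[1] - f[0])             # zeroth spectral moment
    hs = 4.0 * np.sqrt(m0)                       # significant wave height
    ipk = 1 + np.argmax(psd[1:])                 # skip f = 0 when locating the peak
    return hs, 1.0 / f[ipk]                      # (Hs, Tp)

# Usage with synthetic data standing in for an hourly 1 Hz record:
rng = np.random.default_rng(1)
eta = rng.standard_normal(3600) * 0.5
print(sea_state_parameters(eta))
```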
The SO3 forecasts directional spectra with their respective wave parameters (Hs, Tp and Dir) and infragravity wave parameters (HIG and TIG) every 3 hours for the next 7.5 days, at different sites (nodes) distributed inside the Gulf of Nicoya; the SO3 has one of its nodes near pressure gauge 1 (Figure 1A). The SO3 was calibrated and validated by Alfaro (2017) and is based on a hybrid downscaling model derived from a production hindcast reanalysis (Tolman and Chalikov, 1996).
In each of the three configurations, two numerical experiments were conducted: the first (Exp 1) simulated a sea state measured at 16:00 hours on September 2, 2023, which is known to have generated overtopping but not in an excessive manner; the second (Exp 2) was a sea state measured at 5:00 hours on September 3, 2023, which according to people from the community was when most overtopping occurred. Each experiment had a duration of 1 hour to comply with the sea-state concept; this time was considered sufficient to ensure adequate randomness of the generated wave phases, adequate statistical significance, and a proper energetic integration of the waves. The spcSW and spcSW+IGW configurations were repeated a total of 100 times each, to evaluate the effect of the phase randomization performed by the model. The 100 simulations took approximately 2.5 hours on a computer with a Xeon W-2133 CPU @ 3.6 GHz, with 6 cores and 128 GB of RAM.
The tide levels corresponding to the times when the sea states were measured were associated with Exp 1 and Exp 2, and were 3 m and 3.2 m, respectively. Table 1 summarizes the experiments carried out in each configuration.
RESULTS
The skill of the XBeach model in its 1D numerical scheme to generate and propagate waves and infragravity waves along the profile was tested; the XBeach model was forced with the free-surface time series measured by pressure gauge 1 during the field campaign carried out between August and September 2023. The results were compared with the data measured by the pressure gauges at the three locations where they were deployed (see Figure 1C), using free-surface time series plots. Figure 2 compares the modelled and measured free-surface time series at the three locations, corresponding to a sea state with noticeable wave grouping characteristic of swell.
Figure 2A demonstrates a good fit between the model and the data measured by pressure gauge 1, in both amplitude and phase. Figure 2B shows a fit of the phases generated by the model, with a slight increase in amplitude. Figure 2C corresponds to the site where pressure gauge 3 was placed, and shows some time lags between the time series, as well as a slight decrease in amplitude in the model. The validation process confirmed the capability of the XBeach model for wave propagation along the beach profile.

Table 2 shows that the TS configuration produced the highest number of overtopping events in the two experiments, with totals of 1 and 25 for Exp 1 and Exp 2, respectively. This makes sense because the wave magnitude of Exp 1 was smaller than that of Exp 2. The spcSW and spcSW+IGW configurations of Exp 1 generated on average 0.01 and 0.03 overtopping events over the 100 simulations. In comparison, spcSW and spcSW+IGW in Exp 2 generated an average of 8 and 9 overtopping events, respectively, over the 100 simulations performed for each; in this experiment spcSW+IGW generated 12.5% more overtopping events than spcSW, but this represents only 36% of the overtopping events generated by TS, while spcSW generated 32% of the overtopping events relative to TS.

The two experiments presented in Table 1 were then repeated using the same methodology as before, but with the XBeach model run with default, uncalibrated parameters. After running the simulations (1 simulation for TS, 100 simulations for spcSW and 100 simulations for spcSW+IGW), none of the evaluated configurations produced any overtopping events.
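The counting of discrete overtopping events, as reported in Table 2, can be sketched as threshold-crossing detection on a modelled water-level series at the structure. The crest level of +8.8 m follows Figure 3; the input series here is a hypothetical stand-in, not model output.

```python
import numpy as np

def count_overtopping_events(eta, crest=8.8):
    """Count rising crossings of the crest level: each False -> True
    transition marks the start of a new overtopping event."""
    above = eta > crest
    return int(np.sum(~above[:-1] & above[1:]) + above[0])

rng = np.random.default_rng(2)
eta = 7.5 + 1.2 * rng.standard_normal(3600)   # stand-in water-level record [m]
print(count_overtopping_events(eta))
```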
CONCLUSIONS
The numerical experiments and their results confirm that the non-hydrostatic version of the calibrated XBeach model is capable of transferring wave energy to low frequencies along the surf and swash zones, as well as generating overtopping events on a stretch of Caldera beach. However, there is no overtopping under any of the configurations used in this study if the model is not calibrated.
The overtopping events produced by the spcSW and spcSW+IGW configurations were validated against TS, reports from inhabitants near the study site, and press articles on extreme events (Piña, 2023). It was found that the XBeach model produces the largest number of overtopping events when forced with free-surface time series (TS). When it is forced only with the wave energy content generated from parameters (spcSW), it produces the fewest overtopping events, while when it is forced with a directional spectrum containing energy in both the wave and infragravity frequency bands (spcSW+IGW), the number of overtopping events increases but is still lower than the TS results.
Exp 1 represents a sea state with an energy level close to the average; the overtopping generated in spcSW and spcSW+IGW after 100 simulations each reaches only 1% and 3%, respectively. On the other hand, Exp 2, which represents a more energetic sea state, generates overtopping in the 100 simulations performed for each of the spcSW and spcSW+IGW configurations; furthermore, both present similar values in terms of the average number of overtopping events obtained, but less than half of those achieved in TS. The spcSW configuration underestimates the number of overtopping events by 68%, while spcSW+IGW underestimates it by 64%.
Coupling the XBeach model with SO3 directional spectra, which contain both wave and infragravity wave energy, allows wave propagation from deep to shallow water, which is considered more realistic at sites where the incident wave is mainly swell. It is known that this type of wave carries energy related to infragravity oscillations; when these waves reach the coast, they can significantly impact sediment transport, alter beach morphology, and affect run-up and overtopping. In addition, forcing the XBeach model with directional spectra generated by the SO3 and then performing multiple simulations is a computationally low-cost technique.
At present, the system is working properly and predicted the overtopping events appropriately, which is why it is a good first approximation for reproducing overtopping events. This work could serve as the basis for a future operational overtopping system to help authorities manage this issue, mitigate material risks, and prevent emergencies.
Figure 1. View of the study zone. A) location of the profile at Caldera beach; B) overtopping event in September 2023; C) bathymetry profile along the cross-shore transect at Caldera beach and location of pressure gauges (1, 2 and 3).
Figure 2. Free-surface time series measured and modelled at three locations along the beach profile. A) pressure gauge 1, B) pressure gauge 2 and C) pressure gauge 3.
Figure 3. Results of Exp 2 in three configurations: TS (Figure 3A), and one of the 100 simulations performed in spcSW and spcSW+IGW (Figures 3B and 3C, respectively). Each panel displays the number of overtopping events over the crest (+8.8 m) of the beach profile during the simulation. The TS configuration produced the highest number of overtopping events, while spcSW generated the minimum number of overtopping events in comparison to TS and spcSW+IGW. This behavior was consistent in the two experiments (Exp 1 and Exp 2) and all simulations.
Endometrial sampling: a comparison between the Pipelle® endometrial sampler and the Endosampler®
Abstract Objective: To compare the adequacy of endometrial sampling by the Endosampler® and the Pipelle®. Methods: A total of 68 women were randomly assigned to submit to pre-hysterectomy endometrial sampling, either by the Endosampler® or the Pipelle®. The amount of endometrial tissue sampled was measured by calculating the percentage endometrium sampled by each of the two devices. Acceptance by the gynaecologist was measured on a linear scale. Results: The Endosampler® sampled significantly more endometrial tissue than the Pipelle® endometrial sampler (p-value = 0.03). Acceptance of the Endosampler® was better than that of the Pipelle® (p-value = 0.0005). With the use of the Pipelle®, three significant endometrial lesions were missed, including endometrial carcinoma in one instance. Conclusion: The Endosampler® appears to be an easy-to-use device for endometrial sampling, with reliable diagnostic yield.
Introduction
Dilatation and curettage (D and C) is used to collect endometrium in patients with abnormal uterine bleeding who are at risk of acquiring endometrial cancer. It has long been considered the gold standard against which other endometrial sampling techniques have been measured. The disadvantages of D and C include the use of a general or regional anaesthetic, and therefore the use of an operating theatre. D and C is consequently costly and time consuming, with potential complications from a general anaesthetic. For these reasons, several, mainly outpatient, procedures have replaced the D and C, and are now performed routinely. 1 Many outpatient endometrial sampling methods utilising disposable devices have been studied. Due to time constraints and studies involving small patient numbers, it has proven difficult to prove the superiority of one endometrial sampling device over another. Moreover, not all studies include diagnostic correlation with hysterectomy, which thus precludes assessment of the diagnostic accuracy of endometrial sampling techniques. [2][3] Over the years, the Pipelle® endometrial suction curette (Unimar, Wilton CONN) has become a very popular device in outpatient endometrial sampling. Reasons for its popularity are that the Pipelle® is easy to use, obtains supposedly enough tissue for diagnosis, and confers acceptable patient comfort. However, there are conflicting reports about its appeal, in particular when comparing the Vabra® aspirator with the Pipelle®, with Rodriguez et al reporting that the Pipelle® sampled significantly less endometrium than the Vabra® aspirator. The Endosampler® (MedGyn, Lombard, IL) is a disposable device that is used to sample endometrium. It works in the same way as the Vabra® aspirator, but without the cumbersome set-up. The purpose of the current study was to compare the yield of the Endosampler® with that of the Pipelle® in patients undergoing hysterectomy.
Method
A total of 68 consecutive patients who had been booked for hysterectomy were recruited for this study. The endometrial samples were obtained just prior to hysterectomy, while the patient was already anaesthetised. After inserting a catheter to empty the bladder prior to the operation, the endometrium was sampled. Patients were randomly assigned to either endometrial sampling using the Pipelle® endometrial sampler or the Endosampler®. All of the patients gave their written consent to undergo the endometrial sampling. Patients who had had an endometrial sample less than one month prior to hysterectomy were excluded from the study.
The Pipelle® endometrial sampler is a flexible plastic device which is 23.5 cm long, with a 3.1 mm external diameter. At the side of the rounded tip of the Pipelle® is a 2 mm circular opening (Figures 1a and 1b). After grasping the cervix with a single-tooth vulsellum, the device is inserted into the uterine cavity and negative pressure is created by withdrawing an internal piston. The length of the cavity was carefully recorded. With negative pressure, and with the opening against the endometrium, the Pipelle® was then moved five times back and forth to complete an approximate circle. While maintaining negative pressure, the Pipelle® was removed from the uterine cavity. Its contents were subsequently placed in a container with formalin, after the distal tip of the Pipelle® had been removed. The Endosampler® is a plastic device with a length of 23 cm. The external diameter of the tube is 3 mm. At the side of the rounded tip of the device is a 4 mm opening which is not entirely flush with the tube, but has the shape of a small curette (Figures 2a and 2b). Six centimetres from the tip is an angle of 160 degrees to accommodate the angle of the uterus. Negative pressure is created by a 10 ml syringe at the base of the device. The syringe can be detached from the base and has a spring lock to maintain negative pressure. The cervix was grasped with a single-tooth vulsellum, and after insertion of the device, and with the opening against the endometrium, five passes were made to complete an approximate circle. The device was removed from the cavity while maintaining negative pressure. The length of the cavity was carefully recorded and the spring lock was then unfastened. By pushing in the piston of the syringe, the contents were then deposited in a container containing formalin.
All biopsies were analysed by the same investigator, who was unaware of which device was used. In the laboratory, each uterus was opened in half along the lateral borders. The endometrial surfaces and the endometrial/myometrial interface were photographed in the fresh/unfixed state. A photomicrograph was produced, from which endometrial denudation was quantified. The surface area of the endometrial cavity after hysterectomy was photocopied from the photomicrograph onto gridded graph paper, and areas of denudation caused by endometrial sampling were blocked (Figure 3). These blocked areas were then proportionally assessed as the percentage of endometrial surface that had been sampled. This method has been described previously. 4 The endometrial biopsies and the uteruses were subsequently processed in the standard way. At least four sections of the endometrium were submitted (two from the anterior and two from the posterior surface). Additional sections from any macroscopically abnormal areas, or areas of denudation, were also submitted. The histological sections were reported in the standard way by one of the authors (JW). A pro forma questionnaire was completed by this author. Demographic data such as age, parity, postmenopausal status and uterine length were recorded. The surgeon was also asked to score the acceptability of the device used, with "1" indicating "well accepted" and "5" indicating "unacceptable".
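Although the original quantification was done on photocopied grid paper, the same area-ratio computation is easy to express digitally. The sketch below assumes hypothetical binary masks of the cavity and of the denuded (blocked) areas; it illustrates the ratio, not the authors' workflow.

```python
import numpy as np

rng = np.random.default_rng(3)
cavity = np.ones((200, 300), dtype=bool)          # endometrial surface mask
denuded = rng.random((200, 300)) < 0.12           # blocked areas from sampling
denuded &= cavity                                  # only count area inside the cavity

# Percentage of the endometrial surface removed by the sampling device.
percent_sampled = 100.0 * denuded.sum() / cavity.sum()
print(f"endometrium sampled: {percent_sampled:.1f}%")
```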
Results
A total of 68 patients were entered into the study and randomly assigned to pre-hysterectomy endometrial sampling with either the Pipelle® or the Endosampler®. One patient in the Endosampler® group was excluded from the study, as her pre-hysterectomy endometrial sample specimen had been lost. This left 67 patients for analysis. Thirty-four patients were assigned to the Pipelle® group and the remaining 33 patients underwent endometrial sampling using the Endosampler®. The mean age for both groups was similar (Table I). Other demographic variables, such as mean parity, number of Caesarean sections, mean uterine size, number of postmenopausal patients and number of previous cervical procedures [large loop excision of the transformation zone (LLETZ)], were also similar for the two groups.
The mean acceptability score by the surgeon for the two devices showed a statistically significant difference. The spectrum ranged from "1" for "easy to use" to "5" for "very difficult to use". The score for the Endosampler® was 1.2, while that for the Pipelle® was 1.8 (p-value = 0.0005).
In the Pipelle® group, 8 out of 34 patients (23.5%) rendered inadequate specimens for evaluation, compared with 5 out of 33 (15.2%) in the Endosampler® group, but this difference was not statistically significant (p-value = 0.29). In the Pipelle® group (Table II), in 3 out of 8 patients with an inadequate sample, access to the endometrial cavity could not be obtained. Corresponding figures for the Endosampler® group were 2 out of 5 patients.
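As a hedged illustration of this comparison: the paper does not state which test produced p = 0.29, so the sketch below uses Fisher's exact test, a common choice for small 2x2 tables, and it may not reproduce the reported p-value exactly.

```python
from scipy.stats import fisher_exact

#               inadequate  adequate
table = [[8, 34 - 8],      # Pipelle® group
         [5, 33 - 5]]      # Endosampler® group
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
```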
An analysis of the sampled endometrium showed that, in the Endosampler® group, significantly more endometrium (14.5%, range 1-50%) had been sampled when compared to endometrial sampling with the Pipelle® (9.4%, range 1-20%) (p-value = 0.03). In the Endosampler® group, no significant endometrial abnormalities were found. In the Pipelle® group, three patients with significant endometrial abnormalities were missed by the pre-hysterectomy sampling. Of these three patients, one (44 years old) harboured a grade 1 endometrioid adenocarcinoma in a polyp, surrounded by atypical complex hyperplasia, in the uterine specimen. The second patient (60 years old) was found to have atypical hyperplasia in the uterine specimen, and mild hyperplasia in a polyp was discovered in the remaining patient (77 years old). Significantly, the Pipelle® specimens of these three patients were thought to contain proliferative endometrium. Review of both the endometrial sampling specimens and the hysterectomy specimens in these three patients revealed no changes from the initial histological assessment. In all three patients it was thought that the Pipelle® specimen was sufficient for a histological assessment, although the percentage of the endometrium sampled was only 5.5% and 2% respectively. None of these patients complained of abnormal bleeding prior to the procedure.
Discussion
Office endometrial sampling has been widely accepted as a procedure that can be used to diagnose endometrial pathology. The yield of endometrial devices for endometrial carcinoma has been reported to compare favourably to that of the classical D and C. 1 There are several reasons why office endometrial sampling is preferable to D and C. Office endometrial sampling should reduce costs accrued by hospital admission and theatre time. As office endometrial sampling is done with a local anaesthetic, or without it, the general anaesthetic required for D and C is avoided. Lastly, it is convenient for the patient to have the procedure performed during the first visit to the clinic, decreasing the delay in diagnosis.
Choosing the best device for office endometrial sampling has been the topic of many publications. The ideal device should be simple to use, causing minimal, if any, patient discomfort, should be inexpensive and should not be associated with major complications.
Lastly, the tissue yield should be sufficient for histopathological evaluation.
In this small prospective study, the Pipelle® was compared with the Endosampler® in a randomised fashion. Patient discomfort could not be investigated, as the specimens were obtained while the patients were under general anaesthetic. However, compared to the Pipelle®, the Endosampler® appeared to be easier to use (p-value = 0.0005). Neither device was associated with major complications.
It can be assumed that the percentage of endometrium that is removed is a reasonable reflection of the efficacy of any endometrial sampling device. The Endosampler® performed significantly better than the Pipelle® (14% versus 9%; p-value = 0.03). Similar findings were reported when the Pipelle® was compared with other devices. In a publication by Rodriguez et al, the Pipelle® was compared with the Vabra® aspirator. The authors found that, with the Pipelle®, the percentage of sampled endometrium obtained was significantly lower than that of the Vabra® aspirator (p-value < 0.001). 4 A worrying finding was that, with the Pipelle®, significant endometrial abnormalities were not sampled on three occasions (8.8%). One patient had endometrial cancer and another two had hyperplasia. This seems to be higher than findings reported earlier (4.8%). 5 The failure to detect a malignant change of the endometrium is by no means limited to patients undergoing outpatient sampling, and may also occur in patients having a formal D and C. 6 Two of the three patients in the current study had polyps. Similar failures to detect polyps have been reported earlier. "Blind" endometrial sampling is an unreliable method to detect polyps. [5][6][7][8] No significant abnormalities were found in patients from whom endometrial samples were obtained with the Endosampler®.
The current study, due to its design, could not be used to evaluate patient acceptance, but ease of use of the Endosampler®, when compared with the Pipelle®, was perceived by the gynaecological surgeon to be significantly better.
There may be some criticism concerning this study. The number of patients entered into the study is relatively small: sixty-seven patients were eligible for analysis. However, the intention of the study was to test whether the Endosampler® was acceptable to surgeons, and secondly, whether or not it would render acceptable results. Although this is a prospective comparative study, the fact that the device could be identified by the surgeon made it impossible for the study to be "blind". Even so, the pathologist was unaware of which device was used. Lastly, no significant histological abnormalities were found in the patients whose endometrium was sampled with the Endosampler®, while in the Pipelle® group, three cases of significant histological abnormalities were recorded. In this respect, a comparison between the Endosampler® and the Pipelle® is not possible.
The Endosampler® collected more endometrium than the Pipelle®. This may well translate into a higher adequacy of specimens obtained by the Endosampler®. The finding that three significant lesions were missed by Pipelle® sampling shows that, wherever possible, the clinical picture should be correlated with the endometrial sampling findings.
Risk Factors and Cardiovascular Disease in the Elderly
Age is associated with increased cardiovascular risk factors and cardiovascular disease, which constitute the leading cause of morbidity and mortality in the elderly population. In this text we thoroughly review current evidence regarding the impact on cardiovascular disease of the most important cardiovascular risk factors, which are especially prevalent in the elderly population. Diagnosis and treatment approaches are also addressed, highlighting the importance of adequate primary and secondary prevention and management. In addition, the relationship between cardiovascular disease and comorbidities and geriatric conditions particularly common in the elderly, such as frailty, is reviewed, together with other issues less often addressed but closely related to ageing, such as genetics, structural and electrical heart changes and oxidative stress. All such questions are of great importance in the comprehensive approach to risk factors and cardiovascular disease in the elderly.
Introduction
Age has been associated with increased cardiovascular (CV) risk factors and cardiovascular disease (CVD), with CV and cerebrovascular events being the leading cause of morbidity and mortality in the elderly population. However, not only age but also several other factors that play a key role in the development of CVD ought to be known and addressed. Fig. 1 summarizes the main CV risk factors involved in the initiation and progression of CVD in the elderly population, all of which are thoroughly reviewed in this text, also addressing the importance of primary and secondary interventions aimed at improving both quality of life and life expectancy.
Age itself is the main risk factor for vascular disease, involving macrovascular and microvascular impairment [1][2][3][4]. Clinical manifestations of age-dependent arterial injury typically occur after the fifth or sixth decade of life, although there is high individual variability in vascular disease onset, as ageing is a heterogeneous process [2].
In addition, future research must improve information on ageing biomarkers and tools, identifying more accurate ageing indicators to better understand the different velocities of ageing [2].
Interventions for achieving healthy vascular ageing are behavioural and pharmacological [2,3]. They are aimed at achieving normal blood pressure and reducing arterial stiffness. Among the pharmacological interventions, some are clearly established, such as antihypertensive agents and statins [2,3]. Table 1 (Ref. [3]) shows a summary of the main interventions for achieving healthy vascular ageing.
Hypertension
The prevalence of hypertension (HT) increases with age, especially for isolated systolic HT [5]. In adults ≥70 years, the estimated prevalence of HT is 73.6% for men and 77.5% for women in those belonging to high-income countries.
Table 1. Main interventions for achieving healthy vascular ageing (adapted from Ref. [3]).

Lifestyle strategies:
- Aerobic exercise: conflicting evidence for vascular ageing; important evidence to avoid frailty.
- Weight loss and reduced total energy intake: in overweight and obese adults, reduces arterial stiffness and blood pressure.
- Healthy dietary patterns (high consumption of fruit and vegetables and/or Mediterranean diet): conflicting evidence for arterial stiffness reduction; evidence for blood pressure reduction; evidence to avoid frailty.
- Sodium restriction: evidence for reduction of arterial stiffness and blood pressure.
- Flavonoids (citrus fruits, seeds, olive oil, tea, red wine, legumes): evidence for reduction of arterial stiffness.

Pharmacological strategies:
- Antihypertensive treatment: evidence for reduction of arterial stiffness and blood pressure.
- Statins: evidence for reduction of arterial stiffness.
For many years, advanced age has been a barrier to the treatment of HT because of concerns about potential poor tolerability, and even harmful effects, of BP-lowering interventions in people in whom mechanisms preserving BP homeostasis and vital organ perfusion may be more frequently impaired [14]. However, current evidence shows that in old and very old patients, antihypertensive treatment substantially reduces CV morbidity and CV and all-cause mortality. Of note, data from the HYVET (Hypertension in the Very Elderly) trial, the SPRINT (Systolic Blood Pressure Intervention) trial, and the STEP (Strategy of Blood Pressure Intervention in the Elderly Hypertensive Patients) trial may reflect the benefit of more intensive BP reduction in relatively healthy octogenarians rather than the effect on frail patients [15][16][17]. On the other hand, older patients are more likely to have comorbidities such as renal impairment, atherosclerotic vascular disease, and postural hypotension, which may be worsened by BP-lowering drugs. Polypharmacy may also interact with BP control treatment.
In line with this approach, current recommendations, as summarized in the European Society of Hypertension (ESH) and European Society of Cardiology (ESC) guidelines, which consider as older those patients aged ≥65 years and as very old those aged ≥80 years, recommend emphasizing biological rather than chronological age, also taking into account other aspects such as frailty, independence, and the tolerability of treatment [5]. Targets for SBP and DBP are 130-139 mmHg (if tolerated) and <80 mmHg, respectively. As BP-lowering drug treatment, a two-drug combination in a single pill is recommended, though in very old patients it may be appropriate to initiate treatment with monotherapy. Antihypertensive treatment may also be considered in frail older patients if tolerated [16]. Close monitoring of adverse effects, especially orthostatic hypotension, is recommended, and withdrawal of BP-lowering drug treatment based on age is not recommended if treatment is well tolerated [5]. Unless required for concomitant diseases, loop diuretics and alpha-blockers should be avoided because of their association with injurious falls [18,19]. Renal function should be frequently assessed to detect possible increases in serum creatinine and reductions in estimated glomerular filtration rate (eGFR) because of BP-related reductions in renal perfusion.
Type 2 Diabetes
The high prevalence of type 2 diabetes (T2D) is especially important in the elderly population: more than 25% of people over 65 years have T2D and 50% of older adults have prediabetes [20]. Moreover, elderly patients with T2D have higher rates of heart disease, cerebrovascular disease, and stroke than those without DM [21].
Achieving adequate glycemic control in the elderly with T2D continues to be a challenge, especially due to the great clinical, cognitive, and functional heterogeneity. Balancing the risks and benefits of glycemic control is mandatory to establish an individualized, reasonable glycosylated hemoglobin (A1C) goal in elderly patients [22]. In addition, age, comorbidities, and life expectancy must be considered. Usually, A1C in older patients should be maintained below 8.0% to prevent both complications and mortality. However, some studies have shown a U-shaped relationship between A1C and mortality, highlighting that strict glycemic control increases the risk of mortality in diabetic older patients [23]. The American Diabetes Association proposes a practical approach with different A1C goals for the elderly. Those who are healthy, with intact cognitive and physical function and long life expectancy, should have an A1C <7.0-7.5%; older patients with several coexisting chronic illnesses or mild-to-moderate cognitive impairment should have an intermediate A1C goal (<8.0%). Finally, in patients with multiple coexisting chronic illnesses, poor health or moderate-to-severe cognitive impairment, an A1C goal should be avoided and glucose control decisions should be based on avoiding hypoglycaemia and symptomatic hyperglycaemia [22].
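The tiered approach just described can be summarized schematically, as in the sketch below; the three-category encoding is our own shorthand for the ADA groups, for illustration only, not clinical decision software.

```python
from typing import Optional

def a1c_goal(category: str) -> Optional[float]:
    """Return an A1C target (%) for an older patient, or None when a fixed
    numeric goal should be avoided and care should focus on preventing
    hypoglycaemia and symptomatic hyperglycaemia."""
    goals = {
        "healthy": 7.5,        # intact cognition/function, long life expectancy
        "intermediate": 8.0,   # several chronic illnesses or mild-moderate impairment
        "poor_health": None,   # multiple illnesses or moderate-severe impairment
    }
    return goals[category]

print(a1c_goal("healthy"), a1c_goal("intermediate"), a1c_goal("poor_health"))
```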
T2D treatment interventions in the elderly should aim to prolong life but also to improve quality of life. Considering the heterogeneity of older T2D patients, both individualizing and simplifying the treatment (as well as assessing for drug interactions) should be mandatory [24]. Most T2D treatment clinical trials do not include elderly patients, which makes it necessary to extrapolate data from studies including younger patients to establish treatment recommendations for the elderly [25]. Thus, drug side effects could be underestimated [26,27].
Current treatment guidelines generally recommend monotherapy with metformin as initial treatment for the elderly [22]. It can be safely used in patients with an estimated glomerular filtration rate ≥30 mL/min/1.73 m², and the risk of hypoglycemia is low [28]. When older T2D patients do not achieve their A1C target with metformin, it will be necessary to consider combination therapy with two or more antidiabetic drugs, or insulin. The selection of the appropriate drug should be based on efficacy in A1C reduction, relative risk of hypoglycemia, adverse effects, and CV profile [29].
Insulin is often used in elderly patients with A1C >9% or with persistent symptomatic hyperglycemia. Its use requires appropriate visual, motor, and cognitive skills.
Once-daily basal insulin treatment is a reasonable option in many older patients [22]. The insulin dose should be the lowest possible at baseline and should be titrated to meet individualized glycemic targets and to avoid hypoglycemia. Multiple daily injections can be a challenging option for older people with cognitive or functional limitations and increase the risk of hypoglycemia [24].
Sulfonylureas and thiazolidinediones should be used with caution in elderly patients. Sulfonylureas are associated with a high risk of hypoglycemia, and thiazolidinediones with congestive heart failure (HF), osteoporosis, and bone fractures [22].
Regarding incretin-based therapies, studies performed with dipeptidyl-peptidase-4 (DPP-4) inhibitors in elderly patients confirmed the efficacy of these drugs, which are well tolerated, with few side effects and very low risk of hypoglycemia [30][31][32]. Despite this, saxagliptin should be used cautiously, if used at all, in older T2D patients, because it may increase the risk of hospitalization for HF, particularly in patients with a history of previous HF or chronic renal disease, based on data from SAVOR TIMI 53 (Saxagliptin and Cardiovascular Outcomes in Patients with Type 2 Diabetes Mellitus) [33].
The main benefits of GLP1 receptor agonists are efficacy in A1C reduction and weight loss, cardiorenal protection, and negligible risk of hypoglycemia [34]. Despite this, GLP1 receptor agonists are injectable agents (except for oral semaglutide), so visual, motor, and cognitive abilities are required for appropriate administration [35]. Moreover, given the weight loss and gastrointestinal side effects they produce, these drugs should be avoided in frail patients, particularly those with malnutrition [29]. Cardiorenal benefits seem to be consistent also in the elderly. A post-hoc analysis of the LEADER (Liraglutide Effect and Action in Diabetes: Evaluation of CV Outcome Results) study, with 9% of the study population ≥75 years old, found that elderly patients had a 34% risk reduction in the frequency of MACE and a 35% reduction in all-cause mortality [35]. A post-hoc analysis of SUSTAIN-6 (Semaglutide and Cardiovascular Outcomes in Patients with Type 2 Diabetes), with 43% of patients ≥65 years, showed that once-weekly semaglutide reduced the risk of the first occurrence of MACE and each MACE component consistently across all age subgroups, compared with placebo [36].
Last but not least, sodium-glucose co-transporter 2 (SGLT2) inhibitors have demonstrated efficacy in A1C reduction, very low risk of hypoglycemia, and great CV benefits in patients with established atherosclerotic CVD or heart failure. Moreover, these agents slow the progression of chronic kidney disease [37]. This cardio-renal protective effect appears to emerge early. A systematic review and meta-analysis of SGLT2 inhibitor CV outcome trials showed that the protective effect was consistent across age categories, and the elderly constituted about 50% of the total participants in the three major SGLT2 inhibitor trials [38][39][40]. SGLT2 inhibitor use is safe in elderly patients, although they should be used cautiously in patients with previous genitourinary infections and in older patients with factors predisposing to diabetic ketoacidosis [29].
The cardio-renal protection results of SGLT2 inhibitors and GLP1 receptor agonists in elderly patients are impressive. Therefore, their use in older people is likely to increase. However, evidence for those above the age of 80 years or frail individuals with multiple comorbidities is still lacking.
Obesity
Obesity is associated with an increased risk of developing CVD. Despite this, in patients over age 75, the relative risk of death from all causes and CVD has been found to decrease with increasing body mass index (BMI) [41]. Individuals with class I obesity present a more favorable prognosis compared to individuals who are of normal weight or underweight. Thus, several studies have identified a BMI of 24 to 35 as "ideal". This phenomenon is called the obesity paradox, and it is particularly evident in HF [42]. However, BMI could be an imperfect measure of obesity in the elderly, and most studies suggesting the existence of this puzzling paradox could have underestimated other key aspects such as body composition, visceral adiposity, and sarcopenic obesity.
Dyslipidemia
Dyslipidemia is defined as elevated total or low-density lipoprotein (LDL) cholesterol levels above the 90th percentile, or high-density lipoprotein (HDL) cholesterol levels below the 10th percentile [43]. Ageing is associated with impaired lipid metabolic pathways, leading to higher LDL-cholesterol and triglyceride levels due to reduced degradation. Cholesterol levels progressively increase from puberty, reaching a plateau at around 70 years, after which they persist or fall slightly [44]. Not surprisingly, dyslipidemia is estimated to affect almost 40% of the population over 65 years, with those with higher plasma LDL-cholesterol levels carrying the higher risk of atherosclerotic disease and acute cardiac events [45,46].
Clinical guidelines support the same recommendations as in younger patients regarding secondary prevention. They recommend goal-directed therapy, with a reduction ≥50% of LDL-cholesterol levels, reaching a target LDL-cholesterol of <55 mg/dL in very high CV risk and <70 mg/dL in high CV risk [47]. Accordingly, a recently published meta-analysis showed that lipid-lowering therapies effectively reduce CV death, myocardial infarction, stroke and coronary revascularisation in patients aged ≥75 years [48].
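Note that the guideline criterion is dual: LDL must both fall by at least 50% from baseline and end below the risk-dependent threshold. A schematic sketch of that combined rule (illustration only, not clinical advice):

```python
def ldl_at_goal(baseline: float, current: float, very_high_risk: bool) -> bool:
    """True only when BOTH conditions hold: >=50% reduction from baseline
    AND current LDL below the risk-dependent threshold (mg/dL)."""
    threshold = 55.0 if very_high_risk else 70.0
    reduced_enough = current <= 0.5 * baseline
    return reduced_enough and current < threshold

print(ldl_at_goal(baseline=140.0, current=60.0, very_high_risk=False))  # True
print(ldl_at_goal(baseline=140.0, current=60.0, very_high_risk=True))   # False
```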
Treatment involves lifestyle modification and pharmacologic treatment. Statins are the most prescribed drugs. They are considered safe in elderly patients, with some studies demonstrating clinical benefits also in primary prevention [49,50]. Other pharmacologic tools include ezetimibe, PCSK9 inhibitors, bempedoic acid and inclisiran. Ezetimibe, used alone or combined with statins, is associated with CV benefits in elderly patients and has shown a safe profile in specific clinical studies [51,52]. Regarding evolocumab and alirocumab, sub-analyses from the FOURIER (Further Cardiovascular Outcomes Research With PCSK9 Inhibition in Subjects With Elevated Risk) and Odyssey (Evaluation of Cardiovascular Outcomes After an Acute Coronary Syndrome During Treatment With Alirocumab) trials demonstrated CV benefits also in elderly patients [53,54]. Inclisiran and bempedoic acid are the most recent treatments for dyslipidemia. Both drugs have been shown to provide a safe and effective reduction in LDL levels in patients over 65 and 75 years of age, similar to their younger counterparts [55,56].
Finally, yet importantly, we recommend that dyslipidemia treatment be adapted to baseline patient-specific conditions such as frailty or polypharmacy, also considering the risk of drug interactions, thus minimizing possible side effects and improving compliance with treatment, which in turn is associated with the greatest clinical benefits [57].
Other Cardiovascular Risk Factors: Air Pollution, Alcohol and Tobacco
Recent studies have demonstrated a more pronounced effect of air pollution on health status in older adults. Air pollution with fine particulate matter has been associated with frailty, particularly in the very elderly or in those with low incomes [58]. The underlying mechanisms might be pollution-associated oxidative stress and inflammatory status. Furthermore, a recent critical review showed a consistent effect of air pollution on cognitive impairment and dementia, probably due to neuro-inflammation and reduction in white matter volume [59]. Heart failure seems to be one of the cardiac conditions most influenced by air pollution, with different contaminants like ozone (O3), nitrogen dioxide (NO2) and sulfur dioxide (SO2) being associated with heart failure hospitalizations in the elderly [60].
Chronic alcohol intake is associated in older adults with higher body mass index and blood pressure, as well as atherosclerotic events [61,62]. Paradoxically, alcohol consumption and risk of incident frailty are inversely related. In a recent meta-analysis, the highest alcohol consumption was associated with lower frailty risk (odds ratio = 0.61, 95% confidence interval = 0.44-0.77), although two of the four individual studies suggested a U-shaped association with the lowest risks for moderate drinkers [63]. This might be explained by a poorer baseline health status in non-drinkers, especially regarding cognition, depressive symptoms, education, comorbidities, and self-reported general health. The main limitations of these studies were that alcohol consumption was mainly self-reported and that consumption patterns differed completely among countries. In fact, a Mediterranean drinking pattern, defined as moderate alcohol intake, with wine preference and drinking only with meals, has been associated with a lower risk of frailty [64]. One possible explanation for this finding could be that alcohol is often consumed socially, so moderate consumption frequently means an active social life.
Tobacco consumption has a direct effect on cardiac structure and function [65]. Besides, compared with non-smokers, smokers are more likely to develop frailty, which seems to be related to the increase in mental and physical illnesses directly associated with smoking [66].
Other Conditions and Comorbidities Associated with Cardiovascular Disease in the Elderly
Several comorbidities may impact the development of CVD or interfere with CV risk factors. The prevalence of all these factors increases with age, as does the comorbidity burden, so they often coexist. Comorbidities must be considered when assessing CV risk in the elderly, because they increase the risk of non-CV mortality, and traditional risk scores may overestimate CV risk [67]. Decreased life expectancy and quality of life may modify the aims of CV prevention, especially in patients with severe or multiple comorbidities [68]. Nevertheless, we must not fall into therapeutic nihilism and abandon the control of CV risk factors in the elderly patient. Therefore, a comprehensive approach to the patient and his or her comorbidities is key, and goals must be individualized. It is advisable that multidisciplinary teams and close collaboration between primary care and specialists lead to holistic patient management, in which all the patient's circumstances are considered when making decisions [67].
Chronic kidney disease is highly prevalent in the older patient and shares pathophysiological features with CVD [69]. Aging itself is associated with changes in renal anatomy and physiology which lead to a reduction of glomerular filtration, but this decline in renal function is multifactorial, and CV risk factors play an important role [69]. On the other hand, severe impairment of renal function is considered a major CV risk factor, and intensive control of classical risk factors should be aimed for [67]. However, several challenges are encountered when dealing with CV risk control in this setting, including difficulties in dose adjustment, interactions, and lack of specific evidence. Statins must be included for lipid control, but high-dose regimens should be avoided [70]. Recent evidence supports the use of PCSK9 inhibitors in mild to moderate renal dysfunction [71]. Specific benefits may be expected with the use of angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARB) for HT and SGLT2 inhibitors for T2D, as previously addressed, due to their renoprotective effects [72].
There is evidence supporting the relationship between serum uric acid and CV risk factors in the older patient, since elderly individuals with hyperuricemia have a higher prevalence of obesity, HT, lipid profile alterations and impaired glucose metabolism [73]. There is also a close relationship between dietary intake of purine-rich food (mainly meat) and high uric acid levels, while vegetable consumption has a protective effect [74]. A subanalysis of the PREDIMED (PREvención con DIeta MEDiterránea) trial, including 4449 elderly patients at high CV risk, found that adherence to a Mediterranean diet is associated with a lower risk of hyperuricemia [75].
Cognitive impairment is closely related to CVD. CV risk factors and other pathophysiological pathways are common to both entities [76]. Moreover, they show a two-way relationship: cognitive impairment may hinder therapeutic control and adherence, and the development of CV events is associated with progressive deterioration of mental status [77]. There is an obvious relationship between CV risk factors and cerebrovascular disease, as the latter can be considered a different clinical manifestation of the same disease. Interestingly, in the older patient, evidence of cerebrovascular lesions in the absence of any stroke history is a common finding on neuroimaging, and it is associated with a higher prevalence of CV risk factors and is a marker of increased risk of stroke [78]. Additionally, depression is common in the older patient, which may overlap with cognitive decline and impact CV outcomes. Therefore, special attention should be paid to cognitive function and depression screening in these patients.
Role of Frailty and Other Geriatric Conditions
Frailty is a clinical state of increased vulnerability to stressors, due to a decline in physiological reserve and function. It is usually related to age, but distinct from disability. Frailty increases CV morbidity and mortality and has been associated with conservative management and poorer clinical outcomes [79,80]. Several tools for frailty assessment have been developed, and some questions remain open regarding the moment for frailty measurement, which tool to use in each clinical setting, and which clinical decisions should follow. In summary, there are two approaches to assessing frailty: using a physical phenotype or using a multidomain approach. The frailty physical phenotype is a clinical syndrome with three or more of the following criteria: unintentional weight loss, self-reported exhaustion, weakness, slow gait speed, and low physical activity. In this approach frailty is a predictor of disability, the latter being the result of frailty. On the contrary, the multidomain approach considers frailty as the result of deficits in multiple domains, disability being one of them. Usually, in patients older than 65 years, we assess disability using the Barthel scale and, when it is not present, we assess physical frailty. When disability is present, we use a comprehensive geriatric assessment [80].
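The phenotype rule lends itself to a compact sketch. The frail cut-off (three or more criteria) comes from the paragraph above; the robust/prefrail cut-offs (0 and 1-2 criteria) follow the usual Fried convention, an assumption the text does not spell out.

```python
CRITERIA = ("unintentional_weight_loss", "exhaustion", "weakness",
            "slow_gait_speed", "low_physical_activity")

def frailty_status(findings: dict) -> str:
    """Classify by counting how many of the five phenotype criteria are present."""
    n = sum(bool(findings.get(c, False)) for c in CRITERIA)
    if n >= 3:
        return "frail"
    return "prefrail" if n >= 1 else "robust"

print(frailty_status({"weakness": True, "slow_gait_speed": True,
                      "exhaustion": True}))  # -> "frail"
```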
When deciding the best prevention strategy in older patients, we should consider both disability and frailty, following the stepwise logic sketched below. First, a comprehensive geriatric assessment should be performed. If the patient has physical or cognitive disability, a high comorbidity burden, or reduced life expectancy, CV risk quantification with CV risk scales is useless. If not, physical frailty should be assessed. If the patient is robust, conventional CV risk scales should be used for primary prevention, as if the patient were younger than 75 years. If the patient is frail or prefrail, clinicians should consider the potential reversibility of frailty and consider changes in diet and physical activity to this end. Clinicians should carefully individualize decisions in these patients. In addition, if a patient is frail, CV disease should be excluded. If present, the patient should be treated according to current clinical practice, and secondary prevention may be appropriate [81]. New options, such as dedicated frailty teams and eHealth, are emerging to better manage frailty [79].
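The branching logic of this pathway can be summarized in a short sketch; the function and its inputs below are illustrative abstractions of the text above, not a published decision tool.

```python
# Minimal sketch of the prevention pathway described above for patients
# older than 75 years (illustrative only; real decisions require a full
# comprehensive geriatric assessment).

def prevention_strategy(disability_or_high_comorbidity: bool,
                        frailty_status: str) -> str:
    """frailty_status: 'robust', 'prefrail', or 'frail' (Fried phenotype)."""
    if disability_or_high_comorbidity:
        return ("CV risk scales not useful; rely on comprehensive geriatric "
                "assessment and individualize care")
    if frailty_status == "robust":
        return "apply conventional CV risk scales, as in patients <75 years"
    # Frail or prefrail: address potentially reversible frailty (diet,
    # physical activity), exclude CV disease, and individualize treatment.
    return ("individualize: target reversible frailty; if CVD is present, "
            "treat per current practice and consider secondary prevention")

print(prevention_strategy(False, "prefrail"))
```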
Optimal control of CV risk factors is essential not only to prevent CVD, CV mortality, and hospital admissions, but also to mitigate the economic burden of CVD. The economic impact of CVD is increasing worldwide, as a consequence of both the progressive ageing of the population and the increasing prevalence of CV risk factors. HF, the leading cause of hospitalization in patients over 65 years old, accounts for an estimated expense of more than 100 billion dollars annually. Thus, achieving good control of predisposing factors such as HT or T2D is fundamental [82].
Other Risk Factors for the Promotion of Cardiovascular Conditions in the Elderly (Fig. 2)
Oxidative Stress
There is increasing evidence that age itself is associated with an imbalance between the production of reactive oxygen and nitrogen species and their neutralization by endogenous antioxidants. This disparity may partially explain the age-related functional decline [83]. Oxidative stress in the elderly has been related to increased levels of oxidized LDL-cholesterol (oxLDL). OxLDL easily accumulates in the arterial wall, promoting atherosclerosis independently of the other CV risk factors [84]. In addition, oxidative stress contributes to vascular endothelial dysfunction and vascular remodelling, leading to HT and atherosclerotic disease. A healthy diet and physical activity reduce oxidative stress; however, more evidence is needed and targeted treatments should be developed [83].
Structural and Electrical Heart Alterations
Ageing is associated with structural changes in the heart, such as the development of myocardial fibrosis, decline in diastolic function, and left atrial dilatation. The conduction system is also affected, with a decrease in intrinsic heart rate and conduction delays [85]. Both structural and electrical changes are associated with an increase in the prevalence of atrial fibrillation (AF) in this population. AF itself appears to be associated with increased cardiovascular risk, especially in women [86]. Older patients with AF have higher rates of stroke, bleeding, and death [87]. It has been suggested that early intervention and control of CV risk factors reduce AF burden and may improve maintenance of sinus rhythm [67]. Optimal anticoagulation, especially with direct oral anticoagulants (DOAC), should be provided in order to reduce adverse outcomes. DOAC use was superior to warfarin in reducing stroke or systemic embolization in elderly AF patients, without a significant increase in bleeding risk [88].
Genetic and Epigenetic Impact
Genetics represents a revolution in the cardiovascular field. Its importance in the aetiology of many cardiovascular diseases is already known, but genetic information is not currently used in cardiovascular risk scores. Additional evidence is still required; however, a family history of CVD should be compiled [67]. Moreover, recent evidence has shown that lifestyle and environmental factors may affect gene expression. Genetic information may be significantly altered by different mechanisms globally known as epigenetics. These mechanisms include DNA methylation, histone acetylation, and miRNA expression. All of them are mainly induced by environmental factors (such as pollution or smoking) or chronic inflammation. Not only may they lead to premature atherosclerosis and CVD, but they may also be transmitted to the offspring [89].
Secondary Prevention and Cardiac Rehabilitation
Secondary prevention objectives in older adults do not differ much from those in their younger counterparts, as shown in Table 2, but particular attention to drug side effects, overdosing, and intolerance is recommended [90]. The cornerstone of treatment is acquiring a healthy lifestyle, based on dietary habits, regular exercise, and tobacco cessation. The accomplishment of these three goals can lead to optimal weight, better control of CV risk factors, and a reduction in morbimortality [91]. Dietary counselling is one of the bases of risk factor control, but malnutrition is more prevalent in older people and should be addressed, even in patients with normal weight [92]. A diet based on fruits, vegetables, legumes, fish, and polyunsaturated fats, like the Mediterranean pattern, is advised [93]. Regular exercise should be divided into moderate-intensity aerobic sessions (at least 30 minutes, 5 days per week), moderate resistance training (2 non-consecutive days per week), and flexibility and balance exercises (10 minutes, 5 days per week). Exercise can be adapted to osteomuscular disabilities or the individual preferences of the patient [94].
Tobacco cessation is mandatory, as the benefits of withdrawal are observed independently of age [95]. Nicotine replacement therapy has proven safe and effective in older adults. Bupropion and varenicline can be useful, but the evidence is weaker and there is concern about their neuropsychiatric effects in the elderly [96]. The blood pressure target is less than 140/90 mmHg. The stricter goal of less than 130/80 mmHg, when tolerated, has also shown benefits in healthy older adults, but its effects on the very frail remain unknown, as drug-drug interactions, orthostatic hypotension, and the subsequent risk of falls are more frequent in this group. Closer monitoring of adverse effects and slow titration of medication are recommended. In secondary prevention, ACE inhibitors and ARBs are preferred over other choices [97].
The LDL cholesterol goal is less than 55 mg/dL (<1.4 mmol/L). High-potency statins are the cornerstone of pharmacological treatment, and there is no evidence of more muscle-related symptoms than in younger patients. However, polypharmacy has to be addressed again, as the probability of interactions rises [57]. As previously explained, ezetimibe and PCSK9 inhibitors also seem to be effective and safe [53,54].
Regarding diabetes, goals depend on life expectancy and the risk of adverse events. In healthy older adults, a level of A1C <7-7.5% (53-58 mmol/mol) can be pursued, especially under treatments with a low risk of hypoglycaemia and with disease self-management. When those conditions are lacking, a more lenient goal of A1C <8% (64 mmol/mol) should be chosen to avoid either hypoglycaemia or acute hyperglycaemic states. In type 1 diabetes, continuous glucose monitoring is useful in reducing hypoglycaemia episodes. In type 2 diabetes, overweight and obesity are commonly related conditions, and these patients should be encouraged to lose weight, given its benefits, with SGLT2 inhibitors and GLP-1 receptor agonists prescribed depending on the clinical situation. The use of insulin, sulfonylureas, and meglitinides must be handled with extreme care because of the risk of hypoglycaemia, and these drugs should be reduced or even withdrawn when adherence to diet and exercise improves or when new diabetes drugs are prescribed [22,98].
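As a compact restatement of the glycaemic-goal logic above, the following sketch (illustrative function and input names, not a guideline tool) returns the stricter or more lenient A1C target depending on the conditions the text lists:

```python
# Minimal sketch of the A1C goal selection described above (illustrative
# only); the stricter goal applies when all three conditions hold.

def a1c_goal(healthy: bool, low_hypoglycaemia_risk_therapy: bool,
             self_management: bool) -> str:
    if healthy and low_hypoglycaemia_risk_therapy and self_management:
        return "<7-7.5% (53-58 mmol/mol)"
    return "<8% (64 mmol/mol)"

print(a1c_goal(healthy=True, low_hypoglycaemia_risk_therapy=True,
               self_management=False))  # -> <8% (64 mmol/mol)
```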
Cardiac Rehabilitation Programmes (CRPs) have shown morbimortality benefits in older patients [99]. CRPs can handle the peculiarities of secondary prevention in the elderly. However, older patients are referred less often than younger ones. Nevertheless, given the progressive aging of the population, one third of the patients attending CRPs are over 75 years old. This is why frailty should be routinely addressed in CRPs, although the scale with which to do so is yet to be determined [100]. Scales including physical performance, such as the modified Fried scale or the Short Physical Performance Battery (SPPB), could be of use [80].
The exercise program must be individualized, with longer warm-up and cool-down periods. Sudden movements should be avoided. Both central and peripheral functional limitations are improved, as is pain control [101]. Moderate-to-vigorous aerobic and resistance exercise has beneficial CV and non-CV effects in the elderly. In this sense, tailored rehabilitation programs are especially appropriate for the older population, since CRPs provide a similar benefit in older people after a CV event as in younger patients [102]. Exercise training is associated with an increase in exercise duration, peak oxygen consumption, and ventilatory threshold in older patients with chronic heart failure with reduced ejection fraction [103]. Nevertheless, whether exercise training can reduce mortality, hospitalizations, and overall health care costs in older adults with CVD is still under research [104].
The effects of exercise on non-CV outcomes in elderly people have also been analyzed. The evidence regarding the incidence of falls is controversial, with some data supporting a protective effect of exercise in prefrail but not in frail patients, and other data suggesting no effect of training on the number of falls [105,106]. Functional improvement and increased muscle strength after training programs also support the beneficial effect of exercise [107]. Although rarer in younger patients, nutritional deficits, cognitive decline, and social/family support should be routinely taken into account in older patients, as these conditions can alter the usual approach to controlling the classic CV risk factors. Exercise, combined with nutritional supplementation, may even reverse frailty and prevent cognitive impairment [108,109]. In a recent multicenter clinical trial, a transitional progressive rehabilitation intervention showed a greater improvement in physical function than usual care [110]. Finally, a higher fitness level identifies older people with good long-term survival regardless of CV risk factor burden [111].
Conclusions
CV risk factors are highly prevalent in the elderly. Not only traditional CV risk factors, such as HT, T2D, dyslipidaemia, or obesity, but also other CV risk factors, such as tobacco or alcohol abuse and air pollution, significantly impact the long-term prognosis. Additionally, specific comorbidities such as chronic kidney disease, hyperuricemia, or cognitive impairment should be taken into account. Although genetics, ageing-related structural and electrical heart changes, and oxidative stress play a key role in CV disease in the elderly, additional studies addressing these issues are required. Geriatric syndromes, such as frailty, are also highly prevalent in the elderly and closely related to CVD. Hence, primary and secondary preventive interventions are of great importance, since they reduce both morbidity and mortality in the elderly. Of note, such interventions should consider baseline conditions, including life expectancy and quality of life, thus providing the most adequate care for each patient's needs.
Fig. 1. Main cardiovascular risk factors involved in the initiation and progression of cardiovascular disease in the elderly population. The most important cardiovascular risk factors are summarized, including classical risk factors, but also certain comorbidities and geriatric conditions common in the elderly.
Fig. 2. Other risk factors for the promotion of cardiovascular conditions in the elderly. Oxidative stress, structural and electrical heart alterations, together with genetic and epigenetic factors, may be involved in the development of cardiovascular disease in the elderly.
|
2024-07-31T05:12:30.044Z
|
2022-05-25T00:00:00.000
|
{
"year": 2022,
"sha1": "45b6e0c515acde52d9253d95ce25026769443322",
"oa_license": "CCBY",
"oa_url": "https://www.imrpress.com/journal/RCM/23/6/10.31083/j.rcm2306188/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "45b6e0c515acde52d9253d95ce25026769443322",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258235120
|
pes2o/s2orc
|
v3-fos-license
|
Partnering with charity‐care services to manage cirrhosis with ascites in an adult experiencing homelessness: A case report
Abstract Charity care services can be an important tool for reducing healthcare disparities among populations with housing instability.
| INTRODUCTION
Persons with housing instability or homelessness face increased barriers to accessing and maintaining medical care, healthy lifestyles, and quality living conditions. In the context of these inequities, alcoholic liver disease (ALD) with cirrhosis can be especially challenging to manage. We present the case of a 54-year-old male referred to our local free clinic who developed new-onset homelessness due to inability to work because of debilitating ascites from ALD with cirrhosis.
The homeless population often struggles with access to health care, especially during the COVID-19 pandemic, when this case occurred. Homelessness and health interact in paradoxical ways, in that poor health can lead to homelessness, and homelessness results in or complicates poor health. 1 Quality of life and experience with health care are often complicated by the social determinants of health, which can be especially challenging for those with unstable housing. [2][3][4] While cirrhosis can be due to multiple risk factors, alcohol abuse is associated with approximately 50% of cirrhosis cases. 5,6 Liver disease due to alcohol use disorder is complex, and readers are encouraged to see statements published by the American College of Gastroenterology discussing the management of alcoholic liver disease. 5,7 Medical nutrition therapy providing adequate calories and protein, sometimes with additional enteral nutrition supplements, and frequent visits to health care providers are integral parts of the treatment plan for those with ALD. Additionally, patients with ascites also benefit from sodium-restricted diets. 8-10
| CASE REPORT
Yakima County is a geographically large, medically underserved, and mostly rural region serving as an important agricultural center in the Pacific Northwest United States. 11,12 The community's only free medical clinic provides charity-based primary care to adult patients who cannot pay for health care and reports over 9200 patient visits annually. The clinic, based partly out of a renovated hotel, also has a shelter for homeless individuals and families, in addition to four rooms reserved for temporary medical housing for homeless patients with acute and severe medical conditions. The clinic also has a kitchen providing free meals three times daily to residents of the mission and daytime visitors. 13 Our patient was referred from a local federally qualified health clinic (FQHC). In the United States (US), FQHCs are medical organizations that qualify for special funding from the US government under the Public Health Service Act. They serve underserved communities or populations, typically offer a sliding-scale fee system, and provide comprehensive primary care services. Our patient presented to the free clinic with chronic liver cirrhosis and ascites secondary to alcohol use disorder on January 15, 2021. The patient worked in the area's agricultural fields, which included climbing up and down ladders throughout his workday. However, his ascites and abdominal pain made it impossible to continue working. He lived with a roommate until he had to move out due to inability to pay rent because of his declining health status. The patient identified as a 54-year-old male and reported a long history of drinking eight beers a day. Further alcohol history was not provided, but he did report that he had stopped drinking. Medical history also included type 2 diabetes with an unknown diagnosis date. The patient had no transportation, so he walked to the local emergency department (ED) several times prior to starting care at the free clinic to receive therapeutic abdominal paracentesis for ascites.
The patient was referred to the area's only gastroenterology group, but new patients were not being accepted. Therefore, we consulted regularly over the phone with our local liver specialist, who works for the local gastroenterology clinic, which is not formally affiliated with our area FQHCs or our free clinic. She manages our free clinic's cirrhosis patients locally, and she assisted us with the medical and medication management of this patient. Upon beginning his care with the free clinic, the patient was taking spironolactone 100 mg and furosemide 40 mg daily. He was also started on metformin 850 mg daily.
Upon initial examination, the patient presented with a visibly distended, taut, non-tender abdomen. Other physical exam findings were normal, except for scleral icterus, mild jaundice, an umbilical hernia, and skeletal muscle wasting. Due to the ascites, our patient's body mass index was 28.43 kg/m². The patient had had a hepatitis panel in 2014 at the FQHC, which showed non-reactive hepatitis B surface antigen (HBsAg), hepatitis B core antibody (anti-HBc), and hepatitis C antibody (anti-HCV). He was not checked for hepatitis D. An alpha-fetoprotein level was not obtained.
He was offered temporary medical housing at the free clinic due to homelessness and acute severe ascites. Managing ALD with cirrhosis and ascites includes a low-salt (2 g sodium per day), high-protein, low-fructose diet and avoidance of malnutrition, [8][9][10] which was provided as well as possible by the clinic's kitchen. Therapeutic paracentesis was performed by the clinic providers when indicated. Upon initial evaluation, the patient's degree of cirrhosis was scored as Child Class C; upon discharge, it was revised to Class A. He initially presented with Class III ascites in January 2021 and improved to Class I ascites by discharge. His only ultrasound was done in December 2020, before we took over his care. The ultrasound reported hepatic morphology suggestive of cirrhosis, with clinical correlation recommended. There was moderate ascites but no evidence of splenomegaly, and the pancreas was not well visualized. No other imaging (e.g., computerized tomography (CT) or magnetic resonance imaging (MRI) scan) was available. No assessment of fibrosis status via FibroScan or FibroMax was available. Other vital signs, physical exam, and laboratory details are presented in Table 1.
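For readers unfamiliar with the Child (Child-Pugh) classification used above, the following is a minimal sketch of the standard scoring scheme (five parameters scored 1-3; totals of 5-6, 7-9, and 10-15 map to Classes A, B, and C). The function and its thresholds reflect the widely used Child-Pugh criteria, not a tool from the clinic described here.

```python
# Minimal sketch of standard Child-Pugh scoring (illustrative only).
# Each of five parameters scores 1-3 points; Class A = 5-6 points,
# Class B = 7-9, Class C = 10-15.

def _band(value, low, high):
    """1 point below `low`, 2 points in [low, high], 3 points above `high`."""
    return 1 if value < low else (2 if value <= high else 3)

def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites, encephalopathy):
    """ascites / encephalopathy: 0 = none, 1 = mild / grade I-II,
    2 = moderate-severe / grade III-IV."""
    points = (
        _band(bilirubin_mg_dl, 2.0, 3.0)
        + _band(-albumin_g_dl, -3.5, -2.8)   # albumin scores inversely
        + _band(inr, 1.7, 2.3)
        + (1 + ascites)
        + (1 + encephalopathy)
    )
    return points, "A" if points <= 6 else ("B" if points <= 9 else "C")

print(child_pugh(1.5, 3.6, 1.4, ascites=0, encephalopathy=0))  # (5, 'A')
```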
Our patient's overall status improved and by September 2021, after 7 months living onsite in the clinic's medical housing, he was able to resume working and find housing with a friend. Upon departure from the clinic, the patient was instructed to continue the meal plan as he had been receiving to control the ascites. Lactulose (20 mg twice daily) was also added to his regimen.
| DISCUSSION
Complex and chronic diseases like ALD with cirrhosis and ascites require careful management, which can be exceptionally challenging for patients with unstable housing and limited healthcare access. Barriers to preventative care and chronic disease management services are common and difficult for many to overcome without assistance. 3,14 Close follow-up is warranted, ideally from gastroenterology and primary care, to prevent worsening progression of the disease and avoid complications. 4,14 The complications of cirrhosis had impaired our patient's ability to work and caused interruptions in earning income and maintaining housing. Persons living in unstable housing circumstances are at significant risk for exacerbation of chronic conditions and are much less likely to qualify for organ transplant. 3,15 Without addressing these important social determinants of health, he was unable to properly manage his advancing cirrhosis.
| CONCLUSION
This unique collaboration between the patient and various local healthcare stakeholders, including an FQHC, a charity-based clinic and homeless shelter, and the local gastroenterology service, played a key role in this patient's recovery. He was able to reduce the ascites, improve his quality of life, and regain independence. This also translated into fewer visits to the local ED. We believe this story illustrates how cooperation and communication among diverse community stakeholders can be effective and meaningful for patients. We were able to provide personalized, evidence-based care for this patient, despite the limitations of our free clinic and the patient's low-resource situation. Moreover, this case shows how we can help mitigate health disparities among homeless individuals using creative and collaborative approaches.
|
2023-04-20T15:22:53.134Z
|
2023-04-01T00:00:00.000
|
{
"year": 2023,
"sha1": "225d78f66d1c28adb9e6bebd7d093f9d9ac5ccbe",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.7191",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb98d040477151caf906d9f702ec7df2cc7d3074",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
6750400
|
pes2o/s2orc
|
v3-fos-license
|
Lepton Number Violation in Supersymmetric Grand Unified Theories
We argue that the nature of the global conservation laws in Supersymmetric Grand Unified Theories is determined by the basic vacuum configuration in the model rather than its Lagrangian. It is shown that the suppression of baryon number violation in a general (R-parity violating) superpotential can naturally appear in some extended SU(N) SUSY GUTs which, among other degenerate symmetry-breaking vacua, have a missing VEV vacuum configuration giving a solution to the doublet-triplet splitting problem. We construct SU(7) and SU(8) GUTs where the effective lepton number violating couplings immediately evolve, while the baryon number non-conserving ones are safely projected out as the GUT symmetry breaks down to that of the MSSM. However at the next stage, when SUSY breaks, the radiative corrections shift the missing VEV components to some nonzero values of order M_{SUSY}, thereby inducing the ordinary Higgs doublet mass, on the one hand, and tiny baryon number violation, on the other. So, a missing VEV solution to the gauge hierarchy problem leads at the same time to a similar hierarchy of baryon vs lepton number violation.
Introduction
The Standard Model (SM), while being extremely successful in describing interactions of quarks and leptons at low energies, still has many unanswered questions. Among these one problem is predominant: the unification of all elementary forces within the framework of a simple gauge theory and the ensuing hierarchy of mass scales in particle physics at large and small distances. Possible solutions to this problem are commonly related to supersymmetry (SUSY) [1] and Grand Unified Theories (GUTs) [2]. At present they receive some indirect (largely qualitative) experimental support from the apparent lightness of the Higgs boson, the values of gauge couplings given by precision measurements and the heavy top quark mass. At low energies the SUSY GUT turns into the Minimal Supersymmetric Standard Model (MSSM). Of course the MSSM can certainly be considered on its own as a simple SUSY extension of the Standard Model, leaving aside for the moment the question of unification. However, no matter which level of theory is considered, there is one point that crucially distinguishes SUSY from non-SUSY models. This is that they do not contain the automatic accidental symmetries, corresponding to baryon (B) and lepton (L) number conservation, which are present in the ordinary Standard Model. Does this mean that B and L number can be violated in the SUSY context, or must some special protecting symmetry be postulated in the MSSM? Usually, the requirement of B and L number conservation in the MSSM is indeed satisfied by postulating the existence of some multiplicative discrete symmetry called R-parity [1]. An exact R-parity (R_P) implies that SUSY particles should be produced in pairs and that the lightest SUSY particle (LSP) is stable.
On the other hand, there is no fundamental reason to prefer models with exact R_P over those with broken R_P in the framework of the supersymmetric SM, where not only fermions but also their scalar superpartners automatically become the carriers of lepton and baryon numbers. Thereby, among the basic renormalisable couplings in the low-energy MSSM superpotential, one would generally expect to find the lepton and baryon number violating ones

    ∆W = µ_i L_i H_u + λ_ijk L_i L_j E_k + λ′_ijk L_i Q_j D_k + λ′′_ijk U_i D_j D_k .   (1)

Here, i, j, k are generation indices and a summation is implied (colour and weak isospin indices are suppressed); L_i (Q_j) denote the lepton (quark) SU(2)-doublet superfields and E_i (U_i, D_i) are SU(2)-singlet lepton (up-quark, down-quark) superfields; µ_i are mass parameters which mix lepton superfields with the up-type Higgs superfield H_u, while λ_ijk (λ_ijk = −λ_jik), λ′_ijk and λ′′_ijk (λ′′_ijk = −λ′′_ikj) are dimensionless couplings. The first three terms in (1) violate lepton number, while the last violates baryon number.
While SUSY-inspired B number violation (BNV) leads in general to unacceptably fast proton decay and must be highly suppressed, SUSY-inspired L number violation (LNV) could readily occur at a level consistent with present experimental constraints, but large enough for the observation of some of its spectacular manifestations [3] at present or future colliders. Remarkably enough, instead of R_P, another (gauge) discrete symmetry could appear in the MSSM: superstring-inherited Z_3 baryon parity [4], which strongly protects B number and allows for L number violation only. Thus, as long as it is not in conflict with any phenomenology, SUSY-inspired lepton number violation merits further detailed investigation, both theoretically and experimentally.
From the theoretical point of view, the principal question concerns the search for a Grand Unified framework within which, while treating quarks and leptons equally, L-violation should be allowed at the same time as B-conservation. Unfortunately, the discrete symmetries acceptably protecting B-conservation while allowing L-violation in the MSSM, such as the above-mentioned Z_3 baryon parity, transform quarks and leptons differently and are thereby incompatible with the known GUTs. Nevertheless several extended GUT models have been constructed [5], where the coexistence of lepton number violation and baryon number conservation can in principle be arranged. This is typically achieved by introducing at the Planck scale high-dimensional operators, involving Higgs and matter multiplets in some exotic representations of the underlying GUT symmetry, and then imposing additional custodial symmetries to ensure that only the required set of LNV high-order operators is allowed. These operators become the renormalisable LNV couplings (1) at lower energies after the GUT symmetry breaks at the GUT scale M_GUT.
Despite some progress, one has the uneasy feeling that such a solution to this problem looks rather artificial, as it is generically correlated neither with the nature of the GUT nor with its breaking pattern. Instead, we suggest that it is just the breaking pattern of the underlying GUT symmetry which could give a fundamental reason for the difference in treatment of the baryon and lepton numbers of the matter particles involved in GUTs. We show that a suppression of baryon number violating interactions in the superpotential (1) naturally occurs in some SU(N) SUSY GUTs where a missing VEV vacuum configuration develops, which also gives a solution to the doublet-triplet splitting problem [6]. We construct explicit examples of R_P-violating SU(7) and SU(8) GUTs where the effective LNV couplings immediately evolve from the GUT scale, while the baryon number non-conserving ones are safely projected out by the missing VEV vacuum configuration breaking the GUT symmetry down to that of the MSSM. However, at the next stage when SUSY breaks, radiative corrections shift the missing VEV to some nonzero value of order M_SUSY and induce BNV couplings with hierarchically small coupling constants λ′′_ijk = O(M_SUSY/M_GUT), which appear to be of phenomenological interest [3].
Missing VEV solutions in SU(N) GUTs
The most elegant solution to the gauge hierarchy problem in supersymmetric SU(N) GUTs could well be related to the existence of a missing VEV vacuum configuration [7], according to which the basic adjoint scalar Σ^i_j (i, j = 1, …, N) does not develop a VEV in some of the directions in SU(N) space. Through its coupling with a pair of Higgs fields H and H̄, their masses are split in a hierarchical way so as to have light weak doublets breaking electroweak symmetry and giving masses to up and down quarks, on the one hand, and superheavy colour triplets mediating proton decay, on the other. However, it is well known [7] that a missing VEV solution cannot appear in SU(N) GUTs in the ordinary one-adjoint scalar case. This is due to the presence of a cubic term Σ³ in the general Higgs superpotential W, leading to the unrealistic trace condition Tr Σ² = 0 for the missing VEV vacuum configuration, unless there is a special fine-tuned cancellation between Tr Σ² and driving terms stemming from other parts of the superpotential W.
The two-adjoint alternative
So, it seems the only way to obtain a natural missing VEV solution in SU(N) theories is to exclude the cubic term Σ³ from the superpotential, by imposing some extra reflection symmetry on the adjoint supermultiplet Σ,

    Σ → −Σ .   (2)

On its own the elimination of the Σ³ term leads to the trivial unbroken symmetry case. However the inclusion of higher even-order Σ terms (supposedly inherited from superstrings or induced by gravitational corrections) in the effective superpotential leads to an all-order missing VEV solution, as was shown in recent papers [6]. Alternatively one can introduce another adjoint scalar Ω with only renormalisable couplings appearing in W. Let us consider briefly the high-order term case first. The SU(N) invariant superpotential W_A (3) for an adjoint scalar field Σ, conditioned also by the gauge Z_2 reflection symmetry (2), contains, in general, all possible even-order Σ terms scaled by inverse powers of the (conventionally reduced) Planck mass M_P = (8πG_N)^(−1/2) ≃ 2.4 · 10^18 GeV. It is readily shown that the necessary condition for any missing VEV solution to appear in the SU(N) ⊗ Z_2 invariant superpotential W_A is the tracelessness of all the odd-order Σ terms,

    Tr Σ^(2n+1) = 0 .   (4)

This condition uniquely leads to a missing VEV pattern of the type

    Σ = σ · diag(1, …, 1, −1, …, −1, 0, …, 0) ,   (5)

with k/2 diagonal entries +1, k/2 entries −1 and N−k zeros, where the VEV value σ is calculated using the Σ polynomial taken in W_A (3). The vacuum configuration (5) gives rise to a particular breaking channel of the SU(N) GUT symmetry,

    SU(N) → SU(k/2) ⊗ SU(k/2) ⊗ SU(N−k) ⊗ U(1) ⊗ U(1) ,   (6)

which we will discuss in some detail a little later. So we conclude from Eqs. (5, 6) that a missing VEV solution could actually exist, with the ordinary MSSM gauge symmetry SU(3)_C ⊗ SU(2)_W ⊗ U(1)_Y surviving at low energies, provided that N ≥ 7. The superpotential (3) can be viewed as an effective one, following from an ordinary renormalisable two-adjoint superpotential with the second heavy adjoint scalar integrated out. Hereafter, although both approaches are closely related, we deal for simplicity with the two-adjoint case. So let us consider a general SU(N) invariant renormalisable superpotential W_A (7) for two adjoint scalars Σ and Ω, also satisfying the gauge-type Z_2 reflection symmetry (Σ → −Σ, Ω → Ω) inherited from superstrings. Here the second adjoint Ω can be considered as a state originating from a massive string mode with the Planck mass M_P. The basic adjoint Σ may be taken at another well-motivated scale, m = O(10^13) GeV [8], where, according to many string models, the adjoint moduli states (1_c, 1_w), (1_c, 3_w) and (8_c, 1_w) (in a self-evident SU(3)_C ⊗ SU(2)_W notation) appear. In the present context these states can be identified as just the non-Goldstone remnants Σ_0, Σ_3 and Σ_8 of the relatively light adjoint Σ which breaks SU(N) in some way. However, all our conclusions remain valid for any reasonable value of m, which is the only mass parameter (apart from M_P) in the model considered.
As a general analysis of the superpotential W_A (7) shows [6], there are just four possible VEV patterns for the adjoint scalars Σ and Ω: (i) the trivial unbroken symmetry case, Σ = Ω = 0; (ii) the single-adjoint condensation, Σ = 0, Ω ≠ 0; (iii) the "parallel" vacuum configurations, Σ ∝ Ω; and (iv) the "orthogonal" vacuum configurations, Tr(ΣΩ) = 0. The Planck-mass mode Ω, having a cubic term in W_A, in all non-trivial cases develops a standard "single-breaking" VEV pattern (8), which breaks the SU(N) GUT symmetry accordingly. However, in case (iv), the basic adjoint Σ develops the radically new missing VEV vacuum configuration (5), thus giving a "double breaking" of SU(N) to (6). Using the approximation h, λ ≫ m/M_P, which is satisfied for any reasonable values of the couplings h and λ in the generic superpotential W_A (7), the VEV values of Σ and Ω are given by Eqs. (9), respectively. Surprisingly, just the light adjoint Σ develops the largest VEV in the model which, for a properly chosen adjoint mass m and coupling constant h, can easily come up to the string scale M_str (see [9]). Furthermore, as concluded above, one must consider SU(N) GUTs with N ≥ 7, in order to have the standard gauge symmetry SU(3)_C ⊗ SU(2)_W ⊗ U(1)_Y remaining after the breaking (6). As is easily seen from Eqs. (5, 6), there are two principal possibilities (10): the weak-component and the colour-component missing VEV solutions. If it is granted that the "missing VEV subgroup" SU(N−k) in (6) is just the weak symmetry group SU(2)_W, as is traditionally argued [7], one is led to SU(8) as the minimal GUT symmetry (N−k = 2, k/2 = 3) [6]. Another, and in fact the minimal, possibility is to identify SU(N−k) with the colour symmetry group SU(3)_C in the framework of an SU(7) GUT symmetry (N−k = 3, k/2 = 2) [9]. The higher SU(N) GUT solutions, if considered, are also based on just those two principal possibilities: the weak-component or colour-component missing VEV vacuum configurations respectively.
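As a quick numerical illustration of the tracelessness requirement behind the pattern (5), the following short sketch (our own consistency check, not code from the paper) verifies that a diagonal adjoint VEV with k/2 entries +σ, k/2 entries −σ, and N−k zeros has vanishing traces for all odd powers, taking the SU(7) colour-component case (N = 7, k = 4) as an example:

```python
import numpy as np

# Consistency check (illustrative): the diagonal missing-VEV pattern with
# k/2 entries +sigma, k/2 entries -sigma and N-k zeros satisfies
# Tr(Sigma^n) = 0 for every odd n, as required by the condition (4).
N, k, sigma = 7, 4, 1.0
Sigma = np.diag([sigma] * (k // 2) + [-sigma] * (k // 2) + [0.0] * (N - k))

for n in range(1, 10, 2):  # odd powers only
    assert abs(np.trace(np.linalg.matrix_power(Sigma, n))) < 1e-12
print("All odd-order traces vanish for the missing VEV pattern.")
```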
Let us see now how this missing VEV mechanism works to solve the doublet-triplet splitting problem in the SU(8) or SU(7) GUT with the superpotential W_A (7). It is supposed that there is a reflection-invariant coupling of the ordinary MSSM Higgs-boson containing supermultiplets H and H̄ with the basic adjoint Σ, but not with Ω, in the superpotential W_H (11). The second part W′_H contains possible mixings with other scalar fields, which are inessential for the moment. The superfields H and H̄ do not develop VEVs during the first stage of the symmetry breaking (6). Thereupon the first term in W_H turns into a mass term for H and H̄ determined by the missing VEV pattern (5). This vacuum, while giving generally heavy masses (of the order of M_GUT) to H and H̄, leaves their weak components strictly massless. To be certain of this, we must specify the multiplet structure of H and H̄ for both the weak-component and the colour-component missing VEV vacuum configurations, that is, in the SU(8) and SU(7) GUTs respectively. In the SU(8) case H and H̄ are fundamental octets whose weak components (ordinary Higgs doublets) do not get masses from the basic coupling (11). In the SU(7) case H and H̄ are 2-index antisymmetric 21-plets in which, after projecting out the extra heavy states (see Section 3.1), there is left just one pair of massless Higgs doublets as a consequence of the coupling (11). Thus, there is a natural doublet-triplet splitting in both cases and we also have a vanishing µ term at this stage. However, radiative corrections generate a µ term of the right order of magnitude at the next stage when SUSY breaks [6].
Projection to low energies
Missing VEV vacua, which ensure the survival of the MSSM at low energies, only appear in SU(N) GUTs with a symmetry group higher than that of the standard SU(5) model. In order not to spoil gauge coupling unification, the extra gauge symmetry should also be broken, SU(N) → SU(5), at the GUT scale. Then the following question arises: how can the missing VEV survive this extra symmetry breaking with at most a shift of order the electroweak scale? This requires, in general, that the superpotential (7) be strictly protected from any large influence from the N−5 scalars ϕ_k (k = 1, …, N−5) providing the extra symmetry breaking (or from uncontrollable gravitational corrections). Technically, such a custodial symmetry may be a superstring-inherited anomalous U(1)_A [10], which can naturally keep two sectors of the total superpotential separate and then induce a high-scale extra symmetry breaking through the Fayet-Iliopoulos (FI) D-term (12) [11]. Here the sum in (12) runs over all "charged" scalar fields in the theory, including those which do not develop VEVs and which contribute to Tr Q_A only. For realistic or semi-realistic models, Tr Q_A has turned out to be quite large, Tr Q_A = O(100) (see [12] for a recent discussion). Therefore, the spontaneous breaking scale of the U(1)_A symmetry and of the related extra gauge symmetry is naturally located at the string scale. The protecting anomalous U(1)_A symmetry is needed to keep the scalars ϕ^(k) and ϕ̄^(k) essentially decoupled from the basic adjoint superpotential W_A (7), so as not to strongly influence its missing VEV vacuum configuration (5). Otherwise potentially dangerous couplings could appear of the type ϕ̄^(k) Σ ϕ^(k), where the ϕ^(k) and ϕ̄^(k) scalar superfields are taken in pairs of conjugate fundamental representations (N and N̄) of SU(N). If these couplings actually appeared, they would give rise to large shifts in the missing VEV components of the adjoint scalar Σ^A_B, as directly follows from the minimisation condition for the scalar potential. So the presence of a protecting symmetry is essential for the missing VEV mechanism to function properly.
We will now enlarge on this key point in order to gain a better understanding of the missing VEV approach. The symmetry-protected separation of the adjoint scalar and the ϕ_k scalar sectors in the total superpotential implies the appearance of an accidental global symmetry SU(N)_(Σ−Ω) ⊗ U(N)_ϕ in the SU(N) ⊗ U(1)_A gauge theory considered. This global symmetry is in turn radiatively broken, resulting in a set of pseudo-Goldstone (PG) states of the type 5 + 5̄ + SU(5)-singlets (13), which gain a mass at the TeV scale where SUSY softly breaks [6]. There can be a maximum of N−5 families of PG states of the type (13), corresponding to the case where the scalars ϕ^(k) and ϕ̄^(k) are only allowed to appear in the Higgs potential through the basic SU(N) and U(1)_A D-terms. In this case the U(N)_ϕ global symmetry is increased to U(N)_(ϕ^(1)) ⊗ … ⊗ U(N)_(ϕ^(N−5)). This case would occur if the U(1)_A charges of the bilinears ϕ̄^(k) ϕ^(k′) were all positive (or negative), so that they could not appear in the SU(N) ⊗ U(1)_A invariant superpotential in any order. However, in a properly extended model it is possible for the adjoint and fundamental scalar sectors in the superpotential to overlap without disturbing the adjoint missing VEV configuration. This naturally occurs when the scalars ϕ^(k) are conditioned by the U(1)_A symmetry to develop orthogonal VEVs along the "extra" directions (14). As a result, some safe non-diagonal couplings ϕ̄^(m) Σ ϕ^(n) are generated between the two sectors, giving contributions to the pseudo-Goldstone masses which leave only one light PG family (13). Let us consider this possibility in some detail. The least restrictive choice of such safe mixing terms for the general SU(N) case is achieved by introducing two sets of new singlet scalar superfields, S_mn and T_mn, with non-diagonal couplings of the type W_mix (15), which are also invariant under the reflection symmetry Σ → −Σ, T_mn → −T_mn. The coupling constants a_mn and b_mn are all of order O(1) and O(1/M_P) respectively, and the (N−5)(N−6)/2 singlet scalars S_mn and T_mn (m < n) get their VEVs through the FI D-term (12), as do all the ϕ and ϕ̄ scalars. One can consider the fields S_mn as the basic carriers of the U(1)_A charges Q_mn, which are all taken positive in the model (the fields T_mn carry the same charges Q_mn). The U(1)_A charges of the ϕ̄^(m) ϕ^(n) bilinears (m < n) appearing in W_mix are then determined to be −Q_mn, while the charges of all the other bilinears, diagonal ϕ̄^(m) ϕ^(m) and non-diagonal ϕ̄^(n) ϕ^(m), can always be chosen positive. This implies that any terms containing ϕ and ϕ̄ scalars can only appear in the superpotential if they also include the bilinears ϕ̄^(m) ϕ^(n), so as to properly compensate the U(1)_A charges. However, for a vacuum configuration where the orthogonality conditions ⟨ϕ̄^(m) ϕ^(n)⟩ = 0 naturally arise, such terms do not lead (in any order) to the dangerous ϕ̄^(k) Σ ϕ^(k) couplings, although they can contribute to the pseudo-Goldstone masses. In fact these orthogonality conditions are satisfied at the SUSY-invariant global minimum of the Higgs potential, as follows from the vanishing F-terms (16) of the superfields S_mn (T_mn), ϕ^(m) and ϕ^(n) involved in (15) (no summation is implied). Here the orthogonal VEV values (14) of the scalars ϕ^(n) have been used. One can now readily see that non-diagonal mass terms appear for the PG states related to the multiplets ϕ^(m) and ϕ^(n), forming the mass matrix (17), where I is the N×N unit matrix.
Diagonalisation of the mass matrix (17) explicitly shows that one PG superposition 5 + 5̄ (13) is left massless, while the others become heavy. This is in fact a general consequence of the symmetry breaking pattern involved. The point is that neither of the other mass terms M_mm and M_nm can be allowed by the U(1)_A symmetry for any generalisation of the superpotential W_mix (15). Otherwise the dangerous ϕ̄^(k) Σ ϕ^(k) couplings inevitably appear as well. So, one can conclude that even in the general case one PG family of the type (13) always exists. Together with the ordinary quarks and leptons and their superpartners, these PG states, both bosons and fermions, determine the particle spectrum at low energies. In most of what follows, the existence of just one family of PG states at the sub-TeV scale will be assumed. We consider below both of the minimal possible GUTs, SU(7) and SU(8), with the missing VEV solution naturally allowing the survival of the MSSM down to low energies. Whereas the SU(7) model is taken as an ordinary one-family unifying GUT [9], the SU(8) model can include unification of the quark-lepton families as well [13].
3 One-family unifying GUT: SU(7)
By analogy with the standard SU(5) model, we take the simplest anomaly-free set of matter fields, consisting of a combination of the fundamental and 2-index antisymmetric representations of the SU(7) gauge group (18). There is also a set of Higgs superfields, among which are the two already mentioned adjoint Higgs multiplets Σ^A_B and Ω^A_B, responsible for the breaking (6, 9) of SU(7), and a conjugate pair of multiplets H^[AB] and H̄_[AB] (the 21-plets of SU(7)) where the ordinary electroweak doublets H_u and H_d reside. Besides, as in the general SU(N) case (see Section 2.2), there should be extra-symmetry breaking scalar superfields ϕ^(p) and ϕ̄^(p) (p = 1, 2), which are taken to be fundamental septets and anti-septets respectively. They are supposed to develop their string-scale order VEVs along the "extra" directions (19) only through the FI D-term (12) related to the U(1)_A symmetry. The protecting anomalous U(1)_A symmetry keeps the ϕ scalars decoupled from the basic adjoint superpotential W_A (7), so as not to strongly influence the missing VEV solution (5) through dangerous couplings of the type ϕ̄^(p) Σ ϕ^(p). With the given assignment of matter and Higgs superfields, the particle spectrum at low energies looks as if one had just the standard SUSY SU(5) as a starting GUT symmetry, except that one family of PG states of type (13) appears when a missing VEV vacuum configuration develops in the SU(7) GUT. With this exception, all the other SU(7)-inherited states in matter and Higgs multiplets acquire GUT-scale masses due to symmetry breaking, thus completely decoupling from low-energy physics. We demonstrate this for the Higgs sector in the next sub-section.
Higgs sector
We now show that all the states, except for one pair of weak doublets in the basic Higgs multiplets H^[AB] and H̄_[AB], become superheavy. First, one substitutes the colour-component missing VEV solution, obtained from the general case (5) by setting N = 7 and k = 4, into the superpotential (11). Superheavy masses are thereby generated for most of the components of the H and H̄ multiplets. However, the states listed in (20) (with weak, colour and extra-symmetry components explicitly indicated) still remain massless at this stage of the SU(7) symmetry breaking (6). Therefore one of the two pairs of weak doublets in (20), as well as the colour triplets, must further become heavy in order to recover the ordinary picture of the MSSM at low energies. This happens as a result of mixing H and H̄ with the specially introduced heavy scalar supermultiplets Φ^[ABC] and Φ̄_[ABC] (the 35-plets of SU(7)) in the basic Higgs superpotential W′_H (21) (f, f̄ and y are dimensionless coupling constants), when the scalars ϕ get their VEVs, thus breaking the extra gauge symmetry. The presence of the "conjugated" Φ−H̄ and Φ̄−H mixings in W′_H could allow the dangerous ϕ̄Σϕ terms, destroying the missing VEV solution, unless the bilinear term Φ̄Φ has nonzero U(1)_A charge. Therefore, this term appears in W′_H together with the singlet scalar superfield S, the basic U(1)_A charge carrier introduced earlier in W_mix (15) in a general SU(N) context (for SU(7) there appears only one pair of such singlets, S and T).
It should be clear now that the W′_H couplings (21) will rearrange the mass spectrum of the states (20), so as to leave just one pair of massless weak doublets, as needed for the MSSM. By diagonalising the 2×2 mass matrix for the states H^[cc′] and H̄_[cc′] and the double-coloured components Φ^[cc′6] and Φ̄_[cc′6], the mass of the colour triplet components in (20) is found to be of order (22), where the combination of the primary coupling constants f, f̄ and y can be taken O(1). In much the same way all the additional states in the SU(7) matter multiplets (18) become superheavy during the initial GUT symmetry breaking SU(7) → SU(5) [9].
Yukawa couplings
The usual dimension-4 trilinear Yukawa couplings are forbidden by SU(7) gauge invariance. So we suppose that all the generalized Yukawa couplings, the R_P-conserving ones (the ordinary up and down fermion Yukawas) as well as the R_P-violating ones allowed by the SU(7) ⊗ U(1)_A symmetry, are given by a similar set of dimension-5 operators of the form (23-25) (i, j, k = 1, 2, 3 are the generation indices; the SU(7) indices A, B, C = 1, …, 7 are hereafter omitted). Further, substituting the VEVs of the scalars Σ (5) and ϕ (19) into the basic operators (23-25), one obtains at low energies the effective renormalisable Yukawa and LNV interactions with coupling constants (26). At the same time the baryon number non-conserving couplings λ′′_ijk completely disappear. The crucial point is that the adjoint field Σ develops a VEV configuration with strictly zero colour components (5) in the SUSY limit. When SUSY breaks, radiative corrections will shift the missing VEV components of Σ to nonzero values of order M_SUSY, thus inducing the ordinary µ-term of the MSSM, on the one hand, and baryon number violating interactions with hierarchically small coupling constants of order M_SUSY/M_GUT, on the other.
The effective dimension-5 interactions (23-25) could be generated by the exchange of some heavy states, such as massive string modes. When generated by the exchange of the same superheavy multiplet (that is, a vector-like pair of fundamental septets 7 + 7̄), the resulting operators (24) and (25) have effective coupling constants (26) aligned in flavour space [15], as expressed by the relation (27). The parameters ǫ_k (k = 1, 2, 3) include some known combination of the primary dimensionless coupling constants and a ratio of the VEVs of the scalars Σ and ϕ. This relation (27) further splits into separate relations for the charged lepton (cl) and down quark (dq) LNV couplings, respectively, when evolved from the SU(7) scale down to low energies. So we see that the possible common origin of all the generalised Yukawa couplings, both R_P-conserving and R_P-violating, at the GUT scale results in some minimal form of lepton number violation, provided that the appropriate heavy-state mediator exists. As a result, we are driven to a simple picture where the flavour structure, as well as the hierarchies of the trilinear LNV couplings in ∆W (1), are essentially aligned with the down quark and charged lepton mass and mixing hierarchies. At the same time, the effective bilinear LNV terms appear to be generically suppressed by the custodial U(1)_A symmetry (for a detailed exposition see a recent paper [15]).
At low energies, the minimal LNV model presented here can be viewed as an alternative to another minimal model based on the MSSM, in which only the bilinear LNV terms µ_i L_i H_u in ∆W (1) are included [5,16]. Depending on the U(1)_A charges assigned to the matter and Higgs superfields involved, one can generically obtain at low energies either the bilinear model or the trilinear one considered here. The bilinear model also leads to LNV-Yukawa coupling alignment, by virtue of which many predictions of both models concerning quark flavour conservation are very similar [15]. However, there are principal differences as well. The point is that the influence of the SUSY soft-breaking sector, being predominant for the bilinear model, is quite negligible for the present one. Therefore, the LNV-Yukawa alignment, while appearing in both models, leads in the latter case to distinctive flavour-dependent relations between various LNV processes arising from slepton and squark exchanges (which are basically conditioned by the quark and lepton mass hierarchy) [15]. By contrast, in the bilinear model these processes appear to be essentially determined by W and Z boson exchanges and, as a result, are largely flavour-independent. On the other hand, the bilinear model has a serious problem of extension to the GUT framework. Any such extension leads, together with a lepton mixing with a weak Higgs doublet, to a quark mixing with a colour Higgs triplet, thus inducing baryon number violation as well. The only handle one has to address this problem seems to be the use of electroweak-scale masses µ_i in the GUT-symmetry invariant bilinear couplings. Their use would mean that new fine-tuning conditions, besides the ordinary gauge hierarchy one, should be satisfied in a very ad hoc way.
An extended discussion of the properties of the SU(7) GUT, including the solution to the doublet-triplet splitting problem, string-scale unification, proton decay, the hierarchy of baryon vs lepton number violation, and neutrino masses, can be found in our recent paper [9].
4 Three-family unifying GUT: SU(8)
It is tempting to treat the extra gauge symmetry in a general SU(N) GUT as a flavour symmetry. If so, according to the particular solution (5) for the weak-component missing VEV configuration, the numbers of fundamental colours and flavours must be equal (n_C = n_F = k/2) for any even-order SU(N) group, among which the minimal one is SU(8) (n_C = n_F = 3). Thus, in the SU(8) case, the missing VEV configuration requires an additional colour-flavour symmetry (28). Having considered the basic matter superfields (quarks and leptons and their superpartners), the question naturally arises of whether the above flavour symmetry SU(3)_F is really their family symmetry. Needless to say, among many other possibilities, the special assignment treating the families as a fundamental triplet of SU(3)_F is the most attractive. In such a case the anomaly-free set of SU(8) antisymmetric multiplets (29) (in a self-evident notation; A, B, C = 1, 2, …, 8) is singled out, if we require that after flavour symmetry breaking only three massless families of ordinary quarks and leptons (and their superpartners) are left as chiral triplets of SU(3)_F (30). The remaining SU(5) ⊗ SU(3)_F components, in these as well as in the other multiplets (29), acquire heavy masses of order M_F ∼ M_GUT (this anomaly-free set can be obtained from a set of SU(11) multiplets [17] after the symmetry breaking SU(11) → SU(8) and the exclusion of all the multiplets conjugated under SU(8), except the self-conjugated 70^[ABCD]). So, one arrives at the chiral SU(3)_F family symmetry case [18], which leads to a natural conservation of flavour both in the particle and sparticle sectors. Furthermore, there is a universal see-saw mechanism in the SU(8) model, with heavy intermediate states provided by the multiplets (29), which induces non-trivial fermion mass matrices with many texture ansätze available. So the observed pattern of quark and lepton masses and mixings can appear once the electroweak SU(2) ⊗ U(1)_Y symmetry breaks [13]. At the same time, by analogy with the SU(7) case (see (25)), the only R_P-violating coupling allowed by the SU(8) ⊗ U(1)_A symmetry is supposed to be given by a single dimension-5 operator O_rpv. Here the matter fields Ψ and Ψ̄ belong to the basic multiplets (30). One can see now that the weak-component missing VEV solution for Σ (5), when substituted into the operator O_rpv, leaves only the LNV couplings and projects out the baryon number violating ones. At low energies the surviving effective couplings take a purely trilinear LNV form, where α, β, γ = 1, 2, 3 are the generation indices, belonging to the family SU(3)_F symmetry, and r is a factor giving the relative coupling constant renormalisation after evolution from the SU(8) scale to low energies. So, as in the SU(7) case, one has baryon number conservation at the same time as lepton number violation in the SUSY limit. Meanwhile, despite their common origin, there is a principal difference between the SU(7) and SU(8) cases. The point is that the mass ratio M_3/M_8 of the basic adjoint Σ moduli appears, according to the missing VEV vacua (5), to be 2 and 1/2 for SU(7) and SU(8) respectively. As was shown in recent papers [9,14], this ratio essentially determines the high-energy behaviour of the MSSM gauge couplings. In fact it follows that the unification scale in SU(7) is pushed to the string scale [9], while the unification scale in SU(8) ranges, at best, close to the standard unification value [6].
Conclusions
The absence of automatic global conservation laws in SUSY theories, in contrast to the Standard Model, is frequently considered a drawback of supersymmetry. Meanwhile, phenomenologically, whereas SUSY-inspired B number non-conservation must be highly suppressed, SUSY-inspired L number violation could occur at a level large enough for the observation of its many spectacular manifestations [3,15]. One of these manifestations may be the sizeable atmospheric neutrino oscillations recently reported [19], according to which one of the neutrino species is expected to have a mass at least of order 0.1 eV. That means, in general, that the particle content of the MSSM or the minimal SU(5) SUSY GUT should be extended to include new states, that is, fundamentally heavy right-handed neutrinos or even light sterile left-handed ones. Neutrino masses per se do not yet give any conclusive evidence in favour of SUSY theories. However, sizeable LNV in the charged lepton sector and, of course, in the decays of the lightest supersymmetric particle [15], if actually observed, could qualify as generic SUSY-inspired phenomena. In such a situation the following question would arise, which should be addressed within the framework of Grand Unification rather than the MSSM: what could stand behind such a tremendous hierarchy of lepton vs baryon number violation?
In this connection we suggested that the nature of the global conservation laws in SUSY theories is determined by the basic vacuum configuration which breaks the underlying GUT symmetry. Following this idea, we have argued that GUTs with a natural missing VEV solution to the doublet-triplet splitting problem could, simultaneously, provide the reason for treating lepton and baryon number carrying matter fields differently. We have shown that missing VEV vacuum configurations, ensuring the survival of the MSSM gauge symmetry at low energies, only emerge in extended SU(N) GUTs with N ≥ 7. Further, the one-family unifying SU(7) and the three-family unifying SU(8) GUTs have been constructed. In both cases the effective LNV couplings immediately evolve from the GUT scale, while the baryon number non-conserving ones are safely projected out by the missing VEV vacuum configuration, which breaks the starting GUT symmetry down to that of the MSSM. However, at the next stage when SUSY breaks, radiative corrections shift the missing VEV to some nonzero value of order M_SUSY, thus inducing the ordinary µ-term of the MSSM, on the one hand, and BNV couplings with the hierarchically small constants λ′′_ijk = O(M_SUSY/M_GUT), on the other. So, a missing VEV solution to the gauge hierarchy problem leads, in a literal sense, to the same hierarchy of baryon vs lepton number violation.
|
2014-10-01T00:00:00.000Z
|
2000-04-10T00:00:00.000
|
{
"year": 2000,
"sha1": "e16b87c51e285947f3911cb388898d1ac41b4824",
"oa_license": null,
"oa_url": "http://arxiv.org/abs/hep-ph/0004090",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f2351390e3572610ad4cd3f82af75eb270c05a85",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
260972887
|
pes2o/s2orc
|
v3-fos-license
|
Deposit morphology and structure under interactions of sliding mass and erodible layers: experimental insights
Landslides are geological disasters of great concern, occurring with complex motion processes and mechanisms. They often severely affect human life and property located in their pathways. In some circumstances, the geological phenomena and structural features generated by the interactions between landslides and their substrates remain unclear, which makes their effects difficult to forecast and mitigate. In this study, a sandbox experiment was conducted to study the velocity and displacement of the sliding mass, the geometry of the deposit, and the internal and external structural characteristics of the deposit under the interactions between the sliding mass and the erodible layer, by varying the depth of the erodible layer. Results show that the motion process of the sliding mass consists of three stages: falling, shovel push-extrusion, and push-nappe accumulation. In the first stage, the velocity of the sliding mass increases sharply to a peak before it collides with the erodible layer. In the latter two stages, the mobility of the landslide is greatly limited by the erodible layer at the foot of the inclined plate, and a secondary acceleration of the sliding mass is observed. The deposits were divided into three zones (Ia, Ib, and II) according to the morphological and structural characteristics of their positions. The action forms were mainly pushing in zone II and covering in zone I. Phenomena such as strata inversion, pushover, and entrainment occurred within the deposits, while folds, ridges, and bulges occurred on the deposit surfaces. These structural characteristics reflect the stress states of the laboratory landslides in motion, from compression to shearing. The results of this research provide a valuable theoretical reference for calculating the disaster range when erodible layers exist in landslides' motion paths.
Introduction
Landslides, a common geological phenomenon, can spread rapidly over complex terrain and substrates with different lithologies after destabilization, leading to property loss and casualties (Roberts et al. 2021; Roche et al. 2011). Erosion and entrainment of the substrate are often observed during the motion of landslides, increasing their volume significantly (Haas and Woerkom 2016; Lucas et al. 2014; Mergili et al. 2020). Despite the increasing number of experimental, field, and numerical studies on the interactions between landslides and their erodible layers (Crosta et al. 2017; Duan et al. 2023; Iverson et al. 2011; Peng et al. 2018), a systematic study on the stratigraphic structure evolution of deposits at different depths of erodible layers is still lacking. Field investigations are the basis for analyzing the interactions between landslides and their substrates. Geophysical granular flows interact with their substrates in different ways, depending on the mechanical properties of the underlying material (Farin et al. 2014). The mobility and deposit areas of landslides are significantly influenced by erosion (Sovilla 2004), which occurs preferentially along a sloping path, leading to the scraping and erosion of sediment along the path (Conway et al. 2010). However, erosion is also observed along sub-horizontal substrates (Farin et al. 2014; Mangeney et al. 2010). Previous field investigations and laboratory studies on basal erosion have suggested that the erosion action of a landslide depends on its lithology, mechanical conditions, and the geomorphology of the motion path (Iverson 2012; McCoy et al. 2012). Geomorphology and landforms strongly influence the lateral spreading of landslides. Moreover, the process of bed entrainment can lead to different propagation paths of the slide body in front of the mountain (Cuomo et al. 2016).
Erosion can occur in landslide substrates with different lithologies. When the substrate is rocky, structural phenomena, such as diapiric intrusions, convoluted laminations, faults, recumbent asymmetrical folds, and broken boulders, easily form in the interior regions of the deposits (Zeng et al. 2020). These phenomena are considered typical evidence of compression (Dufresne et al. 2009). Other structural phenomena can be found in the substrates, such as faulted, folded, or strongly distorted strata, which suggests strong shear coupling at the flow base (Dufresne 2012; Farin et al. 2014). When the landslide substrate is composed of glacial residue, the sliding mass exhibits higher mobility, primarily due to concomitant ice melting and water entrainment during motion (De Blasio 2014; Sosio et al. 2012). Typical evidence of erosion can also be found in the deposits, such as boulders with internal shear fracture surfaces, large fragments, striations, and furrows (Dufresne et al. 2019). Similar structural phenomena have also been found in loess landslides (Duan et al. 2018; Xue et al. 2021). The structural phenomena induced by landslides are an important indicator of their motion process and force characteristics (Hungr et al. 2013; Rana et al. 2016). However, due to the complex terrain, infrequent occurrence, high cost, and long measurement time, available field data are scarce (Bai et al. 2021; De Haas et al. 2020; Dietrich and Krautblatter 2019). For this reason, physical model experiments have been conducted to study the movement process and internal structural characteristics of landslides. Physical experiments (De Haas et al. 2020; Dowling and Santi 2014; Hu et al. 2021; Iverson et al. 2011) demonstrate the important effects of erosion phenomena in landslides. Reproducing natural phenomena through laboratory experiments is a common technique for studying granular flows and has achieved good performance (Delannay et al. 2015; Duan et al. 2022; Wu et al. 2022). Two facilities are commonly used: steady uniform flows of granular material over an inclined bed, and landslide geometries with erodible beds on a horizontal plane. For the former, the erosion process of landslides has been quantified with erodible materials on the inclined bed (Iverson et al. 2011; Mangeney et al. 2007). Mangeney et al. (2010) revealed that landslide runout increases linearly with the depth of the erosion bed when the slope angle is larger than a critical one, namely the internal friction angle; the existence of an erodible bed does not affect the accelerating stage of landslides, but rather the decelerating stage. For the latter, granular collapse experiments make it possible to establish scaling laws relating deposit features to the initial geometries of the sliding mass (height, radius, and aspect ratio) (Balmforth and Kerswell 2005; Lajeunesse et al. 2005). Other researchers have focused on the erodible material and its laying position as experimental variables, recording the landslide accumulation landforms (Crosta et al. 2017; Dufresne 2012; Lacaze et al. 2008; Shea and van Wyk de Vries 2008). In the study of Crosta et al. (2017), the evolution, dynamics, erosion, and deposit modes of landslides were studied by changing the slope angle and sliding material, and the internal structures of the deposit were well reproduced with a three-colored sand layer of 21 mm. However, how the internal structures vary for thinner and thicker erodible strata remains an unresolved question.
We therefore raise the following questions: will the internal structures of the deposit become more intensive; does increasing the thickness of the erodible bed control the entrainment and hence the mobility of landslides; and what is the relation between the landslides' internal and surface structures?
To address these questions, we vary the thickness of the erodible bed on the horizontal plane of the model experiments. To this end, the objectives of this study are to (1) identify the influence of the depth of the erodible layer on the velocity and displacement of landslides; (2) clarify the influence of the depth of the erodible layer on deposit morphology; (3) ascertain the internal and external structural characteristics of the deposit under the interactions of the sliding mass and erodible layer; and (4) generalize the motion process of a landslide with respect to an erodible layer in its motion path.
Experimental equipment and materials
In this study, a sandbox experimental device was designed to study the process of landslides impacting their erodible substrate. The experimental device consisted of a physical model system and a monitoring system. The physical model system included an inclined plate, a horizontal basin, and a sand container. The inclined plate was 1.5 m × 1.2 m, and its angle could be adjusted using a bracket. The horizontal basin was also 1.5 m × 1.2 m, with curbs on both sides. The sand container, used to hold the sliding mass, had a side-by-side door, and the height of its center of gravity could be adjusted along the sand container track. The monitoring system included one 3D scanner and two high-speed cameras. The 3D scanner had a capture rate of 8 frames/s, used to obtain the detailed motion process of the sliding mass and the digital elevation model (DEM) data of the deposit. The two high-speed cameras (120 frames/s, 0.4 MPix resolution) were used to record the profile and bird's-eye images, respectively, during the movement of the sliding mass. The positions of the aforementioned monitoring equipment are shown in Fig. 1.
A dry medium-fine quartz sand was used as the medium for both the sliding mass and the erodible layer (Fig. 2). Pre-experiments showed that this sand exhibits fluidization: it cannot easily maintain its initial geometry and immediately collapses and propagates once constraints are removed during motion (Duan et al. 2020). The coefficient of nonuniformity, coefficient of curvature, average diameter, and specific surface area were 2.39, 1.19, 0.2 × 10⁻³ m, and 0.02 m²·kg⁻¹, respectively. The accumulative percentage of particle sizes in the range of 0.075-0.5 mm was 87.71%; the distribution curve is shown in Fig. 3. The internal friction angle φ and cohesion c obtained through direct shear tests were 31.5° and 0 kPa, respectively; the corresponding friction coefficient was 0.613. In additional direct shear tests, a Plexiglas disc (φ61.8 mm × 10 mm) was placed in the lower shear box; with this setup, the interfacial friction coefficient between the Plexiglas and the medium-fine sand was measured as 0.403. To better distinguish the stratum structure, the quartz sand used in this study was colored sand. The physical and mechanical properties of the different colored sands are consistent with one another.
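As a quick consistency check on these index properties, the sketch below evaluates the standard grain-size indices and the friction coefficient implied by the internal friction angle. The representative grain sizes d10, d30, and d60 are assumed values chosen to be compatible with the quoted Cu and Cc; they are not reported above.

```python
import math

# Assumed representative grain sizes (mm); chosen to match the reported
# indices, since only Cu and Cc are quoted in the text.
d10, d30, d60 = 0.105, 0.177, 0.251

Cu = d60 / d10               # coefficient of nonuniformity, ~2.39
Cc = d30**2 / (d60 * d10)    # coefficient of curvature, ~1.19

phi = 31.5                          # internal friction angle (deg), from direct shear tests
mu = math.tan(math.radians(phi))    # friction coefficient = tan(phi), ~0.613

print(f"Cu = {Cu:.2f}, Cc = {Cc:.2f}, mu = {mu:.3f}")
```

The tan(φ) value reproduces the 0.613 friction coefficient quoted above, a useful sanity check that the two numbers describe the same material.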
Experimental methods
The angle between the inclined plate and the horizontal basin was 60°, the height of the center of gravity of the sandbox H was 1 m, and the volume of the sliding mass was 3.6 × 10⁻³ m³. When filling sand into the sand container, the mass and volume of sand were controlled to ensure that the initial density of the sliding mass remained constant. Erodible layers with depths of 0 mm, 4 mm, 8 mm, 12 mm, 16 mm, 20 mm, and 24 mm were paved on the horizontal basin; 0 mm indicates that there was no erodible layer on the horizontal basin. The erodible layer was paved using a sand spreader (Fig. 4a). During this process, the colored sand was first placed into the funnel of the sand spreader, and the baffle was then pulled slightly to create an even gap at the bottom of the spreader. The colored sand in the funnel flowed out to form a flat erodible layer as the scraper moved forward along the curbs. The depth of the erodible layer was controlled by adjusting the elevation setter. The thickest erodible layer (24 mm) consisted of six layers of colored sand, blue, purple, gray, red, green, and blue from top to bottom; each layer was 4 mm thick (Fig. 4b, c).
The sliding mass was released from the bottom of the sand container when the switch of the side-by-side door was opened. The motion process and the digital deposit morphology of a laboratory landslide were recorded by the monitoring system. To ascertain the experimental conditions and characteristics of the deposits, we defined the height of the center of gravity of the sand container (H), the angle of the inclined plate (α), the total sliding distance of the sliding mass (L), the depth of the erodible layer (D e ), the length of the deposit (L d ), the width of the deposit (W d ), the depth of the deposit (D d ), the eroded width (W e ), and the eroded length (L e ). The total sliding distance of the sliding mass (L) spans from the bottom of the sand container to the front of the sliding mass along the plates, as shown in Fig. 5.
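For bookkeeping across the seven test conditions, the quantities defined above map naturally onto a small record type. The sketch below is our own illustrative shorthand, not part of the original experimental workflow; the field names simply mirror the symbols in the text.

```python
from dataclasses import dataclass

@dataclass
class RunGeometry:
    # Fixed setup (mm and degrees)
    H: float          # height of the sand container's center of gravity
    alpha: float      # angle of the inclined plate
    D_e: float        # depth of the erodible layer
    # Measured outcomes (mm), filled in after each run
    L: float = 0.0    # total sliding distance along the plates
    L_d: float = 0.0  # deposit length
    W_d: float = 0.0  # deposit width
    D_d: float = 0.0  # deposit depth
    W_e: float = 0.0  # eroded width
    L_e: float = 0.0  # eroded length

# One run per erodible-layer depth tested in this study
runs = [RunGeometry(H=1000.0, alpha=60.0, D_e=d) for d in (0, 4, 8, 12, 16, 20, 24)]
```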
The motion direction of the sliding mass and its perpendicular direction were defined as the y-axis and x-axis, respectively. After the motion of the sliding mass ceased, a transparent L-shaped piece of Plexiglas was used to visualize the cross sections along the y and x directions of the deposits, which made it possible to observe the internal structure and the interactions between the sliding mass and erodible layer. The x- and y-axis cross-sectional images of the deposit were recorded using a camera (Fig. 1).

Fig. 4 a, the paving process of erodible layers; c, the completed erodible layer of 12 mm shown as an example, with each layer being 4 mm deep.
Limitations of laboratory testing
Due to the scale effect, some important physical and mechanical processes in field landslides could not be reflected in the model experiments, including the dynamics, electrostatic phenomena (Iverson et al. 2004), rock breakage, and seismic effects (Davies and McSaveney 1999) of large landslides. Different structural geological phenomena can be generated under the influence of the slope angle, the material characteristics, and the topographical friction coefficients. Unlike field landslides, which do not accelerate continuously and are frequently accompanied by rock fragmentation, the slope angle and test material selected here are fixed and cannot be completely consistent with the phenomena in practice. Besides, the complex boundary conditions and diverse material properties of field landslides are difficult to reproduce in model experiments (Delannay et al. 2015), which makes it difficult to clarify how a single factor influences landslides' evolution. In this study, we simplified the boundary conditions and material selection, which facilitates testing repeatability (Dufresne 2012; Mangeney et al. 2010). The simplified experimental apparatus and materials are beneficial for systematically studying the effect of the erodible layer depth on landslide motion and deposit characteristics. In addition, the experimental results are easier to describe and quantify.
The boundary conditions of this study impose no side constraints. Our experiments were carried out at 60°, because the kinematic parameters of sliding masses are more discernible at this angle (Li et al. 2021). When the angle does not exceed 60°, the potential energy of the sliding mass is not converted sufficiently into kinetic energy, so the mass has less impact on the erodible layer and does not facilitate extensive development of the internal structure. In our pre-experiments, the erodible layer could not be penetrated completely at 24 mm thickness, and the deposit boundary was clearer at 3.6 × 10⁻³ m³; this thickness and volume were therefore chosen. At a sand container height of 1 m, the sliding mass possesses adequate kinetic energy at the slope break to interact with the erodible bed and produce more observable internal and external deposit structures. The selected material has been demonstrated to share the low rheological strength and deposit structures of natural events (Crosta et al. 2017; Duan et al. 2022; Manzella and Labiouse 2009), and was therefore used as the experimental material.
Characteristics of the movement of the sliding mass
According to the changing characteristics of the sliding mass, the motion process consisted of three stages: falling, shovel push-extrusion, and push-nappe accumulation. In the falling stage, the velocity of the sliding mass could reach 2.7-3 × 10³ mm/s. In the shovel push-extrusion stage, the velocity of the sliding mass decreased rapidly when the mass collided with the erodible layer (or the bare board). The greater the depth of the erodible layer, the faster the velocity decreased after the collision. The velocity of the sliding mass decreased by about 79.3% within 0.125 s after the collision when D e was 24 mm. By contrast, the velocity decreased from 3024.35 to 1572.64 mm/s within 0.125 s after the collision when the horizontal basin contained no erodible layer; under this condition, the velocity drop after the collision at the slope break was significantly smaller than when an erodible layer was present (Figs. 6 and 7).
In the push-nappe accumulation stage, the interaction between the sliding mass and erodible layer weakened. The sliding mass propagated forward at a relatively low velocity, which slowly decreased until it stopped. It should be noted that a secondary acceleration of the sliding mass was discerned in this stage when D e was 4 mm, 12 mm, 16 mm, or 20 mm. From 0.875 s to 1.0 s, the velocity increase due to the secondary acceleration was 421.6 mm/s and 281.6 mm/s at D e = 4 mm and D e = 12 mm, respectively. However, the secondary acceleration occurred relatively late and its magnitude was smaller when D e = 16 mm and 20 mm. After the transient secondary acceleration, the velocity decreased again. No secondary acceleration occurred when D e = 0 mm or D e > 20 mm (Figs. 6 and 7).
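Velocities such as those quoted above are obtained by differencing tracked front positions between frames of the 120 frame/s side camera. The sketch below illustrates the idea on a synthetic position track; the positions are assumed, and only the frame rate comes from the setup described earlier.

```python
import numpy as np

fps = 120.0
dt = 1.0 / fps

# Synthetic front-edge positions (mm) mimicking the secondary-acceleration
# signature reported for D_e = 4-20 mm: velocities drop, recover, then drop.
x = np.array([0.0, 23.0, 44.0, 62.0, 78.0, 96.0, 117.0, 140.0])

v = np.gradient(x, dt)   # central-difference velocities (mm/s)
a = np.gradient(v, dt)   # accelerations (mm/s^2)

# A transient sign change of the acceleration from negative to positive
# during the push-nappe stage marks a secondary acceleration.
idx = np.where((a[:-1] < 0) & (a[1:] > 0))[0]
print(v.round(1), "secondary acceleration near frame(s):", idx)
```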
Characteristics of deposit morphology
The contour map and the photo of the deposit were superimposed, as shown in Fig. 8. When the horizontal basin was paved with an erodible layer, the surface morphology of the deposits resembled a crescent-shaped sand dune, with a distinct upheaval observed at the front edge of the deposits. Taking the upheaval as a boundary, each deposit was divided into a forward slope and a reverse slope. With increasing D e, the extent of the deposit gradually decreased from 3.0 × 10⁵ to 1.5 × 10⁵ mm², the upheaval gradually approached the slope break, and its distance from the break decreased from 255 to 150 mm. When no erodible layer was paved on the horizontal basin, the deposit exhibited an elliptical shape, long along the x-axis and short along the y-axis; its middle part exhibited an upheaval, and the deposit was completely detached from the slope break.
Relevant deposit data were obtained using the digital elevation model (DEM), as shown in Fig. 9. When D e = 0 mm, the length, width, depth, and area of the deposit were 503.35 mm, 775.90 mm, 32.51 mm, and 294.78 × 10³ mm², respectively. When an erodible layer was present on the horizontal plate, the length, width, and area of the deposits were significantly reduced, but the depth increased. With increasing D e, the differences in the length and depth of the deposits were more significant than in the width (see Table 1).
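Metrics of this kind can be read directly off the scanner DEM. The sketch below shows one plausible way to do so on a synthetic height grid; the raster resolution, coverage threshold, and mound shape are all assumptions for illustration, not the actual scan-processing pipeline.

```python
import numpy as np

# Synthetic DEM: deposit height (mm) above the pre-test surface on a
# 1 mm x 1 mm raster; a Gaussian mound stands in for real scan data.
nx, ny = 800, 600
X, Y = np.meshgrid(np.arange(nx), np.arange(ny))
z = 32.0 * np.exp(-((X - 400) ** 2 / 2.0e4 + (Y - 300) ** 2 / 1.0e4))

mask = z > 1.0                       # cells considered covered by the deposit
area_mm2 = mask.sum() * 1.0          # cell area = 1 mm^2
W_d = np.ptp(X[mask]) + 1            # deposit width along x (mm)
L_d = np.ptp(Y[mask]) + 1            # deposit length along y, the motion direction (mm)
D_d = float(z.max())                 # maximum deposit depth (mm)
volume_mm3 = z[mask].sum() * 1.0     # depth integrated over covered cells

print(area_mm2, L_d, W_d, round(D_d, 2), round(volume_mm3))
```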
Characteristics of the internal structure of the deposit
To observe their internal structures, we cut profiles in the deposits using an L-shaped transparent piece of Plexiglas, as shown in Fig. 9a-c. In the figure, a, b, and c indicate both the positions of the section cuts and the cutting order in each experiment. The measured points in the erosion area, such as points D, E, F, and G in Fig. 9b and c, were obtained from these profiles. The erosion boundary of the sliding mass could be obtained by fitting these points, and the area and volume of the eroded region could then be quantified from the erosion boundary (Fig. 9).
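The boundary fitting and area quantification can be illustrated with a short numerical sketch. The depths and positions below are assumed stand-ins for points such as D-G in Fig. 9, and the quadratic fit is only one plausible choice of smooth boundary.

```python
import numpy as np

# Assumed erosion-boundary measurements from one cut section:
# position along the motion direction (mm) and eroded depth there (mm).
y = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
d = np.array([20.0, 16.0, 10.0, 4.0, 0.0])

coeff = np.polyfit(y, d, deg=2)            # smooth erosion boundary d(y)
yy = np.linspace(y.min(), y.max(), 201)
dd = np.clip(np.polyval(coeff, yy), 0.0, None)

area_mm2 = np.trapz(dd, yy)                # eroded cross-sectional area
print(round(float(area_mm2)))
```

Integrating such cross-sectional areas over several parallel sections (e.g., with the trapezoidal rule across x) gives an eroded volume of the kind reported in Table 1.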
Profiles of the deposit showed that the erosion distance along the y-axis was longer, and the erosion effect more significant, than along the x-axis. The erodible layer was completely penetrated in all cases except D e = 24 mm. Owing to the interaction between the sliding mass and the erodible layer, phenomena such as buckles, recumbent folds, thrust shear, horizontal shear, and convolute bedding formed in the interior of the deposit (Fig. 9c and d).
It is important to note that buckles and recumbent folds developed on the front edges of all deposits with erodible layers (Fig. 9c). As can be seen in the x and y profiles, the positions where the erodible layer formed folds corresponded to the erosion range, namely the core of the horizontal fold. At D e = 4 mm and 8 mm, the core of the recumbent folds lay horizontally farther out than the upheaval of the deposits. The positions of both the core of the recumbent folds and the upheaval moved closer to the slope break as D e increased; when D e was 8-20 mm, their projected positions on the horizontal plate were almost identical. When D e increased to 12 mm, thrust-shear structures formed, and the angle of the shear plane gradually became steeper as D e increased to 20 mm. It is worth noting that secondary shear fracture surfaces, with relatively gentle angles, formed at D e = 24 mm. When D e was 16 mm or 20 mm, the front sliding mass was convoluted into the recumbent folds during their formation, leading to a sandwich structure. When D e increased to 24 mm, the convoluted bedding structures could hardly be observed (Fig. 9c).
As D e increased, the erosion length, width, and area generally decreased, but the erosion volume first increased and then decreased, reaching a maximum when D e was 20 mm (Table 1).
Relationship between internal and external features of deposits
The deposits were divided into three zones (I a, I b, and II) according to the structural morphology and characteristics of their positions. The position of the upheaval was regarded as the boundary between zone II and zone I. Zone II was the region from the upheaval of the deposits to the slope break; zone I b was the region from the upheaval (ridge) to the front of the deposits; and zone I a was the region beyond the front of the deposits, where stratum disturbance was present (Fig. 10). From the side view, erosion was obvious in zone II, and the internal structures formed there were relatively simple. At D e = 4 mm and 12 mm, the erodible layer was eroded completely and pushed into zone I (Fig. 10a, b and g); at D e = 24 mm, the erodible layer was not penetrated completely, and a slope dipping opposite to the sliding direction formed (Fig. 10c, l). Shear ridges formed on the deposit surface (Fig. 10e, h). In general, thrusting and covering were the main actions in zone I. Complex structures, such as buckles, recumbent folds, thrust shear, horizontal shear, and convoluted bedding, formed in the inner region of zone I b and in its transition zone with zone II. Small bulges appeared on the deposit surface in zone I b (Fig. 10j). Zone I a was not covered by the sliding mass but developed folds formed in the erodible layer (Fig. 10d). Zone I a gradually vanished with increasing D e (Fig. 10f, i).
Influence of the erodible layer on the velocity and displacement of landslides
From the velocity and displacement of the sliding masses, it is clear that the presence of the erodible layer on the horizontal basin had a significant restraining effect on the mobility of the sliding mass. Under this restraining effect, not only was the movement distance of the sliding mass reduced, but the length and width of the deposits also decreased. The main reason was that the density and depth of the erodible layer increased through interactions with the sliding mass. For the same reason, the topography at the front edge of the deposits became elevated, and the shear strength of the erodible layer was fully mobilized under the force of the sliding mass, which increased the resistance during motion. After the collision between the sliding mass and erodible layer, the kinetic energy of the sliding mass was consumed primarily by shearing and collisions among particles. Moreover, the friction coefficient within the sand was 0.613, larger than the friction coefficient of the sand-plate interface (0.403). Therefore, the mobility of the laboratory landslides without an erodible layer was larger than that with an erodible layer (Crosta et al. 2017; Liu et al. 2020; Yuan et al. 2014; Zhou et al. 2019).
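The role of the two friction coefficients can be made concrete with a first-order sliding-block estimate: a block released from height h accelerates down the slope and then decelerates on the flat, with runout x = v²/(2 μ g). This is a deliberately crude model (no erosion, no internal deformation), intended only to show the direction of the effect; the numbers are illustrative, not fitted to the experiments.

```python
import math

g = 9.81                      # m/s^2
h = 1.0                       # release height (m), as in the setup
alpha = math.radians(60.0)    # slope angle

def runout(mu_slope: float, mu_flat: float) -> float:
    """Runout on the flat for given basal friction on the slope and the flat."""
    s = h / math.sin(alpha)                                   # slide distance on slope
    a = g * (math.sin(alpha) - mu_slope * math.cos(alpha))    # downslope acceleration
    v2 = 2.0 * a * s                                          # speed^2 at the slope break
    return v2 / (2.0 * mu_flat * g)

print(runout(0.403, 0.403))   # ~1.9 m: bare Plexiglas everywhere
print(runout(0.403, 0.613))   # ~1.3 m: sand-on-sand resistance on the flat
```

Even this crude balance reproduces the observed trend: once the mass must shear sand rather than slide on the bare board, the runout shortens appreciably.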
To reduce the influence of material differences on the experimental results, the sliding mass and erodible layer were both medium-fine sand with the same physical and mechanical attributes. In other model experiments using dry particles as the erodible layer, results have likewise shown that the erodible layer inhibits the mobility of the sliding mass (Crosta 2013; Crosta et al. 2017; Dufresne 2012). It should be noted that all of these studies placed the erodible layer on a nearly horizontal basin. When the erodible layer is instead set on a downward sliding path, such as an inclined plate, opposing results have been found (Mangeney et al. 2010): the mobility of the sliding mass increases due to entrainment during collisions with the erodible layer, irrespective of whether the underlying layer is dry or wet, especially when the slope angle of the erodible layer is close to the internal friction angle of the material (Crosta et al. 2015a; Mangeney et al. 2010).
In this study, when D e was less than 20 mm, the larger D e was, the more significant the hindering effect on the mobility of the sliding mass, especially in the planform of the deposits. However, when D e was 24 mm, the length and width of the deposits increased, from 287.59 mm and 630.00 mm to 291.34 mm and 650.60 mm, respectively. At the same time, the erosion volume decreased from 905.14 × 10² to 190.40 × 10² mm³ due to the decreasing kinetic energy of the sliding mass and its weakening erosion ability.

Fig. 10 Internal geological geometry. a-c, profiles with erodible-layer depths of 4, 12, and 24 mm, respectively; d, f, i, front folds of the deposits; e, h, k, ridges on the surface of zone II; g, l, cross sections of the deposits parallel to the horizontal plane; j, surface bulge when D e = 24 mm. The dividing line of zones I and II is the upheaval: from the upheaval to the slope break is zone II, and the rest is zone I. Zones I a and I b are bounded by the front of the sliding mass: zone I b runs from the upheaval to the front, and zone I a from the front to the maximum disturbance range boundary of the erodible layer.
The position of the erodible layer may play an important role in the mobility of the sliding mass. Both the mobility and the duration of the sliding mass increase with increasing thickness of the erodible layer when it lies on an inclined plane (Mangeney et al. 2010). However, the increase in mobility reaches a maximum at D e = 4d to 24d (d: particle diameter) when the sliding mass is compacted (Farin et al. 2014). As the sliding mass propagates, its velocity decreases, and consequently the erosion depth decreases; during slow propagation, there is almost no erosion (Farin et al. 2014). The larger the velocity of the sliding mass, the stronger its erosion ability and the deeper the erosion depth, which also holds for the present study. The difference between this study and those previous studies is that the mobility of the sliding mass is inhibited when the erodible layer lies on the horizontal plane (Fig. 7). It can be seen that the position of the erodible layer on the landslide's motion path significantly influences its mobility and erosion ability.
Internal and external structural characteristics of the deposit
In physical model experiments, it is common to analyze the interaction between the sliding mass and erodible layer through the internal structural characteristics observed in a profile of the deposit (Deng et al. 2015; Paola et al. 2009). The buckles, recumbent folds, thrust shear, horizontal shear, and convoluted bedding revealed in the deposit profiles of this study indicate that there was an intense interaction between the sliding mass and erodible layer during motion (Figs. 9 and 10).
With increasing D e, the effect of the sliding mass on the erodible layer changed from pushing to covering (Fig. 10). According to the profile features, the sliding mass in zone II mainly acted by pushing, scraping, and carrying the erodible layer into zone I b. The structures formed in zone II are relatively simple, with some ridges formed on the deposit surface (Fig. 10e, h); however, the ridges disappeared for thicker erodible layers (Fig. 10k). The extent of zone II along the y-axis became shorter with increasing D e. Zone I b was mainly characterized by compressional deformation, demonstrated by relatively strong compression of the strata and complex internal structures, such as buckles, recumbent folds, thrust shear, horizontal shear, and convoluted bedding. There were small bulges (Fig. 10j), and no area fully covered by the sliding mass (Fig. 10f), in zone I b. These phenomena arose because, during interaction with the sliding mass, the resistance between the erodible layer and the horizontal plane was greater than the resistance within the layer itself. The erodible layer in zone I a was mainly subjected to thrusting, with obvious ridges formed at the front of the deposits (Fig. 10d). With continued increases in D e, the extent of compression of the erodible layer in zone I a decreased, and its extension along the y-axis gradually shrank until the zone disappeared (Fig. 10c). When the erodible layer is thick, the pushing effect of the landslide is weakened, and the front of the sliding mass rides over the erodible layer without strong disturbance. The upheaval formed in the deposits separated zone II from zone I b; it is usually caused by compression and accompanied by a rapid decline in the velocity of the sliding mass. When the inner shear stress of the debris in motion is not sufficient to cause discrete shear and inverse thrust, longitudinal compression takes place, forming transverse ridges in the debris mass (D'Agostino et al. 2013). This phenomenon is also often found in the field, such as where the topographic gradient suddenly decreases, where the flow encounters soft and/or deformable substrates, or where it slides into water (Clavero et al. 2004; Johnson 1978; Shea et al. 2007).
Folds are common geological phenomena, forming not only under tectonic stress (Deng et al. 2022) but also under sudden loading caused by turbulence or mountain floods (Rana et al. 2016). The frictional resistance or shear stress applied to weak strata is the basis for the formation of folds (Owen 1996). The folds in this study were distributed in the erodible layer close to the upheaval and the front edge of the deposit (Fig. 10, I b). Furthermore, a series of compressional deformation features were also found in zone I a (Fig. 10). Through model experiments with colored sand, Dufresne et al. (2009) reproduced folds that preserved their initial color sequences and reported bulldozing effects in the substrata when they were rapidly covered by the propagating avalanche. As the sliding mass scraped the erodible layer, its lower part met resistance while its upper part continued to move forward and into the core of the fold. Under the influence of inertial force, the sliding mass convoluted into the fold moved forward along with the folded erodible layer under the push of the trailing sliding mass, producing the convolution phenomenon in zone I b. This formation process can explain the thrust-nappe structure in loess landslides.
Thrust shear tends to occur where stress is most concentrated. These shear stresses may combine with gravity to further deform folds in complex ways (Lowe 1975). In this study, shear in the deposits developed mainly in the cores and limbs of the recumbent folds.
The secondary shear surfaces in the deposits, mentioned in Sect. "Characteristics of the internal structure of the deposit", were caused by stress concentration and velocity differences in the sliding mass. The secondary sliding surfaces formed by thrust shear existed primarily in the interior of the deformed erodible layer and at the contact between the erodible layer and sliding mass. For thin erodible layers, there were few secondary sliding surfaces, and those had shallow burial depths; medium-thickness erodible layers, in contrast, developed a number of complex secondary sliding surfaces (Fig. 10a-c). At D e = 24 mm, the increase in resistance led to an upward shift of the secondary sliding surfaces, which reduced their number. Thrust shear is common in landslides and is observed, for example, in the Ancient Longwu Xishan Landslide No. 2 (Tian et al. 2022).
Although many structural phenomena were reproduced in this study, some others were not observed, such as flash heating and the consequent friction weakening (Habib 1975), or fluidization of the substrata due to the impact of the sliding mass, which results in rearrangement of the soil skeletal particles (Sassa 1998). In the study by Zhou et al. (2016), the sliding mass entrains material by impact and scour, a process in which the water contained in the substrata plays an important role. However, our experiments were conducted without the engagement of water, which makes changing pore-water pressure and the consequent fluidization of the substrata impossible (Iverson et al. 2011). In addition, diapiric intrusions, which are considered to result from compression or frictional shearing in the substrata (Phillips et al. 2013), were also not observed. Therefore, further experiments need to be performed to reproduce these phenomena.
In fact, during the interaction between the sliding mass and erodible layer, the sliding mass impacted the erodible layer with a certain velocity, and the erodible layer was shoveled, scraped, and pushed by the sliding mass. The erodible layer dissipated the kinetic energy of the sliding mass through collisions and friction among particles, thereby restraining the velocity and displacement of the sliding mass. The base of the moving mass accommodated transport by large amounts of simple shear (Roche et al. 2013). The erosion ability of a sliding mass of a given volume is finite. When D e was small, the interaction between the sliding mass and the erodible layer was gentle; the kinetic energy of the sliding mass was therefore dissipated through fewer collisions and less friction among particles, and the sliding mass could push most of the erodible layer on its path a longer distance along the bare board. As a consequence, the position of the fold core was far from the slope break. Because the erodible layer offered little hindrance, the sliding mass could easily climb over it and move horizontally on top of it; consequently, the shear surface was relatively gentle. In contrast, when D e was large, the interaction between the sliding mass and the erodible layer was significant, and a large fraction of the kinetic energy of the sliding mass was lost. The sliding mass had to move upward along the erodible layer under the thrust of the trailing mass and the resistance of the erodible layer, so a steeper shear surface formed in the deposit. The geological structures formed in this study are highly consistent with structural evidence found in the field. Field evidence suggests that a bulldozing action occurred at the front of the Nixu rock avalanche (Zeng et al. 2021): the high-speed avalanche debris in motion plowed the erodible alluvial substrates along a basal decollement like a bulldozer, and complex structural phenomena, such as entrainment and diapiric structures, formed in the deposits. This work reproduced inner structures seen in natural cases and reflected the kinematics recorded in the final deposits.
Generalization of the sliding mass movement process
The motion process of the sliding mass was divided into three stages according to differences in its velocity and motion characteristics. In the first stage, the velocity of the sliding mass increased sharply to a peak before it collided with the erodible layer (Fig. 11a).
In the second stage, there was a strong collision between the sliding mass and the erodible layer. The length of the sliding mass became significantly shorter, and its velocity was significantly reduced. While squeezing, scraping, and pushing the erodible layer forward, the sliding mass was also subjected to increasing resistance from the erodible layer (Fig. 11b). The disturbed erodible layer moved forward along with the sliding mass, which increased the erosion range and further compressed the erodible layer (Fig. 11c). The horizontal thrust of the sliding mass reached a maximum when all of the sliding mass was on the horizontal basin; secondary shear surfaces therefore formed at positions of stress concentration (Fig. 11d). The strong interaction between the sliding mass and erodible layer led to sliding surfaces forming not only at the interface between the sliding mass and the horizontal basin, but also at the contact surface between the sliding mass and the erodible layer and within the erodible layer itself in this stage. The movement mode of the landslide was therefore relatively complex, and rich structural features formed.
In the third stage, the interaction between the sliding mass and the erodible layer became gentle, and the disturbed erodible layer became dense under the pressure of the sliding mass. The erodible layer entrained by the sliding mass eventually stopped moving forward at a low velocity. During this process, there was a small relative displacement between the front sliding mass and the erodible layer, and a series of buckles formed at the front of the erodible layer under the squeezing of the sliding mass. In this stage, a secondary acceleration was observed when D e was 4 mm, 12 mm, 16 mm, or 20 mm, because the trailing sliding mass, under the action of inertial force, would climb over and surpass the front sliding mass obstructed by the disturbed erodible layer.
Comparisons with previous studies
In the sandbox experiments carried out by Dufresne (2012), there was an arc-shaped joint between the inclined and horizontal basins, and the plate angles in the sliding-mass zone and the accumulation zone were set to 60° and 0°. Experimental results showed that the effects of the sliding mass interacting with the erodible layer were mainly horizontal shearing. When the sliding mass pushed and collided with the erodible layer, the deep part of the erodible layer was disturbed; however, the erodible layer was not completely eroded (Duan et al. 2021). In a study by Crosta et al. (2017), the experimental device used was similar to that in this study. Structures such as convolutions, buckles, and thrust shearing were also found in the deposits (Fig. 12a-c), which further demonstrated that these structural phenomena tend to form when there is an erodible layer on the horizontal basin. However, with a single erodible layer of 21 mm it is hard to say how the internal structures vary with different D e; in addition, the influence of D e and the amount of eroded material had not been quantified. In this study, we found that with increasing D e the action of the sliding mass on the erodible layer changes gradually from pushing to covering, and that there is a certain relation between the internal structure and the surface structure.
The material of the erodible layer used in this study was dry medium-fine quartz sand. This sand is very loose without compaction and therefore easily disturbed. In actual landslides, although erodible layers differ among geo-environments, they all have certain structural properties after long-term consolidation, and it is difficult for a landslide to cause large-scale or deep scraping when it impacts such an erodible layer (Crosta et al. 2015b; Peng et al. 2018). Therefore, recumbent folds are rarely observed in actual landslides. Conversely, folds, thrusts, and pushover phenomena in the erodible layers of landslides are more common and significant in the field.
In addition, the volume of a landslide also profoundly affects its erosional ability. For instance, the Yigong landslide primarily comprises weathered granite, and its erodible layer is sandy alluvial deposits. The landslide, with a volume of about 91 × 10⁶ m³, moved downwards along the gully and squeezed the erodible layer, with a volume of about 24 × 10⁶ m³, to form pushover structures. During this process, the erodible layer was scraped and entrained by the sliding mass, continuously increasing the volume of the landslide. The large impact force generated by the landslide carried away the erodible layers in the gully, exposing the bedrock in the eroded area (Delaney and Evans 2015; Zhou et al. 2016). The underlying bedrock prevents deeper erosion, and the flow evidently does not lose momentum in these sections (Dietrich and Krautblatter 2019). This field observation is reflected in the experiment conducted herein when D e is 4 mm: the impact force of the sliding mass completely erodes the erodible layer at the slope break, resulting in direct contact between the sliding mass and the Plexiglas plate. The experiment thus reproduces the geological phenomena observed in the field.
In another case, the Nixu landslide is also mainly composed of granite, and its erodible layer consists mainly of horizontal alluvial deposits composed of coarse sand and gravel (Zeng et al. 2020). The landslide exhibited an intense erosion ability when moving downwards with a volume of about 47 × 10⁶ m³, and the erodible layer underwent strong deformation and distortion. Structures such as crushed substrate clasts, convoluted lamination, diapiric intrusion, decollement, and sand boils are manifestations of this deformation and distortion, as shown in Fig. 12d-h (Zeng et al. 2021). Under this condition, the interaction between the landslide and the erodible layer, as well as the structures in the deposit, are similar to those at D e = 24 mm in this study. These structures reflect the large resistance of the erodible layer to the sliding mass. However, owing to differences in the material properties of the landslide and the erodible layer in the field, the structural phenomena are more widespread than in laboratory experiments.
Finally, the Xingyuan landslide consists of silty clay, and its erodible layer is alluvial deposits comprising silty clay, sandy silt, and gravel (Peng et al. 2017). The landslide, with a volume of 0.17 × 10⁶ m³, impacted the erodible layer of silty clay, resulting in its liquefaction. The sliding surface moved down to 5-6 m below the ground surface when the landslide fell onto the erodible layer, and the landslide then covered the undisturbed erodible layer (Fig. 12i, j). This is similar to the experimental condition of D e = 24 mm in this study.
Conclusions
In this study, the motion characteristics of the sliding mass, the morphological characteristics of the deposit, and the structural changes inside the strata were studied by varying the depth of the erodible layer. The following conclusions were drawn: (1) Based on the changes in the velocity and displacement fields, the motion process of the sliding mass can be summarized into three stages: falling, shovel push-extrusion, and push-nappe accumulation. In the first stage, the velocity of the sliding mass increases sharply to a peak before it collides with the erodible layer. In the second stage, there is a strong collision between the sliding mass and the erodible layer, with a rapid decrease in velocity. In the third stage, the interaction between the sliding mass and the erodible layer becomes weak. In the latter two stages, the erodible layer on the horizontal basin inhibits the mobility of the sliding mass; physically, the kinetic energy of the sliding mass is dissipated by the shear strength mobilized by the deformation of the erodible layer. With an increase in the depth of the erodible layer, the inhibitory effect increases correspondingly, reducing the movement distance of the sliding mass.
(2) The deposits were divided into three zones (I a, I b, and II) in terms of the structural morphology and characteristics of their positions. The action forms were mainly pushing in zone II and covering in zone I. With increasing D e, the action of the sliding mass on the erodible layer changes gradually from pushing to covering. (3) Phenomena such as strata inversion, pushover, and entrainment occurred within the deposits, while folds, ridges, and bulges occurred on the deposit surfaces. These structural characteristics reflect the stress states of the laboratory landslides in motion, from compression to shearing. Furthermore, these phenomena have been confirmed in natural landslides, which shows that physical model experiments can be used to study landslides' motion processes and interactions with erodible layers.
It is difficult to observe the whole erosion process of a landslide in the field, and little information can be inferred from the profile of a deposit. From the experiments, we find that there are relations between the landslides' internal and surface structures. This information, combined with the experimental results and the geological phenomena observed through field surveys, will provide a reference for analyzing the movement and deposit morphology of landslides.
|
2023-08-19T13:54:36.925Z
|
2023-08-19T00:00:00.000
|
{
"year": 2023,
"sha1": "55e48547e7bc1bec7ef0137401280459724563f9",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-2618491/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "c36e23b9bd011b0f72b7a96d8c5fab3d00254ad8",
"s2fieldsofstudy": [
"Geology",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
209508764
|
pes2o/s2orc
|
v3-fos-license
|
Diphtheria or Streptococcal Pharyngitis: A Case Report Highlighting the Diagnostic Dilemma in the Post-vaccination Era
Diphtheria is an acute, highly infectious, toxigenic, and vaccine-preventable disease that commonly affects children under 12 years of age. The incidence of diphtheria has dropped significantly due to vaccination with the diphtheria, pertussis, and tetanus (DPT) vaccine. Recently, there has been an increasing trend in reports of diphtheria throughout the world, specifically from developing countries. According to a World Health Organization (WHO) report, more than 80% of the global diphtheria cases in the post-vaccination era were from India and Indonesia. This could be signaling its re-emergence, which may be attributed to several factors, including incomplete immunization. Pharyngitis caused by group A Streptococcus is most frequently seen in children and can be clinically similar in presentation to diphtheria. We share our experience of managing the case of an eight-year-old child who was clinically suspected to be suffering from diphtheria.
Introduction
Diphtheria is a bacterial infection caused by Corynebacterium diphtheriae (C. diphtheriae). It is transmitted among humans through the respiratory route (aerosols). Diphtheria is an acute, severely debilitating illness, usually affecting children less than 12 years of age. In diphtheria, a thick pseudomembrane or leathery sheet (diphtheros) forms on the posterior pharyngeal wall, made up of accumulated bacterial cells, epithelial cells, and other inflammatory cells. This causes mechanical obstruction and results in difficulty in swallowing and, in some cases, dyspnoea (difficulty in breathing). Since C. diphtheriae is a toxin-producing bacterium, the exotoxin released by the bacteria enters the blood/general circulation, resulting in several other complications in infected patients. Clinically, diphtheria may present in faucial, laryngeal, cutaneous, and other forms. C. diphtheriae occurs in three different strains/types depending on the intensity of the infection they cause: the gravis type produces a severe infection, the intermedius type results in a moderate infection, and the mitis strain causes a mild type of diphtheria [1].
Clinical and laboratory diagnosis assumes great significance in efficiently managing suspected cases of diphtheria and minimizing the resultant morbidity and mortality. Patients with diphtheria usually present with sore throat and fever, which is also the presentation of patients suffering from infection with the more common bacterium Streptococcus pyogenes (beta-hemolytic streptococci/group A streptococci) and other microbial infections [2]. Given that the diphtheria, pertussis, and tetanus (DPT) vaccine has been in regular use against diphtheria for many years, clinical cases of diphtheria have become almost negligible. Most pediatricians are now in a dilemma regarding the prevalence of diphtheria and may misdiagnose potential cases of diphtheria as streptococcal sore throat. Such misdiagnosis, and the resulting delay in the appropriate management of diphtheria, may result in severe complications among infected patients and could result in mortality.
Recently, there have been some reports of the re-emergence of diphtheria, which should be considered a cause of serious concern [3-5]. We report our experience of managing a clinically diagnosed case of diphtheria and emphasize its significance in the era of vaccination.
Case Presentation
An eight-year-old boy was brought to the casualty department attached to the Prathima Institute of Medical Sciences with chief complaints of fever, malaise, vomiting, and difficulty in swallowing. The boy was admitted to the pediatric intensive care unit (PICU) for further evaluation. The boy's parents reported an acute onset of low-grade fever three days earlier. The boy had previously been well and attending school regularly. The fever episodes were not associated with any type of skin rash. Three to four episodes of vomiting per day were noted along with the fever. The vomiting was non-projectile, non-bilious, blood-tinged, and triggered by both solid and liquid food intake. The boy also complained of pain in the throat and had difficulty swallowing. The patient had a loss of appetite, a history of high-colored urine, and generalized weakness.
No previous history of similar complaints in the patient, as well as his two other siblings, was reported. There was no documented evidence/medical record that the patient was immunized with DPT although the parents claimed that the patient was immunized according to the national immunization schedule.
On clinical examination, the patient's vitals were normal. Noisy breathing, probably due to the infection in the throat, was noted, without dyspnoea. Examination of the pharynx showed grade IV tonsillitis with a grayish-white membranous patch covering the tonsil and extending towards the soft palate. The posterior pharyngeal wall was congested, and both tonsils were enlarged. The uvula was central, oedematous, congested, and bled on touch.
General physical examination of the patient revealed sunken eyes, loss of the buccal pad of fat, a prominent maxilla, and a scaphoid abdomen. The patient was underweight (20 kg) against the recommended weight for age (32 kg) and was 133 cm tall against the recommended height (140 cm) for the corresponding age.
The patient's parents reported a low calorie intake of 1200 kcal/day against the recommended 1920 kcal/day. Also, the patient was taking only 24 g of protein against the daily recommended intake of 38.4 g/day. Considering this, the patient was diagnosed as suffering from protein-energy malnutrition.
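For reference, the percentages underlying the protein-energy malnutrition assessment follow directly from the figures quoted above. The short sketch below simply makes the arithmetic explicit; the "expected" values are those stated in this case description, not an external growth standard.

```python
# Anthropometry and intake quoted in the case description
weight, expected_weight = 20.0, 32.0        # kg
height, expected_height = 133.0, 140.0      # cm
kcal, expected_kcal = 1200.0, 1920.0        # per day
protein, expected_protein = 24.0, 38.4      # g per day

def pct(actual: float, expected: float) -> float:
    return 100.0 * actual / expected

print(f"weight-for-age: {pct(weight, expected_weight):.1f}% of expected")       # 62.5%
print(f"height-for-age: {pct(height, expected_height):.1f}% of expected")       # 95.0%
print(f"energy intake:  {pct(kcal, expected_kcal):.1f}% of recommended")        # 62.5%
print(f"protein intake: {pct(protein, expected_protein):.1f}% of recommended")  # 62.5%
```

Weight-for-age, energy intake, and protein intake all sit at about 62.5% of the respective reference values, consistent with the diagnosis of protein-energy malnutrition.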
A preliminary diagnosis of grade IV tonsillitis was made, and a throat swab was sent to the clinical microbiology laboratory for direct Gram's stain, culture, and sensitivity testing. On direct Gram's stain of the throat swab, plenty of Gram-positive bacilli were observed, with occasional Gram-positive cocci, as shown in Figure 1. The colonies on blood agar were non-hemolytic (C. diphtheriae forms hemolytic colonies), glossy (the gravis type is matt-like), raised (intermedius colonies are flat), and glistening and butyrous (butter-like) in texture. Both the Gram-positive cocci and the bacilli showed varied sensitivity patterns using the Kirby-Bauer disk diffusion method. The antimicrobial susceptibility pattern of the Gram-positive bacilli showed sensitivity to vancomycin, linezolid, tetracycline, and ofloxacin. Resistance was observed against penicillin, oxacillin, clindamycin, ciprofloxacin, cefotaxime, cefepime, cefoperazone, ceftriaxone, ceftazidime, amikacin, gentamicin, and piperacillin-tazobactam, as shown in Figure 5.
FIGURE 6: Antimicrobial sensitivity pattern of Streptococcus species by the Kirby-Bauer disk diffusion method
Considering the transmissibility of the infection, the patient was put in isolation. Treatment was initiated with 80,000 units of diphtheria antitoxin through the intravenous route. Since the isolated organisms were resistant to both penicillin and erythromycin, the drugs of choice in cases of suspected diphtheria, and because the patient also harbored beta-hemolytic Streptococci, the patient was started on a course of piperacillin-tazobactam and amikacin. The patient had a gradual and uneventful recovery.
The bacterium morphologically resembling C. diphtheriae could not be confirmed with the standard antitoxin by Elek's gel precipitation test due to the unavailability of a suitable identification system. Also, close household contacts were neither screened nor administered prophylactic antibiotics, as recommended by the World Health Organization (WHO), because of the non-cooperation of the patient's parents.
Discussion
Diphtheria is a highly infectious and reportable bacterial disease prevalent throughout the world. The introduction and success of DPT vaccination has been instrumental in the control of the disease, which mostly affects children below 12 years of age, causing significant morbidity and mortality. Streptococcal sore throat, caused by group A Streptococci, is another upper respiratory tract infection prevalent among children. The clinical presentations of diphtheria and streptococcal pharyngitis appear similar, making clinical diagnosis difficult. Assuming that cases of diphtheria are almost negligible due to the DPT vaccine, and with limited knowledge of the prevalence of diphtheria in the post-vaccination era, most physicians/pediatricians may misdiagnose diphtheria cases as streptococcal infections. Such diagnoses may result in the spread of diphtheria among contacts and delay the initiation of treatment.
Global scenario of diphtheria
The re-emergence of diphtheria has been a point of discussion for almost a decade. The occurrence of diphtheria in the post-vaccination era has been attributed to discrepancies (incomplete vaccination) in immunization. Most infections in the post-vaccination era have been noted to emerge from developing nations, including outbreaks reported from Indonesia, Bangladesh, and Yemen [6-9].
Isolated outbreaks of diphtheria have also been reported from developed nations, including the United States of America (USA). Even these outbreaks were attributed to low socioeconomic conditions similar to those observed in developing nations [10][11][12].
Indian scenario of diphtheria
Recent reports of outbreaks and isolated case reports from developing countries like India reassert the possibility of the re-emergence of diphtheria in the post-vaccination era [13][14][15]. Also, reports of diphtheria from the combined state of Andhra Pradesh and the separated state of Telangana (India) support the fact that the infection is prevalent and that pediatricians need to be cautious while diagnosing suspected patients [16][17].
The epidemiological data on diphtheria in India appear to be inadequate. A recent article reported an analysis of diphtheria in India over the past two decades (1996-2016) [18]. This report suggested that diphtheria cases are frequent among school-going children and adolescents in India. The study also noted that there was 80% coverage of the initial three doses of vaccine and that there are no reliable data on the coverage of the booster dose. This report also observed that India accounted for more than half of the diphtheria cases reported worldwide between 2001 and 2015. It further confirms that most states in India have reported outbreaks/cases of diphtheria and that, among all of them, the combined Andhra Pradesh and Telangana reported an increased frequency of diphtheria cases (>1000 cases/year between 2005 and 2014) [18].
There have been several recent reports of outbreaks and newspaper articles highlighting the seriousness of the present situation, which emphasizes the role of the public, healthcare workers, and governments in controlling and preventing the spread of diphtheria [19][20]. A local newspaper published (in the Telugu language) a picture of parents and relatives carrying and transporting a pediatric patient with suspected diphtheria, whose condition had worsened, to a better medical facility, as shown in Figure 7.

Diphtheria has been controlled to a great extent with the introduction of DPT throughout the world. In spite of vaccination, several studies in the past have reported incidences of diphtheria globally. The situation in developing and financially constrained third-world countries appears to be worse due to illiteracy, malnutrition, overcrowding, and inadequate immunization. Isolated clinical cases and frequent reports of outbreaks of diphtheria-like infections should be adequately addressed in order to eliminate the infection. Governments should, therefore, actively perform surveillance of immunization as well as document the burden of morbidity and mortality associated with diphtheria.
Conclusions
The eight-year-old boy who presented with fever, sore throat, and thickening of the posterior pharyngeal wall was provisionally diagnosed as a possible case of diphtheria. Laboratory confirmation of diphtheria was not possible because of inadequate facilities, and the diagnosis was based on careful clinical and laboratory observations. The patient was isolated from others to avoid contact infections and was successfully treated with antibiotics and antidiphtheritic serum.
Additional Information
Disclosures
Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
nestor Guideline for Preservation Planning – a Process Model
The nestor guideline for preservation planning is the latest in a series of nestor publications. nestor is the German competence network for digital preservation, and it offers all interested parties from the private and public domains the possibility to participate in working groups. The guideline for preservation planning is the result of such a working group, which discussed the conceptual and practical issues of implementing the OAIS Functional Entity "Preservation Planning". The guideline describes a process model and offers some guidance on potential implementations. It integrates and builds on recognized community concepts like Significant Properties, the OAIS Designated Community, the National Archives of Australia's Performance Model, the PREMIS concept of Intellectual Entities and Representations, and the PLANETS approach to preservation planning. Furthermore, it introduces the concepts "intended use" (Nutzungsziele), "information type" (Informationstyp) and "preservation group" (Erhaltungsgruppe). The purpose of these new categories is that information objects shall be grouped by information type (e.g., audio, video, text, ...) and intended use (e.g., reading for pleasure, search for specific information, ...) into preservation groups for automatic processing. Significant properties can then be derived for whole preservation groups. The file format alone is considered not completely sufficient for such categorisation. Some exemplary implementation solutions of the new concepts are presented in an annex. The guideline takes into account that resources for preservation planning and preservation actions are limited and has therefore adopted four premises: adequacy, financial viability, automation, and authenticity of archived objects. Its pragmatic approach becomes apparent in the definition and explanation of these dimensions. The guideline is written from the point of view of representatives of memory institutions, i.e., libraries, archives, and museums, and is primarily targeted at this context, although it may be useful for other information-preserving institutions too. This contribution introduces the nestor guideline for preservation planning (for now only available in German; an English translation is envisaged for the first half of 2014) to an international audience for the first time. It also matches the process model and the new concepts of intended use, information type and preservation group to the collection and preservation reality of the German National Library.
Introduction: The nestor network and its working groups
nestor, the German network of expertise in digital preservation, was set up in 2003 in recognition of the fact that digital preservation is a task too big to be solved by any single institution. The mission of the network is to bring together experts in digital preservation, to foster knowledge exchange and networking, and to provide information, expertise, training and a forum for standardisation opportunities to the interested communities. Since its beginning, nestor has offered working groups on different preservation-related topics to all interested parties from the private and public domains. One of the latest Working Groups (WG), the WG on Digital Preservation, set up in 2009, dealt with the question of how to plan preservation measures for intangible digital assets. In contrast to books or paper-based archive records, digital objects can only be preserved by means of proactive measures on the part of the archive or library. Due to a lack of longstanding experience in the relatively new field of digital preservation, however, it is hard to tell which measures are appropriate in the first place. The working group brought together experts from the archives, libraries, and museums sector. They discussed, for example, at what point in the preservation lifecycle certain assumptions can or must be made and certain actions can be initiated. They investigated how user interests, significant characteristics and practical requirements can be reconciled. Finally, they published the nestor guideline for preservation planning as a result of their work (Nestor Arbeitsgruppe, 2012).
Relevant work
The working group started with extensive desk research and discussion of recognised community concepts. The term "Significant Properties" was first used by the CEDARS project in 1999 (Cedars Project, 2002), and the concept was then widely discussed within the preservation community. It refers to those characteristics of a digital object that must be maintained over changes in technology and potential file format migrations. The Reference Model for an Open Archival Information System (OAIS), first published in 2002 and revised in 2009 (CCSDS, 2009), introduced several key concepts. The OAIS Functional Entity "Preservation Planning" and the concept of a "Designated Community" were considered most relevant for the working group's questions. The Functional Entity "Preservation Planning" encompasses tasks such as the development of preservation strategies and standards, the development of packaging designs and migration plans, and the monitoring of technology and the Designated Community. The Designated Community is defined in OAIS as an identified group of potential consumers of digital information who should be able to understand a particular set of (preserved) information.
The National Archives of Australia's Performance Model (Heslop, Davis, & Wilson, 2002) is central because it systematically breaks down the concept of a digital record into fundamental components: The source (the data file), the process by which it is mediated (hardware and software) and its performance (when it is rendered on a screen). The goal of any preservation action is always the preservation of the performance, even if the source has to be changed (i.e., migrated) in order to reach the goal. The determination and preservation of the "essence" (another term for Significant Properties) of any given digital object becomes a key activity in this context.
Finally, the PREMIS data model introduced the concepts of Intellectual Entities, Objects and Representations (PREMIS, 2002). Here, the Intellectual Entity is the intellectual work that can be described as a whole with properties such as author, title, and publication date. It can be manifested in several representations, e.g., as a text and an image file. Each representation can consist of several objects, e.g., one digitised book can consist of hundreds of individual images.
The PLANETS approach to preservation planning (Strodl, Becker, Neumayer, & Rauber, 2007) builds to some extent on the work described above. However, it is not trivial to integrate it operationally into regular archival business routines.
These concepts have, more or less, existed side by side so far, so that each digital archive had to decide which aspects to integrate into its own systems and working routines, and how. The nestor guideline on preservation planning now draws all of these concepts together and integrates them into a single process model. In doing so, it intends to describe a pragmatic approach that is easy to implement in small-scale as well as large-scale institutions.
Key assumptions and concepts
With unlimited resources, the digital preservation challenges would be more easily manageable. The available resources, however, restrict the scope of action and often require a prioritisation of preservation goals. When dozens of original production formats cannot be supported, the archive must make a decision about suitable preservation formats. When that means that not all archival holdings can be preserved with their full original functionality, the archive must take decisions about the most preservation-worthy features and functionalities. Each decision for something is also a decision against something else. In order to acknowledge these framework conditions, the nestor guideline has adopted four premises:

1. Financial viability: Digital preservation and the related preservation planning must be economically affordable.
2. Adequacy: Preservation goals must be adequate for the particular archival institution and may, for the same type of digital information, differ between institutions according to their preservation mandate or their designated community.
3. Authenticity: The goal of any preservation action must always be to maintain the authenticity of all archived objects. If (future) users cannot trust the archival holdings, all preservation efforts are in vain.
4. Automation: The sheer amount of digital objects requires that archival objects are processed group-wise and in as automated a manner as possible.
To support the decision making, which is inseparably linked with proactive preservation planning in times of restricted budgets, the guideline introduces three new concepts:

• "Intended use" describes for what purpose the designated community will use the archived information, or with which questions it will approach it, e.g., reading for pleasure, search for specific information, ...
• "Information type" describes the type of information, e.g., audio, video, text, ...
• "Preservation groups" are created when information objects are grouped by information type and intended use.
The idea behind the new concepts is that information objects shall be grouped by information type and intended use to preservation groups for subsequent automatic processing. Significant properties can then be derived for whole preservation groups. The file format alone is considered insufficient for such categorisation because it hardly relates to the intellectual content of the information object. For example, a collection of avant-garde digital photographs with multiple different imaging formats may have more commonalities in terms of intended use and preservation goals than a stock of various digitized text and image records that happen to be saved in the same digitisation format.
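As an illustration, the grouping logic could be sketched as follows in Python. All names here (InformationObject, build_preservation_groups, the example values) are hypothetical and not prescribed by the guideline:

from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationObject:
    urn: str
    information_type: str    # e.g. "audio", "video", "text"
    intended_use: str        # e.g. "perception of the work"
    file_formats: tuple      # one object may consist of files in several formats

def build_preservation_groups(objects):
    """Group information objects by (information type, intended use)."""
    groups = defaultdict(list)
    for obj in objects:
        groups[(obj.information_type, obj.intended_use)].append(obj)
    return dict(groups)

holdings = [
    InformationObject("urn:example:1", "image", "perception of the work", ("JPEG",)),
    InformationObject("urn:example:2", "image", "perception of the work", ("TIFF",)),
    InformationObject("urn:example:3", "text", "information retrieval", ("PDF",)),
]

groups = build_preservation_groups(holdings)
# Significant properties would now be derived once per group, e.g. for
# groups[("image", "perception of the work")], which spans JPEG and TIFF objects.

Note that the two image objects land in the same preservation group although they are stored in different file formats, mirroring the point that the file format alone is an insufficient grouping criterion.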
Process model -Initial information ingest
In accordance with the Performance Model, the goal of any preservation action is the preservation of the performance of any given digital record. Ideal-typically, a digital archivist perceives the performance as it is rendered from the data source when a record is submitted from the producer to the archive. At that moment, a certain combination of hardware and software is available in order to recreate the original performance. The digital archivist cannot "see" the data source or the software operations needed for the rendering process; he can only perceive the resulting performance. He can, for example, look at an image or text that is rendered on a screen or listen to sound from speakers or earphones. Thereby, the archivist perceives all features of the rendered performance, and he can draw conclusions as to the underlying information objects. Some features might be more obvious than others, e.g., the order of words in a text or the colour of a graphic; some might be less tangible, e.g., the font, the underlying colour space of an image or the tonal space of a music performance.
However, as the original hardware and software that creates the performance is likely to change over the years, and as in addition the data source might be converted to new file formats, it must be assumed that not all original features of the archived record can be preserved forever. It is the task of the digital archivist to determine the essential characteristics, the so-called significant properties, which will become the benchmark for future preservation actions. As the significant properties are a means to enable meaningful future use of the archived holdings, the archivist ought to start from the requirements of the designated community and its intended usage scenarios (who will use the archived information in the future, with which preconditions, for what purpose?).
In order to deal with large amounts of digital information objects, the archivist can group information objects with similar features and the same designated communities and intended uses together into preservation groups to determine their significant properties. The affiliation of information objects with preservation groups is recorded in the metadata of the information object.
Process model -Creation of preservation groups
Ideal-typically, the creation of preservation groups starts with the definition of the information types that the archive has to deal with (e.g., audio, video, etc.), followed by the definition of designated communities and intended uses per information type. Thereby, it is up to the archive how detailed it wants to describe and characterise its designated communities. It could, for example, leave it at "historian", but also go into more detail, e.g., economic historian, medievalist, etc. The characterisation of the designated community could, among others, take into account aspects like the level of expertise about content and technology, standard technical equipment, legal restrictions, or the size of the designated community. The same holds true for the description and characterisation of the intended uses. The nestor guideline proposes four main cases as a starting point:

1. Perception of the work
2. Analysis of the work/information retrieval
3. Further processing of the content
4. Execution of the item/running its applications (just for software)

Thus, a variety of qualified subsets of the rather broad information type groups are created. These are the preservation groups. Based on exemplary performances rendered from information objects of a specific preservation group, the archivist gets an idea of the characteristics of the underlying information objects within the group. It is clear that there is a dependency and reciprocal effect with the definition of intended uses: the archivist must, possibly by previous rendering, have ascertained that the intended use corresponds with the possible use of an information object. A still image, for example, cannot be executed in the same way as a computer game.
Depending on the intended use of the objects by the designated community, the archivist derives their significant properties from all features that characterise the objects within the group. Finally, the degree of necessary fulfilment of the significant properties is determined per group. The significant properties and their degrees of fulfilment are not static but must be adapted as the designated communities and the intended uses change over time.
The preservation groups as such do not need to remain static over time. As the requirements concerning information objects change, preservation groups may have to be restructured, split or merged. The preservation groups have the benefit that they lay, early in the preservation lifecycle, the foundations for smart, differentiated, automated group-wise processing of the archival holdings. The obvious grouping by file format alone seems rather undifferentiated in comparison and carries the risk that significant properties can only represent the lowest common denominator.
Preservation Planning
Again, the idea of the preservation of the performance of a digital record guides all considerations concerning preservation planning. The underlying data stream and/or the hardware and software that renders it will sooner or later be subject to change. For preservation planning purposes, the archive ought to take several actions.
Monitor Designated Community and Technology
Most importantly, the digital archive must make sure it recognizes when its archival holdings are threatened by obsolescence. For that purpose, it monitors the general technological development, and of equal importance, the designated community as it is the most important reference for the archive. When the technical prerequisites of the designated community change and it becomes apparent that this will affect their use of the archive's information objects, the archive must react to it and bridge the gap for the designated community. It can do so by accumulating and providing sufficient representation information to the designated communities or by initiating preservation actions.
Preservation Strategies: Migration and Emulation
Of all preservation strategies, migration and emulation are best understood and most widely accepted. The concepts in the nestor guideline can be integrated with both strategies. Moreover, they help to conceptually compare the capacity of the emulation and migration strategies for any given preservation group.
Preservation actions are planned and executed preservation group-wise. When the archive decides to migrate information objects to new file formats, it must identify a target file format that supports the intended uses comparable to the original format. The recorded significant properties act as a benchmark to evaluate the success of the migration action and, as a migration always changes the underlying information object, to retain the authenticity of the migrated objects. The values of the significant properties of the new information object and its performance are recorded again and compared with the values of the significant properties of the original object. The results of the comparison are documented and stored with the preservation group's objects.
In order to perform the emulation strategy, an emulator must be acquired or developed that supports the intended uses of a preservation group. Similar to the procedure described for migration, the recorded significant properties act as a benchmark. In order to evaluate the success of the emulation action, the values of the significant properties of the emulated information object and its performance are recorded and compared with the values of the original object. The results of the comparison are documented and stored for the long term.
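A minimal sketch of this validate-and-document step, assuming hypothetical property names and a simple tolerance-based comparison (neither of which is prescribed by the guideline):

def validate_preservation_action(original_props, derived_props, tolerances):
    """Compare significant-property values recorded before and after a
    migration (or between an original and an emulated performance) and
    return a report that can be documented with the preservation group."""
    report = {}
    for name, tolerance in tolerances.items():
        before = original_props.get(name)
        after = derived_props.get(name)
        if isinstance(before, (int, float)) and isinstance(after, (int, float)):
            fulfilled = abs(after - before) <= tolerance
        else:
            fulfilled = before == after   # exact match for non-numeric values
        report[name] = {"before": before, "after": after, "fulfilled": fulfilled}
    return report

# Example: a TIFF -> PDF/A migration of one digitised page.
report = validate_preservation_action(
    {"page_count": 1, "colour_depth_bits": 24, "width_px": 2480},
    {"page_count": 1, "colour_depth_bits": 24, "width_px": 2480},
    {"page_count": 0, "colour_depth_bits": 0, "width_px": 0},   # no deviation allowed
)
assert all(entry["fulfilled"] for entry in report.values())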
Mapping the nestor guideline to the long term preservation routines in the German National Library
The starting point for the digital preservation activities of the German National Library (DNB) was the kopal project (2004-2007), in which a long term archival system based on IBM DIAS was developed. For data management, it makes use of the specifically developed Universal Object Format (Steinke, 2006), which allows for archiving of digital objects along with preservation metadata. The preservation system was enhanced in the DP4lib project (2009-2012), and the long term preservation workflows, which were previously rather isolated, were integrated with the established online publication collection routines. In the first quarter of 2013, the nestor guideline for preservation planning was used as a benchmark to evaluate the current status and to reinforce the basic preservation planning approach.
Information ingest at the DNB
Digital publications are processed entirely automatically at the DNB. Some descriptive metadata for each information object is supplied by the publishing companies and other information providers. Technical metadata is generated automatically on file level (one information object can consist of multiple files).
Each information object is automatically assigned to a so-called "object group", which is roughly, although not exactly, comparable to the nestor guideline's information type. Examples of object groups include audio book, e-book, e-paper, online dissertation, print-on-demand, journal, journal article, digitisation, website. It is conceivable that these object groups could be refined according to intended use and user group, e.g., "all e-books with multimedia content", or "all scientific audio books". Thus, preservation groups could be created as subsets of the DNB object groups.
Significant properties are currently not recorded during the ingest process. They could, however, be recorded on the object level as well as on the file level, as the metadata formats would allow for it.
"Preservation groups" at the DNB
The data management structures in the long term archive do not allow group-wise organisation of archival holdings according to intellectual criteria like the affiliation to a preservation group, because the archive does not hold descriptive metadata. The archival database takes a mere technical view on archival objects and allows selection according to technical characteristics that are recorded as part of the technical metadata. This could be, for example, "all PDF versions older than PDF 1.4", or "all TIFF files within the object group digitisation". In the event of a migration project, files are selected according to these technical criteria.
The selection and group-wise treatment of preservation groups involves a small detour: Via the library catalogue which holds all descriptive metadata, all objects of a specific object group (or, in perspective, preservation groups as potential subsets of an object group) can be selected. In this case, a list of the URNs of the related information objects may be created and on the basis of the URNs, the objects may finally be retrieved from the long term archive for group-wise treatment.
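The detour could look roughly like the following sketch; the catalogue and archive views, the URNs, and the filter are all invented for illustration:

def select_for_treatment(catalogue, technical_metadata, object_group, matches):
    """Select all objects of an object group via the catalogue (descriptive
    metadata), then keep only the URNs whose technical metadata match the
    given criterion; the resulting URN list drives retrieval from the
    long term archive for group-wise treatment."""
    return [urn for urn in catalogue.get(object_group, [])
            if matches(technical_metadata[urn])]

# Hypothetical data for the criterion quoted above:
# "all PDF versions older than PDF 1.4" within the object group "digitisation".
catalogue = {"digitisation": ["urn:nbn:de:0001", "urn:nbn:de:0002"]}
technical_metadata = {
    "urn:nbn:de:0001": {"format": "PDF", "version": 1.3},
    "urn:nbn:de:0002": {"format": "TIFF"},
}

def old_pdf(record):
    return record.get("format") == "PDF" and record.get("version", 99.0) < 1.4

print(select_for_treatment(catalogue, technical_metadata, "digitisation", old_pdf))
# -> ['urn:nbn:de:0001']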
Preservation Planning at the DNB
The DNB takes a threefold approach to the monitoring tasks: It has made provisions to monitor the technical fitness of the long term archive itself, to do regular risk assessments of the stored object types (which includes technology monitoring for the said object types), and to plan for preservation actions. All three strands can and will in the future be conceptually supported by the nestor guideline, especially in reference to monitoring preservation groups, conceptually or technically.
Migration and emulation exist, so far, only as conceptual strategies, or rather, migration has been executed as a proof of concept. Because no significant properties are recorded at the time of ingest, the designed migration process differs from the one outlined in the nestor guideline. Again, the DNB takes an event-based approach: The significant properties of the selected information objects are not determined until the decision for migration is taken. Depending on the significant properties, a target file format is selected. A migration tool is selected and tested paying attention to the preservation of the significant properties. After successful tests, the migration is executed automatically and only the significant properties of samples are compared.
Conclusion of the mapping exercise
Although the DNB could not completely map to the nestor guideline, the concepts and outlined process models were found very useful as a benchmark. As a result of the mapping a gap was revealed with regard to significant properties because they are, conceptually, only determined in the case of a migration event. This has two disadvantages: (1) they cannot support preservation planning, although their preservation should be the goal of any preservation action; (2) it may be too late to save some of the significant characteristics when they are determined retrospectively. The creation of preservation groups, or the determination of significant properties, at least on object group level, is therefore perceived as an ideal approach to determining significant properties relatively pragmatically without deeply altering existing data management structures.
Outlook
The nestor guideline was publicly presented and discussed several times at the national level, most recently at a workshop at the German Librarians' Day in March 2013. The audience valued the concepts and considerations of the guideline and unanimously considered it a good starting point for implementing preservation planning at the institutional level. Several participants expressed the wish, however, for more practical guidance and experience sharing. In addition to theoretical examples of preservation groups, they would find a knowledge base of tried and tested preservation groups with file format recommendations valuable. This goes slightly beyond the scope of the published guideline but is certainly an interesting perspective to follow up in nestor if or once a critical mass of interested parties has started to implement the concepts of the guideline.
Universal RG Flows Across Dimensions and Holography
We study RG flows between superconformal field theories living in different spacetime dimensions which exhibit universal properties, independent of the details of the UV and IR theories. In particular, when the UV and IR theories are both even-dimensional we establish exact universal relations between their conformal anomaly coefficients. We also provide strong evidence for similar relations between appropriately defined free energies for RG flows between odd-dimensional theories in the large $N$ limit. Holographically, these RG flows across dimensions are described by asymptotically AdS black branes in a gauged supergravity theory, which we exhibit explicitly. We also discuss the uplift of these solutions to string and M-theory and comment on how the entropy of such black branes is captured by the dual field theory.
Introduction and summary
Supersymmetric quantum field theories (QFTs) placed in background fields provide a rich laboratory for testing our understanding of quantum theories. The toolbox for studying such theories has greatly expanded in recent years, leading to a cornucopia of exact results such as the nonperturbative computation of physical observables, nontrivial tests of holography and other known dualities, and the discovery of many new dualities. In the landscape of consistent supersymmetric QFTs, superconformal field theories (SCFTs) play a distinguished role. Their enhanced symmetry offers greater calculational control and they serve as anchors around which to structure our understanding of the renormalization group (RG) flow. SCFTs in background fields will be the main objects of interest in this work.
Consider placing an SCFT on a curved manifold M_d. If the manifold is equipped with a conformally flat metric, superconformal symmetry is preserved (up to well-understood anomalies in even dimensions). On a general curved manifold, however, both supersymmetry and conformal symmetry are generically broken, which leads to reduced computational control. This can be remedied by employing a simple and powerful idea, due to Witten [1]. Loosely speaking, the basic observation is that for supersymmetric QFTs with a continuous global R-symmetry one can turn on a background gauge field for this symmetry, and tune its magnitude so as to cancel (part of) the curvature of the manifold. This procedure, dubbed a topological twist, ensures that there is a particular covariantly constant spinor defined on the curved background, which can be used as a supersymmetry generator.^1 While Witten's idea was originally used to obtain a topological QFT on M_d, here we will be interested in a different application, possible when the manifold has a product structure of the form M_d = R^p × M_{d−p}, where R^p is flat Euclidean space and M_{d−p} is a general curved, smooth, compact and orientable manifold.^2 In this case one needs to perform a topological twist on M_{d−p} only, a procedure often referred to as a partial topological twist. Then, at length scales much larger than the one set by M_{d−p}, the effective dynamics is controlled by a non-topological supersymmetric theory on R^p. This procedure can be interpreted as an RG flow across dimensions, triggered by the local operators in the d-dimensional UV SCFT which are turned on by the background fields implementing the twist. This procedure is often applied in the literature to study specific QFTs, leading to many new insights into the physics of the resulting p-dimensional theory at low energies, as well as to unexpected dualities between this theory and a topological theory on M_{d−p}; see [3] for a recent review and further references.

If, in addition to a continuous R-symmetry, the d-dimensional SCFT in the UV has continuous flavor symmetries, one is free to turn on background values for the corresponding flavor gauge fields, without breaking any additional supersymmetry. While this freedom often leads to a large and interesting zoo of p-dimensional SCFTs in the IR (see [4][5][6][7][8][9][10][11][12][13][14][15] for a selection of recent references), it is clear that the details of such constructions depend on the particular choice of UV SCFT and background flavor fields. In contrast, any QFT has a stress-energy tensor which, for the SCFTs of interest here, sits in the same supermultiplet with the superconformal R-symmetry current. The universality of this multiplet structure suggests that twisted compactifications involving only the metric and the exact superconformal R-symmetry gauge field possess special properties, common to all d-dimensional UV SCFTs with a given amount of supersymmetry. Indeed, as we demonstrate below, this expectation is borne out.

^1 There are more general ways to place a supersymmetric QFT on a curved manifold, see for example [2] for a systematic approach. In this work we focus on the topological twist.
^2 For concreteness, here we focus on QFTs in Euclidean signature, although the discussion is also valid for Lorentzian theories, and we later switch between both signatures. We assume that M_{d−p} is smooth and compact, in particular there are no boundaries, punctures, or other defects, but it should be possible to relax this requirement.
A main objective of this work is to establish universal properties of this class of RG flows. Since SCFTs do not exist in dimension greater than six, we take d ≤ 6. Since we are interested in obtaining dynamical theories on R^p, we consider p ≥ 1, and since no topological twist is required on S^1 we take p ≤ d − 2. The tools we employ to study this setup are general and non-perturbative. Since the construction is clearly strongly dependent on the identification of the correct superconformal R-symmetry, we make use of the various maximization/extremization principles which determine the exact superconformal R-current [16,17,7,6]. Another powerful tool we use, when both p and d are even, is 't Hooft anomaly matching. As we discuss in detail below, this allows us to establish universal relations between conformal anomalies in the UV and IR theories. In the case of 3d SCFTs placed on Riemann surfaces (i.e., d = 3, p = 1) we use recent supersymmetric localization results to identify a universal relation between certain supersymmetric partition functions. Finally, another important weapon in our arsenal is holographic duality. We now proceed to present a short summary of the main results of our work.
Main results
Consider a d-dimensional SCFT with a continuous R-symmetry and, possibly, global flavor symmetries. The corresponding currents organize themselves into supermultiplets of the form:

$$\{T_{\mu\nu}\,,\; J^{R}_{\mu}\,,\; \cdots\}\,, \qquad \{J^{F}_{\mu}\,,\; \cdots\}\,, \qquad (1.1)$$

where T_{µν} is the stress-energy tensor, J^R_µ is the exact superconformal R-symmetry current, and J^F_µ are flavor currents. The ellipses stand for other possible bosonic and fermionic operators which depend on the number of supercharges and the dimension of spacetime. While the stress-energy tensor multiplet is omnipresent in SCFTs, the flavor symmetry currents depend on the specific theory under consideration. As described above, the topological twist is implemented by giving the corresponding sources, g_{µν}, A^R_µ, A^F_µ, ..., nontrivial background values. This procedure triggers an RG flow which depends on the details of the UV theory and in general requires a case-by-case analysis. The basic observation we make here is that if the topological twist is implemented only by turning on background fields which couple to the stress-energy tensor multiplet, i.e., set A^F_µ = 0, the resulting RG flow exhibits universal properties. Based on this observation, we make the following definition: Given a d-dimensional UV SCFT with a certain number of supercharges and continuous R-symmetry, placed on R^p × M_{d−p}, we define the universal twist of the theory as the partial topological twist on M_{d−p} along the exact UV superconformal R-symmetry, preserving the maximal possible number of supercharges. We note that for theories with a large amount of supersymmetry there may be more than one universal twist and we discuss this in detail below. For theories with rational R-charges one may find constraints on the topology of the compactification manifold M_{d−p} due to Dirac quantization of the background gauge field (see Section 2.3).^3 As is clear by now, the exact superconformal R-symmetry plays a crucial role in our story. Upon a generic (non-universal) twisted compactification of the theory, the exact R-symmetry of the IR SCFT may differ from the one in the UV, due to possible mixing of Abelian R-symmetries with Abelian flavor symmetries along the flow.^4 Although the precise mixing may be determined by an appropriate extremization principle, the result depends on the details of the theory and is thus not universal. In the case of flows between even-dimensional SCFTs with a universal twist, however, one can show that there is no such mixing along the flow.^5 This in turn leads to universal relations between the conformal anomaly coefficients, which we denote by the vector (a, c), of the UV d-dimensional theory and the IR p-dimensional theory of the form:

$$\begin{pmatrix} a \\ c \end{pmatrix}_{\rm IR} = U \begin{pmatrix} a \\ c \end{pmatrix}_{\rm UV}\,. \qquad (1.2)$$

Here U is a matrix that depends only on the topology of the compactification manifold, M_{d−p}, and the data specifying the partial topological twist, but is independent of the details of the SCFTs at both ends of the RG flow. These relations are exact and do not rely on the existence of a Lagrangian description of the UV or IR SCFTs.
We will argue that this universal behavior is not limited to RG flows between even-dimensional theories and that similar universal relations exist for flows between theories in various dimensions. For RG flows between odd-dimensional theories one cannot rely on 't Hooft anomalies, but in view of the F-theorem [18] it is natural to search for universal relations between the round-sphere free energies of the UV and IR SCFTs. Although computing free energies exactly is much harder than computing anomalies, we are nonetheless able to show that the free energies are indeed related by

$$F_{\rm IR} = u\,F^{\rm UV}_{S^d}\,, \qquad (1.3)$$

to leading order in N, in an appropriate large N limit. Here u is again a universal coefficient, depending only on the topology of M_{d−p} and the topological twist performed, but not the details of the SCFTs. The free energy on the l.h.s. of (1.3) can be thought of as an appropriate free energy of the IR theory on S^p. This universal relation can also be proven by pure field theory methods for the twisted compactifications of a large class of 3d N = 2 SCFTs on a Riemann surface by an analysis of the corresponding matrix models at large N [19] (see Section 3.3). For theories in other dimensions we establish the relation in (1.3) using holography.
As mentioned above, in order to establish the universal relations (1.2), (1.3) one needs to assume that the RG flow ends at an interacting SCFT in the IR. Whether this is actually the case is a nontrivial dynamical question, difficult to establish by field theory methods. When the d-dimensional SCFT admits a weakly coupled holographic dual, however, one can bring holography to bear on this question. Indeed, one way to establish the existence of an interacting superconformal fixed point in the IR (at least in the planar limit) is by constructing a supergravity solution that explicitly interpolates between the UV and IR SCFTs. The holographic description of twisted compactifications was first studied in the foundational work of Maldacena and Núñez (MN) [20], which built upon results for D-branes wrapping calibrated cycles [21] (see [22] for a review and further references). We exploit the same approach in our holographic analysis.
In this holographic setting, it is natural to ask what is the supergravity manifestation of universal RG flows across dimensions. Consider the supergravity fields dual to the operators in (1.1). These also organize themselves into multiplets, of the form

$$\{g_{\mu\nu}\,,\; A^{R}_{\mu}\,,\; \cdots\}\,, \qquad \{A^{F}_{\mu}\,,\; \cdots\}\,, \qquad (1.4)$$

where the gravity multiplet contains the metric g_{µν} and the graviphoton A^R_µ. The gauge fields A^F_µ belong to vector multiplets.^6 Since universal twists involve only operators dual to the gravity multiplet, it is natural to expect that the dynamics of this multiplet is sufficient to capture the corresponding universal RG flow. Indeed, one can restrict to this "minimal" gauged supergravity theory^7 in (d + 1) dimensions and construct domain wall solutions with a metric of the form

$$ds^2_{d+1} = dr^2 + e^{2f(r)}\,ds^2\big(\mathbb{R}^{1,p-1}\big) + e^{2g(r)}\,ds^2\big(M_{d-p}\big)\,. \qquad (1.5)$$

Here r is the "holographic" direction, and the metric is (locally) asymptotic to AdS_{d+1} for large r and approaches AdS_{p+1} × M_{d−p} for small r. The solution is also supported by a nontrivial magnetic flux for the graviphoton through 2-cycles in M_{d−p}. The entire spacetime can be thought of as a BPS (p − 1)-brane living in AdS_{d+1}, interpolating between the SCFT_d dual in the UV and the SCFT_p dual in the IR. We identify the black brane solutions corresponding to various twisted compactifications in minimal gauged supergravity for (d + 1) = 4, 5, 6, 7 and describe them in detail in Section 4. In addition, we show that the field theory universal relations in (1.2) and (1.3) are correctly reproduced holographically. Many of these supergravity solutions have been found in the literature before, but their field theory interpretation as universal RG flows has not necessarily been appreciated.

^6 These are dynamical supergravity fields and coincide with the background metric and gauge fields in the boundary field theory discussed below (1.1) only at the asymptotically AdS boundary.
To make contact with top-down constructions in string and M-theory we also emphasize that these low-dimensional gauged supergravity theories arise as consistent truncations of ten- and eleven-dimensional supergravity. This means that it is possible to uplift the universal holographic RG flows to string and M-theory. The choice of the internal manifold determines the details of the particular SCFT dual. For instance, uplifting a five-dimensional solution to IIB SUGRA with S^5 as the internal manifold describes a twisted compactification of N = 4 SYM, while taking the internal manifold to be Y^{p,q} corresponds to a twisted compactification of N = 1 quiver gauge theories of the type discussed in [9].
The holographic perspective not only establishes the existence of the IR fixed point, but also suggests universal relations among quantities for flows between even-dimensional and odd-dimensional SCFTs of the form

$$F_{\rm IR} = u_{aF}\; a_{\rm UV}\,, \qquad a_{\rm IR} = u_{Fa}\; F_{\rm UV}\,. \qquad (1.6)$$

Here F is a free energy, a is a conformal anomaly coefficient, and u_{aF}, u_{Fa} are again universal coefficients, depending only on the compactification manifold M_{d−p} and the topological twist performed. These are nontrivial and powerful predictions that it would be interesting to establish directly in field theory, including finite N corrections. Finally, we should stress that the supersymmetric black branes realizing these universal RG flows across dimensions have a nonvanishing entropy. We compute these entropies in Appendix B and observe interesting relations with field theory quantities such as conformal anomalies and sphere free energies. The idea of universal RG flows common to a large class of SCFTs has appeared before, both in a holographic [23] as well as in a purely field-theoretic context [24,25]. In these papers, however, the UV and IR theories live in the same number of spacetime dimensions. Universal supergravity domain walls dual to holographic RG flows, similar in spirit to the ones studied here, were also discussed in [7-9, 26, 27].

^7 Note that by "minimal" here we mean that we consider only the theory containing the gravity multiplet and no extra matter multiplets, not that the theory has the minimal number of supercharges.
The rest of the paper is organized as follows. In Section 2 we review some background material on 't Hooft and conformal anomalies and topological twists. Section 3 is devoted to a study of twisted compactifications of field theories in various dimensions with a particular focus on universal relations among conformal anomalies. In Section 4 we present the holographic dual description of these universal twisted compactifications and discuss universal relations among various quantities, such as conformal anomalies and free energies from the holographic perspective. We conclude with a short summary and a discussion of various open problems in Section 5.
In the three appendices we present our conventions on characteristic classes, a short discussion on the relation between some of our results and the entropy of extremal black branes, as well as an observation of a possible two-dimensional analog of the universal RG flow discussed in [25].
Generalities
We begin by reviewing some general background on anomalies in QFTs and basics of topologically twisted supersymmetric QFTs. Readers familiar with this material may skip to Section 3.
't Hooft anomalies
In even-dimensional QFTs, classical symmetries may become anomalous at the quantum level. Quantum anomalies for local symmetries are forbidden in consistent QFTs. However, global (or 't Hooft) anomalies are not only allowed, but are in fact robust physical observables containing exact information about the theory (see [28] for a pedagogical review). 't Hooft anomalies for continuous global symmetries are packaged efficiently in the anomaly polynomial, I_{d+2}, of the theory. This is a gauge-invariant (d + 2)-form which is a polynomial in characteristic classes for the global symmetries of the theory.
We will consider two-dimensional N = (0, 2), four-dimensional N = 1, and six-dimensional N = (1, 0) theories, whose R-symmetry groups are U(1)_R, U(1)_R, and SU(2)_R, respectively.^8 In addition, these theories generically also have flavor symmetries. The anomaly polynomials are given by:

$$2d: \quad I_4 = \frac{k_{RR}}{2}\,c_1(F_R)^2 - \frac{k_{\rm grav}}{24}\,p_1(T_2) + I^{\rm flavor}_4\,, \qquad (2.1)$$
$$4d: \quad I_6 = \frac{k_{RRR}}{6}\,c_1(F_R)^3 - \frac{k_{R}}{24}\,c_1(F_R)\,p_1(T_4) + I^{\rm flavor}_6\,, \qquad (2.2)$$
$$6d: \quad I_8 = \frac{1}{4!}\Big(\alpha\,c_2(F_R)^2 + \beta\,c_2(F_R)\,p_1(T_6) + \gamma\,p_1(T_6)^2 + \delta\,p_2(T_6)\Big) + I^{\rm flavor}_8\,. \qquad (2.3)$$

Here c_n(F) denotes the n-th Chern class of the corresponding bundle and p_n(T_d) denotes the n-th Pontryagin class of the tangent bundle of the manifold on which the theory is placed. See Appendix A.1 for our conventions. The various coefficients multiplying these characteristic classes encode the corresponding 't Hooft anomalies for the energy-momentum multiplet in the theory. For the theories considered here this multiplet always contains the superconformal R-symmetry current. If the QFT at hand admits a Lagrangian description these anomalies can be computed by one-loop Feynman diagrams with insertions of the R-current and the energy-momentum tensor.
In the anomaly polynomials above we have not included anomalies for gauge symmetries since in this work we only study consistent QFTs, where such anomalies are absent. Note that we allow for gravitational anomalies since we are discussing QFTs and thus the metric is treated as a non-dynamical background field. We have not given explicit expressions for flavor anomalies as well as mixed flavor-R-symmetry anomalies. These are schematically encoded in the anomaly polynomial I^{flavor}_{d+2} in (2.1)-(2.3). The rationale for doing this is that these anomalies depend on the details of the theory and can be ignored (under certain mild assumptions, to be discussed in Section 3) for the purposes of our discussion.
Weyl anomaly
Another important anomaly for our story is the Weyl (or conformal) anomaly, which captures the failure of the stress-energy tensor to be traceless when an even-dimensional CFT is placed in a nontrivial curved background. Ignoring conventional normalizations the anomaly has the form:

$$\langle T^{\mu}{}_{\mu} \rangle \sim a\,E_d + \sum_i c_i\,W_i\,, \qquad (2.4)$$

where E_d is the Euler density in d dimensions and the W_i are a set of local, independent Weyl invariants of the manifold on which the theory is placed. The number of independent invariants of this type depends on the spacetime dimension: there are none in two dimensions, one in four dimensions, and three in six dimensions.
In a superconformal theory, the stress-energy tensor and the superconformal R-symmetry current sit in the same supermultiplet. As a consequence, the Weyl and R-symmetry anomaly coefficients in (2.1)-(2.3) and (2.4) are related by supersymmetry through Ward identities. In 2d (see for example [29]) and 4d [30] these relations are:

$$c^{r}_{2d} = 3\,k_{RR}\,, \quad c^{r}_{2d} - c^{l}_{2d} = k_{\rm grav}\,; \qquad a_{4d} = \frac{3}{32}\left(3\,{\rm Tr}\,R^3 - {\rm Tr}\,R\right), \quad c_{4d} = \frac{1}{32}\left(9\,{\rm Tr}\,R^3 - 5\,{\rm Tr}\,R\right), \qquad (2.5)$$

where R denotes the exact superconformal R-symmetry. In 6d there are three tensor structures W_i and thus three c-type coefficients (see e.g. [31] and references therein for details). For theories with N = (1, 0) supersymmetry there is a linear relation^9 among the c_i's and thus the independent Weyl anomaly coefficients in 6d can be taken to be (a_{6d}, c^{(1)}_{6d}, c^{(2)}_{6d}). The expression for the a-anomaly coefficient in terms of R-symmetry anomalies was found in [33],

$$a_{6d} = \frac{16}{7}\left(\alpha - \beta + \gamma\right) + \frac{6}{7}\,\delta\,, \qquad (2.6)$$

in units where a_T = 1, with a_T the value of a_{6d} for the free N = (2, 0) tensor multiplet.^10 Analogous expressions for the c^{(1)}_{6d}, c^{(2)}_{6d} coefficients in terms of (α, β, γ, δ) were recently determined in [34][35][36]. In the special case of N = (2, 0) theories one has the relations [34] γ = β/4 and δ = −β, which imply that c^{(1)}_{6d} and c^{(2)}_{6d} are proportional to each other, so that there is only one independent c-type coefficient, which we take to be c_{6d}.^11 The relation (2.6) for N = (2, 0) SCFTs then simplifies to

$$a_{6d} = \frac{2}{7}\left(8\,\alpha - 9\,\beta\right). \qquad (2.7)$$

For N = (2, 0) SCFTs in the ADE class one has

$$a_{6d} = \frac{16}{7}\,d_G\,h_G + r_G\,, \qquad c_{6d} = 4\,d_G\,h_G + r_G\,, \qquad (2.8)$$

where d_G, r_G, and h_G are the dimension, rank, and Coxeter number of the group G, respectively, satisfying the group theory identity d_G = r_G(1 + h_G). We note that for these theories 4/7 ≤ a_{6d}/c_{6d} ≤ 1, the lower bound being saturated in the large N limit for the A_N and D_N theories.

^9 As argued in [32], supersymmetry imposes one linear relation among the three c^{(i)}_{6d}, leaving only two of them independent.
^10 In [34] this quantity was normalized to a_T = −7/1152. Here we follow the conventions in [33] and set a_T = 1.
^11 Compared to [34] we use the normalization c_{6d} = (7/4) c so that the c-type coefficient for the free tensor multiplet is 1.
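As a quick arithmetic check of the quoted bounds, substituting the group theory data for the A_{N−1} series (d_G = N² − 1, h_G = N, r_G = N − 1) into (2.8) gives

$$\frac{a_{6d}}{c_{6d}} = \frac{\tfrac{16}{7}\,d_G h_G + r_G}{4\,d_G h_G + r_G} \;\xrightarrow{\;N\to\infty\;}\; \frac{16/7}{4} = \frac{4}{7}\,,$$

while in the opposite regime, where the rank term r_G dominates over d_G h_G, the ratio approaches 1, in agreement with the bounds stated above.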
Topological twist
If a supersymmetric QFT in flat space is placed in a general background for the metric and gauge fields, supersymmetry will be broken. It was recently understood how to systematically arrange the background fields in such a way that some amount of supersymmetry is preserved [2]. Here we will specialize to one particular such arrangement, introduced by Witten in [1] and known as a topological twist. The basic idea can be summarized as follows. If the QFT at hand has a continuous R-symmetry, one can turn on a background gauge field, A^R_µ, that couples to the R-symmetry current. One can then adjust the magnitude of the background field so as to cancel the nontrivial part of the spin connection ω_µ on the curved manifold. Schematically, one tunes A^R_µ = −(1/4)ω_µ so that the generalized Killing spinor equation takes the form

$$\left(\partial_\mu + \frac{1}{4}\,\omega_\mu + A^R_\mu\right)\epsilon = \partial_\mu\,\epsilon = 0\,. \qquad (2.9)$$

The last equation in (2.9) admits a constant spinor solution on any spin manifold, implying that some amount of supersymmetry is preserved in this nontrivial background for the metric and R-symmetry gauge field.^12 It is clear that this method for preserving some supersymmetry can work only if the nontrivial part of the spin connection can be embedded in the R-symmetry group of the QFT in flat space.^13 The case of interest to us here is when M_d is a product manifold of the form M_d = R^p × M_{d−p}, with M_{d−p} a curved, compact, smooth manifold. Then, the topological twist is performed only along M_{d−p} and thus at low energies (compared to the scale set by the size of M_{d−p}) one expects to have a physical, i.e., non-topological, supersymmetric theory on R^p. This is known as a partial topological twist.^14 An important point for many of our constructions below is that this topological twist is naturally realized in string theory on the world-volume of D- and M-branes wrapping calibrated cycles in special holonomy manifolds [21].
If, in addition to a continuous R-symmetry, the QFT at hand has a continuous flavor symmetry, one can turn on a more general background in which the R-symmetry gauge field implementing the twist is accompanied by background fields A^{F_i}_µ for the flavor symmetry, with free parameters a_i. Since, by definition, the supersymmetry parameter is not charged under flavor symmetries, the Killing spinor equation (2.9) is not modified by turning on background flavor fields and the amount of supersymmetry preserved is unchanged. When the R-symmetry group is Abelian this freedom reflects the fact that the R-symmetry is ambiguous, as any linear combination of a "reference" R-symmetry with Abelian flavor symmetries is again an R-symmetry. Although one is free to choose any reference R-symmetry one likes, in the case of SCFTs there is a preferred R-symmetry, namely the superconformal R-symmetry, R_SC, whose corresponding current belongs to the stress-energy tensor supermultiplet. This unique R-symmetry can be determined by maximization/extremization principles in any integer dimension in the range 1 ≤ d ≤ 4 [16,17,7,6,12]. For SCFTs it is thus natural to take the superconformal R-symmetry to be the reference R-symmetry and write the background field as:

$$A_\mu = A^{R_{\rm SC}}_\mu + \sum_i a_i\,A^{F_i}_\mu\,. \qquad (2.10)$$

It is then easy to understand why the universal RG flows across dimensions studied in this paper are special; these correspond to setting a_i = 0, i.e., the special choice of background gauge field which extends only along the exact superconformal R-symmetry in the UV. Although not the focus here, flows for generic values of a_i are of course also interesting, and lead to a plethora of RG flows across dimensions; see for example [4-9, 12, 37].

^12 Here we have been schematic, omitting Lorentz and R-symmetry indices. If these are included one sees that the cancellation of the spin connection by the background R-symmetry field can occur, at most, when acting on half of the components of the spinor, and thus, at most, half of the supersymmetry can be preserved in this way.
^13 We are being slightly imprecise here. In general, one needs to cancel only part of the spin connection on the curved manifold if the spin connection after the twist admits covariantly constant spinors. For example, on Kähler manifolds in four real dimensions one may cancel only the U(1) part of the U(2) = U(1) × SU(2) structure group.
^14 In the original construction of Witten the manifold M_d did not necessarily have a product structure with a flat factor and thus the resulting theory on the curved manifold was topological. Although also interesting, we do not consider such theories here.
There is an important subtlety to keep in mind when performing topological twists. For any gauge-invariant operator O in the theory one must impose the Dirac quantization condition

$$\frac{1}{2\pi}\int_{C_2} R_{\mathcal{O}}\,F^R \;\in\; \mathbb{Z}\,, \qquad (2.11)$$

where R_O is the R-charge of O, F^R is the background R-symmetry curvature for the background field (2.10), and C_2 is any compact 2-cycle in M_{d−p}. As a consequence, for the universal topological twist to be well defined, the exact superconformal R-charge of all gauge-invariant operators in the theory must be a rational number. This may not be the case in some theories with four Poincaré supercharges, such as 3d N = 2 and 4d N = 1 SCFTs, in which case the universal topological twist is ill-defined. Nonetheless, one can easily find an infinite number of such SCFTs with rational R-charges, so this is not an important obstruction to discussing universal properties of these constructions. For theories with rational R-charges the quantization condition in (2.11) may lead to constraints on the topology of M_{d−p} and we shall discuss such cases below (see Table 1 and the discussion above it). As emphasized in [9], in such situations one can circumvent these constraints on the universal topological twist by including flavor magnetic fluxes a_i, and adjusting them in a way consistent with (2.11). This procedure, however, is theory-specific and thus not universal. In this paper we restrict ourselves to SCFTs, and choices of manifolds M_{d−p}, for which the quantization condition (2.11) is satisfied when all flavor fluxes are set to zero. This excludes, in particular, theories with irrational R-charges.
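To illustrate the resulting topological constraint with a simple, hypothetical example: if some gauge-invariant operator has R-charge 1/q for a positive integer q, and all R-charges lie in (1/q)Z, then (2.11) forces the R-symmetry flux through every compact 2-cycle to be quantized in units of q,

$$\frac{1}{2\pi}\int_{C_2} F^R \;\in\; q\,\mathbb{Z}\,,$$

which may be incompatible with the flux values demanded by the twist on a given M_{d−p} and can thus rule out the universal twist on some manifolds.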
Field theory
As outlined in the Introduction, the main characters in our story are superconformal field theories with a continuous R-symmetry, which we place on a manifold of the form M_d = R^p × M_{d−p} with a partial topological twist on M_{d−p}. In this section, we consider SCFTs with different numbers of supercharges and various values of d and p. Our goal is to extract some physical information about the low-energy effective theory on R^p at the end of the RG flow. For flows between even-dimensional SCFTs (both d and p even) the basic tools that allow us to establish the universal relations (1.2) are anomaly matching and superconformal symmetry. The calculation proceeds along the lines of the analysis in [38,39,4-9]. Since this will be used repeatedly throughout this section, let us summarize the general strategy before studying different cases. We start with a SCFT_d with an anomaly polynomial I_{d+2}. Performing a partial topological twist on M_{d−p} modifies the global symmetry bundles, leading to a new anomaly polynomial I^{twisted}_{d+2}, which we then integrate over M_{d−p} to arrive at the anomaly polynomial I_{p+2} of the p-dimensional IR theory,

I_{p+2} = \int_{M_{d-p}} I^{\rm twisted}_{d+2} . (3.1)

This equation determines the R-symmetry anomalies in the IR in terms of those in the UV, encoding the 't Hooft anomaly matching condition [40]. Assuming the UV and IR theories are superconformal we can then use superconformal Ward identities at the two fixed points to express the R-symmetry anomalies in terms of conformal anomalies. This ultimately leads to the universal relations of the form (1.2) among IR and UV central charges.
For situations in which either d or p is odd, one cannot rely on anomaly matching, making it harder to analyze the resulting RG flow. As argued in the Introduction, a natural quantity to consider in the absence of anomalies is an appropriate supersymmetric partition function, or free energy, of the CFT. 15 This is a much harder task, not only because the computation of such partition functions is technically more involved, but also because it is not obvious how to approach such calculations in a universal way, i.e., without referring to a specific theory. Some progress on this hard question has been made recently for twisted compactifications of 3d N = 2 theories on a Riemann surface Σ g (d = 3, p = 1) in the planar limit. In this case one indeed finds a universal relation between the supersymmetric three-sphere free energy, F S 3 , of the UV 3d SCFT and a certain topologically twisted partition function, F Σg×S 1 , which is identified with a Witten index of the effective 1d theory in the IR [19] (see also [42]). This relation has been established explicitly for a large class of quiver gauge theories, to leading order in N , and we discuss it in more detail in Section 3.3.
6d SCFTs
We begin our exploration of RG flows across dimensions from d = 6, the maximal dimension in which a SCFT can exist [43]. We will consider theories with both N = (1, 0) and N = (2, 0) supersymmetry.
N = (1, 0)
SCFTs with N = (1, 0) supersymmetry have eight real supercharges 16 and an SU(2)_R R-symmetry. They may also have global flavor symmetries. The study of these theories has recently attracted much attention and a general formula for their anomaly polynomial was derived in [44,45]. The relation between 't Hooft and Weyl anomalies imposed by superconformal Ward identities was found in [33-36]. We now study these theories with a partial topological twist on Riemann surfaces and on Kähler four-manifolds.
On Riemann surfaces. Consider a smooth Riemann surface Σ_g with holonomy group U(1)_Σ. There is a unique way to embed U(1)_Σ into SU(2)_R while preserving minimal 4d N = 1 supersymmetry. At the level of line bundles, the topological twist amounts to shifting the Chern class of the U(1)_R ⊂ SU(2)_R bundle by −(κ/2) t_g, where t_g is the Chern class of the tangent bundle to Σ_g, normalized as in (A.7), the coefficient −κ/2 is fixed by supersymmetry, and κ is the normalized curvature of Σ_g defined in (A.6). Implementing the twist in the anomaly polynomial (2.3), integrating over Σ_g, and using (A.7), one finds the twisted anomaly polynomial; comparing this to (2.2) we read off the resulting 4d 't Hooft anomaly coefficients. Many N = (1, 0) SCFTs have non-Abelian flavor symmetry groups and thus we will assume that the U(1)_R superconformal R-symmetry of the IR 4d theory is the same as the Cartan subgroup of the UV SU(2)_R preserved by the topological twist. 17 With this identification, we can use (2.5) to find the following expression for the 4d Weyl anomalies

4d N = 1: \qquad \begin{pmatrix} a_{4d} \\ c_{4d} \end{pmatrix} = \frac{g-1}{32} \begin{pmatrix} 9 & -6 \\ 9 & -10 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} . (3.5)
16 Here we count only real components of Poincaré supercharges. When we have a conformal theory there are, as usual, the accompanying superconformal supercharges. 17 It would be nice to put this statement on a firmer footing by analyzing the general 6d N = (1, 0) anomaly polynomials of [44,45].
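A quick consistency sketch of (3.5), assuming the twisted 't Hooft anomalies take the form k_RRR = (g − 1)α and k_R = 2(g − 1)β suggested by the discussion above (up to sign conventions): the standard 4d N = 1 relations between Weyl anomalies and R-symmetry 't Hooft anomalies then give
\[
a_{4d} = \frac{3}{32}\left(3 k_{RRR} - k_{R}\right) = \frac{g-1}{32}\left(9\alpha - 6\beta\right) , \qquad
c_{4d} = \frac{1}{32}\left(9 k_{RRR} - 5 k_{R}\right) = \frac{g-1}{32}\left(9\alpha - 10\beta\right) ,
\]
which reproduces the matrix in (3.5). In particular, when α dominates one finds a_4d ≈ c_4d ≈ (9/32)(g − 1)α, so the two 4d central charges become equal to leading order at large N.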
Note that the 4d 't Hooft and Weyl anomalies depend only on α, β and not on the purely gravitational anomalies γ, δ. Recall that the 6d Weyl anomaly coefficients are given in terms of 't Hooft anomalies in (2.6). Since there are three independent Weyl anomaly coefficients in N = (1, 0) theories, but four R-symmetry anomalies, one cannot invert the relations (2.6) to write (3.5) as a relation purely among 6d and 4d Weyl anomalies. It is possible, however, to do so in the case of an N = (2, 0) theory, which we discuss below. We note that requiring that the ratio a_4d/c_4d satisfies the Hofman-Maldacena bound 1/2 ≤ a_4d/c_4d ≤ 3/2 imposes conditions on the values of α, β. We discuss Hofman-Maldacena bounds in more detail in Section 3.4.
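To make this constraint explicit (a small exercise, taking (3.5) at face value and assuming α > 0 and c_4d > 0):
\[
\frac{a_{4d}}{c_{4d}} = \frac{9\alpha - 6\beta}{9\alpha - 10\beta} , \qquad
\frac{1}{2} \le \frac{a_{4d}}{c_{4d}} \le \frac{3}{2} \;\Longleftrightarrow\; 2\beta \le \alpha .
\]
In the large N limit, where α dominates, the ratio approaches a_4d/c_4d → 1 and the bounds are automatically satisfied.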
If the SCFT at hand admits a suitable large N limit, the pure R-symmetry anomaly dominates over the gravitational anomalies, i.e., α ≫ β, γ, δ in (2.6), in which case the relation (3.5) reduces to the universal ratio (3.6). Assuming a_6d > 0, a positive central charge in 4d is obtained only for g > 1. We will derive this universal ratio from holography in Section 4.1.1.
Finally we note that the universal relation in (3.5) is satisfied for the particular models of twisted compactifications of six-dimensional (1, 0) SCFTs discussed in Section 7 of [46], see in particular Equations (7.1) and (7.9).
On Kähler four-manifolds. Consider a Kähler four-manifold M_4, whose holonomy group is (contained in) U(2)_s = SU(2)_s × U(1)_s. We denote the first Pontryagin number and the Euler number of M_4 by P_1 and χ, respectively. These can be written in terms of the Chern roots t_{1,2} of the tangent bundle to M_4 as P_1 = ∫_{M_4}(t_1^2 + t_2^2) and χ = ∫_{M_4} t_1 t_2. To preserve 2d N = (0, 2) supersymmetry we turn on a background for the Cartan of the 6d SU(2) R-symmetry, proportional to the U(1)_s spin connection. 18 Making this replacement in the anomaly polynomial (2.3), using the relations in (A.4), and integrating the twisted anomaly polynomial over M_4 leads to the 2d 't Hooft anomalies. 18 One may also consider turning on a background gauge field proportional to the SU(2)_s spin connection. For 6d theories with only (1, 0) supersymmetry this results in a 2d theory with (0, 1) supersymmetry and thus no continuous R-symmetry. Our anomaly matching procedure is thus not applicable and we do not consider this case further here. For N = (2, 0) SCFTs this twist leads to a 2d theory with (0, 2) supersymmetry and was studied in Section 5.1 of [47] as well as Section 6.3 of [7].
Thus, the central charges at the 2d fixed point are given by (3.10). The same result for the two-dimensional conformal anomalies was derived recently in [48] (see in particular Equations (2.36)-(2.37) in [48]). As we shall see in Section 4.1.1, the holographic dual of this flow across dimensions exists only when the Kähler manifold is negatively curved. In this case we can use a relation between the topological invariants and the volume of the manifold. 19 Using this, together with the fact noted above that α ≫ β, γ, δ in the holographic limit, (3.10) reduces to (3.12) to leading order in N. We will derive this universal ratio holographically in Section 4.1.1.
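For later reference (in particular for the product case M_4 = Σ_1 × Σ_2 considered in the next subsection), recall the standard values of these topological invariants for a product of Riemann surfaces:
\[
P_1(\Sigma_1\times\Sigma_2) = 0 , \qquad \chi(\Sigma_1\times\Sigma_2) = \chi(\Sigma_1)\,\chi(\Sigma_2) = 4(g_1-1)(g_2-1) .
\]
The vanishing of P_1 follows from p_1 = c_1^2 − 2c_2, together with c_1^2 = 2 c_1(Σ_1) c_1(Σ_2) and c_2 = c_1(Σ_1) c_1(Σ_2) on the product.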
The twisted compactification of a 6d N = (1, 0) theory on a three-manifold is possible by mixing the SO(3) ≅ SU(2) holonomy group with the SU(2)_R R-symmetry. However, in this case one does not expect to obtain a theory with a continuous R-symmetry in the IR and we do not study this case here. For N = (2, 0) SCFTs, however, one is equipped with an SO(5) R-symmetry group which allows for more general topological twists. We study this next.
N = (2, 0)
General twisted compactifications of N = (2, 0) SCFTs on Riemann surfaces and on fourmanifolds were studied in [4,5] and [7], respectively. Particular twists on Riemann surfaces, which we identify here as universal twists, were studied in [20,39]. Here we reproduce the results in these references, emphasizing the universal aspects.
We will focus on two types of topological twists involving only an Abelian background gauge field. The first is the universal twist of the N = (1, 0) theory described in Section 3.1.1 but now applied to N = (2, 0) theories. This corresponds to twisting along the diagonal combination of the Cartan SO(2) A ×SO(2) B ⊂ SO(5) R . The second twist is possible only for N = (2, 0) theories and preserves twice the amount of supersymmetry. It can be viewed as first decomposing the R-symmetry group in a block-diagonal form as SO(2) A × SO(3) B ⊂ SO(5) R and then twisting along SO(2) A .
On Riemann surfaces. We first discuss the universal twist of the N = (1, 0) theory, applied to the N = (2, 0) case. This twist of the maximal theory on Σ_g was considered by Maldacena and Núñez in [20] and we shall refer to it as the MN N = 1 twist. As argued above, (3.5) still holds in the N = (2, 0) case, which using (2.7) can be written as (3.13). This reproduces the central charges for the 4d N = 1 twisted compactifications of the ADE N = (2, 0) theories derived in [39,5]. The twist along SO(2)_A on Σ_g preserves 4d N = 2 supersymmetry [20] (see also [49]) and we shall refer to it as the MN N = 2 twist. It amounts to a shift of the SO(2)_A line bundle proportional to t_g. Performing the twist in the anomaly polynomial and integrating over Σ_g one finds (3.15). Again, this result is compatible with the results in [50,5]. In the large N limit the relation between 4d and 6d conformal anomalies becomes (3.16). We will derive this relation holographically in Section 4.1.2.
On Kähler four-manifolds. Applying the universal twist of the N = (1, 0) theory on M_4 to N = (2, 0) theories, the 2d central charges are still given by (3.10), which using (2.7) can be written in terms of a_6d and c_6d. This matches the result obtained in [7] for ADE theories (see Equation (6.6) there, with the flux parameters set to zero, which corresponds to the universal twist discussed here). The second twist, along SO(2)_A, leads to a 2d (0, 4) theory with the anomalies in (3.19), derived in [7]. As discussed in [7], this 2d SCFT does not seem to admit a holographically dual AdS_3 description at large N. One reason for this might be that the theory obtained in this way does not have a normalizable vacuum, like the N = (4, 4) σ-model onto the Hitchin moduli space discussed in [51]. Another possible twist on Kähler four-manifolds is to turn on a nonabelian R-symmetry background. This corresponds to identifying the U(2)_s spin connection with the U(2) ⊂ SO(4) ⊂ SO(5)_R inside the R-symmetry group. This twist preserves 2d N = (1, 2) supersymmetry and was studied in Section 6.2 of [7]; we do not discuss it further here.
On product four-manifolds M_4 = Σ_1 × Σ_2. When M_4 is taken to be a product of Riemann surfaces Σ_1 × Σ_2, with spin connections ω_{1,2}, the holonomy group is reduced to U(1)_{Σ_1} × U(1)_{Σ_2} and there is an additional universal twist possible. We take both Riemann surfaces to have negative curvature for simplicity. This was studied in [7]. The twist can be defined by considering the Cartan SO(2)_A × SO(2)_B subgroup of the SO(5) R-symmetry and identifying the spin connection ω_1 with SO(2)_A and the spin connection ω_2 with SO(2)_B (or vice-versa). This is the twist studied in Section 3 of [52]. It preserves 2d N = (2, 2) supersymmetry and leads to a 2d theory with the central charges in (3.20) (see Equation (5.23) in [7]). Using the explicit conformal anomalies for the ADE series of N = (2, 0) theories one can show that the 2d central charges are an integer multiple of 3, which suggests an interpretation of the 2d theory as a nonlinear σ-model on a Calabi-Yau target space. In the large N limit the central charges become (3.21). We will reproduce this holographically in Section 4.1.2.
On five-and three-manifolds. The twisted compactifications of 6d N = (2, 0) theories on the worldvolume of M5-branes on smooth five-and three-manifolds were discussed in [53,47]. In the former case, the whole SO(5) R R-symmetry group is turned on to implement the twist, which leads to a superconformal quantum mechanics with a single supercharge (see Section 3.3 in [47]). In the latter case the twist is obtained by considering SO(2) × SO(3) ⊂ SO(5) R and twisting along SO (3), which leads to a 3d N = 2 theory (see Section 3.1 in [47]). These 3d theories were studied also later in [54].
Since we do not have anomalies at our disposal in 3d and 1d, we are limited in our ability to extract universal information about the RG flow across dimensions with field theory techniques. However, the holographic dual description of these RG flows has been constructed in [47], showing that the IR fixed points exist, at least at large N. The AdS/CFT dictionary then leads to a universal prediction which would be interesting to test directly in field theory, by computing the (partially) twisted partition functions Z_{S^1×M_5} and Z_{S^3×M_3} and comparing them to the Weyl anomaly coefficients a_6d, c_6d. We discuss this further in Section 4.1.2 below.
4d SCFTs
Here we consider 4d N = 1 and N = 2 SCFTs on a smooth Riemann surface Σ g . In the case of N = 1 theories there is only one possible topological twist, preserving N = (0, 2) supersymmetry. In the case of N = 2 one may also consider twists preserving N = (2, 2) and N = (0, 4). We also comment briefly on twisted compactifications of theories with N > 2 as well as compactifications on 3-manifolds.
N = 1
The partial topological twist of N = 1 SCFTs on a compact Riemann surface was discussed in some detail in [9], where an example of the universal relations discussed in this work was presented. At the level of the R-symmetry bundle the topological twist amounts to a shift of c_1(R) proportional to t_g, as in the 6d case above. Integrating the twisted anomaly polynomial I^{twisted}_6 over Σ_g and comparing to (2.1) we read off the 2d anomaly coefficients. Assuming k_{RRF_i} = k_{F_i} = 0 for all flavor symmetries F_i, 20 one can show that the 2d trial central charge is extremized by the UV R-symmetry (see [9] for details) and thus the IR and UV superconformal R-symmetries coincide. The resulting 2d anomalies are then fixed by the 4d R-symmetry 't Hooft anomalies, and using (2.5) they can be written in terms of a_4d and c_4d as in (3.24). In the large N limit this becomes (3.25). Once again, a positive 2d central charge requires a negatively curved Riemann surface. We derive this universal relation from holography in Section 4.3.1.
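A minimal sketch of how these relations arise, using the standard 2d N = (0, 2) anomaly relations c_r = 3 k^{2d}_{RR} and c_r − c_l = k^{2d}_{grav}, and assuming the integration over Σ_g produces k^{2d}_{RR} = (g − 1) k_{RRR} and k^{2d}_{grav} = (g − 1) k_R (up to sign conventions):
\[
c_r = 3(g-1)\,k_{RRR} = \frac{16}{3}(g-1)\left(5 a_{4d} - 3 c_{4d}\right) , \qquad
c_r - c_l = (g-1)\,k_{R} ,
\]
so that at large N, where a_4d ≈ c_4d, one finds c_r ≈ c_l ≈ (32/3)(g − 1) a_4d; this is the universal relation reproduced holographically in Section 4.3.1.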
N = 2
For N = 2 SCFTs there are more possibilities for twisted compactifications. The R-symmetry group is SU(2) × U(1), and we denote the Cartan generators by R_3 and R_0, respectively. A partial topological twist along R_3 preserves 2d N = (2, 2) supersymmetry, while a twist along R_0 preserves N = (0, 4) supersymmetry. Following the terminology in [55] we refer to these as the α- and β-twist, respectively. The twist performed for the N = 1 theory in Section 3.2.1 above is a particular combination of these, since when one considers the N = 2 theory as a particular example of an N = 1 SCFT, the N = 1 superconformal R-symmetry is given by a particular linear combination of R_0 and R_3. We first discuss the α-twist. Since this corresponds to a twist along the Cartan of SU(2), it breaks the R-symmetry to U(1)_{R_3} × U(1)_{R_0}, with R_3 becoming the vector R-symmetry and R_0 becoming the axial R-symmetry of the N = (2, 2) theory (see, e.g., Appendix F in [56] for details). The anomaly polynomial of the 4d theory is given by (3.27), where we have used the fact that any trace with an odd number of R_3 insertions vanishes. The α-twist amounts to a shift of the R_3 line bundle proportional to t_g. Performing the twist in the anomaly polynomial (3.27) and integrating over Σ_g leads to the 2d 't Hooft anomalies, and using the relations 21 valid for any 4d N = 2 SCFT between the R-symmetry 't Hooft anomalies and the conformal anomalies, one obtains the 2d central charge (3.31). It is interesting to note that the linear combination of 4d conformal anomalies, 4(2a_4d − c_4d), appears in the Shapere-Tachikawa formula [57], in which r is the complex dimension of the Coulomb branch and D(O_i) are the conformal dimensions of the operators O_i which parametrize it. Therefore, one can rewrite the 2d conformal anomalies in (3.31) as a sum of dimensions of Coulomb branch operators. This clearly suggests a relation between the 2d (2, 2) SCFT in the IR and the Coulomb branch of the 4d N = 2 SCFT in the UV. It would be nice to understand this relation more precisely. Let us also note that if the 4d N = 2 SCFT admits a Lagrangian description in terms of a vector multiplet, with gauge group G of dimension d_G, one can show that the 2d central charges can be written in terms of d_G. If the 4d N = 2 SCFT admits a large N limit, (3.31) reduces to (3.33). We will derive this holographically in Section 4.3.2. Finally we note that for g = 0 the two-dimensional central charge in (3.33) is negative and has the same numerical value as the central charge of the chiral algebra associated to the 4d N = 2 SCFT following the procedure in [58]. We now discuss the β-twist, which amounts to a shift of the U(1) R-symmetry bundle F^{(4d)} proportional to t_g. The SU(2) R-symmetry of the 4d theory is untouched and becomes an SU(2) R-symmetry of the 2d N = (0, 4) theory. The central charges of the theory can be computed from an N = (0, 2) subalgebra, whose U(1) R-symmetry is generated by an appropriate combination of the Cartan generators; the resulting central charges are given in (3.35). We note that the β-twist has been considered also in [56] and [59]. The expression in (3.35) is the same as the one discussed in Appendix A of [59] after setting α = 1 in that paper. For 4d N = 2 SCFTs with a Lagrangian description it is easy to check that for the α-twist c_r = c_l is an integer multiple of 3 and for the β-twist c_r is an integer multiple of 6, as expected on general grounds from the (small) N = 4 superconformal algebra, see for example [60]. It is natural to conjecture that the IR 2d N = (2, 2) SCFT for the α-twist can be described in terms of a nonlinear σ-model on a Calabi-Yau target space. It is certainly desirable to find explicitly such a description.
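Two quick checks of these statements (a sketch assuming, as the Shapere-Tachikawa discussion above suggests, that (3.31) takes the form c_r^{(α)} = c_l^{(α)} = 12(g − 1)(2a_4d − c_4d)):
\[
\text{Lagrangian theories:}\quad 2a_{4d}-c_{4d} = \tfrac{1}{4}\,d_G \;\Longrightarrow\; c^{(\alpha)} = 3(g-1)\,d_G \in 3\,\mathbb{Z} ,
\]
\[
g=0 \text{ at large } N\ (a_{4d}\simeq c_{4d}):\quad c^{(\alpha)} = -12\,(2a_{4d}-c_{4d}) \simeq -12\,c_{4d} .
\]
The first line uses the fact that only N = 2 vector multiplets contribute to 2a − c (each contributing 1/4, while hypermultiplets contribute zero); the second reproduces the central charge of the chiral algebra of [58], in line with the comment above.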
As an illustration of our formulas we now consider the well-known "rank 1" 4d N = 2 SCFTs discussed, for example, around Table 1 of [61]. These SCFTs are distinguished by having a one-dimensional Coulomb branch of vacua and we summarize the results for the two-dimensional central charges for the three universal twists we have studied so far, namely the α- and β-twists discussed above as well as the universal (0, 2) twist discussed in Section 3.2.1, in Table 1. Some comments are in order. First, note that for all theories in Table 1, except the H_0 and H_1 theories, c_r^{(α)} is an integer multiple of 3 and c_r^{(β)} is an integer multiple of 6, as should be the case. Theories H_0 and H_1 should be treated more carefully in view of the quantization conditions on R-charges discussed around (2.11). The operator with lowest R-charge in the H_0 theory has the value r_{H_0} = 2/5 and the one in the H_1 theory has r_{H_1} = 2/3. It follows from the quantization condition (2.11) that for the H_0 theory the α- and β-twists are well-defined only when (g − 1) is an integer multiple of 5. For the H_1 theory, by the same analysis, (g − 1) should be an integer multiple of 3. These constraints then ensure that also for these two SCFTs c_r^{(α)} is an integer multiple of 3 and c_r^{(β)} is an integer multiple of 6. It is also curious to note that there are simple relations between the 2d conformal anomalies and the dual Coxeter number of the flavor symmetry group of the four-dimensional theory. 22
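To make the genus constraints and the resulting values explicit, here is a short worked check, using r_{H_0} = 2/5 and r_{H_1} = 2/3 quoted above, the rank-one anomaly values (a, c)_{H_0} = (43/120, 11/30) and (a, c)_{H_1} = (11/24, 1/2) quoted from the literature, and the form of (3.31) assumed in the sketch above:
\[
\mathrm{H}_0:\quad \frac{2(g-1)}{5}\in\mathbb{Z} \;\Leftrightarrow\; 5\,|\,(g-1) , \qquad
c^{(\alpha)} = 12(g-1)\left(2a-c\right) = \frac{21\,(g-1)}{5} ,
\]
\[
\mathrm{H}_1:\quad \frac{2(g-1)}{3}\in\mathbb{Z} \;\Leftrightarrow\; 3\,|\,(g-1) , \qquad
c^{(\alpha)} = 5\,(g-1) .
\]
Here we used that the universal twist flux through Σ_g equals g − 1 (up to sign), so that (2.11) reduces to r_O (g − 1) ∈ Z for every operator; c^{(α)} is then an integer multiple of 3 precisely when the quantization condition is satisfied, as stated above.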
We note, as a consistency check, that the relation (3.20) for compactifications from 6d to 2d can be obtained by first compactifying the 6d theory to 4d by the MN N = 2 twist (3.15) on Σ 1 , and subsequently compactifying to 2d by the α-twist (3.31) on Σ 2 . Similarly, the 6d to 2d flow relation in (3.19) for the case when the Kähler manifold is a product of Riemann surfaces can be obtained by composing the MN N = 2 twist relation (3.15) with the β-twist result in (3.35).
Compactification on M_3. One may consider the twisted compactification of a 4d N = 2 SCFT on a three-manifold with holonomy SO(3) ≅ SU(2) by turning on a background gauge field for the SU(2)_R symmetry. This twist preserves two real supercharges and thus leads to a 1d supersymmetric quantum mechanics. Since we do not have anomalies at our disposal in 1d, we cannot apply the procedure above and we are not able to say much about the properties of the resulting 1d theory. However, for M_3 = H_3 the RG flow across dimensions can be constructed holographically (see Section 4.3.1), and corresponds to a magnetically charged BPS black hole with an AdS_2 near-horizon geometry in locally AdS_5. This suggests that the quantum mechanical theory in the IR has an emergent conformal symmetry, at least at large N. We comment further on this model in Section 4.3.1.
Comments on N = 3 and N = 4 SCFTs
It has been recently observed [62-64] that four-dimensional theories with N = 3 supersymmetry, and no N = 4 supersymmetry, can exist at strong coupling. These have a U(3) R-symmetry, no flavor symmetries, and it can be shown that a_4d = c_4d [62]. The twisted compactification of these theories on a Riemann surface was considered in [65]. Since the R-symmetry group is contained in that of N = 4 SYM, and the ranks are the same, the possible twists of N = 3 theories are contained in those possible for N = 4 SYM, studied in [7]. Viewing an N = 3 theory as an N = 1 or N = 2 theory with additional flavor symmetry, one can apply the twists discussed in Sections 3.2.1 and 3.2.2. In particular, the α- and β-twists of the N = 2 theory applied to an N = 3 theory lead to a 2d theory with N = (2, 2) and N = (2, 4) supersymmetry, respectively. Since only the second twist preserves half of the supercharges, it is a genuine universal twist of the N = 3 theory, in the sense used here. The resulting central charges are obtained by simply setting a_4d = c_4d in (3.35). 23 Another twist preserving six supercharges arises from taking U(3) = SU(3) × U(1) and twisting along the U(1). This leads to a 2d theory with N = (0, 6) supersymmetry, U(3) R-symmetry, and central charges again determined by a_4d = c_4d. Note that one always obtains a 2d theory with no gravitational anomaly, c_r − c_l = 0, as a consequence of the absence of gravitational anomalies in 4d N = 3 theories. For a discussion of other twists of N = 3 theories see [65]. We refer the reader to [7] for general twists of N = 4 SYM.
SCFTs in odd dimensions
Based on the results for flows between even-dimensional SCFTs described above, it is natural to wonder if analogous universal relations exist when at least one of the fixed points in the RG flow is an odd-dimensional SCFT. Since there are no 't Hooft and conformal anomalies in odd dimensions, and in view of the F -theorem [18], one may look for universal relations involving an appropriately defined partition function, or free energy, of the odd-dimensional SCFT. The simplest case for which this can be explored by pure field-theoretic methods is the case of three-dimensional N = 2 theories placed on a Riemann surface Σ g . The two quantities we wish to compare in this case are the S 3 partition function of the theory before compactification and the partition function on S 1 × Σ g with a partial topological twist on Σ g . The former, which we denote by Z S 3 (∆ I ) was computed by supersymmetric localization in [66,17] and is a function of trial R-charges ∆ I for the theory on S 3 . The latter, which we denote by Z(y I , n I ), was computed by localization in [67][68][69]. It is a function of background magnetic fluxes n I specifying the topological twist and flavor fugacities y I and can be interpreted as a twisted index of the 3d theory or a Witten index of the 1d low-energy theory.
Progress in understanding the relation between these quantities was made recently in [12,68,13,19,42], which we summarize next. Supersymmetric localization shows that both partition functions localize to a matrix model on the Coulomb branch of the theory. Although the resulting matrix models appear to be quite different at finite N, it has been observed that for a large class of Chern-Simons matter theories at large N the partition functions are in fact intimately related: Re log Z(y_I, n_I) is equal to (g − 1) times a function of the ∆_I determined by the S^3 free energy, as in (3.39), 24 where one makes the identification y_I = e^{i∆_I}, subject to the constraints Σ_I n_I = 2(1 − g) and a constraint on Σ_I ∆_I. 25 The observation made in [19] is that many 3d N = 2 theories admit a universal topological twist, 26 which amounts to setting the flux parameters n_I to be proportional to the exact R-charges ∆_I of the theory on S^3. Imposing this in (3.39) and denoting F_{S^1×Σ_g}(∆_I) ≡ −Re log Z(∆_I), the end result is the simple large N universal relation (3.40). This is the first example we encounter of a universal relation of the form (1.3). It would be interesting to determine the exact role of subleading orders in N in this relation. As we discuss in Section 4.4, this twisted compactification is described holographically by a magnetically charged black hole in AdS_4, whose entropy at large N is computed by F_{S^1×Σ_g}. This gives further motivation for studying subleading corrections to this quantity. Let us comment on 3d SCFTs with N > 2. Clearly, all such theories can be viewed as N = 2 theories and one can readily apply the results discussed above. However, as in the case of the α- and β-twists of 4d theories discussed in Section 3.2.2, one may wonder if twisted compactifications with enhanced supersymmetry exhibit any universal properties. For N = 3 SCFTs the R-symmetry is SO(3) and thus the only topological twist available is the universal one. We note that if the N = 3 theory at hand admits a Lagrangian description (and perhaps even more generally), the R-charges are rational and thus the universal twist can always be performed for some appropriate value of the genus, g > 1. For 3d N = 4 SCFTs the situation is more interesting. 27 These theories have SU(2)_C × SU(2)_H R-symmetry and thus admit more general topological twists on Σ_g. The N = 2 universal twist preserves two real supercharges and amounts to turning on a magnetic flux along the Cartan generator of the diagonal SU(2) subgroup of the R-symmetry group. There are, however, two other twists which preserve four real supercharges and are obtained by turning on a magnetic flux along the Cartan generator of either SU(2)_C or SU(2)_H. These twisted compactifications were recently studied in [70]. One 24 This was first shown for Σ_g = S^2 in [13], for a generic genus in the case of ABJM theory in [68], and for a larger class of quiver theories and generic Riemann surface in [19]. 25 The constraint on Σ_I ∆_I is rather subtle and requires a detailed large N analysis of the matrix model. See [19] for a more detailed discussion. 26 This is consistent only if the exact R-charges in the SCFT are rational. Although this is not a problem for many 3d N = 2 theories, there are an infinite number of examples for which this issue arises. See [19,42] for a more detailed discussion. 27 We refrain from discussing SCFTs with N > 4 here.
can then study the topologically twisted index for both of these twists using the results of [67][68][69].
In order to make a connection with the universal relations derived above, and ultimately with holography, we are interested in the large N limit of these indices. It turns out, however, that the topologically twisted index is trivial to leading order in N for both topological twists, i.e. the corresponding free energy vanishes. 28 It would certainly be interesting to explore these two twisted compactifications further and understand whether the topologically twisted indices obey any type of universal relation at finite N . Let us briefly comment on the case d = 5. The only superconformal algebra in five dimensions is F (4), with eight supercharges and an SU (2) R R-symmetry. The twisted compactification of such SCFTs on a Riemann surface Σ g by twisting along the Cartan of SU (2) R leads to a 3d SCFT with N = 2 supersymmetry. The twisted compactification on a three-manifold M 3 by a nonabelian twist along the full SU (2) R leads to a 2d SCFT with N = (1, 1) supersymmetry. Finally, one may also consider the compactification on a Kähler four-manifold, leading to supersymmetric quantum mechanical theories. The holographic description of these RG flows across dimensions was considered in [71,72], which we discuss in Section 4.2. While the partition function of 5d SCFTs on various five-manifolds has been studied extensively in the literature, much less is known for partially twisted compactifications. 29 In view of the simple holographic relations between the UV and IR free energies (or conformal anomalies in the case of M 3 ) that we uncover in Section 4.2, it would be interesting to explore this further in field theory.
Finally, let us point out that for RG flows in which d is even and p is odd (or vice versa) it is tempting to look for universal relations among free energies and conformal anomalies. While we have not studied this in field theory, and are not aware of any discussion in the literature, 30 the holographic analysis in Section 4 provides evidence that such universal relations exist. This certainly deserves further study.
Comments on Hofman-Maldacena bounds
The Hofman-Maldacena (HM) bounds are bounds on the ratio a_4d/c_4d in four-dimensional CFTs derived from energy positivity constraints. These were first proposed in [74] (see also [57,75]), and were recently proven in [76] using conformal bootstrap methods. For 4d N = 1 supersymmetric theories these bounds read 1/2 ≤ a_4d/c_4d ≤ 3/2 (3.41). It is interesting to study how various values of a_4d/c_4d are mapped to six and two dimensions by the RG flows across dimensions discussed above. Consider first 4d N = 1 theories obtained by the twisted compactification of 6d N = (2, 0) theories on a Riemann surface. The four- and six-dimensional central charges are related by (3.13). One can further compactify to 2d by the universal twist preserving 2d N = (0, 2) supersymmetry. The two- and four-dimensional central charges are then related by (3.24). In Figure 1 we show how various values are mapped across dimensions. We observe that 6d ADE theories (shaded region) always lead to 4d N = 1 theories satisfying the HM bounds, with the upper HM bound in 4d saturated when a_6d/c_6d = 1. Since the value a_4d/c_4d = 3/2 is associated to a free 4d N = 1 vector multiplet, it is natural to conjecture that six-dimensional N = (2, 0) SCFTs with a_6d = c_6d flow to a free gauge theory upon this twisted compactification. Upon further compactification to 2d we note that the positivity constraint c_r ≥ 0 imposed by 2d unitarity requires a_6d/c_6d ≥ 3/7. 31 Notice also that the two-dimensional unitarity bound, c_r > 0, translates into the bound a_4d/c_4d > 3/5 for 4d N = 1 SCFTs. A free N = 1 chiral multiplet has the ratio a_4d/c_4d = 1/2 but we are not aware of any other 4d unitary N = 1 SCFTs which have a ratio of conformal anomalies in the range 1/2 ≤ a_4d/c_4d < 3/5. A natural question arising from this observation is whether one can use two-dimensional unitarity in combination with the universal RG flows from two to four dimensions to derive a stronger bound on the ratio a_4d/c_4d than the one in (3.41). 28 We are grateful to Alberto Zaffaroni for informing us of this result. 29 See however [73] for a localization calculation of 5d theories on S^3 × Σ_g with a topological twist on Σ_g. 30 See [41] for a related but distinct discussion on the connection between conformal anomalies and sphere free energies for theories in different dimensions.
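The a_4d/c_4d > 3/5 statement quoted above follows directly from the sketch of the universal (0, 2) twist given in Section 3.2.1: with c_r ∝ (g − 1)(5a_4d − 3c_4d) and g > 1, one has
\[
c_r > 0 \;\Longleftrightarrow\; 5\,a_{4d} - 3\,c_{4d} > 0 \;\Longleftrightarrow\; \frac{a_{4d}}{c_{4d}} > \frac{3}{5} .
\]
This is the origin of the bound discussed in the text.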
Consider now 4d N = 2 theories obtained by twisted compactification of 6d N = (2, 0) theories on a Riemann surface; the central charges are related by (3.15). Once again, 6d ADE theories lead to 4d theories satisfying the HM bounds, with the upper HM bound in 4d saturated when a 6d /c 6d = 1. Again, this naturally suggests that a 6d N = (2, 0) theory with a 6d = c 6d leads to the theory of a free N = 2 vector multiplet upon this twisted compactification. One may further reduce to 2d, either by the α-twist (3.31), or the β-twist (3.35). In both cases the lower HM bound in 4d is mapped to c r = 0 in 2d, as shown in Figure 2. This is very intriguing and suggests a relation between unitarity in two dimensions and the bounds in (3.41). As noted in Section 2.2, the 6d N = (2, 0) ADE theories 32 have conformal anomaly coefficients obeying 4/7 < a 6d /c 6d < 1. Upon the N = 2 twisted compactification on Σ g and using (3.15) this implies that the 4d class S SCFT has central charges obeying a 4d /c 4d > 1. Note also that this class of 4d SCFTs arise from M5-branes wrapping Σ g and have well-known holographic dual descriptions [20,50]. This observation is in conflict with the recent results in [78] where the authors argued for a positivity bound on the Gauss-Bonnet coupling in higher-curvature gravitational theories. In particular, the results in [78] imply that for CFTs with a weakly coupled holographic dual, such as the theories at hand, one should find a 4d /c 4d < 1. It would be interesting to understand the reasons for this inconsistency.
Gauged supergravity
We now turn to the holographic description of the universal RG flows described in Section 3. As we have argued, when the topological twist involves only the stress-energy tensor multiplet the RG flow is universal and is not sensitive to the details of the particular theory being compactified. General results in superconformal representation theory show (see for example [79,80] for a recent account) that the stress-energy multiplet for SCFTs in d = 3, 4, 5, 6 has an operator spectrum exactly dual to the fields in the gravity multiplet of a gauged supergravity in one dimension higher. This suggests that the holographic description of universal RG flows across dimensions should be captured entirely by the dynamics of the gravity multiplet in the appropriate gauged supergravity. We refer to a supergravity theory containing only the gravity multiplet as minimal gauged supergravity. 33 Motivated by the analysis of Section 3, in this section we discuss supersymmetric solutions of minimal gauged supergravities in 4, 5, 6, and 7 dimensions describing universal RG flows.
We impose the following conditions on the supergravity solutions. At asymptotic infinity the metric should approach an asymptotically locally AdS_{d+1} background, with an R^p × M_{d−p} boundary. In the interior of the geometry there should be another asymptotic region, where the metric approaches AdS_{p+1} × M_{d−p}. This is captured by a domain-wall Ansatz in which the metric is a warped product of R^{1,p−1}, the radial direction r, and M_{d−p}: the UV corresponds to r → ∞ and the IR to r → 0, and in the IR the metric functions approach the fixed-point form (4.3), with f_0, g_0 constants determined by the supergravity BPS equations. In addition, there must be a nontrivial graviphoton gauge field A^R_µ, proportional to the spin connection on M_{d−p}. This gauge field flux, which is the holographic manifestation of the topological twist, ensures that the solution preserves a certain amount of supersymmetry. In theories with more than 8 supercharges, there are also scalars in the gravity multiplet which generically also acquire a nontrivial radial profile as a function of r. The entire supergravity solution can be viewed as a (p − 1)-dimensional BPS black brane in AdS_{d+1} carrying a magnetic charge under the graviphoton A^R_µ. We emphasize that the magnetic charge of the black brane is fixed to a unique value by supersymmetry. 34 Since for the purposes of our discussion it suffices to focus on the AdS_{p+1} and AdS_{d+1} asymptotics of the solution, we will not present the full interpolating domain wall explicitly. 35 33 This might not be standard terminology but we employ it here to emphasize that there are no other multiplets apart from the gravity multiplet. 34 Generalizations including charges under additional vector multiplets, preserving the same amount of supersymmetry, are possible. However, these solutions will not be universal, as argued in the Introduction. 35 The full radial dependence of the metric functions and possible scalar fields can be found analytically in some examples but in general one has to resort to numerical integration of the BPS equations. The careful reader may have noticed a sleight of hand in our discussion. When one performs the topologically twisted reduction in the field theory there is no guarantee that the IR dynamics
of the QFT will be governed by an interacting SCFT with a weakly coupled holographic dual. We have allowed for this possibility in our discussion below, by allowing the IR region of the gravitational domain wall to have a metric different from AdS p+1 and studying the BPS equations in this more general setup. All such solutions, however, turn out to be singular, suggesting that the corresponding twisted compactifications have some "pathological" behavior in the IR. For instance, the IR theory could become free, have a non-normalizable vacuum, or accidental global symmetries. We should emphasize that there is a vast literature on constructing domain wall solutions in gauged supergravity. An important vantage point on these solutions was offered by the work of Maldacena and Núñez [20], where twisted compactifications of 6d N = (2, 0) theories and 4d N = 4 SYM on Riemann surfaces were studied holographically, and shown to correspond to supergravity backgrounds of the kind discussed above. Our main goal in this section is not to find new supergravity solutions, but rather to collect and organize various backgrounds scattered throughout the literature, and to interpret them in the context of the universal flows discussed in Section 3. In particular, a crucial point in our story is that these universal black brane solutions can be embedded in ten-or eleven-dimensional supergravity in infinitely many ways (see Section 4.5 for a more detailed discussion). This is precisely the statement of universality of the solutions. Different embeddings of these supergravity backgrounds in string or M-theory describe the twisted compactification of different SCFTs, but the universal relations for field theory observables discussed in Section 3 always hold, regardless of the details of the SCFTs. This point of view not only establishes the existence of RG flows across dimensions for a large class of SCFTs (at least to leading order in N ), but can also be a powerful tool in counting the microstates of infinite families of black branes. This was made explicit for the case of black holes in AdS 4 in [19], following the approach of [12]. It is natural to expect that the observations made here will be useful in generalizing these results to even larger classes of black branes in different dimensions.
Before describing the solutions of interest, let us collect some expressions for the holographic evaluation of central charges and free energies, which will be used repeatedly below. For odd-dimensional AdS solutions the central charges of the dual even-dimensional CFTs are given by (4.4) [81,82], where G_N is the Newton constant in (d + 1) dimensions and L_{AdS_{d+1}} is the length scale associated with the given AdS vacuum. In the case of even-dimensional AdS solutions we are interested in the renormalized value of the on-shell action, which is mapped holographically to the free energy of the dual SCFT placed on a round sphere. In the case of AdS_4 and AdS_6 vacua the corresponding expressions for the sphere free energies F_{S^3} and F_{S^5} are collected in (4.5) (see for example [83]). One should not be bothered by the minus sign in F_{S^5}, since the "proper" monotonically decreasing quantity under RG flow in an odd dimension d is conjectured to be given by (−1)^{(d−1)/2} log Z_{S^d} [18], where Z_{S^d} is the partition function on S^d and we define the free energies F_{S^d} accordingly. In the case of AdS_2 vacua, describing the near-horizon geometry of BPS black holes, the main quantity of interest will be the Bekenstein-Hawking entropy of the black hole. As shown explicitly in [19] for black holes in AdS_4, the black hole entropy is intimately related to the renormalized gravitational free energy of the solution. 36 Finally, all the solutions we discuss are locally asymptotic to the AdS_{d+1} vacuum of the gauged supergravity theory. The length scale of this vacuum is set by the value of the scalar potential of the theory at its AdS_{d+1} critical point, V|_{φ_{crit}}. This value in turn sets the cosmological constant scale, which is related to the value of the gauge coupling constant in the supergravity theory. In our conventions the radius L_{AdS_{d+1}} of AdS_{d+1} is given by (4.6). We choose a normalization in which L_{AdS_{d+1}} = 1, thus fixing the value of the gauged supergravity coupling constant to a particular value, as determined by (4.6).
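For orientation, the most familiar cases of these holographic formulas take the following standard form (the normalization conventions of (4.4)-(4.5) may differ by overall factors):
\[
c_{2d} = \frac{3\, L_{\mathrm{AdS}_3}}{2\, G_N^{(3)}} , \qquad
a_{4d} = c_{4d} = \frac{\pi\, L_{\mathrm{AdS}_5}^3}{8\, G_N^{(5)}} , \qquad
F_{S^3} = \frac{\pi\, L_{\mathrm{AdS}_4}^2}{2\, G_N^{(4)}} .
\]
Here G_N^{(D)} denotes the D-dimensional Newton constant obtained after reducing on the compact part of the geometry.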
7d supergravity
In this section, we provide the holographic description of RG flows from 6d N = (1, 0) and N = (2, 0) SCFTs to lower-dimensional SCFTs by the twisted compactifications described in Section 3.1. The relevant supergravity theories are the 7d N = 2 gauged supergravity of [84] (see also [85] for some details) and the 7d N = 4 gauged supergravity of [86], respectively.
N = 2
The bosonic content of this minimally supersymmetric theory is the graviton, an SU(2) graviphoton, a real scalar λ, and a three-form potential C_3. In the solutions of interest, the gauge field is excited only along the Cartan of SU(2) and C_3 = 0, in which case the bosonic Lagrangian takes a simple form, with m denoting the gauge coupling constant. The AdS_7 vacuum of the theory corresponds to the extremum of the potential at λ = 0, where it takes the value V|_{λ=0} = −15 m^2/2. Comparing to (4.6) we set m = 2 in what follows to normalize L_{AdS_7} = 1. Universal holographic RG flow solutions in this theory were studied in [26,27].
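As a quick arithmetic check of this normalization, assuming (4.6) takes the standard form V|_{crit} = −d(d − 1)/L²_{AdS_{d+1}} (an identification consistent with the critical values quoted in the 6d and 5d theories below):
\[
V\big|_{\lambda=0} = -\frac{15\, m^2}{2}\,\Big|_{m=2} = -30 = -\frac{6\cdot 5}{L^2_{\mathrm{AdS}_7}} \quad\Longrightarrow\quad L_{\mathrm{AdS}_7} = 1 .
\]
The analogous checks reproduce m = √2 in the 6d Romans theory and ḡ = 2√2 in the 5d N = 4 theory discussed later in this section.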
AdS_5 vacua. We are interested in supersymmetric solutions of the form (4.8). We focus on g > 1 since there are no regular solutions for g = 0, 1. To obtain the BPS equations we note that when the gauge field points along the Cartan of SU(2), as in the Ansatz above, the minimal theory can be obtained as a subsector of the U(1)^2 truncation of the 7d SO(5) maximally supersymmetric gauged supergravity. 37 General RG flows in this U(1)^2 truncation were studied in [5]. We may thus borrow the BPS equations for an Ansatz of the form (4.8) from this reference, 38 where a prime denotes differentiation with respect to r. These equations describe the full RG flow from AdS_7 to AdS_5 × Σ_g. At the AdS_5 fixed point the metric functions take the form (4.3), with e^{f_0} = e^{8λ_0}, where λ_0 is the value of the scalar field at the fixed point. The 4d central charges associated to this solution are readily computed from (4.4), and the result is in perfect agreement with the large N limit of the field theory calculation in (3.6). This is a nice consistency check that the supergravity solution at hand is dual to the universal RG flow across dimensions discussed around (3.6). 37 More precisely, the bosonic content of this truncation is two gauge fields A^{1,2}_µ, a three-form potential C_{µνρ} and two scalar fields λ_1, λ_2. To obtain the minimal theory one must set A^1_µ = A^2_µ ≡ A_µ and λ_1 = λ_2 ≡ λ. One can check, for instance, that the Lagrangian and supersymmetry variations obtained in this way are consistent with those of [87]. 38 These can also be obtained from the BPS equations derived in [20].
The entire solution corresponds to a BPS 3-brane asymptotic to AdS 7 , whose entropy density is given by (B.6) and can be expressed in terms of UV field theory data by using (4.11).
AdS_3 vacua. The Ansatz in this case is analogous to (4.8), with Σ_g replaced by a four-manifold M_4: ds^2_{M_4} is a constant-curvature metric on a Kähler-Einstein manifold M_4, with Kähler form ω_{M_4}, and we take M_4 to be negatively curved (only this case leads to regular supergravity solutions). A solution of this form within 7d maximally supersymmetric supergravity was found in [47] (see Equation (4.12) there and recall we set m = 2). As argued above, for this Ansatz the same BPS equations hold in minimal supergravity. 39 At the AdS_3 fixed point the metric functions take the form (4.3), with λ_0 the value of the scalar field at the fixed point. The corresponding 2d central charges can be computed from (4.4); the result is in nice agreement with the large N limit of the field theory calculation in (3.12). The entire solution describes a BPS 1-brane asymptotic to AdS_7, whose entropy density can be expressed in terms of UV SCFT data by combining (B.5) and (4.16).
AdS 4 vacua. The relevant AdS 4 solution in 7d minimal supergravity was found in [88] (see also the later work [53]) and preserves only two Poincaré and two superconformal supercharges, i.e. the dual 3d SCFT has N = 1 supersymmetry. The universal nature of this solution was also emphasized in [27]. We do not discuss this further here.
N = 4

One may embed the 7d N = 2 gauged supergravity into the N = 4 theory, and the solutions discussed in the previous section are also solutions of the maximally supersymmetric theory, with the additional fields set to zero. From the field theory perspective this corresponds to applying the universal twist of general 6d N = (1, 0) theories to the special case of N = (2, 0) theories. In this section, we discuss additional twists possible for N = (2, 0) theories and their gravity duals.
AdS_5 vacua. The solution describing the twisted compactification of the 6d N = (2, 0) theory to a 4d N = 2 theory was first found by Maldacena and Núñez in [20]. For completeness, we reproduce the answer here, following the notation and conventions in [5]. The Ansatz is

ds^2 = e^{2f(r)} (−dt^2 + dr^2 + dz_1^2 + dz_2^2 + dz_3^2) + e^{2g(r)} ds^2_{Σ_{g>1}} , (4.17)

At the AdS_5 fixed point the metric functions take the form (4.3) (see Equation (3.8) in [5]), with λ_0 denoting the value of the scalar field at the fixed point; the central charges computed from (4.4) are given in (4.19) and exactly reproduce the large N field theory result (3.16). The entire solution describes a BPS 3-brane in AdS_7 and its entropy density is readily expressed in terms of UV data by combining (B.6) and (4.19).
AdS_3 vacua. The solution describing the compactification on M_4 = Σ_1 × Σ_2, with both Riemann surfaces of negative curvature and preserving 2d N = (2, 2) supersymmetry, is given in Equation (5.26) and Appendix G of [7]; its central charge exactly reproduces the field theory result (3.21). The entire solution describes a BPS 1-brane asymptotic to AdS_7, whose entropy density is given in terms of UV data by combining (B.5) and (4.22).
AdS_4 vacua. AdS_4 vacua arising from M5-branes wrapping special Lagrangian 3-cycles were found in Section 3.1 of [47] (see also [53]). These describe the twisted compactification of 6d N = (2, 0) theories on three-manifolds, discussed at the end of Section 3.1.2. For M_3 an Einstein space of negative constant curvature, there is an AdS_4 fixed point, for which the metric functions take the form (4.3), with λ_0 the value of the only non-trivial scalar field at the fixed point. The S^3 free energy of the corresponding AdS_4 solution follows from (4.5) and can be written in terms of the 6d conformal anomalies by using (4.4); this is the relation (4.24). The entire solution describes a BPS 2-brane asymptotic to AdS_7, whose entropy density is given in terms of UV data by combining (B.8) and (4.24).
AdS_2 vacua. AdS_2 vacua arising from M5-branes wrapping special Lagrangian 5-cycles M_5 were found in Section 3.3 of [47]. These describe the twisted compactification of 6d N = (2, 0) theories on five-manifolds discussed at the end of Section 3.1.2. Regular solutions to the supergravity BPS equations were found for M_5 being S^5 or H_5 with an Einstein metric of normalized curvature κ = 1 and κ = −1, respectively. 40 The supergravity scalars vanish and the metric functions take the fixed-point form (4.3). With this at hand, the black hole entropy corresponding to this near-horizon AdS_2 solution can be written in terms of the 6d conformal anomalies by making use of (4.4). It would be nice to reproduce this result by a field theory calculation.
6d supergravity
There is a unique six-dimensional gauged supergravity theory with a supersymmetric AdS_6 vacuum that can be constructed out of the gravity multiplet. This was done by Romans in [89]. The theory has 16 supercharges and the AdS_6 vacuum is invariant under the supergroup F(4), which is also the unique superconformal group in 5d. The bosonic field content of the theory is given by the graviton g_µν, an SU(2) gauge potential A^I_µ, an Abelian one-form potential A_µ, a two-index tensor gauge field B_µν, and a scalar φ. The two-form "eats" the one-form A_µ and becomes massive, which can be implemented by choosing A_µ = 0. This bosonic content mimics precisely the bosonic operators in the energy-momentum multiplet of 5d SCFTs. In particular, the SU(2) gauge field A^I_µ is dual to the R-current. The fermionic field content is four gravitinos ψ_µ^i and four gauginos χ^i with a symplectic Majorana condition. To implement the topological twist of interest here we only need to use the metric, the gauge field, and the scalar. Therefore we will set the two-form B_µν = 0 from now on. Using the conventions summarized in [72], the bosonic Lagrangian is written in terms of 41 the SU(2) coupling constant ḡ and a mass parameter m associated to B_µν. Depending on the signs of ḡ, m, or if any of these parameters vanishes, this Lagrangian actually describes five different theories. Here we are interested in the case ḡ > 0, m > 0, which corresponds to the theory labelled N = 4^+ in [89]. Further setting ḡ = 3m the theory admits an AdS_6 vacuum with F(4) symmetry, corresponding to the extremum of the potential at ϕ = 0 where it takes the value V|_{ϕ=0} = −10 m^2. Comparing this to (4.6) we set m = √2 in what follows to normalize L_{AdS_6} = 1.
Our main interest here is in the solutions of this F(4) gauged supergravity describing the twisted compactification of 5d SCFTs on two-, three-, and four-manifolds, as briefly discussed at the end of Section 3.3. The corresponding AdS_4, AdS_3, and AdS_2 vacua, which we review below, were constructed in [71,72]. We mostly follow the conventions in [72], where the relevant BPS equations are written (see Equations (5.12) there). We denote by D = 5 − p the dimension of the compactification manifold. The supergravity equations imply that the metric on this manifold should be Einstein; we denote its normalized curvature by κ. With this notation the BPS equations take the form (4.28). Next, we describe the asymptotic behavior of solutions to these equations for the relevant values of D.
AdS_4 vacua. The twisted compactification of a 5d SCFT on a Riemann surface to a 3d SCFT is described by a solution of the form

ds^2 = e^{2f(r)} (−dt^2 + dz_1^2 + dz_2^2 + dr^2) + e^{2g(r)} ds^2_{Σ_g} ,

with the metric functions approaching the fixed-point form (4.3) in the IR and with ϕ_0 the value of the scalar field at the AdS_4 fixed point.
The free energy on S^3 of the 3d N = 2 SCFT dual to this AdS_4 vacuum is computed from (4.5) by evaluating the renormalized on-shell action, and can be expressed in terms of the S^5 free energy of the UV 5d theory; this is the content of (4.31). This universal relation is a nontrivial prediction of supergravity which would be interesting to derive directly in field theory. The entire solution describes a 2-brane asymptotic to AdS_6, whose entropy density is given in terms of UV data by combining (B.8) and (4.31).
AdS_3 vacua. The solution describing the twisted compactification on a hyperbolic three-manifold M_3 = H_3/Γ was constructed in Section 3.1 of [71] and is of the form

ds^2 = e^{2f(r)} (−dt^2 + dz^2 + dr^2) + e^{2g(r)} ds^2_{H_3} ,

where ds^2_{H_3} is the metric on hyperbolic space, which we quotient by an appropriate discrete group. Setting D = 3 and κ = −1 in (4.28) one finds an AdS_3 fixed point, where the metric functions take the form (4.3), with ϕ_0 the value of the scalar at the horizon. This twisted compactification preserves 2d N = (1, 1) supersymmetry and thus we have fewer technical tools to study the IR SCFT. However, we can compute holographically the central charge of the 2d theory using (4.4) and express it in terms of the S^5 free energy of the UV theory by means of (4.5); this is the relation (4.34). The entire solution describes a 1-brane asymptotic to AdS_6, whose entropy density is given in terms of UV data by using the formulas (B.5) and (4.34).
AdS_2 vacua. AdS_2 vacua were found in [72], corresponding to twisted compactifications of 5d SCFTs on four-manifolds M_4. Since the gauge field in the supergravity theory is only SU(2), this restricts the possible four-manifolds one may consider. One option is for M_4 to be Kähler. 42 Then, setting D = 4 in (4.28), the BPS equations dictate that κ = −1 and one finds an AdS_2 fixed point, where the metric functions take the form (4.3), with ϕ_0 the value of the scalar at the horizon. The entire spacetime is a BPS black hole asymptotic to AdS_6, whose entropy can be expressed in terms of the S^5 free energy of the UV theory by using (4.5). This is another universal prediction from holography which would be interesting to test with field theory methods, by comparing the partition function on S^1 × M_4 (with a universal topological twist on M_4) and the partition function on S^5 at large N.
5d supergravity
We now proceed to study AdS_3 and AdS_2 vacua of 5d N = 2 and N = 4 minimal gauged supergravity. These describe universal twisted compactifications of 4d N = 1 and N = 2 SCFTs on Riemann surfaces and three-manifolds.
N = 2
This theory has 8 supercharges and its bosonic field content consists of the metric g_µν and a U(1) gauge field A_µ, dual to the stress-energy tensor and the R-current of the dual 4d N = 1 SCFT, respectively. 43 Here we follow the conventions of [20,7], in which the Lagrangian is given in Equation (46) of [20], with the cosmological constant normalized such that L_{AdS_5} = 1.
AdS_3 vacua. We are interested in BPS solutions of the supergravity theory with AdS_5 asymptotics and near-horizon geometry AdS_3 × Σ_g. These describe the universal flow from 4d N = 1 SCFTs to 2d N = (0, 2) SCFTs discussed in [9,8] and reviewed in Section 3.2.1. The Ansatz for the metric and gauge field takes the by now familiar form and leads to the BPS equations (4.39). The solution to these equations with the required asymptotics, which exists only for κ = −1, describes a magnetically charged BPS black string in AdS_5 and was discussed in [90]. The entire domain wall solution preserves 2 real supercharges, which is enhanced as usual to 4 supercharges at the horizon. For the purpose of computing the central charge of the IR 2d SCFT, we focus on the AdS_3 fixed point of (4.39), where the metric functions take the form (4.3). The 2d central charges can then easily be found from (4.4); the result, (4.41), reproduces the universal field theory result in (3.25). The entropy density of the black string can be written in terms of the data of the 4d UV SCFT by combining (B.5) with (4.41).
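Schematically, the way this match works (a sketch; the precise numerical factors are fixed by the fixed-point values of f_0 and g_0) is via the Brown-Henneaux formula and the reduction of Newton's constant on the horizon Riemann surface:
\[
c_{2d} = \frac{3\, L_{\mathrm{AdS}_3}}{2\, G_N^{(3)}} , \qquad
\frac{1}{G_N^{(3)}} = \frac{e^{2g_0}\,\mathrm{vol}(\Sigma_g)}{G_N^{(5)}} , \qquad
\mathrm{vol}(\Sigma_g) = 4\pi (g-1) ,
\]
where the last equality holds for the unit-radius hyperbolic metric. Combining this with a_4d = c_4d ∝ L³_{AdS_5}/G_N^{(5)} expresses the 2d central charge as (g − 1) times the UV central charge, as in (3.25).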
AdS 2 vacua. Since the minimal gauged supergravity in 5d contains only a U (1) gauge field and the holonomy group of a generic Riemannian three-manifold is SO(3), we do not expect to find supersymmetric AdS 2 vacua. However, it is possible to construct such vacua in the 5d N = 4 theory, which we review next.
N = 4

The relevant 5d N = 4 gauged supergravity theories were found in [91]. Here we are interested in the unique theory, denoted by N = 4^+ in [91], for which there is a supersymmetric AdS_5 critical point. To obtain it one has to set ḡ ≡ g_2 = √2 g_1 in the notation of [91]. Here we are interested in supersymmetric solutions of this theory which describe the topologically twisted compactifications discussed in Section 3.2.2, and thus we set the two-form fields B^{1,2}_µν to zero, which leaves the bosonic Lagrangian (4.42), 45 involving the metric, the SU(2) × U(1) gauge fields, and a real scalar. Normalizing the scale of AdS_5 as L_{AdS_5} = 1 requires setting the value of the potential at the critical point φ = 0 to be V|_{φ=0} = −3ḡ²/2 = −12 and hence ḡ = 2√2. We use this normalization from now on and for convenience define φ ≡ √6 ϕ.
AdS_3 vacua. We are after a solution that describes the universal flow from a 4d N = 2 theory to a 2d N = (2, 2) theory. As discussed in Section 3.2.2 this is obtained by a topological twist along the Cartan of SU(2). Thus, we are after AdS_3 solutions where only the gauge field along the Cartan of SU(2) is turned on and the U(1) gauge field vanishes. The Ansatz for the metric and gauge fields is thus of the same form as above, and the corresponding BPS equations are given in (4.44). The solution to these equations with the desired asymptotics, which exists only for κ = −1, was constructed in [91]. 46 At the AdS_3 fixed point, the metric functions take the form (4.3), with

e^{2f_0} = e^{2g_0} = \frac{1}{2^{4/3}} , \qquad e^{ϕ} = 2^{1/3} . (4.45)

45 We obtain this by setting all fermionic fields and antisymmetric tensor fields to zero in the Lagrangian (2.14) in [91] and sending g_µν → −g_µν to change to a "mostly plus" signature. We have also rescaled the scalar field φ_there = (1/2) φ_here and the SU(2) gauge field F^I_here = √2 F^I_there. 46 Indeed, setting x = 1/2 in [91] corresponds to turning off the U(1) gauge field and leaving only a non-zero gauge field for the Cartan of SU(2)_R, see Equation (4.5) in [91].
The entire domain wall solution preserves 4 real supercharges, which are enhanced to 8 supercharges at the horizon. The corresponding 2d central charge is given in (4.46) and nicely matches the large N field theory result (3.33). The entropy density of this supersymmetric black string solution in terms of UV SCFT data is obtained by combining (4.46) and (B.5). We note that the BPS equations (4.44) coincide with the BPS equations describing the twisted compactification of N = 4 SYM on a Riemann surface preserving 2d N = (2, 2) supersymmetry discussed in [20] (see Equations (14)-(16) there). 47 In that reference, the flow was described within the U(1)^3 truncation of N = 8 supergravity, which consists of the metric g_{μν}, three Abelian gauge fields A^{1,2,3}_μ in the Cartan of SO(6), and two neutral scalars φ_1 and φ_2. The solution described above corresponds to a solution with A^1_μ = A^2_μ, A^3_μ = φ_2 = 0, together with the identification of the remaining scalar φ_1 = √6 ϕ, in which case one can see that the Lagrangians and supersymmetry transformations coincide. 48 These results are in harmony with the fact that the N = 4^+ Romans supergravity theory can also be obtained as a truncation of the 5d SO(6) maximally supersymmetric gauged supergravity of [92].
The twisted compactification corresponding to the β-twist discussed in Section 3.2.2 is realized in the N = 4^+ supergravity by turning on magnetic flux for the U(1) gauge field A_μ and switching off the SU(2) gauge field flux A^I_μ. However, one can show that an Ansatz with this field configuration leads to a singular supergravity flow solution which does not flow to an AdS_3 vacuum in the IR. This suggests that the dual 2d (0, 4) theory in the IR has some pathology, for example an accidental symmetry or a non-normalizable vacuum state.
AdS_2 vacua. AdS_2 vacua in 5d N = 4 minimal gauged supergravity were found in [93]. These describe the twisted compactification of a 4d N = 2 theory on M_3 = H_3. Since the structure group of the three-manifold is SO(3), to implement the topological twist one has to switch off the Abelian gauge field in the supergravity theory. The Ansatz for the metric and SU(2) gauge field is

\[
ds^2 = e^{2f(r)}\left(-dt^2 + dr^2\right) + e^{2g(r)}\, ds^2_{H_3}\,,
\]

47 These can also be obtained by setting a_1 = a_2 = −κ/2 and a_3 = 0 in Equation (3.20) of [7].
48 One can check that setting A^1_μ = A^2_μ, A^3_μ = φ_2 = 0, and φ_1 = √6 ϕ in the Lagrangian given in Equation (46) in [20], it matches (4.42), assuming only the Cartan of SU(2) is excited and setting ḡ = 2√2.
where ds²_{H_3} = dφ² + sinh²φ (dθ² + sin²θ dν²). The corresponding BPS equations were derived in [93] (see Equation (56) there) and are reproduced in (4.48). As discussed in [93] (see Equation (42) there), at the AdS_2 fixed point the metric functions take the form (4.3), with ϕ_0 the value of the scalar at the horizon. The domain wall solution which interpolates between AdS_5 and this AdS_2 vacuum preserves two real supercharges (enhanced to 4 supercharges in the near-horizon limit) and can be thought of as a BPS black hole with a hyperbolic horizon. The entropy of the black hole is given in (4.50). It would be interesting to establish this universal relation using field theory methods.
4d supergravity
Here we discuss black hole solutions in four-dimensional gauged supergravity describing the universal twisted compactification of 3d N = 2 and N = 4 SCFTs on a Riemann surface. The field theory setting was briefly discussed in Section 3.3 and we refer to [19] for more details. The study of asymptotically AdS_4 black holes has received renewed attention following the discovery of the 3d superconformal theories describing the worldvolume of M2-branes and their AdS_4 duals [94]; see [95][96][97] and references therein. The interpretation of some of these solutions as twisted compactifications of the ABJM theory was provided in [12,37] (see also [98] for earlier work), where the microscopic entropy of these black holes was reproduced using the supersymmetric index defined in [67]. In this section, we focus on universal solutions describing the twisted compactification of a large class of 3d N = 2 theories. This was recently used in [19] as a tool to count the black hole microstates for a large class of theories with M-theory as well as massive IIA duals (see also [99,100] for non-universal examples in massive IIA).
N = 2
We are interested in asymptotically AdS_4 black hole solutions in minimal N = 2 gauged supergravity which preserve two supercharges. The near-horizon geometry is AdS_2 × Σ_g and the entire solution describes the holographic RG flow from a 3d N = 2 SCFT on Σ_g to a 1d superconformal quantum mechanics. Solutions in non-minimal gauged supergravity were summarized in [12], where references to the extensive earlier literature on the subject can also be found. The minimal theory is obtained by setting n_a = κ/2, φ = 0, L_a = 1 in [12]. The resulting Lagrangian has scalar potential V = −12ḡ². Comparing to (4.6), we set ḡ = 1/√2 to normalize L_{AdS_4} = 1. The Ansatz of interest takes the form (4.52) and the BPS equations read (4.53). For κ = −1 these equations admit a full analytic solution corresponding to the magnetically charged black hole of [101,102], preserving two supercharges. 49 The uplift of this solution to eleven dimensions, and its interpretation as wrapped M2-branes on a Calabi-Yau five-fold, was given in [98], where other interesting wrapped membrane solutions were also studied. The uplift to massive IIA is provided in [19]. For κ = 1 one finds an IR singularity (see Section 3.4 of [98]). Setting κ = −1, one finds a regular horizon asymptotic to AdS_2 × Σ_g, with the metric functions taking the form (4.3). The black hole entropy is given in (4.55), where in the last equality we used (4.5). This exactly reproduces the large N field theory result (3.40), provided the identification S_BH = Re log Z(Δ_I), as shown in [19] for this class of solutions. 50 It is worth highlighting the power behind the rather simple-looking universal relation (4.55). As we have argued on general grounds, this four-dimensional black hole can be uplifted to ten- or eleven-dimensional supergravity in infinitely many ways depending on the choice of six- or seven-manifold; each uplift describes the twisted compactification of a different 3d N = 2 SCFT. This fact, combined with the universal relation (4.5), was recently used in [19] to arrive at a microscopic derivation of the entropy of this infinite family of black holes. Similarly, we expect that the various universal relations derived in this paper will be useful in generalizing these results to even larger classes of black branes in different dimensions.
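For orientation, the entropy side of this relation is just the Bekenstein-Hawking area law; the schematic expression below is ours, not the paper's displayed equation:

\[
S_{\mathrm{BH}} = \frac{\mathrm{Area}(\Sigma_g)}{4 G_N^{(4)}} = \frac{e^{2 g_0}\, \mathrm{vol}(\Sigma_g)}{4 G_N^{(4)}}\,,
\]

with e^{2g_0} the horizon value of the metric function in (4.3); the holographic dictionary (4.5) then converts this into the field theory quantity entering (3.40).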
Non-universal compactifications, i.e., those corresponding to turning on background flavor fluxes in the SCFT, are described by black holes charged under additional vector multiplets. Such solutions were first considered in [95] and studied further in a number of papers, notably [96,97] (see [12] for a more complete list of references). The field theory interpretation of these black holes and a microscopic derivation of their entropy was carried out in [12,37] using the twisted index of [67]. The solution of the minimal theory can be seen as a special case of these, obtained by setting the vector multiplets to zero.
N = 4
In this section, we investigate which twisted compactifications of 3d N = 4 SCFTs admit a holographic description. These theories have an SO(4) ≃ SU(2)_C × SU(2)_H R-symmetry, and their compactification on Riemann surfaces was recently considered in [70]. The appropriate supergravity is minimal N = 4 gauged supergravity [103]. This supergravity has sixteen supercharges, and its bosonic content consists of the graviton, an SO(4) gauge field, a dilaton, and an axion. It can be obtained as an S^7 reduction of eleven-dimensional supergravity [104].
Following the notation in [104], we denote the two SU(2) gauge fields by A and Ã, the dilaton by φ, and the axion by χ. Setting χ = 0, the bosonic Lagrangian and its scalar potential take the form given in [104], 51 where I = 1, 2, 3 are SU(2) indices. We set ḡ = 1/√2 in what follows to normalize the scale of the AdS_4 vacuum to L_{AdS_4} = 1. We are interested in solutions with gauge fields excited only along the Cartan of the two SU(2)'s, which leads to the following Ansatz:

\[
ds^2 = e^{2f(r)}\left(-dt^2 + dr^2\right) + e^{2g(r)}\, ds^2_{\Sigma_g}\,, \qquad (4.58)
\]

51 We should note that the axion field χ is sourced by F ∧ F and thus it is consistent to set it to zero only if F ∧ F = 0. Since we are interested here in purely magnetic solutions, it is consistent to do so.
The BPS equations are given in (4.59). These BPS equations can also be obtained as a truncation of the well-studied U(1)^4 STU model of four-dimensional gauged supergravity, by setting pairs of the four U(1) gauge fields equal and setting two dilatons and all three axions to zero. This was analyzed in some detail in [12]. To determine the parameter space for which regular black hole solutions exist, it suffices to look at possible AdS_2 vacua, where the metric functions take the form (4.3). Then, the first two equations in (4.59) imply that φ = φ_0 is a constant, and we obtain a set of algebraic equations.
Uplifts to 10d and 11d
As emphasized throughout the paper, a crucial aspect of our story is the fact that the gauged supergravity solutions presented above are universal. This manifests itself in two distinct ways.
On the one hand, they are solutions of the minimal gauged supergravity in a given spacetime dimension with a given amount of supersymmetry, i.e., we have switched off any possible matter multiplets. This is the simple holographic dual of the fact that in the SCFT the universal partial topological twists discussed in Section 3 involve only the stress-energy tensor multiplet. On the other hand, these minimal supergravity solutions can be embedded into string and M-theory in infinitely many distinct ways by using various consistent truncation results in the literature. This is ultimately the core statement of universality of these constructions; a particular embedding in string or M-theory describes the twisted compactification of a particular dual SCFT, and the universal relations between QFT quantities derived above hold independently of the details of the SCFTs at hand. Let us briefly describe how these embeddings into ten- and eleven-dimensional supergravity are realized. It was shown in [105] and [106] that any supersymmetric AdS_5 vacuum of 11d or type IIB supergravity admits a consistent truncation to minimal 5d N = 2 gauged supergravity. The same also holds for any supersymmetric AdS_4 solution of 11d supergravity, which can be consistently truncated to 4d minimal N = 2 gauged supergravity, as shown in [106]. 53 Since there are infinitely many such AdS_4 and AdS_5 solutions, we arrive at the conclusion that the universal black string and black hole solutions presented in Section 4.3.1 and Section 4.4.1 can be embedded in 10d and 11d supergravity, respectively, in infinitely many ways. We note that this does not apply only to the usual Freund-Rubin type solutions, and one can also find realizations of the universal flows for warped AdS_5 and AdS_4 vacua of IIB and 11d supergravity, as shown explicitly in [8] and [42], respectively. In [109] and [110] similar consistent truncation results were also derived for the minimal 5d N = 4 gauged supergravity discussed in Section 4.3.2. In [109] it was shown that every supersymmetric AdS_5 vacuum of 11d supergravity which preserves 16 supercharges admits a consistent truncation to the minimal 5d N = 4 gauged supergravity. The same holds true for the AdS_5 × S^5 solution (and its orbifolds preserving 16 supercharges) in type IIB supergravity [110]. 54 The fact that supersymmetric AdS_7 vacua with 16 supercharges of type IIA and 11d supergravity lead to a consistent truncation to the minimal 7d N = 2 gauged supergravity of Section 4.1.1 was shown in [26]. Finally, the solutions of Romans' minimal six-dimensional supergravity theory discussed in Section 4.2 can be embedded into massive IIA or type IIB supergravity using the results of [111] and [112], respectively. All of these consistent truncation results for AdS vacua with 16 supercharges are nicely captured by the recent analysis in [113]. It was shown in [113] that any AdS_D solution of 10d or 11d supergravity admits a truncation to the respective minimal (i.e., containing only the gravity multiplet) gauged supergravity in D dimensions. 55
53 There is also an embedding of the minimal 4d N = 2 gauged supergravity in massive IIA supergravity, as discussed in [107,108].
54 We are not aware of any other AdS_5 vacua with 16 supercharges in type IIB supergravity. If such solutions exist, it is reasonable to conjecture that they will admit a consistent truncation to the 5d N = 4 minimal supergravity theory.
The upshot of this collection of supergravity results is that every d-dimensional SCFT (for d ≥ 3) with a weakly coupled string/M-theory dual captured by a supersymmetric AdS_{d+1} vacuum of 10d or 11d supergravity admits a consistent description in terms of minimal gauged supergravity in d + 1 dimensions. This in turn implies that every such SCFT also enjoys the universal RG flows across dimensions obtained by a twisted compactification on M_{d−p}, and the holographic description of this RG flow is in terms of the supergravity domain wall solutions discussed in this section.
This observation is particularly useful in the case of flows between odd-dimensional SCFTs. A simple example of this is the uplift of the AdS_4 black hole solution of Section 4.4 to eleven-dimensional supergravity and to massive IIA supergravity, recently discussed in [19]. Using the universal relation (4.55) then leads to the microscopic counting of the entropy of a large class of AdS_4 black holes [19], both in M-theory and in massive IIA string theory. It would be interesting to apply this approach to the microscopic counting of the entropy of the various black brane solutions described in this work. This requires a detailed understanding of the field theory quantity computing the corresponding entropy, which is currently lacking. One example that might be interesting to study is the AdS_5 black hole, with near-horizon geometry (4.47) and entropy (4.50). In analogy to the AdS_4 case, a natural guess for the field theory quantity that should capture its entropy is the partition function of a 4d N = 2 theory on S^1 × H_3, with a partial topological twist on H_3. We leave the exploration of this interesting question for future work.
Discussion
We have established universal relations between physical observables in SCFTs with a continuous R-symmetry, connected by RG flows across dimensions. The precise flows are triggered by a partial topological twist on a compact manifold along the exact UV superconformal R-symmetry. The underlying reason for this universality is the fact that the deformation in the UV amounts to coupling the omnipresent stress-energy tensor multiplet of the SCFT to background fields, namely a background metric and R-symmetry gauge field, and switching off any possible couplings to flavor symmetry currents.
If the SCFT_d in the UV admits a weakly coupled AdS_{d+1} gravity dual in string or M-theory, and if the compactification manifold admits a constant negative-curvature metric, we have provided ample evidence that the p-dimensional theory in the IR is also conformal and admits a weakly coupled AdS_{p+1} dual. The gravitational description provides an explicit realization of the RG flow across dimensions via a simple domain wall solution of gauged supergravity, interpolating between the AdS_{d+1} vacuum in the UV and an AdS_{p+1} vacuum in the IR. The universality of such flows is understood holographically by the fact that these domain walls can be uplifted to string or M-theory in infinitely many ways.
55 We note that the results in [113] are somewhat implicit and do not immediately lead to convenient explicit uplift formulas from D to 10 or 11 dimensions.
Our results suggest various interesting questions and directions for future work. Clearly, the most pressing and general question is how to find an independent description of the low-energy p-dimensional SCFTs. We have defined these theories via a twisted compactification of higher-dimensional theories. It would be valuable, however, to have a UV definition in terms of a theory living in the same number of spacetime dimensions. This can be achieved either through a direct definition of the CFT, e.g., by a nonlinear σ-model on a Ricci-flat target manifold in the case p = 2, or via some p-dimensional asymptotically free UV description that flows to the interacting CFT in the IR. The gold standard for this is set by the N = 1 and N = 2 theories of class S and the 3d theories of class R arising from M5-branes wrapping Riemann surfaces and three-manifolds, respectively; see [114,54] and references therein. Perhaps the most accessible setup in which to generalize this success to other dimensions is d = 4, p = 2, in particular the α- and β-twists of four-dimensional N = 2 theories discussed in Section 3.2.2 and holographically in Section 4.3.2. Some progress in this direction was made recently in [56,115], but there is certainly more to be understood, especially for theories in the large N limit.
When the RG flow is between even-dimensional SCFTs we relied purely on field theory methods, in particular the power of 't Hooft anomaly matching and superconformal symmetry, to derive exact, finite N, relations between quantities in the UV and IR theories. As shown in Section 4, these relations are reproduced holographically to leading order in N, thus providing strong evidence for the existence of such flows and IR fixed points. The supergravity analysis, however, is not limited to flows between even-dimensional SCFTs, and we have used properties of various supergravity solutions to predict similar universal relations when one (or both) of the SCFTs is odd-dimensional, in which case the appropriate physical quantity is the round-sphere free energy. These relations, as currently stated in Section 4, are established only in the large N limit. It would be most interesting to study whether this picture extends beyond the planar limit. Ideally, this could be approached by an exact field theory calculation using supersymmetric localization on the appropriate curved manifold, as is the case, e.g., for three-dimensional theories on Riemann surfaces [12,13,19]. It is likely that this can be generalized further by studying, for instance, suitable supersymmetric partition functions of five-dimensional SCFTs with a partial topological twist on Σ_g to reproduce the holographic prediction in (4.31) by pure field theory methods. Similarly, it should be possible to study four-dimensional N = 2 SCFTs with a topological twist on M_3, with M_3 an appropriate hyperbolic manifold, to reproduce the black hole entropy in (4.50). An alternative approach to incorporating subleading corrections in N would be to analyze the universal RG flows holographically, including higher-curvature corrections in gauged supergravity. Although a technically challenging problem in general, this was addressed successfully in [116] for various domain walls interpolating between AdS vacua corresponding to RG flows between even-dimensional SCFTs. It would be very interesting to extend this approach to the various domain wall solutions described here. It would also be interesting to study whether there are universal relations among other physical observables in the p-dimensional and d-dimensional SCFTs, e.g., Wilson loop expectation values or partition functions with other insertions.
A series of interesting questions relates to the choice of M_{d−p}. As discussed, most universal RG flows we have studied, both in field theory and holographically, require M_{d−p} to be hyperbolic or negatively curved in order for the IR p-dimensional theory to be unitary. 56 We do not have an explanation for why this must be the case in general, and it would be interesting to have a better understanding of this. In addition, the supergravity solutions we constructed require the metric on the compact manifold M_{d−p} in the IR to be Einstein. From field theory considerations, however, it is clear that in the UV one should be able to use any metric on M_{d−p}, since we are performing a topological twist. Thus, holography suggests that the RG flow across dimensions uniformizes the metric on M_{d−p}. This has been understood in some detail for Riemann surfaces in [117], and it would certainly be very interesting to explore the interplay between holographic RG flows and uniformization for higher-dimensional manifolds. The Einstein metric at the IR end of the RG flow may still admit a moduli space of deformations compatible with the Einstein condition. These moduli should correspond to exactly marginal couplings in the p-dimensional SCFT. This is also clear from the gravitational construction of the holographic dual, where the moduli of the Einstein metric on M_{d−p} lead to massless scalar excitations on the AdS_{p+1} space dual to the SCFT. This picture is well-established for the case d = 6, p = 4 [114,39,4,5] and partially explored for the case d = 6, p = 2 in [118]. Additional exactly marginal deformations of the p-dimensional SCFT may be present if the d-dimensional parent theory admits global, non-R, symmetries for which one can turn on flat connections on M_{d−p}. Finally, let us note that we have assumed throughout the paper that M_{d−p} is smooth and compact. It is natural to consider generalizations of this setup that allow for boundaries, punctures, or other defects. It would be interesting to study whether there is a generalization of the universal relations uncovered in this work in these more general situations.
As discussed at length above, the main reason behind the existence of the universal RG flows across dimensions is that the partial topological twist triggering the RG flow is performed using only background fields that couple to the universal stress-energy tensor multiplet, which exists for all SCFTs with a continuous R-symmetry. The holographic manifestation of this universality is realized by the fact that gauged supergravities always admit a truncation to a universal sector including only the gravity multiplet. In addition, these (d + 1)-dimensional "minimal" supergravities arise as universal consistent truncations from string and M-theory in infinitely many ways, distinguished by the choice of internal manifold (and the fluxes through it) used for the reduction from 10 (or 11) dimensions. This holographic perspective suggests that SCFTs with a holographic dual enjoy a truncation of the OPE, at least at large N, for operators belonging to the stress-energy multiplet. It would be very interesting to understand the mechanism behind such an OPE truncation, as this could offer an explanation and organizational principle for the plethora of consistent truncations in the supergravity literature.
56 An exception to this general rule is the case of 6d SCFTs on a 5-manifold at large N, as noted in Section 4.1.2.
We have assumed throughout the paper that the UV d-dimensional theory is unitary.
A few other observations made throughout this work deserve further analysis. What is the relation between unitarity of the IR p-dimensional theory at the end of the RG flow across dimensions and the Hofman-Maldacena-type bounds for the parent d-dimensional UV theory? Is there a Hofman-Maldacena-type bound on the four anomaly coefficients in six-dimensional SCFTs? 57 From all the examples of RG flows across dimensions studied here, it seems that to obtain an SCFT with an AdS dual in the IR, the UV theory must also be conformal and strongly interacting. We are not aware of any a priori reason for this to be the case, and it would be interesting to find more general examples of RG flows across dimensions, where the UV theory is not strongly interacting. In Appendix B we observe intriguing relations between field theory observables such as conformal anomalies and free energies and the entropies of various supersymmetric black branes. It is desirable to put these on a firmer footing and calculate the black brane entropies from a more rigorous field theory setting, as done recently for supersymmetric black holes in AdS 4 [12,19]. Finally, it is natural to wonder whether there is some notion of a "monotonicity theorem," similar to the c-, a-, or F -theorems for RG flows across dimensions. Based on the universal relations between conformal anomalies and free energies studied in this work it is clear that simply comparing the natural monotonic quantity in the IR p-dimensional SCFT with the one in the UV d-dimensional SCFT is too naive. Perhaps one should search for a more refined definition of a monotonic function along the RG flow which removes the explicit factor of the volume of the compactification manifold M d−p .
It is clear that the universal RG flows described here provide a fertile area for exploring the physics of supersymmetric QFTs and holography. We expect many further exciting developments ahead of us.
A Conventions and normalizations
In this appendix we review our conventions and normalizations and collect useful formulae used throughout the paper.
A.1 Characteristic classes
The total Chern class of a vector bundle and the total Pontryagin class of the tangent bundle are given by (here we are following the conventions in [28]; see also [120])

\[
C(\mathcal F) = \det\left(1 + \frac{i\mathcal F}{2\pi}\right) = 1 + c_1(\mathcal F) + c_2(\mathcal F) + \cdots\,, \qquad
P(T) = \det\left(1 - \frac{R}{2\pi}\right) = 1 + p_1(T) + p_2(T) + \cdots\,,
\]

with F the field strength two-form, which we take to be antihermitian, and R the Riemann curvature two-form. From this we find the following expressions for the first two Chern classes and the first Pontryagin class:

\[
c_1(\mathcal F) = \frac{i}{2\pi}\,\mathrm{Tr}\,\mathcal F\,, \qquad
c_2(\mathcal F) = \frac{1}{8\pi^2}\left(\mathrm{Tr}\,\mathcal F\wedge\mathcal F - \mathrm{Tr}\,\mathcal F\wedge\mathrm{Tr}\,\mathcal F\right)\,,
\]
\[
p_1(T) = -\frac{1}{8\pi^2}\,\mathrm{Tr}\,R\wedge R\,,
\]

where in the first line the Tr is over "gauge" indices in the fundamental representation and in the second line the Tr is over tangent frame indices. Note that p_1(T) differs by a sign from that used in [33], which explains the minus sign difference in β in (2.3) in comparison to the expression in [33]. Finally, we note that for a four-manifold which is a product of two Riemann surfaces Σ_1 × Σ_2, the integrated first Pontryagin class, P_1, and the Euler characteristic, χ, are P_1 = 0 and χ = 4(g_1 − 1)(g_2 − 1).
A.2 Metric on Riemann surfaces
Throughout this paper we often consider smooth Riemann surfaces Σ_g of genus g. We always put a constant curvature metric on these manifolds, which in local coordinates takes the conformally flat form ds²_{Σ_g} = e^{2h}(dx_1² + dx_2²). The volume form dvol_{Σ_g} ≡ e^{2h} dx_1 ∧ dx_2 integrates to

\[
\int_{\Sigma_g} \mathrm{dvol}_{\Sigma_g} = 2\pi\,\eta_\Sigma\,, \qquad \eta_\Sigma = 2|g-1| \ \ \text{for } g \neq 1\,, \qquad \eta_\Sigma = 1 \ \ \text{for } g = 1\,. \qquad (A.7)
\]

The normalized curvature of Σ_g is denoted by κ = {1, 0, −1} for g = 0, g = 1, and g > 1, respectively. We note that with these definitions and using (A.7) one has the relation κ η_Σ = −2(g − 1) for all g. Finally, t_g denotes the first Chern class of the tangent bundle of Σ_g, which in our normalizations integrates to ∫_{Σ_g} t_g = η_Σ.
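For concreteness, a standard explicit choice of the conformal factor realizing κ = {1, 0, −1} is the following (this particular presentation, especially the g = 1 normalization, is our own convention and not necessarily the one used in (A.7)):

\[
e^{2h} = \frac{4}{(1 + x_1^2 + x_2^2)^2} \ \ (g = 0)\,, \qquad
e^{2h} = \mathrm{const} \ \ (g = 1)\,, \qquad
e^{2h} = \frac{1}{x_2^2} \ \ (g > 1)\,,
\]

where for g = 0 the coordinates cover the sphere via stereographic projection, for g = 1 the surface is a flat torus, and for g > 1 one takes a quotient of the upper half-plane (x_2 > 0) by a Fuchsian group.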
B Entropy of black branes
All regular supergravity solutions discussed in this paper can be viewed as extremal (p − 1)-brane solutions in (d + 1) spacetime dimensions. The near-horizon geometry is of the form

\[
ds^2 = e^{2f_0}\, ds^2_{\mathrm{AdS}_{p+1}} + e^{2g_0}\, ds^2_{M_{d-p}}\,, \qquad (B.1)
\]

where

\[
ds^2_{\mathrm{AdS}_{p+1}} = \frac{1}{r^2}\left(-dt^2 + dr^2 + dz_1^2 + \cdots + dz_{p-1}^2\right)\,, \qquad (B.2)
\]

and ds²_{M_{d−p}} is the metric on the compact horizon of the (p − 1)-brane. The field theory interpretation of these (p − 1)-brane solutions is given by the universal RG flows across dimensions discussed extensively in the main text. In particular, there is a p-dimensional SCFT captured holographically by the AdS_{p+1} factor in the near-horizon geometry. This is the IR SCFT which arises from a d-dimensional UV SCFT via the RG flow across dimensions. For p even, the conformal anomaly coefficients of the SCFT can be computed holographically using (4.4). Another physically interesting quantity is the entropy density of the black brane. To compute this we take the spatial coordinates on the boundary of AdS_{p+1}, z_i, i = 1, …, p − 1, to have a finite range z_i ∈ [0, l_i]. We can then easily compute the Bekenstein-Hawking entropy per unit of spatial volume, V ≡ l_1 × ⋯ × l_{p−1}, as in (B.7). 58
58 We note that this is the spatial volume of the boundary of AdS_{p+1} and not that of the horizon M_{d−p}.
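Schematically, the entropy density follows from the Bekenstein-Hawking area law; the expressions below are our sketch of the logic, not the paper's explicit relations:

\[
S_{\mathrm{BH}} = \frac{A_{\mathrm{hor}}}{4 G_N^{(d+1)}}\,, \qquad s \equiv \frac{S_{\mathrm{BH}}}{V}\,, \qquad V = l_1 \cdots l_{p-1}\,,
\]

where the horizon area A_hor contains the volume of M_{d−p} together with the appropriate powers of the warp factors e^{f_0} and e^{g_0}.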
In the first of these relations, (B.5), we defined c_{2d} ≡ c_r = c_l, which is valid in the large N limit. Similarly, for p = 3, 5 we can write the entropy density in terms of the corresponding round-sphere free energy in (4.5) for the SCFT_p. We emphasize that the anomaly coefficients and free energies appearing in (B.5)-(B.9) are those corresponding to the SCFT living on the worldvolume of the (p − 1)-brane, i.e., the IR theory. The universal relations between conformal anomalies and free energies in the large N limit that we discussed in this work allow us to relate these IR observables to UV quantities corresponding to the SCFT_d dual to the asymptotically locally AdS_{d+1} boundary of the black brane solution.
Finally, we point out that the case p = 1, i.e., an AdS 2 near horizon region, is somewhat special since we do not have a microscopically well-established AdS 2 /CFT 1 duality. The best understood setup is that of extremal black holes in AdS 4 discussed in Section 4.4.
C Universal RG flows in the same dimension
While the main focus of this work has been to uncover universal relations between SCFTs connected by an RG flow across dimensions, it is important to note that there are also similar relations for SCFTs living in the same number of spacetime dimensions. A particularly simple relation between the conformal anomalies of 4d N = 2 and N = 1 SCFTs connected by a specific universal RG flow was derived by Tachikawa and Wecht in [25]. Inspired by this, here we seek a similar result for 2d SCFTs.
First, let us revisit the main result in [25] from the perspective of our discussion. A particular class of examples of 4d N = 2 and N = 1 SCFTs which are connected by the RG flow of [25] are the MN N = 2 and MN N = 1 SCFTs discussed in Section 3.1.2 above. The relation between the anomalies of these two classes of 4d SCFTs can be obtained by using the two universal relations, (3.13) and (3.15), of these anomalies to the ones of the class of N = (2, 0) SCFTs in 6d. Using these, one finds precisely the relation derived in [25].
We can now apply the same idea to the α- and β-twists discussed in Section 3.2.2 and the universal (0, 2) twist of Section 3.2.1. For the α-twist, the matrix relating the 2d and 4d anomaly coefficients is not invertible, and thus we cannot derive the analog of a Tachikawa-Wecht result. For the β-twist, however, the matrix in (3.35) is invertible, and one can combine it with the matrix in (3.24) to find a 2d analog of the Tachikawa-Wecht relation.
|
2017-08-23T21:34:17.000Z
|
2017-08-16T00:00:00.000
|
{
"year": 2017,
"sha1": "3cd28bca1936d5f45c1ad82d69a50e28492053d4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP12(2017)065.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "3cd28bca1936d5f45c1ad82d69a50e28492053d4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
225909960
|
pes2o/s2orc
|
v3-fos-license
|
A novel CNN model for fine-grained classification with large spatial variants
Convolutional Neural Networks (CNN) have achieved great performance in many visual tasks. However, CNN models are sensitive to samples with large spatial variants, which is especially severe in fine-grained classification tasks. In this paper, we propose a novel CNN model called ST-BCNN to solve these problems. ST-BCNN contains two functional CNN modules: a Spatial Transformer Network (STN) and a Bilinear CNN (BCNN). Firstly, the STN module is used to select the key region in input samples and spatially modify it. Since the adoption of STN can cause an information-loss phenomenon called boundary loss, we design a new IOU loss method to solve it, and we give a theoretical analysis of this IOU loss. Secondly, to discover discriminative features for the fine-grained classification task, the BCNN module is applied. BCNN combines CNN features from different channels to produce bilinear features that are more discriminative than the fully connected features of a CNN. ST-BCNN works by reducing irrelevant spatial states and producing fine-grained features. We evaluate our model on three fine-grained classification datasets with large spatial variants: CUB200-2011, Fish100, and UAV43. Experiments show that the IOU loss method can reduce boundary loss and make the STN module output spatially transformed images appropriately. Our proposed ST-BCNN model outperforms other advanced CNN models on all three datasets.
Introduction
Over recent years, the study of computer vision tasks has been pushed forward by the development of deep learning algorithms. The Convolutional Neural Network (CNN) is a primary deep learning algorithm [1]. By connecting multiple learnable convolutional and fully connected layers, it can extract more representative features than traditional hand-designed features. Despite these advances, there are still some drawbacks in CNNs: they are sensitive to samples with large spatial variants, which is especially severe in fine-grained classification tasks.
The Spatial Transformer Network (STN) [2] can be inserted as a module in any large network. It allows end-to-end training. However, due to the end-to-end training strategy, it sometimes fails to converge and suffers from boundary loss.
A modified architecture is the Inverse Compositional Spatial Transformer Network (IC-STN) [3]. Different from the traditional STN, it uses cascaded STNs to predict the transformation: each STN takes the output of the previous STN as input, and the final transformation is the composition of all the individual STN transformations. This network improves classification performance but still suffers from boundary loss.
Fine-grained classification task
For classification tasks, much CNN research focuses on coarse-grained classification, but fine-grained classification is more challenging. Fine-grained classification means classifying samples from the same class into different subclasses. The samples may be very similar and may differ only in subtle parts, while samples in the same subclass may look very different due to spatial variants. In short, fine-grained samples have small inter-class variance and large intra-class variance.
There have been several studies of fine-grained classification, which can be divided into two types of methods: strongly-supervised and weakly-supervised. Strongly-supervised methods need manual object-part annotations. Since samples differ only in subtle parts, comparing parts between samples rather than whole objects is sensible [4]. Major methods include part-based R-CNN [5], Mask-CNN [6], and Pose Normalized CNN [7]. Although their performance is better than that of weakly-supervised methods, annotating object parts is time-consuming.
Without extra labels, weakly-supervised methods can be trained on class labels alone. There are two main types: attention-based methods and bilinear-pooling methods. With the mechanism of attention, a weakly-supervised method can automatically detect important object parts. Recurrent Attention CNN [8] uses an Attention Proposal Network (APN) module to zoom in on key parts recurrently. Multi-Attention CNN (MA-CNN) [9] divides the last convolutional layers into groups, where each group corresponds to a part attention. Other important works include MAMC [10], RAM [11], and RAN [12]. Bilinear pooling [13][14] is a method to combine convolutional features from different channels. It digs into the relationships between different convolutional features, since features from different convolutional channels are very rich [15][16]. It also has some variants: compact bilinear pooling, hierarchical bilinear pooling, etc. However, these weakly-supervised methods do not address the problem of spatial variants; when the spatial variants increase, their performance degrades.
In this paper, we aim to solve the above problems. By introducing a novel IOU loss, we solve the problem of boundary loss in STN, making the performance of STN more stable. By combining STN and BCNN, we design a novel network, ST-BCNN, for fine-grained classification with large spatial variants. It works by comparing samples under similar spatial states and digging into fine-grained features. It outperforms other advanced CNN models on three datasets.
Spatial transformer network (STN)
STN contains a localization network. This localization network takes the input feature map and outputs a 6-dimensional vector, which is reshaped into the 2×3 affine transformation matrix A_θ. The localization network can have any structure as long as the output is a 6-dimensional vector; a common solution is a CNN with several convolutional layers and fully connected layers.
Suppose the input feature map I ∈ R^{H×W×C} has height H, width W, and C channels. The output feature map O ∈ R^{H'×W'×C} has the same number of channels C but different height H' and width W'. A sampling kernel is used to obtain the value at a particular pixel in the output O [2]; the form of the sampling kernel is given in (2). Since the coordinates of the output feature pixels must be integers while the corresponding transformed source coordinates generally are not, each output value is approximated from its nearby transformed points.
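To make the mechanics concrete, here is a minimal sketch of such a module in PyTorch; the layer sizes are illustrative (loosely mirroring the 2-convolution, 3-pooling, 2-fully-connected localization network described in the experiments section) and not the exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Spatial transformer: regress A_theta, then resample the input."""
    def __init__(self, in_channels=3):
        super().__init__()
        # Localization network: 2 conv layers, 3 pooling layers, 2 FC layers.
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveMaxPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 6),
        )
        # Start from the identity transform so early training is stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                 # A_theta per image
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        out = F.grid_sample(x, grid, align_corners=False)  # bilinear sampling kernel
        return out, theta
```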
Boundary loss.
Since STN adopts an end-to-end framework, the training process is usually unstable. The outputs of STN suffer from a problem called boundary loss (see Figure 2): the output image samples only from the cropped image, and pixel information outside the crop region is discarded, causing the boundary of the output image to be empty. A CNN cannot deal with such an empty boundary very well, and boundary loss sometimes makes the performance of a CNN with an STN module even worse than that of the original CNN.
Bilinear CNN (BCNN). In a bilinear CNN, two feature functions f_A(l, I) and f_B(l, I) extract features from an image I at each location l of a set of locations L; L is defined generally and can include position and scale [13]. The bilinear feature combination at each position l is the matrix outer product in (3):
bilinear(l, I, f_A, f_B) = f_A(l, I)^T f_B(l, I).    (3)
The bilinear image descriptor Φ(I) is obtained by pooling the bilinear features at different locations; a common pooling method is sum pooling over all locations.
The descriptor is followed by a signed square-root step, y = sign(Φ(I))·√|Φ(I)|, and an l2-norm normalization. The bilinear feature combination and the sum pooling method are both differentiable, so the bilinear CNN model can be optimized with end-to-end training.
The structures of BCNN models can be divided into fully shared, partially shared, and no sharing [14]. In the fully shared model (Figure 4(a)), only one CNN is used, and f_A(l, I) and f_B(l, I) are the same feature mapping. In the no-sharing model (Figure 4(b)), two different CNNs are used to extract features. In the partially shared model (Figure 4(c)), a part of the CNN model is shared.
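A minimal PyTorch sketch of bilinear pooling in the fully shared setting, operating on the last convolutional feature map of a single CNN (an illustration under these assumptions, not the exact implementation):

```python
import torch
import torch.nn.functional as F

def bilinear_pool(feat: torch.Tensor) -> torch.Tensor:
    """feat: (batch, c, h, w) last conv feature map; returns (batch, c*c)."""
    b, c, h, w = feat.shape
    x = feat.view(b, c, h * w)                            # one column per location l
    phi = torch.bmm(x, x.transpose(1, 2))                 # sum over l of outer products
    phi = phi.view(b, c * c)                              # c^2-dimensional bilinear feature
    y = torch.sign(phi) * torch.sqrt(phi.abs() + 1e-12)   # signed square root
    return F.normalize(y, dim=1)                          # l2 normalization
```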
IOU loss
Intersection over Union (IOU) measures the intersecting area formed by the original (Figure 5(a)) and transformed (Figure 5(b)) images. More information remains after the transformation when the IOU value is higher. The green part in Figure 5(c) is the IOU area; the larger it is, the less information is lost. The maximum IOU value is 1.0 (Figure 5(d)), in which case there is no boundary loss.
Figure 5. IOU area
However, the shape of the transformed image is usually an irregular quadrilateral, making the IOU value hard to calculate, so we introduce an approximate method to estimate it. If more points on the boundary of the transformed image lie outside the area of the original image, the IOU value will usually be higher. Therefore, we inspect the positions of these boundary points.
Since there are infinitely many points on the boundary, we select eight key points based on the principle illustrated in Figure 6. If a transformed key point lies outside the area of the original image, at least one of its coordinates will have absolute value greater than 1. However, if this value becomes very large, the transformed image will be amplified too much. Considering these factors, an IOU loss is established as in (5).
Figure 6. Eight key points
Inspired by the ReLU loss [21], we establish the IOU loss so that if the infinity norm of a transformed key point is less than 1, the loss is greater than 0. However, the infinity norm is non-differentiable and cannot be optimized by back-propagation, so we modify the loss into (6).
The single-point form of the IOU loss can be written in terms of the original point coordinates, as in (7). For a key point with source coordinates (x_i^s, y_i^s), whose transformed coordinates are x_i^t = a·x_i^s + b·y_i^s + t_1 and y_i^t = c·x_i^s + d·y_i^s + t_2, it reads
ℓ_i = ReLU(1 − |a·x_i^s + b·y_i^s + t_1|) + ReLU(1 − |c·x_i^s + d·y_i^s + t_2|).    (7)
The derivatives of (7) are given in (8) and (9); the derivatives with respect to the linear parameters a, b, c, d all take the same form, while that with respect to t_2 takes the same form as that with respect to t_1. Since the IOU loss is differentiable with respect to all six parameters, it can be adopted to train the CNN with back-propagation.
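A minimal PyTorch sketch of this loss, summed over the eight key points; the specific layout (four corners and four edge midpoints of the normalized [-1, 1] x [-1, 1] square) is our assumption, since the exact choice is only shown in Figure 6:

```python
import torch
import torch.nn.functional as F

# Eight source key points (x_i^s, y_i^s): assumed to be the four corners and
# four edge midpoints of the normalized input square (hypothetical layout).
KEY_PTS = torch.tensor([[-1., -1.], [0., -1.], [1., -1.], [1., 0.],
                        [1., 1.], [0., 1.], [-1., 1.], [-1., 0.]])
KEY_PTS_H = torch.cat([KEY_PTS, torch.ones(8, 1)], dim=1)  # homogeneous, (8, 3)

def iou_loss(theta: torch.Tensor) -> torch.Tensor:
    """theta: (batch, 2, 3) affine matrices A_theta predicted by the STN."""
    pts = torch.matmul(KEY_PTS_H.to(theta.device), theta.transpose(1, 2))  # (batch, 8, 2)
    # Eq. (7) per point: ReLU(1 - |x_t|) + ReLU(1 - |y_t|); the loss vanishes
    # once every transformed key point reaches or leaves the image boundary.
    return F.relu(1.0 - pts.abs()).sum(dim=(1, 2)).mean()
```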
Figure 7. Structure of ST-BCNN
Spatial variants make a CNN produce many redundant features, which expands the feature space; if we can reduce the spatial variants, the irrelevant subspace is reduced. Therefore, we want to transform the sample before it is fed into the CNN. Furthermore, features from different channels of a CNN are correlated in some dimensions, and bilinear pooling can help to find the relationships between them. In fact, bilinear pooling can be viewed as a coupled feature transformation method that produces more discriminative features for fine-grained tasks.
Details of ST-BCNN
Inspired by the ideas above, we combine two functional CNN modules, the spatial transformer network and the bilinear CNN, into a novel network called ST-BCNN. First, the input image is transformed by an STN module, so that it is spatially modified and the key part is brought into focus. Then, the transformed image is processed by a CNN module. Finally, the output convolutional features of the CNN module are processed by bilinear pooling, and a class is predicted after a softmax layer (see Figure 7).
It is worth mentioning that in ST-BCNN we adopt the fully shared BCNN: only one CNN model is used to extract features. Compared with the no-sharing BCNN, it costs about half the operation time. In the bilinear pooling operation, only the last convolutional layer of the CNN is used; features from different channels are combined by an outer-product operation, so if there are c channels, the bilinear feature has c² dimensions.
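Putting the pieces together, a minimal sketch of the forward pass, building on the STN and bilinear_pool sketches above (`backbone` stands in for the Inception ResNet V2 trunk truncated after its last convolutional layer; all names are our own):

```python
import torch.nn as nn

class STBCNN(nn.Module):
    def __init__(self, backbone: nn.Module, feat_channels: int, num_classes: int):
        super().__init__()
        self.stn = STN()                    # spatial normalization of the input
        self.backbone = backbone            # CNN trunk, outputs (b, c, h, w)
        self.fc = nn.Linear(feat_channels ** 2, num_classes)

    def forward(self, x):
        warped, theta = self.stn(x)         # key region selected and rectified
        feat = self.backbone(warped)        # last conv feature map
        phi = bilinear_pool(feat)           # c^2-dim bilinear descriptor
        logits = self.fc(phi)               # softmax is applied inside the loss
        return logits, theta
```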
In the training process, the total loss is composed of two parts, as in (10). The first part is the cross-entropy loss, which measures classification accuracy; the second part is the IOU loss, which reduces information loss in the spatial transformation process. A parameter α is used to balance the two losses:
min(Loss) = min(L_cross-entropy + α·L_IOU).    (10)
To train the model, we use a two-stage training method. First, the STN module is combined with the CNN module, and only these two modules are trained. Second, the fully connected layers of the CNN are replaced with bilinear pooling layers, and we fine-tune the parameters of the whole model.
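In code, the objective (10) is a single line (again a sketch building on the functions above; α = 5.0 is the value found to work best in the experiments below):

```python
import torch.nn.functional as F

def total_loss(logits, labels, theta, alpha=5.0):
    # Eq. (10): classification term plus IOU regularizer on the STN parameters.
    return F.cross_entropy(logits, labels) + alpha * iou_loss(theta)
```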
Baseline: Inception Resnet V2
We select Inception ResNet V2 [17] as our baseline CNN module. It combines residual connections with inception modules: on the one hand, residual connections allow a deeper CNN; on the other hand, inception modules use multi-scale receptive fields to produce multi-scale features. It performs better on the ImageNet dataset than other well-known CNN architectures such as ResNet [19] and Inception-v4 [18].
Experiments
Our STN architecture is relatively simple. It takes the original image as input, consists of 2 convolutional layers, 3 max-pooling layers, and 2 fully connected layers, and outputs a 6-dimensional vector whose elements are the parameters of the transformation matrix. We conduct experiments on three datasets: CUB200-2011 [20], Fish100 [23], and an Unmanned Aerial Vehicle (UAV) dataset (UAV43) established by ourselves. They are all fine-grained classification datasets, and their samples all have large spatial variants. We split 80% of the images of each dataset into the training set, while the rest are used for testing.
Parameter selection
The parameter α in the loss affects STN accuracy. We conduct an experiment on the CUB200-2011 dataset to find its optimal value.
The experimental results are in Table 2. The performance of our baseline Inception ResNet V2 is 78.17%. If the IOU loss is not applied, adding an STN module even worsens the accuracy, to 66.17%. By applying the IOU loss, the performance becomes better than the baseline, and when the value of α is 5.0, the best performance of 83.42% is achieved. Figure 8 shows the value of the IOU loss with respect to training steps (the log value of the IOU loss is used in Figure 8 for plotting purposes). If α = 0.1, the loss can hardly converge to zero; conversely, if α is no less than 1.0, the loss converges to zero, and the convergence is faster for greater values of α.
Classification Experiment
To evaluate our ST-BCNN model, we compare it with other state-of-the-art models. These include some classical CNN models (VGG-16 [22], Inception V3 [18], and Inception ResNet V2) and some models designed specifically for fine-grained classification problems, such as BCNN and RAN (Residual Attention Network). We set α = 5.0 and conduct experiments on the three datasets; the results are shown in Table 3. We find that Inception ResNet V2 outperforms the other classical CNN models on all three datasets, which confirms that it is a suitable baseline model. All the specific fine-grained models perform better than the classical CNN models. Among them, the baseline + STN using the IOU loss ranks second, and the ST-BCNN model is the best on all three datasets, achieving around 1% higher accuracy than the baseline + STN model. Figure 9 shows some examples of the transformations made by the STN module. By using the IOU loss, the results avoid boundary loss: there is no empty area in the transformed images. The transformed images focus on the key area of the input image, like an attention mechanism; in Figure 9, objects are zoomed in by the STN module. However, different from common attention methods, the STN module also makes a translation modification (Figure 9(b)), which makes the classification performance better than that of common attention methods.
Figure 9. Some examples of STN transformations: (a) bird, (b) UAV, (c) fish. (On the left of each example is the original image; on the right are the transformed images.)
Conclusion
In this paper, we propose a novel CNN model called ST-BCNN to solve fine-grained classification with large spatial variants. ST-BCNN contains two functional CNN modules: a Spatial Transformer Network (STN) and a Bilinear CNN (BCNN). Considering the boundary loss in the STN model, we design an IOU loss method and give a detailed analysis showing that it is reasonable and differentiable. We use a parameter α to balance it with the cross-entropy loss; by comparing classification accuracy at different values, we find that the model performs best when α = 5.0. With the IOU loss, the STN module avoids boundary loss and makes appropriate spatial transformations. ST-BCNN outperforms some state-of-the-art methods on different datasets: the accuracies on the CUB200-2011, Fish100, and UAV43 datasets are 84.21%, 94.23%, and 86.08%, respectively. ST-BCNN combines the advantages of STN and BCNN, making it better than STN alone. We conclude that our model can solve fine-grained classification with large spatial variants very well.
|
2020-06-04T09:11:16.912Z
|
2020-05-01T00:00:00.000
|
{
"year": 2020,
"sha1": "b4ffb5c19566231000fce04c0e107c691a3efe83",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1742-6596/1544/1/012138",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fff195c9a2224276e4c770701b783d5ca4caf2ca",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
119219407
|
pes2o/s2orc
|
v3-fos-license
|
Testing conformal mapping with kitchen aluminum foil
We report an experimental verification of conformal mapping with kitchen aluminum foil. This experiment can be reproduced in any laboratory by undergraduate students and it is therefore an ideal experiment to introduce the concept of conformal mapping. The original problem was the distribution of the electric potential in a very long plate. The correct theoretical prediction was recently derived by A. Czarnecki (Can. J. Phys. 92, 1297 (2014)).
The question is: when one measures the voltage difference U_d between C and D, how does the voltage difference behave further down the ruler? Assuming that the thickness of the ruler can be neglected, this can be reduced to a two-dimensional problem. A. Czarnecki derived the correct solution to this problem using conformal mapping [1]. In this tutorial, we present the experimental verification of these new calculations.
II. CONFORMAL MAPPING
Conformal mapping is a mathematical technique which is widely used not only in physics but also in engineering. The main idea behind this technique is to map a given problem to a better-suited geometry in order to simplify its solution.
A. Mathematical definition
A complex function f : U → C is called holomorphic if it is complex differentiable at every point of its domain, or, in other words, if the following limit exists:

\[
f'(z_0) = \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0}\,.
\]

A holomorphic function g : U → C is said to be conformal if g'(z) ≠ 0 ∀z ∈ U. Conformal functions have the property of locally preserving angles and the shapes of infinitesimally small figures.
B. Example: the Mercator projection
The Mercator projection is a cylindrical map projection. It is probably the most common way to map the spherical surface of the Earth onto two dimensions. The corresponding map is

\[
x = R\,(\theta - \theta_0)\,, \qquad y = R \ln\tan\!\left(\frac{\pi}{4} + \frac{\varphi}{2}\right)\,,
\]

where R is the Earth radius, φ is the latitude, θ the longitude, and θ_0 an arbitrary central meridian (commonly chosen to be the one of Greenwich). This mapping satisfies the above condition of a conformal map and visualizes its properties well. The circles of longitude and latitude are perpendicular on the map, and on small scales the shapes of objects are preserved, whereas large objects can change their shape and size depending on where they are located on the globe. For example, according to Fig. 2, the size of Greenland and Africa would appear to be of the same order.
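One can verify the conformality directly; the following short check is ours, not from the source. Along a meridian,

\[
\frac{dy}{d\varphi} = \frac{R}{\cos\varphi}\,,
\]

so a north-south arc R dφ on the globe is stretched by a factor 1/cos φ on the map, while an east-west arc R cos φ dθ is mapped to dx = R dθ and is therefore stretched by the same factor 1/cos φ. Equal local scale factors in all directions at every point is exactly the conformality (angle-preserving) condition, and it also explains the growth of apparent size at high latitudes.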
III. SOLUTION
The solution of the problem presented in Sec. I, as suggested in [2], starts with comparing the potential differences U_x at two nearby pairs of points,

\[
U_{x+dx} = \left(1 - \alpha(x)\,dx\right) U_x\,.
\]

It is then argued that since the ruler is semi-infinite, the coefficient α(x) = α is constant and does not depend on position. This leads to the differential equation

\[
\frac{dU_x}{dx} = -\alpha\, U_x\,.
\]

If this equation held everywhere, one would get the final solution

\[
U_d = U_s\, e^{-\alpha d}\,, \qquad (6)
\]

where U_s is the voltage applied at the edge and U_d the measured voltage difference at a given point at a distance d from the edge. However, the ruler is only semi-infinite and not infinite, so the assumption that the coefficient α does not depend on the position is not fulfilled. In fact, points very close to the beginning of the ruler do not have the same neighbourhood as points further down the ruler.
Czarnecki [1] derived the correct solution to this problem using conformal mapping. Complex coordinates z = x + iy were introduced such that the corner B corresponds to z_B = 0 and the upper corner A to z_A = i. Looking at the image of the ruler under the mapping z → w(z) = e^{πz}, one sees that the corners A and B are mapped onto the x-axis, w_A = (−1, 0) and w_B = (1, 0), as shown in Fig. 2. If an infinite ruler were mapped with this function, its image would cover the whole upper half-plane. The actual advantage of this mapping is that every function that depends only on the distance to the corners is now symmetric with respect to the real axis, since both corners lie on that axis. This symmetry leads to the solution (7), in which s is the length of the contacts and c is a constant depending on their detailed geometry but not on their size, provided s is small. The expression (c − ln s) describes physical rather than idealized contacts, and we thus refer to it as the reality factor. These contact parameters depend on the width of the ruler. If one assumes that the distance between the left corner and the position where the voltage difference is measured is sufficiently large (more than a third of the width of the stripe), this formula can be simplified into an approximate relation between U_{d1} and U_{d2}, the voltage differences measured at distances d_1 and d_2 from the edge.
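To see explicitly why this mapping works (a standard computation, added here for clarity), write z = x + iy with 0 ≤ y ≤ 1 across the width of the ruler. Then

\[
w = e^{\pi z} = e^{\pi x}\left(\cos \pi y + i \sin \pi y\right)\,,
\]

so the lower edge y = 0 is sent to the positive real axis, the upper edge y = 1 to the negative real axis, and the interior of the strip to the upper half-plane. In particular, the corners z_B = 0 and z_A = i land at w = 1 and w = −1, as stated above, and an infinite strip −∞ < x < ∞ indeed covers the whole upper half-plane.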
A. Experimental setup
To realize this experiment, the following basic laboratory equipment is required:
• Standard power supply (30 V, 3 A)
• Standard voltmeter (±0.01 mV)
• Aluminium foil (10-50 µm, typical thickness of aluminium kitchen foil)
The experiment consists of applying a voltage and measuring the voltage difference at different points. To simulate a semi-infinite metallic ruler, the metal stripe made of aluminium foil was cut much longer than actually needed. The wires used to apply the voltage difference were pulled through holes in the contact stripes, as shown in Fig. 4. The experimental configuration can be described by the following parameters: the length of the ruler L, its width W_R, its thickness T, and the width of the contact stripes W_C (see Fig. 4).
B. Results
The voltage difference was measured in 5 mm steps from 0 to 50 mm. These measurements were performed with different settings in order to investigate the influence of the following factors: L, T, W_R, and W_C. The experimental uncertainties to be taken into account are those of the voltmeter (±0.01 mV) and of the measuring position (±0.3 mm).
The results for L and T are presented in Figs. 5-6. As one can see, the measured points for ruler lengths of 300 and 500 mm are the same within the experimental errors; one can thus conclude that a ruler longer than 300 mm is sufficiently long for our experiment and is a good approximation of a semi-infinite ruler. The results are also unaffected when using two different thicknesses, T = 0.01 and 0.05 mm. Therefore, aluminium kitchen foil is thin enough to approximate the two-dimensional problem as required by this experiment. To study the effect of the contact geometry, and thus the (c − ln s) parameter of Eq. 7, the width W_C was varied. Apart from the first measured point at x = 0 mm, the obtained values are the same for all other distances within the experimental errors (see Fig. 7).
The last parameter to be investigated is the influence of the width W_R of the ruler. According to the original calculation (Eq. 6) one would expect the measured values to be independent of W_R. This is in contradiction with the data, as shown in Fig. 8.
V. CONCLUSIONS
Our measurements are in very good agreement with the predictions of the new conformal-mapping calculations by Czarnecki and show that the originally proposed solution to the problem of the distribution of the electric potential in a very long plate is not adequate. This problem is a very nice example of the application of conformal mapping, and since the experiment is very simple and uses only basic equipment, it can be reproduced in any undergraduate laboratory, thus providing students with a very good introduction to this subject.
|
2016-11-18T12:32:49.000Z
|
2016-11-18T00:00:00.000
|
{
"year": 2016,
"sha1": "c10a0fb0d9888b65e910f48fae83beef0c825f24",
"oa_license": null,
"oa_url": "https://tspace.library.utoronto.ca/bitstream/1807/76718/1/cjp-2016-0843.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c10a0fb0d9888b65e910f48fae83beef0c825f24",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
}
|
246509939
|
pes2o/s2orc
|
v3-fos-license
|
Lipofilling in Post-Treatment Oral Dysfunction in Head and Neck Cancer Patients
Lipofilling is a new treatment option for head and neck cancer patients who suffer from chronic and severe (chemo-)radiation- or surgery-related swallowing problems. Lipofilling is a technique of autologous grafting in which living fat cells are transplanted from one location to another in the same patient. In the case of head and neck cancer patients, volume loss or muscle atrophy of the tongue or pharyngeal musculature caused by the treatment may result in oropharyngeal dysfunction. Firstly, intensive swallowing therapy will be given, but if that offers no further improvement and the functional problems persist, lipofilling can be considered. By transplantation of autologous adipose tissue, the functional outcomes might improve by compensating for the existing tissue defects or tissue loss. Only a few studies have evaluated the effectiveness of this new treatment option. Their results show that the lipofilling technique seems safe and of potential value for improving swallowing function in some of the included patients with chronic and severe dysphagia after surgery and/or (chemo-)radiation therapy for head and neck cancer. The lipofilling procedure is described in detail, as well as its clinical implications.
Introduction
Head and Neck Cancer (HNC) is the seventh most common type of cancer worldwide [1]. The regions of HNC include cancers of the nasal cavity, oral cavity, nasopharynx, oropharynx, hypopharynx, larynx, and paranasal sinuses (see Figure 1). Risk factors are tobacco use, alcohol consumption [2], and viral infections with the Human Papilloma Virus (HPV) (for oropharyngeal cancers) [3] and the Epstein-Barr Virus (EBV) (for nasopharyngeal cancers) [4].
TNM classification
HNC tumors can be classified using the TNM stage classification published by the American Joint Committee on Cancer and the International Union for Cancer Control.
Dysphagia in HNC patients
One of the most critical and potentially life-threatening functional problems in patients who are treated for advanced HNC is acute and chronic dysphagia. Causes of dysphagia after HNC treatment include reduced tongue strength, insufficient contact between the base of tongue and pharyngeal wall, reduced hyolaryngeal elevation, and reduced opening of the upper esophageal sphincter. Due to this altered physiology, the food bolus is swallowed less powerfully, leading to stagnation of food ('residue'), with a high risk of laryngeal penetration or even (silent) laryngeal aspiration of the residue into the trachea. The swallowing problems may worsen when the swallowing musculature is no longer actively used and so-called 'non-use' atrophy occurs, causing further deterioration of the swallowing function [10]. Chronic dysphagia can lead to reduced body weight, long-term and even lifelong feeding tube dependency, depression, reduced quality of life, and aspiration pneumonia, and can even lead to death [6,11,12].
Treatment options of HNC related dysphagia
In the next paragraphs, the treatment options for HNC-related dysphagia will be described: firstly, the importance of interdisciplinary head and neck rehabilitation; secondly, (preventive) swallowing protocols; and finally, surgical options.
Interdisciplinary head and neck rehabilitation
The treatment of dysphagia, and the treatment of HNC patients in general, involves a high level of variety and complexity of problems. Therefore, it is recommended to have a specialized multidisciplinary team of medical specialists and allied health professionals specialized in head and neck oncology [13]. Rehabilitative care aims primarily at reducing and/or preventing the negative effects of head and neck cancer treatment, and thereby improving daily functioning. The effectiveness of head and neck rehabilitation programs has been proven [13,14].
(Preventive) swallowing protocols
Over the last years, the prevention of dysphagia has become a major focus point in HNC research. The assumed disadvantages of (prophylactic) feeding tube placement to prevent weight loss, which effectively immobilizes the swallowing musculature, have led to the so-called 'eat or exercise' principle [10]. This means that oral intake should be maintained as long as possible, and that preventive swallowing rehabilitation programs should keep the swallowing musculature 'active' as much as possible before and during treatment. Studies on preventive rehabilitation in the Netherlands and elsewhere have shown that preventive swallowing protocols (in particular in the short term) are associated with better post-treatment functional outcomes and quality of life, and are cost-effective, compared with standard care [10, 15-22].
There are several (swallowing) exercises that have proven their value in the treatment of dysphagia. Those exercises are used in standard swallowing protocols, but also within preventive rehabilitation protocols. The most frequently used exercises include range-of-motion or resistance exercises (with or without medical devices such as the TheraBite® device), compensatory techniques (postural changes, diet/bolus modifications), behavioral swallow exercises such as the (super-)supraglottic swallow [23,24], the effortful swallow [25], the Mendelsohn maneuver [26], and the Masako (tongue-holding) maneuver [27], and non-swallow exercises such as the Shaker (head-raising) exercise [28]. Also, devices such as the Swallow Exercise Aid (SEA) have been developed to enable multiple exercises to be performed more efficiently. The SEA device allows adaptation to an individual subject's capacity, and thus the application of progressive overload during the training program, and has been shown to activate important swallowing structures [29-31]. Nevertheless, in some cases severe, therapy-refractory dysphagia may still persist.
Surgical procedures
Surgical treatment of functional impairment may be considered when rehabilitative measures, such as those described above, are insufficient to help ensure safe and efficient oral intake. The primary goals of surgery are to reduce the risk of aspiration, improve bolus transfer, and prevent malnutrition and/or dehydration. The best surgical technique depends on the etiology of the dysphagia. Impaired relaxation of the upper esophageal sphincter can result in less efficient movement of the bolus into the esophagus. This impaired relaxation can sometimes be remedied by reducing the tonus of the musculature of the pharynx. Cricopharyngeal myotomy, either endoscopically using a CO2 laser or by an open surgical procedure, can be helpful [32,33]. Myotomy of the cricopharyngeal muscle results in lower resistance of the upper esophageal sphincter. Due to this lower resistance, the bolus can more easily be transported through the upper esophageal sphincter and enter the esophagus.
Other surgical techniques that can widen the cricopharyngeal muscle are dilatation (in case of fibrosis) or botulinum toxin (botox) injection in case of spasm. Several studies have reported promising results in patients with upper esophageal sphincter dysfunction caused by muscle spasm or hypertonicity [34,35].
If dysphagia is caused by a serious limitation in laryngeal elevation, an invasive surgical technique called hyolaryngeal suspension can be performed. In this procedure, the hyoid bone is suspended and the thyroid-cricoid complex is fixated to the anterior mandible. This results in a permanently more cranial position of the larynx [36]. This procedure can be very effective in restoring full oral intake without aspiration. However, it has also been reported that previous treatment with (chemo)radiotherapy negatively influences the outcome [37].
Finally, in some cases, none of the abovementioned treatment options are suitable or effective. If the larynx has severe functional impairments and there is no reasonable likelihood of functional recovery, a functional total laryngectomy can be considered as a 'last refuge'. In the case of a total laryngectomy, the airway is surgically separated from the digestive tract by sacrificing the larynx.
The surgical procedures described above, however, can carry serious complication risks. Myotomy (especially open myotomy) can cause pharyngocutaneous fistulas or (retropharyngeal) infections [34,37]. Besides, studies have shown that the improvement rate is much higher for neurologic dysphagia and idiopathic dysfunction than in patients with swallowing problems due to HNC treatment [32].
New treatment option: lipofilling
Since 2013, the Netherlands Cancer Institute has been using lipofilling as an alternative treatment option. Lipofilling has the advantage of being less radical, less invasive and presenting less of a burden for the patients [38].
Lipofilling is a technique in which autologous fat is transplanted to a site that lacks volume. In 1893, fat was transplanted for the first time with variable success [39]. Since the 1980s with the advent of modern liposuction, the technique of lipofilling has become a standard modality for esthetic as well as reconstructive purposes; however, it is rarely used in HNC patients.
Physiology of fat grafting
Of all tissues in the human body, fat possesses the highest percentage of adipose-derived stem cells, with more than 5000 of these per gram of fat. Adipose-derived stem cells are present in the mesenchyme and are a type of multipotent stem cell. This means that these stem cells can differentiate into multiple cell types, including osteoblasts, endothelial cells, myocytes, neuronal-type cells, adipocytes, and chondrocytes [40,41].
A microscopic view shows that fat consists of a complex matrix of adipocytes mixed with collagen, endothelial cells, adipose-derived stem cells, and fibroblasts. All these cell types play an important role in physiological processes such as angiogenesis, metabolism, lipid storage, and endocrine functions [40]. There is evidence that stem cells may even contribute to the reduction of fibrosis and the restoration of tissue vascularization and organ function [42,43].
Evaluation tools to check patient eligibility
Lipofilling might be a suitable treatment option for specific patients with chronic dysphagia after HNC treatment. Patients might benefit from lipofilling when part of the etiology of the dysphagia consists of lack of volume, for instance, of the tongue or pharyngeal wall. There are different examination tools to analyze the severity and etiology of dysphagia. Before considering if lipofilling is suitable for a patient, it is recommended to perform objective assessments such as Fiberoptic Endoscopic Evaluation of Swallowing (FEES) or a Video Fluoroscopic Swallow Study (VFSS) and a Magnetic Resonance Imaging (MRI) assessment.
FEES, in which a flexible endoscope is inserted via the nose and the patient is asked to swallow different consistencies, directly visualizes the anatomy and function of the pharyngeal swallowing phase. The sensory and motor components of swallowing can also be assessed [44]. On the other hand, VFSS (also known as the Modified Barium Swallow) provides information about the oral and oropharyngeal phases of the swallow, including the dynamics of the swallowing process. With VFSS, it is possible to analyze the contact between the tongue base and posterior pharyngeal wall, and it is more suitable for diagnosing aspiration during swallowing. VFSS is also more informative for detecting problems below the upper esophageal sphincter [45]. Preferably a VFSS is performed to select eligible patients, but the choice of examination also depends upon the clinical presentation, the available instruments, and the clinician's preferences.
The most crucial examination of the pre-lipofilling work-up is Magnetic Resonance Imaging (MRI), which visualizes the potential injection sites in the oral cavity and pharynx [38]. Besides, with MRI it is possible to evaluate the volume of the tongue and pharyngeal wall. In Figure 2, MRI assessments pre- and post-lipofilling treatment are presented.
In addition to the objective assessments, it might also be helpful to explore patient-reported experiences. The MD Anderson Dysphagia Inventory (MDADI) [46] and the Swallowing Quality of Life questionnaire (SWAL-QOL) [47] are often used in HNC patients to analyze patients' reported swallowing-related quality of life.
Lipofilling procedure
Different techniques exist for lipofilling injection [41]. There are many preparation techniques for adipose tissue, and no universally accepted standard method exists. The Coleman technique, which was described in the early 1990s, is the most frequently used method. This technique aims to prevent damage to the fragile adipose cells as much as possible during transplantation and thus promote tissue survival [48]. The technique involves three steps and is described by Hsu et al. [41]. The first step consists of the harvest of fatty tissue from the upper abdominal wall or inner thigh using large- or small-volume liposuction (see Figure 3a). The upper abdominal wall and lateral thigh are useful donor sites because of the high amount of local fat cells. The donor site can be infiltrated with tumescence fluid (for instance, Ringer's lactate, adrenaline, and lidocaine) just before the liposuction, but this can also be done after the suction. After liposuction, the second step involves the preparation of the adipose tissue. During the preparation phase, the fat sample is transferred into a 10 cc syringe for centrifugation (see Figure 3b). The syringe is centrifuged for 2-3 minutes at 3000 revolutions per minute (800 g) to separate out oils, debris, water (including lidocaine or adrenaline, saline, and blood), and a layer of cell pellets/residue from the cellular fraction. In the syringe, three layers will be visible: the oil layer at the top, the cellular fraction in the middle, and cellular debris and red blood cells at the bottom (see Figure 3c). The segregated cellular fraction, composed of adipocytes and stromal vascular cells, is transferred to a small 1 cc syringe. The third and last step consists of the injection into the predetermined spots in the base of the tongue. Using a needle, the side of the tongue is perforated, and the injection cannula is introduced with the dominant hand; the injection is performed on cannula retraction in a three-dimensional "fan pattern". The aim is to transfer small aliquots of fat with multiple passes at different depths. The fingers of the non-dominant hand can be placed behind the tongue to control the process. It is helpful if an assistant pulls on the tongue (see Figure 3d). The same procedure is usually performed separately on both sides of the tongue. In general, we inject 10-15 cc of fat per session.
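As a cross-check of the stated centrifugation settings, the relative centrifugal force can be related to the spin speed by the standard formula below; the rotor radius of roughly 8 cm used here is an illustrative assumption, not a value given in the text.

```latex
% Relative centrifugal force (in multiples of g) for a rotor of radius r
% (in cm) spinning at N revolutions per minute:
%   RCF = 1.118e-5 * r * N^2
% With an assumed rotor radius of 8 cm (not stated in the text):
\[
  \mathrm{RCF} = 1.118\times10^{-5}\, r\, N^{2}
             \approx 1.118\times10^{-5}\times 8 \times 3000^{2}
             \approx 805\,g ,
\]
% consistent with the quoted "3000 rpm (800 g)".
```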
The lipofilling procedure can be carried out under local anesthesia or general anesthesia. Because 30-50% of the injected fat might be resorbed, and not too much fat can be injected at the same time, it is recommended to repeat the assessments and, if needed, the injection sessions.
Short-term outcomes
In the last few years, different studies, primarily case reports, have been published about the use of lipofilling in patients with chronic dysphagia due to HNC (treatment) [38,49,50]. Navach et al. [49] reported on a 58-year-old patient with impaired swallowing after treatment for a nasopharyngeal carcinoma. This patient complained about dysphagia, loss of body weight, aspiration pneumonia, and frequent episodes of bronchitis. A VFSS was conducted in which a lack of bolus compression, asymmetry of the lingual movements, stagnation in the valleculae, and a lack of projection of the base of tongue, among other findings, were visualized. The patient received 7 months of speech and language therapy to improve the mobilization and strengthening of the swallowing muscles. The treatment improved the preparation and presentation of the bolus, although it was not sufficient. After 6 weeks, another VFSS showed worsened bolus stagnation in the valleculae and at the base of tongue. This patient received a lipofilling injection in the base of tongue, which was performed following Coleman's procedure. In total, 5 cc of fat was injected into both sides of the base of tongue. After surgery, the patient experienced an improvement in swallowing, and minimal post-operative swelling was reported. A new VFSS was made 1 month after surgery, showing an improved swallowing mechanism due to greater elevation of the base of tongue, effective elevation of the larynx, and improved closure of the larynx. After 3 months, the swallowing function was still stable, and the patient had gained body weight.
In our institute, a study was performed by Kraaijenga et al. to investigate the feasibility and potential value of lipofilling in HNC patients with post-treatment oropharyngeal dysfunction [38]. This case series included seven patients. One patient dropped out of the study because of progression and therefore chose a total laryngectomy procedure. Pre-assessment of the six remaining patients included VFSS, MRI, and the SWAL-QOL measurements. VFSS showed penetration and/or aspiration in all but one patient. Reduced or absent contact between the base of the tongue and pharyngeal wall was seen in all six patients. This reduced or absent contact resulted in residue above and below the hyoid bone. MRI showed volume loss or atrophy of the tongue in five patients. Two patients had reduced tissue in the right tonsillar arch. The lipofilling sessions were performed using the Coleman technique. Patients received two to three injection sessions at 3-month intervals. In total, 20-35 cc of adipose tissue was transplanted in all patients. No complications, such as necrosis, infection, swelling, or edema, were observed. The follow-up took place 1-3 months post-surgery. VFSS showed that four patients had improved swallowing function, and two of them were no longer feeding tube dependent. The MRI showed increased tongue volume, with the injected fat spread out at the base of tongue. The SWAL-QOL showed improved quality of life in almost all patients.
Recently, Ottaviani et al. [50] published a case report about a 76-year-old patient with severe chronic dysphagia who had undergone a horizontal supraglottic laryngectomy and adjuvant radiotherapy. FEES showed a mobile right arytenoid and tissue loss in the base of tongue. VFSS demonstrated constant intra-swallowing aspiration and moderate pooling of food at the base of tongue with post-swallowing penetration and aspiration. The patient received 6 months of speech therapy focused on muscle strengthening and postural compensation techniques. The intervention turned out to be insufficient, and therefore lipofilling injection was offered as a treatment option. The surgery was performed following the Coleman technique, and 5 cc was injected into the base of tongue. Intraoperatively, FEES was performed and demonstrated an improved swallowing function. However, trace aspiration of liquid textures and minimal residue were seen. After 1 week, FEES demonstrated aspiration only for liquids. After 1 month, the VFSS showed mild to moderate dysphagia. These results were also stable at 6 months post-surgery.
These three studies showed that lipofilling might be an effective treatment for HNC patients with chronic dysphagia. No complications were reported, and therefore lipofilling seems safe [38,49,50]. Many patients showed improved objective and subjective swallowing function after lipofilling. Nevertheless, it remains difficult to predict how much fat will be resorbed and thus how long a therapeutic effect will persist. With the Coleman technique, resorption of fat seems to be reduced to some extent [32,33]. In general, after 20-30 cc injections (in 2-3 procedures), positive effects are seen. However, sometimes repeated injections might be needed to achieve and maintain a therapeutic effect. Hopefully, the injected tissue may also become less fibrotic, so that no further injections are needed. Until now, no large body of data has been available to support this hypothesis.
Case reports
To give better insight into lipofilling and how it can be used for post-treatment swallowing problems in HNC patients, three cases will be described in detail (see Table 1). The patients' pre-lipofilling objective and subjective swallowing function is analyzed and compared with the swallowing function after the last lipofilling (short-term results) and between 2.5 years and 5.8 years after the last lipofilling treatment (long-term results).
Case 1
A 67-year-old male had been treated in 1997 for a T3N2c carcinoma of the floor of the mouth. His treatment consisted of local resection, partial mandibulectomy with free fibula reconstruction, and post-operative radiotherapy, which resulted in complete remission. In 2013, 16 years after treatment, he visited the outpatient clinic with increasing swallowing difficulties, with particularly solid foods getting stuck in his throat, requiring placement of a PRG feeding tube to maintain adequate nutritional intake.
VFSS
VFSS assessments showed severe dysphagia with the occurrence of penetration and a high amount of oropharyngeal contrast residue due to insufficient contact between the base of tongue and posterior pharyngeal wall.
MRI
An MRI was made to rule out a new tumor. Since standard swallowing exercises for more than 1 year did not improve the persisting swallowing problems, and other surgical options were unlikely to improve the swallowing function, lipofilling was considered. In Figure 3, the pre-lipofilling MRI scan can be found on the right.
Number of injections
This patient underwent three lipofilling sessions (3 times 8-12 cc) into the base of the tongue at 3-month intervals. After the second procedure, the patient noticed an improvement in swallowing function. He resumed oral intake following the third injection, and his feeding tube could be removed.
Short-term results
A VFSS assessment showed improved scores for thick liquids (a lower Penetration-Aspiration Scale (PAS) score; see Table 2). This patient also reported notable improvement in subjective swallowing function, with substantially less effort and less choking. In Figure 3, the short-term post-lipofilling MRI scan is shown in the middle.
Long-term results
However, 2.5 years later the SWAL-QOL subscale scores had deteriorated (see Table 3). Until 2020, this patient was able to maintain oral intake without a PRG. Swallowing was not easy, but he managed to have full oral intake with additional diet modifications. He died in 2020 due to urosepsis. In Figure 4, the long-term post-lipofilling MRI scan is shown on the right.
Case 2
A 59-year-old female was diagnosed with a T3N2c base of tongue tumor in 2004. Organ-preservation treatment with concurrent chemoradiotherapy resulted in a complete remission. In the post-treatment period, however, the patient developed severe dysphagia and dysarthria due to oropharyngeal scarring and base of tongue atrophy. Despite intensive swallowing rehabilitation with strengthening exercises, several esophageal dilatations, and a customized intraoral prosthesis lowering the hard palate to also improve speech, the patient remained completely feeding tube dependent due to persistent oropharyngeal dysfunction/stagnation of food.
Table 2. Pre- and post-treatment outcomes after the lipofilling session.
VFSS
VFSS evaluation demonstrated minimal contact between the base of tongue and the pharyngeal wall during swallowing, with large amounts of residue located at the piriform sinus, and the occurrence of aspiration, even with a 1 cc swallow administered with a pipette to improve bolus transport.
MRI
MRI showed an atrophic tongue, sagging posteriorly (see Figure 5). Since intensive swallowing exercises offered no solution, in 2014 the patient opted for lipofilling into the base of tongue.
Number of injections
Three lipofilling sessions were needed, at 3-month intervals, with 10-12 cc injected per session.
Short-term results
The post-operative MRI showed several fat depositions at the right base of the tongue (see Figure 5), and the patient was able to eat and drink again for the first time in 10 years. However, although the patient was very satisfied with being able to swallow again, the VFSS evaluation still showed aspiration. Four months later the patient presented with aspiration pneumonia, and a nasogastric feeding tube was indicated. However, although being aware of the possible risks, she chose to resume her oral intake. At 8 months post-lipofilling (short-term results), she remained happy with the procedure, which was reflected in good SWAL-QOL scores.
Long-term results
However, 4 years after the last lipofilling this patient experienced more swallowing problems. Her subjective swallowing outcomes deteriorated (see the Appendix, Table 3, for her long-term SWAL-QOL scores), and she decided to have another lipofilling session. Nevertheless, even with that extra lipofilling (17 cc at the left and 17 cc at the right base of tongue), the SWAL-QOL scores increased, meaning worse swallowing-related quality of life (see Table 3 in the Appendix). In addition, a repeated VFSS showed worsening swallowing function (severe dysphagia). Since she had a history of aspiration pneumonias and weight loss, we decided to place a PRG and stop all oral intake. In Figure 5, the long-term MRI scan can be found on the right.
Case 3
A 73-year-old male had been diagnosed with a T2N1 hypopharynx carcinoma in 1984. He was treated with radiotherapy, which resulted in complete remission. This patient also had a history of esophageal carcinoma in 1964, for which he needed several dilatations in 1990/1991. Since 2009, he had suffered from severe swallowing problems (several aspiration pneumonias) caused by a dysfunctional larynx, and he needed a PRG.
VFSS
A VFSS showed a severe swallowing problem. All food consistencies were (silently) aspirated, the epiglottis was rigid, and laryngeal elevation was limited. This patient started intensive swallowing rehabilitation, since he had had no swallowing exercises before. However, the rehabilitation did not improve the swallowing function enough to increase oral intake or to remove the PRG. In 2017, the patient opted for a lipofilling injection in the base of tongue.
Number of injections
In total, 20 cc was injected: 10 cc on the left and 10 cc on the right.
Short-term results
After this first injection, the patient was still not able to swallow anything. He continued to develop pneumonias, for which he used antibiotics daily. Because of the serious health risks related to the recurrent pneumonias, and his low swallowing-related quality of life as measured by the SWAL-QOL (see Table 3), this patient decided to undergo a functional total laryngectomy in 2018.
Clinical implications
At our institute, the Netherlands Cancer Institute, lipofilling is considered a safe procedure. Therefore, this procedure is embedded in standard care for selected patients with therapy-refractory swallowing problems. When a patient visits the hospital with swallowing complaints, the first step is to start swallowing rehabilitation under the guidance of a specialized speech and language pathologist. If the swallowing exercises do not give a satisfactory result, lipofilling can be considered. Patients are eligible if they have severe dysphagia caused by volume loss or muscle atrophy of the tongue or pharyngeal musculature due to HNC treatment, provided they have no history of major oral surgery.
In the past 5 years, 20 patients have been treated with lipofilling injections at our institute. The procedure is preferably performed in collaboration with the plastic surgeon and under general anesthesia. We prefer general anesthesia because, in our experience, injecting the fat into the tongue in particular is uncomfortable for the patient; general anesthesia makes the injection less stressful. In general, we inject 10-15 cc of fat, and on average two to three sessions are needed. No severe complications have occurred since we started performing this procedure.
Conclusions
This chapter describes the possible role of lipofilling in patients with chronic dysphagia after HNC treatment. Lipofilling is a technique for transplanting fat cells within one individual. This procedure has the potential to increase tissue volume and improve oropharyngeal function. Based on published results, the lipofilling technique seems to be safe and, in selected cases, of potential value for improving swallowing function in therapy-refractory HNC patients. For this reason, lipofilling should be considered as a treatment option for chronic dysphagia after HNC treatment.
Retail Sales Forecasting Using Deep Learning: Systematic Literature Review
This systematic literature review examines deep learning (DL) models for retail sales forecasting. The accuracy of a retail sales forecast is a driving force behind uninterrupted business operations. Accuracy for retailers means limiting supply chain and storage costs, ensuring no product is out of stock, and facilitating smooth promotional operations. The study analyses the DL frameworks used in the reviewed literature. Tested DL models are listed, as well as other machine learning and linear models used for the evaluation comparison. Additionally, the review presents the metrics used by the authors for model evaluation. This article concludes by describing the benefits and limitations of DL models for sales forecasting.
Introduction
To stay competitive, retail companies must look for ways to increase the efficiency of their operations. For any retailer, the main focus of the business is sales volume; consequently, having precise sales estimates is the cornerstone of the business. Without them, the supply chain, finance, marketing, and other functions of the company cannot operate without disruption. As a result of underestimated sales, a product may end up out of stock, marketing activities can be disrupted, and customers can be lost; in turn, overestimation can cause problems with the shelf life of products and ultimately increase the cost of storage, products, and operations. Thus, sales forecast accuracy trickles down to the overall efficiency of the business. With ever-growing technological capabilities, it makes the most sense for retailers to look for solutions in this area.
Artificial intelligence (AI), although it can now be found almost everywhere (in our phones, laptops, cars, and watches), still has many unexplored and insufficiently developed applications. One of the most advanced technologies of AI is deep learning (DL). DL as a technology has a variety of applications such as image recognition, speech recognition, natural language understanding, acoustic modeling, and prediction modeling [1].
This article is a piece of secondary research with the aim of identifying, evaluating, and interpreting currently available research regarding the usage of DL for retail sales forecasting.
The goal of this systematic literature review is to summarize the existing knowledge regarding retail sales forecasting using DL technology and to provide an evaluation of benefits and limitations of the approaches used in DL for retail sales forecasting.
To achieve this goal, the following research questions have been identified:
RQ1. What are the DL models used for sales forecasting?
RQ2. What metrics are used for model evaluation?
RQ3. What are the benefits of using DL models for sales forecasting?
RQ4. What are the challenges and limitations of using DL models for sales forecasting?
The structure of the article is as follows. Section 2 introduces sales forecasting, the approaches used for estimating it, with a specific focus on DL concepts. Section 3 presents the research method and search strategy. Section 4 shows the process of literature search and article selection. Section 5 contains the data analysis and results, and Section 6 concludes the article.
Background
This section provides the background to sales forecasting and the approaches used, additionally giving a brief overview of DL models and metrics commonly used.
According to Mentzer & Moon [2], a sales forecast is a "projection into the future of expected demand, given a stated set of environmental conditions". In some of the earlier works, instead of "sales forecast", the terms "sales prediction" or "demand forecast" have been used as synonyms of "sales forecasting". The general approach of using historical time series data for estimating the future value of sales is the common factor in the reviewed literature; therefore, for this study, "sales forecasting" will be used as an umbrella term.
For addressing forecasting problems, different models and methods have been used. Classical forecasting methods include the Auto-Regressive Integrated Moving Average (ARIMA) and Seasonal Auto-Regressive Integrated Moving Average (SARIMA) models, as well as exponential smoothing, all of which perform statistical time series analysis. These are often used for market-level sales forecasts [3], [4].
Besides the conventional methods, there are also methods based on machine learning (ML). ML is a subset of AI. ML algorithms rely on data and learning to reach a specific goal by extracting patterns from the data. Some of the ML models are Linear Regression, k-Nearest Neighbor (k-NN), Random Forest (RF), and Support Vector Machine (SVM) [5], [6]. Artificial neural networks (ANN) are a specific discipline within ML. DL is an even smaller part of AI and ML. DL focuses on multilayer ANN and uses them as the backbone of DL algorithms [5]. Since 2006, the third popularity wave of ANN algorithms has started, and the term "deep learning" has solidified its presence in the academic literature [7]. Some of the commonly used DL architectures are Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Recurrent Neural Networks (RNN). Schmidhuber [7] provides a comprehensive historical overview of the development of DL in NN. Additionally, more information on the DL models and architectures can be retrieved from [5] and [6]. Figure 1 describes the inclusive nature of the AI-related terms.

The ultimate goal of classification is prediction, and varied measures are used to capture the quality of a prediction or forecast. Mean Absolute Error (MAE) calculates the average size of the error that the forecast contains (Formula 1). Mean Absolute Percentage Error (MAPE) shows the average absolute percentage by which a forecast value is off the actual sales (Formula 2); as it is expressed as a percentage, MAPE allows evaluation of the overall accuracy of the model and comparison with other models. One of the typical measures used to evaluate the error of a model in predicting numerical data is Root Mean Square Error (RMSE), which enables the comparison of the predicted value with the actual observation for different models (Formula 3). A lower RMSE value means that the model has been able to forecast the values within a smaller error range and thus fits the data best. Formulas (1)-(3) describe these most common metrics, where $a_t$ is the actual sales value, $f_t$ is the value forecasted by the model, and $n$ is the number of forecasted periods:

\[ \mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n} \lvert a_t - f_t \rvert \qquad (1) \]

\[ \mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n} \left\lvert \frac{a_t - f_t}{a_t} \right\rvert \qquad (2) \]

\[ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n} (a_t - f_t)^2} \qquad (3) \]
Additional information on accuracy measures is available in [8].
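These three metrics can be computed directly; below is a minimal, runnable sketch in plain NumPy (the array names and toy values are illustrative, not taken from any reviewed study):

```python
import numpy as np

def mae(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean Absolute Error: average magnitude of the forecast error."""
    return float(np.mean(np.abs(actual - forecast)))

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean Absolute Percentage Error: average absolute error as a
    percentage of actual sales (undefined when actual contains zeros)."""
    return float(100.0 * np.mean(np.abs((actual - forecast) / actual)))

def rmse(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Root Mean Square Error: penalizes large errors more than MAE."""
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

# Toy example: weekly unit sales vs. a model's forecast.
a = np.array([120.0, 135.0, 150.0, 110.0])
f = np.array([118.0, 140.0, 145.0, 120.0])
print(mae(a, f), mape(a, f), rmse(a, f))
```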
Emmert-Streib et al. [1] summarized the DL architectures used for every kind of prediction model. Their article focuses on the theory of DL and the various ways in which it can be applied. Also, Fildes et al. [4] wrote about the methods and aspects of retail forecasting, without, however, focusing on DL. To the best of our knowledge, no work so far has analyzed the literature regarding retail sales forecasting with the use of DL prediction models, and moreover, there is a lack of comprehensive analysis of DL models and their applicability in this field.
Research Method
The systematic literature review process is performed according to Kitchenham and Charters report [9]. Figure 2 displays the main phases of the method.
The process starts with the research need identification. During this phase, the focus of the study has been selected, and relevance established. The process continues with specifying research questions. This step is important as it sets the framework of the research scope and findings. The next phase is to develop a search strategy. This serves as a roadmap for the research to find all the relevant literature and show the completeness, rigor, and transparency of the process. As shown in Figure 2, the next task is to perform the literature analysis of which the main part is data extraction and amalgamation. Further, we move to presenting study results followed by a discussion which answers the research questions set in Section 1. Lastly, we derive conclusions from the study.
Search Strategy
The following search strategy is used for the selection of relevant studies. First, a list of keywords is compiled. The list consists of word groups derived from the research questions, synonyms of these words, a preliminary review of the topic in the Scopus database, and the taxonomy of IEEE.
Identified keywords: deep learning, deep neural network, retail, purchasing prediction model, sales prediction, predictive models, prediction modeling, prediction methods, sales forecasting.
Second, these keywords are used for search string development. The search string was used to search article titles, keywords, and abstracts.
Search string: (Deep learning OR Deep Neural network) AND Retail AND (Purchasing prediction model OR sales prediction OR Predictive models OR Prediction modeling OR prediction methods OR sales forecasting)
Third, the search string was used in the following digital libraries: Scopus, IEEE Explorer, ACM Digital Library, and Science Direct.
Fourth, the exclusion and inclusion criteria with which the studies had to comply were stated. The publication year threshold was chosen as it gives a sufficient period for review and, around that time, the term "deep learning" started to gain popularity [1]. Criterion 4: an article must be a conference proceeding or journal article; other types of works, such as books, standards, and courses, are excluded. Criterion 5: an article must be relevant to the topic and subject area of retail sales prediction using DL. Figure 3 presents the literature search strategy implementation process in the selected databases and the criteria applied to the studies. Initially, using the search string, 137 studies were identified; further, with the criteria of language, publication year, and source type, 100 studies were selected for further examination. These 100 studies were examined based on title, abstract, and full access rights. In total, 19 articles were selected after applying the inclusion and exclusion criteria of the search strategy described above.
Findings and Results
The reviewed literature concerned data from different retail businesses and industries. Table 1 displays the industries represented and the number of studies reviewed from those industries. Grocery, e-commerce, apparel and accessory, alcohol, health and beauty, and shopping mall sales have been forecasted by the authors. Most frequently, the authors used Python for the model implementation. Overall, the 19 reviewed studies date from 2016 to 2021, so the most current academic literature has been considered. The following sections introduce the analysis of the extracted data, answer the research questions, and present the results.
Prediction Models
DL can be achieved using many different neural network architectures. As a subset of machine learning, DL uses perceptrons and heuristics, and it often utilizes large datasets. The most common DL architectures are ANN, RNN, CNN, LSTM, and MLP. Each one differs slightly in the tasks it can perform and its architectural complexity [1]. The studies of prediction modeling used various techniques and developed additional frameworks based on the above-mentioned architectures. One study used the K-means algorithm for data clustering and the LSTM architecture for the prediction model; this combination allowed the model to reach a high level of prediction accuracy even with limited historical data [5]. Kaneko and Yada [16] used a simple DL framework to predict whether sales would increase or decrease. Table 2 lists the DL models considered in the reviewed articles, together with the machine learning or linear models used for comparison. The DL models are divided into two parts. The first 11 entries, listed and marked grey in Table 2, are original frameworks or architecture adaptation proposals from the reviewed papers' authors (H2O, DSF, STANet, ASFC, NN MPL, EE-CNN, EMD-G, EMD-MG, NN Model, AGA-LSTM, CNN-LSTM). The following 7 entries are established DL architectures that are used to perform the analysis in a novel setting or to make an extensive model comparison with DL, ML, and linear models, as in [11], [13]. The most common DL model used by the authors is the LSTM architecture. It is a type of RNN architecture, which is applicable to many uses including natural language processing, voice recognition, and, of course, prediction [1].
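To make the most common choice concrete, the following is a minimal Keras sketch of an LSTM sales forecaster; the window length, layer size, and synthetic data are illustrative assumptions, not a configuration taken from any reviewed study:

```python
import numpy as np
import tensorflow as tf

WINDOW = 12  # use the previous 12 periods of sales to predict the next one

def make_windows(series: np.ndarray, window: int):
    """Slice a 1-D sales series into (samples, window, 1) inputs and targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y

# Synthetic weekly sales: trend + yearly seasonality + noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(300, dtype=np.float32)
sales = (100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 52)
         + rng.normal(0, 2, t.shape).astype(np.float32))

X, y = make_windows(sales, WINDOW)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),   # recurrent layer summarizing the window
    tf.keras.layers.Dense(1),   # next-period sales estimate
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[-1:], verbose=0))  # forecast for the next period
```

The sliding window turns the univariate sales series into supervised samples, which is the usual framing when LSTM is applied to sales data.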
DL Benefits
This section addresses the benefits achieved by applying deep learning to prediction models. DL prediction models open up new capabilities that might not be reachable with standard models; for instance, Giri et al. [14] proposed a neural network MPL model that, by examining apparel features in the product picture, can estimate the sales of new, previously unsold products. The STANet framework developed by Liao et al. [26] achieves superior sales prediction accuracy by considering not only historical sales but also relationships among products; they hypothesized that a relationship among products, such as being of the same brand, would play a role in the sales forecast. Qi et al. [13] and Chena et al. [19] similarly consider product relationships and promotions in their models to achieve higher accuracy. DL models allow the estimation of data with non-linear relationships, which is one of their biggest advantages over linear models [19]. However, the most important benefit is apparent from the comparative analysis of DL and other approaches. As displayed in Figure 5, 82% (14 articles) of the research articles that compared a DL model with machine learning or linear models found that DL had superior results. Two papers found that another type of model had a better result, while one study did not have a conclusive answer, as the results varied based on the metrics.
DL Limitations
This section considers the challenges and limitations of using DL models for sales predictive modeling. Some specific limitations can be mentioned for each of the reviewed approaches. For instance, the model of Kaneko and Yada [16] was able to predict an increase or decrease, but not the actual sales figure; however, this also means that their model was simple and easy to implement. For more complex problems, DL models become extensively complex to implement and understand. DL models are so-called black-box solutions; thus, their decision-making process is not traceable, and the approach is not applicable if knowledge extraction and outcome explanation are required [19]. Additionally, similarly to non-neural-network ML models, DL models may also be subject to biases, overfitting (the model being trained too closely to the given data), or underfitting (the model not being suitable for generalization and not being able to model the data) [5]. That is the reason why so many DL algorithms are still being developed, evaluated, and tested. Thus, although the potential of DL models is there, their limitations cannot be ignored.
Conclusions
This study is a summary of sales forecast models using DL. Based on the article reviews performed in this study, it is possible to see that varied models and evaluation metrics have been applied to grocery store, e-commerce, apparel and accessory, health and beauty store, shopping mall, and alcohol sales forecasts. The DL architectures used most commonly for sales forecasting are LSTM, DNN, and MPL. However, other authors chose to develop their own frameworks (11 cases were found). For grocery store sales forecasting, in 4 out of 9 applications LSTM was chosen, while 4 out of 9 applications used novel frameworks. For the evaluation of the models, the authors of the reviewed articles compared DL applications with non-neural-network ML algorithms and linear models. Grocery store forecast comparison is most commonly done with the Linear Regression model (4 instances), SVM (4 instances), and ARIMA (3 instances). For other industries, the data pool is too small to identify any tendencies regarding the most prevalent models for sales forecasting. The metrics most often used for evaluation are RMSE, MAE, and MAPE; however, other metrics are accepted and used as well. The most frequently used combination of metrics is RMSE and MAE (5 instances). In some of the studies, the particular accuracy measure used was not denoted and could not be identified, thus pointing to a lack of consistent and repeatable research methodology in these studies. The application of DL frameworks proves to provide superior sales forecast methods and allows capabilities not possible with other methods; however, they are most often complex solutions and are harder to implement than other ML or linear models. Additional research for improving the models and broadening the application areas is needed, and it is clear that DL architectures are still developing, so we can expect to see more works using DL in the near future. This research can serve as a basis for the further development of retail sales prediction models, taking into consideration the findings, strengths, and weaknesses of up-to-date solutions using DL, other ML, and linear models.
Burnout among nurses
Introduction: The process of occupational burnout develops slowly; its initial symptoms are discreet, they increase progressively, and they become manifest suddenly, with great power. The burnout syndrome constitutes a serious personal and social problem whose cause lies in the workplace or is work-related. Aim of the research: To assess the occupational burnout of nurses and their sense of satisfaction with their career. Material and methods: The study covered 100 nurses working for the Health Care Unit in Dąbrowa Tarnowska. The method of a diagnostic poll was used in the research. A survey questionnaire regarding career and the standardised Maslach Burnout Inventory were the research tools used. The calculations were made with the use of the IBM SPSS Statistics 20 software. The adopted statistical significance was p < 0.05. Results: As many as 48% of the nurses felt job satisfaction. Among 41% of the nurses a high level of burnout related to emotional exhaustion was determined, and 63% of the nurses felt a low level of burnout related to depersonalisation. Lack of the feeling of personal achievements was the cause of a high level of burnout in 62% of the surveyed. The average result on the burnout scale was 50.34 points, indicating that 38% of the nurses were threatened with burnout. The age of the studied nurses, place of work (ward), and feeling the need for further education did not influence the frequency of occupational burnout occurrence. Conclusions: Good relationships within a therapeutic team and support from the ward head nurse are strongly linked with a lower sense of occupational burnout.
Introduction
The problem of occupational burnout is increasingly addressed in the literature. The notion first appeared in the 1970s [1,2]. Nowadays, the issue is dealt with not only by psychologists and sociologists, but also by researchers from other areas of medicine, pedagogy, and the theory of management and organisation [3,4]. In the course of many years' research into occupational burnout, a few definitions of this syndrome have been created. The American psychologist Freudenberger was the first to introduce the notion of "staff burnout" in 1974. He defined it as a decline in the level of an employee's energy, occurring as a result of being overwhelmed with the problems of others [2]. At present, one of the most frequently used definitions is the one created by Christina Maslach. According to her, burnout is a syndrome of emotional exhaustion, depersonalisation, and a diminished sense of personal accomplishment, which may occur among various professionals who work with other people in a specific manner [2]. The quoted definition is most often used in research into health care employees. The main professional group particularly exposed to the occurrence of burnout syndrome is health care workers, and among them nurses and doctors are at the centre of attention in this research [3, 5-7]. The work environment is the main source of stress [8], and stress is the source of occupational burnout [4].
The process of occupational burnout develops slowly; its initial symptoms are discreet, they increase progressively, and they become manifest suddenly and with great power [3]. Burnout syndrome constitutes a serious personal and social problem [1,3] whose cause lies in the workplace or is work-related [1]. Its consequences concern the mental, emotional, physical, professional, and family spheres; thus, it is not only the worker experiencing burnout who suffers but also his or her environment [9].
A nurse is constantly involved in the patient's illness and life problems, and quite often he/she accompanies the patient in the dying process. This places great emotional demands on him/her, the consequence of which is stress [4]. Its accumulation, and the lack of the ability to release it, leads to chronic occupational stress, which is destructive for a nurse, bringing about lowered self-esteem and quality of work, and affecting contact with patients [4]. Many women working as nurses become burned out due to a lack of appreciation and recognition. Low wages generate disappointment and frustration among nurses, while responsibility in the workplace leads to burnout syndrome, which is increasing year after year [10]. As a result, multi-dimensional burnout syndrome appears [1].
Work should be a source of happiness, a sense of fulfilment in life, and job satisfaction for everyone. Satisfaction upholds an employee's readiness to work, which is why the most important goal of any actions in this area is to strive to support employees at all times. Burnout syndrome is a problem that has to be talked about because it is a serious threat to the employee's health. Knowledge about its existence is key to proper decisions being made by the employer to create a friendly work environment.
Aim of the research
The aims of the paper were to assess the occupational burnout of nurses and their sense of satisfaction with their career.
Material and methods
The method of a diagnostic poll was applied in the study. The research tools were an anonymous survey questionnaire, created by the author and concerning the job of a nurse and job satisfaction, as well as the standardised Maslach Burnout Inventory. The Polish version of the Maslach Burnout Inventory (MBI) was used for the assessment of occupational burnout [11]. It is composed of 22 statements comprising three scales: emotional exhaustion (nine items), depersonalisation (five items), and personal accomplishment (eight items).
Respondents answer questions about how often they feel a particular way on a 0-6 scale, where 0 indicates "never" and 6 means "daily". Results are calculated for each of the subscales separately, according to the key. Burnout is confirmed by high results obtained on the subscales of emotional exhaustion (EE: 9-54 points) and depersonalisation (DP: 5-30 points) and low results on the personal accomplishment subscale (PA: 8-48 points). This questionnaire has been validated in Polish and achieved the following α values for the scales: EE = 0.85, DP = 0.60, PA = 0.76.
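As a minimal sketch of the subscale scoring just described (the item-to-subscale assignment below is a placeholder, since the actual MBI scoring key is licensed content and is not given in the text):

```python
# Scoring sketch for a 22-item, 0-6 Likert questionnaire with three subscales.
# The item indices are illustrative placeholders, NOT the real MBI key.
SUBSCALES = {
    "emotional_exhaustion":    list(range(0, 9)),    # 9 items, range 0-54
    "depersonalisation":       list(range(9, 14)),   # 5 items, range 0-30
    "personal_accomplishment": list(range(14, 22)),  # 8 items, range 0-48
}

def score(responses: list[int]) -> dict[str, int]:
    """Sum the 0-6 responses item-wise for each subscale."""
    assert len(responses) == 22 and all(0 <= r <= 6 for r in responses)
    return {name: sum(responses[i] for i in items)
            for name, items in SUBSCALES.items()}

answers = [3] * 22  # a respondent giving the same mid-scale answer throughout
print(score(answers))  # {'emotional_exhaustion': 27, 'depersonalisation': 15, ...}
```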
The study included 100 professionally active nurses working for the Health Care Unit in Dąbrowa Tarnowska. After the written consent of the hospital director had been obtained, the survey was carried out in December 2014 and January 2015 in the following wards: Internal Medicine, Orthopaedic, Intensive Care Unit, Systemic Rehabilitation, and Neurological. The respondents participating in the research were informed about anonymity, the study objectives, and the right to refuse to participate or to withdraw consent to participate in the study at any time and without any implications. Having given their informed consent, the nurses completed the questionnaire survey unassisted.
The average age of the respondents was 40.37 ±9.68 years. The youngest woman was 24 years old, and the oldest was 58. The average number of years of service in the studied group of nurses was 16.83 ±10.54. The number of years of service ranged from 1 year to 38 years. The majority of nurses (86%) worked in the two-shift system. 85% of the nurses were happy with the work system in which they worked (Table 1).
Statistical analysis
The calculations were made with the use of IBM SPSS Statistics 20. The adopted level of significance was p < 0.05.
Results
A sense of job satisfaction prevents the occurrence of occupational burnout. A person who is satisfied works more effectively and with greater enthusiasm. The survey data analysis did not show any significant relationship between the sense of job satisfaction and occupational burnout in nurses related to emotional exhaustion (p = 0.2304) or depersonalisation (p = 0.1106). A high sense of lack of personal accomplishment was experienced significantly more frequently (p = 0.0211) by respondents who were only sometimes (80.0%) or never (80.0%) satisfied with the job done, in comparison with nurses who felt satisfied with their occupation (Table 2).
The job satisfaction of the nurses did not significantly influence the general level of occupational burnout among them (p = 0.1429). Professional experience gained over the years enables nurses to assess their job satisfaction. The survey data analysis did not show any statistically significant relationship between the years of service and burnout related to emotional exhaustion and depersonalisation (p > 0.05). No statistically significant relationship was observed between the years of service and the sense of job satisfaction (p = 0.5371). Satisfaction with the job done was slightly more often felt by nurses with the greatest number of years of service (average 18.02 years) - 66.7%. The average number of years of service of nurses who only sometimes felt job satisfaction was 15.13 (37.5%). Nurses who felt job satisfaction more often than not had worked in the profession for 17.76 years on average (34.8%).
The nurses' age did not have any statistically significant influence on the general level of occupational burnout or on the occurrence of occupational burnout related to emotional exhaustion and depersonalisation (p > 0.05).
Medicine is a science that is constantly developing; therefore, throughout the whole period of their professional career, nurses have to educate themselves and improve their qualifications. The survey data analysis did not show any statistically significant relationship between nurses feeling the need for further education and the occurrence of occupational burnout related to emotional exhaustion, depersonalisation, and the sense of lack of personal accomplishment (p > 0.05). A nurse is a member of a therapeutic team providing broadly understood patient care. His/her job is primarily teamwork based on mutual respect, trust, and providing assistance to one another. Most often, a nurse cooperates with another colleague-nurse, the ward nurse, and the doctor. Nurses who could always count on help from nurse colleagues most often had a very low level of occupational burnout (18.0%), and those who could only sometimes count on cooperation were more often affected by medium (34.2%) or high (15.8%) levels of occupational burnout. The differences were statistically significant (p < 0.0001) (Table 3). For the other components, there was no statistically significant connection with the occurrence of the willingness to help/cooperate from a nurse colleague.
Cooperation with the doctor and the sense of respect from him/her is one of many factors influencing job satisfaction.
The sense of lack of personal accomplishment among nurses significantly depended on the occurrence of the willingness to help/cooperate from the doctor (p = 0.0116). Nurses who could always count on help from the doctor had a low (38.9%) or moderate (22.2%) level of occupational burnout in that respect. A high sense of lack of personal accomplishment was usually felt by nurses who could only sometimes count on cooperation with the doctor (72.7%). For the remaining components, there was no statistically significant connection with the occurrence of the willingness to help/cooperate from the doctor.
The occurrence of occupational burnout among nurses related to emotional exhaustion significantly depended on the possibility of obtaining help from the ward nurse (p = 0.0207). A low level of burnout in this respect was usually felt by nurses who could always count on help from the ward nurse (51.6%), and the level was more often high in nurses who could never count on such cooperation (54.4%). The survey data analysis did not prove that occupational burnout related to depersonalisation in nurses depended in a statistically significant way on the possibility of obtaining help from the ward nurse (p = 0.0766).
The sense of lack of personal accomplishment among nurses depended significantly on the occurrence of the willingness to help/cooperate from the ward nurse (p = 0.0008). A low level of occupational burnout in that respect was presented by nurses who could always count on the ward nurse's help (45.2%), and a high level of burnout connected with the sense of lack of personal accomplishment more frequently pertained to nurses who could only sometimes (74.5%) or never (77.3%) count on such help. The level of occupational burnout in nurses significantly depends on the occurrence of the willingness to help/cooperate from the ward nurse (p = 0.0003). A very low level of occupational burnout occurred in nurses who could always count on help from the ward nurse (32.3%). Low (46.8%) and medium (38.3%) levels of occupational burnout were more frequent in persons who could sometimes count on help from the ward nurse. High (22.7%) and very high (13.6%) levels of occupational burnout more often concerned nurses who could never count on such cooperation (Table 4).
Wards vary in terms of disease entities and patients' age, and each is characterised by its own specificity of work. The study showed that a workplace properly equipped with medical devices significantly influenced the sense of contentment with the organisation of work among the respondents (p = 0.0002). Job satisfaction was significantly more often felt by nurses who claimed that the workplace was always equipped with medical devices (36.4%). Nurses discontented with the organisation of work more often claimed that their workplace was not equipped with adequate medical equipment (32.1%).
Discussion
Occupational burnout is a very common phenomenon across numerous professions worldwide. More and more people experience exhaustion at work; therefore, the subject arouses a need for constant diagnosis of the causes, results, and prevention of this phenomenon [6]. The main professional group included in research into the burnout syndrome is nurses. Due to constant contact with people expecting emotional support and help, the profession evokes a specific kind of occupational stress affecting all spheres of life, which is why nursing staff belong to the group with the highest risk of the occurrence of occupational burnout [6]. Its symptoms are insidious, often explained away as temporary fatigue or "a bad day at work"; in fact they develop slowly, sometimes even over years, and lead to serious consequences in professional and private life. Mastalerz considers that a professional career is both a source of satisfaction and of stress, the effect of which may be the development of burnout [12].
Our own research did not prove that nurses' sense of job satisfaction is significantly related to their general level of occupational burnout. A similar relationship was identified in the research conducted by Sowińska et al. [13]. In her publication, Andruszkiewicz expresses concern about the growing lack of satisfaction (discontentment) of nurses with the job done, which in the future may be a serious source of crisis because it adversely affects the development of the profession [14]. This has been borne out for years by the strikes and protests of nurses.
Our own research did not reveal a significant influence of nurses' years of service on the general level of occupational burnout; however, years of service do influence individual dimensions of burnout, including the sense of lack of personal accomplishment. High results were more often achieved by nurses with 1-10 years of service (80.0%), which is reflected in the research conducted by Nowakowska and Rasińska [15]. In respondents with the shortest period of work experience the symptoms of developing occupational burnout were greatest. This happens because responsibility for another person's health and life brings about much greater difficulties for people who are just commencing their job. It may potentially be influenced by the lack of accumulation of personal problems, the sense of the freshness of knowledge, or not yet having experienced the significance of this type of responsibility [16]. The lack of a statistically significant relationship between the years of service and burnout was also obtained in the research carried out by Lewandowska and Litwin [9], in a study of bridging part-time undergraduate students at the Medical University of Gdansk [13], and in a study of nurses from the Kuyavian-Pomeranian Voivodeship [17]. The last study proved that the number of years of service influences only elements of the personality sphere, such as professional ambitions and offensive problem solving [17]. Other studies have shown that the greater the number of years of service, the more often occupational burnout occurs [6,18]. The contradictions present in the available literature on the subject mean the relationship continues to be an open problem [13].
When analysing the relationship between the age of the surveyed nurses and occupational burnout, our own research did not reveal any significant influence of age on the general level of occupational burnout. Similar results were presented by Tarcan et al. [19], and contrasting ones by Zhu et al., from whose research it can be concluded that workers' occupational burnout increases with age [20]. Statistically significant differences occurred between age and occupational burnout in nurses related to the sense of lack of personal accomplishment, which was more often presented by younger nurses. Similar results were presented in the research conducted by Lewandowska and Litwin [9].
The job of a nurse requires constant improvement of qualifications, which is related to an increase in the sense of professionalism and competence [7]. Qualification improvement positively influences a nurse's self-assessment, extends his/her scope of competences, can lead to promotion, and increases the prestige of the job. It is also a way to prevent the occurrence of occupational burnout [7,21]. In our own research, more than half of the respondents expressed a need/willingness to further their education; however, this did not correlate with the occurrence of occupational burnout. The research of Karakoc et al. indicates that continuous education is important, particularly for younger nurses, because it minimises the risk of occupational burnout in their further career [22].
In the job of a nurse, mutual cooperation within the therapeutic team is very important. The sense of support from a nurse colleague, the ward nurse, and the doctor influences the sense of job satisfaction and prevents occupational burnout [13]. Our own research found that the assessment of the atmosphere in the ward significantly depended on cooperation with a nurse colleague, the doctor, and the ward nurse. The results obtained in the research are comparable with others available in the literature. In the research carried out by Sowińska et al., the majority of respondents said that a nervous atmosphere in the therapeutic team affects their satisfaction with the job done [13]. An analysis of the research conducted by Dłużewska, Poncet, et al. enables us to draw the conclusion that conflicts among colleagues positively correlate with all three dimensions of burnout [16,23].
When analysing the relationship between the nurses' place of work and the occurrence of burnout, our own research did not show that the general level of occupational burnout among nurses depends on the ward in which they work. Different results are presented by the research carried out by Nowakowska and Rasińska, in which significant relationships between the type of ward and occupational burnout in all three dimensions were proven [15]. Likewise, the research by Wyderka et al. [3], as well as by Ríos-Risquez and García-Izquierdo [24], showed an impact of the workplace on the intensity of burnout symptoms in the dimensions of emotional exhaustion and the lack of personal accomplishment.
Proper conditions for patient treatment and care, including adequate furnishing of work stations with medical equipment, are among the more important elements influencing the sense of job satisfaction. In our own research, job satisfaction was significantly more often felt by nurses who claimed that the workplace was always equipped with medical devices (36.4%). This was confirmed in the research carried out by Głowacka and Nowakowska [25]. In the research by Sowińska et al., respondents also thought that the equipment in the workplace was satisfactory [13]. Those research findings are consistent with the opinion that the supply of medical devices influences job satisfaction.
Professional burnout is an individual and social problem [3]; therefore, its consequences affect the person experiencing this phenomenon as well as the environment in which he or she lives. Its essence is a mental erosion of a person's sense of self-esteem, personal dignity, and will. It leads to numerous disorders in the psychological, social, and physiological functioning of an individual [12] as well as to resignation from the current job [26]. The main cause of occupational burnout is stress formed in the workplace which a given person cannot handle and which persists for a longer time; therefore, occupational burnout prevention is closely linked to stress prevention [27,28].
Conclusions
The sense of the lack of personal accomplishment among nurses, being one of the elements of occupational burnout, is impacted by the sense of satisfaction with one's career and the length of work experience. The age of the studied nurses, the place of work (ward), and feeling the need for further education do not influence the frequency of occurrence of occupational burnout.
A lack of an appropriate number of staff and of medical equipment generates dissatisfaction among nurses with work organisation. Good relationships within the therapeutic team, cooperation with colleagues and doctors, and support from the ward head nurse are strongly linked with a lower sense of occupational burnout.
Table 1 .
The characteristics of the studied group
Table 2 .
Job satisfaction felt and sense of lack of personal accomplishment (PAR)
Table 3 .
The occupational burnout level and the occurrence of the willingness to help/cooperate from a nurse colleague
Table 4 .
The level of occupational burnout and the occurrence of the willingness to help/cooperate from the ward nurse
|
2019-03-18T14:03:39.227Z
|
2018-09-01T00:00:00.000
|
{
"year": 2018,
"sha1": "3978c379237cbeb0733c1c2a3c4fb82f1d3bcc4a",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-67/pdf-33867-10?filename=Burnout.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0a5044f85d2e1f5b96b7b3261e7f5dd1d3a8246c",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
222272232
|
pes2o/s2orc
|
v3-fos-license
|
Large Scale Indexing of Generic Medical Image Data using Unbiased Shallow Keypoints and Deep CNN Features
We propose a unified appearance model accounting for traditional shallow (i.e. 3D SIFT keypoints) and deep (i.e. CNN output layers) image feature representations, encoding respectively specific, localized neuroanatomical patterns and rich global information into a single indexing and classification framework. A novel Bayesian model combines shallow and deep features based on an assumption of conditional independence, validated by experiments indexing specific family members and general group categories in 3D MRI neuroimage data of 1010 subjects from the Human Connectome Project, including twins and non-twin siblings. A novel domain adaptation strategy is presented, transforming deep CNN vector elements into binary class-informative descriptors. A GPU-based implementation of all processing is provided. State-of-the-art performance is achieved in large-scale neuroimage indexing, both in terms of computational complexity and accuracy in identifying family members and classifying sex.
Introduction
Medical image repositories such as modern hospital picture archiving and communications systems (PACS) store increasingly large and diverse 3D patient anatomy data, e.g. diagnostic CT, MRI scans, histology, etc. In order to fully exploit these data via machine learning, it is important to model both specific instances, e.g. individuals and family members in personalised medicine, and characteristics shared between broader group categories, e.g. males and females. What would be the most suitable computational approach, or data representation for such an approach? In the computer vision literature, deep convolutional networks (CNNs) [19,17] excel at classifying broad image categories, particularly where the number of training data samples N is large relative to the number of class labels of interest, e.g. the ImageNet dataset [9] consisting of 1000 generic object categories with 1000 photographs each [29,12,14,13,7,38,38,28]. Shallow keypoint-based representations remain highly effective in the case of instance retrieval, where there may be at most a handful of examples per category [1,4,37,3]. Additionally, keypoint correspondences can be used to achieve robust spatial alignment [32], required prior to classification via deep networks, which are generally not invariant to rotation or scale changes. In the case of 3D brain MRI, for example, keypoint-based methods have been used to identify individuals and family members with high accuracy from large, multi-modal datasets [18,6]. How do deep network activations and shallow keypoint descriptors compare in the context of medical image data indexing and classification? Can they be combined in a complementary, synergistic fashion? We address these questions in the context of a novel data model combining both shallow/early and deep/late CNN information, where each image is represented as 1) a variable-sized set of generic 3D SIFT keypoints and 2) a fixed-sized vector of deep CNN activations from networks pre-trained on generic visual object categories. In our model, shallow convolution maxima are used to establish robust spatial alignment, e.g. to a suitable atlas reference space. Additionally, we propose a novel information-theory-based scheme to adapt CNN filters derived from generic objects to 3D medical image data, where CNN vectors are binarized according to element-wise thresholds maximizing the mutual information between binary activation and class label. Our work builds upon memory-based models [31,5], where image data are stored and used in on-the-fly kernel density estimation, an approach that approaches the optimal Bayes error as the number of training data becomes large [8], i.e. in the big-data context. Experiments are reported on 1010 MRIs from the Human Connectome Project [35]. Shallow keypoint descriptors are found to be individually most informative for family member indexing and retrieval, although combined deep and shallow information leads to the highest overall performance, improving upon the keypoint signature method [6] that was used to discover previously unknown subject labelling errors in the OASIS [21], ADNI [15] and HCP [35] public datasets widely used by the neuroimaging community. For the task of group classification, here male-female, both shallow and deep descriptors show similar performance individually, and their combination results in a slight increase in AUC (area under the curve) performance.
Related Work
Our work seeks to combine generic shallow keypoint methods [20,32] and CNN technology [29,12,14,13,7,38,38,28] for the purpose of large-scale generic indexing and classification of medical image data. A variety of context- and task-specific approaches may be used to detect and describe keypoints [36,23], and deep filter responses excel in few-label-many-data contexts such as group classification [37]. Nevertheless, comparisons have shown that variants of parametric (or 'hand-crafted') descriptors such as gradient orientation histograms can be more effective, particularly for matching specific visual scenes [26,10] or retrieving specific object instances [37]. A possible explanation is that bias is introduced in early (shallow) filtering layers [11,25]. For example, input filters derived from stochastic backpropagation are generally non-symmetric and biased towards specific oriented image patterns [19,17], unlike rotationally symmetric operators such as the difference-of-Gaussian or uniformly sampled Gaussian derivative filters used in SIFT descriptors, which are not biased to any particular orientation [20,32]. The importance of shallow filter information has been recently demonstrated [3]. 3D keypoint indexing has demonstrated state-of-the-art performance in identifying individuals and family members from large sets of brain MRI data [6], and also identified previously unknown subject labeling errors in widely used neuroimaging datasets. Keypoint and CNN representations can be combined, resulting in superior performance for instance identification [4].
Method
We seek a model combining shallow, variable-sized local keypoint sets {f_i} and a fixed-length deep descriptor v into a generic system suitable for both indexing and classification. To this end, we consider an instance- or memory-based model based on adaptive kernel density estimation, combining generic shallow and deep CNN information. Figure 1 illustrates our approach, where (a) represents generic, existing feature extraction technology and (b) represents the model we investigate here. We propose a Bayesian maximum a posteriori (MAP) formulation, where the goal is to maximize the posterior probability over a discrete random variable of class C = {C_k}, conditioned on the data ({f_i}, v) and a spatial transform T, i.e. to identify the class maximizing p(C_k | {f_i}, v, T) (Equation (1)) [33,34]. Spatial alignment is applied to the image T ∘ I, after which a deep CNN response vector v is sampled from layers near the output of a pre-trained CNN. Resampling the image according to a global transform prior to CNN feature extraction is similar to a spatial transformer network [16].
Under the assumption of conditional independence, the posterior in Equation (1) factors into a shallow keypoint likelihood term and a deep feature likelihood term raised to an exponent α (Equation (4)), with the first term being proportional to the log Jaccard keypoint signature distance proposed in [6]. In Equation (4), the parameter α ∈ [0, 1] is an empirically determined exponent factor balancing the relative variances of the conditionally independent shallow and deep feature distributions. The assumption of conditional independence is validated from data in Figure 3, where shallow and deep features are virtually uncorrelated, to our knowledge the first time this result has been reported. The specific formulation of each term in Equation (4) is as follows. The term p({f_i}|C), the probability of keypoint set {f_i} conditional on C, is modeled as a product over independent and identically distributed keypoints, where the density associated with an individual keypoint, p(f_i|C), is expressed as proportional to a marginalisation over a training keypoint set {f_j} (Equation (5)). Here N_{k,f} = |{f_j : C_j = C_k}| is the number of training keypoints f_j with class C_k, the training keypoints are weighted by an exponential kernel whose variance is defined by d_NN_i = min_j ||f_i − f_j||, and p(f_j|C_k) = [C_j = C_k] is the Iverson bracket, evaluating to 1 if the label C_j associated with f_j is equivalent to label C_k and zero otherwise. A +1 term is included to represent the contribution of uniform background noise and ensures the product in Equation (5) does not vanish.
In a similar fashion, the term p(v|C_k, T) in Equation (4) is defined with N_{k,v} = |{v_j : C_j = C_k}| the number of training vectors v_j with class C_k, a kernel bandwidth given by the mean distance between the feature vector v and each training feature vector v_j, and p(v_j|C_k) = [C_j = C_k] the Iverson bracket.
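The scoring just described can be made concrete with a short sketch. Because the original equations did not survive extraction, the following Python fragment is only one plausible reading of the memory-based model, not the authors' implementation: the exact kernel normalisation, the handling of the +1 background term, and the placement of the exponent α are assumptions, and all function and variable names are ours.

import numpy as np

def keypoint_log_likelihood(query_keypoints, train_keypoints, train_labels, class_k):
    """Approximate log p({f_i} | C_k): product over query keypoints of an
    exponential-kernel density over same-class training keypoints, with a +1
    background term so the product never vanishes."""
    same = (train_labels == class_k)
    n_k = max(int(same.sum()), 1)
    log_lik = 0.0
    for f in query_keypoints:
        dists = np.linalg.norm(train_keypoints - f, axis=1)
        d_nn = max(dists.min(), 1e-8)            # adaptive bandwidth: nearest-neighbour distance
        kernel_sum = np.exp(-dists[same] / d_nn).sum()
        log_lik += np.log(1.0 + kernel_sum / n_k)
    return log_lik

def deep_log_likelihood(v, train_vectors, train_labels, class_k):
    """Approximate log p(v | C_k): exponential kernel whose bandwidth is the mean
    distance between v and the training vectors of class C_k."""
    same = (train_labels == class_k)
    if same.sum() == 0:
        return -np.inf
    dists = np.linalg.norm(train_vectors[same] - v, axis=1)
    sigma = max(dists.mean(), 1e-8)
    return np.log(1.0 + np.exp(-dists / sigma).sum() / same.sum())

def map_classify(query_keypoints, v, train_kp, kp_labels, train_vec, vec_labels, classes, alpha=0.5):
    """MAP decision under conditional independence: shallow and deep log-likelihoods
    are summed, with the deep term weighted by the exponent alpha."""
    scores = {c: keypoint_log_likelihood(query_keypoints, train_kp, kp_labels, c)
                 + alpha * deep_log_likelihood(v, train_vec, vec_labels, c)
              for c in classes}
    return max(scores, key=scores.get)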
CNN Vectors and Domain Adaptation
In order to develop a generally applicable CNN implementation, we used the strategy of transfer learning, adapting deep CNN information in the form of generic models pre-trained on the ImageNet dataset [9], consisting of 1,000,000 images of 1000 object classes. While specialized training on domain-specific data may improve results, large pre-trained CNNs are commonly used as generic feature extractors and are surprisingly effective in medical image analysis, where data may be scarce [30]. We adopt a novel domain adaptation scheme based on information theory, where a raw vector of CNN information v extracted from a 2D ROI is converted into an informative, lightweight binary vector. Each element v[i] is binarized by a threshold τ_i, where τ_i is determined such that the mutual information (or information gain) between the binarized element v[i] and the class of interest C is maximized, as in classic decision tree training [2].
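As a concrete illustration of this thresholding step, the sketch below selects each τ_i by a brute-force search over the observed activation values; this is our own minimal reading of the scheme, so the search strategy, the use of all observed values as candidate thresholds, and all names are assumptions rather than the authors' code.

import numpy as np

def mutual_information(b, labels):
    # I(b; C) in nats for a binary feature b and a discrete label vector.
    mi = 0.0
    for bv in (0, 1):
        p_b = np.mean(b == bv)
        for c in np.unique(labels):
            p_c = np.mean(labels == c)
            p_bc = np.mean((b == bv) & (labels == c))
            if p_bc > 0:
                mi += p_bc * np.log(p_bc / (p_b * p_c))
    return mi

def fit_binarization_thresholds(V, labels):
    # V: (n_images, n_elements) matrix of raw CNN activations.
    # For each element, choose the threshold maximising I(binarized element; class),
    # analogous to split selection in decision-tree training.
    taus = np.zeros(V.shape[1])
    for i in range(V.shape[1]):
        best_tau, best_mi = 0.0, -1.0
        for tau in np.unique(V[:, i]):
            mi = mutual_information((V[:, i] > tau).astype(int), labels)
            if mi > best_mi:
                best_tau, best_mi = tau, mi
        taus[i] = best_tau
    return taus

# Usage: taus = fit_binarization_thresholds(train_activations, train_labels)
#        binary_descriptors = (activations > taus).astype(np.uint8)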
Experiments
The goal of the experiments was to evaluate the effectiveness of shallow and deep feature combinations in two classification contexts: 1) many-class-few-data (e.g. few-shot learning) and 2) few-class-many-data (e.g. group classification). Shallow keypoints {f_i} are extracted with the 2D and 3D SIFT-Rank method based on a GPU implementation, with a memory footprint 150x smaller than the original image; deep feature vectors v are taken from DenseNet201 [14]. CNN and 2D SIFT features are sampled from a single 2D mid-axial MRI slice in each 3D volume; additional slices or volumetric CNN information could potentially improve classification, however a single-slice evaluation protocol is similar to diagnostic MRI data, which often consist of a small number of high-resolution mid-axial slices. 10 different CNN models were evaluated (InceptionV3 [29], ResNet50 [12], DenseNet121,169,201 [14], MobileNet [13], Xception [7], NASNetMobile [38], NASNetLarge [38], InceptionResNetV2 [28]), but for clarity only the best layer of the best model (here, layer 704 of binarized DenseNet201), identified through a semi-exhaustive evaluation, has been retained in the following experiments, resulting in a 1920-dimension feature vector per image. Results presented in Figure 2 and Table 1 show that domain adaptation through binarization significantly improves performance; for few-shot learning classification, shallow keypoints perform slightly better than deep features, but the best performance is achieved via a fusion of both, suggesting complementary information.
General Group Classification (Sex Labels): In this experiment, we used a subset of 424 images of the HCP dataset, preserving only one family member per family, with an equal number of males and females in order to reduce possible biases due to genetic influences or imbalanced groups. A total of 624,643 keypoints were extracted in 12 min, for a total size of 38 MB, along with 424 deep feature vectors. Results in Figure 2 and Table 1 again indicate a significant improvement through domain adaptation, but relatively similar performance with deep or shallow features. It should be noted that combining features also leads to an improvement in performance.
Discussion
We propose a novel Bayesian formulation in order to combine generic keypoint and CNN information into a single, highly efficient memory-based model for indexing and classifying generic 3D medical image data. Our model is invariant to 3D similarity transforms, and the keypoint extraction process is highly efficient and greatly reduces the memory footprint necessary to store images in memory (by a factor of 150), which proved to be very useful for large-scale image analysis and classification. The approach presented here for domain adaptation of deep features leads to an increase in family and sex classification performance, particularly in the many-class-few-data indexing scenario of family member prediction. Our current work is based entirely on generic features requiring no specialised training, and the entire system can be applied as-is to large medical image datasets. To date, we have processed 25K brain MRIs from the UK Biobank [27] and 20K lung CTs from COPD-Gene [24]; these datasets can now fit into standard RAM on a laptop computer, and an individual image query against the data runs in well under 1 second. Kernel variance parameters can be estimated on-the-fly; CNN vector domain adaptation and further learning-based methods will be investigated against the baseline generic performance; and the model of conditional independence appears to be a promising strategy supported by the evidence here.
Fig. 3. A plot of the distribution of shallow p({f_i}|C, T) and deep p(v|C, T) data pairs shows a very low correlation coefficient (r = 0.208), indicating a high degree of independence between these two information sources.
|
2020-10-12T01:01:05.255Z
|
2020-10-08T00:00:00.000
|
{
"year": 2020,
"sha1": "95c1ea8beb686e7ad2e491c4111bd03cc00222b3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "95c1ea8beb686e7ad2e491c4111bd03cc00222b3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
39270210
|
pes2o/s2orc
|
v3-fos-license
|
Performance of normal adults on the Rey Auditory Learning Test: a pilot study
The present study aimed to assess the performance of healthy Brazilian adults on the Rey Auditory Verbal Learning Test (RAVLT), a test devised for assessing memory, and to investigate the influence of the variables age, sex and education on the performance obtained, and finally to suggest scores which may be adopted for assessing memory with this instrument. The performance of 130 individuals, subdivided into groups according to age and education, was assessed. Overall performance decreased with age. Schooling presented a strong and positive relationship with scores on all subitems analyzed except learning, for which no influence was found. Mean scores of subitems analyzed did not differ significantly between men and women, except for the delayed recall subitem. This manuscript describes RAVLT scores according to age and education. In summary, this is a pilot study that presents a profile of Brazilian adults on the A1, A7, recognition and LOT subitems. Key words: memory, adult, evaluation. Performance of healthy individuals on the Rey Auditory Verbal Learning Test (RAVLT): a pilot study. Abstract [translated from Portuguese] – The aim of this study was to assess the performance of healthy Brazilian adults on the Rey Auditory Verbal Learning Test (RAVLT), a test intended for the assessment of memory, and to investigate the influence of the variables age, sex and education on the performance obtained, as well as to suggest scores that can be used in the assessment of memory with this instrument. The performance of 130 individuals, subdivided into groups according to age and education, was assessed. Overall performance on the test decreased with increasing age. Education showed a strong and positive relationship with the scores on all subitems analysed except learning, for which no influence was found. The mean scores of the analysed subitems did not differ statistically between men and women, except for the delayed recall subitem. We describe RAVLT scores according to age group and education in this manuscript. Key words: memory, adult, evaluation. Department of Speech Therapy, Universidade Federal de São Paulo (UNIFESP), São Paulo SP, Brazil: Speech Therapist graduated from UNIFESP/EPM; Speech Therapist, PhD, Assistant Professor of the Speech Therapy Department, UNIFESP/EPM; Neurologist, PhD, Professor of the Neurology and Neurosurgery Department of UNIFESP/EPM. Financial Support: CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico). Received 25 August 2008, received in final form 12 November 2008. Accepted 4 February 2009. Dra. Leila Cardoso Teruya – Rua Embu 92, 07050-260 Guarulhos SP, Brasil. E-mail: leila.teruya@yahoo.com.br
Memory is defined as a group of abilities involving acquisition, storage and retrieval of different types of information [1]. Long-term memory allows the storage of large quantities of information for an indefinite period of time, while short-term memory is the ability to store small quantities of information for limited periods. Models representing memory and its subsystems incorporate the concept of working memory, defined as a memory system which involves temporary storage and manipulation of information for performing a wide variety of cognitive activities such as reasoning, comprehension, and repetitive tasks [2]. This system is divided into three interconnecting subsystems: the articulatory loop, the visuo-spatial sketchpad, and the central executive. The articulatory loop is responsible for processing and temporarily retaining speech knowledge, and is made up of different components: a phonological store, which retains the information in phonological code, and the articulatory process, responsible for maintaining the material in this store active as well as recoding all other non-phonological material [3].
Certain variables may impact the performance on working memory tests, including educational level, sex, and age [4]. Although several tests to evaluate this type of memory have been described, little is known about the performance of the Brazilian population on these tests. A test frequently referred to in the international literature is the Rey Auditory Verbal Learning Test (RAVLT) [5], which evaluates memory and learning. The study of this instrument is important to delineate the performance profile of the Brazilian population.
This study intends to investigate the influences of age, sex and educational level on the performance on RAVLT of normal adults, and suggest scores to assess memory according to this instrument.
Method
One hundred and thirty Brazilians participated in this study, divided into two age groups (young adults: aged between 34 and 59 years; and elderly individuals: aged between 60 and 85 years) and two educational levels (low: 4 to 8 years of formal education; and high: 9 or more years of education). The inclusion criteria were: age between 34 and 85 years and at least 4 years of schooling. The exclusion criteria were: uncorrected hearing impairment; severe neurological or psychiatric disorders; chronic psychotropic drug use; traumatic brain injury with loss of consciousness of 15 minutes or more; previous history of stroke or epilepsy. This study was approved by the Research Ethics Committee of UNIFESP, under protocol no. 0993/06.
The RAVLT was administered after having been translated and adapted to Brazilian Portuguese [6]. The test was administered according to its original standards [5]: fifteen translated nouns (list A) were read by the examiner, followed by the subjects' free recall (A1-A5), five times consecutively. After the fifth recall, the examiner read a further list (list B) of 15 new words, followed by the subjects' free recall (B). Immediately afterwards and 20 minutes later, further recalls of list A (A6 and A7) were assessed. A recognition test, with 15 words from list A intermingled with 15 new words, was then read to the subjects, who had to identify which words belonged to the original list and which were new.
A method developed in MOANS (Mayo's Older Americans Normative Studies) [7] was used to analyze learning by standardizing scores: the Learning Over Trials (LOT) score, which is the total number of words recalled over all five trials minus five times the number of words obtained in the first trial.
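For clarity, the LOT score defined above reduces to a single arithmetic expression; the small Python sketch below, with an invented example subject, is ours and not part of the original study materials.

def learning_over_trials(trials):
    # trials: words recalled on A1-A5; LOT = (A1 + ... + A5) - 5 * A1.
    assert len(trials) == 5, "expects the five acquisition trials A1-A5"
    return sum(trials) - 5 * trials[0]

# Hypothetical subject recalling 6, 8, 10, 11 and 12 words: LOT = 47 - 30 = 17.
print(learning_over_trials([6, 8, 10, 11, 12]))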
We examined the scores of A1, A7, recognition and LOT to observe the performance of the subjects on immediate recall, delayed recall, recognition and learning, respectively.
Multiple linear regression was used to investigate the individual relationship between the independent (sex, age and education) and dependent (memory test results of the A1, A7, recognition and LOT subitems) variables. The assumptions of these analyses were verified. A p value of <0.05 was considered to indicate statistical significance; all tests were two-tailed. All statistical analysis was performed with the Statistical Package for the Social Sciences for Windows (version 11.5.1).
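The original analysis was run in SPSS; the following statsmodels sketch shows an equivalent specification for one dependent subitem. The data frame, column names, and values are hypothetical and only illustrate the model form (age, education, and sex entered simultaneously as predictors); one such model would be fitted per subitem (A1, A7, recognition, LOT).

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical subject-level data for illustration only.
df = pd.DataFrame({
    "A1":        [5, 7, 6, 4, 8, 6, 5, 9],
    "age":       [36, 62, 45, 70, 34, 58, 66, 40],
    "education": [9, 4, 12, 5, 15, 8, 4, 11],
    "sex":       ["F", "M", "F", "F", "M", "F", "M", "F"],
})

model = smf.ols("A1 ~ age + education + C(sex)", data=df).fit()
print(model.summary())  # coefficients and two-tailed p-values for each predictor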
Results
One hundred and thirty subjects were assessed: 65 from the young adult group and 65 from the elderly group. Of these 130 subjects, 75% were women.
The scores obtained in A1, A7, recognition and LOT subitems of the RAVLT are shown in Table 1.
Multiple linear regression analysis was performed to verify which factors impacted RAVLT scores. The results of the A1, A7, recognition and LOT subitems were used as dependent variables, whereas sex, age and education (Table 2) were used as independent variables.
Age correlated strongly with all RAVLT subitems, independently of educational level and sex. Education was strongly correlated with all subitems except the learning RAVLT subitem. Sex was strongly correlated only with the delayed recall subitem.
Table 3 shows a descriptive analysis of the results obtained for the RAVLT subitems analyzed, with subjects divided by age and education.
Discussion
The main finding of this study is that age influenced the scores of all the subitems analyzed; however, the positive and significant effect of education was not observed in learning. Sex had no influence on any subitem, except for delayed recall.
The negative effect of age on the RAVLT score is well described [4]; however, only a few researchers have evaluated the sex variable. The performance on the A1, A7, recognition and LOT subitems revealed an increase in the number of words recalled between the two attempts (A1 and A7). This improvement could be attributed to the learning that occurred following consecutive readings of the list.
The analyses of these subitems indicated correlation between the scores, such that all the combinations were statistically significant. The mean and median scores were very close to the maximum possible score for the recognition test, which suggests a ceiling effect for normal subjects. However, individuals with impaired memory perform poorly on this test [8].
Our results are consistent with those observed by Diniz et al. [9]: the means of A1 and A7 in the adult and elderly groups were similar to one another in both studies. However, the test version used was different from ours.
The score analysis of immediate recall (A1) indicates an influence of education and age, independently, but not of sex. The main component of working memory responsible for this task is the articulatory process of the articulatory loop, whose function is to hold auditory information in memory using subvocal reverberation. Increasing age has a negative influence on the functions of the articulatory process of the articulatory loop. All previous studies investigating the influence of age on A1 scores noted a negative effect for this variable [8-14].
Some studies have investigated the influence of schooling on A1 scores and also found a strong and positive relationship between schooling and A1 subitem performance [13,14]. In contrast to our findings, several studies suggest that sex influences A1 subitem performance, with women outperforming men [8,14]. Nevertheless, this effect was not observed in a previous Brazilian study [9].
The results on delayed recall (A7) were influenced by sex, education and age, independently. Better performance was related to younger age, higher educational level, and female sex. This task involves two components of working memory: the articulatory loop and the episodic buffer [15]. The function of the episodic buffer is to integrate information from the articulatory and visuo-spatial loops, along with long-term memory material [16]. Thus, to achieve delayed recall, the subject needs to use both the articulatory loop, to store the audibly received material and maintain it active, and the episodic buffer. The poorer performance of the elderly subjects and those with a lower educational level can be attributed to the articulatory loop and episodic buffer, based on the hypothesis that the aging process leads to impairment in these mechanical components. Moreover, lower educational levels would indicate less efficient use of these components.
Concerning sex influence, our findings indicated that women, independently of age or education, presented better use of the articulatory loop and the episodic buffer.
Earlier studies that analyzed the influence of age on A7 scores have also verified the negative influence of this variable [9,13,14]. However, the effects of education on A7 scores have been examined in only a few studies in the literature [13,14], all of which demonstrated this variable's positive influence on A7 performance.
The influence of sex on A7 scores has also been examined in previous studies [9,14], with contradictory results.
Scores for recognition were influenced, independently, by age and education, but not by sex.
Poorer performance on recognition tasks by elderly and less-educated subjects can, as for the A7 subitem, be attributed to inefficiency of the articulatory loop and the episodic buffer, since recognition also requires the participation of these two components.
The functions of these two components are also impaired with aging and are underused with a lower educational background. However, in recognition the articulatory loop plays a less significant role than in recall. Of the two components required for the recognition task, the episodic buffer is the one most jeopardized by the effects of age and education.
The present study results corroborate findings of other studies that examined the effect of age on the scores of the recognition subitem. The negative influence of age revealed in the present study has been reported in earlier work [9,11,13,14,16]. However, a number of studies in the literature did not confirm the negative effect of age on this test, stating that this variable had no influence [8,12]. All the studies that examined the effects of educational level on recognition scores found this variable to have a positive influence; however, this effect was not always independent of age [11]. An influence of sex was also absent in previous studies [8,9], but conflicting results exist in the literature [14].
The recognition task, however, was not administered in the same manner in all the studies cited [8,9,11,12,14,16].
Differences in the procedure for administering the recognition test hamper comparison of study results. This applies both to the test scores obtained and to how the variables age, education and sex influence the results.
Regarding the LOT subitem performance, statistically significant influence was observed for age, but not sex or education.
Learning, akin to the delayed recall and recognition tasks, requires the participation of the articulatory loop and episodic buffer. During learning, the articulatory loop allows the input of more permanent information for storage in long-term memory [17]. The connection between working memory and long-term memory is made possible through the episodic buffer [15].
This allows us to assume that younger subjects perform better on the immediate recall task due to more effective use of the articulatory process of the articulatory loop, as seen in the A1 results. This younger group was also able to considerably improve performance on subsequent recalls. Improvement occurs due to the input and storage of more permanent information in long-term memory, enabled by the articulatory loop and episodic buffer. This results in a higher learning score than that obtained by older individuals, who have poorer immediate recall performance and worsening performance on subsequent recall trials.
The lack of influence of education on learning, although present in immediate recall, allows us to infer that subjects with higher educational levels present better results on A1 than those with lower educational levels, pointing to better use of the articulatory process. The articulatory loop and episodic buffer, by themselves, do not aid in recalling a sufficiently large number of words during the five trials to result in a higher learning score in individuals with greater schooling than in subjects with a lower educational background.
Comparing our results with the literature, LOT subitem scores were negatively influenced by age, as observed in previous studies [8,9,11,14,16]. The absence of influence of this variable on learning scores has been reported in a previous study [12]. Regarding the influence of education, the findings in this study differ from those found in other studies stating a positive effect of education on the learning score; depending on the subject's age, this effect was not always evident [11]. Different results were also found for the influence of sex on the learning score. A number of studies [8,14] have reported an influence of sex, with females scoring higher than males, although this was not observed in the present study.
The learning measurement used in our study was not the same as that used in previous studies investigating learning over trials, which adopted different learning score parameters [8,9,11,12,14,16]. The different methods used in the various studies limit the comparison of the results.
Analysis of our data for age and education yielded the highest scores in young adults with higher education levels. The lowest scores were observed in the elderly group with lower education levels. These results reflect the tendency for age and education to impact test results negatively and positively, respectively. Several studies in the literature confirm that the younger and more schooled an individual, the better their performance on neuropsychological tests [8,9,11-14,16].
The present study also provided scores for the A1, A7, recognition and LOT subitems of the test, which are suggested for the memory evaluation of Brazilian subjects according to age and education. Although Brazilian results are available for the test in the literature [9], the differences in the methodology used should be noted. These differences involve the learning measurement, the recognition test, and the sample characterization, since the age groups in that study have different educational levels. Future studies involving the application of the RAVLT in the Brazilian population should include a greater sample size to standardize the scores obtained. In practical terms, the main contribution of this study was to highlight the influence of the variables age, sex and educational background on the test scoring process.
Table 1 .
Descriptive analysis of subject performance on RAVLT subitems.
A1: First attempt at list A recall; A7: Seventh attempt at list A recall; LOT: learning over trials.
Table 2 .
Multiple linear regression analysis to demonstrate the relationship between the demographic data and RAVLT subitem scores. A1, A7, Recognition and LOT were used as dependent variables.
A1: First attempt at list A recall; A7: Seventh attempt at list A recall; LOT: learning over trials.
Table 3 .
Means and standard deviations of number of words recalled on A1, A7, REC and LOT subitems of the RAVLT by age and educational level.
A1: First attempt at list A recall; A7: Seventh attempt at list A recall; Rec: recognition; LOT: learning over trials.
|
2017-06-10T08:16:41.316Z
|
2009-06-01T00:00:00.000
|
{
"year": 2009,
"sha1": "cde18b786a35aaf11245e487b96c6dca5cb8e0b0",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/anp/a/fXpzZyZwSCX9ffRwDTXwhRM/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cde18b786a35aaf11245e487b96c6dca5cb8e0b0",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
}
|
688488
|
pes2o/s2orc
|
v3-fos-license
|
Long-lasting control of Anopheles arabiensis by a single spray application of micro-encapsulated pirimiphos-methyl (Actellic® 300 CS)
Background Pyrethroid-resistant mosquitoes are an increasing threat to malaria vector control. The Global Plan for Insecticide Resistance Management (GPIRM) recommends rotation of non-pyrethroid insecticides for indoor residual spraying (IRS). The options from other classes are limited. The carbamate bendiocarb and the organophosphate pirimiphos-methyl (p-methyl) emulsifiable concentrate (EC) have a short residual duration of action, resulting in increased costs due to multiple spray cycles, and user fatigue. Encapsulation (CS) technology was used to extend the residual performance of p-methyl. Methods Two novel p-methyl CS formulations were evaluated alongside the existing EC in laboratory bioassays and experimental hut trials in Tanzania between 2008-2010. Bioassays were carried out monthly on sprayed substrates of mud, concrete, plywood, and palm thatch to assess residual activity. Experimental huts were used to assess efficacy against wild free-flying Anopheles arabiensis, in terms of insecticide-induced mortality and blood-feeding inhibition. Results In laboratory bioassays of An. arabiensis and Culex quinquefasciatus both CS formulations produced high rates of mortality for significantly longer than the EC formulation on all substrates. On mud, the best performing CS killed >80% of An. arabiensis for five months and >50% for eight months, compared with one and two months, respectively, for the EC. In monthly bioassays of experimental hut walls the EC was ineffective shortly after spraying, while the best CS formulation killed more than 80% of An. arabiensis for five months on mud, and seven months on concrete. In experimental huts both CS and EC formulations killed high proportions of free-flying wild An. arabiensis for up to 12 months after spraying. There was no significant difference between treatments. All treatments provided considerable personal protection, with blood-feeding inhibition ranging from 9-49% over time. Conclusions The long residual performance of p-methyl CS was consistent in bioassays and experimental huts. The CS outperformed the EC in laboratory and hut bioassays but the EC longevity in huts was unexpected. Long-lasting p-methyl CS formulations should be more effective than both p-methyl EC and bendiocarb considering a single spray could be sufficient for annual malaria control. IRS with p-methyl 300 CS is a timely addition to the limited portfolio of long-lasting residual insecticides.
Background
Indoor residual spraying (IRS) has produced profound changes in malaria burden in a range of settings with several different insecticide classes [1]. Interruption of malaria transmission in the USA was achieved partly through DDT house-spraying and led to the initiation of the World Health Organization (WHO)-led Global Malaria Eradication Scheme (1955-1969) [2]. Malaria was subsequently eradicated from Europe, parts of the Soviet Union, Israel, Lebanon, Syria, Japan, and Chinese Taiwan. Despite numerous positive outcomes, the benefits were not on the global scale that was anticipated. There were about 20 pilot IRS projects in sub-Saharan Africa between the mid 1950s and early 1960s [3] that demonstrated IRS significantly reduced malaria transmission even in highly endemic (intense transmission) areas [4]. Despite this, Africa was largely sidelined for eradication due to the high malaria burden, while elsewhere dramatic reversals were seen once IRS spraying was prematurely reduced in countries such as India and Sri Lanka [5,6]. As a result, interest in IRS subsequently waned and it was not taken to scale in most sub-Saharan malaria-endemic countries as part of the global eradication campaign [4,7].
Southern Africa was the exception. IRS programmes using DDT began in the 1960s and were supported for several decades, with later introduction of pyrethroids and carbamates. Countries with sustained IRS activities in Africa, including South Africa, Zambia, Namibia, Swaziland, Zimbabwe, and Botswana, achieved sizeable reductions in malaria vector populations and malaria incidence [7]. Focal IRS in the southern Africa region has remained important in areas of higher malaria burden and at risk of epidemics. In 2007, about 14 million people in southern Africa were protected by IRS [4,7].
In 2006 WHO reaffirmed the importance of IRS as a primary intervention for reducing or interrupting malaria transmission [8,9]. In recent years an unprecedented level of funding has initiated new IRS campaigns across sub-Saharan Africa, often in parallel with long-lasting, insecticide-treated bed nets (LLIN) distribution. In 2012 President's Malaria Initiative (PMI) supported IRS in 15 African countries, covering seven million structures [10]. The implementation of new IRS programmes, together with sustained IRS programmes in Southern Africa has elevated the importance of IRS as a primary intervention for malaria control in Africa. Greater emphasis has been placed on ensuring that IRS in Africa can be sustained [11].
Pyrethroids are the only group of insecticides approved by the WHO Pesticide Evaluation Scheme (WHOPES) for LLINs [12]. Pyrethroid insecticides have also been preferred for IRS in Africa in recent years due to low cost, longevity of three to six months, and low mammalian and non-target toxicity [13]. Subsequently, pyrethroid resistance has become widespread in malaria vectors across Africa [14]. Reduced efficacy of insecticide interventions in areas with pyrethroid-resistant malaria vectors has been demonstrated in several settings. A notable example is South Africa, where four years after the introduction of deltamethrin IRS a four-fold increase in malaria cases was recorded in KwaZulu-Natal, coinciding with re-invasion of pyrethroid-resistant Anopheles funestus s.s. This trend was reversed after the re-introduction of IRS with DDT in 2000 and the new introduction of artemisinin-based combination therapy in 2001, with an accompanying decline in malaria cases of 91% [15]. On Bioko Island, Equatorial Guinea, a single spray round with a pyrethroid failed to reduce the population density of pyrethroid-resistant Anopheles gambiae s.s. Subsequent spraying of a carbamate significantly reduced the number of An. gambiae s.s. caught exiting in window traps, thus demonstrating the utility of non-pyrethroid IRS [16].
The residual lifespan of alternative IRS insecticides is of key importance. Based on the WHOPES recommendation, DDT is the longest-lasting IRS insecticide, with a duration of effective action greater than six months [17]. However, the Stockholm Convention on Persistent Organic Pollutants stipulates that 'countries using DDT are encouraged to reduce and eliminate the use of DDT over time and switch to alternative insecticides' [18]. Carbamates are a commonly used alternative to DDT and pyrethroids, and were sprayed in ten African countries in 2012 through PMI funding. Based on the WHOPES recommendation, bendiocarb has a short residual action of only two to six months [17]. In areas of intense year-round (perennial) transmission, multiple spray rounds of short-lasting insecticides are expensive, logistically demanding, and inconvenient to householders [8]. Despite added impetus for the development of new public health insecticides, notably from the Innovative Vector Control Consortium (IVCC), alternative classes of insecticide for public health use are emerging slowly [11]. For improved cost-effectiveness of IRS programmes it is important to develop new long-lasting formulations of currently available insecticides [19].
Encapsulation technology can extend the residual performance of established insecticides. Pirimiphos-methyl (p-methyl) is an organophosphate insecticide, most commonly and intensively used in the protection of cereal grain [20]. Several small and medium scale IRS trials conducted since the 1970s showed high toxicity to anopheline mosquitoes [21], leading to WHOPES' recommendation. According to WHOPES, p-methyl EC formulation has a relatively short residual IRS activity of two to three months but was used successfully for IRS in Malawi and Zambia in 2012 [22]. The overall aim of this study was to evaluate longevity of two capsule suspension (CS) formulations in comparison with emulsifiable concentrate (EC).
Insecticide formulations
Two capsule suspension (CS) formulation variants of Actellic 300CS, containing 300 g/L p-methyl and coded as CS 'B' and CS 'BM' (Syngenta, Basel, Switzerland), were evaluated alongside the existing EC formulation (Actellic 50EC®, Syngenta, Basel, Switzerland) in laboratory bioassays and experimental hut trials at 1 g/m². Lambda-cyhalothrin CS (0.03 g/m²) (Icon CS®, Syngenta, Basel, Switzerland) is a WHOPES-recommended formulation that was sprayed in Tanzania as part of the national malaria control programme (NMCP) from 2007-2012 [23]; it was included in laboratory bioassays as a positive control but was not sprayed in experimental huts (due to the limited number of available huts).
Laboratory assessment of residual performance
Cone bioassays to assess insecticidal duration on sprayed mud, concrete and plywood substrates were conducted every month based on WHO guidelines [9]. Substrates were stored at ambient temperature and humidity (~20-28°C, 40-80% RH). For each formulation three blocks were sprayed and ~nine replicates of ~ten female Anopheles arabiensis were tested (i.e. three replicates per block), for an exposure of 60 minutes. This is longer than the 30-minute standard exposure time specified by WHO for IRS cone bioassays, regardless of the insecticide [9]. Test mosquitoes were transferred to 150 ml paper cups with 10% glucose solution provided ad libitum, and mortality was recorded after 24 hours. Substrates were sprayed at an application rate of 40 ml/sq m using a Potter Tower Precision Sprayer (Burkard Scientific, Uxbridge, UK). The resistance status of the insectary-reared female test mosquitoes, An. arabiensis Dondotha, Culex quinquefasciatus TPRI and Cx. quinquefasciatus Muheza, was determined in WHO susceptibility tests (Table 1).
Indoor residual spraying experimental hut trials
An experimental hut trial was conducted at the Kilimanjaro Christian Medical University College (KCMUCo) Field Station in the Lower Moshi Rice Irrigation Zone (3°22'S, 37°19'E) nightly for 12 months between December 2008 and December 2009. The walls and ceiling of the p-methyl EC hut were covered with untreated plastic sheeting for one month in January 2010 to investigate the possibility of mosquito movement between huts. To determine the relative contribution of the sprayed mud and concrete walls to mortality of An. arabiensis, the palm thatch ceiling was covered with unsprayed plastic sheeting every second week for two months from March to April 2010 in all huts. Further description of the supplementary experimental hut tests is included in the results section. Anopheles arabiensis densities were heavily dependent on rice cropping cycles, with flooded rice fields adjacent to the Field Station being the main breeding site. In 2009, wild An. arabiensis were tested in WHO cylinder bioassays and were found to be susceptible to organophosphates, including p-methyl, and resistant to permethrin (Table 2).
Verandah experimental huts were constructed to a design described by WHO [9]. The working principle of these huts has been described previously [24]. The interior walls of experimental huts were plastered with either mud or concrete. A palm thatched mat, typical of organic fibres used in some rural housing [25], was affixed to the wooden ceiling before spraying.
The walls and ceiling were sprayed at an application rate of 40 ml/sq m with a Hudson X-pert sprayer (H D Hudson Manufacturing Company, Chicago, Ill, USA) with a flat-fan 8002E nozzle [26]. A constant flow valve (CFV) was not used, but compression was maintained at 55 psi by repressurizing after each swath. The flow rate was 840 ml/minute. A guidance pole was used to ensure a consistent vertical swath 71 cm wide, and swath boundaries were marked out with chalk on walls and ceiling to improve spray accuracy. High performance liquid chromatography (HPLC) was not done to confirm the accuracy of the spray concentration. Verandahs were protected during spraying by blocking the open eaves with a double layer of plastic and Hessian sackcloth. IRS treatments were randomly assigned to huts. Rotation of IRS treatments was not feasible as the mud and concrete substrates were permanent. Hut position is known to bias the number of mosquitoes entering a hut, but is unlikely to affect the primary proportional outcomes, per cent mortality and per cent blood-fed, among those entering the huts.
Table 1. Results of susceptibility testing with insectary strains exposed for one hour using WHO diagnostic dosages in cylinder bioassays.
The following treatments were sprayed in a total of six experimental huts:
Pirimiphos methyl CS 'B', 1 g/sq m (one mud and one concrete walled hut)
Pirimiphos methyl CS 'BM', 1 g/sq m (one mud and one concrete walled hut)
Pirimiphos methyl EC, 1 g/sq m (one mud walled hut)
Unsprayed (one mud walled hut)
The trial protocols were based on WHOPES procedures for small-scale field trials for IRS [9]. Adult trial participants gave informed consent and were offered free medical services during the trial and up to three weeks after the end of participation. An adult volunteer slept in each hut nightly from 20:30 to 06:30. Sleepers were rotated between huts on successive nights to reduce any bias due to differences in individual attractiveness to mosquitoes. Each morning, mosquitoes were collected from the verandahs and window traps of the huts and recorded as blood-fed or unfed and dead or alive. Live mosquitoes in the sprayed room were not collected, in order to allow for natural resting times on treated surfaces, and were only collected after exiting to verandahs or window traps. 10% glucose pads were placed in the window traps and verandahs to prevent death by starvation. Live mosquitoes were transferred to 150 ml paper cups and provided with 10% glucose solution before scoring delayed mortality after 24 hours. All members of the An. gambiae species complex identified by morphological characteristics were assumed to be An. arabiensis based on recent PCR identification [27].
Experimental hut bioassays
Cone bioassays of the sprayed walls and ceiling were conducted monthly using sugar-fed, two- to five-day-old female An. arabiensis Dondotha, for an exposure of 60 minutes. In each experimental hut, four to eight replicates of ten female mosquitoes were tested on the wall and ceiling surfaces. Cones were positioned randomly for each test.
Fumigant activity
The possibility of fumigant activity of the treatments was determined using insectary-reared wild female F1 An. arabiensis (no tarsal contact) [9]. Wire cages measuring 15 cm × 10 cm × 10 cm covered with netting were hung in the corner of the room ~5 cm from the wall, and 25 mosquitoes were exposed overnight. Testing was done monthly for all treatments until mortality decreased to low levels.
Analysis of laboratory assessment of residual performance
Treatments were compared according to the time interval since spray application for mortality to fall to 80% (based on WHOPES criteria) and 50% [9]. Mixed effect logistic regression models were used to fit mortality trajectories over time separately for each strain of mosquito (An. arabiensis Dondotha, Cx. quinquefasciatus TPRI and Cx. quinquefasciatus Muheza), treatment (p-methyl EC, CS 'B' and CS 'BM' and lambda-cyhalothrin CS) and substrate (mud, concrete and plywood). All statistical modelling was performed on the log odds scale at the individual mosquito level and results back-transformed to the proportion scale. Linear, quadratic and cubic terms in time were specified as predictors in the models to allow for potential drops and then levelling off in mortality rates over time. A random effect was specified in all models to account for similarities in mosquitoes tested at the same time point and for potential behavioural clustering within the same test batch. The cubic equations given by the estimates from the polynomial models were solved to obtain estimates of the time points at which mortality fell to 80 and 50%. Ninety-five per cent confidence intervals (CI) were estimated using the bias-corrected bootstrap method with 2,000 replications. Differences between treatments in the estimated time for mortality to fall to 80 and 50% were calculated, and statistically significant differences were inferred from the bootstrap 95% CI (p = 0.05).
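As an illustration of the threshold-solving step, the sketch below inverts a cubic log-odds model of the form logit(p) = b0 + b1*t + b2*t^2 + b3*t^3 to find when predicted mortality crosses 80% and 50%. The coefficients are hypothetical placeholders, not the estimates fitted in this study.

```python
import numpy as np

def logit(p):
    """Log odds of a proportion p."""
    return np.log(p / (1.0 - p))

def times_mortality_falls_to(target, coefs, t_max=12.0):
    """Solve b0 + b1*t + b2*t**2 + b3*t**3 = logit(target) for t (months).

    coefs: (b0, b1, b2, b3) from a logistic model fitted on the log-odds
    scale with linear, quadratic and cubic terms in time since spraying.
    Returns the real roots that fall within the study window [0, t_max].
    """
    b0, b1, b2, b3 = coefs
    # numpy.roots expects the highest-order coefficient first
    roots = np.roots([b3, b2, b1, b0 - logit(target)])
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return sorted(t for t in real if 0.0 <= t <= t_max)

# Hypothetical coefficients for one treatment/substrate combination
coefs = (3.2, -0.9, 0.12, -0.008)
print("months to 80% mortality:", times_mortality_falls_to(0.80, coefs))
print("months to 50% mortality:", times_mortality_falls_to(0.50, coefs))
```

In practice, the same solve would be repeated over bootstrap resamples of the fitted coefficients to obtain the bias-corrected 95% CIs described above.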
Analysis of experimental hut bioassays
Analysis of hut bioassays was similar to that described for laboratory bioassays. For wall assays, separate models were fitted for each hut. For ceiling assays, data from huts treated with the same insecticide (but with different wall materials) were combined. There was little evidence of a departure from a linear decrease in the log odds of mortality over time for either the wall or ceiling assays, so a linear term in time was specified as the only predictor in all models.
(Table 2 caption: Two- to five-day-old sugar-fed offspring (F1) of Anopheles arabiensis collected from cattle-sheds in Lower Moshi were exposed for one hour in WHO cylinders lined with papers treated with diagnostic dosages of malathion and permethrin, and a range of dosages of p-methyl.)
Analysis of experimental hut trial
The number of mosquitoes collected from the two closed verandahs was multiplied by two to adjust for the unrecorded escapes through the two open verandahs, which were left unscreened to allow routes for entry of wild mosquitoes via the gaps under the eaves [9,24]. The data were analysed to show the effect of each treatment in terms of:
Overall mortality = total proportion of mosquitoes dead on the morning of collection, plus delayed mortality after holding for a total of 24 hours;
Blood-feeding inhibition = percentage of blood-fed mosquitoes from a treated hut relative to the percentage from the unsprayed negative control;
Mortality-feeding index = (total blood-fed dead/total blood-fed) − (total unfed dead/total unfed).
The null hypothesis was that mortality and blood-feeding are independent, so that mosquitoes surviving or killed by the treatment have an equal probability of having fed or not. Deviation from the null hypothesis shows whether there is an association between feeding and mortality and may indicate the sequence of events experienced by individual mosquitoes after entering the hut. Interpretation of the mortality-feeding index:
0 = equal chance of unfed and blood-fed mosquitoes being killed;
0 to −1 = deviation towards unfed mosquitoes being killed;
0 to 1 = deviation towards blood-fed mosquitoes being killed.
Separate mixed effect logistic regression models were fitted to the mortality and blood-feeding data. The main predictors in each model were treatment, one or more time parameters and interactions between treatment and each of the time terms. There was little evidence of a departure from a linear decrease in the log odds of mortality over time since spraying, so only linear terms in time were specified in the statistical model for mortality. A model with linear, quadratic and cubic terms in time provided the best fit to the blood-feeding data. A random effect was specified in both models to account for similarities among mosquitoes entering huts on the same day and potential behavioural clustering. Both models controlled for sleeper. Predicted trajectories were plotted over the duration of the 12 months for mortality alongside actual results.
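As a small worked example of the index defined above (the counts are hypothetical):

```python
def mortality_feeding_index(fed_dead, fed_total, unfed_dead, unfed_total):
    """(total blood-fed dead / total blood-fed) - (total unfed dead / total unfed).

    0 -> fed and unfed mosquitoes equally likely to be killed;
    towards -1 -> deviation towards unfed mosquitoes being killed;
    towards +1 -> deviation towards blood-fed mosquitoes being killed.
    """
    return fed_dead / fed_total - unfed_dead / unfed_total

# Hypothetical counts for one hut: 45 of 60 blood-fed and 28 of 40 unfed killed
print(mortality_feeding_index(45, 60, 28, 40))  # 0.05, i.e. close to 0
```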
Laboratory residual bioassay
The duration of residual activity of the p-methyl formulations on mud, concrete and plywood is presented in Table 3, and the differences in residual activity are shown in Table 4. Using the times for mortality to remain above 80% and above 50% as measures of residual efficacy, there was evidence that the two CS formulations showed significantly longer activity than the EC on mud and concrete substrates, both for An. arabiensis and for the two strains of Cx. quinquefasciatus, but differences between the two CS formulations were non-significant in most instances. There was no evidence that treatment performance differed between species or strains.
Residual activity of formulations in experimental huts
One-hour cone bioassays of An. arabiensis were conducted on walls and ceilings at monthly intervals. Both CS formulations showed improvement over the EC on mud, concrete and palm thatch. Mortality was 100% one week after spraying the CS 'B' and CS 'BM' formulations on mud and concrete walls (Figure 3). Mortality was >80% for CS 'B' for 4.8 months (95% CI: 1.9-6.9) on mud and 7.0 months (95% CI: 5.4-8.3) on concrete, compared with 0.9 months (95% CI: 0-4.4) and 6.6 months (95% CI: 3.0-9.0) for CS 'BM', respectively (Table 5). The EC was ineffective on mud and killed only a small proportion of mosquitoes one week after spraying.
Twelve-month experimental hut trial against wild free-flying Anopheles arabiensis
All formulations of p-methyl (CS 'B', CS 'BM', and EC) were highly effective against free-flying wild An. arabiensis shortly after spray application (Figure 5). Mortality gradually decreased over time for all formulations up to five months after spraying, followed by a small increase between months five to seven, possibly due to climatic changes. Subsequently, between months seven to 12 there was a gradual decrease in mortality (Figure 5). Overall mortality rates remained high for both CS treatments up to 12 months after spraying, regardless of wall substrate. P-methyl EC performed as well as CS 'B' and CS 'BM' after 12 months, based on 95% CIs from the estimated curves. Twelve months after spraying, predicted mortality was 62.8% (95% CI: 54.4-71.2) for EC, 72.0% (95% CI: 64.5-79.6) for CS 'B' (mud) and 69.5% (95% CI: 62.0-77.0) for CS 'BM' (mud) (Table 6).
Blood-feeding was high in the unsprayed hut throughout the study but did show considerable variation over time, ranging from 40% (after nine months) to 90% (five and 12 months) (Figure 6). The two periods of lowest percentage blood-feeding in the unsprayed hut, one and nine months after spraying, coincided with the periods of highest mosquito density during rice transplantation cycles (Figure 6). For the first month after spraying, treated huts provided no protection from being bitten by host-seeking An. arabiensis. Between two and 12 months after spraying, all treatments provided some degree of personal protection (Figure 6). Blood-feeding inhibition was relatively high after six and nine months across all treatments, ranging between 39-49% for CS formulations and 36-43% for EC (Table 7). Blood-feeding inhibition was similar for both CS and EC formulations over the trial. The mortality-feeding index, (total blood-fed dead/total blood-fed) − (total unfed dead/total unfed), was 0.08 and 0.05 for CS 'B' and 0.08 and 0.03 for CS 'BM' on concrete and mud walled huts, compared with 0.07 for EC and 0.15 for the unsprayed hut (mud walls). For all treatments the mortality-feeding index was close to 0, indicating mosquitoes had an equal chance of surviving whether fed or unfed.
(Table footnote: † Indicates that statistical models produced estimates outside the study period: for Culex quinquefasciatus TPRI, estimated mortality for Actellic CS-B on mud was higher than 50% throughout the entire study period; for Culex quinquefasciatus Muheza, estimated mortality for Lambda CS was lower than 80% throughout.)
Fumigant activity tested in small cages resulted in 100% mortality of An. arabiensis F1 one week and two months after spraying for the CS 'B', CS 'BM' and EC formulations. A large decrease to 42% fumigant mortality was recorded after three months for CS 'BM' (concrete), with fumigant mortality less than 10% for all other treatments.
(Table footnote: † Indicates that statistical models produced estimates outside the study period for one or more of the treatments or their 95% CI; treatment differences cannot therefore be estimated.)
Supplementary explanatory bioassays in experimental huts
The walls and ceiling of the p-methyl EC hut were covered with untreated plastic sheeting between months 12-13. This was done to investigate the possibility of mosquitoes moving between huts, picking up a lethal dosage of p-methyl CS before exiting, flying into the EC hut and dying there. All other huts were left uncovered. Mortality in the covered EC hut was 29%, which was greater than in the unsprayed hut, 1% (P = 0.001), but less than in huts sprayed with CS 'B' (65% and 78%) and CS 'BM' (67% and 74%) on concrete and mud walls, respectively (P = 0.001) (Table 8). The proportion of An. arabiensis that blood-fed was significantly higher in the covered EC hut (63%) than for the CS formulations (19-38%, P < 0.05) but was less than in the unsprayed hut, 94% (P = 0.001).
To determine the relative contribution of the sprayed mud and concrete walls to mortality of An. arabiensis, the palm thatch ceiling was covered with unsprayed plastic sheeting every second week between months 15-16. As the palm thatch ceiling remained highly insecticidal over the duration of the study (Figure 4), the hypothesis was that it masked any differences in efficacy between the concrete and mud walls (Figure 3). Covering the ceiling had little impact on overall mortality trends for the EC hut (mud), with 43% mortality when uncovered and 46% when covered (P = 0.255) (Table 8). For both CS 'B' and CS 'BM', any differences in mortality after covering the ceiling were small for both mud and concrete huts.
Extended cone bioassays of up to 12 hours were undertaken, as may occur when mosquitoes enter a house early in the evening to blood-feed and subsequently rest on treated surfaces until the following morning before exiting. With one-hour exposure four months after spraying, CS 'B' and CS 'BM' killed a far greater proportion (P = 0.001) of An. arabiensis than the EC, with mortality of 18% for EC compared with 57% and 79% for CS 'B' and CS 'BM' (Figure 7). With a longer exposure of two hours, the EC killed 88% of An. arabiensis compared with 100% for the CS formulations. A similar trend was observed after ten months: the EC killed 15% with one-hour exposure but 73% with a four-hour exposure, compared with 80% for CS 'BM' (P = 0.401) and 97% for CS 'B' (P = 0.014). After 17 months, mortality was low for both CS 'B' (20%) and EC (20%) with one-hour exposure, but increased to 52% for EC, 72% for CS 'B', and 98% for CS 'BM' with 12-hour exposure.
(Table footnote: † Indicates that statistical models produced estimates outside the study period: in all cases estimates were lower than the specified mortality (50 or 80%, respectively) throughout the entire study period.)
Discussion
Laboratory bioassays showed that the p-methyl CS 'B' and CS 'BM' formulations were effective at killing high proportions (>80%) of An. arabiensis and Cx. quinquefasciatus for significantly longer than the EC formulation on mud, concrete and plywood substrates. The most important improvement was observed on mud. The EC was ineffective on mud and killed >80% of An. arabiensis and Cx. quinquefasciatus for one month or less. In contrast, the best-performing CS formulation killed >80% of An. arabiensis for five months and sustained control above 50% for longer than seven months. Similar longevity was observed in The Gambia, where p-methyl CS sprayed in village houses persisted for at least five months (when testing was ended) on mud and painted walls [28].
(Figure 5 caption: Mortality of wild Anopheles arabiensis freely entering experimental huts over 12 months after spraying. Data on wild mosquitoes recorded on a daily basis were variable; graphs of observed mortality over time plot data pooled for each month since spraying.)
Mud is a problematic substrate for IRS owing to loss of available insecticide through sorption. Early work in Tanzania in the 1960s characterized the performance of organophosphates and carbamates on various types of soil and showed rapid loss of efficacy on several types of mud, while on less porous substrates, such as wood, high levels of mortality were recorded over several months [29,30]. In the present study, microencapsulation substantially improved the surface bioavailability of p-methyl on mud. Mud or adobe is still a common wall material in rural, low-income areas of Africa. In Tanzania in 2010, 78% of houses were constructed from a form of mud, the most common types being mud plaster (27%), sun-dried mud bricks (28%) and burnt mud bricks (23%) [25].
Both CS formulations showed improved longevity over the EC on concrete and wood substrates in bioassays. The alkaline pH of concrete can rapidly degrade insecticides commonly used for IRS, particularly pyrethroids, resulting in reduced residual efficacy [17]. In laboratory bioassays on plywood, the CS formulations lasted for several months longer than the EC, and killed >80% of An. arabiensis 12 months after spraying. Wood is relatively non-porous, with a tendency for long residual bioavailability of organophosphates and pyrethroids [29,31]. Cone bioassays on mud and concrete experimental hut walls showed findings similar to the laboratory results and showed that both CS formulations were effective for significantly longer than the EC. For all bioassays in the laboratory and experimental huts, an exposure time of 60 minutes was used rather than the standard WHOPES 30 minutes exposure. It is likely that the residual duration of action would be shorter if tested using WHOPES guidelines.
Results for free-flying, wild An. arabiensis showed that huts sprayed with p-methyl CS formulations maintained high rates of mortality for up to 12 months after spraying. This finding is comparable to that in Benin, where 1 g/sq m of p-methyl sprayed in mud and concrete experimental huts killed around 75% of wild free-flying An. gambiae s.s. ten months after spraying [32]. In Tanzania, there was an increase in mortality for all formulations five to seven months after spraying, between May-July. This was the cool season, when the mean night-time temperature outdoors dropped to 20°C compared with 24°C inside the experimental huts (USB Wireless Touchscreen Weather Forecaster, Maplin, UK). This may have resulted in longer indoor resting times, which would explain the increase in mortality during this three-month period. It has been reported elsewhere that at higher altitude, where differences between indoor and outdoor temperature are greatest, indoor resting is more common [33,34].
(Table footnote: estimates are adjusted for sleeper and account for similarities among mosquitoes entering huts on the same day and potential behavioural clustering; BFI = blood-feeding inhibition compared to the untreated control.)
An unexpected finding was that the EC formulation matched the performance of the CS against wild free-flying An. arabiensis, despite being considered by WHOPES to have an effective duration of only two to three months [17,32]. Recent studies in Ghana on painted cement, and in Mozambique on several surfaces, showed high levels of mortality for the EC formulation more than four months after spraying, indicating that the EC can remain effective for a relatively long duration [35]. In this study the EC maintained high levels of mortality for wild free-flying An. arabiensis but paradoxically showed poor performance in one-hour cone bioassays on hut walls only weeks after spraying. Several explanations were postulated:
Mosquito resting location: Mortality in the EC hut may have been generated by tarsal contact with the palm thatch ceiling, with the mud walls providing a small proportion of overall mortality. Covering the ceiling with untreated plastic did not result in a decrease in mortality, indicating that mosquitoes were able to pick up a lethal dosage from the treated mud walls.
Mosquito movement between huts: It was plausible that mosquitoes were picking up a lethal dosage of p-methyl CS before exiting through open verandahs, flying into the EC hut and falsely being recorded as killed by the EC. Covering all sprayed surfaces (walls and ceiling) with untreated plastic for one month (13 months after spraying) in the EC hut should have resulted in low mortality rates similar to an unsprayed hut if there was no movement of mosquitoes between huts. When covered, mortality was 29%, which although slightly higher than the unsprayed hut, suggested that few mosquitoes were flying between huts. Throughout the trial mortality in the unsprayed control was <20%. This suggests that mortality was generated by insecticidal activity within each individual hut and any movement of mosquitoes between huts had a limited effect on mortality trends.
Mosquito resting duration: The standard exposure time specified by WHO for the IRS cone bioassay is 30 minutes, regardless of the insecticide [9]. This exposure time is probably suitable for excito-repellent insecticides such as pyrethroids and DDT. Resting times of blood-fed An. gambiae on a wall sprayed with a non-irritant insecticide, such as p-methyl, may be longer than 30 minutes. For this study, an exposure of one hour was selected for monthly bioassays, with supplementary bioassays of up to 12 hours. In the EC hut, the finding that one-hour bioassays killed a small proportion of An. arabiensis while hut collections showed high levels of mortality may indicate that mosquitoes either (i) rested for a short time and exited before picking up a lethal dosage or (ii) rested for several hours. Extended cone bioassays of two hours after four months and four hours after ten months showed high levels of mortality for both EC and CS formulations. Anopheles arabiensis may have rested on treated surfaces for several hours overnight, which may partially explain why EC mortality was similar to that of the CS formulations for wild, free-flying An. arabiensis. While this offers some understanding of why the EC was effective for a longer duration than expected, it does not provide a full explanation. As new insecticides are developed for IRS with low excito-repellency, WHOPES may have to revisit the standard 30 minutes exposure for IRS, if this period of exposure does not provide an accurate prediction of field performance.
The mortality-feeding index showed that unfed mosquitoes were equally likely to be killed by p-methyl as those blood-fed. The concept of IRS is to kill mosquitoes that blood-feed and then rest on treated surfaces while processing the blood meal. This finding indicates that some An. arabiensis rested on hut surfaces before attempting to blood-feed and explains why there was some protective effect of p-methyl IRS [36]. There were apparent seasonal changes in percentage blood-feeding in the unsprayed hut. The periods of lowest proportion blood-fed coincided with peak mosquito densities during rice transplantation. It is likely that a larger proportion of newly emerged An. arabiensis entered experimental huts from adjacent paddies for resting or sugar feeding, rather than host-seeking [37].
There was a fumigant effect of all formulations that killed a high proportion of mosquitoes in cage bioassays during the first two months after spraying. The microcapsules in the CS would have limited any fumigant effect because the majority of the active ingredient is enclosed within the capsule membrane; however, some active ingredient is also present in the external solution. Slow release of active ingredient from the microcapsules was sufficient for contact mortality but insufficient for a fumigant effect. Questionnaires of volunteers sleeping in the huts during the trial resulted in Actellic EC being ranked consistently last in terms of odour appeal, with typical comments including, "Smells like cabbage and white spirit" or, "Not pleasant and produces irritation". The CS formulations ranked better, and were generally considered to be much milder than the EC, with comments such as, "Smells like cow insecticide, appealing as not too strong".
Of 17 African countries sprayed with PMI-funded IRS in 2012, only one was classified as having pyrethroid-susceptible anophelines; the remainder had confirmed or emerging resistance [10]. The Global Plan for Insecticide Resistance Management (GPIRM) states that in areas of pyrethroid resistance IRS rotations should be used with non-pyrethroid insecticides [38]. Despite added impetus from the IVCC, there have been no new insecticides for IRS and LLINs since the pyrethroids in the 1980s [11]. As a result, the majority of African PMI-funded IRS programmes are currently spraying bendiocarb, which has a short residual efficacy of only two to six months and is relatively expensive [10,17]. In Malawi, where resistance to both pyrethroids and carbamates was detected, p-methyl EC was sprayed in 2011, but "although effective, the high unit cost substantially increased the IRS costs and PMI subsequently suspended direct support due to increased costs" [39]. Long-lasting p-methyl CS formulations should be more cost-effective than both p-methyl EC and bendiocarb, but this estimation is sensitive to both the duration of efficacy and the relative cost per unit area sprayed. Use of p-methyl IRS plus pyrethroid LLINs is preferable for resistance management to pyrethroid IRS plus pyrethroid LLINs, as p-methyl and pyrethroids have different modes of action, which should result in redundant killing of mosquitoes resistant to a single insecticide [40]. Cross-resistance of organophosphates and carbamates due to an altered acetylcholinesterase (AChE) target site is present at low frequency in limited parts of west and central Africa and may increase in frequency as a result of current IRS programmes using bendiocarb. Nevertheless, IRS with p-methyl CS should prove an effective solution for control of pyrethroid-resistant An. gambiae and, having received recent recommendation from WHO [41], is a welcome addition to the limited portfolio of long-lasting IRS.
Ethical approval
Ethical approval was granted from the review boards of LSHTM (5256) and Tanzania National Institute of Medical Research (NIMR/HQ/R.8c/Vol.I/24).
Serendipitous Identification of a Covalent Activator of Liver Pyruvate Kinase
Enzymes are effective biological catalysts that accelerate almost all metabolic reactions in living organisms. Synthetic modulators of enzymes are useful tools for the study of enzymatic reactions and can provide starting points for the design of new drugs. Here, we report on the discovery of a class of biologically active compounds that covalently modify lysine residues in human liver pyruvate kinase (PKL), leading to allosteric activation of the enzyme (EC50 = 0.29 μM). Surprisingly, the allosteric activation control point resides on the lysine residue K282 present in the catalytic site of PKL. These findings were confirmed by structural data, MS/MS experiments, and molecular modelling studies. Altogether, our study provides a molecular basis for the activation mechanism and establishes a framework for further development of human liver pyruvate kinase covalent activators.
Introduction
Most current small-molecule drugs are reversible binders designed to interact with their targets in non-covalent equilibrium conditions. [1] However, some of the most widely used drugs, such as acetylsalicylic acid, penicillin, and omeprazole, are based on covalent inhibition of the target enzyme. [2] The mode of action of most of these compounds has been discovered serendipitously rather than as a result of systematic development of reactive drugs. In fact, designing drug candidates that form covalent bonds with their targets has historically been considered to be highly risky due to the potential toxicity arising from promiscuous modification of other proteins, unwanted immune response, and idiosyncratic drug reactions. [3] Nevertheless, over 40 covalent modifier drugs are currently approved by the FDA. [4] Recently, a more coordinated effort to develop covalent drugs has begun. In the past decade, covalent targeting has been revitalised through the use of inhibitors targeting poorly conserved amino acids, and now provides the basis for a multitude of industrial drug discovery programs, especially in oncology. [3,5-7] Currently, small-molecule covalent activators are rare; studying them may reveal opportunities for drug development or aid in the discovery of molecular tools. [8,9] In particular, irreversibly bound compounds can eliminate competition with natural ligands or substrates. Their durable target engagement can further decouple pharmacodynamics from pharmacokinetics, provided that the protein of interest has a sufficiently slow turnover rate. For these reasons, irreversible covalent activators may be useful biological tools, providing a better understanding of endogenous enzyme activation, which may provide a valuable pharmacological mechanism for treatment of specific pathologies. [9] Depending on the protein microenvironment, amino acids such as serine, cysteine, lysine, tyrosine, threonine, aspartate and glutamate can be nucleophilic. Cysteine has long been used for covalent modifications due to its particular reactivity; several examples can be found in the literature. [10-12] In the last decade, interest in catalytic and non-catalytic lysine residues as targets for covalent chemical modifiers has increased. [13-16] Here, we report on the serendipitous discovery of a class of small-molecule activators of the liver isoform of pyruvate kinase (PKL). PKL catalyses the last step in glycolysis, converting phosphoenolpyruvate (PEP) and ADP to pyruvate and ATP, respectively. In humans, there are four pyruvate kinase isoforms (PKM1, PKM2, PKR, PKL), each of which is enriched in different tissues. [17-20] The L and R isozymes are transcribed from the PKLR gene by differential splicing of RNA; the M1 and M2 forms are produced from the PKM gene by differential splicing. The L type is the major isozyme in the liver; the R type is mainly in red blood cells. The M1 isoform is expressed in skeletal muscle and brain tissue; M2 is expressed in foetal tissue and in most cancer cells. This study provides new insights into the activation mechanism of PKL, highlighting a possible site for regulatory control of this enzyme.
Benzo[d]isothiazole-1,1-dioxide derivatives as PKL activators
Morgan et al. reported on an interesting new class of saccharin derivatives as covalent irreversible inhibitors of Leishmania mexicana pyruvate kinase (LmPK). [21] It was demonstrated that the compound 4-[(1,1-dioxo-1,2-benzothiazol-3-yl)sulfanyl]benzoic acid (1) reacted with an active-site lysine residue (K335 in LmPK), forming a covalent bond and sterically hindering the binding of ADP/ATP to LmPK (Figure 1). This lysine residue in the active site is conserved across species and is present in the four human isoforms of PK, suggesting that compound 1 could be used as a possible lead compound for the development of human PKL inhibitors. We initially synthesised compound 1, making minor modifications to the procedure reported in the literature (Supporting Information, SI). [21] The effect of 1 on the activity of PKL was assessed in a cell-free biochemical assay at 10 μM (Table 1). Surprisingly, no inhibitory effect of this compound was observed at this or higher concentrations (data not shown). As it is possible that the reactivity of the equivalent lysine residue in PKL differs from that in LmPK, several structural analogues of 1 were designed and synthesised (Figure 2).
As the saccharin moiety of compound 1 is required for activity, we evaluated the effect of (a) the heteroatom linker, (b) the aromatic 'leaving group', and (c) the substituent on the aromatic ring. All compounds were obtained in good yields with minor modifications of the procedure from the literature (synthesis and full characterisation of all compounds are reported in the SI). [21] The effect of the compounds on PKL enzymatic activity was measured in a cell-free assay system and is presented in Table 1 and Figure S1. Fructose-1,6-bisphosphate (FBP) was used as a positive control for its known ability to allosterically activate the enzyme. [23] Similar to 1, compounds 2 and 3 showed no activity against PKL. Interestingly, compounds 4 and 5, the des-carboxyl analogues of 1 and 2, activated the enzyme in a concentration-dependent fashion. Compound 6, the nitrogen des-carboxyl derivative, showed no activity. Similarly, none of the aliphatic analogues 7-9 had an effect on PKL. Given their mechanism of action, the stability of these compounds plays a pivotal role in their activity. These molecules must possess enough reactivity to interact with the catalytic lysine residue and enough stability to reach the target without reacting with other lysine residues. The stability of all analogues was evaluated using HPLC-UV-MS in buffer conditions (1 mM compound, 100 mM TEA buffer, pH 7.5, 0.1% DMSO) with and without α-N-acetyl-L-lysine methyl ester (5 mM) (SI 3.1.3). [21] The compounds were mixed with the buffer components and pre-incubated with mixing for 2 min; their concentrations were monitored for 48 h. Compounds 3 and 6-9 were very stable; no hydrolysis product or lysine adduct was observed, suggesting that their lack of activity was related to low reactivity. In contrast, the ether-containing analogues 2 and 5 hydrolysed in the buffer and reacted rapidly to form a lysine adduct in the presence of α-N-acetyl-L-lysine methyl ester, indicating high reactivity and likely low stability. As described by Morgan et al., the sulphur-containing analogues 1 and 4 demonstrated the best reactivity/stability profile; they were stable in buffered conditions, slowly forming the lysine adduct with concomitant release of thiophenol in the presence of the model amino acid. [21] These results explained the lack of activity observed for compounds 3 and 6-9, but not the trend observed for the other compounds, especially 1 and 4. Thus, we speculated that the substituent on the aromatic ring may be a major determinant of the activity of the compounds. For this reason, the methyl ester analogues of 1 and 2 (10 and 11) were synthesised (Figure 3). These ligands had the same structure as their parent compounds, but lacked the negative charge of the carboxylic acid.
Compound 10 proved to be the most potent of this series, with an EC50 of 0.29 μM and a maximum potentiation of 220% at 1 μM, confirming our hypothesis. To explore this concept further, we designed compounds 12 and 13 (Figure 3). These were the amide and the N-methyl amide analogues of compound 10, respectively. They proved to be PKL activators, confirming the pivotal role of the substituent on the aromatic ring in this class of molecules. The presence of a negative charge on the molecules seemed to be detrimental to the activity, suggesting that covalent binding is preceded by specific non-covalent interactions. The stability and reactivity of 10-13 were also evaluated using HPLC-UV-MS in buffer conditions with and without α-N-acetyl-L-lysine methyl ester, as reported previously (SI). [21] Similar to their parent compounds, 10, 12 and 13 were stable in buffered conditions but reacted readily with N-acetyl-L-lysine methyl ester to form the corresponding adducts. The ether-containing 11 displayed lower stability in buffered conditions, similar to that of 2, and reacted rapidly in the presence of the amine nucleophile.
PKL activation in biochemical assay
The activating effect of compound 10 on PKL was further characterised by measuring its modulation of the isolated enzyme in a time-dependent manner. Recombinant PKL was incubated with compound 10 at 10 μM for 30 min and the protein was washed to produce a pure enzyme-ligand complex (PKL-10). Titration experiments with PKL-10 were conducted to find a suitable working concentration (Figure S2). The activity of the PKL-10 complex was compared to that of recombinant PKL; the PKL-10 complex exhibited approximately five-fold greater activity than PKL. From the titration, a PKL-10 concentration of 1.2 ng/μL was found to be the minimum required to complete the reaction during the incubation period. Accordingly, this concentration was used for the time-resolved experiment. ATP production was measured using a linked luciferase assay. The reaction rate of the PKL-10 complex was significantly higher than that of the unmodified enzyme, increasing the activity approximately five-fold (Figure 4).
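A minimal sketch of how the roughly five-fold rate difference could be quantified from such time-resolved readings, assuming an approximately linear initial phase; the luminescence values and time points below are hypothetical, not the assay data.

```python
import numpy as np

def initial_rate(time_min, signal):
    """Slope of a straight-line fit: signal units per minute."""
    slope, _intercept = np.polyfit(time_min, signal, 1)
    return slope

t = np.array([0, 2, 4, 6, 8, 10], dtype=float)            # minutes
rlu_pkl = np.array([0, 110, 220, 335, 450, 560])           # hypothetical PKL readings
rlu_complex = np.array([0, 540, 1090, 1650, 2230, 2800])   # hypothetical PKL-10 readings

fold = initial_rate(t, rlu_complex) / initial_rate(t, rlu_pkl)
print(f"fold activation ~ {fold:.1f}")  # ~5, mirroring the reported five-fold increase
```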
These findings were confirmed employing a lactate dehydrogenase (LDH) assay. The activity of the PKL-10 complex was compared to that of unmodified PKL and of PKL incubated with FBP (10 μM) in a time-resolved measurement. The activity of the PKL-10 complex was significantly higher than that of unmodified PKL, but lower than that of PKL incubated with FBP (Figure S7).
Identification of mechanism of action and mapping of covalent modification sites
To clarify the molecular basis of PKL activation, we performed mass spectrometric mapping of modified residues to determine whether compound 10 reacts with different lysine residues. [24] Recombinant PKL was incubated for 1 h with a 10 μM solution of compound 10, digested with trypsin and analysed using LC-MS/MS (Figure 5). The peptides obtained from the modified enzyme were compared with those of the unmodified protein, allowing mapping of the reaction sites. In particular, the corresponding covalent benzo[d]isothiazole-1,1-dioxide modification was identified in the three lysine-containing peptides I279-K290 (IISKIENHEGVK), I283-R291 (IENHEGVKR) and K259-R267 (KASDVAAVR). For each peptide, partial series of y- and b-ion fragments were observed (Figure 5). In peptide IISKIENHEGVK, the measured masses for fragments y1-y8 were identical to those of the unmodified protein. The masses of fragment y9 (which includes K282) and higher were upshifted by 164.988 Da compared to the unmodified peptide, matching the mass of the benzo[d]isothiazole-1,1-dioxide group of 10. By similar reasoning, the covalent modifier was found in y2 of peptide IENHEGVKR (K290) and in a1 of peptide KASDVAAVR (K259).
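The fragment-mass reasoning can be made explicit with a short calculation: singly charged y-ion m/z values for IISKIENHEGVK are computed with and without a +164.988 Da adduct on K282 (index 3 within the peptide), reproducing the pattern of unshifted y1-y8 and upshifted y9 and higher. The residue masses are standard monoisotopic values; this is an illustration, not the search software used in the study.

```python
# Monoisotopic residue masses (Da) for the amino acids occurring in IISKIENHEGVK
RES = {"I": 113.08406, "S": 87.03203, "K": 128.09496, "E": 129.04259,
       "N": 114.04293, "H": 137.05891, "G": 57.02146, "V": 99.06841}
H2O, PROTON = 18.010565, 1.007276
ADDUCT = 164.988  # mass of the benzo[d]isothiazole-1,1-dioxide modification

def y_ion_mz(peptide, n, modified=()):
    """m/z of the singly charged y_n ion (the C-terminal n residues).

    modified: 0-based indices within the peptide carrying the adduct.
    """
    start = len(peptide) - n
    mass = sum(RES[aa] for aa in peptide[start:]) + H2O + PROTON
    mass += ADDUCT * sum(1 for i in modified if i >= start)
    return mass

pep = "IISKIENHEGVK"  # residues I279..K290; K282 is index 3 within the peptide
for n in (7, 8, 9, 10):
    shift = y_ion_mz(pep, n, modified=(3,)) - y_ion_mz(pep, n)
    print(f"y{n}: shift = {shift:.3f} Da")  # 0.000 up to y8, 164.988 from y9 onwards
```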
Impact of identified sites for allosteric activation
The MS experiments determined that most of the modifications to the peptides were on K282 and K290. K259 is located on the surface of the protein; its modest modification may be related to nonspecific binding. The other two lysines were less exposed to the solvent, so their modification should be related to specific binding. Although K282 is the equivalent of the catalytic residue described by Morgan et al. (K335 in LmPK), K290 resides in the dimer interface and is engaged in a salt-bridge interaction with D46. Tang and Fenton recently performed a whole-protein alanine-scanning mutagenesis on PKL, demonstrating that a K282A mutation led to a complete loss of activity, whereas a K290A mutation did not affect enzymatic activity. [25] Moreover, the equivalent residue in PKM2 is an arginine. [26] To rule out the possibility that the modification of K259 or K290 was affecting PKL activity, the K259A and K290A mutants of the enzyme were prepared, and the activity of compound 10 was monitored as a function of the PEP concentration. The activities of both the K259A and K290A mutants were comparable to that of the wild-type enzyme. The mutants displayed similar degrees of activation upon incubation with compound 10, indicating that modification of these residues was not responsible for the activation observed (Figure S3). However, these findings do not exclude the small possibility of synergistic effects between ligation sites. Our data suggest that the potentiation of the enzyme is caused by the covalent modification of K282.
Morgan et al. observed the complete opposite effect when the saccharin moiety was installed on the lysine present in the catalytic site of LmPK. [21] This class of covalent enzyme activators is rarely investigated; to our knowledge, only a few examples exist. [8,27] Our attempts to obtain crystal structures of compound 10 in complex with PKL were unsuccessful; thus, we conducted covalent docking and molecular dynamics (MD) simulation experiments to understand the mechanism of activation.
Covalent docking and MD simulations as key tools to understand the activation mechanism
PKL is a homotetramer that consists of an A-domain, B-domain, C-domain and N-terminal domain. [25] Residues K282 and K290 were selected for the covalent docking experiments based on their proximity to the active site and the LC-MS/MS data (Figure 6). PKM2 has been reported to exist in an equilibrium between an active R-state and an inactive T-state. [25,26] While this has not been explicitly proven to also apply to PKL, we explored this possibility based on the high sequence homology with PKM2 (Table S2).
We created a homology model of PKL based on the inactive conformation of PKM2 and performed covalent dockings and molecular dynamics (MD) experiments with both the active and inactive structures.
As previously observed, the K290A mutant showed activity similar to the unmodified protein, indicating that the interaction of K290 with D46 in the oligomeric interface is not essential for enzymatic activity. [28] The covalent docking suggests that when the saccharin moiety of compound 10 is bonded to K290, the salt-bridge interaction with D46 is lost. The modified K290 residue instead interacts with R328 via hydrogen bonds (Figure 7A). These new hydrogen bonds, which replace the non-crucial interaction with D46, do not indicate any plausible cause of increased enzyme activity. These results agree with the observations of enzymatic activity in the cell-free system with the mutant, as the enzyme is still functional and still shows activation upon incubation with compound 10. This substantiates that K290 is not the allosteric control point; otherwise, no potentiation of the activity would have been observed with the mutant.
Unlike K290, the residue K282 resides in the PEP binding site and stabilises PEP through hydrogen-bond interactions. When K282 is replaced with alanine, PKL activity is completely eliminated. [28] Once the saccharin derivative of compound 10 is covalently attached to K282, the interaction of K282 with PEP is lost (Figure 7B). However, an interaction occurs between the Mn2+ ion and the lactam oxygen of the saccharin moiety. It has been demonstrated that a divalent metal, Mg2+ or Mn2+, is critical for the catalysis of converting PEP into pyruvate by pyruvate kinase. [29] The additional interaction between Mn2+ and the modified protein could result in increased enzymatic activity, explaining the boost in catalytic activity observed after the enzyme was incubated with compound 10.
To explore these findings further, MD simulations were performed. We particularly wanted to analyse the effect of the lysine modification on the enzyme structure and on the position of Mn2+ over time. Four different systems were examined using MD simulations: the native protein without modifications and the protein with the adduct on K282 (PKL*) were each studied in the simulated tetrameric R-state and T-state. The root-mean-square-deviation values obtained after the analysis indicated that all four systems achieved stability after 5 ns. As expected, both T-state PKL and T-state PKL* fluctuated more than the R-state systems, suggesting lower stability (Figure S8). Of the two R-state systems, the active PKL* system fluctuated the most (Figure S8A). The root-mean-square-fluctuation (RMSF) values show the fluctuations of each amino acid residue during the simulation. The amino acids in the B domain (residues 126-230) underwent severe fluctuations in all the systems, especially in the R-state PKL* (Figure S8B).
The key interactions of the R-state and T-state PKL tetramers were also investigated throughout the simulation time. In the R-state systems, the distance between K434 and amino acid residues L410-A425 was monitored as a parameter to evaluate the stability of the active tetramer. In all R-state systems, the distance between them remained stable between 3 and 5 Å, suggesting that the covalent modification on both lysine sites does not change the key active tetramer formation (Figure S8C).
Similarly, in the T-state systems, the distance between D499 and W527 was monitored to investigate the influence of the saccharin moiety bonded to the two different lysine residues. The distance between D499 and W527 remained within a 0.3 nm radius of their starting positions (Figure S8D). Again, the key interaction in the inactive systems appeared to be stable and not perturbed by the covalent modification. This suggests that the mechanism of activation of 10 is not related to a switch in conformation from the T-state to the R-state. Subsequently, the solvent-accessible surface area (SASA) calculated from the MD simulation was evaluated. The data show that the SASA of R-state PKL* increased more during the simulation than that of R-state PKL (Table S1). This implies that the covalent modification of K282 in the active tetrameric form loosens the packing of the tetrameric assembly. The active PKL* significantly decreased its interaction energy relative to the other protomers (Table S1). During the simulation, the activator began to occupy the PEP binding site (Figure 8A), implying that in the T-state PKL tetramer the activator may hinder PEP binding, leading to inhibition. In the R-state PKL*, the activator coordinated the Mn2+ to PEP and increased the fluctuations of the amino acid residues surrounding PEP, as implied by the RMSF calculations. In particular, it pushed the amino acid residues D125-E130 further away, generating more space in the PEP binding region (Figure 8B).
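A minimal sketch of how such a residue-pair distance can be tracked frame by frame with MDAnalysis; the topology and trajectory file names are hypothetical placeholders, and the study does not specify its analysis tooling.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical output files from a GROMACS run of the T-state system
u = mda.Universe("pkl_tstate.gro", "pkl_tstate.xtc")

d499 = u.select_atoms("resid 499 and name CA")
w527 = u.select_atoms("resid 527 and name CA")

distances = []
for _ts in u.trajectory:
    # C-alpha to C-alpha distance (in Angstrom) at the current frame
    distances.append(np.linalg.norm(d499.positions[0] - w527.positions[0]))

distances = np.array(distances)
print(f"mean = {distances.mean():.2f} A, max drift = {np.ptp(distances):.2f} A")
```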
Enzymatic experiments complement the computational model
We performed substrate titrations in the presence and absence of compound 10 to understand how the activator affects the reaction kinetics of PKL. Fitting of Hill plots to the titration curves generated apparent Vmax(app), KM(app) and Hill coefficient (n) values for the different systems. Titration curves of ADP versus a static concentration of PEP indicated no significant changes in the various parameters (within the margin of error). Titration of PEP against a static concentration of ADP showed, however, that 10 decreased KM(app) by approximately 30% while leaving Vmax(app) unaffected (Figure S4). Furthermore, the Hill coefficient was decreased from n = 1.63 ± 0.06 to n = 1.19 ± 0.03 upon introduction of 10, manifesting as a change in the shape of the titration curve from sigmoidal to Michaelis-Menten-like. To test whether 10 affects the interaction between PKL and the divalent metal cation, Mn2+ was titrated in the presence and absence of 10 and the kinetic response of the protein was monitored. Similar to the substrate titration, introduction of 10 changed the kinetics of the reaction. In the absence of an activator, the Mn2+ titration curve manifested a sigmoidal shape (n = 1.30 ± 0.02) and changed to resemble Michaelis-Menten kinetics upon addition of 10 (n = 0.97 ± 0.05). This was also observed for FBP. In addition, the K(app) value for Mn2+ was greatly reduced from 1207 μM to 223 μM upon addition of 10, and Vmax(app) was increased from 6.98 μM min−1 to 14.08 μM min−1, indicating that 10 had a significant impact on the response to changing Mn2+ concentrations (Figure S5).
(Figure 8 caption: A) The saccharin moiety of compound 10 occupies the PEP binding site in the inactive PKL tetramer while linked to residue K282. The activator initially coordinated Mn2+; after removing Mn2+ and PEP from the binding site for MD simulations, the activator starts to occupy the PEP binding site and interfere with PEP binding. The carbon atoms of PEP are coloured green; the carbon atoms of the initial K282-activator are coloured yellow, and the K282-activator after MD simulation is coloured white. B) The saccharin moiety of compound 10 coordinates Mn2+ and generates more space for PEP allocation during the MD simulation. The protein backbone prior to the simulation is coloured white; the backbone after MD simulation is coloured yellow. The carbon atoms of PEP are coloured green; the carbon atoms of the K282-activator are coloured yellow, and the carbon atoms of D125 and T126 are coloured white and yellow, corresponding to the starting structure and the structure after MD simulation, respectively.)
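For illustration, the sketch below fits the Hill equation v = Vmax * S^n / (K^n + S^n) to a metal-ion titration with scipy; the concentration-rate pairs are hypothetical, chosen only to mimic the qualitative behaviour described, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(s, vmax, k, n):
    """Hill equation: v = Vmax * S**n / (K**n + S**n)."""
    return vmax * s**n / (k**n + s**n)

# Hypothetical Mn2+ titration (uM) and observed rates (uM/min)
s = np.array([50, 100, 250, 500, 1000, 2000, 4000], dtype=float)
v = np.array([0.4, 1.0, 2.3, 3.6, 4.9, 6.0, 6.6])

popt, _pcov = curve_fit(hill, s, v, p0=[7.0, 1000.0, 1.0])
vmax, k_app, n = popt
print(f"Vmax = {vmax:.2f} uM/min, K(app) = {k_app:.0f} uM, n = {n:.2f}")
# n close to 1 indicates Michaelis-Menten-like kinetics; n > 1 indicates cooperativity
```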
Finally, we tested for potential synergistic effects between 10 and FBP by performing titrations of PEP in the presence of both activators and comparing the level of activation to that of FBP alone ( Figure S6). Fitting of Hill plots to the data did not indicate any additive effect of 10 and FBP.
Conclusion
To the best of our knowledge, this study has identified the first small molecules that covalently activate PKL. The activator was discovered during a literature search aimed at finding inhibitors. Subsequent rational chemical modification of the lead compound allowed identification of activators that are effective at sub-micromolar concentrations. Computational modelling indicates that incubation of compound 10 with PKL results in relaxation of the protein structure. Mass spectrometry analysis, mutagenesis studies and molecular modelling experiments were used to identify the catalytic lysine residue K282 as the site of covalent modification. MD simulations indicated that the modified enzyme could facilitate higher enzymatic activity by coordinating the Mn2+ in the active site; this was supported by enzymatic experiments. Considering that small-molecule enzyme activators remain an underexplored field, these results could incentivise further development of novel PKL activators.
Experimental Section
The datasets supporting this article have been uploaded as part of the ESI and contain details of the chemical synthesis, including 1D NMR spectra of described compounds, in vitro stability testing of compounds, bioassay data, LC-MS-MS data, the protein mutagenesis procedure, and the molecular modelling procedures.
Materials and equipment
Unless otherwise noted, commercially available chemicals, reagents and solvents were used without any purification. Microwave reactions were performed in capped vials using a Biotage Initiator Sixty instrument with fixed hold time. Reactions were monitored by Thin Layer Chromatography (TLC, Merck, silica gel 60 F254) and visualized under UV (254 nm/365 nm). Purification by flash column chromatography was performed by manual flash chromatography (wet-packed silica, 0.04-0.063 mm) or by automated flash chromatography on a Biotage SP-X or Isolera instrument using prefabricated silica columns. 1H- and 13C-NMR spectra were obtained at 400 MHz, using a Varian 400/54 spectrometer. Chemical shifts are reported in ppm with the solvent residual peak as internal standard: 1H: residual CHCl3 (δH 7.26), CD3OD (δH 3.31) or DMSO-d6 (δH 2.50); 13C: CDCl3 (δC 77.16), CD3OD (δC 49.80) or DMSO-d6 (δC 39.52). NMR data are reported as follows: chemical shift, number of protons/carbons, multiplicity (s, singlet; d, doublet; t, triplet; q, quartet; m, multiplet; br, broadened), coupling constants (Hz). Melting points were recorded on a Büchi melting point apparatus B-545 or a Mettler FP82 hot stage equipped with an FP80 temperature controller and are uncorrected. Accurate mass analyses were performed using an Agilent 6520 quadrupole time-of-flight instrument coupled to an Agilent 1290 Infinity ultra-performance liquid chromatograph (Santa Clara, USA). Samples were dissolved in acetonitrile and eluted using isocratic elution (100% acetonitrile) with a flow rate of 0.4 ml/min. The mass spectrometer was operated in positive electrospray ionization mode, scanning between 50 and 1200 m/z. Ion source parameters were as follows: drying gas flow 10 L/min, temperature 325°C and nebulizer pressure 35 psi. The mass spectrometer was calibrated before analyses.
Synthetic procedures
All compounds except for 7 (see SI) were obtained by linear synthesis starting from saccharin. Saccharin was chlorinated using PCl5 to obtain pseudo-saccharin chloride S1. The chloride was reacted with various O-, N- and S-nucleophiles to obtain compounds 4, 5, 6, 8, 9, 10 and 11. Precursors to the benzoic acid derivatives 1, 2 and 3 were deprotected by either basic hydrolysis, reductive hydrogenation, or acidic hydrolysis. Detailed synthetic procedures can be found in the SI.
Synthesis of chloro-saccharin S1: Saccharin (8.00 g, 43.67 mmol) and PCl5 (10.00 g, 48.00 mmol) were mixed and heated to 180°C for 2 h. After cooling, the reaction was concentrated in vacuo and the resulting solid residue was triturated in Et2O and filtered. The solid was recrystallized from CHCl3 to give 4.
In vitro compound stability
In aqueous buffer: To aqueous buffer (135 μL of 100 mM TEA buffer, pH 7.5) in an LC/MS vial was added the selected compound (15 μL of a 10 mM DMSO stock solution). The vial was capped, and the mixture was vortex-mixed for 2 minutes. Hydrolysis of the suicide inhibitor was monitored by injecting 10 μL of the solution on to the Agilent LC/MS system and tracking, at 254 nm, the AUC (area under the curve) of the SM (starting material) and product (saccharin). No other by-products or side reactions were observed by LC/MS in all experiments. The percentage remaining was determined using the following equation: Percentage remaining = [AUC of SM]/[AUC of SM + AUC of product]. Subsequent injections were made at intervals (0, 60, 120, 240, 480, 720, 1440, 2160, 2880 minutes).
Compound stability in lysine buffer: To aqueous buffer (60 μL of 100 mM TEA, pH 7.5) in an LC/MS vial were sequentially added a solution of N-α-acetyl-L-lysine methyl ester in TEA buffer (75 μL of a 10 mM solution) and a DMSO solution of the suicide inhibitor (15 μL of a 10 mM DMSO stock solution). The vial was capped, and the mixture was vortex-mixed for 2 minutes. Covalent modification of the suicide inhibitor was monitored by injecting 3 μL of the solution onto the Agilent LC/MS system and tracking, at 254 nm, the AUC (area under the curve) of the SM (starting material) and product (lysine adduct). No other by-products or side reactions were observed by LC/MS in any experiment. The percentage remaining was determined using the following equation: Percentage remaining = [AUC of SM]/[AUC of SM + AUC of product]. Subsequent injections were made at intervals (0, 60, 120, 240, 480, 720, 1440, 2160, 2880 minutes).
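A small worked example of the percentage-remaining calculation used in both stability protocols; the peak areas are hypothetical, but the time points follow the stated injection schedule.

```python
def percentage_remaining(auc_sm, auc_product):
    """Percentage remaining = AUC(SM) / (AUC(SM) + AUC(product)) * 100."""
    return 100.0 * auc_sm / (auc_sm + auc_product)

timepoints = [0, 60, 120, 240, 480, 720, 1440, 2160, 2880]  # minutes
auc_sm = [980, 965, 940, 890, 810, 740, 560, 430, 330]       # hypothetical 254-nm areas
auc_product = [0, 15, 40, 90, 170, 240, 420, 550, 650]       # hypothetical 254-nm areas

for t, sm, prod in zip(timepoints, auc_sm, auc_product):
    print(f"{t:5d} min: {percentage_remaining(sm, prod):5.1f}% remaining")
```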
Kinase activity assays
Assays were performed using Corning™ 384-Well White Polystyrene Microplates (model 3824), in TRIS-HCl pH 7.4 buffer (50 mM Tris-HCl, 10 mM MgCl 2 , 100 mM KCl, 0.05 % Tween® 20) using ultrapure PEP and ADP purchased from Merck. Luminescence was measured on a SpectraMax® iD5 plate reader from Molecular Devices. The Kinase-GLO MAX® assay kit was purchased from Promega Sweden (cat. Nr. V6072). The assay reagent consists of recombinant firefly luciferase enzyme and luciferin substrate. In the presence of ATP and O 2 , luciferase catalyses the transformation of luciferin into excited state oxyluciferin which generates a chemiluminescent signal proportional to the concentration of ATP.
Computational modelling
The homology model of PKL based on the inactive T-state of PKM2 was constructed using the structure prediction panel in the Schrödinger Suite (Schrödinger, LLC, New York, NY). PKM2 (PDB ID: 6GG4) was downloaded from the PDB and used as a template for the PKL sequence, which was retrieved from UniProt (entry: P30613). Schrödinger was used to perform the covalent docking at residues K282 and K290. GROMACS software was used for molecular dynamics simulations with the AMBER 99 force field and the TIP3P solvation model in 10 Å periodic boxes (0.1 M NaCl).
Extract from the Marine Seaweed Padina pavonica Protects Mitochondrial Biomembranes from Damage by Amyloidogenic Peptides
The identification of compounds which protect the double-membrane of mitochondrial organelles from disruption by toxic conformers of amyloid proteins may offer a therapeutic strategy to combat human neurodegenerative diseases. Here, we exploited an extract from the marine brown seaweed Padina pavonica (PPE) as a vital source of natural bioactive compounds to protect mitochondrial membranes against insult by oligomeric aggregates of the amyloidogenic proteins amyloid-β (Aβ), α-synuclein (α-syn) and tau, which are currently considered to be major targets for drug discovery in Alzheimer's disease (AD) and Parkinson's disease (PD). We show that PPE manifested a significant inhibitory effect against swelling of isolated mitochondria exposed to the amyloid oligomers, and attenuated the release of cytochrome c from the mitochondria. Using cardiolipin-enriched synthetic lipid membranes, we also show that dye leakage from fluorophore-loaded vesicles and formation of channel-like pores in planar bilayer membranes are largely prevented by incubating the oligomeric aggregates with PPE. Lastly, we demonstrate that PPE curtails the ability of Aβ42 and α-syn monomers to self-assemble into larger β-aggregate structures, as well as potently disrupts their respective amyloid fibrils. In conclusion, the mito-protective and anti-aggregator biological activities of Padina pavonica extract may be of therapeutic value in neurodegenerative proteinopathies, such as AD and PD.
Introduction
Neurodegenerative proteinopathies represent a range of devastating medical disorders collectively defined by the accumulation and deposition of protein aggregates in the brain and spinal cord. Examples of peptides or proteins forming fibrillar deposits in the central nervous system include amyloid-β (Aβ) in Alzheimer's disease (AD), α-synuclein (α-syn) in Parkinson's disease (PD), tau in frontotemporal lobar degeneration (FTD), and TDP-43 in amyotrophic lateral sclerosis (ALS) [1,2]. Intriguingly, although there is no overt similarity among these amyloidogenic proteins in structure or function, they all feature regions which are prone to significant structural disorder and which predispose the native protein to aberrant folding and assembly into highly toxic aggregates [3]. Increasing evidence suggests that the latter are represented by the soluble clusters that form initially during aggregation, known as oligomers [4]. It is believed that the combination of a small size and a high degree of hydrophobic surface exposure makes oligomeric aggregate species particularly toxic.

Overall, we show for the first time that PPE maintains the integrity of mitochondrial lipid membranes in the face of insult from oligomers of different amyloid proteins (Aβ42, α-syn and tau) and hence protects mitochondria from dysfunction.
PPE Inhibits Amyloid Aggregate-Induced Damage to Isolated Mitochondria
Previously, we had demonstrated that extract of Padina pavonica [27,28] alleviated neurodegenerative phenotypes in fruit fly models of AD and PD when added as a food supplement [32]. In separate studies, we had also shown that pathogenic aggregates of three amyloid-forming proteins involved in the major brain neurodegenerative diseases (Aβ42, α-syn and tau) were highly damaging to mitochondria and the mitochondrial double-membrane. Permeation of mitochondrial membranes was typically manifested as swelling of the organelle and efflux of cyto c from the mitochondrial intermembrane space [16,17,33]. Therefore, here we were interested in looking at whether PPE might antagonize aggregate toxicity to mitochondria in vitro. We started by looking at whether PPE can attenuate cyto c release (CCR) from isolated mitochondria incubated with aggregated species of Aβ42 and α-syn previously shown to permeabilize lipid membranes [33,34]. Thus, to enable an anti-aggregator effect of the seaweed extract, we allowed PPE to incubate with the pre-formed Aβ42 or α-syn aggregates for 10 min, prior to adding the peptide and PPE mixture (0.1-1 µg/mL) to the mitochondria. We observed that 1 µg/mL PPE exerted a highly significant reduction in both Aβ- and α-syn-induced CCR from mitochondria (Aβ42: from 2.24 ng/mL to 0.61 ng/mL, p < 0.001; α-syn: from 2.95 ng/mL to 1.43 ng/mL, p < 0.001). The inhibition by PPE was compared to that of black tea extract (BTE) as a 'positive control', since BTE is a well-known anti-amyloid and neuroprotectant against both Aβ and α-syn toxicity in vitro and in vivo [35][36][37]. Indeed, PPE was as effective as BTE in preventing CCR, especially against Aβ (Figure 1A,B). We also tested PPE and BTE alone with isolated mitochondria, i.e., a 10-min incubation of the extract with mitochondria in the absence of any peptide. Intriguingly, we found that PPE was able to consistently reduce the low 'background' CCR that inevitably occurred during the incubation procedure from control mitochondria alone (control: 0.58 ng/mL vs. PPE: 0.35 ng/mL, p = 0.0383). This effect was not observed for BTE (BTE: 0.54 ng/mL, p > 0.05) (Figure 1C). To exclude any artifactual interference by PPE with the Quantikine® assay, for example by binding to cyto c protein, or by non-specific binding to antibodies in the assay, PPE was incubated with a known concentration of cyto c protein (provided with the Quantikine® kit) and the assay performed as before. No statistically significant change was found between the cyto c concentration as determined in the presence or absence of PPE, thereby establishing that the extract was not itself interfering with the assay (Figure 1D).
Encouraged by these results, we proceeded to look at changes in mitochondrial volume, since swelling of the mitochondrial matrix is often a precursor to loss of cyto c from mitochondria [38]. A convenient and frequently used assay to monitor mitochondrial swelling involves measuring the turbidity in mitochondrial suspensions (A 540 ) [38]. Kinetic traces of changes in absorbance were thus obtained for isolated mitochondria exposed to Aβ42 oligomers (Figure 2A), which resulted in a relative decrease in absorbance of −0.119 AU after 1 h. This was similar to the degree of swelling induced by Ca 2+ ions alone (−0.110 AU, 93 ± 6% of Aβ42 swelling), here used as a positive control since high concentrations of Ca 2+ ions are known to initiate mitochondrial swelling [39]. Pre-incubation of the Aβ42 aggregates with BTE strongly protected mitochondria from swelling, with a slight increase in mitochondrial volume not significantly different from control mitochondria in respiring buffer alone (control: −0.04 AU, 33 ± 12% of Aβ42 swelling; BTE: −0.06 AU, 49 ± 10% of Aβ42 swelling). The anti-swelling effect of pre-incubating the Aβ42 oligomers with PPE was significantly greater than that of BTE, with practically no net change in absorbance after 1 h (~0.008 AU, 6 ± 4% of Aβ42 swelling) (Figure 2A,B). It was therefore decided to similarly probe protection by PPE against mitochondrial swelling induced by pre-aggregated α-synuclein and tau proteins. Again, a marked inhibition of swelling (circa 50% less) was found upon addition of PPE to both types of amyloid aggregates, equivalent to inhibition by BTE (Figure 2C,D). As a control, we incubated mitochondria with 1 µg/mL PPE or 0.5 µg/mL BTE for 1 h, but this caused no significant change in absorbance compared to mitochondria alone (data not shown). Hence, as had been observed in the case of CCR, PPE seemed to exhibit a powerful mito-protectant effect against damage by different oligomers of amyloid proteins.
Figure 1. Inhibition of aggregate-induced cyto c efflux from isolated mitochondria. (A,B) Exogenous Aβ42 (5 µM) or α-syn (2 µM) oligomers were added to freshly isolated mitochondria, following a 10-min pre-incubation of the oligomers with 0.01% DMSO (solvent control) alone and the extracts, PPE (0.1-1 µg/mL) or BTE (0.5 µg/mL). The concentration of cyto c in the supernatant released from the mitochondria was determined after 30 min in the presence of the Aβ42 or α-syn oligomers. (C) Mitochondria were incubated with solvent control (Ctrl; 0.01% DMSO) alone, and with PPE or BTE (in the absence of aggregates). (D) To check for possible interference of PPE with the Quantikine® assay, a known concentration of cyto c (0.4-0.6 ng/mL) was determined in the absence (CCC) and presence of 1 µg/mL Padina extract (CCC+PPE). Values for cyto c concentration ([Cyto c]) are presented as the means ± standard error of the mean (SEM) performed using duplicate readings (n = 3-5). Significance was determined using one-way ANOVA. In (A,B), *** p < 0.001, ** p < 0.01, compared to Aβ42 or α-syn alone. In (C), ns = not significant, * p < 0.05, compared to Ctrl. Refer to Suppl. Table S1 for F-values and p-values of one-way ANOVA.
Figure 2. Inhibition of aggregate-induced swelling of isolated mitochondria. Kinetic traces (A) of changes in relative absorbance at 540 nm due to swelling of mitochondria upon exposure to 5 µM Aβ42 or 250 µM CaCl 2 (positive control). Padina pavonica extract (PPE; 1 µg/mL), black tea extract (BTE; 0.5 µg/mL) or 0.01% DMSO (solvent control) were incubated with the Aβ42 aggregates at RT for 10 min before adding to the mitochondria. The swelling assays were performed over three independent experiments, representative tracings being shown. Maximal swelling over 1 h in presence of PPE and BTE extracts was calculated as a percentage of that induced by 5 µM Aβ42 (B), 2 µM α-syn (C) and 1 µM tau (D) oligomers alone. Data are presented as means ± SEM (n = 3-5); * p < 0.05, ** p < 0.01, *** p < 0.001 relative to protein aggregates alone (one-way ANOVA). Refer to Suppl. Table S1 for F-values and p-values of one-way ANOVA.
PPE Protects Mito-Mimetic Membranes from Amyloid Aggregate-Induced Permeabilization and Poration
We next considered whether the mito-protective effect of PPE could also be demonstrated in minimalist model membranes, consisting of fluorophore-loaded LUVs with a multi-component bilayer mimicking the composition of mitochondrial membranes [33]. Extracts were therefore incubated with pre-aggregated Aβ42 and α-syn for 10 min, before addition to the mito-mimetic liposomes. A powerful inhibitory effect by PPE on peptide-induced permeabilization of the mito-mimetic LUVs was seen: release of encapsulated Oregon Green® fluorophore from the LUVs was substantially reduced to 12% and 8% of that triggered by Aβ42 and α-syn aggregates alone, respectively. This compared favorably, and was in fact slightly better, than the inhibitory effects of BTE (Figure 3). Thus, we were able to show that PPE protects against mitochondrial membrane permeabilization by the amyloid aggregates.
Figure 3. Inhibition of mito-mimetic lipid vesicle permeabilization. Padina pavonica extract (PPE; 1 µg/mL), black tea extract (BTE; 1 µg/mL) or 0.01% DMSO (solvent control) were incubated with (A) 1 µM Aβ42 or (B) 0.5 µM α-syn oligomeric preparation at RT for 10 min, before adding to 60 µM LUVs loaded with the fluorophore Oregon Green®. Maximal permeabilization of liposomes in the presence of aggregates was calculated as a percentage of that induced by the aggregates alone (100%). Data are presented as means ± SEM (n = 3); ** p < 0.01, *** p < 0.001 (one-way ANOVA). Refer to Suppl. Table S1 for F-values and p-values of one-way ANOVA.
In previous work, we had shown that permeabilization of mitochondrial-like bilayers by amyloid oligomers of α-syn and tau is associated with the formation of large and stable nanopores in the membrane that allow ionic flux [16,17]. Therefore, we next interrogated whether PPE could inhibit formation of amyloid nanopores by Aβ42, α-syn and tau oligomers in a single-channel electrophysiology setup, using an identical mito-mimetic composition for the planar bilayer as in the liposome assays. Prior to introduction to the test chamber, oligomeric preparations of Aβ42, α-syn and tau were pre-incubated with PPE for 15 min. Pore activity was monitored in the tracings of ionic current passing through the planar lipid bilayer ( Figure 4A).
Electrical recordings characteristic of pore formation were completely absent when either of the three types of protein oligomers had been pre-incubated with PPE (0 out of 6 trials each for Aβ42, α-syn and tau protein), with no deviation of the current tracings from baseline detected for at least 2 h of recording ( Figure 4B). This in comparison to a rate of pore formation of 45-75% when the oligomers alone were added to the cis-chamber (n = 6 trials for each peptide/protein) [16,17]. Given the laborious (low-throughput) nature of the electrophysiological method, BTE was tested against α-syn oligomers only. It was found to be marginally less effective than PPE, with a pore formation frequency of 17% (1 out of 6 trials), compared to 67% (4 out of 6 trials) with α-syn oligomers alone.
PPE Modulates the Fibrillization Pathways of Aβ42 and α-Synuclein
Thioflavin T (ThT) is a widely used fluorescent dye to kinetically monitor the formation of amyloid fibrils along the aggregation pathway [40]. Upon binding within the cross-β-architecture along the long axis of amyloid fibrils, ThT emits a strong fluorescence signal [41]. Previous studies have shown prevention of aggregation and disaggregation of fibril formation of the amyloid-β peptide fragment Aβ(25-35) by extracts of the seaweed P. gymnospora [30]. We therefore sought to determine the effects of PPE on the fibrillization pathways of Aβ42 and α-syn using the ThT binding assay. In the absence of PPE, both Aβ42 and α-syn fully aggregated into β-sheet-rich amyloid fibrils incorporating ThT molecules: in line with published literature, the highly amyloidogenic amyloid-β peptide took less than 1 h, while the α-syn protein took ~2 days to reach maximal ThT fluorescence under the experimental conditions used [40,42]. In the presence of PPE, however, formation of Aβ42 and α-syn amyloid fibrils was powerfully suppressed (Figure 5A,C). At 10 µg/mL PPE, a much slower rate of fibril growth was observed, with ThT intensity reaching 52 ± 4% for Aβ42 and 35 ± 2% for α-syn ThT-positive fibrils. At a higher 50 µg/mL PPE concentration, minimal fibril formation occurred for the duration of the experiment, with peak ThT fluorescence only reaching 27 ± 6% of Aβ42 and 15 ± 5% of α-syn alone (Figure 5B,D). The above experiments thus indicate that PPE exhibits excellent activity against the polymerization of Aβ42 and α-syn monomers into fibrillary aggregates.
Figure 5. (E,F) Disaggregation properties of PPE are illustrated by time-dependent ThT fluorescence profiles following addition of PPE (10 µg/mL) to completely aggregated Aβ42 or α-syn (regarded as 100%). Data points represent the means ± SD of three replicate experiments (n = 3) expressed as percentages of the control. (G) Dot blots were probed with Fibril OC antibody to detect Aβ42 (left panel) and α-syn (right panel) fibrils, prepared in the absence (−) or after the addition of PPE (10 µg/mL) for 2 h (+). A fainter spot indicates that amyloid fibrils were significantly reduced by PPE. Refer to Suppl. Table S1 for F-values and p-values of one-way ANOVA.
In view of the fact that in our experiments we always incubated pre-aggregated Aβ42 or α-syn with PPE, we decided to additionally test the amyloid-disrupting properties of the seaweed extract using the ThT assay. Therefore, PPE (10 µg/mL) was added to preformed fibrils of Aβ42 and α-syn, and the ThT fluorescence intensity tracked for up to 2 h. The ThT signal for amyloid fibrils incubated with PPE decreased prominently following an inverse sigmoidal curve, to 10% and 21% for Aβ42 and α-syn fibrils, respectively, suggesting extensive disruption of the fibrillar β-architecture by PPE ( Figure 5E,F). Another method, that of immunoblotting using an antibody which specifically binds to the fibrillary form of Aβ42 and α-syn, was used to more directly visualize the disaggregation potential of PPE. In accordance with the ThT assays, very faint spots were seen after incubation of the fibrils with PPE, indicating a highly effective disaggregation activity ( Figure 5G).
Thus, the results of the anti-aggregation and disaggregation assays together indicate that PPE curtails the ability of two key amyloid disease proteins, Aβ42 and α-syn, to self-associate and accrue into larger β-aggregate structures, and that it potently disrupts the respective mature fibrils.
Discussion
Defects in neuronal mitochondria are linked to early pathophysiology of several human neurodegenerative disorders of the amyloid type, such as AD and PD [43]. In such disorders, mitochondrial dysfunction in neurons and synapses can be triggered by toxic conformations of intrinsically disordered proteins such as Aβ42, α-syn and tau, directly interacting with mitochondria to cause mitochondrial poration, increased membrane permeability and swelling, and, through dysfunctional OXPHOS, diminished ATP production [16,17,44,45]. Indeed, neuronal and synaptic mitochondria are especially susceptible to damage and more sensitive to swelling than non-neuronal mitochondria [46]. Finding therapeutic molecules that help preserve and restore mitochondrial integrity in the face of onslaught by neurotoxic amyloid entities thus represents an important goal in the search for effective treatment of these neurodegenerative diseases.
Our findings indicate that an acetonic extract derived from the marine brown seaweed Padina pavonica demonstrates remarkable mito-protective properties, by preserving the membrane integrity of mitochondria directly exposed to membrane-active aggregates of three amyloid proteins (Aβ42, α-syn and tau) believed to play important causal roles in the most common neurodegenerative diseases [16,17,47]. Specifically, a robust (~50% or more) decrease in abnormal morphology (swelling) and in cyto c efflux from isolated mitochondria were observed, after pre-incubation of the amyloid oligomers with PPE. The mito-protective effects of PPE were especially prominent against the Aβ42 aggregates. Interestingly, both mitochondrial swelling and loss of cyto c from mitochondria have been shown using multiphoton microscopy in the brains of living mouse models of AD (APP/PS1 transgenic mice) in the vicinity of Aβ plaques [48]. Similarly, α-syn overexpression in mouse brain resulted in enlarged and swollen mitochondria, as well as increased levels of cyto c in the cytosol [49]. Hence, our in vitro model using isolated mitochondria from SH-SY5Y cells and low micromolar concentrations of aggregated recombinant synthetic peptides, in which we showed inhibitory activity of PPE, recapitulated two prominent mitochondrial events seen in amyloid mouse models of AD and PD. Furthermore, our inference that PPE is effectively preventing disruption of mitochondrial membranes is reinforced by the finding that oligomeric aggregates exposed to PPE were much less able to permeabilize mito-mimetic LUVs, or form ion-conducting nanopores in the bilayer membrane (BLM). Formation of pore-like structures in the mitochondrial outer membrane was seen in atomic force microscopy (AFM) images of mitochondria from brains of transgenic mice expressing human α-syn [49]. Therefore, although our minimal reconstituted model membrane systems oversimplify the true complexity of mitochondrial membranes, they can nonetheless provide a powerful experimental means to obtain mechanistic insights [50,51].
One aspect we wanted to explore further was the anti-aggregation activity of PPE. In the present study, experiments involving PPE were conducted after first pre-incubating the amyloid aggregates with the extract. Hence, it would be reasonable to assume that during this time, molecular constituents present in the extract interacted with the membrano-toxic amyloid entities and converted them into less harmful aggregates. In agreement with this notion, PPE manifested potent disaggregating activity towards the β-sheet structure of Aβ42 and α-syn fibrillary aggregates, which possess an otherwise highly stable antiparallel β-architecture [52]. The anti-aggregation properties of PPE were further substantiated by demonstrating strong inhibition of Aβ42 and α-syn amyloidogenesis in the ThT-based assays. The latter kinetic assays complement immunoblotting experiments carried out previously by our group showing that PPE suppressed the formation of both Aβ42 and α-syn protofibrils [32]. In support of our work, acetone extracts of another Padina species, Padina gymnospora, prevented aggregation and caused disaggregation of mature fibrils of the Aβ25-35 peptide, which in an aggregated state is toxic to cultured neurons [30]. Chemical composition studies of the acetone extract used in the present study identified a rich content of polyphenols, in particular flavonoids, and tannins. The role of polyphenols in the aggregation and disaggregation of Aβ peptide, tau and α-syn has been extensively studied and described [53]. Naturally occurring dietary flavonoids have in fact gained considerable attention as providing an alternative approach to slowing the progression of AD or PD pathogenesis [54]. Possible mechanisms include modulation of monomer-monomer interactions, inhibition of oligomerization into a toxic species, and remodeling of toxic conformers into nontoxic forms by way of hydrogen bonding, electrostatic effects and/or π-π (pi-pi) stacking [55,56].
Although it is indeed most likely that the principal mechanism of action of PPE is via association with, and modification of, the toxic oligomeric structures, we cannot exclude a priori a concomitant modulation of the permeability of the phospholipid membrane by the seaweed extract. In this regard, it is pertinent to draw attention to the apparent mito-protective effect of PPE on isolated mitochondria alone, in which the background release of cyto c from mitochondria incubated with PPE was significantly lower than from control mitochondria: this protective effect was robust and specific to PPE. Thus, we may speculate that the phenols, sterols or terpenoids found in Padina pavonica extract could possibly be altering physicochemical properties of the mitochondrial membranes, such as their membrane fluidity [57,58]. Further experimental work is underway in order to delve deeper into this phenomenon, which might have important implications for the biological activity of PPE.
Another interesting point to come out from this work is that in the mitochondrial and liposome permeabilization assays, as well as in the electrical recordings for amyloid pores, PPE was as effective as, and in several instances even more effective than, a theaflavin-based extract from black tea in its anti-amyloid activity. Notably, BTE and theaflavins, the main polyphenolic components found in fermented black tea, were reported as among the strongest inhibitors of Aβ42 and α-syn fibrillogenesis [34,56,59]. Hence, even more than BTE, PPE is a potent dual inhibitor of both Aβ42 and α-syn toxicity; few amyloid inhibitors have been found to have excellent activities against different amyloid peptides, and this challenge has become even more pressing given the multiple reports of co-assembly and co-deposition of amyloidogenic peptides in vitro and in vivo into hetero-amyloids [60][61][62]. As with other herbal extracts, it is likely that the extract milieu of PPE may be crucial for providing such optimal bioactivity. In this manner, synergies among the multiple single components of the extract may provide an ideal environment in which the effect of the natural product mixture is greater than that of the individual purified compounds [63,64]. Further studies are therefore underway to explore the biological effects of the whole PPE extract formula on the complex disease-related molecular network represented by amyloid pathology.
Padina Pavonica and Black Tea Extracts
Extract derived from the alga Padina pavonica (PPE) was supplied by the Institute of Cellular Pharmacology (ICP Concepts Ltd., Mosta Technopark, Malta). PPE was produced and chemically characterized as described previously [27,28,65]. Briefly, the seaweed was dried and milled before solid-liquid extraction was carried out by the Soxhlet extraction method using acetone as solvent. The extracted product was then filtered and fed into a rotary evaporator where it was dried under vacuum at 55 °C for several hours. PPE was supplied in the form of crude extract of the active fraction. Stocks (10 mg/mL) in 100% dimethyl sulfoxide (DMSO) were stored at −20 °C.
Thioflavin T (ThT) Fluorescence Fibril Assay
To detect the formation of Aβ and α-syn fibrils, ThT (final concentration 40 µM in PBS, pH 7.4) was added to wells in a black and clear flat bottom, non-binding microplate (Corning ® catalog number 3881, New York, NY, USA), and mixed with aggregated protein (Aβ42 or α-syn) alone and in the presence of Padina pavonica extract (10-50 µg/mL). The plate was sealed with clear polyolefin tape and incubated at 37 °C with agitation at 450 rpm. Fluorescence intensities of the solutions were subsequently measured using a FLx800 microplate reader (Bio-Tek, Bedfordshire, UK) with excitation and emission wavelengths at 445 nm and 490 nm, respectively. Fluorescence readings were background subtracted by that of ThT alone. For the disaggregation assay, 10 µg/mL Padina pavonica extract was added to preformed Aβ42 or α-syn fibrils (22.5 µM Aβ42 for 1 h; 25 µM α-syn for 72 h) and mixed thoroughly. Then, ThT was added and fluorescence of the solution was measured at 37 °C without shaking for 100 min.
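A minimal sketch of how such ThT traces can be background-subtracted and expressed as a percentage of the aggregates-alone plateau is given below; the function and the example readings are hypothetical illustrations and do not reproduce the authors' analysis pipeline.

```python
import numpy as np

def process_tht_trace(raw, blank, control_plateau):
    """Background-subtract a ThT fluorescence trace and express it as a
    percentage of the plateau reached by the protein aggregated alone.

    raw             : fluorescence readings for protein (+/- PPE), shape (n_timepoints,)
    blank           : matching readings for ThT in buffer alone (background)
    control_plateau : maximal background-subtracted signal of the no-extract control
    """
    corrected = np.asarray(raw, dtype=float) - np.asarray(blank, dtype=float)
    return 100.0 * corrected / control_plateau

# Hypothetical example: Aβ42 alone vs. Aβ42 + 10 µg/mL PPE (arbitrary fluorescence units)
blank       = np.array([50, 52, 51, 53, 52])
abeta_alone = np.array([60, 400, 900, 1050, 1060])
abeta_ppe   = np.array([58, 150, 350, 520, 560])

control_plateau = (abeta_alone - blank).max()
print(process_tht_trace(abeta_alone, blank, control_plateau))  # rises to ~100 %
print(process_tht_trace(abeta_ppe,  blank, control_plateau))   # e.g. ~50 % at plateau
```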
Preparation of Isolated Mitochondria from SH-SY5Y Cells
Isolated mitochondria for the cyto c assay and swelling experiments were prepared fresh for each experiment from ~5 × 10 7 cultured SH-SY5Y human neuroblastoma cells (ATCC ® CRL-2266TM, Manassas, VA, USA) using the MITOISO2 ® kit (Sigma-Aldrich, Germany) according to the manufacturer's instructions. For downstream application, the mitochondrial pellet was resuspended in respiring buffer (50 mM HEPES, pH 7.5, containing 1.25 M sucrose, 25 mM succinate, 5 mM ATP, 0.4 mM ADP, 10 mM K 2 HPO 4 ) at 1-2 mg/mL (final mitochondrial protein concentration determined using NanoOrange ® kit, ThermoFisher Scientific, Waltham, MA, USA). Mitochondria were kept on ice during the entire isolation procedure. Purity of the "heavy" mitochondrial fraction was confirmed as described previously [33].
Quantikine ® Immunoassay for Determination of Cytochrome c Release
The Quantikine ® assay kit (R&D Systems, Ely, UK) provides accurate quantification of cytochrome c in supernatant fractions using a colorimetric ELISA method [33]. Thus, fresh isolated mitochondria (~12 µg) in respiring buffer were incubated for 30 min at 37 °C, alone or in the presence of pre-aggregated amyloid oligomers; when needed, the pre-formed oligomers were left for 10 min in the presence of extract (PPE or BTE) at room temperature prior to the addition to mitochondria. Following centrifugation (16,000× g for 10 min, 4 °C), the supernatant was used for the cyto c immunoassay as per kit instructions. Background CCR from control mitochondria not exposed to peptides was subtracted from other values.
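The background-subtraction step can be illustrated with the short sketch below; the concentrations used are hypothetical and serve only to show how a net CCR value and a percentage inhibition would be derived.

```python
def net_ccr(sample_ng_ml, background_ng_ml):
    """Subtract the 'background' cytochrome c release measured from control
    mitochondria (no peptide added) from a sample reading, as described above.
    The numbers below are hypothetical and only illustrate the bookkeeping."""
    return max(sample_ng_ml - background_ng_ml, 0.0)

background = 0.55                              # control mitochondria alone (ng/mL)
aggregates_alone = net_ccr(2.30, background)   # e.g. mitochondria + oligomers
aggregates_ppe = net_ccr(0.90, background)     # e.g. oligomers pre-incubated with PPE
inhibition_pct = 100.0 * (1.0 - aggregates_ppe / aggregates_alone)
print(f"net CCR: {aggregates_alone:.2f} vs {aggregates_ppe:.2f} ng/mL "
      f"(~{inhibition_pct:.0f}% inhibition)")
```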
Mitochondrial Swelling Assays
Mitochondrial swelling was determined by measuring changes in mitochondrial volume as described [16]. Briefly, mitochondria (1-2 mg/mL of protein) were incubated in 80 µL of respiration buffer containing 10 mM HEPES, 5 mM succinate, 250 mM sucrose, 1 mM ATP, 0.08 mM ADP, 2 mM K 2 HPO 4 , pH 7.5 at 25 °C. Baseline levels of absorbance at 540 nm (OD ~0.35-0.40) were measured for 10 min to ensure stability of mitochondria, and the optical density monitored for 60 min after the addition of oligomeric peptide (Aβ42, α-syn, or tau). Where needed, extracts were incubated with the protein aggregates for 10 min before being added to the mitochondria.
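The conversion of an A540 kinetic trace into a "percentage of aggregate-induced swelling", as reported in Figure 2, can be sketched as follows; the traces and the 10-point baseline window are assumptions for illustration only, not the authors' analysis script.

```python
import numpy as np

def maximal_swelling_percent(trace_extract, trace_aggregate, baseline_window=10):
    """Express the absorbance drop (A540) over 1 h in the presence of extract as a
    percentage of the drop induced by the protein aggregates alone.
    Each trace is a 1-D array of A540 readings; the first `baseline_window`
    points are treated as the pre-addition baseline."""
    def delta(trace):
        trace = np.asarray(trace, dtype=float)
        return trace[:baseline_window].mean() - trace[-1]   # positive = swelling
    return 100.0 * delta(trace_extract) / delta(trace_aggregate)

# Hypothetical kinetic traces (A540): small drop with PPE, larger drop with Aβ42 alone
abeta_alone = np.linspace(0.38, 0.26, 70)
abeta_ppe   = np.linspace(0.38, 0.37, 70)
print(f"{maximal_swelling_percent(abeta_ppe, abeta_alone):.0f}% of Aβ42-induced swelling")
```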
Preparation of Mito-Mimetic Liposomes and Vesicle Leakage Assays
Lipids in chloroform were all purchased from Avanti Polar Lipids (Alabaster, AL, USA). Briefly, the lipids were mixed at the following ratios (by % weight): 45 PC (phosphatidylcholine), 25 PE (phosphatidylethanolamine), 10 PI (phosphatidylinositol), 5 PS (phosphatidylserine), 15 CL (cardiolipin), which mimics the composition of the outer mitochondrial contact sites and the inner mitochondrial membrane [68,69]. Large unilamellar vesicles (LUVs) loaded with Oregon Green ® 488 BAPTA-1 fluorophore (OG; ThermoFisher Scientific, Waltham, MA, USA) were prepared using the detergent-dialysis method as described previously [33]. The size and uniformity of the vesicle population were checked using a Zetasizer Nano S dynamic light scattering (DLS) device (Malvern, Worcestershire, UK). The vesicles were relatively uniformly sized with an average diameter of 87 ± 20 nm, and hence categorized as LUVs.
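For readers reproducing the lipid mixture, the sketch below simply converts the weight percentages listed above into per-lipid masses for a chosen total lipid mass; it is a bookkeeping aid under stated assumptions, not part of the published protocol.

```python
# Mito-mimetic lipid mixture (% by weight) as listed in the text.
MITO_MIX = {"PC": 45, "PE": 25, "PI": 10, "PS": 5, "CL": 15}

def lipid_masses(total_mg):
    """Return the mass (mg) of each lipid needed for a given total lipid mass,
    using the weight percentages of the mito-mimetic mixture. Illustrative only;
    it does not reproduce the detergent-dialysis LUV preparation itself."""
    return {lipid: total_mg * pct / 100.0 for lipid, pct in MITO_MIX.items()}

print(lipid_masses(10.0))  # {'PC': 4.5, 'PE': 2.5, 'PI': 1.0, 'PS': 0.5, 'CL': 1.5}
```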
Planar Lipid Bilayer Electrophysiology
Ion current across planar lipid bilayers was recorded using single-channel electrical recordings on an Ionovation Compact automated bilayer workstation (Ionovation GmbH, Osnabrück, Germany) as described previously [17,70]. Mito-mimetic bilayers were formed using the same defined lipid ratios as for the LUV preparation (45 PC/25 PE/10 PI/5 PS/15 CL) by spreading the lipid up-and-down ("painting technique") across a ~120 µm aperture in a Teflon septum (Ionovation GmbH, Osnabrück, Germany) separating cis and trans compartments containing electrolyte (250 mM KCl, 10 mM MOPS/Tris, pH 7.2). Formation of the bilayer membrane was verified throughout the experiment, visually using a built-in low-magnification microscope and by taking capacitance measurements. In previous studies, this membrane composition was shown to be stable for at least 2 h (typical capacitance of 50-70 pF and a conductance of 12-14 pS) [16,17]. Oligomeric peptide preparations of Aβ42, α-syn or tau were added to the electrically grounded cis-chamber just below the bilayer. Experiments were performed with peptide aliquots that had not been freeze-thawed more than once. Chambers contained magnetic stirrers to facilitate oligomer incorporation into the bilayer. To evaluate the effect of extracts on amyloid pore formation, the aggregate preparation was preincubated for 15 min with 1 µg/mL PPE or 0.5 µg/mL BTE before introducing into the electrolyte solution. Preliminary experiments had determined that at these concentrations the extracts caused no increase in ionic current over baseline over at least 4 h of recording (n = 3 for each extract). Measurements of transmembrane currents were recorded in applied ±40 mV voltage clamp mode using a HEKA ® EPC10 amplifier with a sampling frequency of 15 kHz. Data acquisition was carried out using Patchmaster software version 2x90 (HEKA, Lambrecht/Pfalz, Germany).
Dot Blot Assay
Dot blot assays were performed using the fibril-specific OC antibody [71]. Briefly, samples of 4 µL containing Aβ or α-syn were spotted onto a nitrocellulose membrane (Hybond-ECL, GE Life Sciences) and after air-drying, membranes were blocked with 2.5% BSA in Tris-buffered saline containing 0.1% (v/v) Tween-20 (TBS-T) for 1 h at room temperature. After rinsing briefly with TBS, membranes were probed with the OC antibody (1:2000 in TBS; AB2286, Millipore, Bedford, MA, USA) for 2 h at room temperature. The membranes were then washed three times for 5 min each with TBS-T on an orbital shaker, and incubated with secondary horseradish peroxidase-conjugated anti-rabbit antibody (1:5000 in TBST) for 1 h at room temperature. Three subsequent washes were performed with TBS-T and the last wash with TBS only, for 5 min each. Lastly, the blots were developed using the ECL immunoblotting kit (RPN2108, GE Life Sciences, Little Chalfont, United Kingdom) as per manufacturer instructions.
Statistical Analysis
All statistical analyses were performed using GraphPad Prism v8 (GraphPad Software, San Diego, CA, USA). Statistical significance was examined by one-way ANOVA and Bonferroni's multiple comparisons tests (F-values and p-values of one-way ANOVA are provided in Suppl. Table S1). Normality was assessed on all samples subjected to statistical analysis to ensure data met the assumptions of the tests used and statistical outliers identified. The data are presented as means ± standard error of the mean (SEM) unless stated otherwise, with n as the number of independent experiments.
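A generic sketch of the statistical workflow (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) is shown below using SciPy; it approximates, but does not reproduce, the GraphPad Prism procedure, and the group values are hypothetical.

```python
from itertools import combinations
from scipy import stats

# Hypothetical replicate measurements (e.g. cyto c release, ng/mL) for three groups.
groups = {
    "Abeta42":       [2.1, 2.3, 2.4],
    "Abeta42 + PPE": [0.55, 0.62, 0.66],
    "Abeta42 + BTE": [0.70, 0.78, 0.85],
}

# One-way ANOVA across all groups.
f_val, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_val:.2f}, p = {p_val:.4g}")

# Pairwise comparisons with a simple Bonferroni correction, mirroring the
# Bonferroni multiple-comparisons step in spirit only.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_corr = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: corrected p = {p_corr:.4g}")
```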
Conclusions
Extracts from seaweed plants and their bioactive compounds are becoming increasingly recognized as useful resources of molecules to help combat neurological disease of the amyloid type. An effective treatment will likely include a combination of drugs that protect mitochondrial function and prevent amyloid accumulation. Here we showed that Padina pavonica extract is an efficient anti-aggregator of amyloid proteins and protects mitochondria organelles by preserving mitochondrial membrane integrity. Further investigations using cellular models of AD and PD, characteristically involving overexpression of the wild-type or mutant amyloid protein, will be required to evaluate further whether the protection of mitochondria by Padina pavonica extract represents a potential route to combat the pathological effects associated with aggregation in neurodegenerative proteinopathies.
Primary anterior visual pathway germinoma in a 13-year-old boy: A case report
Background: Primary optic nerve and chiasmal germinomas are very rare. These lesions can commonly be mistaken for optic pathway gliomas based on imaging alone. Germinomas are radiosensitive and are cured in most cases. Case Description: We report a rare case of a 13-year-old boy with primary germinoma of the bilateral optic nerves and chiasm who underwent partial surgical resection followed by radiotherapy. Follow-up brain imaging two months after radiotherapy showed interval regression of the tumor. Our literature review identified 12 reported cases of primary anterior visual pathway germinoma that regressed significantly after radiotherapy alone or with chemotherapy. Conclusion: Histologic correlation is essential for appropriate treatment, alleviating symptoms, and avoiding irreversible vision loss.
INTRODUCTION
Intracranial germ cell tumors (GCTs) comprise <1% of all central nervous system (CNS) tumors [1] and 3.8% of pediatric brain tumors [18]. Germinoma is the most frequent tumor type of intracranial GCTs occurring in the pediatric age group [1,8]. The pineal, suprasellar and basal ganglia regions and the thalamus are the commonly reported locations of germinoma [11,21,22]. Distinguishing primary anterior visual pathway germinoma from glioma based on radiologic features has proved to be challenging; moreover, the treatment regimens differ between primary anterior visual pathway germinoma and glioma, which is why tissue diagnosis is crucial [13,21]. The literature demonstrates that intracranial germinoma has favorable outcomes compared to other pediatric malignant tumors [1]. In this case report, we present the case of a 13-year-old boy diagnosed with primary anterior visual pathway germinoma, discussing the associated clinical and radiological findings and reviewing the related published literature.
CASE DESCRIPTION
A 13-year-old boy who was born to second-degree consanguineous parents was not known to have any medical illness. He presented to the emergency department of King Faisal Specialist Hospital and Research Center at Jeddah with a persistent headache for four weeks, followed by progressive visual loss for over two weeks associated with polyuria before the presentation. The parents denied having a similar condition or history of any malignancies in the family. On examination, the pupils were dilated bilaterally with a sluggish reaction in the left eye. The ophthalmologic assessment revealed diminished visual acuity with no light perception in the right eye and light perception in the left eye. A fundoscopic examination showed bilateral optic atrophy. No skin stigmata or café-au-lait spots were inspected. A brain magnetic resonance imaging (MRI) revealed a lobulated enhancing lesion arising primarily from the optic chiasm involving the pre-chiasmatic optic nerves and the hypothalamus, measuring 3.1 × 3.7 × 1.7 cm (anteroposterior × mediolateral × craniocaudal dimensions). The infundibulum was normal. Radiologic findings were most suggestive of optic pathway glioma [Figure 1]. The hormonal assay revealed central hypothyroidism, central hypoadrenalism, central diabetes insipidus (DI), and hyperprolactinemia. The patient was evaluated by a pediatric endocrinologist for hormonal replacement therapy. The patient underwent a right pterional craniotomy and a trans-sylvian approach to the optic chiasm and nerves. Intraoperatively, the optic chiasm and bilateral optic nerves were diffusely thickened with a fleshy appearance. A tumor biopsy was obtained, followed by lesion debulking for size reduction. Postoperative recovery was uneventful. The patient was discharged home in stable clinical condition. The specimen exhibited histological patterns in keeping with germinoma [Figure 2]. The whole spine MRI showed no drop metastatic lesion. Serum levels of alpha-fetoprotein and beta-human chorionic gonadotropin were normal. The case was discussed in a multidisciplinary tumor board with the conclusion of starting the SIOP CNS GCT 96 protocol, with four cycles of carboplatin and etoposide alternating with etoposide and ifosfamide, followed by radiation therapy after evaluation. During the first cycle, the patient developed a significant reaction to etoposide in terms of chest tightness, rash, and low blood pressure. As the patient was unfit for chemotherapy, the treatment plan consisted solely of adjuvant radiotherapy. He received a total of 4600 centigray (cGy) in 29 fractions. After completion of the radiotherapy sessions, the brain MRI revealed an interval reduction in the size of the optic chiasm thickening without residual enhancing lesion [Figure 3]. On the follow-up visit two months after radiotherapy, no vision improvement was noted; otherwise, he was well.
DISCUSSION
GCTs are classified by the 2021 World Health Organization classification of the tumors of the CNS into mature teratoma, immature teratoma, teratoma with somatic-type malignancy, germinoma, embryonal carcinoma, yolk sac tumor, choriocarcinoma, and mixed GCT [14]. Intracranial GCTs are usually diagnosed at 10-21 years old [11]. Germinoma comprises 65-76% of all intracranial GCTs [1,11]. The majority of intracranial GCTs occur in the pineal (48%), suprasellar (37%), and basal ganglia-thalamus (3%) regions; they may also occur intracranially within the ventricle [11,17]. One of the pathogenesis theories that explains intracranial GCTs is a disruption of primordial germ cell migration control, where intracranial dislocation is followed by malignant transformation [7]. The neurological symptoms and signs depend on the lesion's location. Most suprasellar germinomas present initially with DI [2]. Furthermore, the triad of visual deterioration, DI, and hypopituitarism is common in suprasellar lesions [2,13]. Bowman and Farris, in 1990, were the first to report a case of primary chiasmal germinoma, in a 44-year-old man diagnosed by tumor biopsy, who was cured completely by radiotherapy [2,21,22]. The mean age at diagnosis of all reported cases, including our case, was 21 years. Painless progressive vision deterioration was the main complaint of all reported cases. About 58% (n = 7) of those with chiasmal germinoma presented with DI. Hormonal abnormalities were reported in 66% of the anterior visual pathway germinomas [20][21][22]. Moreover, most of the reported cases presented with the triad of visual deterioration, DI, and hypopituitarism. The lesion, when small and solely chiasmal, can be missed on a non-enhanced computed tomography (CT) scan of the brain; thus, an MRI of the brain becomes essential to assess intracranial involvement [2]. The imaging features of anterior visual pathway germinoma can largely overlap with those of optic pathway gliomas on CT and MRI. Suprasellar germinomas typically arise from the hypothalamus, showing extension into the pituitary infundibulum. Both lesions can show variable signal intensity on the different pulse sequences, heterogeneous enhancement, and lack of calcifications. A study by Panyaping et al. showed that the significant difference in apparent diffusion coefficient (ADC) values could be utilized as a distinguishing feature between suprasellar germinomas and chiasmatic/hypothalamic gliomas: germinoma demonstrates high cellularity with densely packed cells on histology, whereas low cellularity is observed in chiasmatic/hypothalamic gliomas. Thus, suprasellar germinoma had a lower average ADC value compared to the minimum ADC value in chiasmatic/hypothalamic gliomas [19]. It is important to distinguish between germinoma and other differential diagnoses of anterior visual pathway lesions because the treatment protocol and prognosis are different [21]. Obtaining a tissue diagnosis is necessary [4]. All formerly reported cases underwent surgical procedures for tumor biopsy. The prognosis is better in germinoma when compared to other GCTs [22]. Radiotherapy is the cornerstone in the management of localized intracranial germinoma, while chemotherapy can be utilized in disseminated disease [13,16]. The survival rate is 91% at 5 and 10 years for those who are treated with radiotherapy [5]. Primary anterior visual pathway germinoma has been reported to regress significantly after radiotherapy alone or with chemotherapy in reported cases, including our case.
CONCLUSION
Primary intracranial optic and chiasmal germinoma should be considered in the differential diagnosis of optic pathway lesions.Brain imaging alone can be suboptimal in differentiating between anterior visual pathway germinoma and gliomas.Histologic correlation is essential for appropriate treatment, alleviating symptoms, and avoiding irreversible vision loss.
Figure 1: (a) Coronal T2-weighted image shows lobulated thickening of the optic chiasm with normal pituitary infundibulum and (b) axial fluid-attenuated inversion recovery shows the involvement of pre-chiasmatic optic nerves and optic tracts, (c) midsagittal contrast-enhanced T1-weighted spoiled gradient recalled echo demonstrates the avid lesion enhancement with involvement of the hypothalamus.
Figure 3: (a) Coronal T2-weighted image shows interval reduction in the size of optic chiasm thickening (b) no residual enhancing lesion on midsagittal contrast-enhanced T1-weighted magnetic resonance image.
Association between myocardial extracellular volume and strain analysis through cardiovascular magnetic resonance with histological myocardial fibrosis in patients awaiting heart transplantation
Background Cardiovascular magnetic resonance (CMR)-derived extracellular volume (ECV) and tissue tracking strain analyses are proposed as non-invasive methods for quantifying myocardial fibrosis and deformation. This study sought (1) to histologically validate myocardial ECV against the collagen volume fraction (CVF) measured from tissue samples of patients undergoing heart transplantation and (2) to detect the correlations between myocardial systolic strain and the myocardial ECV and histological CVF in patients undergoing heart transplantation. Methods A total of 12 dilated cardiomyopathy (DCM) and 10 ischaemic cardiomyopathy (ICM) patients underwent T1 mapping with the Modified Look Locker Inversion recovery (MOLLI) sequence, T2 mapping and ECV. Myocardial systolic strain, including left ventricular global longitudinal (GLS), circumferential (GCS) and radial strain (GRS), were quantified using CMR cine images with tissue tracking analysis software. Tissue samples were collected from each of 16 segments of the explanted hearts and were stained with picrosirius red for histological CVF quantification. Results A strong relationship was observed between the global myocardial ECV and histological CVF in the DCM and ICM patients based on a per-patient analysis (r = 0.904 and r = 0.901, respectively, p < 0.001). In the linear mixed-effects regression analysis, ECV correlated well with the histological CVF in the DCM and ICM patients on a per-segment basis (β = 0.838 and β = 0.915, respectively, p < 0.001). In the multivariate linear regression analysis, histological CVF was the strongest independent determinant of ECV in the patients awaiting heart transplantation (standardised β = 0.860, p < 0.001). However, the T2 time, GLS, GCS and GRS showed no significant associations with ECV and CVF in the patients awaiting heart transplantation. Conclusions ECV derived from CMR correlated well with histological CVF, indicating its potential as a non-invasive tool for the quantification of myocardial fibrosis. Additionally, impaired myocardial systolic strains were not associated with the ECV and CVF in the patients awaiting heart transplantation.
Background
Myocardial fibrosis is a common feature and the pathological basis of a variety of heart diseases, regardless of aetiology [1][2][3][4]. Myocardial fibrosis also leads to myocardial stiffness and dysfunction, resulting in the progression of heart failure and adverse clinical outcomes [2][3][4]. However, myocardial fibrosis might be reversible and has been proposed as a potential therapeutic target and prognostic factor [5,6]. Therefore, detecting and quantifying myocardial fibrosis play an important role in diagnostic, prevention and prognostic assessments of cardiac diseases.
Cardiovascular magnetic resonance (CMR) imaging is a reliable non-invasive imaging modality that is widely used to evaluate cardiac morphology, function and tissue characterisation. CMR with late gadolinium enhancement (LGE) is a well-established modality for detecting regional myocardial fibrosis associated with adverse cardiovascular outcomes [7][8][9][10]. However, LGE cannot quantify diffuse myocardial fibrosis due to the lack of a remote myocardium as a reference. Recently, the CMR T1 mapping technique has emerged as a non-invasive modality for quantifying myocardial fibrosis by measuring myocardial extracellular volume (ECV) and native T1 time [11][12][13][14][15][16][17][18][19][20][21]. Nevertheless, previous studies have shown relatively high variability in native T1 time for quantifying myocardial fibrosis. Studies by Lee et al. [20] and Bull et al. [21] demonstrated that native T1 mapping correlated with diffuse myocardial fibrosis by biopsy in patients with aortic stenosis. In contrast, Ravenstein et al. [14] reported no significant correlation between native T1 times and histological myocardial fibrosis at 3T. Compared to native T1 mapping, myocardial ECV, as derived from myocardial and blood pre- and post-contrast T1 relaxation time changes, has been validated as a preferred method for measuring extracellular matrix expansion [12][13][14][15][16][17][18][19]. While the correlation between ECV and histological collagen volume fraction (CVF) has been validated using endomyocardial biopsy, so far only sparse data exist on whole-heart histological validation from explanted hearts in patients undergoing heart transplantation. Furthermore, the role of T2 mapping in the histological validation of ECV remains uncertain.
Additionally, myocardial deformation analysis can supply useful information for the evaluation of myocardial function, which is very important in the management of patients with heart failure [22][23][24]. CMR tagging is considered a reference standard for the assessment of myocardial strain [25]. However, additional acquisition sequences and time-consuming protocols have limited its clinical application. Recently, new CMR tissue tracking technology, which agrees well with CMR tagging, has allowed for the assessment of global and regional myocardial strain by tracking the endocardial and epicardial borders during cardiac cycles using cine images; this technology has a higher signal-to-noise ratio (SNR) and a lower investment of time [22,23,26]. Currently, the relationships between myocardial systolic strain and CMR-derived ECV and histological CVF remain to be explored.
Therefore, the purposes of this study were to examine the relationship between CMR-derived ECV and histological CVF measured from explanted hearts and to explore the role of T2 mapping in the histological validation of ECV. Additionally, we aimed to determine whether the alterations of myocardial systolic strain are associated with ECV and histological myocardial fibrosis in patients undergoing heart transplantation.
Study population
Between June 2016 and July 2017, 40 consecutive patients with dilated cardiomyopathy (DCM) or ischaemic cardiomyopathy (ICM) on the heart transplant waiting list were referred for CMR. Of the 40 patients, 5 patients with DCM were excluded from CMR due to a pacemaker; 5 DCM and 4 ICM patients were unable to complete CMR because of difficulties with breath-holding; and 2 DCM and 2 ICM patients lacked CMR images because they underwent CMR examinations in other hospitals. Thus, a total of 12 DCM and 10 ICM patients undergoing electrocardiogram, echocardiography, invasive coronary angiography, CMR and heart transplantation were included in the present study. The DCM diagnosis was based on (1) the presence of left ventricular (LV) dilatation with an increased LV end-diastolic volume index (EDVI) by CMR; (2) systolic dysfunction with a reduced LV ejection fraction (LVEF) < 35% and symptomatic heart failure with a New York Heart Association (NYHA) functional class III or greater; and (3) the absence of coronary artery disease by coronary angiography or subendocardial LGE indicating previous myocardial infarction [27,28]. For all the ICM patients, coronary angiography was performed to diagnose coronary artery disease and LV systolic dysfunction with an LVEF ≤35%. ICM was diagnosed based on patient clinical histories as well as electrocardiogram (ECG), echocardiography, CMR, cardiac positron emission tomography (PET), invasive x-ray coronary angiography and histological samples [29]. Fifteen age- and sex-matched healthy subjects who responded to advertisements were recruited to participate in this study. The inclusion criteria included no known history of cardiovascular diseases, hypertension or diabetes mellitus, normal electrocardiography, and normal cardiac morphology, function and tissue characterisation (without LGE) by CMR. The exclusion criteria for all the subjects included renal insufficiency with an estimated glomerular filtration rate (eGFR) < 30 mL/min/1.73 m 2 , an allergy to the contrast materials, and contraindications to CMR, including severe claustrophobia and device implantation. This study was approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Written informed consent was obtained from all the participants.
CMR imaging protocol
All the subjects underwent standard CMR examinations with a 1.5T scanner (MAGNETOM Aera, Siemens Healthineers, Erlangen, Germany). The cine images included the acquisition of three long-axis slices (two-, three-, and four-chamber) and a stack of short-axis slices covering the entire LV using a balanced steady state free precession (bSSFP) sequence.
CMR image analysis
CMR images were analysed on a dedicated workstation using commercial software (Argus, Siemens Healthineers). Cardiac volumetric and functional parameters were quantified based on manual delineation of the endocardial and epicardial borders using a stack of continuous short-axis slice cine images (after excluding papillary muscles from the myocardium). All the parameters were indexed to the body surface area (BSA). The left ventricular EDVI, end-systolic volume index (ESVI), EF, stroke volume index (SVI), cardiac index and myocardial mass index were obtained automatically. The haematocrit was obtained through a blood sample analysis on the day of the CMR scanning. The ECV maps were automatically calculated from pre- and post-contrast T1 times and haematocrit using a prototype inline processing function from Siemens. The myocardial T1, T2 times and ECV measurements were determined by drawing a region-of-interest (ROI) in each segment of each subject on a dedicated workstation with an ROI measuring tool (Siemens Healthineers, Erlangen, Germany), according to the 16-segment model from the American Heart Association (AHA) [30]. ROIs for all the subjects were drawn in a mid-wall region of the myocardium to minimise partial volume effects at the epicardial and endocardial borders. The ROIs were copied between the pre- and post-contrast T1, T2 and ECV maps. Segments with artefacts, including poor breath-holding, cardiac motion and off-resonance artefacts, as well as contamination from the surrounding lung, liver, blood and epicardial fat, can lead to inaccurate T1 or ECV measurements and must be excluded. The image quality of the myocardial segments was visually divided into three levels (good, acceptable and poor) by two observers (YC and YKC), and discordant opinions were resolved by the third observer (HSS) to reach a consensus review [31]. The poor images were considered non-evaluable segments and were excluded from further analysis. The global myocardial T1, T2 and ECV values were calculated as an average of all evaluable segments for each subject. The method used to measure the T1 and ECV values is shown in Fig. 1. One observer measured the native T1 time and ECV and repeated the measurement after 4 weeks for intra-observer variability analysis. The other observer performed the measurement again, using the same method, for the inter-observer variability analysis.
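For clarity, the quantity produced by such an inline ECV tool follows the standard two-point formula relating the change in myocardial and blood-pool relaxation rates to the haematocrit. The sketch below is only an illustrative implementation of that formula; the function name and the numerical values are invented for the example and are not taken from this study or from the vendor software.

```python
def ecv_from_t1(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, haematocrit):
    """Extracellular volume fraction from native and post-contrast T1 times (ms).

    ECV = (1 - Hct) * (dR1_myocardium / dR1_blood), where dR1 = 1/T1_post - 1/T1_pre.
    """
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - haematocrit) * d_r1_myo / d_r1_blood

# Illustrative segment-level values (ms) and haematocrit, not data from this study.
ecv = ecv_from_t1(t1_myo_pre=1030.0, t1_myo_post=450.0,
                  t1_blood_pre=1550.0, t1_blood_post=280.0,
                  haematocrit=0.42)
print(f"ECV = {ecv * 100:.1f}%")  # roughly 25% for these example values
```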
The LGE was quantified using a threshold of 4 standard deviations (SD) above the mean signal intensity of the remote normal myocardium within the same slice [32]. The LGE images were assessed by an independent observer who was blinded to mapping and histological data. All the LV myocardium segments were classified as segments with and without LGE.
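As an illustration of the n-SD thresholding rule described above, the snippet below flags myocardial pixels whose signal intensity exceeds the mean of a remote normal-myocardium region by more than 4 SD. It is a minimal sketch operating on synthetic data, not the software actually used for LGE quantification in this study.

```python
import numpy as np

def lge_mask(image, myocardium_mask, remote_mask, n_sd=4.0):
    """Binary LGE mask: myocardial pixels brighter than remote mean + n_sd * SD."""
    remote = image[remote_mask]
    threshold = remote.mean() + n_sd * remote.std()
    return myocardium_mask & (image > threshold)

# Illustrative synthetic example.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, size=(128, 128))   # simulated myocardial signal
img[40:50, 40:50] += 80.0                        # simulated region of enhancement
myo = np.ones_like(img, dtype=bool)
remote = np.zeros_like(myo)
remote[80:120, 80:120] = True                    # remote normal myocardium ROI
mask = lge_mask(img, myo, remote)
print(f"LGE extent: {100 * mask.sum() / myo.sum():.1f}% of myocardial pixels")
```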
Myocardial deformation analysis was performed using dedicated tissue tracking software (CVI42, Circle, Calgary, Canada). Myocardial systolic strain was quantified through manual delineation of the LV endocardial and epicardial borders in a stack of short-axis and three long-axis slice cine images, with the initial contour placed at end-diastole, as previously described [33]. The papillary muscles were excluded from the myocardium. The contours were manually corrected. The results of the LV GLS, GCS and GRS were automatically calculated and displayed for further analysis.
Histological analysis
After each patient underwent heart transplantation, the explanted hearts were cut at the apical, mid, and basal LV levels using the positions of CMR T1 mapping slices as the reference. Next, 16 tissue blocks were immediately taken from the LV apical, mid, and basal slices of each explanted heart; the positions of the tissue samples matched the sites of CMR T1 mapping of the 16 LV segments according to the AHA 16-segment model [30]. The tissue samples were immediately fixed with 10% buffered formalin, embedded in paraffin, and stained with picrosirius red. The stained sections were photographed at high-power (× 200) magnification after excluding artefacts and perivascular fibrosis tissues, as previously described [12]. Twelve high-power fields from each stained section were analysed using Image-Pro Plus 6.0 software (Media Cybernetics, Rockville, Maryland, USA). As shown in Fig. 2, the collagen was stained red and the myocytes yellow. A colour-threshold macro-based calculation algorithm was used to separate collagen from myocardium. The collagen area was obtained from a combination of SD from mean signal and isodata automatic thresholding, as previously described [16,17]. The histological CVF was defined as the percentage of collagen area divided by the total myocardial area. The average CVF of the 12 high-power fields was calculated as the myocardial fibrosis of each segment. All the tissue samples were analysed by an observer who was blinded to the CMR imaging results.
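Conceptually, the CVF computation is a pixel-counting exercise: the collagen-positive (red-stained) area is divided by the total tissue area in each field. The commercial macro used in the study is not reproduced here; the snippet below is a simplified, hypothetical sketch using a crude red-dominance threshold, with invented threshold values.

```python
import numpy as np

def cvf_percent(rgb, red_margin=30, tissue_min=40):
    """Rough CVF estimate for one picrosirius-red field: collagen pixels / tissue pixels * 100.

    rgb: uint8 array of shape (H, W, 3). Threshold values are illustrative only.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    tissue = (r + g + b) > 3 * tissue_min                      # exclude empty background
    collagen = tissue & ((r - np.maximum(g, b)) > red_margin)  # red-dominant pixels
    return 100.0 * collagen.sum() / max(int(tissue.sum()), 1)

# Per-segment fibrosis would then be the mean over the 12 high-power fields, e.g.:
# segment_cvf = np.mean([cvf_percent(field) for field in fields])
```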
Statistical analysis
Normality was detected using the Kolmogorov-Smirnov test. Continuous variables are presented as the mean ± SD, and categorical variables as percentages or frequencies.
Comparisons between multiple groups were analysed using one-way ANOVA or the Kruskal-Wallis test, with the Bonferroni correction as the post hoc test, as appropriate. Categorical variables were analysed using the Chi-square test or the Fisher exact test. Correlation between ECV and CVF was assessed using Pearson's or Spearman's correlation coefficients, as appropriate. A linear mixed-effects regression analysis was used to assess the relationship between ECV and CVF for per-segment analyses. Univariate and multivariate linear regression analyses, with a stepwise algorithm, were performed to detect the determinants of ECV and CVF in the patients awaiting heart transplantation. Intra- and inter-observer variability of native T1 times, ECV and myocardial strain were assessed using an intra-class correlation coefficient (ICC) with 95% confidence intervals (CI). For all the tests, a two-sided p value < 0.05 was considered statistically significant. Statistical analyses were performed with IBM SPSS Statistics 21.
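To make the per-segment analysis concrete, a linear mixed-effects model treats the patient as a random effect so that the 16 segments from one heart are not counted as independent observations. The sketch below shows how such an analysis could look in Python with statsmodels and SciPy; the file and column names are hypothetical, and this is not the SPSS procedure actually used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# Hypothetical data: one row per evaluable segment, with columns
# 'patient_id', 'ecv' (%) and 'cvf' (%).
df = pd.read_csv("segments.csv")

# Simple correlation between ECV and CVF.
r, p = pearsonr(df["ecv"], df["cvf"])
print(f"Pearson r = {r:.3f} (p = {p:.4f})")

# Per-segment analysis: linear mixed-effects model with a random intercept per patient.
model = smf.mixedlm("cvf ~ ecv", data=df, groups=df["patient_id"]).fit()
print(model.summary())
```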
Clinical characteristics of the study population
The baseline characteristics of the study population are listed in Table 1. The mean ages of the healthy subjects, DCM and ICM patients were similar (50.5 ± 6.5 vs. 49.3 ± 16.0 vs. 54.9 ± 7.3 years, p = 0.475). There were no significant differences in sex (86.7% vs. 91.7% vs. 90.0% males, p = 1.000), height, weight, body mass index (BMI), BSA and haematocrit. Heart rate was significantly higher in the DCM patients than in the healthy controls. The DCM and ICM patients showed a mean symptom duration of 6.5 and 2.2 years, respectively. The mean time between heart transplantation and CMR was 15 days (range: 0-32 days) and 27 days (range: 1-114 days) in patients with DCM and ICM, respectively. Table 2 shows the CMR parameters of the study population. As expected, the DCM and ICM patients had significantly lower LVEF and greater EDVI, ESVI and myocardial mass index than the healthy controls (p < 0.05 for all). No significant differences in the LVSVI and cardiac index were observed between the three groups. Myocardial systolic strain analysis demonstrated that the LV GLS, GCS and GRS were reduced in the DCM and ICM patients, and LGE was present in all the DCM and ICM patients. The mean native T1 time, T2 time and ECV were significantly higher in the DCM and ICM patients than those in the controls (p < 0.05 for all).
CMR parameters comparison
In the whole myocardium analysis, 11 of 192 (5.7%) myocardial segments in the DCM patients and 12 of 160 (7.5%) myocardial segments in the ICM patients were excluded due to artefacts identified by image quality assessment. In total, 64 segments with LGE and 117 segments without LGE from the DCM patients and 68 segments with LGE and 80 segments without LGE from the ICM patients were included in the analyses ( Table 3). The mean native T1 time, T2 time and ECV of all segments and segments without LGE were significantly higher in the DCM and ICM patients compared with those in the controls (p < 0.001 for all). The ECV of all the segments from the ICM patients was significantly greater than that in the segments without LGE (p < 0.05).
Histological validation
The mean histological CVF was 14.3 ± 4.6% (range: 9.7-23.8%) and 17.0 ± 5.5% (range: 9.6-26.2%) in the DCM and ICM patients, respectively. Figure 3 shows a segmental comparison of the LV myocardial CMR-derived ECV and histological CVF as the mean ± SD according to the AHA 16-segment model in patients awaiting heart transplantation. Based on the per-patient analysis, the ECV values strongly correlated with the histological CVF in the DCM and ICM patients (r = 0.904, p < 0.001 and r = 0.901, p < 0.001, respectively; Fig. 4). The per-segment analysis also showed that the ECV correlated well with the histological CVF in the DCM and ICM patients (r = 0.750, p < 0.001 and r = 0.806, p < 0.001, respectively; Fig. 4). After excluding the segments with LGE, the ECV was moderately correlated with the histological CVF in the DCM and ICM patients (r = 0.525, p < 0.001 and r = 0.650, p < 0.001, respectively; Fig. 4). The per-segment analysis using linear mixed-effects regression showed that there was a significant relationship between ECV and histological CVF in the DCM and ICM patients (β = 0.838, p < 0.001 and β = 0.915, p < 0.001, respectively). Table 4 shows the results of the univariate and multivariate linear regression analyses of the ECV, CVF and other indices in the patients awaiting heart transplantation. In the univariate regression analysis, the ECV was associated with sex, N-terminal pro-brain natriuretic peptide (NT-proBNP), time between CMR and transplantation and histological CVF in patients awaiting heart transplantation. However, in the multivariate regression analysis, the independent determinants of ECV were sex and histological CVF (standardised β = 0.250, p = 0.007 and standardised β = 0.860, p < 0.001, respectively). In addition, the univariate regression analysis showed that histological CVF was correlated with the NT-proBNP, time between CMR and transplantation, native T1 time and ECV in the patients awaiting heart transplantation. The multivariate regression analysis demonstrated that the ECV was the independent determinant of the histological CVF (standardised β = 0.911, p < 0.001). However, no significant associations of ECV or histological CVF with left ventricular GLS, GCS, GRS and T2 mapping were observed in the patients awaiting heart transplantation (p > 0.05 for all).
Discussion
The results of this study demonstrated that (1) myocardial ECV calculated by CMR T1 mapping correlated well with the degree of myocardial fibrosis measured in whole-heart histological samples from the patients undergoing heart transplantation; (2) T2 mapping was increased in the patients awaiting heart transplantation but was not related to myocardial ECV and histological CVF after adjusting for potential confounding factors in the multivariate regression analysis; and (3) in this cohort of patients, the LV GLS, GCS and GRS were decreased, and impaired myocardial systolic strain was not associated with CMR-derived ECV and histological myocardial fibrosis. CMR T1 mapping is increasingly being recommended as a non-invasive diagnostic tool for myocardial tissue characterisation. Previous studies have validated the use of CMR T1 times and myocardial ECV against biopsy samples in patients with severe aortic stenosis or regurgitation, DCM, hypertrophic cardiomyopathy and ICM [12,[14][15][16]34]. Although the different field strengths and CMR T1 mapping techniques limit comparability, previous studies have generally indicated that accurate measurements of ECV calculated by CMR T1 mapping reflected actual myocardial fibrosis or extracellular matrix expansion in patients with a variety of cardiac diseases [12][13][14][15][16][17][18]. Our results are consistent with these studies, and we comprehensively demonstrated good correlations between whole-heart ECV measurements and histological myocardial fibrosis for 22 patients awaiting heart transplantation. To the best of our knowledge, most previous studies have evaluated myocardial fibrosis using an endomyocardial biopsy as the reference standard. Myocardial samples by biopsy can only reflect a few millimetres of subendocardial pathological information, which may be affected by procedure-related tissue distortion [35]. Sampling-induced contraction bands can dislocate intracellular organelles and alter the structural relationship between myocytes and the extracellular matrix [35]. Furthermore, if an endomyocardial biopsy is performed from the right ventricular side of the interventricular septum, the pathological data will not necessarily reflect LV information. However, the above limitations do not exist in the whole-heart histological samples from explanted hearts in this study. Additionally, for endomyocardial biopsy, it is impossible to ensure that samples correspond exactly to the CMR imaging sites, and they might not necessarily be representative of whole-heart myocardial fibrosis. However, our tissue samples were collected from each of 16 segments of the explanted hearts, which might better correspond to the site of CMR T1 mapping and could more accurately provide whole-heart histological validation. Additionally, previous studies by Miller et al. and Iles et al. have validated CMR T1 mapping against histological samples from patients using 6 and 11 explanted hearts, respectively [12,34]. However, the study by Iles et al. only analysed post-contrast T1 times against histological CVF without ECV measurements in a single mid-ventricular short-axis slice, and thus their results can only partially assess myocardial fibrosis due to the influences of renal excretion [34]. ECV corrected by the haematocrit minimises the impact of some of the confounding factors compared with T1 times and can provide accurate information for the quantification of myocardial fibrosis.
In addition, their analysis considered a limited number of patients with a variety of heart disease aetiologies, including DCM, ICM, and restrictive and congenital heart diseases, which might make their results less useful. In our study, we analysed 12 DCM and 10 ICM patients separately and demonstrated good correlations between myocardial ECV and histological CVF, as measured by whole-heart histological samples from these patients. ECV measurements can be an effective alternative for clinical risk stratification and the prognostic evaluation of antifibrotic treatment. Furthermore, we analysed CMR T1, T2 mapping, ECV and myocardial systolic strain in one-stop examinations, without extra images, which provided comprehensive insight into myocardial fibrosis, oedema and cardiac function. Similar to CMR T1 mapping, T2 mapping techniques can also be used to assess myocardial tissue properties. It has been suggested that CMR T2 mapping can be used to detect myocardial oedema in acute myocardial infarction, myocarditis or cardiac allograft rejection [36][37][38]. In the present study, we enrolled a group of end-stage heart failure patients undergoing heart transplantation, and myocardial inflammation might have occurred in these patients. Our results suggest that the T2 times in the DCM and ICM patients were slightly increased compared with those in the healthy subjects. However, in the multivariate regression analysis, the T2 time did not significantly contribute to the ECV measurement, which indicated that myocardial oedema might play a minor role in the expansion of the extracellular space in this study cohort and was unlikely to alter the ECV value. A previous study by Bohnen et al. reported that the optimal cut-off value of the global myocardial T2 time was 60 ms for active myocarditis in a 1.5-T scanner [36]. The T2 value in the present study was lower than the above cut-off value, which suggested that the pathological changes in the study patients were primarily myocardial fibrosis, without obvious myocardial oedema. Therefore, the good correlations between ECV and histological CVF observed in our study demonstrated that CMR-derived ECV is a useful tool for the quantification of myocardial fibrosis.
Recently, an advanced CMR tissue tracking technique was proposed as a non-invasive and accurate modality for myocardial deformation analysis using cine images. Myocardial systolic strains can be used to characterise early myocardial dysfunction in clinical practice [22]. In the present study, the decreased GLS, GCS and GRS in the patients awaiting heart transplantation suggested serious LV myocardial dysfunction. In the multivariate regression analysis, GLS, GCS and GRS were not associated with ECV and histological myocardial fibrosis in patients awaiting heart transplantation. Previous studies have shown relatively high variability in the relationship between myocardial fibrosis and myocardial systolic strains [39,40]. A recent study by Cameli et al. indicated that GLS was associated with the degree of myocardial fibrosis by tissue samples in patients requiring heart transplantation [39]. However, Dusenbery et al. reported that decreased GLS correlated with LGE but not ECV [40]. Our results also showed that decreased LV GLS, GCS and GRS showed no correlation with histological fibrosis or ECV in the patients undergoing heart transplantation. The differences between various studies might be associated with clinical stage, duration, myocardial strain acquisition method, pathogenesis and medical treatment in different diseases. We studied a group of patients with end-stage heart diseases. The pathogenesis and pathological processes are highly complicated and diverse. Myocardial fibrosis could be just one of the many causes of impaired myocardial systolic strain in the study patients. Therefore, further multicentre studies with larger sample sizes are required to validate these results.
Study limitations
Our study evaluated the correlation between ECV and histological CVF using whole-heart tissue samples from 22 patients awaiting heart transplantation. Although the number of patients was limited, the sample sizes were relatively large, given that we used whole-heart tissue samples. Furthermore, we aim to collect more whole-heart tissue samples from patients awaiting heart transplantation at our institution for further study. The time delay between CMR and heart transplantation was a major factor affecting the results. However, the mean time between transplantation and CMR was less than one month in our study, which would not allow for a significant change in the myocardial collagen content [34]. Additionally, in the multivariate analysis, the time between CMR and transplantation was not associated with histological myocardial fibrosis. Although this study validated ECV against histological CVF in whole-heart samples, the tissue samples represented only small myocardial sections, which cannot be accurately located using CMR. Thus, sampling bias still existed. However, this technique is more robust than endomyocardial biopsy, which only reflects the subendocardial part of the myocardium and not the whole myocardium. Finally, excluding patients with pacemakers and with difficulties in breath-holding may have induced a selection bias.
Conclusions
Our results show that CMR-derived ECV correlates well with the histological CVF, indicating its potential use as a novel non-invasive imaging technique for quantifying myocardial fibrosis and for guiding clinical interventions and monitoring clinical therapy. Decreased LV myocardial systolic strain was not related to histological myocardial fibrosis or ECV in the present study.
|
2018-04-23T09:13:12.704Z
|
2018-04-23T00:00:00.000
|
{
"year": 2018,
"sha1": "b036d319a3e745f5ebffe4873fe84481448274fa",
"oa_license": "CCBY",
"oa_url": "https://jcmr-online.biomedcentral.com/track/pdf/10.1186/s12968-018-0445-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b036d319a3e745f5ebffe4873fe84481448274fa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
2389869
|
pes2o/s2orc
|
v3-fos-license
|
e-PROCUREMENT ADOPTION BY SUPPLIERS: Enablers, Barriers and Critical Success Factors
This paper presents current research aimed at identifying the enablers, barriers and critical success factors (CSFs) for e-procurement adoption by suppliers. Any successful e-procurement system needs suppliers willing to trade electronically. We present a review of the current literature on e-procurement, focusing on the barriers, enablers and CSFs already identified. A research methodology is proposed to study the problem, and this work will contribute to better addressing the issues faced by suppliers in e-procurement adoption.
INTRODUCTION
E-procurement allows buyers to automate transactions and focus on more strategic activities. It has been defined as "the use of electronic technologies to streamline and enable the procurement activities of an organization" (OGO, 1999).
E-procurement solutions contribute to a better organizational performance, allowing reductions in cost and time when ordering from suppliers, and helping to achieve a well-integrated supply chain with the objective of reaching the market as quickly as possible with the right products and services in the most cost-effective manner (Hawking, Stein, & Wyld, 2004).
Although there are many benefits in e-procurement solutions, there also appear to be some barriers to their successful implementation. Previous research shows that numerous companies still prefer the traditional methods (telephone, fax and e-mail) to communicate and exchange information with business partners. Companies need to better understand how to implement e-procurement solutions in an efficient and effective manner. Any successful e-procurement system relies on suppliers that are willing and able to trade electronically. Their co-operation is crucial to the project's success. This degree of openness and transparency is new to most organizations, and it requires relevant cultural changes and high levels of trust between the participants (Harris & Dennis, 2004).
Users of e-procurement technologies reported that they can acquire goods over the Internet from only 15 per cent of their supply base (Davila, Gupta, & Palmer, 2003). A report from the EU also confirms that only 13% of EU enterprises are receiving orders online (EU, 2005). If suppliers are not involved from the beginning, then a low adoption rate can constrain buyers from leveraging the full associated capabilities of e-procurement solutions. The lack of a critical mass of suppliers accessible through the organization's e-procurement system might limit the network effect that underlies these technologies, further delaying the acceptance and adoption of the technology.
Suppliers need to be convinced about e-procurement benefits. E-procurement may not guarantee additional sales, but it can provide many other benefits, such as lower transaction costs and faster payment through better invoice processing. However, small and medium enterprises (SMEs) in particular may experience a number of barriers that need to be overcome. The main issues must be addressed to achieve effective electronic trading between companies.
Supply Chain Management and Collaboration
Supply chain management includes the planning and management of all activities involved in the sourcing and acquisition process. According to the Council of Supply Chain Management Professionals (CSCMP, 2008), supply chain management encompasses the planning and management of all activities involved in sourcing and procurement, conversion, and all logistics management activities. It also includes coordination and collaboration with channel partners, which can be suppliers, intermediaries, third-party service providers, and customers. In essence, supply chain management integrates supply and demand management within and across companies.
The Web and associated technologies facilitate the constitution of collaborative networks, which enable collaboration and sharing of information among companies.Collaboration may range from intra-organizational to inter-organizational and across the boundaries of the organization.
Procurement is an integral component of an organization's supplier relationship management strategy, and is often the first and major step towards trading partner collaboration (Gilbert, 2000). Collaboration between supply chain members also requires the exchange of sensitive information. Teo and Ranganathan (2004) argued that the heart of B2B e-commerce lies in inter-organizational collaboration and that a fundamental shift in the organizational mindset is required to collaborate and engage in effective B2B e-commerce.
e-Commerce
E-commerce is the process of buying, selling or exchanging products or services across the internet. From a business perspective, e-commerce is much more than simply buying and selling goods and services. It also represents collaboration with business partners and electronic transactions between organizations. Different models of e-commerce have been presented in order to describe the nature of these transactions. When all members are companies or other organizations, the type of association is named business-to-business (B2B). A B2B association can be supported by the company's private network or have an extranet basis, taking advantage of the internet to establish multiple networks.
Electronic marketplaces allow collaboration and data sharing within or across industries. They are attractive to both buy- and sell-side organizations for different reasons. On the buy side, they provide demand aggregation, enable quick and easy supplier comparisons, and allow activity reporting, strategic sourcing, and so on. On the sell side, they provide low-cost introduction to customers, better capacity management and efficient inventory, production management via demand aggregation, and analytics that help suppliers position their product better in the market.
There are several criteria for classifying e-marketplaces. Kaplan and Sawhney (2000) offered a classification based on the type of goods and the way these goods are purchased. An e-marketplace can either provide indirect goods that support the business process or the direct goods used in production. The way the buying process occurs can fall into two categories: long-term contractual buying between two entities, or a one-time (spot) purchase with no long-term relationship between the two parties.
It is also possible to classify e-marketplaces based on their degree of openness. E-marketplaces with a high degree of openness are those accessible to any company. At the other end of the spectrum, e-marketplaces with a low degree of openness are accessible only upon invitation. Based on this distinction, Hoffman, Keedy and Roberts (2002) recognized three main types of e-marketplaces: public e-marketplaces, consortia and private exchanges. E-procurement is closely related to suppliers' selling activities. Kim and Shunk (2004) consider e-procurement systems as various internet B2B commerce systems, located at the buyer, the supplier or a third party, with the following categorization: buyer-centric e-procurement systems; supplier-centric e-procurement systems; neutral e-marketplaces; and end-to-end electronic document/message exchange systems.
It is common to distinguish direct procurement from indirect procurement. The role of procurement and the emerging use of large information systems to conduct e-procurement was analyzed by Hawking and colleagues (2004), who presented the results of a survey of 38 major Australian organizations. The main results show that direct procurement is heavily dependent upon traditional practices, while indirect procurement is more likely to use "e" practices. Dedrick (2008) also found that the use of electronic procurement is associated with buying from more suppliers for custom goods but from fewer suppliers for commodity goods. In an efficiently functioning transparent market, few suppliers are sufficient for commodity goods, whereas for custom goods the need for protection from opportunistic vendors leads to the use of more suppliers.
e-Procurement Adoption
Companies are approaching e-procurement adoption with different strategies. Davila and colleagues (2003) identified two main types of companies. The first type is moving aggressively to adopt e-procurement technologies, frequently experimenting with various solutions. The second type adopts a more conservative strategy by selectively experimenting, typically with one technology (Davila, Gupta, & Palmer, 2003).
Also, an increasing number of public institutions have identified electronic purchasing as a priority for e-government.
Many have implemented or are implementing e-procurement systems. The adoption of e-procurement in public administration has a huge impact, since governments spend large amounts on acquiring materials and services (Pereira & Alturas, 2007).
Country maturity in ICT also plays an important role in e-procurement adoption by companies. A report from the European Union states that e-business activity is higher in companies belonging to countries that are more advanced in their Information Society than in those that are not so advanced (EU, 2005).
Enablers for Supplier Adoption
In this research, we consider enablers to be the factors identified as having a positive influence on the adoption of e-procurement by suppliers. By understanding the main enablers that influence suppliers, companies can develop strategies to leverage supplier adoption in their e-procurement implementations.
Suppliers need to become aware of the benefits resulting from their adoption of e-procurement. For suppliers, the adoption of e-procurement may be an opportunity to expand their market. According to Sharifi, Kehoe, and Hopkins (2006), they will find e-procurement attractive because they could easily and cost-effectively reach new customers, improving their sales. Also, on private e-marketplaces, by making the electronic catalogue accessible in a direct way to all employees and buyers, or by using e-hubs and e-commerce communities, the seller can greatly increase the number of sales (Berlak & Weber, 2004).
The integration between the buyer and the seller systems allows information to be exchanged automatically. Therefore, it is possible for the buyer to place an order more quickly. This will also reduce the chance of errors, which are common when an order depends on paper (Berlak & Weber, 2004). Linking to a customer directly and collaborating to ensure accurate and on-time delivery provides better service and lower overall procurement costs to the customer, and can result in much more collaborative buyer-seller relationships (Neef, 2001). Carayannis and Popescu (2005) analyzed and evaluated some electronic procurement projects carried out by the European Commission. They concluded that the transparency of the EU public procurement market was improved by a systematic use of electronic tendering. The improvements in transparency allow the involved stakeholders to know how the system is intended to work, and all potential suppliers have the same information about procurement opportunities, award criteria, and decisions.
In considering how e-procurement will impact buyer-seller relationships, Ellram and Zsidisin (2002) argue that close buyer-supplier relationships have a strong positive impact on the adoption of e-procurement. Therefore, while e-procurement may not deliver improved levels of trust, it has been found that e-procurement transactions are more likely to be established first between partners in high-trust relationships. In addressing this issue, both Croom (2001) and Kumar and Qian (2006) support the view that increased use of e-procurement tends to create more effective customer-supplier relationships over time.
Barriers for Supplier Adoption
For the purpose of this study, barriers are considered to be the factors that hinder the successful adoption of e-procurement by suppliers.
Cooperation with suppliers requires them to meet the business criteria that organizations have set to accept them in their networks. Since some of the business models associated with e-procurement technologies clearly envision the use of suppliers with whom the buyer has not previously transacted business, companies need to develop mechanisms that provide the buyer with assurances that the supplier meets or exceeds recognizable and industry-enforced standards (Davila, Gupta, & Palmer, 2003). Buyers are concerned that e-procurement technologies will push prices down to the point where suppliers cannot invest in new technology or product development, upgrade facilities, or add additional productive capacity. Additional price pressures can even push suppliers with a poor understanding of their cost structure out of business (Davila, Gupta, & Palmer, 2003). For e-procurement technologies to succeed, suppliers should provide sufficient catalogue choices to satisfy the requirements of their customers. Ideally, suppliers will provide e-catalogues in the formats required by customers, reflecting custom pricing and/or special contractual agreements, and will send updates on a regular basis (Davila, Gupta, & Palmer, 2003).
The majority of the companies believe that barriers include insufficient financial support and a lack of interoperability and standards with traditional communication systems. Developing standards and systems for facilitating effective interoperability with traditional communication systems will help the adoption of e-procurement fairly well, with minimum investment and changes to the business processes through reengineering (Hawking, Stein, & Wyld, 2004).
A study conducted in SMEs revealed a lack of knowledge of e-business related benefits. For those companies, e-business adoption is an incremental process that involves on-the-job learning. This means that companies beginning to do some online sales are educating their staff through experience. Experience will accumulate, and companies will move towards more online activities as they and their business partners become more experienced (Archer, Wang, & Kang, 2008).
Critical Success Factors
The factors that are critical to the successful adoption of e-procurement have been identified based on previous experience and the available literature. These could be defined as the best practices for the successful adoption of an e-procurement solution.
Organizations that are implementing an e-procurement solution should assess the impact of the system on suppliers and their technological readiness to implement the system at their end, providing the services necessary for the system to succeed. It is necessary to put together a supplier adoption team, train the suppliers, and get them synchronized with the organization's implementation (Rajkumar, 2001).
According to Davila and colleagues (2003), providing suppliers with Internet or Intranet access to company internal data, or integrating suppliers' applications with company information systems, both key to supply chain management, is still unusual. This observation reinforces the prudence that companies must demonstrate when integrating e-procurement technologies into existing systems and relationships.
A study conducted in the Swiss market revealed that the lack of supplier involvement and infrastructure to optimize B2B processes was a hindrance to integrating the B2B solution scenarios. Integration solutions appropriate to suppliers are not always offered, and the majority of companies agree that the position of the suppliers is insufficiently considered (Tanner, Wölfle, Schubert, & Quade, 2008).
Some case studies in Scotland and Italy, where a supplier engagement process was developed, documented and facilitated to ensure that suppliers' business and technical requirements were met, resulted in a high incidence of supplier activity. In contrast, the buyer-centric approach adopted in Western Australia meant that suppliers did not understand the benefits of joining the marketplace and were therefore reluctant to join (AGIMO, 2005).
Research Questions and Methodology
This research will provide a better understanding of the issues affecting suppliers within an e-procurement implementation. The research questions were formulated based on the enablers, barriers and CSFs experienced by suppliers when confronted with e-procurement adoption.
The following research questions will be answered:
• What are the major perceived barriers to the adoption of e-procurement by suppliers, and how can they be addressed?
• What are the major perceived enablers to the adoption of e-procurement by suppliers, and how can companies exploit them?
• What are the critical success factors for the adoption of e-procurement by suppliers?
This effort started by reviewing the background to the application of e-procurement, followed by various definitions of e-procurement. Subsequently, we reviewed the literature available on the adoption of e-procurement by suppliers with the objective of developing a theoretical framework for determining the barriers, enablers and possible solutions for the successful supplier adoption of e-procurement. The questionnaire will be pilot tested by e-procurement consultants and academics before being sent out. The proposed framework will be validated with the help of empirical data collected from Portuguese companies. Finally, based on the empirical results and analysis, we will develop a framework for the supplier adoption of e-procurement.
DISCUSSION
Based on the database that we hope to collect, we plan to apply a quantitative approach to identify the enablers and the barriers that influence companies' adoption of e-procurement solutions. Moreover, this empirical evidence could be relevant for managers of companies who seek to better understand and predict the procurement of their products. We hope that companies can leverage their e-procurement implementations by engaging the maximum number of suppliers and successfully collaborating on a win-win basis.
Figure 2: Enablers, Barriers and CSF for e-Procurement Adoption by Suppliers.
|
2017-10-19T08:04:19.075Z
|
2018-08-08T00:00:00.000
|
{
"year": 2009,
"sha1": "f84fc5cdb7f60c6a7ef639522518b6e75871b0fc",
"oa_license": "CCBY",
"oa_url": "https://repositorio.iscte-iul.pt/bitstream/10071/28249/1/conferenceobject_20378.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "0d0cc14ab08e6004a1ea2a7bed48a51ef93fe1a3",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Business",
"Computer Science"
]
}
|
212445898
|
pes2o/s2orc
|
v3-fos-license
|
Effect of prosthetic rehabilitation on oral health-related quality of life of patients with head and neck cancer: a systematic review
Background To review the evidence on the oral health-related quality of life (OHRQoL) of head and neck cancer survivors after they have been treated with prosthetic rehabilitation. Methods Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) were utilized as the framework in designing, implementing and reporting the current review. Search of literature was done electronically using Medline, Embase, and Cochrane databases. The intervention component of the patient, intervention, comparison, outcome (PICO) framework for the current review was the prosthetic rehabilitation performed on the surgically treated head and neck cancer patients (participants); and the outcome was the OHRQoL. Methodological index for non-randomized studies was the assessment tool utilized to report on the quality of the included studies. Results The initial search had identified 799 records and the final level of screening included eight articles. Six studies were experimental in design and two were cross-sectional. The cumulative sample of head and neck cancer cases from the selected studies was 354, with 35.9 (14.9) and 72.4 (8.7) years as the lowest and highest mean ages recorded from the included studies. More male cases (69.5%) were reported than female cases (30.5%), and squamous cell carcinoma was the most commonly diagnosed malignancy. Maxillary reconstruction and implant-supported prostheses were the choice of treatment for most of the cases. Different versions of oral health impact profile (OHIP) constructs were preferred by six studies, while one study utilized the University of Washington quality of life questionnaire and another utilized the European Organization for Research and Treatment of Cancer's Quality of Life Questionnaire. Notably, three studies had compared the OHRQoL scores of head and neck cancer patients with healthy counterparts through a follow-up period ranging from 1 to 2 years. Conclusions The included studies did not provide substantial evidence to demonstrate the improvement in OHRQoL of head and neck cancer patients after prosthetic rehabilitation. More prospective studies are needed with representative samples, robust methodology and a longer follow-up period. The current study provides a direction to the clinical decision-making process and the epidemiological research to enhance patient and public health-related outcomes.
Introduction
Head and neck cancers constituting malignancies of lip, tongue and oral cavity (ICD10: C00-06), nasopharynx, oropharynx and hypopharynx (ICD10: C09-C10), salivary glands (ICD10: C07-08), larynx, and paranasal sinuses (ICD10: C11-C13) are reported with high morbidity rates (1,2). It can be difficult for the diseased to cope and adapt with its physical, psychological and emotional repercussions affecting their general well-being. With an intention to improve longevity it can be equally challenging for the clinicians to manage such cases, as they not only need to deliver effective treatment but also restore the functional capabilities of the survivors (3).
The vital structures of the head and neck enable functions such as mastication, speech, communication, expressions and more. Pathophysiological changes caused by malignancies can substantially impede these functions, leading to nutritional deficiencies and social isolation, thus hampering the general well-being of an individual. This puts the quality of life (QoL) of such patients more in the forefront than ever before. In a seminal paper published as early as 1995, the author argues that QoL is frequently used in head and neck cancers but it is still not clearly defined (4). Since then, there has been a gradual evolution in the approach by oral health care providers and oral epidemiologists that has led to personalized and condition-specific constructs termed oral health-related quality of life (OHRQoL) (5). OHRQoL is a multi-dimensional concept that broadly identifies the impact of oral conditions on daily living, such as problems related to a person's eating, sleeping, social-interaction and emotional habits (6)(7)(8). Generic QoL constructs have long been used to evaluate the QoL in patients with head and neck cancers. However, these questionnaires often do not cater to specific oral health conditions affecting the OHRQoL, as patients with head and neck cancer may be at higher risk of depleted oral health-related daily performances (9). Even the ones treated have been reported with impairment of voice, speech difficulty, and problems swallowing food (10). One recent study states that the oral functions of patients suffering from head and neck cancers are far worse than those of non-head and neck cancer patients (11).
Surgical intervention is a common treatment modality for most head and neck cancers; and oral defects, deformities, dysfunction, and dysphagia are its related complications (12). These oral defects are later treated using various types of prostheses; the outcome of which is to restore the oral functions (13). This idea of oral rehabilitation of patients after their treatment is one of the foremost priorities to the clinicians. Although such improvement in function could be assessed using clinical parameters, but patients' self-reports using QoL instruments provide insights into their needs, expectations and treatment effectiveness.
The existing evidence highlights the importance of good health-related QoL among head and neck cancer patients (14,15). A systematic review by So et al. (2012) evaluated the QoL of head and neck cancer survivors after treatment (16). However, there is no systematic review to date that assesses the QoL of head and neck cancer patients who have undergone oral rehabilitation using a condition-specific QoL instrument. The findings are paramount to clinicians and oral health researchers, as OHRQoL reflects patients' own evaluation of their oral health status and functional and emotional wellbeing. It is essential to understand the patients' perspective to enhance the QoL of head and neck cancer survivors. Thus, the objective of the current study is to conduct a systematic review to evaluate the OHRQoL of head and neck cancer survivors after they have been treated with prosthetic rehabilitation. We hypothesize that prosthetic rehabilitation given to head and neck cancer patients after surgical interventions improves their OHRQoL.
Methods
Guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) were utilized as the framework in designing, implementing and reporting the current review (17). This invited systematic review was registered at the Research Unit, College of Dentistry, Jazan University, Saudi Arabia. The search was performed during the month of May 2019, and all studies published up to that date were subjected to the selection criteria.
Inclusion criteria
Publications in English language that use OHRQoL or oral functions as prognostic measure after performing a surgical reconstruction with a prosthetic appliance to treat the patients suffering from head and neck cancer were included. The search of articles was not limited to a particular research design.
Exclusion criteria
Abstract presentations, opinion-based commentaries, and dissertations were excluded. Articles using general health QoL measures with no mention of OHRQoL or oral functions were excluded after reading the abstracts.
Exposure and outcome
The exposure of interest for the current study was the prosthetic rehabilitation performed on the surgically treated head and neck cancer patients, irrespective of the size of the defect, material used for prosthetic reconstruction, duration of the prostheses, age and gender of the patient. Outcome was the OHRQoL after the restoration of oral functions of the treated patients. The patient, intervention, comparison, outcome (PICO) question ordered for the current study: "Does prosthetic rehabilitation improve the OHRQoL among the head and neck cancer survivors?"
Study selection and data extraction
Search of literature was done electronically using Medline, Embase, and Cochrane databases. Two authors (MF Quadri and SK Tadakamadla) independently performed searches using the mentioned keywords and the Boolean operators ( Table 1). T John and M Nayeem then independently retrieved the articles according to the set selection criteria, which was later cross checked again by MF Quadri, A Jessani and SK Tadakamadla. Data extraction chart was prepared and used by T John, AW Alamir and M Nayeem, and the information such as, name of the authors, year of publication, type of study, age of patients, gender, type and number of head and neck cancer cases, type of prosthetic appliance used, follow-up period, OHRQoL questionnaire used, assessed oral functions, result of the study and conclusion were extracted.
Quality assessment of included studies
The methodological index for non-randomized studies (MINORS) was the assessment tool utilized to report on the quality of the included studies. It has a total of 12 questions assessing various aspects of published research, specifically focusing on methodology. Each question is scored on a scale of 0 to 2, with "2" being ideal and "0" indicating that the item was not reported. An ideal score of 16 is suggested for non-comparative studies and 24 for comparative studies. The scale is designed specifically for research involving surgical procedures in which randomization of the patients is not always possible (18).
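To make the scoring concrete, the short sketch below totals the item scores for one study and reports the total against the relevant ideal score (the first 8 MINORS items apply to non-comparative studies, giving a maximum of 16, while all 12 items apply to comparative studies, giving a maximum of 24). The item scores shown are invented purely for illustration.

```python
def minors_total(item_scores, comparative):
    """Sum MINORS item scores (each 0, 1 or 2) and return (total, ideal maximum)."""
    expected_items = 12 if comparative else 8
    if len(item_scores) != expected_items:
        raise ValueError(f"expected {expected_items} item scores")
    if any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("each item must be scored 0, 1 or 2")
    return sum(item_scores), 2 * expected_items

# Invented example for a comparative study (12 items).
total, ideal = minors_total([2, 2, 1, 1, 2, 1, 1, 0, 1, 2, 0, 1], comparative=True)
print(f"MINORS score: {total}/{ideal}")
```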
Results
The initial search had 799 hits and after removal of duplicates 709 published articles remained. Titles and abstracts of these publications were reviewed for their eligibility and 51 articles were selected. Full texts of these were reviewed and assessed in detail according to the selection criteria. Out of the 51 mentioned earlier, 35 articles had no description of the prosthetic rehabilitation, 6 did not assess the OHRQoL and 2 were published in languages other than English ( Figure 1).
Squamous cell carcinoma was the most frequently reported of all the assessed malignant tumors, and maxillary reconstruction and implant-supported prostheses were the choice of treatment for most of the cases (Table 3). The patients in seven studies were followed for at least a year before their oral function and OHRQoL were assessed; however, one study did not report the duration of follow-up (Table 2).
OHRQoL assessments
Different versions of oral health impact profile (OHIP) constructs were preferred by most of the studies to assess the OHRQoL (Table 4).
OHRQoL findings
Overall, the results were inconclusive in demonstrating the improvement in OHRQoL of head and neck cancer patients after prosthetic rehabilitation. Three studies displayed poor OHRQoL among the survivors (11,20,22) by comparing them with healthy controls, irrespective of the type and make of the prostheses. Fromm et al. stated that oral habits such as chewing and swallowing, as well as the overall esthetic score, became worse in comparison to the control group post-treatment (11). Similarly, most of the participants in the study conducted by Akinmoladun et al. (22) had reported issues pertaining to swallowing, chewing, speech, taste, and esthetic appearance after prosthetic reconstructions; and Schweyen and colleagues (20) indicated that the OHRQoL among the cancer patients did not improve well enough as compared to their normal counterparts. However, the other included publications had indicated good OHRQoL outcomes after the patients treated for head and neck cancers had been given prosthetic units (12,13,19,21,23) (Table 4).
Findings of quality analyses using MINORS criteria
Quality analysis of the included studies revealed the highest score of 13 for a non-comparative study and 15 for a comparative study. These are below the suggested ideal scores of 16 and 24, respectively. The objectives were clearly defined, and the endpoint of the study, i.e., OHRQoL, was properly assessed by most of the studies. However, not all the eligible patients were recruited by the majority of the studies, and one of them had adopted a convenience sampling technique. Two studies were retrospective in nature, and complex analyses with adequate control groups were missed by many of the included studies (Table 5).
Discussion
Significant advances have been made in treating cancers of the head and neck, with an emphasis on restoring oral functions using prosthetic units. However, evaluating the success of such interventions using OHRQoL among these treated patients seems to be in its initial stages. The current systematic review is the first to evaluate the effect of prosthetic rehabilitation on the OHRQoL of patients with head and neck cancers. However, the finding based on the eight selected articles involving 382 patients with a minimum of 1-year follow-up is inconclusive in supporting the hypothesis. In this context, an earlier published report using the QoL construct had suggested that most of the treatment morbidities of head and neck cancer survivors do not return to baseline after treatment (16). However, another study concluded that it usually takes more than 12 months to completely restore the functions and thus improve the QoL among the survivors (24 (26). This could be attributed to the difference in the stability of the two types of prostheses, as rehabilitations performed using removable units may lead to functional limitation and physical discomfort, thus hampering the OHRQoL (13). It should further be noted that the findings derived from the current review also depended on the methodology of the included studies. For instance, the sample sizes were relatively small and not representative. Most of them did not evaluate the OHRQoL of patients before and after the prosthetic rehabilitation. Due to these inconsistencies among the retrieved reports, a meta-analysis was not possible. In addition, studies should have considered a longer follow-up period, as 12 months may not be appropriate to fully assess these outcomes.
The strength of this review is exhibited in the comprehensive search strategy that was applied. Studies were not exclusively limited to prosthetic-related search terms, as there could be several methods of prosthetic rehabilitation. To avoid loss of relevant articles, all the retrieved texts that spoke about QoL in head and neck cancer patients were individually assessed for their eligibility. Also, titles involving placement of implants for jaw reconstruction were evaluated in the search.
Implications and future directions
There are volumes of published literature revealing the advancement in the biomedical model focusing on surgical techniques and dedicated man hours to treat head and neck cancer patients (27,28). In addition, similar effort is required to improve the means of supportive care, the restoration of functions and the enhancement of the QoL of the survivors (29), as this will in turn contribute towards a personalized treatment strategy and rehabilitation process (29)(30)(31). Experts have also put forth that patients or their caregivers must be educated about evidence-based self-management strategies to overcome the persisting functional and emotional difficulties that patients may encounter during the first 12 months of their treatment (16). Also, a variety of OHRQoL questionnaires are currently available in multiple languages to ease data collection for health care providers when assessing the final outcome of their treated patients, and the findings obtained will indicate the type of care needed to restore functional and emotional capabilities (32).
To conclude, the included studies in the current systematic review do not provide substantial evidence to support the statement that prosthetic rehabilitation performed on surgically treated head and neck cancer patients improves their OHRQoL. The findings are paramount for clinical decision making and for epidemiological research to enhance patient and public health-related outcomes.
Footnote
Provenance and Peer Review: This article was commissioned by the Guest Editors (Shankargouda Patil, Sachin C. Sarode and Kamran Awan) for the series "Oral Pre-cancer and Cancer" published in Translational Cancer Research. The article has undergone external peer review.
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/tcr.2019.12.48). The series "Oral Pre-cancer and Cancer" was commissioned by the editorial office without any funding or sponsorship. The authors have no other conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
Permanent Inactivation of HBV Genomes by CRISPR/Cas9-Mediated Non-cleavage Base Editing
Current antiviral therapy fails to cure chronic hepatitis B virus (HBV) infection because of persistent covalently closed circular DNA (cccDNA). CRISPR/Cas9-mediated specific cleavage of cccDNA is a potentially curative strategy for chronic hepatitis B (CHB). However, the CRISPR/Cas system inevitably targets integrated HBV DNA and induces double-strand breaks (DSBs) of host genome, bearing the risk of genomic rearrangement and damage. Herein, we examined the utility of recently developed CRISPR/Cas-mediated “base editors” (BEs) in inactivating HBV gene expression without cleavage of DNA. Candidate target sites of the SpCas9-derived BE and its variants in HBV genomes were screened for generating nonsense mutations of viral genes with individual guide RNAs (gRNAs). SpCas9-BE with certain gRNAs effectively base-edited polymerase and surface genes and reduced HBV gene expression in cells harboring integrated HBV genomes, but induced very few insertions or deletions (indels). Interestingly, some point mutations introduced by base editing resulted in simultaneous suppression of both polymerase and surface genes. Finally, the episomal cccDNA was successfully edited by SpCas9-BE for suppression of viral gene expression in an in vitro HBV infection system. In conclusion, Cas9-mediated base editing is a potential strategy to cure CHB by permanent inactivation of integrated HBV DNA and cccDNA without DSBs of the host genome.
INTRODUCTION
Chronic hepatitis B virus (HBV) infection often leads to adverse clinical outcomes, including cirrhosis and hepatocellular carcinoma (HCC). 1 Although current antiviral therapies have dramatically improved the outcomes of individuals with chronic hepatitis B (CHB), most patients experience rebound viremia after discontinuation of nucleos(t)ide analogs (NAs). 2 The major obstacle for eradicating HBV infection by NAs is the persistent covalently closed circular DNA (cccDNA), which is the episomal form of a virally replicative template. 3 Curative strategies for CHB need to either eliminate all of the infected hepatocytes or purge all of the replication-competent cccDNA. 4,5 So far, curing HBV remains extremely challenging because no drugs can specifically target and destroy cccDNA.
Integration of HBV DNA into host genomes is a common event occurring upon HBV infection. 6,7 Unlike retrovirus, HBV integration is not a requisite for the viral life cycle because integrated HBV DNA does not serve as a template for productive viral replication. 8 Nevertheless, integrated HBV DNA has recently been proven to be a crucial source for the continuous secretion of hepatitis B surface antigen (HBsAg). 9 An excessive amount of secreted HBsAg likely has the immunosuppressive effect and acts as a "decoy" for antibody responses in order to allow HBV to escape from host immunological control. 10 Recently, there has been emerging enthusiasm for the functional cure of HBV, which is defined as loss of HBsAg. However, disruption of cccDNA alone may not necessarily result in HBsAg loss. It is reasonably assumed that a functional HBV cure cannot be achieved without targeting integrated HBV genomes. 11 The recent advance of genome-editing tools has provided a novel approach to treat viral infections by cutting and destroying viral genomes in a sequence-specific manner. 12-15 Particularly, the CRISPR/Cas RNA-guided DNA endonuclease has gained the widest interest because it can be conveniently redirected to the desired DNA sequences by simply redesigning the sequences of guide RNAs (gRNAs) that are perfectly matched with the protospacer sequences of the target genomes. Cleavage of target genomes by Cas9/gRNA causes double-strand breaks (DSBs) of DNA, which are often repaired by the non-homologous end joining (NHEJ) pathway. 16,17 The NHEJ pathway frequently leads to nucleotide insertions or deletions (indels) and thus disrupts the open reading frames (ORFs) of genes.
Previous studies, including ours, have examined the utility of CRISPR/Cas9 in disruption of HBV genomes. 18-22 Most studies have taken advantage of the wild-type (WT) CRISPR/Cas9 system and demonstrated its utility in specific cleavage of intrahepatic HBV templates, including cccDNA and integrated HBV genomes, for curing HBV infection. 23,24 However, the CRISPR-mediated cleavage of integrated HBV DNA also results in DSBs of the host genome, which may cause large deletions and chromosomal rearrangements, leading to pathological consequences. 25 Recently, a novel CRISPR-derived base-editing strategy has been shown to generate precise C-T/G-A conversion without DSBs at specific genome loci. 26 The initial "base editors" (BEs) utilized a catalytically impaired Cas9 endonuclease (dCas9) tethered with APOBEC deaminase. To enhance C-T/G-A conversion, the dCas9-deaminase construct was fused with a uracil glycosylase inhibitor (UGI) that suppresses uracil excision following deamination to prevent the reversion of the U:G pair to a C:G pair. A widely used third-generation BE (BE3) was thus designed with the combination of APOBEC1, Cas9-derived nickase, and UGI. 27 Since then, a growing number of modified BEs have been developed to improve various aspects of BE tools. 28 For example, the fourth-generation BE4 increases the efficiency of C-T/G-A conversion, while halving the frequency of undesired base changes compared to BE3. BE4Gam is generated by fusing BE4 to DSB-binding protein Gam from bacteriophage Mu to further reduce indel formation. 29 In addition, the efficacy of base editing can be significantly improved by optimizing codon usage (BE RA), and further enhanced by including nuclear targeting motifs at the N terminus of BE enzymes (FNLS-BEs). 30 Theoretically, base editing the target nucleotides without DSBs of DNA should reduce the risk of genome rearrangement and carcinogenesis. 26 Although interesting, the effect of the Cas9-mediated BE on the episomal cccDNA remains unclear.
In this study, we explored the utility and efficacy of the CRISPR/Cas9-derived BEs in introducing nonsense mutations to cccDNAs and integrated HBV genomes. We screened the entire HBV genomes and identified candidate sites that could effectively be base edited to create premature stop codons of viral genes. We further proved the successful base editing of integrated HBV DNAs and cccDNAs and the reduced expression of both surface and polymerase genes, demonstrating the potential for curing HBV by Cas9-BE.
RESULTS

Designing and Screening HBV-Specific gRNAs for Inducing Nonsense Mutations by SpCas9 BEs
To construct the HBV-specific gRNAs that are suitable for an SpCas9 base editor (SpCas9-BE), we first searched for the candidate protospacer sequences across the four ORFs of the HBV genome by using the website software BE-Designer. 8 We identified 23 candidate target sequences, including 3 in the core, 9 in the polymerase, 9 in the surface, and 2 in the X ORFs (Figure 1A; Table 1). HBV-specific gRNAs were then constructed and co-transfected with codon-optimized FNLS-P2A-Puro (puromycin), hereafter named BE3 for the sake of simplicity, into integrated HBV genome-containing HEK293T cells (HBV-HEK293T cells), including HEK293T-core, HEK293T-polymerase (pol), and HEK293T-surface cells. The efficiency of C-T/G-A conversion at these gRNA targeting sites was evaluated by Sanger sequencing (Figure 1B). The ratio of C-T/G-A conversion at each target site was roughly categorized as approximately 0%, less than 50%, approximately 50%, or greater than 50%. The information and base-editing efficiency for each gRNA are summarized in Table 1. The extent of sequence conservation for the protospacer sequence and protospacer adjacent motif (PAM) of each gRNA is summarized in Table S1. Overall, among all of the screened HBV-specific gRNAs, we discovered 14 gRNAs targeting the polymerase and surface ORFs with base-editing efficiency of approximately 50% or greater, whereas the gRNAs targeting the core and X ORFs had less optimal base-editing efficiency (approximately 0% or less than 50%).
We also performed the screening for candidate gRNAs using SpCas9-BE variants, including VQR, VRER, and EQR, which recognize altered PAMs NGAN, NGCG, and NGAG, respectively (Figure S1A). 31 The results are summarized in Figure S1B and Table S2, which show that the overall editing efficiency of SpCas9-BE variants was lower than that of the natural SpCas9-BE.
Inhibition of the Expression of HBsAg and Polymerase by Introducing Nonsense Mutations into Integrated HBV Genomes through Base Editing

As mentioned above, continuous expression of HBsAg from the integrated HBV genomes prevents loss of HBsAg. We thus examined whether the selected gRNAs combined with BE4Gam-P2A-Puro were able to suppress the expression of viral genes from integrated HBV genomes, particularly HBsAg, in HepG2.2.15, which harbors integrated replication-competent dimeric HBV genomes. 32 BE4Gam-P2A-Puro, hereafter named BE4, contains codon-optimized dCas9 nickase, nuclear targeting motifs, and Gam at the N terminus, and thus has higher base-editing efficacy and reduced undesired base changes and indel formation compared to BE3. 30 We chose a number of gRNAs targeting surface ORFs with high base-editing efficiency, including gS3 (preS2), gS7 (S), and gS8 (S). Following lentiviral transduction of HepG2.2.15 cells with gRNAs and BE4 twice, the supernatants and lysates of HepG2.2.15 cells were collected to examine the expression of HBsAg by a semiquantitative ELISA and immunoblotting, respectively. The genomic DNA (gDNA) was also extracted for Sanger sequencing. The results show that the ratios of successful C-T/G-A conversion at the target sites of gS3, gS7, and gS8 were ≥50%, ≥50%, and approximately 50%, respectively (Figure 2). Additionally, a significant decline of HBsAg secretion was observed in the supernatants of HepG2.2.15 cells treated with gS7 and gS8, which target the S gene, whereas cells treated with gS3, which targets the pre-S2 gene, did not show significant HBsAg reduction (Figure 2A). The expression of HBsAg was also confirmed by immunoblotting analysis, which shows a pattern of HBsAg reduction similar to the ELISA results. Only HepG2.2.15 cells treated with gS7 and gS8, but not those treated with gS3, exhibited significantly decreased expression of small surface protein (Figure 2B). Furthermore, we examined the effect of base-editing inactivation on HBV polymerase by utilizing gRNAs with high efficiency, that is, gP7, gP8, and gP9, which target protospacer sequences that are conserved in 88% of genotype D HBV strains. We generated lentiviruses of gRNAs and BE4 and co-transduced HepG2.2.15 with them twice with an interval of 14 days. gDNA was subsequently extracted from transduced cells and subjected to Sanger sequencing. The results show that the C-T/G-A conversion rates at all three sites are approximately 50% (Figure S2). The HBV DNA level in the supernatants of HepG2.2.15 cells treated with gP7, gP8, and gP9 decreased by more than 60%, indicating effective inactivation of HBV replication (Figure 2C). In addition, the effective base editing was further confirmed by next-generation sequencing (NGS), which showed that the C-T/G-A conversion rates at the target sites of gP7, gP8, gP9, gS3, gS7, and gS8 were around 35%-80% (Figure 2D). To evaluate the specificity of Cas9-BE, we analyzed the off-target effects of base editing for two of the most effective gRNAs (gP9 and gS8). We chose the top three predicted off-target sites for each gRNA. The mutagenesis rates at all of the off-target sites measured by NGS were very low (Figure S3). Finally, we analyzed the frequency of Cas9-BE-induced on-target indels by NGS. Measurement of indels can be used to evaluate the risk of DSBs, which are often repaired by the NHEJ mechanism and cause indels.
Our results showed that base editing with all of the above six gRNAs (gP7, gP8, gP9, gS3, gS7, and gS8) resulted in low levels of on-target indels (0.5%-5%). We further compared the frequencies of indels caused by WT Cas9 and Cas9-BE at the target sites of gP9, gS7, and gS8 and found that WT Cas9 indeed induced significantly higher levels of indels (>70%) than did Cas9-BE (Figure 2E).
Dual Suppression of HBsAg and Polymerase by Base Editing Specific Loci of the HBV Genome
The HBV genome is compactly organized and arranged into four ORFs with substantial overlapping regions. Therefore, we were interested to determine whether a nucleotide change in the ORF of polymerase would introduce a missense mutation and influence the expression of surface protein, and vice versa. Indeed, the three gRNAs gP7, gP8, and gP9 not only generated nonsense mutations in the polymerase gene, but they also resulted in G50L (pre-S1), G25/26N (pre-S2), and G71N (S) missense mutations in the surface ORFs. We found that the HBsAg levels of the supernatants from HepG2.2.15 treated with gP8 and gP9 decreased significantly using a semiquantitative HBsAg ELISA assay (Figure 3A). The decline of HBsAg was even more profound in gP9-treated cells. In contrast, there was no significant decrease of the HBsAg level in the supernatant of gP7-treated cells. Consistently, immunoblotting analysis of cell lysates also showed a similar pattern of reduced surface antigen expression (Figure 3B). Likewise, the three gRNAs gS3, gS7, and gS8 not only caused nonsense mutations in the surface gene, but also led to E292L (gS3) and G500S (gS7 and gS8) missense mutations in the polymerase gene. Interestingly, the HBV DNA levels in the supernatants of HepG2.2.15 cells treated with gS3, gS7, and gS8 also decreased significantly (Figure 3C). Taken together, we demonstrate that the expression of both polymerase and surface genes can be significantly reduced by simultaneous introduction of a missense mutation into the polymerase gene and a nonsense mutation into the surface gene, respectively, or vice versa, using gRNAs targeting the overlapping regions of these two genes.
Validation of the Dual Suppression Phenomenon by Specific Point Mutations of the HBV Genome
To further confirm the effective dual suppression of polymerase and surface gene expression by the particular gRNAs, including gP9, gS7, and gS8, we intentionally introduced these nonsense mutations into HBV genomes by site-directed mutagenesis, including W156X in the surface (W156X-S) and W414X in the polymerase (W414X-P) genes, which correspond with the base-editing sites of gS7/gS8 (same base-editing site) and gP9 (Figure 4A). Because the surface and polymerase genes of the HBV genome extensively overlap, the W156X-S nonsense mutation also introduces G500S in polymerase, and W414X-P causes G71N in the S gene as well. By transfection of Huh7 cells with the WT or mutant HBV-expression plasmid, we observed that the W156X-S nonsense mutation led to dramatic HBsAg reduction in the supernatant and cytoplasm to below the detection limit, and the G71N missense mutation (W414X-P) also caused a significant decrease of HBsAg (Figures 4B and 4C). In addition, for the mutations in the polymerase gene, both the W414X-P nonsense mutation and the G500S missense mutation (W156X-S) resulted in significant reduction of the viral DNA in the supernatant at 5 days post-transfection (Figure 4D). We further performed a Southern blot to measure the intracellular replicative intermediates of transfected cells and found that the two mutant HBV genomes with polymerase mutations W414X-P and G500S (W156X-S) caused a significant reduction of relaxed circular DNA (RC-DNA) to below the detection limit (Figure 4E). Taken together, our results confirm that the nucleotide changes at these two particular loci of the HBV genome can result in profound dual suppression of polymerase and surface gene expression.
Generation of Nonsense Mutations in HBV cccDNA by SpCas9-BE
Since cccDNA is the replicative template of HBV, we further determined whether the SpCas9-BE could indeed generate nonsense mutations in cccDNA, which is an extrachromosomal DNA. We first showed that, following HBV infection of HepG2-NTCP-C4 cells in vitro, HBV RC-DNA and cccDNA could be detected by Southern blotting (Figure 5A). The identity of cccDNA was further validated by the appearance of 3.2-kb DNA after linearization of RC-DNA and cccDNA with EcoRI digestion. Additionally, T5 exonuclease treatment significantly enhanced the purity of the cccDNA isolation (Figure 5A). To prove C-T/G-A conversion in cccDNA, we then conducted an experiment by first repeatedly transducing HepG2-NTCP-C4 cells with gRNAs/SpCas9-BE, followed by HBV infection. Our preliminary results showed that BE3 exhibited higher base-editing efficacy on cccDNA than did BE4 (data not shown), so we chose BE3 for further experiments. By an immunofluorescence assay (IFA), we showed that the efficiency of HBV infection and delivery of Cas9 was around 22.1% and 15.4%, respectively, and only 4.8% of cells were double positive. It is estimated that around 21.7% of HBV-infected cells were positive for Cas9-FLAG (Figure S4). We then isolated cccDNA from HBV-infected HepG2-NTCP-C4 cells by T5 exonuclease treatment and analyzed C-T/G-A conversion in cccDNA by Sanger sequencing and NGS (Figures 5B and 5E). The results showed that the gP9 and gS8 target sites were effectively edited by SpCas9-BEs at an efficiency close to 50% as estimated by Sanger sequencing (Figure 5B). Consistently, the NGS analysis showed 25%-35% base editing (Figure 5E). In addition, BE3 also induced far fewer undesired on-target indels (<0.5%) (Figure 5F). Finally, we demonstrate that the secreted HBsAg levels were significantly decreased in HBV-infected cells treated with gP9 and gS8, and viral DNAs were also significantly reduced in cells treated with all three gRNAs (Figures 5C and 5D). Collectively, our results prove that cccDNA can be base edited to reduce the expression of viral proteins. [Figure legend: the results of (A) and (C)-(E) are combined from three independent experiments and shown in bar graphs with mean plus standard deviation (SD); *p < 0.05, **p < 0.01, ***p < 0.005 (Student's t test); n.s., not significant.]
DISCUSSION
In this study, we demonstrate that CRISPR/Cas9-mediated BEs could successfully introduce nonsense mutations to specific loci of HBV genomes. The BEs derived from SpCas9 variants VQR, VRER, and EQR were also able to generate nonsense mutations in HBV genomes, further expanding the candidate protospacer sequences. With appropriate gRNAs and BEs, both integrated HBV genomes and cccDNAs could be base edited with high efficacy. More importantly, generation of premature stop codons in the viral surface and polymerase genes of integrated HBV genomes and cccDNAs led to significant reduction of HBsAg secretion and viral replication, a critical step toward HBV cure.
Although Cas9-mediated BEs have been shown to effectively edit a variety of host genomes, their efficacy in episomal forms of viral DNA remains questionable. The core component of the base-editing enzyme is APOBEC, which has been shown to mutate cccDNA. 33 However, little is known about the effects of the subsequent DNA repair mechanisms and the uracil DNA glycosylase inhibitor on the episomal cccDNA. By using the in vitro HBV infection system, we proved the nucleotide C to T conversion of cccDNA, which was accompanied by the significant reduction of HBsAg secretion and HBV replication. This is a proof of concept that Cas9-mediated BEs can be utilized to target and silence cccDNA.
Integration of HBV genomes into host chromosomes occurs in the early stage of HBV infection. 7 Although the integrated HBV genome is not a source for productive HBV infection, it often causes continuous secretion of HBsAg, which has long been suggested to suppress antiviral immunity and allow the establishment of persistent HBV infection. As a result, targeting integrated HBV genomes or silencing surface gene expression to prevent persistent HBsAg secretion is considered a critical step toward the functional cure of HBV. 11,34 Nevertheless, prior attempts to cleave integrated HBV genomes by WT SpCas9 endonuclease may result in large deletions and complex rearrangements of the host genome, which can cause pathologic consequences. 25 In contrast, Cas9-mediated BEs change the target base in genomic DNA without creating DSBs, and they may thus reduce the risk of genomic damage. Recently, inactivation of HBV genes by siRNA-based strategies has gained wide interest for silencing the expression of HBsAg, but the effect is transient unless restoration of antiviral immunity can be achieved. 9 Unlike siRNA-based strategies, Cas9-mediated BEs can silence HBV gene expression permanently by introducing nonsense mutations into viral genes. As we show herein, Cas9-mediated BEs effectively generated premature stop codons of the surface gene in both integrated HBV genomes and cccDNAs and reduced HBsAg secretion. Therefore, Cas9-mediated BEs are advantageous in that their transient expression can achieve long-term suppression of HBsAg expression, demonstrating their potential for a functional HBV cure.
Interestingly, we discovered several HBV genome loci where base editing causes a simultaneous nonsense mutation and missense mutation of the polymerase and surface genes, respectively, or vice versa, leading to their dual suppression. For example, W414X in the polymerase gene causes G71N in the surface gene, and W156X in the surface gene results in G500S in the polymerase gene. Significant reduction of both polymerase and surface gene expression could be observed when these specific loci of the HBV genome were base edited. G71N (W414X-P) and W156X-S are located in the inner face and the proposed amphipathic helix of surface protein, which are important for S dimer formation. 35,36 The amino acid change of these residues causes the reduction of intracellular and secreted HBsAg, indicating that it may render HBsAg unstable and susceptible to protein degradation. W414X-P and G500S (W156X-S) are located at the palm and finger domains of the polymerase gene, which are critical for viral replication. We further validated the critical role of these residues by generation of W414X in the polymerase and W156X in the surface genes, respectively, by site-directed mutagenesis. Notably, these two sites and their cognate protospacer sequences are highly conserved in 88% and 77% of HBV strains of genotype D (Table S1), so they may serve as ideal targets for a base-editing strategy to treat HBV of genotype D.

Despite the promising potential of the Cas9-mediated BE as an HBV cure, there remain several daunting challenges, including off-target effects and the difficulty of in vivo delivery of Cas9, the same as those faced by the WT Cas9 endonuclease. 24 Moreover, mutagenesis with premature stop codons will generate truncated viral proteins and may carry potentially pathogenic or carcinogenic effects, which should be cautiously evaluated. Although we showed that the off-target mutations caused by base editing were quite low, this risk still cannot be ignored. 37 In addition, in vivo delivery is particularly challenging for Cas9 BEs because they are larger than WT Cas9 owing to the appended base-editing domains. Nevertheless, several strategies have been adopted to minimize the off-target effects of Cas9-mediated genome editing. 38,39 Recently, intein-mediated split-Cas9 systems have also been developed to reduce the insert size to fit the cargo capacity of the AAV system. 40,41 Alternatively, the advance of non-viral delivery systems may improve in vivo delivery efficiency. 42,43 Future study in a disease-relevant animal model is required to prove the in vivo efficacy of Cas9 BEs for inactivation of HBV.
In conclusion, Cas9-mediated BEs provide an opportunity for permanent inactivation of both cccDNA and integrated HBV DNA without DSBs of DNA. Combined with NAs, which effectively inhibit ongoing viral replication, Cas9-mediated BEs may eventually achieve the ultimate cure of HBV by suppressing both HBV replication and HBsAg production.
MATERIALS AND METHODS

Transfection of Cell Lines
DNA transfection was performed using Lipofectamine 3000 according to the manufacturer's protocol with some modifications. For transfection-based editing experiments, HEK293T, HEK293T-C, HEK293T-P, and HEK293T-S cells were seeded to 70%-80% confluence and co-transfected with the expression vectors containing the BE (pLenti-FNLS-P2A-Puro; BE3) and the sgRNA at a ratio of 4:1.
For the experiments comparing the viral expression of 1.3× HBV-WT and the derived HBV with site-directed mutagenesis, Huh7 cells were transfected with the indicated plasmids, 1.3× HBV-WT, W414X-P, or W156X-S, and were then harvested at 3 or 5 days post-transfection. Subsequently, genomic DNAs were extracted with a DNeasy Blood & Tissue Kit (QIAGEN) and subjected to Sanger sequencing, or Hirt's DNA was extracted for a Southern blotting assay.
Lentiviral Production and Transduction
For the production of lentiviruses of the BEs pLenti-FNLS-P2A-Puro (BE3) and pLenti-BE4Gam-P2A-Puro (BE4), HEK293T cells were seeded in 10-cm dishes coated with 5 µg/mL poly-D-lysine (Sigma, St. Louis, MO, USA). Cells were seeded 1 day before transfection, and, the next day, cells at 95% confluence were transfected with a prepared mix in Opti-MEM (Gibco) containing 6 µg of lentiviral backbone, 4 µg of p8.91, and 2 µg of pMD.G. The media were replaced with Opti-MEM containing 5% FBS, and the culture media were collected after 48 and 72 h. The supernatant was filtered through a 0.4-µm filter (Millipore, Billerica, MA, USA) and subsequently ultracentrifuged over a 20% sucrose cushion in the bottom of the tube at 26,000 rpm (4 °C) for 2 h. The precipitated viral pellet was resuspended in Opti-MEM overnight and then stored at −80 °C. For the production of lentiviruses of sgRNAs, HEK293T cells were seeded in a six-well plate coated with 5 µg/mL poly-D-lysine (Sigma, St. Louis, MO, USA) 1 day before transfection. On the next day, cells at 95% confluence were transfected with a prepared mix in Opti-MEM containing 1.5 µg of lentiviral backbone, 1 µg of p8.91, and 0.5 µg of pMD.G. The procedures for collection, purification, and storage of lentiviruses are the same as those described above.
Transduction with BE Lentiviruses
For transduction of HepG2.2.15 and HepG2-NTCP-C4 cells, 5 × 10⁵ cells were seeded in a 12-well plate. After 24 h, cells were transduced with viral supernatants in the presence of Polybrene (8 µg/mL), and the plates were centrifuged for 1 h at 1,250 × g, 32 °C. Three days after transduction, cells were treated with puromycin (2.5 µg/mL) and blasticidin S (5 µg/mL) for 7 days of selection. The transduced cells were trypsinized and reseeded at the same number, and subsequently transduced with the same lentivirus again following the above procedures. For the transduced HepG2.2.15 cells, the supernatants were collected at 3 and 5 days after the second transduction with pLenti-BE4Gam-P2A-Puro (BE4), and the cell lysates were collected at 5 days post-transduction. For the HepG2-NTCP-C4 transduction, cells were transduced twice with pLenti-FNLS-P2A-Puro (BE3) and gRNAs.
Preparation and Infection of HBV
Infectious HBV was produced from HepAD38 cells as previously described. 44 The supernatant was harvested and concentrated through a 20% sucrose cushion. For the HBV infection experiment, HepG2-NTCP-C4 cells were seeded in a 12-well plate and transduced with pLenti-FNLS-P2A-Puro (BE3) and gRNA lentiviruses. After two rounds of lentiviral transduction, cells were infected with HBV at 5,000 genome equivalents (GE)/cell. All infections were performed as previously described. 45,46 In addition, HepG2-NTCP-C4 cells in a 12-well plate were infected with HBV at 50,000 GE/cell for cccDNA detection by Southern blot. Briefly, cells were mixed with HBV in the presence of 8% PEG8000 and 5% DMSO at 37 °C for 16 h in suspension. To suppress the formation of newly synthesized RC-DNA, infected HepG2-NTCP-C4 cells were treated with 20 µM 3TC (lamivudine) from 3 days after infection.
Sanger and MiSeq Sequencing of Base-Edited Genomic DNA and cccDNA
Genomic DNAs of harvested cells were extracted using a DNeasy Blood & Tissue Kit (QIAGEN) according to the manufacturer's instructions. The genomic regions of interest were amplified by PCR with site-specific primers (Table S3) and PfuUltra II Fusion HS DNA polymerase (Agilent Technologies) according to the manufacturer's protocol. The PCR products were purified with the Illustra GFX PCR DNA and Gel Band Purification Kit (GE Healthcare) and subjected to Sanger sequencing.
To remove linear and RC-form HBV DNAs for Sanger and MiSeq sequencing of cccDNA, genomic DNAs were extracted and digested with T5 exonuclease (New England Biolabs) in a 50-µL reaction mixture containing 500 ng of DNA, 5 µL of 10× reaction buffer, and 1 µL of T5 exonuclease at 37 °C for 1 h; afterward, 11 mM EDTA was added to stop the reaction.
Immunoblotting Assay
Cells were washed with phosphate-buffered saline (PBS) and lysed with radioimmunoprecipitation assay (RIPA) buffer.
ELISA of HBsAg
Elecsys HBsAg II (Roche Diagnostics) was used for qualitative HBsAg determination in the culture supernatant of HepG2.2.15. Samples with a signal/cutoff ratio (S/CO) of ≥1 are considered positive, and the values are treated as a semiquantitative level of HBsAg. The quantitative levels of HBsAg in the culture supernatant of HepG2-NTCP-C4 were measured using an Architect HBsAg kit (Abbott Laboratories). The calibration range recommended by the manufacturer was from 0 to 250 IU/mL. The positivity criterion for HBsAg was ≥0.05 IU/mL.
Quantitative Real-Time PCR
The viral DNAs were purified from the supernatant of HepG2.2.15 using a DNeasy Blood & Tissue Kit (QIAGEN) according to the manufacturer's instructions. The PCR reaction was performed in a total volume of 10 µL, containing 4 µL of DNA template, 0.25 µM of each primer, 0.1 µM probe, and 5 µL of TaqMan master mix. The program was 2 min at 50 °C, 10 min at 95 °C, and 40 cycles of 95 °C for 15 s and 60 °C for 1 min. The probe and primer sequences are listed in Table S3.
HBV DNA Extraction and Southern Blotting
HBV DNA was extracted by the modified Hirt method as previously described. 18 Infected HepG2-NTCP-C4 cells were lysed in Hirt's buffer (0.7% SDS, 10 mM Tris-HCl [pH 8.0], and 10 mM EDTA [pH 8.0]). The lysates were treated with 5 M NaCl, incubated at 4 °C overnight, and then centrifuged at 10,000 rpm for 30 min at 4 °C. For extraction of DNA, the supernatant was treated with saturated phenol twice and phenol/chloroform (1:1) once. DNA was precipitated with 2× volumes of 100% ethanol at room temperature overnight and subsequently pelleted by centrifugation at 10,000 rpm at 4 °C for 30 min. 30 µg of Hirt DNA was analyzed by the modified Southern blot method as previously described. 18
DNA Library Preparation and MiSeq Sequencing
A Thermo Scientific Phusion high-fidelity DNA polymerase kit was used according to the manufacturer's recommendations (Illumina) for DNA library preparation. Adaptor-ligated DNA was indexed and enriched through limited-cycle PCR. The DNA library was quantified with NanoDrop and by real-time PCR. The DNA library was loaded on an Illumina MiSeq instrument according to the manufacturer's instructions and sequenced with 600 cycles by the Medical Microbiota Center of the First Core Laboratory, National Taiwan University College of Medicine.
The quality of raw reads was evaluated by FastQC. Base-editing efficiency and indel rates for each sample were calculated using a Python script. Briefly, the sequence of the gRNA target region in each read was identified by splitting the reads by 10-bp flanking sequences with exact matches on both sides of the target region. Indel rates were calculated as the number of reads whose target regions contained insertions or deletions divided by the total read number. Base-editing efficiency was measured by counting the number of A, T, C, and G bases at each position of the target sequence, and the counts were then divided by the total number of reads. The ratio of induced premature stop codons in each sample was determined by dividing the number of reads containing induced premature stop codons by the total number of reads.
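To make the read-level computation concrete, below is a minimal Python sketch of the approach described above, not the authors' actual script: the target region is located by exact-matching the 10-bp flanks, reads whose extracted region changes length are counted as indels, and per-position base counts give the editing efficiency. The flank sequences, target length, and toy reads are hypothetical.

```python
# A minimal sketch of the per-sample analysis described above, assuming reads
# are available as plain strings (e.g., parsed from a FASTQ file beforehand).
from collections import Counter

def analyze_reads(reads, left_flank, right_flank, target_len):
    """Quantify base composition and indel rate at a gRNA target region."""
    assert len(left_flank) == 10 and len(right_flank) == 10
    matched, indel_reads = 0, 0
    base_counts = [Counter() for _ in range(target_len)]
    for read in reads:
        # Identify the target region by exact-matching 10-bp flanks on both sides.
        i = read.find(left_flank)
        j = read.find(right_flank, i + len(left_flank)) if i != -1 else -1
        if i == -1 or j == -1:
            continue  # flanks not found; read is not informative
        matched += 1
        region = read[i + len(left_flank):j]
        if len(region) != target_len:
            indel_reads += 1  # insertion or deletion within the target region
            continue
        for pos, base in enumerate(region):
            base_counts[pos][base] += 1
    indel_rate = indel_reads / matched if matched else 0.0
    # Per-position base frequencies, normalized by the number of matched reads.
    freqs = [{b: c / matched for b, c in counts.items()} for counts in base_counts]
    return indel_rate, freqs

# Hypothetical toy data: a C>T edit at position 3 of a 5-bp target in half the reads.
reads = ["AAAAAAAAAAGGCGGTTTTTTTTTT", "AAAAAAAAAAGGTGGTTTTTTTTTT"]
rate, freqs = analyze_reads(reads, "AAAAAAAAAA", "TTTTTTTTTT", 5)
print(f"indel rate: {rate:.2%}; position 3 frequencies: {freqs[2]}")
```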
Statistical Analysis
An unpaired, two-sided Student's t test was used to compare the difference between two independent groups. Data associated with this study are present in the text or in Supplemental Materials and Methods.
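As a concrete illustration, the group comparisons above correspond to a call like the following in Python; the replicate values are made up and stand in for, e.g., HBsAg measurements from control and edited cells.

```python
# Illustrative only: an unpaired, two-sided Student's t test as used for the
# group comparisons above, with hypothetical values for two groups.
from scipy import stats

control = [12.1, 11.8, 12.5]   # e.g., untreated replicates (made up)
treated = [6.3, 7.1, 5.9]      # e.g., gRNA/BE-treated replicates (made up)
t, p = stats.ttest_ind(control, treated)  # equal-variance Student's t test
print(f"t = {t:.2f}, p = {p:.4f}")        # p < 0.05 would be flagged with *
```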
Dichloridobis(pyridine-2-thiolato-κ2 N,S)tin(IV): a new polymorph
The title compound, [SnCl2(C5H4NS)2], is the product of reaction of 2,2′-dipyridyl disulfide with tin tetrachloride. The SnIV atom adopts a distorted octahedral geometry, with the two bidentate pyridine-2-thiolate ligands forming two planar four-membered chelate rings. The two Sn—Cl, two Sn—N and two Sn—S bonds are in cis, cis and trans configurations, respectively. The crystal grown from acetonitrile represents a new monoclinic polymorph in space group C2/c with the molecule having twofold rotational symmetry, the SnIV atom lying on the twofold axis. The molecular structure of the monoclinic polymorph is very close to that of the triclinic polymorph studied previously in space group P-1, the molecule occupying a general position [Masaki & Matsunami (1976). Bull. Chem. Soc. Jpn, 49, 3274–3279; Masaki et al. (1978). Bull. Chem. Soc. Jpn, 51, 3298–3301]. Apparently, the formation of the two polymorphs is determined by the different systems of intermolecular interactions. In the crystal of the monoclinic polymorph, molecules are bound into ribbons along the c axis by C—H⋯Cl hydrogen bonds, whereas in the crystal of the triclinic polymorph, molecules form chains along the a axis by attractive S⋯S interactions. The crystal studied was a pseudo-merohedral twin; the refined BASF value is 0.221 (1).
The molecule of I possesses intrinsic C2 symmetry. In contrast to the triclinic polymorph (space group P-1, with the molecule occupying a general position), this symmetry is realised in the crystal of the monoclinic polymorph (space group C2/c, with the molecule occupying a special position on the twofold axis). The tin atom adopts a distorted octahedral geometry, with the two bidentate pyridine-2-thiolate ligands forming two planar four-membered chelate rings (Fig. 2). The two Sn—Cl, two Sn—N and two Sn—S bonds are in cis, cis and trans configurations, respectively. Generally, the molecular structure of the monoclinic polymorph of I is very close to that of the triclinic polymorph.
Apparently, the formation of the two polymorphs of I is determined by the different systems of intermolecular nonvalent interactions. In the crystal of the monoclinic polymorph, the molecules are bound into ribbons along the c axis by weak intermolecular C3—H3⋯Cl1(i) hydrogen bonds (Fig. 3, Table 1), whereas, in the crystal of the triclinic polymorph, the molecules form chains along the a axis by weak attractive intermolecular S⋯S [3.544 (3) Å] interactions (Fig. 4). Symmetry code: (i) −x+1, −y+1, −z.
Experimental
A solution of SnCl4 (0.13 g, 0.5 mmol) in CH2Cl2 (25 ml) was added to a solution of 2,2′-dipyridyl disulfide (0.11 g, 0.5 mmol) in CH2Cl2 (25 ml) with stirring at room temperature. After 1 h, the powder of the complex (C5H4NS)2SnCl6 was separated by filtration. The filtrate was concentrated in vacuo. The solid was re-crystallized from CH3CN to give I as
The hydrogen atoms were placed in calculated positions with C—H = 0.95 Å and refined in the riding model with fixed isotropic displacement parameters Uiso(H) = 1.2Ueq(C).
Figure 1. Reaction of 2,2′-dipyridyl disulfide with tin tetrachloride.

[Figure caption: The H-bonded ribbons along the c axis in the monoclinic polymorph of I.]

[Refinement details: weighting scheme with P = (Fo² + 2Fc²)/3; (Δ/σ)max < 0.001; Δρmax = 0.81 e Å⁻³; Δρmin = −0.53 e Å⁻³. Geometry: all s.u.'s (except the s.u. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix; cell s.u.'s are taken into account individually in the estimation of s.u.'s in distances, angles and torsion angles; correlations between s.u.'s in cell parameters are only used when they are defined by crystal symmetry. Refinement: refinement of F² was against all reflections; the weighted R-factor wR and goodness of fit S are based on F², while conventional R-factors R are based on F, with F set to zero for negative F²; the threshold expression F² > σ(F²) is used only for calculating R-factors(gt) and is not relevant to the choice of reflections for refinement; R-factors based on F² are statistically about twice as large as those based on F.]
Copy Number Variation of Circulating Tumor DNA (ctDNA) Detected Using NIPT in Neoadjuvant Chemotherapy-Treated Ovarian Cancer Patients
Analysis of circulating tumor DNA (ctDNA) can be used to characterize and monitor cancers. Recently, non-invasive prenatal testing (NIPT) as a new next-generation sequencing (NGS)-based approach has been applied for detecting ctDNA. This study aimed to investigate the copy number variations (CNVs) utilizing the non-invasive prenatal testing in plasma ctDNA from ovarian cancer (OC) patients who were treated with neoadjuvant chemotherapy (NAC). The plasma samples of six patients, including stages II–IV, were collected during the pre- and post-NAC treatment that were divided into NAC-sensitive and NAC-resistant groups during the follow-up time. CNV analysis was performed using the NIPT via two methods “an open-source algorithm WISECONDORX and NextGENe software.” Results of these methods were compared in pre- and post-NAC of OC patients. Finally, bioinformatics tools were used for data mining from The Cancer Genome Atlas (TCGA) to investigate CNVs in OC patients. WISECONDORX analysis indicated fewer CNV changes on chromosomes before treatment in the NAC-sensitive rather than NAC-resistant patients. NextGENe data indicated that CNVs are not only observed in the coding genes but also in non-coding genes. CNVs in six genes were identified, including HSF1, TMEM249, MROH1, GSTT2B, ABR, and NOMO2, only in NAC-resistant patients. The comparison of these six genes in NAC-resistant patients with The Cancer Genome Atlas data illustrated that the total alteration frequency is amplification, and the highest incidence of the CNVs (≥35% based on TCGA data) is found in MROH1, TMEM249, and HSF1 genes on the chromosome (Chr) 8. Based on TCGA data, survival analysis showed a significant reduction in the overall survival among chemotherapy-resistant patients as well as a high expression level of these three genes compared to that of sensitive samples (all, p < 0.0001). The continued Chr8 study using WISECONDORX revealed CNV modifications in NAC-resistant patients prior to NAC therapy, but no CNV changes were observed in NAC-sensitive individuals. Our findings showed that low coverage whole-genome sequencing analysis used for NIPT could identify CNVs in ctDNA of OC patients before and after chemotherapy. These CNVs are different in NAC-sensitive and -resistant patients highlighting the potential application of this approach in cancer patient management.
INTRODUCTION
Ovarian cancer (OC) is the tumor with the worst prognosis among female malignancies, and most cases are diagnosed at an advanced stage with peritoneal metastases (Bray et al., 2018). Neoadjuvant chemotherapy (NAC) is the gold standard treatment for OC patients with a high perioperative risk profile and/or a poor chance of effective debulking surgery (Elies et al., 2018), although a considerable percentage of patients demonstrate resistance to NAC treatment (Sato and Itamochi, 2014). Considering that the administration of proper chemotherapy regimens depends on imaging tests, cytology, and laparoscopic biopsy, inadequate tumor specimens are a recurring difficulty in cytology or laparoscopic biopsy prior to the onset of NAC (Sharbatoghli et al., 2020). Biomarkers related to drug resistance and treatment, including prognostic and predictive molecules, play a key role in selecting appropriate treatment protocols and improving survival rates (Le Page et al., 2010). In OC, several studies have demonstrated changes in the serum levels of cancer antigen 125 (CA125), which may serve as a predictor for monitoring the response to NAC (Pelissier et al., 2014; Zeng et al., 2016), but its utility is often limited (Sharbatoghli et al., 2020); serum tumor markers do not exhibit a significant increase in some histological types of OC (Sharbatoghli et al., 2020). Since gene amplification and deletion are common in cancer cells and contribute to cancer cell growth, angiogenesis, and drug resistance (Matsui et al., 2013; Rahimi et al., 2020), copy number variants (CNVs) have recently been reported as potential biomarkers in cancer management (Pan et al., 2019; Yu et al., 2019; Jin et al., 2021). Moreover, compared with gene expression, CNV is considerably more stable and robust (Pan et al., 2019; Shao et al., 2019), and the gain or loss of gene copies often correlates with a corresponding increase or decrease in the amount of RNA and protein encoded by the gene (Hastings et al., 2009). It is critical to accurately measure copy number alterations in tumor samples in order to enable translational research and precision medicine. The majority of CNV investigations have been conducted on tumor tissue biopsy samples (Vives-Usano et al., 2021). A fraction of cell-free DNA (cfDNA) in cancer patients originates from tumor cells; this fraction is known as circulating tumor DNA (ctDNA) and is obtained through liquid biopsy. Liquid biopsy, as a semi-invasive diagnostic and prognostic tool, has the advantage of being less invasive than tumor biopsy, and specimens can be checked frequently in real time (Mathai et al., 2019). The molecular alterations identified in ctDNA may mirror the molecular heterogeneity of the tumor better than those reflected by a single tumor biopsy (Sharbatoghli et al., 2020). Therefore, ctDNA analysis has been applied to detect various types of genomic alterations in cancers, such as CNVs, mutations, and nucleosome positioning variation (Molparia et al., 2017; Huang et al., 2019; Noguchi et al., 2020).
In recent years, cancer detection by non-invasive prenatal testing (NIPT), a next-generation sequencing (NGS)-based approach, has been used to detect ctDNA (Cohen et al., 2016; Nakabayashi et al., 2018). Low-coverage whole-genome sequencing of cfDNA from maternal plasma is the basis for prenatal screening of common fetal autosomal aneuploidies, namely trisomy of chromosomes 21, 18, and 13, utilizing NIPT. One of the most often publicized benefits of NIPT for chromosomal abnormality screening in pregnant women is its low false-positive rate (1-3%) (Filoche et al., 2017). Given the fast advances in NIPT, analysis of tumor CNV changes in ctDNA using NIPT has been introduced as a potential cancer screening tool. In this context, Amant et al. (2015) discovered cancer in three pregnant women who had undergone NIPT and indicated that CNVs may be used as a cancer screening tool. Furthermore, the implications of the whole-genome NIPT platform for cancer screening were shown in OC patients (Cohen et al., 2016), and CNV analysis of cell-free DNA by low-coverage whole-genome sequencing was used as a biomarker for the diagnosis of OC (Vanderstichele et al., 2017). For the first time, we applied NIPT as a non-invasive test platform to compare the tumor-derived CNVs in ctDNA measured pre- and post-NAC in plasma samples obtained from OC patients. This is a proof of concept that NIPT might be useful for predicting which patients will be responsive or resistant to chemotherapy.
MATERIALS AND METHODS

Collection of Samples and Blood Processing
This study was conducted as a prospective study with 10 plasma samples derived from 6 OC patients of the Cancer Institute of Imam Khomeini Hospital (Tehran, Iran) between December 2018 and October 2019. The study was performed with the approval of the Ethics Committee of Iran University of Medical Sciences (authorization no. IR.IUMS.REC 1397.32825). Written informed consent was obtained as required by each participating hospital's ethical norms. The patients' diagnoses were histologically confirmed and staged according to the FIGO (International Federation of Gynecology and Obstetrics) system (Bhatla and Denny, 2018). Blood samples were obtained pre- and post-NAC treatment from OC patients who received platinum-based chemotherapy as the NAC regimen at first-line treatment. A week before the first dose of chemotherapy, baseline blood samples were collected, and post-NAC samples were taken after the first course (six cycles) of chemotherapy. Patients with a complete response were defined as NAC-sensitive, while those with stable disease or progressive disease were defined as NAC-resistant (Noguchi et al., 2020). A total volume of 10 ml of whole blood was collected in K2-EDTA-coated tubes (REF: CDLP 029, C.D. RICH®, Romania) from each patient. The blood was centrifuged at 1,600 × g for 15 min at 25 °C, and the plasma fraction was collected and centrifuged a second time at 2,500 × g for 10 min at 25 °C. After the second spin, the plasma was transferred into barcoded tubes and immediately stored at ≤ −70 °C. Cell-free DNA was extracted from 3 ml of patient plasma using the QIAamp Circulating Nucleic Acid Kit (Qiagen, Gaithersburg, MD, United States) (Diefenbach et al., 2018).
Library Preparation, Sequencing, and Data Analysis
DNA libraries were prepared from 2 ng of cell-free DNA extracted from 3 ml of plasma using the VeriSeq NIPT Solution v2 according to the manufacturer's instructions for 75-bp single-end sequencing. All libraries were normalized to 1.6 nM, multiplexed, and sequenced on a HiSeq 4000 with 27 sequencing cycles for the cell-free DNA insert and an additional eight sequencing cycles for the index barcodes. Each research sample was sequenced alongside 12 clinical samples, with 36-cycle single-end sequencing on an Illumina NextSeq550. The read depth was low coverage, at 0.2× to 0.3×, based on the amount of sequencing data. The open-source algorithm WISECONDORX (WIthin-SamplE COpy Number Aberration DetectOR X) and NextGENe (Next GENeration sequencing software for biologists) were used for data analysis (Faircloth and Glenn, 2012).
Copy Number Variation Call Using WISECONDORX and NextGENe
WISECONDORX was used to identify whole-chromosome (Chr) and subchromosomal abnormalities that the standard NIPT pipeline failed to detect (Raman et al., 2019). Segmental alterations of less than 0.05 Mb were prespecified as abnormal ("positive cancer screen"). FastQC (Andrews, 2010) was used to perform quality control on the raw single-end sequencing data. The BBDuk tool from the BBMap toolbox (Bushnell, 2014) was used to trim and adjust the FASTQ files as needed.
Reads were mapped to the human reference genome (hg38) using the bwa samse algorithm (Li and Durbin, 2009). The resulting SAM files were corrected and converted to BAM files via samtools. CNVs were called by WISECONDORX run with the default settings suggested by the developer (Raman et al., 2018). NextGENe software version 2.4.1 (SoftGenetics, LLC) was used for CNV analysis according to the Kerkhof et al. (2017) method.
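For orientation, the following is a minimal Python sketch of the trim-align-call workflow described above. The file names, the pre-built WisecondorX reference (reference.npz, assumed to be built beforehand from healthy control samples), and the specific trimming options are assumptions rather than the authors' exact commands.

```python
# Minimal sketch of the CNV-calling workflow described above (trim -> align ->
# sort/index -> WisecondorX). Consult each tool's documentation before use.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

sample = "patient1_preNAC"  # hypothetical sample name

# 1) Adapter/quality trimming with BBDuk from the BBMap suite.
run(["bbduk.sh", f"in={sample}.fastq.gz", f"out={sample}.trim.fastq.gz",
     "ref=adapters.fa", "ktrim=r", "qtrim=r", "trimq=10"])

# 2) Single-end alignment to hg38 with bwa aln/samse, as in the text.
run(["bwa", "aln", "-f", f"{sample}.sai", "hg38.fa", f"{sample}.trim.fastq.gz"])
with open(f"{sample}.sam", "w") as sam:
    subprocess.run(["bwa", "samse", "hg38.fa", f"{sample}.sai",
                    f"{sample}.trim.fastq.gz"], stdout=sam, check=True)

# 3) Sort and index with samtools (sort also converts SAM to BAM here).
run(["samtools", "sort", "-o", f"{sample}.bam", f"{sample}.sam"])
run(["samtools", "index", f"{sample}.bam"])

# 4) WisecondorX: convert the BAM to .npz, then call CNVs against a reference
#    built from healthy control samples (default settings, as in the text).
run(["WisecondorX", "convert", f"{sample}.bam", f"{sample}.npz"])
run(["WisecondorX", "predict", f"{sample}.npz", "reference.npz",
     f"{sample}_cnv", "--bed", "--plot"])
```

In practice, one such reference is built once from a set of control .npz files with `WisecondorX newref`, and each patient sample is then called against it.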
After obtaining the WISECONDORX and NextGENe data, these CNV results were first examined pre-treatment for all patients. The data for patients within the NAC-sensitive and NAC-resistant groups were then compared separately to obtain the chromosome- and gene-level alterations common to each group, as sketched below. Next, the WISECONDORX and NextGENe results from NAC-resistant patients were compared to those of NAC-sensitive patients in order to identify the altered genes specific to NAC resistance. Similarly, the post-treatment results were compared across patients with OC in order to detect CNVs.
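Conceptually, this group-level comparison reduces to set operations over per-patient lists of genes with CNV calls; the toy Python sketch below uses entirely hypothetical gene sets.

```python
# Toy illustration of the group-level CNV comparison described above.
# Per-patient gene sets are entirely hypothetical.
resistant = {
    "R1_pre": {"GENE_A", "GENE_B", "GENE_C", "GENE_D"},
    "R2_pre": {"GENE_A", "GENE_B", "GENE_C", "GENE_E"},
}
sensitive = {
    "S1_pre": {"GENE_C", "GENE_F"},
    "S2_pre": {"GENE_G"},
}
# Genes with CNVs common to every NAC-resistant patient...
common_resistant = set.intersection(*resistant.values())
# ...minus anything also altered in any NAC-sensitive patient.
altered_in_sensitive = set.union(*sensitive.values())
resistance_specific = sorted(common_resistant - altered_in_sensitive)
print(resistance_specific)  # -> ['GENE_A', 'GENE_B']
```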
Data Mining for Genes Detected in NAC-Resistant Patients
To obtain a more comprehensive understanding of the biological processes and molecular functions of the genes detected in NAC resistance, Gene Ontology (GO) term enrichment analysis was performed (Ashburner et al., 2000). Furthermore, KEGG (Kanehisa and Goto, 2000), Reactome (Sidiropoulos et al., 2017), and WikiPathways (Slenter et al., 2018) were queried to examine the biological pathways in which these genes are involved. GO enrichment and pathway analysis of the genes detected in NAC-resistant patients was carried out with the ClueGO plug-in in Cytoscape (Bindea et al., 2009). Furthermore, the genes detected in NAC resistance were investigated in the cBio Cancer Genomics Portal (cBioPortal) database in order to further evaluate the alterations of these genes in OC tissue samples. cBioPortal is an open-access database providing visualization and analysis tools for multidimensional cancer genomics data, such as The Cancer Genome Atlas (TCGA) (Cerami et al., 2012). The genes with higher copy number alteration frequency on cBioPortal were selected to evaluate mRNA expression levels in Gene Expression Profiling Interactive Analysis (GEPIA2) for OC tissue data compared to normal tissues. GEPIA2 is an online database of RNA sequence expression data based on tumor and normal samples from the TCGA and GTEx projects. Gene Set Cancer Analysis (GSCALite), a user-friendly web server for dynamic analysis and visualization of gene sets in cancer, was applied to investigate the correlation between mRNA expression and CNV in OC patients from TCGA; Spearman correlation coefficients were reported (Liu et al., 2018). The Cancer Virtual Cohort Discovery Analyses Platform (CVCDAP), a web-based platform delivering an interactive and customizable toolbox for cohort-level analysis of TCGA and CPTAC public datasets, was utilized to compare overall survival (OS) between chemo-resistant and chemo-sensitive patients based on the mRNA expression levels of these genes (Guan et al., 2020). Finally, a volcano plot was used to evaluate protein expression in OC chemotherapy-resistant patients compared to chemotherapy-sensitive patients with TCGAbiolinks, an R/Bioconductor package for integrative analysis of TCGA data (Colaprico et al., 2016).
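As a small illustration of the CNV-expression correlation step, which GSCALite performs on TCGA data, the following Python snippet computes a Spearman coefficient on made-up paired values.

```python
# Illustrative sketch of the CNV-expression correlation reported below:
# Spearman correlation between per-sample copy number and mRNA expression.
# The arrays are hypothetical values standing in for TCGA-derived data.
from scipy.stats import spearmanr

copy_number = [2, 2, 3, 4, 3, 5, 2, 4, 6, 3]                      # made up
expression = [5.1, 4.8, 6.0, 7.2, 6.1, 8.3, 5.0, 7.0, 9.1, 6.4]   # made up
rho, p = spearmanr(copy_number, expression)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # a positive rho mirrors Figure 6
```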
RESULTS

Patient Characteristics
The clinicopathological characteristics and responses to NAC of the six OC patients are summarized in Table 1. The mean age of the patients was 54.3 years (range, 38-78 years), and they included stage II, III, and IV high-grade serous ovarian carcinoma (HGSOC). Two of the six patients died while undergoing chemotherapy and were therefore unable to provide post-NAC plasma samples. Two patients were NAC-sensitive, while two were NAC-resistant and did not respond appropriately to chemotherapy treatment.
Investigation of CNV From ctDNA in NAC-Resistant and NAC-Sensitive Patients With OC
We detected CNVs in all six HGSOC cases (6/6), from early stage (II) to late stage (IV), using WISECONDORX (Table 2) and NextGENe analysis (Supplementary Tables S1-S3). WISECONDORX results indicated that most patients (at least five cases) had abnormalities (gain and/or loss) on chromosomes 4, 9, 18, and 22 before NAC treatment (Table 2). NextGENe data revealed duplications and deletions not only in coding genes but also in non-coding genes (Supplementary Tables S1-S3). Chromosomal changes detected by the WISECONDORX software indicated that NAC-sensitive patients had fewer chromosomal CNV changes before treatment than NAC-resistant patients, as shown in Figure 1 and Table 2.
Using the results of the NextGENe software, common genes with copy number changes in pre-NAC and post-NAC samples were assessed with a Venn diagram. We detected 17 common genes with CNVs in the pre-NAC samples of the NAC-resistant group (Figure 2A; Supplementary Table S4). Among these, six common genes, NOMO2, ABR, GSTT2B, HSF1, TMEM249, and MROH1, showed CNVs exclusively in the NAC-resistant group, whereas none of these genes was altered in NAC-sensitive patients before NAC treatment. Furthermore, as shown in Figure 2B and Supplementary Table S5, 38 common genes with CNVs were discovered in the post-NAC data of NAC-resistant patients. We also identified 14 genes, including LOC285441, LOC100996414, TTLL10, NXN, SMA4, SMA5, NOMO1, TIMM22, NOMO2, HSF1, ABR, TMEM249, GSTT2B, and MROH1, that showed CNVs exclusively in NAC-resistant patients following chemotherapy. The six genes (NOMO2, ABR, GSTT2B, HSF1, TMEM249, and MROH1) with frequent CNVs in the NAC-resistant group both pre- and post-treatment were designated as genes associated with NAC resistance (see Table 3).
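The Venn-style comparison underlying Figure 2 amounts to simple set operations on per-patient gene lists. A minimal Python sketch follows; the gene sets are illustrative placeholders standing in for the NextGENe output, not the study data.

# Each set lists genes with CNVs for one patient; the contents are illustrative only.
resistant_pre = [
    {"NOMO2", "ABR", "GSTT2B", "HSF1", "TMEM249", "MROH1", "NXN"},   # resistant patient 1
    {"NOMO2", "ABR", "GSTT2B", "HSF1", "TMEM249", "MROH1", "SMA4"},  # resistant patient 2
]
sensitive_pre = [
    {"NXN", "SMA4"},    # sensitive patient 1
    {"TTLL10"},         # sensitive patient 2
]

# Genes altered in every NAC-resistant patient.
common_resistant = set.intersection(*resistant_pre)
# Genes altered in any NAC-sensitive patient.
altered_in_sensitive = set.union(*sensitive_pre)

# Genes with CNVs exclusive to the NAC-resistant group.
exclusive_to_resistant = sorted(common_resistant - altered_in_sensitive)
print(exclusive_to_resistant)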
Data Mining Approaches for Genes Detected in NAC-Resistant Patients
Pathway enrichment analysis of the six common genes detected in NAC-resistant patients indicated that GSTT2B was involved in chemical carcinogenesis and drug metabolism pathways, while ABR and HSF1 were enriched in p75 NTR receptor-mediated signaling and cellular response to heat stress/shock pathways, respectively (Figure 3A). Gene Ontology analysis showed that ABR and HSF1 contribute to apoptosis and programmed cell death (GO:0012501 and GO:0006915, respectively) (Supplementary Table S6). Moreover, ABR, GSTT2B, and HSF1 play roles in catalytic activity and metabolic processes (Figure 3B).
Comparing our CNV results for the six common genes detected in NAC-resistant patients with TCGA data through cBioPortal showed that the predominant alteration type was amplification, consistent with TCGA data from 585 OC patients (Figure 4). Moreover, the highest CNV alteration frequencies (≥35%) were observed for the HSF1, TMEM249, and MROH1 genes in OC patients (Figure 4).
The HSF1, TMEM249, and MROH1 loci are located on Chr8. In the WISECONDORX data, CNV alterations on Chr8 before treatment were found only in NAC-resistant patients and not in NAC-sensitive patients (Table 2). Investigation of these three genes at the mRNA level in GEPIA2 showed significantly higher expression of HSF1, TMEM249, and MROH1 in ovarian serous carcinoma (OSC) compared with normal tissues (all p < 0.05) (Figure 5A-C). Moreover, Spearman correlation coefficients showed positive associations between CNVs and mRNA expression of HSF1 (Cor = 0.83, FDR = 6e-74), MROH1 (Cor = 0.72, FDR = 4.8e-49), and TMEM249 (Cor = 0.48, FDR = 3.11e-18) across 585 OC patients from TCGA (Figure 6). As shown in Figure 7, the volcano plot indicates enrichment of TMEM249, MROH1, and HSF1 protein expression in chemotherapy-resistant patients from TCGA; in other words, the expression of these three proteins was significantly higher in resistant patients (n = 90) than in chemotherapy-sensitive patients (n = 194) (all p < 0.05). Survival analyses showed a significant reduction in OS among chemotherapy-resistant patients with high expression of HSF1, TMEM249, and MROH1 compared with sensitive samples (all p < 0.0001). Moreover, these data show that high expression of MROH1 and TMEM249 significantly reduces OS in both resistant and sensitive samples (all p < 0.0001) (Figure 8).
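The overall-survival comparison reported here was run through the CVCDAP portal; as an illustration only, an equivalent Kaplan-Meier and log-rank comparison could be sketched in Python with the lifelines package, assuming a hypothetical clinical table with a survival time, an event flag, and an expression-based group label (all file and column names are placeholders, not the study's data).

import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical clinical table: os_months, os_event (1 = death), HSF1_group ("high"/"low").
clin = pd.read_csv("ov_clinical.tsv", sep="\t")
high = clin[clin["HSF1_group"] == "high"]
low = clin[clin["HSF1_group"] == "low"]

# Kaplan-Meier curves for the two expression groups.
kmf = KaplanMeierFitter()
ax = kmf.fit(high["os_months"], high["os_event"], label="HSF1 high").plot_survival_function()
kmf.fit(low["os_months"], low["os_event"], label="HSF1 low").plot_survival_function(ax=ax)

# Log-rank test for a difference in overall survival between the groups.
res = logrank_test(high["os_months"], low["os_months"],
                   event_observed_A=high["os_event"], event_observed_B=low["os_event"])
print(f"log-rank p = {res.p_value:.4g}")
plt.show()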
DISCUSSION
ctDNA from liquid biopsy has recently been shown to be a useful diagnostic tool in a variety of cancers (Saha et al., 2022). Indeed, ctDNA levels have been shown to correlate with tumor burden and the risk of recurrence (Fleischhacker and Schmidt, 2007). As the fraction of total cell-free DNA derived from cancer cells, ctDNA is superior to other plasma biomarkers such as RNA and protein: it is more stable than RNA (Panawala, 2017) and offers greater sensitivity and stronger clinical correlations. Although plasma protein biomarkers such as AFP, CEA, PSA, and CA15-3 are commonly used in the clinical management of different cancers (Mazzucchelli et al., 2000; He et al., 2013), some cancer patients are negative for these biomarkers, and the markers are also found at lower concentrations in the serum of individuals free of cancer (Cheng et al., 2016). In contrast, studies show that ctDNA can more accurately reflect real-time tumor burden in patients receiving therapy (Bettegowda et al., 2014).
The current study is the first report to demonstrate the utility of analyzing CNVs in ctDNA using massively parallel sequencing (as used in NIPT) before and after NAC treatment in OC patients. A previous study by Fleischhacker and Schmidt (2007) showed that ctDNA is low in the early stages of a tumor and difficult to detect. The method used in this study can detect copy number alterations of ctDNA in various stages of HGSOC, including stages II-IV, through low-coverage plasma DNA sequencing and analysis of chromosomal CNVs ≥0.5 Mb. It has previously been demonstrated that tumor DNA from cancer cells can be detected in plasma using NIPT (Amant et al., 2015; Bianchi et al., 2015; Cohen et al., 2016). We also investigated the response to chemotherapy and described the CNV changes in NAC-sensitive and NAC-resistant patients. The CNV burden across all chromosomes was lower in NAC-sensitive patients than in NAC-resistant patients before NAC treatment. Since CNVs are considered a key factor in the genetic variation of tumors (Hu et al., 2021), a lower CNV burden appears to indicate a better response to treatment (Walker et al., 2017). CNVs have also been reported in a variety of genes following chemotherapy, notably in genes that control drug uptake into cells and drug metabolism (Willyard, 2015). We therefore compared the CNVs produced in the NAC-sensitive and NAC-resistant groups of OC patients following NAC therapy. According to our findings, the CNV load increased after the first course of chemotherapy in both NAC-sensitive and NAC-resistant individuals. This phenomenon could be attributed to the death of tumor cells (DNA damage) (Woods and Turchi, 2013) or to resistance to therapy (Alfarouk et al., 2015).
Previous studies have suggested a role for CNVs in genes related to drug resistance (Alfarouk et al., 2015; Willyard, 2015; Costa et al., 2017; Qidwai, 2020). Our findings indicate that some of the CNVs detected by the NIPT platform differ between the NAC-resistant and NAC-sensitive groups of OC patients. We found common genes, including NOMO2, ABR, GSTT2B, HSF1, TMEM249, and MROH1, in the pre- and post-treatment data of the NAC-resistant group that were not detected in NAC-sensitive patients before or after therapy. These findings suggest that CNVs discovered by the NIPT platform may contribute to treatment resistance. Owing to the unavailability of tissue specimens from these patients, we compared the genes with CNVs discovered in plasma with the tissue samples in the TCGA data (Bell et al., 2011) to confirm the presence of these CNVs. Ideally, plasma sequencing data would be investigated alongside paired tumor DNA from tissue samples, but the lack of tissue or insufficient tumor tissue sampling is a limitation of this type of study (Cohen et al., 2016). The cBioPortal findings indicated alterations, including amplification, in all six genes among TCGA patients, and some of these CNVs have a high frequency in OC tissue samples. Moreover, HSF1, TMEM249, and MROH1 are located on Chr8, which is highlighted in our study of OC patients. The CNV investigation via WISECONDORX showed that Chr8 underwent CNV changes in NAC-resistant patients before NAC treatment, whereas no CNV changes were found on Chr8 in NAC-sensitive patients before chemotherapy. Amplification of Chr8 genes has also been identified as a recurrent genomic event in lung cancer (Baykara et al., 2015) and malignant peripheral nerve sheath tumor (MPNST) (Dehner et al., 2021). This finding points to a relationship between CNV alterations on Chr8 and NAC resistance in OC patients. Other studies have shown that approximately 80 genes on Chr8 are involved in cancer biology (Tabarés-Seisdedos and Rubenstein, 2009). We investigated the RNA and protein expression levels of HSF1, TMEM249, and MROH1, which are related to NAC resistance, in the TCGA data. DNA copy number variation is an important factor in gene expression (Gamazon and Stranger, 2015) and an important influence on the expression of both protein-coding and non-coding genes (Liang et al., 2016). As expected, positive correlations between mRNA levels and CNV alterations were observed for HSF1, TMEM249, and MROH1. In this regard, survival analysis confirmed the influence of the expression of these three genes on the survival rate: resistant patients with higher mRNA expression of these genes had reduced OS. Furthermore, in TCGA chemotherapy-resistant OC patients, the protein expression of these genes was higher than that in sensitive individuals. The dysregulation and diverse roles of these genes in various cancers have already been evaluated.
Studies indicate that HSF1 has been implicated in tumorigenesis through its participation in cellular stress response pathways and its effect on regulatory pathways such as p53, mTOR, and insulin signaling (Vihervaara and Sistonen, 2014; Vydra et al., 2014; Powell et al., 2016; Barna et al., 2018). In line with our in silico analysis, high levels of HSF1 have been identified in different types of cancers. Overexpression of HSF1 in tumor tissues is correlated with a worse prognosis in cancer patients (Kourtis et al., 2015). Given the correlation between high HSF1 levels and disease deterioration in the initiation, promotion, and progression of cancer, HSF1 could act as a potential therapeutic target (Wang et al., 2020). OC studies have reported that HSF1 induces epithelial-mesenchymal transition (EMT) in in vitro models (Powell et al., 2016) and that targeting HSF1 produces an antitumor effect (Chen et al., 2017). Zhang et al. (2017) identified parts of human Chr8 as a hotspot location mediated by the master regulator HSF1 in different cancers; interestingly, they showed MROH1 and TMEM249 immediately flanking the upstream and downstream regions of HSF1. As predicted, our CNV data for Chr8 genes such as HSF1, MROH1, and TMEM249 showed duplications in individuals who were resistant to NAC. Considering HSF1's critical involvement in cancer development, the absence of HSF1 alterations in NAC-sensitive patients compared with NAC-resistant patients suggests a role for Chr8 and the HSF1 gene in NAC resistance. The activity of the GSTT2 enzyme is important for protecting cells against toxic products of oxygen and lipid peroxidation (Tan and Board, 1996), which represent a major source of endogenous DNA damage in humans that contributes significantly to cancer and other genetic diseases (Marnett, 2002). The duplicated CNV and the mRNA expression of GSTT2B in NAC-resistant patients found by bioinformatics analysis are consistent with previous cancer studies (Pool-Zobel et al., 2005; Doherty et al., 2014). Research on colon cancer cells showed that GSTT2, which is involved in defense against oxidative stress, was upregulated upon incubation with butyrate (Pool-Zobel et al., 2005). Furthermore, Doherty et al. (2014) observed that the expression of GSTT2 was increased in drug-resistant cervical cell models with cisplatin treatment.
FIGURE 6 | Correlation between mRNA levels and copy number variation of the three genes related to NAC resistance, assessed through cBioPortal in TCGA ovarian cancer patients. The correlation analysis shows a positive association between mRNA expression and copy number variation of HSF1, TMEM249, and MROH1 across 585 ovarian cancer patients from TCGA.
FIGURE 7 | Volcano plot of protein expression in ovarian cancer patients treated with chemotherapy from TCGA, generated using TCGAbiolinks. The expression of the HSF1, TMEM249, and MROH1 proteins was significantly higher in resistant patients (n = 90) than in chemotherapy-sensitive patients (n = 194).
FIGURE 8 | Comparison of overall survival between chemo-resistant and chemo-sensitive patients according to HSF1, TMEM249, and MROH1 expression, using the CVCDAP portal for ovarian cancer patients. The plot shows a significant reduction in overall survival among chemotherapy-resistant patients with high expression of HSF1, TMEM249, and MROH1 compared with sensitive samples. It also indicates that high expression of MROH1 and TMEM249 in chemo-sensitive samples can reduce OS among these patients.
Deletion of ABR has indicated a tumor-suppressive role in several solid tumors, such as medulloblastoma (McDonald et al., 1994), astrocytomas (Willert et al., 1995), and breast cancer (Liscia et al., 1999). In acute myeloid leukemia, the ABR gene has been identified as a prognostic factor, with blockage of ABR preventing myeloid differentiation (Namasu et al., 2017). In our study, the ABR gene was duplicated in NAC-resistant patients before and after treatment. These data call for further investigation into the role and function of the ABR gene, which might influence resistance to chemotherapy in OC patients.
The NOMO2 gene is known as a diagnostic biomarker of radioresistance in human H460 lung cancer stem-like cells (Yun et al., 2016). We found NOMO2 duplication in NAC-resistant patients by NIPT. These data are consistent with a previous NGS study in metastatic breast cancer that detected mutations or amplifications in cfDNA samples (Page et al., 2017). CNVs were found not only in coding genes but also in non-coding genes, such as microRNAs and long non-coding RNAs. A previous study in bladder cancer identified CNV alterations in long non-coding RNAs that can be used as prognostic biomarkers for bladder cancer (Zhong et al., 2021). Furthermore, copy number changes in non-coding RNAs have been identified as prospective therapeutic targets and prognostic markers in lung squamous cell carcinoma (LUSC) (Ning et al., 2021). We are aware that our research has limitations in describing the biological behavior of these CNVs and their relationships with NAC resistance in OC cells. Although the number of cases was adequate to establish a trend in the data, larger sample sizes are strongly suggested for generalizability, which could be achieved through larger, multicenter investigations.
CONCLUSION
The findings of this study highlight low-coverage whole-genome sequencing analysis as a way to investigate CNV changes in ctDNA. CNVs detected in ctDNA through the NIPT platform could be potential markers of the clinical response to NAC treatment. Our results suggest that some copy number alterations at the DNA level may be related to the response to NAC treatment in OC patients, although further studies are warranted to understand the role of these CNVs in NAC-resistant patients. Using a prenatal testing platform for diagnosis or for monitoring the therapeutic response in other cancer types is a novel approach that holds promise and should be examined.
DATA AVAILABILITY STATEMENT
The data presented in the study are deposited in the European Nucleotide Archive (ENA) at EMBL-EBI under accession number PRJEB53061 (https://www.ebi.ac.uk/ena/browser/view/PRJEB53061).
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Iran University of Medical Sciences Human Research Ethics Committee in Iran (Ref No: IR.IUMS.REC1397.32825). All procedures performed in this study were in accordance with the 1964 Helsinki Declaration and its later amendments. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
ZM, MA-L, and MT designed and supervised the work; MS, a Ph.D. student, performed all examinations, collected the patients' samples and data, and wrote the manuscript; FF participated in data mining and the literature review using bioinformatics tools, wrote the bioinformatics sections, prepared the related figures, and assisted in editing the manuscript; HA participated in bioinformatics data analysis and in editing the manuscript; AA participated in NGS data analysis and wrote the methods section; SA contributed to sample collection; all authors read and approved the final manuscript.
FUNDING
This work was supported by a grant from the Iran University of Medical Sciences (#32825) and Royan Institute (#97000072).
Activity Patterns, Population Dynamics, and Spatial Distribution of the Stick Tea Thrips, Dendrothrips minowai, in Tea Plantations
Simple Summary
We studied the activity patterns, population dynamics, and spatial distribution of Dendrothrips minowai Priesner, one of the most destructive pests of tea plants in tea plantations. A large proportion of D. minowai individuals were caught in traps placed at heights ranging from 5 cm below to 25 cm above the position of tender leaves at the top of the tea plant, with most captures at a height of 10 cm above this position. The flight activity of D. minowai was highest from 10:00 to 16:00 h on sunny days in the spring and from 06:00 to 10:00 h and from 16:00 to 20:00 h in the summer. The distributions of D. minowai females and nymphs on leaves were aggregated according to Taylor's power law and Lloyd's patchiness index. The D. minowai populations were dominated by females, and the density of males was high in June. The seasonal prevalence of tea thrips captured with sticky traps in the field was bimodal; adult thrips overwintered on the bottom leaves. The peak periods of activity were from April to June and from August to October. This work provides new insights that have implications for enhancing the efficacy of measures to control D. minowai.
Abstract
The stick tea thrips, D. minowai Priesner (Thysanoptera: Thripidae), is one of the most economically significant thrips pests of tea (Camellia sinensis (L.) O. Ktze.) in China. Here, we sampled D. minowai in tea plantations from 2019 to 2022 to characterize its activity patterns, population dynamics, and spatial distribution. A large proportion of D. minowai individuals were caught in traps placed at heights ranging from 5 cm below to 25 cm above the position of tender leaves at the top of the tea plant, and the greatest number of individuals were captured at a height of 10 cm from the position of tender leaves at the top of the tea plant. Thrips were most abundant from 10:00 to 16:00 h in the spring and from 06:00 to 10:00 h and from 16:00 to 20:00 h on sunny days in the summer. The spatial distribution of D. minowai females and nymphs was aggregated on leaves according to Taylor's power law (females: R2 = 0.92, b = 1.69 > 1; nymphs: R2 = 0.91, b = 2.29 > 1) and Lloyd's patchiness index (females and nymphs: C > 1, Ca > 0, I > 0, M*/m > 1). The D. minowai population was dominated by females, and male density increased in June. Adult thrips overwintered on the bottom leaves, and they were most abundant from April to June and from August to October. Our findings will aid efforts to control D. minowai populations.
Introduction
Thrips (Insecta: Thysanoptera) are major pests of agricultural and horticultural crops around the world [1-3]. Many thrips are pests of commercial crops due to the direct damage they cause by feeding on developing flowers, fruits, vegetables, or leaves, affecting yield or cosmetic appearance [4,5]. Thrips may also serve as vectors for plant diseases, such as tospoviruses [6,7]. In most annual vegetable and row crop production systems, the seasonal availability of host plants, along with climatic variables, profoundly impacts thrips population dynamics [8,9]. However, in perennial plantation crops where the host is available throughout the year, thrips population dynamics and dispersal patterns are largely influenced by climatic variables [4]. Therefore, in perennial agroecosystems, understanding yearly thrips population dynamics in the field and their dispersal patterns on host plants is very important for developing effective integrated pest management strategies [10]. This information can be used to predict thrips abundance on host plants in the field and may suggest further ways of developing their potential for pest management.
Thrips have become a threatening pest of tea plants (Camellia sinensis (L.) O. Ktze.) in China, which is one of the most important tea-producing and tea-exporting countries in the world [11,12]. A total of 28 thrips species have been documented in tea plantations in China, including D. minowai Priesner, Scirtothrips dorsalis Hood, and Mycterothrips gongshanensis Li, Li, and Zhang [13-15]. The stick tea thrips, D. minowai, has become an increasingly significant pest of tea plants in China in recent years [16,17]. D. minowai damages tea plants directly by sucking nutrients from the leaflets; whether it can also be a vector of viruses remains unknown. The presence of stripes and scarring along the leaf veins and blades on the abaxial and adaxial leaf surfaces is a sign of feeding damage, and heavy D. minowai infestations can lead to the gradual loss of leaf color, leaf stiffness, and decreases in tea yield and quality [18,19].
Insecticides are often used to control D. minowai in conventional tea plantations, but insecticide use can have negative environmental effects, decrease the abundance of beneficial natural enemies, and favor the evolution of resistance in thrips populations [20-22]. Thus, the frequency and timing of insecticide applications are critically important for the sustainable management of D. minowai populations. Knowledge of the abundance of thrips on tea plants is important for understanding seasonal variation in thrips activity, including the timing of infestations [23]. Knowledge of the spatial distribution of thrips on tea plants is also important for the development of strategies to control their populations [24]. The aims of this study were to monitor the flight heights of thrips, characterize their daily activity patterns, clarify the spatial distribution and population dynamics of D. minowai in tea fields, and provide data that will aid integrated thrips management programs in China.
Materials and Thrips Identification
Blue sticky traps made from PVC (10 × 25 cm), purchased from Hangzhou Yihao Agricultural Technology Co., Ltd., Zhejiang Province, China, were used to characterize the flight height and diurnal activity patterns of thrips. D. minowai individuals were identified based on the morphological characteristics reported in the literature.
Flight Height Observations
The flight heights of D. minowai adults were evaluated using commercially available colored sticky traps. Blue sticky traps were used because they have been shown to be effective in attracting various other thrips species [25,26]. Traps were hung on branches at various heights below (negative numbers) and above (positive numbers) the position of tender leaves at the top of the tea plant (Figure 2a).
Diurnal Activity Patterns
Following previous studies, observations of the diurnal activity patterns of D. minowai were made on sunny, cloudy, and rainy days [27]. Thrips were not active at night; the number of thrips caught at night was counted once; however, the number of thrips captured during the day was counted every 2 h (specifically, counts were conducted at 6:00, 8:00, 10:00, 12:00, 14:00, 16:00, 18:00, and 20:00). Sampling was conducted at organic tea plantations of Shaoxing Royal Tea Village Co., Ltd., in the spring (24 April 2022: sunny day; 25 April 2022: rainy day; 26 April 2022: cloudy day) and the summer (24 June 2022: sunny day; 26 June 2022: cloudy day; 28 June 2022: rainy day). Blue sticky cards were placed 10 cm away from the surface of tea leaves, and there was a distance of 5 m between each trap to minimize interference between traps. Five traps were randomly placed at the study sites during each sampling period. The number of thrips caught per sticky trap was recorded.
Figure 2. Schematic diagram of how the traps were placed at different heights from below (negative numbers) to above (positive numbers) the position of tender leaves at the top of the tea plant in tea plantations (a) and the numbers (mean ± SEM) of D. minowai adults captured in traps (b). Different lowercase letters denote a difference at the p < 0.05 level, while the same lowercase letter indicates no significant difference (p > 0.05) (ANOVA followed by LSD).
Spatial Distribution and Sex Ratio
The spatial distribution and sex ratio of D. minowai on leaves were studied in tea plantations at four sites (Hangzhou Fuhaitang Tea Ecological Technology Co., Ltd.; Zhejiang Camel Jiuyu Organic Food Co., Ltd.; Shaoxing Royal Tea Village Co., Ltd.; and the Shengzhou Tea Comprehensive Experimental Base) in Zhejiang Province, China. To characterize the spatial distribution of D. minowai, we visually inspected tea plants for morphological indicators of the presence of D. minowai females and nymphs [28]. Our previous observations suggest that male D. minowai adults are rarely found on tea leaves and spend most of their time hiding in tea bushes, so visual inspection of plants is not effective for detecting D. minowai males. Therefore, to evaluate the sex ratio, we sampled female and male D. minowai adults using knockdown techniques [29], which involved holding tea branches over a rectangular 40 × 20 × 10 cm white pan and striking the branch five times; the numbers of females and males that fell into the pan were then counted [28].
Seasonal Abundance
The seasonal abundance of D. minowai on tea leaves was monitored at weekly intervals between April 2019 and October 2022 in organic tea plantations of Shaoxing Royal Tea Village Co., Ltd. To measure thrips abundance, we divided the study area into plots of 20 × 30 m. We then sampled D. minowai adults from 100 tea leaves at five random points within each plot. The numbers of D. minowai adults on the upper (second leaf under the tender shoot), middle, and bottom leaves were estimated using the method described above.
Statistical Analyses
All data were checked for normality and equality of variances prior to statistical analysis. Datasets that did not meet assumptions were square-root transformed to meet the requirements of equal variances and normality. Differences in the numbers of thrips per trap, at different heights, and during different periods were determined using analysis of variance (Minitab 13, Minitab Inc., State College, PA, USA).
The means (m) and variances (V) of the densities of thrips were calculated. Means and variances of D. minowai were modeled according to Taylor's power law (TPL) [lg(V) = lg(a) + b·lg(m)], where a is a sampling factor and b is the aggregation parameter. The distribution is considered regular if b < 1, random if b = 1, and aggregated if b > 1 [30]. The spatial distribution of the thrips was analyzed using density data and Lloyd's patchiness index [31,32]. Parameters were obtained using the following models: diffusion coefficient C = V/m, Cassie's index Ca = (V − m)/m², index of clumping I = V/m − 1, mean crowding intensity M* = m + V/m − 1, and aggregation index M*/m. C < 1, I < 0, Ca < 0, and M*/m < 1 represent a regular distribution; C = 1, I = 0, Ca = 0, and M*/m = 1 represent a random distribution; and C > 1, I > 0, Ca > 0, and M*/m > 1 represent an aggregated distribution.
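For clarity, the sketch below shows how the Taylor's power law regression and the dispersion indices above can be computed from per-sample counts in Python. The count arrays are illustrative placeholders, not the field data, and the Cassie and diffusion coefficient formulas follow the standard definitions assumed in the paragraph above.

import numpy as np
from scipy import stats

def dispersion_indices(counts):
    # Lloyd-type indices for one sampling occasion (counts per leaf or per plot).
    m = np.mean(counts)
    v = np.var(counts, ddof=1)
    C = v / m                    # diffusion coefficient
    I = C - 1                    # index of clumping, I = V/m - 1
    Ca = (v - m) / m ** 2        # Cassie's index
    m_star = m + I               # mean crowding intensity, M* = m + V/m - 1
    return {"mean": m, "variance": v, "C": C, "I": I, "Ca": Ca, "M*/m": m_star / m}

# Illustrative per-leaf counts for three sampling dates.
samples = [np.array([0, 2, 5, 1, 0, 7, 3]),
           np.array([1, 0, 0, 4, 9, 2, 2]),
           np.array([3, 6, 0, 1, 12, 5, 4])]

means = np.array([s.mean() for s in samples])
variances = np.array([s.var(ddof=1) for s in samples])

# Taylor's power law: lg(V) = lg(a) + b * lg(m); b > 1 indicates an aggregated distribution.
b, lga, r, p, se = stats.linregress(np.log10(means), np.log10(variances))
print(f"b = {b:.2f}, R^2 = {r**2:.2f}, p = {p:.3g}")
for s in samples:
    print(dispersion_indices(s))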
Graphs of the flight height, daily flight activity, and seasonal distribution of D. minowai were made using GraphPad Prism 7.0, and graphs of the linear relationships between variances and means of D. minowai were made using OriginPro 2021. All analyses were conducted using SAS 9.4.
Flight Height
The numbers of D. minowai captured on blue sticky traps differed among heights (F14,60 = 385.65, p < 0.001) (Figure 2b). Overall, traps placed from 5 cm below to 25 cm above the position of tender leaves at the top of the tea plant captured large numbers of thrips, and the most thrips were captured at a height of 10 cm above this position (Figure 2b).
Diurnal Activity Patterns
The daily flight activity of the stick tea thrips D. minowai was examined using blue sticky traps in tea plantations. We found that thrips flight activity was affected by both weather and temperature (Figure 3). In the spring, thrips were most abundant between 10:00 and 16:00 h on sunny days, their abundance declined from 16:00 to 20:00 h, and they were largely inactive from 20:00 to 08:00 h. The number of thrips captured in traps was significantly lower on rainy or cloudy days than on sunny days between 10:00 and 16:00 h; the daily flight curve was unimodal (Figure 3a). In the summer, the daily flight curve was bimodal on hot, sunny days (Figure 3b). Specifically, thrips were more abundant between 06:00 and 10:00 h and between 16:00 and 20:00 h; only a few thrips were found between 12:00 and 16:00 h and between 20:00 and 06:00 h. On cloudy days, thrips were captured on sticky traps during the entire sampling period from 06:00 to 20:00 h, and a bimodal daily flight curve was also observed on these days. Few thrips were captured in traps throughout the sampling period on rainy days.
Spatial Distribution and Sex Ratio
Variances and means were significantly related according to Taylor's power law (females: R² = 0.92, p < 0.0001, b = 1.69 > 1; nymphs: R² = 0.91, p < 0.0001, b = 2.29 > 1), indicating that the distribution of D. minowai females and nymphs in the four different tea plantations was aggregated (Figure 4a,b). Lloyd's patchiness index indicated that the distribution of D. minowai females on tea leaves was aggregated in Yuecheng District, Shaoxing, China, from April to June 2021, and in Xihu District and Yuhang District, Hangzhou, and Shengzhou County, Shaoxing, from April to June 2022 (C > 1, Ca > 0, I > 0, M*/m > 1) (Table 1). Lloyd's patchiness index also indicated that the distribution of D. minowai nymphs on tea leaves was aggregated in Yuecheng District, Shaoxing, from April to June 2022 (C > 1, Ca > 0, I > 0, M*/m > 1) (Table 2). In general, the spatial distributions of D. minowai females and nymphs were relatively stable across different fields and periods. The proportions of female and male D. minowai adults varied among periods and sites (Table 1). The proportion of females in the tea plantations was higher than that of males throughout most of the sampling period, indicating that D. minowai populations were dominated by females, especially as the density of the thrips population increased (from April to the end of May). However, the density of D. minowai males increased in early June, and males eventually outnumbered females by late June.
Seasonal Abundance of D. minowai
Annual cycles were observed in the D. minowai female population in tea plantations, with a bimodal pattern of occurrence: the two highest densities on tea leaves occurred from April to June and from August to October, regardless of cultivar or year, in Zhejiang Province, China (Figure 5). However, in 2022 the number of D. minowai females from August to October was near zero, unlike in the same months of 2019-2021. This abnormal pattern was caused by continuous, extremely high temperatures and drought from June to August 2022 (Table S1).
Discussion
Characterizing the flight heights and diurnal activity patterns of thrips is important for accurately estimating their densities and dispersal patterns, as well as for developing pesticide application strategies. The flight activity patterns of insects are related to their responses to sunlight, temperature, and relative humidity [24,31,33]. In our study, the flight activity patterns of D. minowai on sunny days differed between the spring and summer. D. minowai flight activity peaked from 10:00 to 16:00 h in the spring, whereas it peaked from 06:00 to 10:00 h and from 16:00 to 20:00 h in the summer (Figure 3). The thrips appear to avoid flying at temperatures below 20 °C or above 30 °C (Table S1). No flight activity was observed at night (Figure 3). Frankliniella occidentalis Pergande females are immobile at midday and at night [34], and Thrips imaginis Bagnall, T. hawaiiensis Morgan, S. dorsalis, Megalurothrips usitatus Bagnall, and F. schultzei Trybom seek refuge on their host plants during the hottest times of day, which corresponds to the period when their densities are highest [35,36]. The number of thrips captured on traps was significantly lower on rainy days than on sunny days. The decreased abundance of thrips on rainy days might stem from the effects of temperature, solar radiation, or humidity.
In our study, the distribution of D. minowai females from April to June at all sites was aggregated according to Taylor's power law (b > 1) and Lloyd's patchiness index (C > 1, Ca > 0, I > 0, M*/m > 1) ( Table 1). These results indicate that the aggregated distribution of females was not affected by tea variety and geographic region. The distributions of other thrips species have also been shown to be aggregated. For example, the distribution of F. occidentalis was significantly aggregated on cucumber, cotton, tomato, and strawberry [37][38][39][40], the distribution of F. schultzei was significantly aggregated on cucumber [41], the distribution of Pezothrips kellyanus Bagnall was significantly aggregated in citrus groves [42], and the distribution of S. dorsalis was significantly aggregated on chili plants [43]. The distribution of nymphs was more aggregated than that of females (nymphs: b = 2.29 > females: b = 1.69) in tea plantations ( Figure 4). This pattern has been observed in many other thrips species; nymphs aggregate during the early nymphal stages mainly because of their limited mobility, and they become less aggregated as their mobility increases [44]. These findings are consistent with the results of previous studies showing that the distribution of nymphs is more aggregated than that of females in F. occidentalis on tomato flowers and on greenhouse cucumber leaves, as well as in T. hawaiiensis, T. palmi Karny, and S. dorsalis on their host plants [24,42,45].
Our study of the population dynamics of thrips across four years revealed two key periods in which the abundance of D. minowai and, thus, the damage that they induced to tea plants were highest in Zhejiang Province ( Figure 5). This information can aid the management and control of thrips on tea leaves in different seasons. In addition, D. minowai adults overwintered on the bottom leaves. The full-bloom stage of tea plants runs from mid-October to late November, and most tea flowers are present on the lower to middle parts of tea plants [46]. Some D. minowai adults began to colonize the lower middle leaves in tea plantations starting in October. D. minowai adults overwinter on the bottom leaves from November until the following March. During this stage, volatiles such as beta-ocimene, farnesene, and methyl benzoate have been identified [47]. According to our previous research, D. minowai is attracted to the above three volatiles [17]. Thus, the presence of overwintering adult thrips in the lower middle part of the tea plants might stem from their attraction to the volatiles of tea flowers. However, more laboratory studies and fieldwork are needed to clarify the overwintering mechanism of D. minowai. In any case, the bottom tea leaves merit increased attention, given that many of them serve as overwintering sites for these thrips.
Conclusions
Our data on the flight heights and activity patterns of thrips indicate that blue sticky traps hanging at 10 cm above the position of tender leaves at the top of the tea plant were more effective on sunny days. The distribution of D. minowai females and nymphs was aggregated, and a bimodal type of occurrence was observed in the female population in tea plantations. Adult thrips overwinter on the bottom leaves. Thus, the application of pesticides on old bottom leaves during the winter months could reduce population densities and pesticide residues in the following year. The results of this study provide new insights that will aid the management of D. minowai populations in tea fields, as well as the development of integrated pest management programs to control D. minowai infestations.
Self-care and Health Care in Postpartum Women with Obesity: A Qualitative Study
Objective To explore the experiences of women with obesity regarding self-care and the care provided by their families and health team after childbirth. Methods A clinical qualitative study performed at the Postnatal Outpatient Clinic of Hospital da Mulher, Universidade Estadual de Campinas, Brazil. The sample was selected using the saturation criteria, with 16 women with obesity up to 6 months after childbirth. Results The analysis comprised three categories: 1) postnatal self-care; 2) family support for woman after childbirth; and 3) postnatal health care service for women with obesity. Conclusion Women with obesity need support from the health team and from their families after childbirth, when they are overwhelmed by the exhausting care for the newborn. The present study reveals how important it is for health care professionals to broaden their perception and care provided after childbirth for women with obesity so they may experience an improvement in their quality of health and of life.
Introduction
The period after birth is a critical moment of transition and of physiological and psychological adaptations, which leaves women more susceptible to physical and emotional intercurrences. 1,2 The new maternity brings many challenges: besides her physiological recovery, the woman has to deal with the routine of caring for her baby and for herself. During this period, significant changes in her life bring new challenges, such as the acceptance of a new body image, sleep deprivation, adjustments in family relationships, as well as changes in her professional life, health care, and dietary care. 3 A continuum of maternal and newborn care is essential to guarantee maternal and neonatal physical and mental health, irrespective of any complications at birth. This care requires a support network on which these women have relied throughout their lives: family, community and healthcare services. 4 The postpartum period is considered by some authors 3 as the 4th gestational trimester. It is a perspective that considers the woman and her baby as still being a mutually-dependent unit, linked both physiologically and behaviorally. The intention behind insisting on the postpartum period as the "4th trimester" is to encourage actions to support women and their families during this critical period. 3,5 The innumerable physiological, psychological and social changes that take place in the lives of women after childbirth constitute a learning process of changes in lifestyle. The development of public health interventions for this specific population, at this point in their lives, when they are so busy, remains a challenge. 6 Modern families are not receiving sufficient or quality support from family members or friends. Home visits by relatives and friends can help improve the mother's emotional well-being and self-esteem, as well as competency, family functioning, the father-child relationship, and problem-solving. 7 The mother's well-being after childbirth is greatly influenced by her psychosocial state, as well as by family support and by her environment. 8 Having a baby brings emotional experiences to a woman's life. New psychological elaborations are needed, and some women feel more vulnerable to psychological problems during this period. Feelings of being overburdened and insecure about their ability to be a mother are linked to distress in the postnatal period. 8 Self-care is an important component of motherhood. 9 Time, limited resources and difficulty to accept help have been identified as obstacles to women's ability to care for themselves. 10 The present study endeavors to explore the experiences of women with obesity vis-à-vis their self-care and the care they receive during the postnatal period, both from family members and the healthcare team. We define self-care here in its broadest meaning, as any care an individual takes towards him/herself. Thus, we try to identify aspects that can enable healthcare professionals to offer comprehensive care suited to women with obesity after childbirth.
Methods
The clinical qualitative method was used, 11,12 which enables us to understand the emotional experiences of people involved in a healthcare setting. A fundamental part of this methodological structure is the interviewee's discourse. In this case, the scientific investigation is based on the significance the interviewee attributes to the experiences, on the premise that this is an efficient way of learning and inferring results that reveal the nexus of meanings. 12 The clinical qualitative method has three particularities that define it: a) existentialist attitude: appreciation of the angst and anxiety arising from falling ill; b) clinical attitude: appreciation of the reception of a person's emotional suffering and the desire to provide help; c) psychoanalytic attitude: appreciation of the elements underlying the interview, also admitting that unconscious elements are present in the interviewer-interviewee relationship.
Setting
The present research was performed at the Postnatal Outpatient Clinic of Hospital da Mulher, Universidade Estadual de Campinas, a tertiary public teaching hospital, in Southeastern Brazil, which is a national benchmark in public care for women's and neonatal health. To this end, it relies on a multi-professional and interdisciplinary team, and it also promotes teaching, research and further education. The Postnatal Outpatient Clinic monitors postpartum women. The initial stage in the clinical qualitative research is acculturation, 12 through which the researcher establishes a direct relationship with the population to be studied. The main researcher went to the Postnatal Outpatient Clinic for three months (between January and April 2016). The information gathered in this stage (the perceptions of the researcher and the reports of dialogues with the professionals or women after childbirth) were recorded in a field diary and used to formulate the questions initially proposed for the interviews.
Participants
The selection of the sample was intentional: women over 18 years of age, up to 6 months after delivery, and with a pre-pregnancy body mass index (BMI) of at least 30 kg/m² were included. Women who were not breastfeeding were excluded. The sample was selected using the information saturation criteria, 13 after discussion and validation with two research groups. The participants were women from the Postnatal Outpatient Clinic, and they were selected according to data recorded on the same day as their medical consultation. They were approached face to face by the interviewer (the main researcher) and were invited to take part in the study by means of an interview.
Data Collection
The data was collected at the Outpatient Clinic, and the interviews took place between April and August 2017. All of the participants signed an informed consent form before the interviews, which were held in a private room, thus guaranteeing confidentiality. A single, semi-directed interview was performed with each participant, with open-ended questions allowing for depth, 14 developed based on a script that was not rigid, thus enabling the interviewer to make the necessary adaptations based on the information provided by the interviewee. We have selected the questions pertaining to the theme of the present article that were made during the interview.
Trigger question: Tell me a little about how you have been feeling since your baby was born.
• Are you taking care of yourself?
• In what ways do you feel cared for?
• Do you have anyone to take care of you at home?
• How do you think the healthcare team can help you at this time?
Data Analysis
Data analysis followed the seven steps described for clinical qualitative content analysis: 1) editing of the material: transcription of the recorded interviews and convergence with material recorded in the field diary; 2) free-floating reading of the collected material: reading of the material while suspending directed attention; 3) comments and impressions: taking notes and highlighting in the right-hand margin of the transcript; 4) subcategorization and categorization: grouping and naming significant speech within the same theme, while acknowledging that the different categories contain heterogeneous ideas; 5) discussion with academic peers about the analyzed material; 6) category definition: refinement of the categories; and 7) validation of the analyzed material together with peers.
For the content analysis of the field research, the transcriptions of the interviews were performed by one of the co-authors. The editing of the written material, based on the transcriptions of the interviews and the field analysis, was performed by the main researcher and author of the present study. At a later date, all of the material was read separately by the two independent researchers. Both completed the first stages of content analysis and comments individually. Following this, together they defined the categories, which were also discussed with the research advisors and then presented to and validated by two research groups.
The research was approved by the Ethics Committee of the Universidade Estadual de Campinas and the Brazilian National Board of Health in February 2017 (under the number CAAE62565116.3.0000.5404). The COREQ Checklist was also used for the present study.
Results
The 16 women approached agreed to take part in the study. There were no refusals (Table 1).
The clinical qualitative content analysis revealed three categories: 1) postnatal self-care; 2) support from the family; and 3) postnatal healthcare services for women with obesity.
Postnatal Self-care
The interviewees revealed a desire to take care of themselves, but the lonely routine with the baby made it difficult for them to think of doing anything for themselves. Caring for the newborn was a priority, which is perfectly natural at this stage in which the helplessness of the baby demands a huge, intense effort: I don't take much care of myself. I have to admit I am not very vain, and now, less than ever, but I would like to […] the weight problem is something I would like to [deal with] because weight brings a lot of problems: it brought hypertension, it brought gestational diabetes, it brings, could bring me other problems that I don't want to deal with, I want to be healthy so she will be, you understand? The breastfeeding routine and other care measures for the newborn can aggravate this situation and constitute an excuse for not thinking about or caring for themselves. We perceive that the issue of self-care, for some of these women, was not part of their routine long before pregnancy. The interviewees associated the word self-care with vanity rather than a health issue.
Ah, it's complicated, you see, I just let myself go: it's hair, nails, and now I don't even go out. I just stay at home with him [the baby]. And there's no way I can go to the beauty parlor; you have to have time to have your nails done, have your hair done, right? And up until now, I haven't managed that, his feeding time is on demand, right? (Participant 4)
From the health point of view, we perceived that despite identifying the postnatal period as a maturing process, they reveal a sense of negation of themselves in favor of the baby's emotional state, a feeling of their non-existence as 'beings' at this moment. They experience this process as something natural, because they feel that in some way they were already abnegating themselves in favor of pleasing others.
[…] We are trapped in a corner over there and you say, no, you are going to live for others and not for yourself. We have to understand ourselves and then the others, because if we are not well, we can't help others. Do you understand? Especially when we are mothers, there are times when we have to help, but if we are well, our child is well. During this 4-month phase [of the baby], they feel everything you have, normally I'm well, but the day when I was ill, the baby became ill, the day I'm feeling poorly, the baby feels a bit poorly. (Participant 6)
The interviewees reveal the importance of this stage after childbirth and how powerful they are to bring about changes, as long as they have the support of family members, friends and/or healthcare professionals. Encouraging the positive changes arising from pregnancy, so that these accomplishments continue after childbirth, is a way of guaranteeing the health of the woman. The subjective experiences, the sensation of maturing, and the intense care for the baby are important factors in ensuring they are conscious of the need for self-care.
So I got it into my head, I myself am going to change, at the right moment, so I had planned the change so when he came, he helps me too and encourages me, because it is good to have an incentive in life, isn't it? (Participant 9)
Ah, how people talk, my God! How you are prettier, you lost weight, see! And I feel better, something like tiredness, that kind of thing, it's much better, [your] disposition (Participant 14)
Support from the Family
The postpartum period marks a change in the women's attachment to healthcare services and family relationships. They reported a feeling of loss of the attention and care that they had received during pregnancy. This is experienced as a sudden and violent disruption, which corroborates the feeling of loneliness and helplessness.
Things that happen after pregnancy that make us feel so out of it that we think that nobody is helping us, but they are, you see? We don't feel cared for but they are taking care [of us]. (Participant 6)
Family members, in addition to helping with the routine at home and with the baby, can also provide support and care, while embracing the insecurities and helplessness felt by the woman during the postpartum period.
My father wanted to come and spend time with me, and I said "please come, dad"; then, my friend said "but your dad will not change diapers," but I know that my father, he is caring […]. And that he will take care of me.
(Participant 4)
Participants with strong family support reported the ability to organize meals in a more routine fashion, while those who did not have this support sought more practical and often unhealthy solutions.
I don't have much time to go to the supermarket, as fruit and vegetables have to be bought at least weekly […]. So, we are eating lots of tinned, fried, or preserved food; it's bad, but unfortunately… (Participant 4)
It is a great challenge for the woman after childbirth and those surrounding her to balance support and care for this woman, without depriving her of her autonomy. The family can be more aware and available to meet the physical and emotional needs of the woman: a caring gesture, the preparation of food, listening to her, holding her, always taking care not to see her as being fragile or less capable of making decisions about her life and that of her child(ren). As the woman finds herself in a period of greater emotional vulnerability, the care provided by the family can be seen as invasive or a threat to her autonomy.
Yes. I know that they are being excessively zealous towards me, I am grateful because few families, few pregnant women or new mothers get the opportunity of having their family close by, to take care of a newborn child; I'm lucky but it bothers me that I can't be in charge of my own life. (Participant 11)
The relationships with partners were approached many times during the interviews. The women spoke of the changes in their relationships and of how they identified that their partners' behavior in relation to childcare was very different from their own, which was reinforced culturally, and often by themselves, with feelings of guilt about delegating the care of their children to their partners.
It's difficult to find a father who helps a lot, who accepts the routine, who bathes and dresses the baby, wow! It is difficult to find, most do not want to know, but I think it is cultural. (Participant 6) There is no more conversation […] we love each other and we get on well, but that's the way it is […] [my partner] cared for me and suddenly stopped, we miss it, it's the same as when something is taken away from you. (Participant 3)
Postnatal Healthcare Service for Women with Obesity
The interviewees reported strong affective bonds with their antenatal team and difficulties in disengaging themselves from its care. This had an impact on the eating habits of these women, who reported that it would have been easier to lose weight and maintain a healthy eating pattern under the constant supervision of the team. During the postpartum period, follow-up at the healthcare service becomes less frequent and focused on contraception and breastfeeding. This was considered a negative experience, as they had to deal with the loss of this bond. They reported the need for a support network, which should include both family members and the healthcare service. Not only consultations but also approaches that enable a discussion about subjectivity, self-care, food and weight need to be developed.
I really needed dietary follow-up, an incentive with someone saying you are doing everything right […] because when you look at yourself, you see that you really have that willpower, you are putting faith in yourself that you will achieve it and improve your self-esteem. (Participant 8) […] The healthcare team needs to provide this help to the mother, as it really is too much for the mother. (Participant 11) We perceived in the analysis of the interviews that women were keen to talk about their lives, showing interest in the possibility of being heard and welcomed. During the interviews, the moment in which the participants were most frequently emotional was when they were asked if they felt cared for. They revealed experiences of great solitude and a very strong desire that their relatives and health team perceive, understand and care for them without taking away their autonomy (►Fig. 1).
Discussion
Our results show that women with postnatal obesity tend to neglect care for themselves as caring for the newborn takes priority. Our interviewees report an experience associated with mourning for themselves, for their life up to that point, and how they lose themselves in a kind of 'temporary depersonalization.' These results are similar to those of other studies with postnatal women, irrespective of their BMI. 10,15 In their experience of motherhood, even women considered psychologically healthy experience a psychological withdrawal, giving up part of their interests as well as themselves to guarantee the baby's care. Mother and child become an autonomous unit that makes it possible for the mother to identify her baby's needs, something that is impossible to be identified by other people or in other circumstances. 16 However, in this discussion we would like to highlight that women with obesity deserve greater attention in the postnatal period because, as shown by the interviewees, before becoming pregnant these women had a behavior pattern of prioritizing the needs of others over their own. This could reveal a difficulty to perceive themselves in a positive manner. A woman with obesity already feels that she is seen in a bad light, both by herself and by others. Our results are compatible with those in the literature, which show that obesity is correlated with low self-esteem and low self-control, social stigma, and shame. 17,18 As a consequence, obesity and excess weight do not affect only the health, but also the individual's sociability. Obesity has a stigma, a form of social discrimination that can cause many negative psychological effects in an individual. 11,19 Another aspect to be emphasized in our study of the experiences of women vis-à-vis their self-care is that they show that this is an opportune moment for interventions, since they feel that the experiences of motherhood bring an important maturing, and that, since pregnancy, they have become more inclined to acquire new habits. In the literature, we find that interventions by women's healthcare teams should be present during the pregnancy and continue throughout the puerperium to ensure that new habits be maintained. 15,[20][21][22] Price et al 22 (2012) state that new mothers were more disposed and interested in talking about behavior and goals for their children, but that months after the birth, especially when returning to work, women begin to focus again on themselves, and there is a window of opportunity to talk about their goals and behavior.
Our study also points to a psycho-educational need on the part of the interviewees, since they limit self-care to esthetic issues, and do not perceive it as being linked to health care. Promoting health is linked to strengthening the subjects' autonomy and self-care, and to the appreciation of subjective experiences as well as of the sociocultural contexts in which the individuals find themselves. 23,24 The advances in health care can guarantee an improvement in women's quality of life, and are one of the greatest challenges of this century. We perceive that this lack of awareness may be associated with psychological and cultural issues, and that health care professionals play an important role in this process of raising awareness of the importance of self-care in health. 25 Bearing in mind the experiences related by women with obesity after childbirth in relation to themselves, their families and the health care team, a discussion about the network of care for these women would be relevant to develop strategies to provide support and care for them. The postpartum period is a critical moment, and it demands a continuum of maternal care in its different human dimensions. 26 The intense care regarding the physical and emotional needs of the baby and his/her primitive psychological states also awakens/evokes states of primitive anxiety and a sense of internal solitude in the mother, as well as the mourning process a woman must face vis-à-vis her pregnancy and life prior to maternity. This concept of solitude was described by Klein 27 (1984) as a feeling of loneliness irrespective of the external context, irrespective of being among beloved people and surrounded by love and attention.
Family relationships should be encouraged and strengthened at this time. It is important that families are aware of what constitutes this moment in a woman's life and her needs. The interviewees reported that these relationships can be conflictual and disrespectful, and cause further emotional overload to the mother. The literature describes how much a good family relationship can help these women have a better quality of life in the postnatal period. Price et al 22 (2012) state that, during this period, women find many barriers to eating healthy, and when there is the constant presence of a family member, they are able to eat better and lead a healthier life.
It is very important that health care teams be aware of the family relationships of these women to identify failures in the family support network as well as when the care offered takes away their autonomy.
In order for women with obesity to become aware of the need for self-care, it is important for the team to begin with assertive comments and offer alternatives as to how these women can care for themselves, by showing that this is both an external and an internal task. Our results correspond to those presented by Chugh et al 28 (2013) in that the disposition to lose weight depends as much on self-motivation as on the incentive of the health professional. The same study 28 showed that health professionals' perception that these women have a low level of motivation to lose weight can make this process even more difficult.
The content of the care offered after childbirth must be developed in such a way as to include more priorities in women's health studies. 29,30 New mothers have multiple unmet needs, and health institutions should be aware of these needs and provide them with support by means of clear and precise information, so they do not feel alone, sheltering them in their search for information and rights. 3 Moreover, women have different ideas, experiences and expectations about losing weight in the postnatal period. Health care professionals can take care of the needs of each woman to promote their autonomy and better results in their health and lifestyle. 31 We have proposed a plan directed at healthcare professionals who care for women with postnatal obesity (►Fig. 1). Our study also sought to provide tools for healthcare professionals during the follow-up of these women. The follow-up should make the women feel a sense of belonging and care; in it, weight and nutrition should be monitored, and the support network, whether from family, friends, or other women going through the postpartum period, must be strengthened, and environments in which these women are encouraged to talk about their feelings, routine and relationship with food should be created. The idea is that the health service provides centrality for the women and their experiences. The women showed throughout the interviews that more important than talking was the feeling that they were being heard.
►Fig. 1 proposes a model of care for the psychological aspects of women with obesity. We emphasize the care already included in the gynecological and obstetric teams' routines of breastfeeding, contraception and clinical care.
Conclusion
The postnatal period is a landmark in the physiology, the social life and the psychological state of women, which is capable of transforming their subjectivity and identity. This period requires interventions by health professionals. Women with obesity already feel discriminated against, both by themselves and by others, a situation that also brings risks of psychopathological disorders and risks to their physical health and weight, which become more evident after childbirth. Women with obesity need attention as well as the presence of the health team and of family members in order to take better care of their health, at a moment when they are already overwhelmed by the exhausting care for the newborn. Effective strategies are needed for women with obesity in the postnatal period, thus guaranteeing their quality of life.
Contributions
DBFS, ERT and FGS conceived and designed the study. DBFS collected the data. All of the authors were involved in data analysis and interpretation. DBFS, LR, DSMP and FGS were involved in writing. All authors approved the final version of the manuscript.
Funding
The present study was partly funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), finance code 001.
Postoperative occlusion of visual axis with fibrous membrane in the presence of anterior capsular phimosis in a patient with pseudoexfoliation syndrome: a case report
Background To report a case of postoperative fibrous membrane formation occluding the visual axis in the presence of anterior capsular phimosis in a patient with pseudoexfoliation syndrome. Case presentation A 79-year-old Asian woman with pseudoexfoliation syndrome underwent uneventful phacoemulsification and implantation of one-piece hydrophilic acrylic square-edged intraocular lens (Cristalens) in the right eye. Two months later, she had blurred vision in the right eye with the best-corrected visual acuity (BCVA) of 20/40. Formation of fibrous membrane occluding the capsulorhexis opening with contraction of anterior capsule was observed, which was confirmed by anterior segment optical coherence tomography. Clear visual axis was achieved by lysis of the membrane using Nd:YAG laser. The BCVA improved to 20/20. Conclusions Occlusion of the visual axis with fibrous membrane can develop in the presence of anterior capsular phimosis in a patient with pseudoexfoliation syndrome.
Previous reports showed that severe anterior capsular phimosis can cause complete occlusion of the pupil [2,3,7,8]. However, through a comprehensive search of the MEDLINE database, occlusion of visual axis with fibrous membrane in the presence of incomplete anterior capsular contraction with capsulorhexis opening of 3.0 mm has rarely been reported [2]. Although anterior segment imaging modalities including anterior segment optical coherence tomography (AS-OCT) is expected to visualize the occlusion of the capsular opening, there has been no report of application of the AS-OCT in the condition.
We recently encountered a case of fibrous membrane formation occluding the visual axis, accompanied by anterior capsular phimosis, after uneventful cataract surgery in a patient with pseudoexfoliation syndrome, which was confirmed with AS-OCT; we herein report the case.
Case presentation
A 79-year-old Asian woman with pseudoexfoliation syndrome was referred for cataract surgery in the right eye. Her past medical history was unremarkable. Her best-corrected visual acuity (BCVA) was 20/50 in the right eye.
She underwent right phacoemulsification under topical anesthesia. As she had a small pupil of approximately 5.5 mm diameter, CCC with a diameter of 5.0 mm was performed without iris manipulation. Phacoemulsification was completed without any intraoperative complication, and a foldable one-piece hydrophilic acrylic square-edged intraocular lens (IOL) (22.0 diopters, 6.0mm optical diameter, 10.75mm overall diameter, model number: CLARE ® ; Cristalens, Paris, France) was implanted through a temporal corneal incision. Postoperatively, anti-inflammatory treatment with topical prednisolone acetate 1.0% 4 times daily was applied for 1 month. The BCVA was 20/40 at 1 week, which improved to 20/25 at 1 month.
Two months after the surgery, she presented with blurred vision in the right eye. The BCVA was 20/40. Slit lamp examination after dilation revealed marked opacity and thickening of the anterior capsule. Anterior capsulorhexis opening reduced to a diameter of approximately 3.0 mm due to capsular contraction, and fibrous membrane occluding the capsulorhexis opening was observed (Fig. 1a). AS-OCT (Visante; Carl Zeiss Meditec, Oberkochen, Germany) confirmed the presence of the membrane (Fig. 1b). Slight anterior chamber (AC) cell reaction was found.
A neodymium: yttrium aluminium garnet (Nd:YAG) laser was used to clear the visual axis. The membrane was lysed with a total of 24 mJ (15 shots × 1.6mJ) Nd:YAG laser. As a clear anterior capsular opening with a diameter of 3.0 mm was attained ( Fig. 2a and b), no further excision of the anterior capsule was done. Topical prednisolone acetate 1.0% 6 times daily for 1 month was prescribed to prevent intraocular inflammation.
One month later, her BCVA improved to 20/20 in the right eye. Slit lamp examination revealed clear visual axis (Fig. 3a). AS-OCT demonstrated no membrane or pit on the anterior surface of the IOL (Fig. 3b).
Discussion
In this case, AS-OCT was used to visualize the anterior capsular phimosis and formation of the membrane occluding the capsular opening.
In the present case, the occluding membrane as well as the anterior capsular phimosis was detected at 2 months postoperatively, which is in agreement with previous reports that the maximal rate of capsular contraction occurs within the first 6 postoperative weeks [7,9]. A clear visual axis was observed one month after the Nd:YAG laser treatment. As capsular stability is reported to be achieved at 3 months postoperatively [10], we expect that the capsulorhexis will remain stable. However, we also believe further follow-up is needed, as there is a possibility of further capsular contraction, particularly because the patient has pseudoexfoliation syndrome.
Anterior capsular phimosis is postulated to consist of two mechanisms: 1) capsular shrinkage, probably due to actin filaments within residual lens epithelial cells (LECs) and 2) proliferation and fibrous metaplasia of these residual LECs which lead to the reduction of the size of the capsulorhexis opening [2,7]. Histopathological examination showed that the proliferative membrane was composed of subcapsular fibrous tissue interspersed with proliferated fibrocytic cells, derived from residual LECs [2,7]. Using scanning electron microscope, Ueno et al. [11] demonstrated the presence of fibroblast-like cells in the area of the anterior capsular occlusion. Kurosawa et al. [12] also revealed that anterior capsular phimosis involved outgrowth of fibrous tissue from the capsule margin and its contraction.
To our knowledge, anterior capsular phimosis after implantation of the Cristalens CLARE IOL has never been reported. Although hydrophilic acrylic IOLs with square-edge design and four haptics are expected to have enhanced uveal biocompatibility and capsular support [3,13,14], a few cases of anterior capsular phimosis after implantation of these IOLs were reported [4,5,15]. Notably, most of the cases were associated with pseudoexfoliation syndrome [4,5]. There have been several case reports of anterior capsular phimosis in patients with pseudoexfoliation syndrome despite the insertion of capsular tension ring [3,5,16]. Pseudoexfoliation syndrome appears to significantly increase the risk of anterior capsular phimosis due to the following reasons: 1) As capsular shrinkage is conceivably associated with an imbalance between centripetal and centrifugal forces that act on the zonules and the capsulorhexis edge [3], zonular weakness can exaggerate the contraction response. 2) Although larger CCC is correlated with less capsule contraction [17], small CCC is often inevitable in pseudoexfoliation syndrome due to poor mydriasis. 3) Complete cleansing of LECs is also important for the prevention of the fibrous proliferation [17]. However, thorough removal of the LECs, particularly those at the lens equator, is often difficult due to small pupil. 4) A compromised blood-aqueous barrier in the condition may result in increased postoperative inflammation, which can precipitate the progression of anterior capsular phimosis [1,5].
In the present case, fibrous membrane occluding the anterior capsulorhexis opening developed in the presence of capsular phimosis with capsulorhexis opening of 3.0 mm. We postulate that the phenomenon was due to the following mechanisms: 1) Formation of the fibrous membrane could be faster than the progression of the capsular contraction, which might cause the occluding membrane formation before marked reduction of the capsular opening size. 2) The design (square-edged one piece with 4 haptics) and material (hydrophilic acrylic) of the IOL might exert high strength of capsular support, which could help maintain the capsular opening despite the fibrous proliferation. Spang et al. [2] reported a similar case of anterior capsular phimosis in which proliferated LECs filled the capsular opening. In their case, they used an IOL with 13.5mm overall length, which might be advantageous for capsular support [2]. Another remarkable thing is that we used substantially less Nd:YAG laser energy compared to laser energy of 90 to 140 mJ used in other reports [3,5], probably because only removal of the fibrous membrane without manipulation of the capsule was needed to clear the visual axis.
Conclusions
We report a case of membrane formation occluding the visual axis in the presence of anterior capsular phimosis after implantation of Cristalens CLARE IOL in a patient with pseudoexfoliation syndrome. Nd:YAG laser can be effective in the treatment of the condition.
NUTRITIONAL STATUS OF CHILDREN UNDER 5 YEARS RECEIVED IN CONSULTATION AT THE BASSILA ZONE HOSPITAL (NORTH-WEST BENIN)
1. Laboratory of Biomembranes and Cellular Signaling, Department of Animal Physiology, Faculty of Science and Technology, University of Abomey-Calavi, BP 526 Cotonou, Republic of Benin. 2. Bassila Health Zone, Donga Department of Health, BP 20 Bassila, Republic of Benin. 3. Senade Pediatrics and Neonatology Clinic, 06 BP 601 Cotonou, Republic of Benin.
Manuscript History: Received: 14 March 2020; Final Accepted: 16 April 2020; Published: May 2020
The nutritional status of children under 5 years of age is an important indicator for monitoring their growth. This study aims to determine the nutritional status of children under 5 years of age seen in consultation and/or hospitalized in the pediatric ward of the Bassila Zone Hospital. This was a prospective, descriptive and analytical study conducted from March 7 to May 21, 2019. The study included children under 5 years whose parents agreed to medical examinations at the hospital. The height and weight of the children were collected, and anthropometric indices were calculated using WHO Anthro® software (version 3.2.2). Results of the blood count, thick drop/parasite density and C-reactive protein tests were obtained from the children's records. Consenting parents were asked a series of questions developed for this purpose. Excel and SPSS software were used for data processing and analysis, with a significance threshold of 5%. In total, 300 children were included in this study. Of these, 26.9% were emaciated, 23% were stunted, 28.3% were underweight and 8.6% were overweight. In addition, 90.67% of children were anemic, with 48.9% severe cases. Etiological research showed that 90.44% of children had microcytic anemia and 69.37% had normocytic anemia. Overall, children were exposed both to energy malnutrition in all its forms and to a worrying nutritional anemia, hence a "double" burden of malnutrition in this study population.
Introduction:-
Nutritional status is the physiological state of an individual defined by the relationship between nutrient intake and requirements and by the body's ability to digest, absorb and use these nutrients (INSAE/ICF, 2013). Knowledge of children's nutritional status is, a priori, an asset in monitoring their growth. It is one of the key determinants of physical, mental and psycho-affective growth for both children and adults. Its evaluation is based on anthropometric and/or biological data. Thus, anthropometric evaluation uses weight, height and age to diagnose macronutrient deficiencies/excesses (energy, protein), while biological evaluation relies on biological analyses to diagnose micronutrient deficiencies/excesses (iron, iodine, vitamins). Malnutrition, especially undernutrition, has been the most important nutritional problem in developing countries. Severe malnutrition, in most cases, is accompanied by anemia, which is an inherent part of the reductive adjustment process associated with weight loss, reduced lean mass and the presence of edema (Alan, 2007). Anemia is one of the most common public health problems in the world, especially in developing countries, where prevalences are highest. In Benin, 32% of children under 5 years are stunted and 72% of children aged 6-59 months have anaemia. In the Atlantic Department, the prevalences of stunting and anaemia are 30.1% and 67.5%, respectively, compared with 27.2% and 75.8% in the Donga (INSAE/ICF, 2019). Recent studies have shown that these prevalences remain very high despite national control strategies. For example, some authors found prevalences of 56.48% (Adébo et al, 2018) and 41.43% (Yessoufou et al, 2015) among children seen in consultation or hospitalized in health facilities in the south (Abomey-Calavi/Sô-Ava Zone Hospital) and in the centre of the country (Zou/Collines Departmental Hospital Centre). What, then, is the prevalence of this pathology in a health facility in the north of the country, such as Bassila?
Study type and population:
This prospective, descriptive and analytical questionnaire-based study was conducted from March 7 to May 21, 2019 in the paediatric department of the Bassila District Hospital (Figure 1). The study population consists of children under five years who were seen in consultation and/or hospitalized during this period, who underwent biological analyses such as the complete blood count, thick drop/parasitic density and C-reactive protein, and whose parents consented. The variables studied related to the age, sex, nutritional status and dietary diversity of the children and the socio-demographic and economic characteristics of the parents.
Collection materials:
After obtaining parental consent, we took anthropometric measurements (weight and height) of the children. These were taken according to WHO standards. We used a SECA baby scale to weigh children under two years and a LITTLE BALANCE personal scale for children over two. Height was measured using an infantometer and a stadiometer, both with a precision of 1 mm. The age of the children was taken from their birth certificate, their health record or the information provided by their parents.
Laboratory analysis: Complete blood count:
The principle of impedance variation was used: a suspension of blood in a conductive diluent causes a decrease in electrical conductivity, the voltage drop is proportional to the size of the cell, and these pulses are counted. This makes it possible to obtain the formed elements of the blood (red blood cells, leukocytes and thrombocytes) and the hemoglobin level, to calculate the hematocrit, and to establish the leukocyte differential.
Thick Drop/Parasitic Density:
Anthropometric indices:
According to the weight-for-age index (P/A), we determined the percentage of underweight children (P/A < -2 SD) and of severely underweight children (P/A < -3 SD). Next, the height-for-age index (T/A) allowed us to determine the percentage of children with stunted growth (T/A < -2 SD) and with severe stunting (T/A < -3 SD). Finally, the weight-for-height index (P/T) allowed us to determine the percentage of emaciated children (P/T < -2 SD), of severely emaciated children (P/T < -3 SD), and of those who are overweight.
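These cut-offs lend themselves to a simple rule-based classification. The following is a minimal sketch, assuming the z-scores (here named waz, haz and whz) have already been computed with a tool such as WHO Anthro; the function names are ours, and the +2 SD cut-off used for overweight is a common WHO convention assumed here rather than a value stated in the text.

```python
# Minimal sketch: classify a child's nutritional status from WHO z-scores.
# waz, haz, whz = weight-for-age, height-for-age, weight-for-height z-scores.

def classify_index(z: float, low_label: str, severe_label: str) -> str:
    """Return the qualitative category for one anthropometric index."""
    if z < -3:
        return severe_label          # e.g., "severely underweight"
    if z < -2:
        return low_label             # e.g., "underweight"
    return "normal"

def nutritional_status(waz: float, haz: float, whz: float) -> dict:
    status = {
        "weight-for-age": classify_index(waz, "underweight", "severely underweight"),
        "height-for-age": classify_index(haz, "stunted", "severely stunted"),
        "weight-for-height": classify_index(whz, "wasted", "severely wasted"),
    }
    if whz > 2:                       # assumed overweight cut-off (+2 SD)
        status["weight-for-height"] = "overweight"
    return status

# Example: a child with WAZ = -2.4, HAZ = -1.1, WHZ = -2.9
print(nutritional_status(-2.4, -1.1, -2.9))
```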
Classification of red cell line parameters:
Anemia was defined by a hemoglobin level strictly below 11 g/dl in children (WHO, 2011). Depending on the degree of severity, we distinguished:
- mild anemia: 10.0 ≤ Hb ≤ 10.9 g/dl
- moderate anemia: 7.0 ≤ Hb ≤ 9.9 g/dl
- severe anemia: Hb < 7.0 g/dl
Furthermore, on the basis of the other erythrocytic constants, and to determine the etiology of the anemia, we distinguished, following the French Society of Hematology (SFH, 2010):
- normocytic anemia for a normal MCV (VGM) between 82 and 98 fl
- microcytic anemia for an MCV below 80 fl
- macrocytic anemia for an MCV above 98 fl
Hypochromia was defined by an MCHC (CCMH) below 32 g/dl and normochromia by an MCHC between 32 and 36 g/dl.
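The same rules can be expressed as a small classifier. The sketch below is illustrative only; the function names are ours, and the 80-82 fl gap left by the quoted MCV cut-offs is resolved here by treating it as normocytic, which is an assumption.

```python
# Minimal sketch of the anemia classification rules quoted above.
# Hemoglobin (Hb) in g/dl, MCV (VGM) in fl, MCHC (CCMH) in g/dl.

def anemia_severity(hb: float) -> str:
    if hb >= 11.0:
        return "no anemia"        # WHO cut-off for children (Hb < 11 g/dl)
    if hb >= 10.0:
        return "mild anemia"      # 10.0-10.9 g/dl
    if hb >= 7.0:
        return "moderate anemia"  # 7.0-9.9 g/dl
    return "severe anemia"        # < 7.0 g/dl

def anemia_type(mcv: float, mchc: float) -> str:
    if mcv < 80:
        size = "microcytic"
    elif mcv > 98:
        size = "macrocytic"
    else:
        size = "normocytic"       # includes the 80-82 fl gap (assumption)
    chromia = "hypochromic" if mchc < 32 else "normochromic"
    return f"{chromia} {size}"

# Example: Hb = 6.8 g/dl, MCV = 68 fl, MCHC = 29 g/dl
print(anemia_severity(6.8), "-", anemia_type(68, 29))
```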
Statistical analysis:
The various data were collected and analyzed with Microsoft Office Word, Excel, SPSS and WHO Anthro® software (version 3.2.2). Quantitative data were processed in Excel to obtain descriptive statistics. The chi-square (χ²) test in SPSS was used to test the hypothesis of an association between assumed risk factors and the pathological condition. The significance threshold was set at 5% for all analyses.
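For readers without SPSS, the same association test can be reproduced with open-source tools. The sketch below uses scipy's chi-square test of independence on a 2×2 table; the counts shown are invented for demonstration and are not the study's data.

```python
# Minimal sketch: chi-square test of association (illustrative counts only).
from scipy.stats import chi2_contingency

# rows: anemic / not anemic; columns: malaria-positive / malaria-negative
table = [[120, 60],
         [15, 45]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:          # 5% significance threshold, as in the study
    print("association between anemia and malaria infection is significant")
```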
Ethical considerations:
The investigation was conducted after the approval of the Bassila District Hospital under memo 111/2019/MS/DDS-D/ZSB/HZ-Bla/SAAE on 6 March 2019. The research protocol and the survey questionnaire were validated by the competent authorities of the structures concerned. The data were analysed with discretion.
Results:-
A total of 300 children under the age of 5, seen in consultation and/or hospitalized, were recruited for this study. The average age was 21.74 ± 15 months. Males predominated, with a sex ratio of 1.4, and more than half of the children were less than 24 months old. The most represented age group was 12 to 23 months, with 26.33%. Socio-demographic characteristics of the study population are recorded in Table 1. The results show that 26.9% of the children surveyed are emaciated, 23% are stunted, 28.3% are underweight and 8.6% are overweight. In addition, 90.67% of children are anaemic, 48.9% of whom have severe anemia. Etiological research has shown that 90.44% of children suffer from microcytic anemia and 30.63% from hypochromic anemia. In addition, 38% had normochromic normocytic anemia, 39.58% hypochromic microcytic/normocytic anemia and 2.94% macrocytic anemia. Statistical analyses revealed a statistically significant relationship between the occurrence of anaemia and malaria infection (p < 0.001) on the one hand and other infections (p < 0.048) on the other (Table 2).
Discussion:-
In Benin, few studies have looked at the prevalence of malnutrition, either at the departmental or communal level. The latest Health Demographic Survey 2017-2018 in Benin shows that the prevalence of stunting is 27.2% and that of anaemia is 75.8% among children in the Donga department against a rate of 32% and 72% respectively at the national level. (INSAE/ICF, 2019).
Our study found that 23% of children were stunted. This result is comparable to that obtained at the Abomey-Calavi/Sô-Ava Zone Hospital (24%) in the Atlantic Department in southern Benin (Adébo et al, 2018). Moreover, this prevalence is below that obtained at the departmental level (Donga) following the fifth Demographic and Health Survey in Benin 2017-2018 (INSAE/ICF, 2019). This difference can be explained by the characteristics of the population surveyed (children seen in consultation for our study and children surveyed at the household level for the national survey). Moreover, our study shows that 91% of children are anemic, 49% of whom have severe anemia. This result is much higher than that obtained by Adébo et al (2018) in the Municipality of Abomey-Calavi, in southern Benin. Etiological research has shown that hypochromic microcytic anemia is one of the most frequently observed forms in our sample, at 40%. According to the literature, this form of anemia is related to iron deficiency and inflammatory anemia. Iron deficiency anemia may be due to insufficient intake of bioavailable iron-rich foods, poor intestinal absorption of iron, or loss of iron from the body caused by parasitosis (malaria, intestinal worms). Iron is found in many types of food, both animal and plant, but the most easily absorbed form is heme iron, found in animal foods. According to the dietary diversity data of the children surveyed, most of them do not regularly consume foods rich in protein and bioavailable iron (meat, fish, egg, etc.). As a result, their diet can cause anaemia, as several authors have pointed out (Wolmaraus et al, 2003; Baig-Ansari et al, 2008). In addition, other micronutrients such as vitamin C in fruits and vegetables can increase the rate of iron absorption in the body. Besides, most children consume vegetables, but since these are eaten after cooking, much of their vitamin C, a thermolabile vitamin, is already lost.
Also, consumption of fruits and vegetables after meals increases the risk of degradation of vitamin C in the stomach by gastric acidity and decreases the absorption of iron, especially non-heme iron, in the duodenum. On the one hand, it is easy to see that the diet of the children is highly dependent on cereals, which can also contribute to iron deficiency owing to the inhibitory effect of high levels of phytic acid on the absorption of trace elements, as Lönnerdal (2000) and Qianyi et al (2011) have pointed out in their studies. On the other hand, the presence of iron absorption inhibitors such as phytates or phenols, which are abundant in plant-based foods, contributes to the development of anemia (Wolmaraus et al, 2003). Also, 38% of the children had normochromic normocytic anemia, which reflects either the inability of the bone marrow to meet erythrocyte production needs (non-regenerative normocytic anemia) and/or extracorpuscular anemia due to the destruction of red cells by infectious agents such as malaria, or corpuscular hemolytic anemia (sickle cell disease), as pointed out by several authors (Badham et al., 2007; Barro et al., 2013; Sellam et al., 2014) in their respective studies. The few cases of macrocytic anemia observed in this population are believed to be due to a deficiency in vitamin B12 or folic acid (Scott, 2007). Statistical analyses revealed a statistically significant relationship between the occurrence of anaemia and malaria infection (p < 0.001) on the one hand and other infections (p < 0.048) on the other. Almost half of the anaemic children were infected with malaria (positive GE/DP) or other bacterial or viral infections (positive CRP). Added to this is the unhealthy environment (garbage mismanagement, insufficient latrines, consumption of unsafe water, etc.) in which the children's households live.
Conclusion:-
Assessing the nutritional status of children seen at the Bassila District Hospital reveals that they are exposed to energy undernutrition in all its forms and that the majority suffered from anaemia, almost half of the cases being severe. In short, children are exposed to both chronic malnutrition and a worrying nutritional anaemia, which places a "double" burden of malnutrition on this study population.
How Visual Body Perception Influences Somatosensory Plasticity
The study of somatosensory plasticity offers unique insights into the neuronal mechanisms that underlie human adaptive and maladaptive plasticity. So far, little attention has been paid on the specific influence of visual body perception on somatosensory plasticity and learning in humans. Here, we review evidence on how visual body perception induces changes in the functional architecture of the somatosensory system and discuss the specific influence the social environment has on tactile plasticity and learning. We focus on studies that have been published in the areas of human cognitive and clinical neuroscience and refer to animal studies when appropriate. We discuss the therapeutic potential of socially mediated modulations of somatosensory plasticity and introduce specific paradigms to induce plastic changes under controlled conditions. This review offers a contribution to understanding the complex interactions between social perception and somatosensory learning by focusing on a novel research field: socially mediated sensory plasticity.
Introduction
The tactile modality is the first to develop in a human embryo and has important implications for human sensation, action, and cognition. This review addresses the specific question how social cues influence the functional architecture and plasticity of the human somatosensory system. Social neuroscience is a rapidly developing field, but the specific influence of social cues on somatosensory perception is still an underinvestigated topic. Here, we first introduce basic mechanisms of tactile plasticity and learning, such as Hebbian plasticity, GABAergic learning mechanisms, and deprivation-related plasticity (Section 2). Then, we discuss the influence of social cues on human somatosensory cortex functioning and synthesize evidence on the neuronal pathways and experimental conditions that induce nonafferent (visually driven) activity in the human somatosensory system (Section 3). Before combining both research streams to answer the final question ("How do socially-induced 'resonance' responses in the somatosensory system influence tactile plasticity?"), we provide an overview over the role of touch in human cognition to broaden the scope in which the results can be discussed (Section 4). Finally, we use the introduced frameworks (tactile plasticity, Section 2; socially induced "resonance" responses in the somatosensory system, Section 3; and the role of touch in human cognition, Section 4) to discuss the influence of social cues on tactile plasticity and learning at multiple levels (both mechanistic and cognitive) and its consequences for human behavior in healthy participants, and in patients (Section 5). Whereas the first three sections therefore provide relevant background information, the final section combines the introduced research streams to focus on socially mediated tactile plasticity. We focus on the literature offered by human cognitive and clinical neuroscience, while sometimes referring to animal studies when specific plasticity mechanisms are introduced. This review offers a contribution towards the development of a better understanding of the complex interactions between social perception and somatosensory learning by focusing on a novel and rapidly developing research field: socially mediated sensory plasticity.
Plasticity Mechanisms in the Somatosensory System
Perceptual learning is the specific modification of perception following sensory experience. This, in turn, involves structural and functional changes in primary sensory cortices [1]. In tactile learning, most of our knowledge about brain plasticity is derived from primary somatosensory cortex (S-I). Larger representations of certain body parts, such as the fingers or lips, are partly due to higher receptor densities reflecting higher demands for cortical processing. These cortical body maps in animals and humans, however, are dynamic constructs that are constantly remodeled by changes in the sensory input statistics throughout life. Structural myelin borders between major body part representations such as the hand and the face in human S-I [2, 3] may to a certain extent limit such plastic changes [4]. Despite the traditional view that perceptual learning requires attention or reinforcement, there is also evidence that the timing of input statistics alone can mediate cortical plasticity [5][6][7][8][9][10][11]. In fact, since Hebb [12] and even since James [13], the aspect of simultaneity has become a metaphor in neuronal plasticity. An important feature of the Hebbian metaphor is the coincident pre- and postsynaptic firing of synapses that evokes long-lasting changes in synaptic efficacy. First evidence that temporally correlated activity is required for input-dependent modification of synapses comes from the hippocampus in rats [14] and Aplysia ganglia [15]. Although pairing of synaptic inputs and outputs has been hypothesized to play a key role in mediating plastic changes [16][17][18], more recent evidence suggests that Hebbian plasticity also occurs at dendritic spines without simultaneous pre- and postsynaptic activation [19].
In vitro and computational studies suggested that beyond a simultaneous activation of pre- and postsynaptic cells, there is a "critical time window" of spiking for synaptic modification that is highly specific to certain brain regions [20][21][22][23][24][25][26]. These activity-induced changes can occur in vitro at a precision of down to a few milliseconds, thus influencing the strength and sign of synaptic plasticity. A critical window for the induction of long-term potentiation (LTP) and long-term depression (LTD) has been characterized in rat hippocampal neurons. This window is about 40 ms long and is temporally asymmetric. Bi and Poo found that repetitive presynaptic activation preceding postsynaptic spiking within a time window of 20 ms (60 pulses at 1 Hz) resulted in LTP, whereas postsynaptic spiking occurring up to 20 ms before repetitive presynaptic activation led to LTD. Apart from a critical window for modification of synaptic excitability, synaptic strength and specific postsynaptic cell types (NMDA and GABAergic receptors) are crucial factors for the induction of LTP and LTD [21]. In line with these findings, an almost identical time dependence was described in developing Xenopus retinotectal synapses [27]. In contrast, neurons in cortical layer 4 of somatosensory cortex seem to have only a symmetric time window for LTD within ±10 ms, whereas no long-term potentiation in synaptic response was observed [28].
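The asymmetric timing rule described above can be summarized compactly. The following is a minimal sketch of a generic exponential spike-timing-dependent plasticity (STDP) window; the amplitudes and the 20 ms decay constant are illustrative assumptions rather than values taken from the cited experiments.

```python
# Minimal sketch of an asymmetric STDP window: pre-before-post spiking within
# ~20 ms strengthens a synapse (LTP), post-before-pre weakens it (LTD).
import math

A_PLUS, A_MINUS = 0.01, 0.012   # assumed maximal weight changes
TAU = 20.0                      # assumed decay constant in ms

def stdp_weight_change(delta_t_ms: float) -> float:
    """delta_t = t_post - t_pre. Positive -> LTP, negative -> LTD."""
    if delta_t_ms > 0:
        return A_PLUS * math.exp(-delta_t_ms / TAU)
    return -A_MINUS * math.exp(delta_t_ms / TAU)

for dt in (-40, -20, -5, 5, 20, 40):
    print(f"t_post - t_pre = {dt:+} ms -> dw = {stdp_weight_change(dt):+.4f}")
```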
Also, the deprivation of sensory input leads to changes in the functional architecture of S-I representations. In limb amputees, where afferent input to the limb is absent, the cortical representation of the face shifts towards the territory of the hand [29,30]. Recent findings indicate, however, that these shifts are smaller than originally suggested and that the representation of the absent hand is still preserved [30][31][32]. Altered hand use after amputation also induces somatomotor plasticity: Makin et al. [33] showed that deprived sensorimotor cortex is employed by whichever limb individuals are overusing.
Perceptual learning occurs continuously throughout life and involves either transient or persistent changes in central nervous perceptual systems, which in turn improves the ability to respond to the environment [34][35][36]. To obtain information about the role of input statistics alone in mediating plasticity in perceptual systems, several protocols, in which neuronal activity was generated by associative pairing, have been developed [37]. In adult rats, for instance, it has been shown that "whisker pairing," which involves trimming all whiskers except two neighboring vibrissae, resulted in changes in sensory neural activity [37].
Based on the same idea of paired sensory inputs, several studies in animals and humans demonstrate that a variation of input statistics using passive stimulation protocols results in cortical plasticity [5][6][7][8][9][10][11]. Godde and coworkers developed a stimulation protocol, called "tactile coactivation," which was applied to receptive fields on the hindpaw of adult rats. The basic idea behind this stimulation protocol was to coactivate a large number of receptive fields in a Hebbian manner in order to strengthen their mutual interconnectedness. Coactivation consists of tactile stimuli presented at interstimulus intervals ranging from 100 to 3000 ms in pseudorandomized order, with a mean stimulation frequency of 1 Hz. Coactivation of the hindpaw for 3 hours revealed a selective enlargement of corresponding cortical maps and receptive fields [38]. To investigate the perceptual relevance of the coactivation effect, tactile spatial discrimination performance was tested in humans. Coactivation of receptive fields on the fingertip resulted in an improved tactile spatial discrimination ability that lasted for 24 hours. Perceptual changes were highly selective, because no transfer of improved performance to nonstimulated fingers was found [5].
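To make the stimulation parameters concrete, the sketch below generates a pseudorandomized onset schedule with the quoted properties (interstimulus intervals between 100 and 3000 ms, a mean rate near 1 Hz, and a 3-hour duration). The particular interval distribution is an assumption; the original protocol's exact sampling scheme is not specified here.

```python
# Minimal sketch of a coactivation-like stimulus schedule (illustrative only).
import random

def coactivation_schedule(duration_s: float = 3 * 3600, seed: int = 0):
    """Return stimulus onset times (in seconds) for one stimulation session."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while t < duration_s:
        # ISIs drawn from 0.1-3.0 s; the skewed draw keeps the mean ISI ~1 s
        isi = rng.triangular(0.1, 3.0, 0.1)
        t += isi
        times.append(t)
    return times

onsets = coactivation_schedule()
mean_rate = len(onsets) / onsets[-1]
print(f"{len(onsets)} stimuli, mean rate ~ {mean_rate:.2f} Hz")
```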
Pleger et al. studied the relation between those coactivation-induced perceptual changes and parallel plasticity in human S-I. Using somatosensory-evoked potential (SSEP) mapping [6] and functional magnetic resonance imaging (fMRI) [7], they found that coactivation-induced changes in tactile acuity were reflected in the degree of cortical reorganization. The cortical representation of the coactivated finger in S-I post-versus precoactivation was considerably larger on the coactivated side than on the control side [6,7]. Using fMRI, Pleger et al. extended the focus on cortical plasticity in the secondary somatosensory cortex (S-II). Contralateral to the coactivated finger, S-II presented with enhanced BOLD signal change comparable to the effects observed in S-I. In line with previous findings [5], tactile discrimination thresholds recovered to baseline 24 hours after coactivation. Furthermore, the relation between cortical plasticity in S-I and perceptual changes was linearly correlated, indicating a close link between the magnitude of plastic changes and coactivation-induced spatial discrimination improvements [6,7]. In S-II, no such brain-behavior relationship was observed, which might be due to the less fine-grained representational organization of S-II as compared to S-I [39]. Coactivation-induced cortical plasticity together with perceptual improvements was found not only in the young brain but also in older adults suggesting that coactivation-induced effects occur continuously throughout life [40].
To shed light on the underlying cellular mechanisms mediating this specific type of perceptual learning and associated cortical plasticity, Dinse et al. manipulated coactivation-induced perceptual learning with different drugs that specifically block or stimulate central nervous receptors assumed to play a key role in mediating brain plasticity [8,9]. Under memantine, a NMDA receptor blocker [41], they found that both perceptual improvements and associated cortical plasticity were blocked [8]. Under a single dose of amphetamine, which is known to modify long-term changes in synaptic function [42], perceptual improvements and cortical plasticity were boosted [8]. These results emphasize the prominent role of the NMDA receptor in mediating coactivation-induced perceptual improvements and cortical plasticity. Monoaminergic substances such as amphetamine instead seem to facilitate this specific type of perceptual learning. In line with these experimental findings, perceptual learning after the application of coactivation was shown to be dependent on GABAergic mechanisms. Tactile discrimination improvement was completely abolished by lorazepam, indicating that this GABA A receptor agonist acts to suppress the coactivation-induced effect [9]. Positive correlations between levels of GABA in primary brain regions and sensory discriminative abilities stress the importance of GABA in increasing the perceived contrast of sensory percepts [43].
The Influence of Social Cues on the Somatosensory System
A dominant theory holds that similar motor areas in the brain are activated when a specific action is either observed or executed [44][45][46]. This reactivation during observation is often referred to as "neuronal resonance" response [47] and has been investigated quite extensively in recent years [48][49][50][51]. Neuronal resonance responses in the motor system are supposed to allow an understanding of others' goals, intentions, and motor plans [44,46,[52][53][54][55] and can lead to interference effects between one's own actions and observed actions [55]. The concept of neuronal resonance was subsequently transferred to other domains, such as the domains of pain and emotion [52,[56][57][58][59][60], but also to the domain of touch [60][61][62][63][64][65]. Neuronal resonance responses in the pain matrix or sensory cortices are assumed to trigger shared affective or sensory states, respectively, between observed person and observer. S-I holds dense connections to S-II, the parvocellular area (PV), the primary motor cortex (M-I), the premotor cortex (PM), and the frontal cortices [66,67]. Particularly, the posterior end of S-I is strongly connected to the superior parietal cortex (SPC) and, even more densely, to the anterior bank of the inferior parietal sulcus (aIPS) [68][69][70]. The aIPS connections themselves are also widespread and include the motor and premotor cortices, the supplementary motor cortex, S-II, PV, other areas of the posterior parietal cortex (PPC), the cingulate cortex, and the extrastriate visual cortex [67]. Although most of these connections are stronger in the outward direction than in the inward direction, anatomical evidence shows that many connections between S-I and other brain areas are bidirectional and allow influences not only of S-I activity on other parts of the brain but also in the reverse direction [71][72][73][74][75].
S-I and S-II are typically activated when people observe another person receiving tactile stimulation [61,[63][64][65]76]. The S-I activation is topographic, and the activation of single-finger receptive areas in S-I can be triggered purely by observing touches to different fingers [64,77]. The degree of S-I activity during touch observation seems to be particularly strong in vision-touch synaesthetes who actually feel touch on their own body when they merely observe touch to another person's body [62]. Somatosensory cortices also respond to observing actions [51,65,78] and to observing haptic explorations [79] not only in humans but also in monkeys [80].
S-I is composed of altogether four subunits, three of which are mainly responsible for tactile perception (area 3b, area 1, and area 2). Whereas activation of area 1 and area 2 during touch observation is established ( [63,64,76], but see [81]), there has been a long debate about the social response properties of area 3b, which is the homologue of S-I in other mammals [82]. Whereas some studies reported the activation of area 3b during the observation of touch [76], most studies found this area to be silent during touch observation [63,64]. A recent study shed light on this issue. Kuehn et al. invited 16 healthy participants to a series of fMRI measurements using a 7-Tesla MRI system. Participants viewed individual touches to four fingers (index finger, middle finger, ring finger, and small finger) or received physical touches to the same four fingers in a separate scanning session [77]. Weak but fine-grained finger maps in contralateral area 3b were activated both when participants physically perceived touches at their own fingers and when they merely observed touches at another person's hand. This effect was robust across viewing perspectives but did not occur when the observed hand was not touched. The tactile-driven finger maps and the visually driven finger maps in fact overlapped in area 3b in most participants. For the first time, this study provides empirical evidence that area 3b has mirror-like response properties and that plasticity mechanisms mediated by this area should in principle be influenced by vision of touch.
Also, a number of behavioral studies showed an influence of viewing the body on somatosensory processing. Taylor-Clarke et al. showed that perceived distances between objects touching the skin are altered when participants looked at a distorted version of their body [83,84]. Because this perceptual shift was induced by viewing the body, not by viewing the object touching the body, the effect was assumed to be driven by visual body perception. The ability to spatially discriminate two small needles applied to the skin surface ( [85] but see [86]) and the ability to judge the spatial orientation of gratings touching the skin [84,87,88] also increased specifically when looking at one's own body compared to looking at an object. Finally, the ability to detect and discriminate the amplitude of electrical stimuli when presented to the skin clearly above threshold improved when viewing the body [89,90].
An effect of visual body perception on tactile abilities, however, is not restricted to seeing one's own live body. Such effects also seem to occur when participants look at a video image of a body [88,91], at another person's body [63,77,[92][93][94], or at a rubber hand [95,96], although the effect is often stronger the more the viewed body part can be assigned to the observer's own body ( [90,[95][96][97] but see [63]).
Evidence for a causal role of S-I in mediating tactile improvements when viewing the body was provided by a transcranial magnetic stimulation (TMS) study [98]. Here, repetitive TMS pulses were delivered to S-I or to S-II shortly after the body was visible but before the tactile stimulus arrived. TMS pulses applied to S-I, but not to S-II, diminished the effect of body vision on tactile abilities.
The Role of Touch in Human Cognition
S-I is known to be involved in the detection [99], perception [100], discrimination [101][102][103], and categorization [104] of touch. However, touch plays manifold roles in human cognition that go beyond the mere perception of object qualities [63,[105][106][107]. For example, tactile stimulation triggers emotions. Pleasant touch applied in a social context is assumed to build the basis for affiliative behavior, to contribute to the formation and maintenance of social bonds, and to build a means for communicating emotions [108,109]. Tactile C fiber afferents particularly respond to pleasant, caress-like touch applied to hairy parts of the skin [109][110][111]. They terminate at the posterior insula and are assumed to elicit positive, rewarding emotions [112,113]. Patients lacking C fiber afferents therefore perceive caress-like stroking as less pleasant than normal controls [111]. S-I may also play a role in processing affective aspects of touch [114], and it may aid in conveying socially elicited emotions to the perceiver [112].
Touch also influences human spatial perception. The incoming information in S-I is spatially ordered and represents the contralateral side of the human body in a mediolateral sequence. This body map in S-I offers a body-centered reference frame for sensory perception [115][116][117][118][119][120]. The body-centered reference frame is seen in some contrast to an external (spatial) reference frame mediated by the PPC (see [118] for a review) or the temporoparietal junction (TPJ) [121]. The body-centered reference frame may convey more self-centered information to the perceiver because information about the body as stored in S-I is assumed to be little influenced by spatial variables such as body posture [116-118, 122, 123], whereas information about the body that is stored in the PPC changes more dynamically with spatial variables (for reviews see [118,124]).
Touch may also provide structural information about the body and its parts [125,126]. In one experiment, participants were better in a tactile task when different tactile stimuli touched the same body part, compared to when they touched different (but adjacent) body parts [127]. Tactile processing in S-I may therefore take anatomical borders between body parts into account (see also [2]). Beauchamp et al. used multivariate pattern analyses (MVPA) to ask which aspects of body part-specific tactile processing are stored in S-I and which are stored in S-II [128]. They showed that touch applied to digits of one hand can be decoded on the basis of activity pattern in S-I, whereas gross anatomical distinctions are better decoded in S-II. Also, deafferented patients, who are deprived of somatosensory and proprioceptive input, have particular difficulties to distinctly control body parts that are nearby [129]. Finally, anaesthetizing a body part leads to an enlargement of the cortical area in S-I representing this body part [130], which presumably leads to the illusory feeling that this part of the body is larger than it actually is [131].
Touch also influences action and motor control. For example, when deprived of vision, humans have problems maintaining a stable body position. When allowed to touch an object, this supports balance, helps to control body sway [132][133][134][135], and prevents recovery falls [136]. Other examples of how tactile input influences action are haptic exploration [137,138] or precision grips [126,139].
The Influence of Visual Body Perception on Somatosensory Plasticity
Above, we have introduced basic mechanisms of somatosensory plasticity and learning (Section 2), discussed possible input pathways and experimental conditions that trigger nonafferent (visually driven) activations of human somatosensory cortices (Section 3), and provided an overview over the role of touch in human cognition (Section 4). Next, we will combine these research streams to target the final question, that is, "How do socially-induced 'resonance' responses in the somatosensory system influence tactile plasticity?" Hebbian tactile plasticity in S-I is mediated by NMDA receptors (see Section 2) that mostly reside in superficial cortical layers of the coactivated receptive field [140,141]. So far, it is not clear whether visual signals that reach human S-I during touch observation are integrated into deeper or more superficial cortical layers in S-I. This question is relevant because an influence of vision of touch on S-I-mediated Hebbian learning would only be expected if visual signals were integrated into superficial cortical layers and activated similar neurons. To target this question, Kuehn et al. [11] used the established coactivation protocol as introduced above (see Section 2) to induce S-I-mediated Hebbian plasticity in three groups of healthy participants by applying weak tactile stimulation to the tip of the index finger for the duration of three hours. Whereas one group only received tactile stimulation, two other groups were additionally presented with temporally congruent visual signals during the learning phase. One group observed object-to-hand touch; the other group observed object-to-object touch. Whereas all three groups, but not the control group, showed the expected tactile learning effect as measured by decreased tactile spatial discrimination thresholds after the stimulation compared to before the stimulation, there were no significant learning differences between the tactile and the two visual groups. The additional visual inputs therefore did not influence tactile plasticity to a measurable (i.e., significant) extent. Whereas different reasons can explain this finding, for example, the specific training protocol used, or different cell types that were activated by vision of touch compared to touch [142], one possibility is that visual signals integrate into deeper cortical layers in S-I. Because Hebbian learning takes place primarily in superficial cortical layers as outlined above, this would explain weak or absent effects of touch observation on Hebbian-mediated plasticity in S-I.
GABAergic inhibitory interactions are an important driving force of S-I-mediated tactile plasticity (see Section 2). Inhibitory interactions in S-I are classically characterized by measuring the relative shrinkage of index- and middle-finger receptive areas in S-I when both are activated simultaneously, compared to when they are activated alone [143,144]. Kuehn et al. [64] replicated this effect using 7-Tesla fMRI and additionally showed that such inhibitory interactions between index-finger and middle-finger receptive areas in S-I also occur when touch to the fingers is only observed but not physically perceived. Also, a prior study indicated an influence of vision of a body part on inhibitory interactions in somatosensory cortex during physical touch perception [87], and there is evidence that vision particularly triggers the activation of interneurons in S-I [142]. Positive correlations between levels of GABA in primary brain regions and sensory discriminative abilities stress the importance of GABA in increasing the perceived contrast of sensory percepts [43]. Weakened cortical inhibition is also a main contributor to age-related changes in somatosensation [40]. Suppressive interactions in S-I triggered by touch observation may therefore sharpen S-I receptive fields even without any afferent tactile input [64].
A single-neuron recording study in monkeys showed that there are not only (mirror) neurons that respond positively (i.e., with an increase in firing rates) to action observation but also neurons that respond negatively (i.e., with a decrease in firing rates) [145]. Although this study recorded mirror neurons in the vPM during action observation, and not neurons in the somatosensory system during action or touch observation, the finding indicates that neuronal resonance responses can, in principle, also be inhibitory. Indeed, BOLD signals recorded in S-I during touch observation were mostly negative for the observation of noncongruent finger touches [77]. In line with this, viewing the body typically increases tactile detection thresholds [89,90].
To study the influence of environmental conditions on somatosensory plasticity, rats in one experimental series were either reared in groups of 12 in spacious cages that offered multiple possibilities for object manipulation and social interaction, or reared alone in small cages that offered fewer such possibilities. Rats reared in groups and in spacious cages showed an expansion of the forepaw maps in S-I compared to those reared alone in an impoverished environment [146]. This effect occurred both in young and in older rats [147]. However, it cannot be determined from these studies whether the effects were driven by increased rates of object manipulation (i.e., sensorimotor experience) and/or by the presence of social interaction partners (i.e., social touch). Dissociating these two influences on somatosensory plasticity would be an important goal for future research.
Rubber hands are an often-used tool to study the influence of visuotactile stimulation on bodily awareness. Press et al. [148] used a similar paradigm to study the influence of vision on somatosensory plasticity. They applied touches to a rubber hand or to a rubber object while participants perceived either synchronous or asynchronous touches at their own hand. After the bimodal (synchronous or asynchronous) training, ERPs over somatosensory cortex were measured in response to unimodal tactile stimulation of the hand. The temporal contingency of the visuotactile stimulation delivered during the training phase influenced the ERPs in response to pure tactile stimulation: participants who trained with synchronous visuotactile stimulation showed an enhanced somatosensory N140 component compared to those who trained with asynchronous stimulation. The N140 component is assumed to be elicited in S-II, which contains bilateral receptive fields; this may explain why the effect was not side specific but occurred for both hands. The enhanced N140 after learning was found both after participants observed touch to a hand and after they observed touch to an object. Classical mirror mechanisms were therefore likely not at play; instead, the effect may reflect bottom-up processes mediated by multisensory integration.
As introduced above (Section 4), touch plays a significant role in emotion perception. Disrupting S-I activity impairs the ability to recognize emotional facial expressions in peers [149][150][151], and S-I plays a role in recognizing emotional voices [152]. Somatosensory plasticity may therefore also influence emotion perception, such as emotions elicited by social stimuli. To study this, Friedrich et al. [153] conducted a training study in children with autism spectrum disorder (ASD). For 6 to 10 weeks, children were trained to either increase mu power (group 1), as measured with EEG over somatosensory cortex, or decrease mu power (group 2) while performing a social interaction video game. Suppression of mu power is assumed to reflect neuronal resonance responses in the somatosensory cortex. When comparing pretraining with posttraining mu suppression during an independent task that was not used during training, in which children observed emotional facial expressions, only group 2 showed more mu suppression after the training than before, and also showed more mu suppression than group 1 after the training. It is worth noting that other outcome measures did not differ between groups. Training the responsivity of the somatosensory cortex during social perception may therefore enhance empathic responses towards emotional conspecifics, even in situations that were not part of the training set. Further work will have to confirm this finding.
As outlined above (see Section 4), touch contributes to a body-centered reference frame. S-I activity during touch observation may therefore support the ability to "put yourself into the shoes of others" [78]. The positive correlation between S-I activity during touch observation and perspective-taking abilities as assessed by questionnaires ([154]; see similar results in [78]) may be interpreted in this direction. Physical touch perception, on the other hand, could prevent such a shift in reference frames. This is indicated by a study by Palluel et al. [155], who used a virtual reality setup that allowed participants to view their own back from the front. Participants saw their own back being stroked by brushes, either synchronously or asynchronously with the stroking they felt on their real back. This situation typically leads participants to feel that the virtual back is their own back, which induces a strong visuotactile interference effect (see also [156]). When participants were additionally stimulated by vibration on their leg, however, they no longer felt the illusion, and they also did not show the visuotactile interference effect [155]. One may argue that the perceived leg vibration activated their own body-centered reference frame, which prevented them from adopting the other person's reference frame. To the best of our knowledge, no study has so far specifically examined the effect of tactile training on social perspective taking. The studies outlined above would suggest a reverse relationship.
Touch observation has also been used to study clinical populations, such as limb amputees and extinction patients. Hand amputees are an often-used model system to study somatosensory plasticity in humans. As outlined above (see Section 2), in spite of the absent hand, limb amputees show an astonishingly intact and only slightly shifted representation of the missing hand in the sensorimotor cortex. It has been argued that both the degree of distortion [29] and the degree of preservation [31] of somatotopic maps in limb amputees contribute to the perception of phantom limb pain. Inducing activation in, and/or modifying the representation of, the S-I missing-hand territory in amputees therefore seems a goal worth pursuing. Again, the rubber hand illusion may be a suitable tool. Ehrsson et al. [157] showed that observing a rubber hand that is touched synchronously with the stump evokes in upper limb amputees the illusion that the observed hand is their own hand, even though their hand is in fact missing. This effect was present using different psychophysical markers and was also seen in self-report questionnaires. Also, Goller et al. [158] showed that when limb amputees observe another person being touched at different body sites, some of them start feeling touch on their own phantom limb (see also [159]). This occurred not only in patients who frequently experienced phantom limb sensations but also in patients who reported experiencing phantom limb sensations only occasionally, or not at all. Like the mirror-box illusion, in which the moving intact hand creates the illusion of a moving missing hand, the rubber hand illusion may thus serve as a therapeutic tool for influencing somatosensory plasticity in limb amputees. However, congenital limb amputees do not show S-I activity when observing another person in pain [160], which indicates possible functional differences between S-I responsivity to touch and to pain in limb amputees.
Extinction impairs the ability to perceive multiple stimuli of the same type simultaneously and usually occurs after damage to the hemisphere contralateral to the affected side. One study investigated whether the visual presentation of a rubber hand can also cause tactile extinction in patients with right brain damage [161]. In patients with left tactile extinction, a visual stimulus was presented near a right rubber hand or near the real right hand. The rubber hand condition induced visuotactile extinction similar to the real hand, indicating that tactile extinction is not specific to perceiving one's own body but can also be induced by observing another person's body.
Finally, as outlined above (see Section 3), besides mirroring observed touches, S-I also responds to the observation of human movements, and S-I influences different aspects of human action and motor control (see Section 4). In this last paragraph, we therefore turn to the interaction between action observation, motor resonance, and somatosensory plasticity. TMS is an often-used tool to induce or modulate cortical plasticity. Avenanti et al. [162] investigated the specific influence of virtual lesions of S-I, induced by repetitive TMS (rTMS) over S-I, on motor-evoked potentials (MEPs) measured at the hand during observed hand movements. The authors found rTMS to specifically disrupt the ability to resonate with extreme joint-stretching finger movements that, by subjective report, induced strong tactile/proprioceptive sensations during observation. In a different study, TMS pulses delivered over S-I disrupted the ability to correctly judge the weight of a box lifted by a hand, but not the ability to correctly judge the weight of a bouncing ball [163]. A contribution of S-I to proprioceptively driven weight judgments has also been indicated by a patient study, in which deafferented patients were shown to be impaired in their ability to correctly estimate the weight of a box lifted by a person [164]. On the other hand, TMS adaptation (TMS-A) over S-I can be used for behavioral enhancement [165]. Jacquet and Avenanti showed that TMS-A over S-I leads to a reduction in reaction times when participants are asked to recognize the goal (but not the movement) of an observed hand movement. Somatosensory plasticity can therefore potentially be used to enhance empathic abilities during action observation. There also seems to be potential to use action observation to induce somatosensory plasticity.
Summary
Converging evidence from human and monkey research supports the notion that S-I is not only involved in the detection, perception, discrimination, and categorization of touch but is also linked to more complex cognitive and emotional functions. More recent work even proposes S-I as a reference frame for social "resonance" that involves those subareas formerly assumed to respond only to "real" physical tactile inputs arising from the thalamus [77]. This raises the fundamental question of whether social tactile cues may induce or boost tactile processing, perception, and even plasticity [11], and whether this may offer new treatment options, for instance, in phantom limb pain, stroke rehabilitation, or even social distortions. Future research is needed to understand the functional role of cortical social "resonance" in primary and further downstream sensory regions and their specific contribution to perception and plasticity.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
GTRpmix: A Linked General Time-Reversible Model for Profile Mixture Models
Abstract Profile mixture models capture distinct biochemical constraints on the amino acid substitution process at different sites in proteins. These models feature a mixture of time-reversible models with a common matrix of exchangeabilities and distinct sets of equilibrium amino acid frequencies known as profiles. Combining the exchangeability matrix with each profile generates the matrix of instantaneous rates of amino acid exchange for that profile. Currently, empirically estimated exchangeability matrices (e.g. the LG matrix) are widely used for phylogenetic inference under profile mixture models. However, these were estimated using a single profile and are unlikely optimal for profile mixture models. Here, we describe the GTRpmix model that allows maximum likelihood estimation of a common exchangeability matrix under any profile mixture model. We show that exchangeability matrices estimated under profile mixture models differ from the LG matrix, dramatically improving model fit and topological estimation accuracy for empirical test cases. Because the GTRpmix model is computationally expensive, we provide two exchangeability matrices estimated from large concatenated phylogenomic supermatrices to be used for phylogenetic analyses. One, called Eukaryotic Linked Mixture (ELM), is designed for phylogenetic analysis of proteins encoded by nuclear genomes of eukaryotes, and the other, Eukaryotic and Archaeal Linked mixture (EAL), for reconstructing relationships between eukaryotes and Archaea. These matrices, combined with profile mixture models, fit data better and have improved topology estimation relative to the LG matrix combined with the same mixture models. Starting with version 2.3.1, IQ-TREE2 allows users to estimate linked exchangeabilities (i.e. amino acid exchange rates) under profile mixture models.
Introduction
Models of amino acid substitution are of key importance to probabilistic molecular phylogenetic analyses of protein sequences. Typically, the amino acid substitution process is modeled via a site-independent, time-reversible Markov process on a tree. The parameters of this model include a set of fixed equilibrium frequencies of the amino acids (referred to in this work as a profile) and a fixed matrix of amino acid exchange rates (exchangeabilities) throughout the tree. The exchangeability matrix accounts for some biological, chemical, and physical amino acid properties and, when combined with the equilibrium frequencies of the amino acids, it describes the instantaneous rates of interchange between each pair of amino acids.
In most phylogenetic analyses the exchangeability matrix used is chosen from a set of fixed, empirically estimated matrices. The first empirically estimated exchangeabilities were derived from the Dayhoff (Dayhoff et al. 1978) and Jones-Taylor-Thornton (JTT) (Jones et al. 1992) matrices, which were obtained by counting the substitutions between each amino acid pair using ancestral sequence reconstruction and a parsimony-based approach to analyze databases of multiple alignments along with their estimated phylogenies. Subsequently, a maximum likelihood approach was used to improve exchangeability estimation, leading to the development of the "Whelan and Goldman" (WAG) model (Whelan and Goldman 2001). Le and Gascuel expanded this approach (Le and Gascuel 2008) by considering larger data sets and by incorporating heterogeneity of rates across sites in the likelihood computation via a site-rate partition model. The resulting matrix, known as the "Le and Gascuel" (LG) matrix, is currently very widely used for phylogenetic inference based on protein sequences. Expanding on these, Minh et al. (2021) introduced QMaker, a maximum likelihood method to estimate an exchangeability matrix from a large protein data set consisting of multiple independent sequence alignments. The authors used QMaker to estimate a number of additional matrices to be used for phylogenetic analyses of specific taxonomic groups (e.g. Q.bird, Q.insect, and Q.plant). Other matrices have been developed to fit proteins encoded on certain organellar genomes (e.g. cpREV; Adachi et al. 2000) or particular gene families (e.g. rtREV; Dimmic et al. 2002).
All of the foregoing exchangeability matrices were obtained assuming that all sites evolve according to the same process and share a single set of equilibrium amino acid frequencies (a single profile). However, because of different functional constraints and structural microenvironments within proteins, there are distinct ranges of admissible amino acids at sites (Pál et al. 2006; Goldstein 2008; Franzosa and Xia 2009). Profile mixture models, such as the C10-C60 series and the UDM series (Si Quang et al. 2008; Wang et al. 2008, 2014; Schrempf et al. 2020), were designed to account for this heterogeneity of preferred amino acids across sites. These models are mixtures of time-reversible Markov models, but they assume a common exchangeability matrix and distinct profiles of stationary frequencies.
Exchangeabilities and amino acid profiles capture, in different ways, similar properties of the amino acid replacement process across sites. Ideally, one would want to separate the properties that are captured by exchangeabilities from those captured by profiles. However, this is nontrivial, since such properties depend on features like the structural context of sites in proteins, information that is absent from the data used for analysis (see, for example, Spielman and Wilke 2015). In current site-homogeneous approaches to the estimation of exchangeabilities, site-specific amino acid preferences are not explicitly modeled, so exchangeabilities indirectly capture some of these site-specific signals as average effects. It would be preferable if the profiles modeled site-specific selective constraints and the exchangeabilities modeled alignment-wide aspects of the substitution process (e.g. mutational processes and genetic code effects). Unfortunately, it is doubtful that these two aspects of substitution processes can be completely disentangled. Nevertheless, it seems clear that unless profiles are included when estimating exchangeabilities, estimates of the latter will reflect site-specific properties to a considerable degree. This highlights the importance of re-estimating exchangeabilities in the context of mixture models to avoid redundancy between the signals captured by profiles and exchangeabilities.
Estimation of a single exchangeability matrix within a profile mixture setting has been explored in a Bayesian context by Lartillot and Philippe (2004) through the development of various versions of the CAT model of PhyloBayes (Lartillot et al. 2013). The CAT-GTR model uses Markov chain Monte Carlo techniques to jointly infer frequency vectors, exchangeabilities, the affiliations of each site to a given frequency vector, the rates at each site, the branch lengths, and the tree topology. However, in practice, convergence may not be achieved in large data sets with many sites and taxa in its current implementation (Lartillot et al. 2013). In the maximum likelihood framework, Wong and colleagues developed MAST (Wong et al. 2024), an extension of IQ-TREE2 that, among other things, allows the user to estimate a mixture model with various options for linking and unlinking exchangeability matrices and amino acid profiles, in conjunction with mixtures of tree topologies. While this implementation can be very useful in many contexts, it is not practical for profile mixture models with many profiles because, for each profile, 189 exchangeability parameters need to be estimated. For commonly used models with 40 to 60 profiles (e.g. C40 or C60) or more (e.g. UDM64, UDM256, etc.), this corresponds to more than 7,500 estimated parameters (e.g. 189 × 40 = 7,560 for C40). These models would require complex and computationally expensive optimization and would potentially be susceptible to problems associated with local optima, over-parameterization, and identifiability.
Here, we describe the implementation of a general time-reversible model via maximum likelihood estimation in IQ-TREE2 for use with profile mixture models. This GTR model (denoted GTR20 in IQ-TREE2) has a single set of optimizable exchangeability parameters shared ("linked") over all classes of the profile mixture. By simulation, we show that our implementation accurately estimates exchangeability parameters and that it can improve tree topology estimation accuracy. Additionally, we show that the estimation of exchangeabilities under a profile mixture model provides a much-improved fit on a well-known empirical data set compared to the profile mixture model with LG exchangeabilities.
Since the estimation of exchangeabilities can be computationally expensive and requires large data sets for accurate parameter estimates, we provide two matrices estimated from large concatenated supermatrices under the GTR-C60 profile mixture model to be used as fixed matrices for phylogenetic analyses. One of these, called the Eukaryotic Linked Mixture (ELM) matrix, is tailored for phylogenetic analyses of proteins encoded by eukaryotic nuclear genes, and the other, the Eukaryotic and Archaeal Linked (EAL) matrix, is for reconstructing relationships between eukaryotes and Archaea. We show, via three well-known empirical data sets, that these matrices have better fit and topological accuracy than the LG matrix when both are combined with C60. Additionally, we show that these matrices perform well with different sets of profiles.
Profile Mixture Models and Exchangeability Optimization
The general time-reversible model (GTR) (Tavaré 1986) is a Markov process where, for a profile $\pi = (\pi_1, \pi_2, \ldots, \pi_{20})$ with $\sum_{a=1}^{20} \pi_a = 1$ and a matrix $Q$ of instantaneous rates of change between amino acids, $\operatorname{diag}(\pi)\,Q = Q^{T}\operatorname{diag}(\pi)$. Because of this, one can parameterize $Q$ via a non-negative symmetric matrix $S$ known as the exchangeability matrix. Specifically, given an exchangeability matrix $S = \{s_{ij}\}_{i,j=1}^{20}$ and a profile $\pi$, the entries of the time-reversible instantaneous rate matrix $Q_{\pi} = \{q_{ij}\}_{i,j=1}^{20}$ associated with $\pi$ are obtained by 1) setting $q_{ij} = s_{ij}\pi_j$ for $i \neq j$ and $q_{ii} = -\sum_{j=1,\, j \neq i}^{20} q_{ij}$, and 2) multiplying all entries by $\left(-\sum_{i=1}^{20} q_{ii}\pi_i\right)^{-1}$, so that branch lengths are interpretable as the expected number of substitutions per site.
All off-diagonal entries of $Q_{\pi}$ are non-negative, row sums are 0, $\pi Q_{\pi} = 0$, and $\operatorname{diag}(\pi)\,Q_{\pi}$ is symmetric. For any given $\pi$ and any $c > 0$, the exchangeability matrices $S$ and $cS$ yield the same (normalized) instantaneous rate matrix $Q_{\pi}$, and thus produce the same site-pattern probabilities. Therefore, we constrain one entry to be equal to 1, resulting in 189 free parameters from the exchangeability matrix.
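As a concrete illustration, here is a minimal numpy sketch of steps 1) and 2) above, together with a check of the detailed-balance condition $\operatorname{diag}(\pi)\,Q = Q^{T}\operatorname{diag}(\pi)$; the random $S$ and $\pi$ are placeholders, not estimates from any data set:

```python
import numpy as np

def rate_matrix(S, pi):
    """Normalized time-reversible rate matrix Q_pi from a symmetric,
    non-negative exchangeability matrix S (20x20) and a profile pi
    (length 20, summing to 1), following steps 1) and 2) in the text."""
    Q = S * pi                            # q_ij = s_ij * pi_j for i != j
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # rows sum to zero
    mu = -np.dot(np.diag(Q), pi)          # expected substitutions per unit time
    return Q / mu                         # branch lengths = substitutions/site

rng = np.random.default_rng(0)
S = rng.random((20, 20))
S = (S + S.T) / 2                         # enforce symmetry
np.fill_diagonal(S, 0.0)
pi = rng.dirichlet(np.ones(20))

Q = rate_matrix(S, pi)
assert np.allclose(Q.sum(axis=1), 0.0)
assert np.allclose(np.diag(pi) @ Q, Q.T @ np.diag(pi))  # time reversibility
```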
Commonly used profile mixture models are, usually, mixtures of time-reversible models with a common exchangeability matrix $S$. Specifically, site profiles $\pi_c$ are selected independently with probability $w_c$ and, independently of these, rates for sites, $r_k$, are chosen with probability $d_k$. Given the rate and site profile for a site $p$, the evolutionary model is a GTR process with exchangeability matrix $S$ along a tree $T$. Let $P(x_p \mid T, S, \pi_c, r_k)$ denote the conditional probability of site pattern $x_p$ given its site rate, $r_k$, and site profile, $\pi_c$. Because the rates and site profiles are unobserved, the likelihood contribution under the model for the site is the marginal probability of the site pattern,
$$P(x_p \mid T, S) = \sum_{c=1}^{C} \sum_{k=1}^{K} w_c\, d_k\, P(x_p \mid T, S, \pi_c, r_k),$$
where $\{\pi_c\}_{c=1}^{C}$ is a collection of $C$ profiles with corresponding positive weights $\{w_c\}_{c=1}^{C}$ summing to one, $\{r_k\}_{k=1}^{K}$ is a collection of $K$ non-negative scalar rate parameters with corresponding positive rate weights $\{d_k\}_{k=1}^{K}$ also summing to one, and $\sum_{k=1}^{K} d_k r_k = 1$. To reduce the complexity and computational cost of profile mixture models, fixed profiles are typically used for tree estimation (Si Quang et al. 2008; Wang et al. 2008; Schrempf et al. 2020; Tice et al. 2021; Eme et al. 2023). In these cases, the only additional parameters coming from the profile mixture are the weights of the profiles, giving $C - 1$ additional free parameters, where $C$ is the number of profiles. Different sets of profiles have been estimated from databases of alignments. For example, Si Quang et al. (2008) introduced the widely used sets of profiles known as C10, C20, C30, C40, C50, and C60 (generically referred to as CXX). These sets of profiles were estimated under uniform exchangeabilities (referred to as POISSON exchangeabilities). In each of these, the number next to the "C" denotes the number of profiles in the set. Other sets of profiles include the more recently introduced UDM models (Schrempf et al. 2020), with sets of profiles ranging from 4, 8, and 16 up to 4,096 classes.
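To make the marginalization concrete, the following numpy sketch computes the total log-likelihood from per-class conditional site likelihoods, which are assumed to have been computed already (e.g. by Felsenstein's pruning algorithm, not shown here):

```python
import numpy as np

def mixture_log_likelihood(cond_lik, w, d):
    """Log-likelihood of an alignment under a profile mixture model.

    cond_lik : (n_sites, C, K) array of P(x_p | T, S, pi_c, r_k)
    w        : (C,) profile weights, positive and summing to one
    d        : (K,) rate-category weights, positive and summing to one
    """
    # Marginal site probability: sum_c sum_k w_c * d_k * P(x_p | pi_c, r_k)
    marginal = np.einsum("pck,c,k->p", cond_lik, w, d)
    # Sites are independent, so log-probabilities add across sites.
    return np.log(marginal).sum()
```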
For rates across sites, it is common to use a discretized approximation to the gamma distribution with shape parameter $\alpha$ and mean 1 (Yang 1994). For these distributions, all rates have an equal probability of occurrence and are continuous functions $r_k(\alpha)$ of $\alpha$. The shape parameter $\alpha$ adds only one free parameter to a profile mixture model. The gamma distribution is commonly discretized into four rate classes, denoted G4.
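For illustration, a small sketch of one standard way to compute such rate categories (the median rule; Yang 1994 also describes a category-mean variant, and we do not claim this matches IQ-TREE2's internal choice exactly):

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, K=4):
    """K equal-probability rate categories for a mean-1 gamma distribution
    (shape=alpha, scale=1/alpha), using the median of each category and
    rescaling so that the weighted mean rate equals 1."""
    bin_medians = (2 * np.arange(K) + 1) / (2 * K)      # quantile midpoints
    rates = gamma.ppf(bin_medians, a=alpha, scale=1.0 / alpha)
    return rates / rates.mean()   # enforce sum_k d_k r_k = 1 with d_k = 1/K

print(discrete_gamma_rates(0.67))  # alpha value used in the simulations below
```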
In most of our analyses with the C-series models below, we estimate the weights of the mixture; the default of IQ-TREE is to use empirical weights (weights obtained during the estimation of the original empirical profile frequencies, rather than being re-estimated for the data at hand) unless a "+F" component is included. In a similar fashion, unless clearly stated, we also jointly estimate the shape parameter $\alpha$. Lastly, exchangeabilities are also jointly estimated, unless clearly stated to be fixed a priori to previously estimated exchangeabilities (for example, to LG or POISSON).
Given an MSA with $n$ sites and a tree $T$, we estimate the exchangeabilities by maximizing the log-likelihood across all sites,
$$\ell(S) = \sum_{p=1}^{n} \log P(x_p \mid T, S).$$
To do this, we arbitrarily fix the exchangeability between Y and V (corresponding to the entry $s_{19,20}$ of $S$) to 1, and we then estimate the 189 remaining exchangeabilities using the BFGS algorithm (Fletcher 1987), a well-known iterative optimization method. By default, the algorithm is initialized with all 189 exchangeabilities equal to one, with the option to specify any other initial exchangeabilities. In its current implementation, other parameters of the profile mixture can be jointly estimated using IQ-TREE2's routines. For example, one can simultaneously estimate the tree topology, branch lengths, rates (not necessarily from a discretized gamma), weights of fixed profiles, and exchangeabilities (or any subset of this list).
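The objective function itself requires full phylogenetic likelihood machinery, so the sketch below shows only the optimization scaffolding around it; `neg_log_lik` is a placeholder, and the log-scale reparameterization used to keep exchangeabilities positive is our convenience rather than a documented IQ-TREE2 internal:

```python
import numpy as np
from scipy.optimize import minimize

def fit_exchangeabilities(neg_log_lik, x0=None):
    """Optimize the 189 free exchangeabilities (s_{19,20} fixed to 1) by
    BFGS. `neg_log_lik` maps a length-189 vector of positive free
    exchangeabilities to -log L(S) for the profile mixture model."""
    if x0 is None:
        x0 = np.zeros(189)                           # exp(0) = 1: POISSON start
    objective = lambda z: neg_log_lik(np.exp(z))     # positivity via log scale
    result = minimize(objective, x0, method="BFGS")
    return np.exp(result.x)                          # back to exchangeability scale
```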
We compare exchangeabilities $S$ and $S'$ via their associated rate matrices $Q$ and $Q'$ under the uniform profile. In that case, for $i \neq j$, $q_{ij} = c\,s_{ij}$, where the constant $c$ (arising from the uniform profile and the normalization) is not a function of $i$ and $j$, so the rate matrix entries can be thought of as exchangeabilities and we refer to them as such. But the transformation to $Q$ and $Q'$ puts the exchangeabilities onto a more comparable scale, one that is more closely associated with their end use in rate matrices than setting one entry to 1 as was done in optimization.
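A sketch of this comparison scale, together with the sum-of-absolute-differences (SAD) statistic used in the results below; whether SAD counts each unordered pair once or twice is our assumption about its exact definition:

```python
import numpy as np

def comparable_scale(S):
    """Off-diagonal entries of the normalized rate matrix built from S
    under the uniform profile (the comparison scale described above)."""
    pi = np.full(20, 1.0 / 20)
    Q = S * pi
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    Q = Q / (-np.dot(np.diag(Q), pi))      # normalize to one substitution/site
    return Q[~np.eye(20, dtype=bool)]      # drop the diagonal

def sad(S1, S2):
    """Sum of absolute differences between two exchangeability matrices
    on the comparable scale (each unordered pair counted once here)."""
    return np.abs(comparable_scale(S1) - comparable_scale(S2)).sum() / 2
```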
Data Sets
Because of the computational burden associated with the estimation of the exchangeabilities in the GTRpmix model, we have analyzed two large concatenated protein "supermatrix" datasets to estimate "general use" substitution matrices for phylogenetic estimation with profile mixture models. These can be used with profile mixture models when sufficient computational resources are not available for full GTRpmix optimization or when the datasets to be analyzed are too small to allow accurate estimation. The two datasets used to estimate these matrices are a pan-eukaryotic concatenated supermatrix and a eukaryote-archaea supermatrix. Pan-Eukaryotic data set: To estimate the "Eukaryote" exchangeability matrix we selected a 240-protein data set with 76,840 sites and 78 taxa as a taxonomically representative subsample of all eukaryotes in the PhyloFisher database (Tice et al. 2021). Taxa were selected based on their known membership in a particular higher-level eukaryotic taxon and their phylogenetic position. Further selection was done to maximize gene coverage within the original PhyloFisher data set. As detailed below, to compare two methods of exchangeability estimation, we also looked at a smaller subset of the PhyloFisher database consisting of a 240-protein data set with 77,965 sites and 50 taxa.
Eukaryotic-Archaeal data set:
To estimate the exchangeability matrix for reconstructing relationships between eukaryotes and Archaea, we used a 54-protein data set with 14,704 sites and 86 taxa. This data set includes a subset of the taxa presented in Eme et al. (2023).
For more details on the datasets and their taxonomic selection, see the Supplement's section "Data Sets."
Data Sets for Comparisons
The following data set is used to compare the fit of the new empirically estimated matrix for reconstructing relationships between eukaryotes and Archaea against the LG matrix and the matrix estimated for eukaryotic phylogenetic analysis.
• a data set of 56 ribosomal proteins (7,112 sites × 86 taxa) described in Eme et al. (2023). To ensure computational tractability, taxa were subsampled from the original 331-taxon dataset to maintain a representation of Asgard archaea, TACK archaea, and Euryarchaeota. A tree topology, denoted T_R, estimated under LG+C60+G4, is used for comparing different exchangeability matrices.
We also used three empirical concatenated supermatrices to validate the empirically estimated matrix discussed above for eukaryotic phylogenetic analysis and to compare it with the LG matrix. For each data set, we consider two trees: the correct topology and an artifactual one (the product of long-branch attraction artifacts). The data sets and trees are as follows:
• a data set of 133 proteins (24,291 sites × 40 taxa) described in Brinkmann et al. (2005) to assess the placement of the Microsporidia in the tree of eukaryotes. The correct topology, denoted T_M, was originally recovered with LG+C20+F+G4 (Lartillot et al. 2007; Wang et al. 2017; Susko et al. 2018). The artifactual topology, denoted T_MA, was recovered with LG+F+G4 and groups the Microsporidia with the archaeal outgroup (i.e. branching sister to all other eukaryotes) due to an LBA artifact.
• a data set of 146 proteins (35,371 sites × 37 taxa) described in Lartillot et al. (2007) to assess the placement of the Nematodes in the animal tree of life. The correct topology, denoted T_N, was recovered with LG+C20+F+G4, where Nematodes branch as sister to arthropods. The artifactual topology, denoted T_NA, was recovered with LG+F+G4.
• a data set of 146 proteins (35,371 sites × 32 taxa) assembled in Lartillot et al. (2007) to assess the position of the Platyhelminths in the animal tree of life. The correct topology, denoted T_P, was recovered with CAT+GTR and places Platyhelminths within the Protostomia. The artifactual topology, denoted T_PA, was recovered with LG+F+G4 and places Platyhelminths within Coelomata, as do many mixture models (see Lartillot et al. 2007; Wang et al. 2017; Susko et al. 2018).
Parameter Estimation Performance
To validate our implementation, we simulated 100 MSAs, each with 10,000 sites and 10 taxa, using AliSim (Ly-Trong et al. 2022). Each alignment was simulated under the following conditions: LG exchangeabilities; a profile mixture model with four profiles (we arbitrarily chose the first four profiles from the C60 model, which turn out to be quite distinct; supplementary fig. S2, Supplementary Material online); a 10-taxon tree, depicted in supplementary fig. S1, Supplementary Material online, obtained from the empirically estimated tree T_M defined above after randomly removing taxa; and a discretized gamma distribution G4 (Yang 1994) with α = 0.67, where α was chosen from an empirical data estimate (obtained after fitting the model LG+C60+G4 on the tree T_M for the Microsporidia data set). The arbitrarily chosen weights of the profiles were 0.35, 0.15, 0.25, and 0.25, respectively. For each MSA, we jointly estimated exchangeabilities, branch lengths, profile weights, and the rate parameter α. The only parameters not estimated were the tree topology and the profiles. We chose the POISSON exchangeability matrix, in which all entries are equal to 1, as the initial values for the exchangeabilities to guarantee that the success of optimization was not due to the starting values being close to the true values.
Figure 1a shows a histogram of the differences between true and estimated exchangeabilities for all entries and for all 100 simulations. In particular, this plot shows that most entries were accurately inferred, since most of the mass is around zero. Supplementary fig. S3, Supplementary Material online gives separate box plots for each exchangeability entry and shows that all entries are adequately estimated.
To investigate the performance of the estimation of all exchangeabilities jointly, we compute, for each estimated matrix S, the sum of absolute differences (SAD) between the true exchangeability matrix (LG) and S. Figure 1b shows the box plot of the SAD for all simulations. For reference, the SAD between the true matrix (LG) and the starting matrix (POISSON) is ∼0.5, which is considerably larger than that of any estimated matrix. Moreover, the SAD between the LG matrix and the mean estimated matrix is ∼0.02 (Fig. 2), suggesting consistency of the exchangeability estimation.
Other parameters optimized jointly with exchangeabilities were also accurately estimated, including profile weights (Fig. 1c), branch lengths (Fig. 1d), and the α shape parameter (supplementary fig. S4, Supplementary Material online).
Over the 100 simulations, the median total CPU time used to estimate all parameters for each simulated dataset was 4,297 s, and the median wall-clock time was 217 s on an Intel Xeon E5-2697 with 64 GB RAM when using IQ-TREE2's multithreading option on 20 cores.
Estimating Exchangeabilities Improves Topological Accuracy
In Baños et al. (2024), it was shown that misspecification of the exchangeabilities can severely hamper tree estimation. Specifically, it was shown that, under a "rich" profile mixture model, fitting data simulated under POISSON exchangeabilities with fixed LG exchangeabilities, together with a profile mixture model that includes an "F-class" (a profile defined from the empirical frequencies of amino acids in the MSA), performs poorly.
To determine whether GTR estimation would address this problem, we investigated a similar scenario by simulating 100 MSAs of length 10,000 under the POISSON+C10+G4{0.5} model on the 12-taxon tree shown in supplementary fig. S5, Supplementary Material online (L). We separately fitted the profile mixture model C10+F+G4{0.5} using the POISSON, LG, and GTR exchangeabilities, and two tree topologies: the correct tree and an artifactual one corresponding to the long-branch attraction (LBA) artifact (supplementary fig. S5, Supplementary Material online (R)). For all models, branch lengths and weights of the profiles were estimated, and for GTR+C10+F+G4{0.5} exchangeabilities were also estimated. Table 1 shows, for each model, the proportion of times the true tree had a higher log-likelihood than the LBA tree. As expected, LG+C10+F performs poorly when compared to the POISSON+C10+F model; GTR+C10+F performs much better than LG and is much closer in performance to the POISSON+C10+F model. If more taxa and sites were considered, the GTR exchangeability estimates would be expected to approach the true POISSON exchangeabilities, and tree estimation would improve concomitantly. Note that if profiles were misspecified, we would expect heterogeneity in the estimated exchangeabilities that compensates for this, even with a very large data set or more taxa. Table 1 also shows a similar scenario to the one described above, with the only difference being that all MSAs are of length 500 sites. We note that for both the true tree and the LBA tree, for the MSAs of length 10,000, the mean SAD between the true POISSON and the estimated GTR exchangeabilities is ∼0.27, while for the MSAs of length 500 it is ∼0.9. Comparing the results from both MSA lengths, we see that the proportions of correct topological estimates increase much less for the incorrect LG-based model than for either the POISSON- or GTR-based models. It is also noteworthy that the POISSON and GTR proportions obtained from the MSAs of length 500 are comparable, despite POISSON requiring far fewer parameters to be estimated. We strongly suspect that this is the result of a small-sample bias as described in Wang et al. (2019).
Data Analysis
We then investigated whether GTRpmix significantly improves model fit compared to the LG matrix for the Microsporidia data set from Brinkmann et al. (2005). By fixing the tree topology T_M and the profiles of model C60, we jointly estimated the exchangeabilities of the GTR model, class weights, α from a discretized G4, and branch lengths. We refer to the estimated exchangeability matrix in this section as the Microsporidia eXchangeability Matrix (MXM). We compare this model against LG+C60+G4, where class weights, α, and branch lengths are estimated under the fixed tree topology T_M, the profiles of model C60, and LG exchangeabilities. Table 2 shows the log-likelihoods, AIC, and BIC obtained from models LG+C60+G4 and MXM+C60+G4. Note that for MXM+C60+G4, 189 additional parameters are being estimated compared to LG+C60+G4. Nevertheless, the AIC and BIC scores suggest a preference for exchangeability estimation by a large margin (around 10k AIC and BIC units, Table 2).
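For reference, the information criteria follow their standard definitions; the short sketch below uses them to show how large a log-likelihood gain the 189 extra parameters must buy (the only data-derived number used is the Microsporidia alignment length):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln(n) - 2 ln L, with n the
    number of alignment sites used as the sample size."""
    return k * np.log(n) - 2 * loglik

# A model with 189 extra free parameters is preferred when its
# log-likelihood gain exceeds 189 units under AIC, or 189*ln(n)/2 under BIC.
n = 24_291                          # sites in the Microsporidia alignment
print(189.0)                        # AIC threshold in log-likelihood units
print(189 * np.log(n) / 2)          # BIC threshold, roughly 954 units here
```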
Figure 3 shows a comparison between the entries of the LG and the MXM matrices. In particular, we see that many of the entries in the LG matrix differ from those in the MXM matrix. Note that the SAD between the POISSON and MXM exchangeabilities is ∼0.36 and between LG and POISSON is ∼0.5. To provide insight into these differences, we focus on a particular example. Some exchangeabilities involving cysteine (C) are increased in MXM compared to LG. This is likely because in C60 there is a profile (profile 8 as listed in IQ-TREE2) where cysteine has a frequency of ∼0.42 and both alanine and serine each have a frequency of ∼0.18; these three amino acids account for almost 80% of the overall amino acid proportion of this profile.
Table 1. Proportion of times the correct tree is preferred over the artifactual LBA tree for 100 simulated MSAs of length n = 500 and n = 10,000, simulated under the model POISSON+C10+G4{0.5}. The only difference between the fitted models is the choice of exchangeabilities. For the model GTR+C10+F+G4{0.5}, exchangeabilities are estimated using our implementation. McNemar's test of the equality of proportions yielded a P-value ∼0 when comparing the contingency table from trees for models GTR+C10+F+G4{0.5} and LG+C10+F+G4{0.5} for the MSAs with 10,000 sites.
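The McNemar test in the caption compares paired binary outcomes (true tree preferred or not) for two models fitted to the same simulated data sets; the sketch below shows the construction with placeholder outcome vectors rather than the study's actual results:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
# Placeholder paired outcomes over 100 simulated MSAs: True if the model
# preferred the correct tree over the LBA tree on that data set.
gtr_correct = rng.random(100) < 0.95    # hypothetical success rates
lg_correct = rng.random(100) < 0.55

# 2x2 table cross-tabulating the two models' outcomes on the same MSAs.
table = np.array([
    [np.sum(gtr_correct & lg_correct),  np.sum(gtr_correct & ~lg_correct)],
    [np.sum(~gtr_correct & lg_correct), np.sum(~gtr_correct & ~lg_correct)],
])
print(mcnemar(table, exact=True).pvalue)  # tests equality of the proportions
```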
By excluding this profile and profile 4 (where cysteine has a frequency of 0.16 and only A, L, S, T, and V have frequencies greater than 0.05) from C60, the mean frequency of cysteine in the 58 remaining profiles is ∼0.009. Assuming this profile mixture is closer to reality, even if the exchangeabilities involving cysteine are non-negligible, a frequency profile with a substantial probability of cysteine is unlikely. Thus, a small proportion of sites with a cysteine and some other amino acid is expected. Because MXM takes frequency profiles into account, it can recognize this. However, when a single profile is assumed in fitting, as with LG, a small exchangeability provides the only way to account for a low frequency of sites with cysteine and some other amino acids. A bubble plot of the differences between the LG and MXM exchangeabilities can be found in supplementary fig. S6, Supplementary Material online.
For comparison, we also estimated a GTR matrix, denoted FM, under a single profile and discretized G4 rates for the Microsporidia data set. Specifically, for the profile we used the overall frequencies of amino acids in the data set. Supplementary fig. S7, Supplementary Material online (L) depicts a comparison between the FM and MXM matrices. This figure shows, as expected, how MXM has higher cysteine exchangeabilities than FM, as was noted in the MXM-to-LG comparison. For additional comparison, the SAD between FM and LG is ∼0.15, while the SAD between the FM and MXM matrices is ∼0.26. This means that the FM matrix is more similar to the LG matrix than to the MXM matrix. Table 2 shows that the AIC and BIC scores of FM+F+G4, LG+C60+G4, and FM+C60+G4 are all worse than those of MXM, underscoring the importance of fitting multiple profiles and exchangeabilities jointly. As suggested in Minh et al. (2021) and Pandey and Braun (2020), different clades are likely to have different optimal models, and thus one would expect the model FM+C60+G4 to perform better than the model LG+C60+G4. In fact, while the FM+C60+G4 model is favored over LG+C60+G4 according to the AIC, it is not according to the BIC. Because AIC and BIC were derived assuming full ML estimation of all parameters, but FM+C60+G4 was not estimated in this way, it is not clear that these are the right criteria for comparison. BIC, in particular, was derived to approximate Bayes factors for models and usually assumes that ML estimates are used in calculating likelihoods (Schwarz 1978).
The total running time for full estimation of the exchangeabilities of the MXM model, branch lengths, C60 class weights, and the shape parameter α was ∼11 hours on 40 cores of an AMD EPYC 7543 processor with 2 TB of RAM. We investigated whether accurate estimation of exchangeabilities would be possible when fixing branch lengths and the shape parameter α from G4. Specifically, we estimated the exchangeability matrix denoted MXM_Fix by fixing branch lengths and α to the averages of these parameters obtained from fitting POISSON+C60+G4 and LG+C60+G4. The total running time to estimate MXM_Fix was ∼7 hours on the same computer, and the resulting SAD between MXM and MXM_Fix is 0.017, suggesting the estimates are very similar (supplementary fig. S7, Supplementary Material online (R) shows an entry-wise comparison between these two matrices). The log-likelihood obtained after fitting the model MXM_Fix+C60+G4 (where branch lengths and α are re-estimated) is only ∼73 likelihood units less than the one obtained from fitting MXM+C60+G4 with all parameters estimated simultaneously. This suggests that fixing branch lengths and α does not greatly affect the estimation of exchangeabilities while yielding important computation time savings.
Empirically Estimated Exchangeability Matrices
Since the estimation of exchangeabilities is computationally expensive even after fixing branch lengths and rates, many users will not have the computational resources to optimize matrices for their datasets of interest. Alternatively, users may have datasets that are not large enough to permit accurate exchangeability estimation (e.g. a single-protein alignment). For these reasons, we have estimated two exchangeability matrices from large data sets using the C60+G4 model that can be used as fixed matrices for phylogenetic analyses under profile mixture models. The first matrix is tailored for phylogenetic analyses of proteins encoded by eukaryotic nuclear genes, and the other is for reconstructing relationships between nuclear-encoded proteins in eukaryotes and orthologs in Archaea. We estimated an exchangeability matrix, which we refer to as the Eukaryotic Linked Mixture (ELM) matrix, from the 78-taxon Pan-Eukaryotic data set described in the section "Data Sets" above. We used the profiles from model C60, discretized G4 rates, and a tree topology recovered by fitting LG+C60+G4 to the data, depicted in supplementary fig. S10, Supplementary Material online. To reduce computational time, we used the approach described for MXM_Fix estimation above; i.e. we fixed branch lengths and α to their averages based on estimates from LG+C60+G4 and POISSON+C60+G4. Thus, for the ELM matrix estimation, we only optimized exchangeabilities and C60 profile weights jointly. Supplementary fig. S8, Supplementary Material online shows a bubble plot of the difference between the LG and ELM exchangeabilities, and supplementary fig. S9, Supplementary Material online (L) depicts an analog of Fig. 3 for these two matrices. To compare the fit between these two matrices, we used the three empirical data sets (Microsporidia, Nematode, and Platyhelminths) described in the subsection "Data Sets for Comparisons." Figure 4 contains, among other things, the likelihoods obtained from fitting models LG+C60+G4 and ELM+C60+G4 for the correct and artifactual topologies of each data set. Note that, applying the KHns test developed in Susko (2014) to compare two fixed tree topologies, a likelihood difference between fitting the true and artifactual tree is considered significant, at a 5% significance level, if it is greater than 5.53 for the Microsporidia, 2.99 for the Nematode, and 1.92 for the Platyhelminths data sets.
Although the KHns test does not correct for the selection bias induced by estimating the artifactual tree from the data, these thresholds nevertheless give some indication of how large likelihood differences might be expected to be under the null hypothesis. Clearly, the ELM matrix produces considerably better likelihood scores for all data sets than the LG matrix. We note that for the Platyhelminths data set, ELM+C60+G4 significantly prefers the true tree over the artifactual tree, whereas LG+C60+G4 does not. For the other two datasets, all models preferred the true tree, although the LG model consistently had the weakest preference (Fig. 4).
We also found that even though the ELM matrix was optimized using the profiles from C60, it still provided better fit and improved topological accuracy when fitted with different sets of profiles. Figure 4 contains the likelihoods for the three data sets when fitting the profiles in C40, C30, C20, UDM32, and UDM64 with LG and ELM exchangeabilities. For all profile mixture models, the ELM matrix provides better likelihood scores than the LG matrix. We also see that the ELM matrix always prefers the true tree, which is not true for the LG matrix under models C20, C40, and UDM32 for the Platyhelminths data set.
To make a broader comparison, we also looked at the likelihoods for all the profile mixture models used in the comparisons above with MXM exchangeabilities, shown in Fig. 4. As expected, independently of the profiles, for the Microsporidia data set the MXM matrix produces better likelihood scores, since this matrix was optimized on this data set. Nonetheless, for the other two data sets the ELM matrix produces better likelihood scores.
Fig. 4.
The log-likelihoods of the trees estimated from the three empirical data sets and the difference between fitting the true and the artifactual tree for each data set. The bar scale goes from the lowest value (empty) to the highest value (full) per column. D(T_X) denotes the log-likelihood of the "correct" tree (e.g. T_M) minus that of the "incorrect" tree (e.g. T_MA). Positive values of D(T_X) reflect a preference for the true tree, while negative values reflect a preference for the LBA tree. The best LH score per model and data set is shown in bold. In the first column, the MXM matrix is the GTR matrix for the data set represented in the column (Microsporidia data set), showing that the ELM matrix gives similar log-likelihoods to the GTR exchangeability rates.
Fig. 1. Plots showing the comparison between true and estimated parameters for all 100 simulations. For the box plots, a horizontal line at zero represents perfect estimation of the true parameters. a) Histogram showing the differences between true and estimated exchangeabilities for all entries. b) Box plot showing the sum of absolute differences between the true (LG) and estimated exchangeabilities. c) Box plot showing the differences between true and estimated weights for the four classes used to simulate the data. d) Box plot showing the differences between true and estimated branch lengths. Branches are ordered from longest to shortest (which is why variability decreases from left to right).
Fig. 2. Entry-wise comparison between the mean exchangeabilities estimated from the simulated data and the LG exchangeabilities used to generate the data. Each dot represents an entry in the exchangeability matrix, where the x-coordinate is the mean estimated exchangeability over all 100 simulations and the y-coordinate is the corresponding LG entry. Each point is labeled with the two amino acids it represents. All circles have equal sizes, chosen to fit the label of the exchangeability they represent.
Table 2
The log-likelihoods, number of free parameters (denoted k, which also accounts for the 77 branch lengths in the tree), AIC, and BIC obtained from fitting models MXM+C60+G4, LG+C60+G4, FM+F+G4, and FM+C60+G4. The best values per column are shown in bold. Both exchangeability matrices MXM and FM were estimated from the data, with the former estimated under C60 and the latter with the single frequency class (denoted by "+F" in IQ-TREE2) based on the frequencies of amino acids in the alignment.

Fig. 3. Entry-wise comparison between the MXM matrix, obtained from fitting a GTR matrix to the Microsporidia data set, and the LG exchangeabilities. Each dot represents an entry in the exchangeability matrix, where the x-coordinate is the entry of the MXM matrix and the y-coordinate is its corresponding LG matrix entry. Each point is labeled with the two amino acids it represents. All circles have equal sizes, chosen to fit the label of the exchangeability they represent.
In the Supplementary Material online, we have included an IQ-TREE2 sample command line to estimate exchangeabilities as was done in this work.
Eutrophication and warming effects on long-term variation of zooplankton in Lake Biwa
We compiled and analyzed long-term (1961–2005) zooplankton community data in response to environmental variations in Lake Biwa. Environmental data indicate that Lake Biwa experienced eutrophication (according to the total phosphorus concentration) in the late 1960s, recovered to a normal trophic status around 1985, and has exhibited warming since 1990. Total zooplankton abundance showed a significant correlation with total phytoplankton biomass. Following a classic pattern, the cladoceran/calanoid and cyclopoid/calanoid abundance ratios were related positively to eutrophication. The zooplankton community exhibited a significant response to the boom and bust of phytoplankton biomass as a consequence of eutrophication, re-oligotrophication, and warming. Moreover, our analyses suggest that the Lake Biwa ecosystem exhibited a hierarchical response across trophic levels; that is, higher trophic levels may show a more delayed response, or no response, to eutrophication than lower ones. We tested the hypothesis that the phytoplankton community can better explain the variation of the zooplankton community than bulk environmental variables, considering that the phytoplankton community may directly affect zooplankton succession through predator-prey interactions. Using a variance partition approach, however, we did not find strong evidence to support this hypothesis. We further aggregated zooplankton according to their feeding types (herbivorous, carnivorous, omnivorous, and parasitic) and taxonomic groups, and analyzed the aggregated data. While the pattern remains similar, the results are less clear than those based on the finely resolved data. Our research suggests that zooplankton can be bio-indicators of environmental changes; however, the efficacy depends on data resolution.
In this research, we investigated eutrophication and warming effects on the zooplankton community of Lake Biwa. Lake Biwa is the largest lake in Japan; it provides great ecological value, such as high biodiversity, and high economic value, including transportation, drinking water, and fisheries (Kumagai, 2008). As with many lakes in the world, Lake Biwa has experienced eutrophication and warming over the past several decades. An increase in nutrient loading due to urbanization started in the 1960s; subsequently, blooms of Uroglena americana have occurred since 1977 and cyanobacteria blooms since 1983 (Kumagai, 2008). A water treatment regulation was enforced in 1982, and nutrient loading was progressively reduced and then stabilized after 1985 (Kumagai, 2008). However, since the late 1980s, the air temperature has risen quickly, causing another problem for the Lake Biwa ecosystem (Hsieh et al., 2010). Changes in nutrient and physical conditions due to eutrophication and warming effects and the consequent reorganization of the phytoplankton community in Lake Biwa have been documented (Hsieh et al., 2010). Here, we extend the trophic level upwards and investigate the effects of those environmental changes on the zooplankton community.
In Lake Biwa, zooplankton studies have focused on their role in foodweb dynamics (Nagata and Okamoto, 1988; Urabe et al., 1996; Yoshida et al., 2001a; Kagami et al., 2002) and nutrient cycling (Urabe et al., 1995; Yoshimizu and Urabe, 2002; Yoshimizu et al., 2001; Elser et al., 2001). The seasonal succession of the zooplankton community has also been documented (Yoshida et al., 2001b; Miura and Cai, 1990). However, how eutrophication and warming might have affected zooplankton in Lake Biwa over the past half century has rarely been explored. The only exception is the paleo-limnological study by Tsugeki et al. (2003), which examined remains of three cladoceran and two rhizopod species in a sediment core with a resolution of roughly three years and showed fluctuations of these taxa in the 20th century. They concluded that the increased zooplankton abundance from the 1960s to the 1980s is likely due to bottom-up effects driven by eutrophication (Tsugeki et al., 2003), and this eutrophication effect has been shown to propagate to fish (Nakazawa et al., 2010). In fact, time series data of the zooplankton community have been collected by the Shiga Prefecture Fisheries Experimental Station (SPFES) since 1962 for fisheries management purposes. Curiously, these data have never been analyzed with respect to environmental variations.
Here, we compiled published records of the zooplankton community by the SPFES. Together with the environmental and phytoplankton data compiled in Hsieh et al. (2010), we investigated the long-term variation of zooplankton communities in response to eutrophication and warming from 1962 to 2005. Most studies concerning lake zooplankton have focused on daphnids (Wagner and Benndorf, 2007; Benndorf et al., 2001; Straile, 2000; George et al., 1990) and copepods (Seebens et al., 2007; Winder et al., 2009b; Anneville et al., 2007) and occasionally on rotifers (Molinero et al., 2006). In this research, we investigated environmental effects on the whole mesozooplankton community in a hierarchical manner. First, using the highly resolved taxonomic data (species or genus level), we investigated how eutrophication and warming have driven the reorganization of the zooplankton community. Second, we aggregated the zooplankton data according to taxonomy (Cladocera, Copepoda, Rotifera, and Protista) or feeding types (herbivorous, carnivorous, omnivorous, and parasitic), because taxonomy or feeding type may have determined the responses of zooplankton to changes in their prey field (Kawabata, 1988; Urabe et al., 1996; Yoshida et al., 2001b; Anneville et al., 2007). Such aggregation also allowed us to investigate the effects of data resolution. Third, we examined whether the ratios of cyclopoid/calanoid and of cladoceran/calanoid can be used as indicators of eutrophication (Ravera, 1980; Kane et al., 2009). Finally, we studied the total zooplankton abundance in response to eutrophication and re-oligotrophication. In addition, at each level of analysis we tested the hypothesis that the phytoplankton community can better explain the variation of the zooplankton community than bulk environmental variables (such as total phosphorus, total phytoplankton biomass, or temperature). This hypothesis is based on the observation that changes in the zooplankton community may be critically affected by changes in the phytoplankton community through predator-prey interactions and species competition (Kawabata, 1988; Yoshida et al., 2001b; Polli and Simona, 1992; Magadza, 1994; Anneville et al., 2007).
Zooplankton data
Zooplankton samples were collected with a Kitahara's closing net (139 µm mesh size, 25 cm diameter) in four depth intervals (0-10 m, 10-20 m, 20-40 m, and 40-75 m) by the Shiga Prefecture Fisheries Experimental Station (SPFES) at five stations (Fig. 1). The number of depth intervals varied according to the bathymetry of the stations (e.g. only 0-10 m samples were taken at the shallow stations 1 and 5). The samples were fixed in 5 % formalin and enumerated under microscopes. While sampling was conducted monthly, zooplankton identification and enumeration were carried out only quarterly for most years. We digitized zooplankton abundance data from annual reports published by the SPFES from 1962 to 2005. For each species, we calculated the depth-integrated zooplankton density (10⁴ ind. m⁻²) for each station, averaged the densities of the five stations into the quarterly mean, and finally averaged the quarterly data into the annual mean. While the seasonality or phenology of zooplankton might also be affected by environmental variations (Anneville et al., 2007), we focused on interannual variation in this study. The taxonomic resolution of zooplankton changed over time and varied among groups. To ensure consistency, only genus-level data were used for some taxa. Only the taxa occurring in ≥15 yr of the 44 sampling years were investigated for this study, which amounted to a total of 20 taxa (6 species and 14 genera, Fig. 2). The only exception is the total zooplankton abundance, which consists of all zooplankton. For simplicity, we refer to the species/genus-resolved dataset as "genus data" hereafter. Note that among these 20 taxa, Trichodina spp. are parasitic on aquatic animals and were often collected by plankton nets. We compared our data with those reported by Miura and Cai (1990) from 1965 to 1979 (by visual inspection of their figures) and found generally close agreement, although their zooplankton samples were collected from station I (Fig. 1).
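As a concrete illustration of this averaging pipeline, here is a minimal sketch in Python (pandas assumed; the file name and column names are hypothetical placeholders, not the SPFES data format):

```python
import pandas as pd

# Hypothetical long-format table: one row per net haul, with the density
# already expressed per sampled depth interval.
df = pd.read_csv("zooplankton_counts.csv", parse_dates=["date"])
# columns assumed: date, station, depth_layer, species, density

# 1. Depth-integrated density per station, date, and species:
#    sum the densities over the sampled depth intervals.
per_station = (df.groupby(["date", "station", "species"], as_index=False)
                 ["density"].sum())

# 2. Quarterly mean: average across the five stations and across the
#    sampling dates falling within each quarter (both steps combined here).
per_station["year"] = per_station["date"].dt.year
per_station["quarter"] = per_station["date"].dt.quarter
quarterly = (per_station
             .groupby(["year", "quarter", "species"])["density"].mean())

# 3. Annual mean: average the (up to four) quarterly means per species.
annual = quarterly.groupby(["year", "species"]).mean().unstack("species")
```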
Environmental and phytoplankton data
The details of the environmental and phytoplankton data can be found in Hsieh et al. (2010). Here, we focused only on environmental data directly representing water warming and changes in trophic status in Lake Biwa. Lake surface water temperature data (average of 0-20 m) were obtained from the SPFES for the same sampling stations (Fig. 1). The surface water temperature exhibited substantial interannual fluctuations superimposed on a long-term increasing trend (Fig. 3a).
The average total phosphorus (TP) in the upper 20 m was used to represent the trophic status of the lake (see Hsieh et al., 2010 for the data source). The TP increased quickly after 1967, reached a maximum in 1974, declined until 1985, and fluctuated around a stable value thereafter (Fig. 3b). In addition to the local environmental data, climate indices were included in the analyses. We investigated the Arctic Oscillation index (AO), a climate pattern known to influence the weather conditions of Japan (Thompson and Wallace, 1998). The air and water temperatures of Lake Biwa were significantly related to the AO index (Hsieh et al., 2010). We also investigated the Pacific Decadal Oscillation (PDO) (Mantua et al., 1997) and the Southern Oscillation Index (SOI) (Trenberth, 1984). These basin-scale patterns have been shown to influence the climate of Japan through air-sea interactions (Miyazaki and Yasunari, 2008; Jin et al., 2005). Their influences on marine zooplankton have been studied (Chiba et al., 2006); however, their effects on the lakes of Japan are not clear. Phytoplankton community data include the time series from 1978 to 2003 collected by the Lake Biwa Environmental Research Institute (station L in Fig. 1) and those from 1962 to 1991 collected by the SPFES (stations 1 to 5 in Fig. 1), as detailed in Hsieh et al. (2010). We integrated these two time series to arrive at a phytoplankton total biomass (carbon) time series (Fig. 3c). While we acknowledge that uncertainty may exist in this integration, this proxy represents the long-term variation of phytoplankton biomass in Lake Biwa. See the procedure and justification in Supplement A.
Data analysis
Two main environmental issues have been associated with the Lake Biwa ecosystem: eutrophication and warming. We investigated these environmental effects on the zooplankton community at four levels: (1) highly resolved zooplankton genus data, (2) aggregated zooplankton groups according to feeding types or a higher taxonomic level, (3) the ratios of cyclopoid/calanoid and of cladoceran/calanoid, and (4) total zooplankton abundance. The analytical procedure is illustrated in Fig. 4.
We started from the genus data (shown in Fig. 2). First, univariate correlation analyses (with a lag of up to two years, considering the short generation time of zooplankton) were used to investigate long-term relationships between the environmental factors and zooplankton abundance on an interannual scale. The stationary bootstrap approach (Politis and Romano, 1994) with an accelerated bias correction was used to compute 95 % confidence limits and to perform a hypothesis test in order to account for serial dependence in time-series data (Hsieh et al., 2009). Second, the long-term variation of zooplankton communities was examined using principal component analysis (PCA) (Legendre and Legendre, 1998). Third, we linked the temporal pattern of the zooplankton community to environmental variables using redundancy analysis (RDA) (Legendre and Legendre, 1998). A stepwise procedure was used to select the significant variables (using α = 0.05) and to exclude irrelevant variables.
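A minimal sketch of the stationary bootstrap correlation test follows, assuming numpy; block starts are uniform and block lengths geometric, as in Politis and Romano (1994), but the accelerated bias correction used above is omitted and a plain percentile interval is shown instead (the mean block length of 3 years is an arbitrary illustration):

```python
import numpy as np

def stationary_bootstrap_corr(x, y, n_boot=2000, mean_block=3, seed=0):
    """Bootstrap the Pearson correlation of two aligned annual series,
    resampling index blocks of geometric length to preserve serial
    dependence. Returns the observed r and a 95% percentile interval."""
    rng = np.random.default_rng(seed)
    n = len(x)
    r_obs = np.corrcoef(x, y)[0, 1]
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.empty(n, dtype=int)
        i = 0
        while i < n:
            start = rng.integers(n)                   # random block start
            length = rng.geometric(1.0 / mean_block)  # geometric block length
            for j in range(length):
                if i == n:
                    break
                idx[i] = (start + j) % n              # wrap around the series
                i += 1
        # Resample (x, y) pairs jointly so their cross-correlation is kept
        reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(reps, [2.5, 97.5])
    return r_obs, (lo, hi)
```

An interval that excludes zero corresponds to a significant correlation at the 5 % level.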
This stepwise procedure selects only the variables that explain the highest variance when collinearity exists among multiple variables in the model (Peres-Neto et al., 2006). Prior to the PCA and RDA analyses, abundance data were transformed as log10(X+1) and then normalized. Finally, using a variance partition approach (Legendre et al., 2005; Cushman and McGarigal, 2002), we calculated the relative contributions of bulk environmental factors and the phytoplankton community in explaining the temporal variation of the zooplankton community.
We tested the hypothesis that the phytoplankton community can better explain the variation of the zooplankton community than bulk environmental variables using a variance partition approach (Pinel-Alloul et al., 1995) and compared whether their contributions differed significantly using a randomization test with 5000 permutations (Peres-Neto et al., 2006). Such tests could only be applied to the data between 1978 and 2003, because detailed phytoplankton community data were only available during this period. To investigate the hypothesis, we aggregated the phytoplankton community data (Supplement B) according to size (0–200 µm³, 200–1000 µm³, 1000–8000 µm³, and >8000 µm³; time series shown in Supplement Fig. B1) (Urabe et al., 1996; Makarewicz et al., 1998), morphology (single-cell, filament, and colony types; time series shown in Supplement Fig. B2) (Anneville et al., 2002b; Hsieh et al., 2010), or taxonomic class (time series shown in Supplement Fig. B3), because these traits potentially influence the trophic interactions between zooplankton and phytoplankton. For the phytoplankton community, we did not consider individual species. Rather, we aggregated phytoplankton into groups according to size, morphology, and higher taxonomy, because zooplankton are unlikely to have the ability to distinguish phytoplankton species.
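A sketch of the variance partition arithmetic itself, with an RDA-style R² computed from a multivariate linear fit (scikit-learn assumed; the adjusted R² and the 5000-permutation significance machinery of Peres-Neto et al. (2006) are simplified away here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def r2_multivariate(X, Y):
    """RDA-style R^2: fraction of the total variance of the (transformed,
    normalized) response matrix Y explained by a linear fit on X."""
    Yhat = LinearRegression().fit(X, Y).predict(X)
    return 1 - ((Y - Yhat) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()

def variance_partition(E, P, Y):
    """Partition the variance of a zooplankton matrix Y between an
    environmental matrix E and a phytoplankton matrix P: unique E,
    unique P, shared, and unexplained fractions (rows are years)."""
    r2_e = r2_multivariate(E, Y)                   # E alone
    r2_p = r2_multivariate(P, Y)                   # P alone
    r2_ep = r2_multivariate(np.hstack([E, P]), Y)  # E and P together
    unique_e = r2_ep - r2_p
    unique_p = r2_ep - r2_e
    shared = r2_e + r2_p - r2_ep
    return unique_e, unique_p, shared, 1 - r2_ep
```

The significance of the difference between the two unique fractions would then be judged against a null distribution built from repeated permutations, as in the randomization test described above.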
Next, we aggregated the zooplankton data according to their taxonomic groups (Supplement Fig. C1) or feeding types (Supplement Fig. C2). Detailed information on the feeding types and taxonomic groups for each taxon was collected from the literature and is provided in Supplement D. We note that feeding types and taxonomic groups are not independent. While body size is an important life history trait to consider for zooplankton (Gillooly, 2000; Hansen et al., 1997), we do not have size information for each taxon through time, and the size of many crustaceans can vary substantially. The aggregated zooplankton data were analyzed with respect to the environmental variables and the phytoplankton community in the same manner as described above, as were the ratios of cyclopoid/calanoid and of cladoceran/calanoid. Finally, for the total zooplankton abundance, simple stationary bootstrap correlation was employed to see how it responded to eutrophication and warming.
Zooplankton community responding to eutrophication and warming
The zooplankton community exhibited a substantial change from 1962 to 2005 when the data were investigated at the species-genus level (Fig. 2). The results of univariate correlation analyses indicated that several taxa were correlated either with changes in the trophic status of the lake (e.g. Cyclops spp., Daphnia spp., Diaphanosoma brachyurum, Eodiaptomus japonicus, Keratella spp., Mesocyclops leuckarti, Polyarthra spp., and Trichocerca spp.) or with water temperature and/or the Arctic Oscillation index (e.g. Bosmina longirostris, Conochilus spp., Cyclops spp., Eodiaptomus japonicus, Epistylis spp., Keratella spp., Leptodora kindtii, Ploesoma spp., Synchaeta spp., and Trichodina spp.), although still other taxa did not show any correlation (Table 1). The results of principal component analysis revealed the long-term variation of the zooplankton community; the first and second principal components together explained 35.5 % of the variance (Fig. 5a and b). The first principal component showed a first peak around 1970 and a second peak around 1980, possibly influenced by eutrophication (cf. Fig. 3b and c); it then changed to a negative phase around 1990 due to an increase in water temperature (cf. Fig. 3a). The second principal component also showed a first peak around 1970, turned into a negative phase from 1974 to 1980, and has fluctuated since then. The long-term zooplankton community variation was significantly affected by environmental changes (randomization test, p < 0.05), based on the RDA results (Fig. 5c). The variation from the early 1960s to the late 1980s was related to the change in the trophic status of the lake, while the variation from the late 1980s onwards was caused by warming. The RDA result based on data from 1978 to 2003 (Fig. 6a) exhibited a pattern similar to that based on data from 1962 to 2005 (cf. Fig. 5c), showing that eutrophication (signified by TP) and warming have significantly driven changes in the zooplankton community.
Responses of aggregated zooplankton to environmental variations
When we investigated the zooplankton time series aggregated at a higher taxonomic level (Cladocera, Copepoda, Rotifera, and Protista), we found that the aggregated taxa still showed significant correlations with environmental variables (Table 2). Rotifera correlated positively with TP with a 1-yr lag and with concurrent phytoplankton biomass. Cladocera showed a positive correlation with TP and PDO with a 1-yr lag and with phytoplankton biomass with a 2-yr lag. Copepoda did not show any correlation, and Protista correlated positively with water temperature and negatively with concurrent SOI, and with TP with a 2-yr lag. Note that when lagged and concurrent correlations were all significant, we considered the correlation of the strongest strength. The results of RDA based on the aggregated taxa also revealed effects of eutrophication and warming (Fig. 5d) similar to the RDA results based on the genus data (cf. Fig. 5c); however, the gradient of increased eutrophic status and warming is less clear. When we aggregated the zooplankton data according to feeding types (herbivorous, carnivorous, omnivorous, and parasitic), correlations between zooplankton and environmental factors were also found (Table 2). Herbivorous zooplankton correlated positively with TP with a 1-yr lag and with concurrent phytoplankton biomass. Carnivorous and omnivorous zooplankton did not show any correlation; parasitic zooplankton correlated positively with water temperature. The results of RDA based on the feeding types again revealed effects of eutrophication and warming (Fig. 5e) similar to the RDA results based on the genus data (cf. Fig. 5c); however, the gradient of increased eutrophic status and warming is again less clear.
Ratios of cladoceran/calanoid and of cyclopoid/calanoid and the total zooplankton abundance as indicators of eutrophication
The ratios of cladoceran/calanoid (Fig. 7a) and of cyclopoid/calanoid (Fig. 7b) may be indicative of the trophic status of the lake. The cladoceran/calanoid ratio correlated positively with TP with a 2-yr lag and with phytoplankton biomass with a 1-yr lag, and the cyclopoid/calanoid ratio correlated positively with TP with a 2-yr lag and with phytoplankton biomass with a 1-yr lag (Table 3). The total zooplankton abundance (Fig. 3d) correlated positively with TP with a 1-yr lag and with concurrent phytoplankton biomass (stationary bootstrap test, p < 0.05).
Differential effects of environmental variables and phytoplankton community on zooplankton
We tested whether the phytoplankton community can better explain the variation of the zooplankton community than bulk environmental variables by comparing their relative contributions in explaining the temporal variation of the zooplankton community. For this investigation, we used only data from 1978 to 2003, when phytoplankton community data were available. When considering genus-level zooplankton data, both the environmental factors (mainly TP, phytoplankton biomass, and temperature) and the phytoplankton community (aggregated according to size, morphology, or taxonomic class) explained a significant amount of variance (Table 4). Together, these two matrices explained >50 % of the zooplankton variance. The environmental variables explained 38.05 % of the variance, and the phytoplankton community explained 15.69 % to 38.05 % of the variance depending on how the phytoplankton data were aggregated. Still, a significant common fraction existed between the environmental variables and the phytoplankton community. When we partitioned the variance, the environmental variables explained slightly more variance than the phytoplankton data aggregated into either size or morphological groups, whereas the phytoplankton class-level data explained slightly more variance than the environmental variables (Table 4). However, the differences were not statistically significant (0.11 < p < 0.67). The best combination of explanatory variables consisted of the environmental matrix and the class-level phytoplankton data, which together explained 72.1 % of the variance. The most parsimonious RDA results based on forward selection are shown in Fig. 6b; only temperature and the biomasses of Cryptophyceae and Cyanobacteria were retained in the final model.
When considering the zooplankton feeding groups, the environmental variables explained a significant amount of variance (38.8 %), but the phytoplankton community explained only a small amount of variance, except for the class-level phytoplankton data (Table 5). Together, these two matrices explained 50 % of the zooplankton variance. When we partitioned the variance, the environmental variables explained more variance than the phytoplankton data; however, the difference remained statistically insignificant (0.13 < p < 0.29). Similar results were found when considering the zooplankton taxonomic groups (Table 5).
The cladoceran/calanoid ratio was better explained by the phytoplankton matrix than by the environmental matrix, but the difference remained statistically insignificant (0.34 < p < 0.56) (Table 6). The phytoplankton community explained a significant amount of variance (46.52–52.49 %), and the environmental variables explained 28.41 % of the variance. Together, these two matrices explained >60 % of the variance. The cyclopoid/calanoid ratio was also better explained by the phytoplankton matrix (40.9–68.84 %) than by the environmental matrix (36.34 %), except for the comparison of the environmental variables versus the phytoplankton morphological groups (Table 6). However, the difference was statistically insignificant (0.15 < p < 0.87). Together, these two matrices explained >50 % of the variance. Similar to the results based on the genus data, the best combination included the environmental variables and the class-level phytoplankton data, explaining 74.75 % and 87.38 % of the variance for the cladoceran/calanoid and cyclopoid/calanoid ratios, respectively (Table 6).
The zooplankton community in response to environmental variations
The ecosystem of Lake Biwa has experienced a dramatic change in trophic status (Fig. 3b and c) and thermal regime (Fig. 3a) during the past half century. These environmental changes have in turn driven the reorganization of the zooplankton community (Figs. 2 and 5). This is particularly visible in the RDA results, where the gradient of trophic variation (TP and phytoplankton biomass) and temperature variation (lake surface temperature and AO) has had significant effects on the evolution of the zooplankton community (Fig. 5c). Such effects of eutrophication–reoligotrophication processes and warming on zooplankton communities have also been observed in other lakes (Jeppesen et al., 2003; Stige et al., 2009; Straile and Geller, 1998; Lovik and Kjelliberg, 2003; Anneville et al., 2007).
In Lake Biwa, the total zooplankton abundance showed a significant positive correlation with TP and phytoplankton biomass (Fig. 3b, c, and d), suggesting bottom-up control. Nevertheless, not every taxon showed a significant positive response; among the 20 taxa, only six showed a significant positive correlation with TP or phytoplankton biomass (Table 1). We further investigated whether a taxon's positive response to trophic status depends on its feeding type or taxonomy (Supplement D) and found no significant relationship (logistic regression, p > 0.4). In addition to the bottom-up effects, the thermal regime also had significant effects; nine of the 20 taxa exhibited a significant correlation (negative or positive) with temperature (Table 1). Nevertheless, whether a taxon responded to water temperature did not depend on its feeding type or taxonomy (logistic regression, p > 0.4). While warming effects on zooplankton have been widely documented (George and Harris, 1985; Molinero et al., 2007), the mechanisms underlying the complex pattern in the Lake Biwa zooplankton are far from clear. Top-down effects from planktivorous fish or other invertebrates in the zooplankton community may also be important (Carpenter and Kitchell, 1988) but could not be examined here due to a lack of data. Future monitoring of Lake Biwa should include higher trophic levels. In addition, because water warming was accompanied by the reoligotrophication process, it may be difficult to discern the effects of trophic status from those of warming.
Among the 20 taxa, five showed a significant correlation with AO (Table 1). This linkage is likely due to temperature effects, as AO showed a significant positive correlation with Lake Biwa water temperature (Hsieh et al., 2010). A few taxa showed a correlation with PDO or SOI (Table 1). While PDO and SOI have been shown to affect marine zooplankton through changes in circulation around Japan (Chiba et al., 2006), their effects on Japanese lake ecosystems are not known.
While 15 of the 20 taxa showed a significant correlation with the environmental variables investigated here, the strengths of all the correlations were small. We suspected that complex competition among zooplankton (Einsle, 1983) or a compensation mechanism may be at work within a trophic level (defined as herbivores, carnivores, or omnivores). If compensation were important, negative covariance should prevail (Houlahan et al., 2007). Following Houlahan et al. (2007), we investigated the covariance structure among the taxa within the same trophic level using the binomial test. We found no evidence of compensation within the same trophic level: within the herbivorous level, 31 of 66 pairwise covariances were negative (binomial test, p = 0.712), 2 of 6 for the carnivores (binomial test, p = 0.688), and 1 of 3 for the omnivores (binomial test, p = 1). Our results echo the findings of Houlahan et al. (2007) that compensatory dynamics are not common in natural ecological communities.
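The binomial test here simply asks whether negative pairwise covariances are more frequent than the 50 % expected under independence. A sketch reproducing the reported p-values (scipy assumed; binomtest requires SciPy ≥ 1.7):

```python
from scipy.stats import binomtest

# (negative covariances, total pairwise comparisons) per trophic level
levels = {"herbivores": (31, 66), "carnivores": (2, 6), "omnivores": (1, 3)}

for name, (k, n) in levels.items():
    result = binomtest(k, n, p=0.5, alternative="two-sided")
    print(f"{name}: {k}/{n} negative, p = {result.pvalue:.3f}")
# Matches the values in the text: p ≈ 0.712, 0.688, and 1.000
```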
Furthermore, we noted that several zooplankton and phytoplankton taxa in Lake Biwa have exhibited phenological shifts during the past several decades. These shifts may cause trophic mismatch and result in community changes (Anneville et al., 2002a, b). Detailed analyses of the phenology of zooplankton are beyond the scope of this study but should be a critical next step.
Issues on data resolution
We investigated the issue of zooplankton data resolution by aggregating the zooplankton time series according to taxonomy or feeding types. The aggregation according to feeding type is akin to the trait-based approach (Menezes et al., 2010). As expected, herbivorous zooplankton showed a positive response to phytoplankton biomass (Table 2). Parasitic organisms (Trichodina spp.) showed a positive relationship with temperature (Table 2); thus parasites may be more productive in warm water, as they do not face food limitation. Carnivorous and omnivorous zooplankton did not show a clear response to changes in trophic status, although carnivorous zooplankton indeed peaked in 1970 when eutrophication was serious (Table 2 and Supplement Fig. C2b). The omnivores fluctuated but maintained long-term constancy (Supplement Fig. C2c). When aggregation was done based on taxonomy, we found that Rotifera (Supplement Fig. C1a) and Cladocera (Supplement Fig. C1b) exhibited a positive correlation with phytoplankton biomass (Table 2). This is not surprising because most of them belong to the group of herbivores (Supplement D). Protista correlated positively with water temperature, mainly attributable to Epistylis spp. and Trichodina spp. (Fig. 2 and Supplement Fig. C1d).
While the aggregated time series (reduced resolution) still revealed a pattern of changes in trophic status and temperature regime, the community's sensitivity to environmental variation was reduced. This is true for both the aggregated taxonomic groups (Fig. 5d) and the feeding groups (Fig. 5e). The RDA results remained significant, but the gradient was less clear. Together with the aforementioned finding that a taxon's response to water temperature or trophic status does not depend on its feeding type or taxonomy, this suggests that the trait-based approach provides limited additional information for interpreting the dynamics of the Lake Biwa zooplankton.
Zooplankton as bio-indicators of environmental changes
The zooplankton community has indeed signaled environmental changes in Lake Biwa, such as eutrophication and warming, according to the PCA (Fig. 5a and b) and RDA results (Figs. 5c-e and 6). However, the PCA and RDA scores are not ideal indicators, because the scores change whenever new data are included in updated analyses. From the genus data, we found that Cyclops spp., Keratella spp., and Trichocerca spp. showed a clear response to eutrophication, and Epistylis spp., Ploesoma spp., Synchaeta spp., and Trichodina spp. showed a clear response to warming (Fig. 2 and Table 1). These taxa could potentially make good bio-indicators.
The ratios of cladoceran/calanoid (Fig. 7a) and of cyclopoid/calanoid (Fig. 7b) represent good indicators of eutrophication. This is consistent with previous studies suggesting that increasing lake trophic status favors cyclopoid over calanoid copepods and cladocerans over calanoids (Rognerud and Kjellberg, 1984; Patalas, 1972; Straile and Geller, 1998). We noticed that these ratios generally showed a lagged response (Table 3). In fact, for the cladoceran/calanoid ratio, the correlation became stronger as more lags were taken (maximal at a 4-yr lag). In such a case, an aggregated time series becomes informative.
Zooplankton are reasonable bio-indicators of environmental changes. Combining these data with environmental and phytoplankton data allows one to investigate the energy fluxes of the pelagic ecosystem of Lake Biwa in the context of a changing climate. For example, a steady-state box model including bacteria, phytoplankton, zooplankton, and detritus has been developed for the pelagic ecosystem of Lake Biwa using inverse methods based on limited data collected from snapshots in summer, as well as some parameters from the literature (Niquil et al., 2006). Our rich data, with high temporal resolution and coverage, may potentially be used to investigate how energy flows change with time and, in particular, with environmental variation.
Testing the hypothesis that the phytoplankton community explains zooplankton variation better than bulk measurements
We tested the hypothesis that the phytoplankton community can better explain the variation of the zooplankton community than bulk environmental variables, because changes in the zooplankton community may be critically affected by changes in the phytoplankton community through predator-prey interactions and species competition. However, neither our results based on the genus data nor those based on the aggregated time series support this idea. Generally, the bulk environmental variables explained a slightly higher amount of variance (Tables 4 and 5). The only exception is the phytoplankton taxa, which explained an almost equal amount of variance to the bulk environmental variables (Tables 4 and 5). By contrast, for the ratios of cladoceran/calanoid and of cyclopoid/calanoid, the phytoplankton community explained more variance (Table 6). Thus, absolute group abundances tracked bulk variables better, while ratios tracked the phytoplankton community better. However, one should note that these variance partition analyses were based on data only from 1978 to 2003; this limited data series may have hampered the resolution of our analyses. Complex predator-prey interactions and species competition play important roles in the Lake Biwa ecosystem, which warrants further study.
Hierarchical responses across trophic levels
From a system point of view, differential responses of different trophic levels to environmental changes have been suggested (Allen et al., 1987). For example, one might expect a quick response of phytoplankton biomass to TP, but a delayed response of herbivores and an even more delayed response, or no response, of carnivores. Such a concept is biologically intuitive, although it has rarely been tested in lake ecosystems. The only comprehensive analysis was carried out in Müggelsee and revealed no such evidence (Wagner and Adrian, 2009). In Lake Biwa, the total phytoplankton biomass showed a 1-yr lagged response to TP (Hsieh et al., 2010). However, the total zooplankton abundance showed a concurrent correlation with phytoplankton biomass rather than a delayed response (Fig. 3). When taxonomic groups were considered, Cladocera showed a delayed response, but other groups did not; when different trophic levels were considered, no delayed response was found (Table 2). Interestingly, while herbivores showed a significant correlation with phytoplankton biomass, the higher trophic levels (such as carnivores and omnivores) exhibited no correlation with either phytoplankton or herbivores. In particular, omnivores showed long-term constancy in abundance, perhaps because they can forage on a wider spectrum of food sources. These results imply that the effects of external forcing dissipated up through the trophic levels, as suggested by Allen et al. (1987). Furthermore, the ratios of cladoceran/calanoid and of cyclopoid/calanoid showed a 1-yr lagged response to phytoplankton biomass (Table 3). When considering individual taxa, six taxa showed a response to phytoplankton biomass, 66.7 % of which were delayed responses (Table 1). By contrast, nine taxa showed a response to water temperature, 55.6 % of which were delayed responses (Table 1). Thus, for Lake Biwa, we saw some evidence of hierarchical responses across trophic levels, although the results are not unanimous. Moreover, whether or not a hierarchical response occurred may depend on the type of external forcing.
Conclusions
We compiled and analyzed long-term zooplankton community data in response to eutrophication–oligotrophication and warming in Lake Biwa. While the total zooplankton abundance showed a significant correlation with total phytoplankton biomass (Fig. 3), the zooplankton community changed substantially in response to changes in trophic status and water temperature (Figs. 5 and 6). Food (phytoplankton) availability sets the carrying capacity for the total zooplankton; however, complex interactions occurred among zooplankton taxa. Similar observations have been made for the Lake Biwa phytoplankton community in response to TP levels (Hsieh et al., 2010). Higher trophic levels may show a more delayed response, or no response, to eutrophication than lower ones, but this kind of hierarchical response was not clear with respect to temperature. Our results indicate that whether a taxon responded to eutrophication or warming did not depend on its feeding type or taxonomy. Moreover, aggregating the time series based on feeding types or taxonomic groups reduced the sensitivity of the zooplankton community as a bio-indicator (Fig. 5c-e). Traits other than feeding and taxonomy should be investigated in the future. However, following a classic pattern, the cladoceran/calanoid and cyclopoid/calanoid abundance ratios (Fig. 7) were related positively to eutrophication and can be used as good indicators. To summarize, the zooplankton community may be a reasonable bio-indicator of environmental changes in Lake Biwa; however, hierarchical responses across trophic levels should be borne in mind. Our analyses did not support the idea that the phytoplankton community can explain the variation of the zooplankton community better than bulk environmental variables. In addition, we found no compensatory dynamics within a trophic level. Perhaps complex nonlinear species competition and predator-prey interactions prevail in Lake Biwa.
Fig. 1. Map showing sampling stations in Lake Biwa. Stations 1 to 5 are the Shiga Prefecture Fisheries Experimental Station stations; station L is the long-term monitoring station of the Lake Biwa Environmental Research Institute; station I is the environmental monitoring station of Kyoto University.
Fig. 3. Time series of (a) annual averaged lake surface water temperature, (b) estimated average total phosphorus in the upper 20 m, (c) estimated phytoplankton carbon biomass, and (d) total water-column-integrated zooplankton abundance averaged over the 5 stations. Zooplankton abundance is significantly correlated with total phosphorus with a 1-yr lag and with concurrent phytoplankton carbon (stationary bootstrap test, r = 0.387 and r = 0.534, respectively, p < 0.05).
Fig. 4. Schematic illustration of the analytical procedure at various levels.
Fig. 5. Principal component analysis of dominant taxa (a and b), and biplots of redundancy analysis relating years and zooplankton taxa and depicting how environmental factors affected the zooplankton community dynamics (c-e). The analyses were based on data at the (c) genus level, (d) aggregated higher taxonomic groups, and (e) aggregated feeding types. In (c), the black arrow indicates a gradient of increased eutrophic status, and the purple arrow indicates a warming gradient. In (d) and (e) the gradients of increased eutrophic status and warming are less clear.
Fig. 6. Biplots of redundancy analysis relating years (1978 to 2003) and zooplankton taxa and depicting (a) how temperature and TP, and (b) how temperature and the biomasses of Cryptophyceae and Cyanobacteria affected zooplankton dynamics. The black arrow indicates a gradient of increased eutrophic status, and the purple arrow indicates a warming gradient.
Table 1. Results of the correlation analyses between zooplankton abundance and environmental variables. Note: only significant variables are shown (based on the stationary bootstrap test with α = 0.05). + indicates a positive correlation, − indicates a negative correlation, and lag 1 yr indicates that the zooplankton response is one year behind. If 0-, 1-, and 2-yr-lag correlations are all significant, only the best fit is retained. The results are not adjusted for multiple tests because we wished to explore potential relationships between zooplankton abundances and environmental variables. While correlations exist, the average explained variance is less than 15 %. LST, Lake Surface water Temperature; TP, Total Phosphorus; Phyto, phytoplankton carbon biomass; AO, Arctic Oscillation; PDO, Pacific Decadal Oscillation; SOI, Southern Oscillation Index.
Table 2. Results of correlation analyses of zooplankton abundances, categorized according to their feeding types and taxonomic order, versus environmental variables. Significant correlations are based on a stationary bootstrap test with α = 0.05. The results are not adjusted for multiple tests because we wished to explore potential relationships between zooplankton group abundances and environmental variables. We investigated delayed responses of zooplankton; however, we included only 1- and 2-yr lagged correlations, considering the short generation time of zooplankton.
Table 3. Results of correlation analyses of the zooplankton group ratios versus environmental variables.
Table 4. Results of variance partitioning to investigate the relative contributions of the environmental and phytoplankton matrices in explaining the variation of the zooplankton community based on the genus data.
Table 5. Results of variance partitioning to investigate the relative contributions of the environmental and phytoplankton matrices in explaining the variation of the zooplankton groups. Testing the difference in the two factors: p = 0.4341, 0.2368, and 0.4821. Prob.: p-value from a randomization test with 5000 permutations. The interaction component cannot be tested statistically.
Table 6. Results of variance partitioning to investigate the relative contributions of the environmental and phytoplankton matrices in explaining the ratios of cladoceran/calanoid and cyclopoid/calanoid. Testing the difference in the two factors: p = 0.5521, 0.5597, 0.3449, 0.5873, 0.8674, and 0.1544. Prob.: p-value from a randomization test with 5000 permutations. The interaction component cannot be tested statistically.
|
2018-08-30T21:59:14.793Z
|
2011-05-30T00:00:00.000
|
{
"year": 2011,
"sha1": "2edeaa00bcdf24271a7f15adc1e30a9966646dfc",
"oa_license": "CCBY",
"oa_url": "https://www.biogeosciences.net/8/1383/2011/bg-8-1383-2011.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2edeaa00bcdf24271a7f15adc1e30a9966646dfc",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
217689199
|
pes2o/s2orc
|
v3-fos-license
|
Training Needs Assessment of Women in Small Scale Livestock Production and Its Implication for Socio-economic Empowerment in Oyo State, Nigeria
The aim of this study was to assess the training needs of rural women in livestock production in Oyo State, Nigeria. A multistage random sampling technique was used to collect data from 180 women from two zones (Ibadan/Ibarapa and Ogbomoso) of the Oyo State Agricultural Development Programme (OYSADEP) using a well-structured interview guide. The data were analyzed using frequency counts, percentages, and means, while chi-square was used to test the relationship between variables. Results from the study showed that more than half (57.5%) of the women were into poultry production, having fewer than 30 birds as stock size. Also, 82.2% and 42.3% indicated a need for training on the general management of poultry and on sheep and goat production, respectively. Lack of access to credit (85%), lack of capital (90%), high mortality rate (51.7%), and inadequate information (63.5%) were some of the constraints in livestock farming indicated by the women. Chi-square analysis showed that primary occupation and age were significantly related (p = 0.05) to the training needs of women in pig production. It is therefore recommended that government development strategies be modified to encourage and empower women through training in livestock production and also allow women to have access to credit, as this will allow them to boost their production level.
INTRODUCTION
The term livestock refers to animals domesticated for food and fibre production. Such livestock may include pigs, cattle, goats, sheep, horses, donkeys, mules, various types of poultry (including chickens, geese, ducks, and turkeys), and even aquaculture. Livestock production accounts for one third of Nigeria's GDP, providing food, farm energy, manure, fuel, and transport [1]. The type of livestock reared varies worldwide and depends on factors such as climate, consumer demand, native animals, local tradition, and land type. Nigeria is one of the four leading livestock producers in sub-Saharan Africa; in 1990 the livestock population comprised about 14 million cattle, 23 million goats, and 13 million sheep [2]. However, these figures have since increased to 15.2 million cattle, 28 million goats, and 23 million sheep [3].
Livestock are important in supporting the livelihoods of poor farmers, consumers, traders, and labourers in the developing world [4]. In the sub-Saharan region, livestock is an important sector of agricultural production. Livestock provide a steady stream of food and income [5], help to raise farm productivity, and, for many, offer a livelihood option as they exploit common resources for private gain [6]. An estimated 70% of the rural poor belong to vulnerable groups, including children and women, for whom livestock play an important role not only by providing a source of income but also by conferring status [7].
Women play a significant role in agricultural production in developing countries, particularly in low-income countries in which agriculture accounts for an average of 32% of growth in gross domestic product (GDP) and in which an average of 70% of the country's poor live and work in rural areas. Women make up a substantial majority of the agricultural workforce and produce most of the food that is consumed locally. The large proportion of agricultural production attributed to women makes them important agents of economic development and principal agents of food security and household welfare in both rural and urban areas [8]. The role that women play, and their position, in meeting the challenges of agriculture and livestock production and development are dominant and prominent. Their relevance and significance therefore cannot be overemphasized.
Findings from a study conducted by the United Nations Development Programme (UNDP), as reported by the World Bank [9], revealed that women make up 60-80% of the agricultural labour force in Nigeria. The production of livestock, according to Ndang and Tazuah [10], is predominantly male-dominated, yet women are productive and regular actors in animal husbandry, particularly in the production of micro-livestock. In areas where commercial banking is poorly developed, particularly rural areas, livestock tend to serve as a store of value and a rural bank for most rural dwellers.
Demand for livestock is expected to double in developing countries in the next twenty years [11], making it the fastest growing agricultural sector. Empowering women in this sector will improve their status, increase their income level, and give them a say in the community.
In light of this, the broad objective of this study was to assess the training needs of women involved in livestock production in Oyo State, Nigeria, while the specific objectives were to:

i. describe the personal and production characteristics of women in the study area;
ii. identify the types of livestock reared by women;
iii. investigate areas of training needs of women;
iv. find out the constraints encountered by women in livestock production.

The State is homogeneous, comprising mainly people of the Yoruba ethnic group, but other tribes can also be found [13].
Sampling Design
A multi-stage random sampling technique was used because Oyo State has 33 local government areas divided into four agricultural zones. At the first stage, two zones were randomly selected out of the four agricultural zones of the Oyo State Agricultural Development Programme (OYSADEP): Ibadan/Ibarapa and Ogbomoso. Three local government areas were then randomly selected from each zone to give a total of six local government areas (Ibadan central, Ido, Ibarapa North, Ogbomoso South, Ogo-Oluwa, Surulere). Six villages were also randomly selected from each local government area to give a total of 36 villages in all. Five women were further purposively selected from each village to give a total of 180 rural women, whose responses were used for data analysis.
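A minimal sketch of this multi-stage selection, using Python's random module; the two unselected zone names and all place identifiers are placeholders, not the actual OYSADEP sampling frame:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Placeholder frame: the 4 OYSADEP zones (two names are placeholders);
# 8 LGAs per zone and 12 villages per LGA are invented frame sizes.
zones = ["Ibadan/Ibarapa", "Ogbomoso", "Zone-3", "Zone-4"]
lgas = {z: [f"{z}/LGA-{i}" for i in range(1, 9)] for z in zones}

def villages(lga):
    return [f"{lga}/village-{i}" for i in range(1, 13)]

respondents = []
for zone in random.sample(zones, 2):              # stage 1: 2 of 4 zones
    for lga in random.sample(lgas[zone], 3):      # stage 2: 3 LGAs per zone
        for v in random.sample(villages(lga), 6):  # stage 3: 6 villages
            # stage 4: 5 women per village (purposive, so not randomized)
            respondents += [f"{v}/woman-{i}" for i in range(1, 6)]

print(len(respondents))  # 2 x 3 x 6 x 5 = 180
```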
Method of Data Collection and Analysis
A well-structured interview guide was used to collect primary data from women involved in livestock production. Descriptive statistics such as frequency counts, percentages, and means were used to present results on the personal and production characteristics of the women, the types of livestock reared, and the training needs and constraints of women in livestock production. Chi-square was used to test the relationship between personal characteristics and the training needs of women in livestock production.

Personal Characteristics of Women

Table 1 shows the distribution of women according to their personal characteristics. More than one-third (38.2%) of the women were between 31 and 50 years of age, with a mean age of 52.9 years. Though one may say that the women are getting old, they are still active. Almost half (49.4%) of the women are Muslim, and this may have implications for the rearing of pigs on religious grounds. Also, the majority (60.2%) of the women had one form of education or another. The literacy level of the women is a very important variable that may influence their ability to properly comprehend the new techniques and methods required to bring about positive changes in their attitude, knowledge, and skills. It is also evident from Table 1 that 94.4% of the women were married and 55.2% had a household size of between 5 and 12 persons, with a mean of 7.8 persons. The large family size in the study area reflects a typical rural African setting, which is characterized by large family sizes, as observed by Ekong [14]. The large household size could serve as a useful source of labour for the women, thereby reducing the cost of labour.
Findings from the study further show that civil service was the primary occupation of 24.9% of the women, while 29.8%, 12.7%, and 21.5% indicated crop farming, livestock farming, and mixed farming, respectively, as their primary occupation. Also, 24.9%, 28.2%, 40.3%, and 2.2% indicated civil service, crop farming, livestock farming, and mixed farming, respectively, as their secondary occupation. This implies that very few (12.7%) of the women took livestock farming as their primary occupation, while 40.3% were into livestock farming as a secondary occupation.
The very few women who took livestock farming as their primary occupation were those who received financial assistance from their husbands, as indicated by the women during focus group discussions. Furthermore, the 40.3% who took livestock production as a secondary occupation kept small stock sizes; they were willing to keep larger numbers but received no assistance, either from any group or from their husbands, as mentioned by the women in the study area. The implication of this is that the women may not be able to stand on their own, may not have a voice in society, and will probably experience decreased income and a low standard of living. They may also be unable to pay their children's school fees (especially women who head their households) and may not have access to good health care facilities, all of which may increase the poverty level of the women.

Production Characteristics of Women

Table 2 shows the distribution of women according to their production characteristics. The results show that more than half of the women (57.5%) were into poultry production alone, 8.8% were into the production of sheep and goats, and 26% reared poultry together with sheep and goats. This implies that the women were involved in different types of livestock production as a means of livelihood diversification for income. The majority (95%) of the women had a small stock size of between 1 and 30 birds. Also, very few (1.1%) of the women were into pig production, and this may be because of the religious beliefs of the Muslims. According to Harris [15], the perceived unclean nature of pigs makes them unfit for rearing and consumption among Muslims. Also, 72.9% had a small stock size of five animals or fewer.
This implies that, although the women were engaged in different livestock-rearing activities, they can still be regarded as small-scale livestock farmers.
The annual income of 75.1% of the women was less than or equal to N50,000; this may be attributed to the small stock sizes. Personal and family labour were the sources of labour used on the farm, as indicated by 46.1% and 43.9% of the women, respectively. This may be because the majority of the women (as shown in Table 1) have small stock sizes that may not require hired labour. Furthermore, the majority (68.8%) of the women obtained the capital used for livestock farming from personal savings, while 16% got theirs from their husbands.

Training Needs of Women in Livestock Production

Table 3 shows the training needs of women in livestock production. The results show that 82.2% of the women indicated a need for training on the general management of poultry (from brooding/chick production to market size). This may be because poultry are small animals that are easily handled and mostly reared by women. Also, 42.3% of the women wanted training on the general management of sheep and goats. The other areas where the women needed training were feeding using local feed ingredients in fish (96.6%), pig (5.6%), and sheep and goat (11.1%) production, as indicated by the women, and drug administration for all the livestock types studied.
The results further show that the women wanted training on all the livestock types selected for this study, as training in these areas would give them knowledge, build their capacity, and empower them in livestock production. Ajayi [16] submitted that training is the process of teaching, informing, and educating people so that they become well qualified to do their work and to perform in positions of great difficulty and responsibility. Ajayi [17] also stated that training is the acquisition of the best ways to utilize knowledge and skills to achieve a specific production goal. Table 4 shows the constraints of women in livestock production. The table shows that lack of access to credit (85.0%) and lack of capital (90%) were major constraints encountered in livestock rearing. This may be the reason why the majority of the women were producing at a small-scale level. Mayoux [18] opined that access to credit gives women a greater economic role in decision making and submitted that when women control decisions regarding credit and savings, they will optimize them for their own welfare. Mayoux [18] further noted that investment in women's economic activities will improve employment opportunities for women and have a trickle-down effect. High mortality rate (51.7%) and inadequate information (68.3%) were also major constraints indicated by the women. Information about new technologies is vital for an effective and efficient production system; lack of adequate information may reduce the level of livestock production among the women. The women (46.7%) also identified marketing of livestock and its products as a major constraint, which can affect the level of livestock production among the women and reduce their income. Moise [19] posited that trade is a powerful engine for economic growth, poverty reduction, and development.
CONCLUSION AND RECOMMENDATIONS
The major conclusion from this study is that women are involved in livestock rearing either as a major or a secondary occupation. Though they are small-scale producers, they do not have access to credit, and the majority of the women interviewed showed interest in training on all the livestock types covered in this study, as this will empower them and boost their social status and level of livestock production. Lack of capital, lack of credit, and inadequate information were the major constraints identified by the women as affecting livestock farming. Primary occupation and age were significantly related (p = 0.05) to the training needs of women in pig production and in sheep and goat production, respectively. This indicates that age and occupation are important variables to consider when planning a training programme, as they will have a significant effect on how well the training content is received and applied. Based on the findings from this study, it is recommended that:

1. Government development strategies should be modified to encourage and recognize women's increasing role in livestock production. Development projects and policies should also be targeted at women.
2. Women should be encouraged to form groups, and these groups should be encouraged to take up activities related to livestock production.
3. The really poor do not own large animals, so a livestock strategy that aims to include many poor women and their families must focus on small animals like poultry.
4. Credit needs to be provided directly to women, removing or allowing flexibility in bureaucratic procedures. Terms of loan repayment should be made flexible to accommodate the slower rate of return on livestock, and adequate follow-up should be provided.
5. Information on livestock production should be made available to women through different channels, as this will increase awareness of new techniques and of how to apply and adopt these techniques to increase their level of production.
6. Empowerment requires training and financial support, which are required to boost women's livestock production capabilities as well as increase their income and standard of living.
|
2019-08-20T02:51:47.846Z
|
2016-01-10T00:00:00.000
|
{
"year": 2016,
"sha1": "6ba080f579ffd0f38b42442b1680f1f0dc8903a7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.9734/ajaees/2016/18104",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "976fbc9b75ed0c646e60da6db38c70333e716a54",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Sociology"
],
"extfieldsofstudy": [
"Business"
]
}
|
210979651
|
pes2o/s2orc
|
v3-fos-license
|
Menstrual-Related Headaches Among a Cohort of African Adolescent Girls
Introduction: Migraine attacks associated with menstruation are generally perceived as more severe than attacks outside this period. Aim and Objective: The study aimed to determine the frequency of menstrual-related headaches among a cohort of senior secondary school girls in Abeokuta, Nigeria. We also determined the burden of headache among these school girls. Methodology: This was a cross-sectional study using a validated adolescent headache survey questionnaire. The instrument was self-administered during a school visit. Headache was classified using the ICHD-II criteria. Results: Of the 183 students interviewed, 123 (67.2%) had recurrent headaches. Mean age ± SD was 16.18 ± 1.55 years (range 12–19). The prevalence of definite migraine was 17.5%, while the prevalence of probable migraine was 6.0%. The prevalence of tension-type headache was 41.0%. Migraine was significantly menstrual-related (p = 0.001, 95% CI = 1.06–6.63). The median pain severity score was higher in the MRH group (p = 0.043). The median numbers of days of reduced productivity and of missed social activities were significantly higher in the MRH group (p = 0.001 and p = 0.03, respectively). Subjects with MRH were more incapacitated by their headaches (p = 0.003). Conclusion: Menstrually related headache is prevalent even among adolescents, and it adversely affects their productivity and social life. Care of adolescents with headaches should be intensified.
Introduction
Over 37 million Nigerians are adolescents (between the ages of 10 and 19 years), constituting over 20% of the general population; roughly half of these are females. 1 Adolescence is a phase of life characterized by rapid and ample physical, biological, psychosocial, and neurodevelopmental changes. 2 One of the biological changes that characterize this period is the attainment of menarche, a milestone that is greatly influenced by the genetic makeup and environment of the individual and one that is accompanied by menstrually associated disorders, 3 apart from other psychosocial issues. The adolescent population is therefore faced with specific health and developmental needs.
Migraine headache, the 7th most disabling disease, 4 is probably as common among the adolescent population as among adults, 5 although our current understanding of headache pathophysiology in children and adolescents is based on extrapolations from studies conducted among adults. 6 Migraine is twice as prevalent in females as in males, especially after puberty. Among adolescents, just as in adults, chronic migraine has been found to be strongly associated with depression, irritability, missed school/work, and missed social activities, thereby increasing the total burden of headaches in this population. 7 Menstrual disorders have been found to be significant among adolescents who reported headache or dizziness during their menstrual period. 8 According to the International Classification of Headache Disorders, 2nd edition (ICHD-II), 9 menstrually related migraine (MRM) is defined as attacks of migraine without aura that have an onset during the perimenstrual period (2 days before to 3 days after the onset of menstruation); this pattern must be confirmed in two-thirds of menstrual cycles, but other attacks may occur at other times of the menstrual cycle (Table 1). Migraine attacks associated with menstruation are generally perceived as more severe, more disabling, and more refractory to abortive medications than those outside this period. 10 There is a paucity of data-driven scientific reports from sub-Saharan Africa on the burden of menstrually related migraine, especially among adolescents. The need for individualized therapy, provision and optimization of effective abortive therapy, and improvement in the overall quality of life of adolescent migraineurs calls for this survey. We hypothesized that menstrually related headaches would be prevalent among Nigerian high school adolescent girls. Our study therefore aimed to determine the frequency of menstrually related headaches among a cohort of senior secondary school girls in Abeokuta, south-western Nigeria. We also set out to document the headache burden among these school girls.
Methodology
This was a cross-sectional observational study conducted in four secondary schools in Abeokuta South local government area (LGA), Abeokuta, south-western Nigeria, in October 2010. Abeokuta is a southwestern Nigerian city with a population of 593,100 according to the 2006 census. It is situated at an elevation of 64 m above sea level and lies at longitude 3.35 and latitude 7.16. 11 Abeokuta is made up of two local governments, namely Abeokuta South and Abeokuta North LGA. Abeokuta South was selected for the study by balloting. Of the 20 public secondary schools in the local government, four secondary schools (20% of the sample frame) were randomly selected for the survey.
To obtain the lifetime prevalence of migraine and tension-type headache in this adolescent population, the minimum sample size to be studied was estimated to be 142. This was based on a standard normal deviate of 1.96 and a degree of accuracy of 0.05, while the proportion of high school students with migraine was put at 10.3% based on a previous study among Nigerian undergraduates. 12 The study protocol was approved by the research and ethics committee (REC) of Federal Medical Centre Abeokuta (FMCA). The authorities of the secondary schools approved the study, and informed verbal consent was obtained from the participants. Individual parental informed consent was not obtained; the school authority served as proxy for the parents in this regard. This modification of the protocol was approved by the REC of FMCA. A validated self-administered headache questionnaire developed by Ojini et al 13 was used for the survey. The questionnaires were anonymized, and the students were assured of the confidentiality of their responses. The questionnaires were distributed to the participants by the residents rotating through the neurology unit of the medical department of our hospital, after a pilot study with 20 students showed excellent clarity. The 29-item questionnaire is made up of three sections.
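As a check, these figures are consistent with the standard single-proportion sample-size formula (presumably the one used here), with standard normal deviate Z = 1.96, expected proportion p = 0.103, and degree of accuracy d = 0.05:

$$
n \;=\; \frac{Z^{2}\,p\,(1-p)}{d^{2}} \;=\; \frac{(1.96)^{2}\,(0.103)(0.897)}{(0.05)^{2}} \;\approx\; 142 .
$$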
The first section assessed socio-demographic variables. The second section evaluated headache-related variables, including the association of headache with menstruation in the past 3 months. The third section assessed headache disability using the Migraine Disability Assessment Scale (MIDAS). The headache-related items were developed to incorporate diagnostic items for migraine and tension-type headaches according to the International Classification of Headache Disorders (ICHD-II) classification. 9

Table 1. ICHD-II criteria for menstrually related migraine without aura: A. Attacks, in a menstruating woman, fulfilling the criteria for migraine without aura. B. Attacks that occur from days −2 to +3 of menstruation in at least two out of three menstrual cycles, and additionally at other times of the menstrual cycle.

The survey was conducted in batches during school hours at the end of class, from October 1, 2010 to October 20, 2010. Although the questionnaires were self-administered, the authors were on hand to give clarifications where necessary. Six hundred questionnaires were distributed in all: 350 questionnaires went to the male students, while 250 questionnaires were distributed to the female students, based on their availability in the school on the day of interview. A subanalysis of the female respondents was performed for this report.
Statistical Analysis
Statistical analysis was performed with Statistical Package for the Social Sciences version 20.0.0 (SPSS Inc., Chicago). Data were summarized and presented as frequencies and percentages. Categorical data were compared using the chi-square test. Normally distributed numerical variables were summarized as means with standard deviations, and between-group comparison was performed with Student's t-test. Skewed numerical data were summarized with medians (interquartile ranges), and the non-parametric Kruskal-Wallis test was used for between-group comparisons. The level of significance was set at p<0.05.
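For readers who wish to reproduce this style of analysis outside SPSS, a minimal sketch in Python with SciPy is given below. All arrays are hypothetical stand-ins for the survey variables; only the test choices mirror the ones named above.

```python
# Minimal sketch of the reported test battery using SciPy (hypothetical data).
import numpy as np
from scipy import stats

# Normally distributed variable (e.g. age) by MRH status: Student's t-test
mrh_age = np.array([16.1, 17.0, 15.8, 16.9])
non_mrh_age = np.array([16.4, 15.9, 16.7, 15.6])
t_stat, p_age = stats.ttest_ind(mrh_age, non_mrh_age)

# Categorical variable (e.g. photophobia yes/no by group): chi-square test
counts = np.array([[20, 6],      # MRH group: yes / no
                   [40, 57]])    # non-MRH group: yes / no
chi2, p_cat, dof, expected = stats.chi2_contingency(counts)

# Skewed variable (e.g. days of missed activities): Kruskal-Wallis test
mrh_days, non_mrh_days = [4, 7, 2, 5], [1, 0, 2, 1]
h_stat, p_days = stats.kruskal(mrh_days, non_mrh_days)

# Significance threshold used in the paper
print(p_age < 0.05, p_cat < 0.05, p_days < 0.05)
```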
Characteristics of the Respondents and Prevalence of Menstrually Related Headaches
One hundred and eighty-three (183) of the 250 questionnaires distributed to the female students were returned (response rate 73.2%). Of the 183 respondents, 39 subjects (21.3%) had headaches considered to be due to systemic infections such as malaria, typhoid, or flu. Of those whose headaches were not due to the factors above, 16 (8.7%) had non-recurrent headaches. Five subjects who had recurrent headaches but were older than 19 years of age were excluded from the analysis. Figure 1 shows the algorithm of subject selection for the analysis.
The prevalence of menstrually related headache was 14.2% (26/183). The prevalence of definite menstrually related migraine was 6.6% (12/183). When probable migraine was added, the prevalence of all menstrually related migraine was 8.7% (16/183). There was no significant age difference between the cohorts with and without menstrually related headaches (Table 2). Although the difference was also insignificant, subjects with menstrually related headaches tended to have a positive family history of headaches. Nausea and photophobia were significantly prevalent among subjects with menstrually related headaches (p<0.05), and these subjects were more likely to consult a doctor for their headaches (p<0.05). Table 2 shows the baseline characteristics of those with recurrent headaches. Table 3 shows the frequency of menstrually related primary headaches. Migraine was significantly menstrually related (p=0.001, 95% CI=1.06–6.63).
Burden of Menstrually Related Headaches
When severity of headaches was compared across age groups, there was a significantly higher median headache severity score among 18-19 year olds with menstrually related headaches (p=0.034) (Figure 2). No statistically significant difference in median headache severity scores was noted between those with or without menstrually related headache in the other age groups (Figure 2). Table 4 shows the comparison of each item of the MIDAS scale between subjects with and without menstrually related headaches. The median number of days of missed social activities and reduced productivity was significantly higher among the cohort with menstrually related headaches (p-values 0.047 and 0.004, respectively). There was no difference in the median number of days of missed social activities, reduced productivity, or reduced housework between the cohort with menstrually related migraine and those with other menstrually related headaches. The cohort with menstrually related migraine, however, demonstrated more severe headaches (p value=0.006) as well as a greater number of headache days in the last three months (p value=0.007). Table 5 shows the degree of headache disability according to the total MIDAS score. A score of 0-5 was graded as little or no disability (grade I), 6-10 as mild disability (grade II), and 11-20 as moderate disability (grade III), while >20 was defined as severe disability (grade IV). A significantly higher proportion of patients with menstrually related headaches had moderate disability (p=0.003).
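The MIDAS grading used in Table 5 is a simple threshold rule; the sketch below encodes exactly the bands stated above.

```python
def midas_grade(total_score: int) -> str:
    """Map a total MIDAS score to the disability grade used in Table 5."""
    if total_score <= 5:
        return "Grade I (little or no disability)"
    if total_score <= 10:
        return "Grade II (mild disability)"
    if total_score <= 20:
        return "Grade III (moderate disability)"
    return "Grade IV (severe disability)"

print(midas_grade(13))  # -> Grade III (moderate disability)
```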
Discussion
Research on small and specific populations of headache sufferers has the added advantage of aiding the identification of factors that influence the frequency and severity of various subtypes of headache, 13 thereby enabling effective planning and organization of health services. From a planning point of view, gender-specific data from clinical and epidemiological studies have the potential to bridge the knowledge gap that fosters the neglect of vulnerable groups such as adolescents.
Globally, the estimated overall mean prevalence of headache among adolescents is 54.4% (95% CI 43.1-65.8) and the overall mean prevalence of migraine is 9.1% (95% CI 7.1-11.1). 14 Among this cohort of female adolescents, the prevalence of headache was 67.2%. 14,15 Our study found a higher prevalence compared to the study by Ofovwe and Ofili, who reported a prevalence of 19.5% in a survey of secondary school students in south-south Nigeria. 16 Ofovwe and Ofili, however, reported a migraine prevalence of 13.5%, which is comparable to the prevalence rate of 11.5% that we found in this study. As pointed out by Wöber-Bingöl, there is a paucity of population-based studies of adolescent headaches from low and low-middle income countries, 15 hence there are limited studies with which to compare our findings. When compared with studies among young undergraduates, our study found a higher prevalence of headache than Mitsikostas et al, who reported a prevalence of 50.3% among female undergraduates. 14 Krogh et al, 17 however, reported a prevalence of 88%, a rate higher than what we found in this cohort. The relatively modest sample size in our study, as well as regional/racial differences in headache epidemiology, could have been responsible for these differences, even though we used the same diagnostic criteria as the earlier quoted studies. The prevalence rate in this study is, however, comparable to the prevalence rate of 62.8% in the female cohort of the survey by Ojini et al in a sample of Nigerian undergraduates. 13 The prevalence of menstrually related headache was 14.2% in this survey, of which menstrually related migraine was 8.7% when all migraine (definite and probable) was summed. Among the female adult population, the prevalence of menstrually related migraine without aura ranges from 20-51%, while that of pure menstrual migraine without aura varies from 7-19%. 18,19 There is a paucity of literature on menstrually related migraine in adolescents. In a study of 896 girls between 9 and 18 years, Crawford et al reported a prevalence rate of 36.9%. 20 This value is no doubt higher than what we found. While the value of Crawford et al approaches that of the female adult population, the prevalence of MRM in this study lags behind that of the female adult population as well as that of the Caucasian adolescent population. This finding suggests that perhaps Caucasian or African American adolescents attain menarche earlier, and that their hypothalamo-pituitary-ovarian (HPO) functions mature faster than those of native African adolescents.
By way of pathophysiology, the decline in serum estradiol levels that occurs soon before and during the peri-menstrual time period is the most plausible trigger for menstrual migraine. Other contributors to MM include the release of prostaglandins from a shedding endometrium, which sensitizes peripheral nociceptors, and declines in serum magnesium levels. A decrease in the inhibitory neurotransmitter systems that modulate neuronal firing rates within second-order neurons of the trigeminal system may also augment the initiation of MM. 21 Therefore, the more developed the HPO function is, the more readily the adult pattern of menstruation is established. The above hypothesis is buttressed by our finding of a higher median pain severity score among older girls aged 18-19 years (Figure 2). This relatively higher pain severity score also agrees with the finding of Stewart et al, 22 who, in a rigorous prospective survey of 81 women migraineurs from the general population, found a slight increase in the severity of attacks occurring in the first 2 days of menses. However, these authors did not find any other significant difference in characteristics between menstrual and non-menstrual migraine attacks.
A significant proportion of adolescents with MRH consulted a doctor for their headaches (Table 1). Although the difference was not significant, the MRH cohort was more likely to self-medicate. It is possible that this treatment-seeking pattern is dictated by the severity of the headache attacks. We recorded no use of abortive therapies such as triptans and ergot preparations in this study. Our finding mirrors the reports by other authors in our environment. 12,13,16,23 Oshinaike et al also recorded no patient with a triptan prescription, but up to 32% of those who sought medical consultation had an ergot prescription. 23 Their study population (hospital staff members), however, was not comparable with ours, hence the difference in the treatment pattern. Our finding clearly underscores the fact that headaches in general are undertreated in our environment, and menstrually related headaches in particular do not enjoy any special attention as far as treatment is concerned. Our results show that menstrually related headaches are clearly more disabling, especially when moderate to severe disability is considered. Crawford et al 20 found no difference in disability between girls with a menstrual pattern and those without, although girls with MM reported more associated symptoms compared with girls without MM. In our cohort, however, girls with MM reported more headache days in 3 months and higher pain severity, suggesting that girls with MM are likely more incapacitated by their headaches. In line with our findings, the American Migraine Prevalence and Prevention (AMPP) study found that women with MM were more impaired by attacks, while women with MAM had the overall highest burden, likely due to experiencing migraines on additional days. 24
Limitations and Strength of Study
Our study is limited by its modest sample size and cross-sectional design. However, its strength lies in the fact that there is a paucity of MRH-specific data from SSA. In addition, we have studied a population of educated/enlightened girls; hence, the influence of cultural beliefs on reporting MRH is less likely.
Conclusion
Our study found that menstrually related headache is prevalent among adolescent Nigerian girls. When compared with similar studies among whites, it appears less prevalent but undoubtedly more disabling. Unfortunately, health-care facilities are poorly accessed, and the headache is poorly treated, in that approved medications are seldom used by the migraineurs. More headache education and awareness programs need to be instituted and possibly incorporated into adolescent and school health programs. Headache advocacy among this vulnerable group should be escalated, as this might have a long-lasting impact on reducing the burden of menstrually related headaches in our environment.
On vanishing near corners of conductive transmission eigenfunctions
In this paper, we consider the transmission eigenvalue problem associated with a general conductive transmission condition and study the geometric structures of the transmission eigenfunctions. We prove that under a mild regularity condition in terms of the Herglotz approximations of one of the pair of the transmission eigenfunctions, the eigenfunctions must be vanishing around a corner on the boundary. The Herglotz approximation can be regarded as the Fourier transform of the transmission eigenfunction in terms of the plane waves, and the growth rate of the transformed function can be used to characterize the regularity of the underlying wave function. The geometric structures derived in this paper include the related results in [5,19] as special cases and verify that the vanishing around corners is a generic local geometric property of the transmission eigenfunctions.
1. Introduction 1.1. Background. In its general form, the transmission eigenvalue problem is given as follows (cf. [22]): where Ω is a bounded Lipschitz domain in R n , n = 2, 3, with a connected complement R n \Ω and P j (x, D) are two elliptic partial differential operators (PDOs) with D signifying the differentiations with respect to x = (x j ) n j=1 ∈ R n , and C denotes the Cauchy data set. If there exists a nontrivial pair of solutions (u, v), then λ ∈ C is called a transmission eigenvalue and (u, v) are the corresponding pair of transmission eigenfunctions.
Though the PDOs P j , j = 1, 2, are generally elliptic, selfadjoint and linear, the transmission eigenvalue problems of the form (1.1) are a type of non-elliptic, non-selfadjoint and nonlinear (in terms of the transmission eigenvalue λ) spectral problems, making the corresponding spectral study highly intriguing and challenging; see [22] for some related discussion. The transmission eigenvalue problems arise in the wave scattering theory and connect to many aspects of the wave scattering theory in a delicate way. Indeed, many of the spectral results established for the transmission eigenvalue problems in the literature have found important applications in the wave scattering theory, including generating novel wave imaging and sensing schemes, producing important implications to invisibility cloaking and proving new uniqueness results for inverse scattering problems. We refer to [10,11,16,22] for historical accounts and surveys on the state-of-the-art developments of the spectral studies for the transmission eigenvalue problems in the literature.
To a great extent, the spectral properties of the (real) transmission eigenvalues resemble those of the classical Dirichlet/Neumann Laplacian: there are infinitely many real transmission eigenvalues, which are discrete and accumulate only at infinity. Nevertheless, due to the non-selfadjointness, there are complex transmission eigenvalues; see [10,16] and the references cited therein. Recently, several local and global geometric structures of distinct features were discovered for the transmission eigenfunctions [2-9, 12-14, 19], and all of them have produced interesting applications of practical importance in the scattering theory. In this paper, we are concerned with the vanishing property of the transmission eigenfunctions around a corner on the boundary of the domain, which was first discovered in [5] and further investigated in [19]. Before discussing our major discoveries, we next specify the transmission eigenvalue problem as well as its vanishing properties in our study.
Let Ω be a bounded Lipschitz domain in R^n, n = 2, 3, with a connected complement R^n\Ω, and let V ∈ L^∞(Ω) and η ∈ L^∞(∂Ω) be possibly complex-valued functions. Consider the following transmission eigenvalue problem for v, w ∈ H^1(Ω) and λ = k^2, k ∈ R_+:

∆v + k^2 v = 0 in Ω, ∆w + k^2(1 + V)w = 0 in Ω, w = v and ∂_ν w + ηw = ∂_ν v on ∂Ω, (1.2)

where ν ∈ S^{n−1} signifies the exterior unit normal to ∂Ω. Two remarks concerning the formulation of the transmission eigenvalue problem (1.2) are in order. First, we introduce k^2 to denote the transmission eigenvalue. On the one hand, k signifies a wavenumber in the physical setup; on the other hand, this notation shall ease the exposition of our subsequent mathematical arguments. Though only k ∈ R_+ is physically meaningful, some of our subsequent results also hold for the case where k is a complex number, which should be clear from the context. Second, the second transmission condition on ∂Ω in (1.2) is known as the conductive transmission condition. This type of transmission condition arises in modelling wave interaction with certain material objects and finds important applications in magnetotellurics; see e.g. [13,19] and the references cited therein for more relevant physical background. On the other hand, if one simply takes η ≡ 0, (1.2) reduces to the transmission eigenvalue problem that has been more intensively studied in the literature. In order to signify such a generalization and extension, we refer to the eigenvalue problem (1.2) as the conductive transmission eigenvalue problem, which includes the conventional transmission eigenvalue problem as a special case. Let x_c ∈ ∂Ω be a corner point, which shall be made more precise in what follows. Let B_ρ(x_c) denote a ball of radius ρ ∈ R_+ centred at x_c. The vanishing property of the transmission eigenfunctions is described as follows:

lim_{ρ→+0} 1/m(B_ρ(x_c)∩Ω) ∫_{B_ρ(x_c)∩Ω} |ψ(x)| dx = 0, ψ = w, v, (1.3)

where m denotes the Lebesgue measure. It is noted that w and v are H^1-functions, and the vanishing at a boundary point should be understood in the integral sense. On the other hand, if ψ is a continuous function in a neighbourhood of x_c, (1.3) clearly implies that ψ(x_c) = 0. In fact, the regularity of the transmission eigenfunctions w and v in (1.2) is critical for the establishment of the vanishing property (1.3). Under the regularity condition that both w and v are additionally Hölder continuous, namely C^α continuous with α ∈ (0, 1), it is shown in [5] and [19] that the vanishing property holds in the cases with η ≡ 0 and η ≠ 0, respectively. By the classical results on the quantitative behaviours of solutions to elliptic PDEs around a corner (cf. [17,18,21]), we have decompositions of w and v into regular and singular parts, where the regular parts belong to H^2 and hence, by the standard Sobolev embedding, are Hölder continuous. The singular parts may also be Hölder continuous provided the coefficient V, as well as the boundary data of w and v around the corner, are sufficiently regular. However, in the transmission eigenvalue problem (1.2), the boundary data, namely (w|_∂Ω, ∂_ν w|_∂Ω) and (v|_∂Ω, ∂_ν v|_∂Ω), are not specified. Hence, it may happen that the transmission eigenfunctions are H^1 but not Hölder continuous. Clearly, according to our discussion above, the vanishing property may serve as an indicator for such singular behaviours of the transmission eigenfunctions around the corner.
Indeed, according to the extensive numerical examples in [4], though the transmission eigenfunctions generically vanish around a corner, there are cases where the transmission eigenfunctions are not vanishing; instead, they localize around a corner, especially when the corner is concave. Hence, it is mathematically intriguing and physically significant to thoroughly understand such singularity formation of the transmission eigenfunctions and its connection to the corresponding vanishing behaviour. In [5,19], a regularity criterion of a different mathematical feature, but one that is more physically related, was investigated in connection with the vanishing property of the transmission eigenfunctions. It is given in terms of the Herglotz approximation of the transmission eigenfunction v in (1.2). The Herglotz approximation is, in a certain sense, the Fourier transform (in terms of the plane waves) of the eigenfunction v, which satisfies the homogeneous Helmholtz equation. Hence, the growth rate of the transformed function, i.e., the density function in the Herglotz wave, can naturally be used to characterize the regularity of the underlying wave function. This resembles the classical way of defining Sobolev spaces via Bessel potentials. In this paper, we shall explore along this direction and derive much sharper estimates to show that the vanishing property of the transmission eigenfunctions holds for a much broader class of functions in terms of the Herglotz approximation. The vanishing properties of the transmission eigenfunctions derived in this paper include the corresponding results in [5,19] as special cases.
1.2. Statement of the main results and discussions. In order to present a complete and comprehensive study, the statements of our main results are lengthy and technically involved. Nevertheless, in order to give the readers a global picture of our study, we briefly summarize the major findings in the following two theorems. To that end, we first introduce the Herglotz approximation.
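For orientation, a Herglotz wave function with kernel g_j has the classical form (cf. [15]), which is presumably the form taken by the definition (1.5):

$$
v_j(x) \;=\; \int_{\mathbb{S}^{n-1}} e^{\mathrm{i} k\, x\cdot d}\, g_j(d)\, \mathrm{d}\sigma(d), \qquad g_j \in L^2(\mathbb{S}^{n-1}), \quad x \in \mathbb{R}^n .
$$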
v_j is known as a Herglotz wave with kernel g_j. It is easy to see from (1.5) that v_j is formed by the superposition of plane waves, and it is an entire solution to the Helmholtz equation ∆v_j + k^2 v_j = 0. Hence, g_j can be regarded as the Fourier density of the wave function v_j in terms of the plane waves. We have the following denseness property of the Herglotz waves.
Theorem 1.1. Let Ω ⋐ R^n be a bounded Lipschitz domain with a connected complement, and let H_k be the space of all the Herglotz wave functions of the form (1.5). Define S_k(Ω) = {u; ∆u + k^2 u = 0 in Ω} and H_k(Ω) = {u|_Ω; u ∈ H_k}. Then H_k(Ω) is dense in S_k(Ω) ∩ L^2(Ω) with respect to the topology induced by the H^1(Ω)-norm.
Theorem 1.2. Consider the transmission eigenvalue problem (1.2) with η ≢ 0. Let x_c ∈ ∂Ω be a corner point in two or three dimensions, and let N_h be a neighbourhood of x_c within Ω, with h ∈ R_+ sufficiently small. Suppose that (1 + V)w and η are Hölder continuous on N_h and ∂N_h ∩ ∂Ω, respectively, and that η(x_c) ≠ 0. If there exist constants C, ̺ and Υ with C > 0, Υ > 0 and ̺ < Υ such that the transmission eigenfunction v can be approximated in H^1(N_h) by the Herglotz functions v_j, j = 1, 2, . . ., with kernels g_j satisfying (1.6), then w and v vanish near x_c in the sense of (1.3).
More detailed results are given in Theorems 2.3 and 3.1 for two and three dimensions, respectively. Remark 1.3. As discussed earlier, the vanishing properties were investigated in [19] under a similar setup to Theorem 1.2. Compared to the corresponding results in [19], Theorem 1.2 makes two significant improvements in the regularity requirements. First, the Herglotz approximation condition in [19] was required to be (1.7), where the constants satisfy C > 0, Υ > 0 and 0 < ̺ < 1. It is directly verified that the regularity condition (1.7) is included in (1.6) as a special case. Second, it was required in [19] that w − v is H^2-regular away from the corner point x_c, and we remove this rather artificial regularity requirement in Theorem 1.2. A similar result holds in the three-dimensional case with (1.6) replaced by (3.23).
More detailed results are given in Corollaries 2.4 and 3.2 for two and three dimensions, respectively.
From this, one can readily see that w vanishes near x_c, which in turn implies the vanishing of v near x_c, by noting that w and v possess the same traces on ∂Ω.
Remark 1.6. The vanishing of the transmission eigenfunctions in the case η ≡ 0 was also studied in [5,19]. The regularity requirement in [19] is the same as that described in Remark 1.3, whereas in [5], the Herglotz approximation was required to satisfy a condition with constants C > 0 and 0 < β < 1/(2n + 8) (n = 2, 3). It is directly verified that the corresponding results in [5,19] are included in Theorems 1.2 and 1.4 as special cases. Nevertheless, it is pointed out that in [5], the technical condition that (1 + V)w be Hölder continuous is not required; instead, it is required that V is Hölder continuous.
Finally, we would like to give two general remarks on the vanishing properties of the transmission eigenfunctions.
Remark 1.7. The vanishing properties established in Theorems 1.2 and 1.4, as well as those in [5,19], are of a completely local feature. That is, all the results hold for the partial-data transmission eigenvalue problem, namely where the transmission boundary conditions on ∂Ω in (1.2) are required to hold only in a small neighbourhood of the corner point. It is mentioned that a global rigidity result on the geometric structure of the transmission eigenfunctions was presented in [14].
Remark 1.8. According to our earlier discussion, if the transmission eigenfunctions w and v are Hölder continuous around the corner, then both of them vanish near the corner. Hence, in order to search for transmission eigenfunctions that are non-vanishing near corners, especially those numerically found in [4] which actually localize around corners, one should consider transmission eigenfunctions whose regularity lies between H^1 and C^α, α ∈ (0, 1). By using properties of the Herglotz approximation (cf. [15]), one can show (though not straightforwardly) that the regularity criterion (1.7) defines a set of functions which includes some functions that are less regular than C^α, but also does not include some functions which are more regular than C^α. Hence, the regularity characterization in terms of the Herglotz approximation is of a different nature from the standard Sobolev regularity. Nevertheless, Theorems 1.2 and 1.4 indicate that the vanishing near corners is a generic local geometric property of the transmission eigenfunctions.
In what follows, Sections 2 and 3 are respectively devoted to the vanishing properties of the transmission eigenfunctions in two and three dimensions.
Vanishing properties in two dimensions
To facilitate the calculations and analysis, we introduce two-dimensional polar coordinates. Define an open sector W in R^2 with boundary Γ_± as in (2.1), where −π < θ_m < θ_M < π, i := √−1, and Γ_+ and Γ_− are given by (r, θ_M) and (r, θ_m) with r > 0, respectively. Define S_h as in (2.2). We shall make use of a particular type of planar complex geometrical optics (CGO) solution, which was first introduced in [2].
By direct calculations, one can obtain the estimates (2.5) and (2.6) in Lemma 2.1 for the CGO solution u_0(sx). Furthermore, one has the following result. Corollary 2.2. The corresponding estimates hold for the L^2 norm of u_0. Proof. Using the integral mean value theorem, one can deduce the estimate with Θ ∈ (0, h). On Λ_h, it can be seen that the relevant quantities all decay exponentially as s → ∞. By straightforward calculations, and using polar coordinates, one can deduce the desired bounds, which completes the proof.
Next, by direct computations and the compact embedding of Hölder spaces, one can easily obtain the following result. Furthermore, using the Jacobi-Anger expansion ([15, Page 75]) in R^2, we can obtain the following lemma. Lemma 2.3. The Herglotz wave function v_j defined in (1.5) admits the asymptotic expansion (2.14), where J_p(t) is the p-th Bessel function of the first kind; indeed, the corresponding expansion holds near the origin. Proof. Using Green's formula and the boundary condition in (1.2), we can deduce (2.18); thus (2.17) is obtained by taking the limit on both sides of equation (2.18).
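The Jacobi-Anger expansion invoked in Lemma 2.3 is the classical identity (cf. [15, Page 75]); in R^2, with polar coordinates (r, θ), it reads

$$
e^{\mathrm{i} k r \cos\theta} \;=\; J_0(kr) \;+\; 2\sum_{p=1}^{\infty} \mathrm{i}^{\,p} J_p(kr)\cos(p\theta),
$$

where J_p is the p-th Bessel function of the first kind.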
(2.19) and (2.20). Then it holds that (2.21) and (2.22). Note that ω(θ) > 0 for θ_m ≤ θ ≤ θ_M. By Lemma 2.3, we can obtain (2.24) and (2.25). By using Lemma 2.3 again, one can derive the corresponding bound, where we assume that kh < 1 for sufficiently small h. Taking the corresponding choice, by Lemmas 2.3 and 2.4, it holds that (2.28), since the relevant quantity is bounded for kh < 1. Furthermore, by direct computations, one can obtain (2.31). Similarly, one can prove (2.22), which completes the proof.
Then the estimate (2.34) holds. Proof. By Lemma 2.4, the trace theorem and the Cauchy-Schwarz inequality, one can bound the relevant terms; combining this with Corollary 2.2, one readily obtains (2.34). The proof is complete.
We are in a position to present our first main result on the vanishing properties of the conductive transmission eigenfunctions (v, w) in two dimensions. Theorem 2.3. Let v ∈ H^1(Ω) and w ∈ H^1(Ω) be a pair of eigenfunctions to (1.2) associated with k ∈ R_+. Assume that the domain Ω ⊂ R^2 contains a corner Ω ∩ W, where x_c is the vertex of Ω ∩ W and W is a sector defined in (2.1). Moreover, there exists a sufficiently small neighbourhood S_h (i.e., h > 0 is sufficiently small) of x_c in Ω, where S_h is defined in (2.2), such that qw ∈ C^α(S̄_h) with q := 1 + V, and η ∈ C^α(Γ̄_±^h), for 0 < α < 1. If the following conditions are fulfilled: (a) the transmission eigenfunction v can be approximated in H^1(S_h) by the Herglotz functions v_j, j = 1, 2, . . ., with kernels g_j satisfying (2.35) for some constants C, ̺ and Υ with C > 0, Υ > 0 and ̺ < Υ.
For notational simplicity, we define f_1j(x) = −k^2 v_j. Consider the integral in (2.40). By Corollary 2.1, one has that u_0 ∉ H^2 near the origin. Consider the domain D_ε = S_h\B_ε for 0 < ε < h. Using this fact and Lemma 2.5, one can derive (2.42), where I_1 and ∆_j(s) are defined in (2.40) and (2.43). Recalling that I_1 is defined in (2.40), by Lemma 2.4 and the compact embedding, one can deduce (2.44), where δf_1j(x) and δf_2(x) are given by Lemma 2.4. Combining (2.5) and (2.6) in Lemma 2.1, it can further be derived that the corresponding estimate holds, where δf_1j(x) is given by Lemma 2.4. Using the assumption (2.35), we further obtain (2.49). By using the Hölder inequality, Corollary 2.2, and the trace theorem, one can prove the estimate with c′ > 0 as s → ∞. By Lemma 2.6 and (2.44), multiplying both sides of (2.42) by s, we can deduce the limiting identity; combining it with the triangle inequality, one finally arrives at the desired conclusion. The proof is complete.
We next consider the degenerate case of Theorem 2.3 with η ≡ 0. By slightly modifying our proof of Theorem 2.3, we can show the following result.
Corollary 2.4.
Let v ∈ H^1(Ω) and w ∈ H^1(Ω) be a pair of eigenfunctions to (1.2) with η ≡ 0 and k ∈ R_+. Let W and S_h be the same as described in Theorem 2.3. Assume that qw ∈ C^α(S̄_h) for 0 < α < 1. Under the condition (2.37), and provided that the transmission eigenfunction v can be approximated in H^1(S_h) by the Herglotz functions v_j with kernels g_j satisfying (2.54) for some constants C, ̺ and Υ with C > 0, Υ > 0 and ̺ < αΥ/2, one has that w and v vanish near x_c in the sense of (1.3). Proof. Since the proof is similar to that of Theorem 2.3, we only outline the necessary modifications in the following. Without loss of generality, we assume that x_c = 0. Since η(x) ≡ 0, from (2.42), (2.43) and (2.44), we have the corresponding integral identity. The proof is complete.
Vanishing properties in three dimensions
In this section, we consider the vanishing properties of the transmission eigenfunctions in the three-dimensional case. We first introduce the (edge) corner geometry in the three-dimensional setting. It is described by W × (−M, M), where W is a sector defined in (2.1) and M ∈ R_+. It is readily seen that W × (−M, M) actually describes an edge singularity, and we call it a 3D corner for notational unification. Suppose that the domain Ω ⊂ R^3 possesses a 3D corner. Let x_c ∈ R^2 be the vertex of W, and let Γ_± be the two boundary pieces of W. For subsequent use, we introduce the following dimension reduction operator.
The following lemma shows the regularity of the functions after applying the dimension reduction operator.
Proof. We first show that R maps H^1(W × (−M, M)) into H^1(W). Since g ∈ H^1(W × (−M, M)), by the dominated convergence theorem, we know that the claimed identity holds. Furthermore, by the Minkowski integral inequality, we have the corresponding norm bound. When g ∈ C^α(W × [−M, M]), it can easily be derived that the Hölder estimate holds, which means that R(g)(x′) ∈ C^α(W).
Similar to Lemma 2.2, we have the following lemma, where 0 < α < 1 and diam(S_h) is the diameter of S_h.
Applying the dimension reduction operator to the above spherical Bessel function, we can obtain the following lemma.
We next derive several critical auxiliary lemmas. By (3.11) and, similarly, by using the fact that η is independent of x_3, we can obtain the analogous identity. Therefore, by Green's formula, we have the corresponding integral identity. Since v, w ∈ H^1(S_h × (−L, L)), from Lemma 3.1 we know that R(v − w) ∈ H^1(S_h), and it can be derived that the limit as ε → 0 over Λ_c exists. Using the trace theorem, we see that the desired estimate holds. The proof is complete.
The estimates (3.14) and (3.15) hold, where C_2^± are positive constants. Proof. Using Lemma 2.4, we have the stated bounds. The proof is complete.
Then the stated estimate holds, where C is a positive constant. Proof. Using the Cauchy-Schwarz inequality and the trace theorem, we can deduce the estimate, which readily completes the proof.
We are now in a position to present the vanishing properties of the conductive transmission eigenfunctions (v, w) in the three-dimensional case, and we have the following theorem.
Since v = w on Γ_± × (−M, M), it is easy to see that the corresponding boundary terms coincide. Multiplying both sides of (3.44) by s^2, taking s = j^β with max{̺/α, 0} < β < Υ/2, using the assumptions (3.42) and (3.43), and letting j → ∞, one readily completes the proof of the corollary, which describes the vanishing property of the transmission eigenfunctions near the edge corner in three dimensions.
Sustainable Development in China's Coastal Area: Based on the Driver-Pressure-State-Welfare-Response Framework and the Data Envelopment Analysis Model
The economic development of China's coastal areas is being constrained by resources and the environment, with sustainable development being the key to solving these problems. The data envelopment analysis (DEA) model is widely used to assess sustainable development. However, indicators used in the DEA model are not selected in a scientific and comprehensive manner, which may lead to unrepresentative results. Here, we use the driver-pressure-state-welfare-response (DPSWR) framework to select more scientific and comprehensive indicators for a more accurate analysis of efficiency in China's coastal area. The results show that the efficiencies of most provinces and cities in China's coastal area have a stable trend. In the time dimension, efficiency was rising before 2008, after which it decreased. In the spatial dimension, China's coastal provinces and cities are divided into three categories: high efficiency, low efficiency, and greater changes in efficiency. By combining DPSWR and DEA, we produce reliable values for measuring efficiency, with the benefit of avoiding the incomplete selection of DEA indicators.
Introduction
With rapid economic development exhausting land resources, people have begun focusing on the sea [1], which is becoming increasingly important for sustaining the economy [2,3], especially in China [4]. China's marine society is in a stage of all-round development; however, it remains subject to certain problems, such as insufficiency in the basic strength of the sea, the imbalance of regional development, and the deterioration of the marine ecological environment [5]. To realize the sustainable development of the sea, it is necessary to understand the current status of sustainable development of the marine environment. The data envelopment analysis (DEA) model constructs the objective function skillfully and transforms the fractional programming problem into a linear programming problem through the Charnes-Cooper transform (C^2-Transform), without requiring uniform index dimensions or preset input-output weights. This capability improves the objectivity of the evaluation of decision-making units. Thus, the DEA model is suitable for assessing the sustainable development of the marine ecological economy, which involves both environmental and economic problems [6].
Many studies on marine efficiency have applied the DEA model, with most focusing on fisheries, shipbuilding, and port logistics. Zheng and Zhou measured the fishing capacity of Chinese marine fleets through peak-to-peak (PTP) and DEA methods [7]. Griffin and Woodward determined policy-efficient management strategies in fisheries by DEA [8]. Vázquez and Tyedmers identified the importance of the "skipper effect" for sources of measured inefficiency in fisheries by DEA [9]. Pham et al. analyzed the relationship between capacity efficiency and the economic performance of gillnet fisheries in Da Nang, Vietnam [10]. Thøgersen and Pascoe calculated the efficiency of Danish North Sea demersal trawlers by combining DEA with the multi-output distance function (DFA) [11]. González-García et al. analyzed cross-vessel eco-efficiency by combining DEA and lifecycle assessment (LCA) [12]. Lee analyzed the efficiency of Korean small- and medium-sized (SMS) shipyards by DEA and the Malmquist index [13]. Park et al. evaluated the performance of the block manufacturing process (BMP) by integrating DEA with process mining (PM) [14]. Huang and Peng measured the efficiency between economic growth and port logistics by DEA in Zhejiang, China [15]. Birgun measured the efficiency of seaport container terminals by DEA [16]. However, the efficiency measured in these studies was highly dependent on the choice of indicators. If one indicator is changed, the whole efficiency value can noticeably alter; thus, it is important to select appropriate indicators for an objective assessment. Zhao and Guo used grey relational analysis (GRA) to select indicators when calculating the marine economic efficiency of China's coastal areas [17]. Yuan and Qiu used principal component analysis (PCA) to reduce the overlap of information between indicators when calculating the development efficiency of Tianjin, China [18]. These authors focused on the scientific nature of indicator selection but overlooked comprehensiveness. Xu combined the driver-pressure-state-impact-response (DPSIR) framework with the DEA model, using driving force as the input, and pressure, state, impact, and response as the outputs, to calculate the efficiency of agricultural industrialization [19]. This work inspired the development of the current study, because the DPSIR framework selects indicators in a scientific and comprehensive way, and research on DPSIR has become increasingly mature.
Karageogis et al. analyzed the impact of 100-year human interventions on the deltaic coastal zone of the Inner Thermaikos Gulf in Greece by DPSIR [20]. Pacheco et al. developed a proposed coastal management program (CMP) based on DPSIR for the management of channels [21]. Kohsaka analyzed the development of biodiversity indicators for cities by applying DPSIR [22]. Atkins discussed the comprehensive problems of DPSIR in the management of the marine environment [23]. Gregory promoted a more systemic view of decision-making and policy development based on DPSIR and problem structuring methods (PSM) [24].
Here, we used the driver-pressure-state-welfare-response (DPSWR) framework as the basis of our analyses, which is an improved form of the DPSIR framework. Specifically, we re-constructed the input and output relationships of each index in the DPSWR framework, and selected indicators according to the DPSWR framework, to achieve the goal of selecting indicators in a scientific and comprehensive manner. We used the DEA model to analyze the efficiency of the DPSWR framework, to produce more scientific and reliable evaluation results on the efficiency of sustainable development in China's coastal area. We then used kernel density estimation and hierarchical clustering to analyze the time and space efficiency of 11 provinces and cities in China, to provide suggestions for management.
Study Area
Chinese coastal provinces, municipalities, and autonomous regions (excluding Hong Kong, Macao, and Taiwan) include, from north to south, Liaoning, Hebei, Tianjin, Shandong, Jiangsu, Shanghai, Zhejiang, Fujian, Guangdong, Guangxi, and Hainan. The total area of these 11 coastal provinces and cities accounts for less than 14% of the total area of China; however, the development level of these areas is much higher than that of other regions. In 2014, the GDP of these 11 coastal provinces and cities reached 37.3 trillion yuan, accounting for 58.6% of the country's GDP. Additionally, the Gross Ocean Product (GOP) of these 11 coastal provinces and cities reached 5993.6 billion yuan in 2014, contributing 9.4% to China's GDP [25] (Figure 1). In parallel, these 11 coastal provinces and cities contain 38% of the country's population (Figure 1). Consequently, social, environmental, and other issues remain a problem.
DPSWR Framework
The DPSWR framework was established based on the DPSIR framework, which was developed from the pressure-state-response (PSR) framework of the Organization for Economic Cooperation and Development [26] by the European Environmental Agency [27]. The DPSIR framework has been widely applied to structure information and identify important relationships [28], due to its ability to capture the cause-effect relationships among the sectors of social, economic, and environmental systems [29][30][31][32][33]. Although the DPSIR framework has been applied to many parameters, defects have gradually emerged. For instance, it is difficult to understand the definition of its information level, and it is difficult to perform the economic interaction analysis between the social and ecological systems [34]. Bowen and Riley suggested the inclusion of welfare in the DPSIR framework [35]. Subsequently, Cooper modified the DPSIR model and proposed the DPSWR model [36]. Gilbert et al. analyzed marine spatial planning and good environmental status using spatial and temporal dimensions produced by the DPSWR framework [37]. The DPSWR framework assumes that its core five indicators have a "ring" relationship with a single direction; however, these five indicators are interrelated (see Figure 2a). Furthermore, the indicators also have a bidirectional relationship. Similar to DPSWR, one of the main aims of the DPSIR framework is to evaluate efficiency [27,28]. Therefore, in the present study, we constructed the DPSWR efficiency framework based on inputs and outputs (see Figure 2b). The solid lines and dotted lines all represent the links between the various indicators; however, the solid lines represent the direction, which is based on the direction of the input-output relationships discussed in this paper. We can consider Figure 2b to be the result of the change in Figure 2a, and the solid lines in Figure 2b highlight the contents of this study. To understand the rationality of combining the DPSWR framework with the DEA model, along with the selection of inputs and outputs, we deconstructed the DPSWR framework from the ring structure into a linear structure, and we connected the annual DPSWR together (Figure 3). State and welfare represent the final results of the DPSWR, while driving force, pressure, and the response of the previous year represent the necessary input elements for the final results. In the DPSWR framework, driving force represents changes to the economy and ecological environment; pressure represents the resources consumed by humans; response represents the positive policies and measures formulated by humans to address economic and ecological environment changes. Thus, we selected driving force, pressure, and response as inputs. State represents the status of the economic and ecological environment, and welfare represents the final results of economic changes. Thus, we selected state and welfare as outputs.
DEA Model
The DEA model was proposed by two American operations researchers, A. Charnes and W. W. Cooper, in 1978 to assess relative efficiency [38]. DEA is being increasingly used in a wide range of fields, because it provides a simple way to deal with the relative efficiency of inputs and outputs [39]. However, classical DEA methods measure efficiency based on the CCR model and BCC model suggested by Farrell [40]. These models belong to the radial and linear-segment measurement theory, and do not consider the effect of slack, which may lead to errors in the measures of efficiency. In addition, the classical DEA method does not incorporate the external environment and random error of the main body; consequently, the resulting efficiency score may underestimate or overestimate the actual level of efficiency [41]. Subsequently, Tone, Fukuyama, and Weber proposed a non-radial and non-angle efficiency evaluation model by introducing input and output slack in the objective function [42,43]. Based on environmental production technology, Tone constructed the slacks-based measure (SBM) model, which contains the unexpected output [42]. The main difference between this model and the classical CCR and BCC models is that the objective function contains the slack variables. Therefore, the problem of non-zero slack in inputs or outputs was solved, and the unexpected output of the production process was also handled [44]. However, these changes failed to resolve the problem of effectively distinguishing the efficiency of decision-making units (DMUs). Based on the SBM model, Tone proposed the Super-SBM efficiency model [45], whose objective takes the form

ρ = min [1 − (1/N) Σ_{n=1}^{N} s_n^x / x_n^0] / [1 + (1/(M + I)) (Σ_{m=1}^{M} s_m^y / y_m^0 + Σ_{i=1}^{I} s_i^b / b_i^0)],

where ρ represents the efficiency value; N, M, and I represent the number of inputs, expected outputs, and unexpected outputs, respectively; s_n^x, s_m^y, and s_i^b are the slack variables of the inputs, expected outputs, and unexpected outputs; and 0 < ρ ≤ 1. The production unit is fully efficient when ρ = 1, while it has efficiency loss when ρ < 1.
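The Super-SBM program is a fractional program that is usually solved after a Charnes-Cooper linearization, and a full implementation is beyond the scope of a short listing. For orientation, the sketch below solves the much simpler classical input-oriented CCR envelopment model with scipy.optimize.linprog; it is an illustrative stand-in on hypothetical data, not the Super-SBM model used in this paper.

```python
# Input-oriented CCR efficiency via linear programming (illustrative only;
# the paper itself uses Tone's Super-SBM model with undesirable outputs).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Efficiency of DMU k. X: (n_inputs, n_dmus); Y: (n_outputs, n_dmus)."""
    n_dmus = X.shape[1]
    c = np.zeros(n_dmus + 1)
    c[0] = 1.0                                   # minimize theta
    # X @ lam <= theta * x_k   ->   -theta * x_k + X @ lam <= 0
    A_in = np.hstack([-X[:, [k]], X])
    # Y @ lam >= y_k           ->   -Y @ lam <= -y_k
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(X.shape[0]), -Y[:, k]]),
                  bounds=[(0, None)] * (n_dmus + 1),
                  method="highs")
    return res.fun                               # theta* in (0, 1]

# Hypothetical data: 3 inputs (driver/pressure/response indices),
# 2 outputs (state/welfare indices), 11 DMUs (coastal provinces).
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(3, 11))
Y = rng.uniform(1, 10, size=(2, 11))
print([round(ccr_efficiency(X, Y, k), 3) for k in range(11)])
```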
Kernel Density Estimation
For small-sample events, it is not accurate to estimate the density function directly [46]; kernel density estimation, as a non-parametric density estimation method, represents a good solution to this problem [47][48][49]. For a random sample x_1, x_2, . . ., x_n, the kernel density estimate takes the form

f̂(x) = (1/(nh)) Σ_{i=1}^{n} K((x − x_i)/h),

where K((x − x_i)/h) is a weighting (kernel) function and h is the bandwidth. The Gaussian kernel is used here according to the intensity of the packet data, and is expressed as K(u) = (1/√(2π)) exp(−u²/2).
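A minimal sketch of the Gaussian kernel density estimate defined above; the sample array and bandwidth are hypothetical stand-ins for the yearly efficiency scores.

```python
import numpy as np

def gaussian_kde(samples, grid, h):
    """Evaluate f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h) on a grid,
    with the Gaussian kernel K(u) = exp(-u**2/2) / sqrt(2*pi)."""
    u = (grid[:, None] - samples[None, :]) / h          # shape (G, n)
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return k.sum(axis=1) / (len(samples) * h)

scores = np.array([0.62, 0.71, 0.55, 0.88, 0.67])       # hypothetical
grid = np.linspace(0.0, 1.2, 200)
density = gaussian_kde(scores, grid, h=0.08)
```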
Hierarchical Cluster
The clustering process of hierarchical clustering, which is also called system clustering, is widely used [51] and is performed at a certain level. There are two types of hierarchical clustering: Q clustering and R clustering. Q clustering is used to cluster the samples, with similar characteristic values clustered together; R clustering is the clustering of variables. Here, we used R clustering, with the Euclidean distance as the distance for fixed-type variables:

d_ij = [Σ_k (x_ik − x_jk)²]^{1/2},

where d_ij = d_ji, and d_ij is the distance between i and j. In accordance with the requirements of the Euclidean distance method, Ward's method, which merges clusters by minimizing the increase in the within-cluster sum of squared deviations, is used for clustering. After G_p and G_q are combined into G_r, the recursive formula for the distance from G_k is

D²(G_r, G_k) = [(n_p + n_k) D²(G_p, G_k) + (n_q + n_k) D²(G_q, G_k) − n_k D²(G_p, G_q)] / (n_r + n_k),

where n_p, n_k, n_r, and n_q are the numbers of samples in G_p, G_k, G_r, and G_q, respectively.
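Ward-linkage clustering with Euclidean distance is readily available in SciPy; the sketch below groups the 11 coastal provinces by their (hypothetical) yearly efficiency series into three clusters, mirroring the three categories reported in the abstract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: rows = 11 coastal provinces, columns = yearly efficiency
rng = np.random.default_rng(1)
eff = rng.uniform(0.4, 1.0, size=(11, 10))

Z = linkage(eff, method="ward", metric="euclidean")  # Ward requires Euclidean
labels = fcluster(Z, t=3, criterion="maxclust")      # cut into three groups
print(labels)                                        # cluster id per province
```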
Selection of Indicators
China's coastal areas, which account for 14% of the country's territorial area, support 38% of its population, which is still increasing at a high rate. Thus, the high density and high growth rate of the coastal population is the main driving force for the development of the marine economy and ecology. Here, we selected two indicators, the natural population growth rate and population density, which reflect the problem of the high population density that we are currently facing. Since the population continues to increase, this issue must be addressed in coastal areas. In parallel, the population also reflects the situation of labor, corresponding to the use of labor as an investment in the DEA model. Tourism and fisheries represent two other driving forces of development in coastal areas. Tourism, fisheries, and aquaculture reflect the driving force of natural resource endowment in the various coastal provinces. China has always been committed to improving the level of urbanization, and urbanization reflects the momentum of economic development as a driving force.
Pressure originates from two sources: energy consumption and the destruction of the marine environment caused by human economic and social activities. Here, we selected three indicators to describe energy consumption, namely per capita water use, per capita electricity consumption, and GDP energy intensity. Water and electricity account for most of the energy consumed in agriculture and industry. GDP energy intensity is the main index reflecting the level of energy consumption and the state of energy saving and consumption reduction. This indicator shows the extent of energy utilization in a country's economic activities, and reflects changes in economic structure and energy use efficiency. We also selected two indicators to describe destruction, namely wastewater discharge and solid waste discharge. The wastewater, waste gas, and solid waste discharged by industry form the main pressure on the environment; however, the data on industrial waste gas emissions are relatively scarce, so only wastewater discharge and solid waste discharge were selected for this study. Furthermore, energy consumption and destruction of the marine environment represent the energy and environmental capital inputs in the DEA model.
Under the influence of driving force and pressure, state describes the present situation of China's coastal areas. Here, we selected four indicators to describe the economic situation, namely the specific gravity of the marine secondary industry, the specific gravity of the marine tertiary industry, marine comparative labor productivity (CLP), and the proportion of the Gross Ocean Product in the Gross Regional Product. The specific gravities of the marine secondary and tertiary industries reflect the marine industrial structure in a coastal area. Marine comparative labor productivity (CLP) refers to the production capacity of each employee per unit of time according to the value of production. It is an important indicator of the economic activity of an enterprise, and of the comprehensive performance of enterprise production, technology level, management level, technical proficiency, and the labor enthusiasm of the workers. The formula for CLP is as follows:

CLP_i = GOP_i / LABOR_i,

where GOP_i represents the GOP of region i, and LABOR_i represents the number of jobs in region i. The proportion of the Gross Ocean Product in the Gross Regional Product reflects the contribution of the marine economy to the regional economy. Its formula is as follows:

(GOP_i^t − GOP_i^{t−1}) / GDP_i^t,

where GOP_i^t − GOP_i^{t−1} represents the value added to the GOP of region i in year t, and GDP_i^t represents the GDP of this region. We selected one indicator to describe the ecological situation, namely water quality. We selected two indicators to describe the situation of society, namely the Human Development Index (HDI) and the Gini coefficient. Good economic development has provided many favorable factors for human society, some of the most important being a long and healthy life, the acquisition of knowledge, and a decent living standard. The HDI, which includes these three factors, was introduced by the United Nations in its 1990 Human Development Report and has gone through 25 years of continuous improvement. The HDI was selected in this study to express the state of society, in order to show the general situation of human health, knowledge, and living standards. The Gini coefficient shows the fairness of social income distribution.
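A minimal numerical sketch of the two state indicators just defined (all figures hypothetical):

```python
def clp(gop: float, labor: float) -> float:
    """Comparative labor productivity: GOP per employed person in region i."""
    return gop / labor

def gop_contribution(gop_t: float, gop_prev: float, gdp_t: float) -> float:
    """Share of region i's year-t GOP increment in its year-t GDP."""
    return (gop_t - gop_prev) / gdp_t

print(clp(600.0, 2.5))                          # hypothetical: billion yuan per million jobs
print(gop_contribution(600.0, 560.0, 6400.0))   # hypothetical increment share
```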
Affected by the state, welfare was described by the growth rate of the Gross Ocean Product for the situation of economic development, per capita Gross Ocean Product for the status of social welfare, and per capita marine ecosystem services and CO2 emissions for the ecological status. The Gross Ocean Product is the final reflection of all ocean-related economic activities in the national economy. In terms of the accounting framework, the accounting system of GOP is the same as that of GDP. Moreover, given that this study focuses on the coastal area, it is more reasonable to choose GOP rather than GDP. The coastal area has been the fastest-growing region in China, the ocean contribution is vital to it, and the growth rate of GOP has exceeded that of GDP in recent years in China. Therefore, in this study, the growth rate of GOP was selected to represent economic development in coastal areas. However, when we selected the indicators to represent the level of social development, statistical data reflecting ocean-related per capita disposable income were unavailable. The principle that, under normal circumstances, greater per capita GDP means greater per capita disposable income also holds in coastal areas; thus, greater per capita GOP means greater ocean-related per capita disposable income. Therefore, we selected per capita GOP to reflect the level of social welfare in the coastal area. Marine ecosystem services are the benefits that people derive from the marine ecosystem directly or indirectly; therefore, per capita marine ecosystem services were selected to represent the ecological benefits of coastal areas in this study.
Among these parameters, CO2 emissions are a negative indicator, representing the adverse effects caused by ecological destruction (e.g., vegetation damage) and the excessive use of fossil fuels. This paper therefore treats CO2 emissions as a negative indicator and applies the negative-indicator formula in the standardization of the indicators. This means that the higher the CO2 emissions, the poorer the welfare level; conversely, the lower the CO2 emissions, the higher the welfare level. An important criterion for measuring the quality of people's lives is the quality of the air, and CO2 emissions are directly related to air quality. In addition, CO2 emissions, as a negative indicator, can be used to measure the welfare effect of temperature, because CO2 emissions are the main cause of the greenhouse effect and global warming.
For response, two indicators were selected to describe the change of economic development; namely, the degree of contribution of the marine industry and marine industrial structure change index.These parameters correspond to marine secondary industry specific gravity, marine tertiary industry specific gravity, and the proportion of the Gross Ocean Product in the Gross Regional Product in the state.Three indicators were selected to describe social progress with respect to the degree of openness, science and technology, and education; namely, international tourism (foreign exchange) income, revenue of marine scientific research institutions, and the status of education.Three indicators were selected to describe the response to poor ecological environment; namely, investment in environmental protection, marine reserves in coastal regions, and wastewater treatment rate.
Contribution degree of the marine industry. In May 2003, the strategic aim of the "gradual establishment of a powerful marine country" was formally launched in the "National Marine Economy Development Plan", which guided China in implementing marine development and opening up the marine economy. In 2012, the report of the 18th National Congress of the Communist Party of China clearly proposed the strategic objectives of "improving the capacity of marine resource development and developing the marine economy". A series of local documents also focus on the development of the ocean. These policy documents place the development of the marine economy at the focus of future work, with the hope that the marine economy can promote regional economic development. The result of the implementation of these policies has been to improve the contribution rate of the marine industry to regional economic development. Therefore, we selected the contribution degree of the marine industry to express the government's support for marine development.
Marine industrial structure change index.In the backdrop of the global economic crisis, China is adjusting and upgrading the industrial structure, and the marine industry is no exception.This industry adjustment is not simply to raise the proportion of the tertiary industries, which would have warranted selection of the proportion of tertiary industries as the indicator.For example, Hainan is an important tourist destination in China, and it has a high proportion of tertiary industries.The policy of reform of the industrial structure needs an increase in the proportion of the secondary industries in Hainan.That is the reason we chose marine industrial structure change index, rather than the proportion of the tertiary industries.
International tourism (foreign exchange) income."China Ocean Agenda 21" and the "National Marine Economy Development Plan" proposed coastal tourism as a key for the marine industry.This indicator not only reflects the government's support for coastal tourism, but also reflects the results of the government policy to improve the ecological environment.The government has introduced a number of policies for the ecological environment, but the extent to which these policies will improve the ecological environment is still unknown.As is obvious, a good ecological environment will attract more people to travel, especially in the coastal areas, which are based on the natural landscape rather than cultural landscape.Therefore, more tourism revenue also indicates a better ecological environment, indirectly.In addition, opening up to the outside world has been a basic national policy of China for a long time, especially in the coastal areas.This is the reason why we choose international tourism (foreign exchange) income, rather than the domestic tourism revenue.Moreover, international tourism (foreign exchange) income was selected as a response indicator because it can reflect the economic development, ecological environment, and the degree of opening to the outside world caused by the implementation of the policy.
Revenue of marine scientific research institutions.The marine economy has already been a new growth point of economic development.In this context, the developed countries in the world regard marine science and technology as the most important factor in accelerating the development of the marine economy, and China is no exception.Marine scientific research institutions are at the forefront of marine science and technology, and the country has invested a lot of money into such institutions.We would also like to include direct capital investment, which China provides to each province and city as an indicator but, unfortunately, reliable data in this regard are unavailable.Therefore, we choose revenue of marine scientific research institutions to reflect the government's policy on marine science and technology.
The four indicators listed under response in Table 1, namely the contribution degree of the marine industry, the marine industrial structure change index, international tourism (foreign exchange) income, and the revenue of marine scientific research institutions, are the results of certain responses rather than the responses themselves. Most of the literature describes policies as indicators of response, and some reports directly use the number of policies as a response. There are problems with this: the effectiveness of each policy is not the same, and the implementation of a policy lags behind its adoption. Therefore, if we want to quantify policy, we can only take the results of the implementation of the policy as indicators. For this reason, in the present study, we selected the results of certain responses rather than the responses themselves. Two further indicators, marine reserves in coastal regions and the education situation, are also the results of certain responses, reflecting the strength of environmental protection and of education. Investment in environmental protection and the wastewater treatment rate are direct indicators of response, representing the effort to prevent environmental damage by human beings.
Processing of Indicators
The unit and the degree of importance of each index in the DPSWR framework differ. Thus, we adopted the entropy method to process the data in DPSWR. The calculation steps of the entropy method are as follows. Original data normalization: if an increase in the variable value results in a worse situation, x'_ij = (x_max - x_ij) / (x_max - x_min); if an increase in the variable value results in a better situation, x'_ij = (x_ij - x_min) / (x_max - x_min), where x'_ij is the standardized value of indicator j for location i, and x_max and x_min are the highest and lowest original values of that indicator across all locations.
To place each index on the same scale, we calculate the proportion of location i under index j: p_ij = x'_ij / sum_{i=1..n} x'_ij, where n represents the number of locations and m represents the number of indices. The entropy of index j is e_j = -k sum_{i=1..n} p_ij ln(p_ij), where k = 1/ln(n) and e_j >= 0. The coefficient of variation of index j is then g_j = 1 - e_j.
Normalizing the coefficients of variation gives the weight of index j: w_j = g_j / sum_{j=1..m} g_j. The evaluation score of location i is then F_i = sum_{j=1..m} w_j x'_ij. Each indicator and its weight are shown in Table 1.
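The entropy-weight calculation described above can be summarized in a short script. The sketch below is a generic implementation of these steps, assuming a locations-by-indicators matrix; it is not the original processing code, and the negative-indicator handling mirrors the standardization rule given earlier.

```python
import numpy as np

def entropy_weights(X: np.ndarray, negative_cols=()) -> tuple:
    """Entropy-weight method as described above.
    X: (n locations x m indicators) matrix of raw values.
    negative_cols: indices of indicators where a larger value means a worse situation.
    Returns (weights w_j, evaluation scores F_i)."""
    X = X.astype(float)
    n, m = X.shape
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    # 1. Min-max standardization; reverse the scale for negative indicators.
    Z = (X - xmin) / (xmax - xmin)
    for j in negative_cols:
        Z[:, j] = (xmax[j] - X[:, j]) / (xmax[j] - xmin[j])
    # 2. Proportion of location i under indicator j.
    P = Z / Z.sum(axis=0)
    # 3. Entropy e_j (0 * ln 0 treated as 0) and coefficient of variation g_j = 1 - e_j.
    k = 1.0 / np.log(n)
    logP = np.zeros_like(P)
    np.log(P, where=P > 0, out=logP)
    e = -k * (P * logP).sum(axis=0)
    g = 1.0 - e
    # 4. Normalized weights and weighted evaluation score of each location.
    w = g / g.sum()
    F = Z @ w
    return w, F
```

With an indicator such as CO2 emissions listed in negative_cols, a larger raw value lowers both the standardized value and the resulting score, which is the intended treatment of negative indicators.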
Efficiency Evaluation
The efficiency of each region in each year was calculated with the MaxDEA software (Beijing, China), using the evaluation values obtained with the entropy-weight method (see Figure 4 for the results). To show that there is no simple linear relationship between the efficiency values and the evaluation scores, the DPSWR evaluation values are also reported in Figure 4. The evaluation results of DPSWR obtained with the entropy method showed a steady trend, with minimal differences among the various provinces and cities. Thus, the status of sustainable development was stable, remaining at the same level across China's coastal area. This level was below 0.4 and varied little, which is precisely why the efficiency analysis is important and necessary: the evaluation scores alone cannot differentiate the regions.
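The efficiencies in Figure 4 were obtained with MaxDEA; the exact model (for example, a slacks-based or super-efficiency variant, which can yield scores above 1) is configured within that software. Purely as an illustration of the underlying idea, a basic input-oriented CCR DEA can be solved as one linear program per region. The sketch below uses scipy and placeholder input/output matrices and is not the model run in this study.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Input-oriented CCR efficiency for each decision-making unit (DMU).
    X: (n DMUs x p inputs), Y: (n DMUs x q outputs). Returns theta per DMU."""
    n, p = X.shape
    q = Y.shape[1]
    theta = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Input constraints:  sum_k lambda_k * x_k <= theta * x_o
        A_in = np.hstack([-X[o].reshape(p, 1), X.T])
        # Output constraints: sum_k lambda_k * y_k >= y_o
        A_out = np.hstack([np.zeros((q, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(p), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        theta[o] = res.fun
    return theta

# Each row of X could hold a region's input scores (driving force, pressure, response)
# and each row of Y its output scores (state, welfare), as in Figure 2b.
```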
Tianjin, Hebei, Guangxi, and Hainan have high efficiencies, basically maintained at more than 1; thus, these four provinces and cities are relatively effective. The efficiency of Fujian was maintained at 0.6-1, which is comparatively high relative to the other areas. The efficiencies of Liaoning, Shanghai, Zhejiang, and Guangdong were mostly maintained at 0.4-0.6, indicating a relatively low sustainable development efficiency in these four provinces. The efficiency of Jiangsu remained below 0.4, indicating that it is relatively ineffective.
The efficiency in Shandong changed over time.It was relatively ineffective in 2004 (below 0.5), became highly effective from 2005 to 2009 (exceeding 1), and became ineffective again in 2010-2013 (falling below 0.4).We hypothesize that the decline in the efficiency of Shandong after 2009 was caused by an increase in the overall level of sustainable development.The main reason for this phenomenon is because the revenue of marine scientific research institutions, which is an important response indicator, increased more than 10 times from 1847.47 million yuan in 2009 to 18,897.31 million yuan in 2010.In contrast, the state and welfare did not change to a similar degree in this short time, resulting in a decrease in efficiency.However, this result should not be treated with pessimism, because the increase in investment in science and technology will be reflected through various achievements in forthcoming years, with an increase in efficiency representing one of these achievements.
In 2011 and 2012, the overall efficiency of the coastal provinces and cities was more than 0.8, indicating that China's coastal areas were relatively efficient but that there remained room for growth. To make further comparisons, we analyzed the efficiencies along the time and space dimensions.
Time Series Analysis
We selected the efficiency results from 2004, 2008, and 2013, and incorporated them in the kernel density estimation using Eviews 8.0 (Denver, CO, USA) (see Figure 5 for the results).The shape of the efficiency distribution (Figure 5) shows that efficiency presented a significantly skewed distribution, and did not form a single peak.The efficiency distribution was skewed to the left from 2004 to 2013, but weakened in 2008.This pattern indicates that the ratio of provinces and cities with high efficiencies increased from 2004 to 2008, and decreased from 2008 to 2013.The position of the efficiency distribution shows that the efficiency of marine sustainable development shifted to the right and then to the left.This pattern indicates that the efficiency of China's coastal area increased and then decreased.The kurtosis of the efficiency distribution showed that the value of efficiency, which corresponded to the peak, first increased and then decreased.This pattern shows that the proportion of provinces and cities with high efficiency declined, following an initial increase.In parallel, the efficiency distribution showed a development situation from a sharp peak to a wide peak, and returned to a sharp peak again from 2004 to 2013.The kurtosis in 2004 was similar to 2013, which was mostly due to the low efficiency of the provinces and cities.The area that corresponded to the peak of efficiency distribution in 2008 was large, indicating that efficiency improved in most provinces and cities from 2004 to 2008, with noticeable acceleration in provinces and cities with low efficiency.
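The kernel density estimates in Figure 5 were produced with Eviews 8.0; an equivalent estimate can be sketched as follows, where the efficiency vectors are random placeholders standing in for the DEA scores of the 11 provinces and cities in each year, not the study's results.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Illustrative only: random placeholders stand in for the 11 provincial efficiency
# scores per year; substitute the actual DEA results behind Figure 4.
rng = np.random.default_rng(0)
scores_by_year = {year: rng.uniform(0.2, 1.2, size=11) for year in (2004, 2008, 2013)}

grid = np.linspace(0.0, 1.5, 300)
for year, scores in scores_by_year.items():
    density = gaussian_kde(scores)(grid)   # Gaussian kernel, default (Scott) bandwidth
    plt.plot(grid, density, label=str(year))

plt.xlabel("Efficiency")
plt.ylabel("Kernel density")
plt.legend()
plt.show()
```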
The time trend of efficiency was basically consistent with the development of China's marine economy. After joining the WTO in 2001 and adopting more open market access, China's marine economy developed rapidly after 2003, with an average growth rate of Gross Ocean Production of 16.05% from 2004 to 2007. However, because China was affected by the global economic crisis, its marine economy developed more slowly thereafter, with an average growth rate of Gross Ocean Production of 9.88% from 2008 to 2013. The speed of economic growth affected changes in efficiency, both directly and indirectly, which also explains the change in the efficiency distribution over the time series.
Space Sequence Analysis
We incorporated the efficiency values of each province and city into Equation (6) using SPSS 19.0 (Chicago, IL, USA) to perform the hierarchical clustering (see Figure 6 for results). Based on the hierarchical cluster (Figure 6), the 11 coastal provinces and cities were separated into three categories: high efficiency, low efficiency, and large changes in efficiency. Combining the classification results in Figure 6 with the efficiency changes shown in Figure 4, we find that Liaoning, Shanghai, Jiangsu, Zhejiang, and Guangdong belong to a low-efficiency group of 0.4-0.6, with a small range of change (i.e., <0.2). Tianjin, Hebei, Guangxi, and Hainan belong to a high-efficiency group of >1.0, with a larger range of change (about 0.4). Shandong and Fujian belong to a high-efficiency group (averages of 0.8 and 0.7 for Fujian and Shandong, respectively), with the largest range of change (with extremes of 0.48 and 0.03 in Fujian and 0.70 and 0.12 in Shandong).
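The hierarchical clustering behind Figure 6 was carried out in SPSS 19.0; the following sketch reproduces the general procedure with scipy. The linkage choice (Ward linkage on Euclidean distances) and the placeholder efficiency matrix are assumptions made for illustration, not the exact SPSS settings or the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

provinces = ["Tianjin", "Hebei", "Liaoning", "Shanghai", "Jiangsu", "Zhejiang",
             "Fujian", "Shandong", "Guangdong", "Guangxi", "Hainan"]

# Illustrative placeholder matrix: rows = provinces, columns = yearly efficiencies
# (2004-2013); replace with the actual DEA scores shown in Figure 4.
rng = np.random.default_rng(1)
efficiency = rng.uniform(0.2, 1.2, size=(len(provinces), 10))

# Agglomerative clustering (Ward linkage, Euclidean distance), cut into the
# three groups discussed in the text.
Z = linkage(efficiency, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
for name, group in zip(provinces, labels):
    print(f"{name}: group {group}")
```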
While the provinces were grouped according to the trend of change in efficiency (Figure 6), the results differed only minimally from a grouping by efficiency level. This shows that the efficiency of China's coastal areas is basically stable, with large fluctuations being rare. Furthermore, the four provinces and cities with the highest efficiency are located at the two ends of the coastline, in the northern (Bohai Rim) and southern (South China Sea) coastal areas, respectively. These four provinces and cities radiate far inland and have geographical advantages for trade with China's neighbors, which contribute to their high efficiency.
Conclusions and Suggestions
At present, most studies evaluating sustainable development are based on the evaluation score of an index system. The DPSWR framework is an important evaluation method of this kind because it incorporates welfare. However, for regions with similar evaluation results (e.g., Figure 4), an efficiency analysis helps guide policy-makers and environmental protection agencies on the direction of sustainable development. In parallel, the DEA model is widely used for evaluating efficiency but is sensitive to the indicators selected, and grey correlation and principal component analysis fail to resolve this issue of comprehensive indicator selection. In contrast, the DPSWR framework provides a good foundation for selecting reliable indicators for the DEA model.
Here, we used the combined DPSWR framework and DEA model to measure the efficiency of sustainable development in the provinces and cities of China's coastal areas. Four provinces and cities (Tianjin, Hebei, Guangxi, and Hainan) were found to be relatively effective. Fujian was relatively effective as well, whereas Liaoning, Shanghai, Zhejiang, and Guangdong, along with Jiangsu, had low effectiveness. Shandong was relatively effective from 2004 to 2007 but relatively ineffective from 2008 to 2013. Analysis along the time dimension showed that the efficiency of China's coastal areas changed with the marine economy, increasing from 2004 to 2007 and then returning to its original level. From the perspective of the spatial analysis, Tianjin, Hebei, Guangxi, and Hainan had high efficiency, whereas Liaoning, Shanghai, Jiangsu, Zhejiang, and Guangdong had low efficiency. Shandong and Fujian formed a third group with a large amplitude of variation across time. Although the amplitude of the variation in efficiency in Fujian and Shandong is large, clear patterns remain: efficiency fluctuated around 0.8 in Fujian, while in Shandong it exceeded 1.0 from 2004 to 2007 and then dropped to about 0.35 from 2008 to 2013.
The efficiency values reported in some studies using DEA often change greatly within a short period of time; however, this is not consistent with the actual situation. Without major economic turmoil or policy support, efficiency cannot change greatly over a short period. Large variation in efficiency values is mainly caused by large variation in the selected indicators; it is therefore more valuable to choose comprehensive evaluation indicators. With the DPSWR indicators used here, efficiency was mostly maintained at the same level over short periods in this study, which is closer to the actual situation. The classification obtained from the spatial analysis was similar whether the provinces were grouped by the trend of change or by the size of the efficiency values. Thus, this technique can help direct policy-makers by providing more accurate and comprehensive efficiency results, ensuring that attention is correctly placed on low-efficiency areas or areas with major changes in efficiency.
To enhance efficiency, output must be improved and input reduced.We should reduce input to reduce the input of driving forces, pressure, and response.However, we should also improve the level of state and welfare, which is the output of the efficiency.The augmentation of the driving force and the response provides positive support for the increase in state and welfare.Therefore, to improve efficiency, we must allocate reasonable driving force and response factors.In parallel, we must reduce the consumption of energy and environmental resources, which are indicators of pressure.As the output, the internal elements of state and welfare should be increased with emphasis, rather than being based on the average distribution of limited resources in the different areas with respect to the degree of importance.For instance: Driving force.Population density accounts for a very large proportion (34%).China's coastal areas support high population densities.To ease this problem, we must increase the driving effect of the economy from coastal provinces and cities to inland provinces and cities, facilitating a shift in the population to inland provinces and cities.To improve efficiency, we should reduce the number of people and improve the quality of life of the population, both physically and mentally, by increasing investment in education and health.Tourism and urbanization are important for sustainable development, and must continue to be improved and strengthened, despite representing a small percentage in DPSWR.In addition, there is a small gap between the percentage of fisheries and aquaculture, which should be utilized by reducing fisheries and promoting the development of aquaculture.
Pressure.Each pressure indicator was associated with the environment or energy consumption.Thus, to improve the efficiency, we must reduce the input of these indicators.Based on the weight of each indicator, we know how to reduce the consumption of these categories.For energy consumption, GDP energy intensity represents the largest percentage, and is calculated from standard coal consumption.In contrast, the percentage of per capita electricity consumption is the smallest.Thus, we should reduce the consumption of standard coal, and use electricity or other cleaner sources of energy instead.Per capita water use also accounted for a certain percentage; thus, we should strengthen water saving education, and improve the way of water use, to reduce overall water consumption.The percentage of two indicators in environmental consumption, wastewater discharge and solid waste discharge, was also very large.Strengthening waste water purification technology and improving the comprehensive utilization of solid wastes is imminent.
Response.Revenue generated by marine scientific research institutions, marine reserves in coastal regions, and international tourism (foreign exchange) income accounted for the largest response percentage.However, these three indicators are the main responses to promote sustainable development as inputs.Therefore, we should continue to increase investment in science and technology, environmental protection, and openness to improve the efficiency.The degree of contribution of the marine industry, wastewater treatment rate, and education status accounted for the smallest response percentage.Despite the increased amount of these three response indicators, they had a small effect at improving efficiency, but did contribute towards improving the overall level of sustainable development.Therefore, we should continue to increase investment in the marine industry for overall economic support, as well as improve the wastewater treatment rate and increase investment in education.
State.Two indicators for evaluating the social state, HDI and the Gini coefficient, accounted for 36% of state; thus, strengthening policy support of social parameters is particularly important, especially for the improvement of the Gini coefficient, which means promoting social justice.The percentage of the Gross Ocean Product in the Gross Regional Product accounted for 20% of state; thus, continuing to strengthen the construction of the marine industry is important for improving sustainable development.In addition, the percentage of marine tertiary industry specific gravity was greater than the marine secondary industry specific gravity; thus, transferring the percentage of the marine tertiary industry to the secondary industry, and optimizing the industrial structure, could improve efficiency.
Welfare.The per capita Gross Ocean Product accounted for the largest percentage of welfare, and was greater than the sum of the percentage of any other two indicators combined.Therefore, it is important to improve the per capita Gross Ocean Product to enhance welfare.We must broaden marine innovation and investment channels, optimize industrial structure, promote the development of high-tech industry to construct the whole industrial chain of the innovation network for the marine environment, and promote the upgrading and transformation of the whole industrial chain.Per capita marine ecosystem services also accounted for more than a quarter of the percentage, thereby increasing investment in environmental protection.Furthermore, the supervision of environmental protection policies must be strengthened.
Figure 1. Development of China's 11 coastal provinces and cities. Note: Sansha city was founded in 2012; thus, data were lacking, and this city was excluded from the analyses.
Figure 2. The DPSWR framework. (a) The DPSWR framework assumes that its core five indicators have a "ring" relationship with a single direction; (b) the DPSWR efficiency framework based on inputs and outputs.
Figure 5. Time series analysis of efficiency by kernel density estimation.
Figure 6. Space sequence analysis of efficiency by hierarchical cluster.
Table 1. Indicators and their weights in the DPSWR framework.
Comment on esd-2021-67
This paper is a tour de force compendium on the latest scientific results relevant to understanding recent and future climate change on the Baltic Sea Region. The authors and the Baltic science community are to be commended in taking this on in a way that builds on and updates the two previous BACC assessments. It is particularly effective that the assessment is linked with the efforts of HELCOM. It sets a high standard for climate change assessments for regional seas in other parts of the world.
Thank you very much for the thorough review and excellent comments. We will follow your suggestions and will revise the manuscript accordingly. We are impressed by your review work.
This summary paper brings together and depends on the results of nine specialty papers or BEARs, which I have not reviewed. Nor have I been charged with reviewing the consistency of the summary with the BEARs, but trust the authors to ensure that consistency. The construct wherein each environmental variable is treated under present climate change, future climate change, and knowledge gaps and research needs results in some redundancy. This could be alleviated somewhat by reference to the corresponding previous section without repeating narrative and references. The section on concluding remarks does help bring these all together.
Within Baltic Earth, we performed prior to submission an internal review with two reviewers that were not involved in the BEARs to guarantee consistency.
The list of knowledge gaps and research needs is rather daunting, and rather depressing, as it seems that virtually everything is uncertain and unknown, and equally so. Clearly, this is not the case. Concluding that section with a brief consideration of the knowledge gaps and research needs that are most critical to determining the future Baltic and most potentially resolvable with concerted research would be helpful.
We will add a concluding paragraph.
The words uncertain, uncertainty, and uncertainties are used some 104 times in the paper, and often incautiously. Frequently, there are better terms to describe the nature of these so-called uncertainties. They may be a result of inadequate knowledge rather than inherent uncertainty, or they might actually be deep uncertainties. In particular, when future changes depend on steps society might take to limit greenhouse gas emissions, these seem not so much uncertain as yet to be determined. Some fine-tuning of the uncertainty language would help.
We will fine-tune the uncertainty language.
While the paper indicates that it will follow the terminology used by the IPCC concerning the degree of confidence in statements, as it does in the section on key messages, it is not as careful when it comes to the term "likely", which is used some 64 times, and it does not seem to differentiate among as-likely-as-not, likely, very likely, or virtually certain as per the use of likelihood terms in IPCC parlance. Similarly, the term unknown is used quite a bit, without differentiating among completely unknown, largely unknown, not fully known, or incompletely understood. These might be more accurate descriptors in places.
We will check and possibly be more specific in the revised version. However, for many variables the information about confidence levels does not exist at the regional scale because large ensembles do not exist. We will explain this fact in the introduction and modify the definition of our terminology.
While this paper was developed prior to the release of the IPCC Sixth Assessment in August 2021, it is appearing after this release. Virtually all the literature cited used earlier GCM results, although some results based on CMIP6 models are discussed. While it is impossible and unreasonable for this paper to attempt to incorporate or compare Sixth Assessment models and conclusions in any great detail, it would be useful if the authors wrote brief comments about the extent to which conclusions might be affected by the new IPCC assessment, perhaps after section 1.5.5. My sense is that they wouldn't dramatically affect the conclusions. Recent literature (e.g. Hausfather and Peters, 2020, as cited in this paper) makes the point that the RCP8.5 pathway and the associated 4°C warming during this century is highly unlikely to occur and, in fact, the IPCC AR6 essentially admits this. Something between RCP7.0 and RCP4.5 is probably the maximum warming without substantial mitigation. Perhaps this point can be made more strongly in this paper. In fact, it would be informative to mention in key places where mitigation measures would affect key climate drivers, if and as society significantly reduces emissions and the use of fossil fuels (e.g. this would affect N deposition, shipping, plastics, etc.).
Following the comments of both reviewers, differences between CMIP5 and CMIP6 and how these differences may affect our results will be discussed.
Specific comments:
What are the current Baltic Earth Grand Challenges, a listing or, at least, a reference (line 128).
We will add the Grand Challenges and a reference.
Farther (not further) poleward as this refers to distance (line 2290)
Will be corrected.
The future scenarios for shipping do not include a future where the use of hydrocarbon fuels, or at least emissions of CO2, is greatly restricted to meet GHG reduction requirements. Could the authors speculate what this might mean? (lines 2353)
This is a valid point. The currently available studies for shipping in the Baltic Sea do not include the strong fossil fuel emission reductions the IMO is postulating (i.e., 50% less greenhouse gas emissions by 2050). The IMO secretary-general states in the foreword of the Fourth IMO Greenhouse Gas Study: "The Study demonstrates that whilst further improvement of the carbon intensity of shipping can be achieved, it will be difficult to achieve IMO's 2050 GHG reduction ambition only through energy-saving technologies and speed reduction of ships." New synthetic fuels or alternative propulsion are needed. We are currently running our CTM for scenarios addressing those requirements. Results will be available in spring next year at the earliest.
In a synthesis paper for Shipping in the Baltic Sea, we will show that under current legislation and quite strong energy efficiency assumptions (stronger than the EEDI from IMO), CO2 emissions in the BS will drop down to only about 78% in 2040 compared to the value in 2014. Other measures are needed, while the use of LNG as fuel is a good solution for reducing air pollutants like NOX, SO2 and PM, CO2 emissions remain considerable. Methane slip during transport and operation can even compensate for the reduced CO2 emissions. This paper will not be published in time for this summary.
We will close the paragraph with a more general remark (line 2369): "The pollutant concentrations reported in this section may drop to as yet unknown lower values if the shipping sector is (partly) successful in meeting the IMO target of a 50% reduction in greenhouse gas emissions by 2050. This reduction is only possible if low-carbon alternative fuels are introduced; employing a high energy efficiency, as already considered in the scenarios used in Karl et al. (2019a), will not be sufficient. The new fuels will also lead to altered emissions of pollutants."
In this section, is the use of "likely" versus "very likely" consistent with IPCC? (line 2372)
We will change "may have an influence" because it is difficult to state this cause-and-effect relationship for all systems of the hydrosphere.
This paragraph refers to both mitigation measures and adaptation measures. This is confusing in light of the way those terms are used in climate change assessments. (lines 2439-2244)
We agree and we will use "anthropogenic measures".
Here again adaptation and mitigation are both used in a somewhat confusing way (lines 2450-2464).
We will rephrase the paragraphs.
This sentence is unclear. (line 2556-2557)
We suggest the following alternative formulation: "The authors argued that a more comprehensive assessment of forest management as a strategy to achieve the goals of the Paris Agreement should go beyond the reduction of atmospheric CO2 and, thus, the reduction of the radiative imbalance at the top of the atmosphere. They suggested…"
To which models are you referring? Do you mean under both RCP2.6 and RCP8.5 emissions pathways? (line 2568-2569)
We will rephrase the first paragraph to clarify the changes under the various RCPs.
There is a need for a reference for this paragraph; I assume it is BACC II Author Team (2015). These conclusions about decreased pH are contradicted somewhat by the previous discussion in section 3.2.5.7.2. (lines 2878-2883)
We will add the BACC II Author Team (2015) reference and clarify the apparent contradiction.
The concept of retreat of marine species may not be clear for readers unfamiliar with the Baltic Sea; perhaps this can be more accurately stated as reduced penetration of marine species into the Baltic Sea. (line 3024)
We will add new text in line 3024: "… many species in the Baltic Sea. Because of the projected decline in salinity, a reduced penetration of marine species, such as bladderwrack, eelgrass and blue mussel, into the Baltic Sea has been predicted (Vuorinen et al., 2015). A large number of other species is affiliated with such keystone species, and species distribution modelling has indicated that, e.g., a decrease of bladderwrack will have large effects…"
Here again, it seems to be assumed that shipping will continue to depend on the use of fossil fuels. (lines 3236-3237)
Isn't it more accurate to state that how these practices will change in response to climate change is yet to be determined? (line 3245-3246)
We will rephrase this sentence to emphasize the research needs in this respect.
Perhaps state that there is YET little direct evidence THAT THIS IS OCCURRING? (lines 3261-3262)
Thank you; as for the previous comment, we will rephrase the sentence.
It is mentioned earlier that warmer temperatures should allow the establishment of more nonindigenous species. This bears repeating here. (line 3263).
We will add the information.
Won't microplastics also be greatly affected by societal decisions about the use of plastics, in part influenced by efforts to decarbonize? (line 3283-3285)
Thank you, that is a good point; changes in the use of plastics, and regulations in response to the problem, can be expected, but the effects are uncertain. We will add a short sentence in this respect.
Should the authors continue to use Celsius rather than Kelvin here? (line 3399)
Yes, we decided to use Celsius throughout the manuscript and will change this here.
. . . strongly affected by whether warming is allowed to proceed to the point of destabilizing Antarctic ice sheets. (line 3605)
We will rephrase the sentence.
What does it mean to have low confidence in a statement that changes could not be detected? (line 3750)
The confidence level refers to our knowledge about the change in large-scale circulation and not to the specific statement. We will explain the confidence levels better.
If this trend is almost statistically significant, why isn't it medium confidence, just less than a 95% threshold for high confidence? (line 3829)
We agree and will change to medium confidence.
Why is there only low confidence in the statement that larger runoff would lead to larger nutrient inputs? (lines 3920-3921)
We agree and will change to medium confidence.
“Green” Extraction and On-Site Rapid Detection of Aflatoxin B1, Zearalenone and Deoxynivalenol in Corn, Rice and Peanut
The common mycotoxins in polluted grains are aflatoxin B1 (AFB1), zearalenone (ZEN) and deoxynivalenol (DON). Because of the potential threat to humans and animals, it is necessary to detect mycotoxin contaminants rapidly. At present, lateral flow immunoassay (LFIA) is one of the most frequently used methods for rapid analysis. However, multistep sample pretreatment processes and organic solvents are also required to extract mycotoxins from grains. In this study, we developed a one-step and "green" sample pretreatment method without using organic solvents. By combining it with LFIA test strips and a handheld detection device, an on-site method for the rapid detection of AFB1, ZEN and DON was developed. The LODs for AFB1, ZEN and DON in corn are 0.90 μg/kg, 7.11 μg/kg and 10.6 μg/kg, respectively, and the working ranges are from 1.25 μg/kg to 40 μg/kg, 20 μg/kg to 2000 μg/kg and 35 μg/kg to 1500 μg/kg, respectively. This method has been successfully applied to the detection of AFB1, ZEN and DON in corn, rice and peanut, with recoveries of 89 ± 3%–106 ± 3%, 86 ± 2%–108 ± 7% and 90 ± 2%–106 ± 10%, respectively. The detection results for the AFB1, ZEN and DON residues in certified reference materials by this method were in good agreement with their certificate values.
Introduction
Mycotoxins are among the secondary metabolites released by moulds, particularly fungi, which mainly include aflatoxin B1 (AFB1), zearalenone (ZEN) and deoxynivalenol (DON) [1][2][3][4]. The International Agency of Research on Cancer (IARC) classified AFB1 into Group 1, which includes substances with sufficient evidence to support their carcinogenicity in humans [5][6][7][8][9]. Structurally, zearalenone is similar to 17β-oestradiol, which can cause abortion, stillbirth and teratogenesis, and can cause symptoms of central nervous system poisoning and even death [2,10,11]. Deoxynivalenol (DON), also known as vomiting toxin, can result in vomiting [2,12,13]. Mycotoxins may be produced during the production, processing, transport and storage of grain [14][15][16][17]. According to FAO data [18], approximately 25% of wheat, corn, sorghum and rice are polluted by mycotoxins every year. In addition, studies have shown that most mycotoxins cannot be eliminated in the food processing and cooking process [19]. Therefore, some risks of mycotoxin exposure cannot be ignored in food and its products. If rapid and on-site methods can be developed to detect mycotoxin contamination in food in the fields, factories, grain depots, shopping malls and even in homes, people can find dangerous foods more efficiently, thereby reducing health risks to humans and animals.
At present, mycotoxin detection methods in grain can be mainly divided into two categories. One is quantitative analytical methods, including high-performance liquid chromatography (HPLC) [17], liquid chromatography-mass spectrometry (LC-MS) [20,21] and gas chromatography-mass spectrometry (GC-MS) [20,22], which use large-scale instruments with high sensitivity and accuracy for the quantification of mycotoxins. However, the sample pretreatment process of those methods is normally laborious and requires a professional person to operate. The other is rapid analysis methods based on immunological analysis methods, such as enzyme-linked immunosorbent assay (ELISA) [23], fluorescence immunoassay (FFIA) [24], and lateral flow immunochromatography (LFIA) [25][26][27][28], but they also need multistep sample pretreatment processes to prepare test solutions. For example, Li et al. developed a quantum dots (QDs) fluorescence LFIA method to simultaneously detect AFB1, ZEN and DON, in which the sample was extracted by methanol: water (60:40, V/V). After vortexing and centrifugation, the supernatant was diluted with PB (0.01 M, pH 7.4) at a ratio of 1:6 to prepare the solution to be tested [29]. Similar sample pretreatment strategies have been used in other studies that use immunoassays to detect mycotoxins [27,[30][31][32][33]. Not only is this process time-consuming and laborious, but the use of organic solvents increases the risk of endangering human health and polluting the environment. In addition, users are also faced with an increase in storage, transport, management and other costs because of the use of organic solvents, such as methanol and ethanol, which are inflammable and explosive dangerous goods. Obviously, the methods mentioned above are not suitable for on-site and household use, so it is necessary to find a "green" and environmentally friendly solution to extract mycotoxins from foods and develop a simple sample pretreatment method that does not rely on bulky instruments.
The water solubilities of AFB1, ZEN and DON are very different; DON is easily soluble in water, AFB1 is hardly soluble in water, and ZEN is almost insoluble in water. Compared with DON, AFB1 and ZEN are difficult to extract with aqueous solutions, which may be the reason why organic solvents are often used for extraction in existing methods. In our previous work, we found that fatty alcohol polyoxyethylene ether (AEO) surfactants can increase the partition coefficient of ZEN in the water phase of an oil-water mixture system [34]. AEO surfactants are non-ionic surfactants that are often used as detergents, defoamers, emulsifiers, levelling agents, etc. AEO surfactants have low skin irritation and good biodegradability, so they are friendly to the human body and environment compared to organic solvents. Based on this, we were inspired to attempt to extract ZEN, AFB1 and DON from grain with solutions containing AEO. If ZEN can be extracted, AFB1 and DON with lower relative lipophilicities will also be extracted. Therefore, we can establish a sample pretreatment method without using organic solvents. By combining this method with LFIA test strips and a hand-held detection device, a method for the rapid detection of AFB1, ZEN and DON, three mycotoxins in grain, can be established, which will be suitable for field and even family use.
In this study, we first studied the extraction efficiency of AEOs with different chemical structures for AFB1, ZEN and DON in grain samples. Then, we selected a specific AEO solution that can simultaneously extract these three mycotoxins, and optimized the concentration, volume and extraction time of the extractant solution. Finally, we established a LFIA method that can be used for on-site rapid detection of AFB1, ZEN and DON.
Optimization of the Composition of the Mycotoxin Extraction Solution
In this study, it was necessary to first screen mycotoxin extraction solutions without organic solvents that are suitable for extracting AFB1, ZEN and DON from grain, to increase the convenience of detection and achieve the goal of "green" extraction. Certified reference materials (CRMs) of blank corn flour and of corn flour containing AFB1, ZEN and DON respectively were used as grain samples to screen the surfactants in the extraction solution. The surfactant solution with the highest extraction rate was selected as the mycotoxin extraction solution. We first investigated the extraction effect of AEO surfactants on AFB1 in CRMs of corn flour containing AFB1. As shown in Figure 1a, compared with the buffer, the extraction rates of the AEO7, AEO9, AEO15 and Brij35 solutions were higher than 70%; in particular, the extraction rate of the extraction containing AEO15 was the highest, reaching 87.96%, indicating that these extraction solutions could effectively extract AFB1 from corn flour. Next, we investigated the extraction efficiency of AEO surfactants on ZEN in CRMs of corn flour containing ZEN. The extraction rates of solutions containing surfactants AEO7, AEO9 and AEO15 for ZEN in corn flour were significantly higher than those of AEO3 and Brij-35, as shown in Figure 1b. Therefore, we next investigated the extraction rate of DON in CRMs of corn flour containing DON, because of the high extraction efficiency of the extraction solutions containing AEO7, AEO9 and AEO15, respectively. As shown in Figure 1c, there was no significant difference in the extraction rate of DON from corn flour by the extraction solutions containing AEO7, AEO9 and AEO15. Considering the influence of the extraction rate and matrix effect on the extraction of AFB1, ZEN and DON from grain, the extraction solution containing AEO15 was selected as the optimal extraction solution for the extraction of the three mycotoxins from corn flour. Compared to extractants containing organic solvents [35][36][37], the AEO15 solution has a similar extraction efficiency for AFB1, ZEN and DON, but is more "green".
Optimization of the Concentration of Mycotoxin Extraction Solution
Then, the concentration of the AEO15 solution was optimized for the extraction of AFB1, ZEN and DON from corn flour. First, the effect of different AEO15 concentrations on the extraction rate of AFB1 in corn flour was investigated. As shown in Figure 1d, the extraction rates of the four groups were between 80% and 110%, which met the detection requirements. Next, the effect of the AEO15 concentration on the extraction rate of ZEN in corn flour was investigated. As shown in Figure 1d, when the concentration of AEO15 was 10 mM, the extraction rate was 54.95%. Increasing the AEO15 concentration did not significantly increase the extraction rate. Finally, the effect of the AEO15 concentration on the extraction rate of DON from corn flour was investigated. The results in Figure 1d showed that the extraction rates from the investigated AEO15 solutions were all between 80% and 110%, and there was no significant difference among the groups. Because the extraction solution containing 20 mM AEO15 had the highest extraction rate for AFB1, ZEN and DON in corn flour, 20 mM was selected as the optimal concentration of AEO15 in the extraction solution.
Optimization of Extraction Time
The effects of extraction time on the extraction rates of AFB1, ZEN and DON in grain samples were also investigated. To investigate the effect of extraction time on the extraction rate, the selected extraction solution and sample (20 µg/kg) were mixed at a volume ratio of 1:30, and 1, 2, 4 and 6 min were set as the extraction times. After standing for 5 min, the influence of different extraction times on the extraction rate was investigated. As shown in Figure 2a, the extraction rate was close to 100% for AFB1 and DON after 1 min of extraction, but for ZEN, the extraction rate after 2 min was significantly higher than that after 1 min, and there was no significant difference in the extraction rate at extraction times of 2 min, 4 min and 6 min. Therefore, 2 min was finally selected as the optimal extraction time for simultaneously extracting AFB1, ZEN and DON from grain. Due to the usual steps of vortexing, centrifugation and dilution, the traditional sample pretreatment process requires more than 25 min [30,35], while only 4 min was required in this method, which greatly reduces the time for the entire detection.
Optimization of Preparation Conditions for TRF-LFIA Test Strips
The preparation conditions of the TRF-LFIA test strips for the simultaneous detection of AFB1, ZEN and DON were optimized, as shown in Supplementary Materials: Figure S1. Finally, 0.3 mg/mL AFB1-BSA, 0.05 mg/mL ZEN-BSA and 0.5 mg/mL DON-BSA were selected as coating antigens for the TRF-lateral flow strip. Under the optimized preparation conditions, the detection ranges for AFB1, ZEN and DON in solution were 0.03-0.9 µg/L, 0.3-25 µg/L and 1.0-50 µg/L, respectively.
Optimization of Extraction Volume
One gram of CRMs of corn flour spiked with AFB1, ZEN and DON at concentrations of 20 µg/kg, 60 µg/kg and 1000 µg/kg, respectively, and CRMs of blank corn flour were extracted with 20, 30 and 40 mL of extraction solution under the optimized conditions. The experimental results (Figure 2b) showed that the I% values for AFB1, ZEN and DON (76.9%, 28.5% and 72.2%) were in the detection ranges of AFB1, ZEN and DON, respectively, when 30 mL of extraction solution was used. Therefore, 30 mL was selected as the optimized extraction volume. Compared to the pretreatment processes of existing approaches [35,36,38,39], which use 20 mL methanol or acetonitrile and shaking for 30 min to extract mycotoxins from grains, followed by centrifuging for 15 min and then dilution of the supernatant at different ratios [37], the sample pretreatment method of this work was simple and easy to operate.
Standard Curves
The standard curves for detecting AFB1, ZEN and DON in corn flour are shown in Figure 3. The standard curves of AFB1, ZEN and DON were based on the four-parameter logistic equation, and the working ranges were from 1.51 µg/kg to 40 µg/kg, 20 µg/kg to 2000 µg/kg and 35 µg/kg to 1500 µg/kg, respectively. The LODs of AFB1, ZEN and DON were 0.90 µg/kg, 7.11 µg/kg and 10.6 µg/kg, respectively. The LOQs of AFB1, ZEN and DON were 1.51 µg/kg, 20.3 µg/kg and 35.4 µg/kg, respectively. The maximum residue limits were 20 µg/kg for AFB1, 60 µg/kg for zearalenone and 1000 µg/kg for deoxynivalenol in cereals and their processed products [35], all within the working range of the three standard curves of AFB1, ZEN and DON in this study. As the residue limits of AFB1, ZEN and DON are quite different, the extracts must be diluted for detection at different ratios for different mycotoxins in other studies [36,40]. However, the method developed in this study can simultaneously extract three mycotoxins directly from grains without dilution for detection. Therefore, this approach simplifies the operation process and is easier for nonprofessional users, which can help realize on-site testing and home testing.
Recovery of AFB1, ZEN and DON in Grains
To evaluate the accuracy of this method, CRMs of blank corn flour, rice flour and peanut purchased from TMRM were spiked with various concentrations of AFB1, ZEN and DON and detected using the strips.
The recovery was calculated using the following equation: Recovery% = (C1 − C0)/C × 100%, where C1 is the detected mycotoxin concentration of the sample, C0 is the background mycotoxin concentration of the sample and C is the spiked mycotoxin concentration. As shown in Table 1, the average recoveries in the three substrates varied from 89 ± 3% to 106 ± 3% for AFB1, 86 ± 2% to 108 ± 7% for ZEN and 90 ± 2% to 106 ± 10% for DON. These results indicated that AFB1, ZEN and DON in corn, rice and peanut can be quantitatively determined simultaneously using the developed method with acceptable precision.
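As a check on the arithmetic, the sketch below applies the spike-recovery formula above; the concentration values are hypothetical and are not taken from Table 1.

```python
# Minimal sketch of the spike-recovery calculation; concentrations (ug/kg) are invented.
def recovery_percent(c1_detected, c0_background, c_spiked):
    """Recovery% = (C1 - C0) / C x 100."""
    return (c1_detected - c0_background) / c_spiked * 100.0

# Example: blank background 0 ug/kg, spike 20 ug/kg, detected 21.2 ug/kg.
print(f"{recovery_percent(21.2, 0.0, 20.0):.0f}%")  # 106%
```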
Cross-Reactivity
To test the specificity of the detection method, the standard curves and IC50 values of AFB1 analogues, ZEN analogues and other common mycotoxins in corn were obtained by the same procedure as outlined above, and the cross-reactivity of the different AFB1 analogues, ZEN analogues and other common mycotoxins, including AFB2, AFG1, AFG2, α-ZEL, β-ZEL, α-ZOL and β-ZOL, was calculated using the following equation: CR% = (IC50 for target mycotoxins/IC50 for the analogue) × 100%.
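The cross-reactivity formula quoted above translates directly into a one-line calculation; the IC50 values in the sketch below are hypothetical, not the values behind Table 2.

```python
# Minimal sketch of the cross-reactivity calculation; IC50 values (ug/L) are invented.
def cross_reactivity_percent(ic50_target, ic50_analogue):
    """CR% = (IC50 for target mycotoxin / IC50 for the analogue) x 100."""
    return ic50_target / ic50_analogue * 100.0

print(f"CR = {cross_reactivity_percent(ic50_target=0.2, ic50_analogue=4.0):.1f}%")  # 5.0%
```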
The test strips for AFB1, ZEN and DON detection had low cross-reactivity with these analogues, which demonstrated that they were highly specific for AFB1, ZEN and DON detection in grains (Table 2). Compared with studies that simultaneously detect multiple mycotoxins [36], the CRs of anti-AFB1 mAbs for AFB2, AFG1 and AFG2 in this method are similar, but the CRs of anti-ZEN mAbs for α-ZOL and β-ZOL are lower than in the previous literature [38]. From the results, we observed that the antibodies used in this study were specific. In applications of multi-mycotoxin determination, this method is valuable to some extent for screening rice and corn samples that contain AFB1, ZEN, DON and other mycotoxin analogues.
Assessment of the Trueness
To further verify the reliability of this method, the bias between the results of the established method and the certified concentration (as the reference value) was compared by using the method established in this study to detect the commercial CRMs in Table 3. All CRMs used in this study were from naturally polluted grain samples, and the content of mycotoxins in all materials was confirmed by the Chinese national standard method. The bias was calculated as follows: Bias% = (X − X0)/X0 × 100%, where X is the detected mycotoxin concentration of the sample and X0 is the concentration of the reference value. Although these materials are derived from natural samples contaminated with mycotoxins, they should still be referred to as CRMs rather than real samples.
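The following sketch applies the bias formula above to a single hypothetical measurement against a CRM reference value; the numbers are illustrative only.

```python
# Minimal sketch of the bias calculation against a CRM reference value; values (ug/kg) are invented.
def bias_percent(x_detected, x0_reference):
    """Bias% = (X - X0) / X0 x 100."""
    return (x_detected - x0_reference) / x0_reference * 100.0

print(f"{bias_percent(x_detected=17.0, x0_reference=20.0):+.0f}%")  # -15%
```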
Here we evaluated the trueness of the method by calculating the bias between the results of the established method and the reference concentrations of the CRMs according to the Eurochem Guide [41]. For the seven CRMs with varying AFB1 concentrations from low to high (maximum residue limits), the bias between the test results of this method and the certified concentration ranged from −17% to 8%, showing that the developed method can quantitatively detect all concentrations below the maximum residue limit concentration. For the six CRMs with different concentrations of ZEN from 32 µg/kg to 750 µg/kg in corn and rice, the bias ranged from −11% to 7%. The bias for the five CRMs containing DON at concentrations from 125 µg/kg to 1310 µg/kg was between −15% and −6%. The results indicate that the trueness, excellent anti-interference ability and accuracy of the method established in this study can meet the needs of quantitative detection of AFB1, ZEN and DON in grains.
Chemicals and Reagents
Preparation of mAb-TRF Nanoparticle Conjugates
First, 100 µL TRF-MS (0.5%) was added to a centrifuge tube. After 15 min of sonication, 200 µL of EDC (0.25 mg/mL) and sulfo-NHS (2 mg/mL) in MES buffer (50 mM, pH 5) were added, and the solution was shaken at room temperature for 20 min. It was then dialyzed overnight in PBS buffer (10 mM, pH 7.2) to remove EDC and sulfo-NHS. After dialysis, certain amounts of AFB1 antibodies (0.15 mg/mL), ZEN antibodies (0.15 mg/mL), DON antibodies (0.40 mg/mL) and chicken IgY antibodies (0.30 mg/mL) were added to the solution and kept at room temperature for 1 h 30 min. Then, the microspheres were blocked with BSA (1 mg/mL) for 30 min. Finally, the preservation solution for storage (PB buffer, 5 mM, pH 7.2, containing 5% sucrose, 0.25% Tween 20 and 0.1% Proclin 300) was added to the conjugates and stored in a refrigerator at 4 °C.
Preparation of the TRF-LFIA Test Strip
As shown in Figure 4, the time-resolved fluorescence immunoassay test strip was constructed with four parts as follows: PVC plastic card, sample pad, NC membrane and absorbent pad. The NC membrane was pretreated for 1 h at 25 °C and a humidity of 55%. The XYZ HM3010 dispensing platform was used to dispense ZEN-BSA, AFB1-BSA and DON-BSA on the T line and goat anti-chicken IgY antibody on the C line of the NC membrane, at a speed of 0.5 µL cm −1 . The NC membrane was then dried at 37 °C for 12 h. The sample pad, NC membrane and absorbent pad were pasted onto a PVC plastic card. The card was cut into strips and packed into a customized plastic cartridge.
Detection Procedure
The ground grain sample was mixed with the mycotoxin extract in a certain ratio, shaken for 2 min, and allowed to stand for 2 min. Then, 100 µL of supernatant was dropped into the sample hole of the test strip. The fluorescence values of the T lines and C line were read with a hand-held time-resolved fluorescence measuring instrument after 20 min. Then, the concentrations of AFB1, ZEN and DON were calculated by the inhibition rate % and the working curve. The inhibition rate (I%) was calculated using Equation (4): I% = (1 − (T/C of sample)/(T/C of blank)) × 100%. (4)
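As a quick illustration of Equation (4), the sketch below computes the inhibition rate from hypothetical T/C readings of a sample strip and a blank strip.

```python
# Minimal sketch of Equation (4); the T/C readings are invented.
def inhibition_rate_percent(tc_sample, tc_blank):
    """I% = (1 - (T/C of sample) / (T/C of blank)) x 100."""
    return (1.0 - tc_sample / tc_blank) * 100.0

print(f"I% = {inhibition_rate_percent(tc_sample=0.35, tc_blank=1.20):.1f}")  # 70.8
```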
Selection of the Mycotoxin Extraction Solutions
The six kinds of AEO surfactants (AEO3, AEO7, AEO9, AEO15, AEO9P and Brij-35) were prepared with PB buffer (pH 7.2, 50 mM, with 0.3% Tween 20) at a concentration of 5 mM as the mycotoxin extraction solution. They were mixed with CRMs of blank corn flour to make the blank extraction solutions. The supernatants of the blank extraction solutions were used to prepare the different concentrations of AFB1, ZEN and DON solutions for the standard curves. Then, the surfactant solutions were mixed with certified corn flour reference materials containing 20 µg/kg AFB1, 85 µg/kg ZEN and 1310 µg/kg DON at a volume ratio of 30:1 (surfactant solution: grain) and shaken by hand for 2 min. After standing for 2 min, 100 µL of the supernatant was dropped onto the sample pad of the test strip for three tests, and the prepared standard curve solutions were dropped at the same time. After 20 min, the test strips were placed in a hand-held strip reader, and the T/C of the T lines and C line were measured. The inhibition rate (I%) of each mycotoxin extract was calculated and compared with the standard curves to determine the actual detection concentration. The extraction rate was calculated using Equation (5): Extraction rate % = (actual concentration × dilution ratio)/certified reference material concentration × 100%.
The surfactant solution with the highest extraction rate % was selected as the mycotoxin extraction solution. Finally, an optimal surfactant extract with a good extraction effect for the three mycotoxins in grain could be selected.
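Equation (5) above can be expressed as a short calculation; in the sketch below the back-calculated concentration, dilution ratio and CRM concentration are placeholders rather than measured values.

```python
# Minimal sketch of Equation (5); the input values are invented for illustration.
def extraction_rate_percent(actual_conc, dilution_ratio, crm_conc):
    """Extraction rate % = (actual concentration x dilution ratio) / CRM concentration x 100."""
    return actual_conc * dilution_ratio / crm_conc * 100.0

# Example: 0.59 ug/kg measured in a 30-fold diluted extract of a 20 ug/kg CRM.
print(f"{extraction_rate_percent(0.59, 30, 20):.1f}%")  # 88.5%
```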
Standard Curves
One gram of the CRMs of blank corn flour was spiked with 200 µL of AFB1 standard, ZEN standard and DON standard at different concentrations to make spiked samples (AFB1: 0, 0.5, 1.5, 3, 5, 7.5, 15 and 25 µg/kg; ZEN: 0, 25, 50, 100, 250, 500, 1000 and 2000 µg/kg; DON: 0, 25, 50, 100, 250, 500 and 1000 µg/kg). After stirring evenly, the sample powder was placed at room temperature to dry, and the spiked samples were extracted according to the above sample pretreatment steps. Taking the AFB1, ZEN and DON concentrations as the abscissa and the average value of I% corresponding to the spiked samples at each concentration as the ordinate, a standard curve was obtained with Curve Expert 1.4.0 software (https://www.curveexpert.net accessed on 15 May 2022).
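A four-parameter logistic fit of this kind can be reproduced with standard curve-fitting tools; the sketch below uses scipy in place of the CurveExpert software named above, and the (concentration, I%) pairs are hypothetical.

```python
# Minimal sketch of a four-parameter logistic (4PL) standard-curve fit; data are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: inflection point, b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.5, 1.5, 3.0, 5.0, 7.5, 15.0, 25.0])        # spike levels, ug/kg
i_pct = np.array([10.0, 22.0, 38.0, 52.0, 63.0, 80.0, 88.0])  # hypothetical mean I% values

params, _ = curve_fit(four_pl, conc, i_pct, p0=[0.0, 1.0, 5.0, 100.0], maxfev=10000)
print("a, b, c, d =", np.round(params, 2))
```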
LOD and Working Range
According to "The Fitness for Purpose of Analytical Methods" [41], 10 low-concentration samples near the blank value were used to calculate the LOD value. Three times the standard deviation (so ) of 10 samples was used as the LOD value. The LOQ value was used as the lower limit of the working range, and the concentration producing obvious signal abnormalities was used as the upper limit of the working range. LOQ is the analyte concentration corresponding to so multiplied by the coefficient ko, where ko = 10. The value of so was used as the standard deviation after correction, so = so/ √ n, where n is the number of samples that were measured.
Conclusions
In this study, AEO15 was selected from six different AEO surfactants to simultaneously extract AFB1, ZEN and DON from grains. The optimized concentration of AEO15 solution was 20 mM, and the volume of extractant was 30 mL/g grain. It only took 4 min to complete the sample pretreatment process. The LFIA method has been established for the detection of AFB1, ZEN and DON in cereals. The detection ranges for AFB1, ZEN and DON were 1.25-40 µg/kg, 20-2000 µg/kg and 35-1500 µg/kg, respectively. The accuracy of this method was verified by good recovery for spiked samples (from 86 ± 2% to 108 ± 7%) and low biases relative to the reference values of CRMs. Because a nonorganic solvent extraction solution was used and the extracted solution could be tested directly without filtration, separation and dilution processes, the pretreatment process is "green", simple and rapid. The method established in this study is suitable for field and family use for the detection of AFB1, ZEN and DON in grain samples.
Radiation Hardness of the Siemens SIPART intelligent valve positioner
About 1400 Siemens SIPART PS2 intelligent valve positioners are installed in the LHC accelerator. They were selected assuming that their non-electronic parts are intrinsically radiation hard. This positioner variant is a split design: the electronic board located in a radiation protected area and the electro-pneumatic unit on the valve stem. The next LHC upgrade will result in a significant increase of the radiation levels and the initial radiation hardness assumption may not hold. A preliminary test on an electro-pneumatic unit was done with a cobalt 60 source. At the end of the test (99.8 kGy) one of the two miniature piezoelectric valves was damaged. After this result, the CERN CALLAB facility was used to irradiate 10 miniature piezoelectric valves while they were powered by an electrical signal. The first failures are observed after a dose of 137 kGy. The paper describes the test protocol and synthesizes the results.
Introduction
During the LHC accelerator design phase, various commercially available digital valve positioners were evaluated [1] in order to avoid using the analog I/P pneumatic control valve positioners. At that time, the main advantages of the new generation of positioners were simpler installation thanks to a common communication cable, automatic calibration, remote configuration, diagnostic data for preventive maintenance and reduced pressurized air consumption. Unfortunately, all of the tested digital (intelligent) valve positioners malfunctioned at Total Integrated radiation Dose (TID) levels as low as 50 Gy. These positioners were therefore not suitable for the radiation environment found in the LHC tunnel.
The observed failures are mainly related to the electronic components and therefore Siemens proposed a split design of the SIPART PS2 intelligent valve positioner. The main electronic components are separated from the electro-pneumatic unit; the former is installed in radiation protected areas and the latter on the valve stem. The split design requires long cables for reading the potentiometer that indicates the opening of the valve and for switching the two piezoelectric miniature valves; the longest cable length used is close to 900 m.
The electro-pneumatic unit was assumed to be intrinsically radiation hard as its main components are a rotary potentiometer and two piezoelectric miniature valves; so far this assumption has held, as no radiation-induced failure has ever been reported on the 1400 split SIPART PS2 valve positioners operating continuously for at least 10 years in the LHC. However, during 2016 it was estimated that some pneumatic units could be exposed to a TID exceeding 100 kGy during the LHC high luminosity operation [2]. Therefore a radiation qualification campaign was undertaken in order to understand the radiation effects on the electro-pneumatic unit. This paper presents the main results.
SIPART PS2 intelligent valve positioner
The LHC-type Siemens SIPART PS2 electro-pneumatic unit uses two piezoelectric miniature valves [3] to regulate the pressure of a single acting spring loaded pneumatic valve actuator. The electro-pneumatic units are mounted on cryogenic valves (Figure 1) located around the LHC cryogenic distribution line (QRL) and are exposed to the tunnel radiation environment. For the LHC tunnel, the remote electronics is located in drawers compatible with 19" racks containing up to 3 clusters of five SIPART PS2 control electronics (five is the typical quantity of cryogenic valves clustered around a QRL jumper connection to the superconducting magnets) communicating through a Profibus-PA network (Figure 2). The LHC detectors use a different remote electronics unit made of three SIPART PS2 control electronics using 4-20 mA communication housed in a 19" enclosure.
As there were no irradiation tests performed in the past, during 2015 a single SIPART PS2 electro-pneumatic unit was irradiated at the Fraunhofer Institute for Technological Trend Analysis (INT). The unit was installed in the radiation area without any electrical or pneumatic connections; the TID was 99.8 kGy and the pneumatic unit afterwards showed a malfunction in one of the two piezoelectric miniature valves. There were no intermediate checks and the data do not permit an assessment of the maximum TID level that this particular pneumatic unit can withstand. The 99.8 kGy TID level is relatively low within the scope of the future radiation environment of the LHC HiLumi upgrade. A radiation test campaign was therefore required to understand how to design a maintenance scenario to keep all the SIPART PS2 units in good operating condition in the future LHC environment.
Past SIPART PS2 failures in the CERN equipment are mostly related to either the piezoelectric miniature valve releasing the air of the spring loaded valve actuator (as was the case after the irradiation test) or the rotary potentiometer. None of the failures could be attributed to radiation, as the dose levels involved were indeed low, and the repair is performed by exchanging the complete pneumatic unit. The piezoelectric miniature valve is produced by Hoerbiger Origa [4] as Original Equipment Manufacturer (OEM); it is a 3/2 way valve made of a piezoelectric bending element actuated by a DC voltage switching the process port between the normally open and closed ports (Figure 3).
Test set-up
The piezoelectric miniature valves are not available commercially and 20 units were provided to CERN by Siemens in order to evaluate the influence of radiation effects. On the SIPART PS2 the piezoelectric valves are actuated by a train of voltage pulses; under these conditions the valve functions as a purely ON/OFF valve. According to the data-sheet [4], the main operational specifications are a 1.2 barg inlet pressure, 1.5 l/min nominal flow, a leakage through the closed port not exceeding 0.15 l/min and a 24 Vdc operating voltage. The pressure and flow ranges applied to the spring loaded pneumatic actuator are adjusted by a second set of valves operated through the piezoelectric miniature valves [4].
Functional test set-up
Typical damage on the SIPART PS2 pneumatic unit is a fault on either the rotary potentiometer or the piezoelectric miniature valve that releases the air of the spring loaded valve actuator.
Damage on the rotary potentiometer is observed as noise on the valve stem position feedback; the origin of the problem is probably dust or oxidation on the wiper track. Radiation damage would be expected to provoke a gradual change of the wiper track resistance. Such an effect is easily corrected by recalibrating the positioner and, if present, the change is less than 10% for a TID of 100 kGy.
The piezoelectric miniature valve is assumed to be the most sensitive element to radiation damage. To evaluate whether a fault is present, the volumetric flow is measured when the valve is either fully open or fully closed (Figure 4). The pressurized air volumetric flow is measured with a thermal mass flowmeter (range 0.03 to 6 standard liters per minute) and the inlet pressure is regulated and measured in order to keep similar conditions for all the tests.
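A simple pass/fail check of this kind can be scripted as below. The 1.5 l/min nominal flow and the 0.15 l/min closed-port leakage limit are the data-sheet figures quoted earlier; the measured readings and the 50% open-flow threshold are assumptions for illustration, not the criteria actually used in the test campaign.

```python
# Minimal sketch of a functional pass/fail check for a piezoelectric miniature valve.
def valve_ok(open_flow_lpm, closed_leak_lpm,
             nominal_flow=1.5, leak_limit=0.15, open_fraction=0.5):
    opens_properly = open_flow_lpm >= open_fraction * nominal_flow  # valve actually opens
    seals_properly = closed_leak_lpm <= leak_limit                  # closed port does not leak
    return opens_properly and seals_properly

print(valve_ok(open_flow_lpm=0.9, closed_leak_lpm=0.02))   # True
print(valve_ok(open_flow_lpm=0.05, closed_leak_lpm=0.02))  # False: fails to open
```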
Irradiation test set-up
The irradiation test set-up can hold up to ten piezoelectric miniature valves, which are electrically powered in parallel by a standard SIPART PS2 electronic unit (4-20 mA interface), see Figure 5a. The electronic unit is located outside the irradiation area. The rotary potentiometer is kept at a constant value in between the maximum and minimum values of the requested set-point, and the valve position set-point is applied as a square pattern that sends pulses sequentially to the five "release" valves and the five "supply" valves. This set-up does not include a pressurized air supply, as this would add too much complexity to the experimental set-up.
The piezoelectric miniature valves were irradiated in the CC60 irradiation room of CERN's CALLAB facility [5], Figure 5b. The irradiation is started by raising a Co-60 source from the bottom of a shaft; the radiation dose rate depends on the distance between the source and the sample. The piezoelectric miniature valves were placed as close as possible to the Co-60 source in order to get the maximum irradiation rate and be able to reach a TID of at least 100 kGy after a few weeks of testing. During the irradiation, several stops were foreseen in order to check each individual piezoelectric miniature valve with the functional test set-up (Table 1).
Irradiation Tests Results
Figure 6 shows, for ten activated piezoelectric miniature valves (Figure 4b), the air flow rate normalized by the gauge pressure (about 0.97 bar) versus the TID. The functional tests were performed between May and November 2017, with a relatively long non-irradiation cooling period between June 22 and October 11 (Table 1). The first failures were observed after receiving 137 kGy; two valves self-healed, leaving four operational valves at the end of the test after reaching 280 kGy (see Figure 6).
In electronic circuits, thermal annealing during non-irradiation periods often heals radiation damage, at least partially. Surprisingly, six miniature valves failed during the consecutive 111 days of non-irradiation (Figure 6). Radiation effects on the piezoelectric miniature valves are observed exclusively as complete failures to open the normally closed port (Figure 3a).
To check whether the failures could be due to wearing of the piezoelectric miniature valves, nine "new" valves were operated as in the radiation tests but without exposure to the gamma radiation. The duration of this test was about 1100 hours, well above the irradiation duration corresponding to 130 kGy, where the first radiation-induced failures were observed. All of the valves showed a variation of the activated flow (between 0.3 and 0.9 ln/min) similar to that shown in Figure 6.
Conclusion
The radiation tests performed on the piezoelectric miniature valve permit an estimate of the radiation hardness of the pneumatic unit of the SIPART PS2 valve positioner of about 130 kGy; this value is higher than the 99.8 kGy observed in the first test performed at the Fraunhofer Institute. These values are compatible with future LHC HiLumi operation, which does not foresee a TID greater than 70 kGy. The tested piezoelectric miniature valves are radiation-hard COTS (Commercial Off The Shelf) components, as they can operate well above 1 kGy.
For an environment like a particle accelerator that has a very large variation of the radiation dose over a few meters, the radiation hardness of the pneumatic unit can still be increased by splitting the rotary potentiometer and the piezoelectric miniature valves, appropriate operation has been checked when using a 30 m flexible pipe between the pneumatic block and the single acting spring loaded actuator.
The tests were performed with a Co-60 source and it is not clear whether a mixed particle field such as the one found in the LHC would provoke more damage at an equivalent TID value. Unfortunately, qualifying equipment up to 100 kGy with a large spectrum of particles is difficult; it may be feasible, but only by testing a single miniature valve at a time in the IRRAD facility [5].
Retrospective cost analysis of cervical laminectomy and fusion versus cervical laminoplasty in the treatment of cervical spondylotic myelopathy
Background Cervical laminoplasty (CLP) and posterior cervical laminectomy and fusion (CLF) are well-established surgical procedures used in the treatment of cervical spondylotic myelopathy (CSM). In situations of clinical equipoise, an influential factor in procedural decision making could be the economic effect of the chosen procedure. The object of this study is to compare and analyze the total hospital costs and charges pertaining to patients undergoing CLP or CLF for the treatment of CSM. Methods We performed a retrospective review of 81 consecutive patients from a single institution; 55 patients were treated with CLP and 26 with CLF. CLP was performed via the double-door allograft technique that does not require implants, whereas laminectomy fusion procedures included metallic instrumentation. We analyzed 10,682 individual costs (HC) and charges (HCh) for all patients, as obtained from hospital accounting data. The Current Procedural Terminology codes were used to estimate the physicians’ fees as such fees are not accounted for via hospital billing records. Total cost (TC) therefore equaled the sum of the hospital cost and the estimated physicians’ fees. Results The mean length of stay was 3.7 days for CLP and 5.9 days for CLF (P < .01). There were no significant differences between the groups with respect to age, gender, previous surgical history, and medical insurance. The TC mean was $17,734 for CLP and $37,413 for CLF (P < .01). Mean HCh for CLP was 42% of that for CLF, and therefore the mean charge for CLF was 238% of that for CLP (P < .01). Mean HC was $15,426 for CLP and $32,125 for CLF (P < .01); the main contributor was implant cost (mean $2582). Conclusions Our study demonstrates that, in clinically similar populations, CLP results in reduced length of stay, TC, and hospital charges. In CSM cases requiring posterior decompression, we demonstrate CLP to be a less costly procedure. However, in the presence of neck pain, kyphotic deformity, or gross instability, this procedure may not be sufficient and posterior CLF may be required.
Introduction
Cervical spondylosis is a common disorder that results from degeneration of intervertebral discs and hypertrophic ossification of discoligamentous structures within the cervical spine. Resultant cervical spinal stenosis may cause cervical spondylotic radiculopathy (CSR) and cervical spondylotic myelopathy (CSM). Additional pathologies, such as a herniated nucleus pulposus and ossification of the posterior longitudinal ligament (OPLL), may contribute to the development of axial neck pain, CSR, and CSM. Recently, guidelines have been published regarding the natural history, predictive prognostic features, surgical indications for cervical radiculomyelopathy, and means for assessing functional outcomes. [1][2][3][4][5] Significant controversy remains concerning the most appropriate means of operative management.
Posterior cervical procedures, such as cervical laminectomy (CL), cervical laminectomy and fusion (CLF), and cervical laminoplasty (CLP), have been advocated for patients with multisegmental disease (≥2 segments). 4,6,7 CLP has the additional caveat of requiring preserved lordotic cervical alignment. [8][9][10] There have been no large, multicentered, prospective, randomized, controlled trials comparing CLF with CLP and the existing literature is limited to retrospective case series and cohort analyses. [11][12][13] There have been several studies that demonstrate the relative merits of these 2 procedures and their superiority over simple CL. 6,7,[14][15][16][17][18] There are well-described situations in which one procedure may be preferred over the other based on clinicoradiographic features; however, in situations of clinical equipoise, the question of relative cost may be significant. There is essentially no existing literature on the relative cost of CLF in comparison with CLP.
There is a growing concern over the escalating cost of health care, and the relative cost of procedures may ultimately become a component of a surgical decisionmaking algorithm. This is certainly the case in clinical scenarios where both laminoplasty and laminectomy and fusion are deemed to be appropriate treatments. In such scenarios, the advantages and disadvantages of each procedure must be compared to determine the best course of action, and cost may become a relevant issue to both patients and providers. Direct care cost has been defined in the literature as the cost directly associated with intervention (ie, cost of perioperative inpatient management). 19 This excludes both the utilization of outpatient healthcare resources and consideration of lost or gained economic productivity (or return to work potential). Our hypothesis is that CLP has an obvious cost advantage over CLF due to the lack of surgical implants, even if open-door spacer implants are utilized. However, a detailed account of the contributing factors has never been demonstrated. The aim of this study is to analyze the relative direct and indirect (housekeeping etc. are "indirect costs," which are different from outpatient and long-term resource consumption) care costs associated with 2 surgical techniques for subjects with symptomatic cervical disease, CLP and CLF.
Patient population
The institutional review board approved this study before collection of any data. A retrospective chart review was performed at a single institution between 2006 and 2009 for subjects treated for CSM, OPLL, and multilevel CSR. Subjects were treated according to the surgeon's preference, via either variable length CLF (C2-T1 inclusive) or CLP (C2-T1 inclusive). CLF was performed using typical lateral mass screw and rod constructs with C7 and T1 pedicle screw fixation in individual cases; CLP was performed using the "double-door" or "French-door" technique, utilizing cadaveric allograft bone struts with suture fixation. 8,10,17 There was no direct involvement with industry in this study, and therefore no consideration was given to companies providing supportive grants. The double-door technique utilizes cadaveric allograft and suture only, whereas the laminectomy and fusion procedures were completed with metallic implants from a single vendor with no known discount other than the negotiated rate for the institution. No laminoplasty spacers were employed preferentially.
Subject demographic and surgical data were obtained for each individual subject. This included subject's age, gender, length of stay (LOS), surgical technique, revision cases, number of levels decompressed/fused, and method of payment as non-Medicare versus Medicare. A matched subanalysis, focused on patients undergoing C3-7 level decompression, including demographics and the overall cost analysis, was also performed.
Financial data
Individual subject costs, charges, and payment values were obtained from the hospital financial records with regard to all itemized costs for direct care. These costs included, but were not limited to, operating room materials and supplies (ORMS), transfusions, time in the operating room, laboratory results, physical therapy, and inpatient housekeeping. To these costs were added the costs of the physicians' labor (physician cost); physician costs were based on Medicare reimbursement schedules and were comprised of the procedure-specific Medicare reimbursement rates for surgeon, neuromonitoring, and anesthesiologist fees. The Current Procedural Terminology (CPT) codes used for calculating physician fees were taken from the Current Procedural Terminology 2009 Professional Edition, and the Manhattan health referral region adjustment factor was applied to all the fees. 20 This information is kept confidential by institutional policy, as billing rates are shared between insurance and medical device companies, and publication of such information could represent a breach of such a contract with providers.
Physician cost was calculated using the formula described and illustrated later in the article, which accounts for relative value units (RVUs) for both the surgeon and neuromonitoring, as well as the anesthesia rate per procedure (PR). The RVUs are location-specific factors and represent the labor and supply elements required to provide a service. The physician-specific RVUs we used were based on CPT codes and comprised work, practice expense, and malpractice expense values. Each of these individual values is dependent upon geographic location; for our study, these values were adjusted for Manhattan rates. Physician-specific RVUs were multiplied by standard conversion factors (CFs) to calculate the corresponding dollar amount, which represented the Medicare payment to the physician. These CFs vary depending upon the service provided; for our study, we were interested in the surgeon-, anesthesia-, and neuromonitoring-specific CFs. The anesthesia PR was determined by the formula PR = (X/15) + 13, where X is a constant dependent upon procedure. For CLP, this constant is 120; for CLF, it is 180.
Therefore, we calculated the Manhattan-adjusted physician cost for each procedure as the sum of the geographically adjusted costs for the surgeon, anesthesia, and neuromonitoring. Surgeon cost was represented by RVU for surgeon multiplied by the surgeon-specific CF; anesthesia cost was represented by the anesthesia procedure rate multiplied by the anesthesia-specific CF; and neuromonitoring cost was equal to RVU for neuromonitoring multiplied by the neuromonitoring-specific CF (Fig. 1). The CPT codes for CLP included 63051 for surgeon cost, as well as 95920, 95925, and 95926 for neuromonitoring. The CPT codes for CLF included 63015, 22842, 22600, and 22614 for surgeon cost, as well as 95920, 95925, and 95926 for neuromonitoring. This adjustment in cost for the Manhattan region is limited to physician reimbursement and plays no more than a small role in absolute dollar quantities. However, it should be noted that the Manhattan health referral region commands an increased adjustment for both procedures, as Manhattan is considered to be an expensive practice region.
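A minimal sketch of this physician-cost calculation is given below; the RVU and conversion-factor values are invented placeholders, since the actual figures used in the study are confidential, and only the structure of the formula (surgeon RVU × CF, plus anesthesia PR × CF, plus neuromonitoring RVU × CF, with PR = X/15 + 13) is taken from the text.

```python
# Minimal sketch of the physician-cost formula; all numeric inputs below are hypothetical.
def anesthesia_procedure_rate(x_constant):
    """PR = (X / 15) + 13, with X = 120 for CLP and 180 for CLF."""
    return x_constant / 15.0 + 13.0

def physician_cost(surgeon_rvu, surgeon_cf, x_constant, anesthesia_cf, neuro_rvu, neuro_cf):
    surgeon = surgeon_rvu * surgeon_cf
    anesthesia = anesthesia_procedure_rate(x_constant) * anesthesia_cf
    neuromonitoring = neuro_rvu * neuro_cf
    return surgeon + anesthesia + neuromonitoring

# Hypothetical CLP example (X = 120); all RVUs and CFs are invented for illustration.
print(f"${physician_cost(30.0, 36.0, 120, 21.0, 8.0, 34.0):,.2f}")
```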
Total cost, charge, and payment analyses between both groups were performed. Cost has been defined as the value (US dollars [USD]) of resources and supplies consumed in the provision of a service or product. Charge is defined as the assigned price from the provider institution based on the value of the given service or product, with consideration for additional resource expenditure; payment has been defined as the reimbursement received by an institution for the provision of a given service or product.
As charge and payment financial data pertaining to implants and hospital billing records are confidential under institutional policy, values for each procedure are reported as relative units (eg, charge CLP/charge CLF). By way of example, the relative charge unit for CLP patients was determined by dividing the mean total charge for CLP by the mean total charge of CLF. Subsequently, a mean relative charge was reported, with the corresponding P-value representing the statistical comparison of the original USD values. This was also performed to establish relative payments. Detailed and itemized cost comparisons for operating room-related costs (ORRC) and perioperative-related costs (PORC) were then performed based on hospital billing records. Each type of cost, ORRC or PORC, is broken down into several categories of goods and services, and all figures are reported in USD. Under the ORRC analysis, ORMS refer to the cost of grafts, implants, operating room instruments, and operating room materials. This analysis excludes any confidential information regarding suppliers of equipment, resources, or services. Comparison analysis within each group between non-Medicare- and Medicare-insured patients consisted of total cost (USD), charge (relative units), and payment (relative units). Likewise, comparison analysis of procedures within patient insurance type also consisted of total cost (USD), charge (relative), and payment (relative).
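The relative-unit reporting described above reduces to a ratio of group means; the sketch below illustrates it with invented dollar figures, since the real charge data are confidential.

```python
# Minimal sketch of the relative-unit calculation (mean CLP value / mean CLF value); figures are invented.
def relative_unit(clp_values, clf_values):
    clp_mean = sum(clp_values) / len(clp_values)
    clf_mean = sum(clf_values) / len(clf_values)
    return clp_mean / clf_mean

print(round(relative_unit(clp_values=[40_000, 45_000, 50_000],
                          clf_values=[95_000, 100_000, 105_000]), 2))  # 0.45
```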
Statistical methods
Descriptive statistics, including means and standard deviations, were calculated for demographic, operative, and financial data (SPSS v.17, Chicago, Illinois). Nominal variables were analyzed using contingency tables and Fisher's exact test was reported. A Student's t test was used for quantitative variables and the level of significance was set at P < .05.
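As an illustration of the quantitative comparison described above, the sketch below runs a two-sample Student's t test on hypothetical length-of-stay values; scipy is used here as a stand-in for the SPSS package named in the text.

```python
# Minimal sketch of a two-sample Student's t test on invented length-of-stay data (days).
from scipy import stats

los_clp = [3, 4, 2, 5, 3, 4, 3, 4]
los_clf = [6, 5, 7, 6, 5, 8, 6]

t_stat, p_value = stats.ttest_ind(los_clp, los_clf)  # equal-variance Student's t test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```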
Demographics
In our study there were 2 populations, CLP (n = 55) and CLF (n = 26) (Table 1). The 2 groups were comparable in age, gender, rate of prior cervical operations, and method of payment (private insurance vs Medicare). The CLP subjects had a significantly shorter LOS following surgery at 3.7 ± 2.2 days when compared with 5.9 ± 3.2 days in the CLF subjects (P < .01). This finding came despite the fact that the CLP subjects had significantly more levels decompressed with 6.0 ± 1.0 compared with 4.7 ± 0.6 in the CLF subjects (P < .01).
Cost, charge, and payment data
After the financial records were obtained and processed, we identified statistically significant reductions in cost when CLP was performed for cervical spondylosis in comparison with CLF (Table 2). This was true despite significantly more spinal segments being decompressed in the CLP subjects versus the CLF subjects. If we perform a subanalysis comparing just subjects with C3-7 decompressions, we find that LOS, hospital cost, charge, and payment received remain statistically significant (Table 3). Physician cost (including surgeon, anesthesia, and neuromonitoring) and total cost reduction utilizing CLP also remain statistically significant (P < .01).
When cost is broken down into ORRC and PORC, we identify significant contributors to the relative cost of CLP versus CLF, first looking at ORRC (Fig. 2). It should be reiterated that the overall cost for CLP was lower in comparison with the overall cost for CLF, and that these percentages are calculated from significantly different total costs.
Non-Medicare insurance versus Medicare
The results described earlier were also apparent when patients were categorized by insurance type (Medicare vs non-Medicare) before comparison (Table 6). Amongst non-Medicare patients, the mean hospital cost of CLF ($33,336 ± $9720) was significantly greater than that of CLP ($14,762 ± $5093) (P < .01). Likewise, hospital charges for CLF were 1.48 ± 0.95 times the charges for CLP (P < .01), and hospital payments received were 1.54 ± 1.13 times the payments received for CLP (P < .01). These findings were comparable to procedural comparisons of patients covered by Medicare only; CLF was more costly for the hospital (mean cost $30,474 ± $10,870) than CLP ($16,284 ± $4700) (P < .01), it generated 1.39 ± 0.88 times the hospital charges of CLP (P < .01) and resulted in 1.08 ± 0.59 times the payments received for CLP (P < .01).
When CLP and CLF subjects were broken down separately by payer (non-Medicare insurance and Medicare), we did not find many significant differences within the 2 populations.
Discussion
The best management of CSM or CSR in the context of cervical spinal stenosis, herniated nucleus pulposus, and OPLL remains an arena of intense clinical debate. Expert consensus essentially remains that individual patient factors are of the utmost importance in devising the most appropriate management strategy. There are several procedures available in the arsenal of posterior approaches to the cervical spine, including CLF and CLP. In clinical situations where more than one type of procedure could be deemed appropriate, patients and caregivers are forced to weigh multiple factors to determine the best treatment option. Factors that guide decision-making between these procedures include the patient's cervical alignment, axial neck pain, multisegmental (≥2 levels) spondylosis, the presence of OPLL, the extent of cervical cord compression, other patient factors (eg, comorbidities and age), and the surgeon's own preferences. 6,15,21,22 CLP has been deemed the more appropriate procedure in cases of preserved lordosis, no segmental instability, and minimal neck pain. It has been associated with neurologic recovery rates from 41%-81%, based on Nurick grading and Japanese Orthopedic Association outcomes, [23][24][25] though several authors have noted differing recovery based on age, with older patients showing lesser degrees of recovery. 24,26 Despite being considered a motion-preserving procedure, CLP has been demonstrated to reduce ROM in the range of 8°-34°. 17,27,28 In contrast, CLF has been the procedure of choice when patients present with kyphotic deformity, gross instability, and neck pain. CLF can be recommended for the treatment of CSM and OPLL and should be considered equivalent to CL and CLP with regard to functional improvement. 29 The neurologic stabilization and recovery rates range from 51%-97%, based on Nurick and Japanese Orthopedic Association outcomes, 7,30 though early studies demonstrating effective neurologic outcome with CLF had high complication rates that included kyphosis and pseudarthrosis when using onlay bone graft techniques. 30,31 However, a more recent series, with better rates of neurologic recovery when utilizing lateral mass fixation techniques, demonstrated lower rates of complication. 7,32 Therefore, institutional practice is such that cervical laminoplasty is reserved for patients with CSM who have limited axial neck pain and maintenance of neutral or lordotic cervical alignment. Laminectomy and fusion is employed when there is significant axial neck pain, kyphotic deformity, dynamic hypermobility, or instability. However, it is not uncommon for a patient to lack definitive symptoms that would clearly indicate which procedure, CLP or CLF, is preferable; in such situations of clinical equipoise, the importance of defining clinical superiority between these 2 procedures is overshadowed by the individual patient factors and surgeon's preferences that influence the decision-making algorithm of the surgeon. The existing comparative literature between CL, CLP, and CLF, however, is surprisingly scarce. Several authors have demonstrated equivalent rates of postoperative neurologic recovery and improvement in CLF and CLP, 12,33-35 though CLP is suspected to result in reduced ROM, 36 and there have been conflicting reports of which procedure is more advantageous with respect to the rate of postoperative kyphosis.
33,[35][36][37] A 2001 independent matched-cohort analysis of CLF versus CLP concluded that CLP may be the preferable procedure owing to reduced complication rates and improved functional outcomes. 11 However, a more recent study has suggested that both procedures offer effective and comparable functional outcomes, and that an RCT would be necessary to determine the superiority of either modality. 29 An additional factor that deserves attention, and which previously has not been considered, is the relative economic cost of these techniques.
We have chosen to evaluate the 2 most common posterior cervical techniques from a simplified economic perspective. Assessment of their total costs and relative charges and hospital payments based on institutional data is illustrative of differences in the perioperative setting. Overall, our results demonstrate that CLP carries a lower cost, presents a reduced relative charge to payers and results in a lower relative payment to the institution. This is largely a result of direct ORMS costs from an ORRC perspective and increased LOS from PORC perspective. In the evaluation of this type of data, there are several interested parties including the patient, the providers (physician and hospital), the payers (subjects, Medicare, and private insurers), and policy makers (Government and society). Overall, CLP is the superior procedure from a direct and indirect short-term care cost perspective if reduced cost is the goal ( Table 2).
The institution incurs significantly lower short-term care costs in providing CLP in comparison with CLF. The major ORRC factors involved are the surgical implants. It is relevant to note, however, that the cost of CLP can be increased incrementally by the utilization of an "open-door technique" employing customized implantable spacing devices and plates. 8,10 Likewise, the overall cost of CLF may be reduced through the judicious inclusion of levels in the fusion construct and the number of implantable screws and rods. These factors may be considered in the context of overall patient outcome, which obviously must not be sacrificed. Within PORC, the room expense is the largest factor, which is certainly a consequence of increased LOS. In reality, it is likely that all of the statistically increased PORCs demonstrated are a product of increased LOS. This makes reductions in LOS an important target from an economic perspective, in addition to the clinical benefits of reduced hospital admission periods.
Despite the similarity in complexity between CLP and CLF, the payment received by the hospital is also significantly less for CLP; this finding was true regardless of the patient's insurance type, which demonstrates that hospital payment rates in this population are not necessarily driven by characteristics of the payer. Providers may, therefore, find themselves in situations where economic factors could potentially influence decision making. In an era with continued interest in reducing healthcare cost, payers and policy makers are likely to choose a less costly procedure in situations of clinical equipoise; in the setting of CSM, that procedure appears to be CLP. This is, perhaps, the most controversial aspect of this type of cost-analysis information. As providers, spine surgeons utilize existing literature, clinical training and experience, and access to advanced technology to deliver the best possible care for patients. The additional factor of cost should remain secondary, but must still be considered. An undesirable scenario is one in which payers and policy makers utilize cost as rationale to cover one procedure without truly understanding the clinical nuances of the decision, therefore, the importance of defining clinical outcomes to associate with cost data could not be more apparent. Prospective collection of general and diseasespecific health-related quality-of-life outcome scores related to the management of CSM, as well as outpatient healthcare resource utilization and gained or lost patient economic productivity, will be crucial information to consider moving forward.
Conclusion
We demonstrate that, in clinically similar populations, cervical laminoplasty results in a shorter LOS and reduced costs, charges, and payment. In clinical scenarios requiring posterior decompression and procedural equipoise, we recommend that spine surgeons consider performing a cervical laminoplasty. In the presence of neck pain, kyphotic deformity, or gross instability, this procedure may not be sufficient and CLF may be required. Long-term follow-up with consistent reporting of general health and disease-specific outcome measures is essential to study the economics of CSM more effectively. Additionally, monitoring economic factors including non-short-term care healthcare resource utilization and loss or gain of productivity will remain a challenge that must be met with accurate data collection and consistent modeling of cost-utility research.
Anti-Carbamylated LL37 Antibodies Promote Pathogenic Bone Resorption in Rheumatoid Arthritis
Objective Antibodies against carbamylated proteins (anti-CarP) are associated with poor prognosis and the development of bone erosions in rheumatoid arthritis (RA). RA neutrophils externalize modified autoantigens through the formation of neutrophil extracellular traps (NETs). Increased levels of the cathelicidin LL37 have been documented in the synovium of RA patients, but the cellular source remains unclear. We sought to determine if post-translational modifications of LL37, specifically carbamylation, occur during NET formation, enhance this protein’s autoantigenicity, and contribute to drive bone erosion in the synovial joint. Methods ELISA and Western blot analyses were used to identify carbamylated LL37 (carLL37) in biological samples. Anti-carLL37 antibodies were measured in the serum of HLA-DRB1*04:01 transgenic mice and in human RA synovial fluid. Results Elevated levels of carLL37 were found in plasma and synovial fluid from RA patients, compared to healthy controls. RA NETs release carLL37 and fibroblast-like synoviocytes (FLS) internalized NET-bound carLL37 and loaded it into their MHCII compartment. HLA-DRB1*04:01 transgenic mice immunized with FLS containing NETs developed autoantibodies against carLL37. Anti-carLL37 antibodies were present in RA sera and synovial fluid and they correlated with radiologic bone erosion scores of the hands and feet in RA patients. CarLL37-IgG immune complexes enhanced the ability of monocytes to differentiate into osteoclasts and potentiated osteoclast-mediated extracellular matrix resorption. Conclusions NETs are a source of carLL37 leading to induction of anti-carbamylated autoantibody responses. Furthermore, carLL37-IgG immune complexes may be implicated in the bone damage characteristic of RA. These results support that dysregulated NET formation has pathogenic roles in RA.
INTRODUCTION
Rheumatoid arthritis (RA) is an autoimmune disease characterized by inflammation of the joint, cartilage damage and bone erosion (1). Lack of appropriate control of RA symptomatology is associated with joint destruction, disability and increased mortality. One of the hallmarks of RA is the presence of autoantibodies to post-translationally modified proteins (2), particularly directed against citrulline. More recently, antibodies against a similar but structurally distinct modification, homocitrulline (carbamylation), termed anti-CarP have been described in several cohorts of RA patients (3)(4)(5). The presence of anti-carbamylated protein autoantibodies (anti-CarP) is associated with enhanced radiographic bone erosion (3); however, the pathogenic mechanisms underlying this observation are not well understood.
Neutrophils are highly abundant in the synovial fluid of RA patients (6) and we previously reported that RA neutrophils display an enhanced capacity to form neutrophil extracellular traps (NETs) and that these structures are a source of both citrullinated and carbamylated autoantigens (7,8). NETs carrying modified autoantigens can be internalized by fibroblast-like synoviocytes (FLS), endowing them with antigen-presenting cell capabilities and induction of anti-citrulline pathogenic adaptive immunity (9). Carbamylation is a non-enzymatic posttranslational modification (PTM) of a positively charged lysine residue, which yields neutrally charged homocitrulline. Carbamylation can also occur at sites of inflammation, possibly due to cyanate formation during neutrophil oxidative burst (10,11). The relative contribution of PTMs in NET-associated proteins remains unknown, and how these modified proteins drive aspects of disease pathogenesis requires further exploration.
LL37 is an antimicrobial peptide that is externalized during NET formation and is elevated in the synovium of RA patients (12,13). LL37 PTMs can impair its antimicrobial capacity (11), while autoantibodies against LL37 have been implicated in the pathogenesis of autoimmune diseases such as systemic lupus erythematosus (SLE) (12,(14)(15)(16). Furthermore, carbamylation of LL37 and antibodies against carLL37 have been reported in psoriatic arthritis patients (17) but their role in disease pathogenesis is unclear.
Here, we sought to investigate the role of carbamylated LL37 (carLL37) in the pathogenesis of RA. Specifically, we hypothesized that NETs are a source of carLL37 and that this autoantigen may mediate a pathogenic immune response and be critical for the development of erosive joint disease.
Human Specimens and Cells
Patients recruited in this study fulfilled the 2010 American College of Rheumatology criteria for RA (18). Healthy controls were recruited by advertisement. All individuals gave written informed consent and enrolled in a protocol approved by the Instituto Nacional de Ciencias Médicas y de la Nutrición Salvador Zubirán (INCMNSZ, Ref 1243). A complete clinical examination was performed by a rheumatologist, which included documentation of the Disease Activity Score (DAS-28) (19). Hand and foot RA radiographs were scored using the Simple Erosion Narrowing Score (SENS) (20,21). The rheumatologist who scored the radiographs was blinded to the patients' clinical data. Patient characteristics can be found in Table S1. Peripheral blood (PB) was obtained by venipuncture and collected in EDTA-containing tubes. PB was fractionated via a Ficoll-Paque Plus (GE Healthcare) gradient. Neutrophils were isolated by dextran sedimentation and hypotonic salt solution as previously described (7). Healthy control PB CD14+ monocytes were purified by positive selection. Briefly, PBMCs were incubated with CD14 beads (Miltenyi Biotec) in MACS buffer and isolated according to the manufacturer's instructions by positive selection. Synovial fluid was collected from a separate Canadian cohort (22) (Ethics Board approval number HS14453) of RA patients. Samples were collected by routine joint aspiration, aliquoted, labelled by diagnosis and stored at -20°C until further use. For the purposes of this study, samples were classified as either RA or non-RA (5.8% psoriatic arthritis, 5.8% polymyalgia rheumatica, 5.8% reactive arthritis, 11.8% connective tissue disease, and 70.6% osteoarthritis).
Quantification of Serum Carbamylated LL37 and NET Complexes
A 96-well plate was coated with rabbit polyclonal carbamylated-lysine antibody (Cell Biolabs) at 1:400 in PBS overnight at 4°C. Wells were washed and blocked with 1% BSA at room temperature for 1 hour. Diluted serum (1:100) was added to the wells in 1% BSA blocking buffer and incubated overnight at 4°C. The wells were washed three times and incubated with mouse monoclonal anti-LL37 (EMD Millipore) at 1:100 in blocking buffer. After washing three times, goat anti-mouse HRP-conjugated antibody (Bio-Rad) was added to the wells in blocking buffer at 1:10,000. Wells were washed five times, followed by the addition of TMB substrate (Sigma Aldrich) and stop solution (Sigma Aldrich). The absorbance was measured at 450 nm and values were calculated as an OD index. The OD index is calculated by normalizing all OD values to the control mean (OD index = OD value/control OD mean). Assays were performed in duplicate.
For NET complexes, a similar procedure was followed using a 96-well plate that was coated with either carbamylated-lysine antibody (Cell Biolabs) or rabbit anti-citrullinated histone 3 (Abcam) in PBS overnight at 4°C. Mouse monoclonal anti-dsDNA (EMD Millipore) was used as primary antibody diluted (1:100) in blocking buffer, followed by incubation with goat anti-mouse HRP-conjugated antibody (Bio-Rad) at a 1:10,000 dilution. The OD index for the ELISA is calculated using the following formula: OD index value = OD value/control OD mean.
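For illustration only, a minimal Python sketch of the OD-index normalization described above is shown below; the function name and the example readings are hypothetical, and duplicate wells are assumed to have been averaged per subject beforehand.

```python
import numpy as np

def od_index(sample_ods, control_ods):
    """Normalize ELISA optical densities (A450) to the healthy-control mean.

    OD index = OD value / mean OD of the control group, as defined above.
    """
    control_mean = np.mean(control_ods)
    return np.asarray(sample_ods, dtype=float) / control_mean

# Hypothetical raw A450 readings: three RA samples and four healthy controls.
ra_samples = [0.82, 1.10, 0.95]
healthy_controls = [0.40, 0.35, 0.45, 0.40]
print(od_index(ra_samples, healthy_controls))  # -> [2.05, 2.75, 2.375]
```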
Effect of Immune Complexes on Osteoclast Formation
A 96-well plate was coated with 200 ng of carbamylated LL37 in PBS overnight. LL37 immune complexes were generated by adding 100 µg of total IgG isolated from RA serum using the Melon kit (Thermo Fisher). After a two-hour incubation, wells were washed with PBS. CD14+ cells were isolated as described above and incubated in the presence of 50 ng/mL of monocyte colony stimulating factor (M-CSF) for three days. Pre-osteoclasts were seeded into the carbamylated LL37-coated wells in the presence or absence of RA IgGs. Cells were cultured with M-CSF and RANKL (100 ng/mL). After four days, the plate was washed and a TRAP staining kit (Kamiya Biomedical Company) was used to detect TRAP-positive cells, indicative of osteoclasts (OCs). Multinucleated TRAP-positive cells were quantified and plotted.
Immunofluorescence of NET Treated Fibroblast-Like Synoviocytes
Methods for these experiments were adapted from our group's previous work (9). Briefly, FLS obtained from RA patients were cultured on coverslips and treated with either RA NETs or vehicle. For plasma membrane detection, cells were incubated with a membrane dye (Biotium) for 30 min at 37°C. Cells were washed and fixed with 4% paraformaldehyde for 12 h at 4°C. For intracellular detection, cells were permeabilized with 0.2% Triton for 10 min at room temperature. Coverslips were blocked with porcine gelatin (Sigma) for 30 minutes, then incubated for 1 hour with the primary antibody [anti-LL37 (Abcam) or anti-MHCII (Abcam)] at 37°C. After washing, coverslips were incubated for 30 minutes with secondary antibodies, then counterstained with Hoechst 1:1000. After further washing, coverslips were mounted on glass slides using ProLong Gold (Invitrogen). Images were acquired on a Zeiss LSM 780 confocal microscope.
Immunization of the HLA-DRB1*04:01 Transgenic Mouse Model
HLA-DRB1*04:01 breeding pairs were a kind gift from Dr. Chella David (Mayo Clinic) and were housed and bred at the NIH animal facility. Animal studies were conducted according to the guidelines established by the NIAMS Laboratory Animal Care and Use Section and following the approved protocol (A016-05-26). Mouse FLS (100,000 per well) were cultured in the presence or absence of 50 mg of spontaneously generated human RA NETs for three days prior to injection. FLS were washed with PBS, detached with trypsin, washed, and resuspended in PBS. FLS with and without NETs were injected into the hind leg synovial space using a 27-gauge needle. This procedure was performed every 7 days for a total of 7 weeks (7 injections). Serum was collected for analysis of autoantibodies.
OC Resorption Assay
CD14+ cells were isolated from PBMCs from healthy controls using MACS columns. Cells were incubated in the presence of 50 ng/mL of M-CSF for three days, followed by incubation with 100 ng/mL of RANKL for 7 days. OCs were detached using non-enzymatic dissociation solution (Sigma). Carbamylated LL37 was bound to a calcium phosphate-coated plate overnight at 4°C, washed, and incubated with 100 µg of purified RA IgG (Melon IgG spin purification kit) for 60 minutes at room temperature. The plate was washed extensively with PBS to remove non-specific binding. Equal numbers of OCs were seeded in a calcium phosphate-coated plate (KT651, Kamiya) in the presence or absence of 200 ng of carbamylated LL37 or carbamylated LL37-IgG. OCs were cultured in the presence of RANKL (100 ng/mL) for 3 days. Cells were removed, and the plate was washed and scanned on a Keyence microscope. Images were analyzed using ImageJ.
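The resorption read-out above was quantified in ImageJ; purely as a rough, non-authoritative sketch of an equivalent analysis, the Python snippet below Otsu-thresholds a scanned well image and reports the fraction of surface classified as resorbed. The file name, the assumption that resorption pits scan brighter than intact calcium phosphate, and the absence of any smoothing are illustrative choices that would need tuning against the real scans.

```python
import numpy as np
from skimage import io, filters

def resorbed_fraction(image_path: str) -> float:
    """Return the percentage of the well surface classified as resorbed."""
    img = io.imread(image_path, as_gray=True)   # grayscale scan of one well
    threshold = filters.threshold_otsu(img)     # global Otsu threshold
    resorbed = img > threshold                  # assume cleared pits appear brighter
    return 100.0 * resorbed.mean()

# Hypothetical usage:
# print(f"{resorbed_fraction('well_A1.tif'):.1f}% of the surface resorbed")
```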
Western Blotting
Neutrophils from healthy volunteers or RA patients were resuspended in RPMI and seeded in 24-well plates in the presence or absence of 100 ng/mL of PMA (Sigma) or 2.5 µM of calcium ionophore (Sigma) for 4 hours at 37°C. NETs were harvested with 10 U/mL of micrococcal nuclease (Thermo Fisher, Waltham, MA) for 10 min at 37°C. NETs were collected and cleared of debris by centrifugation. NETs were quantified using a BCA protein assay (Pierce) and equal amounts of NET protein were resolved in a 4-12% gradient Bis-Tris gel (Invitrogen), transferred onto a nitrocellulose membrane and blocked with 10% BSA for 30 min at room temperature. After overnight incubation with primary antibodies, membranes were washed three times with PBS-Tween (PBS-T) and incubated with a secondary antibody coupled to IRDye 800CW. Membranes were developed using an Odyssey CLx scanner (LI-COR).
Statistical Analysis
All analyses were performed using GraphPad Prism Version 8.1.1 (La Jolla, CA) unless otherwise stated. Mann-Whitney U test was used for non-parametric continuous comparisons between 2 groups. Continuous associations were assessed by linear regression using base R. Code is available upon request. All analyses were considered statistically significant at p < 0.05.
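The analyses named above were run in Prism and base R, and the authors' own code is available on request; purely as an illustrative sketch (not their code), the Python snippet below reproduces the two workhorse analyses on hypothetical numbers: a Mann-Whitney U comparison of two groups and a simple linear regression for a continuous association.

```python
import numpy as np
from scipy import stats

# Hypothetical OD-index values for RA patients and healthy controls.
ra_carll37 = np.array([2.1, 1.8, 2.6, 3.0, 1.5, 2.4])
hc_carll37 = np.array([1.0, 0.9, 1.2, 0.8, 1.1, 1.0])

# Non-parametric two-group comparison (Mann-Whitney U).
u_stat, p_mwu = stats.mannwhitneyu(ra_carll37, hc_carll37, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mwu:.4f}")

# Continuous association (e.g., anti-carLL37 level vs. SENS erosion score)
# assessed by simple linear regression.
erosion_score = np.array([4, 2, 7, 9, 1, 6])
fit = stats.linregress(ra_carll37, erosion_score)
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.4f}")
```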
Carbamylated LL37 Is Elevated in RA Serum, Synovial Fluid, and NETs
We have previously shown that carbamylated histones are found in NETs (8) and that these antigens potentiate RA bone erosion. However, little is known about carbamylated LL37 and its relevance in RA pathogenesis, despite LL37 being highly expressed in the RA synovium and a known component of NETs (13). Thus, we investigated whether carbamylated LL37 can also be detected in RA. By ELISA, we detected a significant elevation of carLL37 in RA serum when compared to serum from healthy volunteers (Figure 1A). Similarly, levels of carLL37 were significantly elevated in synovial fluid from RA patients when compared to non-RA synovial fluid (Figure 1B). Since LL37 can be externalized during NET formation (23), we explored the possibility that NETs also contain the carbamylated form of LL37. We found a significant positive correlation between serum levels of citrullinated histone 3/DNA complexes, a previously validated surrogate marker of NETs, and serum levels of carbamylated LL37 (Figure 1C). These results suggest that NETs are a source of carLL37 in RA (Figure 1C).
Healthy control and RA neutrophils were isolated and stimulated with phorbol 12-myristate 13-acetate (PMA) or calcium ionophore A23187 for 4 hours to induce NET formation. Western blot analysis showed that spontaneously generated RA NETs (without in vitro addition of PMA or ionophore) contained elevated amounts of carbamylated proteins when compared to healthy control NETs induced in the presence of either PMA or calcium ionophore (Figure 1D). Moreover, Western blot analysis showed that LL37 was carbamylated in RA NETs (Figure 1D). The anti-carbamylation antibody did not cross-react with citrullinated residues, suggesting it is specific for the recognition of carbamylated proteins (Figure S1). Interestingly, supernatant collected from healthy control neutrophils also contained carbamylated LL37, suggesting that carLL37 may be released during degranulation and NET formation (Figure S2). To further demonstrate that LL37 was carbamylated, immunoprecipitation was performed. Western blot analysis of immunoprecipitated LL37 demonstrated carbamylation of the protein (Figure 1E). Quantification of carLL37 in NETs by ELISA showed a significant elevation in spontaneously generated RA NETs when compared to healthy control NETs generated with either PMA or calcium ionophore (Figure 1F). These data suggest that RA neutrophils extrude carLL37 during NET formation.
FLS Internalize carLL37 and Promote Anti-Carbamylated Autoantibody Responses
Carbamylated NET proteins contribute to the pool of RA autoantigens (8), but how these modified proteins stimulate adaptive immune responses remains unclear. We previously reported that FLS can acquire antigen-presenting cell capabilities by internalizing NETs and presenting NET-derived citrullinated antigens to antigen-specific CD4+ T cells (9). We therefore investigated whether FLS can also internalize NET-bound carLL37, and whether this internalization can also induce antigen-specific adaptive immune responses. Immunofluorescence confocal microscopy analysis demonstrated that LL37 can be internalized by RA FLS and that it colocalizes with the MHCII compartment (Figure 2A). This suggests that carLL37 can be loaded onto the MHCII compartment intracellularly. In addition, colocalization of LL37 and MHCII was also detected at the plasma membrane of non-permeabilized FLS (Figure 2B), suggesting that the LL37/MHCII complex traffics to the plasma membrane.
To test whether FLS that have internalized NETs can induce specific adaptive immune responses in vivo, we used the humanized HLA-DRB1*04:01 transgenic mouse model (24). We previously showed that FLS isolated from these mice can internalize RA NETs (9). We isolated FLS from HLA-DRB1*04:01 transgenic mice and incubated them with spontaneously generated RA NETs for 24 h. A total of 100,000 FLS, with or without internalized NETs, were injected into one knee joint of each HLA-DRB1*04:01 mouse. After seven rounds of injections, antibodies against carLL37 were measured. Significantly higher levels of anti-carLL37 antibodies were detected in the sera of mice that received intra-articular injection of FLS loaded with NETs, when compared with animals that received FLS alone (Figure 2C). To recapitulate these findings in the human synovium, we quantified autoantibodies against carLL37 in synovial fluid from RA and non-RA subjects. Significantly higher levels of anti-carLL37 autoantibodies were detected in RA synovial fluid when compared to non-RA synovial fluid. These results suggest that RA patients also develop antigen-specific adaptive immune responses to carbamylated LL37 (Figure 2D).
Anti-Carbamylated LL37 Antibodies Correlate With Radiographic Bone Erosions in RA
To assess the clinical significance of anti-carLL37 responses, we first examined the correlation between anti-CarP and anti-carLL37 responses in RA. A positive correlation was found between anti-CarP and anti-carLL37 levels in RA serum (Figure 3A) and synovial fluid (Figure 3B). To investigate the clinical relevance of the presence and levels of anti-carLL37 antibodies in RA, we performed supervised correlations between anti-carLL37, carLL37 protein and several clinical parameters (Table S2). Although smoking status has been associated with the risk of developing antibodies against carbamylated proteins (25), we did not find a significant correlation between smoking and the presence of anti-carLL37 antibodies (Table S2). A positive correlation was found between serum levels of anti-carLL37 antibodies and the presence of periarticular hand (p = 0.004) and foot (p = 0.01) bone erosions and the radiographic erosion score (p = 0.003, Figure 3C). Interestingly, the strength of this association exceeded that of anti-carbamylated histone antibodies (R² = 0.27 vs. 0.23, respectively; Table S2), which have been previously shown to associate with erosive disease (8). These results suggest that anti-carLL37 autoantibodies may be implicated in the development of bone erosions in RA, and that anti-carLL37 may be considered as a clinically useful biomarker for risk of erosive RA.
Carbamylated LL37-IgG Immune Complexes Enhance OC Formation and Activity
Given the relationship between anti-carLL37 antibodies and radiographic bone erosions, we sought to determine whether these antibodies directly impacted OC formation and bone resorption capabilities. We have previously shown that carbamylated histone-IgG complexes potentiate OC formation and activity (8).
Whether other carbamylated protein-IgG complexes also increase OC function is not known. Healthy control PB CD14+ monocytes were incubated with M-CSF/RANKL in the presence or absence of carLL37-IgG immune complexes. A significant increase in multinucleated TRAP-positive cells was found in the cells exposed to carLL37-IgG immune complexes, when compared to carLL37 alone or cells treated with M-CSF/RANKL only (Figures 4A, B). These results suggest that the presence of carLL37-IgG immune complexes accelerates OC formation. Next, equal numbers of OCs (generated with M-CSF/RANKL) were plated on a calcium-phosphate plate in the presence or absence of carLL37 or carLL37-IgG immune complexes. RANKL was added to all conditions to activate OCs. CarLL37-IgG immune complexes significantly enhanced OC resorptive activity from 40% to 65%, when compared to OCs without immune complexes (Figures 4C, D). Of note, carLL37 alone was also able to significantly increase OC resorption (Figure 4D). These results suggest that carLL37-IgG immune complexes potentiate OC formation and activity.
DISCUSSION
Increasing evidence supports the notion that neutrophils are key players in the generation of modified autoantigens in the RA synovium (7,9). While various post-translationally modified antigens are present in NETs, much of the research has focused on citrullinated autoantigens. Recent evidence suggests that other post-translational modifications may also play an important role in RA pathogenesis. Carbamylation is a modification that is structurally similar to citrullination, is also generated in proinflammatory environments, and potentially links smoking and/or diet to RA development. Our prior work suggests that carbamylated histones, amongst other carbamylated autoantigens, are generated during NET formation. We found that LL37 is also carbamylated in spontaneously generated RA NETs. In FLS that internalized NET fragments containing carLL37, the modified protein was loaded onto the cells' MHCII compartment and trafficked to the FLS plasma membrane, where it might be presented to CD4+ T cells, as we have shown previously (9). This process leads to the generation of anti-carLL37 antibodies in the synovium of RA patients, which correlate with radiographic bone erosion in an RA cohort. Furthermore, carLL37-IgG immune complexes potentiate OC formation and their ability to resorb bone, providing a mechanistic link between specific autoantibody responses and clinical outcomes.
Large numbers of activated neutrophils are found in the synovial fluid of RA patients during active and early phases of the disease (6,26) and they are associated with the development of classic RA manifestations such as morning stiffness (27). LL37 is highly expressed in inflamed RA synovial joints as well as in pristane-induced arthritis rat models (1,16). Antibodies to carbamylated proteins have a significant association with enhanced radiographic bone erosions in RA (3,5). We now provide evidence that anti-carLL37 antibodies strongly associate with anti-CarP responses in serum and synovial fluid and, most importantly, correlate with radiographic bone scores. A possible mechanism for this association is that carLL37-IgG immune complexes, similar to other antigen-IgG complexes in RA, may enhance osteoclast differentiation and function in the RA synovium. These data build on our previous observations and provide evidence that antibody responses to a variety of PTMs may potentiate mechanisms of bone erosion. Importantly, our previous work elucidating the mechanism by which carbamylated histones enhance OC activity strengthens the observation that innate immune proteins derived from NETs can breach immune tolerance and potentiate bone resorption, playing a dual role in RA. Further studies interrogating the specificity of these antigen-Ig interactions are required to better understand their precise role in RA pathogenesis. This finding opens the possibility that other post-translational modifications may contribute to joint damage in this disease.
HLA-DRB1*04:01 transgenic mice immunized with FLS loaded with RA NETs developed antibodies against carLL37. This supports the notion that genetic susceptibility and environmental factors are crucial in the interplay leading to autoantibody generation against post-translationally modified neoantigens. FLS-T-cell interactions may be critical for the loss of immune tolerance, and it remains to be determined whether other cell-cell interactions are important for these events to occur (28). Of note, the presence of anti-carLL37 antibodies has also been documented in psoriatic arthritis, suggesting that additional genetic polymorphisms besides the shared epitope may be implicated in these responses (29). Furthermore, the bone damage phenotype in psoriatic arthritis is different from that in RA, and how anti-CarP responses modulate bone damage in other diseases needs to be further investigated. Finally, the cross-reactive nature of antibodies against modified (citrullinated) antigens is an important consideration when interpreting these results; however, it remains unclear whether autoantibodies that target carbamylated residues also cross-react, and to what degree, in anti-CarP-positive patients (30).
The results from this study further support the rationale for testing inhibitors of dysregulated NET formation (31), strategies targeting neutrophil hyperactivity, and/or approaches that prevent specific cell-cell interactions in the synovium in future clinical trials in RA.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Instituto Nacional de Ciencias Médicas y de la Nutrición Salvador Zubirán (INCMNSZ, Ref 1243). The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by the NIAMS Laboratory Animal Care and Use Section under the approved protocol (A016-05-26).
The Relationship between Postpartum Depression and Beliefs about Motherhood and Perfectionism during Pregnancy
Postpartum depression is a common mood disorder following childbirth. Depression occurring at this crucial stage in a child's life is known to have far-reaching and potentially damaging consequences for the mother, the baby and her family. Whilst a number of risk factors have been identified in the literature as contributing to the development of postpartum depression, including a past psychiatric history and lack of social support, some of these are not easily modifiable through psychological interventions. The aim of this longitudinal study was to examine the contribution of specific psychological factors, including maternal beliefs about motherhood and perfectionism and perceived social support, to the development of postpartum depression. Seventy-three pregnant women consented to take part and returned questionnaires during the third trimester of their pregnancy. Of those women, 61 also completed questionnaires 4-6 weeks following the birth of their baby. Significant associations were identified between postpartum depression and the psychological variables of perfectionistic beliefs and social support, whereas many demographic factors were not significantly implicated in the development of depression. Using a multiple hierarchical regression analysis, the study examined whether maternal beliefs about motherhood and beliefs about perfectionism predicted more of the variance in postpartum depression scores than other demographic variables, including a past history of emotional difficulties. As predicted, beliefs about motherhood and perceptions of poor social support from friends and family were significant predictors of postpartum depression when the influence of antenatal depression scores was accounted for. A past history of emotional difficulties was also retained in the final model, whereas beliefs about perfectionism were not. These findings have implications for clinical services, highlighting the need for refined assessments of expectant mothers' beliefs about motherhood and their perceptions of their social support during pregnancy, and the need for more refined psychological interventions that address these beliefs.
Introduction
Postpartum depression is a common mental health problem, but as a result of differences in method and population between studies, estimates of prevalence and incidence vary 1. A meta-analysis (which included only studies using rigorous assessment methods) suggested a period prevalence of 21.9% over the first year postpartum 2. Davé et al. 3 linked childbirth and depression diagnoses and antidepressant prescriptions in medical records for couples (N > 72,000) and found an incidence of depression or antidepressant prescription in the first year postpartum of 13.93% for women. According to the National Institute for Clinical Excellence 4, depression and anxiety affect 15-20% of women in the first year after childbirth.
Unlike the maternity or 'baby blues', a transitory condition experienced in the days immediately following birth which remits in a few days 5, postpartum depression can be a seriously debilitating condition for both mother and baby 6. Postpartum depression typically occurs within four weeks following childbirth and, although the duration of symptoms varies 7, many mothers continue to report significant symptoms for up to 12 months postpartum 8. Postpartum depression is distinct from major depression in terms of the context in which it occurs 9 and because its negative consequences affect a much wider system than the mother alone. Depression in a recent mother has been linked to increased conflict and reduced affection in the marital relationship 10,11 and an increase in depression in fathers 12. Mothers with postpartum depression tend to be less sensitive to their infants' needs, more negative and punitive in parenting style and more apathetic and rejecting during interactions 13,14. Consequently, infants of depressed mothers are significantly less likely to develop a secure attachment 15. Attachment difficulties are widely implicated in impairments in children's emotional, behavioural and cognitive development 16.
The severity and range of the psychological sequelae of postpartum depression have precipitated a large body of literature investigating the factors which cause a woman to be at risk of depression following childbirth.
Several review papers 5,17-20 suggest that educational attainment, parity and length of relationship with partner are not significant risk factors, while age is implicated insofar as teenage mothers are more at risk. However, factors which have been found to be significantly associated with postpartum depression encompass all domains of biological, social and psychological factors.
One of the most consistent predictors of perinatal depression is a past personal or family history of major depressive episodes 5,20,21. It is most likely that postpartum depression develops and is maintained through a combination of these factors 18, some of which are difficult to modify through interventions.
As cognitive processes can be modified through psychological interventions 4, research into the cognitive features of postpartum depression and the role of cognitions in its development has increased in perinatal populations. Several qualitative studies have provided valuable insight into the thoughts and beliefs held by mothers with postpartum depression. Worries about maternal competence and conflict of roles have consistently been found to be present in depressed mothers 22. In a metasynthesis, the incongruence between women's expectations of and beliefs about motherhood and the reality, and a pervasive sense of loss, were the main presenting themes 23. Given the paucity of longitudinal studies, the aim of the current study was to investigate the contribution of specific antenatal psychological risk factors to postpartum depression. It was hypothesised that postpartum depression would be significantly correlated a) with antenatal beliefs about motherhood, b) with perfectionism, and c) with perceived levels of social support during pregnancy. We also predicted that postpartum depression would be predicted by maternal beliefs about motherhood and beliefs about perfectionism, when the influence of antenatal depression was controlled for.
Methodology
Design
The study adopted a prospective, repeated-measures design. Participants completed questionnaires and provided demographic information during the third trimester of pregnancy (Time 1) and at 4-6 weeks following childbirth (Time 2).
Participants
Women in their third trimester of pregnancy were eligible to take part in the study, regardless of age or parity. They were recruited through antenatal classes in a large National Health Service hospital in Manchester, United Kingdom. As all questionnaires were presented in English, participants who indicated that their understanding of English was insufficient were excluded.
Measures
Postpartum Depression. Two measures were used to assess postpartum depression. The 10-item Edinburgh Postnatal Depression Scale 34 (EPDS) is a screening tool, rated on a 4-point scale. Although a score of 12 indicates depression, a cut-off score of 10 has been used in community samples to prevent false negatives 35. The EPDS has a Cronbach's alpha reliability of .87 (.85 for this sample), specificity of 78% and sensitivity of 86% 34, and good face validity 36. This is a widely used measure which has been validated for use during pregnancy 37,38. The second measure was the Beck Depression Inventory-II (BDI-II). The BDI-II has good internal consistency (alpha .92 for outpatients) and good test-retest reliability (.93; .82 for this sample). The BDI-II has been used with postpartum populations and has good concurrent validity with measures of postpartum depression 41. The BDI-II provides a depression severity score and its total score was therefore used as the main indicator of depression.
Sample size and characteristics
One hundred and sixty-four women were approached to take part in the study; of those, 130 agreed and were given questionnaires. Seventy-three returned Time 1 questionnaires and, of those women, 61 also completed Time 2 questionnaires. At the end of the study period, two women had not yet had their babies and the remaining 10 women were lost to the study. This produced an overall response rate of 46.92% and a retention rate of 83.56%.
The mean age of the sample was 28.11 years (the youngest participant was 17 and the oldest was 41) (see Table 1). Twenty-six women were married, 28 were in a long-term relationship and 7 women were single.
Thirty-three women were employed full-time, 17 part-time and 11 were unemployed. All of the women who responded had some qualifications: 14 had GCSEs or equivalent and the remaining 37 had gone on to further and higher education, with 5 women being educated to doctoral level. Overall, women in the sample were reasonably satisfied with their current financial situation (see Table 1). Thirty-seven of the women were primiparous, 14 had one child and 10 participants had two or more children already.
Whilst their feelings about pregnancy were fairly positive, participants varied in respect of whether the pregnancy was expected or planned. In terms of their mental wellbeing, 14.8% of participants reported having a current emotional difficulty; 4.9% indicated experiencing anxiety and 9.8% experienced depression. Forty-three percent reported past emotional difficulties: 11.5% reported prior experience of anxiety, 24.6% reported past depression and 6.6% had experienced anxiety comorbid with depression in the past. No significant differences were observed for participants' perceptions of their social support, their beliefs about motherhood and their perfectionistic beliefs.
Assessment of cognitive and clinical factors in postpartum depression
These variables remained relatively stable from late pregnancy until after childbirth (see Table 2).
The relationships between maternal variables and postpartum depression
The relationships between antenatal maternal beliefs about motherhood, beliefs about perfectionism, perceived social support, labour experience and postpartum depression were examined by computing Pearson correlation coefficients (Table 3). Weaker correlations were also noted between postpartum depression as measured by the EPDS at Time 2 and antenatal personal standards (r = 0.267, p < 0.05) and doubts about actions (r = 0.259, p < 0.05).
A highly significant negative association was found between perceived antenatal social support (MSSS) and postpartum depression (r = -0.417 for the BDI-II and r = -0.610 for the EPDS, p < 0.01). These findings suggest that participants with elevated postpartum depression scores endorsed more beliefs about needing to be perfect and perceived their partner or family as less supportive during pregnancy.
Postpartum depression scores were not associated with age, marital status or whether the pregnancy was expected or planned in this sample. However, postpartum depression scores were associated with participants feeling less happy about their pregnancy (r = -.27, p < .05 on the BDI-II; r = -.33, p < .01 on the EPDS). Participants with elevated postnatal depression scores were also more likely to have had a past history of emotional difficulties (t(59) = -2.78, p < .01), but this was only the case for the BDI-II.
Prediction of postpartum depression
Linear regression analyses revealed that postpartum depression (using the BDI-II) was not predicted by maternal age, marital status, educational status, employment status, financial status, number of children or whether the pregnancy was planned, but feelings about the pregnancy and a history of emotional difficulties (as a dichotomous variable) were predictive. Of the psychological variables, PRBQ scores did not predict postpartum depression, but perceptions of social support did.
Although all MPS subscales, except for organisation, predicted postpartum depression, only the MPS total scores were used in the hierarchical regression analysis because the sample size would only allow for a limited number of variables to be entered.
Using the enter method at step one to control for the influence of antenatal depression (as measured by the BDI-II at Time 1), a multiple hierarchical regression analysis was undertaken to explore the contribution of maternal beliefs (PRBQ), perfectionistic beliefs (MPS) and perceived social support (MSSS) to postpartum depression (BDI-II at Time 2). All other variables were entered using the stepwise method; these included participants' happiness about the pregnancy and their past history of emotional difficulties.
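The analysis above was run in SPSS; purely as a rough, non-authoritative sketch of the same two-step idea, the Python snippet below fits a step-1 model containing only antenatal depression and a step-2 model that adds the psychological predictors, then compares the R² values. The column names and the simulated data are hypothetical, and forced entry is used at step 2 instead of the stepwise selection described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data; column names mirror the measures described above.
rng = np.random.default_rng(0)
n = 55
df = pd.DataFrame({
    "bdi_t1": rng.normal(10, 5, n),    # antenatal depression (BDI-II, Time 1)
    "prbq":   rng.normal(195, 25, n),  # beliefs about motherhood (PRBQ)
    "mps":    rng.normal(80, 15, n),   # perfectionism (MPS total)
    "msss":   rng.normal(24, 4, n),    # perceived social support (MSSS)
})
df["bdi_t2"] = (0.5 * df["bdi_t1"] + 0.05 * df["prbq"]
                - 0.4 * df["msss"] + rng.normal(0, 3, n))

# Step 1: antenatal depression only.
step1 = sm.OLS(df["bdi_t2"], sm.add_constant(df[["bdi_t1"]])).fit()

# Step 2: add the psychological predictors.
step2 = sm.OLS(df["bdi_t2"],
               sm.add_constant(df[["bdi_t1", "prbq", "mps", "msss"]])).fit()

print(f"R2 step 1 = {step1.rsquared:.2f}")
print(f"R2 step 2 = {step2.rsquared:.2f} "
      f"(delta R2 = {step2.rsquared - step1.rsquared:.2f})")
```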
As anticipated, at stage 1 antenatal depression contributed significantly to the regression model, F(1,53) = 20.93, p < 0.001, and accounted for 27% of the variance (see
Discussion
The current study found that postpartum depression scores were significantly associated with perfectionism and perceived social support. Contrary to our hypothesis, postpartum depression scores were not significantly associated with antenatal beliefs about motherhood, despite this sample's mean scores being higher than the mean of 191.1 reported by Moorhead and colleagues 42. Thus, women who endorsed more symptoms on a depression self-report measure indicated that they had held more beliefs about perfectionism and experienced poorer social support from family and friends. We also noted an association between how these women felt on first finding out about their pregnancy and subsequent postpartum depression scores. No associations were found between postpartum depression and maternal age, number of children or other socio-demographic factors.
Although antenatal beliefs about motherhood were not significantly associated with postpartum depression, this variable was included in the multiple hierarchical regression analysis, because variables may interact to cause an effect. When the influence of antenatal depression was accounted for, beliefs about motherhood were retained in the final model as the most important contributory predictive factor, followed by perceived social support. These findings add additional weight to those reported by Sockol and colleagues 33,46, who used different self-report measures to assess maternal attitudes towards motherhood and social support in perinatal samples from the US. They highlighted that maternal attitudes or beliefs predicted depression over and above other dysfunctional beliefs. Perceived lack of social support has been demonstrated to be predictive of postpartum depression in our study as well as in previous research 47. Consistent with previous research 20,45,48, we also found a link between a history of emotional difficulties and postpartum depression, but this variable made the least contribution to our final regression model. Interestingly, beliefs about perfectionism were associated with postpartum depression but were not retained in our model, nor was how the mothers felt about their pregnancy.
Although this study makes a significant contribution to the identification of modifiable risk factors in the development of postpartum depression, some limitations have to be acknowledged. The sample size was rather small, but the overall retention rate for this longitudinal study was excellent. We sought to overcome the recognised problem of a lack of socio-demographic diversity in health care research 49,50 through recruitment from a large inner-city maternity hospital serving a socially diverse perinatal population; however, the participants were generally well adjusted in terms of their backgrounds. With 54 items the PRBQ is unnecessarily lengthy and not suitable as a screening measure in its current format; however, this study highlights the need to assess beliefs about motherhood in perinatal populations. Whilst the PRBQ was the most suitable measure at the time this study was initiated, the Attitudes Towards Motherhood Scale 30 (AToM), with its focus on the evaluative aspect of maternal attitudes including expectations and experiences, appears to be another promising tool with good psychometric properties 33,46. More longitudinal studies exploring the predictive validity of specific maternal cognitions and beliefs about motherhood in larger, more socially diverse samples of perinatal women are also warranted.
In conclusion, this longitudinal study offers valuable insights into the role that maternal beliefs about motherhood, perfectionism and perceived social support play in relation to postpartum depression.
Considering the role of perfectionism in postpartum depression, Cutrona and Troutman 24 identified a sense of maternal competence as a key mediating factor. Hall and Wittkowski 25 identified a number of cognitive themes in non-depressed mothers, including perfectionism, unrealistic expectations of motherhood, heightened responsibility, concerns over the safety of the baby, negative judgements by others and negative appraisals of their current situation. It is possible that negative thoughts are common in all mothers, but that the severity of the expectations or beliefs, or the discrepancy between expectations and reality, are more pronounced in mothers with postpartum depression. The role of dysfunctional attitudes during pregnancy has been investigated in studies using the Dysfunctional Attitudes Scale 26 (DAS), but these have failed to find a significant role for negative thoughts in the prediction of postpartum depression 27,28,29. As the DAS does not contain items relating to pregnancy and motherhood, it may not be specific enough to identify the cognitions which are implicated in postpartum depression. Using the Maternal Attitudes Questionnaire 30 (MAQ), Church, Brechman-Toussaint and Hind 31 found that maternal-specific cognitions mediated the relationship between having a difficult baby and postpartum depressive symptomatology. However, their study was not longitudinal and investigated cognitions specific to the postpartum period only (which reflects the items on the MAQ). Furthermore, Sockol 32 highlighted the poor internal reliability of the MAQ in expectant mothers. In their cross-sectional studies, Sockol and colleagues 33 provided some evidence of the role of maternal attitudes in predicting symptoms of depression and anxiety among pregnant and postpartum first-time mothers and of the incremental predictive validity of maternal attitudes over general dysfunctional attitudes (measured by the DAS), marital satisfaction and social support, whilst finding no contribution of maternal age, perinatal stage (pregnancy vs postpartum), marital status and ethnicity to depression 33.
Perceived social support. Developed with the aim of providing a meaningful measure of women's perception of their social support, the 6-item Maternity Social Support Scale 45 (MSSS) was used. This scale is rated on a 5-point Likert rating scale (always, most of the time, some of the time, rarely, never). The MSSS has good reliability with Cronbach's alpha scores of .69 (.80 for this sample) in the antenatal period and .78 (.77) postpartum. Demographic information including age, parity, current or previous psychiatric disorder, marital and employment status and participants' emotional response to being pregnant was also collected.
Procedure
Ethical approval was granted by relevant committees. Participants were approached in the antenatal clinic waiting room of a local Manchester hospital and given an information sheet outlining the study. Once they provided written consent, they were given the initial questionnaire pack (demographic information sheet, EPDS, BDI-II, MSSS, PRBQ and MPS), which they could choose to complete in the clinic or return in a stamped, addressed envelope.
Participants also provided information on their estimated date of delivery and a contact health professional (e.g., GP, health visitor or midwife), who was contacted following the due date to ensure that it would be appropriate for the mother to continue with the study. The Time 2 packs were sent out to the women 4-6 weeks following the birth of their baby.
Data Analysis
Data were analysed using SPSS for Windows. Sample characteristics were examined using descriptive statistics. Missing data were minimal but excluded. One-sample Kolmogorov-Smirnov tests indicated that the use of parametric tests was permitted for most variables. As log transformations were unsuccessful for two MPS subscales and the MSSS at Time 2, equivalent non-parametric tests were used. Univariate and multivariate analyses, including hierarchical multiple regression analysis, were used to test hypotheses. For this purpose, some variables (e.g., marital status) were recoded as dichotomous variables. Pearson correlation coefficients were used to examine bivariate comparisons. In order to reduce the risk of Type 1 error, a stricter p-value of .01 was adopted as the criterion for statistical significance, because of the number of multiple comparisons.
As participants completed self-report questionnaires which they posted back, it was not possible to obtain a formal diagnosis of antenatal or postpartum depression. For this reason we opted to use the BDI-II as our main outcome measure instead of a screening measure for postpartum depression (e.g., the EPDS), but we decided to include EPDS scores to allow for comparison with other studies. Another strength is the fact that we used a stringent p-value for comparisons. The findings of this study have several implications for health care providers and clinicians. Although NICE guidelines 4 recommend assessments for the identification of women at risk of developing perinatal distress, health care professionals should focus on asking expectant mothers about their beliefs about motherhood and their perceived social support, in addition to questions about past emotional difficulties. Furthermore, it would be advisable to explore women's tendencies to endorse perfectionistic beliefs and how happy they felt about first finding out about their pregnancy. Our findings suggest that whether the pregnancy was expected or not does not affect the likelihood of a woman going on to develop postpartum depression, but if the woman was not happy about becoming pregnant (planned or otherwise), she might be at greater risk of depression postpartum. Health care providers could offer more appointments to these women at risk and, if necessary, fast-track them to additional psychiatric or psychological support. Sockol and colleagues 33,46 highlighted that many risk factors for postpartum depression are not necessarily easily modifiable (such as a past psychiatric history), whereas maternal beliefs and attitudes are. The fact that women's beliefs about motherhood predict postpartum depression allows for the development of new psychological interventions or the refinement of existing ones, including cognitive behavioural therapy with its focus on challenging beliefs and cognitive biases 4. Clinicians should clearly explore and address women's feelings about their pregnancy in the context of their beliefs about motherhood. However, the assessment of maternal beliefs and/or attitudes about motherhood requires further investigation so that better self-report measures can be developed. Future studies are also required to explore the psychometric properties of the PRBQ further.
Multimedia-aided instruction in teaching basic life support to undergraduate nursing students
Basic life support (BLS) knowledge is a necessity for nursing students, as they have to deal with cardiac arrest events during their professional career. Existing studies indicate poor BLS knowledge among health science students, including nursing students. Learning BLS requires an understanding of basic sciences, such as anatomy, physiology, and biochemistry, subjects perceived to be difficult, resulting in misconceptions. Hence, a multimedia-aided instruction on BLS, supplemented with cooperative learning groups, was developed to assist nursing students in gaining correct BLS knowledge. A single-group pretest-posttest design was employed to evaluate students' achievements. Sixty-five undergraduate nursing students took the pretest and posttest, which consisted of 10 open-ended questions, each designed to evaluate an aspect of their BLS knowledge. The results show that significantly more students (60 vs. 20%) answered more questions correctly on the posttest compared with the pretest (P < 0.05, Wilcoxon signed-rank test). Thus the multimedia-aided instruction package enhanced undergraduate nursing students' understanding of BLS and also helped generate a positive perception of multimedia-aided instruction supplemented with a cooperative learning group.
INTRODUCTION
Basic life support (BLS) is a primary medical life-saving aid in situations of cardiac arrest. It includes recognition of signs, e.g., sudden cardiac arrest, heart attack, stroke, and foreign-body airway obstruction; cardiopulmonary resuscitation (CPR); and defibrillation with an automated external defibrillator. As an early CPR step, the American Heart Association (AHA) guidelines recommend an adult and pediatric BLS sequence of chest compressions-airway-breathing, because effective chest compressions are crucial for the victim's survival (1,4,5,11). Nurses during their professional practice may have to deal with cardiac arrest situations and may have to perform BLS procedures for the survival of the affected individual. Thus BLS training is a necessary and important topic in any nursing curriculum.
At our school, undergraduate nursing students are given BLS courses in their second and fourth year of the nursing program. Second-year undergraduate nursing students learn only theoretical BLS knowledge. A practical BLS training course is provided in the fourth year, consisting of a 1-day program with a lecture and practice on a manikin. The core content of the program follows the BLS guidelines recommended by the AHA (5). The lecture session on BLS knowledge during the program is given without a problem-based approach that emphasizes integration of basic science principles with BLS knowledge. Despite skillful BLS practice, students lacked sound knowledge of basic nursing science. Many fourth-year undergraduate nursing students indicated (informally) that they do not remember much from their basic science courses and are unable to link basic science knowledge with BLS practice. A study of Thai university nursing students in 2012 revealed that BLS knowledge and skills decline following completion of formal BLS training (17). For healthcare providers in Thailand, a BLS course is conducted by the Thai Resuscitation Council of the Heart Association of Thailand, in accordance with AHA guidelines, and certification is renewed every 2 yr; however, this is not a mandatory requirement. Studies from other countries have revealed poor BLS knowledge among healthcare providers and health sciences students, including those in nursing (1,22,24).
The decision to perform BLS requires immediate recognition of signs and symptoms of cardiac arrest, which needs an understanding of the underlying physiology and biochemistry. When a cardiac arrest occurs, immediate recognition and application of early CPR with effective chest compressions can help the victim through pumping blood to vital organs and preventing damage to brain cells (5,12). This is based on the knowledge of heart function, oxygen transportation, and the role of mitochondrial oxidative phosphorylation, all of which students have learned from basic science classes.
The main problem of basic science education among nursing and other health science students in many nations is the difficulty students have in assimilating basic science information, which often leads to misconceptions regarding cardiovascular physiology, the blood circulation pathway, and lung function. A previous study of undergraduate students in science courses at a university in the U.S. found misconceptions regarding the dual blood circulation pathway (70% of students), types of blood vessels (33%), the mechanism of gas exchange (55%), and lung function (20%) (18). For example, students believe that blood flows in a circular route from the heart around the body before returning to the heart, that arteries connect to veins without requiring capillaries, that all carbon dioxide molecules are exchanged for new oxygen molecules, and that lungs function to clean blood. Another study conducted in the Islamic Republic of Iran found that 80-98% of nursing and other health science students have the misconception that heart ventricles pump more volume from the left than from the right because of the thicker wall of the former (16). A recent study involving healthcare providers in the U.S. reported that students are unable to answer questions on basic human physiology and on the reasons for performing chest compression (7). A clear comprehension of oxidative phosphorylation and adenosine triphosphate (ATP) generation by mitochondria proves difficult, according to a study of undergraduate students at a university in India (10).
It would be helpful to create an instruction tool that can help students correct their misunderstanding of relevant basic science knowledge and enhance their ability to integrate the understanding of BLS principles before putting them into practice under simulated and actual situations. Educational multimedia, including text material, graphics, illustrations, photographs, and animations, is one such tool that can provide presentations of specific contents so as to make them easier to understand (13). Results of a Brazilian university nursing student study conducted in 2017 (22) showed the effectiveness of an online course for teaching and learning key BLS skills, and a study conducted in the northeastern United States in 2015 also indicated positive learning outcomes for nursing staff participating in online methods for BLS renewal (20), but neither study focused on instruction of BLS that integrated basic science knowledge. In the era of transformative nursing education, a student-centered approach should be considered, where instructors act as facilitators, providing active learning skills to achieve the desired learning objectives (21,23). A multimedia learning approach on intrapartum nursing care supplemented with group discussions resulted in higher scores in knowledge and performance assessments among nursing students compared with traditional teaching practices (8). In addition, a previous study suggested that any new designs of teaching and learning strategies to improve bioscience (anatomy and physiology) performance among nursing students should be supportive, collaborative, and student focused (15).
This study was carried out to 1) develop for undergraduate nursing students a multimedia-aided instruction for BLS that integrated basic science knowledge supplemented with cooperating learning groups, 2) examine students' achievements resulting from application of this learning approach, and 3) determine perceptions of undergraduate nursing students toward these learning approaches.
Participants.
A single group pretest-posttest design was employed in the study to determine the efficacy of multimedia-aided instructions on BLS learning among undergraduate nursing students in Thailand. The participants were third-year undergraduate nursing students of Ramathibodi School of Nursing, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, who were enrolled in a "Nursing of Child and Adolescent Practicum" course in the second semester of 2013 and who previously studied basic sciences and attended lectures on the principles of BLS. Convenience sampling was used in the study. Sixty-five students volunteered to participate and were informed of the objectives and procedures of the study, and each participant submitted a signed consent form. The research project was approved by the Research Ethical Committee of Mahidol University Institutional Review Board (Certificate of Eligibility no. MU-IRB 2011/022.2712). In giving consent, the names of the participants were not revealed, and the participants had the right to withdraw from the study at any time. Their results were reported as aggregate, and their responses did not affect any scores in their regular course of study.
Multimedia and its components. Because misconceptions and difficulties abound among college students regarding cardiorespiratory physiology and oxidative phosphorylation and ATP generation by mitochondria (10,16,18), together with an inability to integrate relevant science knowledge (7,14), a multimedia instruction package for undergraduate nursing students was developed. The multimedia package was created with a focus on correcting the misconceptions and reducing the barrier to attaining the basic science knowledge required for a proper understanding of BLS. Another focus was on integrating basic science knowledge, such as anatomy, physiology, and biochemistry, into the learning of BLS principles. The software was developed using Adobe Creative Cloud (Mahidol University). This multimedia package displayed animation, audio, pictures, and text to reduce cognitive load and make learners understand clearly the basic knowledge of BLS. The contents of the multimedia tool consisted of four concepts relevant to the main components of BLS, namely, purposes of BLS and early CPR, chest compression, airway, and breathing. The first concept reviews the purpose of BLS and early CPR by displaying the functional features of the medulla oblongata, circulation, and energy metabolism of the brain. The second concept reviews chest compression and chest recoil by displaying the mechanisms of heart function, dual-circulation pathways, thoracic pump function, and cardiac pump function. The third concept reviews airway management by displaying the skull, airway anatomy, and airway management in children and adults. The fourth concept reviews the causes of cardiac arrest in children and adults.
The story-board content for creating the multimedia tool was validated by four experts experienced in preclinical or clinical teaching at our school, one in biochemistry, one in anatomy, one in pediatric critical care (registered nurse), and one in adult critical care (registered nurse). The process of validation comprised three distinct phases. The first phase aimed to develop instruments in the form of a checklist for the validation process. In the second phase, the first version of the multimedia was validated for the educational content, namely, structure, scope, and accuracy. Then it was revised according to the experts' comments, such as on simplifying and chunking the content, clarification of symbols, and modification of images. In the third phase, a prototype was produced and validated by the same four experts and another expert in educational multimedia design. The aim of this phase was to assess the ergonomics as well as the educational content. The final revised multimedia package comprising the revised content, design, and organization was used in the study.
Multimedia instruction and knowledge testing process. Students' achievements were evaluated with a 10-question open-ended questionnaire. Each student was provided with a printed copy of the questionnaire to complete individually in 30 min. The questions were developed to examine knowledge related to BLS. The ten questions (Tables 1 and 2) consisted of 1) three questions on the purpose of BLS and early CPR, 2) two questions on chest compression and chest recoil, 3) three questions on managing the airway, and 4) two questions on causes of cardiac arrest. The questionnaire was validated by three experts: an instructor in critical care nursing, an instructor in anatomy and medical science, and an instructor in biochemistry. The total score was 10, with one point for each question. The scores were based on the level of accuracy of the descriptive response, with a score of 1 for a correct and complete answer and 0 for an incorrect or incomplete answer.
The pretest determined the participants' prior knowledge of BLS. After completing the pretest, the participants were asked to improve their understanding of the topics using the multimedia-aided instruction package on individual computers for 45 min. The students were then divided into small cooperative learning groups of five for a 45-min facilitator-led discussion, in which they were asked to discuss the important science knowledge underlying BLS practice and their viewpoints; the facilitator visited one group at a time, helping them clarify their thinking and learning by asking guiding questions. The posttest contained the same set of questions given in the pretest.
Students' perception. Students' perception of the multimedia-aided instruction package and the cooperative group learning was explored through focus group discussions. After the posttest, one of the authors (C. J.) moderated the discussions, using the same interview questions for each cooperative learning group, spending 30 min with each group, and audio-recording each session for subsequent comparison. The interview guide consisted of two open-ended questions: "What do you think about the multimedia tool in aiding your learning?" and "What are the differences between the multimedia-aided instruction package supplemented with cooperative group learning and traditional learning practice?" At the end of the interview, the participants were asked to express their attitudes regarding other aspects of learning not covered in the interview.
Data analysis. The Statistical Package for the Social Sciences (SPSS 18.0, Mahidol University) was used for analysis of students' achievement scores. For each question in the pre- and posttest, descriptive statistics were employed to determine the percentage of students providing correct answers. Scores of knowledge of the four BLS concepts (3 points each for the first and third concepts, and 2 points each for the second and fourth concepts) are reported as range and means ± SD. As the data did not have a normal distribution, the Wilcoxon signed-rank test was used to compare the difference in knowledge of BLS between pre- and posttest (significance at P < 0.05). For data from the focus group discussions, recordings of each group interview were transcribed verbatim. Codes were generated and grouped into two preliminary categories, namely positive and negative responses, and subsequently into subcategories. The preliminary information was then verified.
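A minimal sketch of this score comparison, assuming hypothetical arrays pre and post of per-student concept scores (the actual study data are not reproduced here):

import numpy as np
from scipy import stats

# Hypothetical per-student scores for one BLS concept (0-3 points).
pre = np.array([1, 2, 0, 1, 2, 1, 0, 2, 1, 1])
post = np.array([2, 3, 1, 2, 3, 2, 1, 3, 2, 2])

# Descriptive statistics reported as range and mean +/- SD.
print(f"pretest:  range {pre.min()}-{pre.max()}, "
      f"mean {pre.mean():.2f} +/- {pre.std(ddof=1):.2f}")
print(f"posttest: range {post.min()}-{post.max()}, "
      f"mean {post.mean():.2f} +/- {post.std(ddof=1):.2f}")

# Non-parametric paired comparison, as the data were not normally distributed.
stat, p = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.4f} (significant if p < 0.05)")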
RESULTS AND DISCUSSION
The multimedia-aided instruction package on learning BLS was developed based on an integration of the basic science knowledge underlying BLS concepts to assist nursing students in gaining a better understanding of BLS. In concordance with existing studies (3, 19), it is crucial to integrate the science content throughout the nursing curricula, which results in improved nursing practices and health care outcomes. The multimedia-aided instruction package contained learning tools for four BLS concepts: 1) the purpose of BLS and early CPR, 2) chest compression and chest recoil, 3) managing the airway, and 4) causes of cardiac arrest. Each student's achievement was evaluated from knowledge of BLS using a 10-question pre- and posttest. More students provided correct answers in the posttest than in the pretest (Table 2). The Wilcoxon signed-rank test revealed significantly higher posttest than pretest scores for every concept of BLS (P < 0.05). The mean posttest score was higher than the mean pretest score for every concept of BLS (Table 3).

Table 1. Test questions and model answers

1. What is the main purpose of BLS? BLS helps restore normal breathing and blood circulation after cardiac arrest by providing ventilation, through airway management and breathing resuscitation, and circulation, through chest compression and recoil. It enables oxygen and glucose transport to the organs for cellular metabolism, in which oxygen and glucose are used to generate ATP via oxidative phosphorylation.

2. Why is it important to start CPR immediately after cardiac arrest? A person succumbs to death if not receiving CPR immediately, owing to a lack of blood glucose and oxygen supply, resulting in dysfunction of the medulla oblongata, the respiratory center, and the circulatory center.

3. How do you tell whether you should continue or cease CPR? In a living person, look for chest or abdomen movement and listen for sounds of breathing, as they represent adequate circulatory function for oxygen and glucose blood flow to maintain medulla oblongata activity in the brain stem.

4. Describe the mechanism of cardiac function. Myocardial function depends on synchronized coupling of the electrical excitation of the heart with mechanical contraction and relaxation. The sinoatrial (SA) node, located in the posterior wall of the right atrium, initiates an action potential that innervates the heart through the atrioventricular (AV) node, located in the floor of the right atrium above the insertion of the tricuspid valve; the excitation then depolarizes the ventricles, spreading from the AV node through the bundle of His, down the bundle branches, and through the Purkinje fibers. Blood travels through dual-circuit pathways, with contraction of the right and left atria followed by the right and left ventricles, and then relaxation of the right and left atria followed by the right and left ventricles.

5. Why is it important to allow complete chest recoil after each compression? Complete chest recoil after compression is important because it leads to chest decompression and allows the heart to function as a passive conduit for blood flow into the heart chambers. Chest decompression results in decompression of the sternum and the vertebral column, generating relaxation of the cardiac muscle of the left and right ventricles, and decreases pleural cavity pressure, generating relaxation of the cardiac muscle and dilatation of vessels.

6. Explain why children are not simply small-sized adults (in terms of growth and development). Children have incomplete growth and development, which includes the structure of their airways, such as the occiput, the thyroid cartilage, the cricoid cartilage, the ribs, and the size of the lungs.

7. Describe the difference between the airway structure of children and adults, and explain the airway management that is appropriate for each. In children, the airway is opened by supporting the shoulders in a supine position using a head tilt-chin lift maneuver or jaw thrust, avoiding overextension of the head or neck, as an infant's tongue is more rostral and larger relative to the size of the oropharynx; in adults, the head tilt-chin lift maneuver or jaw thrust is employed to open the airway, or a lateral position is adopted to decrease airway obstruction.

8. How do you assess airway and breathing? Respiratory assessment in children includes that of airway and breathing: the former by looking for chest or abdomen movement and listening for airflow and sounds of breathing, the latter by observing respiratory rate, respiratory effort, chest expansion, and lung sounds.

9. What is the common cause of cardiac arrest in children? In children, the respiratory center in the brain stem and the respiratory muscles function ineffectively, causing respiratory failure due to a lack of oxygenated blood and glucose delivery to the brain. The cardiovascular system compensates by increasing the heart rate until cardiac muscle weakness results in cardiac arrest.

10. What is the common cause of cardiac arrest in adults? Adults have complete growth and development, and cardiac arrest arises from deterioration of the cardiac muscle, the cardiac impulse, and/or the vascular system, leading to myocardial infarction or dysrhythmia.

BLS, basic life support; CPR, cardiopulmonary resuscitation.
These findings indicate that the multimedia-aided instruction package on BLS concepts, supplemented with cooperative learning groups, improved undergraduate nursing students' understanding of BLS. This could be explained by the contents of the multimedia tool, which were developed to address the misconceptions and difficulty of integrating basic science knowledge (anatomy, physiology, and biochemistry) into a comprehensive and cohesive picture, and to link this pertinent knowledge to the concepts of BLS. According to the cognitive theory of multimedia learning, humans learn more deeply from words and pictures than from words alone (13). The multimedia tool displayed a combination of animation, audio, pictures, and text that was designed to keep nursing students' attention focused on the content and to engage them in an active learning process. Consistent with previous studies (6, 9, 23, 25), the multimedia-aided instruction package was able to enhance students' understanding of the information and to create an enabling learning environment.
The multimedia contents enabled >80% of the nursing students to answer questions 1, 3, 4, 6, 7, 9, and 10 correctly on the posttest. They performed less well on questions 2, 5, and 8. The correct answers to the questions posed are presented in Table 1.
An additional result from the focus group discussions showed two categories of student responses: positive and negative perceptions. The majority of participants (92%) had a positive perception of the multimedia-aided instruction on BLS supplemented with the cooperative learning groups. Most students stated that they preferred this learning method over traditional learning practice for a number of reasons. First, the multimedia-aided instruction package helped them link basic science knowledge to BLS concepts, allowing them to gain a better understanding of BLS. In the past, students had employed a fragmented memorization strategy and practiced using a step-by-step BLS guideline without much clear understanding. Second, the multimedia-aided instruction combining animation, audio, pictures, and text kept their attention focused on the contents, satisfied their learning needs, and made the contents easier to understand. Third, learning activities in the cooperative learning groups provided interpersonal interactions with their peers and the instructor, which gave them an opportunity to share their thoughts and understanding and to engage in more reasoned discussion.

These findings support the notion that multimedia-aided instruction supplemented with a cooperative learning group can enhance nursing students' achievement in understanding BLS concepts. Similar to previous studies (2, 8, 21), the use of multimedia-aided instruction and group activities improved knowledge and understanding. This may be because students were satisfied and pleased with using the multimedia tool and perceived that it helped them learn more easily. In addition, students stated that the cooperative learning activities allowed them to discuss learning issues with their peers. Students learn from peers in the group to help fill gaps in their understanding, and, conversely, students giving advice obtain a deeper understanding of the subject at hand. One student stated, "I enjoyed discussing in the group and learned more than sitting alone with a computer." There were some examples of negative perception. 1) The multimedia package contained too much information on the four concepts of BLS to be learned within 45 min, and students needed more time to assimilate the information. For future applications, the multimedia content should be made available online for access anywhere and anytime. 2) The multimedia package made a number of students feel less interested in the content presented. This could be due to their inability to recall their prior basic science knowledge, which made the content presented difficult to understand.
In conclusion, using the multimedia-aided instruction package on key BLS concepts supplemented with cooperative learning groups for undergraduate nursing students has the potential to improve their understanding of basic sciences and their ability to integrate such knowledge into BLS. In addition, students were satisfied with this new learning method, which provided them not only knowledge but also communication skills as cooperative group members. The results of the study can also provide basic information for developing new teaching and learning methods to enhance the competency of nursing and other health science students. However, there are a number of limitations to the study. First, the study design (a single group, pre- and posttest) makes it difficult to claim that the multimedia-aided instruction with cooperative learning groups was superior to traditional learning approaches. Second, the volunteer students in the study were drawn from a group motivated to improve their learning skills. Third, additional data were not collected at a later date after the posttest, and thus retention of knowledge was not evaluated over time. Fourth, the study was conducted at only one nursing school, and thus generalization of the findings is not possible. Nevertheless, this study contributes a learning approach that integrates knowledge with true understanding.
Congenital agminated melanocytic nevus - case report*
Agminated nevus is a clustered group of melanocytic nevi confined to a localized area of the body. Many pigmented lesions have been described in the literature as agminated, such as blue nevi, multiple lentigines, and Spitz nevi, but only a few cases of congenital agminated melanocytic nevi have been described. We report a case of a male child who presented with congenital agminated nevi, emphasizing the importance of physical examination, dermoscopy, histopathological evaluation, differential diagnosis, and follow-up to rule out the possibility of dysplastic or malignant changes.
INTRODUCTION
Agminated nevi are infrequent pigmented lesions. 'Agminated' is derived from the Latin word 'agmen', meaning 'aggregation', and refers to a clustered or circumscribed grouping of lesions confined to a localized area of the body. It should be distinguished from other forms of segmental distribution lacking a definite clustering. 1 Pigmented lesions that have been described in the literature as agminated include blue nevi, multiple lentigines, Spitz nevi, congenital melanocytic nevi, acquired melanocytic nevi, and lesions within nevi spili. [2][3][4][5] Nonpigmented lesions described as agminated include xanthogranulomas, angiofibromas, and neurilemmomas. 6 Most cases reported in the literature corresponded to Spitz or blue nevi; only a few cases of congenital agminated melanocytic nevi have been described. 4,7 One of these cases presented with a "blaschkoid" pattern following Blaschko lines on the abdomen. 8 The main differential diagnosis is nevus spilus. This nevus commonly appears during late infancy or early childhood, leading to the belief that it is a type of congenital nevus. 3,6 The lesion is characterized by a tan lentiginous background patch on which more darkly pigmented macules and papules are distributed. Nevus spilus usually presents as lentigo simplex on histological analysis, whereas agminated nevus generally shows a junctional or compound melanocytic nevus. 9 The association of melanoma with agminated nevus was initially described by Marghoob et al. in a patient with atypical mole syndrome, a pre-existing melanoma, and a dysplastic agminated nevus on the arm that first appeared during puberty. 6 Two cases of melanoma arising directly from agminated melanocytic lesions were later described. The first, published by Corradin et al., described the development of an invasive melanoma in an acquired agminated nevus that appeared after thermal and solar burns. 9 Most recently, Rezze et al. reported the case of a patient affected by atypical mole syndrome, with a personal history of melanoma, who had presented an agminated nevus on the anterior chest since puberty. 10 Some nevi within the agminated lesion presented clinical and dermoscopic criteria of atypical nevi. The whole lesion was excised, and the histopathological analysis showed an in situ melanoma, some areas of severe dysplasia, and other areas corresponding to junctional, intradermal, and compound nevi.
CASE REPORT
A nine-year-old male patient, phototype V, a student, was referred to the dermatology clinic for evaluation of multiple pigmented lesions on his left thigh. He was accompanied by his father, who reported that the child had had a cluster of melanocytic lesions since birth. The cluster had increased in size following the growth of the patient, and the minor lesions had become somewhat closer together over the years. The family history of skin cancer was negative. The father had a history of congenital nevi on the face and left lumbar region. The patient had epilepsy and was being treated with carbamazepine, imipramine, and risperidone.
On physical examination, the patient had a cluster of approximately 20 maculopapular, lightly palpable blackened lesions of different sizes on the anterior part of his left thigh, forming a cluster of nevi (Figures 1 and 2). The diameter of the total lesion was 9.2 cm x 7.6 cm. The largest isolated lesion measured 2.6 cm x 1.3 cm. Dermoscopy revealed a predominantly homogeneous pattern with diffuse brownish areas, a regular network at the periphery, and numerous regularly distributed small dots (Figure 3). No background pigmentation was noted on clinical or dermoscopic examination between the lesions.
A single lesion was excised for histological evaluation with a 2 mm border of normal skin. The histopathological findings revealed an intradermal melanocytic nevus without histological melanocytic hyperplasia or hyperpigmentation in clinically normal peripheral skin (Figures 4 and 5).
Considering the clinical and histopathological diagnosis of the lesion and the treatment limitations due to the size of the total lesion, it was decided to maintain dermoscopic monitoring of the patient every 4 months. The importance of sun protection was emphasized to the family, along with guidance on observing for changes such as modification of color, palpable changes (such as nodularity), change of shape, and rapid growth.
DISCUSSION
Agminated nevus is a rare lesion, and its incidence is unknown. In this case, the patient has a congenital agminated nevus, an even less frequently reported condition.
The patient presented a cluster of melanocytic lesions on the anterior part of the left thigh. There were no dysplastic or malignant changes on clinical examination, dermoscopy, or histology. The lesion can be differentiated from nevus spilus by the absence of a brown macular background on clinical examination and histopathology.
The association between epilepsy and agminated melanocytic nevus has, to the best of our knowledge, not been reported in the literature. There is an established relationship between large and giant congenital melanocytic nevi and neurocutaneous melanosis, in which the occurrence of epilepsy is common. In this case, there was no clinical suspicion warranting investigation for neurocutaneous melanosis.
The decision to follow or excise this particular and rare presentation of compound nevus should be individualized. We decided to monitor the patient, since the lesion was large and without atypical clinical, dermoscopic, or histological findings. Long-term follow-up is recommended due to the possibility of malignant transformation, even though this probability has not yet been precisely defined.
QTL Analysis Reveals Conserved and Differential Genetic Regulation of Maize Lateral Angles above the Ear
Improving density tolerance and planting density is of great importance for increasing maize production. The key to promoting high-density planting is breeding maize with a compact canopy architecture, which is mainly influenced by the angles of the leaves and tassel branches above the ear. It is still unclear whether the leaf angles of different stem nodes and tassel branches are controlled by similar genetic regulatory mechanisms, which limits the ability to breed density-tolerant maize. Here, we developed a population of 571 double haploid lines derived from the inbred lines PHBA6 and Chang7-2, which show significant differences in canopy architecture. Phenotypic and QTL analyses revealed that the genetic regulation mechanism is largely similar for closely adjacent leaves above the ears. In contrast, the regulation mechanisms specifying the angles of distant leaves and the angles of leaves vs. tassel branches are largely different. The liguleless1 gene was identified as a candidate gene for QTLs co-regulating the angles of different leaves and the tassel branch, consistent with its known roles in regulating plant architecture. Our findings can be used to develop strategies for the improvement of leaf and tassel architecture through the introduction of trait-specific or pleiotropic genes, thus benefiting the breeding of maize with increased density tolerance in the future.
Introduction
Maize is one of the most important crops, serving as a source of food, feed and industrial materials. Maintaining a sufficient supply of maize is vital for ensuring food security worldwide. Research has shown that during the history of maize breeding, increasing planting density and density tolerance has been the most important technical measure to improve maize yield. For example, Duvick conducted comprehensive studies on the hybrids released in the United States from 1934 to 2004 and found that the yield per plant and hybrid heterosis did not largely increase during the process of increasing the yield per unit area of American corn, while the planting density and density tolerance continued to increase [1][2][3]. Mansfield et al. also found that the increase in corn yield in the United States from 1930 to 2010 was strongly correlated with the planting density and variety density tolerance [4]. These observations demonstrate that increasing planting density and cultivating varieties with density tolerance are important for increasing maize production.
A key measure to increase density tolerance in maize is to breed plants with a compact architecture, which is conducive to increasing ventilation and light transmission and reducing the competition between plants. For example, introgressing the favorable allele of the leaf angle-regulating gene Upright Plant Architecture2 into modern hybrids was shown to significantly enhance maize yields under high-density conditions [5]. It has been reported that the most important requirement for maize ideotype is the compact plant configuration above the ear [6]. The three leaves around the upper-most ear (the first leaf above uppermost ear, the leaf of the uppermost ear and the first leaf below the uppermost ear) are vital to the formation of maize yield. The compact upper ear configuration is vital for ventilation, light transmission, the interception of light energy, and maintenance of effective photosynthetic efficiency. During the process of modern maize breeding, the leaves above the ear became more and more compact [7,8].
The canopy of corn plants is generally composed of tassels and three to nine leaves above the ear [8]. The angles of leaves and tassels are the keys to determining the light transmittance and canopy structure [9,10]. Therefore, it is very important to analyze the genetic basis of regulation of these angles and to shape a compact canopy structure for cultivating density-tolerant maize varieties. However, it is still unclear whether the leaf angles of different nodes and tassel branch angles are regulated differently, which limits our ability to breed density-tolerant plants.
In this study, we investigated the angles of the first (LA1), second (LA2), third (LA3), and flag leaves (FLA) above the top-most ear, and the angle of the first tassel branch (TBA), using a double haploid (DH) population derived from two inbred lines, PHBA6 and Chang7-2, with significant differences in leaf angle and tassel branch angle. QTL analysis of these traits was carried out in the DH population using a high-density linkage map. We found that the genetic regulatory basis was largely similar between adjacent leaves but relatively different between leaves and tassel branches. The liguleless1 (LG1) gene was identified as a candidate gene for QTLs controlling leaf and tassel branch angles, and it may play an important role in the regulation of these traits. These results are important for guiding the targeted or universal improvement of specific leaf or tassel configurations in the future.
Phenotyping of a DH Population Derived from PHBA6 × Chang7-2
To study the genetic basis for regulation of lateral angles above the ear, we selected two inbred lines, PHBA6 and Chang7-2, with significant differences in leaf and tassel angles and constructed a DH population composed of 571 lines. Compared with Chang7-2, PHBA6 had larger leaf and tassel branch angles (Figure 1).
We measured the LA1, LA2, LA3, and FLA above the top-most ear and the TBA of PHBA6, Chang7-2, and the 571 DH lines in six environments and calculated the best linear unbiased prediction (BLUP) values (Supplementary Table S1; Figure 2). Almost all the lateral angle-related traits showed a continuous normal distribution, indicating that these traits are controlled by quantitative trait loci (QTLs), consistent with previous reports [11][12][13]. All five traits showed transgressive segregation; among them, FLA and TBA showed wider variability. We also found that LA1, LA2, and LA3 had higher broad-sense heritabilities. Correlation analysis of BLUP values for the different traits showed a very strong correlation between the angles of adjacent leaves (LA1, LA2, and LA3, r ≥ 0.933). The correlations between the angles of leaves farther apart were still significant; however, the correlation gradually decreased with the distance between the leaf positions. For example, the correlation coefficient between FLA and LA3 was 0.547, whereas that between FLA and LA1 was only 0.463. TBA and FLA, which are adjacent to each other, were moderately correlated, whereas the correlations of TBA with LA1, LA2, and LA3 were weak (Figure 2). The significant correlations among these traits indicated that there may be some shared genetic regulators of these angles, while the differences in correlation also indicated that some regulators may be unique or may differentially regulate these angles [14].
High-Density Linkage Map Construction
We next analyzed the genetic basis of lateral angle regulation using the massively parallel 3' end RNA sequencing (MP3RNA-seq) strategy [15] to genotype the 571 DH lines. After SNP calling and strict filtering (see Materials and Methods), we retained 436 DH lines (135 lines with missing genotype data were excluded from the analysis) and 26,917 non-redundant SNPs for high-density linkage map construction. The total length of the genetic map was 833.94 cM, and the average genetic distance and physical distance between markers were 0.033 cM and 0.082 Mb, respectively (Table 1), indicating the high resolution of our map. To verify the accuracy and validity of our map, we first performed QTL analysis of cob color, a trait with a simple genetic architecture, in the DH population. The difference in cob color between the parents of the DH population was remarkable, so the cob color of 311 lines in the DH population was successfully scored. The ratio of red cobs to white cobs was 141:170, which is very close to the 1:1 ratio expected if cob color is controlled by a single gene (chi-square test, p = 0.100) [16]. QTL analysis revealed a very significant QTL on chromosome 1 between 47.589118 and 48.956707 Mb (peak at 48.336939 Mb, LOD = 2308.5), which is located within the tandem repeat region containing the known cob color gene Pericarp color1 (P1) [17] (Figure 3). The precise mapping of the P1 gene demonstrated the quality of our map.
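A minimal check of the reported 1:1 segregation test (the observed counts are taken from the text above):

from scipy.stats import chisquare

observed = [141, 170]            # red vs. white cobs among 311 scored lines
expected = [311 / 2, 311 / 2]    # 1:1 ratio expected for a single-gene trait
stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.3f}, p = {p:.3f}")  # gives p ~ 0.100, as reported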
QTL Analysis of Lateral Angles in the DH Population
To clarify the genetic basis for control of lateral angles above the top-most ear, QTL mapping was performed for the five lateral angle traits in the DH population using the high-density linkage map (Table 1). A total of 42 lateral angle-related QTLs were identified (Table 2), with 12, 8, 9, 6, and 7 QTLs identified for LA1, LA2, LA3, FLA, and TBA, respectively. The additive effects of these QTLs varied from 0.82° to 7.57°, with an average of 3.03°. For most QTLs, the allele decreasing the angle was contributed by Chang7-2, which indicates that Chang7-2 may be an important donor for the improvement of compact plant architecture above the ear. The phenotypic variation explained by these QTLs ranged from 0.85% to 21.8%, with an average of 6.42%. Most of the lateral angle QTLs (35/42) were minor-effect QTLs, which is consistent with the findings from previous genetic analyses of leaf angle [8,[11][12][13] and also reflects the complexity of the regulation of the angle traits. The detected QTLs explained 64.44%, 57.15%, 52.57%, 44.33%, and 50.90% of the phenotypic variation for LA1, LA2, LA3, FLA, and TBA, respectively. These results were consistent with the findings from the phenotypic analysis that LA1 has the highest heritability and FLA the lowest (Figure 2).
We next analyzed the distribution of the 42 QTLs and the overlap between them. QTLs were found on all chromosomes except chromosomes 6 and 7. There were seven overlapping QTLs among LA1, LA2, and LA3 (Table 3), and these QTLs explained a large portion of the phenotypic variation of these three traits (34.21-48.32%), indicating that the mechanisms regulating these lateral angles are very similar. Interestingly, we found that the overlapping QTLs between adjacent leaf angles (LA1 vs. LA2, LA2 vs. LA3) explained more phenotypic variation than those between more distant angles (LA1 vs. LA3). There were no overlapping QTLs between the most distant leaf angles (FLA vs. LA3). For FLA vs. LA1 and FLA vs. LA2, there was only one overlapping QTL each, and the proportion of explained phenotypic variation was small (12.11-21.08%). These results implied that the regulation mechanisms for the angles of distant leaves may be largely different. We further investigated the overlapping QTLs between the tassel and leaf angles. The results indicated that there was only one QTL overlapping TBA with each of LA1, LA2, and FLA, and no overlapping QTL was detected between TBA and LA3. These results indicated that, although there may be some shared factors regulating the angles of the tassel and leaves, their genetic bases are generally largely different.
Candidate Gene Analysis
There have been many studies on the genetic basis of maize leaf angle and tassel branch angle, and genes affecting the lateral angles have been reported. When searching our QTL intervals, we found several previously reported functional genes, or homologs of these genes, that may be candidate genes (Figure 4). The shared interval on chromosome 10 for qLA1_10, qLA2_10, and qLA3_10 contained a YABBY-like transcription factor gene, ZmYAB14 (Zm00001d025944). Previous reports have shown that the YABBY family of transcriptional regulators regulates the angle and architecture of maize leaves [18]. Therefore, ZmYAB14 is a likely candidate gene for these QTLs. qTBA_1a, which is located at the beginning of chromosome 1, contains the important maize brassinosteroid (BR) synthesis gene ZmDWF4 (181 kb from the QTL peak). ZmDWF4 encodes a cytochrome P450, steroid 22-alpha-hydroxylase, that catalyzes C-22 hydroxylation in the BR biosynthesis pathway [19]. BR plays an important role in the regulation of plant lateral angles. Thus, ZmDWF4 may be a candidate gene for qTBA_1a. Finally, the key lateral boundary definition gene LG1, which encodes a squamosa promoter binding transcriptional regulator [20], was located in the QTL confidence intervals of qLA1_2a, qLA2_2a, qFLA_2a, and qTBA_2a, very close to the peaks (43, 74, 45, and 573 kb, respectively). This, combined with the known role of LG1 in regulating LA and TBA [21,22], makes LG1 a likely candidate gene for these QTLs controlling LA and TBA.
A Large-Scale DH Line Population Improving the Accuracy of QTL Mapping with BLUPs
Previous studies have mapped many QTLs for lateral angles in maize using F2:3 populations, recombinant inbred line (RIL) populations, or a few other segregating populations [23][24][25][26][27]. Compared with RIL populations, the DH approach quickly converts heterozygous materials into completely homozygous lines, which greatly reduces the time needed to build a stable mapping population [28]. There is no dominance effect in DH lines; thus, the additive and epistatic effects of quantitative trait genes can be studied more accurately. Moreover, DH populations can be used to eliminate the influence of competition among individual plants and to reduce environmental test error compared with F2:3 populations and other early segregating populations. The cost of sequencing continues to decrease along with the development of high-throughput sequencing technology, and systems for the rapid creation of large-scale DH lines have gradually been established [29][30][31]. Therefore, DH populations are ideal for mapping QTLs and identifying candidate genes [32,33].
BLUPs minimize phenotyping errors and provide the best estimate of the genetic effects underlying a trait. BLUPs, rather than values from individual environments, are widely used in QTL mapping and GWAS studies [13,[34][35][36]. Thus, analysis based on BLUPs is more suitable for our purpose of revealing the conserved and differential genetic regulation of maize leaf and tassel branch angles. We also performed QTL analysis for individual environments and summarized the results (Figure 5). As shown there, most of the QTLs detected with BLUPs were stably detected in individual environments. In this study, we used the BLUP values of 571 DH lines with 26,917 SNPs to identify 42 QTLs for lateral angle traits. The phenotypic variation explained by these QTLs ranged from 0.85% to 21.8%, with an average of 6.42%. The confidence intervals of some of these QTLs were less than 1 Mb in length. This demonstrates the mapping accuracy that can be obtained when using large-scale maize DH line populations.
Comparison with Previous QTL Studies
We compared the published QTLs/genes for lateral angles with those identified in this study. The most prominent QTLs for LA, qLA1_2a, qLA2_2a, and qFLA_2a, were found to be located near the known classic LG1 gene on chromosome 2. The LG1 gene encodes a squamosa promoter binding (SBP) transcriptional regulator, which plays a key role in the process of ligule and auricle formation [20,37]. Tian et al. detected a significant QTL for leaf angle in the 2-Mb region near LG1 on chromosome 2 in the maize nested association mapping (NAM) population [13]. Ku et al. used a meta-QTL analysis to find an mQTL in the region between umc1165 and bnlg1297, which overlapped with the location of LG1, in F2:3 families of Yu823 and Yu87-1 [12]. Li et al. identified a QTL region (qLA2a) ranging from 3.09 to 4.30 Mb on chromosome 2, located near the interval of LG1, using three RIL populations derived from crossing Huangzaosi with Huobai, Weifeng322, and Lv28 [25]. Ding et al. and Tang et al. detected qLA2-1 and qLA2.1 in the same region, bin 2.01 on chromosome 2, which overlapped with the location of LG1, in a four-way cross population of D276, D72, A188, and Jiao51 and in a RIL population derived from B73 and SICAU1212, respectively [23,38]. These QTLs, detected in different genetic backgrounds and environments, were highly congruent, which supports the candidacy of LG1 for qLA1_2a, qLA2_2a, and qFLA_2a. In addition to lacking ligules and auricles, lg1 mutants have significantly smaller leaf and tassel branch angles [21,22], which is consistent with this gene being located in QTL intervals for both LA and TBA (qLA1_2a, qLA2_2a, qFLA_2a, and qTBA_2a). However, whereas the leaves of lg1 mutants lack a proper ligular region and display an upright growth habit, both PHBA6 and Chang7-2 have normal ligules and auricles. We therefore speculate that the functional variants may be located in the regulatory region of LG1. This locus was the only one identified that may regulate almost all lateral angles (LA1, LA2, FLA, and TBA) above the ear. In the future, this locus can be used to improve the above-ear configuration of breeding materials through molecular breeding, and further exploration of the functional natural variation at this locus would be of great benefit to the breeding of density-tolerant maize. qLA1_10, qLA2_10, and qLA3_10 shared the same interval on chromosome 10, which contained a YABBY-like transcription factor gene, ZmYAB14; the YABBY family of transcriptional regulators regulates the angle and architecture of maize leaves [18]. This QTL region closely neighbors the QTLs for lateral angles identified on chromosome 10 in previous studies [13,25,26,39,40]. No known gene co-located with another large-effect QTL, qLA2_3a, or its overlapping QTL qLA3_3a, suggesting that an undiscovered gene controlling maize leaf angle resides there. qTBA_1a, located at the beginning of chromosome 1, contains the important maize brassinosteroid (BR) synthesis gene ZmDWF4 (181 kb from the QTL peak). Ku et al. detected a corresponding QTL region for LA (bin 1.02 of chromosome 1), where the DWF4 gene is located, using two different F2:3 populations. Liu et al. detected ZmDWF4 co-locating with one large-effect QTL for LA, qLA1_2, using high-density SNP markers and an F2:3 population of H082183 × Lv28. Dzievit et al. identified ZmDWF4 in one prominent genomic bin on chromosome 1 for LA across the F2 and F2:3 generations of the B73 and Mo17 populations.
Therefore, ZmDWF4 may be a candidate gene for qTBA_1a. ZmDWF4 encodes a cytochrome P450, steroid 22-alpha-hydroxylase, that catalyzes C-22 hydroxylation in the BR biosynthesis pathway [19]. BR plays an important role in the regulation of plant lateral angles, and previous studies have revealed the effects of plant phytohormones, especially BR, in regulating leaf angles. Zhao et al. showed that LC2 may regulate leaf angle through a BR-independent pathway and participate in the feedback control of BR signaling, using rice leaf inclination2 (lc2) mutants (three alleles) [41]. Feng et al. confirmed that SLG, a BAHD acyltransferase-like protein gene, is involved in BR homeostasis, positively regulating endogenous BR levels to control leaf angle for planting density in rice [42]. Liu et al. found that TaSPL8 might regulate the lamina joint, the tissue connecting the leaf blade and sheath, by binding to the promoter of the brassinosteroid biogenesis gene CYP90D2 and activating its expression [43]. We believe that the causal genes for our angle QTLs may also be involved in BR synthesis/signaling, and this needs to be further studied in the future.
Plant Materials and Phenotypic Data Collection
For DH line production, the F1s derived from crossing PHBA6 and Chang7-2 were crossed with the maize haploid inducer CAU3 in 2014 in Sanya (the F1s were used as the female parent and CAU3 as the male parent). Putative haploid kernels were selected based on the lack of R1-mediated purple anthocyanin in the scutellum of the haploid embryo [29] and planted in the field nursery during the 2015 summer in Beijing. The putative haploids were further screened in the field based on their shorter stature and smaller biomass. Approximately 1% of the haploid plants were successfully self-crossed by the natural doubling method.
The DH population used here includes 571 lines and was derived by crossing PHBA6 and Chang7-2. The DH lines and their parents were phenotyped in six environments (each location in an individual year was considered an environment) at Haidian, Shunyi, and Shenbei. PHBA6 and Chang7-2 were used as controls and were planted alternately after every 20 DH lines. All the materials were grown under natural field conditions in each environment, and field management was in accordance with local practices. LA1, LA2, LA3, and FLA were measured on three plants per plot, while cob color and TBA were measured on three to five plants per plot. LA1, LA2, and LA3 measurements were collected at Haidian, Shunyi, and Shenbei. Cob color, FLA, and TBA were collected at Shunyi and Shenbei only. Cob color was scored as red or white. LA1, LA2, and LA3 were recorded as the angle between the midrib and the upper stem of the first, second, and third leaves above the top-most ear node, respectively. FLA was recorded as the angle between the midrib and the upper stem of the flag leaf. TBA was recorded as the angle between the first primary branch and the central spike of the tassel. BLUP values were calculated for each phenotype across the different environments using the lme4 package [17] in R, and these values were used for subsequent analysis. The broad-sense heritability (H2) estimates were calculated in R as reported previously [17].
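A hedged Python analogue of this BLUP computation (the study itself used lme4 in R); the simulated data and column names below are hypothetical stand-ins:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a long-format table: 100 hypothetical DH lines x 6 environments.
rng = np.random.default_rng(0)
lines = np.repeat(np.arange(100), 6)
envs = np.tile(np.arange(6), 100)
g = rng.normal(0.0, 3.0, 100)                     # line (genetic) effects
df = pd.DataFrame({"line": lines, "env": envs,
                   "angle": 30 + g[lines] + rng.normal(0.0, 2.0, 600)})

# Random-intercept model per line, with environment as a fixed effect.
fit = smf.mixedlm("angle ~ C(env)", df, groups=df["line"]).fit()

# The BLUPs are the predicted random effects for each line.
blups = pd.Series({k: v.iloc[0] for k, v in fit.random_effects.items()})

# Crude broad-sense heritability from the variance components.
vg, ve = float(fit.cov_re.iloc[0, 0]), fit.scale
print(f"H^2 (plot basis, approximate) = {vg / (vg + ve):.2f}")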
RNA Isolation, MP3RNA-Seq, and SNP Calling
Leaf tissues from three seedlings per DH line were bulked for total RNA extraction, which was used for construction and sequencing of cDNA libraries using the MP3RNAseq method [15]. The DH lines for which library construction was unsuccessful, or little sequencing data were obtained (135 lines in total), were excluded from further analysis.
SNP calling was conducted using the software samtools (v0.1.16) and bcftools (v0.1.16). Only uniquely mapped, non-duplicated reads were used after filtering [44]. Then, the MP3RNA-seq data for PHBA6 and Chang7-2 were analyzed for SNP identification (positions refer to version 4 of the B73 reference genome). A total of 35,836 high-quality SNPs were detected between PHBA6 and Chang7-2 with a read depth ≥ 5. In addition, SNPs with segregation distortion greater than 2:1 (chi-square test, p < 1.0 × 10−7) or a heterozygosity rate greater than 15% were discarded, and DH lines with a heterozygosity rate greater than 15% were also eliminated. Ultimately, 26,917 SNPs in 571 DH lines were retained for genetic map construction.
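A minimal sketch of this SNP filtering step, assuming a hypothetical genotype matrix coded 0 (PHBA6 allele), 2 (Chang7-2 allele), 1 (heterozygous):

import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(1)
geno = rng.choice([0, 1, 2], size=(571, 1000), p=[0.48, 0.04, 0.48])

def keep_snp(col, het_max=0.15, p_min=1e-7):
    n_a, n_het, n_b = np.sum(col == 0), np.sum(col == 1), np.sum(col == 2)
    n = n_a + n_het + n_b
    if n == 0 or n_het / n > het_max:
        return False               # too heterozygous for a DH population
    # Discard SNPs whose A:B ratio is strongly distorted from 1:1.
    _, p = chisquare([n_a, n_b])
    return p >= p_min

mask = np.array([keep_snp(geno[:, j]) for j in range(geno.shape[1])])
print(f"{mask.sum()} of {geno.shape[1]} SNPs retained")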
Genetic Map Construction
Genotype calling and recombination breakpoint determination for each DH line were conducted using a sliding window approach [45,46] with minor modifications (a minimal sketch of the window-based calling is given after this list).
(1) In each 15-SNP window, the DH genotype was defined by the ratio of alleles from PHBA6 and Chang7-2: the window was called homozygous if more than 11 of the 15 sites carried alleles from one of the parents, and heterozygous otherwise. Using the sliding window method, we found that recombination breakpoints were detectable as regions of several heterozygous SNPs that did not exceed six continuous windows. Therefore, we set heterozygous regions spanning fewer than seven uninterrupted windows as breakpoints and divided them into two at the midpoint. Adjacent windows with the same genotype were then merged together as a block.
(2) Blocks with fewer than five sequenced SNPs or a physical length less than 300 kb were set as missing to avoid calling false double crossover events.
(3) Adjacent windows and successive small blocks with frequently transient genotypes were merged into a larger heterozygous block, and heterozygous blocks with fewer than 15 SNPs or a physical length less than 1 Mb were set as missing to avoid false estimation.
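A minimal sketch of the window-based genotype call in step (1), assuming per-SNP calls along one chromosome of one DH line coded 0 (PHBA6) and 2 (Chang7-2):

import numpy as np

def window_calls(snps, win=15, thresh=11):
    """Call each 15-SNP window homozygous A/B if more than 11 of 15 sites
    match one parent, heterozygous (H) otherwise."""
    calls = []
    for start in range(len(snps) - win + 1):
        w = snps[start:start + win]
        if np.sum(w == 0) > thresh:
            calls.append("A")
        elif np.sum(w == 2) > thresh:
            calls.append("B")
        else:
            calls.append("H")
    return calls

# A single recombination shows up as a short run of "H" windows.
snps = np.array([0] * 40 + [2] * 40)
print("".join(window_calls(snps)))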
QTL Analysis
The QTL analyses of lateral angles were conducted using the composite interval mapping (cim) method in the R package R/qtl as reported [46]. The LOD threshold was defined by 1000 permutations at a significance level of p < 0.05. The 1.5 LOD-drop method was used for defining the QTL confidence interval. A linear QTL model was used for evaluation of QTL effect size.
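A simplified sketch of the permutation procedure used to set the genome-wide LOD threshold (the study used the cim function of R/qtl; here a single-marker regression LOD stands in for composite interval mapping):

import numpy as np

def lod_scores(geno, pheno):
    """Single-marker LOD scores: LOD = -(n/2) * log10(1 - r^2)."""
    n = len(pheno)
    r = np.array([np.corrcoef(g, pheno)[0, 1] for g in geno.T])
    return -(n / 2) * np.log10(1 - r ** 2)

def permutation_threshold(geno, pheno, n_perm=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    max_lods = [lod_scores(geno, rng.permutation(pheno)).max()
                for _ in range(n_perm)]
    return float(np.quantile(max_lods, 1 - alpha))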
Iterative Refinement of Schur decompositions
The Schur decomposition of a square matrix $A$ is an important intermediate step of state-of-the-art numerical algorithms for addressing eigenvalue problems, matrix functions, and matrix equations. This work is concerned with the following task: Compute a (more) accurate Schur decomposition of $A$ from a given approximate Schur decomposition. This task arises, for example, in the context of parameter-dependent eigenvalue problems and mixed precision computations. We have developed a Newton-like algorithm that requires the solution of a triangular matrix equation and an approximate orthogonalization step in every iteration. We prove local quadratic convergence for matrices with mutually distinct eigenvalues and observe fast convergence in practice. In a mixed low-high precision environment, our algorithm essentially reduces to only four high-precision matrix-matrix multiplications per iteration. When refining double to quadruple precision, it often needs only 3-4 iterations, which reduces the time of computing a quadruple precision Schur decomposition by up to a factor of 10-20.
Introduction
Given a matrix A ∈ C^{n×n}, a factorization of the form

A = Q T Q^H,   (1)

with Q ∈ C^{n×n} unitary and T ∈ C^{n×n} upper triangular, is called a Schur decomposition of A. This decomposition plays a central role in algorithms for solving eigenvalue problems, computing matrix functions, and solving matrix equations. Note that the diagonal of T contains the eigenvalues of A, and they can be reordered to appear there in any desirable order [18].
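A quick numerical check of this factorization, assuming SciPy is available:

import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
T, Q = schur(A, output="complex")            # A = Q T Q^H

assert np.allclose(Q @ T @ Q.conj().T, A)    # reconstructs A
assert np.allclose(np.tril(T, -1), 0)        # T is upper triangular
print(np.diag(T))                            # eigenvalues of A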
In this work, we consider the following question. Suppose that we have an approximate Schur decomposition at our disposal, that is, T is nearly upper triangular and Q is unitary or only nearly unitary. How do we refine Q, T to yield a more accurate Schur decomposition using an algorithm that is computationally and conceptually less demanding than computing the Schur decomposition of A from scratch by applying, e.g., the QR algorithm [11,12]?
Partly motivated by the increased role of GPUs and TPUs in high-performance computing, there has been a revival of interest in exploiting the benefits of a mixed-precision environment in numerical computations; see [2] for a recent survey. Now, suppose that a Schur decomposition of A has been computed, inexpensively, in a certain (low) machine precision but the target application demands higher accuracy. For the computed factor Q̂, the entries of Q̂^H A Q̂ below the diagonal and the entries of Q̂^H Q̂ − I are roughly on the level of the unit roundoff in low precision. The refinement procedure discussed in this work produces a Schur decomposition that is accurate in high machine precision essentially at the cost of a few matrix multiplications in high machine precision. Examples for this setting of current interest include mixing half or single with double precision on specialized processors, as well as mixing double with quadruple or even higher precision on standard CPUs. In both cases, the operations performed in high precision are significantly more costly and should therefore be limited to the minimum. Other scenarios that could benefit from the refinement of Schur decompositions include the solution of parameter-dependent eigenvalue problems and continuation methods [8,9].
The goal of this paper is to develop an efficient Newton-like method for refining a Schur decomposition. Closely related to the theme of this paper, the refinement of an individual eigenvector has been investigated quite thoroughly, going back to the works of Wielandt [41] and Wilkinson [42], and is usually addressed by some form of inverse iteration; see also [24]. When several eigenvalues and/or eigenvectors are of interest, it is sensible to refine the invariant subspace (belonging to the eigenvalues of interest) as a whole rather than individual eigenvectors. Various variants of the Newton algorithm have been proposed for this purpose, see, e.g., [5,8,9,13,15,31]. Often, these algorithms require the solution of a Sylvester equation at each iteration. While the developments presented in this work bear similarities, we are not aware that any of these existing methods would allow for refining a Schur decomposition or, equivalently, for simultaneously refining the entire flag of invariant subspaces associated with the Schur decomposition. In principle, Jacobi algorithms could be used for this purpose; see [16,19,23,38] for examples. While such algorithms converge locally and asymptotically quadratically under certain conditions [32], their efficient implementation requires significant attention to low-level implementation aspects, whereas the critical parts of our algorithm are entirely based on matrix multiplications. Improving an earlier method by Davies and Modi [14], Ogita and Aishima [34,35] recently proposed Newton-type methods for refining spectral decompositions, addressing the task considered in this work when A is a real symmetric or complex Hermitian matrix. We will discuss similarities and differences with our algorithm in Section 2.5. As a curiosity, we also point out a discussion by Kahan [28] on refining eigendecompositions for nonsymmetric matrices; see also Jahn's 1948 work [25].
Algorithms
To motivate the approach pursued in this work, let us consider a unitary matrix Q̂ that nearly effects a Schur decomposition:

Q̂^H A Q̂ = T + E,   (2)

where T is upper triangular and E is strictly lower triangular. We will write stril(·) to denote the strictly lower triangular part of a matrix. A unitary matrix Q_L that transforms T + E to Schur form satisfies the two matrix equations

stril(Q_L^H (T + E) Q_L) = 0,   Q_L^H Q_L = I.   (3)

In the following we linearize these equations, analogous to existing first-order perturbation analyses of Schur decompositions [29,39].
Writing Q_L = I + W for a small correction W, the second equation in (3) becomes W + W^H + W^H W = 0. Ignoring the second-order term W^H W, this means that W is skew-Hermitian and can be written as

W = L − L^H + D,   (4)

where L is strictly lower triangular and D is diagonal with purely imaginary diagonal elements. Now, the first equation in (3) becomes

stril((I + W)^H (T + E)(I + W)) = 0.   (5)

Setting ε := ∥E∥_F, where ∥·∥_F denotes the Frobenius norm, and dropping all second-order terms in ε, we arrive at the triangular matrix equation

stril(T L − L T) = −E.   (6)

If the eigenvalues of T are pairwise distinct, there is a unique strictly lower triangular matrix L satisfying (6); see [29,39] and also Theorem 1 below. Note that the diagonal factor D has disappeared in (6); it can be chosen to contain arbitrary diagonal entries on the imaginary axis. In the following, we choose D = 0. In order to attain a unitary factor Q_L, the matrix I + W = I + L − L^H needs to be orthogonalized; we will discuss different orthogonalization strategies in Section 2.2.
Algorithm 1 summarizes the procedure outlined above. Note that we allow the input matrix Q̂ to be non-unitary, which is taken care of by an optional orthogonalization step in line 1.

Algorithm 1 Refinement of an approximate Schur decomposition
Input: Matrix A ∈ C^{n×n}; approximate, not necessarily unitary, factor Q ∈ C^{n×n}.
1: Optionally replace Q by an (approximately) unitary matrix obtained from orthogonalizing Q.
2: repeat
3: Compute T̂ ← Q^H A Q.
4: Set E ← stril(T̂) and T ← T̂ − E.
5: Solve the triangular matrix equation stril(T L − L T) = −E for the strictly lower triangular matrix L.
6: Replace Q by an (approximately) unitary matrix obtained from orthogonalizing Q(I + L − L^H).
7: until convergence

In the following sections, we will provide details on Algorithm 1 and perform a convergence analysis.
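A minimal NumPy sketch of one iteration of Algorithm 1, assuming a solver solve_stril for equation (6) (a sketch follows Algorithm 2 below) and an orthogonalization routine (two options are sketched in Section 2.2):

import numpy as np

def refine_step(A, Q, solve_stril, orthogonalize):
    That = Q.conj().T @ A @ Q               # line 3
    E = np.tril(That, -1)                   # residual below the diagonal
    T = That - E                            # upper triangular part
    L = solve_stril(T, E)                   # line 5: stril(T L - L T) = -E
    W = L - L.conj().T                      # skew-Hermitian correction
    return orthogonalize(Q @ (np.eye(A.shape[0]) + W))   # line 6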
Solution of triangular matrix equation
Let us first consider the triangular matrix equation from Step 5 of Algorithm 1:

stril(T L − L T) = −E,   (7)

where T is upper triangular and E, as well as the desired solution L, are strictly lower triangular. For 1 ≤ j < i ≤ n, equating the (i, j)-entry of (7) yields

sum_{k=i}^{n} t_{ik} l_{kj} − sum_{k=1}^{j} l_{ik} t_{kj} = −e_{ij},   (8)

which, assuming t_{ii} ≠ t_{jj}, can be rewritten as

l_{ij} = ( −e_{ij} − sum_{k=i+1}^{n} t_{ik} l_{kj} + sum_{k=1}^{j−1} l_{ik} t_{kj} ) / (t_{ii} − t_{jj}).   (9)

Note that all entries of L appearing on the right-hand side of (9) are located either to the left of or below the entry (i, j) in the matrix L. Thus, if (i_1, j_1), (i_2, j_2), ..., (i_N, j_N) with N = n(n − 1)/2 describes an order in which the entries of L shall be determined, this order must satisfy

ν < µ whenever j_ν = j_µ, i_ν > i_µ or i_ν = i_µ, j_ν < j_µ,   (10)

for all ν, µ ∈ {1, ..., N}. Interestingly, this corresponds to the elimination order of the northeast directed sweep sequences in the nonsymmetric Jacobi algorithm, for which local quadratic convergence was proven in [32]. Each order satisfying (10) gives rise to a different algorithm for solving (6). A possible choice is a bottom-to-top columnwise order, leading to Algorithm 2. Note that we use Matlab notation to refer to entries and submatrices of a matrix.
Algorithm 2 Successive substitution for the solution of the triangular matrix equation (7)
Input: Upper triangular matrix T ∈ C^{n×n} with pairwise distinct diagonal entries. Strictly lower triangular matrix E ∈ C^{n×n}.
Output: Strictly lower triangular matrix L ∈ C^{n×n} satisfying (7).
1: for j = 1, ..., n − 1 do
2: for i = n, n − 1, ..., j + 1 do
3: Compute L(i, j) according to (9).
4: end for
5: end for
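A NumPy sketch of Algorithm 2, under its assumption of pairwise distinct diagonal entries; the inner update follows the recursion (9):

import numpy as np

def solve_stril(T, E):
    """Solve stril(T L - L T) = -E for strictly lower triangular L."""
    n = T.shape[0]
    L = np.zeros((n, n), dtype=np.result_type(T, E))
    for j in range(n - 1):                    # columns, left to right
        for i in range(n - 1, j, -1):         # within a column, bottom to top
            s = -E[i, j]
            s -= T[i, i + 1:] @ L[i + 1:, j]  # entries below (i, j)
            s += L[i, :j] @ T[:j, j]          # entries left of (i, j)
            L[i, j] = s / (T[i, i] - T[j, j])
    return L

# Quick check on random data with distinct diagonal entries.
rng = np.random.default_rng(1)
T = np.triu(rng.standard_normal((6, 6))) + np.diag(np.arange(6.0)) * 3
E = np.tril(rng.standard_normal((6, 6)), -1)
L = solve_stril(T, E)
assert np.allclose(np.tril(T @ L - L @ T, -1), -E)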
By construction, Algorithm 2 shows that (7) has a solution for every E under the stated condition on T. The unique solvability then simply follows from the fact that (7) can be regarded as a linear system of n(n − 1)/2 equations in n(n − 1)/2 unknowns.
For the other direction, assume that the condition is violated, that is, T has two identical eigenvalues λ. Then there is a principal submatrixT of T taking the form Since λ ∈ Λ(T 22 ), equations (11) and (12) have unique solutions L 21 and L 32 . This determines the right-hand side of (13), which is an equation of the same type as (7).
Since T_22 has pairwise distinct eigenvalues, the first part of this theorem implies that there is a strictly lower triangular solution L_22 to (13). By embedding L = diag(0, L̃, 0) we have thus found a nonzero L ∈ stril(C^{n×n}) such that stril(T L − L T) = 0. Because of linearity, this implies that the equation stril(T L − L T) = −E is not uniquely solvable for any E ∈ stril(C^{n×n}) if the condition is violated.

The non-locality of the memory access pattern renders an actual implementation of Algorithm 2 slow for larger matrices. Other orderings satisfying (10), like parallel wavefront techniques [36], may lead to more efficient implementations. In the following we use a recursive formulation, as proposed for a variety of matrix equations in [26,27], to achieve increased data locality in a convenient manner. For this purpose, let us partition
T = [ T_11 T_12 ; 0 T_22 ], E = [ E_11 0 ; E_21 E_22 ], L = [ L_11 0 ; L_21 L_22 ],
with T_11, E_11, L_11 ∈ C^{n_1×n_1}. Inserted into (7), the (2, 1) block yields a triangular Sylvester equation, T_22 L_21 − L_21 T_11 = −E_21, which has a unique solution L_21, provided that T_11 and T_22 have no eigenvalue in common, and which can be addressed using, e.g., the software package RECSY [26,27]. Once L_21 is determined, L_11 and L_22 can be obtained as solutions of two triangular matrix equations whose right-hand sides E_11 and E_22 are updated using the strictly lower triangular parts of T_12 L_21 and L_21 T_12, respectively. Note that both equations are of the same type as (6), but of size n_1 and n − n_1, respectively. Their solutions can be obtained by again subdividing into 2 × 2 blocks or, for smaller problems, by using Algorithm 2. The described strategy leads to the following recursive algorithm for solving (6).
Algorithm 3 Recursive block algorithm for triangular matrix equation (7)
Input: Upper triangular matrix T ∈ C n×n with pairwise distinct diagonal entries. Strictly lower triangular matrix E ∈ C n×n . Integer n min ≥ 2. Output: Strictly lower triangular matrix L ∈ C n×n satisfying (7).
In Algorithm 3, the Sylvester equation for L_21 is solved in line 5. The main cost of Algorithm 3 is in the two matrix multiplications in Lines 7 and 8, which form the updated right-hand sides and allow one to leverage the efficiency of level-3 BLAS operations.
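A recursive variant in the spirit of Algorithm 3 can be sketched as follows. It reuses solve_stril_equation from the previous sketch as its base case and calls SciPy's general Sylvester solver for the (2,1) block (the paper relies on triangular RECSY-style solvers instead); the way the two diagonal-block right-hand sides are updated is one consistent choice under the sign convention stril(TL − LT) = −E, not necessarily the exact formulation of the paper.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def stril(M):
    return np.tril(M, -1)

def solve_recursive(T, E, n_min=32):
    """Recursive block solver for stril(T L - L T) = -E with L strictly lower
    triangular; T and E are assumed to share the same (real or complex) dtype."""
    n = T.shape[0]
    if n <= n_min:
        return solve_stril_equation(T, E)          # Algorithm-2-style sweep
    n1 = n // 2
    T11, T12, T22 = T[:n1, :n1], T[:n1, n1:], T[n1:, n1:]
    E11, E21, E22 = E[:n1, :n1], E[n1:, :n1], E[n1:, n1:]
    # (2,1) block: triangular Sylvester equation T22 L21 - L21 T11 = -E21
    L21 = solve_sylvester(T22, -T11, -E21)
    # diagonal blocks: equations of the same form with updated right-hand sides
    L11 = solve_recursive(T11, E11 + stril(T12 @ L21), n_min)
    L22 = solve_recursive(T22, E22 - stril(L21 @ T12), n_min)
    L = np.zeros_like(E)
    L[:n1, :n1], L[n1:, :n1], L[n1:, n1:] = L11, L21, L22
    return L
```

The two products T12 @ L21 and L21 @ T12 correspond to the level-3 BLAS operations that dominate the cost.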
Orthogonalization procedures
In the following, we discuss algorithms for carrying out the (approximate) orthogonalization in lines 1 and 6 of Algorithm 1. A suitable orthogonalization procedure needs to satisfy two goals. On the one hand, it should improve orthogonality. On the other hand, it should not modify the matrix more than needed. With these two goals in mind, we require the following: for ε > 0, consider any matrices Q, W ∈ C^{n×n} such that W is skew-Hermitian and the bounds (16), ‖Q^H Q − I‖_F ≤ c_Q ε² and ‖W‖_F ≤ c_W ε, hold with constants c_Q, c_W independent of ε. Then the matrix Q_new returned by the orthogonalization procedure applied to Q(I + W) must satisfy the two relations in (17).
Orthogonalization by QR decomposition
When Q is unitary (that is, (16) is satisfied with c_Q = 0), then Q(I + W) is an element of the tangent space of the manifold of unitary matrices at Q. Retractions, a concept popularized in the context of Riemannian optimization [4,10], map an element from the tangent space back to the manifold. Thus, the first relation in (17) is trivially satisfied by a retraction. The second relation in (17) is satisfied by the QR-based retraction: if the QR decomposition I + W = Q_L R is normalized such that Q_L is unitary and the upper triangular matrix R has real and positive entries on its diagonal, then Q_new = QQ_L satisfies (17).
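As an illustration, a QR-based retraction with the diagonal of R normalized to be real and positive can be sketched as below; the function name is ours, and the normalization assumes the diagonal entries of R are nonzero, which holds when W is small.

```python
import numpy as np

def qr_retraction(Q, W):
    """Orthogonalize Q(I + W) by factoring I + W = Q_L R, rescaling so that
    diag(R) becomes real and positive, and returning Q_new = Q @ Q_L."""
    n = Q.shape[0]
    QL, R = np.linalg.qr(np.eye(n) + W)
    d = np.diag(R)
    phases = d / np.abs(d)          # unit-modulus phases of the diagonal of R
    QL = QL * phases                # multiply column k of Q_L by the phase of r_kk
    return Q @ QL
```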
Orthogonalization by Newton-Schulz iteration
The Newton-Schulz iteration applied to a matrix X_0 produces the iterates X_{k+1} = ½ X_k (3I − X_k^H X_k), which converge quadratically to the unitary polar factor of X_0 provided that ‖X_0^H X_0 − I‖_2 < 1. The following lemma shows that one step of the Newton-Schulz iteration yields a suitable orthogonalization procedure.
Proof. The first relation of (17) follows from [21, Problem 8.20]. To show the second relation, we set Δ := Q^H Q − I and expand one Newton-Schulz step applied to Q(I + W) in terms of Δ and W; all terms beyond Q(I + W) are of second order in ε.
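For reference, a single Newton-Schulz step reads as follows in NumPy; this is the standard iteration and is included here purely as an orthogonalization sketch.

```python
import numpy as np

def newton_schulz_step(Q):
    """One Newton-Schulz step, Q <- Q (3 I - Q^H Q) / 2, which improves
    orthogonality quadratically when ||Q^H Q - I||_2 < 1."""
    n = Q.shape[0]
    return 0.5 * Q @ (3.0 * np.eye(n) - Q.conj().T @ Q)
```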
Local convergence analysis
We now perform a local convergence analysis of Algorithm 1. This analysis makes use of the quantity
φ(T) := min { ‖stril(T L − L T)‖_F : L ∈ stril(C^{n×n}), ‖L‖_F = 1 } (18)
for an upper triangular matrix T, where stril(C^{n×n}) denotes the set of all strictly lower triangular n × n matrices. Note that 1/φ(T) governs the first-order sensitivity of a Schur decomposition [29,39]. Moreover, φ depends Lipschitz continuously on T. Proof. By the definition (18), the quantity φ(T) is the smallest singular value of the linear operator L_T : stril(C^{n×n}) → stril(C^{n×n}), L ↦ stril(T L − L T). Since Weyl's inequalities [22, Corollary 7.3.5] imply that singular values are Lipschitz continuous with constant 1, this concludes the proof.
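For small test matrices, φ(T) can be evaluated directly by assembling the matrix of the operator L_T on the strictly lower triangular part and taking its smallest singular value. The following brute-force sketch (our own, with O(n⁴) storage) is only meant to make the definition concrete.

```python
import numpy as np

def phi(T):
    """Smallest singular value of L -> stril(T L - L T) acting on strictly
    lower triangular matrices; intended only for small examples."""
    n = T.shape[0]
    idx = [(i, j) for j in range(n) for i in range(j + 1, n)]   # stril positions
    M = np.zeros((len(idx), len(idx)), dtype=T.dtype)
    for col, (i, j) in enumerate(idx):
        L = np.zeros_like(T)
        L[i, j] = 1.0
        S = np.tril(T @ L - L @ T, -1)
        M[:, col] = [S[p, q] for (p, q) in idx]
    return np.linalg.svd(M, compute_uv=False)[-1]
```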
Suppose that A has mutually distinct eigenvalues. Then its Schur decomposition A = QTQ^H is unique up to the order of the eigenvalues on the diagonal of T and unitary diagonal transformations. While φ(T) is invariant under the latter, it does depend on the eigenvalue order. To circumvent this difficulty, we introduce ψ(A) as the minimum of φ(T) over all upper triangular factors T arising in a Schur decomposition of A, that is, over all eigenvalue orderings (19). Because there are only finitely many different eigenvalue orderings, the minimum is attained and positive. The following lemma shows that ψ(·) remains positive in a neighborhood of A.
Theorem 5 Assume that the eigenvalues of A ∈ C^{n×n} are pairwise distinct. Given ε > 0, consider any matrix Q ∈ C^{n×n} satisfying the conditions (20), ‖stril(Q^H A Q)‖_F ≤ ε and ‖Q^H Q − I‖_F ≤ ε². Let Q_new denote the output of one iteration of Algorithm 1 applied to A, Q using an orthogonalization procedure that satisfies (17). Then, for ε > 0 sufficiently small, Q_new satisfies the corresponding strengthened bounds (21).
Proof. Let E = stril(Q^H A Q) and T = Q^H A Q − E. By the definition of φ(T), it follows that the solution L of (6) satisfies ‖L‖_F ≤ ‖E‖_F / φ(T) ≤ ε / φ(T).
In order to proceed from here, we need to show that φ(T), which depends on Q, admits a uniform lower bound. By the polar decomposition there is a unitary matrix Q̃ such that ‖Q − Q̃‖_F ≤ ‖Q^H Q − I‖_F ≤ ε²; see, e.g., [20, p. 380]. In turn, we obtain the perturbed Schur decomposition Q̃^H A Q̃ = T + E + Δ with ‖Δ‖_F = O(ε²), where we used that ‖Q‖_2 = ‖Q^H Q‖_2^{1/2} ≤ √(1 + ε²) ≤ 2 for ε sufficiently small. Equivalently, this can be viewed as a Schur decomposition of a perturbed matrix: A − Q̃(E + Δ)Q̃^H = Q̃ T Q̃^H. From Lemma 4 it follows that φ(T) ≥ ψ(A)/2 holds for ε sufficiently small. Hence, W = L − L^H satisfies ‖W‖_F ≤ c_W ε with c_W = 4/ψ(A). This means that the matrix Q̂ = Q(I + W) satisfies the conditions (16) and therefore the matrix Q_new returned by the orthogonalization procedure in line 6 of Algorithm 1 satisfies (17). This already establishes the second relation in (21). To establish the first relation, we note that stril(Q̂^H A Q̂) = E + stril(T L − L T) + O(ε²) = O(ε²) by the construction of L. Combined with ‖Q_new − Q̂‖_F = O(ε²), this proves the first relation in (21).
If Algorithm 1 is started with a matrix Q̃ sufficiently close to a unitary matrix Q that transforms A to Schur form, then the relations (17) imply that the matrix Q obtained from the orthogonalization procedure applied in line 1 satisfies the conditions of Theorem 5. Hence, Theorem 5 establishes local quadratic convergence of Algorithm 1.
Remark 6
The idea of imposing constraints (such as unitary matrix structure) only asymptotically upon convergence of an iterative procedure is not new. Similar ideas have been proposed in the context of differential-algebraic equations (Baumgarte's method [6]), constrained optimization (interior point method [33]), and -more recently -Riemannian optimization ( [17], landing algorithm [3]). However, Algorithm 1 does not seem to fit into any of these existing developments.
Complete algorithm for mixed precision
In this section we specialize the template Algorithm 1 to computing a Schur decomposition of a given matrix A in a mixed precision environment. The decomposition is first computed in a lower precision (lp in the following), and then refined to a higher precision (hp in the following). A typical scenario uses double precision as lp, and quadruple or 100-digit precision as hp. The initial lp decomposition produces the matrices Q̃ and T_lp such that A ≈ Q̃ T_lp Q̃^H, and the matrices A and Q̃ are the input to the refinement algorithm. In order to ensure convergence (see Theorem 5 and the discussion in the previous section), orthogonality of the matrix Q̃ is first improved by applying one step of the Newton-Schulz iteration,
Q ← ½ Q̃ (3I − Q̃^H Q̃). (22)
This computation is done entirely in hp; the matrix Q̃ is first converted to hp. The most time consuming parts of (22) are the two matrix-matrix multiplications in hp. We then enter the loop in Algorithm 1: the computation of T̂ = Q^H A Q in Line 3 is also done in hp, requiring two additional matrix-matrix multiplications. Equation (6) in Line 5 can be solved entirely in lp. Note that numerical instabilities in computing L are expected when there are clustered eigenvalues in T. In the hope that these instabilities do not spread through the entire matrix L, the initial lp Schur decomposition is reordered so that clustered eigenvalues appear on neighboring diagonal elements of T_lp. This is done using ordschur in Matlab; the target permutation of eigenvalues is determined by the order in which they appear after being projected onto a random line in the complex plane.
Finally, we need to compute the matrix Q_new by improving the orthogonality of Q(I + L − L^H). In line with Lemma 2, this is done by applying one step of the Newton-Schulz iteration to Q(I + L − L^H). To lower the computational complexity and avoid unnecessary matrix-matrix multiplications in hp, we proceed in the following way: let W = L − L^H and Y = Q^H Q − I. Then (22) with Q̂ = Q(I + W) becomes
Q_new = ½ Q (2I + 2W − Y − Y W + W² + W²Y + W³ + W²Y W),
where we used W^H = −W. We observe that the products between the matrices Y and W can all be computed in lp while the summation of the terms needs to be carried out in hp.
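The expansion above is an exact algebraic identity (no terms are dropped), which can be checked numerically; the snippet below compares the directly computed Newton-Schulz step on Q(I + W) with the expanded form on random test data of the kind appearing in the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Q = Q + 1e-8 * rng.standard_normal((n, n))          # slightly non-unitary iterate
L = 1e-4 * np.tril(rng.standard_normal((n, n)), -1) # small strictly lower triangular L
W = L - L.conj().T                                  # skew-Hermitian update
Y = Q.conj().T @ Q - np.eye(n)

Qhat = Q @ (np.eye(n) + W)
direct = 0.5 * Qhat @ (3 * np.eye(n) - Qhat.conj().T @ Qhat)
expanded = 0.5 * Q @ (2 * np.eye(n) + 2 * W - Y - Y @ W + W @ W
                      + W @ W @ Y + W @ W @ W + W @ W @ Y @ W)
print(np.linalg.norm(direct - expanded))            # agrees up to roundoff
```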
In any case, only 2 hp matrix-matrix multiplications are needed to compute the update: one to compute Y, and one to compute the product of Q with (2I + 2W − Y − · · · ). The cost can be reduced slightly by observing that the high-order terms tend to become very small and may even fall below the unit roundoff in lp; in our implementation, we therefore use a truncated form of this expansion. The complete procedure is summarized in Algorithm 4. As the numerical experiments will demonstrate, the computational complexity is almost entirely concentrated in hp matrix multiplication. The algorithm does 2 such multiplications before the loop, and then 4 in each loop iteration (among them the computation of T̂ ← Q^H A Q in hp; the matrix L is converted to hp after the lp solve). The iteration is stopped in line 5 when E becomes small enough. Note that lines 9-11 can be skipped if the norms of Y and W indicate that Q(I + W) is already unitary in hp. For this purpose, note that ‖Y‖_F ∼ u_hp, where u_hp denotes the unit roundoff in hp, while ‖W‖_F ∼ √u_hp due to the fact that W is skew-Hermitian (see (4)). This potentially saves 1 hp matrix multiplication in the penultimate iteration.
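Putting the pieces together, a uniform-precision sketch of the refinement loop (the template of Algorithms 1 and 4, without the lp/hp split and without eigenvalue reordering) could look as follows. It reuses the helper sketches above; in a genuine mixed-precision setting the products involving A and Q would be carried out in hp while the triangular solve stays in lp.

```python
import numpy as np

def refine_schur(A, Q, tol=1e-12, maxiter=10):
    """Sketch of the refinement template: iterate until stril(Q^H A Q) is small.
    Relies on newton_schulz_step and solve_recursive defined in earlier sketches."""
    n = A.shape[0]
    Q = newton_schulz_step(Q)                       # initial orthogonalization
    for _ in range(maxiter):
        That = Q.conj().T @ A @ Q
        E = np.tril(That, -1)                       # deviation from triangularity
        if np.linalg.norm(E) <= tol * np.linalg.norm(A):
            break
        T = That - E
        L = solve_recursive(T, E)                   # correction from stril(TL - LT) = -E
        Q = newton_schulz_step(Q @ (np.eye(n) + L - L.conj().T))
    return Q, np.triu(Q.conj().T @ A @ Q)
```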
Symmetric A
When A is real and symmetric, its Schur form becomes diagonal and our algorithms can be simplified significantly. Instead of Step 4 of Algorithms 1 and 4, T̂ is now decomposed into its diagonal part T and its off-diagonal part E. In turn, the solution L of the linear system (6) becomes nearly trivial; its entries are given by l_ij = e_ij/(t_ii − t_jj) for i > j. The resulting simplified algorithms bear close similarity with the method RefSyEv proposed by Ogita and Aishima [34]. However, in contrast to our approach, RefSyEv performs the improvement of diagonality and orthogonality in reverse order and integrates them in a single update. More concretely, using our notation, one iteration of RefSyEv updates Q ← Q(I + W) in the following manner. It first computes Y = Q^T Q − I and T̂ = Q^T A Q. Orthogonality is improved by setting the symmetric part of W to −Y/2, which is equivalent to one step of Newton-Schulz as pointed out in [34, Appendix A]. Then the diagonal of T̂ is updated and the skew-symmetric part of W is chosen to improve diagonality by solving an equation of the form (6), with the right-hand side updated to reflect the improvement of orthogonality. RefSyEv actually does not treat the symmetric and skew-symmetric parts separately but derives a simple and elegant formula to update all entries of W in a single step. The most significant advantage of RefSyEv is that it avoids the initial orthogonalization in Algorithms 1 and 4. This saves 2 hp matrix-matrix multiplications in the initial phase; note, however, that both RefSyEv and Algorithm 4 need the same number (4) of hp matrix-matrix multiplications in each iteration. In summary, there seems to be little that speaks in favor of our approach for symmetric eigenvalue problems, especially when taking into account that RefSyEv and its further development described in [35] take precautions for (nearly) multiple eigenvalues in order to still attain fast convergence in such a critical case. One (small) advantage of our approach is that the separation of the orthogonalization step allows for the use of other orthogonalization procedures, which may be of interest for future developments.
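In the symmetric (diagonal-T) case the solve step collapses to an entrywise division; a sketch is given below, using the sign convention stated in the text (it may need to be flipped depending on how equation (6) is normalized).

```python
import numpy as np

def solve_diagonal_T(T, E):
    """When T is diagonal with pairwise distinct entries, the triangular matrix
    equation decouples: l_ij = e_ij / (t_ii - t_jj) for i > j (as in the text)."""
    n = T.shape[0]
    L = np.zeros_like(E)
    for i in range(1, n):
        for j in range(i):
            L[i, j] = E[i, j] / (T[i, i] - T[j, j])
    return L
```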
Numerical experiments
In this section, we report the results of a number of numerical experiments in order to demonstrate the correctness and effectiveness of the proposed algorithm. The experiments were executed on a notebook computer with an Intel Core i5-1135G7 CPU and 24GB RAM, running Ubuntu 21.10. Algorithm 4 was implemented in Matlab R2021b. We have also implemented a straightforward extension of this algorithm to real Schur decompositions [18]. The main difference to the complex case is that some attention needs to be paid to 2 × 2 diagonal blocks containing complex pairs of eigenvalues. In Algorithm 3, the value of n_min was set to 4 in the complex case, and to 1 in the real case. To solve Sylvester equations of the form (15) we used the internal Matlab solver matlab.internal.math.sylvester_tri.
Our test procedure starts by generating an input matrix A in high precision, for which we discuss two scenarios: quadruple precision (i.e., 34 decimal digits of precision), and 100 decimal digits of precision. The matrix is given as input to Algorithm 4. There we use the standard double precision as low precision, and the builtin Matlab command schur to compute the initial lp Schur decomposition (either real or complex). The refined factors Q, T returned by Algorithm 4 are verified for correctness: we check if T has proper (quasi) triangular form, analyze orthogonality of Q by computing I − Q H Q F , and compute the residual T − Q H AQ F .
The performance of our algorithm heavily relies on the efficiency of matrix-matrix multiplication in the target precision. For that purpose we use the Matlab toolbox acc based on the Ozaki scheme [37], which was also used in [34]. The toolbox uses a configurable number of standard doubles to represent a single high-precision number; we use 2 doubles to represent a quadruple precision number, and 6 doubles to represent a 100-digit number.
The time needed by the whole refinement procedure (including the initial Schur decomposition in double precision) is compared with the time needed for computing the Schur decomposition immediately in quadruple/100-digit precision. For the latter we use Advanpix [1], a multiprecision computing toolbox for Matlab.
Example 7 This first example aims at verifying the accuracy of our algorithm by applying it to the companion matrix of the Wilkinson polynomial (x − 1)(x − 2) · · · (x − 20). The coefficients of the polynomial and, hence, the entries of the companion matrix vary wildly in magnitude and cannot be stored exactly in double precision. As a consequence, computing the eigenvalues via the Matlab command roots(poly(1:20)) incurs a large error; see Table 1.
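The double-precision failure can be reproduced with the NumPy analogue of the quoted Matlab call; np.roots, like Matlab's roots, computes the eigenvalues of the companion matrix formed from the double-precision coefficients.

```python
import numpy as np

# Double-precision analogue of the Matlab call roots(poly(1:20)):
coeffs = np.poly(np.arange(1, 21))         # Wilkinson polynomial coefficients
computed = np.sort(np.roots(coeffs).real)  # exact roots are 1, 2, ..., 20
print(np.max(np.abs(computed - np.arange(1, 21))))  # error far above unit roundoff
```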
Storing the companion matrix in quadruple precision allows for more accurate eigenvalues. Table 1 compares the accuracy of the results obtained from applying Advanpix's schur (with 34 digits) with the ones from Algorithm 4 using mixed double-quadruple precision. In both cases the obtained accuracy is on a similar level and significantly improves upon the double precision computation. However, it can also be seen that the use of the acc toolbox (which uses 2 doubles representing each number) results in a slight loss of accuracy compared to using Advanpix (34 digits) within Algorithm 4. A similar effect is seen when using 100 digits in Advanpix versus 6 doubles per high-precision number in acc. As the acc toolbox is significantly faster, we will use it in all subsequent experiments. Note that ongoing and future modifications of the Ozaki scheme, such as the ones presented in [30], will likely yield further improvements of the accuracy and efficiency of this approach.
Example 8 This experiment focuses on the performance of Algorithm 4. We generate a series of random matrices of increasing sizes with random entries from the standard normal distribution. Both the real and the complex Schur forms are computed in high precision. As can be seen from Figure 1, Algorithm 4 is much faster than Advanpix's schur. For this example, our algorithm always converges within 3 iterations to quadruple precision, while 7-8 iterations are needed to attain 100-digit precision. The computed factors Q and T satisfy ‖I − Q*Q‖_F ≤ 9 · 10^−32 in quadruple precision and ≤ 3 · 10^−97 in 100-digit precision, as well as ‖stril(Q*AQ)‖_F / ‖A‖_F ≤ 3 · 10^−33 in quadruple precision and ≤ 2 · 10^−98 in 100-digit precision, for all input matrices A. For more insight, Table 2 shows detailed timings for the largest matrix of size 1000. We note that Algorithm 4 spends essentially all of its time on high-precision matrix-matrix multiplications.
Example 9 This example aims to provide insight into how Algorithm 4 fails to converge in exceptional situations, when eigenvalues are poorly separated or, equivalently, the triangular matrix equation (6) is very ill conditioned. To illustrate how such a situation affects Algorithm 4, we generate a matrix A = XDX^−1 of size 150. Here, X is a random matrix with condition number 10^5. The matrix D is a diagonal matrix with diagonal elements chosen uniformly at random between −10 and 10, with the exception of two clusters of size 10. In each cluster, the cluster center is chosen randomly, and the cluster elements are perturbed randomly by at most 10^−5 from the center. We execute Algorithm 4 for computing the complex Schur form using double-quadruple precision. Figure 2 shows the absolute values of the computed matrix L during the second and sixth iterations of the algorithm. We note that, initially, all the entries of L are quite small with the exception of those that correspond to the two clusters. In later iterations, these elements have polluted the entire lower triangular part of L. In the eighth iteration, the algorithm fails as the matrix L contains NaNs.
Note that if this example is slightly softened (e.g., lowering the condition number of X to 10^4 or increasing the cluster radius to 10^−4) then Algorithm 4 converges in 6 iterations and yields a Schur decomposition in quadruple precision.
Example 10 Finally, we report on the performance of Algorithm 4 for a number of wellknown eigenvalue benchmark examples from MatrixMarket [7]. Table 3 details the timing, the errors in the computed factors, and the number of iterations needed for each of these test cases. Note that the matrix size is indicated in the matrix name.
Algorithm 4 fails to converge for the matrices denoted by asterisks, due to the reasons explained in Example 9. However, there is a simple trick to avoid the appearance of large numbers in the computed factor L: during its computation, each value larger than, e.g., 10^−5 in absolute value is immediately set to zero. Currently, there is little theoretical support that such a modification will lead to convergence of Algorithm 4; in fact, it still fails for the matrix from Example 9. However, for all matrices reported in Table 3, this trick successfully resolves the convergence problems when refining the complex Schur decomposition. We implemented this trick only for the complex Schur form and the obtained results are shown in Table 4.
Conclusions
In this work, we have developed iterative algorithms for refining approximate Schur decompositions that exhibit rapid convergence, in theory and in practice. Using the Newton-Schulz iteration for orthogonalization yields an algorithm that carries out most operations in terms of matrix-matrix multiplications, allowing for a simple and efficient implementation. In particular, when refining a double precision decomposition to high precision this allows us to leverage the Ozaki scheme and attain a significant speedup over existing implementations of algorithms for computing high-precision Schur decompositions. A number of points deserve further investigation. It is not unlikely that a suitable extension of the modifications described for the symmetric case in [35] will address the convergence failures observed in Examples 9 and 10. Also, it would be valuable to further study possibilities for merging the improvement of orthogonality and triangularity in a single step, as in [34], thereby avoiding the need for the initial orthogonalization in Algorithm 4. Finally, to attain very high precision it would certainly be beneficial to study the effective use of more than two levels of precision.
A phosphodiesterase 4B-dependent interplay between tumor cells and the microenvironment regulates angiogenesis in B-cell lymphoma.
Angiogenesis associates with poor outcome in diffuse large B-cell lymphoma (DLBCL), but the contribution of the lymphoma cells to this process remains unclear. Addressing this knowledge gap may uncover unsuspecting proangiogenic signaling nodes and highlight alternative antiangiogenic therapies. Here we identify the second messenger cyclic-AMP (cAMP) and the enzyme that terminates its activity, phosphodiesterase 4B (PDE4B), as regulators of B-cell lymphoma angiogenesis. We first show that cAMP, in a PDE4B-dependent manner, suppresses PI3K/AKT signals to down-modulate VEGF secretion and vessel formation in vitro. Next, we create a novel mouse model that combines the lymphomagenic Myc transgene with germline deletion of Pde4b. We show that lymphomas developing in a Pde4b-null background display significantly lower microvessel density in association with lower VEGF levels and PI3K/AKT activity. We recapitulate these observations by treating lymphoma-bearing mice with the FDA-approved PDE4 inhibitor Roflumilast. Lastly, we show that primary human DLBCLs with high PDE4B expression display significantly higher microvessel density. Here, we defined an unsuspected signaling circuitry in which the cAMP generated in lymphoma cells downmodulates PI3K/AKT and VEGF secretion to negatively influence vessel development in the microenvironment. These data identify PDE4 as an actionable antiangiogenic target in DLBCL.
For loss-of-function assays, PDE4B was transiently knocked down in the PDE4B-high DLBCL cell line OCI-Ly18 using a small interfering RNA (siRNA) strategy, as described earlier 4 . Confirmation of the PDE4B knockdown was performed with quantitative real-time reverse transcription polymerase chain reaction (Q-RT-PCR) and western blotting, which were also used for quantification of PDE4B expression in the DLBCL parental cell lines, as we described 5 . Lastly, PDE4B expression in primary DLBCL biopsies was determined by Q-RT-PCR.
Western blotting. Whole-cell lysates were extracted from DLBCL cell lines or primary murine tumors, separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), and transferred to polyvinylidene difluoride membranes, as described 6 . Thereafter, they were examined with antibodies directed to PDE4B (H-56, Santa Cruz Biotechnology), total AKT and phospho-AKT (Thr 308) (all from Cell Signaling, Beverly, MA), FLAG or β-actin (both from Sigma Aldrich, St Louis, MO).
Quantification of vascular endothelial growth factor A (VEGFA). VEGFA abundance was examined at RNA and protein levels, in human and murine samples. For quantification of the VEGFA gene expression, RNA was isolated from DLBCL cell lines following 6h exposure to Forskolin or vehicle control, cDNA generated, and real-time RT-PCR performed as described previously 7 . The VEGFA expression was normalized to that of the housekeeping control gene TBP (TATA-binding protein). The relative quantification was achieved by calculating ΔΔCT, and expression was defined as 2 -ΔΔCT , as we reported 8 . The primer sequences are listed in Supplementary Table 2. Quantification of secreted VEGFA was performed using Human or Mouse VEGF Immunoassay according to the manufacturer's recommendations (R&D systems, Minneapolis, MN). In brief, genetically modified or parental DLBCL cell lines were exposed to Forskolin or vehicle control in presence or absence of Roflumilast for 6h, followed by drug washoff, replenishment with fresh drug-free media, and supernatants collection after 24h. For the murine assays, VEGF was quantified in the sera collected immediately after lymphoma bearing mice (n=30) were sacrificed.
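As a small illustration of the 2^-ΔΔCT relative quantification described here, a minimal helper could look as follows; the Ct values are made up, and in this study the target and reference genes are VEGFA and TBP.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by the 2^-ddCt method: dCt = Ct(target) - Ct(reference),
    ddCt = dCt(treated sample) - dCt(vehicle control)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. VEGFA vs TBP, Forskolin-treated sample against vehicle control (illustrative values)
print(relative_expression(24.1, 21.0, 23.0, 21.2))
```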
Quantification of intracellular cAMP. Intracellular cAMP levels were measured as we previously described 4 . In brief, after exposure to the adenylyl cyclase activator Forskolin (40 μM, for 30 minutes) or DMSO, DLBCL cell lines were harvested and washed with PBS. cAMP was measured using enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions (Parameter cAMP assay; R&D Systems, Minneapolis, MN). cAMP was also measured in all conditioned media utilized in the HUVEC tube formation assay.
HUVEC tube formation assay. The HUVEC tube formation assays were performed on low-passage, sub-confluent human umbilical vein endothelial cells (HUVEC, Life Technologies). In brief, 3 x 10 4 HUVECs mixed with DLBCL cell line conditioned media were seeded in 48-well plates coated with basement membrane matrix (Geltrex, Life Technologies). After 4 to 5 hours of incubation, the tube-like structure formations were captured using an inverted microscope (200× magnification) (Olympus). The total tube length, branch points and loops from three independent fields were quantified using ImageJ software 9 . Importantly, we established several safeguards to guarantee the robustness of the HUVEC data. First, after exposure to Forskolin, the culturing media of the DLBCL cell lines was removed, the cells extensively washed and replenished with Forskolin/Roflumilast-free fresh media, which was collected 24h later to be used in the HUVEC assays. In addition, we quantified the levels of cAMP in these conditioned media and found it to be nearly undetectable (sub-pmol amounts) across all cell lines examined, irrespective of whether the cells had been exposed to Forskolin and/or Roflumilast (Supplementary Figure 3). Thus, these conditioned media are drug- and cAMP-free, indicating that the HUVEC assays are reflective of the effects of Forskolin/cAMP/Roflumilast on the production of VEGF by the lymphoma cells, and not of a fortuitous presence of these compounds in the media.

Microvessel density (MVD). Immunohistochemically stained sections were examined at 100x magnification to identify areas of maximal MVD ("hot spots"). For each case, 3 hot spots were examined at 200x magnification (each field with an area of 1.14 mm2) and all microvessels manually counted and highlighted using the ImageScope software, which calculated the area of each vessel. Hot spots were counted within the lymph node tissue, and areas of extranodal spread (defined as clearly outside the lymph node capsule and infiltrating adjacent fibro-adipose tissue) were excluded from the analysis. As previously defined 10 , all endothelial cells or endothelial cell clusters, with or without a lumen, that were clearly separate from adjacent microvessels were counted as one microvessel. Vessels with muscular walls were not counted. The total microvessel count of each hot spot was calculated and data reported as the mean counts of the three hot spots per 200x field for each case. As indicated, to determine MVD, we initially stained the murine lymphomas with vWF and CD34. Comparison of these two approaches in multiple independent tumors indicated that a larger number and total area of vessels were stained for CD34 than for vWF antibody, a finding previously reported in non-Hodgkin lymphoma 11 . Further, although we found a good correlation between the vessel numbers defined by these two measurements, CD34 staining was used for the MVD quantification.

Western blotting identified two of the principal PDE4B isoforms, PDE4B2 and PDE4B3, in the PDE4B-high DLBCL cell lines, whereas no PDE4B protein was detected in the PDE4B-low cell lines. This protein expression pattern was confirmed by Q-RT-PCR using a set of primers common to all isoforms of the PDE4B gene (left), whereas the identity of the larger PDE4B protein in OCI-Ly4 and OCI-Ly18 was confirmed to be PDE4B3 with the use of primers specific for this isoform (right). The PDE4B1-specific Q-RT-PCR did not yield any product in these cell lines. The PDE4B expression was normalized to that of the housekeeping gene TBP and relative quantification achieved by calculating ΔCt, with expression defined as 2 -ΔCt .
b) Brief exposure to the adenylyl-cyclase activator Forskolin (40 μM, 30 minutes) was significantly more efficacious in elevating the intracellular levels of cAMP in PDE4B-low than in PDE4B-high DLBCL cell lines (p<0.0001, ANOVA; p<0.01 Bonferroni's multiple comparison post-test for PDE4B-low cell lines, non-significant for PDE4B-high cells). Data shown are mean ±SD of the fold-increase in cAMP level following Forskolin exposure obtained in an assay performed in triplicate.

Approximately the same areas of two representative lymphomas (ULN1 and ULN2) were stained for CD34 (left) and vWF (right panels). CD34 consistently showed a more uniform staining, whereas vWF was overall fainter with more variable intensity. The size bar indicates 100 µm.
Functional outcome of intracapsular fracture of neck of femur-osteosynthesis by cannulated cancellous screw fixation in adults
Introduction: Fractures of neck of femur have always presented great challenges to the orthopaedic surgeons. This remains, even today, an unsolved fracture as far as treatment and results are concerned. Results depend upon the extent of injury, timing of surgery and adequacy of reduction and fixation. Fixation with cannulated cancellous screws is usually adequate for femoral neck fractures. Lateral cortex plays a very important role in screw fixation. Aims and objectives: To study the effectiveness of cannulated cancellous screw fixation for treatment of fracture of neck of femur in adults. Materials and methods: This study was conducted at NRI Institute of Medical Sciences, Visakhapatnam, AP from March 2016 to February 2019. The patients with intracapsular fracture of neck of femur are evaluated with pre-operative X-rays of pelvis with both hips and X-ray of the concerned hip joint both in anteroposterior and lateral views and their outcome was evaluated postoperatively after fixation with cancellous screws. The outcome is evaluated in terms of pain relief, extent of ambulation achieved after surgery. The classifications we followed are Pauwel’s and Garden’s classification of fracture of neck of femur. The patients were followed up to one year to assess the functional outcome. Observation and results: A good result was obtained in 66.1% of the patients, excellent in 23.2%, fair in 3.8% and poor result in 6.9% of the patients. Complications such as Non-union & avascular necrosis in one case, Non-union and Extrusion of screws in one case, Cut through of screws into articular surface leading to painful joint in one case. Most of the cases of intracapsular neck of femur were in the age group of 31-40 years. There was male preponderance as shown in this study (69%). Conclusion: By the usage of multiple cannulated cancellous lag screws, compression effect at the fracture site is achieved, it also avoids re-displacement and rotations. The implant occupies less volume in the small-sized femoral necks of South Indian patients allowing better osteosynthesis of intracapsular fracture of neck of femur. Multiple cannulated cancellous screw fixation for intracapsular fracture of neck of femur is an easy, safe & useful procedure with encouraging results.
Introduction
Femoral neck fractures often are associated with multiple injuries and high rates of avascular necrosis and non-union. In 1931, Smith-Petersen using a tri-flanged nail, reported a series of open nailing in which he advocated reduction, impaction and internal fixation [1] . In 1989, Lars Rehnberg, Claes Olerud, from the University Hospital, Uppsala, Sweden recommended subchondral cannulated screw fixation for femoral neck fractures [2] . In 1997, V. K. Gautam, and colleagues of Department of Orthopaedics, Maulana Azad Medical College, New Delhi, India recommended management of displaced femoral neck fractures in young adults (15-50 years), primary open reduction and internal fixation of femoral neck fractures with three cancellous screws [3] . In 2010, Lin SQ et al. observed cannulated screw fixation and percutaneous autogenous bone marrow grafting is a more efficient method for accelerating healing of femoral neck fractures and reducing femoral head necrosis [3] . Even with undisplaced fracture of neck of femur, there is no assurance that a fracture will attain an excellent result [3] .
Early anatomical reduction compression of the fracture and rigid internal fixation are used to promote union. An attempt has been made by this study to evaluate the role of multiple cancellous lag screws in internal fixation of intracapsular fracture of neck of femur. The aim of present study is to evaluate the effectiveness of cannulated cancellous screw fixation for treatment of fracture of neck of femur in adults.
To study the rate of union (radiological and clinical) and the incidence of complications and compare the results of our study with the works reported.
Material and methods:
The present study was carried out in the Department of Orthopaedics, NRI Institute of Medical Sciences, Visakhapatnam, AP during March 2016 to February 2019. All the patients were preoperatively assessed to grade the type of fracture by "Garden's Classification" and prepared for surgery. All fractures were reduced by the Leadbetter (in flexion) method. A total of 28 cases of intracapsular fracture of neck of femur in adults were treated with accurate reduction and rigid internal fixation under X-ray control with 3 partially threaded 6.5 mm cannulated cancellous screws.
Surgical Technique:
Internal fixation of intracapsular fracture of neck of femur by multiple cannulated cancellous screws.
Anaesthesia:
Done under spinal anaesthesia, only a few cases were done under general anaesthesia.
Reduction:
The patient was kept over the fracture table and fracture reduced by Leadbetter technique. The reduction was confirmed by both anteroposterior and lateral view of the hip.
Technique:
After the reduction vertical incision was given over the lateral surface of the greater trochanter and extended distally up to 6-8 cm, dissection was carried down through the skin and subcutaneous tissue and fascia lata was split. Femoral cortex was approached by detaching vastus lateralis and reflecting it. Then, the lateral cortex was predrilled with 2 mm drill bit. Guide pins placed across the fracture from the lateral aspect of the femoral shaft parallel to the neck usually at a 135° angle. One guide pin placed adjacent to the medial cortex at 135° angle. Three guide pins placed at the middle of the head, one inferocentral, one anterior and one posterior, and driven within 5 mm of subchondral bone. Checked under C-arm, both anteroposterior and lateral views; the guide pins measured to determine the correct screw length. After satisfactory position of the guide wires in the neck, drilling and tapping done over the guide wires with cannulated drill and cannulated tap respectively. Cannulated cancellous lag screws inserted over the guide wires by using the cannulated screw driver. Confirmation of adequate fixation done by checking under C-arm both anteroposterior and lateral view. The screws should be within 5 mm of subchondral bone. If necessary washers were used to prevent the screw head shrinking and get the uniform compression at the fracture site. Haemostasis was secured. Wound closed in layers over the suction drain.
Observation and results:
In our study, there were 20 males and 8 females. Results were graded as per a six-point functional outcome scoring system for Asians after hip surgery [5].
Discussion
Our important objective in the treatment of intracapsular fracture of the femur is to obtain stable osseous support of the femoral head on the femoral neck. The purpose of the fixation screws are to lock the fracture in a position in which the femoral neck gives bone-on-bone support to the femoral headneck fragment, to prevent posterior and varus migration of the femoral head, and to be parallel so as to maintain bone-onbone support as the fracture settles in the healing period.
There are several reasons for the use of a cannulated screw system:
1. The smaller-diameter guide pins can be used to determine the screw position and length accurately.
2. Cannulated screw systems improve the accuracy of screw placement by supplying jigs that can place guide pins very accurately; and with parallel screws, excellent compression can be produced atraumatically by the lag effect of the screws.
The total number of cases of intracapsular fracture of neck of femur followed up was 28. The cases were treated by multiple cannulated cancellous screws and followed up from 6 months to 2 years. The percentage of male patients was higher than that of female patients. The commonest age group of the followed cases was 31-40 years. The commonest radiological type of fracture was Garden's type IV, followed by Type II. In our study, Garden's type IV showed poor results when compared to others [6]. All the patients were counselled regarding the precautions to be followed after surgery. One case developed absorption of the neck and loosening of the screws, for which Girdlestone excision arthroplasty was done. One case developed collapse at the fracture site and extrusion of screws due to early weight-bearing, and the case did not turn up for follow-up. This series contains patients who are hard-working labourers and sedentary females. The mechanism of injury in most cases was a fall from height; there were also more violent injuries leading to intracapsular fracture. The reduction of fracture was done by the Leadbetter method in all cases and was confirmed by X-rays in both anteroposterior and lateral views. Through a lateral approach, the fracture was fixed by multiple cannulated cancellous screws. In most of the cases the fixation was done with two or more screws to prevent rotation of the proximal fragment. The threaded portions of the screws were seen to cross the fracture line to obtain a better lag effect. A similar study was conducted by V. K. Gautam.
Conclusion
Most of the cases of intracapsular fracture of neck of femur were in the age group of 31-40 years, and there was a male preponderance in this study. In our study, the side most frequently fractured was the right hip: of 28 patients, 18 had fractured the right hip. The nature of violence in this study was mainly a fall from a height, and the injury was usually not associated with any other injuries. In our institute, accurate reduction and rigid internal fixation of intracapsular fracture of neck of femur was done with the help of a C-arm, and the results were encouraging even up to the age of 60 years. By early mobilization of the patients, the complications of prolonged immobilisation, such as thromboembolism and hypostatic pneumonia, were avoided. By the usage of multiple cannulated cancellous lag screws, a compression effect at the fracture site is achieved, and re-displacement and rotation are avoided. The implant occupies less volume in the small-sized femoral necks of South Indian patients, allowing better osteosynthesis of intracapsular fracture of neck of femur. Thus, multiple cannulated cancellous screw fixation for intracapsular fracture of neck of femur is an easy, safe and useful procedure with encouraging results.
Inaugural Description of Extrafloral Nectaries in Sapindaceae: Structure, Diversity and Nectar Composition
Sapindales is a large order with a great diversity of nectaries; however, to date, there is no information about extrafloral nectaries (EFN) in Sapindaceae, except recent topological and morphological data, which indicate an unexpected structural novelty for the family. Therefore, the goal of this study was to describe the EFN in Sapindaceae for the first time and to investigate its structure and nectar composition. Shoots and young leaves of Urvillea ulmacea were fixed for structural analyses of the nectaries using light and scanning electron microscopy. For nectar composition investigation, GC-MS and HPLC were used, in addition to histochemical tests. Nectaries of Urvillea are circular and sunken, corresponding to ocelli. They are composed of a multiple-secretory epidermis located on a layer of transfer cells, vascularized by phloem and xylem. Nectar is composed of sucrose, fructose, xylitol and glucose, in addition to amino acids, lipids and phenolic compounds. Many ants were observed gathering nectar from young leaves. These EFNs have an unprecedented structure in the family and also differ from the floral nectaries of Sapindaceae, which are composed of secretory parenchyma and release nectar through stomata. The ants observed seem to protect the plant against herbivores, and in this way, the nectar increases the defence of vegetative organs synergistically with latex.
Introduction
A nectary is a gland that secretes nectar, a secretion mainly composed of sugars [1].The fact that the concept of nectary is functional implies a great morphological diversity.Nectaries can be trichomes, epidermal surfaces, as well as glands composed of nectariferous and subnectariferous tissues, which can be vascularized or not [1,2].
The type of nectary found in a given clade is directly related to the morphological evolution of secretory structures of this group and their ecological relationships.This fact can be especially noted in Sapindales, a morphologically multidiverse order that shows an unusual diversification of their glands [3].
Despite the large number of species in this family and the morphological studies of its representatives, the first report of extrafloral nectaries in Sapindaceae has only occurred recently in Paullinia, Serjania and Urvillea [3], and there is no structural information about them.Extrafloral nectaries vary in shape and size in the different families of Sapindales and could be trichomes, stalked glands or ocelli located on leaves, cataphylls, bracts, inflorescence axis and fruits [3][4][5][6][7][8][9][10][11][12][13].
Despite the morphological and ontogenetic variation, all nectaries recorded in Sapindales are composed of nectariferous parenchyma, vascularized by xylem and phloem, and Plants 2023, 12, 3411 2 of 14 release nectar through stomata occurring in a non-secretory epidermis [3,12,13].Although nectaries of Sapindaceae would supposedly have a similar tissue composition [3], there is no anatomical information to date.
Previous personal field observations indicate a relationship between the nectar produced by these nectaries and the attraction of a large number of ants that patrol the young branches, as reported by Villatoro-Moreno et al. [14] in a largely well-known mutualistic relationship [15][16][17][18][19].Although nectar is composed basically of water and sugar, other minor components may occur in this secretion, such as amino acids and proteins, lipids, and even phenolic compounds and alkaloids [1,[20][21][22].There is no information about nectar composition in Sapindaceae.The analysis of secretion composition is fundamental to correctly classify a gland and perform ecological inferences [23].
Due to the scarcity of information about the extrafloral nectaries in Sapindaceae and their likely distinct constitution from all other nectaries of Sapindales, the goal of this study was to analyse the structure of extrafloral nectaries and the main chemical classes of metabolites in Urvillea nectar to better understand their morphological, anatomical and functional diversity in Sapindaceae.
Morphology
The leaves of Urvillea ulmacea are trifoliolate with leaflets exhibiting a dentate margin (Figure 1A) and nectaries in the tooth apex (Figure 1B).These nectaries are inconspicuous, forming a small submarginal rounded bulge on the abaxial surface of the leaflets (Figure 2A), where the nectariferous tissue is found slightly sunken, corresponding to an ocellus (Figure 2B).This secretory epidermis is formed of cells with an irregular pattern of organization, covered by a smooth cuticle, and there are no stomata (Figure 2C,D).
Anatomy
Anatomically, Urvillea nectaries are comprised of a multiseriate secretory epidermis (Figure 3A) covered by a thin cuticle, with three to six layers of cells located on a layer of transfer cells (Figure 3B).The secretory cells are isodiametric and have thin, pecto-cellulosic walls, dense cytoplasm with several vacuoles full of secretion (Figure 3C), and a prominent nucleus with evident nucleolus.These cells are juxtaposed and irregularly grouped since the beginning of their ontogeny (Figure 3D).However, these cells split, forming deep slits in the secretory tissue during the secretory phase (Figure 3B,E), indicating a likely relation to the release of nectar, which reaches the nectary surface without rupture of the cuticle (Figure 2D or Figure 3C).The transfer cells, beneath the secretory epidermis, have thick, primary (Figure 3C,F) and suberized anticlinal walls (Figure 4A,B) and a single central vacuole containing phenolic compounds (Figure 3C).
The nectaries are vascularized by ramifications of the vascular bundles of the leaf tooth (Figure 3A,B), composed mainly of phloem (Figure 3E) but also containing xylem vessels (Figure 3F).The vascular terminations expand beneath the transfer layer throughout the nectary extension.The leaf mesophyll underlying the nectaries has a large amount of laticifers, phenolic (Figure 3A,B) and crystalliferous idioblasts (Figure 3F) containing druses in a similar distribution to the rest of the leaf.
Nectar
The histochemical analysis detected the presence of carbohydrates (sugars), proteins and/or amino acids and phenolic compounds in the nectar (Figure 4C-F and Table 1). No starch storage was found in the nectariferous or subnectariferous tissue. Lipids are present in the nectar but were not histochemically detected (Table 1) due to their very low concentration (Figure 5 and Table 2).
The chemical analysis revealed the nectar is mainly composed of sucrose, fructose, xylitol and glucose, in addition to some minor components, such as amino acids, lipids (Figure 5 and Table 2) and phenolic compounds (Figures 6 and 7). Comparing the phenolic derivatives of nectaries and leaves, four flavones were found exclusively in the nectaries of Urvillea ulmacea (Figures 6 and 7; compounds 1-3 and 6).
After the release of nectar on the surface of the nectary, it is gathered by a large number of ants that forage the plant. Although nectar droplets have been rarely observed, the ants collect the secretion without damaging the nectary. During field collection of the plants, several leaves were analysed, and no wounds were observed.
Discussion
In this study, we unravelled the structure of extrafloral nectaries in Sapindaceae, revealing an unexpected gland formed by multiple epidermis, subtended by a layer of transfer cells, which has no parallel in Sapindales.Despite the large number of ants foraging leaves and shoots of several species of Sapindaceae (pers.obs.), the first reports of leaf nectaries in the family are recent [3,14], probably due to its greatly reduced size.
Extrafloral nectaries vary in position within Sapindaceae.They have been noted on the teeth of leaflet margin in Paullinia seminuda Radlk.[3], as observed in this study for Urvillea ulmacea.However, the nectaries were found at the leaflet apex in Paullinia carpopoda Cambess.[3] and in the apical region or along the midrib in Nephelium lappaceum L. [14].The morphology of these nectaries also varies in the family.They are sunken ocelli in Urvillea, elevated ocelli in Nephelium [14] and stalked glands in Paullinia [3].
The structure of the extrafloral nectaries is highly diverse in Sapindales, but the histological composition described for those of Urvillea represents a novelty for the order.In Anacardiaceae, they are trichomatous, found clustered in depressions on the leaf surface and often confused with domatia [5,8].In Meliaceae and Rutaceae, nectaries are ocelli formed by nectariferous epidermis and parenchyma [4,6,7,[9][10][11].Although Simaroubaceae also has ocelli on the surface of the leaf blade, nectaries formed only by secretory parenchyma, which release nectar through stomata, were recently observed at the apex of Homalolepis and Simaba leaflets [3,12].
Although the nectaries of Urvillea stand out for being formed of a multiple epidermis with irregular cell arrangement, the irregularity in the organization of epidermal secretory cells is common in Sapindales.Glandular trichomes of Sapindaceae, Meliaceae and Simaroubaceae are characterized by the irregular arrangement of their secretory cells [3,12], which seems to be the pattern of cell organization in the members of these families, denoting an unusual activity of the protoderm during the ontogenesis of these glands.Furthermore, we observed that secretory cells, grouped in distinct clusters, split during secretory activity.These slits formed during nectary differentiation indicate that the nectar may be released laterally to the inner cells of the secretory tissue, similar to that observed in some colleters [32][33][34][35].In Plumeria (Apocynaceae), the separation of the epidermal cells is due to Plants 2023, 12, 3411 9 of 14 the dissolution of the middle lamella [32].More studies are needed to verify the mode of secretion release in nectaries in Sapindaceae and the formation of these slits.
The multiple secretory epidermis of Urvillea are situated on a layer of transfer cells.Despite the large number of cells that form these nectaries, the presence of a layer of transfer cells under the secretory portion is similar to that reported for nectariferous trichomes of Adenocalymma (Bignoniaceae) [36].This transfer layer is associated with the shortdistance transport of solutes via the transmembrane.Apparently, transfer cells occur when the area for transport is much smaller than the volume of the destination structure and the transported solute is accompanied by minimal solvent flux [37,38].In Adenocalymma nectaries, cell wall invaginations are also located in anticlinal regions, and a large number of mitochondria are associated with the polarized cell transport of solute [36].
The absence of stomata in the nectaries of Urvillea is also an unusual feature for the order, except for the trichomatous ones [3]. The release of nectar directly through the wall and cuticle would not be expected even for the family, since the floral nectaries of Sapindaceae are formed of secretory parenchyma that release the nectar through stomata [3,39-49].
The nectar of Urvillea is composed of sugars, amino acids, lipids and phenolic compounds. Sucrose, fructose and glucose are the main components of the nectar of most plants [1] and are among the major compounds of the nectar in Urvillea. However, a fourth sugar was found in large amounts: xylitol. The occurrence of this sugar in the nectar is unexpected, since xylitol is not commonly produced by plants. Its presence may be related to the inhibition of bacterial growth provided by xylitol [50]. Further studies are necessary to investigate the production and functions of this sugar in nectar.
Phenolic compounds, commonly found in the nectar, also have antimicrobial activity [1,51]. The phenolics detected in this study were identified as flavones, which have these properties [52-54]. The presence of these compounds is very important in secretions composed basically of water and carbohydrates [55] to avoid the proliferation of microorganisms that may cause necrosis of the nectariferous tissues. Additionally, flavonoids provide photoprotection to the secretion, preventing the oxidation of compounds. Therefore, the occurrence of flavonoids in the nectar of Urvillea may prolong the secretory activity of the leaf nectaries.
The second major class of compounds in the nectar is amino acids, which account for about 14% in Urvillea. This high concentration of amino acids in the nectar is unusual. In general, amino acids are 100-1000 times less concentrated than sugars in nectar. However, a higher concentration of amino acids has a greater potential to attract ants, since it influences the taste of nectar, increasing its attractiveness and its nutritional importance, especially for animals whose only food resource is nectar [56-59]. Perhaps the high concentration of amino acids in the nectar of Urvillea is one of the factors responsible for the large number of ants observed continuously foraging the plants. Although the formation of droplets on the surface of the nectary is rarely observed, the nectar is likely stored in the slits of the epidermis, where it is collected, especially in young leaves.
Extrafloral nectaries have a well-known mutualistic relationship with ants that protects the plant against herbivores [15,17,19,60], and this interaction in Sapindaceae has been studied by Villatoro-Moreno et al. [14], who reported 10 species from five subfamilies of ants visiting the nectaries of Nephelium lappaceum. The ants were observed continuously foraging Urvillea ulmacea in the field, effectively protecting the plant. This protection, associated with the occurrence of laticifers and phenolic idioblasts [61], inhibits herbivory; no wounds were observed in the leaves.

Plant Material

A voucher was deposited in the Herbarium SPF (Maximo, D. 1). This species was selected based on previous observations that identified nectaries occurring on the margins of its leaflets [3].
Structural Analyses
Shoot apices and young leaves were fixed in formalin-acetic acid-alcohol (FAA) solution for 24 h [62] and in buffered neutral formalin in 0.1 M sodium phosphate buffer (pH 7.0) for 48 h [63]. For the micromorphological study, mature nectaries were isolated, dehydrated in a graded ethanol series, dried by the critical point method, mounted on aluminium stubs, and sputter-coated with gold, with subsequent observation in a Zeiss Sigma VP scanning electron microscope (Carl Zeiss, Oberkochen, Germany).
For anatomical analyses, shoot apices and young leaves were isolated, dehydrated through a tertiary butyl alcohol series [62], embedded in Paraplast® (Leica Microsystems Inc., Heidelberg, Germany), and serial-sectioned at a 12 µm thickness on a Leica RM2145 rotary microtome. Longitudinal and transverse sections were stained with astra blue and safranin O [64] and the slides were mounted with Permount resin (Fisher Scientific, Pittsburgh, PA, USA). Observations and photographs were performed using a Leica DMLB light microscope.
Nectar Composition

Histochemical Tests
The main chemical classes of the constituents of the nectar were investigated using the following histochemical tests on the embedded nectaries: periodic acid-Schiff's (PAS) reaction for carbohydrates [65], ruthenium red for acidic mucilage [66], tannic acid and ferric chloride for mucilage [67], Lugol's reagent for starch [62], aniline blue black for proteins [68], Sudan black B and Sudan IV for lipids [69], Nile blue for acidic and neutral lipids [70], copper acetate and rubeanic acid for fatty acids [71,72], ferric chloride and fixation in ferrous sulphate-formalin for phenolic compounds [62], and Dragendorff's [73] and Wagner's [74] reagents for alkaloids. Standard control procedures were carried out according to Demarco [23]. The autofluorescence of the secretion and suberized walls was also analysed under UV and blue light. All observations and photographs were performed using a Leica DMLB light microscope equipped with an HBO 100 W mercury vapor lamp, a blue light filter block (excitation filter BP 420-490, dichromatic mirror RKP 510, suppression filter LP 515) and a UV filter block (excitation filter BP 340-380, dichromatic mirror RKP 400, suppression filter LP 425).
Derivatization of the Sample and Identification of Compounds through GC-MS
The extrafloral nectaries of Urvillea were derivatized using methoxyamine hydrochloride dissolved in pyridine (28 µL, CAS 593-56-6, Sigma-Aldrich, St. Louis, MO, USA) for 2 h at 37 °C and N-methyl-N-(trimethylsilyl) trifluoroacetamide (48 µL, MSTFA, CAS 24589-78-4, Sigma-Aldrich) for 30 min at 37 °C [75]. The metabolites were analysed by GC-MS equipped with the HP-5MS column (Agilent, length 30 m, ID 250 µm, 0.25 µm film thickness, Santa Clara, CA, USA). The initial column temperature was adjusted to 70 °C for 5 min and ramped at 5 °C min−1 to a final temperature of 320 °C, which was maintained for 8 min, with a total run time of 58 min. The injection volume was 1 µL in splitless mode with helium as carrier gas at 1 mL min−1. The injector, ion source, and quadrupole temperatures were 300 °C, 200 °C, and 280 °C, respectively. MS detection was performed with electron ionization (EI) at 70 eV, working in full-scan acquisition mode ranging between 50-800 m/z at 2.66 scan s−1.
Compound identification was made by comparison of mass fragmentation using the NIST digital library spectra (v2.0, 2008), employing Match and R-Match comparison values above 900, as well as by the retention time and mass fragmentation pattern of commercial standards.
Crude Extracts Preparation and HPLC-DAD Analysis of Methanol Extracts
For phenolic extraction, fresh fragments of leaves and nectaries were extracted with 1 mL of HPLC-grade methanol for 15 min in an ultrasonic bath at room temperature, followed by the collection of the supernatants by centrifugation (13,000× g, 4 min, 25 °C). The obtained solutions were filtered using a 0.45 µm syringe filter.
Phenolic compounds were analysed using HPLC-DAD (model: 1260 system, Agilent Technologies, Santa Clara, CA, USA) equipped with an autosampler, using a Zorbax Eclipse Plus C18 column (150 × 4.6 mm, 3.5 µm particle diameter) at 45 °C, with a flow rate of 1 mL min−1 and an injection volume of 3 µL. The detection wavelengths were registered at 254, 280, and 352 nm. The chromatographic method was constituted by a gradient of mixtures of solvents A (water acidified with 0.1% acetic acid) and B (acetonitrile
Conclusions
In this study, we described the structure of extrafloral nectaries in Sapindaceae for the first time. Urvillea ulmacea has nectaries of the ocelli type, composed of a multiple secretory epidermis positioned on a layer of transfer cells and vascularized by phloem and xylem. This nectary stands out due to the irregular arrangement of the secretory cells, which split, forming deep slits where the nectar is temporarily stored. The nectar is mainly composed of sucrose, fructose, xylitol and glucose, in addition to amino acids and other minor compounds, such as lipids and phenolics. After the release of nectar onto the gland surface, it is gathered by many ants, which continuously forage the plant. This novel description reveals a new type of nectary that differs structurally from all others in Sapindales and displays a high concentration of amino acids, which can be related to a more effective attraction of ants. The redundancy in the defensive secretory systems of Urvillea may be related to the low predation rate of its leaves, which contain extrafloral nectaries, laticifers and phenolic idioblasts. Further studies of nectaries in Sapindaceae are needed to verify whether the peculiar characteristics observed in this study are typical of the genus or the family.
Figure 2. Micromorphology of the extrafloral nectaries of Urvillea ulmacea Kunth. Scanning electron microscopy. (A) EFN on the abaxial surface of the tooth apex. (B) General view of the nectary (ocellus). (C) Nectariferous epidermis composed of irregularly arranged cells. (D) Detail of the secretory epidermis. Note the smooth cuticle and the absence of stomata.
Figure 3. Structure of the extrafloral nectaries of Urvillea ulmacea Kunth. Light microscopy. (A-C,E) Bright field. (A) General view of the nectary on the abaxial surface of the leaflet. (B) Nectariferous tissue situated on a layer of transfer cells. (C) Detail of the nectary. Note the secretory cells full of secretion, and the transfer cells containing phenolic compounds. (D) Developing nectary. Note numerous cells in different phases of cell division. Section stained with astra blue and safranin observed under blue light. (E) Vascular bundle of the nectary mainly composed of phloem. (F) Xylem vessels and crystals beneath the nectary evidenced by polarized light. (L = laticifer; NT = nectariferous tissue; PI = phenolic idioblast; TL = transfer layer; VB = vascular bundle; Xy = xylem).
Figure 4. Histochemical analysis of the extrafloral nectaries of Urvillea ulmacea Kunth. (A,B) Fluorescence microscopy. (C-F) Bright field. (A,B) Suberized walls of the transfer cells evidenced by yellow fluorescence under blue light (A) and blue fluorescence under UV (B). Sections stained with astra blue and safranin. (C) Suberin of the transfer cells stained with Sudan black B. (D) Carbohydrates detected by the PAS reaction. (E) Proteins and/or amino acids identified using aniline blue black. (F) Phenolic compounds evidenced by ferric chloride. (NT = nectariferous tissue; TL = transfer layer).
Figure 5. Chromatogram of compounds detected in the nectary of Urvillea ulmacea Kunth. Compounds exclusively detected in the nectar (arrows) are described in Table 2.
Figure 6. Chromatograms detected at 352 nm from the methanol extracts of nectaries (above) and leaves (below) of Urvillea ulmacea Kunth.
Table 1. Histochemical tests applied to extrafloral nectaries of Urvillea ulmacea Kunth to identify the chemical classes of metabolites that compose the nectar.
Table 2. Identification of compounds, chemical classes, and their respective relative percentages found in the nectar of Urvillea ulmacea Kunth.
Using virtual robot-mediated play activities to assess cognitive skills
Purpose: To evaluate the feasibility of using virtual robot-mediated play activities to assess cognitive skills. Method: Children with and without disabilities utilized both a physical robot and a matching virtual robot to perform the same play activities. The activities were designed such that successfully performing them is an indication of understanding of the underlying cognitive skills. Results: Participants' performance with both robots was similar when evaluated by the success rates in each of the activities. Session video analysis encompassing participants' behavioral, interaction and communication aspects revealed differences in sustained attention, visuospatial and temporal perception, and self-regulation, favoring the virtual robot. Conclusions: The study shows that virtual robots are a viable alternative to the use of physical robots for assessing children's cognitive skills, with the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support. Implications for Rehabilitation
Introduction
The assessment of cognitive understanding of children with neuromotor disabilities raises concerns. In fact, cognitive ability may be confounded by the nature of the physical disability itself [1]. Traditional tests rely on motor or verbal responses that children might not be able to provide due to their disability. Adapted tests (e.g. the Pictorial Test of Intelligence (PTI) [2]), where children only need to choose from a set of possible answers through a pointing method, for instance eye gaze, are available, but these require sustained attention on questions that may be meaningless and uninteresting to the children. As a consequence, children's cognitive abilities might be underestimated, leading to reduced expectations on the part of parents, teachers and clinicians. Reduced expectations can lead to providing fewer opportunities for children to develop and demonstrate their cognitive skills, thus entering a vicious cycle that prevents children from developing to their full potential [3].
The use of robot-mediated activities has been proposed as an alternative method for assessing cognitive skills of children with disabilities [4]. In these applications, robots are used as augmentative manipulation tools to perform play activities that elicit particular cognitive skills. The performance of a child with disabilities can then be compared to the performance of typically-developing children when executing the same robot-mediated play activities as a proxy measure of his or her cognitive development. The main advantage over other cognitive assessment tests is that children are playing while their cognitive skills are being tested, thus increasing children's motivation to perform the activities. Robots can be controlled using different access methods (e.g. single switches or a joystick), making them accessible to potentially every child. Additionally, robots can be programmed to perform complex tasks upon a simple command from the child. For example, a robot can be programmed to go to a particular location and load food to be given to an animal upon a single switch press. This feature allows the design of activities that appeal to the children and that do not require high-level cognitive skills (in the example, only cause and effect needs to be understood to press the switch that makes the robot go and load the animal's food). Or, robots can require more input from the child in order to accomplish tasks. For example, the robot could move forward, backward, left or right based on which of four switches the child presses. In this way children are challenged to use more skills. Lego® Mindstorms® robots have been used in the work reported in Cook et al. [4]. These are relatively inexpensive robots (about $300) that are perceived by children as toys. Different robots can be built with the Lego parts and robot programming is facilitated through graphical programming software. For an in-depth discussion on the characteristics that a robot should have for being used as an augmentative manipulation tool for play and academic activities, please refer to Cook et al. [5].
From several studies (a survey of those studies can be found in Cook et al. [4]) it is now clear that the use of robotic systems can provide a window into children's cognitive skills, avoiding dependence on standardized test administration [4]. Children as young as 8 months are able to use a robot as an augmentative manipulation tool to perform different activities [6]. Children's performance on robot-mediated activities designed to elicit particular cognitive skills varies with cognitive age¹ [4,7,8], thus showing the potential of the method to discriminate children by cognitive age.
Potential barriers to the use of robot-mediated activities to assess cognitive skills are: Lego robots are still expensive in some contexts (e.g. under-resourced countries), they are not very reliable (e.g. when programmed to do a right angle turn they may not turn exactly 90°, thus compromising the direction of subsequent movements), and they require technical skills for assembling and troubleshooting. Virtual robots have the potential of overcoming these limitations. A software package including different activities to be performed using virtual robots that could have different visual features to match the child's preferences could be developed and easily shared. Standard assistive technologies for computer access [9] could be used to make the software accessible for all. But in order to take advantage of these benefits, it is necessary to establish the equivalence between the virtual and the physical robots when used in activities to assess children's cognitive skills.
In this paper, a study is reported aiming at comparing the experiences of children with and without disabilities using a physical robot and a matching virtual robot to perform the same tasks respectively in a physical and in an on-screen simulated environment. The objectives of the study were: (1) To determine if the tasks successfully completed by typically-developing children using physical robots are also successfully completed using computer simulations of robots. (2) To determine if the tasks successfully completed using physical robots by children with disabilities are also successfully completed using computer simulations of robots. (3) The potential for the tasks to discriminate children by cognitive age.
Robot-mediated activities and underlying cognitive skills
Robot-mediated activities to assess cognitive skills were proposed in Cook et al. [7]. This was the basis for the robot-mediated tasks designed for the study with typically-developing children reported in Poletz et al. [8]. The same tasks were utilized in the study described in this paper, building on the acquired experience. The tasks are briefly explained here in the sequence in which they were presented to the children in our study. Names for the tasks reflect the major cognitive skill they aim to elicit.²

Task 1 - Cause and effect: The child is required to press and hold a switch to make the robot drive forward to knock over a stack of blocks (Figure 1).

Figure 1. Task 1 - cause and effect.

Task 2 - Inhibition: The child is required to drive the robot forward (by pressing and holding the same switch as for task 1), stop beside a pile of blocks (by releasing the switch when the robot reaches the pile) where blocks are loaded onto the robot, and then drive the robot to the location at which they were stacked for the first task (by pressing and holding the switch again to make the robot move to the end of the table and by releasing the switch at the end of the table to make the robot stop). This is illustrated in Figure 2.

¹ In this paper, cognitive age refers to the age equivalent provided by a standardized cognitive and developmental abilities test (e.g. the PTI [2]).
² In previous publications we have used different terms for cognitive skills: task 1 (causality), task 2 (negation), and task 3A (binary relations).
Task 3A - Laterality: In task 3 two stacks of blocks are located one to the left and one to the right of the original stack, and the robot is placed at the end of the table between these two new stacks. In task 3A the child is required to choose a stack of blocks to knock over and then turn the robot towards that stack using one of the two new switches (Figure 3). Each of these new switches makes the robot turn 90° left or right upon a switch hit (pressing and holding the switch has exactly the same effect of turning 90°; for additional turns it is necessary to release the switch and hit it again).
Task 3B -Sequencing: After turning the robot in 3A in the appropriate direction, the child is required to press and hold the forward switch to knock over the desired stack of blocks ( Figure 4).
These robot-mediated activities were designed to be play activities that are able to discriminate children by their cognitive age, meaning that being able to perform each of the tasks is an indication of cognitive understanding of a set of skills. Even though several other skills are required to successfully complete the proposed robot tasks, each task described above aims to elicit a particular major cognitive skill that can provide information regarding the child's current cognitive understanding. The following is a list of these major cognitive skills and the operational definition under which they have been explored in this study. These skills are presented with reference to childhood development and tool-use literature. Tool use refers to the ability of the child to use an object to act on the environment to accomplish a goal [10] and develops within the second year of life [10][11][12][13]. Tasks 1-3 require that children understand that they can use the robot as an augmentative manipulation tool to interact with the environment, namely with the blocks.
Cause and effect
In order for a child to understand a causal relation between objects or events, the child must be able to make causal inferences. A causal inference is the ability to detect a difference between initial and final states in an event, and infer a cause as a result of tracking this event over time [14]. In robot task 1, the child is not given explicit information about the switch controlling the robot and the subsequent relation between them (i.e. a continued press of the switch causes the robot to keep moving). In order to successfully carry out the task, the child needs to be able to identify how the robot moves while the switch is being pressed, therefore inferring the cause and effect relation. This kind of very simple cause and effect relationship was understood by children with a cognitive age of 8 months in controlling a robot arm to bring a cookie closer [6]. Causal knowledge changes with age and children are able to progressively understand more complex causal relations as they get older and are exposed to different objects and interactions [15]. Gopnik et al. [16] found that two-, three- and four-year-olds were able to make causal judgments when exposed to a new machine, a "blicker detector", but only three- and four-year-olds could use this information to make the machine stop when requested. The robot in this study was controlled via infrared signals, hence there was no direct contact between the switch and the robot, potentially making the task more complex than in Stanger and Cook [6].
Inhibition
Inhibition or inhibitory control is the ability of the child to actively inhibit a predominant response in order to achieve a certain goal [17], especially when this response has been previously successful and as a result a positive reinforcement has been associated with it [18]. In the second task, the child is required to release the switch in order to stop the robot at specific places. The child is thus required to inhibit the response that was previously successful in completing task 1, i.e. continued switch activation to complete the task. Inhibition emerges towards the end of the first year of life and matures rapidly in the toddler and pre-school years, allowing children to progressively regulate their behavior [17].
Laterality
The ability of the child to orient in terms of left and right depends on right-left discrimination and recognition [19]. Right-left discrimination can be defined as the ability of the child to differentiate between two identical symmetrical stimuli shown simultaneously in relation to the body sagittal symmetry [20]. This ability also allows the child to compare objects regarding their location in space. For example, when applied to objects or images, this ability allows the child to differentiate the object as being left or right and compare it with an image previously seen or use its location to make a choice [19]. In task 3A, the child faces two identical options (one on the left and the other on the right) that he/she needs to differentiate in order to choose one or the other. Then, the child is required to recognize which of the two symmetrical identical switches relates to the chosen side. Some of these aspects are mastered by the fourth year, but the appropriate use of the labels "right" and "left" can continue up to the eleventh year [19]. The robot task does not require the child to label the side correctly but rather discriminate one side and then relate it to the same side switch, therefore making the task more appropriate for pre-schoolers.
Sequencing
A child's learning process is largely underlined by the ability to segment actions into sequences and determine which small sequences of action are necessary or useful for a particular outcome and why [21]. The ability to understand and perform a sequence of actions to achieve a goal has been related to imitation and critical dimensions such as cultural and social knowledge [22]. Children around the end of their second year of life can plan sequences prospectively to achieve a goal even when there is no contact visually available between the tool and the target [10].
Three-year-old children were able to complete a two-step sequencing task, but not a three-step sequence in Stanger and Cook [6]. In task 3B the child is required to plan and perform a certain sequence of switch presses in order to accomplish the goal of knocking over the desired stack of blocks.
Cognitive skills mature with age and thus it is not possible to precisely state the ages at which each skill is attained. However, it is possible to indicate the age intervals at which cognitive skills are typically acquired. This is done in Table 1, from which one can infer the potential of the proposed tasks to discriminate children by cognitive age. The degree to which such discrimination is possible will also be analyzed using data gathered from the study.
Methods
A convenience sample of 20 typically-developing children and nine children with cerebral palsy was obtained at day cares and institutions that support children with cerebral palsy within greater Lisbon (Portugal). Children were recruited in three cognitive age brackets: 33-39, 45-51 and 57-63 months. Table 2 shows the distribution of participants by cognitive age group. Cognitive age was assessed through the Pictorial Test of Intelligence (PTI) [2]. The PTI is an adapted test of general intelligence comprising three subtests: verbal abstractions, form discrimination and quantitative concepts. Scores in each subtest are combined to provide a global score that gives an age equivalent for the subject. Having participants in relatively narrow cognitive age brackets allowed for the evaluation of the discriminating potential of the different robot-mediated tasks by comparing the average performance of typically-developing children in each cognitive age group when executing the same task. A video analysis was also conducted in order to compare the utilization of the two robots beyond the task success rates analysis. To increase the sample size, in the video analysis of the participants with cerebral palsy four additional participants were added that were not considered in the task success rates analysis since their cognitive age did not lie in any of the defined cognitive age brackets: one child 40 months old, two children 41 months old and one child 43 months old (n = 13 in total). The necessary institutional ethics board approval was obtained. Informed consents were obtained from the parents for each child.
Participants were seen in two sessions approximately one week apart. Sessions took place in quiet rooms at the day cares or at the institutions participants were recruited from and were videotaped for subsequent analysis. In each of the sessions children were required to perform the robot-mediated tasks 1-3B using a Lego® Mindstorms® TriBot physical robot and a matching virtual robot, with a 20 minutes recess between robots. Robot order was randomized ensuring a balanced number of participants starting with each robot and it was changed for the second session with each child. Tasks with both robots were presented to participants following the protocol in Table 3. In this study, task 1 played the role of a familiarization task as children with cognitive ages of 33 months and more should all master cause and effect. The protocol for presenting this task to the participants thus included stages of modeling and exemplification since failure in executing task 1 reveals a resistance by the child to use the robot (e.g. from being afraid of using the robot or due to shyness). Modeling and exemplification stages were not included for the other tasks since the goal of the study was to evaluate participants' cognitive skills using the robot-mediated tasks, and not to teach those cognitive skills.
The virtual robot was developed using Microsoft® Robotics Developer Studio 3 (MS-RDS). MS-RDS is a freely available programming environment for building robotics applications. It includes a visual simulation environment (VSE) to simulate and test robotic applications using a 3D physics-based simulation tool, thus allowing for the creation of robotic applications without the hardware. Moreover, robot control programs can be used either with the physical or the corresponding virtual robot. User-defined 3D virtual environments can be designed using VSE, and a scenario with a table inside a classroom with piles of blocks on it was created mimicking the physical scenario. Physical properties and sounds were added such that the behavior of the virtual objects matched the behavior of the physical objects as closely as possible. For more details on the development of the virtual robot please refer to Encarnação et al. [23,24]. Figure 5 shows the experimental setups with the physical and the virtual robots. Participants controlled both robots through the same set of switches. The scenarios and the activities were similar in both cases, the only difference being that with the virtual robot action took place on a computer screen with virtual objects instead of on the table with physical objects, as with the physical robot.
Participants with cerebral palsy were all able to access the three single switches used for robot control (they were all in levels 1 or 2 of the Gross Motor Function Classification System [25]). Success rates in each task were registered by the investigator through a command console which controlled the robots and the switch inputs.
Videos from the experimental sessions were coded with (i) behavioral markers: behavioral changes (out of context laughing or irritation), child rejects the activity, fatigue, stereotypes (repetitive movements or sounds) and echolalia; (ii) interaction and communication markers: search for support, additional guidance, child's comments (referring to the activity), verbal and non-verbal expressions of displeasure and of pleasantness; and (iii) cognitive construct markers: sustained attention, association of ideas, visuospatial and temporal perception, eye-hand coordination and self-regulation/impulsivity. Table 8 describes how these markers were operationalized in the context of the proposed tasks. For the typically-developing participants' video analysis, only the cognitive construct markers, except for eye-hand coordination, were considered in the robot comparison. This is because the main goal of the study was to assess children's cognitive skills through the use of robot-mediated tasks, while behavioral, interaction and communication, and eye-hand coordination aspects were not expected to be critical for this population. The use of behavioral analysis such as this has also been reported by Cook et al. [26], and by Dautenhahn & Werry [27], who called them "micro-behaviors".
Table 3. Protocol for presenting the robot-mediated tasks (excerpt; prompting progresses from level A to level D).

A - Task presentation
Task 1: Give the child 15 seconds before prompting. If the task is completed: repeat the task (2nd trial) and, if completed again, move to 2A. If in need of further prompting, move to 1B.
Task 2: "Ok. Now how about you help me build the stack of blocks? Can you help me take these blocks from here (pointing) to there (pointing)? Ready?" "Ok. So now we're going to drive the truck, and we need to stop here (pointing to where the blocks are), so I can put these blocks on the truck." If the child stops the truck at the correct place (beside the blocks), place the blocks on the truck and provide the child with the instruction for the next step: "Well done! Now let's drop off the blocks at the end here (point to exactly where they should stop)." Give the child 15 seconds before prompting. If in need of further prompting, move to 2B.
Task 3: "Now we have a pile of blocks here (pointing) and another over there (pointing). Which would you like to knock over first?" Make sure the choice is clear for both the child and the investigator. "OK! Now you have three switches (pointing to the switches so as to ensure the child is aware of them). Go ahead!" Give the child 15 seconds before prompting. If in need of further prompting, move to 3B.

B - Prompting by clarifying steps and/or explicitly referring to the switches
Task 1: "Do you think this button will do something?" Give the child 15 seconds before prompting. If the task is completed: repeat the task (2nd trial) and, if completed again, move to 2A. If in need of further prompting, move to 1C.
Task 2: If the task is completed: repeat the task (2nd trial) and, if completed again, move to 3A. If the task is not completed, stop, and move on to 3A.
Task 3: 1. If the child does not decide upon a switch to turn the truck or presses the wrong button (thus, not demonstrating binary choice): place the truck in the starting position of this task and say: "Remember you have three buttons you can use (pointing to all three, so as to ensure that by looking, the child becomes more aware of all three buttons)." 2. If the child succeeds in the binary logic task (turns in the right direction), but persists with the same switch for the sequencing task: place the truck in the starting position of this task and say: "Well done! You turned the right way. Now, remember you have three buttons you can use (pointing to all three, so as to ensure that by looking, the child becomes more aware of all three buttons)."

C - Modeling
If the child does not press the switch, the investigator should model the task by pressing the switch and then have the child try the task. If the task is completed: repeat the task (2nd trial) and, if completed again, move to 2A. If in need of further prompting, move to 1D.

D - Exemplification using hand-over-hand support
If the child continues not pressing the switch, exemplify using the child's hand to press the switch. If there is no collaboration on the child's part, end the activity. After an interval, attempt with the other robot.
Results -typically-developing children
Typically-developing participants' success rates in tasks 1-3B with both robots are plotted in Figure 6. The three cognitive age groups are identified at the three vertical stripes corresponding to the age brackets 33-39, 45-51 and 57-63 months. Participants' success rates between 0% and 100% in each of the four tasks are presented on the same plot. A vertical comparison informs on the success rates for the different activities for a given cognitive age, while a horizontal comparison provides a task success rate analysis across ages. The two plots in Figure 6 refer to the physical (top) and the virtual (bottom) robot. Despite the convenience sample, the small sample size and the unbalanced design (different sample sizes in the groups), a statistical analysis was conducted to obtain indicative answers to the following questions: (i) Are success rates influenced by the robot? (ii) Are success rates influenced by the participant's cognitive age? and (iii) Are success rates influenced by the task? A three-way main-effects repeated measures ANOVA [28] to assess the dependency of the success rates on the independent variables robot, cognitive age group and task was conducted using SPSS® (Chicago, IL). In this analysis, task 1 success rates were not considered since, as expected, all participants had 100% success rates in this familiarization task. The within-subjects variables were the robot (physical or virtual) and the task (2, 3A or 3B) and the between-subjects variable was the cognitive age group (3, 4 or 5 years old). The p-values obtained are listed in Table 4, showing that the factors cognitive age and task influence the success rates, while the robot factor does not. The p-values shown for the within-subjects effects (robot and task) hold whether or not sphericity is assumed [28]. The p-value computed for the between-subjects effect (cognitive age) assumes equality of error variances, while Levene's test [28] does not support this assumption for the success rates in task 3A using the virtual robot (p = 0.022), thus it should be interpreted with caution.
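For readers who want to reproduce this kind of repeated-measures analysis outside SPSS, a minimal sketch using Python and statsmodels is given below. It is only illustrative: the file name and column names are assumptions, only the two within-subject factors (robot and task) are modelled, and the between-subjects age factor used in the study would require a mixed design that is not shown here.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per participant x robot x task,
# with a numeric success-rate column (hypothetical file and columns).
df = pd.read_csv("success_rates_long.csv")  # columns: subject, robot, task, success

# Repeated-measures ANOVA on the two within-subject factors.
res = AnovaRM(df, depvar="success", subject="subject",
              within=["robot", "task"]).fit()
print(res)  # F-tests for robot, task and their interaction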
In order to refine the analysis, a-posteriori multiple comparisons were conducted, computing 95% confidence intervals for the estimated marginal means with the Bonferroni adjustment [28]. Table 5 shows the confidence intervals obtained. If a confidence interval contains the null value, one cannot conclude that the group means are different at the 95% confidence level, and the interval amplitude is indicative of the confidence one can have that the group means are in fact equal [29]. From Table 5 one can thus conclude that the success rates are similar for the two robots, that the means across cognitive age groups only achieved significant differences between the three and the five years groups, and that the average success rates on tasks 2 and 3B and on tasks 3A and 3B were significantly different.
Regarding the video analysis, Wilcoxon matched-pairs signed ranks tests [28] were used to compare the number of occurrences of the cognitive construct markers sustained attention, association of ideas, visuospatial and temporal perception, and self-regulation/impulsivity in the two environments. Significant differences for the markers sustained attention (better in the virtual environment, p = 0.002), visuospatial and temporal perception (better in the virtual environment, p = 0.014) and self-regulation/impulsivity (also better in the virtual environment, p = 0.007) were found.

Results - children with cerebral palsy

Figure 7 shows the success rates of participants with cerebral palsy when performing tasks 1-3B using the physical (top) and the virtual (bottom) robots. Since the number of participants in each cognitive age group does not allow statistical assessment of the influence of cognitive age on the success rates, as was possible with the typically-developing sample, a two-way main-effects repeated measures ANOVA analysis was performed having the robot (physical or virtual) and the task (2, 3A or 3B) as the within-subject variables. The p-values obtained are listed in Table 6. A significant effect of the task factor is observed, while the robot factor had no significant effect on the success rates.
A-posteriori multiple comparison 95% confidence intervals with the Bonferroni adjustment are shown in Table 7. Again, there is evidence that the robot has no effect on the success rates, and the differences between the success rates in task 3B were significantly different from the success rates in tasks 2 and 3A.
In the video analysis, Wilcoxon matched-pairs signed ranks tests [28] for all the markers in Table 8 revealed only one significant difference between utilization of the two robots for the marker visuospatial and temporal perception (one-tailed p-value of 0.000, better with the virtual robot).
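The paired, non-parametric marker comparisons reported above can be reproduced with SciPy; in the sketch below the per-participant counts are hypothetical and only illustrate the form of the test.

from scipy.stats import wilcoxon

# Occurrences of one video-coded marker per participant,
# physical vs. virtual robot (hypothetical counts for 13 participants).
physical = [5, 3, 6, 4, 7, 2, 5, 4, 6, 3, 5, 4, 6]
virtual  = [2, 1, 3, 2, 4, 1, 2, 3, 3, 1, 2, 2, 3]

# One-tailed matched-pairs signed-rank test
# (more marker occurrences with the physical robot).
stat, p = wilcoxon(physical, virtual, alternative="greater")
print(stat, p)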
Discussion
Results show that participants' performance, assessed by the success rates in each task as well as by the video analysis, is similar or better with the virtual robot when compared to a matching physical robot. For the typically-developing participants, the video analysis conducted showed significant differences for the markers visuospatial and temporal perception, sustained attention and self-regulation/impulsivity, with children performing better with the virtual robot. Visuospatial and temporal perception might be enhanced by the on-screen view of the virtual play environment, while the children's perspective of the physical environment may induce parallax errors (when objects appear in a different position due to the line of sight). The virtual environment has fewer distracting factors, which can promote sustained attention and self-regulation. However, significant differences were found only for the visuospatial and temporal perception marker for the participants with cerebral palsy. In spite of the fact that children's visual acuity was not assessed in this study, it is important to take into account that up to 70% of children with cerebral palsy have visual acuity problems which may affect perception [30].
Another important consideration is that 25% of children with cerebral palsy have behavioral and psychosocial problems [30] which can interfere with self-regulation. This study showed no significant differences between the physical and virtual robots regarding self-regulation. It was measured by observing if participants waited until the task explanation ended or the blocks were loaded onto the robot before performing an action. However, if only waiting until the blocks were loaded in task 2 was considered, significant differences would be found (p = 0.031) for the participants with cerebral palsy. This might be a consequence of the fact that, with the physical robot, four blocks were loaded, two at a time, and participants were not informed of how many blocks would be loaded and thus they might have thought that they should take the two first blocks right away to the end of the table. With the virtual robot, loading of four blocks was done instantaneously.
Table 8. Operationalization of the video-coding markers (excerpt).
Child's comments: if the participant made a verbal comment (referring to the activity).
Verbal expression of pleasantness: self-explanatory.
Non-verbal expression of pleasantness: self-explanatory.
Verbal expression of displeasure: self-explanatory.
Non-verbal expression of displeasure: self-explanatory.
Cognitive construct markers:
Sustained attention: number of times the participant looked away from the task stimuli for more than three seconds.
Association of ideas: intentionally looked at the switch after the task explanation.
Visuospatial and temporal perception: stopped right beside the stack of blocks.
Eye-hand coordination: pressed the switch after looking at it.
Self-regulation/Impulsivity: pressed switches before or during the instruction or while the researcher placed the blocks on the robot.

The experimental data supports that children with cognitive ages above three years old are able to use a virtual robot to perform play activities. Task 1, which mainly requires the understanding of cause and effect, something that typically-developing children start mastering at approximately 8 months of age [26], had 100% success rates both with the physical and the virtual robot. Success rates in the other tasks varied with cognitive age, as predicted. Though success rates in task 2 were not significantly different from success rates in task 3A, a visual inspection of Figures 6 and 7 shows that there are performance differences in these activities for the three and four years cognitive age groups. Having participants in a continuum of cognitive ages, instead of only in relatively narrow age brackets, could have helped to capture the maturation of the cognitive skills.
The cognitive skills that can be potentially mapped through the use of these tasks allow children with disabilities to reveal understanding of important concepts often associated with more complex global skills such as problem-solving. Problem-solving is a sequence of cognitive and perceptual actions and processes required to achieve a certain goal [10]. It includes acting prospectively, monitoring problems in performance that need to be solved in order to achieve the goal and changing strategies that are judged to be inefficient for achieving success. All these skills can be assessed and adapted as needed when using robot tasks. Another part of problem-solving is to use spatial concepts to control the robot in multiple dimensions. The successful completion of the robot tasks requires that the child is able to transition from an egocentric frame of reference, i.e. the child needs to be able to place him/herself on the robot's frame of reference in order to be able to control the robot since the switches make the robot move forward, left or right relatively to the robot's frame of reference.
Conclusions
The paper reports a study where typically-developing children and children with cerebral palsy utilized a physical robot and a matching virtual robot as tools to perform play activities. One basic conclusion from the study is that children with cognitive ages of three years and above are able to use a virtual robot to perform play activities in a simulated environment on a computer screen, as previous studies have shown they were able to do with physical robots. Additionally, the study revealed that the performance was similar for both the physical and the virtual robot. The proposed robot-mediated activities were designed to require increasingly complex cognitive skills such that success rates in each activity would be an indicator of children's cognitive understanding. The study results show that participants' performance varied with age, thus validating this proxy measure of cognitive development within the context of the skills associated with tasks 1, 2, 3A and 3B.
Limitations of the study include the small sample size and the limited number of cognitive skills encompassed in the tasks. Other aspects of virtual versus physical robots should also be addressed: would teachers' and parents' perceptions that the child is more skilled after seeing him or her use a virtual robot be the same as they have been with physical robots [7]?
The use of physical robots by children with disabilities in classroom contexts, for example, has shown to promote children's integration [31]. Will that be the case for virtual robots? Would virtual robots motivate children to participate like the physical robots did? Are children's play experiences similar with both robots? Virtual robots cannot be used to explore children's own toys or their own room or house, or any real physical environment unless those objects and environments are included in the virtual scenarios. It is also necessary to evaluate virtual robots use by children with severe motor impairments to assess if the absence of manipulation experiences or the need for different access methods (e.g. scanning) influences the results. Furthermore, the economic value of using virtual robots instead of physical robots may not have a great impact in countries where personal computers are not widespread (like in Colombia).
However, the study opens the doors to the investigation on the use of virtual robots as augmentative manipulation tools for cognitive development through the participation in play and academic activities.
VoIP Performance Evaluation and Capacity Estimation Using different QoS Mechanisms
Data networks and new mobile networks (4G and 5G) deploy packet switching in their core networks. Due to this architecture, VoIP protocols are widely deployed in these environments. VoIP capacity and quality must be considered during VoIP protocol implementations. The aim of QoS mechanisms is to satisfy voice traffic requirements; this is achieved by using many tools such as congestion management utilities, congestion avoidance utilities, and link fragmentation tools. Each tool has an impact on voice performance. Queuing is a very important mechanism in the traffic management system. Certain routers in data networks must deploy QoS tools that control how different packets are temporarily buffered until transmission on the interface. This paper studies the effect of different QoS tools on VoIP application performance and capacity via OPNET simulation. Also, the maximum VoIP capacity that gives acceptable quality is investigated.
The bandwidth needed for one VoIP call in one direction is 64 kbps. The G.711 voice codec packs 20 ms of voice per packet, so 50 packets are transmitted every second. Each packet consists of 160 samples (160 bytes), which gives 8000 samples/sec. Each voice packet is transmitted in one Ethernet frame, and additional headers are added: RTP (layer 5), UDP (layer 4), IP (layer 3), and Ethernet (layer 2), with sizes of 12, 8, 20, and 26 bytes, respectively. Therefore, 226 bytes need to be sent 50 times per second, i.e. 90.4 kb/s. The required bandwidth for a single call in both directions is 180.8 kbps.
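The per-call bandwidth can be checked with a short calculation; the Python sketch below simply restates the arithmetic of the paragraph above.

# Per-call bandwidth for G.711 voice carried over Ethernet.
payload_bytes = 160                # 20 ms of G.711 speech at 8000 samples/s
headers_bytes = 12 + 8 + 20 + 26   # RTP + UDP + IP + Ethernet headers
packets_per_second = 50            # one packet every 20 ms

frame_bytes = payload_bytes + headers_bytes              # 226 bytes per frame
one_way_kbps = frame_bytes * 8 * packets_per_second / 1000.0
print(one_way_kbps)       # 90.4 kb/s in one direction
print(2 * one_way_kbps)   # 180.8 kb/s for a call in both directions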
To simulate voice traffic in OPNET, the predefined voice application can be used, and different voice codecs can be selected. The maximum number of calls the simulated network can support while preserving voice requirements can be extracted by adding voice calls incrementally to the simulated network while tracking the delay/jitter thresholds. When any of these thresholds is reached, the maximum number of calls is known.
The voice call quality can be measured by the Mean Opinion Score (MOS), which is based on a scale of 5 to 1, as presented in Table 1. Jitter is the variation in arrival time of consecutive VoIP packets, calculated over a specific interval of time. Note that device buffers can overfill, triggering packet drops [6]. ETE delay is extracted by finding the delay from the transmitter to the receiver (including all delay sources). ITU-T provides the limits on jitter and delay for different levels of call quality, as presented in Table 2.

Figure 1. FIFO.
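As an illustration of how jitter can be computed from packet timestamps, a small Python sketch using the RFC 3550 running estimator is shown below; the timestamps are hypothetical, and OPNET's internal jitter statistic may be computed differently.

# RFC 3550-style jitter estimate from send/arrival timestamps (seconds).
sent     = [0.000, 0.020, 0.040, 0.060, 0.080]   # hypothetical values
arrivals = [0.050, 0.071, 0.093, 0.110, 0.132]

jitter = 0.0
for i in range(1, len(arrivals)):
    # Difference in transit time between consecutive packets.
    d = (arrivals[i] - sent[i]) - (arrivals[i - 1] - sent[i - 1])
    jitter += (abs(d) - jitter) / 16.0   # exponential smoothing per RFC 3550

print(jitter * 1000.0, "ms")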
Priority Queuing (PQ)
PQ queuing is based on packet priority (IPP or DSCP): packets with the highest priority are transmitted first from the output interface, followed by lower-priority data packets, as shown in Fig. 2 [9]. PQ has multiple queues assigned to the output network port, and each queue (buffer) has a certain priority level. PQ has four default queues with default lengths: high (20 packets), medium (40 packets), normal (60 packets) and low priority (80 packets) [2]. If packets arrive in the high queue, PQ interrupts whatever it is doing in order to send those packets. When a packet is transmitted, the higher-priority queues (buffers) on that port are scanned for new packets: the highest-priority queue is scanned first, then the medium-priority queue, and so on.
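A minimal sketch of the strict-priority service order described above is given below (Python); the queue limits follow the defaults quoted in the text, and the scheduler is deliberately simplified.

from collections import deque

# Default PQ queues and their limits (in packets), as quoted above.
limits = {"high": 20, "medium": 40, "normal": 60, "low": 80}
queues = {name: deque() for name in limits}
scan_order = ["high", "medium", "normal", "low"]

def enqueue(packet, priority):
    queue = queues[priority]
    if len(queue) >= limits[priority]:
        return False            # tail drop when the queue is full
    queue.append(packet)
    return True

def dequeue():
    # After every transmission the queues are rescanned, highest priority first.
    for name in scan_order:
        if queues[name]:
            return queues[name].popleft()
    return None                 # nothing waiting to be sent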
Weighted-Fair Queuing (WFQ)
The WFQ technique achieves QoS by assigning a fair, dedicated share of bandwidth to different traffic flows in order to control delay, jitter, and data loss [9]. The main idea of fair queuing is to assign a dedicated queue (buffer) to each current flow; the router then serves these queues in a round-robin manner. WFQ assigns a weight to every queue (flow), and this weight controls the percentage of the link's bandwidth each flow will use [10].
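The bandwidth share implied by the weights can be computed directly; in the sketch below (Python) the flow names and weights are illustrative and are not taken from the simulation.

# Under WFQ, each backlogged flow receives bandwidth in proportion to its weight.
link_kbps = 1544                                 # PPP DS1 link (about 1.5 Mb/s)
weights = {"voice": 6, "video": 4, "ftp": 1}     # example weights only

total_weight = sum(weights.values())
shares = {flow: link_kbps * w / total_weight for flow, w in weights.items()}
print(shares)   # voice gets 6/11 of the link when all flows are backlogged;
                # bandwidth unused by idle flows is redistributed to active ones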
WFQ is the best known and the most studied queuing discipline. WFQ, as in Fig. 3, assigns a separate dedicated queue for each data flow and applies weights to determine how much bandwidth each packet flow is allowed with respect to the others. The maximum queue length is determined by the length-limit [3]; when a queue grows beyond the length-limit, packets start to be dropped [4]. When there are many TCP sessions, traffic is likely to exceed the buffer length-limit because of the bursty nature of packet networks. If the router drops all traffic that exceeds the length-limit of the buffer queue (tail-drop behaviour), many TCP sessions simultaneously go into slow start, which creates a condition called global synchronization and results in significant link underutilization. This is solved by applying congestion avoidance techniques such as Random Early Detection (RED) or Weighted Random Early Detection (WRED). These techniques prevent tail-drop behaviour by starting to drop traffic randomly once the queue reaches a certain threshold [11].
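The random dropping that RED (and, per traffic class, WRED) applies between its thresholds can be sketched as follows (Python); the threshold values are illustrative rather than the ones configured in the simulation.

import random

def red_drop(avg_queue_len, min_th=20, max_th=40, max_p=0.1):
    # No drops below min_th, forced drops above max_th, and a drop
    # probability that grows linearly between the two thresholds.
    if avg_queue_len < min_th:
        return False
    if avg_queue_len >= max_th:
        return True
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p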
Network Parameters
The bellow configurations applied in Open Modeler and simulated to get results. 1. Two routers connected with serial PPP_DS1link (1.5 Mb/s). 2. Five work stations and one server are connected with routers with Ethernet 10Base_T links (10 Mb/s). 3. Different queuing discipline between the edge routers which has effects on the applications performance and the network utilization. So the two routers will be configured to use those different queuing techniques [2]. The traffic parameters used in the simulation are as follows: FTP inter-request time: 10 sec File size: 1 Mbyte TOS: 0 best effort Video: high-resolution video TOS: 4 streaming multimedia Voice codec: G711, G723.1, G729 TOS: 6 interactive voice. Fig. 5 shows MOS for VoIP traffic using the main three queuing methods (FIFO, PQ, WFQ). The figure shows that the best queuing methods which give the best MOS are WFQ and PQ which is 4.3. By using the FIFO method, the MOS becomes not acceptable because it gives poor voice quality. The VoIP delay variation in Fig. 6 shows that the lowest VoIP delay variation is obtained when using PQ method.
4. Results
Fig. 5 shows the MOS for VoIP traffic using the three main queuing methods (FIFO, PQ, WFQ). The best MOS, 4.3, is obtained with WFQ and PQ; with the FIFO method, the MOS becomes unacceptable, indicating poor voice quality. The VoIP delay variation in Fig. 6 shows that the lowest delay variation is obtained when using the PQ method. The average jitter (delay variation) using PQ is 0.012 ms, while using WFQ it is about 0.015 ms. With FIFO as the queuing discipline, the delay variation jumps to 0.4 sec, which is very large for VoIP transmission, as shown in Fig. 7.
4.1. VoIP ETE Delay
Fig. 8 shows the VoIP packet ETE delay using a FIFO queue at the router interface: the ETE delay increases up to 0.6 sec, which is very large for VoIP transmission and leads to echo and poor voice quality. By implementing PQ and WFQ, however, the ETE delay becomes 103.13 ms and 103.15 ms respectively, as shown in Fig. 9; note that the acceptable ETE delay limit ranges from 150 ms to 200 ms. Figs. 10, 11, and 12 show the volumes of transmitted and received FTP traffic using the WFQ and PQ methods. Using WFQ gives more FTP throughput than using PQ, because PQ gives absolute priority to real-time traffic over FTP traffic. Table 3 lists the average FTP throughput for the queuing methods; FTP throughput is obtained by dividing the number of received packets by the number of transmitted packets for each queuing discipline. Fig. 13 presents the VoIP MOS using three different codecs with the WFQ method, to show the relation between the VoIP codec and voice quality. The codec that gives the best voice quality is G711, which is expected because no voice compression exists in the G711 codec; however, G711 supports less call capacity because of its high bandwidth consumption. The figure also shows that G729 gives a better MOS than G723. Table 4 presents the average MOS for the three VoIP codecs. Fig. 14 shows the effect of applying WRED on top of the WFQ discipline on FTP traffic: received traffic decreases due to packet dropping once the output buffer reaches the minimum threshold. The average received FTP traffic is 609 kb/s before applying WRED and 373 kb/s after. Meanwhile, the voice delay decreases from 105 ms to 102 ms and the MOS increases from 3.3 to 3.5 when WRED is applied. The effect of implementing WRED on voice service quality is negligible compared with the reduction in FTP throughput, because voice traffic is already prioritized by WFQ. There is therefore no need to apply WRED when the WFQ discipline is used.
VoIP Capacity Estimation using G729 codec
VoIP traffic started at 7 sec, and one VoIP call was then added every 2 seconds. The OPNET simulation terminates at 4 min in order to generate the required number of calls; since the simulator terminates at 4 min, the last call was generated at 3 min 58 sec. The total number of generated calls is: 1 + (3 × 60 + 58 − 7)/2 ≈ 115 calls. From Fig. 15, the voice delay starts to increase dramatically from 91 sec, so the maximum voice-call capacity of the simulated network is: 1 + (91 − 7)/2 = 43 calls.
Figure 15. Average end-to-end voice delay
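The call-count arithmetic above can be reproduced with a small helper; note that the end-of-run total depends on how the final partial interval is rounded.

```python
def calls_by(t: float, start: float = 7.0, interval: float = 2.0) -> int:
    """Number of calls started by time t (seconds), one new call per interval."""
    return 1 + int((t - start) // interval)

print(calls_by(91))          # -> 43, the capacity estimate above
# calls_by(3 * 60 + 58) evaluates to 116; the paper reports 115, the one-call
# difference coming from how the final partial interval is rounded.
```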
5. Conclusion
In this paper, the effect of different queuing methods on voice service quality has been investigated. The results show that the best queuing method is WFQ, because it gives the minimum voice delay variation and ETE delay. WFQ also has acceptable FTP throughput compared with PQ and FIFO. The study further illustrated that FIFO gives bad voice quality. There is no need to implement WRED alongside WFQ, because its effect on voice delay is negligible. From the simulation it is clear that the best codec from the viewpoint of capacity and quality is G729: it supports 43 simultaneous calls with a MOS of 4.01.
|
2020-08-13T10:09:19.848Z
|
2020-08-11T00:00:00.000
|
{
"year": 2020,
"sha1": "cc8bccebfe82a0250b36a059c56652be33b47515",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/881/1/012146",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f75b642d5f4c600302746cde14eaeb22791bb369",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
}
|
237934111
|
pes2o/s2orc
|
v3-fos-license
|
The Burden and Risk Factors of Patellar and Achilles Tendinopathy in Youth Basketball: A Cohort Study
This study aimed at evaluating the burden and risk factors of patellar and Achilles tendinopathy among youth basketball players. Patellar and Achilles tendinopathy were prospectively monitored in 515 eligible male and female youth basketball players (11–18 years) through a competitive season. Overall, the season prevalence of patellar tendinopathy was 19.0% (95% CI: 15.7–22.7%), 23.2% (95% CI: 18.6–28.2%) in males and 12.5% (95% CI: 8.3–17.9%) in females. The season prevalence of Achilles tendinopathy was 4.3% (95% CI: 2.7–6.4%), 4.1% (95% CI: 2.2–7.0%) in males and 4.5% (95% CI: 2.1–8.4%) in females. The median proportion of symptoms duration was 83% of the average total weeks of basketball exposure for patellar tendinopathy and 75% for Achilles tendinopathy. Median time to patellar tendinopathy onset was 8 weeks for male players and 6 weeks for female players. Higher odds of patellar tendinopathy were seen in males (OR: 2.23, 95% CI: 1.10–4.69), and players with previous anterior knee pain had significantly elevated odds (OR: 8.79, 95% CI: 4.58–16.89). The burden and risk of patellar tendinopathy is high among competitive youth basketball players. Risk factors include sex and previous anterior knee pain. These findings provide directions for practice and future research.
Introduction
Patellar tendinopathy (PTP) is a common overuse/gradual onset injury in elite-level basketball with a reported prevalence of 32% (60% career prevalence) [1]. Although the prevalence of Achilles tendinopathy (ATP) is relatively low, its impact on elite adult basketball players is concerning [2]. Tendinopathy is primarily a clinical problem of pain with or without dysfunction [3]. Individuals may also have tendon pathology, which is a primary factor for tendon degeneration, disrepair, and potential rupture [4,5], and this may result in early retirement from professional sport and potentially impact players' long-term health [2,5].
Despite the potentially significant impact of PTP and ATP on competitive athletes, there is a paucity of literature regarding the burden and risk factors of PTP and ATP, particularly in youth sport [6,7]. Only one study [8], conducted several years ago, investigated the prevalence of PTP in youth basketball players, and there is currently no single study on the prevalence of ATP in youth basketball. A prevalence of 7% (11% in males and 2% in females) was previously reported in a cross-sectional study through clinical evaluation, limiting the potential capture of all PTP in youth basketball players [8].
To fully understand the risk of injury, a combination of measures of risk has been advocated [9]. Systematic reviews on risk factors for PTP in athletes (mostly > 18 years) concluded that there is currently a lack of strong evidence evaluating risk factors for PTP and have suggested further research in prospective studies [10]. No single study has investigated the burden (e.g., symptoms duration) of and time to PTP and ATP in youth basketball. In order to have a robust understanding of risk, it is essential that conventional measures of risk are supplemented with injury burden in epidemiological studies [9].
Tendinopathies are often chronic in nature, with periods of remission and exacerbation. Therefore, prevalence (the proportion of athletes affected at a given time), and not incidence rate (new cases over a given time), is the appropriate measure of risk [11]. Conventional injury surveillance tools, ideally suited to capture acute injuries, are less appropriate and insensitive to capture the magnitude and severity of PTP and ATP [11,12]. This clinical and research challenge may be overcome using a novel, validated injury surveillance method, the Oslo Sports Trauma Research Centre (OSTRC) Overuse Injury Questionnaire, to accurately register overuse injuries in sport using athlete self-reporting [12]. The OSTRC Questionnaire is, however, limited in its scope of application for specific injury types and diagnosis as it does not provide information about individual overuse injuries. To address this limitation, the OSTRC Overuse Injury Questionnaire was adapted by our team and validated against clinical evaluation for self-reporting PTP [13].
The approach to clinical management of PTP and ATP is variable and treatment modalities have limited success in patients with chronic tendinopathy [14][15][16]. Prevention and early diagnosis are key to protecting players' health and avoiding long-term consequences. Thus, the most expedient population to implement the prevention of PTP and ATP and associated consequences is youth athletes. The objectives of this study were: (1) to evaluate the burden of PTP and the burden of ATP, including prevalence, time to tendinopathy (first report), and symptoms duration, and (2) to evaluate risk factors associated with PTP and ATP in competitive youth male and female basketball players.
Study Design and Participants
The STROBE guidelines for reporting cohort studies were followed [17]. We conducted a prospective cohort study involving adolescent players from high school (December 2016-March 2017) and club basketball (March 2017-June 2017) teams in Calgary, Canada. A total of 52 high schools and 23 basketball clubs in Calgary, Canada, were invited to participate in this study. Prior to recruiting participants from high school basketball teams, consent was obtained from school principals followed by the Physical Education directors and basketball coaches. Similarly, prior to recruiting participants from clubs, consent was obtained from club managers followed by coaches.
Players were eligible to participate if they were formally registered with their school or club basketball team. Eligibility to participate in the study was irrespective of anterior knee pain, PTP or ATP at baseline; it was expected that a few players would have ongoing PTP and ATP at time of enrollment and excluding players reporting PTP and ATP at baseline would result in a biased study population [11,18]. This methodological approach underpins the use of prevalence measures rather than incidence [11]. Exclusion criteria at baseline included acute lower extremity injuries or medical conditions (including those that may be associated with tendon problems, e.g., Type II diabetes) precluding competitive basketball participation or baseline performance testing. Written informed consent was requested from all players (in cases of players younger than 15 years, written parental consent and child assent were obtained). This study was approved by the Conjoint Health Research Ethics Board of the University of Calgary, Alberta, Canada (REB16-0864).
Registration of Patellar and Achilles Tendinopathy
Main study outcomes were season prevalence of PTP and season prevalence of ATP. Diagnosis of PTP and ATP was based on self-report by participants using the Oslo Sports Trauma Research Centre Patellar [13] and Achilles Tendinopathy Questionnaire, an adaptation of the OSTRC Overuse Injury Questionnaire [19]. This is a two-part questionnaire with similar but specific questions relating to the knee and ankle joints (Supplementary Materials). The patellar tendinopathy part of the questionnaire, the OSTRC-Patellar Tendinopathy Questionnaire, has been validated against clinical evaluation in youth basketball players with an overall sensitivity of 79% and specificity of 98% [13].
Players were asked to complete the OSTRC Tendinopathy Questionnaire weekly online using a mobile smartphone or alternatively using an identical hard-copy paper version handed to them by their team designate (research assistant, team student trainer, or coach). Players with smartphones were prompted to complete the OSTRC Tendinopathy Questionnaire weekly (every Sunday evening) and a reminder was automatically sent after the following day if the questionnaire was not completed. For players who opted to complete the paper version of the questionnaire, it was distributed every Monday at the time of practice.
Based on teams' final league schedules for the 2016/2017 league seasons, as available online, players with fewer than the expected number of OSTRC Tendinopathy Questionnaire responses through the competitive season (i.e., 10-12 weeks of high school basketball season and 8-14 weeks of club season, depending on whether the player/team made playoffs) were followed up post-season through a telephone interview by the study physical therapist or trained research assistants to complete the OSTRC Tendinopathy Questionnaire for any missing weeks. The follow-up OSTRC Tendinopathy Questionnaire contained the same questions as the weekly questionnaire, except for the questions relating to weekly severity scores. Players reporting PTP or ATP were also asked, in the follow-up phone interview, to estimate the number of weeks they experienced symptoms (pain with/without dysfunction) during the season.
Prevalence Measures
We calculated the season prevalence of PTP by dividing the number of players reporting at least one episode of PTP through the season by the number of players in the study. This calculation was performed for all players and separately for male and female players. Analogous calculations were performed for the prevalence of ATP. Weekly prevalence was also calculated for all and substantial tendinopathy (for each week, the number of players reporting an episode divided by the number of players answering the questionnaire). Substantial tendinopathy connotes PTP or ATP leading to moderate or severe reductions in training volume or performance, or an inability to participate (that is, responses c, d, or e in either question #2 or #3 on the OSTRC Tendinopathy Questionnaire) (see Supplementary Materials) [12].
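A minimal sketch of these prevalence calculations follows, assuming a Clopper-Pearson ("exact") binomial interval; the paper says only "exact 95% confidence intervals", so the specific interval method is an assumption, and the case count is implied by the reported 19.0% of 515 players.

```python
# Season prevalence with an exact (Clopper-Pearson) binomial 95% CI.
from scipy.stats import beta

def prevalence_with_ci(cases: int, n: int, alpha: float = 0.05):
    """Return (point estimate, CI lower, CI upper) for cases out of n."""
    p = cases / n
    lo = beta.ppf(alpha / 2, cases, n - cases + 1) if cases > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, cases + 1, n - cases) if cases < n else 1.0
    return p, lo, hi

# Season prevalence of PTP implied by the abstract: ~98 of 515 players
p, lo, hi = prevalence_with_ci(98, 515)
print(f"{p:.1%} (95% CI: {lo:.1%}-{hi:.1%})")  # ~19.0% (15.7%-22.7%)
```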
Time to Tendinopathy Onset
We reported time to tendinopathy onset during the study, in weeks, for every first report of PTP or ATP. This was based on a specific question on the OSTRC Tendinopathy Questionnaire-question #6 for each part-asking if a player's reported symptoms were experienced for the first time the previous week (see Supplementary Materials).
Symptoms Duration and Severity
Measures of PTP and ATP burden on players reporting tendinopathy included proportion of symptoms duration (percentage of weeks with symptoms) and level of dysfunction based on players' weekly severity scores. Severity scores (0-100 in arbitrary units [AU]) were derived from the fourth set of questions on the OSTRC Tendinopathy Questionnaire and calculated as advised by Clarsen et al. [12] in the original OSTRC Overuse Injury Questionnaire. Time to tendinopathy and severity scores were only obtained in players who completed the weekly OSTRC Tendinopathy Questionnaire.
Potential Risk Factors
All participants were required to complete a pre-participation questionnaire at the time of enrollment in study. Potential risk factors considered included demographic and sport-related factors such as age, sex, body weight, height, league setting (school vs. club basketball league), previous knee injury (1-year history), previous anterior knee pain (3-month history), playing position, and basketball specialization (single-sport basketball participation based on baseline information of whether a player was involved in only basketball or multiple sports in the previous year). Our choice of potential risk factors was informed by previous literature [10,20].
Statistical Analysis
Statistical analyses were performed using Stata (StataCorp LP, College Station, TX, USA, version 14.4) and Microsoft Excel for Mac (version 16.16.1). Player characteristics were described with means and standard deviations (SD) for numerical variables, or median (range) (if not normally distributed), or frequencies and proportions (%) for categorical variables, by sex. For players participating in both school and club seasons, an assumption of independence was made, given that a small proportion of players participated in both seasons.
Season prevalence estimates were reported with exact 95% confidence intervals (CI) for all players and by sex. Weekly prevalence (all and substantial tendinopathy) were plotted over time to identify trends throughout the course of the study. Time to tendinopathy (i.e., week of first tendinopathy report) was summarized using median and interquartile range (IQR). Symptoms duration was estimated in proportion (%), that is, the number of weeks with symptoms divided by the number of participation weeks for each player, and the median (first (Q1) and third (Q3) quartiles) of these percentages was calculated. The weekly severity scores reported by players (ones that completed one or more weekly OSTRC Tendinopathy Questionnaire) with tendinopathy were summarized using median for each week and plotted over time to identify trends.
Some potential risk factor variables had substantial missing data, so we implemented multivariate imputation by chained equations (MICE) to impute missing data. This algorithm accounts for perfect prediction during imputation of data and adjusts for differences in variable types (i.e., binary, nominal and continuous variables) by modelling each variable according to its distribution [21,22]. Considering the extent of missing data, we generated 30 imputed datasets for optimal regression modelling [22].
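For readers unfamiliar with chained-equations imputation, a minimal sketch follows. The study used MICE with 30 imputed datasets; scikit-learn's IterativeImputer is shown here purely as an accessible single-imputation stand-in, and the column names are hypothetical.

```python
# Minimal sketch of chained-equations imputation (single-imputation variant).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.DataFrame({
    "height_cm": [170.0, np.nan, 182.0, 165.0],
    "weight_kg": [60.0, 72.0, np.nan, 55.0],
    "age_yr":    [14.0, 16.0, 15.0, np.nan],
})

# sample_posterior=True draws imputed values from the predictive distribution
imputer = IterativeImputer(sample_posterior=True, random_state=0)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed.round(1))
```

Running the imputer m times with different random_state values yields the multiple completed datasets that MICE pools for final estimates.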
Based on the season prevalence of PTP and the complete MICE datasets, a multivariable logistic regression analysis adjusted for team clusters was used to estimate odds ratios (ORs) and 95% CIs to evaluate risk factors for PTP. The following covariates were included from the beginning: height, previous knee injury, playing position, and basketball specialization. A backward-elimination regression technique was employed: covariates with no significant effects were removed from the model, one at a time, considering statistical (0.05 significance level) and clinical significance. Covariates were also checked for whether their removal altered other coefficients by more than 10%.
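A minimal sketch of such a cluster-adjusted logistic model, using statsmodels with cluster-robust standard errors at the team level; the data frame, variable names, and simulated values are hypothetical stand-ins for the study's actual data.

```python
# Logistic regression with cluster-robust (team-level) standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ptp":      rng.integers(0, 2, 200),  # outcome: PTP reported (0/1)
    "male":     rng.integers(0, 2, 200),  # sex indicator
    "prev_akp": rng.integers(0, 2, 200),  # previous anterior knee pain
    "team":     rng.integers(0, 20, 200), # cluster identifier
})

model = smf.logit("ptp ~ male + prev_akp", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["team"]}
)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```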
Although we had an a priori plan to conduct a multivariable logistic regression analysis for ATP risk factor evaluation, the total number of participants with ATP outcomes were too few to generate any meaningful model.
Player Characteristics
A total of 515 players from 63 teams completed this study: 315 (61.4%) males and 200 (38.6%) females. The selection and flow of players through the study are presented in Figure 1. The distribution of player baseline characteristics by sex is presented in Table 1.
Thirteen players (2.5%; 8 males, 5 females) in this cohort participated in both school and club seasons-an assumption of independence was made in these players given the small proportion.
Response Rate
All 515 players completed at least one version of the OSTRC Tendinopathy Questionnaire (either in-season or at follow-up), yielding an overall response rate of 100%. However, 350 (68%) of all players completed one or more (in-season) weekly OSTRC Tendinopathy Questionnaire. The average weekly response rate to the weekly OSTRC Tendinopathy Questionnaire was 62%. Characteristics (e.g., sex, age, and league setting) of the players that responded (having one or more weekly questionnaire response) and non-responders to the weekly questionnaire were similar.
Trends in the weekly prevalence of PTP and ATP are presented in Figure 2. While the prevalence of all PTP increased slightly over the course of the season in all players, the prevalence of all ATP was within the same range. The prevalence of substantial PTP and ATP ranged between 0% and 5% in both male and female players through the season. Week numbers were sequentially aligned for both school and club seasons; a 1-week Christmas break (week 4) was absent for the school season, i.e., week 4 prevalence is based on the club season only. Substantial tendinopathy connotes tendinopathy leading to moderate or severe reductions in training volume or performance, or an inability to participate.
Time to Tendinopathy Onset
The median (IQR) time to onset of PTP was 7 (4) weeks overall, 8 (4) weeks for males and 6 (5) weeks for females. No new cases of ATP were reported through the season; athletes either reported the same pain as the previous week or a return of pain that had gone away.
Multivariable Analysis of Risk Factors
The final multivariable logistic regression model showed that the odds of PTP in males was 2.23 (95% CI: 1.10-4.69) times the odds of PTP in females; the odds of PTP in players with previous anterior knee pain was 8.79 (95% CI: 4.58-16.89) times the odds of players reporting no anterior knee pain ( Table 2).
Discussion
In this study, we evaluated the prevalence, time to tendinopathy, and symptoms' duration of PTP and ATP in youth basketball players and examined associated risk factors for PTP in a cluster-adjusted multivariable regression analysis. To our knowledge, this is the first study to prospectively report the burden and risk factors of PTP in youth basketball.
Prevalence of Patellar and Achilles Tendinopathy
Overall, we found a high prevalence of PTP and a relatively low prevalence of ATP. Our findings suggest that PTP is about five times as common as ATP among youth basketball players. While there was no difference in the season prevalence of ATP between male and female players, we found a significant difference in the season prevalence of PTP for male and female players. Our results showed that PTP became increasingly common among players as the season progressed, but the prevalence of ATP remained within the same range. Studies relating to the prevalence and risk factors of PTP and ATP in basketball are sparse and there is currently no study reporting the prevalence of ATP in youth basketball; the potential for comparison with other studies is therefore limited. An overall PTP prevalence of 19% (23% in males and 13% in females) presented in our study is lower than the prevalence (32%) reported for male elite adult basketball players [1]. However, our PTP prevalence estimates more than double the 7% (11% in males and 2% in females) previously reported by Cook et al. in youth basketball players aged 13-18 years [8]. Differences in study designs between our study and that of Cook et al. may explain, in part, the large difference in prevalence estimates. For example, in our study, we prospectively examined PTP weekly (repeated measurements) through self-reporting (which is highly sensitive in capturing gradual onset injuries) [23,24] to derive our main outcome of season prevalence of PTP, while in the study by Cook et al., PTP prevalence was based on a cross-sectional examination (one-time measurement) by a clinician. Further to this, the frequency and intensity of participation appears to have increased over the past decade, and it is very likely gradual onset/overuse injuries have increased as well. Driven by the desires to obtain collegiate scholarships or potentially earn a professional contract, youth basketball players are increasingly becoming highly competitive [25]. It is therefore not surprising that the prevalence of PTP has increased accordingly.
Some studies have investigated the prevalence of ATP in youth sports. Cassel et al. [26], in a cross-sectional study, reported an overall ATP prevalence of 1.8% (2.0% in males and 1.6% in females) in 760 adolescent athletes from 16 different sports (basketball not included). In another study, a prevalence of 7.5% was reported for high school runners aged 13-18 years [27]. Further, Emerson et al. reported a prevalence of 12.5% and 17.5% in male and female gymnasts, respectively [28]. Although the prevalence estimate reported for ATP in our study (i.e., 4.5% overall) is higher than the average reported for 16 popular youth sports examined by Cassel et al. [26], it is much lower than the prevalence reported for adolescent runners and gymnasts [28]. It is thus conceivable that ATP is less common in youth basketball, especially when compared with its prevalence in other youth sports involving repeated jumping and landing.
Time to Tendinopathy, Symptoms' Duration and Severity Score
The importance of extensively appraising injury burden to fully understand injury risk has been emphasized [9]; it provides a complete picture of risk and gives insight into injury morbidity and potential consequences. Further to prevalence estimates, we examined other measures of burden to elucidate the problem of PTP and ATP in youth basketball players. Results from our study suggest that 50% of asymptomatic players at the start of the season may develop PTP at approximately 7 weeks (8 weeks for males and 6 weeks for females) into the season. This finding has direct implications for practice. For example, efforts to reduce the burden of PTP may include decisions by sport directors/club administrators and basketball coaches regarding the need and timing for a periodic in-season PTP evaluation and subsequent workload adjustments in competitive youth basketball players, based on the sex-specific estimated time to tendinopathy reported in the current study.
Although the prevalence of ATP was found to be low, it appears the burden of ATP on players, as measured by both symptoms duration proportion and severity scores, was high and comparable to that of PTP. This finding corroborates a previous report indicating a high burden of ATP in elite basketball players [2]. An overall median symptoms duration proportion of 83% for PTP and 75% for ATP found in the current study suggests that youth basketball players have chronic tendinopathies [3], with symptoms lasting through an extensive period of a competitive season. Future research should investigate the short-term impact of injury chronicity on players' overall health, e.g., psychological wellbeing and players' quality of life.
Risk Factors of Patellar and Achilles Tendinopathy
Based on a cluster-adjusted multivariable logistic regression analysis, our study suggested that sex and previous anterior knee pain are significantly associated with PTP risk in youth basketball players. In accordance with previous studies in basketball and volleyball, age was not found to be a significant risk factor for PTP in the current study [1,29]. Of note, the odds of having PTP in players who reported previous anterior knee pain (within the past 3 months) at baseline were 8.79-fold those of players without previous anterior knee pain. Anterior knee pain in basketball players would most likely be a quadriceps tendinopathy, patellar tendinopathy, or patellofemoral pain syndrome. A pre-participation screening program that includes assessment for a recent history of anterior knee pain (specifically including the aforementioned) prior to a competitive season may be valuable in identifying youth basketball players at high risk of PTP.
Our finding of male sex as a risk factor for PTP corroborates that of Cook et al. [8] and other studies in adult elite and recreational basketball players [1,29]. Although the mechanism by which male players are more susceptible to PTP than female players is not fully known, it is suggested that estrogen may have a protective function in females [30].
Multivariable regression analyses could not be run for ATP given the low prevalence of ATP. Similarly, the total number of participants and events in the sub-cohort with performance data were inadequate for risk factor analysis in a multivariable logistic regression.
Methodological Considerations
As it was impracticable for our study physical therapist to evaluate all 515 players for PTP and ATP on a weekly basis, we implemented a self-report measure of PTP using a questionnaire whose diagnostic accuracy has been validated against clinical examination for PTP [13]. Previous studies reporting the prevalence of PTP or ATP based on a questionnaire or pain mapping approach have used tools that were not pre-validated [27,29]. As used in the current study, a self-report methodology with calculations of prevalence, rather than incidence, has been shown to be more accurate in reporting overuse injuries [11,12]. In contrast to previous studies that used the OSTRC Overuse Injury Questionnaire to report average weekly/bi-weekly prevalence and severity scores [24,[31][32][33], we chose to examine the trend of weekly PTP and ATP through line graphs. This is because the repeated measures of weekly prevalence are not independent, and as such, mean (95% CIs) or median (IQR) are likely to yield erroneous summary estimates. Furthermore, we used a robust cluster-adjusted multivariable regression analysis to evaluate independent risk factors for PTP risk, a strength in this study.
Study Limitations
This study had some limitations. First is the possibility of response and recall bias that might have impacted our prevalence estimates of PTP and ATP. Although the overall response rate for our primary outcomes, season prevalence of PTP, and season prevalence of ATP was adequate, the information collected at follow-up through phone interviews might have been impacted by recall bias.
Another limitation is that we are unable to ascertain the validity of the Achilles tendinopathy part of the OSTRC Tendinopathy Questionnaire, as this is yet to be evaluated in the literature. Given that the questions in the ATP part of the questionnaire are similar to the ones in the PTP part, we speculate that the diagnostic accuracy of the ATP segment might be similar to that of the patellar tendinopathy part of the OSTRC Tendinopathy Questionnaire.
Third, although we used the term risk factors to define the independent variables examined in the risk factor analysis conducted in our study, we are unable to confirm a causal relationship between PTP and these variables. For any causal relationship to be established, it is critical that exposure precedes event or disease among other conditions [34]. In our study, 22% of players indicated symptoms of anterior knee pain at baseline. It is speculated that many of these 22% had PTP at baseline. We decided a priori not to exclude such players, given that our measure of risk was prevalence (sensitive measure for chronic/overuse injuries) and not incidence (sensitive for acute onset injuries) [11]. While we acknowledge the importance of temporality, we believe that its application in risk evaluation for chronic injuries is restricted. Additionally, excluding symptomatic players at baseline in a bid to establish temporality in our study would result in a biased sample frame, which would in turn limit the external validity of our finding [11,18].
Finally, we did not evaluate players' workload in the present study. Workload may be a key risk or protective factor for PTP and ATP [35,36]. The complex system approach for pattern recognition and risk profiling of sports injury etiology may serve as an innovative next-step towards the prevention and control of PTP and ATP [37][38][39]. We recommend that future studies evaluate the relationship between workload and PTP while considering the potential moderating effects of other risk factors for PTP.
Conclusions
The prevalence and burden of patellar tendinopathy is high in competitive male (11-18 years) and female (13-18 years) youth basketball players; 1 in 4 male players and 1 in 5 female players reported symptoms of patellar tendinopathy in a competitive season. Although less common, the burden of Achilles tendinopathy appears significant and comparable to that of patellar tendinopathy. Risk factors of patellar tendinopathy in youth basketball include sex and previous anterior knee pain. The current findings have implica-tions for practice and future research relating to the prevention and in-season management of patellar and Achilles tendinopathy in youth basketball. There is a crucial need for countermeasures to abate the risk and potential consequences of both patellar and Achilles tendinopathy in competitive youth basketball. This includes raising awareness about the burden of patellar and Achilles tendinopathy among stakeholders-administrators, coaches, players, and parents-and promoting the uptake of current best practices for both primary and secondary prevention of patellar and Achilles tendinopathy, including progressive tendon loading that incorporates both isometric and isotonic exercises [40][41][42][43].
|
2021-09-28T05:30:35.571Z
|
2021-09-01T00:00:00.000
|
{
"year": 2021,
"sha1": "2d574db432bd383aa95abe96ddd41169ef47d4b3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/18/9480/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d574db432bd383aa95abe96ddd41169ef47d4b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
31466974
|
pes2o/s2orc
|
v3-fos-license
|
The Efficacy of Testosterone Ointment on Insulin Resistance in Men with Metabolic Syndrome
Low levels of testosterone are associated with metabolic syndrome and type 2 diabetes mellitus. Testosterone levels are considered to be negatively correlated with insulin resistance and HbA1c levels. There are also reports that testosterone replacement therapy reduces insulin resistance or improves glycemic control. Transdermal administration of testosterone ointment (Glowmin) is a method of drug administration that keeps blood concentrations stable and constant. In this study, testosterone ointment (Glowmin) was administered as a testosterone supplement to men with metabolic syndrome and low free testosterone levels. The study included 5 such men (mean age, 50.6 ± 8.8 years; mean BMI, 29.5 ± 3.1 kg/m2; mean waist circumference, 97 ± 7 cm; free testosterone levels, <8.5 pg/ml; values indicate means ± SD). Glowmin was administered to the submandibular area at a dose of 0.3 g twice a day for 6 months. Three months after administration, a significant decrease was observed in fasting immunoreactive insulin levels, homeostasis model assessment for insulin resistance, total cholesterol and LDL-C. Six months after administration, each of these parameter estimates remained steady. In conclusion, transdermal administration of testosterone ointment (Glowmin) gradually reduced insulin resistance in men with metabolic syndrome and low free testosterone levels.
Testosterone levels are considered to be negatively correlated with insulin resistance and HbA1c levels [11][12][13]. There are also reports that testosterone replacement therapy reduces insulin resistance or improves glycemic control [13][14][15][16][17]. One report has shown that, in a hypogonadal man with type 2 diabetes mellitus, testosterone replacement therapy reduced insulin resistance and improved glycemic control [13]. At present, testosterone replacement therapy is administered mainly by Enarmon-Depot injection and has an immediate and reliable effect. However, the therapy induces extremely large fluctuations in blood testosterone concentrations; consequently, subjective symptoms may vary and adverse events such as plethora may occur. Because no self-injection kit has been approved, frequent visits to an outpatient clinic for injections are also a problem.
Meanwhile, transdermal administration of testosterone ointment (Glowmin, Daitou Pharma, Japan) is a method of drug administration that keeps blood concentrations stable and constant. However, there are only a few reports that assess whether this mode of administration has an immediate and reliable effect.
In this study, testosterone ointment (Glowmin) was administrated as a testosterone supplement to men with metabolic syndrome who had low FT levels. The effect on insulin resistance was then assessed.
Subjects and Methods
This study included 5 men with metabolic syndrome who had low FT levels (mean age, 50.6 ± 8.8 years; mean body mass index [BMI], 29.5 ± 3.1 kg/m2; mean waist circumference [WC], 97 ± 7 cm; values indicate means ± standard deviation [SD]). This sample included 3 men receiving oral antihypertensive agents and 2 men receiving oral antihyperlipidemic agents. A low FT level was defined as less than 8.5 pg/ml, which is the level recommended by the diagnostic algorithm for late-onset hypogonadism (LOH) syndrome [18]. Metabolic syndrome was defined as a case meeting the Japanese diagnostic criteria [19]. Specifically, men with metabolic syndrome included those with a WC ≥ 85 cm who met at least 2 of the following 3 criteria: 1) hypertriglyceridemia (≥ 150 mg/dl) and/or low high-density lipoprotein (HDL) cholesterol (<40 mg/dl); 2) systolic blood pressure ≥ 130 mmHg and/or diastolic blood pressure ≥ 85 mmHg; and 3) fasting plasma glucose (FPG) level ≥ 110 mg/dl. All subjects provided informed consent, and Glowmin was administered to the submandibular area at a dose of 0.3 g twice a day for 6 months. The following parameters were measured before administration of Glowmin and 1, 3, and 6 months after administration, and their fluctuations were analyzed: BMI, WC, FPG, fasting immunoreactive insulin (F-IRI), HbA1c, homeostasis model assessment for insulin resistance (HOMA-R), total cholesterol (TCHO), triglycerides (TG), HDL-cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C, calculated by the Friedewald equation), and FT. HOMA-R was calculated from FPG and F-IRI as follows: HOMA-R = FPG [mg/dl] × F-IRI [µIU/ml] / 405. The free testosterone level was measured by a free testosterone RIA kit (Sceti Medicallabo, Sakura city, Chiba, Japan).
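The HOMA-R formula above translates directly into code; the example input values below are illustrative only, not patient data from the study.

```python
# Direct transcription of the HOMA-R formula given in the text.
def homa_r(fpg_mg_dl: float, f_iri_uiu_ml: float) -> float:
    """Homeostasis model assessment for insulin resistance."""
    return fpg_mg_dl * f_iri_uiu_ml / 405.0

print(round(homa_r(110, 12), 2))  # e.g. FPG 110 mg/dl, F-IRI 12 uIU/ml -> 3.26
```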
The results were expressed as means ± SD. Paired t-tests were performed to compare parameters before and after the administration of Glowmin. A p-value of less than 0.05 was considered to indicate statistical significance.
This study was conducted with the approval of the Ethics Committee of Toho University Omori Medical Center (No. 25-104).
Discussion
Low testosterone level is thought to impair insulin sensitivity and increase body fat percentage, and is associated with truncal obesity, dyslipidemia, hypertension, and cerebrovascular diseases [20]. An epidemiological study has shown that the prevalence of testosterone deficiency is as high as 40% in men with type 2 diabetes mellitus [4]. Furthermore, it has been reported that individuals with metabolic syndrome have lower levels of endogenous total testosterone and FT than those without metabolic syndrome [21]. In addition, low testosterone level is a predictive factor for the development of insulin resistance, metabolic syndrome, and type 2 diabetes mellitus [22].
In 2006, Kapoor et al. reported that testosterone replacement therapy reduced insulin resistance in hypogonadal men with type 2 diabetes mellitus [13]. After testosterone ester was injected intramuscularly at a dose of 200 mg once every 2 weeks for 3 months, insulin resistance decreased (HOMA-R: −1.73), as did HbA1c levels (−0.37%). In addition, improvements in WC, leptin, and TCHO were also observed. Even with transdermal supplementation of testosterone (2% testosterone gel), insulin resistance decreased by 15% after 6 months and by 16% after 12 months in hypogonadal men with type 2 diabetes mellitus or metabolic syndrome [17].
In this study, testosterone ointment (Glowmin) was administered to men with metabolic syndrome who had low FT levels to supplement testosterone, and the effects on glucose and lipid metabolism were assessed. The improvement in insulin resistance and serum lipid levels was maintained even 6 months after administration. The FT levels significantly increased 1 month after administration, and the metabolic parameters were assumed to have improved gradually. In this study, Glowmin was administered to the submandibular area at a dose of 0.3 g twice a day. Studies on the effects of Glowmin on metabolic parameters and the incidence of adverse events might need to be conducted with administration to the abdomen at a dose of 0.6 g per day, and improved administration methods would facilitate long-term use. The mechanism by which testosterone improves HOMA-R and LDL-C without changing BMI, glycemia and HbA1c levels is uncertain at present. We speculate that a decrease in visceral fat is deeply involved. In this study, improvements in BMI and waist circumference were not demonstrated. However, with a longer observation period, waist circumference might be reduced, because it tended to decrease compared with pre-treatment values.
Conclusion
Transdermal administration of testosterone ointment (Glowmin) gradually reduced insulin resistance in men with metabolic syndrome who had low FT levels.
|
2019-03-17T13:04:01.512Z
|
2017-05-15T00:00:00.000
|
{
"year": 2017,
"sha1": "38ef0c375f14c88c07ba9fed6eaca32449262e86",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2167-0943.1000225",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "dd00e563eb310bfa85598ed91715212646803399",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
268885432
|
pes2o/s2orc
|
v3-fos-license
|
A case report on multisystem inflammatory syndrome in children (MIS‐C) associated with COVID‐19 infection
Key Clinical Message Early recognition and treatment of Multisystem Inflammatory Syndrome in Children (MIS‐C) within the context of COVID‐19 infection is crucial for improved outcomes. Prompt intervention with IVIG and steroids leads to significant improvement in a severe case of MIS‐C. Clinicians should be vigilant for MIS‐C symptoms and initiate timely management. Abstract We report a case involving a fourteen‐year‐old male with COVID‐19 infection who developed multisystem inflammatory disease. A previously healthy child presented with a history of 10 days of fever and cough, along with diarrhea, and vomiting for 3 days. His COVID‐19 infection was confirmed through Polymerase Chain Reaction (PCR), and the laboratory values were remarkable for high levels of C‐reactive protein, D‐dimers, B‐type natriuretic peptide (BNP), and troponin I. He developed circulatory shock on the second day of the presentation and needed inotropic support. Steroids and intravenous immunoglobulin (IVIG) were started in light of Multisystem Inflammatory Syndrome in Children (MIS‐C), which improved his condition. Thus, during the management of COVID‐19 infection, early detection and a careful clinical characterization for MIS‐C are essential.
| CASE HISTORY AND EXAMINATION
A 14-year-old male child without prior co-morbidities presented to the Emergency Room (ER) with 10 days of fever (maximum temperature: 103°F) and acute dry cough associated with shortness of breath on exertion. He also complained of diarrhea and vomiting for 3 days. On examination, he was febrile (temperature: 100°F), tachycardic (heart rate: 120 bpm), tachypneic (respiratory rate: 30 breaths/min), and required oxygenation via Non-Invasive Ventilation (NIV) at 35% FiO2. Blood pressure (BP) during the presentation was 120/80 mmHg. Arterial Blood Gas (ABG) analysis revealed no significant abnormalities. Other systemic examinations revealed no abnormalities. Chest X-ray showed bilateral lower-lobe patchy opacities.
| METHODS
Polymerase chain reaction (PCR) for COVID-19 was positive. He was admitted to the intensive care unit (ICU) in view of severe COVID-19 infection for further evaluation and management, and was started on: injection (inj.) ceftriaxone 1 g intravenous (IV) 12 hourly, tablet (tab.) azithromycin 500 mg per oral once daily, inj. enoxaparin 40 U subcutaneously once daily, inj. dexamethasone 6 mg once daily, and inj. rabeprazole 20 mg once daily, as per the standard treatment protocol.
On the second day of admission, he developed circulatory shock, with a lowest BP reading of 80/60 mmHg. Consequently, he was started on an inj. noradrenaline intravenous infusion at 0.10 mcg/kg/min, titrated as per the ICU protocol to improve peripheral vascular resistance and maintain blood pressure. His laboratory values were remarkable for high levels of C-reactive protein, D-dimers, B-type natriuretic peptide (BNP), and troponin I [as shown in Table 1].
Echocardiography showed the features of cardiogenic shock: a dilated left ventricle (LV), moderate LV systolic dysfunction (ejection fraction: 30%-35%), and mild tricuspid regurgitation (TR) with a tricuspid regurgitation pressure gradient (TRPG) of 20 mmHg, with normal right ventricle (RV) systolic function and no pericardial effusion. There were no signs of coronary artery aneurysm during the evaluation. The cardiology and infectious disease departments were consulted. As he had multisystem involvement in the background of COVID-19 infection and met the CDC criteria for MIS-C [1], a strong possibility of MIS-C was considered. Consequently, inj. IV immunoglobulin (IVIG) at 2 g/kg and inj. methylprednisolone (125 mg once daily) were started. After the initiation of immunoglobulin therapy, his condition gradually improved. On the fifth day of ICU admission, he was hemodynamically stable; he was thus weaned off the inotropic support, and oxygenation was maintained on room air. An ejection fraction of 60% was achieved on echocardiography, and the patient was afebrile and free from respiratory and gastrointestinal symptoms. He was shifted to the general ward and discharged on oral medications after 3 days.
| DISCUSSION
MIS-C in pediatric patients is a novel syndrome that appears to be linked to previous exposure to SARS-CoV-2. MIS-C is thought to be related to a post-viral immune-mediated inflammatory process, although the pathogenesis of the syndrome remains largely unclear [1]. Children with MIS-C may present with a continual fever for 3-5 days on average; fatigue; and signs and symptoms of systemic inflammation, including laboratory-confirmed elevated inflammatory markers and multiorgan involvement: respiratory, cardiac, GI, renal, hematologic, dermatologic, and neurologic [3,4]. Not all patients present with the same symptoms and signs, and in some cases patients may exhibit symptoms not mentioned above [4]. In our case, the patient, with a background of COVID-19 infection, had involvement of the respiratory, gastrointestinal, and cardiac systems, indicating a diagnosis of MIS-C, and showed improvement in his condition with IVIG and steroids. To date, there are no definitive treatment guidelines.
Table 1. Laboratory investigations on the second day of admission.
FUNDING INFORMATION
None.
|
2024-04-05T05:07:43.411Z
|
2024-04-01T00:00:00.000
|
{
"year": 2024,
"sha1": "c8d91000a2e91b62268701a009f988649ac2650a",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.8737",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8d91000a2e91b62268701a009f988649ac2650a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
233254711
|
pes2o/s2orc
|
v3-fos-license
|
Exploratory analysis based on leprosy epidemiological and operational indicators in the city of Governador Valadares/MG/Brazil
Governador Valadares (GV) is a municipality of recognized leprosy hyperendemicity. From 2001 to 2010, the municipal health management invested in a heterogeneous way in the decentralization of control actions to expand access to diagnosis, particularly in 2002 and 2004, when training-campaigns took place in the Family's Health Program. This is a descriptive, longitudinal epidemiological study developed in the city of GV, Minas Gerais, Brazil. The variables collected were: age group, categorized as <15 years and >15 years; gender; year of notification; operational classification, categorized as paucibacillary and multibacillary; number of registered household and outside contacts, classified by whole numbers starting with zero (no contacts); and mode of entry, as new or not. The estimates obtained from the analysis of municipal indicators reveal that the coefficients of general detection, and of detection in children under 15 years old, remained at hyperendemic levels during the ten years of study, with an apparent decrease in general detection over the period studied. Furthermore, according to national parameters, a consistent number of diagnoses in children under 15 was maintained, and contact-examination coverage was insufficient. Our study points to the importance of continuing leprosy control actions over time. The findings related to epidemiological and operational leprosy indicators in GV from 2001 to 2010 make us aware of a high disease burden.
INTRODUCTION
Leprosy is an infectious disease caused by Mycobacterium leprae. It is a chronic condition with a long incubation period of 2-5 years (Goulart et al., 2002). The detection of new leprosy cases has shown progress worldwide. According to WHO (2019), in 2018, including all priority endemic countries, 208,619 new leprosy cases were reported in 127 countries. Brazil recorded 28,660 patients. India, Indonesia, and Brazil together accounted for 79.6% of all new cases detected globally. In Brazil, between 2009 and 2018, 311,384 new cases of leprosy were diagnosed. The country remained in the parameter of high endemicity, except in the South and Southeast regions (WHO, 2019). According to the State Department of Health (2019), in the State of Minas Gerais, approximately 1,400 new cases have been reported each year for the last eight years. In the municipality of Governador Valadares, the detection coefficient of new leprosy cases per 100,000 inhabitants was 24.7 in 2016, and in children under 15 years old the coefficient was 6.1. Even with a decrease in detection over the years, the municipality is classified as "very high" with regard to the strength of the endemic.
Two of the ten largest national clusters, clusters 4 and 9, are located in the State of Minas Gerais. Governador Valadares belongs to cluster 4, formed by the south of Bahia, the north of Espírito Santo, and the northeast of Minas Gerais (Penna et al., 2009). The city stands out in the national context as a priority municipality for the Ministry of Health. It has a qualified team at the local reference center at the state level, which is also considered a regional reference. Detection and prevalence coefficients have been decreasing in the State, from 16.4/100,000 inhabitants and 3.2/10,000 inhabitants in 2000 to 9.35/100,000 inhabitants and 1.3/10,000 inhabitants in 2009. In the last decade, the State Secretariat of Health of Minas Gerais (SES/MG) has invested in preparing municipal and regional managers and monitors for control actions, so that they can act in a qualified way in professional training, supervision, monitoring, and evaluation, focusing on improving the program and on social mobilization (Andrade et al., 2010). Governador Valadares is a pole municipality in eastern Minas Gerais whose history is marked by the robustness of the leprosy endemic, with innovative investments in controlling the disease (Municipal Department of Health, 2003; Morais, 2010). It is considered a priority for monitoring leprosy control actions at the regional, state, and national levels. In the last decade, epidemiological studies have addressed the temporal dimension and the characterization of the endemic through case-related variables (Lana et al., 2002; Morais, 2010), without, however, inserting space or territory as variables in the analysis. The use of epidemiological and statistical tools to identify areas with a high number of detected cases, complemented by an understanding of risk factors linked to the patients' place of residence, can guide planning, monitoring, and evaluation, enabling the implementation of control programs. Thus, in the present study, an exploratory analysis was carried out for the period 2001-2010, based on epidemiological and operational indicators of leprosy, to present the general epidemiological-operational panorama of the endemic in the municipality of GV.
MATERIALS AND METHODS
This is a descriptive, longitudinal epidemiological study developed in the city of Governador Valadares (GV), Minas Gerais, Brazil (Brazil, 2007). The variables collected were: age group, categorized as <15 years and >15 years; gender; year of notification, between 2001 and 2010; operational classification at diagnosis, categorized as "paucibacillary" and "multibacillary"; number of registered household and outside contacts, classified by whole numbers starting with zero (no contacts); and mode of entry, as new or not. From the data collected, the following epidemiological and operational indicators were constructed for the analysis of the endemic disease (Table 1), as proposed by the Ministry of Health (Brazil, 2010): annual detection coefficient of new leprosy cases per 100,000 inhabitants; annual detection coefficient of new leprosy cases in the population aged 0 to 14 years per 100,000 inhabitants; proportion of leprosy cure among new patients diagnosed in the years of the cohorts; and proportion of examinees among registered household contacts of new leprosy cases in the year. The internal consistency of the data was checked, with re-typing and revision whenever necessary. The general detection coefficient exceeded the hyperendemicity parameter throughout the period, and the proportion of cure had results above the recommended level.
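The first and last of these indicators translate directly into code, as sketched below; the population figure in the example is a hypothetical stand-in chosen only to reproduce the 24.7/100,000 coefficient quoted in the introduction.

```python
# Helpers for two of the indicators defined above (Ministry of Health parameters).
def detection_coefficient(new_cases: int, population: int) -> float:
    """Annual detection coefficient of new cases per 100,000 inhabitants."""
    return new_cases / population * 100_000

def contacts_examined_proportion(examined: int, registered: int) -> float:
    """Proportion (%) of registered household contacts examined."""
    return examined / registered * 100

print(round(detection_coefficient(68, 275_000), 1))  # ~24.7 per 100,000
print(round(contacts_examined_proportion(73, 100), 1))  # e.g. 73.0%
```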
3-Coefficient of annual detection of new cases of leprosy in children under 15 years old
In the other years, the results were below this parameter. In 2003 the cure rate was the lowest (81.5%).
5-Proportion of examinees among registered household contacts of new leprosy cases
The proportion of those examined among the registered household contacts of new leprosy cases between 2001 and 2010 is shown in figure 4. There are good and regular parameters (>75% and <50% of household contacts examined, respectively) as indicators for monitoring contacts. In 2001 (44.3%), 2002 (28.4%) and 2004 (48.5%) the proportion of contacts examined was below the regular parameter. The proportion was above the regular parameter in the other years, but below the good parameter, presenting the best result in 2009 (73%).
DISCUSSION
To understand the global behavior of leprosy in the municipality of Governador Valadares, we present the indicators for monitoring and evaluating the disease between 2007 and 2010, complementing the data published by Morais (2010) for 2001-2006. Comparing the number of cases diagnosed between 2007 and 2010 with the data from Morais (2010), there was a decrease over the years, since the number of cases was higher between 2001 and 2006 (Table 2). However, the proportions observed between these periods are similar: women predominate in a higher percentage, except in 2010 (Table 2); the rate of multibacillary cases remains close to 40%, as in the period studied by Morais (2010), but diverging from the study by Lana et al. (2002) for the period from 1990 to 2000, in which the percentage of MB cases was 60%. The proportional distribution across the selected age groups (adults and under 15 years old) follows the groups of greatest epidemiological importance in the years studied by Morais (2010), which are similar to those of this period. In this context, this endemic disease mainly affects people of economically active age, which underlines its socioeconomic relevance, and the involvement of children is of particular concern. Indeed, this last fact is attributed to large community bacillary loads and the weakness of control actions (Chen et al., 2000; Norman et al., 2004). Figure 1 shows that the municipality remains categorized as hyperendemic throughout the period, with an apparent decline in the number of diagnosed cases. Morais (2010) highlighted the high detection in 2002 and 2004, when there were intense efforts to decentralize leprosy control actions to the Family's Health Program teams. Table 2 and figures 1 and 2 show the detection coefficients of new cases in children under 15 years old in the same place and period, reinforcing that the endemic disease still has a large magnitude. In all the years studied, detection in this age group remains at "hyperendemic" or "very high" levels, pointing to active transmission in the communities. Figure 3 shows that the proportion of cases cured in the cohorts, an indicator of the Pact for Life (Brazil, 2010), had results close to those recommended as good (90%). This indicator gives leprosy visibility as a health priority, since the diagnosed cases are expected to be managed appropriately toward the desired outcome (bacteriological cure). Finally, figure 4 shows the findings for the contact-monitoring indicator, the proportion of contacts examined among the cured. This panorama is worrying, since it remained below the recommended level (Brazil, 2010). This finding may indicate the need for decentralization, problems in the registration of the exam, and even a lack of completeness of the actual surveillance. Services are expected to be organized for the active search for cases among household contacts, who probably have a higher risk of becoming ill (Fine et al., 1997; Matos et al., 1999; Jain et al., 2002; Brazil, 2010). Notably, the operational indicators point, for the most part, to consistent performance, except in the examination of contacts. This situation is expected, given that the Reference Center for Leprosy diagnoses is located in the central part of the city, which may explain the below-recommended proportion of contacts examined: the farther the patient and family are from the service, the more difficult it is to capture them and establish a bond.
In 2003, there was no structured training; however, in 2004, a joint effort was made in the same way as the 2002 training (in loco), with the difference that, after the theoretical approach, the practice was directed toward examining contacts from previous years. From 2005 to 2008, Credenpes, a reference center for endemic diseases, conducted training focusing primarily on doctors and nurses. In 2007, the action involved community agents and, in some communities, active case-finding. In 2009 there was no training for the teams, and in 2010 some groups were trained in the traditional way only in the last quarter. Professional turnover, especially among doctors, is an ordinary reality in the family health environment and is described as detrimental to the sustainability of leprosy control actions (Pimentel et al., 2004; Barbosa et al., 2008; Moreno et al., 2008). Thus, professionals who were trained were, a few months later, no longer working in the municipality. Finally, at the central level, the implementation of activities under the Primary Care Master Plan in 2009 and 2010 made the family health training agenda incompatible and also contributed to this discontinuity.
Low temperature modifies seedling leaf anatomy and gene expression in Hypericum perforatum
Hypericum perforatum, commonly known as St John's wort, is a perennial herb that produces the anti-depression compounds hypericin (Hyp) and hyperforin. While cool temperatures are known to increase plant growth, Hyp accumulation and transcript profiles, the accompanying alterations in leaf structure and in the expression of genes specifically related to Hyp biosynthesis remain unresolved. Here, leaf micro- and ultra-structure is examined, and candidate genes encoding for photosynthesis, energy metabolism and Hyp biosynthesis are reported based on transcriptomic data collected from H. perforatum seedlings grown at 15 and 22°C. Plants grown at the cooler temperature exhibited changes in macro- and micro-leaf anatomy, including thicker leaves and an increased number of secretory cells, chloroplasts, mitochondria, starch grains, thylakoid grana, osmiophilic granules and hemispherical droplets. Moreover, genes encoding for photosynthesis (64 genes) and energy (35 genes) as well as Hyp biosynthesis (29 genes) were differentially regulated at the altered growing temperature. The anatomical changes and gene expression are consistent with the plant's ability to accumulate enhanced Hyp levels at low temperatures.
Introduction
Hypericum perforatum L. (St John's wort) is a perennial herb widely distributed in Europe, Asia, Northern Africa and North America (Bagdonaitė et al., 2010). Aerial parts contain the metabolites hypericin (Hyp) and hyperforin, which are used in traditional medicine as anti-depression, anti-viral, anti-microbial and anti-tumor agents, as well as other plant constituents such as flavonoids, tannins and volatile oils (Barnes et al., 2001; Napoli et al., 2018).
St John's wort has traditionally been used as an external anti-inflammatory and healing remedy for the treatment of swellings, wounds and burns. It has recently attracted renewed interest due to new and important therapeutic applications (Bombardelli and Morazzoni, 1995; Nahrstedt and Butterweck, 1997; Erdelmeier, 1998). The species is characterized by the presence of different types of secretory structure: translucent glands or cavities, black nodules and secretory canals (Ciccarelli et al., 2001). The frequency and diversity of these secretory structures are evidence of the intense secretory activity of the species. Previous studies have found that H. perforatum growth and Hyp accumulation are affected by the germplasm source (Couceiro et al., 2006; Soták et al., 2016; Zhang et al., 2021) as well as environmental factors such as light quality (Germ et al., 2010; Najafabadi et al., 2019; Tavakoli et al., 2020; Karimi et al., 2022), drought (Gray et al., 2003), and temperature (Zobayed et al., 2005; Couceiro et al., 2006; Yao et al., 2019; Tavakoli et al., 2020; Kaundal et al., 2021). Lower temperatures can enhance plant growth and Hyp accumulation; indeed, cooler growth conditions can significantly increase plant biomass by inducing gene expression that favors growth (Brunáková et al., 2015; Yao et al., 2019; Tavakoli et al., 2020). Previous studies have found that ca. 750 genes are differentially expressed and 150 genes are involved in plant growth, Hyp biosynthesis and/or environmental responses in H. perforatum seedlings at different temperatures (Su et al., 2021). Based on this previous study, expression levels of low-level candidate genes in St John's wort were quantified by qRT-PCR (real-time quantitative PCR) to further probe the mechanism of seedling performance under a cool temperature. Secretory structures associated with leaf metabolite accumulation were also monitored under the reduced-temperature condition.
Measurement of chlorophyll and carotenoid contents
Chlorophyll and carotenoid contents were measured according to a previous protocol (Yang et al., 2014). Briefly, fresh whole leaves (0.1 g) were finely ground in 80% acetone (v/v, 5 mL) and centrifuged at 5000 r/min and 4°C for 10 min. The supernatant was diluted to 25 mL with 80% acetone (v/v). Absorbance was read at 662, 646 and 470 nm using a spectrophotometer (UV-6100, Shanghai, China). The total chlorophyll concentration was computed as C_T (mg/L) = C_a + C_b, and the carotenoid concentration C_x (mg/L) was computed from the absorbances, where "A_662", "A_646" and "A_470" denote the absorbance at 662, 646 and 470 nm, and the chlorophyll a (C_a) and chlorophyll b (C_b) terms follow the cited protocol. Pigment content (mg/g FW) was then calculated as C × V / M, where "C", "V" and "M" represent the concentration of pigment (mg/L), volume of extract (L) and sample fresh weight (FW, g), respectively.
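For illustration, a minimal sketch of this pigment workflow is given below, assuming Lichtenthaler-type equations; the coefficients in `k` are hypothetical placeholders (the actual values depend on the solvent and wavelengths and should be taken from the cited protocol), while the content conversion C × V / M follows the variable definitions above.

```python
# Minimal sketch of the pigment calculation, assuming Lichtenthaler-type
# equations. The coefficients in `k` are PLACEHOLDERS, not the protocol's
# actual values; replace them with the coefficients from the cited method.

def pigment_contents(a662, a646, a470, volume_l, fresh_weight_g, k):
    c_a = k["a1"] * a662 - k["a2"] * a646                             # chlorophyll a (mg/L)
    c_b = k["b1"] * a646 - k["b2"] * a662                             # chlorophyll b (mg/L)
    c_t = c_a + c_b                                                   # total chlorophyll: C_T = C_a + C_b
    c_x = (k["x1"] * a470 - k["x2"] * c_a - k["x3"] * c_b) / k["x4"]  # carotenoids (mg/L)
    # content (mg/g FW) = concentration (mg/L) x extract volume (L) / fresh weight (g)
    return c_t * volume_l / fresh_weight_g, c_x * volume_l / fresh_weight_g
```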
HPLC quantification of Hyp content
Hyp content was quantified according to previous protocols (Couceiro et al., 2006; Yao et al., 2019). Briefly, air-dried aerial parts of seedlings were finely powdered; samples (0.1 g) were soaked in 95% ethanol (v/v; 20 mL), agitated in the dark at 22°C for 72 h, and centrifuged at 8000 r/min and 4°C for 10 min. The supernatant was evaporated and concentrated under vacuum in a rotary evaporator at 60°C, and the concentrated residue was re-dissolved in methanol (10 mL, chromatography grade). After filtration through a Durapore membrane (0.22 μm; Millipore, Sigma, USA), extracts (10 μL) were analyzed at 590 nm by HPLC (Eclipse Plus C18, 250 mm × 4.6 mm, 5 μm; column temperature 30°C; Agilent 1100 series, Santa Clara, California, USA) with a mobile phase of acetonitrile : 50 mmol/L triethylamine (70:30, v/v) at a flow rate of 1.0 mL/min. Hyp content was evaluated by peak area comparison with a reference standard (hypericin, 56690; Sigma Chemical Co., St. Louis, MO, USA) and calculated as content = W × (Y/Y_0) × V / M1, where "W", "Y_0", "Y", "V" and "M1" represent the standard concentration of Hyp (μg/mL), standard peak area of Hyp (mAU×s), sample peak area (mAU×s), volume of extract (L) and sample DW (g), respectively.
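A minimal sketch of this single-point external-standard calculation, using only the variable definitions above, is shown below; the litre-to-millilitre conversion is an assumption made here to keep units consistent.

```python
# Minimal sketch of external-standard HPLC quantification: the sample
# concentration scales linearly with its peak area relative to the standard.

def hyp_content_ug_per_g_dw(w_std_ug_per_ml, y0_std_area, y_sample_area,
                            volume_l, dry_weight_g):
    conc_ug_per_ml = w_std_ug_per_ml * (y_sample_area / y0_std_area)
    total_ug = conc_ug_per_ml * (volume_l * 1000.0)  # V is given in litres
    return total_ug / dry_weight_g                   # Hyp content, ug per g DW
```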
Leaf ultra-structure analysis
Leaf ultra-structure was observed by transmission electron microscopy (Kornfeld et al., 2007); specific protocols and instrumentation followed previously published literature. Briefly, small pieces (4 mm × 2 mm) of the middle adaxial leaf, avoiding the main veins, were first immersed in glutaraldehyde (2.5%, v/v) at 4°C for 12 h, washed with 0.2 M sodium phosphate buffer (pH 7.4) at 22°C for 15 min (thrice), and then fixed with osmium tetroxide (1%, w/v) at 4°C for 5 h. Second, the fixed samples were washed with the above buffer and dehydrated sequentially in 50% ethanol (15 min), 70% (12 h), 80% (15 min), 90% (15 min) and 100% (15 min), followed by acetone (100%) for 15 min (twice, v/v), a 1:1 (v/v) mixture of acetone and embedding medium for 7 h, and embedding medium (epoxy resin, composed of MNA, Epon-812, DDSA and DMP-30) at 22°C for 12 h. Third, the treated leaves were transferred to an embedding plate, immersed in embedding medium, and dried sequentially at 35°C for 10 h, 45°C for 12 h and 68°C for 48 h. Finally, the embedded samples were sliced (75 nm) with an ultra-microtome (EM UC6, Leica, Germany), stained with uranyl acetate and lead citrate, and observed with a transmission electron microscope (JEM-1230, JEOL Ltd., Japan).
Gene excavation
RNA sequencing data were processed by unigene expression analysis and basic annotation; 1584 high-level expressed genes, including 749 characterized genes and 150 genes involved in plant growth, Hyp biosynthesis and environmental response, were identified with |log2(fold-change)| > 1 in a previously published article (Su et al., 2021). In this study, low-level genes were identified according to the criterion 0.2 < |log2(fold-change)| < 1.0 (Robinson et al., 2009; Love et al., 2014), since low-level genes also play important roles in many biological processes (Maia et al., 2007; Gotor et al., 2010). Differentially expressed genes (DEGs) were annotated against the Swiss-Prot database (https://www.uniprot.org/), and 64 candidate genes (Table S1 and Table 1) involved in photosynthesis, energy and Hyp biosynthesis were identified based on their biological functions.
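As an illustration, a minimal sketch of this two-tier screen is shown below, assuming the DEG results are available as a table; the file name and column names are hypothetical stand-ins.

```python
import pandas as pd

# Minimal sketch of the DEG screen: |log2FC| > 1 for high-level genes
# (Su et al., 2021) and 0.2 < |log2FC| < 1.0 for the low-level genes used here.
# "deg_table.csv" and its column names are hypothetical placeholders.
degs = pd.read_csv("deg_table.csv")            # columns: gene_id, log2_fold_change
lfc = degs["log2_fold_change"].abs()
high_level = degs[lfc > 1.0]
low_level = degs[(lfc > 0.2) & (lfc < 1.0)]    # criterion used in this study
```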
qRT-PCR quantification
Primer sequences for the selected 32 candidate genes (Table S2) were designed using the Primer-BLAST tool in NCBI. The coding sequences (CDS) of the 32 genes are shown in Table S3. Actin (ACT) was selected as the reference gene. Extraction of total RNA, synthesis of first-strand cDNA and PCR amplification were performed using an RNA kit, an RT kit and SuperReal PreMix, respectively. RNA quality was assessed using an ultramicro spectrophotometer (Micro Drop, BIO-DL, Shanghai, China) (Table S4) and integrity was evaluated by 1.0% (w/v) agarose gel electrophoresis (Figure S3). Reverse transcription to generate cDNA followed this protocol: 42°C for 15 min and then 95°C for 3 min, one cycle. PCR amplification followed this protocol: one cycle at 95°C for 15 min, and 35 cycles at 95°C for 10 s, 60°C for 20 s and 72°C for 30 s; melting curve analysis was performed after a 34 s incubation at 72°C. The cDNA and primers were diluted to 100 ng/μL (2 μL) and 10 μM (1.2 μL), respectively, for gene expression analysis. Gene expression was quantified using a LightCycler 96 (Roche, Switzerland). The relative expression level (REL) of each gene at 15°C compared with 22°C (control) was evaluated based on the 2^(−ΔΔCt) method (Willems et al., 2008).
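A minimal sketch of the 2^(−ΔΔCt) calculation, assuming the standard Livak normalization with ACT as the reference gene and the 22°C samples as the calibrator, is shown below.

```python
# Minimal sketch of the Livak 2^(-ddCt) relative-expression calculation,
# normalizing each gene to ACT and calibrating against the 22C control.

def relative_expression(ct_gene_15, ct_act_15, ct_gene_22, ct_act_22):
    d_ct_15 = ct_gene_15 - ct_act_15   # normalize to reference gene at 15C
    d_ct_22 = ct_gene_22 - ct_act_22   # normalize to reference gene at 22C (control)
    dd_ct = d_ct_15 - d_ct_22
    return 2.0 ** (-dd_ct)             # REL > 1 indicates up-regulation at 15C
```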
Statistical analysis
Three biological replicates were performed; SPSS 22.0 software was used for t-test analysis, with P < 0.05 considered significant.
Low temperature increases chlorophyll and carotenoid content
To probe physical and physiological changes in leaves with a change in growth temperature, a series of growth and pigment measurements was performed (Figure S4). These results were consistent with previous reports that low temperature can significantly increase chlorophyll content and plant growth, and subsequently enhance biomass accumulation, in comparison with higher temperatures (22 and 30°C) (Su et al., 2021). Five photosynthesis-encoding genes (i.e., psbA, psbC, ycf4, ycf5 and matK) were up-regulated at 15°C compared with 22 and 30°C, and nine genes encoding chlorophyll a-b binding proteins (i.e., CAB, CAB1, CAB1B, CAB3, CAB3C, CAB96, ELI_PEA, OHP2 and RBCS-C) were up-regulated at 15°C compared with 22°C (Su et al., 2021). The up-regulation of these genes encoding chlorophyll a-b binding, light-induced and light-harvesting complex proteins indirectly indicates that lower temperatures can improve the accumulation of chlorophyll pigments, which successively enhances photosynthesis and plant growth.
Low temperature increases biomass and Hyp content
As shown in Figure 1, there were greater biomass and Hyp content at the lower temperature, with a 1.2-fold increase in whole-seedling DW (Figure 1A) and a 4.5-fold increase in Hyp content in aerial parts at 15 compared with 22°C (Figure 1B). Representative chromatograms of the reference standard (50 mg/mL, injection volume 10 μL) and of aerial-part extracts (injection volume 10 μL) from seedlings grown at 15 and 22°C are shown in Figure 1C. Previous studies on H. perforatum have found that cooler temperatures can enhance Hyp accumulation. Specifically, there was a 1.4-fold increase in Hyp content on a DW basis at 15 compared with 22°C after seedlings were treated for 45 days, and a maximum Hyp content on a DW basis at 4 and 8°C compared with 16 and 25°C, with about a 10-fold increase at 4 compared with 25°C after seedlings were treated for 7 days (Tavakoli et al., 2020). These findings further demonstrate that Hyp accumulation in H. perforatum can be significantly enhanced by cooler temperatures. In fact, extensive experiments have demonstrated that bioactive compounds can be increased at cooler temperatures, such as podophyllotoxin content in Sinopodophyllum hexandrum at 15 compared with 22°C, ferulic acid content in Angelica sinensis at 15 compared with 22°C (Dong et al., 2022), and total ginsenoside content in Panax ginseng at 10 compared with 25°C (Wang et al., 2019).

Low temperature changes leaf micro-structure

Leaf structure in H. perforatum is known to respond to light and temperature conditions (Briskin and Gawienowski, 2001; Walker et al., 2001; Cirak et al., 2007; Stoyanova-Koleva et al., 2015; Su et al., 2021). Since Hyp biosynthesis occurs in dark glands (DG) and the secretory cells (SC) are associated with Hyp accumulation (Lv and Hu, 2001; Zobayed et al., 2006; Kornfeld et al., 2007; Rizzo et al., 2019), these structures were monitored. Increases in DG size and SC number of 1.2-fold and 1.9-fold, respectively, were observed for plants growing at the lower temperature (Figures 3B, C). Studies with Camellia oleifera grown at 15 compared with 16°C exhibited a 1.2-fold increase in leaf thickness (Hu et al., 2016). Previous studies on H. perforatum have found that the number of DG is higher at 15°C than at 22°C (Su et al., 2021). Thus, the larger DG size and greater SC number in this study further support previous findings that Hyp accumulates to a greater level at 15°C than at 22°C (Su et al., 2021).
Low temperature changes leaf cell ultra-structure
An increased number of chloroplasts (Ch), mitochondria (Mi), starch grains (S), thylakoid grana (TG) and osmiophilic granules (OG), together with changes in the vacuole, were observed at the lower temperature (Figure 4). Changes in the number of Ch, Mi, S, TG and OG in response to abiotic stresses such as temperature have been observed in other plants (Zhang et al., 2005; Li et al., 2020). Here, the increase in the number of Ch, Mi, S, TG and OG may be a low-temperature response for energy acquisition and utilization, since previous studies on H. perforatum have reported that cooler temperature can enhance plant growth (Su et al., 2021). The hemispherical droplet (HD), which seems to adhere to membranes or is somehow trapped in a hemispherical shape, may be associated with Hyp biosynthesis (Kornfeld et al., 2007). Here, the increase in the size of HD may play a role in enhancing Hyp biosynthesis at the lower temperature.
FIGURE 2
Cross-sectional micro-structure of leaves of seedlings grown at 15 and 22°C. Ch, chloroplast; DG, dark gland; LE, lower epidermis; LV, leaf veins; PC, palisade cell; SC, secretory cell; ST, spongy tissue; UE, upper epidermis.

Regarding the specific biological functions of the selected 8 genes related to the chloroplast: CAB13, CAB2R, LHCB1.2 and CAP10A encode light-harvesting chlorophyll a-b binding proteins (LHCs) that function as light receptors and play indispensable roles in capturing and delivering excitation energy to the photosystems (Zou et al., 2020); PGRL1A and PGR5 are involved in electron flow (Munekage et al., 2002; Hertle et al., 2013); and Os01g0913000 and TRM1 are involved in various redox reactions (Capitani et al., 2000; Glauser et al., 2004). For the selected genes associated with the thylakoid membrane, CURT1B determines thylakoid architecture by inducing membrane curvature, THF1 is required for the formation of mature thylakoid stacks from normal vesicles (Wang et al., 2004), At3g63540 is involved in the folding and proteolysis of thylakoid proteins (Peltier et al., 2002), and TERC is involved in thylakoid formation (Kwon and Cho, 2008; Schneider et al., 2014). For the four genes related to the mitochondrion, NMAT1 and NMAT2 are required for mitochondrial biogenesis and the regulation of fundamental metabolic pathways during early developmental stages (Nakagawa and Sakurai, 2006), SPS3 is involved in ubiquinone-9 biosynthesis from solanesyl diphosphate (Ducluzeau et al., 2012), and EMB2247 is involved in the formation of carbon-oxygen bonds in aminoacyl-tRNA (Berg et al., 2005). The up-regulation of these genes involved in photosynthesis and energy should allow H. perforatum seedlings to grow robustly and adapt to cooler temperatures.
FIGURE 3
Changes of leaf thickness (A), diameter of dark gland (B) and number of secretory cells (C) for seedlings grown at 15 and 22°C. All values are means with their standard deviations (n = 10). The "*" represents a significant difference (P < 0.05) between 15 and 22°C.
Expression level of genes in green tissue
The relative expression of selected genes in the green tissue was observed to be differentially regulated, with up-regulation of 1.6-, 1.6-, 1.1-, 1.1-, 1.4- and 1.1-fold for the 6 genes HXK1, PFP-ALPHA, CUT1, Acot9, AIM1 and KCR1, and down-regulation of 0.8- and 0.8-fold for the 2 genes PFK2 and ENO1, respectively, at 15°C compared with 22°C (Figure 7). For the selected genes involved in glycolysis, PFK2 is involved in the formation of fructose 1,6-bisphosphate by phosphorylating D-fructose 6-phosphate (Mustroph et al., 2007), ENO1 catalyzes the formation of phosphoenolpyruvate from 2-phosphoglycerate (Allen and Whitman, 2021), and HXK1 and PFP-ALPHA are involved in the formation of D-glyceraldehyde 3-phosphate and glycerone phosphate (Todd et al., 1995; Giese et al., 2005). For the genes involved in fatty-acid metabolism, CUT1 participates in both the decarbonylation and acyl-reduction wax synthesis pathways (Fiebig et al., 2000), Acot9 is involved in the formation of free fatty acids and coenzyme A by hydrolysis of acyl-CoAs (Poupon et al., 1999), AIM1 is involved in the peroxisomal beta-oxidation pathway for the biosynthesis of benzoic acid (Bussell et al., 2014), and KCR1 is responsible for the first reduction step in very-long-chain fatty acid synthesis (Beaudoin et al., 2009). The up-regulation of these genes in green tissue at the cooler temperature is likely to provide abundant acetyl-CoA and malonyl-CoA as precursors for downstream Hyp biosynthesis.

FIGURE 5
The expression level of genes involved in chloroplast, thylakoid and mitochondrion for seedlings grown at 15 versus 22°C, as determined by qRT-PCR (n=3). Columns highlighted in blue represent gene up-regulation and red represents gene down-regulation. The red dotted line differentiates up-regulation (>1) and down-regulation (<1) at 15°C compared with 22°C (Control).
Expression level of genes in dark gland
The relative expression of selected genes in dark glands was also observed to be differentially regulated, with up-regulation of 1.5-, 1.5- and 1.2-fold for the 3 genes PKSA, FGRAMPH1_01T20223 and At4g20800, and down-regulation of 0.9-, 0.9- and 0.5-fold for the genes PKSG5, MALD1 and STH-2, respectively, at 15°C compared with 22°C (Figure 8). Both PKSA and PKSG5 encode polyketide synthases that are involved in the condensation of malonyl-CoA units (Mizuuchi et al., 2008; Flores-Sanchez et al., 2010), FGRAMPH1_01T20223 is predicted to encode TER1, which participates in the formation of emodin anthrone (Kong et al., 2013), MALD1 and STH-2 are predicted to encode POCP, and At4g20800 encodes BBE-like 17, which catalyzes the oxidation of aromatic allylic alcohols (Daniel et al., 2015). The up-regulation of these genes (PKSA, FGRAMPH1_01T20223 and At4g20800) in dark glands at a cooler temperature is predicted to play a role in inducing Hyp biosynthesis and accumulation. In this study, the two genes CHS and CHS1 were not up-regulated, and the significant down-regulation (0.63-fold) of CHS1 might indicate that the reduced temperatures negatively affect phenylpropanoid biosynthesis. Whether this effect is directly connected to the up-regulation of the Hyp biosynthetic pathway via redirection of the pool of 4-coumaroyl-CoA and malonyl-CoA precursors remains to be established. This will require quantitative phenolic profiling by LC-MS combined with flux analysis, but is beyond the scope of this manuscript.

FIGURE 6
Key genes (red color) mapped in the Hyp biosynthetic pathway from glucose to acetyl- and malonyl-CoA in green tissue (green frame), and from acetyl- and malonyl-CoA to Hyp in dark gland (dark frame). PKS, polyketide synthase; OKS, octaketide synthase; TER, thioesterase; POCP, phenoloxidative coupling protein; BBE, berberine bridge enzyme. The Hyp biosynthetic pathway is drawn from previous literature (Rizzo et al., 2019; Rizzo et al., 2020; Su et al., 2021).

FIGURE 7
The expression level of genes involved in Hyp biosynthesis in green tissue for seedlings grown at 15 versus 22°C, as determined by qRT-PCR (n=3). Columns highlighted in blue represent gene up-regulation and red represents gene down-regulation. The red dotted line differentiates up-regulation (>1) and down-regulation (<1) at 15°C compared with 22°C (Control).
Conclusions
In Hypericum perforatum, low temperature changes cell structure (e.g., dark glands, secretory cells and hemispherical droplets) associated with plant growth, and regulates the expression of genes (e.g., BBE, POCP and TER1) associated with Hyp biosynthesis in leaf green tissue and dark glands. These findings not only further confirm that low temperature enhances plant growth and Hyp biosynthesis (Tavakoli et al., 2020), but also complement previous transcriptomic analysis (Su et al., 2021). Moreover, they provide useful references for guiding H. perforatum cultivation in the field or greenhouse and in cell and tissue culture, and for revealing the mechanism of Hyp biosynthesis so as to increase Hyp accumulation.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Author contributions
HS: data curation and investigation. LJ: resources. ML: conceptualization, project administration and writing - original draft. PP: writing - review and editing. All authors contributed to the article and approved the submitted version.

FIGURE 8
The expression level of genes involved in Hyp biosynthesis in dark gland for seedlings grown at 15 versus 22°C, as determined by qRT-PCR (n=3). Columns highlighted in blue represent gene up-regulation and red represents gene down-regulation. The red dotted line differentiates up-regulation (>1) and down-regulation (<1) at 15°C compared with 22°C (Control).
Foreign Body Response to Intracortical Microelectrodes Is Not Altered with Dip-Coating of Polyethylene Glycol (PEG)
Poly(ethylene glycol) (PEG) is a frequently used polymer for neural implants due to its biocompatibility. As a follow-up to our recent study that used PEG for stiffening flexible neural probes, we have evaluated the biological implications of using devices dip-coated with PEG for chronic neural implants. Mice (wild-type and CX3CR1-GFP) received bilateral implants within the sensorimotor cortex, one hemisphere with a PEG-coated probe and the other with a non-coated probe, for 4 weeks. Quantitative analyses were performed using biomarkers for activated microglia/macrophages, astrocytes, blood-brain barrier leakage, and neuronal nuclei to determine the degree of foreign body response (FBR) resulting from the implanted microelectrodes. Despite its well-known acute anti-biofouling property, we observed that PEG-coated devices caused no significantly different FBR compared to non-coated controls at 4 weeks. A repeat experiment using a CX3CR1-GFP mouse cohort showed similar results. Our histological findings suggest that acute delivery of PEG has no significant impact on the FBR in the long term, and that the temporary increase in the device footprint due to the PEG coating also has no significant impact. Large variability seen within the same treatment group also implies that avoiding large superficial vasculature during implantation is not sufficient to minimize inter-animal variability.
INTRODUCTION
Implantable neural probes hold the promise of providing functional recovery to individuals suffering from traumatic injuries or neurological disorders (Taylor et al., 2002; Hochberg et al., 2012). A major dilemma of using such devices is that their functionality degrades over time, which eventually leads to the inability to discriminate relevant neural signals from background noise (Liu et al., 1999; Vetter et al., 2004; Williams et al., 2007). In an attempt to resolve the biological aspects of this issue, researchers have modulated multiple factors, including device architecture/material type/flexibility (Seymour and Kipke, 2007; Karumbaiah et al., 2013; Xie et al., 2015; Lee et al., 2017a,b; Luan et al., 2017), bioactive coatings (Pierce et al., 2009; Azemi et al., 2011; Kozai et al., 2012a; Rao et al., 2012), and drug delivery schemes (Shain et al., 2003; Zhong and Bellamkonda, 2007). In parallel to these engineering mitigation strategies, there are ongoing attempts to discover the precise biotic and abiotic mechanisms of implant failure in order to develop strategies to improve the functional lifetime of neural implants (Barrese et al., 2013, 2016). One of the most common, albeit indirect, methods for evaluating the effectiveness of these intervention strategies is looking at markers of the FBR. Typically, implanted microelectrodes in the brain instigate a neuro-inflammatory cascade that involves infiltration of plasma proteins, monocytes, macrophages, and leukocytes from the breached blood-brain barrier (BBB), activation and recruitment of microglia and macrophages to the injury site, activation of astrocytes to initiate astrogliosis, and a loss of neurons near the implant (Edell et al., 1992; Biran et al., 2005; Prasad et al., 2012; Potter-Baker et al., 2014; Jorfi et al., 2015). A severe FBR generally results in a large neuronal loss in the long term (Prasad et al., 2012; Potter et al., 2013). Thus, reducing the FBR has been considered essential to achieve long-term functionality of neural implants.
A number of neural-interface strategies have used the biocompatible polymer poly(ethylene glycol) (PEG). Examples include using PEG as an adhesive to aid insertion (Gage et al., 2012), a stiffening agent for inserting flexible probes (Felix et al., 2013;Lecomte et al., 2015;Lee et al., 2017a), or as a vector for molecules to deter protein adsorption (Kozai et al., 2012a;Gutowski et al., 2014;Sommakia et al., 2014b). While the exact mechanisms are unknown, there are studies suggesting the use of PEG as a standalone treatment for spinal cord injury (SCI) (Luo et al., 2002;Estrada et al., 2014), traumatic brain injury (TBI) (Liu et al., 1989;Koob et al., 2005), and peripheral nerve damage (Britt et al., 2010). It was reported that PEG reduces oxidative stress and repairs damaged cell membranes which contribute to enhanced anatomical and functional recovery over a few hours (Liu et al., 1989;Luo et al., 2002;Koob et al., 2005) to weeks or months (Britt et al., 2010;Estrada et al., 2014). We have previously evaluated tissue responses to PEG-coated intracortical devices and found that PEG prevents glial cell adsorption but does not alter neuronal responses in the short-term in an in vitro setting (Sommakia et al., 2014a,b). However, the consequence of using PEG for brain-implanted microelectrodes in the chronic phase has not been investigated. Moreover, our recent study with flexible neural probes employed PEG as the stiffener for insertion (Lee et al., 2017a), and there remains a need to evaluate if any unforeseen effect of PEG has confounded the outcome of the study.
In this paper, we evaluated the brain's biological response to PEG-coated silicon probes and non-coated silicon probes at 4 weeks using two different mouse strains: wild-type (WT, C57BL/6) and CX3CR1-GFP (B6.129P-CX3CR1−/−). CX3CR1-GFP, a transgenic mouse model that has its fractalkine receptor replaced with green fluorescent protein (GFP) (Jung et al., 2000), expresses GFP in immune cells including microglia and has been instrumental in studying microglial activity both in vivo and ex vivo in normal and injury models (Nimmerjahn et al., 2005; Kozai et al., 2012b, 2016; Lee et al., 2017a). However, the consequence of this genetic manipulation of microglial cells on the overall tissue response to neural implants remains unknown. We first conducted the PEG-coat vs. no-coat experiment on WT mice, then replicated it on CX3CR1-GFP mice to verify that the observations from WT mice were consistent and to identify any differences between WT and CX3CR1-GFP.
Probe Preparation and PEG-Coating
Planar "Michigan type" silicon microelectrodes, 249 µm wide and 15 µm thick, were used in this study (GP_1x16_249, NeuroNexus, Ann Arbor, MI; same dimension used in Lee et al., 2017a,b). Probes were "raw, " meaning that no electrical backbone was attached to efficiently conduct histological assessment. Four kilodaltons PEG was chosen for its superior anti-biofouling property (Su et al., 2009). Probes and PEG (4 kDa MW, Sigma, St. Louis, MO) were autoclaved for 30 min at 120 • C 1 day prior to surgery. In an aseptic setting, PEG was melted on a hot plate at 80 • C and the probes were dip coated in PEG for 5 s. After allowing the probes to air dry they were sealed in a sterilized container until surgery.
A separate set of probes coated with PEG was taken for profilometry thickness measurements (Alpha Step 500, Tencor, Milpitas, CA). The maximum thickness of the PEG coatings was 46.05 µm (±4.86 µm, s.e.m., N = 9) on one side of the broad surface. Figures 1A,B show pictures of non-coated and PEG-coated probes.
Surgical Procedures
All surgeries and animal experiments were performed in accordance with the University of Florida Institutional Animal Care and Use Committee (IACUC) guidelines.
Procedures are almost identical to our previous study (Lee et al., 2017a). Briefly, 2-3 month old (18-28 g) WT mice (C57BL/6, N = 4) and CX3CR1-GFP mice (B6.129P-CX3CR1−/−, Stock #005582, The Jackson Laboratory, Bar Harbor, ME, N = 4) were bilaterally implanted with a PEG-coated device in one hemisphere and a non-coated device in the other hemisphere. Mice were anesthetized with isoflurane (3% induction, 1.5% maintenance) and kept warm using a heating pad. Vital signs were monitored throughout the surgery with a pulse oximeter (MouseOx, Kent Scientific, Torrington, CT). After a small portion of scalp was removed, a craniotomy above each brain hemisphere was performed using a microdrill. A probe was mounted onto a piezoelectric actuator (PiLine M-663, Physik Instruments, Karlsruhe, Germany) and inserted into one hemisphere 1.2 mm from the cortical surface at 100 mm/s. Sensorimotor cortical regions (∼1 mm posterior and 1.5 mm lateral to bregma) were targeted; however, small deviations occurred to avoid large surface vasculature. This procedure was then repeated on the contralateral hemisphere. After insertion in both hemispheres, craniotomies were covered with silicone elastomer (Kwik-Sil, WPI, Sarasota, FL) followed by dental acrylic (Fusio Liquid Dentin, Pentron, Orange, CA) to secure the devices. Post-operative care involved applying antibiotic ointment (Actavis, NC) around the cut areas and maintaining body temperature. Meloxicam (Norbrook, United Kingdom) was administered pre-operatively and up to 3 days post-surgery to minimize discomfort.
Tissue Preparation
After 4 weeks of implantation, mice were deeply anesthetized with 5% isoflurane and transcardially perfused with 20 mL of ice-cold phosphate buffered saline (PBS, pH 7.4) followed by 20 mL of ice-cold 4% paraformaldehyde (PFA, pH 7.4) solution.

Heads were then isolated and soaked in 4% PFA for 24 h at 4°C for post-fixation. After rinsing with PBS, brains were carefully extracted from the head and cryopreserved in 30% sucrose in PBS for 24-48 h at 4°C. Brain tissues were flash frozen in 2-methylbutane at −40°C for 2 min and equilibrated to −20°C before cryosectioning. Tissues were lightly embedded with Optimal Cutting Temperature (OCT) medium (Sakura Finetek, Alphen aan den Rijn, The Netherlands), then horizontally sectioned into 25 µm slices and transferred to electrostatic adhesive glass slides (Superfrost Plus, Thermo Fisher Scientific, Waltham, MA). Harvested slices were kept at 4°C for no longer than 1 week before immunolabeling.
Immunohistological Processing
Tissue slices were allowed to sit at room temperature (RT) for 30 min for secure adhesion onto slides. Tissue slices were rinsed three times with PBS for 5 min each to remove residual OCT and then blocked in blocking buffer (4% goat serum, 0.3% Triton-X in PBS) for 2 h at RT. Primary antibodies diluted in blocking buffer were applied and incubated for 20-24 h at 4°C. Following primary incubation, tissue slices were washed five times with PBS for 5 min each to remove any unbound primaries. Corresponding secondary antibodies diluted in blocking buffer were then applied to the tissue slices and incubated for 2 h at RT. After five subsequent 5-min washes with PBS, tissue slides were cover-slipped using VectaShield mounting medium (H-1000, Vector Lab, Burlingame, CA).
Tissue slices were prepared in two sets that alternate every 25 µm in depth. The first set of slices were stained with CD68 and IgG. The second set of slices were stained with GFAP and NeuN. Note that for assessing BBB leakiness, the secondary antibody (i.e., anti-mouse IgG 555) was directly applied to the tissue without a primary. Antibodies used in this study are listed in Table 1.
Imaging and Quantitative Analysis
Fluorescence images were taken with a Zeiss LSM 710 laser scanning confocal microscope (Carl Zeiss, Jena, Germany). A 2 × 2 tile was captured using a 10X objective to obtain a wider view of the device-tissue interface for quantitative analysis. To minimize depth-dependent variability (Woolley et al., 2013), two slices per sample, roughly from 450 to 600 µm down the cortical column, were used for quantitative analysis. A 20X objective was used for representative qualitative figures. A maximum intensity projection was used for 20X stacks to span 10 µm. Qualitative figures presented in this paper were contrast-enhanced and pseudo-colored for visual clarification.
We used Minute v1.5 (Potter et al., 2012; Potter-Baker et al., 2014) for the quantitative analysis of immunolabels. Briefly, an ellipse was drawn to define the contour of the device track. Exclusion areas were drawn to limit the analysis to cortical areas of the device-containing hemisphere. For each 5 µm concentric ring expanding from the defined ellipse, the average label intensity and the area of the ring were calculated. The strength of Minute v1.5 is that it allows us to utilize the entirety of an image, which is useful for reducing potential bias from using only a portion of the image for quantification. An illustration is depicted in Figure 1C.
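As an illustration, a minimal sketch of this type of concentric-ring profiling is given below; the function name, parameters, and the use of a Euclidean distance transform are assumptions for illustration, not Minute v1.5's actual implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Minimal sketch of a concentric-ring intensity profile: mean label intensity
# in successive 5 um rings expanding outward from a mask of the device track.
# All names here are hypothetical; this is not Minute v1.5's implementation.

def ring_profile(image, track_mask, um_per_px, ring_um=5.0, n_rings=40, exclude=None):
    dist_um = distance_transform_edt(~track_mask) * um_per_px  # distance from track edge
    valid = np.ones(track_mask.shape, dtype=bool) if exclude is None else ~exclude
    means = []
    for i in range(n_rings):
        ring = (dist_um > i * ring_um) & (dist_um <= (i + 1) * ring_um) & valid
        means.append(image[ring].mean() if ring.any() else np.nan)
    return np.array(means)
```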
Statistical Analysis
Statistical analyses were performed using Prism 7.00 statistical analysis software (GraphPad Software, La Jolla, CA). A two-way ANOVA was performed taking coating scheme (PEG vs. No-Coat) and binned interval as the two independent variables. Then, we blocked each animal to eliminate inter-animal variability and ran paired t-tests to directly compare, within each interval, a PEG-coated device implanted hemisphere with its contralateral non-coated device implanted hemisphere. Tests were run on the WT and CX3CR1-GFP cohorts independently to verify the consistency of the results.
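A minimal sketch of the per-interval paired comparison is given below, assuming each animal contributes one PEG-coated and one contralateral non-coated value per binned interval; the data layout is hypothetical.

```python
from scipy import stats

# Minimal sketch of the animal-blocked comparison: within each binned interval,
# a paired t-test of PEG-coated vs. contralateral non-coated hemispheres.
# `peg` and `nocoat` map interval label -> list of per-animal values (same order).

def paired_interval_tests(peg, nocoat, alpha=0.05):
    results = {}
    for interval in peg:
        t, p = stats.ttest_rel(peg[interval], nocoat[interval])
        results[interval] = {"t": t, "p": p, "significant": p < alpha}
    return results
```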
Microglia/Macrophage Characteristics
To identify whether the observed immune responses correspond with previously published work (Biran et al., 2005; Kozai et al., 2012b), we took high magnification images of microglia in the proximity of the implant to examine their morphology. Figure 2A shows CD68, GFP, and merged images of a device-tissue interface taken with a 20X objective. It can be seen in Figures 2A-D that activated microglia/macrophages extended their processes toward the implant track (Figure 2C) and/or fused together, making them harder to detect as individual cells (Figure 2D). They looked largely different from ramified microglia in a resting state (Figure 2B). This microglial activity was also identified by co-localization of GFP with CD68 immunoreactivity. Of the microglia co-localized with CD68 near the implant track, no particular morphological differences were noted between the PEG-coat and no-coat groups (data not shown). Figures 2E,F are examples of microglial adsorption to PEG-coated and non-coated devices, respectively. It is intuitive and supported by previous studies that PEG prevents glial cell adsorption to the device in the short term, but here only a few devices were retained intact and no statistical conclusion could be drawn at 4 weeks. It appeared that microglia did not preferentially adhere to a specific region of the device, such as the electrode sites.
Quantitative Analysis of Immunolabels
There was no significant interaction effect between the coating and animal strain (p > 0.05). Hence we mainly compared the effect of PEG coating with WT mice, and then repeated with CX3CR1-GFP mice. Quantitative analysis of normalized CD68 fluorescence intensity vs. distance from the implant is depicted in Figure 3. For both WT and CX3CR1-GFP mice, CD68 was strongest in the 0-50 µm interval and gradually decreased to the background level in the outer regions. No statistical significance was found between PEG-coat vs. no-coat with the ANOVA test (p > 0.05). Paired t-tests also did not show statistical difference in any of the binned intervals (p > 0.05). Although statistically not significant, PEG-coated devices generally had higher intensity of CD68 than non-coated devices.
IgG is known to correlate well with markers for activated microglia, such as CD68, as it is indicative of chronic BBB leakage, which is largely affected by the perturbation of the BBB by inflammatory molecules released by activated glial cells (Potter et al., 2013; Saxena et al., 2013). However, it has also been shown that IgG can fluctuate strongly due to its dependence on nearby vasculature (Lee et al., 2017a). In Figure 4 we see that IgG subsides to the background level, but in close proximity to the implant the intensity is highly variable. We also see the same trend seen with CD68, in that PEG-coated devices show a higher IgG level than the non-coated devices, except for two of the binned intervals in CX3CR1-GFP mice. No statistical significance between PEG-coat and no-coat was found in either WT or CX3CR1-GFP mice in any of the binned intervals (p > 0.05).
Compared to other immunolabels used in this study, the mean values of GFAP were stable, especially at 0-50 µm, as indicated by relatively small error bars. However, no statistical difference in normalized GFAP fluorescence intensity was found between PEG-coat and no-coat with the two statistical tests, as seen in Figure 5. The statistical non-significance was also retained in neuronal density, as seen in Figure 6. NeuN counts were more variable in the 0-50 µm bin but gradually became stable at distant intervals. Similar to the trend seen from CD68 and IgG, the mean neuronal densities of PEG-coat were lower than those of no-coat, without statistical significance. The background neuronal densities were 2347 (±94, s.e.m., N = 8) cells/mm² for GFP mice and 2376 (±53, s.e.m., N = 8) cells/mm² for WT mice, which are in the range of previously reported neuronal densities in healthy mouse cortex (Schuz and Palm, 1989).
Variability of FBR
Even within the same treatment group, vastly different tissue responses were observed. PEG-coated or non-coated devices, regardless of whether implanted in CX3CR1-GFP mice or WT mice, caused either a mild response or a severe response, as can be seen in Figure 7. In mild responses, CD68, IgG, and GFAP fluorescence intensities were relatively weaker and localized only to the implant track. In severe responses, the fluorescence intensities were stronger and largely diffused to the outer intervals. Mild responses were also manifested by a relatively higher neuronal density near the implant track than that of severe responses. When there was a large implant track, it always accompanied a severe response. However, a severe response did not always indicate that its implant track was large (e.g., the CX3CR1-GFP PEG-coat severe response in Figure 7).
Microglial Response
As described in a number of reports, activated microglia play a key role in the neuro-inflammatory response to foreign objects by secreting various pro-inflammatory and cytotoxic factors which propagate the neuro-inflammatory cascade (Nimmerjahn et al., 2005; Kozai et al., 2012b). We see in Figures 2A-D that our samples reflect morphologic changes upon activation that have been well established in this field. Kozai et al. showed that microglia, presumably surveying the local environment, extend their processes toward the foreign object at the onset of the phagocytic state (Kozai et al., 2012b). Microglia then migrate toward the device surface and create a microglial sheath. We saw the extension of processes at 4 weeks post-implantation, indicating that there was ongoing inflammation near the implant track. The GFP in Figure 2D, which highly co-localizes with CD68 activity, indicated maximally activated microglia or macrophages. This is supported by studies showing a large microglial response near the implant in the early chronic phase and a subsided response later in the chronic phase.
Previous acute studies have revealed that PEG prevents protein adsorption to silicon or polymer based probes by creating a hydrophilic layer at the device-tissue interface (Gutowski et al., 2014; Sommakia et al., 2014a,b). An in vitro study by Gutowski et al. showed that microglia are seldom found on PEG-decorated devices at 24 h post-implantation (Gutowski et al., 2014). Another in vitro study by Sommakia et al. showed that a significant difference in microglial population exists between PEG-coated and non-coated devices up to 50 µm away at 7 days, although the difference was not as dramatic as that observed by Gutowski et al. at 24 h (Sommakia et al., 2014b). In this study, it appears that fewer microglia are attached to the surface of the device at 4 weeks on qualitative examination (Figures 2E,F), but the intensity profiles around the device suggest that there was no significant difference (Figure 3). The difference in results is mainly due to the longer implantation time, but other factors could also have played a role, such as the in vivo environment accelerating removal of the PEG layer surrounding the device and/or scraping off of the PEG layer during the insertion process. By 4 weeks, the 4 kDa PEG layer would have long been gone, although the dissolution and degradation time may vary depending on the molecular weight and the surrounding medium (Glastrup, 1996). Note that our work herein focuses on the impact of acute delivery of PEG, which presumably alters the onset of the inflammatory cascade, on the chronic FBR.
Impact of PEG-Coating
Conflicting expectations may exist regarding the use of PEG coatings. It could be that (1) the ameliorative effect of PEG reduces the onset of the FBR and subsequently the chronic FBR, or (2) larger mechanical damage caused by the thick layer of coating exacerbates the chronic FBR.
PEG has been shown to be biocompatible and provide a beneficial effect on traumatic injury sites. PEG is not readily degraded upon hydration and interacts with the surrounding tissues until it dissolves away (Glastrup, 1996). Studies have reported that PEG reconstitutes damaged cell membranes which in turn promotes axonal regeneration in injured spinal cords (Luo et al., 2002;Estrada et al., 2014) or reduces traumatic or ischemic cell loss in the brain (Liu et al., 1989;Koob et al., 2005). Moreover, acute in vivo/in vitro tissue responses (Gutowski et al., 2014;Sommakia et al., 2014b) and impedance measurements (Sommakia et al., 2014a) suggest that PEG possesses anti-biofouling properties which prevent glial cell adsorption. The hydrated PEG layer can also work as a short term diffusion barrier. Despite the reported benefits of PEG, no significant difference in FBR was found between PEG-coat vs. no-coat cohorts in this study. A subtle difference existed in that PEG-coated devices had greater FBR than non-coated devices, especially in the 0-100 µm range. However, this difference was not statistically significant. Since PEG is presumed to be gone long before 4 weeks, and with no significant difference in FBR at 4 weeks, we expect little deviation from this trend beyond 4 weeks.
It is unclear whether PEG has little effect on reducing the FBR, or whether the beneficial effects of PEG were leveled off by the larger initial trauma caused by the thick coating. The increase in thickness of the penetrating profile was from 15 µm to ∼100 µm, which could be sufficiently large to induce a difference when the thickness is retained throughout the implantation period. In our study, however, the thickness of the residing material was kept the same as the control, since the PEG layer dissolves away upon contact with the blood and cerebrospinal fluid. Skousen et al. reported that the surface area of the residing material is a significant factor when penetrating profiles are kept similar (Skousen et al., 2011). This also supports the idea that the dimension of the residing material is more impactful.
Implications from CX3CR1-GFP Mice
For CX3CR1-GFP mice, the involvement of the CX3CR1 gene in accelerating or decelerating the secretion of neurotoxic factors in microglia is controversial. CX3CR1, a receptor present in microglia, interacts with the ligand fractalkine on neurons, which is considered a mechanism of neuron-glia crosstalk. This poses the question of whether the deletion of CX3CR1 has a prominent impact on the various cells involved, including neurons. Jung et al. found that the absence of CX3CR1 in microglia did not alter the microglial response to various inflammation models, one being a peripheral nerve axotomy (Jung et al., 2000). However, contradictory results have been reported in which CX3CR1 deletion had a negative or positive effect on neurons. Cardona et al. demonstrated that CX3CR1-deficient mice either stimulated with lipopolysaccharide (LPS), given neurotoxins to induce Parkinson's disease symptoms, or genetically modified to induce amyotrophic lateral sclerosis exhibited more neurodegeneration than controls (Cardona et al., 2006). By contrast, Denes et al. and Donnelly et al. reported that the lack of CX3CR1 had a neuroprotective effect in brain ischemia and spinal cord injury models, respectively (Denes et al., 2008; Donnelly et al., 2011). Among these, our results corresponded well with those of Jung et al., in which there was no significant impact of CX3CR1 deletion. Statistical tests comparing WT vs. CX3CR1-GFP showed no significant differences with any of the immunolabels (data not shown). It may be that there is no general rule for how CX3CR1-deficient microglia react to induced inflammation, but rather that the response largely depends on the type of stimuli and the target region of the body. The trend of PEG-coat vs. no-coat in WT mice was replicated with very similar characteristics, strengthening the idea that dip-coating of PEG has little impact on the chronic FBR.
On the Variability of the Quantitative Histology
Even if PEG coating and/or the CX3CR1-GFP mouse model caused small differences in the FBR, the differences were not prominent compared to the innate variability of neural implants at the current technical standard. The qualitative images in Figure 7 are indicative of the highly variable nature of the FBR to implants. Even though all the surgical and care plans remained the same, there was a large difference in the tissue response within the same group. This is in line with the discrepancies observed between studies or even within studies (Jorfi et al., 2015). Rousche et al. and Williams et al. showed that a large variability existed even between different shanks within a multi-shank device (Rousche and Normann, 1998; Williams et al., 2007). Our previous study with flexible neural probes also indicates that reducing this variability would enable accurate identification of critical factors responsible for chronic device failure (Lee et al., 2017a). Vascular damage is a probable suspect for this variability, as devices that sever large vasculature are reported to cause severe neuro-inflammation (Skousen et al., 2011; Saxena et al., 2013). Although surface vasculature can be avoided during implantation, vasculature underlying the brain parenchyma is hard to avoid unless identified. Emerging technologies such as 3D mapping of the brain prior to insertion may minimize BBB damage (Kozai et al., 2010). Although the breach of the BBB is inevitable, minimizing this variability will be critical in facilitating quantitative research.
CONCLUSION
We have evaluated the FBR to implanted microelectrodes with and without PEG coating in two mouse strains (WT and CX3CR1-GFP). Statistical analyses suggest that dip-coating of PEG does not result in a significant decrease or increase in the FBR at 4 weeks post-implant. This finding is supported by the consistency observed in the transgenic CX3CR1-GFP mouse model. This study confirms that an acute delivery of PEG does not result in significant mitigation of the chronic FBR. Comparison studies that utilize PEG for various purposes can be assured that PEG does not confound long-term assessments of the degree of FBR.
AUTHOR CONTRIBUTIONS
HL led all experiments and the composition of the manuscript. JG assisted with surgeries and provided insight into the research. SC helped with immunohistology. MM helped with profilometry. KP and KO funded and mentored the research. All authors participated in writing and revising the manuscript.
A Rare Case of Streptococcus agalactiae Ventriculitis and Endocarditis
Streptococcus agalactiae infection is typically seen in specific populations, including neonates, pregnant women, and the elderly. These patients have immature, lowered, or waning immune systems, which makes them more susceptible to infections. Typical S. agalactiae infections manifest as cellulitis, bacteremia, endocarditis, meningitis, ventriculitis (a rare complication of meningitis), and osteomyelitis. In rare cases, a patient can present with two or more of these typical infection manifestations. The authors present a case of a 48-year-old female with a past medical history of hypothyroidism and chronic back pain who presented to the emergency department with altered mental status. The patient developed nausea and vomiting two days prior to presentation after a family gathering, followed by occipital headache and agitation. On arrival at the emergency department, the patient did not follow commands and was drowsy. The initial examination was positive for Brudzinski and Kernig signs. The patient was tachycardic, tachypneic, and hypotensive. Initial computed tomography (CT) of the head without contrast was negative for any acute pathology. Neurology was consulted, and a bedside lumbar puncture was performed, which was significant for an elevated opening pressure of 32 cm H2O. The patient was initially started on ceftriaxone, ampicillin, vancomycin, acyclovir, and dexamethasone. Magnetic resonance imaging (MRI) of the brain with and without contrast showed acute ventriculitis, mild leptomeningeal enhancement, and a right posterior corona radiata acute lacunar infarct. A meningitis panel, BioFire (BioFire Diagnostics, Salt Lake City, UT), was positive for S. agalactiae, and the patient was de-escalated to ceftriaxone. Cerebrospinal fluid and blood cultures returned positive for S. agalactiae. A transthoracic echocardiogram was negative for endocarditis, but a transesophageal echocardiogram was significant for a 0.7 × 0.4 cm mobile echodensity attached to the posterior leaflet of the mitral valve (P1/P2 scallop). Repeat blood cultures, additional cerebrospinal fluid analysis, and infectious workup remained negative. Cardiology was consulted and recommended medical treatment. The patient improved clinically, continued ceftriaxone, and was discharged to complete a total of six weeks of treatment with outpatient follow-up evaluations. This case depicts a rare presentation of endocarditis, meningitis, and ventriculitis due to S. agalactiae infection and highlights the need for a definite treatment algorithm in the management of complicated conditions such as the one presented.
Introduction
Streptococcus is a genus of gram-positive pathogens with spherical or ovoid cells that characteristically take on a chain formation and are occasionally arranged in pairs [1]. The genus is known for its Lancefield classification, ranging from groups A to M, based on cell wall characteristics and antigenic reactions [1,2]. Its members can be further classified into a pyogenic group, consisting of Streptococcus agalactiae and Streptococcus pyogenes, and a mitis group, including Streptococcus pneumoniae, Streptococcus mitis, and Streptococcus oralis [2]. Their infectivity relies on cell-surface components and virulence factors, which comprise biofilm, adhesins, capsules, M-protein, and many other characteristics [1].
S. agalactiae made its first appearance in the literature in the 1930s, when Lancefield differentiated it from other pathogens in its genus after it had been isolated from cow's milk and bovine mastitis [2,3]. It is now commonly known as group B streptococcus (GBS), based on its Lancefield classification. Infection by S. agalactiae presents as cellulitis, bacteremia, endocarditis, meningitis, ventriculitis (a rare complication of meningitis), and osteomyelitis [3]. Its infectivity is associated with specific populations, particularly pregnant women and infants [3]. However, there is evidence of an increasing incidence of S. agalactiae infection in the adult non-pregnant population, as shown by data accrued from Active Bacterial Core surveillance, which recorded an increase from 8.1 cases per 100,000 in 2008 to 10.9 cases per 100,000 in 2016 [3,4]. Non-pregnant adults who are infected by S. agalactiae often have other comorbidities or are immunocompromised or older [3]. In this report, the authors present a patient with S. agalactiae ventriculitis and endocarditis.
Case Presentation
A 48-year-old female with a past medical history of hypothyroidism and chronic back pain presented to the emergency department (ED) with altered mental status. On presentation to the ED, the patient was not following commands and was drowsy. Per the patient's husband, the patient had suffered from food poisoning after a family gathering and experienced nausea, vomiting, and gastrointestinal upset two days prior. The patient also developed a constant occipital headache. Initial vital signs were significant for a temperature of 37.0°C, tachycardia with a heart rate of 116, tachypnea at 23 breaths per minute, hypotension of 99/54 with a mean arterial pressure of 69, and an oxygen saturation of 97% on room air. On initial examination, the patient could not follow commands, was drowsy, and had a Glasgow coma scale score of 10. The patient had positive Brudzinski and Kernig signs and no focal neurological deficits. Oral examination showed poor dentition. Initial laboratory findings showed a normal white blood cell count with bandemia, mild non-anion gap metabolic acidosis, uncontrolled hypothyroidism, hypoalbuminemia, hypoproteinemia, transaminitis, elevated alkaline phosphatase, and elevated creatine kinase (Table 1).
TABLE 1: Initial laboratory workup (columns: serum test, normal range, result)
An electrocardiogram (EKG) showed sinus tachycardia, an incomplete bundle branch block, and a nonspecific T wave abnormality. An initial computed tomography (CT) head without contrast showed no acute intracranial abnormality. The patient had a bedside lumbar puncture, which showed an elevated opening pressure of 32 cm H2O. The patient was started on empiric intravenous ceftriaxone, ampicillin, vancomycin, acyclovir, and dexamethasone. The patient had magnetic resonance imaging (MRI) of the brain with and without contrast for suspected meningitis, which was significant for minimal debris layering in the bilateral occipital horns and mild enhancement of the ependymal lining of the ventricles (concerning for ventriculitis), mild postcontrast enhancement of the leptomeninges and the folia of the cerebellar hemispheres, and a right posterior corona radiata acute lacunar infarct (Figure 1).
FIGURE 1: MRI brain with and without contrast. Right posterior corona radiata acute lacunar infarct (C). Mild enhancement of the folia of the cerebellar hemispheres (D).
Cerebrospinal fluid (CSF) analysis showed neutrophilic pleocytosis, hyperproteinorrhachia, elevated red blood cells, and normal glucose (Table 2).Meningitis panel, BioFire (BioFire Diagnostics, Salt Lake City, UT), was positive for S. agalactiae.An MRI of the cervical, thoracic, and lumbar spine with and without contrast was done to further characterize the degree of meningitis and showed subtle leptomeningeal enhancement along the upper cervical cord, along the cerebellar folia, and at the caudal terminus of the thecal sac.Initial blood cultures from admission were obtained and were positive for gram-positive cocci in clusters, and susceptibilities showed multi-susceptible S. agalactiae (Table 3).Infectious disease was consulted, and a transthoracic echocardiogram, along with a high dose ceftriaxone for a minimum of two weeks if no vegetations, and for six weeks if vegetations were present, were recommended.Transthoracic echocardiogram showed normal left ventricular systolic function, with an ejection fraction estimate of 55-60%, mild mitral regurgitation, no pericardial effusion, and no echocardiographic evidence of endocarditis.
TABLE 3: Antibiotic susceptibility results for the S. agalactiae isolate (column: drug)
During admission, the patient complained of continued throbbing pain radiating from the head to her eyes and shoulder. A repeat MRI brain with and without contrast due to continued headaches showed progression of ventriculitis, an interval increase in the size of areas of restricted diffusion within debris in the bilateral posterior horns of the lateral ventricles with enhancement involving the ependymal linings, supratentorial and infratentorial leptomeningitis, and a 7-mm pineal gland cyst (Figure 2).
FIGURE 2: A 7-mm pineal gland cyst.
The imaging was reviewed by a neurologist, who recommended increasing the topiramate dose as needed for headaches.The patient's headache improved with the addition of topiramate.Given high suspicion for endocarditis with associated meningitis/ventriculitis due to S. agalactiae, a transesophageal echocardiogram (TEE) was completed and was significant for a 0.7 × 0.4 cm mobile echodensity attached to the posterior leaflet of the mitral valve (P1/P2 scallop) (Figure 3).Cardiology was consulted for native valve endocarditis and recommended no surgical interventions.Repeat MRI of the brain with and without contrast showed interval decrease in the amount of layering debris in the lateral ventricles with persistent enhancement involving the ependymal linings, decreased leptomeningeal enhancement, with increased conspicuity of punctate foci of enhancement in the right frontal and parietal corona radiata, corresponding to evolving tiny subacute infarcts (Figure 4).
FIGURE 4: MRI brain with and without contrast. Persistent enhancement of ependymal linings signifying ventriculitis (A,B). Increased conspicuity of punctate foci enhancement in the frontal and parietal corona radiata, corresponding to evolving tiny subacute infarcts (C,D).
The patient had a peripherally inserted central catheter (PICC) placed and was instructed to complete intravenous (IV) ceftriaxone for a total of six weeks from the date of negative blood cultures. The patient was discharged home with home health services and scheduled follow-ups with a primary care provider, neurology, and cardiology in an outpatient setting.
Discussion
S. agalactiae is a gram-positive pathogen known popularly by its Lancefield classification as GBS [1].GBS infection is increasingly common within specific populations: pregnant women, women in the peri-partum stage, and neonates [5].However, there has been a noted growth in incidence in populations outside of the aforementioned, such as among individuals greater than the age of 60, individuals with diabetes mellitus, and individuals in an immunocompromised state [6].Clinical presentation of GBS infection includes infective endocarditis, pneumonia, skin infections, osteomyelitis, and more [6].
GBS meningitis in adults remains uncommon, and patients with GBS meningitis with coinciding endocarditis are especially rare.In the exhaustive literature review conducted by van Kassel et al., only 14 patients were found with both conditions [7].Yet rates of GBS infection in adults are rising globally [4].The disease's low incidence, high mortality rate, and potential for an increasing incidence in future years highlight the importance of this case.
Treatment algorithms for GBS meningitis with endocarditis are not fully standardized, and the disease course of the patient in this case can help future clinicians and researchers develop an evidence-based standard of care. In the van Kassel study, antimicrobial treatment was detailed in 111 of the 144 patients, with 67 receiving penicillin G (either as monotherapy or combined with gentamicin (n = 9) or vancomycin (n = 1)) and 14 receiving ceftriaxone monotherapy [7]. The Infectious Diseases Society of America (IDSA) indicates that penicillin G and ampicillin are first-line agents, associated with A-III-level evidence, and ceftriaxone is an alternative, associated with B-III evidence [8]. However, these guidelines have not been updated since 2004, and there is some evidence of increasing antibiotic resistance to penicillin G in GBS in some areas [9]. Additionally, penicillin is sometimes not practical for home use, since it requires more frequent administration than ceftriaxone, even though antibiotic stewardship considerations favor penicillin. Given the rarity of adult GBS meningitis/endocarditis, more studies are needed to determine the most effective treatment, but it is notable that ceftriaxone was associated with a generally favorable outcome in this patient.
This case underscores the importance of evaluating GBS meningitis patients for endocarditis and other associated conditions and sequelae.In the van Kassel study, concomitant endocarditis was noted in GBS meningitis patients at a much higher level (12%) than in meningitis patients generally (1%) and raised the lethality of GBS meningitis patients from 31% to 36% [7,10].Although rare, the risk of having GBS meningitis with concomitant endocarditis is significant enough for clinicians to have a high suspicion for it and work it up appropriately [11].Other studies have documented substantial sequelae resulting from the additional endocarditis burden in patients with meningitis, including respiratory failure, circulatory shock, and arthritis [10].Perhaps most importantly, the presence of endocarditis typically extends antibiotic treatment for meningitis from two weeks to six weeks [10].Recommendations to screen for endocarditis in patients with Staphylococcus aureus meningitis, group D Streptococcus meningitis, and enterococcal meningitis have already been published [12].Given the increase in morbidity and mortality in GBS meningitis with concomitant endocarditis, as well as divergent treatment requirements when endocarditis is present, we agree that all patients with GBS meningitis should be screened for endocarditis.This patient's hospital course proceeded with multiple noteworthy events.First, the patient was noted to have an acute right posterior corona radiata lacunar infarct on brain MRI.Bacterial meningitis is known to increase the risk of many types of CNS sequelae, infarction among them [13].Another notable factor in the patient's disease course was her significant hypothyroidism.Hypothyroidism has immunosuppressive effects thought to predispose to infection in general, and one report implicates hypothyroidism in a patient's suboptimal immune response to meningitis specifically [14,15].
Conclusions
GBS meningitis with concurrent endocarditis is a rare and highly lethal illness. Epidemiological trends suggest it may be becoming more prevalent, which increases the relevance of this case presentation. This case can help support the refinement of treatment algorithms for GBS meningitis/endocarditis and serve as a reminder for physicians to consider screening for endocarditis in all GBS meningitis cases to allow appropriate, timely diagnosis and treatment.
FIGURE 3: Transesophageal echocardiogram showing a 0.7 × 0.4 cm mobile echodensity attached to the posterior leaflet of the mitral valve (A,B).
TABLE 2: Cerebrospinal fluid analysis
Electroencephalography (EEG) showed a diffuse slow background with sleep spindles, and there were fewer fast frequencies over the right hemisphere during stimulation, suggestive of moderate diffuse encephalopathy with a greater degree of right hemispheric neuronal dysfunction; no epileptiform abnormalities or electroclinical seizures were recorded. Repeat CT head without contrast was significant for mild dental disease.
|
2024-03-17T15:54:53.329Z
|
2024-03-01T00:00:00.000
|
{
"year": 2024,
"sha1": "825054a353f767f5614aefaf1e73b4d76725e262",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/213067/20240314-23497-had0rs.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "33a8b081ed49f8a7a33b344c111071700ab924ab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
269762814
|
pes2o/s2orc
|
v3-fos-license
|
Enhancement of vitamin B6 production driven by omics analysis combined with fermentation optimization
Background Microbial engineering aims to enhance the ability of bacteria to produce valuable products, including vitamin B6 for various applications. Numerous microorganisms naturally produce vitamin B6, yet the metabolic pathways involved are rigorously controlled. This regulation by the accumulation of vitamin B6 poses a challenge in constructing an efficient cell factory. Results In this study, we conducted transcriptome and metabolome analyses to investigate the effects of the accumulation of pyridoxine, which is the major commercial form of vitamin B6, on cellular processes in Escherichia coli. Our omics analysis revealed associations between pyridoxine and amino acids, as well as the tricarboxylic acid (TCA) cycle. Based on these findings, we identified potential targets for fermentation optimization, including succinate, amino acids, and the carbon-to-nitrogen (C/N) ratio. Through targeted modifications, we achieved pyridoxine titers of approximately 514 mg/L in shake flasks and 1.95 g/L in fed-batch fermentation. Conclusion Our results provide insights into pyridoxine biosynthesis within the cellular metabolic network for the first time. Our comprehensive analysis revealed that the fermentation process resulted in a remarkable final yield of 1.95 g/L pyridoxine, the highest reported yield to date. This work lays a foundation for the green industrial production of vitamin B6 in the future. Supplementary Information The online version contains supplementary material available at 10.1186/s12934-024-02405-1.
Background
Low-carbon and sustainable manufacturing has emerged as a topical subject in global economic development [1,2]. In particular, environmentally friendly microbial manufacturing has experienced rapid growth across the fields of food, medicine [3], and energy, which has brought significant economic benefits and generated substantial social value worldwide [4]. Recent advances have been made in many commercial cases of microbial manufacturing using high-performance strains for trans-aconitic acid [5], lipoic acid [6], cinnamaldehyde [7], L-leucine [8] and other products [9]. Vitamin B6 encompasses a group of vitamins, namely, pyridoxal
(PL), pyridoxine (PN), and pyridoxamine (PM), along with their corresponding phosphate esters, 5'-pyridoxal phosphate (PLP), 5'-pyridoxine phosphate (PNP), and 5'-pyridoxamine phosphate (PMP) [9-11]. Vitamin B6, also known as VB6, plays a crucial role as a cofactor for numerous proteins and enzymes, making it one of the most significant vitamins [12,13]. Most enzymes that rely on pyridoxal phosphate (PLP) as a cofactor [14] are involved in various biochemical processes, such as amino acid biosynthesis, decarboxylation, racemization, Cα-Cβ bond cleavage, elimination, and α-, β-, and γ-replacement reactions [15]. The enzymes involved in the deoxysugar biosynthesis pathway utilize pyridoxamine phosphate (PMP) as a cofactor. Among the various production forms, PN holds the utmost significance [16]. Vitamin B6 is a valuable compound for the pharmaceutical industry [9], food and feed additives [17], and cosmetics [18]. Recently, it has been reported that engineered Escherichia coli (E. coli) can produce 1.4 g/L PN by decoupling the de novo synthesis pathway, protein engineering of key enzymes, and applying a multimodule iterative optimization strategy, which is the highest PN yield ever reported [16]. However, the intrinsic connection between cell metabolism and culture conditions is not clear, and optimization methods need to be explored to further enhance PN production [19].
With the rapid development of omics technologies, the process optimization of microbial manufacturing can also be guided by omics, including genomics, transcriptomics, proteomics, metabolomics etc [1].Metabolomics, transcriptomics and proteomics [20] approaches can detect changes in intracellular metabolism and thus help improve the efficiency of product synthesis pathways [21].RNA-seq is mainly applied to study transcriptomic differences caused by various treatments [22].The overall analysis of transcription, translation and metabolism at the molecular level allowed the identification of key differentially expressed genes (DEGs) and provided new insights into the molecular mechanisms involved in target molecule production [23].Zhu et al. [24] used RNA-Seq to explore the effects of nitrogen sources and ICDH (isocitrate dehydrogenase) knockout on glycolate production in E. coli.Transcriptome analysis under different fermentation conditions was performed, and the significantly altered genes related to N regulation, the oxidative stress response, and iron transport were analyzed.The latter strain, Mgly624 (with the ICDH deletion in Mgly534), achieved a balance between cell growth and glycolate production, reaching a glycolate production of 0.81 g glycolate/OD, which was 2.6-fold greater than that of the previous strain Mgly534.Notably, RNA-seq significantly contributed to revealing the importance of the ICDH gene.Liang et al [25].used RNA-Seq technology to analyze the gene expression of two strains, Pseudomonas balearica R90 and Brevundimonas diminuta R79, during cocultivation.The expression of genes related to the synthesis of polyhydroxyalkanoate (PHA), specifically the acs (encoding acetyl-CoA synthetase) and phaA (encoding acetyl-CoA acetyltransferase) genes, was upregulated in the coculture group compared to the pure culture groups.This enhanced the utilization of acetic acid and the synthesis of poly-β-hydroxybutyrate (PHB), leading to a considerably greater yield of PHB in the coculture group than in the pure culture group.Li et al [26].used RNA-seq technology to investigate the coproduction mechanism of poly-γ-glutamic acid (γ-PGA) and nattokinase in Bacillus subtilis natto.In this study, gene expression at different fermentation periods (6 h, 9 h, and 24 h) was analyzed, and the key genes involved in the production of γ-PGA and nattokinase were identified.By analyzing the main metabolic pathways, potential target genes for enhancing nattokinase activity and γ-PGA yield were also identified.The maximum γ-PGA yield obtained was 358.5 g/kg sucrose when 50 g sucrose was added per kilogram of substrate.The authors also observed the upregulation of genes related to glutamate synthesis and the downregulation of γ-PGA-degrading enzymes, which could contribute to an increase in yield.Thus, with RNA-Seq, we can study global transcriptional changes and gain new insights into strains that produce PN.
In the present study, we aimed to better understand the implications of artificially altering the metabolic pathways of the strains by transcriptomic profiling and metabolomics.This includes studying the various changes that may occur, such as changes in glycolysis, the tricarboxylic acid (TCA) cycle, amino acids, cofactors, and the response to fermentation conditions.Here, we revealed the metabolic network changes behind the PN production and provided clues for improving PN production via fermentation optimization.
Characterization of pyridoxine produced by engineered E. coli
In a previous study, we constructed the PN high-production strain LL388 [16].This strain was transformed with two plasmids, one containing the backbone of pRSFDuet-1, and the other containing p15ASI.pRSF-Duet-1 carries mutated genes from E. coli, including pdxA2 (encoding 4-hydroxythreonine-4-phosphate dehydrogenase) and pdxJ1 (encoding pyridoxine 5-phosphate synthase).p15ASI carries epd (encoding D-erythrose 4-phosphate dehydrogenase) from Glaciecola nitratireducens, pdxB (a native gene encoding erythronate-4-phosphate dehydrogenase), dxs (encoding 1-deoxy-D-xylulose-5-phosphate synthase) from Ensifer meliloti, and serC (a native gene encoding phosphoserine aminotransferase).These genes encode all the enzymes involved in the PN biosynthesis process, and we rationally modified and/or heterologously screened these genes in the previous study [16].The PN yield of the strain reached 450 mg/L in the shake flask and 1.4 g/L in 5 L of fed-batch fermentation, but there was fermentation heterogeneity owing to metabolic burden and plasmid stability.It characterized that PN production was coupled with cell growth.However, many of our fed-batch experiments showed that cell growth arrest hinders PN production.In bioreactors, the OD 600 value representing cell growth is usually approximately 30-40, which greatly limits the increase in PN production.To identify potential gene targets for improving PN production and find the factors that stop cell growth, transcriptome profiling was conducted to identify changes in global gene transcription levels in response to PN [27][28][29].Transcriptome analysis is an effective technique for studying genome-wide gene expression changes in microorganisms and has been widely applied to uncover potential genes and elucidate the molecular mechanisms underlying various metabolic processes [30][31][32].
In this study, transcriptome samples were obtained from the LL388 strain in fed-batch fermentation cultured at 37 °C and pH 6.8.The glycerol-containing solution started to feed at ∼ 10 h when the initial glycerol concentration was less than 3 g/L during fermentation, and the initial glycerol concentration was 15 g/L.From 6 to 7 h, the cells undergo a transition from the lag phase and enter a phase of rapid growth, and the production of PN is coupled to cell growth.As a result, the cells of LL388 were collected, and comparative analysis of their transcriptional profiles was conducted at 6 and 16 h, corresponding to the mid-to-late log phase (Fig. S1).In addition, the main byproducts we assayed included acetate, lactate, pyruvate, succinate, α-ketoglutaric acid, citric acid, etc., but the accumulation of these organic acids was not detected.Therefore, it may not be that the byproducts in the supernatants repressed cell growth, so we performed intracellular metabolomic analysis to further identify the bottleneck of PN production at 6, 26, 36, and 42 h.
Overall analysis of the response of DEGs to the production enhancement mechanism of pyridoxine
Comparative transcriptome analyses were performed during the log phase at 6 h and 16 h, when the OD600 increased from 9.43 to 29.3 and the PN titer increased from 98.0 to 262.2 mg/L (Fig. S1). The transcript levels of 306 genes significantly changed (log2|FoldChange| ≥ 2.0 and FDR ≤ 0.05) upon PN accumulation, 193 of which were significantly downregulated and 113 of which were significantly upregulated (Fig. 1A). The differentially expressed genes (DEGs) were then subjected to Gene Ontology (GO) term enrichment analysis, and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was subsequently conducted to identify the pathways associated with the genes whose expression significantly changed.
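As an illustration of the thresholds quoted above, the following minimal Python sketch filters a DESeq2-style results table by |log2FoldChange| and FDR. The file name and the column names (log2FoldChange, padj) are assumptions about how the results might be exported, not part of the original analysis.

```python
import pandas as pd

# Hypothetical export of the DESeq2 results table (one row per gene).
results = pd.read_csv("deseq2_results_6h_vs_16h.csv")

LOG2FC_CUTOFF = 2.0   # log2|FoldChange| >= 2.0, as used in the text
FDR_CUTOFF = 0.05     # Benjamini-Hochberg adjusted p-value (FDR) <= 0.05

# Keep genes that pass both the fold-change and the FDR thresholds.
deg = results[(results["log2FoldChange"].abs() >= LOG2FC_CUTOFF) &
              (results["padj"] <= FDR_CUTOFF)]

up = deg[deg["log2FoldChange"] > 0]
down = deg[deg["log2FoldChange"] < 0]
print(f"DEGs: {len(deg)} total, {len(up)} up-regulated, {len(down)} down-regulated")
```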
In these paired samples at different points, GO functional enrichment analysis was carried out.Notably, "TCA cycle", "iron assimilation, homeostasis and transport", and "amino acid metabolic process" were the dominant terms in the "biological process".In addition, we also identified a relatively large number of genes associated with transmembrane transport (Fig. 1B).GO functional enrichment analysis revealed that the largest number of downregulated genes were enriched in the TCA cycle, carbon metabolism, amino acid biosynthesis and transport, while the upregulated genes were enriched in the processes of iron assimilation, homeostasis and transport.Additionally, DEGs were mapped to 32 KEGG pathways, and the significantly enriched pathways are shown in Fig. 1C.The KEGG analysis revealed that the primary enriched genes were associated with amino acid biosynthesis or metabolism, the TCA cycle, carbohydrate metabolism, etc.
Glycolysis, the TCA cycle and the electron transport chain are ubiquitous and important pathways in E. coli [33,34].Detailed analyses revealed that most genes related to glycolysis and the TCA cycle were downregulated, especially genes related to the TCA cycle (Fig. 2).This observation suggested that PN accumulation may inhibit central carbon metabolism, subsequently leading to a decrease in PN production in the engineered strain.This is in line with the observed reduction in the growth curve.Amino acid biosynthesis or metabolism was another significant category of genes downregulated in response to PN accumulation (Fig. 2).The downregulated genes were involved in the biosynthesis or metabolism of glutamate, L-aspartate, L-histidine, tryptophan, L-valine/L-isoleucine/L-leucine, the aspartate family, etc.Combined with the downregulation of glycolysis and the TCA cycle, we inferred that the accumulation of PN might inhibit cell growth through its impact on the synthesis of related amino acids.A previous study reported that 4-hydroxy-threonine (4HT), a branched intermediate of the native vitamin B 6 pathway, has significant negative effects on the biosynthesis of the aspartate-derived amino acid threonine and the branched-chain amino acids isoleucine and leucine [35].Additionally, 4HTP is a classical competitive inhibitor of the threonine synthase ThrC from E. coli.Consistently, we found that the transcription of genes involved in isoleucine biosynthesis pathway, such as ilvN, ilvB, and ilvC, was largely repressed by PN accumulation (Fig. 2).The transcription of thrC was upregulated, while the corresponding enzyme can catalyze 4HTP to form 4HT [36].It is reasonable that 4HTP is shunted to form 4HT, both of which inhibit the biosynthesis of some amino acids [9].4HT is a competitive inhibitor of ThrB that affects the conversion of homoserine to O-phospho-L-homoserine.This inhibition leads to the suppression of the synthesis of amino acids such as threonine and isoleucine [35].Additionally, the downregulation of histidine biosynthetic genes may be caused by the induction of R5P, which provides PLP for cell growth and PN production [37].
Furthermore, metabolic alterations were investigated by untargeted metabolomic profiling analysis via ultraperformance liquid chromatography-mass spectrometry (UPLC-MS).Samples at different time points (6,16,36, and 42 h) were selected for metabolomics analysis.For the metabolome, |Error(ppm)| ≤10 and P value were used to screen for differentially expressed metabolites.Metabolic pathway analysis was based on the KEGG database.In this study, we identified 17 metabolites involved in amino acid biosynthesis and 10 metabolites involved in the TCA cycle.The levels of amino acids, such as histidine, aspartate, and valine, were significantly reduced from 6 to 16 h (early stage of fermentation) but showed a variable trend from 36 to 42 h (late stage of fermentation) (Fig. 3).For instance, there was a substantial increase in the level of histidine during the late stage of fermentation, whereas the level of valine decreased markedly.No noticeable change was observed in the aspartate level.These adjustments reflect the series of changes that bacteria undergo to adapt to their environment.The results were consistent with the transcriptome analysis.The precursor for PN biosynthesis in E. coli is DXP (1-deoxy-Dxylulose-5-phosphate), which can be synthesized from pyruvic acid and G3P through Dxs (1-deoxy-D-xylulose-5-phosphate synthase).The relative abundance of pyruvic acid dramatically decreased during PN production, indicating that PN production significantly increased the consumption of pyruvate.Additionally, succinate was significantly decreased in the TCA cycle, and less α-ketoglutarate (α-KG) was produced, especially during early fermentation.These metabolomic results are consistent with the transcriptomic data, indicating that PN production might inhibit the synthesis of succinate and disturb amino acid metabolism.These results could provide insight into the relationship between metabolite levels and gene expression during PN production.
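For readers unfamiliar with the |Error(ppm)| ≤ 10 criterion used to match metabolomic features to metabolites, the mass accuracy in parts per million can be computed as in this illustrative sketch; the succinate m/z values below are examples, not measurements from this study.

```python
def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Mass accuracy in parts per million between an observed and a theoretical m/z."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

# Illustrative values only: the [M-H]- ion of succinate has a theoretical m/z of ~117.0193.
observed = 117.0188
err = ppm_error(observed, 117.0193)
print(f"mass error = {err:.2f} ppm -> {'keep' if abs(err) <= 10 else 'discard'}")
```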
Construction of a single plasmid-engineered strain
Based on the findings from transcriptome analysis and metabolic profiling, a hypothetical model of the mechanism of PN inhibition has been proposed.This model suggests that the disruption of the TCA cycle, particularly the impaired production of succinate, as well as the partial synthesis of amino acids, collectively contributes to the inhibition of cell growth and the accumulation of PN.To further reduce the effects of PN inhibition or 4HTP toxicity and enhance final PN production, medium component optimization, especially nitrogen optimization, was performed to improve the growth of E. coli cells.
Initially, to increase PN production, we integrated two copies of the pdxA2-pdxJ1 operon into the LL006 genome and created a single plasmid to reduce the instability associated with using two plasmids. The newly obtained strain was named TZ01, and its PN production increased (Fig. 4A). The high-copy plasmids pTrc99a and pRSFDuet-1 were evaluated as single vector backbones to carry the epd-pdxB-dxs-PJ23119-serC fragment [16]. The promoter and ribosome binding sites (RBSs) employed were consistent with those used by p15ASI, specifically the tac promoter, to regulate the transcription of epd-pdxB-dxs. Additionally, the J23119 promoter was used to control the expression of serC. The results indicated that pTrc99a (Fig. S2) exhibited greater PN production than pRSFDuet-1 and slightly greater PN production than the original p15ASI plasmid (Fig. 4B). The new strain TZ03 (TZ01 harboring pTrc99a-Ptac-epd (Gni)-pdxB (Eco)-dxs (Eme)-PJ23119-serC (Eco)) was employed as the chassis cell for fermentation optimization experiments.
Fermentation optimization of decreased TCA compounds and amino acids
Optimization of the fermentation medium by incorporating cost-effective chemicals has been demonstrated to be an effective approach for enhancing the production of specific target molecules.According to the results of omics analysis, first, we added some significantly downregulated molecules, including succinate, histidine, and threonine, to the fermentation medium FM1.4 to increase cell viability and enhance PN production.The addition of succinate altered the pH, so the concentration added was up to 4 g/L.The concentration of α-KG used was based on previous studies and was 10 mM.The synthesis of PN by the recombinant strain TZ03 was promoted to 490 mg/L by the addition of 2 g/L succinate and 10 mM α-KG (Fig. 4C).The addition of α-KG can enhance PdxB turnover by disrupting the tight binding state between the enzyme and its cofactor NADH [38], which enhances the metabolic flow of the vitamin B 6 pathway.However, the specific role of succinate in the production of vitamin B 6 remains unknown.Further research is needed to determine its precise contribution to the production process.
Not all components present in the medium contribute to metabolite production. Therefore, it is of utmost importance to eliminate noncontributing factors from the study as early as possible. To enhance the production of PN, a statistical method of medium optimization was used. The Plackett-Burman design (PBD) is a well-established and widely used statistical technique for efficiently screening medium components [39,40]. It is a two-level design that is very useful for economically detecting the main effects while assuming that all other interactions are negligible [41]. Despite the complexity and numerous interactions within cellular networks, we attempted to identify important amino acid components, including 10 downregulated candidates from the omics analysis. Table 1 presents the Plackett-Burman experimental design for 12 trials with two concentrations for each variable and the corresponding PN production. The variables correspond to the 10 amino acids; however, the confidence level of each variable was less than 95%, and hence each variable was considered insignificant (Table 1 and Table S1). These results indicated that no single amino acid had a significant effect on PN production.
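A minimal sketch of how a 12-run, two-level Plackett-Burman matrix can be built and its main effects estimated is shown below. The ±1 coding corresponds to the 0.6 g/L and 0.4 g/L amino acid levels of Table 1, while the response values are placeholders rather than the measured PN titers.

```python
import numpy as np

# Standard 12-run Plackett-Burman generator row (11 factor columns, +1/-1 coding).
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

# Rows 1-11 are cyclic shifts of the generator; row 12 is all -1.
design = np.vstack([np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)])

# Only 10 of the 11 columns are needed for the 10 amino acids; the last column is a dummy.
X = design[:, :10]

# Placeholder responses: PN titers (mg/L) for the 12 trials, one per design row.
y = np.array([380, 410, 395, 400, 385, 405, 390, 398, 402, 388, 396, 392], dtype=float)

# Main effect of each factor = mean response at the high level minus mean at the low level.
effects = [y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean() for j in range(10)]
for j, e in enumerate(effects):
    print(f"factor {j + 1}: effect = {e:+.1f} mg/L")
```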
Additionally, yeast extracts and Bacto peptone are rich sources of peptides and mixed amino acids [42].Optimizing the concentration of yeast extracts and Bacto peptone provides an alternative approach to replenish the diminished amino acid pool [43].This optimization is closely linked to the carbon to nitrogen (C/N) ratio, which plays a crucial role in fermentation medium [44].Microorganisms rely on a well-balanced ratio of carbon to nitrogen to sustain their activity [45,46].By adjusting the concentration of yeast extracts and Bacto peptone, it becomes possible to modulate the availability of carbon and nitrogen in the fermentation medium [46].This ensures that microorganisms have an optimal nutrient supply to support their growth and metabolic processes.While optimizing amino acids, we optimized the PN production of glycerol, glucose, yeast extract, and Bacto peptone by orthogonal testing.Orthogonal testing is a method of designing and conducting experiments to test multiple factors and their interactions in a systematic and efficient manner [47].It involves selecting a set of test cases that are independent and do not overlap in their effects on the system being tested.This helps identify the most important factors and interactions while minimizing the number of test cases needed.The carbon-nitrogen ratio ranged from 3.55 to 9.92 (Table 2).The results showed that the inclusion of an organic nitrogen source resulted in an increase in PN production compared with the original medium, with the optimal combination of glycerol at 12 g/L, glucose at 4 g/L, yeast extract at 8 g/L, and Bacto peptone at 7 g/L.However, it was observed that cell growth was superior to that under other conditions.One possible explanation for these results is that the organic nitrogen source primarily stimulated the growth of the microorganisms, as evidenced by the increased optical density (OD 600 ), which affected PN production (Table 2).
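How a medium C/N ratio of the kind reported in Table 2 can be estimated is sketched below for the optimal combination quoted above. The elemental fractions for glycerol and glucose follow from their formulas, whereas those for yeast extract and Bacto peptone are rough, literature-style assumptions, so the resulting number is purely illustrative and need not reproduce the values in Table 2.

```python
# Optimal combination reported in the text (g/L).
medium = {"glycerol": 12.0, "glucose": 4.0, "yeast extract": 8.0, "bacto peptone": 7.0}

# Mass fractions of carbon and nitrogen per ingredient.
# Glycerol (C3H8O3) and glucose (C6H12O6) are exact; the values for yeast extract
# and Bacto peptone are rough assumptions, not taken from the paper.
composition = {
    "glycerol":      {"C": 0.391, "N": 0.00},
    "glucose":       {"C": 0.400, "N": 0.00},
    "yeast extract": {"C": 0.45,  "N": 0.105},
    "bacto peptone": {"C": 0.45,  "N": 0.13},
}

total_C = sum(m * composition[k]["C"] for k, m in medium.items())
total_N = sum(m * composition[k]["N"] for k, m in medium.items())
print(f"C = {total_C:.2f} g/L, N = {total_N:.2f} g/L, C/N = {total_C / total_N:.2f}")
```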
To improve the yield of metabolites, it is necessary to optimize the production medium.Fermentation optimization of specific amino acids and mixed amino acids (such as organic nitrogen sources) in the medium increased the PN yield, but the highest yield was approximately 400 mg/L (409.7 mg/L amino acid pool and 406.3 mg/L organic nitrogen source).To simplify the medium preparation process, we introduced an improved carbon and nitrogen formula known as FM1.5.This upgraded formula was designed to enhance the efficiency of the medium for producing PN.Furthermore, among the ten amino acids, Thr and Asp exhibited higher confidence levels.This suggests that these two amino acids may play a significant role compared to the other amino acids.Specifically, these findings indicate that the toxic compound 4HTP indeed impairs the biosynthesis of Thr, as the addition of Thr can reverse the inhibition caused by 4HTP.Asp, which is upstream of Thr in the biosynthetic pathway (Figs. 2 and 3), is of particular interest due to its downregulation.Based on our analysis, we speculate that the addition of different concentrations of Asp will have a positive effect on the yield of PN.Through the exogenous addition of Asp, we observed a consistent improvement in pyridoxine (PN) production across all tested concentrations, including 0.5, 1, 3, and 5 g/L (Fig. 4D).This finding supports our hypothesis, as the addition of 0.5 g/L Asp resulted in the highest PN titer, reaching 514.30mg/L.
Fed-batch fermentation optimization
To assess the feasibility of scaling up PN production using a one-plasmid strain, fed-batch cultivation of strain TZ03 was performed in a 5 L bioreactor.Four fed-batch cultures were conducted, employing media formulations containing succinate as the carbon source and varying concentrations of the nitrogen source.By systematically evaluating these different combinations, the aim was to identify the optimal concentration that would yield the highest PN production.
In the fed-batch cultivation, a mixture of glycerol and glucose was used as the carbon source. Throughout the entire fermentation process, the concentration of glycerol was maintained below 3 g/L. Additionally, we observed that the proportion of glucose was relatively low, and glucose was preferentially consumed over glycerol. Controlling the glycerol concentration at a low level helps prevent the excessive accumulation of glycerol, which can hinder the growth and productivity of the strain. The pH of the culture broth was maintained at 6.5 by automatic feeding with 25% (w/v) NH3·H2O.
Table 1 PBD matrix of amino acids with PN production (+, high concentration of variable, 0.6 g/L; −, low concentration of variable, 0.4 g/L)
In fed-batch I, we investigated the production capacity of the single-plasmid strain in a bioreactor using the original medium supplemented with 4 g/L succinate. The dissolved oxygen (DO) level was maintained at 30%. The results indicated that the maximum optical density (OD600) reached only 40, indicating slow growth of the microorganisms. However, at 10 h into fermentation, PN accumulation (∼103 mg/L) commenced, and its concentration increased to 1 g/L by 46.5 h. The accumulation of PN increased rapidly throughout the entire fermentation process, particularly during the early to mid-stage of fermentation, concurrent with cell growth (as shown in Fig. S3). Ultimately, the PN titer reached 1.177 g/L after 54 h of fermentation. Various organic acids, including acetate, succinate, pyruvate, formic acid, and citric acid, were monitored during the fermentation process. However, only succinate, pyruvate, and acetate were detectable. The results revealed that the amount of succinate rapidly decreased over time, indicating that succinate was consumed during fermentation. This observation aligns with the absence of succinic acid in the previous shake-flask fermentations. On the other hand, the acetate concentration increased throughout the fermentation process, while the pyruvate concentration initially increased and subsequently decreased (Fig. S4). However, we speculate that the minimal accumulation of byproducts such as acetate and pyruvate does not significantly hinder growth. There are likely other crucial factors that truly impact growth, such as succinic acid and the analyzed amino acid pool (organic nitrogen source).
In fed-batch II, the addition of 12 g/L succinate as a feeding strategy contributed to improved cell growth and PN production.Succinate serves as a supplementary carbon source during the fed-batch phase, supporting the metabolic activity of the strain.The DO level was increased and maintained at 40% to promote cell growth.PN started to be synthesized after cultivation for 4 h (Fig. 5B).Subsequently, PN was continuously produced, and the highest titer was measured to be 1.60 g/L after cultivation for 42 h, with an average productivity of 48.09 mg/L/h (Fig. 5B).However, the yield of PN did not continue to increase after the fermentation time exceeded 42 h.The OD 600 increased to 50, surpassing the value observed in feeding batch I.This result suggested that the addition of succinic acid indeed promoted the growth of the strain.
Fed-batch III was used to investigate the effect of different C/N ratios in the culture medium on the yield during the fermentation process. As in fed-batch II, the feeding medium was supplemented with 12 g/L succinate. The control group utilized the original medium with a C/N ratio of 6.76 (Fig. 5C), while the experimental group employed a modified medium with a lower C/N ratio of 4.56 (Fig. 5D). The results showed that fermentation with the lower C/N ratio balanced cell growth with production and that the DO was more stable than in the fermentation with a C/N ratio of 6.76. The results revealed a state of slow and steady growth and production, which was the preferred state for PN production (Fig. 5D). After 48 h of fermentation, the control group yielded 1019 mg/L, while the experimental group with the optimized C/N ratio produced 1420 mg/L PN. As the fermentation continued and reached 70 h, the control group's product yield increased to 1140 mg/L, whereas the experimental group achieved a higher final product yield of 1951 mg/L. These results indicate that the modified C/N ratio in the experimental group led to a significant improvement in product yield compared to that in the control group, even after an extended fermentation time. However, the average productivity was only 27.87 mg/L/h, which was lower than that of fed-batch II. Furthermore, there was a continued tendency for cell growth to increase, with the OD600 reaching approximately 60. This result further confirms the findings obtained from the transcriptomic and metabolic data.

We also tested the plasmid stability of the single-plasmid strain and observed that the plasmid retention rates at the end of fermentation were 97% and 98%, respectively (Fig. S5). Based on these higher plasmid retention rates, it can be inferred that the single-plasmid strain is more stable than the dual-plasmid strain. However, based on the relative yield comparison between the fed-batch fermentation and the shake flask, it is clear that additional research is needed to optimize the control of the fermentation process. Additionally, it appears that there is a production limit for PN (1 to 1.5 g/L) in fed-batch fermentation within a 48-h period. Consequently, a thorough and systematic analysis of omics data is essential for identifying bottlenecks. This includes employing ionomics to ascertain whether crucial ions are lacking in the fermentation broth, or proteomics to determine whether toxic proteins are being synthesized or enzymes within the vitamin B6 synthesis pathway are insufficiently expressed, among other potential issues. Moreover, the limitations might also stem from the engineered strain itself. Switching chassis cells or boosting cell growth through metabolic engineering guided by omics data represents another potential strategy for increasing vitamin B6 production. We are committed to continuing our rigorous metabolic engineering and omics studies, aiming to increase fermentation yields and establish a microbial basis for the sustainable industrial production of vitamin B6.
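The average volumetric productivity quoted for fed-batch III follows directly from the final titer and the fermentation time, as this small worked example shows.

```python
def avg_productivity(titer_mg_per_l: float, time_h: float) -> float:
    """Average volumetric productivity in mg/L/h."""
    return titer_mg_per_l / time_h

# Fed-batch III (optimized C/N ratio): 1951 mg/L pyridoxine after 70 h.
print(f"{avg_productivity(1951, 70):.2f} mg/L/h")  # ~27.87 mg/L/h, as reported above
```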
Conclusions
In this study, transcriptome and metabolome analyses of a high-yield strain provided insights into the possible inhibitory effect of PN accumulation on amino acids and compounds in the TCA cycle, particularly succinate and α-KG.The accumulation of PN may inhibit amino acid biosynthesis by interfering with enzyme activity, acting as a feedback inhibitor [48], disrupting amino acid balance [49], and affecting coenzyme availability [50] for essential metabolic processes.Afterward, a genetically modified single plasmid strain was utilized to assess different fermentation targets, such as specific amino acids, the C/N ratio (organic nitrogen source with high free amino acids), α-KG, and succinate.Ultimately, the engineered strain successfully reached a PN concentration of 514 mg/L in shake flasks.Moreover, through further optimization in a 5 L fed-batch fermentation process, an even higher production of 1951 mg/L was achieved.Overall, these results offer valuable insights and a scientific basis for future efforts aimed at enhancing PN yield through omics-guided approaches.
Strains and growth conditions
The engineered E. coli strains used and constructed in this work are derivatives of the previous strain LL006.The strains and plasmids used are listed in Table 3.The primers used in this work are listed in Table S2.The E. coli DH5α strain was used for plasmid construction and replication.Lysogeny broth (LB) medium (10 g/L Bacto peptone, 5 g/L yeast extract, 10 g/L NaCl) was used to inoculate cells and propagate plasmids.
For bioreactor fermentation, activated colonies from the frozen stock were inoculated into 250 mL shakers containing 50 mL of seed medium.The composition of the seed medium was the same as above.After shaking for 8 h at 37 ℃, the culture was transferred to fermentation medium (1.4) at a dose of 10% (v/v).The feed contained 300 g/L glycerol, 20 g/L glucose, 5 g/L Bacto peptone, 5 g/L yeast extract, 6 g/L MgSO 4 •7H 2 O, 300 mg/L FeSO 4 •7H 2 O, 300 mg/L MnSO 4 •5H 2 O and 12 g/L succinate.All of the above were used in fed-batch I and II.
Bioinformatics analysis of transcriptome data Cells for RNA-Seq were cultured in a 5 L bioreactor with 2 L of fermentation medium.After batch fermentation, 10 mL of cells were harvested at mid-log phase, flash-frozen in liquid nitrogen, and stored at -80 °C.Total RNA from each sample was extracted using TRIzol Reagent/RNeasy Mini Kit (Qiagen).Total RNA from each sample was quantified and qualified by Agilent 2100/2200 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA), NanoDrop (Thermo Fisher Scientific Inc.) and 1% agarose gel electrophoresis.Total RNA (1 µg) was used for subsequent library preparation.The rRNA was depleted from total RNA using an rRNA removal kit.The ribosomal-depleted RNA was then fragmented and reverse-transcribed.First-strand cDNA was synthesized using ProtoScript II Reverse Transcriptase with random primers and actinomycin D. Second-strand cDNA was synthesized using Second Strand Synthesis Enzyme Mix (including dACGTP/ dUTP).The double-stranded cDNA purified by beads was then treated with End Prep Enzyme Mix to repair both ends, and a dA tail was added in one reaction, followed by T-A ligation to add adaptors to both ends.Selection of the size of the adaptor-ligated DNA was then performed using beads, and fragments of ∼ 400 bp (with an approximate insert size of 300 bp) were recovered.The dUTPmarked second strand was digested with Uracil-Specific Excision Reagent.Each sample was then amplified by PCR using the P5 and P7 primers, with both primers carrying sequences that can anneal with the flow cell to perform bridge PCR and the P5/P7 primer carrying the index allowing for multiplexing.The PCR products were cleaned using beads, validated using a Qsep100 (Bioptic, Taiwan, China), and quantified by Qubit3.0Fluorometer (Invitrogen, Carlsbad, CA, USA).Then, libraries with different indices were multiplexed and loaded on an Illumina HiSeq/Novaseq instrument according to the manufacturer's instructions (Illumina, San Diego, CA, USA) or an MGI2000 instrument according to the manufacturer's instructions (MGI, Shenzhen, China).Sequencing was carried out using a 2 × 150 paired-end (PE) configuration; image analysis and base calling were conducted by HiSeq Control Software (HCS) + OLB + GAPipeline-1.6(Illumina) on the HiSeq instrument, NovaSeq Control Software (NCS) + OLB + GAPipeline-1.6(Illumina) on the NovaSeq instrument, and Zebeacall on the MGI2000 instrument.
For the data analysis, to remove technical sequences, including adapters, polymerase chain reaction (PCR) primers, fragments thereof, and bases with a quality lower than 20, pass filter data in fastq format were processed by Cutadapt (version 1.9.1, phred cutoff: 20, error rate: 0.1, adapter overlap: 1 bp, min.length: 75, proportion of N: 0.1).First, reference genome sequences and gene model annotation files of related species were downloaded from genome websites, such as UCSC, NCBI, and ENSEMBL.Second, Bowtie2 (v2.2.6) was used to index the reference genome sequence.Finally, the clean data were aligned to the reference genome via Bow-tie2 software (v2.2.6).Initially, transcripts in fasta format were converted from known gff annotation files and indexed properly.Then, with the file as a reference gene file, HTSeq (v0.6.1p1) was used to estimate gene expression levels from the paired-end clean data.Differential expression analysis was performed with the DESeq2 Bioconductor package, a model based on a negative binomial distribution.After adjusting Benjamini and Hochberg's approach for controlling the false discovery rate, the Padj of genes was set to < 0.05 to detect DEGs.Gene Ontology (GO) terms in GOSeq (v1.34.1) were used to annotate a list of enriched genes with a p value less than 0.05.In addition, topGO was used to plot the DAG.Rockhopper uses a Bayesian approach to create a transcriptome map including transcription start/stop sites for proteincoding genes and novel transcripts identified by Rockhopper.Samtools v0.1.19with the command mpileup and Bcftools v0.1.19+ were used for SNV calling.Rockhopper (v2.0.3) was used to predict operons, transcription start sites (TSSs) and transcription stop sites (TTSs).RBSfinder (v1.0) was used for SD sequence prediction.TransTermHP (v2.09) can accurately detect Rho-independent transcription terminators.The novel intergenic transcripts were subjected to BLAST searches against the NR database, and nonannotated transcripts were considered potential trans-encoded sRNAs.The novel antisense transcript was treated as a cis-encoded sRNA.The secondary structures of the sRNAs were predicted using RNAfold (2.3.2).
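The trimming, alignment, and counting steps described above can be strung together as in the following minimal sketch; file names, the adapter sequence, and the htseq-count attribute settings are placeholders, and the downstream DESeq2/GO analysis is performed separately in R, so this illustrates the general workflow rather than the authors' exact pipeline.

```python
import subprocess

def run(cmd):
    """Run one pipeline step and fail loudly if it errors."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Adapter/quality trimming (quality cutoff 20, error rate 0.1, minimum length 75),
#    mirroring the Cutadapt settings quoted above; file names are placeholders.
run(["cutadapt", "-q", "20", "-e", "0.1", "--minimum-length", "75",
     "-a", "AGATCGGAAGAGC", "-A", "AGATCGGAAGAGC",
     "-o", "trimmed_R1.fq.gz", "-p", "trimmed_R2.fq.gz",
     "sample_R1.fq.gz", "sample_R2.fq.gz"])

# 2) Index the reference genome and align the trimmed paired-end reads with Bowtie2.
run(["bowtie2-build", "reference.fa", "ref_index"])
run(["bowtie2", "-x", "ref_index", "-1", "trimmed_R1.fq.gz", "-2", "trimmed_R2.fq.gz",
     "-S", "aligned.sam", "-p", "4"])

# 3) Count reads per gene with HTSeq; the resulting count table then goes to DESeq2 in R.
run(["htseq-count", "-f", "sam", "-s", "no", "-t", "gene", "-i", "locus_tag",
     "aligned.sam", "annotation.gff"])
```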
Samples for metabolomic analysis
To monitor changes in intracellular metabolite content, samples were collected from the cultures at 6, 16, 36 and 42 h.Each sample consisted of 0.2 mL, which was immediately mixed thoroughly with 1 mL of precooled methanol (40% concentration) at -20 ℃ by vortexing for one second.Three parallel samples were prepared within one minute.Subsequently, the samples were centrifuged at a speed of 10,625 × g and a temperature of 4 ℃ for two minutes to remove the supernatant while preserving the bacterial cells at -80 ℃.The cells were then individually treated with precooled 40% methanol (2 mL) at -20 ℃ and centrifuged for two minutes.The resulting pellets were resuspended in an acidic acetonitrile-water solution (1:1 v/v) containing formic acid (0.1%) that had been precooled to -20 ℃ as well as an ethanol-water solution (3:1 v/v) heated to 100 ℃.The extraction and derivatization procedures followed the protocol described by Chang et al. [51][52][53], after which the obtained supernatant was freeze-dried and stored at -80 °C.
Fed-batch fermentation
After overnight incubation in a seed medium at 37 ℃, the seed culture (10%, v/v) was inoculated into a 5 L bioreactor containing 2 L of fermentation medium supplemented with 1 L of conventional feed.The primary focus is on promoting cell growth while maintaining dissolved oxygen (DO) levels above 40% through regulation of the agitation rate (200-400 rpm).It is assumed that the uninoculated bioreactor achieves maximum dissolved oxygen at an agitation rate of 200 rpm.A continuous supply of conventional feed is introduced to sustain dissolved oxygen levels above 40%.Throughout the fermentation process, circulating water is utilized to maintain a constant temperature of 37 °C.Additionally, pH control is achieved by incorporating ammonium hydroxide and phosphoric acid.
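A highly simplified decision rule capturing the feeding logic described above (feeding is started once residual glycerol falls below 3 g/L while dissolved oxygen is kept above 40%) is sketched below; the returned feed rates are arbitrary illustrative numbers, not the actual process set points.

```python
def feed_rate(glycerol_g_per_l: float, do_percent: float) -> float:
    """Simplified decision rule for the glycerol/glucose feed (mL/h).
    Thresholds follow the text; the rates themselves are illustrative only."""
    if glycerol_g_per_l >= 3.0:
        return 0.0   # enough residual carbon, no feeding yet
    if do_percent < 40.0:
        return 5.0   # oxygen-limited: feed slowly until agitation restores DO
    return 20.0      # carbon-limited and well oxygenated: feed at the normal rate

# Illustrative check of the three regimes.
for gly, do in [(5.0, 60.0), (2.0, 35.0), (2.0, 55.0)]:
    print(f"glycerol={gly} g/L, DO={do}% -> feed {feed_rate(gly, do)} mL/h")
```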
Analysis of pyridoxine and organic acid by HPLC
The cell density (OD 600 ) was measured using a Hybrid Multi-Mode Reader (Synergy Neo2, Bio Tek, USA).PN quantification was performed using a Thermo Fisher high-performance liquid chromatography (HPLC) system (UltiMate™ 3000) equipped with an FLD-3400 detector and employing a gradient program based on previously published protocols with minor modifications.In brief, the fermentation mixture underwent centrifugation, and the resulting supernatant was subjected to HPLC analysis utilizing fluorescence detection.The excitation and emission wavelengths were set at 293 and 395 nm, respectively.The separation of PN was achieved using an octadecylsilyl (ODS) column (Cosmosil AR-II; 250 by 4.6 mm, particle size: 5 μm; Nacalai Tesque) through a gradient program.Mobile phase A consisted of 33 mM phosphoric acid and 8 mM 1-octanesulfonic acid adjusted to pH 2.4 with KOH, while mobile phase B consisted of acetonitrile at an 80% (v/v) concentration.The total flow rate used for separation was maintained at 0.8 mL/min.The linear gradient program employed in this study involved the following steps: from 0% mobile phase B to 1% B over 5 min, from 1% B to 19% B over 5 min, from 19% B to 28% B over 10 min, from 28% B to 63% B over 5 min, from 63% B to 0% B over 2 min, and finally maintaining the composition as 0% B for another 3 min.All the data are presented as the means ± SDs of three independent replicates.The software Origin Pro (version 9.1) or GraphPad Prism (version 8) was utilized for analyzing fermentation data.
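The gradient program described above can be written as a simple timetable; the sketch below reproduces the breakpoints from the text and interpolates the acetonitrile fraction (%B) at intermediate times, purely as a reading aid.

```python
import numpy as np

# (time in min, %B) breakpoints transcribed from the gradient described above.
gradient = [(0, 0), (5, 1), (10, 19), (20, 28), (25, 63), (27, 0), (30, 0)]

times = np.array([t for t, _ in gradient], dtype=float)
percent_b = np.array([b for _, b in gradient], dtype=float)

def b_at(t_min: float) -> float:
    """Linear interpolation of %B (mobile phase B) at a given run time."""
    return float(np.interp(t_min, times, percent_b))

for t in (2.5, 15, 22.5, 26):
    print(f"t = {t:>4} min -> {b_at(t):.1f}% B")
```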
The quantitative analysis of the organic acids was carried out using an Agilent 1260 Infinity series HPLC system (Agilent Technologies, Palo Alto, California, USA), which consisted of a quaternary pump, a thermostated automated injector, a thermostated column compartment and a refractive index detector.The samples were separated and analyzed by an Aminex® HPX-87 H Ion Exclusion Column (BIO-RAD, 7.8 mm × 300 mm).The mobile phase was composed of 5 mM H 2 SO 4 in water (eluent A) using gradient elution as follows: 0-30 min, 100% of A. The injection volume, flow rate and ambient temperature were 20 µL, 0.5 mL/min and 60 ℃, respectively.The RI detection was set as follows: zero compensation, 5%; attenuation, 500,000 nRIU.
Plasmid stability analysis
The fermentation samples in the bioreactor were plated on nonselective plates and incubated at 37 °C for 16-20 h. A single colony was randomly picked and spotted on an LB plate supplemented with kanamycin. The colonies were counted, and the percentage stability was calculated by determining the ratio of the number of colonies on plates with growth to the total number of spots.
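The percentage stability described above is simply the fraction of spotted colonies that grow on the kanamycin plate; the counts in this sketch are illustrative.

```python
def plasmid_retention(resistant_colonies: int, total_spotted: int) -> float:
    """Percentage of spotted colonies that still grow on the kanamycin plate."""
    return 100.0 * resistant_colonies / total_spotted

# Illustrative counts: 97 of 100 randomly picked colonies grow on kanamycin.
print(f"{plasmid_retention(97, 100):.0f}% plasmid retention")
```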
Fig. 1 Gene Ontology (GO) and pathway enrichment analysis. (A) Volcano plot depicting the transcriptome data as a relation between the q value and fold change. (B) GO term analysis of the DEGs showing three aspects: molecular function, cellular component and biological process. (C) Pathway enrichment analysis of the DEGs.
Fig. 2 Transcriptome analysis of the mutant E. coli strain in response to PN overproduction from 6 h to 16 h. The red font represents the precursors of PN, and the red dots indicate the upregulated genes. The numbers after each gene show the fold change in the transcriptome analysis.
Fig. 3 Metabolic analysis of the mutant E. coli strain in response to PN overproduction between the early and late stages of fermentation. (A) Amino acid analysis. (B) Chemical analysis of the TCA cycle. Of the two adjacent squares, the square on the left represents changes occurring in the early phase (from 6 to 16 h), while the square on the right represents changes occurring in the late phase (from 36 to 42 h). The experiments were conducted in triplicate, and the significance (p value) was evaluated by two-sided Student's t test. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001
Fig. 4 Construction of the chassis cell and fermentation optimization. (A) PN titer and cell growth (OD600) of TZ01 with one copy inserted into the genome at the rpnD locus. (B) PN titer and cell growth (OD600) of TZ01 transformed with different plasmids. (C) PN titer and cell growth (OD600) of TZ03 after the addition of succinate and α-KG. The experiments were conducted in triplicate, and the significance (p value) was evaluated by two-sided Student's t test. **P = 0.0038, *P = 0.0131. (D) PN titer and cell growth (OD600) of TZ03 after the addition of different concentrations of Asp
Fig. 5 Time course of fed-batch fermentation under different conditions. (A) Time course of the PN titer, OD600 and DO (%) of fed-batch I. (B) Time course of the PN titer, OD600 and DO (%) of fed-batch II. (C, D) Time course of the PN titer, OD600 and DO (%) of fed-batch III with different C/N ratios: C, C/N ratio = 6.76; D, C/N ratio = 4.56. The black line indicates the trend of the OD600, the red line shows the PN titer, and the purple line represents the DO (%)
Table 2 The orthogonal array of different C/N ratios (the original medium was used as the control)
Table 3 Strains and plasmids used in this study
|
2024-05-15T13:11:35.437Z
|
2024-05-15T00:00:00.000
|
{
"year": 2024,
"sha1": "08459b9aea8a22596a615cc650faf212735b6901",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ab3960b6087feec3be3f6194cfd9c157aee079a4",
"s2fieldsofstudy": [
"Engineering",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
268004173
|
pes2o/s2orc
|
v3-fos-license
|
Successful Use of Ultrasound Guided Quadratus Lumborum Block Without General Anesthesia for Open Appendectomy in a Patient with Heart Failure with Reduced Ejection Fraction- A Case Report and Literature Review
Background Patients diagnosed with Heart Failure with Reduced Ejection Fraction (HFrEF) are at high risk of perioperative cardiovascular complications. While it is important to focus on optimizing their cardiac function, it is also crucial to address and optimize any other modifiable risk factors that could potentially impact postoperative outcome. This also includes careful consideration of anesthetic techniques to suit the patient and facilitate the surgery. However, there is a scarcity of evidence regarding the safety of specific anesthetic approaches for heart failure patients. Case Presentation We describe the case of an adult patient in mid-50s, with a history of ischemic dilated cardiomyopathy with reduced Ejection Fraction (about 25%) who presented with acute gangrenous appendicitis and was scheduled for an open appendectomy. It was deemed to be a high-risk patient for general and spinal anesthesia. With the guidance of a multidisciplinary team, surgery was successfully performed using a quadratus lumborum block with standard monitoring. The patient was comfortable and hemodynamically stable throughout the procedure. The postoperative course was uneventful. Conclusion Quadratus Lumborum Block for open appendectomy can be a beneficial alternative anesthesia technique in high-risk patients that significantly lowers perioperative cardiovascular risk, maintains hemodynamics, enhances satisfaction, and shortens hospital stay.
Introduction
Patients with heart failure with reduced ejection fraction (HFrEF) pose significant perioperative challenges due to the risk of developing perioperative major adverse cardiovascular events (MACE). These patients need optimization of preload, afterload reduction, and minimization of myocardial depression to maintain cardiac output. 1 Therefore, anesthetic techniques should be instituted wisely to prevent perioperative morbidity and mortality. 2,3 For high-risk patients requiring surgery, safer options such as neuraxial or peripheral nerve blocks are advisable. Modern ultrasound imaging aids in precise and targeted local anesthesia, enabling surgeries without compromising vital organ function. We report our early success using a quadratus lumborum block (QLB) for near-complete anesthesia in open appendectomy. Although the QLB is well known for postoperative pain relief, its use as the sole anesthetic technique in abdominal surgery has not yet been well substantiated.
With adequate monitoring and a careful approach, we report a successful case of appendectomy performed under QLB in a patient with heart failure with reduced ejection fraction (HFrEF).
Case Report
A 56-year-old gentleman (65 kg, 160 cm, BMI 25.4 kg/sq.m and ASA-IV E) presented to our emergency department with a two-day history of right lower abdominal pain, fever, and vomiting. His medical history was significant for longstanding type 2 diabetes mellitus (DM), hypertension, dyslipidemia, and dilated cardiomyopathy. Examination revealed tenderness in the right iliac fossa and bilateral fine basal crepitations. CT of the abdomen revealed acute gangrenous appendicitis, and an urgent open appendectomy was scheduled.
Preoperative Assessment and Optimization
We assessed him preoperatively and consulted with a cardiologist. He was admitted to the Emergency Department, and our focus was on the patient's cardiorespiratory system and ability to tolerate a general anesthetic procedure.
On examination, he was stable enough to verbalize his medical history. He also had a history of hospital admission for decompensated heart failure in 2019. At that time, transthoracic echocardiography (TTE) revealed a dilated left ventricle (LV), severely reduced systolic LV function (EF 18%), severe global hypokinesis of the LV, and Grade 3 diastolic dysfunction. The patient was treated with anti-heart failure medications and discharged home a week later. He was prescribed aspirin, bisoprolol, ramipril, dapagliflozin, rosuvastatin, isosorbide dinitrate, and insulin.
At this presentation, he had poorly controlled DM (HbA1c 13%). Laboratory tests revealed an NT-proBNP level of 1122 pg/mL, a troponin T level of 28 ng/L, a CRP level of 158 mg/L, and a WBC count of 16,000/µL. TTE revealed a moderately dilated left ventricle, severely reduced LV systolic function, a biplane LVEF of approximately 25%, severe global hypokinesis of the LV, and Grade 3 diastolic dysfunction. His medications on admission included bisoprolol, rosuvastatin, and insulin.
The patient had a Revised Cardiac Risk Index (RCRI) of 4, corresponding to an approximately 15% risk of major adverse cardiac events (MACE), and was at high risk for both general anesthesia and neuraxial blockade.
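For readers unfamiliar with the index, the RCRI assigns one point for each of six predictors (high-risk surgery, ischemic heart disease, history of heart failure, cerebrovascular disease, insulin-treated diabetes, and a creatinine above 2 mg/dL). The short Python sketch below is our own illustration, not part of the original report; the risk bands are assumptions chosen to be consistent with the roughly 15% MACE risk quoted here for a score of 3 or more.

# Minimal, illustrative RCRI calculation; risk bands are assumed, not taken from the report.
def rcri_score(high_risk_surgery, ischemic_heart_disease, heart_failure,
               cerebrovascular_disease, insulin_treated_diabetes, creatinine_over_2_mg_dl):
    # One point per positive predictor.
    predictors = [high_risk_surgery, ischemic_heart_disease, heart_failure,
                  cerebrovascular_disease, insulin_treated_diabetes, creatinine_over_2_mg_dl]
    return sum(bool(p) for p in predictors)

def estimated_mace_risk(score):
    # Approximate risk bands (assumed), consistent with ~15% for a score of 3 or more.
    bands = {0: "about 4%", 1: "about 6%", 2: "about 10%"}
    return bands.get(score, "about 15% or more")

# Our patient: intraperitoneal (high-risk) surgery, ischemic heart disease,
# prior decompensated heart failure, and insulin-treated diabetes.
score = rcri_score(True, True, True, False, True, False)
print(score, estimated_mace_risk(score))  # prints: 4 about 15% or more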
He was admitted to the SICU for preoperative optimization, arterial and central venous line insertion, Pulse index Continuous Cardiac Output (PiCCO)-guided fluid resuscitation, and antibiotic prophylaxis.
Multi-Disciplinary Team Meeting
Owing to the patient's high risk, we held a multidisciplinary team meeting. The team decided that the patient would undergo an open appendectomy with QLB combined with monitored anesthesia care (MAC). As the patient had gangrenous appendicitis, there was a further risk of deterioration.
Intraoperative Management
The patient was counselled regarding the urgent need for surgery, the perioperative risk of cardiac complications, and the specific anesthesia management, and written consent for anesthesia was obtained. He was brought to the operating room, and standard monitors were applied, along with an invasive arterial line monitor and supplemental oxygen administered via face mask at 5 L/min. Inotropes (dobutamine and adrenaline infusions) were prepared in case he developed hemodynamic instability.
In the left lateral decubitus position, 2 mg of midazolam and 50 µg of fentanyl were administered intravenously as anxiolytics. An ultrasound probe (curvilinear, low-frequency) was placed in a transverse orientation at the midaxillary line at the L2-L4 level to visualize the three expected abdominal layers (transversus abdominis, external oblique, and internal oblique). The probe was then moved posteriorly until the quadratus lumborum muscle (QLM) was confirmed. Under aseptic technique and ultrasound guidance, the needle was inserted and advanced into the anterior aspect of the QLM. Proper positioning of the needle tip was confirmed by hydrodissection, and 20 mL of 0.33% levobupivacaine was injected into the fascia between the right QL muscle and the psoas muscle (Figure 1).
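As a back-of-the-envelope check of the local anesthetic dose (our own arithmetic, not stated in the report): 0.33% levobupivacaine contains 3.3 mg/mL, so 20 mL delivers 66 mg, or roughly 1 mg/kg for this 65 kg patient. Assuming the commonly quoted single-dose ceiling of about 2 mg/kg for levobupivacaine, this leaves a comfortable margin; the sketch below simply encodes that arithmetic.

# Back-of-the-envelope dose check; the 2 mg/kg ceiling is an assumption, not from the report.
volume_ml = 20.0
concentration_percent = 0.33   # 0.33% w/v = 3.3 mg/mL
weight_kg = 65.0

dose_mg = volume_ml * concentration_percent * 10.0   # percent w/v -> mg/mL conversion factor
dose_mg_per_kg = dose_mg / weight_kg
assumed_ceiling_mg_per_kg = 2.0

print(f"Total dose: {dose_mg:.0f} mg ({dose_mg_per_kg:.2f} mg/kg); "
      f"assumed ceiling: {assumed_ceiling_mg_per_kg:.1f} mg/kg")
# prints: Total dose: 66 mg (1.02 mg/kg); assumed ceiling: 2.0 mg/kg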
Surgery was performed 15 min after the block, with supplemental local infiltration and intravenous fentanyl at titrated doses of up to 50 µg. Intraoperative sedation was achieved with intravenous midazolam, 2 mg initially, titrated to a total of up to 4 mg. A low-dose noradrenaline infusion (0.03-0.05 mcg/kg/min) was continued to maintain normal hemodynamics. An additional dose of intravenous fentanyl (50 µg) was administered to blunt the effects of deep peritoneal stimulation. The patient maintained his airway, and vital signs remained around baseline throughout the procedure.
Postoperative Management
The patient was transferred to the SICU for hemodynamic monitoring and organ support. The numerical rating pain score was 0/10 at rest and 2/10 with movement at 24 h, and reached a maximum of 4/10 with movement at 48 h postoperatively. His pain was managed with paracetamol, and he was discharged on the fourth postoperative day. A summary of the clinical care pathway for our patient is shown in Figure 2.
Heart Failure and Perioperative Cardiac Risk
Approximately 64 million people worldwide live with heart failure.8,9 HFrEF is a global health concern and poses many anesthetic challenges.
HFrEF is associated with significant perioperative morbidity and mortality.2,8,10,11 While the in-hospital mortality of HFrEF is greater than that of HFpEF (heart failure with preserved ejection fraction), there are no clinically significant differences in mortality at 30 days or up to 1 year, and no clinically significant differences in hospital readmission rates.8,12 Key to successful anesthetic management in patients with HFrEF are detailed knowledge of the pathophysiological changes, precise intraoperative monitoring, intraoperative modulation of hemodynamic parameters, the right choice of anesthetics, and multidisciplinary team input.
Benefits of Regional Anaesthesia in Heart Failure
General anesthesia and neuraxial anesthesia can reduce cardiac output via the loss of sympathetic tone during induction. This can cause life-threatening circulatory collapse in heart failure patients. It is beneficial to offer locoregional anesthesia to patients undergoing minor or peripheral procedures when possible, to preserve cardiac output and minimize myocardial work.
Patients with HFrEF are particularly sensitive to tachycardia, which leads to increased myocardial oxygen demand. Etiologies leading to tachycardia, such as intubation, surgical stimulation, postoperative pain, nausea, and vomiting, are therefore best avoided and mitigated. Intubation is avoided when possible by choosing adequate locoregional anesthesia, and surgical stimulus and postoperative pain are both mitigated and/or abolished. In addition, the patient's cough reflex remains intact and is thus able to protect his or her airway. In heart failure with reduced ejection fraction (HFrEF), left ventricle filling relies on both atrial contraction and the pressure gradient between the left atrium and the left ventricle. A slight decrease in preload results in a swift reduction in stroke volume (SV), and thus hypovolemia is poorly tolerated. However, even a minor volume load or an increase in afterload can lead to rapid decompensation of the heart, attributed to elevated left ventricular end-diastolic pressure (LVEDP). Therefore, achieving a fine equilibrium between preload and afterload is crucial for improving outcomes and preventing perioperative complications.

Figure 2 Clinical care pathway for our patient, taking into consideration guidelines and recommendations for the management of a cardiac patient presenting for non-cardiac surgery from the American Heart Association, American College of Cardiology, European Society of Cardiology and Canadian Cardiovascular Society. Additionally, we used input from a multidisciplinary team meeting with the surgical team and senior anaesthetists to make our decision. The black ticks highlight our steps in each phase of the patient's care.[5][6][7] Abbreviations: HFrEF, Heart Failure with Reduced Ejection Fraction; RCRI, Revised Cardiac Risk Index; MACE, Major Adverse Cardiac Events; ACE Inhibitors, Angiotensin Converting Enzyme Inhibitors; MDT, Multidisciplinary Team; QLB, Quadratus Lumborum Block.
By reducing opioid consumption and volatile anesthetics, major risk factors for postoperative nausea and vomiting are also avoided.13 Other potential benefits include better perioperative pain control than intravenous opioids, potentially reduced blood loss, reduced perioperative rates of deep vein thrombosis, reduced postoperative fatigue or confusion, earlier recovery of bowel function, earlier discharge from the recovery room and hospital, and earlier ambulation and physical therapy.
Quadratus Lumborum Block and Appendectomy
For effective regional anaesthesia, it is important to isolate the nociceptive pathways involved in a patient with acute appendicitis undergoing appendectomy (Figure 3). The incision of an open appendectomy traverses dermatomes innervated by the right iliohypogastric (L1) and ilioinguinal (L1) nerves and occasionally the 12th intercostal nerve. In ultrasound-guided QLB, a local anesthetic is administered adjacent to the quadratus lumborum (QL) muscle to anesthetize the thoracolumbar nerves. Many published studies support the role of the QLB as the main component of multimodal analgesia for caesarean section deliveries and urologic or abdominal surgeries.14-17 However, there is a paucity of evidence regarding its use as the primary surgical anesthetic technique, especially for appendectomy.
Detailed knowledge of the anatomy and the relevant technical aspects of the quadratus lumborum block (QLB) is important for its successful and effective use. The QLM is a posterior abdominal wall muscle that arises from the posteromedial iliac crest and inserts into the transverse processes of the L1 to L4 vertebrae and the medial border of the twelfth rib. Dermatomal sensory blockade from T12 to L2, including the iliohypogastric and ilioinguinal nerves, has been demonstrated.20 Three different approaches to the quadratus lumborum block have been suggested, based on the anatomical location at which the needle is placed and the local anesthetic is injected relative to the quadratus lumborum muscle; these are termed the anterior, lateral, and posterior techniques. In the anterior approach, the local anesthetic is injected into the interfascial plane between the quadratus lumborum and the psoas muscle, whereas in the lateral technique, the injectate is administered along the lateral border of the quadratus lumborum muscle. Finally, in the posterior approach, the local anesthetic is administered at the posterior border of the quadratus lumborum, between the QL and erector spinae muscles.[22][23]

Figure 3 Pain type, intensity and distribution in appendicitis patients. Acute appendicitis (top left) mainly produces well-localized sharp pain (red color overlay) in the right lower abdomen, with referred pain to the umbilicus (yellow color overlay), which is more poorly localized. Open appendectomy (top middle) produces sharp incisional pain in addition to the pain associated with acute appendicitis (red overlay). Further pain occurs during blunt dissection of deeper muscles and tissues, and visceral pain occurs during peritoneal incision and appendix manipulation. Laparoscopic appendectomy (top right) produces sharp incisional pain at the sites of trocar insertion in addition to the preexisting pain of acute appendicitis (red overlay). After carbon dioxide insufflation, diffuse stretching of the abdomen and peritoneum results in diffuse and poorly localized discomfort (green overlay). There is also referred pain in the shoulder due to irritation of the diaphragm (green overlay). ©2023 Body Scientific International, LLC.
Given the substantial cardiovascular risk factors as well as the multiple comorbid conditions in our patient, we chose to perform QLB using the anterior approach as the primary anesthetic technique (Figure 4). The patient was counselled about the innovative nature of the anesthesia technique that would be utilized. This comprehensive discussion aimed to ensure the patient's understanding and informed decision-making regarding the impending procedure, potential challenges, and the unique aspects of the anesthesia approach. General anesthesia and neuraxial techniques were avoided in this patient because of their propensity to cause systemic hypotension and myocardial depression. We opted for the quadratus lumborum block because it provides superior analgesia compared with the transversus abdominis plane block and offers increased dermatomal coverage and visceral analgesia.14,24 Alternatively, the erector spinae plane block at the level of the transverse process (T8) has been suggested as the main component of multimodal analgesia in abdominal surgeries.25,26 It can also provide visceral analgesia through the spread of local anesthesia to the paravertebral space. However, its efficacy as the sole anesthetic for abdominal surgeries has yet to be validated.
Conclusion
This case describes the successful use of an innovative anesthetic technique (quadratus lumborum block) for urgent appendectomy in a high-risk patient with significant cardiovascular morbidity, thereby reducing the perioperative risk of major cardiovascular events, maintaining perioperative hemodynamic stability, achieving excellent patient satisfaction, and shortening the hospital stay.
Figure 4 Anesthetic coverage by a well-performed quadratus lumborum block, providing adequate coverage of the pain of open appendectomy (blue overlay). However, some amount of visceral pain is not completely blocked. ©2023 Body Scientific International, LLC.