Reaction-based machine learning representations for predicting the enantioselectivity of organocatalysts

Hundreds of catalytic methods are developed each year to meet the demand for high-purity chiral compounds. The computational design of enantioselective organocatalysts remains a significant challenge, as catalysts are typically discovered through experimental screening. Recent advances in combining quantum chemical computations and machine learning (ML) hold great potential to propel the next leap forward in asymmetric catalysis. Within the context of quantum chemical machine learning (QML, or atomistic ML), the ML representations used to encode the three-dimensional structure of molecules and evaluate their similarity cannot easily capture the subtle energy differences that govern enantioselectivity. Here, we present a general strategy for improving molecular representations within an atomistic machine learning model to predict the DFT-computed enantiomeric excess of asymmetric propargylation organocatalysts solely from the structure of catalytic cycle intermediates. Mean absolute errors as low as 0.25 kcal mol−1 were achieved in predictions of the activation energy with respect to DFT computations. By virtue of its design, this strategy is generalisable to other ML models, to experimental data and to any catalytic asymmetric reaction, enabling the rapid screening of structurally diverse organocatalysts from available structural information.

Introduction

Society's growing need for pharmaceuticals, agricultural chemicals, and materials requires a continuous push in the development of asymmetric catalytic methods. 1,2 In particular, enantioselective organocatalysis has emerged as a powerful strategy for the stereocontrolled assembly of structurally diverse molecules 3-5 with constant effort placed in making chemical transformations more selective, efficient, or generally applicable. 6 Although the computational design of highly selective catalysts has long been viewed as a "Holy Grail" in chemistry, 7,8 it is generally still more efficient to experimentally screen a range of potential organocatalysts for a given reaction than to assess their performance in silico. 9 That is because e.e. (enantiomeric excess) values, estimated from the ratio between the competitive reaction rates leading to the two enantiomeric products, 10 are relatively computationally expensive and challenging to predict accurately with standard electronic structure computations. The energy difference between the transition states (TSs) leading to the major and minor enantiomers can be quite small (<2 kcal mol−1), and multiple diastereomeric transition states, stemming from the large conformational space of flexible organocatalysts, can yield the same enantiomer. 7,11 As the relation between rate constants and computed selectivity is exponential, minor errors in computed energies can lead to major errors in the predicted stereochemical outcome. These factors pose a monumental challenge for traditional quantum mechanical (QM) methods, in terms of both accuracy and cost, 12,13 especially if many conformers and substrate-catalyst combinations have to be computed. While the intrinsic error of the quantum chemical level is often addressed in comprehensive benchmark studies, 10,14-17 automated toolkits, 18,19 such as AARON 20 and CatVS, 21 have been developed to streamline the tedious and error-prone task of optimising hundreds of thermodynamically accessible stereocontrolling transition states.
Starting from user-defined libraries, multiple conformations and configurations of TS structures are located and optimised. An alternative strategy relies on molecular descriptors (e.g., HOMO-LUMO gaps, local electro/nucleophilicity, or RDKit descriptors 46) used as the input from which an algorithm can "learn" while being "supervised" by the reaction outcome (output, i.e., ΔΔG‡, e.e., or yield). 47-62 While the reaction outcome is often obtained from experiment (i.e., phenomenological models), alternatives based on computed data are highly valuable as well. 63-67 Indeed, so-called quantum (or atomistic) ML models, which map a three-dimensional molecular structure (encoded as a molecular representation, e.g., CM, 68 SLATM, 69 SOAP 70) to a representative target computed quantum chemically, constitute an appealing complementary strategy owing to their broad applicability and grounding in the laws of physics. 69,71,72 While these approaches provide a favourable combination of efficiency, scalability, accuracy, and transferability for predicting energetic and more complex molecular properties, 71 identifying enantioselective organocatalysts requires precise predictions of the relative energy barriers of the stereocontrolling transition states, a target currently beyond their accuracy. Recently, SOAP features of isolated reactants were used to train a machine learning classifier and predict transition state barriers of regioselective arene C-H functionalization. In that work, a large number of molecular fingerprints were combined with the SOAP features to improve the regression, and the resulting model was outperformed in out-of-sample predictions by a random forest model using chemical descriptors with a physical organic basis (PhysOrg). 73

Here, we provide a stepwise route to improve such QML approaches to reach sufficient accuracy for subtle properties such as those associated with asymmetric catalysis (i.e., e.e.). This objective is achieved by rationally designing a reaction-based representation (vide infra) that is a more faithful fingerprint of the enantiodetermining TS energy. The performance of the approach is demonstrated by accurately predicting the DFT-computed enantiomeric excess of Lewis base-catalysed propargylation reactions directly from the structure of the catalytic cycle intermediates. Unlike other ML models trained on (absolute) experimental e.e. values, 35,36 our model is able to predict the absolute configuration of the excess product, because it is trained on the activation energy of the enantiodetermining step for each pair of enantiomers (pro-(R) and pro-(S) intermediates) independently.

Reaction and organocatalysts database

Asymmetric allylations 75-78 and propargylations 79 of aromatic aldehydes are key C-C bond forming transformations, providing access to optically enriched homoallylic and homopropargylic alcohols, respectively, which serve as valuable building blocks for the synthesis of complex chiral molecules. 80 Catalysts that are selective for allylations are generally not highly stereoselective for propargylations, which has led to a dearth of stereoselective propargylation catalysts. 81-85 Tools to screen dozens of allylation catalysts to find promising candidates for propargylation reactions are therefore highly valuable. 9
To this end, Wheeler and co-workers investigated 76 Lewis base organocatalysts (Scheme 1) 74 and used the computational toolkit AARON 20 to build a database of 760 stereocontrolling transition states to predict their enantioselectivity in the propargylation of benzaldehyde (Scheme 2). 14,74,86 Large databases of kinetic data for asymmetric catalysis generated in silico are scarce. 63 Therefore, this library constitutes an ideal training and validation set for the development of an atomistic ML model with reaction-based representations capable of predicting the e.e. of organocatalysts readily from the structures of intermediates. Note that the workflow presented below would improve the ML performance independently of the size of the training data. The target of the ML model is the DFT-computed relative forward activation energy (E_a, i.e., the energy difference between the TS and the preceding intermediate) associated with each of the 10 (R)- or (S)-ligand arrangements (see Fig. S1†) of the enantiodetermining TS in Scheme 2 for the 76 catalysts in Scheme 1 (11 catalysts of type 1, 16 of type 2, 15 of type 3, 11 of type 4, 13 of type 5, and 10 of type 6), yielding a total of 754 E_a values. 87 e.e. values are computed from E_a (vide infra); thus, accurate predictions of E_a lead to accurate e.e. predictions.

General ML workflow

The general workflow exploited and improved herein relies on a physics-based ML model for the prediction of the e.e. of asymmetric catalytic reactions, as illustrated in Scheme 3 and described hereafter. It comprises two parts. Part (1) is a training procedure that relies on the following steps:

(1) Database construction: a library of 3D geometries and energies of catalytic cycle intermediates is curated. Here, the structures of 754 pairs of intermediates 2 and 3 are optimised with DFT (see the next section) and used to train the ML model. As shown in our previous work, 43 accurate geometries are not necessarily needed as inputs for atomistic ML models; thus, rough-coordinate estimates (e.g., obtained directly from SMILES strings) or low-cost DFTB structures could potentially be used to generate suitable molecular representations.

(2) Generation of molecular representations: the information intrinsically contained within the 3D structure of each intermediate is transformed into a suitable molecular representation. Here we build different variants based on the Spectral London and Axilrod-Teller-Muto (SLATM) 69 representation. SLATM is composed of two- and three-body potentials, which are derived from the atomic coordinates and contain most of the relevant information needed to predict molecular properties. 70,88-94

(3) Training of the model: input representations are mapped onto the corresponding target values (E_a, computed at the DFT level, see the next section) using Kernel Ridge Regression (KRR) 95 with a Gaussian kernel (a minimal sketch is given below). Note that even if target values based on DFT are used here to train the ML model, the strategy proposed hereafter is expected to perform equally well on experimental or more accurate quantum chemical data.

(4) Hyperparameter optimisation and cross-validation: the full dataset is split randomly 100 times into 90/10 training/test sets (678/76 datapoints) to optimise the KRR hyperparameters and obtain the learning curves.

In part (2), the trained ML model is used to predict the activation energy of out-of-sample organocatalysts. The model requires as input the 3D structures of 2 and 3 and delivers the corresponding E_a value.
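To make step (3) concrete, the following minimal sketch shows kernel ridge regression with a Gaussian kernel in plain NumPy. The arrays X_train/X_test hold molecular representations, y_train the E_a targets, and sigma and lam are the kernel width and regularisation hyperparameters tuned in step (4); this is an illustrative implementation, not the code used in the original work.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Pairwise squared Euclidean distances between representation vectors
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def krr_fit(X_train, y_train, sigma, lam):
    # Solve (K + lam*I) alpha = y for the regression weights alpha
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)

def krr_predict(X_test, X_train, alpha, sigma):
    # Predicted E_a for new representations
    return gaussian_kernel(X_test, X_train, sigma) @ alpha
```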
Using the energy of 2 as reference, the relative energies of the enantiodetermining (R)- and (S)-TSs can be calculated, and the e.e. of the catalyst under investigation computed (vide infra).

Quantum chemistry

Catalytic cycle intermediates 2 and 3 were optimised at the B97-D/TZV(2p,2d) level of theory, 96-98 accounting for solvent effects (dichloromethane, ε = 8.93) using the polarizable continuum model (PCM) 99-101 with Gaussian16, 102,103 in analogy with the study by Wheeler and co-workers. 74 Density fitting techniques were used throughout. The structures of 1508 intermediates were obtained via intrinsic reaction coordinate (IRC) calculations 104 from the TS database curated by Wheeler et al. 74 The 754 target E_a values (11 catalysts of type 1, 16 of type 2, 15 of type 3, 11 of type 4, 13 of type 5, and 10 of type 6, each in 5 distinct pro-(R) and pro-(S) ligand arrangements) 87 were computed (relative to the lowest-lying intermediate 2 ligand arrangement) at the same level, which was shown to provide the best compromise between accurate predictions of low-lying TS energies and stereoselectivities for allylation and propargylation reactions. 14 e.e. values were not predicted from Gibbs free energy barriers, but rather from relative energy barriers (i.e., electronic energies plus solvation free energies), since these have been found to be more reliable than those based on either relative enthalpies or free energy barriers for this reaction. 14 The symbol E_a is therefore used to indicate the energy (electronic plus solvation) difference between the TS and the preceding intermediate. For each C2-symmetric catalyst (Scheme 1), 10 distinct ligand arrangements around a hexacoordinate Si centre are possible (BP1-5, (R)- and (S)-, Fig. S1†). 84-86 Since each of these can lead to thermodynamically accessible reaction pathways, and the stereoselectivity is largely a consequence of which ligand arrangement is low-lying for a particular catalyst, all diastereomeric TSs were considered viable and the e.e. was calculated from a Boltzmann weighting of the relative energy barriers. 74 In eqn (1)-(3), ΔE_a,eff is the relative Boltzmann-weighted activation energy of each (R)- or (S)-species, ΔΔE‡ is the difference between the (R)- and (S)-Boltzmann-weighted activation energies, R is the ideal gas constant, and T is the propargylation reaction temperature (195 K).

Machine learning

The Python package QML 105 was used to construct standard SLATM representations. Feature selection and the construction of the reaction-based representations SLATM_DIFF and SLATM_DIFF+ were done using the Python package Scikit-learn. 106 The trained models predict the E_a values, from which the ΔE_a,eff (eqn (1)) and the ΔΔE‡ of each (R)- and (S)-pair (eqn (2)) are obtained, and so the e.e. value of each organocatalyst (eqn (3)). The out-of-sample predictions were done with the same SLATM_DIFF+ models trained in the cross-validation scheme. Additionally, out-of-sample predictions were done by re-training the model on the entire dataset (see Fig. S6†), although this did not lead to noticeable improvement. While simpler representations (e.g., CM, 68 BoB 107) were tested, SLATM performs significantly better (see Fig. S2†).
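The display equations (1)-(3) are not reproduced in this extracted text. Under the standard Boltzmann-weighting scheme consistent with the definitions above, they take the forms ΔE_a,eff = −RT ln Σ_i exp(−E_a,i/RT) over the ligand arrangements of each configuration, ΔΔE‡ = ΔE_a,eff(S) − ΔE_a,eff(R), and e.e. = [exp(ΔΔE‡/RT) − 1]/[exp(ΔΔE‡/RT) + 1]. A minimal sketch of this post-processing follows; the exact forms and sign convention are an assumption reconstructed from the surrounding definitions:

```python
import numpy as np

R = 1.987e-3   # ideal gas constant, kcal mol-1 K-1
T = 195.0      # propargylation reaction temperature, K

def boltzmann_effective_barrier(Ea):
    # Boltzmann-weighted effective barrier over the ligand arrangements (eqn (1), assumed form)
    return -R * T * np.log(np.sum(np.exp(-np.asarray(Ea) / (R * T))))

def enantiomeric_excess(Ea_R, Ea_S):
    dE_R = boltzmann_effective_barrier(Ea_R)
    dE_S = boltzmann_effective_barrier(Ea_S)
    ddE = dE_S - dE_R                 # eqn (2); > 0 favours the (R)-product (assumed convention)
    # eqn (3): (k_R - k_S)/(k_R + k_S) with k ~ exp(-E/RT), equal to tanh(ddE/2RT)
    return 100.0 * np.tanh(ddE / (2 * R * T))
```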
Molecular representations

The key step of the workflow presented above is the generation of a molecular representation, which is mapped onto the target value (i.e., the activation energy E_a) and used as a fingerprint of the enantiodetermining TS. Representations can be constructed from single molecules and, more recently, as "ensemble representations": instead of associating one fixed configuration of atoms with a single-point energetic target value, information from multiple structures can be combined to generate a representation for an ensemble property, such as the free energy of solvation (ΔG_sol). 108 This has recently been achieved by calculating the ensemble average of the FCHL19 representations 109,110 of a set of configurational snapshots obtained through MD sampling. 108 Here, we propose an alternative approach that goes beyond standard QML representations (i.e., KRR using one given gas-phase geometry) 108 by describing the chemical transformation occurring during the enantiodetermining step of an asymmetric reaction through the comparison of the representations of the two catalytic cycle intermediates preceding and following the stereocontrolling TS. This allows us to generate a "reaction-based" representation, which can be closely mapped to the activation energy of the enantiodetermining step, as discussed later.

We rely on "dissimilarity" plots as a diagnostic tool to determine whether a particular representation can adequately characterize the stereocontrolling step. By dissimilarity plots, we refer to histograms of the Euclidean distance between any two representations vs. the difference in their target property, which in this case is E_a. For a particular representation to be effective, small distances between structures must correspond to small differences between target properties, as the Euclidean distance is used to measure the similarity of two molecular representations. Similar plots have previously been exploited to analyse the behaviour of molecular representations, 70,111 but only parenthetically. Here, we highlight their importance as a fundamental analytical tool for understanding the performance of molecular representations in kernel methods for asymmetric catalysis and demonstrate their utility for constructing reliable ML models (a minimal sketch of the diagnostic is given below).

Before discussing our proposed representation variants, we report in Fig. 1a the performance of the standard SLATM representation using the structure of a single intermediate (e.g., 2). Owing to the structural similarities between 2 and the enantiodetermining TS (in both, the Si atom has 6 coordination sites occupied, whereas the coordination number is only 5 or 4 in intermediate 3), intermediate 2 was first chosen to construct the input representation. The learning curve for the prediction of E_a using the SLATM representation (blue) of intermediate 2 (denoted SLATM_2) reaches a mean absolute error (MAE) of 0.54 ± 0.06 kcal mol−1 with 90% of the data used for training (i.e., 680 structures). Considering the exponential relationship between relative activation energies and e.e. values, which implies a dramatic propagation of errors, the accuracy of this approach is insufficient. This is further demonstrated in Fig. 2, which shows the correlation between the predicted and reference ΔΔE‡ values (MAE = 0.96 kcal mol−1), and in Fig. 3, where the e.e. values obtained from SLATM_2 are compared to the reference quantities: the large number of red-coloured cells indicates large deviations between ML-predicted and DFT-computed e.e. values.
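As an illustration, the dissimilarity diagnostic can be generated with a few lines of Python; X is the matrix of representations, y the vector of E_a values, and the bin count is arbitrary (a sketch, not the original analysis code):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist

def dissimilarity_plot(X, y, bins=80):
    """2D histogram of representation distance vs target difference."""
    d_repr = pdist(X)                    # Euclidean distances, all pairs
    d_repr /= d_repr.max()               # normalise, as in Fig. 1b
    d_target = pdist(y.reshape(-1, 1))   # |E_a(i) - E_a(j)| for all pairs
    plt.hist2d(d_repr, d_target, bins=bins, cmap="viridis")
    plt.xlabel("normalised Euclidean distance between representations")
    plt.ylabel("|difference in E_a| (kcal/mol)")
    plt.show()
```

For an effective representation, the histogram mass should collapse toward the origin: near-zero representation distances should never pair with large target differences.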
The rather poor mapping between SLATM_2 and the E_a of the stereocontrolling step (associated with the key 2 → 3 transition state) is evident from the visual inspection of Fig. 3, where the large number of red-coloured cells associated with catalysts bearing substituents a, d, e, g, f and j indicates inaccurate predictions of e.e. values, and from the analysis of the corresponding dissimilarity plot in Fig. 1b (left). In the latter, the large scatter of points lying outside the area delimited by the dotted lines, particularly when the Euclidean distance tends to zero, means that two different structures might be considered equal by the kernel (distance ≈ 0) despite leading to very different E_a values. Thus, the shape of the dissimilarity plot of SLATM_2 deviates considerably from the ideal one, indicated by the dotted straight lines. 70 Note that the MAE for E_a increases to 0.77 ± 0.05 kcal mol−1 (see Fig. S2†) if starting from the SLATM representation of 3, the intermediate following the enantiodetermining step in the catalytic cycle (Scheme 2). The higher accuracy achieved using the representation of 2 vs. 3 could be attributed to the reaction step being exergonic and, according to the Hammond postulate, 112 the enantiodetermining TS resembling 2 more closely. In any case, neither the structure of 2 nor that of 3 provides a sufficiently good fingerprint of E_a on its own. Unlike other intrinsic molecular properties that depend on the structure of a single molecule, 108 enantioselectivity is determined by electronic and/or steric effects stabilising or destabilising one enantiomeric TS to a greater or lesser degree than the other. In that sense, it is to be expected that our target accuracy for E_a, well below 1 kcal mol−1, cannot be reached using as input only one structure that does not adequately describe the stereocontrolling transition state. To improve the model performance, an alternative representation is constructed by comparing the representations of both intermediates.

Fig. 3 caption: e.e. values obtained from DFT computations (top left) and from the ML predictions of E_a using the three approaches discussed. These predictions are obtained by averaging the predictions obtained from the cross-validation scheme with 100 different random train/test splits. Cells are coloured according to their accuracy with respect to the reference, ranging from dark green (best) to dark red (worst). Positive e.e. values correspond to excess (R)-alcohol formation, negative values to excess (S)-alcohol formation.

Knowing that neither the structure of 2 nor that of 3 is uniquely related to the corresponding activation energies, we can generate such a "reaction-based" representation that draws information from both structures by subtracting the global SLATM of 2 from that of 3. This is reminiscent of binary reaction fingerprints (obtained by subtracting the products' from the reactants' RDKit 46 fingerprints), which reflect changes in molecular features over reaction processes. 113 The resulting representation (denoted SLATM_DIFF) accounts for the differences between the two intermediates and is thus more sensitive to the structural changes occurring during the enantiodetermining step. By subtracting "reactant" from "product", the global features that do not change during the catalytic cycle step are eliminated from the representation, and the structural changes between intermediates are highlighted. In this way, we obtain a more faithful representation of the reaction step, which corresponds to a more unique fingerprint of E_a.
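In code, the construction amounts to a vector difference of the two global SLATM representations. The sketch below assumes the generate_slatm and get_slatm_mbtypes helpers of the QML package cited above; the argument names are illustrative:

```python
import numpy as np
from qml.representations import get_slatm_mbtypes, generate_slatm

def slatm_diff(coords_2, charges_2, coords_3, charges_3, mbtypes):
    """Reaction-based representation: SLATM(intermediate 3) - SLATM(intermediate 2)."""
    rep_2 = generate_slatm(coords_2, charges_2, mbtypes)
    rep_3 = generate_slatm(coords_3, charges_3, mbtypes)
    return np.asarray(rep_3) - np.asarray(rep_2)

# mbtypes must be built once from the nuclear charges of the whole dataset,
# so that every structure is expanded over the same atom-type sets:
# mbtypes = get_slatm_mbtypes([charges for (coords, charges) in dataset])
```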
Although the construction of SLATM_DIFF requires the SLATM representations of both intermediates (2 and 3), the computational cost associated with its generation is negligible. As depicted in the dissimilarity plot (Fig. 1b, middle), the reaction-based representation (SLATM_DIFF) behaves significantly better than SLATM_2: the difference in E_a values tends to zero as the Euclidean distance between the representations tends to zero. In line with this observation, the learning curve (orange line in Fig. 1a) is significantly improved. The MAE of SLATM_DIFF is reduced to 0.31 ± 0.2 kcal mol−1, roughly 50% better than SLATM_2 and up to 60% better than SLATM_3, using 90% of the data for training (i.e., 680 structures) in the train/test splits of the cross-validation scheme. Given the rationality of the approach leading to the construction of SLATM_DIFF, its gain in accuracy is encouraging. As shown in Fig. 2 and 3, the halved MAE leads to a very notable improvement in the prediction of e.e. values. Nevertheless, we note again that very small errors in E_a are amplified when e.e. values are calculated, and therefore even a small accuracy gain can be significant.

The high probability density of normalised Euclidean distances between 0.5 and 0.75 seen in Fig. 1b (middle, SLATM_DIFF) indicates that the shape adopted by the dissimilarity histogram of SLATM_DIFF is not yet ideal, and that further improvement is possible. To achieve higher accuracy, we focus on improving the shape of this dissimilarity plot. Notice that in our ML model the Euclidean distance is used as a measure of similarity between representations. This means that features with high variance (i.e., those that change the most between molecules) dominate the notion of similarity, as they contribute the most to the Euclidean distance between representations. By feature, we mean each of the terms in the molecular representation, which, for SLATM, consist of two-body (London dispersion) and three-body (Axilrod-Teller-Muto) potentials computed on groups of atoms closer than a certain cut-off (here, 4.8 Å). The results of these potentials are averaged over their atom-type sets (e.g., all C-C interactions for the two-body terms, all C-C-C for the three-body terms), which are then concatenated to generate the SLATM vector. The size of the SLATM representation depends on the atom-type sets present in the database. Given that our dataset contains the elements C, H, O, N, F, Cl and Si, the total number of features of the SLATM representation is 27 827. In SLATM_DIFF, features with high variance dominate the notion of similarity, measured through the Euclidean distance. However, we are using SLATM to predict a property that is very different from the single-molecule properties for which it was originally designed. Consequently, features with high variance in SLATM are not necessarily the most important fingerprints of E_a. In pursuit of the best possible fingerprint of the activation energy, we assign importance scores to each feature and focus on the most relevant ones. The linear correlation coefficient (r²) between each feature and the target property is used as an estimate of the importance of the different terms in the representation. The results, presented in Fig. 4, show that SLATM_DIFF contains only a few high-variance features, while the computed importance scores are spread over many other features that have relatively small variances. Simply put, the variances of the features of the SLATM_DIFF representation are not well correlated with their real importance for this application. Based on this observation, an improved representation, labelled SLATM_DIFF+, is generated by selecting only the N_f most important features of SLATM_DIFF (specifically, N_f = 500) and discarding the rest. This feature selection was done using only the training data at each train/test split of the cross-validation step, as doing otherwise could lead to severe overfitting. Nevertheless, the importance scores were consistent across the cross-validation splits thanks to the robustness of the linear regressions.
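A minimal version of this importance-based pruning, computed on the training split only as described above, might look as follows; n_keep corresponds to N_f (a sketch, not the original code):

```python
import numpy as np

def top_feature_indices(X_train, y_train, n_keep=500):
    """Rank features by squared Pearson correlation (r^2) with E_a.

    Computed on the training split only, to avoid leaking information
    from the test set into the feature selection.
    """
    Xc = X_train - X_train.mean(0)
    yc = y_train - y_train.mean()
    cov = Xc.T @ yc
    denom = np.sqrt((Xc**2).sum(0) * (yc**2).sum())
    # guard against constant (zero-variance) features
    r = np.divide(cov, denom, out=np.zeros_like(cov), where=denom > 0)
    return np.argsort(r**2)[::-1][:n_keep]

# SLATM_DIFF+ keeps only the selected columns:
# idx = top_feature_indices(X_train, y_train, n_keep=500)
# X_train_plus, X_test_plus = X_train[:, idx], X_test[:, idx]
```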
An improved relationship between representation and target distances (Fig. 1b, right) is obtained with the SLATM_DIFF+ representation, in spite of its reduced size. This simple feature selection leads to a noticeable improvement in accuracy, with a cross-validated MAE of 0.25 ± 0.4 kcal mol−1 (see the green curve in Fig. 1a). Using the SLATM_DIFF+ representation, the resulting cross-validated correlation coefficients for the difference between the (R)- and (S)-activation energies (ΔΔE‡, Fig. 2) in the test set are greatly improved (r² > 0.95). The quality of our fitted model far surpasses previously reported approaches. Good qualitative and even quantitative agreement is achieved between predicted and reference e.e. values computed using the test data splits from the cross-validation runs (Fig. 3). Since linear correlation constitutes a very limited notion of relevance, other methods, such as nonlinear mutual information criteria, 114 were tested as feature importance estimators, but the resulting models showed similar or even worse performance (see the ESI†). Similarly, methods based on metric learning 111,115 did not lead to any improvement, as the high dimensionality of the problem led to severe overfitting. Ceriotti et al. 116 suggested the use of principal covariates regression (PCovR) to solve similar issues. 117 PCovR is a supervised feature selection method that interpolates between principal component analysis (PCA) and linear regression. Herein, because the variance of the features is essentially unrelated to the importance scores, the addition of PCA would not offer any advantage. Nevertheless, these findings highlight the importance of adapting molecular representations to the application at hand, while still preserving the overall generality of the approach.

Chemical insight on asymmetric propargylation catalysts

The ML model is able to reproduce the main trends in e.e. observed across the different catalysts from the 100 different random train/test splits (Fig. 3, top left table). For example, using SLATM_DIFF+ (Fig. 3, bottom right table), which gives the best predictions with respect to the reference data, catalysts built on scaffold 4 (Scheme 1) are revealed to be outliers, yielding e.e. values that are significantly different from those obtained with the other scaffolds for a given substituent a-j. This is due to the different placement of the substituent X on the organocatalyst scaffold. Excluding 4, the effect of the different substituents on the e.e. is qualitatively the same across all scaffolds, with the exception of f (iPr) and j (Ph). The introduction of a phenyl group on the organocatalyst scaffold leads to highly varied e.e. values, from −97 (S) to 91 (R).
This variation, which is due to the presence of favourable π-stacking and CH/π interactions stabilising some (S)-TSs and degrading the enantioselectivity, 74 is nicely captured by SLATM_DIFF+. Overall, the high enantioselectivity displayed by (most) catalysts in the library can be attributed to the favourable electrostatic interaction between the formyl C-H of benzaldehyde and one of the chlorines bound to Si, which is present in the lowest-lying (R)-ligand arrangement and absent in the (S)-structures. 74 In their computational screening with AARON, 74 Wheeler and co-workers identified derivatives of 6 as promising candidates for propargylation reactions. However, these catalysts are difficult to synthesize stereoselectively. 81,118 Recently, Malkov et al. reported the synthesis of a set of terpene-derived atropisomeric bipyridine N,N′-dioxides 7 (Fig. 5) as easily separated diastereoisomers. 119 The aromatically substituted catalysts 7j and 7k were shown to be highly active and selective (e.e. of 96 and 97, respectively); additionally, the TS structures for 7 were computationally shown to be nearly identical to those of the corresponding substituted forms of 6. 119 Prompted by these results, we decided to test the ML model with SLATM_DIFF+ by predicting the activation energies of the 10 distinct ligand arrangements afforded by 7j and 7k. The out-of-sample results are shown in Fig. 5. Despite scaffold 7 and substituent k not being in the original training set, excellent correlation between predicted and reference E_a values is obtained (r² = 0.97). Thus, the enantioselectivity of these out-of-sample catalysts is qualitatively reproduced, although exact quantitative agreement between DFT- and ML-predicted ΔΔE‡ values is not achieved (1.2 and 1.3 kcal mol−1 for 7j and 7k, respectively, vs. 0.2 and 0.5 kcal mol−1).

In summary, we provide a logical route to improve atomistic ML methods for the enantioselectivity prediction of asymmetric catalytic reactions, which is limited by both the required accuracy and the small amount of data generally available. Firstly, the intermediates associated with the enantiodetermining step (2 and 3 in Scheme 2) must be identified and their SLATM representations generated. Secondly, using the difference between the two SLATM representations (SLATM_DIFF) as input, a set of features that maps the activation energy accurately can be obtained. Finally, feature engineering can be used to improve SLATM_DIFF, keeping only the most relevant features that relate to the target property. The results show that the ML workflow presented herein is able to accurately predict enantioselectivity from the molecular structures of catalytic cycle intermediates.

Conclusions

In this work, we have developed an atomistic machine learning model to predict the DFT-computed e.e. of Lewis base-catalysed propargylation reactions (Scheme 2). The use of dissimilarity plots allowed us to rationally develop and progressively improve a reaction-based representation that can be adequately mapped onto the activation energy of the stereocontrolling step. We identified two fundamental limitations of many standard physics-based molecular representations for subtle catalytic properties. First, we have shown that neither the structure of the preceding nor that of the following catalytic cycle intermediate is a fine fingerprint of the energy of the stereocontrolling transition state. This issue can be circumvented by using a reaction-based molecular representation derived from both structures.
Finally, we have demonstrated how feature selection can be used to fine-tune this representation. The resulting model can accurately predict the DFT-computed enantioselectivity of asymmetric propargylations from the structure of catalytic cycle intermediates. Thus, it constitutes a valuable tool for quickly identifying potentially selective propargylation organocatalysts. By design, the model is well balanced between computational cost, generality and accuracy. It is easy to implement across a wide region of chemical space and is seamlessly compatible with experimental (e.g., X-ray structures of stable intermediates) and computational data alike. Our results prove that semi-quantitative predictions of e.e. values in asymmetric catalysis can be achieved by accurately predicting E_a. We conclude that atomistic ML models with adequately tailored molecular representations can be a practical and accurate alternative to both traditional quantum chemical computations of relative rate constants and multivariate linear regression with physical organic molecular descriptors. The stepwise improvement of the model described in this work opens the door to more complex reaction-based and catalytic cycle-based representations. Indeed, ensemble representations, which were recently introduced for properties very sensitive to conformational freedom, such as the free energy of solvation ΔG_sol, 108 are a promising path beyond the single structure-to-property paradigm and allow for further generalisation once combined with the approach discussed herein. Such methodologies will be explored in future work for the accurate screening of enantioselective catalysts in asymmetric reactions.

Author contributions

S. G. performed DFT computations and analyzed the results. R. F. trained and improved the ML models. S. G. and R. F. jointly wrote the manuscript with help from R. L. M. D. W. and R. L. provided feedback on the DFT and ML components, respectively. S. B. and M. D. W. ran preliminary computations initiating this work. All authors discussed the results and commented on the manuscript. C. C. conceived the project with M. D. W., provided supervision and wrote the final version of the manuscript.

Conflicts of interest

There are no conflicts to declare.
Near full length genome of a recombinant (E/D) cosavirus strain from a rural area in the central region of Brazil

In the present article we report the nearly full length genome of a Cosavirus strain (BRTO-83) isolated from a child with acute gastroenteritis living in a rural area in the central region of Brazil. The sample had previously been screened and found negative for common enteric viruses (i.e., rotavirus and norovirus), bacteria, endoparasites and helminths. Evolutionary analysis and phylogenetic inferences indicated that the Brazilian BRTO-83 Cosavirus strain is a recombinant virus highly related to the E/D recombinant strain NG385 (GenBank JN867757), which was isolated in Nigeria from an acute flaccid paralysis patient. This is the first report of a recombinant E/D Cosavirus strain detected in Brazil, and the second such genome described worldwide. Further surveillance and molecular studies are required to fully understand the epidemiology, distribution and evolution of the Cosavirus.

Results and Discussion

A survey was conducted throughout the northern state of Tocantins, Brazil, during the years 2010 through 2016 to screen human feces for the presence of enteric pathogens. Fecal specimens were screened for enteric viruses (i.e., rotavirus and norovirus), bacteria (i.e., E. coli and Salmonella sp.), endoparasites (i.e., Giardia sp.) and helminths using conventional (i.e., culture techniques) and molecular methods (i.e., commercial enzyme immunoassays). NGS techniques were used to identify possible unscreened or undetected enteric viruses, and a recombinant Cosavirus strain was detected in sample BRTO-83. The near full length genome (6278 nt) was sequenced and compared with previously described Cosavirus strains (Fig. 1 and Supplementary data). Recombination analysis indicated that the first 2680 bases of the BRTO-83 genome are related to genotype E, while the remaining nucleotides are related to genotype D (Fig. 2 and Supplementary data). This pattern of Cosavirus recombination has been reported only once before, in 2007, from a Nigerian patient suffering from AFP (strain NG385; JN867757). Additional phylogenetic analysis, performed using complete genomes and the partitions corresponding to the genome breakpoints, confirmed that the Brazilian BRTO-83 strain is closely related to the E/D strain NG385 from Nigeria. The recombination breakpoints detected in the two isolates, BRTO-83 and NG385, coincide, suggesting that both sequences display identical mosaic patterns (Fig. 2 and Supplementary Material). The trees inferred from the partitions composed of recombination-free regions also suggest that the recombinant E/D strains detected in Brazil and Nigeria may share the same evolutionary ancestor (Fig. 1 and Supplementary data). The child infected by the BRTO-83 strain was suffering from a severe episode of diarrhea, and the data strongly suggest that a Cosavirus was, in fact, the causative agent of the disease. A common limitation of virome analyses is that epidemiological background information on potential contact with animals, consumption of contaminated food and/or the medical records of the patients is not generally available. The present study is no exception; no epidemiological connection between the patient and the virus could be established.
Cosaviruses are not recognized as pathogens associated with gastroenteritis, and the patient may have been infected with other bacteria or parasites not captured by the screening workflow. In addition, the protocol applied focused on viral metagenomics and removed eukaryotic cells from the test samples, making it impossible to use the BRTO-83 sample library to screen for reads from bacteria, parasites or helminths. Reports of human Cosavirus infection are still very limited worldwide, but the association of these viruses with gastroenteritis is increasing 4,8,9. In Brazil, there is only one study considering the role of Cosaviruses; nevertheless, their association with gastroenteritis could not be established 3. This is the first report of a recombinant E/D Cosavirus strain detected in Brazil, and the second such genome described worldwide. Further surveillance and molecular studies are required to fully understand the epidemiology, distribution and evolution of the Cosavirus.

Materials and Methods

The protocol used to perform deep sequencing was a combination of several protocols normally applied to viral metagenomics and/or virus discovery, and has been partially described by da Costa et al. 10. In summary, 50 mg of the human BRTO-83 fecal sample was diluted in 500 μl of Hanks' buffered salt solution (HBSS), added to a 2 ml impact-resistant tube containing lysing matrix C (MP Biomedicals, USA) and homogenized in a FastPrep-24 5G Homogenizer (MP Biomedicals, USA). The homogenized sample was centrifuged at 12,000 × g for 10 min, and approximately 300 μl of the supernatant was then percolated through a 0.45 μm filter (Merck Millipore, Billerica, MA, USA) in order to remove eukaryotic- and bacterial-cell-sized particles. Approximately 100 μl, roughly equivalent to one fourth of the tube volume, of cold PEG-it Virus Precipitation Solution (System Biosciences, CA, USA) was added to the obtained filtrate, and the contents of the tube were gently mixed and then incubated at 4 °C for 24 hours. After the incubation period, the mixture was centrifuged at 10,000 × g for 30 minutes at 4 °C, and the supernatant (~350 μl) was discarded. The pellet, rich in viral particles, was treated with a combination of nuclease enzymes (TURBO DNase and RNase Cocktail Enzyme Mix, Thermo Fisher Scientific, CA, USA; Baseline-ZERO DNase, Epicentre, WI, USA; Benzonase, Darmstadt, Germany; and RQ1 RNase-Free DNase and RNase A Solution, Promega, WI, USA) in order to digest unprotected nucleic acids. The resulting mixture was subsequently incubated at 37 °C for 2 h. After incubation, viral nucleic acids were extracted using a ZR & ZR-96 Viral DNA/RNA Kit (Zymo Research, CA, USA) according to the manufacturer's protocol. cDNA synthesis was performed with AMV reverse transcriptase (Promega, WI, USA), and second-strand cDNA synthesis was performed using DNA Polymerase I Large (Klenow) Fragment (Promega, WI, USA). Subsequently, a Nextera XT Sample Preparation Kit (Illumina, CA, USA) was used to construct a DNA library, which was identified using dual barcodes. For size selection, Pippin Prep (Sage Science, Inc.) was used to select a 300 bp insert (range 200-400 bp). The library was deep-sequenced using a Hi-Seq 2500 Sequencer (Illumina, CA, USA) with 126 bp ends. Bioinformatic analysis was performed according to the protocol previously described by Deng et al. 11. Contigs sharing a percent nucleotide identity of 95% or less were assembled from the obtained sequence reads by de novo assembly.
The contigs included rotavirus sequences as well as others, such as enteric virus, human, fungal and bacterial sequences. The resulting singlets and contigs were analyzed using BLASTx to search for similarity to viral proteins in GenBank's Virus RefSeq. The contigs were also compared to the GenBank nonredundant nucleotide and protein databases (BLASTn and BLASTx). After the identification of a Cosavirus, a reference template sequence was used for mapping the full-length genome with Geneious R9 7. BLASTn was initially used to identify viral sequences through their sequence similarity to annotated viral genomes in GenBank. Based on the best hits of the BLASTx searches, the following 14 polyprotein (6329 nt) sequences, listed by their GenBank accession numbers, were chosen for the subsequent analyses: JN867756, JN867757, JN867758, JN867759, FJ555055, FJ438908, KJ194505, KM516909, AB920345, KJ396940, GU968209, FJ438903, FJ438902, FJ438907 and KX545380. These genomes were then aligned using Clustal X software 12. Subsequently, a phylogenetic tree was constructed by the maximum likelihood approach, and branch support values were assessed by the Shimodaira-Hasegawa test. All trees were inferred using PhyML software 13. The HKY model and gamma distribution were selected according to the likelihood ratio test (LRT) implemented in the jModelTest software 14. To determine the extent of recombination among sequences, we used the software RDP v.4, which utilizes a collection of methods; a detailed explanation of each method implemented in the RDP software can be found in the user's manual (http://darwin.uvigo.es/rdp/rdp.html). Breakpoints were identified by means of the software Simplot, version 2.5, which is available at http://sray.med.som.jhmi.edu/RaySoft. Recombination was checked by means of Bootscan, using the neighbor-joining approach and the F84 substitution model. A sliding window of 350 bp with 60-bp increments was used. Window trees were replicated 300 times to provide bootstrap support for the permuted trees.

Figure 2 caption: Recombination pattern of the Brazilian isolate BRTO-83. Colored lines represent the probability (given as a bootstrap value) that genomic regions belong to a certain parental genotype. The x-axis represents the sequence length in base pairs (bp); the y-axis represents the statistical support (bootstrap) based on 500 replicates. Each plotted line refers to a certain genotype (see the color code on the small panel). A vertical arrow indicates the recombination breakpoint on the genome region. The plot indicates a single breakpoint in the polyprotein region of isolate BRTO-83 at position 2798. This breakpoint corresponds to the initial part of the Cosavirus protein P1 (see the diagram below the recombination plot). Strains used as parental references are also shown in the recombination plot. Trees were constructed using the partitioned genome. The first partition tree (nucleotides 1-2798) demonstrates isolate BRTO-83 clustering with genotype E (left tree), and the second tree (nucleotides 2799-6269) demonstrates isolate BRTO-83 within the clade formed by genotype D strains (right tree). Genotype reference strains are indicated by filled dots on the genome tree. The recombinant strain NG385 is indicated by gray triangles in the trees. Analyses were performed using a neighbor-joining method and a Kimura 2-parameter model in windows of 250 bp sliding along the sequences in increments of 40 bp.
For parental references, non-recombinant sequences were used. The bootscan plot was obtained from an analysis performed using RDP v.4 software.
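For orientation, the windowing logic behind such scans can be illustrated with a simplified sliding-window percent-identity calculation over an alignment; the real Simplot/Bootscan analyses additionally build bootstrapped neighbor-joining trees under the F84 model, so the sketch below is only a minimal stand-in with the window and step sizes taken from the Methods:

```python
def sliding_window_identity(query, references, window=350, step=60):
    """Percent identity of an aligned query vs each aligned reference, per window.

    query      : aligned query sequence (string, gaps as '-')
    references : dict mapping reference name -> aligned sequence of equal length
    """
    positions, results = [], {name: [] for name in references}
    for start in range(0, len(query) - window + 1, step):
        positions.append(start + window // 2)   # window midpoint
        q = query[start:start + window]
        for name, ref in references.items():
            r = ref[start:start + window]
            # count matches over non-gap columns only
            pairs = [(a, b) for a, b in zip(q, r) if a != '-' and b != '-']
            ident = sum(a == b for a, b in pairs) / len(pairs) if pairs else 0.0
            results[name].append(100.0 * ident)
    return positions, results
```

A crossing of the identity curves of two parental genotypes near one position is the kind of signal that, in the full bootscan analysis, marks the recombination breakpoint.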
Potential wells for AMPA receptors organized in ring nanodomains

By combining high-density super-resolution imaging with a novel stochastic analysis, we report here a peculiar nano-structure organization revealed by the density function of individual AMPA receptors moving on the surface of cultured hippocampal dendrites. High-density regions of hundreds of nanometers for the trajectories are associated with local molecular assembly generated by direct molecular interactions due to physical potential wells. We found that for some of these regions, the potential wells are organized in ring structures. We could find up to 3 wells in a single ring. Inside a ring, receptors move in a small band whose width is of hundreds of nanometers. In addition, rings are transient structures and can be observed for tens of minutes. Potential wells located in a ring are also transient, and the position of their peaks can shift with time. We conclude that these rings can trap receptors in a unique geometrical structure, contributing to shaping receptor trafficking, a process that sustains synaptic transmission and plasticity.

Introduction

Receptor trafficking has been identified as a key feature of synaptic transmission and plasticity [1-7]. Yet, the mode of trafficking remains unclear: after receptors are inserted in the plasma membrane of a neuron, classical single-particle tracking revealed that the motion can be either free or confined Brownian motion [8,9]. Recently, super-resolution light optical microscopy techniques for in vivo data [10,11] have allowed monitoring a large number of molecular trajectories at the single-molecule level and at nanometer resolution. Using a novel stochastic analysis approach [11], we have identified that highly confined density regions are generated by large potential wells (hundreds of nanometers) that sequester receptors. In addition, fluctuations in the apparent diffusion coefficient reflect changes in the local density of obstacles [12]. Classically, cell membranes are organized in local microdomains [13,14] characterized by morphological and functional specificities. In neurons, prominent microdomains include dendritic spines and synapses, which play a major role in neuronal communication. In this report, we show that AMPARs, which are key mediators of excitatory glutamatergic transmission, not only relocate between synaptic and extrasynaptic sites by lateral diffusion [3,8,15], but can also be trapped in transient ring structures that contain several potential wells.

We identified three regions (Fig. 1) where trajectories form rings (Fig. 2A-B): specifically, a first feature that defines a ring is that receptor trajectories are constrained into an annulus-type geometry (Fig. 2C) located on the surface of the membrane. However, rings are not necessarily localized within dendritic spines. To further characterize a ring, we used the stochastic analysis developed in [11] to compute the vector field of the drift, which is, surprisingly, restricted to an annulus (Fig. 2B, Rings 2 and 3): the inner radius is of the order of 500 nm, while the external radius is around 1 µm. In addition, inside a ring, we found several attracting potential wells of various sizes that co-localize with regions of high density (Fig. 3A). From the density distribution function along the ring (Fig. 3), we identified three wells.
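For orientation, the local drift and diffusion maps underlying such an analysis are typically estimated from the trajectory increments with the standard first- and second-moment estimators, a(x) ≈ ⟨Δx⟩/Δt and, in two dimensions, D(x) ≈ ⟨|Δx|²⟩/(4Δt). The sketch below is a minimal illustration of these estimators; the grid size and data layout are assumptions, not parameters of the original study:

```python
import numpy as np

def drift_diffusion_maps(trajectories, dt, bin_size=0.05):
    """Grid estimators of the local drift field and diffusion coefficient.

    trajectories : list of (n_i, 2) arrays of successive x, y positions (µm)
    dt           : acquisition time step (s)
    bin_size     : spatial bin size (µm), an assumed value
    """
    pts, steps = [], []
    for traj in trajectories:
        pts.append(traj[:-1])              # positions where each step starts
        steps.append(np.diff(traj, axis=0))  # displacements between frames
    pts, steps = np.vstack(pts), np.vstack(steps)
    keys = np.floor(pts / bin_size).astype(int)
    drift, diff = {}, {}
    for k in set(map(tuple, keys)):
        m = np.all(keys == k, axis=1)
        drift[k] = steps[m].mean(0) / dt            # a(x) ~ <dx>/dt
        diff[k] = (steps[m]**2).sum(1).mean() / (4 * dt)  # D(x) ~ <|dx|^2>/(4 dt), 2D
    return drift, diff
```

A well then appears as a region where the drift vectors converge, and its depth can be read off from the steady-state density, which scales as exp(−U/kT).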
While the AMPAR diffusion coefficient in these wells is around 0.6 µm²/s (Fig. 3B), the wells have an interaction range of about 500 nm (Fig. 3C) and are associated with energies of 3.6 kT, 1.8 kT and 3 kT, respectively. Using time-lapse imaging (Fig. 2), we could see that a ring can be stable over a period of 30 minutes (Ring 1), while Ring 2 was stable for only 15 minutes. Interestingly, we could see a transient ring creating a transient structure interconnecting two ring-like structures at 45 minutes, before it disappeared at 60 minutes. Ring 3 could be detected transiently after 15 minutes, containing multiple wells. Finally, focusing on Ring 1 and plotting the density function of the trajectories, we could follow over time how a potential well emerges and is destroyed, as well as its stability (Fig. 4A). While the diffusion coefficient remains constant over time (Fig. 4B), the energy of the main well (Fig. 4C) changes across the time-lapse experiments as follows: at 30 min it is E = 6.6 kT (score = 0.25 and a depth of A = 2.0); at the intermediate time of 45 min, the energy is E = 7.8 kT (score = 0.20, depth A = 2.3); and finally at 60 minutes, E = 5.2 kT (score = 0.13, depth A = 1.6). The low score confirms the likelihood of the well [11]. Interestingly, the energy level of the well is neither weak nor strong [11], but remains stable over time.

Conclusion

To conclude, potential wells that reflect the interaction of AMPA receptors with molecular partners do not only appear isolated or at synapses [11], but can also appear in ring structures. Although these rings could be due to a latex bead, the organization of potential wells in such a vicinity suggests that membrane curvature could be a key component shaping the strength of the wells. These rings can trap receptors in structures of hundreds of nanometers. Both potential wells and rings are transient, but stable over periods of minutes. Classical physical modeling and statistical analysis of single-receptor motion [14,16-18] assume that the main driving force is free or confined Brownian motion, based on Langevin's description of a pointwise stochastic object. However, because gradient forces, such as electrostatic forces, which are the main sources of chemical interactions, cannot generate closed trajectories in the deterministic limit of Langevin's approximation, maintaining receptors in a ring cannot be due to electrostatic forces alone. Thus we propose several hypotheses for this ring structure: one possibility is that the receptor dynamics cannot be reduced to that of a single point; rather, we now have to account for the complex structure of the protein, which can generate interactions at specific protein groups, decoupled from the center of mass. For example, the C-terminal tail can interact with a supra-structure generated by scaffolding molecules. Another possibility is that rings are due to transient geometrical structures on the membrane that trap receptors. We could imagine that this is the case near an endosomal compartment during vesicular exocytosis and endocytosis [19].

Comment

The data analyzed in this report have been previously published in [11] and were generated by D. Nair, J.B. Sibarita, E. Hosy and D. Choquet. We thank them for providing us access to these data.
Stabilization Using a Discrete Fuzzy PDC Control with PID Controllers and Pole Placement: Application to an Experimental Greenhouse

This paper proposes a control strategy for complex and nonlinear systems, based on a parallel distributed compensation (PDC) controller. A solution is presented to the stability problem that arises when dealing with a Takagi-Sugeno discrete system with a great number of rules. The PDC controller uses a classical controller, such as a PI, PID, or RST, in each rule, together with a pole placement strategy to avoid instability. The fuzzy controller presented combines the multicontrol approach and the performance of classical controllers to obtain a robust nonlinear control action that can also deal with time-variant systems. The presented method was applied to a small greenhouse to control its inside temperature by varying the ventilation rate inside the process. The results obtained show the efficiency of the adopted method for controlling nonlinear and complex systems.

Introduction

In the last few decades, Takagi-Sugeno (TS) fuzzy systems have become an important means for both modeling and control, and their performance in these two domains has been demonstrated in the literature, especially in applications to complex and nonlinear systems. TS fuzzy systems use a multimodel approach, fitting enough local linear models, each describing an operating zone of the process [1]. The final output of the fuzzy system is the fuzzy-weighted contribution of all the local model outputs, guaranteeing precision and stability.

Such a tool can be used for control purposes; in fact, when dealing with complex systems, classical methods become inefficient, and so emerges the need for other control techniques. PDC controllers [2] are a convenient solution, having the same structure as a TS fuzzy system, where every rule is associated with a local linear controller synthesized from the local model of the same rule [3]. The result is a nonlinear control action, which is a fuzzy blending of the individual linear controllers. Many applications using TS models and PDC controllers [3,4] have been presented, and several new PDC control techniques have been proposed in the literature [5]. Most of the proposed approaches use a state feedback for each rule, and the stability is proved by finding a common Lyapunov function that satisfies all the fuzzy subsystems [6]. However, in the case of a large number of rules describing the system, it is very difficult to apply this approach [7].

In the continuous domain, a solution was presented using a PD or a PID controller for each rule. The denominator of the closed-loop system is treated as an uncertain polynomial with an affine linear uncertainty structure, where the stability can be verified using a frequency-domain criterion [7]. In the discrete domain, there is no efficient solution to the stability problem, which is one of the objectives of this paper and will be discussed in Section 3. Another goal, described in the same section, is to find the appropriate parameters of the controller for each rule using a pole placement approach. Before that, Section 2 introduces the TS fuzzy model and the PDC controller. In Section 4, a practical application is presented, where the proposed control strategy is used in an adaptive fuzzy control scheme. The process is an experimental greenhouse that was fuzzy-identified and controlled in real time. Finally, Section 5 concludes this paper.
Takagi-Sugeno Fuzzy Systems

There are different classes of fuzzy systems; the most often used are Mamdani fuzzy systems [8] and Takagi-Sugeno fuzzy systems [1]. The latter differ from the former in the rule consequents: the consequent is not a fuzzy set but a local model of the system to be approximated (a submodel related to the rule, describing an operating zone of the process). For the jth rule, a TS fuzzy system has the form given in (1). The fuzzy proposition "z is Ω" is the antecedent of the rule, and the system of equations in the second part of (1) is the consequent. Here z = (z_1, ..., z_n) are the inputs of the TS fuzzy system; they can be the states x = (x_1, ..., x_na), the inputs u_c = (u_1, ..., u_nb), or the disturbance inputs v = (v_1, ..., v_nv). Ω_ji is the membership function representing the fuzzy subset, with a corresponding membership value Ω_ji(z_i), and N is the total number of rules.

Assume that all the subsystems considered are completely controllable and completely observable, and denote the state and input variables as in (2). The matrices A_j ∈ R^(na×na), B_j ∈ R^(nb×1), and D_j ∈ R^(nd×nd) represent the parameters of the TS fuzzy system (1), in the Frobenius canonical structure (3). The output x_j of each local model is weighted by the firing strength μ_j(z(k)) = ∏_{i=1}^{n} Ω_ji(z_i(k)). The final output of the system is the weighted average of all the rule outputs, computed with the normalised firing strengths β_j(z(k)) = μ_j(z(k)) / Σ_{i=1}^{N} μ_i(z(k)), which satisfy 0 < β_j(z(k)) < 1 and Σ_{j=1}^{N} β_j(z(k)) = 1. On the other hand, the controller is a TS fuzzy system having the same antecedents as the fuzzy model and differing from it in its consequents. A linear controller is developed for each rule, and the global control action is synthesized in the same way as the TS fuzzy model output [2], as in (7), with u_j representing the local control output.

Stability of a Closed-Loop TS System

The stability problem of (1) can be solved in general using the Lyapunov approach [9-11]. In this case, the proposed PDC control law is the state feedback u_c(k) = −Σ_{j=1}^{N} β_j(z(k)) K_j x(k) (eqn (8)). Replacing (8) into (1), without considering the disturbance inputs, leads to x(k+1) = Σ_{j=1}^{N} Σ_{i=1}^{N} β_j β_i G_ji x(k), with G_ji = A_j − B_j K_i. The following theorem gives a sufficient stability condition for a system of this form [12].

Theorem 1. The closed-loop TS fuzzy system of the form of (10) is quadratically stable for some state feedback gains K_j (via the PDC scheme) if there exists a common positive definite matrix P such that conditions (11) and (12) hold.

The problem arises when the TS fuzzy system has a great number of rules, for it is then difficult to find a matrix P that verifies the stability conditions (11) and (12) [7].
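As an illustration of the fuzzy blending in (4)-(7), the following sketch computes the normalised firing strengths and blends the local control actions. Gaussian membership functions are an assumption made for the example only; the paper does not fix a membership shape here:

```python
import numpy as np

def firing_strengths(z, centers, widths):
    """Normalised firing strengths beta_j for Gaussian membership functions.

    z       : (n,)   antecedent inputs
    centers : (N, n) membership centres, one row per rule (assumed shape)
    widths  : (N, n) membership widths
    """
    mu = np.exp(-0.5 * ((z - centers) / widths) ** 2).prod(axis=1)  # rule firing
    return mu / mu.sum()                                            # normalise

def pdc_control(z, centers, widths, local_controls):
    """Global PDC action: fuzzy blend of the local control outputs u_j (eqn (7))."""
    beta = firing_strengths(z, centers, widths)
    return float(beta @ np.asarray(local_controls))
```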
The solution presented in this paper deals with a transfer-function form of the TS fuzzy system. Combining (1) and (6), without considering the disturbance inputs, we obtain the following NARX form:

y(k) = Σ_{j=1}^{N} β_j(z) [ -Σ_{m=1}^{na} a_jm y(k-m) + Σ_{m=1}^{nb} b_jm u(k-m) ].  (14)

Since Σ_{j=1}^{N} β_j(z) = 1, the following expression can be obtained:

y(k) + Σ_{j=1}^{N} β_j Σ_{m=1}^{na} a_jm y(k-m) = Σ_{j=1}^{N} β_j Σ_{m=1}^{nb} b_jm u(k-m).  (15)

The Z transformation leads to the new form

[ Σ_{j=1}^{N} β_j A_j(q^-1) ] y(k) = [ Σ_{j=1}^{N} β_j B_j(q^-1) ] u(k), with A_j(q^-1) = 1 + Σ_{m=1}^{na} a_jm q^-m and B_j(q^-1) = Σ_{m=1}^{nb} b_jm q^-m,  (16)

and so the transfer function of the TS fuzzy system is as follows:

H(q^-1) = Σ_{j=1}^{N} β_j B_j(q^-1) / Σ_{j=1}^{N} β_j A_j(q^-1).  (17)

Let us also consider the transfer function of the PDC controller to be

K(q^-1) = Σ_{j=1}^{N} β_j D_j(q^-1) / Σ_{j=1}^{N} β_j C_j(q^-1),  (18)

where D_j(q^-1) = Σ_{m=0}^{na} d_jm q^-m and C_j(q^-1) = Σ_{m=0}^{na} c_jm q^-m are the numerator and the denominator of the local controller corresponding to each rule j of the PDC controller. The transfer function of the closed-loop fuzzy system will then be

G(q^-1) = (Σ_j β_j B_j)(Σ_i β_i D_i) / [ (Σ_j β_j A_j)(Σ_i β_i C_i) + (Σ_j β_j B_j)(Σ_i β_i D_i) ].  (19)

Expanding the products gives the denominator of (19) as

Σ_{j=1}^{N} β_j^2 (A_j C_j + B_j D_j) + Σ_{j≠i} β_j β_i (A_j C_i + B_j D_i).  (20)

From (20) we can see that the controller related to a rule j, initially designed to stabilize the local model of that same rule, influences the stability of the submodels of the other rules, as indicated by the cross terms Σ_{j≠i} β_j β_i (A_j C_i + B_j D_i) in the denominator of the closed-loop transfer function. There is a high probability that the local controller of one rule is not suitable for the local model of a different rule; thus, with the current structure, the stability of the overall closed-loop system cannot be guaranteed, especially when the TS fuzzy system has a great number of rules.

In fact, the cross-coupling and possible instability are created by the two products (Σ_{j=1}^{N} β_j A_j)(Σ_{i=1}^{N} β_i C_i) and (Σ_{j=1}^{N} β_j B_j)(Σ_{i=1}^{N} β_i D_i); if we manage to eliminate these products, the problem is solved.

To fulfil this objective, the chosen local controllers must have the same order as the local models. For instance, if the system is first order, the local controllers must be PIs; if the system is second order, the corresponding local controllers should be PIDs. For higher-order systems, one can choose for each local model an RST controller of equivalent order with the polynomial R(q^-1) equal to the polynomial T(q^-1).

Also, assuming that the transfer function (17) has only stable zeros, the denominator C_j(q^-1) of each local controller must include the numerator of the local model of the same rule j. Let us therefore take the denominator of the local controller related to rule j as

C_j(q^-1) = e(q^-1) B_j(q^-1), with e(q^-1) = 1 - q^-1.  (21)

Here all the local controllers share the same polynomial e(q^-1) = 1 - q^-1, the numerical expression of an integrator that allows the rejection of disturbances; this term already exists in a PI, a PID, or an RST controller having an integrator in the open loop [13]. We also have

Σ_{j=1}^{N} β_j C_j(q^-1) = e(q^-1) Σ_{j=1}^{N} β_j B_j(q^-1).  (22)

Replacing (21) and (22) in (19), the common factor Σ_j β_j B_j cancels, and the transfer function of the closed-loop fuzzy system becomes

G(q^-1) = Σ_{j=1}^{N} β_j D_j(q^-1) / Σ_{j=1}^{N} β_j [ e(q^-1) A_j(q^-1) + D_j(q^-1) ].  (23)

In the transfer function (23) of the closed-loop system, the product of sums has been eliminated, and so the cause of the interaction, or cross-coupling, is removed. However, stability is not yet guaranteed: we still need to find the appropriate numerator parameters D_j(q^-1) of each local controller. A pole placement strategy is adopted with an appropriate polynomial F(q^-1) = 1 + Σ_{m=1}^{na+1} l_m q^-m, having all its roots inside the unit circle, to be identified with the denominator of every closed-loop subsystem:

e(q^-1) A_j(q^-1) + D_j(q^-1) = F(q^-1), j = 1, ..., N.  (24)

The result is the following system of equations, one per rule, obtained by equating the coefficients of like powers of q^-1 in the polynomial identity

D_j(q^-1) = F(q^-1) - e(q^-1) A_j(q^-1),  (25)

which yields na + 1 linear equations determining the coefficients d_jm of each local controller.
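Since (25) is a direct polynomial subtraction, the per-rule controller numerators can be computed in a few lines, as in the following sketch; the first-order coefficients and target poles are toy values of our own, not the paper's identified models.

```python
import numpy as np

# Sketch of the per-rule pole placement (24)-(25): given the local
# denominator A_j(q^-1) and a stable target polynomial F(q^-1), the
# controller numerator is obtained as D_j = F - e*A_j with e = 1 - q^-1.
# Polynomials are coefficient arrays in increasing powers of q^-1.

def pole_placement(a_j, f):
    """a_j: [1, a_j1, ..., a_jna]; f: [1, l_1, ..., l_{na+1}].
    Returns the coefficients of D_j(q^-1)."""
    e = np.array([1.0, -1.0])           # e(q^-1) = 1 - q^-1
    ea = np.convolve(e, a_j)            # e(q^-1) * A_j(q^-1)
    return np.array(f) - ea             # D_j = F - e * A_j

# Two first-order local models (na = 1) and target poles at 0.5 and 0.6:
# F(q^-1) = (1 - 0.5 q^-1)(1 - 0.6 q^-1) = 1 - 1.1 q^-1 + 0.3 q^-2
f = [1.0, -1.1, 0.3]
for a_j in ([1.0, -0.9], [1.0, -0.8]):
    print(a_j, "->", pole_placement(a_j, f))
```

The constant coefficient of each computed D_j comes out as zero because e(q^-1)A_j(q^-1) and F(q^-1) share the leading coefficient 1, so the controller numerator effectively starts at q^-1.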
The denominator of the transfer function (23) thus becomes invariant whatever the rule j, and since Σ_{j=1}^{N} β_j = 1, the new form of the transfer function (23) is

G(q^-1) = Σ_{j=1}^{N} β_j D_j(q^-1) / F(q^-1),  (26)

with the poles located inside the unit circle, which guarantees the stability of the system. Also, with the described method, whatever the number of rules, it is possible to take advantage of both the performance of classical controllers such as a PI, a PID, or an RST and, at the same time, the multicontrol approach offered by the PDC controller. The result is a robust nonlinear controller that can even deal with complex systems with time-variant parameters when combined with a recursive fuzzy identification of the TS fuzzy model. The proposed method is applied in the next section to control such a system.

Application to an Experimental Greenhouse

The presented method was applied to control the inside temperature of a small greenhouse (Figure 1). The inside climate is subject to various phenomena such as solar radiation transfer and heat and mass transfer between the soil, the cover, the canopy, and the outside climate. There is also the ventilation, driven by two three-phase motors, which intervenes to cool the inside air. The effect of solar radiation is preponderant during daylight: the components of the greenhouse absorb the radiation energy and convert it to heat released into the air by heat transfer [14], which raises the inside temperature above the outside temperature (Figures 2 and 3).

All these factors result in a nonlinear evolution with time-varying parameters of the climate inside the greenhouse [15]. The use of ventilation to cool down the climate during daylight is very common; in fact, with adequate and skilful management, it is possible to maintain suitable climate-state variables (temperature, humidity, and CO2 concentration) [16]. Ventilation also represents a cheap alternative to air conditioning, which consumes considerable electrical power.

But the complexity of the system makes it hard to achieve successful results, so most farmers use an on-off control action between two boundary temperature values; this kind of control, however, is unhealthy for the canopy. On the other hand, the period during which ventilation is effective depends on the weather. The possible constraints that oppose the use of this method throughout the year are (i) the presence of solar radiation, which raises the inside temperature above the outside temperature, and (ii) the desired temperature of the climate inside the greenhouse, which must be several degrees higher than the outside temperature; the choice of the desired temperature depends on the requirements of the cultivation inside the greenhouse.

These conditions can be met during 3 to 4 months of the year. When the outside temperature is near the desired inside temperature, farmers use a pad cooling system, a combination of fans and a wet pad that cools the incoming air by several degrees [17-19]; ventilation then remains effective for another one or two months, depending on the quality of the pad.

In this paper, we focus on the evolution of the inside temperature, manipulating the ventilation rate to obtain a desired behaviour of the process. As described in Section 2, the TS fuzzy system is able to describe this kind of system and can be used to control the inside temperature using the fans during daylight.

Materials and Measurements.
The process is a greenhouse with the following geometrical characteristics: 1.5 m height, 1 m total width, and 1.5 m total length. The inputs of the system are obtained from several sensors located inside and outside the greenhouse, as follows. (i) The inside and outside temperatures were measured by two LM35 temperature sensors, each with an AD620 amplifier to amplify the delivered signal. (ii) The inside humidity was measured using a humidity sensor (SY-HS-230BT). (iii) The solar radiation was obtained by a pyranometer of type LP-PYRA 03.

In order to control the ventilation rate, the greenhouse was equipped with two fans driven by two three-phase motors. These motors are powered through a frequency converter (Microdrive FC 51, Danfoss). The rotational speed of the fans is proportional to the frequency of the three-phase power supply, so the ventilation rate inside the greenhouse depends on that frequency; the control input is therefore the frequency, computed by the fuzzy controller. The communication between MATLAB, the sensors, and the frequency converter is handled by a data acquisition module (KUSB-3100).

Fuzzy Identification.

The discrete TS fuzzy system to be identified has, for every rule j, a local discrete model of the form (27) [20], relating the inside temperature to the control input and the measured disturbances, where u is the frequency delivered by the frequency converter to the motors driving the fans, which sets the ventilation rate inside the greenhouse. The fuzzy identification is performed in two steps. The first is the determination of the appropriate membership functions of each input, to set the antecedent part of the TS fuzzy system; this is done by fuzzy clustering based on the C-means algorithm. The membership functions were fixed and did not change during the control application. For that purpose, on 25 February 2011, we applied a random control action to collect the data needed, with a sample time of 15 seconds, and then set the membership functions shown in Figures 4, 5, 6, 7, and 8. The second step is the estimation of the parameters of (27), which represent the consequent part of the TS fuzzy system. For a time-variant system like a greenhouse, the most convenient way is to estimate these parameters recursively at each sample time and to implement the local classical controllers in recurrent form [21]; the local control law of each rule is then computed from the past values of the global control signal u_s, weighted by the time-varying coefficients α_j1(k) and α_j2(k) (terms such as α_j2(k)u_s(k - 1)), plus the tracking-error terms d_j0(k)e(k) + d_j1(k)e(k - 1).
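The recursive estimator itself is not shown in this excerpt; a common choice for updating TS consequent parameters online is a per-rule weighted recursive least squares, sketched below under that assumption, with illustrative regressors (past inside temperature, fan frequency, outside temperature, solar radiation) and hypothetical names.

```python
import numpy as np

# Hedged sketch: per-rule weighted recursive least squares (RLS) for
# online estimation of TS consequent parameters, a common choice for
# time-variant processes; variable names and values are illustrative.

class RuleRLS:
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)    # consequent parameters
        self.P = np.eye(n_params) * 1e3    # covariance matrix
        self.lam = forgetting              # forgetting factor

    def update(self, phi, y, beta):
        """phi: regressor; y: measured output; beta: rule firing weight."""
        if beta < 1e-6:                    # rule barely fires: skip update
            return
        lam = self.lam / beta              # weight the update by beta_j
        k = self.P @ phi / (lam + phi @ self.P @ phi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam

# phi could hold [T_in(k-1), u(k-1), T_out(k-1), R_g(k-1)] at each sample
rls = RuleRLS(n_params=4)
rls.update(np.array([24.0, 30.0, 18.0, 400.0]), 24.5, beta=0.7)
print(rls.theta)
```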
Practical Results.

The desired output r is 27 °C. The application was conducted with a sample time of 15 seconds on 24 March 2011; it started at 8:39 AM and ended at 6:23 PM. The results of the proposed method are shown in Figures 10 and 11, while Figures 12 and 13 represent the measured disturbance inputs of the system during the application. At the beginning and at the end of the application, when the values of the intercepted solar radiation and the outside temperature are very low (in the morning and late in the evening, before sunset), the controller does not take action, because the inside temperature is below the desired temperature. Moreover, Figure 11 shows that the control signal presents high switching, caused by measurement errors that affect the controller's response; in spite of this, the controller still leads the process to the desired trajectory, which proves its robustness and efficiency.

Conclusion

TS fuzzy systems can capture the evolution of complex and nonlinear processes and act accordingly, providing acceptable results in terms of both modeling and control. However, a stability problem arises because of the great number of rules used to model such systems. This paper offers a solution using classical controllers such as a PI, a PID, or an RST for each rule, together with a pole placement strategy. The fuzzy controller presented combines the multicontrol approach with the performance of classical controllers. The result is a robust nonlinear control action that can also deal with time-variant systems.

Figure 8: Membership function of the intercepted solar radiation.
Physico-chemical properties of curcumin nanoparticles and its efficacy against Ehrlich ascites carcinoma

Curcumin is a bioactive component with anticancer characteristics; nevertheless, it has poor solubility and fast metabolism, resulting in low bioavailability and thereby restricting its application. Curcumin loaded in nano emulsions (Cur-NE) was developed to improve water solubility and overcome the limitations of curcumin. Size distribution, zeta potential, transmission electron microscopy (TEM) measurements, UV-Visible spectra, IR spectra, and thermogravimetric analysis (TGA) were used to characterize the prepared Cur-NE. Cancer therapeutic efficacy was assessed through oxidative stress markers (superoxide dismutase (SOD), glutathione-S-transferase (GST), malondialdehyde (MDA), and nitric oxide (NO)), DNA damage, and apoptotic proteins (caspase-3 and -9), besides investigating tumor histology and monitoring tumor growth. Additionally, the cytotoxicity and genotoxicity in liver, kidney, heart, and spleen tissues were examined to gauge the toxicity-related adverse effects of the treatment. The results showed that Cur-NE is more effective than free curcumin at slowing the growth of Ehrlich tumors while significantly increasing the levels of apoptotic proteins. On the other hand, Cur-NE-treated mice showed some damage in other organs when compared with mice treated with free curcumin. Overall, Cur-NE has a higher efficacy in treating Ehrlich tumors.

Unfortunately, all of curcumin's beneficial chemoprotective effects are limited by several major issues, including poor bioavailability, low solubility in water, rapid excretion, and a high rate of metabolism 18. As a result, numerous technological methods for overcoming these issues and improving curcumin's qualities have been investigated. Wang et al. 19 used full encapsulation of curcumin in sodium alginate/ZnO hydrogel beads to modulate curcumin delivery. Suryani et al. 20 used low-viscosity chitosan and tripolyphosphate at various concentrations to make curcumin nanoparticles, which they recommended as a promising anticancer treatment. Elbialy et al. 21 produced iron oxide nanoparticles coated with curcumin to improve curcumin circulation and biodistribution. Also, Elbialy et al. 22 encapsulated curcumin in mesoporous silica nanoparticles and reported that the formulation was useful for cancer therapy. Ben et al. 23 employed a microemulsion to encapsulate curcumin and used it as a radioprotective agent.

The current study addresses three points: first, the synthesis and characterization of curcumin encapsulated in nano-emulsions, as well as its release; second, the antitumor potential of curcumin nano-emulsions compared with native curcumin in an in vivo system, using Ehrlich tumor-bearing mice as a model; third, the side effects on the vital organs (kidney, heart, spleen, and liver), evaluated for all treated mice groups and compared with untreated mice.

Preparation of curcumin loaded in nano emulsions (Cur-NE)

Curcumin loaded in nano emulsion (Cur-NE) was created using the techniques described in Ben et al.
and Kesisoglou and Panmai [23-25]. In a nutshell, Span® 20 (0.16 mL), Tween® 80 (3 mL), isopropyl myristate (0.6 mL), and lauric acid (100 mg) were mixed until a clear liquid was formed. Curcumin (125 mg) was added to the solution and stirred with a magnetic stirrer (MS-300HS, Misung Scientific, Gyeonggi-do, Korea) until the components were completely combined, before adding 8 mL of distilled water drop by drop. It may take up to 24 h for the curcumin nanoparticles to equilibrate. All processes were carried out at room temperature.

Characterization of free curcumin (CR) and curcumin nano emulsions (Cur-NE)

The size and shape of Cur-NE were observed by transmission electron microscopy (TEM; FEI Tecnai G20, Super Twin, double tilt, LaB6 gun). The mean particle size, size distribution, and zeta potential of Cur-NE were examined by dynamic light scattering (Malvern Zetasizer Nano ZS, Malvern Instruments Ltd, UK). Additionally, the structural characteristics of CR and Cur-NE were determined by absorption spectra (JENWAY 6405 UV/VIS spectrophotometer, Barloworld Scientific, Essex, UK; λ = 250-600 nm), Fourier-transform infrared spectroscopy (Vector 22 FT-IR, Germany; 4000-400 cm⁻¹ with 4 cm⁻¹ resolution), and a thermogravimetric analyzer (TGA; DTG-60H, SHIMADZU, Japan; from room temperature up to 400 °C at a rate of 10 °C/min). Furthermore, the encapsulation efficiency of curcumin in the nano emulsion and its release were determined according to Avgoustakis et al. 26. Briefly, the encapsulation efficiency of Cur-NE was obtained by measuring the absorbance of curcumin at different concentrations using a UV-Visible spectrophotometer at 435 nm; the standard curve of curcumin was prepared by plotting optical density against concentration and was used to determine the concentration of curcumin in the supernatant. Second, the release of curcumin from Cur-NE was performed as follows: two suspensions, of Cur-NE and CR, were placed in two cellulose acetate dialysis chambers (Spectra/Por, MW cut-off 12,000, Spectrum, Canada), dipped in buffer (phosphate solution:Tween, 8:2), and stirred at 50 rpm. At the sampling times, 2 mL of release buffer was withdrawn and immediately replaced with 2 mL of fresh buffer, for up to 12 h. Curcumin concentrations in the release buffer were measured spectrophotometrically at 430 nm.
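As a worked illustration of the standard-curve step just described, the sketch below fits a linear calibration at 435 nm and converts a supernatant absorbance into the mass of free curcumin; the absorbance readings, the 1:100 dilution, and the EE% convention (encapsulated = total minus free in the supernatant) are illustrative assumptions of ours, not the paper's data.

```python
import numpy as np

# Hedged sketch of the standard-curve workflow: a linear calibration of
# absorbance at 435 nm against known curcumin concentrations, inverted
# to quantify free curcumin in the supernatant. All numbers are
# illustrative, not the paper's measurements.

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # ug/mL standards
a435 = np.array([0.11, 0.22, 0.43, 0.88, 1.74])  # measured absorbance

slope, intercept = np.polyfit(conc, a435, 1)     # Beer-Lambert linearity

def to_conc(absorbance):
    """Invert the calibration line: absorbance -> ug/mL."""
    return (absorbance - intercept) / slope

total_mg = 125.0                                  # curcumin added
# supernatant: 8 mL, diluted 1:100, A = 0.49 (illustrative reading)
free_mg = to_conc(0.49) * 100 * 8.0 / 1000.0
ee = (total_mg - free_mg) / total_mg * 100.0
print(f"encapsulation efficiency = {ee:.1f}%")
```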
Animals and experimental design

Under approval number CU/I/F/1/21, the Cairo University Research Ethics Committee approved the procedures and experimental protocols for a total of 20 albino mice, which were carried out in accordance with the Guide for the Care and Use of Laboratory Animals and the Institutional Animal Care and Use Committee (CU-IACUC), which provides the policies, procedures, standards, organizational structure, staffing, facilities, and practices for the humane care and use of laboratory animals. The National Cancer Institute (NCI), Cairo University, Giza, Egypt, provided male albino mice (average weight 25 g, 8-10 weeks old). The mice were housed in the laboratory for one week of acclimatization before the experiment, under standard laboratory conditions. The mice were kept in separate cages with unrestricted access to clean, fresh water. Ehrlich ascites carcinoma (EAC; Ehrlich ascites mammary carcinoma, a spontaneous murine mammary adenocarcinoma and one of the well-established models in cancer biology 27) was obtained from the Tumor Biology Department of the National Cancer Institute (NCI), Cairo University. Solid tumors were produced experimentally in male mice by subcutaneous injection of 0.1 mL of Ehrlich ascites carcinoma (2.5 × 10⁶ cells/mL) into the right hind limb of each mouse under sterile conditions. The tumor manifested and became palpable in the injected animals 10-13 days after tumor injection, as shown in Fig. 1 and described by Abdelrahman et al. 28.

Mice were separated into four groups (n = 5) at random: control, tumor, CR, and Cur-NE. The control and tumor groups received PBS orally; the CR and Cur-NE groups received orally 20 mg/kg curcumin extract and 20 mg/kg curcumin loaded in nano emulsions, respectively, every day for 2 weeks.

Oxidative stress analysis

The tumor, kidney, heart, spleen, and liver tissues of all mice groups were homogenized in cold phosphate buffer and centrifuged at 4000 rpm (VS18000M; Vision Scientific, Korea) for 10 min, and the supernatants were used for measuring oxidative stress. The activity of superoxide dismutase (SOD) and glutathione-S-transferase (GST) and the levels of malondialdehyde (MDA) and nitric oxide (NO) were determined using a superoxide dismutase assay kit (SOD 2521), a glutathione-S-transferase assay kit (GST 2519), a lipid peroxidation assay kit (MDA 2529), and a nitric oxide assay kit (NO 2533), respectively, according to the manufacturer's instructions.
Comet assay

The comet assay is a quick, easy, visual, and precise way to assess DNA damage and the early stages of apoptosis 30. DNA damage induced in tumor, kidney, heart, spleen, and liver tissues of all experimental groups was estimated by the comet assay described by Rageh and El-Gebaly 31. Briefly, the comet assay was done under alkaline conditions (pH > 13): a sample (5 μL) of cell suspension was mixed with 70 μL of 0.7% low-melting-point (LMP) agarose. The agarose was placed on a microscope slide previously covered with a thin layer of 0.5% normal-melting-point (NMP) agarose. After cooling at 4 °C for 5 min, slides were covered with a third layer of LMP agarose. After solidification at 4 °C for 5 min, slides were immersed in freshly prepared cold lysis solution at 4 °C for at least 1 h. Following lysis, slides were placed in a horizontal gel electrophoresis unit and left in fresh alkaline electrophoresis buffer. Electrophoresis was run for 30 min at 24 V (~0.74 V/cm) and 300 mA at 4 °C. The slides were then dipped in neutralizing buffer and gently washed for 5 min at 4 °C. All procedures were completed under dimmed light. Comets were visualized with ethidium bromide staining solution and examined at ×400 magnification using a fluorescence microscope. Comet 5 image analysis software developed by Kinetic Imaging, Ltd. (Liverpool, UK), linked to a CCD camera, was used to assess DNA damage in the cells by measuring the length of DNA migration and the percentage of migrated DNA; the tail moment and olive moment were computed 32.

Western blots

Western blotting analysis of caspase-3 and caspase-9 was performed using the procedure outlined in Lihua et al. 33. Tumor tissues from all mice groups were homogenized, centrifuged, and lysed in radio-immunoprecipitation assay (RIPA) lysis buffer containing a protease/phosphatase inhibitor cocktail, then placed on ice for 30 min until complete lysis was achieved. The lysate was transferred to an Eppendorf tube and centrifuged at 4 °C for 15 min at 13,000 rpm (VS18000M; Vision Scientific, Korea). SDS-PAGE (12% acrylamide) was used to separate the extracted proteins, which were then transferred onto PVDF membranes. For normalization, the membranes were probed with antibodies against β-actin. The band intensities of the target proteins were compared with β-actin (housekeeping protein) using protein normalization on a ChemiDoc MP imager with image analysis software.

Histological examination

Tumor samples from all groups were fixed in 10% neutral buffered formalin solution, embedded in paraffin blocks, sectioned into 5-µm-thick sections, and stained with hematoxylin and eosin (H&E), as pathologists and researchers usually use this stain for an initial evaluation, since it gives a thorough picture of the microanatomy of a tissue 34. An optical microscope (CX31, Olympus, Tokyo, Japan) connected to a digital camera (Canon) was employed for the inspection of the tissue slices.

Statistical analysis

The data were examined using SPSS 19.0 for Windows, and the mean and standard deviation (SD) are displayed. One-way analysis of variance (one-way ANOVA) was used to determine significant differences between groups, and the least significant difference (LSD) test was applied to multi-group comparisons. Differences were deemed significant at P ≤ 0.05.
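To make the statistical workflow concrete, the sketch below runs a one-way ANOVA across the four groups and, when significant, pairwise comparisons in the spirit of the LSD test (approximated here by unadjusted t-tests rather than the pooled-variance LSD itself); all values are invented, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hedged sketch: one-way ANOVA across the four groups, followed by
# pairwise LSD-style comparisons (unadjusted t-tests) when significant.

rng = np.random.default_rng(1)
groups = {
    "control": rng.normal(10.0, 1.0, 5),
    "tumor":   rng.normal(14.0, 1.0, 5),
    "CR":      rng.normal(12.0, 1.0, 5),
    "Cur-NE":  rng.normal(11.0, 1.0, 5),
}

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

if p <= 0.05:                       # proceed to pairwise comparisons
    names = list(groups)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            t, pp = stats.ttest_ind(groups[names[i]], groups[names[j]])
            print(f"{names[i]} vs {names[j]}: p = {pp:.4f}")
```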
Results and discussion

Curcumin is a polyphenol chemical found in nature. It is regarded as a powerful anti-inflammatory and antioxidant agent; because of these characteristics, it can be used to treat a wide range of cancers. Curcumin's clinical use has been limited due to its lack of solubility, stability, bioavailability, and selectivity. As a result, several initiatives have been made in recent years to eliminate these problems and improve its clinical applications, such as nanocarriers, nanoparticles, micelles, solid dispersions, and liposomes. The present study highlights the use of curcumin loaded in nano emulsions (Cur-NE), which has shown several major advantages, including higher physical stability and drug-loading ability, low toxicity, controlled drug release, and targeting 18,35-39.

Cur-NE characterization

In the present study, Cur-NE was prepared according to the method of Ben et al. and Kesisoglou and Panmai 23-25 and characterized using several techniques. The shape and size of Cur-NE were observed by TEM (Fig. 3). The round nanoparticles in the TEM image are evenly distributed, not aggregated, and range in diameter from 45 to 60 nm. The dynamic light scattering (DLS) measurement shown in Fig. 4a supports this finding. The size of Cur-NE is centered at 459 nm (PDI = 0.604), with a reasonably narrow distribution, but appears larger than in TEM because the dispersant contributes to the hydrodynamic diameter. A potential analyzer was used to determine the electrokinetic surface potential of Cur-NE (Fig. 4b). The zeta potential of Cur-NE is -16 ± 1.2 mV. This result revealed that the particles were stable and evenly dispersed, matching the findings of Paolino et al. 40, who stated that particles with a high negative or positive zeta potential remain suspended: because the particles repel one another, there is no tendency for them to stick together. UV-Vis and FT-IR spectra were employed to confirm the structures of free curcumin (CR) and Cur-NE. Figure 5 shows the absorption spectra of CR and Cur-NE in the wavelength range 250-600 nm, with absorption peaks centered at 510 nm and 420 nm for CR and Cur-NE, respectively. The shift between the CR and Cur-NE absorption bands might be attributed to Cur-NE formation and to alterations in the structure or chemical environment of the Cur-NE; this result is consistent with Pandit et al. and Kumar et al. 41,42.

Thermal analyses are necessary to determine the thermal variations in compounds and the maximum temperature to which they can be exposed without losing their properties. Thus, thermogravimetric analysis (TGA), which tracks mass variations, was used to assess the thermal stability of CR and Cur-NE (Fig. 7).
The TGA curves of CR and Cur-NE indicated weight losses of 2% and 18% over the temperature ranges 25-75 °C and 25-150 °C, respectively; the larger weight loss of Cur-NE might be attributed to the presence of water and organic solvents in the formulation. Moreover, the higher decomposition temperature suggests increased thermal stability and heat tolerance for Cur-NE 46,47. Furthermore, the encapsulation efficiency of curcumin loaded in nano emulsions (NE) is an important characteristic; in the current study, the encapsulation efficiency of curcumin in Cur-NE was 85%. This finding suggests that the NE can retain a high quantity of curcumin 48, owing to strong non-covalent interactions between curcumin and the compounds of the formulation and/or to the larger volume available to incorporate the curcumin.

Figure 8 shows the in vitro release behaviour of free curcumin and Cur-NE. As shown in the figure, more than 60% of the free curcumin is released within 6 h, while Cur-NE displays gradual release over a long period of time, with just 20% of the curcumin released within 6 h. The in vitro release study of curcumin displayed no burst effect, so the passage of the drug out of the nanoparticles was regulated mainly by a diffusion-controlled mechanism. Mohamad et al. 49 reported that slow release is due to intermolecular interactions with the near environment. Furthermore, the NE is suitable as a drug carrier owing to its high encapsulation efficiency and controlled release. Hossann et al. 50 confirmed this finding, reporting that the shielding effect of the lipid bilayer in liposome-mediated drug delivery produces a sustained and prolonged drug-release pattern to the targeted areas.
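One common way to check the diffusion-controlled interpretation quantitatively, although the paper does not report such a fit, is a Higuchi model in which the cumulative release grows with the square root of time; the sketch below fits it to illustrative points read off the trends described for Fig. 8, not the actual measured data.

```python
import numpy as np

# Hedged sketch: Higuchi fit (release fraction proportional to sqrt(t)),
# a standard diagnostic for diffusion-controlled release. Data points
# are illustrative, consistent only with the trends described above.

t = np.array([1, 2, 4, 6, 8, 12], dtype=float)       # hours
released = np.array([7, 11, 16, 20, 24, 29], float)  # % released (Cur-NE)

# Least-squares slope through the origin for released = k_H * sqrt(t):
k_h = np.sum(np.sqrt(t) * released) / np.sum(t)
pred = k_h * np.sqrt(t)
r2 = 1 - np.sum((released - pred) ** 2) / np.sum((released - released.mean()) ** 2)
print(f"Higuchi constant k_H = {k_h:.2f} %/h^0.5, R^2 = {r2:.3f}")
```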
Cur-NE therapeutic efficacy in solid Ehrlich carcinoma-bearing mice

Tumor size, oxidative stress, histology, DNA damage, and apoptotic proteins were used to assess the therapeutic efficacy of Cur-NE in solid Ehrlich carcinoma-bearing mice. Oxidative stress and DNA damage were also considered in the liver, kidney, heart, and spleen to evaluate the suggested treatment mode, as these organs are the initial sites of accumulation of foreign substances associated with the pathogenesis of systemic diseases 51.

Oxidative stress

When compared with normal cells, cancer cells produce reactive oxygen species (ROS) at a higher rate, altering the redox environment. MDA is a biomarker of oxidative stress, since it is the byproduct of lipid peroxidation, and its level is associated with tumor progression; simultaneously, SOD activity is lowered to protect the animals from ROS 52-54. Moreover, most chemotherapeutic mediators raise intracellular levels of ROS and can alter the redox homeostasis of cells 55. Consequently, treatment with free curcumin and Cur-NE was evaluated through the activity of SOD and GST and the levels of MDA and NO in kidney, liver, heart, spleen, and tumor tissues (Fig. 9a-d). Figure 9a demonstrates a significant decrease in NO levels, in the range 20-50%, in the mice group treated with Cur-NE compared with the control group. Figure 9b,c show a significant decrease in the activity of GST and SOD, in the ranges 35-80% and 2-35%, respectively, in the mice group treated with Cur-NE compared with the control group. Also, Fig. 9d shows a significant increase in the level of MDA, in the range 40-85%, in the mice group treated with Cur-NE compared with the control group. These results are consistent with several studies that documented curcumin treatment being accompanied by increased ROS production and disruption of the antioxidant system 56-59.

Comet assay

Modulation of oxidative stress by curcumin and Cur-NE is reflected in DNA damage. The comet assay of kidney, liver, heart, spleen, and tumor tissues showed comet growth in the treatment groups. DNA damage was measured as %DNA in tail, olive tail moment, tail length, and tail moment (Fig. 10a-e) 60,61. Mice groups treated with Cur-NE revealed a significant increase in the percentage of DNA (%DNA) in tail, olive tail moment, tail length, and tail moment in the kidney, liver, heart, and spleen tissues, in the range 2.6-57%, compared with the control group (Fig. 10a-d). On the other hand, tumor tissues showed a nonsignificant result compared with the control group. These data were confirmed by the comet images in Fig. 10e. These results may be due to the small size of Cur-NE, which permits it to enter mitochondria and nuclear pores, and to the ability of curcumin to regulate cellular redox stability by disturbing mitochondrial homeostasis and increasing cellular oxidative stress. Furthermore, curcumin causes ROS creation and decreases the mitochondrial transmembrane potential, thereby activating the DNA damage/repair pathway and mitochondrial apoptosis 62. Additionally, the nonsignificant result in tumor tissues might be due to a lack of genotoxicity of the Cur dose in these cells 63.

Histological examination results are displayed as histopathological variations in striated skeletal muscles (Fig. 11a-d). Figure 11a shows normal striated skeletal muscle in normal mice, compared with marked pleomorphism, hyperchromatism, and mitotic activity of tumor cells in the skeletal muscles of Ehrlich carcinoma-bearing mice (Fig. 11b). On the other hand, the mice group treated with CR shows masses of pleomorphic and anisokaryotic tumor cells penetrating the striated muscles (Fig. 11c), while the group treated with Cur-NE shows inhibition of the tumor cells embedded in the muscles (Fig. 11d).

Western blot

Figure 12a-c illustrates the expression of apoptotic proteins in Ehrlich ascites carcinoma treated with curcumin and Cur-NE. Relative to the control group, the levels of caspases 3 and 9 significantly increased, by around 1- and 3.5-fold, respectively. These results are consistent with the findings of other research 64,65 suggesting that curcumin can cause oxidative stress, resulting in mitochondrial malfunction and an imbalance in antioxidant defense; hence, apoptotic proteins are activated, resulting in cell death (Supplementary Information).

Tumor growth

Curcumin and Cur-NE cause oxidative stress, DNA damage, activation of apoptotic proteins, and histological alterations, which result in tumor shrinkage and a delay in tumor growth (Fig. 13). Figure 13 shows that tumor volumes clearly rise over time in the control group, and that the mice groups treated with curcumin and Cur-NE showed a slower tumor growth rate than the control group. Based on previous studies, these results may be due to the ability of curcumin to generate free-radical-driven apoptosis and/or necrosis of tumor cells 66.
Conclusion

The current Cur-NE was created and characterized using several tools. The formulation demonstrated high encapsulation efficiency and improved, controlled curcumin release. Cur-NE and free curcumin were used to treat solid Ehrlich tumors implanted in Balb/c mice. According to the combined findings, Cur-NE was a successful treatment when compared with free curcumin, with little cytotoxicity for the vital organs.

Guideline

The study is reported in accordance with the Guide for the Care and Use of Laboratory Animals and Use Committee (CU-IACUC), number CU/I/F/1/21.

Figure 1. Image of a tumor formed in the mice.

Figure 2. Graphical illustration of the experimental techniques, as well as the study design and evaluation parameters.

Figure 4. Size distribution (a) and zeta potential (b) of curcumin loaded in nano emulsions (Cur-NE), measured by dynamic light scattering. The data points are the means of three independent measurements.

Figure 8. In vitro release of free curcumin (CR) and curcumin loaded in nano emulsions (Cur-NE).

Figure 9. NO level (a), GST activity (b), SOD activity (c), and MDA level (d) in kidney (K), liver (L), heart (H), spleen (S), and Ehrlich tumor (T) tissues excised from all experimental groups. The data points are represented as mean ± SD (n = 3). Statistical difference is denoted at a: p ≤ 0.0001, b: p ≤ 0.001, and c: p ≤ 0.002, compared with the control group.

Figure 10. Comet parameters of DNA from kidney, liver, heart, spleen, and tumor tissues: (a) %DNA in tail, (b) tail length, (c) tail moment, (d) olive tail moment, (e) comet images for all experimental groups. The data points are represented as mean ± SD (n = 3). Statistical difference is denoted at a: p < 0.0001 and b: p < 0.001, compared with the control group.

Figure 13. Effect of free curcumin and curcumin in nano emulsion on tumor volume (cm³) in the curcumin (CR) and curcumin loaded in nano emulsion (Cur-NE) groups.
Quality of Life and Depressive Symptoms Among Peruvian University Students During the COVID-19 Pandemic

Objective: To determine the factors associated with quality of life and depressive symptoms in Peruvian university students during the COVID-19 pandemic.

Methods: Multicentre study in 1,634 students recruited by convenience sampling. Quality of life (QoL) was assessed with the European Quality of Life-5 Dimensions at three levels (EQ-5D-3L) and depressive symptoms with the Patient Health Questionnaire-9 (PHQ-9). To assess the factors associated with QoL and depressive symptoms, crude and adjusted linear regressions with robust variance were used, reporting coefficients (β).

Results: Overall, 345 (21.1%) students reported problems in performing daily activities, 544 (33.3%) reported pain and discomfort, and 772 (47.2%) were moderately/very anxious or depressed. Furthermore, 207 (12.7%) had moderate-severe or severe depressive symptoms. Men reported better QoL than women (β: 3.2; 95% CI: 1.1, 5.4; p = 0.004) and fewer depressive symptoms (β: -0.7; 95% CI: -1.3, -0.2; p = 0.011). Ayacucho's residents had more depressive symptoms than Ancash's residents (β: 0.8; 95% CI: 0.1, 1.5; p = 0.022), and Piura's residents had fewer depressive symptoms than Ancash's residents (β: -1.1; 95% CI: -1.8, -0.3; p = 0.005). Students who left home during quarantine reported more depressive symptoms (β: 0.7; 95% CI: 0.2, 1.2; p = 0.006).

Conclusion: Problems performing daily activities, pain and discomfort, as well as mild to severe depressive symptoms, were found in more than three-quarters of the sample. Authorities could consider depression care to improve quality of life in regions where high rates of infection occurred during the pandemic.

INTRODUCTION

Coronavirus disease (COVID-19), classified as a pandemic by the World Health Organization (WHO), is a public health problem that has affected the entire world population. The disease has signs and symptoms similar to those of a common cold, but its complications in at-risk patients can be fatal (World Health Organization, 2020a). By September 2021, it had infected more than 200 million people worldwide and caused more than four million deaths (World Health Organization, 2020b). Latin America has been one of the most affected regions, with more than 40 million cases of COVID-19 and 1,477,000 deaths (Pan American Health Organization, 2020).

Measures to control the spread of COVID-19, such as the declaration of a national state of emergency and mandatory social isolation (confinement), have led to changes in people's life routines that could affect their mental health (Brooks et al., 2020; Rajkumar, 2020). A recent review of studies in China and Singapore reported prevalences of anxiety and depressive symptoms in the population in the ranges of 6-50% and 14-48%, respectively (Pappa et al., 2020). Factors such as low income, being a woman, and being unemployed significantly impair mental health in times of a COVID-19 pandemic (Mejia et al., 2020; Parrado-González, 2020). For example, in low-income countries such as Ethiopia, the prevalences of depressive and anxiety symptoms have been reported to be 46.2% and 48.1%, respectively (Necho et al., 2020). Similarly, the COVID-19 pandemic has hurt the quality of life of these populations: a study of Bangladeshi residents found that more than 50% of respondents had a decreased quality of life, mainly due to difficulties in meeting their basic needs, loss of jobs, and barriers to accessing education (Mondal et al., 2021).
A study in China found that 41.3% of people had depressive symptoms and a significantly lower quality of life (Ma et al., 2020). In Latin America, there is little literature assessing the association between quality of life and depressive symptoms in times of pandemic, with only one study, in a Brazilian population, reporting high levels of depressive symptoms (41.9%) and anxiety symptoms (29.0%) associated with poorer quality of life (Vitorino et al., 2021).

In education, the rapid transmission of COVID-19 led to the suspension of face-to-face academic activities, affecting 91.3% of the global student population and 23.4 million higher-education students in Latin America and the Caribbean, who had to adapt quickly to non-face-to-face education (Instituto Internacional para la Educación Superior en América Latina y el Caribe, 2020). A review study of more than 436,799 students from the United States and China found stress in 23%, anxiety in 29%, and depression in 37%, suggesting that the abrupt adaptation to virtual education may have deteriorated the quality of life and mental health of this population (Wang et al., 2021). In Bangladesh, a high percentage of university students with depressive (54.5%) and anxious (42.9%) symptoms was reported (Islam et al., 2020). Also, in Europe, a study in Greece of 1,000 university students reported prevalences of 42.5% for anxiety and 74.3% for depression, with 63.3% reporting suicidal thoughts and quality of life worsened in 43.0% (Kaparounaki et al., 2020). In Italy, a study of 655 university students reported feelings of sadness (51.3%), nervousness (64.6%), irritability (57%), difficulty concentrating (55.9%), difficulty sleeping (54.5%), eating disorders (73.6%), tachycardia (65%), and a tendency to cry (65%) (Commodari et al., 2021).

Peru has been one of the countries most affected by the COVID-19 pandemic. The confinement affected the education of Peruvian students through economic hardship and the digital divide (internet connectivity and computer use). This led the government to implement economic policies for Peruvian families and the education sector, such as the issuance of economic bonds, educational credits, connectivity support, and scholarships (Figallo et al., 2020; Contraloría General de la República, 2021). For example, the National Institute of Statistics and Informatics revealed that during 2020 the monetary poverty of Peruvian households amounted to 30.1%, an increase of 10 percentage points from the previous year, affecting mainly 45.7% of the rural population and 26.0% of the urban population; moreover, 85.7% and 82% of these poor households do not have a computer or internet access, respectively (Instituto Nacional de Estadística e Informática, 2021). In Peru, only studies conducted before the COVID-19 pandemic are available, reporting low levels of quality of life and the presence of depressive symptoms in 52.2% and 24.6% of the university population, respectively (Kuong and Concha, 2017; Diaz-Godiño et al., 2019).
Given the above, research is scarce in Latin America, and non-existent in Peru, on the quality of life (QoL) situation associated with depressive symptomatology, and on the factors associated with both, in a population of university students in times of COVID-19, considering that the current pandemic situation generates a greater state of vulnerability (Puthran et al., 2016; Ribeiro et al., 2018) and makes it difficult to implement programmes and interventions to improve mental health and quality of life in the face of such a state of emergency (Figallo et al., 2020). Therefore, this study determined the factors associated with quality of life and depressive symptomatology in Peruvian university students during COVID-19.

Study Design

We conducted a multicentre cross-sectional study of students at a private university in four regions of Peru (Ancash, Ayacucho, Lima, and Piura) with large student populations and entrenched socio-cultural differences, between July and August 2020.

Participants

The study participants were 1,634 university students from four regions of Peru (Ancash, Ayacucho, Lima, and Piura), obtained through non-probabilistic convenience sampling, with an online survey sent through social networks (WhatsApp and email). The informed consent presented the problem and the objective of the study, after which respondents could decide whether to participate voluntarily; those who agreed to participate answered all the questions of the data collection instrument. The eligibility criteria for a student to participate in the study were: (i) being over 18 years of age; (ii) residing, during the pandemic crisis, in one of the regions of Peru where the study was conducted; and (iii) studying at the undergraduate level at the private university where the data were collected. The only exclusion criterion was not having fully answered the questions in the questionnaires. It is important to mention that it was impossible to evaluate previous psychiatric comorbidities, because at the time of data collection this was not feasible in Peru due to the restrictions in place.

Procedures

The electronic survey generated for the study followed the quality-improvement recommendations for web-based surveys based on the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) (Eysenbach, 2012). The development of the electronic instrument and the data collection were carried out through the SurveyMonkey virtual platform (Survey Monkey, 1999). The electronic instrument began with an introduction explaining the composition of the work team, the objectives of the study, anonymity, the confidentiality of the data, and the use of the information for scientific purposes only. Subsequently, students were given informed consent to continue with the survey. The time to complete the questionnaire was 15 min. The survey was promoted through emails and university student study groups (WhatsApp) between July and August 2020. Participants were free to opt out of the questionnaire at any time, without explanation, and were not asked to identify themselves, owing to the confidentiality of the information. Finally, all the surveys completed by the university students were securely stored in a password-protected database.
Variables

The primary variables were quality of life (QoL) and depressive symptomatology in university students from four regions of Peru. The QoL variable was measured with the European Quality of Life-5 Dimensions questionnaire at three levels (EQ-5D-3L) (EuroQol, 2017), composed of five items corresponding to five dimensions (mobility, self-care, daily activities, pain or discomfort, and depression or anxiety), each with three response levels (absence, moderate presence, and severe presence). It also has a visual analogue scale (EQ-VAS) that reports current health status on a range from 0 (worst status) to 100 (best status); this last score was used as the indicator of QoL. We used the cultural adaptation of the EQ-5D-3L translated into Spanish and available for Peru, carried out by the EuroQol Group, which was responsible for the original instrument and its theoretical validation (Herdman et al., 2003). The EQ-5D-3L is an instrument widely used worldwide to assess quality of life. Some studies have examined its psychometric validity, reporting adequate indicators of discriminant validity (Shannon's H index: 0.47-0.98) and good reliability (weighted kappa: 0.39-0.93); however, this type of validation was performed on the original version (Buchholz et al., 2018). There are few psychometric validation studies of the Spanish-language version using general and/or clinical populations (García-Gordillo et al., 2015). In Peru, this type of validation has not yet been conducted; however, several studies in the country report similar and consistent results within this geographical context (Taype-Rondan et al., 2017; Figueroa-Quiñones et al., 2019).

The Patient Health Questionnaire-9 (PHQ-9) was used for the variable depressive symptoms. It is a nine-item self-administered questionnaire with four response levels according to the frequency of depressive symptoms in the last 2 weeks: not at all, several days, more than half of the days, and nearly every day. The total score ranges from 0 to 27 points. Depressive symptoms are also reported according to severity levels: minimal (0-4), mild (5-9), moderate (10-14), moderately severe (15-19), and severe (20-27) (Spitzer et al., 1999; Kroenke et al., 2001; Cameron et al., 2008). This questionnaire has been validated in the Peruvian population, presenting good psychometric properties in confirmatory factor analysis (CFI = 0.936, TLI = 0.914, RMSEA = 0.089, and SRMR = 0.039) and good reliability (Cronbach's omega and alpha, ω = 0.87 and α = 0.87); these values are interpreted as adequate because the CFI and TLI are close to 0.90, the RMSEA is near 0.08, and the SRMR is below 0.08, while for reliability the omega and alpha values exceed the optimal point (0.80) (Villarreal-Zegarra et al., 2019).
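To make the PHQ-9 banding just described concrete, a minimal sketch follows; the cut-offs are those listed above, and the example answers are invented.

```python
# Minimal sketch of PHQ-9 scoring with the severity bands described
# above; `answers` holds the nine item responses coded 0-3.

BANDS = [(4, "minimal"), (9, "mild"), (14, "moderate"),
         (19, "moderately severe"), (27, "severe")]

def phq9_severity(answers):
    assert len(answers) == 9 and all(0 <= a <= 3 for a in answers)
    total = sum(answers)
    label = next(name for cutoff, name in BANDS if total <= cutoff)
    return total, label

print(phq9_severity([1, 2, 1, 0, 2, 1, 1, 0, 2]))  # (10, 'moderate')
```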
The covariates in the study were age (in tertiles), sex (male or female), region of residence (Ancash, Ayacucho, Lima, or Piura), marital status (single, separated, widowed, divorced, or married/partnered), occupation (studying and working, or only studying), having left home during quarantine (no or yes), decrease in family income during quarantine (no or yes), living alone (no or yes), and having a family member with a chronic disease (no or yes).

Data Analysis

A descriptive analysis was presented using measures of central tendency and dispersion (for numerical variables) and absolute frequencies (for categorical variables). We performed linear regressions with robust variance to evaluate the factors associated with the EQ-VAS (quality of life) and PHQ-9 (depressive symptoms) scores, reporting crude and adjusted models with coefficients (β) and their 95% confidence intervals (95% CI). In all cases, variables that obtained p < 0.20 in the crude model were included in the adjusted model. Analyses were performed in the Stata v15.0 statistical software (StataCorp, 2017).

Ethics Statement

This study was reviewed and approved by the Ethics Committee of the Universidad Católica Los Ángeles de Chimbote (Los Ángeles de Chimbote Catholic University). In addition, the study was anonymous and voluntary, so it posed no risk to participants, who accepted their participation through the online informed consent. With the approval of the ethics committee and the ethical steps followed for data collection, we sought to ensure compliance with the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (Commission for Protection of Human Subjects of Biomedical and Behavioral Research, 1978).

RESULTS

Initially, 1,825 participants were recruited, of whom 65 did not agree to participate in the study and 126 did not complete the entire survey; therefore, 1,634 (89.5%) university students participated in the study. The participants were distributed according to region of residence: 712 (43.6%) from Ayacucho, 347 (21.2%) from Ancash, 342 (20.9%) from Piura, and 233 (14.3%) from Lima (Table 1). Participants had a median age of 24 years (interquartile range: 20-30 years); 1,146 (70.1%) were women, and 1,270 (77.7%) reported being single, separated, widowed, or divorced. University students studying and working at the same time numbered 842 (51.5%), with a notable prevalence in Lima. A total of 879 (53.8%) of the university students reported leaving home during quarantine, 1,511 (92.5%) had a decrease in family income, 1,502 (91.9%) declared not living alone, and 959 (58.7%) had a family member with a chronic disease (Table 1).

Regarding quality of life during the pandemic, 345 (21.1%) reported problems in carrying out daily activities, 544 (33.3%) reported pain and discomfort, 667 (40.8%) reported being moderately anxious or depressed, and 105 (6.4%) reported being severely anxious or depressed (Table 2).

The univariate analysis showed that, during the COVID-19 pandemic, being male and residing in Lima or Piura were associated with higher EQ-VAS (quality of life) scores, whereas being between 22 and 27 years of age, being dedicated only to studying, residing in the Ayacucho region, leaving home during quarantine, suffering a decrease in family income during quarantine, having a family member with a chronic illness, and higher depressive-symptom scores were associated with lower QoL scores (Table 4). On the other hand, for the factors associated with PHQ-9 scores (depressive symptoms), all variables were significantly associated in the crude models. However, after adjusting the model for all variables, the following remained significant: residing in the region of Ayacucho, leaving home during quarantine, and having a family member with a chronic disease were associated with higher depressive-symptom scores, while being male, residing in the region of Piura, and a higher QoL score were associated with lower depressive-symptom scores.
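The crude-then-adjusted strategy behind these estimates (robust-variance screens at p < 0.20, survivors entering one adjusted model) can be sketched as follows; the analyses were run in Stata, so this Python sketch only illustrates the strategy, with synthetic data whose effects loosely mirror the reported βs, hypothetical column names, and HC3 as one common robust-variance choice assumed here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch: crude linear regressions with robust (HC3) variance
# screen each covariate at p < 0.20; survivors enter one adjusted model.
# Synthetic data and column names are illustrative, not the study's.

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], n),
    "left_home": rng.choice([0, 1], n),
    "chronic_family": rng.choice([0, 1], n),
})
df["phq9"] = (5 + 0.7 * df["left_home"] + 1.5 * df["chronic_family"]
              - 0.7 * (df["sex"] == "male") + rng.normal(0, 4, n))

kept = []
for var in ["sex", "left_home", "chronic_family"]:
    crude = smf.ols(f"phq9 ~ {var}", data=df).fit(cov_type="HC3")
    if crude.pvalues.drop("Intercept").min() < 0.20:
        kept.append(var)

adjusted = smf.ols("phq9 ~ " + " + ".join(kept), data=df).fit(cov_type="HC3")
print(adjusted.params)      # beta coefficients
print(adjusted.conf_int())  # robust 95% CIs
```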
Regarding positive associations with depressive-symptom scores, university students residing in Ayacucho had higher scores than those residing in Ancash (β = 0.8, 95% CI = 0.1 to 1.5); those who left home during quarantine had higher scores than those who did not (β = 0.7, 95% CI = 0.2 to 1.2); and those with a family member with a chronic illness had higher scores in depressive symptomatology than those without (β = 1.5, 95% CI = 1.0 to 2.1), the latter being the largest factor. Regarding negative associations, men had lower scores than women (β = -0.7, 95% CI = -1.3 to -0.2), and residents of the region of Piura had lower scores than those residing in Ancash (β = -1.1, 95% CI = -1.8 to -0.3).

DISCUSSION

In relation to QoL during the pandemic, university students reported some problems in performing daily activities (345; 21.1%), pain and discomfort (544; 33.3%), being moderately anxious or depressed (667; 40.8%), and being severely anxious or depressed (105; 6.4%). Furthermore, the PHQ-9 showed that 741 (45.4%) had mild or moderate depressive symptoms, while 207 (12.7%) had moderately severe or severe depressive symptoms. These results are consistent with a study conducted in Vietnamese university students during the COVID-19 pandemic, which reported greater impairment in the anxiety/depression and pain/discomfort dimensions of QoL (Tran et al., 2020). Another study with young Chinese adults reported similar impairment in QoL dimensions during the pandemic (Ping et al., 2020). These findings may be due to bereavement over the death of students' family members, isolation and reduced physical activity (Hamid and Jahangir, 2020), and fear caused by overexposure to the media and the high lethality of the virus (Mejia et al., 2020), which may have increased symptoms of anxiety or depression. In addition, the long months of confinement resulted in constant exposure to stress, often manifested as pain and discomfort (Esquivel-Acevedo et al., 2020). Moreover, non-face-to-face education meant that students had to sit in front of the computer for long periods, choosing postures that provided comfort; however, incorrect body postures could have generated pain and tension.

On the other hand, males reported better QoL than females (β: 3.2; 95% CI: 1.1, 5.4; p = 0.004) and fewer depressive symptoms (β: -0.7; 95% CI: -1.3, -0.2; p = 0.011). This result could be explained by several factors; for example, women nowadays have more responsibilities in the work environment, and during confinement the demands for family support (e.g., family caregiving) increased, leading to more stress and depression (Verma and Mishra, 2020). Moreover, confinement as a measure to prevent the spread of COVID-19, and the inability to interact socially with peers, has a negative impact on mental health (Brooks et al., 2020; Palgi et al., 2020). Likewise, the sedentary behaviour adopted by students in the new normality of non-face-to-face education, and financial insecurity regarding educational expenses, may have increased depressive symptoms in university students (Huckins et al., 2020; Islam et al., 2020); these symptoms may also have been a reaction associated with confinement and changes in habits (Vásquez et al., 2020).
Residents of Ayacucho presented greater depressive symptomatology than those of Ancash (β: 0.8; 95% CI: 0.1, 1.5; p = 0.022), and residents of Piura had less depressive symptomatology than those of Ancash (β: -1.1; 95% CI: -1.8, -0.3; p = 0.005). This result could be explained by the fact that, during the evaluation months of our study, the rate of COVID-19 infections and deaths in Piura had decreased, while in Ayacucho, in rural Peru, it was at its highest peak (Plataforma digital única del Estado Peruano, 2020). In addition, students who left home during the quarantine were found to have greater depressive symptoms than those who complied with staying at home (β: 0.7; 95% CI: 0.2, 1.2; p = 0.006). Possibly one reason why university students were forced to leave home was the loss of family income, which implies difficulties in accessing treatment, medication, and living expenses, affecting quality of life and leading to the onset of depressive symptoms (Kretchy et al., 2020; Ping et al., 2020; Tran et al., 2020). A study in China with university students also reported mental health problems due to COVID-19 (Wang and Zhao, 2020). It is important to note that other studies show that these mental health problems are more prominent in students in the adolescent stage (Commodari and La Rosa, 2020), and most likely the entire increase is due to the COVID-19 pandemic; however, the extent to which the changes of this stage themselves (i.e., mood and mental impairment) influence mental health reports has so far not been determined (Commodari and La Rosa, 2020). Another reason may have been that this age group, according to case-fatality reports, had a lower mortality risk than adults and the elderly, but the fear of bringing the virus home and infecting their family members may have generated the depressive symptoms (Figueroa-Quiñones and Ipanaqué-Neyra, 2020; Johnson et al., 2020). Depressive symptoms influence and affect the quality of life of university students (Gan and Yuen Ling, 2019).

The strength of our study is that it is the first to report up-to-date evidence on the quality of life and mental health status of university students in Peru after the confinement of the COVID-19 pandemic. However, the study has some limitations. The first, due to the confinement caused by COVID-19, is the non-probabilistic nature of the sampling, which affects the representativeness of the study sample and probably precludes generalizing the results obtained; however, a significant sample size was achieved in different cities of Peru, which produces consistent evidence on university students, and the results obtained were similar to those of other studies (Chang et al., 2020; Tran et al., 2020). Another limitation was that no restrictions on particular clinical characteristics were applied at inclusion; the subjects may have had previous psychiatric comorbidities, so the problems found may be slightly inaccurate, and it is recommended that future studies include previous history of, or treatment for, mental health conditions. Likewise, the instrument used to assess quality of life (EQ-5D-3L) has not been psychometrically validated in Peru; however, this instrument has been used in other studies and populations in Peru (Taype-Rondan et al., 2017; Figueroa-Quiñones et al., 2019), and it was translated into Spanish by the EuroQol Group and has been adapted in other countries and languages (EuroQol, 2017).
CONCLUSION

University students reported some problems in performing daily activities, pain and discomfort, as well as mild to severe depressive symptoms in more than half of the sample in relation to their QoL during the pandemic. Our health authorities should therefore also consider psychological interventions to reduce depressive symptoms and improve QoL. On the other hand, men reported better QoL than women and fewer depressive symptoms; female students should therefore be a priority in the Peruvian university population for managing depressive symptoms. In addition, Ayacucho's residents had more depressive symptoms than Ancash's residents, and Piura's residents reported fewer depressive symptoms than Ancash's residents. Health authorities might therefore consider addressing depression during pandemics in settings where high infection rates are reported. As a clear example, the professional psychology associations in the Peruvian cities most affected by depression should work in coordination with the health and university sectors to design plans and actions for the prevention and detection of, and subsequent emotional support for, students with depressive symptomatology through virtual platforms (electronic devices, hotlines and the internet), providing immediate professional help to students affected during COVID-19. It was also found that students who left home during quarantine had more depressive symptoms than those who did not; education authorities should therefore provide contingency plans for students in pandemic situations, as the lack of resources often forces them to leave home. Universities in Peru could mitigate such depressive symptoms in students through more effective promotion of care programmes with psychological specialists; while such promotion does exist, its impact on students is mostly low. Increases in student scholarships and reductions in tuition costs by universities, and/or subsidies and economic bonuses for families from the government, could also help reduce students' worries about paying for their studies, considering that in a developing country many university students must study and work simultaneously to survive and achieve their academic goals. Furthermore, suicidal tendencies are prevalent in subjects with anxiety/depression problems and are often complicated by underlying pathophysiological factors; given our findings in the current COVID-19 context, we believe the risk to subjects' mental health and quality of life might increase. Therefore, mental health authorities should plan assessments, interventions and treatments in clinical practice for affected students.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://doi.org/10.7910/DVN/93WCEZ.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Research Ethics Committee of Los Angeles Catholic University of Chimbote. The participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

JF-Q carried out the administration of the manuscript. MI-Z performed the methodology, data curation, formal analysis, and supervision of the manuscript. JF-Q and MI-Z were responsible for the visualization and validation of the manuscript.
JC was responsible for the acquisition of funds for the manuscript. JF-Q, JC, DM-P, and MI-Z were responsible for the preparation of the first report and the final version of the original manuscript. All authors contributed to the article and approved the submitted version.

FUNDING

This study was funded by the Universidad Peruana Unión (UPU).
2022-02-26T00:09:53.284Z
2022-02-23T00:00:00.000
{ "year": 2022, "sha1": "111b2b47563f92c9e0e404cb7fb1214df024a721", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2022.781561/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7393f37c8f7549eddcf1997547392de2f2cf2cd9", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
267320387
pes2o/s2orc
v3-fos-license
Photon-triggered jets as probes of multi-stage jet modification

Prompt photons are created in the early stages of heavy ion collisions and traverse the QGP medium without any interaction. Therefore, photon-triggered jets can be used to study jet quenching in the QGP medium. In this work, photon-triggered jets are studied through different jet and jet substructure observables for different collision systems and energies using the JETSCAPE framework. Since the multistage evolution used in the JETSCAPE framework is adequate to describe a wide range of experimental observables simultaneously using the same parameter tune, we use the same parameters tuned for jet and leading-hadron studies. The same isolation criteria used in the experimental analysis are used to identify prompt photons for better comparison. For the first time, high-accuracy JETSCAPE results are compared with multi-energy LHC and RHIC measurements to better understand the deviations observed in prior studies. This study highlights the importance of multistage evolution for the simultaneous description of experimental observables across different collision systems and energies using a single parameter tune.

Introduction

Heavy ion collisions offer a means to explore the early stages of the universe, particularly the Quark Gluon Plasma (QGP) formed microseconds after the Big Bang. Among the various probes employed to study the QGP in heavy ion collisions, prompt photons are crucial [1], as they enable the estimation of the energy and direction of the parton initiating the away-side jet before energy loss occurs. In recent years, there has been growing interest in investigating observables related to prompt photons within both the theoretical and experimental communities. The experimental task of identifying prompt photons among a large number of shower and fragmentation photons poses significant challenges, and a common practice is to apply isolation criteria. Given that photons do not interact with the QGP medium, these isolated photons predominantly consist of prompt photons.

The scarcity of events featuring prompt photons is a consequence of the low cross-section of the hard scattering processes leading to their production. In theoretical simulations, generating modified events with prompt photons requires adjusting the cross-section to significantly reduce computation time. However, it is important to note that this simulation approach differs from the experimental analysis, which relies on isolated direct photons. In this study, we compare various observables using both modified prompt-photon events and full events generated by JETSCAPE 3.X (utilizing the PP19 [2] and AA22 [3] tunes) against available experimental results. Additionally, for the first time, we explore the isolated-photon and di-jet correlation, the groomed jet substructure associated with γ-triggered jets, and the γ-jet asymmetry, using new JETSCAPE results featuring improved statistics [4].

2 Simulating photon-triggered jets using the JETSCAPE framework

JETSCAPE is the first framework that supports multistage evolution, where multiple modules can be employed to simulate the partonic shower within a QGP medium, depending on the virtuality of each shower parton. This multistage evolution capability enables the JETSCAPE framework to simultaneously describe numerous experimental observables across different center-of-mass energies and centralities using a single parameter tune.
In the simulation of p-p collisions, partons produced in the initial hard scattering in PYTHIA are fed into the MATTER energy loss module for vacuum partonic showers. Subsequently, all final-state partons from MATTER are transferred back to PYTHIA for string hadronization. For Pb-Pb collisions, pre-generated event-by-event hydro profiles with initial conditions are utilized. As in p-p collisions, partons produced in the initial hard scattering in PYTHIA are transferred into the MATTER and LBT energy loss modules based on their virtuality. Final-state hadrons are generated using PYTHIA's string hadronization, following the partonic shower. The same set of parameters used in previous p-p [2] and Pb-Pb [3] studies is applied in this study without further parameter tuning.

In this study, trigger photons are identified using the same isolation criterion employed in the experimental analyses, for both modified prompt-photon events and full events [5, 6]. The same sets of generated events at $\sqrt{s_{NN}} = 5.02$ TeV are used to analyze a wide variety of observables and compare them with available experimental results. After identifying the isolated photon in an event, jets are clustered using the anti-$k_T$ jet clustering algorithm within FastJet.

Results and Discussion

The γ-jet asymmetry was investigated for both p-p and Pb-Pb collisions using modified prompt-photon and full events, and the findings were compared with those of ATLAS and CMS. Figures 1 and 2 illustrate comparisons of the $X_{j\gamma}$ distributions for four different $p_T^{\gamma}$ intervals with ATLAS results for p-p and Pb-Pb, respectively. Given that the ATLAS results are unfolded, unmodified JETSCAPE results were used in this comparison. Despite the considerably large uncertainties in the results from full events, due to the rarity of events with prompt photons, better agreement can be observed in both the p-p and Pb-Pb results.

Since the CMS results for both p-p and Pb-Pb are smeared, the same smearing function is applied for a proper comparison, as illustrated in Figure 3. Similarly, in this case, results from full events exhibit better agreement with the experimental results. Although the isolated photons mainly consist of prompt photons, these findings suggest a significant contribution to the transverse momentum imbalance from other photons, including those produced in the partonic shower and fragmentation photons.

Figure 4 displays the $z_g$ distribution, which measures the energy imbalance of the hardest split in a γ-triggered jet. The ratio between Pb-Pb and p-p illustrates the modification from the QGP medium. As observed in Figure 4, the $z_g$ distribution does not exhibit a significant dependence on the transverse momentum imbalance $X_{j\gamma}$. Although no experimental results are available yet to compare with these groomed jet substructure observables using photon-triggered jets, experimental studies are ongoing.

Figure 1. γ-jet asymmetry for p-p collisions using prompt-photon events and full events generated by JETSCAPE compared with ATLAS results; four $p_T^{\gamma}$ regions from 63.1 GeV to 200 GeV are used.

Figure 2. γ-jet asymmetry for Pb-Pb collisions using prompt-photon events and full events generated by JETSCAPE compared with ATLAS results; the same $p_T^{\gamma}$ regions as in Figure 1 are used.
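To illustrate the analysis chain, the sketch below computes a simple cone isolation for the photon, the transverse-momentum imbalance $X_{j\gamma} = p_T^{\mathrm{jet}}/p_T^{\gamma}$, and the groomed momentum fraction $z_g$ of the hardest split. The cone radius, isolation threshold and array layout are illustrative assumptions, not the exact criteria of the cited analyses.

```python
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the (eta, phi) plane with phi wrap-around."""
    dphi = np.mod(phi1 - phi2 + np.pi, 2.0 * np.pi) - np.pi
    return np.sqrt((eta1 - eta2) ** 2 + dphi ** 2)

def is_isolated(photon, particles, r_iso=0.4, et_max=5.0):
    """Cone isolation: sum the pT of all particles within dR < r_iso of
    the photon (excluding the photon itself) and require it below et_max GeV."""
    dr = delta_r(photon["eta"], photon["phi"],
                 particles["eta"], particles["phi"])
    cone_sum = particles["pt"][(dr < r_iso) & (dr > 1e-6)].sum()
    return cone_sum < et_max

def x_jgamma(jet_pt, photon_pt):
    """Transverse-momentum imbalance of a photon-tagged jet."""
    return jet_pt / photon_pt

def z_g(pt1, pt2):
    """Groomed momentum fraction of the hardest split (soft-drop z_g)."""
    return min(pt1, pt2) / (pt1 + pt2)
```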
2024-01-31T06:45:09.490Z
2024-01-30T00:00:00.000
{ "year": 2024, "sha1": "3a103746e09c7fb4c074f58c87a32980f6c2b342", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1051/epjconf/202429611008", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3a103746e09c7fb4c074f58c87a32980f6c2b342", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119104507
pes2o/s2orc
v3-fos-license
Detection of relic gravitational waves in the thermal case using Adv.LIGO data of GW150914

The thermal spectrum of relic gravitational waves produces a new amplitude, called the 'modified amplitude'. Our analysis shows that there exist some chances for detecting the thermal spectrum, in addition to the usual spectrum, with the Adv.LIGO data of GW150914 and a detector based on maser light (Dml). The behavior of the inflation and reheating stages is often described as power-law expansion, $S(\eta)\propto \eta^{1+\beta}$ and $S(\eta)\propto \eta^{1+\beta_s}$ respectively. The $\beta$ and $\beta_s$ have a unique effect on the shape of the spectrum. We find some upper bounds on $\beta$ and $\beta_s$ by comparing the usual and thermal spectra with Adv.LIGO and Dml; this result refines our information about the nature of the evolution of the inflation and reheating stages.

I. INTRODUCTION

Relic gravitational waves (RGWs) span a wide frequency range, $\sim(10^{-19}$ Hz to $10^{11}$ Hz). They are generated before and during the inflation stage and have not interacted with other matter during their travel from the early universe until now. Therefore they contain valuable information about the early universe, which we can obtain by detecting RGWs across the full frequency range. Nowadays, efforts are underway to detect the waves in different frequency ranges: $\sim(10^{-19}$ Hz to $10^{-16}$ Hz) by Planck [1], $\sim(10^{-7}$ Hz to $10^{0}$ Hz) by eLISA [2], $\sim(10^{-1}$ Hz to $10^{4}$ Hz) by Advanced LIGO (Adv.LIGO) [3], $\sim(10^{0}$ Hz to $10^{4}$ Hz) by the Einstein Telescope (ET) [4], and the GHz band by a detector based on maser light (Dml) [5], among others. It is believed that the thermal spectrum of RGWs originates from a pre-inflationary stage and may affect the CMB temperature and polarization anisotropies in the low frequency range ($\sim 10^{-18}$ Hz to $10^{-16}$ Hz) [6]. The thermal spectrum of RGWs extends this investigation to the general pre-inflationary scenario by assuming the effective equation of state $\omega$ to be a free parameter [7]. Also, in the high frequency range $\sim(10^{8}$ Hz to $10^{11}$ Hz), extra dimensions cause thermal gravitational waves (or, equivalently, a primordial background of gravitons) [8]; for more details about extra dimensions see Ref. [9]. For a gravity-wave background origin, any fit of the CMB anisotropy in terms of a gravity background should include a thermal dependence in the spectrum [10]. Thus, in the middle range $\sim(10^{-16}$ Hz to $10^{8}$ Hz), this thermal spectrum produces the new amplitude called the 'modified amplitude' [11]. We analysed the modified amplitude by comparing it with the sensitivity of Adv.LIGO, ET and LISA in [11]. Recently, Adv.LIGO detected the gravitational waves from a pair of black holes, the event GW150914 [12], with a peak gravitational-wave strain of $1.0 \times 10^{-21}$ in the frequency range 35 to 250 Hz. There is an average measured sensitivity in the range $\sim(10^{-1}$ Hz to $10^{4}$ Hz) of the Adv.LIGO detectors (Hanford and Livingston) during the time analysed to determine the significance of GW150914 (Sept 12 to Oct 20, 2015) [3]. Therefore, in this work we upgrade our previous work [11] by comparing the thermal spectrum with the average measured sensitivity of Adv.LIGO and Dml. We show that there are some chances for detecting the spectrum of RGWs in both the usual and thermal cases. On the other hand, the different stages of the evolution of the universe (inflation, reheating, radiation, matter and acceleration) cause variations in the shape of the spectrum of the RGWs.
The behaviour of the inflation and reheating stages is often described as power-law expansion, $S(\eta) \propto \eta^{1+\beta}$ and $S(\eta) \propto \eta^{1+\beta_s}$ respectively, where $S$ and $\eta$ are the scale factor and conformal time, and $\beta$, $\beta_s$ are constrained by $1+\beta < 0$ and $1+\beta_s > 0$ [13, 14]. The parameters $\beta$ and $\beta_s$ have a unique effect on the shape of the spectrum in the full range $\sim(10^{-19}$ Hz to $10^{11}$ Hz) and in the high frequency range $\sim(10^{8}$ Hz to $10^{11}$ Hz), respectively; these two parameters therefore play the main role in the spectrum of the RGWs. Thus we are interested in obtaining upper bounds on $\beta$ and $\beta_s$ by comparing the usual and thermal spectra with the average measured sensitivity of Adv.LIGO and Dml, as the resulting bounds can refine our information about the nature of the evolution of the inflation and reheating stages. In the present work, we use the units $c = \hbar = k_B = 1$.

II. THE SPECTRUM OF GRAVITATIONAL WAVES

The perturbed metric for a homogeneous, isotropic, flat Friedmann-Robertson-Walker universe can be written as $ds^2 = S^2(\eta)\left[-d\eta^2 + (\delta_{ij} + h_{ij})\,dx^i dx^j\right]$, (1) where $\delta_{ij}$ is the Kronecker delta symbol and the $h_{ij}$ are metric perturbations with the transverse-traceless properties $\nabla_i h^{ij} = 0$, $\delta^{ij} h_{ij} = 0$. The gravitational waves are described by the linearized field equation $\partial_\mu\left(\sqrt{-g}\,\partial^\mu h_{ij}\right) = 0$. (2) The tensor perturbations have two independent physical degrees of freedom, denoted $h_+$ and $h_\times$ and called polarization modes. We express $h_+$ and $h_\times$ in terms of the creation ($a^\dagger$) and annihilation ($a$) operators; here $\mathbf{k}$ is the comoving wave vector, $k = |\mathbf{k}|$, $l_{pl} = \sqrt{G}$ is the Planck length and $p = +, \times$ labels the polarization modes. The polarization tensors $\epsilon^{p}_{ij}(\mathbf{k})$ are symmetric and transverse-traceless, $k^i \epsilon^{p}_{ij}(\mathbf{k}) = 0$, $\delta^{ij}\epsilon^{p}_{ij}(\mathbf{k}) = 0$. Also, the creation and annihilation operators satisfy $[a^{p}_{\mathbf{k}}, a^{p'\,\dagger}_{\mathbf{k}'}] = \delta_{pp'}\,\delta^3(\mathbf{k} - \mathbf{k}')$, and the initial vacuum state is defined by $a^{p}_{\mathbf{k}}|0\rangle = 0$ for each $\mathbf{k}$ and $p$. For a fixed wave number $k$ and a fixed polarization state $p$, eq. (2) gives the Klein-Gordon-type mode equation $f_k''(\eta) + \left(k^2 - S''/S\right) f_k(\eta) = 0$, where $h_k(\eta) = f_k(\eta)/S(\eta)$ [13, 14] and a prime denotes a derivative with respect to conformal time. Since the polarization states behave identically, we consider $f_k(\eta)$ without the polarization index. The solutions of this equation for the different stages of the universe are given in Appendix A.

III. THE ANALYSIS OF THE SPECTRUM

Let us call the spectrum of the waves in the thermal case the 'thermal spectrum'. In this section we support and upgrade our previous results obtained in [11] by comparing the thermal spectrum with the average measured sensitivity of Adv.LIGO (Hanford and Livingston) and Dml. We are interested in the frequency range $\sim(10^{-1}$ Hz to $10^{11}$ Hz), and we plotted the spectrum using eqs. (9-11) in Figs. 1 and 2. On the other hand, there is a detection procedure based on the Dml in the GHz band [5]. The author of [5] obtained the sensitivity of the Dml at the frequency 4.5 GHz. He believes that the Dml cannot detect the RGWs due to the gap of 4-5 orders of magnitude between the sensitivity of the Dml and the amplitude of the waves, and therefore recommends upgrading the Dml following the points mentioned in [5]. In addition to those points, we claim that the detection problem can be removed by considering the thermal spectrum. Using the mode equation, we can write an exact solution $f_k(\eta)$ by matching its value and derivative at the joining points, for a sequence of successive scale factors with different $u$, for a given model of the expansion of the universe.
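As an illustration, the mode equation above can be integrated numerically for a single power-law stage. The sketch below uses de Sitter-like inflation ($\beta = -2$) and plane-wave initial data deep inside the barrier, which is a simplification of the adiabatic vacuum prescription of the full treatment.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mode_rhs(eta, y, k, beta):
    """f'' + (k^2 - S''/S) f = 0 for S(eta) = l0 |eta|^(1+beta),
    for which S''/S = beta * (1 + beta) / eta^2."""
    f, fp = y
    potential = beta * (1.0 + beta) / eta ** 2
    return [fp, -(k ** 2 - potential) * f]

k, beta = 1.0, -2.0                      # beta = -2: de Sitter inflation
eta = np.linspace(-200.0, -0.1, 4000)    # conformal time (negative here)
y0 = [np.cos(k * eta[0]), -k * np.sin(k * eta[0])]  # plane wave, k|eta| >> 1
sol = solve_ivp(mode_rhs, (eta[0], eta[-1]), y0,
                t_eval=eta, args=(k, beta), rtol=1e-8, atol=1e-10)
h_k = sol.y[0] / np.abs(eta) ** (1.0 + beta)   # h_k = f_k / S, with l0 = 1
```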
The approximate solution of the spectrum of RGWs is usually computed in two limiting cases, depending on whether the waves are outside ($k^2 \gg S''/S$, short-wave approximation) or inside ($k^2 \ll S''/S$, long-wave approximation) the barrier. For RGWs outside the barrier, the corresponding amplitude decreases as $h_k \propto 1/S(\eta)$, while for waves inside the barrier, $h_k = C_k$ is simply a constant. These results can therefore be used to obtain the spectrum for the present stage of the universe [13, 21]. The history of the expansion of the universe can be written as follows: a) Inflation stage: $S(\eta) = l_0|\eta|^{1+\beta}$, where $1+\beta < 0$, $\eta < 0$ and $l_0$ is a constant. b) Reheating stage: $S(\eta) \propto \eta^{1+\beta_s}$, with $1+\beta_s > 0$. c) Radiation-dominated stage. d) Matter-dominated stage, where $\eta_E$ denotes the time at which the dark energy density $\rho_\Lambda$ equals the matter energy density (based on the TT,TE,EE+lowP+lensing results of Planck 2015 [1]). e) Accelerating stage, where $\eta_0$ is the present time. For normalization purposes of $S$, we put $|\eta_0 - \eta_a| = 1$, which fixes $\eta_a$, and the constant $\ell_0$ is fixed by the relation to the present Hubble radius $\ell_0$, with $H_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$ from Planck 2015 [1]. The wave number $k_H$ corresponding to the present Hubble radius is $k_H = 2\pi S(\eta_0)/\ell_0 = 2\pi\gamma$. Every operator in the Hilbert space has a corresponding counterpart in the tilde space [22]. Therefore a thermal vacuum state can be defined as $|0(\theta_k)\rangle = T(\theta_k)\,|0\tilde{0}\rangle$, where $T(\theta_k) = \exp[-\theta_k(a_k \tilde{a}_k - a_k^{\dagger} \tilde{a}_k^{\dagger})]$ is the thermal operator and $|0\tilde{0}\rangle$ is the two-mode vacuum state at zero temperature. The quantity $\theta_k$ is related to the average number of thermal particles, $\bar{n}_k = \sinh^2\theta_k$. Then $\bar{n}_k$ for a given temperature $T$ is provided by the Bose-Einstein distribution $\bar{n}_k = [\exp(k/T) - 1]^{-1}$; see Ref. [11] for more details.
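The thermal occupation translates directly into an enhancement of the amplitude. Below is a minimal sketch, in the units $c = \hbar = k_B = 1$ used here, assuming the generic two-mode squeezing enhancement factor $\sqrt{1+2\bar{n}_k} = \sqrt{\coth(k/2T)}$; the precise form of the modified amplitude is given in Ref. [11].

```python
import numpy as np

def n_bar(k, T):
    """Bose-Einstein occupation number, 1/(exp(k/T) - 1)."""
    return 1.0 / np.expm1(k / T)

def thermal_enhancement(k, T):
    """Factor multiplying the zero-temperature amplitude,
    sqrt(1 + 2*n_bar) = sqrt(coth(k/(2T)))."""
    return np.sqrt(1.0 + 2.0 * n_bar(k, T))

# Example: enhancement across a band of wave numbers at temperature T = 1
ks = np.logspace(-2, 2, 5)
print(thermal_enhancement(ks, T=1.0))
```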
2016-09-11T07:33:42.000Z
2016-06-27T00:00:00.000
{ "year": 2016, "sha1": "66866408922c8397a735783b58d95c5325b58221", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-017-5135-8.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "8257e00bb33fc9eaab5485137121e0c666e7147d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
12820552
pes2o/s2orc
v3-fos-license
Chlamydia Anti-apoptosis – A By-product of Metabolic Reprogramming?

a INSERM U1138, Centre de Recherche des Cordeliers, Paris 75006, France; b Equipe 11 labellisée Ligue Nationale contre le Cancer, Centre de Recherche des Cordeliers, Paris 75006, France; c Université Paris Descartes, Paris 75006, France; d Metabolomics and Cell Biology Platforms, Institut Gustave Roussy, Villejuif 94800, France; e Pôle de Biologie, Hôpital Européen Georges-Pompidou, AP-HP, Paris 75015, France; f Karolinska Institute, Department of Women's and Children's Health, Karolinska University Hospital, Stockholm 17176, Sweden

The obligate intracellular pathogen Chlamydia trachomatis is the most frequent bacterial agent of sexually transmitted disease worldwide. Recent estimates of the World Health Organization suggested >100 million annual cases of C. trachomatis infections (Newman et al., 2015). While acute infections are asymptomatic in 50-70% of all cases, repeated and recurrent infections occur, increasing the risk for complications such as pelvic inflammatory disease, ectopic pregnancy, and infertility (Schuchardt and Rupp, 2016). Less well understood is whether C. trachomatis infection also represents a risk factor for the development of cervical cancer, because studies that explored this association reported contradictory findings (Zhu et al., 2016). Potential indirect pro-carcinogenic effects of C. trachomatis include its ability to promote the acquisition and persistence of human papilloma virus, the principal etiological agent in cervical cancer, and the establishment of a pro-inflammatory environment, which favors cellular damage and transformation (Zhu et al., 2016). In addition, Chlamydia-mediated reprogramming of infected cells, which includes modulation of cell signaling, metabolism, DNA integrity, genome stability, proliferation and survival, may directly sensitize cells to cellular transformation, while at the same time protecting them from death (Gagnaire et al., 2017).

In this issue of EBioMedicine, Al-Zeer et al. describe a new mechanism by which C. trachomatis couples metabolic reprogramming of host cells, via stabilization of the Myc oncogene and induction of hexokinase II (HK-II) expression, with enhanced production of infectious Chlamydia particles and protection from apoptosis via mitochondrial effects of HK-II (Al-Zeer et al., 2017). The fact that cells infected with C. trachomatis are protected from apoptosis has been known for decades (Fan et al., 1998). The apoptotic machinery in infected cells is blocked upstream of mitochondrial outer membrane permeabilization (Fan et al., 1998), preserving a major energy-generating system of the host cell. While the initial idea that this effect is mediated by degradation of pro-apoptotic BH3-only proteins was later disproved, other anti-apoptotic activities were described, including the activation of pro-survival signaling pathways (Raf/MEK/ERK and PI3K/AKT) that mediate upregulation and stabilization of the anti-apoptotic protein Mcl-1 and degradation of p53 (Gonzalez et al., 2014; Rajalingam et al., 2008; Siegl et al., 2014). Down-regulation of p53 was also shown to protect mitochondrial architecture (Chowdhury et al., 2017) and to shift host cell metabolism towards aerobic glycolysis and the pentose phosphate pathway, which may benefit Chlamydia replication by providing anabolic substrates (Siegl et al., 2014). This suggests that apoptosis suppression by Chlamydia may constitute a by-product of the metabolic reprogramming of the host cell.
The current study by Al-Zeer et al. provides further evidence in favor of this idea (Al-Zeer et al., 2017). The authors first demonstrate that infection with C. trachomatis induces a major surge in Myc protein levels, presumably via Myc protein stabilization mediated by its PDPK1-PLK1-dependent phosphorylation. Moreover, in infected cells HK-II protein expression was upregulated in a Myc-dependent manner, and HK-II was specifically enriched in mitochondrial fractions. Inspired by former reports that HK-II can inhibit apoptosis by binding to the outer surface of mitochondria through an interaction with the voltage-dependent anion channel (VDAC), the authors disrupted the hexokinase-mitochondria association and observed a strong resensitization of Chlamydia-infected cells to TNF-α-induced apoptosis. Indeed, the level of resensitization appeared to be much higher than that observed in previous studies in which other branches of Chlamydia anti-apoptosis were disrupted. However, a direct comparison of the relative importance of different anti-apoptotic strategies during the course of the infection cycle still needs to be carried out.

The authors' observation that interference with the hexokinase-mitochondria association disrupted the production of infectious bacterial progeny without inducing spontaneous apoptosis in Chlamydia-infected cells (Al-Zeer et al., 2017) is in line with the idea that the bacteria foremost depend on the metabolic, not the anti-apoptotic, effects of Myc signaling. In this context, it is noteworthy that, while inhibition of apoptosis can primarily be considered beneficial for the bacteria when viewed from the perspective of individual infected cells, the overall role of Chlamydia-mediated suppression of host cell death in pathogenesis is much less well understood and requires further investigation. The precise modality by which infected cells die in vivo (e.g. apoptosis vs. necrosis) could have substantial influence on bacterial spread, inflammatory and immune responses, and tissue damage. Studies on the significance of the anti-apoptotic trait could be facilitated by the recent introduction of techniques for genetic manipulation of C. trachomatis, which should enable the identification and genetic disruption of Chlamydia anti-apoptotic factors. However, it may be challenging, if not impossible, to identify death-suppressive molecules encoded by the Chlamydia genome that are not also involved in metabolic reprogramming of the host cell.

Whether the Chlamydia-mediated upregulation of the Myc oncogene or the inhibition of apoptosis can contribute to the establishment or longevity of infection-induced, potentially pro-oncogenic cellular alterations requires further investigation. Despite apoptosis inhibition, Chlamydia-infected cells are eventually lysed to release bacterial progeny. This naturally limits the pro-carcinogenic outcome of infection. However, in the complex setting of an in vivo infection some infected cells may survive, since antibiotics, immune responses, and unfavorable growth conditions can promote Chlamydia persistence, characterized by long-term but non-productive intracellular survival, or clearance. Uninfected cells also potentially arise from infected cells during mitosis. In this context, it is noteworthy that upregulation of the Myc oncogene was not restricted to the infected cells contained in the infected cell population (Al-Zeer et al., 2017).
It remains to be clarified whether this can be explained by paracrine effects, as suggested by the authors, or whether these cells had previously encountered intracellular Chlamydia. Future studies on the nature and longevity of cellular abnormalities in surviving and bystander cells could significantly enhance our understanding of the mechanisms by which C. trachomatis may contribute to carcinogenesis. Taken together, the study by Al-Zeer et al. highlights the PDPK1-Myc signaling pathway, and the metabolic reprogramming of host cells in general, as potential targets for the development of new anti-chlamydial drugs. As already pointed out by the authors, the observation that cellular alterations in Chlamydia-infected cells resemble in some aspects those induced in cancer cells suggests that certain drugs currently in use for anti-cancer therapy may be effective against Chlamydia as well.

Conflicts of Interest

The authors declare no conflicts of interest.
2018-04-03T00:33:46.365Z
2017-08-24T00:00:00.000
{ "year": 2017, "sha1": "16b6703322ae94e446f443854e85be6de19f8562", "oa_license": "CCBYNCND", "oa_url": "http://www.ebiomedicine.com/article/S2352396417303353/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16b6703322ae94e446f443854e85be6de19f8562", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
220592882
pes2o/s2orc
v3-fos-license
Learning of Exception Strategies in Assembly Tasks

Assembly tasks performed with a robot often fail due to unforeseen situations, regardless of the fact that we carefully learned and optimized the assembly policy. This problem is even more present in humanoid robots acting in an unstructured environment, where it is not possible to anticipate all factors that might lead to the failure of the given task. In this work, we propose a concurrent LfD framework which associates demonstrated exception strategies with the given context. Whenever a failure occurs, the proposed algorithm generalizes past experience regarding the current context and generates an appropriate policy that solves the assembly issue. For this purpose, we apply PCA to force/torque data, which generates a low-dimensional descriptor of the current context. The proposed framework was validated in a peg-in-hole (PiH) task using a Franka Emika Panda robot.

I. INTRODUCTION

Robot task executions often stop due to a variety of errors that cannot be foreseen in advance. In such cases it is most often necessary for a human cooperating with the robot to manually eliminate the cause of the error and restart the task [1]. In the vast majority of cases, the robot does not learn anything from such experiences. If a similar or even the same situation is encountered again, the intervention of a human will be needed again and again. The frequency of such events depends on the process: the less structured and determined the process, the more such events occur. In view of this, we can expect that this problem will be even more pronounced in the upcoming generation of humanoid and service robots that will perform a variety of tasks in domestic environments, which are often not well structured.

Despite the impressive development of robotics in recent years, there are only a few research works dealing with the above-mentioned problems [2]. The most common solution is to append the existing control policy with a fixed search/rescue pattern, such as stochastic search patterns, spiral search, raster search [3], tilt strategy, dithering and hopping, etc. [4]. A more systematic approach to policy execution failures was considered in [5], where the robot recognizes when it is unable to proceed and requires human intervention to complete the task. In [6] this approach was extended with the ability to correct the state sequencing by a human demonstrator. A framework where the teacher corrects a continuous action selection was proposed in [7]. More work has been done in the field of automatic classification of robot failures. Tovar et al. [8] proposed a Bayesian network classifier, which was able to distinguish between three different failure situations during a PiH operation using force-torque data. A similar approach was realized using multilayer neural networks [9]. Neural networks were also applied for fault detection of robot actuators [10]. Karapinar et al. [11] developed experience-based learning of failure contexts from sensor data. Investigation of the cause of failure using Hidden Markov Models was studied in the work of Altan et al. [12].

Our research aims to develop an integrated solution for the automatic handling of failures in assembly processes. The proposed approach combines incremental kinesthetic learning, failure detection and classification, and statistical learning.
Exception learning is initially supervised by an operator, who first resolves the issue on the occurrence of the error and then demonstrates an appropriate action that enables the continuation of the given assembly task. The robot builds a database of demonstrated actions and associates them with the detected error context. Using statistical learning, it generates appropriate actions for unforeseen errors from the demonstrated actions. The robot becomes more and more autonomous and eventually does not require any human intervention to resolve assembly failures.

The paper is organized into 6 sections. In the next section, we present our main idea. The proposed framework is composed of three main technologies: 1) an algorithm which allows incremental updates of a nominal trajectory along the refinement tube, 2) error classification using principal component analysis (PCA), and 3) non-parametric statistical learning using locally weighted regression (LWR). They are presented in Sections III, IV and V, respectively. Experimental verification on a generic assembly task, peg-in-hole, is described in Section VI. Our final conclusions and a discussion of limitations and possible future extensions of the proposed framework are given in Section VII.

II. FRAMEWORK FOR LEARNING EXCEPTION STRATEGIES

In this work, we assume that the basic control policy to execute the desired task was appropriately learnt and optimized. An efficient way to learn the assembly policy is kinesthetic guidance [13], but other methods can also be used, e.g. off-line programming using CAD models. Next, we assume that the policy is parameterized with Cartesian space DMPs [14], although our framework also supports other popular parameterization techniques, such as Gaussian Mixture Models with Gaussian Mixture Regression (GMM-GMR) [15], Probabilistic Motion Primitives (ProMP) [16], Radial Basis Functions (RBF), etc. Optimization of the desired control policy can be done using standard techniques, such as iterative learning control (ILC) [17] or reinforcement learning (RL) [18].

Our framework does not aim to change the demonstrated control policy, here denoted by $\pi_d$. Rather, it enables the generation of an alternative strategy at the onset of an unexpected situation which results in failure and consequently the suspension of the task. The reasons for failure can vary, from the incorrect grasping of parts to deviations in the geometry of components, damaged parts, etc. The follow-up actions are demonstrated once the failure occurs. These demonstrations are captured together with the sensory information, which is later used to classify the cause of the failure.

The basic strategy is illustrated in Fig. 1. Whenever a failure occurs, the robot stops. Initially, the robot has no knowledge of how to continue, and therefore expects the intervention of the operator. The operator first rolls back the robot action to the point from which it is possible to continue the task. Next, using incremental learning along the refinement tube [19], [20], the human operator demonstrates an alternative policy, which allows the robot to perform the given task from the current context. The context is determined from sensor signals. In assembly operations, we typically rely on a force-torque sensor, but other sensors such as pressure sensors, vision sensors, etc. can also be used.

Fig. 1. Left: A failure occurs and the robot stops and waits for the intervention of the operator. Center: The operator rolls back the robot actions to resolve the issue. Right: The operator demonstrates an alternative policy.
As explained above, the robot analyzes the cause of failure using sensory data. It memorizes the current context and the alternative control policy and saves both of them in a database. When a failure occurs for the second time, the robot checks if it has any experience with failures in similar contexts. If this is the case, the robot generates an alternative policy using statistical learning [21] and executes it. If the robot either does not have any previous experience or the alternative policy was not successful, it stops and waits for the operator to demonstrate the appropriate policy for the current context, and stores both the context and the policy in the database. Eventually, the robot does not require any human intervention to resolve failures. The flow chart of the proposed framework is shown in Fig. 2.

The process of partially autonomous database expansion for learning motor primitives was also considered by Petrič et al. [22], who focused on analyzing the required size of the database to ensure accurate task execution and the stability of the resulting control policies. The distinguishing feature of our work is the automatic determination of features that are used to guide the process of database expansion. While feature selection can sometimes be easily performed manually, there are many tasks where features can be selected only by a computational method. The framework explained in this section is general and can be applied to most robot policies. In the continuation of the paper, we will focus on assembly policies.

III. TEACHING OF FOLLOW-UP ACTIONS

When the robot stops due to an error, it needs an intervention from the operator, as explained in the previous section. First, the operator needs to roll back the robot actions to the point from which the robot can continue the task. Next, he has to demonstrate a new policy which fits the current situation. In most cases, only minor modifications of the existing policy are necessary. After that, the operator has to test the demonstrated policy in the current context and refine it, if appropriate. Finally, the operator demonstrates a new velocity profile if the original does not suit the newly demonstrated policy. To fulfil all of these requirements, the operator must be able to freely move the robot forward and backward along the existing policy at any speed and change only the parts where changes are needed. For this purpose, we applied our previously developed method [20] based on kinesthetic guiding within a refinement tube [19]. In this method, actions are parameterized with Cartesian speed-scaled dynamic movement primitives (CSDMP) (see Appendix). Here, we review the main idea of this method and present some modifications that enable more efficient trajectory refinement.

In order to allow the operator to move the robot forward and backward along the demonstrated trajectory $\pi_d$, we replace the speed scaling factor $\tau$ associated with the demonstrated CSDMP [30], [24] with a new speed scaling factor, which is inversely proportional to the force projected onto the tangent of the path defined by trajectory $\pi_d$. The tangent of the path is calculated as $\mathbf{t}_p(s) = \dot{\mathbf{p}}_d(s)/\|\dot{\mathbf{p}}_d(s)\|$, where $\mathbf{p}_d \in \mathbb{R}^3$ are the commanded (demonstrated) positions and $s$ denotes the phase. The corresponding tangential direction for the rotational motion is given by $\mathbf{t}_r(s) = \boldsymbol{\omega}_d(s)/\|\boldsymbol{\omega}_d(s)\|$, where $\boldsymbol{\omega}_d \in \mathbb{R}^3$ is the commanded robot end-effector angular velocity in Cartesian space.
We compute the new speed scaling factor $\tau$ as $\tau(s) = \left[k_1\,\mathbf{F}\cdot\mathbf{t}_p(s) + k_2\,\mathbf{M}\cdot\mathbf{t}_r(s)\right]^{-1}$, (1) where $(\cdot)$ denotes the dot product and $k_1$, $k_2$ are positive scalars used to scale the velocities of the translational and rotational motion along $\pi_d(s)$. $\mathbf{F} \in \mathbb{R}^3$ and $\mathbf{M} \in \mathbb{R}^3$ are the measured vectors of forces and torques at the robot tool, expressed in the robot base coordinate system. If $\mathbf{F}\cdot\mathbf{t}_p(s) + \mathbf{M}\cdot\mathbf{t}_r(s) \rightarrow 0$, then $\tau(s) \rightarrow \infty$, which stops the CSDMP integration. We apply this $\tau$ to the DMP and phase integration [30] to move the robot along the trajectory $\pi_d(s)$ in the direction of the applied forces and torques, with a speed proportional to them. This makes the guiding process extremely intuitive: an operator just pushes the robot along the tangent of the trajectory. In order to prevent uncontrolled robot movement along the trajectory due to force/torque sensor noise, a threshold is usually applied to (1).

The CSDMP as a dynamical system becomes unstable for negative $\tau$, i.e. when the motion is reversed. In such a case we have to apply a reverse CSDMP, learnt from the time-reversed trajectory $\pi_r(s) = \pi_d(e^{-\alpha_s}/s)$ (see also Eq. (6)). In order to modify the originally demonstrated trajectory represented by a CSDMP, the robot should be compliant in the normal and binormal directions of the path [23] and stiff in the tangential direction, which is ensured by applying an appropriate control law [24]. This way we are allowed to displace the robot in the plane spanned by these two directions and sample new poses. The modification of the original trajectory at phase $s$ is given by the error vector $\mathbf{e}(s) = \left[(\mathbf{p} - \mathbf{p}_d(s))^T,\ \left(2\log(\mathbf{q} * \bar{\mathbf{q}}_d(s))\right)^T\right]^T$, (2) where $\mathbf{p}$, $\mathbf{q}$ denote the current position and the quaternion describing the current orientation, and $\mathbf{p}_d$, $\mathbf{q}_d$ the position and quaternion computed by the CSDMP, respectively. $*$ denotes the quaternion product and $\bar{\mathbf{q}}$ the conjugate quaternion, whereas the quaternion logarithm is defined as $\log(\mathbf{q}) = \log(v, \mathbf{u}) = \arccos(v)\,\mathbf{u}/\|\mathbf{u}\|$, (3) with $\log(1, \mathbf{0}) = \mathbf{0}$. It maps the quaternion describing orientation to the angular velocity that rotates the identity orientation to the current orientation within unit time.

In [20] we proposed to sample robot poses during the above-described kinesthetic guiding process and calculate the new nonlinear forcing term of the CSDMP using batch regression whenever the sign of $\tau(s)$ changes. For this process to work, the modified robot positions and orientations must be sampled at exactly the same phase $s$ as the original trajectory. This means that the phase of the modified trajectory should be determined very accurately; even small deviations of $s$ can lead to a wrong sequential order of the captured end-effector poses, which can corrupt the modified trajectory. In this paper, we propose an alternative solution which is less sensitive to the accuracy of the phase calculation. Instead of capturing the complete modified trajectory and updating the nonlinear forcing term at the change of direction, we concurrently modify the weights of the CSDMP's nonlinear forcing terms by recursive regression (Eqs. (4) and (5)), where $\mathbf{P}(s) \in \mathbb{R}^{N \times N}$ is the error covariance matrix and $\mathbf{x}(s) \in \mathbb{R}^{N}$ is the vector of Gaussian kernel functions (see Appendix). $N$ is the number of kernel functions, $s_{-1}$ denotes the phase in the previous step, $\lambda$ is a forgetting factor, which is usually kept close to 1, and $\mathbf{K}_l \in \mathbb{R}^{6 \times 6}$ is a diagonal estimation gain matrix that modifies the compliance of the robot during the kinesthetic guidance. We reset the covariance error matrix to the default value, $\mathbf{P}(s) = \gamma\mathbf{I}$, whenever the temporal scaling factor $\tau$ changes its sign. $\mathbf{I}$ denotes the identity matrix and $\gamma > 0$ is a suitably chosen scalar.
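The guiding and the concurrent weight update can be sketched in a few lines. Since Eqs. (1), (4) and (5) are only partially reproduced above, this is a minimal sketch assuming a standard recursive least-squares form with forgetting; all gains and thresholds are illustrative.

```python
import numpy as np

def speed_scaling(F, M, t_p, t_r, k1=0.05, k2=0.05, eps=1e-3):
    """Eq. (1): tau inversely proportional to the wrench projected onto the
    path tangents; a small threshold eps guards against sensor noise."""
    drive = k1 * np.dot(F, t_p) + k2 * np.dot(M, t_r)
    if abs(drive) < eps:
        return np.inf          # tau -> infinity stops the CSDMP integration
    return 1.0 / drive

def rls_update(W, P, x, e, lam=0.995, K_l=None):
    """One recursive update of the forcing-term weights W (6 x N) from the
    pose error e (6,) at kernel activations x (N,); a standard RLS step with
    forgetting factor lam, standing in for Eqs. (4)-(5)."""
    if K_l is None:
        K_l = np.eye(6)                                 # estimation gains
    Px = P @ x
    P = (P - np.outer(Px, Px) / (lam + x @ Px)) / lam   # covariance update
    W = W + np.outer(K_l @ e, P @ x)                    # weight update
    return W, P

# Reset the covariance whenever tau changes sign: P = gamma * np.eye(N)
```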
This is necessary because the recursive algorithm becomes increasingly less sensitive to new trajectory updates with the number of iterations. Namely, from (5) it follows that the error covariance matrix $\mathbf{P}(s)$ is independent of the measurements and has monotonically decreasing eigenvalues. Consequently, the magnitude of the updates computed by (4) is also decreasing, influencing the resulting control policy less and less. Note that the magnitude of the updates is also affected by the choice of the forgetting factor $\lambda$.

The procedure described above is simultaneously applied to the reversed CSDMP of the demonstrated trajectory. Based on the sign of $\tau(s)$ defined in Eq. (1), we integrate either the original CSDMP (in case of a positive sign) or the reversed CSDMP (in case of a negative sign, in which case $-\tau(s)$ is used for integration). To compute the current phase $s_r$ of the reversed CSDMP from the phase $s$ of the original CSDMP and vice versa, we exploit the relationship $s\,s_r = e^{-\alpha_s t/\tau_0}\,e^{-\alpha_s(\tau_0 - t)/\tau_0} = e^{-\alpha_s}$. This holds because the temporal constants in the original and reversed CSDMP are constant. Hence $s_r = e^{-\alpha_s}/s$ and $s = e^{-\alpha_s}/s_r$. The update formulas (4) and (5) are then applied to both the original and the reversed CSDMP.

Using the procedure described above, we generate a new exception policy directly in CSDMP form, where the forcing-term weights $\mathbf{W}_p$ and $\mathbf{W}_o$ are new but all other CSDMP parameters are taken from the CSDMP representing the originally demonstrated insertion policy. Thus the speed (or, equivalently, the temporal scaling factor $\tau(s)$) of the resulting CSDMP is still determined by the demonstrated policy, which is suboptimal. Therefore, we demonstrate a new speed profile by executing the newly learnt CSDMP while the user pushes the robot along the trajectory. The new temporal scaling factor $\tau(s)$ is computed from the user-applied forces and torques as specified in Eq. (1). During this demonstration, the robot is stiff in all directions to prevent it from deviating from the learnt path. We sample the resulting $\tau(s)$, which is then associated with the CSDMP representing the exception strategy instead of the temporal scaling factor obtained from the originally demonstrated insertion policy.

For statistical learning it is beneficial to store the learnt exception strategy as a time-dependent trajectory (see Section V). Thus the resulting CSDMP is integrated with the newly sampled $\tau(s(t))$ one more time (without executing the generated motion on the robot), and the points on the resulting trajectory are sampled to generate the training data set (11) for statistical learning. Finally, we re-compute the CSDMP parameters from the sampled data (11), setting (for the $i$-th exception strategy) $\tau_i(s) = \tau_{0,i} = t_{i,T_i}$, $\mathbf{g}_{p,i} = \mathbf{p}_{i,T_i}$, $\mathbf{g}_{o,i} = \mathbf{q}_{i,T_i}$, and computing suitable $\mathbf{W}_{p,i}$ and $\mathbf{W}_{o,i}$.
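For concreteness, the quaternion logarithm of Eq. (3) and the phase mapping between the original and reversed CSDMP can be written as the following small helpers (a sketch; the numerical guards are ours):

```python
import numpy as np

def quat_log(q):
    """Quaternion logarithm of Eq. (3): for q = (v, u), returns the rotation
    vector arccos(v) * u / ||u|| (zero for the identity quaternion)."""
    v, u = q[0], np.asarray(q[1:])
    norm_u = np.linalg.norm(u)
    if norm_u < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(v, -1.0, 1.0)) * u / norm_u

def reversed_phase(s, alpha_s):
    """Phase mapping between original and time-reversed CSDMP:
    s_r = exp(-alpha_s) / s; the mapping is its own inverse."""
    return np.exp(-alpha_s) / s
```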
IV. DETERMINATION OF FAILURE CONTEXT FROM FORCE-TORQUE SENSOR DATA

During assembly task execution, it is necessary to monitor the forces exerted on the robot hand in order to prevent damaging the parts, or even the robot itself, at the occurrence of an unexpected situation. Forces and torques are typically also used to actively guide the assembly process. Execution failures are in most cases characterized by a sudden increase in forces and torques. In our work we therefore used a simple approach where a failure is detected if the sensed forces and torques exceed a predefined threshold.

An autonomous robot should have the ability to detect the reason for the execution failure, which enables it to plan an appropriate recovery action. In this section we propose an algorithm for the calculation of low-dimensional features that characterize the detected failures based on the sensed forces and torques. We map the sensed forces and torques to a low-dimensional feature space because feature dimensionality is important for statistical learning. For this purpose, we apply Principal Component Analysis (PCA), a popular dimensionality reduction technique.

Let us assume that we have $m$ measurements of forces and torques, $\mathbf{h}_i \in \mathbb{R}^6$, $i = 1, \ldots, m$, each captured at the time the $i$-th failure was detected. Each measurement thus corresponds to exactly one failure during the assembly. We form a data matrix $\mathbf{Z} \in \mathbb{R}^{m \times 6}$ with rows $\mathbf{h}_i^T - \bar{\mathbf{h}}$, (8) where $\bar{\mathbf{h}}$ is the row vector of average values of all forces and torques. PCA is an orthogonal linear transformation that maps the data $\mathbf{Z}$ to a new coordinate system such that the biggest variance occurs along the first coordinate, called the first principal component; all subsequent coordinates have a lower variance than the previous one. PCA can be calculated by applying singular value decomposition in the form $\mathbf{Z} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$, (9) where the matrix $\mathbf{V} \in \mathbb{R}^{6 \times 6}$ is orthogonal and maps the data to a new coordinate system such that the principal components are sorted as columns in $\mathbf{C} = \mathbf{Z}\mathbf{V}$. Also, the singular values that form the diagonal matrix $\boldsymbol{\Sigma} = \mathrm{diag}(\sigma_j) \in \mathbb{R}^{m \times 6}$ are non-negative and sorted from the biggest to the lowest value $\sigma_j$, $j = 1, \ldots, 6$. The magnitude of the singular values determines the significance of each direction determined by the eigenvectors in $\mathbf{V}$. The dimensionality reduction is performed in such a way that we keep only the first $p$ columns of $\mathbf{V}$, which correspond to the $p$ biggest singular values. Whenever a new failure occurs, we calculate the corresponding context data from the measured force-torque vector $\mathbf{h}$ using $\mathbf{c} = \mathbf{V}_p^T(\mathbf{h} - \bar{\mathbf{h}}^T)$, (10) where $\mathbf{V}_p \in \mathbb{R}^{6 \times p}$ denotes the matrix composed of the first $p$ principal eigenvectors of $\mathbf{V}$. The resulting context vector $\mathbf{c}$ is used as the query for statistical learning.

V. STATISTICAL LEARNING OF EXCEPTION STRATEGIES FROM FAILURE

Initially, every failure requires that a user demonstrates a new exception strategy as described in Section III. An exception strategy enables the robot to continue the task after a failure has occurred. It is fully defined by the time evolution of tool poses given in Cartesian coordinates and the associated context. Let us define a set of $m$ exception strategies as $\left\{\{\mathbf{p}_{i,k}, \mathbf{q}_{i,k}, t_{i,k}\}_{k=1}^{T_i}\right\}_{i=1}^{m}$, (11) where $\mathbf{p}_{i,k} \in \mathbb{R}^3$ are the positions, $\mathbf{q}_{i,k} \in S^3$ are the unit quaternions describing orientation, $S^3$ is the unit sphere in $\mathbb{R}^4$, $i$ is the demonstration index, $k$ indexes the trajectory samples, and $T_i$ is the number of samples on the $i$-th exception strategy. Each exception strategy is associated with the context vector $\mathbf{c}_i$ calculated according to Eq. (10) from the forces and torques $\mathbf{h}_i$ measured at the time the failure occurred. Once a sufficient number of exception strategies becomes available, we can exploit previously learnt strategies to generate new ones. This is accomplished using statistical learning techniques. For this purpose we applied locally weighted regression, due to its simplicity and efficiency. LWR belongs to a class of non-parametric statistical approximation methods [25] and has been successfully applied to many robotics applications such as throwing, reaching, drumming, etc. [26]. When the next failure occurs, we first determine the current context $\mathbf{c}$ using the measured forces and torques and Eq. (10).
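The context computation of Eqs. (8)-(10) amounts to a few lines of linear algebra. A minimal sketch (array shapes as in the text, torques assumed pre-scaled to a comparable metric) is:

```python
import numpy as np

def fit_context_projection(H, p=1):
    """H: (m, 6) matrix of force/torque measurements collected at the m
    detected failures. Returns the mean row h_bar and V_p, the first p
    principal directions."""
    h_bar = H.mean(axis=0)
    Z = H - h_bar                                          # Eq. (8)
    U, sigma, Vt = np.linalg.svd(Z, full_matrices=False)   # Eq. (9)
    return h_bar, Vt[:p].T                                 # V_p in R^{6 x p}

def context(h, h_bar, Vp):
    """Eq. (10): project a new force/torque vector onto the context space."""
    return Vp.T @ (h - h_bar)
```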
The resulting context vector is used as the query point for LWR. Given this query point and a number of existing exception strategies, LWR can compute a new exception strategy. Recall from Section III that in our work exception strategies are defined by CSDMPs. A CSDMP contains a number of parameters (see Appendix), but in the context of exception strategies only some of them change: the weights specifying the nonlinear forcing term, the temporal scaling factor $\tau_0$, the goal position $\mathbf{g}_p$, and the goal orientation $\mathbf{g}_o$. Thus a function that maps query points into a new exception strategy can be written as $G(\mathbf{c}) = \{\mathbf{W}_p, \mathbf{W}_o, \tau_0, \mathbf{g}_p, \mathbf{g}_o\}$. (12) Note that the temporal scaling factor $\tau(s)$ is constant for all exception strategies, as nonlinear speed scaling is applied only to the initially demonstrated trajectory. Once the initially demonstrated trajectory is adapted by kinesthetic teaching and a new exception strategy is generated, the resulting control policy is resampled to (11) with a constant temporal scaling factor, as described at the end of Section III.

As explained in [26], $G(\mathbf{c})$ becomes a smooth function of $\mathbf{c}$ only if the example trajectories, in our case exception strategies, are similar and transition between each other smoothly. This is the case in our work because the policy adaptation method described in Section III ensures that the adapted exception strategy is similar to the original control policy. Thus the solution trajectory computed by LWR is similar to the other exception strategies but adapted to the current context $\mathbf{c}$. To compute the generalized CSDMP forcing-term weights $\mathbf{W}_p, \mathbf{W}_o \in \mathbb{R}^{3 \times N}$, we solve the weighted optimization problem $\min_{\mathbf{W}_p, \mathbf{W}_o} \sum_{i=1}^{m} K(\mathbf{c}, \mathbf{c}_i)\left(\|\mathbf{X}_i\mathbf{W}_p^T - \tilde{\mathbf{P}}_i\|^2 + \|\mathbf{X}_i\mathbf{W}_o^T - \tilde{\mathbf{Q}}_i\|^2\right)$, (13) with $\tilde{\mathbf{P}}_i \in \mathbb{R}^{T_i \times 3}$ being a matrix with rows $\tilde{\mathbf{p}}_{i,k} = \left(\tau_{0,i}^2\ddot{\mathbf{p}}_{i,k} + \alpha_z\tau_{0,i}\dot{\mathbf{p}}_{i,k} - \alpha_z\beta_z(\mathbf{g}_p - \mathbf{p}_{i,k})\right)^T$, and $\tilde{\mathbf{Q}}_i \in \mathbb{R}^{T_i \times 3}$ a matrix with rows calculated as $\tilde{\mathbf{q}}_{i,k} = \left(\tau_{0,i}^2\dot{\boldsymbol{\omega}}_{i,k} + \alpha_z\tau_{0,i}\boldsymbol{\omega}_{i,k} - 2\alpha_z\beta_z\log(\mathbf{g}_o * \bar{\mathbf{q}}_{i,k})\right)^T$. The rows of the matrix $\mathbf{X}_i \in \mathbb{R}^{T_i \times N}$ are calculated using the Gaussian DMP kernels $\mathbf{x}$ at phases $s_{i,k}$, i.e. $\mathbf{X}_i = [\mathbf{x}(s_{i,1}), \ldots, \mathbf{x}(s_{i,T_i})]^T$ [14]. We selected the tricube kernel [27] for $K(\mathbf{c}, \mathbf{c}_i)$, defined as $K(\mathbf{c}, \mathbf{c}_i) = \left(1 - (\|\mathbf{c} - \mathbf{c}_i\|/h)^3\right)^3$ for $\|\mathbf{c} - \mathbf{c}_i\| < h$ and $0$ otherwise, (14) where $h$ is a hyper-parameter that determines the range and importance of the training data used for generalization. Since the temporal scaling constants and the goal position and orientation are measured directly, i.e. $\tau_{0,i} = t_{i,T_i}$, $\mathbf{g}_{p,i} = \mathbf{p}_{i,T_i}$, $\mathbf{g}_{o,i} = \mathbf{q}_{i,T_i}$, their generalization by LWR is easier: they are computed as kernel-weighted averages of the form $\tau_0 = \sum_i K(\mathbf{c}, \mathbf{c}_i)\,\tau_{0,i}\,/\,\sum_i K(\mathbf{c}, \mathbf{c}_i)$, and analogously for $\mathbf{g}_p$ and $\mathbf{g}_o$. (15)

VI. EXPERIMENTAL EVALUATION ON PEG-IN-HOLE TASK

The proposed framework was experimentally verified on a peg-in-hole (PiH) task, a typical assembly operation. For this purpose, we performed square-peg insertion in the Cranfield Benchmark [28], a standardized tool that encompasses a typical level of complexity for an industrial assembly task. The initial PiH policy was obtained with kinesthetic guidance. In our experiments, we focused on a typical source of failure in automated assembly, i.e. bad pose estimation of the assembly part, which causes imperfect grasping, a non-adequate insertion policy, or both. We used the collaborative robot arm Franka Emika Panda in all experiments. Integrated joint-torque sensors were used to estimate the Cartesian forces and torques. Our first goal was to evaluate whether the proposed estimation of the context during the failure is appropriate to generate query points that are suitable for statistical learning. We started by evaluating the case where the robot grasped the assembly part at wrong angles.
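Below is a sketch of the LWR generalization of the directly measured parameters (Eq. (15)); the forcing-term weights would be obtained from the analogous weighted regression of Eq. (13), and proper quaternion averaging of the goal orientations is omitted for brevity.

```python
import numpy as np

def tricube(d, h):
    """Tricube kernel of Eq. (14) on distances d with bandwidth h."""
    u = np.abs(d) / h
    return np.where(u < 1.0, (1.0 - u ** 3) ** 3, 0.0)

def lwr_generalize(c, contexts, tau0s, goals_p, h=1.0):
    """Kernel-weighted averages over the stored exception strategies;
    assumes at least one stored context lies within the bandwidth h."""
    w = tricube(np.linalg.norm(contexts - c, axis=1), h)
    w = w / w.sum()
    tau0 = w @ tau0s          # generalized temporal scaling factor
    g_p = w @ goals_p         # generalized goal position, shape (3,)
    return tau0, g_p
```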
The experiment was repeated for 8 equally spaced angles around zero (the correct angle for the learnt control policy) with a spacing of 3 degrees, as illustrated in Fig. 3. Due to the offset in the grasping angle, the robot failed to insert the peg and stopped the execution as it exceeded the force threshold, which was set to 10 N in the z direction. At that moment we recorded the forces and torques. Due to the different metric, we scaled the torque data by a factor of 10. We performed PCA on this set of data. Only the first singular value of the matrix $\boldsymbol{\Sigma}$ deviated from the others, which were practically equal to zero, meaning that our context is one-dimensional. Figure 4 shows the evolution of this context depending on the offset in the grasping angle. Note that the estimated function is almost linear and, most of all, monotonically increasing, which makes it a perfect query for statistical learning. If we compare the estimated context to the forces and torques of our data set, we can see that it is very similar to the torque around the x axis. This is exactly what an experienced robot operator would intuitively choose as a query; however, the proposed algorithm learns this without any human intervention.

Next, we evaluated the effectiveness of context determination with offsets in the grasping position along the y axis. As in the previous case, we applied 8 equally spaced values with a spacing of 2 mm and recorded the forces and torques during the insertion attempts. Fig. 5 shows that the estimated context function is again monotonic, which is essential for query points.

Finally, we evaluated our framework as a whole. Again we tested the influence of a grasping offset in the y direction. We generated 4 grasps with equally spaced offsets with a spacing of 3 mm around the correct grasp, i.e. the grasp applied during the initial demonstration. Because of this offset, the robot failed to insert the peg and stopped the execution due to excessive forces in the z direction. We captured the forces and torques, calculated the context value and demonstrated the alternative policy for each case, as explained in Section III. These data were used to generate the initial database of exception strategies (11) for generalization. The original peg insertion policy, the four demonstrated exception policies, and one of the generalized exception strategies are shown in Fig. 6. The success rate in 50 experiments was 82%. Note that the robot was stiff during the insertion; by exploiting the robot's compliance and by making use of a larger database of pretrained exception policies for generalization, the success rate could be improved.

VII. CONCLUSIONS

In this work we proposed an integrated framework for learning exception strategies as they arise. It integrates learning by demonstration, PCA-based classification of failures using the resulting force-torque data, and the generation of exception strategies by statistical learning. The main novelties are 1) the determination of the exception context using PCA and 2) the application of the determined context to statistical learning of exception strategies. We also improved our previously developed method for the adaptation of policies [24] by replacing batch regression with recursive regression. Finally, the proposed method for the generalization of orientation DMPs by minimizing (13) is new. The result is a novel framework for learning and adaptation of exception strategies.
VII. CONCLUSIONS

In this work we proposed an integrated framework for learning exception strategies as they arise. It integrates learning by demonstration, PCA-based classification of failures using the resulting force-torque data, and the generation of exception strategies by statistical learning. The main novelties are 1) the determination of the exception context using PCA and 2) the application of the determined context for statistical learning of exception strategies. We also improved our previously developed method for the adaptation of policies [24] by replacing batch regression with recursive regression. Finally, the proposed method for the generalization of orientation DMPs by minimizing (13) is new. The result is a novel framework for learning and adaptation of exception strategies. The proposed approach was evaluated on the peg-in-hole task, where we demonstrated its effectiveness. In the current implementation, the context for the database query was calculated from a single measurement of forces and torques. In general, however, forces and torques need to be taken into account as functions of time. We can easily do this by encoding them with RBFs and expanding the measurements h with the RBF weights. A more challenging question is how to include other sensors, such as vision, in the calculation of the failure context. Our future work will involve the integration of such multimodal data and the direct estimation of a low-dimensional context using deep auto-encoders.
IT ambidexterity and patient agility: the mediating role of digital dynamic capability

Despite a wealth of attention for information technology (IT)-enabled transformation in healthcare research, limited attention has been given to IT's role in developing specific organizational capabilities to respond adequately to patients' needs and wishes. This paper investigates how hospital departments can leverage the dual capacity to explore and exploit IT resources and practices, i.e., IT ambidexterity, to adequately sense and respond to patients' needs and demands, i.e., patient agility. Following the dynamic capabilities view, this research develops a research model and tests it using data obtained from 107 clinical hospital departments in the Netherlands through an online survey. The hypothesized relationships are tested using structural equation modeling (SEM). The outcomes demonstrate the significance of IT ambidexterity in developing a digital dynamic capability that, in turn, positively influences patient agility. The study outcomes can be used to transform clinical practice and contribute to the current IS knowledge base.

Introduction

Hospitals worldwide face a multitude of challenges to ensure high quality across the entire patient care delivery continuum. They are, e.g., burdened by exhaustive management, regulatory, and administrative processes. Under these turbulent conditions, hospitals can leverage new innovative information technologies (IT), such as electronic medical records, patient-accessible electronic health records, artificial intelligence, the Internet of Things, and interoperable data and platforms, to enable the ubiquitous availability of patient information and enhance the quality of clinical practice. It therefore seems that hospitals are on the brink of a monumental digital transformation. Hospitals now have an opportunity to deploy new digital technologies to enhance their internal decision-making capability, redefine patient communication and engagement, and become true IT innovators (Leidner, Preston, & Chen, 2010). Medical professionals can also use innovative IT solutions and the exponential volumes of patient-generated data to enhance care delivery (McCullough, Casey, Moscovice, & Prasad, 2010). Notwithstanding, there are many cultural, social, technical, and organizational challenges in the process of fully leveraging digital technologies (Kohli & Tan, 2016; Kruse, Kristof, Jones, Mitchell, & Martinez, 2016; Sligo, Gauld, Roberts, & Villa, 2017). Moreover, the extant scholarship contends that IT could also hamper the process of gaining organizational benefits (Brynjolfsson & Hitt, 2000; Carr, 2003; Overby, Bharadwaj, & Sambamurthy, 2006). Thus, understanding the facets that drive the benefits of IT investments is very valuable in clinical settings (Hessels, Flynn, Cimiotti, Bakken, & Gershon, 2015). Previous scholarship examined the role of IT and its contributions to so-called 'IT-enabled capabilities.' In particular, the roles of IT human competencies and IT infrastructure capability have been explored as enablers of the formation of IT-enabled capabilities and of organizational sensing and responding under turbulent conditions (Chakravarty, Grewal, & Sambamurthy, 2013; Fink, 2011; Rai & Tang, 2010; Roberts & Grover, 2012b; Van de Wetering, Versendaal, & Walraven, 2018). However, crucial gaps remain in the extant literature.
First, despite a wealth of attention for IT adoption and IT-enabled transformation in healthcare research (Andargoli, Scheepers, Rajendran, & Sohal, 2017; Chen, Yu, & Chen, 2015; Jones, Heaton, Rudin, & Schneider, 2012; Nair & Dreyfus, 2018; Yichuan Wang & Hajli, 2017; Wang, Kung, Wang, & Cegielski, 2018; Zhou & Piramuthu, 2018), limited attention has been given to the role of IT in developing specific organizational capabilities to respond adequately to patients' needs and wishes and to enhance patient engagement (Asagbra, Burke, & Liang, 2018; Bradley, Pratt, Thrasher, Byrd, & Thomas, 2012). Second, it currently remains unclear how hospital departments (which are responsible for patient care delivery) can utilize the dual capacity to 'explore' and 'exploit' IT resources and practices, i.e., IT ambidexterity (Lee, Sambamurthy, Lim, & Wei, 2015), to drive a hospital department's digital dynamic capability. Such a capability represents the degree to which qualities and competencies are present to manage innovative digital technologies for new, exceptional, and effective patient service development (Khin & Ho, 2019). Third, the extant literature has unfolded critical insights into the competences and routines underlying dynamic capabilities, which represent an organization's ability 'to act' under changing circumstances (Cepeda & Vera, 2007; Winter, 2003), and into how they enhance the operational functioning of the firm (Protogerou, Caloghirou, & Lioukas, 2012; Van de Wetering, 2019; Wilden & Gudergan, 2015). However, there seems to be less consensus about the pivotal role of IT resources in developing these dynamic capabilities (Li & Chan, 2019; Menachemi, Matthews, Ford, Hikmet, & Brooks, 2009). Fourth, scholars have predominantly explored the ambidexterity-benefits relationship at the organizational level (Jansen, Simsek, & Cao, 2012; O'Reilly III & Tushman, 2008). Investigating the role of an intermediate (digital) capability construct in the value path at the hospital department (strategic business or competence unit) level is therefore valuable, as it has seldom been explored (Gerybadze, 1998; Lee et al., 2015; Pang, Lee, & DeLone, 2014). Finally, previous studies focused on particular aspects of IT resources and assets (e.g., IT flexibility, system design, IT human and management capabilities, knowledge and data capabilities, and repositories) from an organizational agility perspective (Chen & Siau, 2012; Lu & Ramamurthy, 2011; Tallon & Pinsonneault, 2011). In doing so, these studies highlight the facilitating role of the synergistic IT exploration and exploitation processes but fail to conceptualize and test this particular capacity adequately. Unfolding the benefits of such a dual capacity, i.e., aiming for two disparate things simultaneously, using empirical data is relevant from both a theoretical and a practical perspective, as IT's business value and the preceding IT investments can then be justified in a clinical setting (Hessels et al., 2015; Lee et al., 2015; Sabherwal & Jeyaraj, 2015; Schryen, 2013). Against this background, and in alignment with the hospital industry's focus on clinical excellence and patient-centered care (Chiasson & Davidson, 2005; Liedtka, 1992), this paper contends that IT ambidexterity enhances the ability to sense and respond adequately to patients' needs and demands, i.e., patient agility, by facilitating the intermediate digital capability-building process.
In doing so, this study follows a practitioner-based approach (Devaraj & Kohli, 2003; McCracken, McIlwain, & Fottler, 2001). This study focuses on the department level; patient agility can, therefore, be considered the degree to which a department can sense and respond quickly to patient-based opportunities for innovation and competitive action. This work is likewise relevant for clinical practice, as hospitals are currently exploring their digital options and digitally transforming their clinical processes using, e.g., mobile handheld devices and apps to increase error prevention, improve patient-centered care, and provide ways for clinicians to be more agile in their work (Bradley et al., 2012; Devaraj, Ow, & Kohli, 2013; Mosa, Yoo, & Sheets, 2012; Sim, 2019). Also, hospitals in the Netherlands are bound to specific turnover-ceiling agreements between hospitals and health insurers. The Dutch Healthcare Authority (NZa), an autonomous administrative authority falling under the Dutch Ministry of Health, Welfare, and Sport, ensures that these formal agreements focus specifically on patient value rather than production volumes. Therefore, achieving patient agility in clinical practice is very valuable. Hence, this research attempts to address the following research questions: (1) What is IT ambidexterity's effect on hospital departments' patient agility and, thus, on their ability to sense and respond timely and adequately to patients' needs and demands? Furthermore, (2) What is the particular role of digital dynamic capability in converting IT ambidexterity's effect into hospital departments' patient agility? By addressing these crucial questions, this paper contributes to the medical informatics and information systems (IS) literature by unfolding the mechanisms through which the dual capacity of simultaneous IT exploration and IT exploitation drives patient agility in hospital departments. In addressing these questions, this study embraces the dynamic capabilities view (DCV) to employ a strong academic foundation with accompanying validated measurements (Khin & Ho, 2019; Lee et al., 2015; Roberts & Grover, 2012b; Van de Wetering, 2020). This paper proceeds as follows. First, it reviews the theoretical development by highlighting key literature on IT resources and ambidexterity, the dynamic capabilities view, and organizational agility. Then, section 3 presents the study's research model and associated hypotheses. Section 4 details the methods used in this study, after which section 5 outlines the results. This study ends by discussing the outcomes, including theoretical and practical contributions, and closes with concluding remarks.

IT resources and IT ambidexterity

Scholars and practitioners started to build upon strategic management theories in the late 1980s and early 1990s, arguing that organizations can sustain a competitive advantage due to the IT resources they own or have under their control (Wade & Hulland, 2004). However, the literature documents that IT investments do not always yield the presumed and anticipated results and may even impede the process of rapidly adjusting to the competitive environment (Brynjolfsson & Hitt, 1998; Carr, 2003; Overby et al., 2006; Strassmann, 1990). The central claim within the "resource-based" studies, in the context of IT, is that the adoption, deployment, and practices of IT as a unique and difficult-to-imitate resource create business value (Bharadwaj, 2000; Wade & Hulland, 2004; Wang, Liang, Zhong, Xue, & Xiao, 2012).
Previous studies contended that IT resources are crucial in the formation of technology-driven capabilities (Aral & Weill, 2007; Ross, Beath, & Goodhue, 1996; Van de Wetering et al., 2018; Weill, Subramani, & Broadbent, 2002) and are therefore considered a key strategic priority for organizations (Bharadwaj, 2000), also in healthcare (Bardhan & Thouin, 2013; Wang, Kung, & Byrd, 2018). However, obtaining value from IT resources is not a straightforward process. Instead, the extant literature contends that business value results from leveraging and aligning complementary IT resources (Pavlou & El Sawy, 2006; Sheikh, Sood, & Bates, 2015; Wade & Hulland, 2004). This also applies to healthcare, where senior executives and hospital management want to leverage their IT and data resource investments successfully (Van de Wetering, 2018; Wang, Kung, & Byrd, 2018). In practice, organizations need to pursue and deal with two seemingly opposing modi operandi, i.e., the ability to adapt existing IT resources to the current business environment and demands, and the focus on developing IT resources that contribute to long-term organizational benefits (March, 1991). The literature refers to this phenomenon as 'ambidexterity' (Gibson & Birkinshaw, 2004; Jansen, Van Den Bosch, & Volberda, 2006; Raisch, Birkinshaw, Probst, & Tushman, 2009; Tushman & O'Reilly III, 1996). The simultaneous alignment of 'exploration' and 'exploitation' by organizations will likely provide them with sustained competitive benefits (Gibson & Birkinshaw, 2004; Jansen et al., 2006; Junni, Sarala, Taras, & Tarba, 2013; Raisch et al., 2009). The notion of IT ambidexterity, defined by Lee et al. (2015, p. 398) as "the ability of firms to simultaneously explore new IT resources and practices (IT exploration) as well as exploit their current IT resources and practices (IT exploitation)", is a fundamental capability that builds upon the ambidexterity and IT resource and capability-building perspectives. Ambidexterity has gained serious attention over the past few years (Chang, Wong, Eze, & Lee, 2019; Syed, Blome, & Papadopoulos, 2020). IT exploration concerns the organization's efforts to pursue new knowledge and IT resources (Lee et al., 2015; March, 1991). IT exploitation, on the other hand, is typically conceptualized as a construct that captures the degree to which organizations take advantage of existing IT resources and assets, e.g., reusing existing IT applications and services for new patient services and reusing existing IT skills.

Dynamic capabilities view

The dynamic capabilities view (DCV) is a leading theoretical framework in the fields of strategic management and information systems (Di Stefano, Peteraf, & Verona, 2014; Pavlou & El Sawy, 2011; Schilke, 2014). Under conditions of high economic turbulence and uncertainty, the theory argues that traditional resource-based capabilities do not provide organizations with a competitive edge (Drnevich & Kriauciunas, 2011; Teece, Peteraf, & Leih, 2016; Wilden & Gudergan, 2015). Instead, it is firms' ability to integrate, build, and reconfigure internal and external competences to capitalize on rapidly changing environments that explains how organizations can obtain and maintain a competitive edge (Teece, Pisano, & Shuen, 1997). The DCV highlights two crucial aspects that were not the primary focus under resource-based approaches (Wernerfelt, 1984), i.e., 'dynamic' and 'capabilities.'
'Dynamic' refers to firms' capacity to renew competences and capacities to obtain congruence with a rapidly changing business environment (Teece et al., 1997). 'Capabilities,' on the other hand, focus on purposefully adapting the firm's resource base. According to Teece (Teece et al., 1997, p. 515), these capabilities emphasize "the key role of strategic management in appropriately adapting, integrating, and reconfiguring internal and external organizational skills, resources, and functional competences to match the requirements of a changing environment." Hence, the DCV can be considered an integrative approach to understanding the newer sources of competitive advantage and a firm's ability 'to act' under changing circumstances (Cepeda & Vera, 2007; Winter, 2003), and it has been widely validated through empirical studies, also in healthcare (Pablo, Reay, Dewald, & Casebeer, 2007; Singh, Mathiassen, Stachura, & Astapova, 2011; Wu & Hu, 2012).

The concept of organizational and patient agility

Organizational agility is considered to be a manifested type of dynamic capability (Teece et al., 2016) and is especially influential among agility studies published in the Basket of Eight; see, for instance, Lee et al. (2015), Queiroz, Tallon, Sharma, and Coltman (2018), and Sambamurthy, Bharadwaj, and Grover (2003). Agility can be conceptualized as a dynamic capability insofar as such capabilities "permit organizations to repurpose or reposition their resources as conditions shift" (Tallon et al., 2019, p. 220). As such, the concept of organizational agility has been proposed under the DCV as a crucial organizational capability to respond to changing conditions while simultaneously proactively enacting on the dynamic environment regarding customer demands, supply chains, new technologies, governmental regulations, and competition (Park, El Sawy, & Fiss, 2017; Roberts & Grover, 2012b; Teece et al., 2016). The environment imposes several contingencies (Lu & Ramamurthy, 2011; Park et al., 2017). Therefore, Overby et al. (2006) and Ravichandran (2018) argue that it is vital for firms to adapt, adjust, and renew their working systems and procedures to enhance their rent-yielding potential and seize market opportunities. This 'sense-and-respond' capability has been defined and conceptualized in many ways and through various theoretical lenses in the IS literature (Chakravarty et al., 2013; Sambamurthy et al., 2003; Vickery, Droge, Setia, & Sambamurthy, 2010). Several scholars argue that, even though there are ambiguities in the definitions that are likewise reflected in operationalized conceptualizations, several high-level characteristics can be derived from the extant literature that view organizational agility as a higher-order multidimensional construct (Lee et al., 2015; Overby et al., 2006; Roberts & Grover, 2012a). From the literature, two high-level organizational routines can be synthesized: deliberately 'sensing' and 'responding' to business events in the process of capturing business and market opportunities. These two capabilities are essential for an organization's competitive advantage (Overby et al., 2006). There are no studies (empirical or theoretical) that conceptualize hospital departments' patient agility following the dynamic capabilities view.
This article therefore perceives patient agility as a higher-order manifested type of dynamic capability that allows hospital departments to adequately 'sense' and 'respond' to patient-based opportunities, needs, and demands within a fast-paced hospital ecosystem context (Roberts & Grover, 2012a; Teece et al., 2016).

Digital dynamic capability

The concept of digital dynamic capability builds upon the rich foundation of the previously discussed DCV. According to Khin and Ho (2019, p. 4), digital dynamic capabilities can be considered the "organization's skill, talent, and expertise to manage digital technologies for new product development." This capability is essential for a hospital to master digital technologies, drive digital transformations, and develop innovative patient-centered services and products. For this study, a hierarchical capability view is adopted following previous scholarship (Božič & Dimovski, 2019; Danneels, 2002; Winter, 2003). As such, digital dynamic capability is conceptualized as a lower-order technical dynamic capability that facilitates the process of developing higher-order dynamic organizational capabilities such as innovation ambidexterity, absorptive capacity, and organizational adaptiveness (Božič & Dimovski, 2019; Li & Chan, 2019; Wang & Ahmed, 2007). This conceptualization is also in line with previous scholarship that conceptualized technological capabilities as technical dynamic capabilities. For instance, Benitez et al. (2018) conceptualized a flexible IT infrastructure as a dynamic capability that provides the organization with adequate responsiveness by enabling the business flexibility to sense and seize merger and acquisition opportunities (Benitez, Ray, & Henseler, 2018). Likewise, Queiroz et al. (2018) argue that IT application orchestration, i.e., the ability to renew IT application portfolios, should be conceived as a dynamic capability that enhances competitive firm performance. In practice, this capability requires specific, idiosyncratic, and heterogeneous competencies to develop and is therefore tough to mimic and establish within organizations (Božič & Dimovski, 2019; Khin & Ho, 2019; Teece et al., 1997; Tripsas, 1997).

Research model and hypotheses

Following the theoretical development section, IT ambidexterity, as a core organizational IT resource, is expected to enhance hospital departments' levels of patient sensing and responding capability (both conceptualized as higher-order dynamic capabilities) through digital dynamic capability (a lower-order technical dynamic capability). Figure 1 shows the research model and the associated hypotheses, which are clarified below. A firm's deliberate IT investments are crucial for the process of capability-building and gaining IT business value under turbulent conditions (Overby et al., 2006; Sambamurthy et al., 2003; Setia, Venkatesh, & Joglekar, 2013). In this discussion, IT resources are typically referred to by their aggregated latent components and qualities (e.g., hardware, software, networks, data sources) and IT-related managerial activities (e.g., IT planning, business connectivity) and by how they relate to business value (Kim, Shin, Kim, & Lee, 2011; Tippins & Sohi, 2003). However, recent studies argue that IT business value and organizational agility do not result from the deployment of isolated (non-)IT resources and competencies.
Instead, IT business value seems to emerge from the complementarity of assimilating and re-aligning the IT resource portfolio to changing business needs and demands (Ravichandran, 2018; Walraven, Van de Wetering, Versendaal, & Caniëls, 2019).

Figure 1. Research model.

Hospitals need to deal with many challenges (e.g., organizational, social, cultural) (Bhattacherjee & Hikmet, 2007; Chaudhry et al., 2006; Kohli & Tan, 2016). Therefore, they must embrace an ambidextrous IT implementation strategy, so that the short-term exploitation of (existing) IT resources is balanced with an exploratory mode that drives IT-driven business transformation (Gregory, Keil, Muntermann, & Mähring, 2015). It is only when these two modes are in sync that hospital departments are better equipped to develop digital capabilities and frame the hospital's business strategy and clinical practice (Jaana, Ward, & Bahensky, 2012; Khin & Ho, 2019). IT exploration facilitates the experimentation with and usage of new IT resources (e.g., new collaborative research platforms, decision-support systems, big data and clinical analytics, social media), which, upon success, can serve as a basis to reshape current patient engagement and care. IT exploitation, on the other hand, is focused on using, enhancing, and repackaging existing IT resources (e.g., reusing or redesigning the current EMR for new patient service development and ensuring hospital-wide accessibility of clinical patient data and information). IT exploitation allows departments to reuse existing modular and compatible IT infrastructures and software components and to integrate them with their daily business operations and clinical practices (Lee et al., 2015; Tarenskeen, Van de Wetering, Bakker, & Brinkkemper, 2020). It is expected that the simultaneous engagement of IT exploration and IT exploitation will strengthen hospital departments' digital options and competencies to manage innovative digital technologies for new patient services, as these two distinct IT ambidexterity qualities in isolation are not likely to enhance the hospital department's digital dynamic capabilities (Sambamurthy et al., 2003; Van de Wetering et al., 2018; Voudouris, Lioukas, Iatrelli, & Caloghirou, 2012). In line with this reasoning, this study contends that the complementary effect underlying IT ambidexterity facilitates hospital departments in developing a digital dynamic capability that serves as a basis to enhance patient agility. Moreover, such a strategy enables hospital departments to alleviate possible (unforeseen) risks associated with exploratory and exploitative modes. Hence, the following hypothesis is defined:

Hypothesis 1: IT ambidexterity will be positively associated with hospital departments' digital dynamic capability.

Various prior studies investigated the benefits that result from developing a digital dynamic capability (Zhou & Wu, 2010). Wang et al. (2004), for instance, argue that digital dynamic capability allows leveraging IT and knowledge resources to deliver innovative services that customers value and that contribute to organizational benefits. Coombs and Bierly (Coombs & Bierly III, 2006) empirically showed that a sophisticated digital dynamic capability enables competitive advantages. Thus, the extant literature shows that digital dynamic capability drives organizations' ability to learn from experience in turbulent economic and competitive environments.
Hence, in such an environment, it is essential to continuously search for, identify, and absorb new technological innovations so that they can be used to respond to changing customer behavior, demands, and wishes in a timely, adequate, and innovative manner (Acur et al., 2010; Roberts & Grover, 2012b). This is likewise consistent with results from Westerman et al. (2012), Khin and Ho (2019), and Ritter and Pedersen (2019), who showed that digital dynamic capability is crucial to deploying new innovative business models, enhancing customer experiences, and improving business operations. By actively managing the opportunities provided by innovative technologies and responding to digital transformation, organizations can succeed in their digital options and services (Khin & Ho, 2019). A technology-driven capability is crucial for hospital departments that want to strive for patient agility in clinical practice, because the process of achieving new digital patient service solutions is exceedingly dependent on the ability to manage digital technologies (Khin & Ho, 2019). It requires proactively responding to digital transformation, mastering state-of-the-art digital technologies, and deliberately developing innovative patient services using digital technology. Such a capability goes well beyond the notion of IT capabilities, i.e., the aggregation of IT resources and IT competencies used in the vast majority of empirical studies (Chen et al., 2014; Kim et al., 2011; Wade & Hulland, 2004). The digital dynamic capability allows hospital departments, e.g., to better absorb and process sensitive patient information, support clinicians in their decision-making processes, exchange clinical data, and facilitate patient health data accessibility. Hence, hospitals that actively invest in and develop such a capability are likely to anticipate their patients' needs (of which patients might be physically and mentally unaware) and to respond fast to changes in patients' health service needs using digital innovations and assessments of clinical outcomes (Khin & Ho, 2019; Roberts & Grover, 2012a). Therefore, such a strategically significant capability is crucial for the department's focus on quality, efficiency, and enhancing the patient's clinical journey. Based on the arguments given above and building upon the DCV, the following two hypotheses are defined:

Hypothesis 2: Hospital departments' digital dynamic capability will be positively associated with a patient sensing capability.

Hypothesis 3: Hospital departments' digital dynamic capability will be positively associated with a patient responding capability.

Data collection

Survey data were systematically collected using an online survey that contained all questions needed to test the study's model and hypothesized relationships. The survey was pretested on several occasions by five Master's students (who were conducting their Master's thesis research) and six medical practitioners and scholars to improve both the content and face validity of the survey items. These respondents all had sufficient knowledge and experience to assess the survey items effectively and to provide valuable improvement suggestions. The data were finally collected cross-sectionally from university medical centers, top clinical training hospitals, and general hospitals. The target population consisted of (clinical) department heads, team leads, managers, and doctors, in line with the study objectives.
These respondents are, at the hospital department level, the foremost informants who can provide insights into the unique and sometimes complicated situations where medical knowledge is exploited, enabling a unique treatment course (Wu & Hu, 2012). Moreover, they actively contact patients or have an intelligible insight into the department's patient interactions and IT use. Data were conveniently sampled from Dutch hospitals through the five Master's students' professional networks within hospitals, using email, telephone, and social networks. The final data collection took place between November 10, 2019, and January 5, 2020. Anonymity was guaranteed for the respondents. The online survey tool registered 230 active and unique respondents. In total, 101 cases were removed from the data because of unreliable data entries or no entries. Furthermore, 21 additional respondents were removed due to substantial missing values, and one respondent was removed because the function did not belong to our target population. This study uses the remaining 107 complete survey responses for the final analyses. This study thus uses a single informant to fill in the survey for the entire department. Within the obtained sample, 36 respondents work for a university medical center (33.6%), 41 work for a specialized top clinical (training) hospital (38.3%), and the final 30 work for general hospitals (28%). Table 1 shows the demographics of the obtained data sample. This study accounts for possible non-response bias by using a t-test to assess whether there is a substantial (and significant) difference between the early respondents (N=66) and the late subsample (N=41) in the responses to the Likert-scale questions. No significant difference could be detected. Finally, Harman's single-factor analysis was applied using exploratory factor analysis (using IBM SPSS Statistics v24) to restrain possible common method bias (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003; Richardson, Simmering, & Sturman, 2009). The current study sample is not affected by method biases, as no single factor accounted for the majority of the variance.

Measures and items

The selection of indicators was based on previous empirical and validated work to increase the questions' internal validity and reliability. Since this research was done in a healthcare setting, some of the original items had to be re-worded to suit the context of Dutch healthcare. This study operationalized IT ambidexterity using the item-level interaction terms of IT exploration and IT exploitation before running the structural model (Gibson & Birkinshaw, 2004; Lee et al., 2015); a small sketch of this construction follows below. Measures for IT ambidexterity were adopted from Lee et al. (2015). This study devised three core items from Khin and Ho (2019) to measure digital dynamic capability and conceptualized patient agility as a higher-order dynamic capability comprising the dimensions 'patient sensing capability' and 'patient responding capability' (Roberts & Grover, 2012b; Sambamurthy et al., 2003). This study adopts five measures for each of these two capabilities based on Roberts and Grover (2012b). The constructs' items in the research model used a seven-point Likert scale ranging from 1: strongly disagree to 7: strongly agree, which is a commonly used classification in empirical survey studies, since no archival data exist for quantifying the incorporated competencies and capabilities (Kumar, Stern, & Anderson, 1993). Following prior IS and management studies, we controlled for 'size' (full-time employees), operationalizing this measure using the natural log (i.e., log-normally distributed), and for 'age' of the department (5-point Likert scale; 1: 0-5 years; 5: 25+ years). Table 2 includes all the constructs' items.
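As a minimal, hypothetical illustration of the item-level interaction terms mentioned above: whether the original operationalization used the full cross-product of items or matched item pairs is not specified here, so the cross-product and all column names below are assumptions, not the authors' code.

import pandas as pd

def ambidexterity_indicators(df, explore_cols, exploit_cols):
    # Build item-level interaction indicators by multiplying each
    # IT exploration item with each IT exploitation item.
    out = pd.DataFrame(index=df.index)
    for i, ec in enumerate(explore_cols, start=1):
        for j, xc in enumerate(exploit_cols, start=1):
            out[f"ambi_{i}{j}"] = df[ec] * df[xc]
    return out

# usage (hypothetical column names for the 7-point Likert items):
# ambi = ambidexterity_indicators(survey,
#            ["expl1", "expl2", "expl3"], ["expt1", "expt2", "expt3"])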
Model estimation and sample justification

The research model is assessed using a partial least squares (PLS) structural equation modeling (SEM) application, SmartPLS version 3.2.9 (Ringle, Wende, & Becker, 2015), to estimate the model parameters. PLS allows researchers to assess the measurement model, which estimates the included latent constructs as weighted sums of specific subsets of the associated manifest variables, while also assessing the structural model, which tests the hypothesized relationships (Hair Jr, Hult, Ringle, & Sarstedt, 2016). A pivotal reason for applying a variance-based approach to SEM is that it is appropriate in exploratory contexts and for the objective of theory development (Hair Jr, Sarstedt, Ringle, & Gudergan, 2017). PLS-SEM emphasizes prediction-oriented work (as is the case in this research), as PLS maximizes the proportion of explained variance (R²) for all dependent constructs in the research model (Hair Jr et al., 2017). The estimation procedure uses the generally recommended path weighting scheme. The current sample size of 107 is relatively small but exceeds the minimum threshold values for obtaining stable PLS outcomes (Hair, Ringle, & Sarstedt, 2011). To further substantiate this claim, a power analysis was done using G*Power (Faul, Erdfelder, Buchner, & Lang, 2009). This study applies the commonly used statistical power level of 80%, an effect size of 0.15, and a 5% probability of error as input parameters. The maximum number of predictors in the research model is two (when including the non-hypothesized direct effect of IT ambidexterity on sensing and responding capability). G*Power's output parameters show that a minimum sample of 68 cases was needed, which is far below the current sample size of 107.

Reliability and validity assessments

As part of the measurement model assessment, this study evaluated the internal consistency reliability of the constructs, using the composite reliability (CR) estimate, and their convergent validity, using the average variance extracted (AVE). Also, all construct-to-item loadings were investigated. All CR outcomes are well beyond 0.85, showing that the reflective constructs measure the same phenomenon, and the AVE values greatly exceed the minimum threshold of 0.5. Discriminant validity is assessed using cross-loadings, the square root of the AVE (i.e., the Fornell-Larcker criterion), and the more recently developed heterotrait-monotrait ratio (HTMT) criterion as a metric of proper correlations among the model's constructs (Henseler, Ringle, & Sarstedt, 2015).

Table 2. Constructs and items.

IT ambidexterity (Lee et al., 2015)
- IT exploration capability: Acquire new IT resources (e.g., potential IT applications, critical IT skills); Experiment with new IT resources; Experiment with new IT management practices.
- IT exploitation capability: Reuse existing IT components, such as hardware and network resources; Reuse existing IT applications and services; Reuse existing IT skills.

Digital dynamic capability (Khin & Ho, 2019)
- Responding to digital transformation; Mastering the state-of-the-art digital technologies; Developing innovative patient services using digital technology.

Patient agility (Roberts & Grover, 2012a)
- Patient sensing capability: We continuously discover additional needs of our patients of which they are unaware; We extrapolate key trends for insights on what patients will need in the future; We continuously anticipate our patients' needs even before they are aware of them; We attempt to develop new ways of looking at patients and their needs; We sense our patients' needs even before they are aware of them.
- Patient responding capability: We respond rapidly if something important happens with regard to our patients; We quickly implement our planned activities with regard to patients; We quickly react to fundamental changes with regard to our patients; When we identify a new patient need, we are quick to respond to it; We are fast to respond to changes in our patients' health service needs.

Analyses of the cross-loadings show that each measurement item loads more strongly on its associated construct than on the other constructs (Farrell, 2010). Also, the Fornell-Larcker criterion for each of the constructs is higher than the inter-construct correlations. Finally, the HTMT assessment results indicate that each correlation score is well below the conservative HTMT0.85 mark (Henseler et al., 2015). These outcomes confirm the discriminant validity of the model's constructs.
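For readers who want to reproduce these measurement checks outside SmartPLS, the following sketch computes CR, AVE, and HTMT from standardized loadings and an item correlation matrix, using the standard formulas (Fornell-Larcker; Henseler et al., 2015). The numeric loadings in the example are hypothetical, not the study's values.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2))
    lam = np.asarray(loadings, dtype=float)
    s = lam.sum() ** 2
    return s / (s + (1.0 - lam**2).sum())

def ave(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).mean()

def htmt(R, idx_a, idx_b):
    # HTMT: mean heterotrait-heteromethod correlation divided by the
    # geometric mean of the mean monotrait-heteromethod correlations.
    R = np.abs(np.asarray(R, dtype=float))
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    def mono(idx):
        sub = R[np.ix_(idx, idx)]
        return sub[np.triu_indices_from(sub, k=1)].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# hypothetical loadings for a three-item construct:
print(composite_reliability([0.85, 0.88, 0.90]))   # compare against 0.85
print(np.sqrt(ave([0.85, 0.88, 0.90])))            # Fornell-Larcker sqrt(AVE)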
Structural model assessment

A non-parametric bootstrapping approach was applied in SmartPLS to obtain the significance levels of the (regression) coefficients among the constructs in the structural model. Hence, 5000 subsampling bootstraps were used, with observations randomly drawn from the original set of data (i.e., the sample of 107) (Hair Jr et al., 2016). Also, this study investigated the strength of the coefficients of determination (R²), the effect sizes (f²), and the model's predictive power, calculated using Stone-Geisser Q² values (Hair Jr et al., 2016). The established structural path results show that IT ambidexterity positively influences digital dynamic capability (β = 0.689; p < 0.0001). The explained variance suggests that 47.4% of the variance in digital dynamic capability can be explained by IT ambidexterity. This degree of predictive accuracy can be classified as moderate to substantial (Chin, 1998). Likewise, support was found for both of the following paths, i.e., digital dynamic capability → patient sensing capability (β = 0.593; p < 0.0001) and digital dynamic capability → patient responding capability (β = 0.463; p < 0.0001). The structural model outcomes show that the total explained variance for patient sensing capability after removing non-significant control variables is 35.1% (R² = 0.351); this amount also exhibits a moderate effect. The R² for patient responding capability is 21.4%. This accuracy level is still deemed adequate but is less strong than the explained variance for patient sensing capability (Chin, 1998). Both 'age' and 'size' (i.e., the control variables) showed non-significant effects, dispelling possible confounding issues. The Q² values obtained through blindfolding show that the Q² value for digital dynamic capability is well above zero (Q² = 0.353). Also, patient sensing capability and patient responding capability show Q² values of 0.242 and 0.157, respectively. These outcomes likewise confirm the model's predictive relevancy (Hair Jr et al., 2016). Table 3 summarizes the outcomes of the structural model analyses, including the effect sizes, specific bootstrap t-values, and additional path analyses.
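The bootstrap logic itself is simple to emulate outside SmartPLS. The sketch below resamples cases with replacement and re-estimates a path coefficient; estimate_path is a hypothetical callable standing in for the full PLS re-estimation, not a SmartPLS API.

import numpy as np

def bootstrap_path(data, estimate_path, n_boot=5000, seed=1):
    """Non-parametric bootstrap of a single path coefficient.
    data: (n x k) array of observations; estimate_path: callable that
    refits the model on a resample and returns the coefficient."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boots = np.array([estimate_path(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    est = estimate_path(data)
    t_value = est / boots.std(ddof=1)        # bootstrap t statistic
    ci = np.percentile(boots, [2.5, 97.5])   # 95% percentile interval
    return est, t_value, ci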
Discussion, implications, and concluding remarks

The digital transformation brings about an unprecedented challenge for modern-day hospitals (Agarwal, Gao, DesRoches, & Jha, 2010). Decision-makers and stakeholders across the hospital need to make sure that digital innovations are aligned and deployed with care, so that they enhance efficiencies, decision-making, and the quality of services, and so that personalized, patient-centered care can be delivered (McGrail, Ahuja, & Leaver, 2017; Walraven et al., 2019). From a research perspective, there is still a limited understanding of how IT resources and the digital capability-building process can facilitate patient agility and contribute to the much-needed insights on obtaining value from IT at the departmental level. This study aimed to address these particular gaps in the literature.

Contributions to theory

This study designed and tested a research model that argues that the hospital department's capacity to simultaneously explore and exploit IT resources and practices drives a department's patient agility by first enabling digital dynamic capability. The outcomes of this study support this claim. The structural model analyses unfolded that a hospital department's ability to simultaneously pursue 'exploration' and 'exploitation' in its management of IT practices is a crucial driver of digital dynamic capability and, thus, a necessity for integrating digital technologies with the digital talent of doctors and medical practitioners and for being responsive in the process of patient care delivery (Khin & Ho, 2019). This study shows that digital dynamic capability, in turn, enhances the conceptualized construct of patient agility by enhancing both patient sensing capability and patient responding capability. These results collectively address the research questions: the effect of IT ambidexterity on patient agility is indirect and fully mediated by digital dynamic capability. These results are coherent with previous work suggesting that hospital departments that invest in and enhance their skills, competences, and knowledge in managing innovative digital technologies are better equipped to be responsive and innovative and to satisfy patients' needs (Bolívar-Ramos, García-Morales, & García-Sánchez, 2012; Khin & Ho, 2019; Singh et al., 2011; Wu & Hu, 2012). These outcomes add to the growing body of knowledge on the degree to which IT resources and competencies contribute to organizational capabilities and benefits (Chakravarty et al., 2013; Chen et al., 2014; Lu & Ramamurthy, 2011). Specifically, this study provides insights into the much-needed intermediate (and mediating) role of digital dynamic capability (Lee et al., 2015) and into the scarcely examined capability-development process at the departmental level (Gibson & Birkinshaw, 2004; Jansen et al., 2012). This study also advances current insights on the resource and capability-building perspective (Lu & Ramamurthy, 2011; Pang et al., 2014; Sambamurthy et al., 2003; Teece et al., 1997) by unfolding the nomological path from 'resources' to the 'IT-enabled value' perspective. It does so by showing that hospitals that are committed to the process of ambidextrously managing their IT resources are more proficient in promptly sensing and responding to patients' medical needs and demands.
These theoretical contributions are valuable, as these particular insights remained unclear in the extant literature, and future research can take them into account when investigating IT benefits in hospitals.

Practical implications

The study also offers various practical implications for hospital department managers and practitioners. First, this study unfolds the critical resources and capabilities that hospital departments can leverage from a patient agility perspective. In doing so, this work embraces a dynamic capabilities view of IT resource deployments. Hence, hospital enterprises must direct IT investments to bring about the highest degree of IT business value, given the many substantial challenges to ensuring high quality across the patient care delivery continuum. Second, the current empirical results demonstrate that hospital departments should invest in their capability to balance the organization's efforts to pursue new knowledge and IT resources with its capability to take advantage of existing IT resources and assets. IT-ambidextrous hospital departments are better equipped to identify and develop new innovative digital opportunities and patient services and to enhance patient agility. This development path is crucial for successful hospital departments that strive to enhance the patient's clinical journey and provide patients with fitting health services. Finally, the results imply that hospital departments should actively manage digital technologies in striving for patient agility. The department's digital dynamic capability is crucial in the development of new digital patient service solutions. Hospital department managers should strive to be agile in the modern turbulent economic environment. Therefore, they should dedicate their resources to leveraging this capability fully. This way, they are better equipped to search for, identify, and absorb new technological innovations; to integrate, process, and exchange patient information and use it in decision-making processes; and to anticipate and respond fast to changes in patients' health service needs. In essence, the digital dynamic capability is about developing the core competencies, knowledge, and skills to better process patient information, respond adequately to digital transformation, master state-of-the-art digital technologies, and deliberately develop innovative patient services using digital technology.

Limitations and future work

Several limitations constrain this current work. These limitations could drive future research avenues. First, this study followed a cross-sectional design and used self-reported measures. Although this approach is similar to that of, e.g., Faber, van Geenhuizen, and de Reuver (2017) and Wu, Chen, and Greenes (2009), collecting the data for all constructs from a single person could lead to self-reporting bias. Future work could embrace a matched-pair survey in which different respondents and stakeholders fill in different parts of the survey. Triangulation with data available from public sources could also enrich the current insights and further strengthen and validate the empirical results. Second, the present study did not investigate the impact of digital dynamic capability and patient agility on performance or patient service benefits, nor did it consider possible contingent attributes of the organizational context in which the hospital departments operate.
Future research may wish to investigate these critical topics to unfold further insights on leveraging these clinical practice capabilities. Also, comparing outcomes across various hospitals and healthcare organizations might further contribute to the generalizability of the study findings. Future work could also involve the patient engagement and digital technology co-design perspectives (Donelan, DesRoches, Dittus, & Buerhaus, 2013; Egener et al., 2017; Papoutsi, Wherton, Shaw, Morrison, & Greenhalgh, 2020), as patient participation and engagement were currently out of scope. Finally, the current study only gathered data from Dutch hospitals. Replication studies in other European countries and in non-Western regions could contribute to the generalization of the current outcomes.

Concluding remarks

Hopefully, these outcomes contribute to a better understanding of the relationship between the phenomenon of concurrently aiming for exploration (long-term perspective) and exploitation (current business environment perspective) in IT resource management and the mechanisms through which patient agility can be achieved in clinical practice. Scholars and medical practitioners can now benefit from these outcomes, as patient sensing and responding capabilities are crucial for hospitals to deliver high-quality patient value and streamlined patient journeys. This work is particularly relevant now, as hospitals worldwide need to transform healthcare delivery processes using digital innovations during the COVID-19 crisis.
A Novel Deep Intronic Mutation Introducing a Cryptic Exon Causing Neurofibromatosis Type 1 in a Family with Highly Variable Phenotypes: A Case Study

Neurofibromatosis type 1 (NF1) is a common dominantly inherited disorder with highly variable expressivity. Genetic testing for this condition has become more available over the last decade. Here we present a case report of an NF1 family including seven affected family members, some of them with a very severe phenotype. When searching for a causative mutation in the NF1 gene, no mutation was found at the DNA level. However, a misspliced transcript including a subsequence of intron 3 appeared when screening RNA. The underlying cause at the DNA level was determined to be a deep intronic variant (c.288+1137C>T). This intronic point mutation creates a new splice site causing the insertion of a cryptic exon (r.288_289ins288+1018_1135), leading to a reading frameshift at the protein level. Deep intronic mutations introducing a cryptic exon are a known cause of NF1, and we reviewed the literature to evaluate how common this type of mutation is in NF1 syndrome. We found 20 different deep intronic NF1 splice mutations, including the one found in the present study. In conclusion, this case illustrates the value of RNA analysis for detecting the cause of genetic diseases, and we decided to use RNA-based mutation screening as the standard procedure for NF1 genetic testing in our laboratory.

Introduction

Neurofibromatosis type 1 (NF1), also called von Recklinghausen disease (OMIM #162200), is one of the most common dominantly inherited disorders; the worldwide incidence is 1/3500 [1-3]. The signature of NF1 is the development of benign neurofibromas, i.e. benign peripheral nerve sheath tumours, in addition to multiple café-au-lait spots and Lisch nodules in the eye [3]. Individuals with NF1 also have an increased risk of developing malignant tumours, among which malignant peripheral nerve sheath tumours (MPNSTs) are the most severe. The penetrance of NF1 is reported to be 100%. However, the clinical manifestations of NF1 show highly variable expression, i.e. the severity of disease varies among affected individuals within the same family and from one family to another [2]. Obvious genotype-phenotype correlations are not common in NF1 [1,4,5].
However, two clear, clinically important genotype-phenotype correlations have been revealed so far: one concerns patients with particularly severe forms of NF1 who carry large deletions encompassing the entire NF1 gene [3,6,7], and the other concerns patients carrying small mutations such as a 3-bp in-frame deletion of the NF1 gene [8]. Genetic modifiers that lie outside the NF1 gene appear to account for a large fraction of the symptomatic variability seen in NF1 [3,8]. In up to 95% of cases, the NF1 clinical diagnosis can be made through straightforward clinical evaluation fulfilling a set of clinical criteria [9,10]. The diagnostic criteria are met in a patient who has two or more of the following main characteristics: six or more café-au-lait spots, neurofibromas, skinfold freckling, optic glioma, iris Lisch nodules, distinctive bone lesions, and a first-degree relative with NF1 [11]. The NF1 gene is a tumour suppressor gene whose protein, neurofibromin, down-regulates Ras-GTP levels in the Ras/MAPK/AP-1 pathway [12]. Genetic testing can confirm the diagnosis in questionable cases and is required for prenatal or preimplantation diagnosis [13]. Further, it is important to determine the genetic cause of NF1 in families in order to perform predictive testing and to give family members healthcare follow-up. The NF1 gene has a high mutation rate, and about half of the NF1 cases are sporadic (de novo NF1 mutations) [13,14]. Mutation detection in the NF1 gene is complex due to its large size (60 exons, three of which are alternatively spliced), the existence of several pseudogenes, and the lack of clustering of the mutations [15]. There are no clear mutation hotspots, and the spectrum of mutations is very diverse, ranging from microdeletions affecting the entire NF1 gene to minor lesions that include a high proportion of splicing mutations [16]. In this study we report on a family with several affected individuals with strongly variable phenotypes. They were referred to the genetic outpatient clinic several years before we succeeded in finding the genetic cause of their disease. Only after we introduced RNA-based analysis was their mutation in the NF1 gene found.

Case presentation

The family includes seven affected members over three generations (I-III), as shown in Figure 1. Family members were referred to genetic outpatient clinics, where they received genetic counselling. They gave written consent for mutation screening and for publication of the results. In generation I, the father had a clinical von Recklinghausen diagnosis, with numerous café-au-lait spots and cutaneous neurofibromas. He died at age 62, before genetic testing was available. In the second generation, all four siblings had NF1, which was confirmed by genetic testing. The first sibling (II.1) is mildly affected, with numerous café-au-lait spots, cutaneous neurofibromas, and freckling in the axillae. He is functioning well and holds a full-time job. The now deceased son (II.2) was severely affected, with numerous neurofibromas, the first surgically removed at age 11. He had cauda equina syndrome, a sarcoma of the urinary bladder, epilepsy from age 30, and reduced hearing. As a child, he underwent cardiac surgery for an atrial septal defect. He died at age 46. The other deceased sibling (II.3) was also severely affected, with plexiform neurofibromas both in the head and extracranially. He had several tumours (cervical spine, craniocervical transition, and intraspinal) and epilepsy. He had moderate intellectual disability, lacked language, and died at age 39.
The mildly affected daughter (II.4) has café-au-lait spots and no neurological symptoms. However, magnetic resonance imaging of the head and spine revealed small neurofibromas, and she had a benign colloid nodule in the thyroid gland. Two of her four sons are affected with NF1. The oldest was diagnosed with NF1 as an infant, with café-au-lait spots, Lisch nodules, and neurofibromas. The other one was moderately affected, with cutaneous symptoms. Genetic testing has not been performed for any of her children.

DNA analysis

The entire NF1 gene (60 coding exons), including two exons with tissue-specific expression, was screened by High Resolution Melt mutation detection analysis (Corbett Rotor Gene 6000, Qiagen) and Sanger sequencing using Big Dye technology (ABI 3130xl Genetic Analyser, Applied Biosystems), according to standard protocols and as reported previously [17]. As part of the screening process, Multiplex Ligation-dependent Probe Amplification (MLPA; Salsa MLPA kit P081-B1/P082-B1 NF1, MRC-Holland) was applied in order to detect aberrant copy numbers in genomic DNA, such as large gene duplications or deletions. DNA PCR primers were designed to amplify the region of intron 3 where a mutation was suspected based on the sequence of the intronic insert.

RNA analysis

The PAXgene™ blood-tube system (PreAnalytiX) was used to preserve the RNA profile of the blood samples. RNA isolation was conducted following standard protocols using the PAXgene™ Blood RNA kit (PreAnalytiX). The cDNA was synthesised from the total RNA using random primers, following the standard protocol for SuperScript III Reverse Transcriptase (Invitrogen). In order to cover the entire cDNA, 21 overlapping PCR reactions were sequenced using Taq Advanced polymerase (5 Prime). Primers were adapted from Thompson et al. [18]. Sequencing was performed with Big Dye Terminator Sequencing Chemistry on the ABI 3130xl, and the analysis software was SeqScape (Applied Biosystems). The fragment containing the aberrant transcript, Fragment 1, was amplified with a forward primer located in the 5'UTR region and a reverse primer located in exon 7.

Nomenclature and software predictions

The nomenclature for the mutation at the DNA and RNA levels and for the predicted protein was determined according to the guidelines of the Human Genome Variation Society [19]. NCBI exon numbering and the NF1 mRNA NCBI RefSeq NM_000267.3 were applied. The A of the translation start codon was defined as nucleotide number 1. Splice site prediction by neural network (SSPN) allows searching for potential splice sites in long sequence stretches (www.fruitfly.org/seq_tools/splice.html). Alamut (version 2.3, Interactive Biosoftware), which includes several other splice prediction programs, was also applied.

Literature search

Available literature and databases were searched to determine how common deep intronic splice mutations are in NF1. These variants are defined as "mutations that alter a single nucleotide often within very large introns, creating de novo 5' or 3' intronic splice sites that is used in conjunction with an already available intronic 'partner' cryptic splice site leading to inclusion of a cryptic exon" [20]. Search terms were NF1, deep intron/intronic variants/mutations, new splice sites, and cryptic exons. Four databases were searched: PubMed, Google Scholar, the Leiden Open Variation Database (https://grenada.lumc.nl/LOVD2/mendelian_genes/variants.php?select_db=NF1&action=view_unique), and HGMD (The Human Gene Mutation Database). The results included were those variants that have been shown to alter splicing and are located more than 50 nucleotides from the 5' or 3' end of exons.
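As a small illustration of the coordinate arithmetic behind the HGVS descriptions used in this report (the coordinates are those reported for this family's variant; the frame logic is generic):

# c.288+1137C>T creates a novel donor site deep in intron 3; the cryptic
# exon r.288_289ins288+1018_1135 spans intronic offsets 1018-1135.
cryptic_exon_start, cryptic_exon_end = 1018, 1135

length = cryptic_exon_end - cryptic_exon_start + 1
print(length)                          # 118 inserted nucleotides
# An insertion whose length is not a multiple of 3 shifts the reading
# frame; 118 % 3 == 1, consistent with the premature stop in exon 4.
print("frameshift" if length % 3 else "in-frame")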
The results included were those variants that have been shown to alter splicing and are located more than 50 nucleotides from the 5' or 3' end of an exon.

Results

No pathogenic NF1 mutation was initially detected in genomic DNA from two of the affected siblings, although the clinical diagnostic criteria for NF1 were fulfilled in the affected family members. We therefore established cDNA analysis based on PAXgene™ blood samples. An aberrant transcript including a cryptic exon was discovered, r.288_289ins288+1018_1135. The cryptic exon was found to be a subsequence of 118 nucleotides from intron 3 (Figure 2D). The insertion introduces a frameshift in the transcript, leading to a stop codon in exon 4. The underlying cause at the DNA level was determined to be a deep intronic variant, c.288+1137C>T (Figure 2B). The cDNA analysis was performed in samples from two of the affected siblings in generation II, one with milder symptoms (II.1) and one severely affected (II.2). This DNA variant was detected in all affected family members in generations I and II, but not in those without an NF1 diagnosis. No reports of polymorphisms at the NF1 c.288+1137 position were found: the variant was not reported in the dbSNP or 1000 Genomes databases, and it was not detected in 100 alleles from blood donor samples.

In silico splice site prediction showed that the mutated sequence results in a donor site prediction score of 1.00 (the maximum output) when analysed with SSPN, while the same site did not produce any score when the normal sequence was applied. Other splice site prediction tools also predicted a new splice donor site: MaxEntScan 9.8 (span 0-12) and Human Splicing Finder 91.2 (span 0-100). The introduction of the cryptic exon is illustrated in Figure 2E. The DNA intronic variant c.288+1137C>T was concluded to be the genetic cause of NF1 in this family. This provided a predictive tool for the other family members.

NF1 mutation testing has been performed in our laboratory since 2006, initially based on screening at the DNA level (Sanger sequencing and dHPLC). cDNA analysis based on the PAXgene™ system was introduced later. Since 2013, our mutation screening method has been based on RNA extracted from cultured lymphocytes treated with the translation inhibitor puromycin, to prevent degradation of transcripts with premature stop codons by nonsense-mediated mRNA decay [21]. Deep intronic splice mutations are missed when traditional DNA-based screening techniques are applied [21]. As the advantages of RNA-based screening become apparent and the approach is more widely adopted, these mutations are expected to be increasingly reported in the literature. The literature search revealed 19 different deep intronic splice mutations reported so far, 20 in total when including the one found in the present study. The results, with references, are shown in Table 1 [4,16,20,22-32]. Recently we detected a second deep intronic variant in another family (c.5749+332A>G). This variant has been published by others and is included in Table 1 [16,22,30,33].

Discussion

This case study illustrates the extensive effects a deep intronic NF1 mutation can have on the individual members of a family. Further, it emphasizes the importance of applying RNA analysis in the search for a molecular explanation of clinical NF1 phenotypes. The NF1 mutation found in the family was a deep intronic variant at a position that has not been described previously. By performing an extensive literature search, we aimed to draw attention to this group of mutations in the NF1 gene.
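In silico tools such as those above score a candidate sequence against a model of known splice donor sites. As a minimal illustration of the idea only, the sketch below scores a 9-mer against a simple position weight matrix; the frequencies are purely illustrative and are not the matrices used by SSPN, MaxEntScan or Human Splicing Finder.

```python
import math

# Illustrative base frequencies for a 9-mer donor site (positions -3..+6).
# These numbers are invented for demonstration; they are NOT the trained
# models behind SSPN, MaxEntScan or Human Splicing Finder.
pwm = [
    {"A": 0.35, "C": 0.35, "G": 0.15, "T": 0.15},  # -3
    {"A": 0.60, "C": 0.10, "G": 0.15, "T": 0.15},  # -2
    {"A": 0.10, "C": 0.05, "G": 0.80, "T": 0.05},  # -1
    {"A": 0.00, "C": 0.00, "G": 1.00, "T": 0.00},  # +1 (invariant G)
    {"A": 0.00, "C": 0.00, "G": 0.00, "T": 1.00},  # +2 (invariant T)
    {"A": 0.50, "C": 0.05, "G": 0.40, "T": 0.05},  # +3
    {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},  # +4
    {"A": 0.10, "C": 0.05, "G": 0.80, "T": 0.05},  # +5
    {"A": 0.15, "C": 0.15, "G": 0.20, "T": 0.50},  # +6
]
BACKGROUND = 0.25  # uniform background base frequency
EPS = 1e-6         # avoid log(0) at the invariant positions

def donor_score(site: str) -> float:
    """Log-odds score of a candidate 9-mer donor site against the PWM."""
    assert len(site) == len(pwm)
    return sum(math.log((col[base] + EPS) / BACKGROUND)
               for base, col in zip(site.upper(), pwm))

print(donor_score("AAGGTAAGT"))  # consensus-like site: high score
print(donor_score("AAGGCAAGT"))  # same site without the GT: heavily penalised
```

A single C>T substitution that completes the invariant GT dinucleotide can turn a non-site into a high-scoring donor, which mirrors the jump from no score to the maximum SSPN output reported above for c.288+1137C>T.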
The search shows that deep intronic mutations are not extremely frequent. Nevertheless, they are just as important to reveal as exonic mutations and well-known splice-site-flanking mutations, since their effects are equally significant in causing serious disease. One must also keep in mind that this type of mutation is most likely underreported in the literature, since RNA analysis has only recently been applied in routine diagnostics.

It is estimated that splice mutations account for a substantial part of NF1 mutations. In a cohort of 97 Austrian NF1 patients, splicing mutations represented the largest group of NF1 gene alterations (38%) [16]. Another study, based on 2900 unrelated patients, found 29% splicing mutations in NF1. Fifty-seven percent of these reside outside the conserved splice donor and acceptor sites, and 10% of them are deep intronic splice mutations, i.e. 2-3% of all NF1 mutations [21]. Similar frequencies were found in a French cohort [22]: they detected 114 intronic splice mutations out of 546 mutations (21%), and among these 13 were deep intronic mutations (2.4%). At our laboratory, we have detected disease-causing NF1 mutations in 246 index patient samples analysed in the period 2006-2014. Thus, the proportion of deep intronic mutations found in our laboratory (about 1%) is consistent with the numbers reported in the literature. As reported by others [11], our mutation detection rate in NF1 has increased by changing the screening method from DNA-based to RNA-based. Besides increasing the detection rate, the cDNA-based method is faster and more labour- and cost-effective than a DNA-based method.

The median life expectancy of individuals with NF1 is approximately eight years shorter than in the general population [34,35]. In the present family, two members were severely affected and died at ages 39 and 46, respectively, far below normal life expectancy. Malignancy and vasculopathy are reported to be the most important causes of early death in individuals with NF1 [36-38]. The two severely affected members of the present family had intraspinal neurofibromas, which caused reduced muscle strength, wheelchair dependence, weakened skeletons, orthopedic problems and fractures. They died of complications after orthopedic surgery. The two other siblings in generation II are managing quite well; they are both in their forties and are much less affected. This illustrates the wide variability in expression reported for NF1 [2].

The reason for the extreme clinical variability of NF1 is unclear, although the timing and frequency of second-hit events in specific cell types likely contribute significantly. Statistical analysis of the NF1 phenotype within and between families shows that the NF1 mutant allele itself accounts for only a small fraction of phenotypic variation [39]. Further, it is suggested that genetic modifiers not linked to the NF1 locus, and differences in expression of the normal NF1 allele, contribute to the variable expressivity of the disease [39-42]. A role for microRNAs in the development of MPNST has been suggested [43]. Thus, differences in microRNA expression may partly explain why some NF1 patients develop malignant tumours while others develop only benign tumours.

Conclusions

In this case study we have described an NF1 family with great variability in the phenotype of affected members, ranging from early death to normal functioning.
The underlying genetic cause was found to be a new single-base substitution deep in intron 3, causing altered splicing. Thus, we can offer predictive testing for family members. The case report also emphasizes the importance of using the right methods to find the causative NF1 alteration in patients clinically fulfilling the NF1 criteria.

Authors' contributions

ES, WB and WS performed the laboratory testing, and LFE and TL the genetic counselling and the clinical assessments. All authors contributed to the writing of the paper.
The relationship between body image and tea drinking habits with anemia among adolescent girls in Badung District, Bali, Indonesia

Background and purpose: The prevalence of anemia among adolescent girls in Indonesia remains high. Poor nutrition is a risk factor for anemia among adolescent girls, likely related to food intake restriction to achieve a desired body shape (body image) and to the habit of drinking tea while eating, which can affect the absorption of iron. This study aims to determine the relationship of anemia with body image perception and tea drinking habits among adolescent girls. Methods: This study used a cross-sectional design carried out from March to May 2018, involving girls aged 15-18 years at high schools in Badung District. Two schools were selected from 44 high schools, then a sample of 106 students was selected by systematic random sampling. Data collected included hemoglobin levels measured with a hematology autoanalyzer, nutritional status from anthropometric measurements, and data on socio-demographics, socio-economics, tea drinking and eating habits, menstrual pattern, helminthiasis, knowledge, and perception of body image from face-to-face individual interviews. Data were analyzed using the Chi-square test for bivariate analysis and logistic regression for multivariate analysis. Results: The prevalence of anemia (hemoglobin <12 g/dL) in the adolescent girls was 13.2%. The logistic regression analysis showed that the variables associated with anemia were poor knowledge about anemia, with an adjusted odds ratio (AOR) of 11.4 (95%CI: 1.6-83.1), no iron supplement consumption (AOR=14.7; 95%CI: 1.9-109.8), negative body image (AOR=30.6; 95%CI: 2.9-321.1), tea drinking habits while eating (AOR=52.2; 95%CI: 4.2-642.9) and excessive menstrual volume (AOR=17.1; 95%CI: 1.6-185.9). Conclusion: Negative perceptions of body image and the habit of drinking tea while eating increase the risk of anemia among adolescent girls aged 15-18 years. In addition, poor knowledge about anemia, a history of not consuming iron supplements and excessive menstrual volume also increase the risk. These factors need to be considered when designing policies to reduce anemia among adolescent girls.

*Correspondence to: Kadek Agus Dwija Putra; Kesdam IX/Udayana Nursing Academy; agusdwija@ymail.com

INTRODUCTION

Anemia is a serious public health problem throughout the world.1 The national anemia prevalence among women in Indonesia is relatively higher than among men, especially in pregnant women and adolescent girls.2 The occurrence of anemia in adolescent girls is caused by increased nutritional needs related to physical and reproductive growth.3 Globally in 2011, 36.4% or 7.5 million young women aged 10-19 years were reported to have anemia,4 and in Indonesia anemia was reported in 2013 in 26.4% of young men and women aged 5-14 years, and in 18.4% of those aged 15-24 years.5

The results of previous studies indicate that anemia is influenced by many factors, including socio-demographic factors (sex, age, number of family members, ethnicity, parental education),6-10 socio-economic factors (poverty, employment, food insecurity),6,8,9,11 behavioral factors (smoking, eating patterns, physical activity, nutritional intake, vegetarianism),6,8,10-14 nutritional status (body mass index, middle upper arm circumference less than normal, obesity),6,14 reproductive history (age of menarche, menstruation, post-partum health),6,8,10,12 and parasitic infections (intestinal worms).12

Nutritional problems such as anemia and other malnutrition problems are caused by two main factors, namely poor nutrition and infectious diseases.15 Anemia in adolescents can be caused by inadequate food intake, considering that adolescence is still overshadowed by the desire to always appear with a proportional and slim body (body image).16 Body image is perceived as an important issue by young women, which leads many of them to make poor nutritional decisions in order to achieve their desired weight.16 Because of such negative body image perceptions, teenagers often choose to limit food intake and even reduce their appetite, which affects their nutritional status.17

In addition to body image, the habit of drinking tea with meals is also an important cause of anemia in young women. Tea inhibits iron absorption when consumed together with food, and so can cause anemia.18 Although tea has many health benefits, it can inhibit the absorption of non-heme iron by 79-94% if consumed together with food.19

Several studies have looked at the relationship between anemia and body image perception, and between anemia and the habit of drinking tea while eating, but these have not produced consistent results. Some reported that body image was related to anemia,17 while others reported that it was not.20 Similarly, with regard to the relationship between anemia and tea drinking habits, some studies showed an association,13 whereas others reported otherwise.21 This study aims to determine the relationship of body image perception and tea drinking habits while eating with anemia among adolescent girls.

METHODS

This was an analytic cross-sectional study carried out at senior high schools in Badung District. Badung is one of the nine districts of Bali province, Indonesia. Geographically, Badung stretches from Nusa Dua in the south to Plaga in the north, areas which are economically diverse. South Badung (urban) is central to the tourism sector, including the development of tourism destinations, while northern Badung (rural) is an agricultural, buffer and water catchment area.

Two schools were selected purposively from the 44 public and private senior high schools in Badung District: one representing the northern Badung area (Abiansemal 1 Senior High School) and one representing the southern Badung area (Kuta 2 Senior High School). The total population was 1,375 students, and a sample of 106 adolescent girls aged 15-18 years was selected by systematic random sampling, with 53 female students in each school. The sample size was determined with a significance level of 95%, power of 80%, and proportions of anemia of 57.4% in adolescents with positive body image and 42.6% in adolescents with negative body image.16
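The paper reports the inputs to this calculation (95% significance, 80% power, proportions 57.4% and 42.6%) but not the exact formula used. Below is a minimal sketch of the standard normal-approximation formula for comparing two proportions; note that this classical formula yields a larger per-group size than the 106 students actually enrolled, so the authors presumably used a different variant of the calculation.

```python
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Normal-approximation sample size per group for comparing two proportions."""
    z_a = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_b = norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# Inputs as reported in the text; this formula gives ~175 per group,
# illustrating the calculation rather than reproducing the study's n=106.
print(round(n_per_group(0.574, 0.426)))
```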
Data collected included: hemoglobin levels, measured by hematological panel examination (hemogram) with a hematology autoanalyzer (Mindray BC-2800); nutritional status based on body mass index (BMI), via anthropometric measurements of body weight and height; socio-demographics (age, number of family members, paternal education); socio-economics (paternal occupation, family income); behavior (iron supplement consumption, breakfast habits, frequency of eating, dietary restrictions, fruit consumption, vegetable consumption, meat consumption, drinking tea while eating, and dieting); menstrual patterns (age of menarche, menstrual status, menstrual cycle, number of replacement pads per day during menstruation, duration of menstruation); history of worm infestation (from childhood to the time of the research); knowledge about anemia; and perception of body image. These variables were collected by individual face-to-face interviews using a standardized questionnaire,13,17 conducted in each school.

Anemia status was grouped based on blood hemoglobin level into anemia (Hb<12 g/dL) and non-anemia (Hb≥12 g/dL).22 Body image perception was categorized into positive and negative body image perception (using a Likert scale with 32 statements),17 and the habit of drinking tea (warm or iced) while eating main meals was categorized into drinking tea while eating and not drinking tea while eating (covering the past month up to the time of the study).13

Data were analyzed descriptively, followed by bivariate analysis using the Chi-square test and multivariate analysis using logistic regression. Data are presented as anemia prevalence, odds ratios, 95% confidence intervals (CI) and significance values (p). This study was approved by the Ethics Committee, Faculty of Medicine, Udayana University/Sanglah General Hospital Denpasar on April 11, 2018, Number: 851/UN14.2.2/PD/KEP/2018.

RESULTS

Of the 106 high school students involved in the study, 13.2% had anemia. Table 1 shows that age was normally distributed, with a mean age (±SD) of 16.37 (±0.68) years and the majority being 16 years old (46.2%). With regard to the socio-economic status of the families, more than half of the fathers (56.6%) had permanent jobs, and 66% of the families had an income above the minimum wage in Badung District (IDR 2,499,900),23 with an average income of IDR 3,420,000. As for family size, 59.4% of respondents had five or more family members (range 2-13, average 5.04), and most of the respondents' fathers (94.3%) had completed junior high school or above.
Table 2 shows the anthropometric and nutritional status of the adolescent girls, with an average weight of 52.37 kg, height of 156.97 cm and BMI of 21.26 kg/m2. Most of the sample (77.4%) had normal nutritional status, with the rest categorized as abnormal/malnourished (underweight, overweight or obese). For the eating behavior variables, most of the sample (60.4%) had a habit of eating breakfast, ate complete meals three times a day (56.6%), had no dietary restrictions (81.1%), and reported consuming fruit (81.5%) and vegetables (85.5%); however, the majority (70.8%) also reported the habit of drinking tea while eating. In addition, almost all of the sample habitually consumed meat dishes and were not on a diet (98.1% and 90.6%, respectively). Nearly three-quarters of the respondents (74.5%) consumed iron supplements.

Most of the respondents (77.4%) had a good level of knowledge about anemia, with an average knowledge score of 20.57 (range 12-28). However, it was evident that knowledge was still lacking on the causes of anemia and the types of foods high in iron.

Menstrual patterns were measured based on the age of menarche, menstrual status, menstrual cycle, length of menstruation and number of pads used per day. Almost all girls (98.1%) experienced menarche at age ≥11 years (average 12.91 years), 90.6% were not menstruating at the time of measurement, and 92.5% had an average menstrual period of ≤7 days. The majority (89.6%) of respondents had a menstrual cycle of >28 days (average 28.25 days), and half (50.9%) reported changing pads three times a day or fewer. Only a small proportion (3.8%) of respondents had a history of helminthiasis.

From the Chi-square tests presented in Table 3, there was no significant relationship between nutritional status and anemia: 9/82 (11%) of respondents with normal nutritional status had anemia, compared with 5/24 (20.8%) of those with abnormal nutritional status. Likewise, eating behavior variables such as breakfast habits, frequency of eating, dietary restrictions, fruit consumption, vegetable consumption, meat consumption and dieting habits were not related to anemia in the adolescent girls.

There were significant relationships of both tea drinking habits while eating and iron supplement consumption with anemia. Almost a third, 9/31 (29%), of respondents who had the habit of drinking tea while eating had anemia, while only 5/75 (6.7%) of those who did not drink tea when eating had anemia. With regard to iron supplement consumption, 9/27 (33.3%) of respondents who did not consume iron supplements were anemic, compared with 5/79 (6.3%) of those who did.

From the analysis of menstrual patterns, age of menarche, menstrual status, menstrual cycle and length of menstruation did not significantly correlate with anemia in the adolescent girls. For menstrual volume (sanitary pad usage), however, there was a significant correlation: 11/52 (21.2%) of adolescents who changed pads more than three times a day experienced anemia, compared to 3/54 (5.6%) of those who changed pads three times a day or fewer. There was no significant relationship between a history of helminthiasis and anemia among the respondents.
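As a worked illustration of the bivariate comparisons above, the sketch below computes the crude (unadjusted) odds ratio and a Woolf 95% confidence interval for the tea-drinking comparison, using the counts given in the text. This crude estimate is deliberately not the adjusted odds ratio from the multivariate logistic regression, which conditions on the other covariates.

```python
import math

# 2x2 table for tea drinking while eating vs anemia (counts from the text)
a, b = 9, 31 - 9    # drinkers: anemic, non-anemic
c, d = 5, 75 - 5    # non-drinkers: anemic, non-anemic

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)           # Woolf's method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"crude OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# crude OR ~ 5.7, far below the adjusted OR of 52.2 reported from the
# multivariate model, which adjusts for knowledge, supplements, etc.
```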
Family socio-economic variables, including paternal education, paternal occupation, family income and number of family members, were not significantly related to anemia among the adolescent girls (Table 4). There was a significant relationship between knowledge about anemia and the prevalence of anemia: 6/82 (7.3%) of knowledgeable young women experienced anemia, compared with 8/24 (33.3%) of those with poor knowledge. There was also a significant relationship between perception of body image and anemia: about one-third of respondents (28.9%) with negative body image perceptions had anemia, whilst only about 4.4% of young women with positive body image perceptions did.

DISCUSSION

The results of this study indicate that the prevalence of anemia among adolescent girls in Badung District's high schools is 13.2%, which by WHO standards falls into the mild category (range 5%-19.9%).24 This prevalence is much lower than that of a previous study in 2015, which reported an anemia prevalence in Badung of 71.3%.13 This might be caused by differences in the measurement of hemoglobin level, and by the fact that not all samples in this study came from Abiansemal High School. The previous study used a portable Nesco device, whereas in this study we used a complete blood examination with a hematology autoanalyzer, which has a higher level of accuracy.

In addition, the Badung District government has established iron supplementation for adolescent girls to address the anemia problem. This program had yet to be implemented in 2015 when the first study was conducted, but was already running during our research in 2018, which may have contributed to the lower anemia prevalence in the present study. This is supported by our results, which show that the majority of the adolescent girls took iron tablets and that consumption of iron supplements was associated with a reduced probability of being anemic: young women who did not take iron supplements were 14.7 times as likely to suffer from anemia as those who did. This is in line with another study showing that long-term weekly iron-folate supplementation is a practical, safe, effective and inexpensive method for improving iron nutrition in young women.25

Table 3. Relationship between nutritional status, behavior, menstruation patterns, and intestinal worm history, and anemia among adolescent girls

Our study found a significant relationship between perceptions of body image and anemia in young women. This result is in line with a previous study in Makassar, which indicated a relationship between body image and Hb levels in young women, with anemic girls tending to have a negative body image. It was also found that there was a positive relationship between body image and dietary behavior among young women in Makassar, where those with a positive body image had a healthy diet.16 A negative body image can have health impacts such as overuse of laxatives, vomiting of food, strenuous physical activity, and unhealthy eating patterns due to inappropriate weight control.26 Body image is a personal perception viewed as important by most young women; some will do almost anything to keep their body shape slim, including restrictions on food intake and weight control practices that can be dangerous.16
Studies have shown that overweight young women tend to have a negative body image, whereas non-overweight adolescent girls are more likely to have a positive body image.27 Adolescents who have positive eating behaviors and a positive body image tend to have better nutritional status than adolescents with poor eating behaviors and a negative body image.28 Poor interpretation and perception of bodily changes during the adolescent years can also influence exercise frequency and food choices.27

Meanwhile, the habit of drinking tea while eating was also associated with anemia among the adolescent girls. This finding is supported by a previous study among young women in Gunungsari, which found a correlation between the consumption patterns of iron absorption inhibitors (caffeine, tannins, oxalates, phytates), found in soybeans, tea and coffee, and anemia status among female students.18 Consuming foods or drinks that interfere with the absorption of iron (such as coffee and tea) at mealtimes lowers iron absorption.29 Consumption of one cup of tea a day can reduce iron absorption by 49% in people with iron deficiency anemia, while two cups a day decrease absorption by 67%.18 Tea consumed up to one hour after a meal reduces the iron absorption capacity of red blood cells by 64%, and it is therefore recommended to consume tea at least two hours after meals.18

On the other hand, knowledge about anemia was a protective factor against the risk of anemia in the adolescent girls: girls with poor knowledge about anemia had a more-than-tenfold tendency to have anemia compared to those with good knowledge. This is in line with other studies. Likewise, among the young women who did not drink tea while eating, most had good knowledge about anemia. This finding signifies that knowledge about anemia influences adolescent girls' decisions on food consumption, so efforts to improve adolescent knowledge about anemia and healthy eating patterns should be continued and upscaled.

Furthermore, we found that menstrual volume (pad replacement) was significantly associated with anemia in the adolescent girls. Girls who changed pads more than three times a day during menstruation were 17 times as likely to suffer from anemia as those who used fewer pads per day. The frequency of changing pads is a proxy for the volume of blood lost during menstruation, meaning that those with a higher volume of blood loss are more at risk of anemia than those with less bleeding. The average blood loss during menstruation is about 30 ml, which corresponds to an additional iron need of 0.5 mg per day; this daily figure is calculated from the iron content of the blood lost during menstruation, averaged over a period of one month. About 10% of women lose as much as 80 ml of blood, which is equivalent to 1 mg of iron per day. Taking this higher value of 1 mg/day, the total iron loss (basal loss plus menstruation) in a woman will be 30 µg/kg body weight/day (>1.5 mg/day). A woman would not be able to maintain a positive iron balance if her iron needs were based on an average loss of 30 ml of blood during menstruation.31 The other menstrual pattern variables were not related to anemia among the adolescent girls, in line with another study which found no relationship between menstrual patterns and anemia (r=0.031; p=0.789).32
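The quoted iron figures can be sanity-checked with standard physiological constants. The sketch below assumes a hemoglobin concentration of about 150 g per litre of blood and about 3.4 mg of iron per gram of hemoglobin; both are textbook approximations, not values stated in this paper.

```python
HB_G_PER_ML = 0.150    # assumed hemoglobin concentration, g per ml blood
FE_MG_PER_G_HB = 3.4   # assumed iron content of hemoglobin, mg per g
DAYS_PER_CYCLE = 30

def daily_iron_loss(blood_loss_ml_per_cycle: float) -> float:
    """Average daily iron loss implied by a given menstrual blood loss."""
    fe_per_cycle = blood_loss_ml_per_cycle * HB_G_PER_ML * FE_MG_PER_G_HB
    return fe_per_cycle / DAYS_PER_CYCLE

print(daily_iron_loss(30))  # ~0.5 mg/day, matching the quoted average
print(daily_iron_loss(80))  # ~1.4 mg/day, close to the quoted 1 mg/day figure
```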
Based on our findings, we recommend that policy makers develop an education program for school-age girls regarding anemia, covering the importance of consuming iron supplements, healthy food consumption, and eating habits that may increase the chance of anemia, such as consuming tea while eating. It is also necessary to provide adequate information regarding body image perception, along with reproductive health education.

This study has several limitations. It was conducted among adolescent girls in a high school setting, so generalization may be limited to settings with similar characteristics. The samples were selected from only two high schools, so again caution is needed when generalizing the findings. Regarding menstrual volume, we could only estimate bleeding from the frequency of sanitary pad changes, which may not be the best indicator of blood loss.

CONCLUSION

The prevalence of anemia among adolescent girls in Badung District was 13.2%, much lower than in a previous study, suggesting that the current anemia prevention program may have had a positive impact. Our study shows a relationship of negative body image perception and the habit of drinking tea while eating with the risk of anemia among adolescent girls; poor knowledge about anemia, not consuming iron supplements and excessive menstrual volume can also increase the risk. The iron supplementation program appears to have contributed to the lower prevalence of anemia, and hence should be maintained and upscaled. It is also necessary to introduce comprehensive measures to increase adolescent girls' knowledge of good eating patterns, and a campaign to build positive body image perception.

Table 1. Sociodemographic and socioeconomic characteristics of adolescent girls in Badung District
Trace-class Gaussian priors for Bayesian learning of neural networks with MCMC

This paper introduces a new neural network based prior for real-valued functions on R^d which, by construction, is more easily and cheaply scaled up in the domain dimension d compared to the usual Karhunen-Loève function space prior. The new prior is a Gaussian neural network prior, where each weight and bias has an independent Gaussian prior, but with the key difference that the variances decrease in the width of the network in such a way that the resulting function is almost surely well defined in the limit of an infinite-width network. We show that in a Bayesian treatment of inferring unknown functions, the induced posterior over functions is amenable to Monte Carlo sampling using Hilbert space Markov chain Monte Carlo (MCMC) methods. This type of MCMC is popular, e.g. in the Bayesian Inverse Problems literature, because it is stable under mesh refinement, i.e. the acceptance probability does not shrink to 0 as more parameters of the function's prior are introduced, even ad infinitum. In numerical examples we demonstrate these stated competitive advantages over other function space priors. We also implement examples in Bayesian Reinforcement Learning to automate tasks from data and demonstrate, for the first time, stability of MCMC under mesh refinement for these types of problems.

1. Introduction

Generating samples from probability measures on function spaces is both a challenging computational problem and a very useful tool for many applications, including mathematical modelling in bioinformatics [38], data assimilation in reservoir models [26], and velocity field estimation in glaciology [33], amongst many others. This paper addresses the problem of defining a computationally and statistically favourable function space prior.

In Bayesian inference on separable Hilbert spaces [46], many posterior measures µ are absolutely continuous with respect to their prior µ0 (often a Gaussian measure, see [29] and [14], but not always, see [13], [24], and [25]), with the likelihood acting as the Radon-Nikodym derivative dµ/dµ0 ∝ L. Samples from a Gaussian prior on a separable Hilbert space have a convenient expansion as the weighted sum of an infinite countable basis, weighted with independent Gaussian random variables (see (1)), which is known as the Karhunen-Loève (KL) expansion. The posteriors come with a variety of theoretical results, such as concentration inequalities and contraction rates; see e.g. [1,29,37,50]. Truncating the KL expansion then reduces the problem of sampling from infinite-dimensional measures to sampling from a finite-dimensional parameter space. This truncated approximation to the true posterior improves as more terms of the expansion are included. The practical applicability of these Gaussian priors is, however, restricted to inferring unknown functions with low-dimensional domains, as the orthogonal basis required for the KL expansion results in the complexity scaling exponentially with the dimension of the unknown function's domain.

Another approach to defining function space priors are Bayesian Neural Networks (BNNs) [34,35], which currently enjoy a resurgence of interest, e.g. in the machine learning community.
A BNN is a random function obtained by placing a prior distribution over the weights and biases of a Neural Network (NN), with the default choice being a centered Gaussian prior on the weights with variances that scale as O(1/N^(l)), where N^(l) is the number of nodes in layer l. Some authors argue for heavy-tailed priors on the parameters, as initially investigated in [35]. Although some theoretical results exist [32], popular criticisms include the lack of interpretability of the resulting BNNs, and recent work [53] has highlighted, inter alia, that novel priors are needed. Sampling approaches include Hamiltonian Monte Carlo [35] and more advanced integrators [31]. However, inference is often limited to finding the maximum-a-posteriori (MAP) estimate of the posterior [52], and the O(1/N^(l)) scaling implies one cannot easily add nodes to a layer to obtain more accurate estimates: one would either have to adjust the prior variances for all nodes within the amended layer, thereby changing the prior, or not adjust the prior, which results in exploding functions [32]. Other function space priors include Deep Neural Networks and Deep Gaussian Processes [12,15], and in [15] inference is done using function space MCMC techniques similar to the ones we employ.

To calculate expectations with respect to the Bayesian posterior of the unknown function, computational methods are required, as the relevant integrals are usually not analytically tractable. Two popular sampling algorithms for posteriors defined on Hilbert spaces are the preconditioned Crank-Nicolson (pCN) algorithm and its likelihood-informed counterpart, the preconditioned Crank-Nicolson Langevin (pCNL) algorithm, which arise from clever (and in a way optimal) discretisations of certain stochastic differential equations [9]. These samplers are asymptotically exact and have a dimension-independent mixing rate, in the sense that their proposal step size does not depend on the number of terms in the KL truncation [16,21]. This stands in stark contrast to the well-known dimension-dependent scaling of popular MCMC algorithms such as the Random Walk Metropolis-Hastings algorithm and the Metropolis Adjusted Langevin Algorithm [40,42]. Modifications of pCN include geometric [6] and likelihood-informed [10] versions. Although the computational cost can be reduced provided one knows which basis functions are informed by the data, these methods cannot circumvent the costly scaling in the domain dimension. This is presumably one reason why they have rarely been used for inferring unknown functions with domains of dimension larger than two (i.e. R^2) in reported examples in the literature.

This paper introduces a new neural network based prior, coined the trace-class neural network prior, which allows for scalable (in the domain dimension) Bayesian function space inference. Hilbert space MCMC algorithms are then used to sample from the resulting posteriors, and their stability under mesh-refinement enhances the practical utility of our framework. In addition to comparisons with reported examples in the literature, we also demonstrate our technique's usefulness on a challenging 17-dimensional Bayesian reinforcement learning example where the aim is to learn the value function (a function on R^17) that can automate a task demonstrated by an expert: we combine the noisy expert data with a trace-class NN prior, through a suitably defined likelihood, to yield a Bayesian formulation.
The main contributions of this paper are as follows:

• We introduce a new trace-class Gaussian prior for neural networks, which is both well defined for infinite-width NNs and has a degree of smoothness, and demonstrate its practical utility. The prior is independent, centred, and Gaussian across the NN's weights and biases, but is non-exchangeable over the weights within each layer and has a summable variance sequence. The latter, which gives it the trace-class property, ensures it is a valid prior for an infinite-width network, while the former results in parameters being better identified from an inference perspective. We further show that this prior is appropriate for use with Hilbert space MCMC methods (Theorem 1). The practical implication is that these methods are valid for the infinite-width limit of the NN and not just finite-dimensional projections of it (unlike, e.g., the Random Walk Metropolis-Hastings algorithm), enjoy a dimension-independent mixing rate and, owing to the inherent scalability of neural networks in the number of inputs, are suitable for applications with high-dimensional state spaces.

• We propose a suitable likelihood for Bayesian Reinforcement Learning (BRL) for inferring the unknown continuous-state value function that best describes an observed state-action data sequence. Theorem 2 and Lemma 3 justify the use of this likelihood with Gaussian prior measures on function spaces, and with our proposed neural network prior. This likelihood is also potentially of interest to the machine learning community in its own right.

• We apply Hilbert space MCMC methods to infer the unknown optimal value function in two continuous-state control problems, using both our new prior and likelihood function. These exercises motivate the need for NN function priors that are, unlike a canonical orthogonal basis prior for that domain, scalable in the domain dimension, and demonstrate for the first time dimension-independent mixing of MCMC for Bayesian Inverse Reinforcement Learning.

The rest of this paper is organised as follows. In Section 2 we introduce the general inference problem, describe the canonical orthogonal basis for functions on R^d, and describe MCMC methods on an infinite-dimensional Hilbert space, including their construction and the assumptions under which these methods are well defined. Section 3 introduces the trace-class neural network prior and states one of our main theoretical results, showing that the proposed prior satisfies the necessary assumptions to be used with a Hilbert space MCMC algorithm. In Section 4 we formulate the Bayesian Reinforcement Learning (BRL) problem and introduce the likelihood to be used for inferring continuous-state value functions from state-action data. We then show that the likelihood satisfies the assumptions needed to be admissible in a Hilbert space MCMC setting. Finally, Section 5 provides numerical results for the proposed prior and the likelihood for different control problems. Proofs can be found in the appendix.

1.1. Notation. We use curly letters (X and A) for spaces and sets. Subscripts denote both temporal and spatial variables; it will be clear from the context which one is being referred to. Φ denotes the Gaussian cumulative distribution function (cdf), and φ the Gaussian probability density function (pdf). ϕ is used for basis functions, and ζ denotes an activation function. The likelihood function we write as L, the log-likelihood as ℓ, and T is the number of data points used in the likelihood.
ℓ^2 will also denote the space of square-summable sequences. The space of square-integrable functions (with respect to the Lebesgue measure) from X ⊆ R^d to R is denoted L^2(X, R), or simply L^2. For the control problem, T denotes the deterministic state dynamics, mapping a state-action pair (x, a) to the next state x'. The value function is denoted by the letter v.

2. Problem Formulation

The objective is to sample from a target distribution µ defined over an infinite-dimensional separable Hilbert space. The targets of interest in this work are Bayesian posterior distributions arising from a Gaussian prior measure µ0 and a likelihood which can be evaluated pointwise. One such likelihood is the Gaussian likelihood arising from observations of a solution to a PDE with additive Gaussian noise, given in Section 3.3, which is a standard likelihood in the Bayesian Inverse Problems literature [46]. The other likelihood we will work with is one for continuous-state control problems, introduced in Section 4. In what follows, we will assume that the posterior has a density with respect to the prior, in which case the Radon-Nikodym derivative is well defined and proportional to the likelihood. The posterior density with respect to the prior is given by

dµ/dµ0 (u) = (1/Z) exp(ℓ(y|u)),

where y are observations, ℓ is the log-likelihood, and Z = ∫ exp(ℓ(y|u)) µ0(du) > 0 is the normalisation constant.

For an infinite-dimensional separable Hilbert space H, say H = L^2(X, R) to frame the discussion in this section (and later, in Section 3, the sequence space H = ℓ^2), there exists an orthonormal basis {ϕ_i}_{i=1}^∞ such that any element u ∈ H can be obtained as the limit u = Σ_{i=1}^∞ a_i ϕ_i, where a_i = ⟨u, ϕ_i⟩_H, with ⟨·,·⟩_H denoting the inner product on H. Let the prior µ0 = N(0, C) be a Gaussian measure on H. If the operator C is trace-class, with orthonormal eigenvalue-eigenfunction pairs (λ_i^2, ϕ_i(x)), i = 1, 2, ..., one can sample from µ0 by sampling a sequence of ξ_i ∼ N(0, λ_i^2) and by then defining

u(x) = Σ_{i=1}^∞ ξ_i ϕ_i(x).    (1)

The sum defines u(x) ∈ H almost surely and is the Karhunen-Loève (KL) expansion [20]. One may thus think of a sample from the Gaussian measure as the sum of a sequence of 1-dimensional Gaussians with summable variances. This allows us to truncate the series expansion such that we have N active terms, with the remainder, or approximation error, tending to zero as N increases:

u^N(x) = Σ_{i=1}^N ξ_i ϕ_i(x) → u(x).    (2)

Other, more elaborate truncation schemes are discussed in [9], but we will focus on a fixed number of terms for computational and notational convenience. For some applications, ϕ_i for large i can be interpreted as highly oscillating functions which may not be discernible by the observation operator; see the example in Section 3.3 or Figure 1, where the large-i coefficients are responsible for the oscillating function in the left panel, and are forced to 0 on the right. Note that, given some u ∈ H, we can let u' be u with its i-th component set to 0, i.e. u' = u − ⟨u, ϕ_i⟩ ϕ_i. It follows from Assumption 4 (stated later in the manuscript) that lim_{i→∞} ℓ(y|u') = ℓ(y|u), for any u ∈ H. Following the approach of [46, Theorem 4.6], this closeness of the likelihoods ℓ(y|u) and ℓ(y|u − ⟨u, ϕ_i⟩ϕ_i) translates to closeness of the corresponding posteriors. We emphasise that the above discussion holds not only for the space H = L^2(X, R), which is predominantly how it is applied in [5,6,9], but also for H = ℓ^2 (with the only change being the choice of the orthonormal basis), which will be of particular importance in this paper.
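To make the truncation (2) concrete, the following minimal sketch draws one truncated KL sample on a grid. The cosine basis and the eigenvalue decay λ_i^2 = i^(−2α) used here are illustrative choices for this sketch, not the specific basis of the experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_kl(x, N=100, alpha=1.5):
    """Truncated Karhunen-Loève draw u^N(x) = sum_i xi_i phi_i(x) on a grid x."""
    i = np.arange(1, N + 1)
    lam2 = i ** (-2.0 * alpha)                           # summable eigenvalues
    xi = rng.normal(0.0, np.sqrt(lam2))                  # xi_i ~ N(0, lambda_i^2)
    phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(i, x))  # orthonormal cosine basis on [0,1]
    return xi @ phi

x = np.linspace(0.0, 1.0, 200)
u = sample_kl(x)  # one prior draw; increasing N refines the approximation
```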
In infinite-dimensional spaces, one has to be careful to ensure the posterior is well defined; see [46] for a discussion of Gaussian priors and likelihoods given through possibly non-linear mappings, observed in Gaussian noise. We will work with the following assumptions, which we prove are satisfied for the likelihood defined in Section 4.

(1) µ0 is a Gaussian prior defined on a separable Hilbert space H, with a trace-class covariance operator C, that is, the eigenvalues λ_i^2 corresponding to the eigenfunctions ϕ_i satisfy Σ_i λ_i^2 < ∞;

(2) The posterior is well-defined, i.e. the integral of the likelihood with respect to the prior is positive and finite.

2.1. A canonical approximation for functions on R^d. Consider a d-dimensional hypercube X = [0, 1]^d, the Hilbert space H = L^2(X, R), and a Gaussian prior measure µ0 on H. A Bayesian approach entails choosing the covariance operator C for the Gaussian prior µ0, and we discuss a standard choice below. If the problem requires it, as in Section 3.3 where a PDE is solved, it is possible to choose C such that the samples are almost surely differentiable.

Given eigenvalues λ_i and basis functions ϕ_i for a 1-dimensional function, one approach to scale this basis up to a d-dimensional domain is by taking a tensor product of the basis; see e.g. [27] for the multivariate Fourier basis, or [54] for wavelets and other basis expansions. For the KL expansion we thus get, for a multi-index k = (k_1, ..., k_d) = k_{1:d} with k_i = 1, ..., N,

u(x) = Σ_{k_{1:d}} ξ_{k_{1:d}} ϕ_{k_1}(x_1) ··· ϕ_{k_d}(x_d),

where ξ_{k_{1:d}} ∼ N(0, λ_{k_{1:d}}^2), with λ_{k_{1:d}} being a function of the respective eigenvalues λ_{k_i} capturing the correlation between dimensions. In total there are N^d active terms, that is, the complexity is exponential in the dimension d. This is computationally prohibitively expensive, even for moderately small d. An approximation-theoretic argument for the exponential scaling has been made by [2], who showed that a Sobolev function u on a d-dimensional domain with smoothness α can be approximated in L^2 within error ε using N basis terms, where N ∝ ε^(−d/α).

To circumvent the exponential growth of terms in the domain dimension, one could employ the following simplification with only mixed partials up to order two [45],

u(x) ≈ Σ_{i=1}^d u_i(x_i) + Σ_{i<j} u_{i,j}(x_i, x_j),    (3)

with dN + (d(d−1)/2) N^2 coefficients to be estimated, thus still achieving a significant reduction compared to the N^d terms before. In our numerical work, this approximation is an obvious candidate to contrast against. With the approximation (3) in mind, one restricts oneself to the prior on finitely many random functions u_i and u_{i,j}, each of which is itself sampled from a Gaussian measure N(0, C_1) or N(0, C_2), respectively. One identifies each of these functions with its Karhunen-Loève expansion,

u_i(x_i) = Σ_k ξ_{i,k} ϕ_k(x_i),  u_{i,j}(x_i, x_j) = Σ_k ξ_{i,j,k} ψ_k(x_i, x_j),    (4)

where the ϕ_k and ψ_k are the eigenfunctions corresponding to the eigenvalues λ_{ϕ,k}^2 and λ_{ψ,k}^2, respectively. The ξ_{i,k} and ξ_{i,j,k} are independent normal random variables, ξ_{i,k} ∼ N(0, λ_{ϕ,k}^2) and ξ_{i,j,k} ∼ N(0, λ_{ψ,k}^2). As before, one requires the covariance operators to be trace-class, and truncates the expansion (4) after a finite number of terms.

The numerical experiments using the KL function space prior in this paper are based on Fourier basis functions: ϕ_k defined on [0, 1], and ψ_k = ψ_{k_1,k_2} defined on [0, 1]^2 (for i ≠ j) and indexed by a double index k = (k_1, k_2) ∈ N × N, with corresponding eigenvalues λ_k^2 ∝ 1/(k_1^2 + k_2^2)^α. See Figure 1 for some representative draws from this prior, which is a modification of the prior used in Section 4.2 of Beskos et al. (2017).
The covariance operator is of the form (−∆)^(−α), where ∆ denotes the Laplacian, and we allow both Dirichlet boundary conditions (e.g. ϕ_{2k}(0) = ϕ_{2k}(1) = 0) and Neumann boundary conditions (e.g. ϕ'_{2k+1}(0) = ϕ'_{2k+1}(1) = 0), with opposing sides of the square [0, 1]^2 satisfying the same boundary conditions.

Figure 1. Three samples from the Karhunen-Loève prior; the basis functions are the two-dimensional Fourier functions. In ascending order from left to right, α ∈ {1.001, 1.5, 2}, with the eigenvalues scaling as λ_k^2 ∝ 1/(k_1^2 + k_2^2)^α for the double index k = (k_1, k_2). The tuning parameter α controls the smoothness of the samples.

Section 3 will introduce a prior which scales favourably with the domain dimension, as it does not require pre-defining an orthogonal basis.

2.2. Metropolis-Hastings algorithms on Hilbert spaces. This section recapitulates how to define 'sensible' Metropolis-Hastings Markov chain Monte Carlo algorithms for inference over the ξ_i in (1). Using Markov chains is an established approach to sample from distributions on finite-dimensional state spaces (see [8] for an overview of MCMC methods), and our emphasis here is to review algorithms which can theoretically deal with arbitrarily many basis coefficients, without having to be re-tuned to avoid the usual problem of the acceptance probability degenerating as one includes more coefficients. This property, known as stability under mesh-refinement, is not satisfied by the popular Random Walk Metropolis-Hastings algorithm (RWMH, [22]) or by the Metropolis Adjusted Langevin Algorithm (MALA, [41]). Two algorithms which are both dimension-independent are the preconditioned Crank-Nicolson (pCN) and the preconditioned Crank-Nicolson Langevin (pCNL) algorithms, the former introduced as early as [36] and both derived and discussed in [9]. Motivated by the idea of increasing dimensions translating to evaluating a function on a finer mesh, we will also refer to the dimension-independence of these algorithms as stability under mesh-refinement.

Both algorithms can be seen as discretisations of the following stochastic partial differential equation,

du/ds = −K (C^(−1) u − γ Dℓ(u)) + √(2K) dB/ds,    (8)

where Dℓ is the Fréchet derivative of the log-likelihood, K is a preconditioner, C is the covariance operator of the Gaussian prior measure, B is a Brownian motion, and γ a tuning parameter: if γ = 0, the invariant distribution of (8) is the prior µ0, and for γ = 1 the invariant distribution is the posterior µ. With the choice K = C (the preconditioned case, such that the dynamics are scaled to the prior variances), discretising (8) using a Crank-Nicolson scheme results in pCN (for γ = 0) and pCNL (for γ = 1). The resulting discretisations can be simplified to

v = √(1 − β^2) u + β ξ0,  ξ0 ∼ N(0, C),    (pCN)

v = ((2 − δ)/(2 + δ)) u + (2δ/(2 + δ)) C Dℓ(u) + (√(8δ)/(2 + δ)) ξ0,  ξ0 ∼ N(0, C),    (pCNL)

for step sizes β ∈ (0, 1] and δ ∈ (0, 2), respectively. Note that, due to the discretisation scheme used, pCN is prior-reversible, and when using it as a proposal in a Metropolis-Hastings sampler to target the posterior, the proposal is accepted with probability min{1, exp(ℓ(v) − ℓ(u))}. If the pCNL dynamics are used as a proposal for a MH scheme, the acceptance probability is of the form min{1, exp(ρ(u, v) − ρ(v, u))}, where ρ collects the likelihood and Langevin drift terms; see [9] for the explicit expression. Both pCN and pCNL are such that, for an uninformative likelihood, all moves are accepted. In practice, Assumptions 3 and 4 on the likelihood ensure that, unlike RWMH or MALA, neither pCN nor pCNL requires its step size β or δ to go to 0 as one includes more coefficients in the KL expansion [9]. To conclude this section, we state the assumptions under which both pCN [9, Thm 6.2] and pCNL are well defined.
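As a minimal sketch of the pCN sampler on the truncated coefficient vector, the following implements the proposal v = √(1 − β^2) u + β ξ and the accept/reject step. The Gaussian log-likelihood at the bottom is a placeholder chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pcn_step(u, log_lik, lam2, beta=0.2):
    """One pCN step for a prior N(0, diag(lam2)) on the KL coefficients.

    Proposal: v = sqrt(1 - beta^2) u + beta xi,  xi ~ N(0, diag(lam2)).
    Accept with probability min{1, exp(l(v) - l(u))}; the prior terms cancel.
    """
    xi = rng.normal(0.0, np.sqrt(lam2))
    v = np.sqrt(1.0 - beta ** 2) * u + beta * xi
    if np.log(rng.uniform()) < log_lik(v) - log_lik(u):
        return v, True
    return u, False

# Placeholder likelihood: noisily observe the first coefficient.
lam2 = np.arange(1, 101) ** -3.0
log_lik = lambda u: -0.5 * ((u[0] - 1.0) / 0.1) ** 2
u = rng.normal(0.0, np.sqrt(lam2))
for _ in range(1000):
    u, accepted = pcn_step(u, log_lik, lam2)
```

Because the prior terms cancel in the acceptance ratio, β can be held fixed as the number of coefficients in lam2 grows; this is the stability under mesh-refinement discussed above.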
Assumptions 3 and 4 [9, Assumptions 6.1] are needed for both pCN and pCNL, while Assumption 5 is only required for pCNL [6]. Assumption 5 reads: for all u ∈ H, CDℓ(u) ∈ Im(C^(1/2)), µ0-almost surely. That is, for any draw u from the prior, the preconditioned derivative of the log-likelihood at u lies in the Cameron-Martin space of the prior with probability 1.

3. Trace-Class Neural Network Priors

The Gaussian prior on H = L^2(X, R) exploits the isometry between the function space L^2(X, R) and the sequence space ℓ^2 via the Karhunen-Loève expansion [20], but the computational complexity of using a basis expansion on a high-dimensional domain is unfeasible, even when using approximate function representations such as in [45].

Neural networks have been shown to have excellent empirical performance in high-dimensional function regression tasks. Bayesian neural networks (BNNs), capitalising on this success, randomise the neural network architecture to yield Bayesian priors for functions. BNNs are popular as they empirically show good results, scale well in the dimension of the function's domain, and more ground is being made on the supporting theory, e.g. on their approximation quality, infinite-width behaviour, etc. [23,32]. A drawback of standard BNNs is currently the limited interpretability of the posterior distributions on the parameter space, as the distribution on each weight degenerates due to the variance scaling inversely with the number of nodes.

We now propose a prior for the parameters that define a neural network which generates almost surely well-defined functions for an infinite-width neural network. This is achieved by parameterising the infinite-width neural network using sequences in the Hilbert space H = ℓ^2, the space of square-summable real-valued sequences, and endowing it with a trace-class Gaussian prior. This then allows inference for such neural networks to be conducted using the dimension-independent MCMC methods discussed in Section 2.2. Through the architecture of the neural network, the prior µ0 over the parameters implicitly defines a prior on the output function of the neural network. Under mild assumptions on the network architecture, and if X is compact, the output functions, which we denote as v, are µ0-almost surely square-integrable over X, and the prior thus naturally defines a prior over L^2(X, R) as well.

Neural network priors are also more flexible compared to the Karhunen-Loève expansion of a Gaussian measure: one neither needs to specify a covariance operator and find its eigenfunctions, nor decide on a basis which is then used to define a Gaussian prior. By giving up the orthogonality of these eigenfunctions (which allows for a rich theoretical analysis), one gains on the performance side; see our numerical comparisons in Section 5.3. We coin the term trace-class neural network prior (tcNN) to emphasise that the prior leads to a well-defined function space prior if the variances of all parameters are appropriately summable. The term is well established for Gaussian measures, which are called trace-class if the eigenvalues of the covariance operator are summable.

Consider an n-layer feed-forward fully-connected neural network, illustrated in Figure 2. The width of layer l is N^(l), the input to the first layer is x ∈ [0, 1]^d, the domain of the function to be approximated, and we let v(x) = f_1^(n+1)(x) ∈ R denote the network's output; for notational convenience we write N^(0) = d and N^(n+1) = 1.
The network is fully described by the following set of real-valued weights and biases,

θ = {w_{i,j}^(l), b_i^(l) : l = 1, ..., n+1; i = 1, ..., N^(l); j = 1, ..., N^(l−1)},    (11)

where we have summarised w and b as θ.

Figure 2. An n-layer feed-forward neural network defining a function v : [0, 1]^d → R.

Given an activation function ζ : R → R, the functions of each layer are

f_i^(l)(x) = ζ( b_i^(l) + Σ_{j=1}^{N^(l−1)} w_{i,j}^(l) f_j^(l−1)(x) ),  with f_j^(0)(x) = x_j.

The prior µ0 is now defined as follows: the individual weights and biases in each layer l are independent and normally distributed, and we emphasise here that the novelty is to choose the variances not uniformly, but to decrease them as one moves into the tail nodes of each layer:

w_{i,j}^(l) ∼ N(0, σ_{w^(l)}^2 / (i j)^α),  b_i^(l) ∼ N(0, σ_{b^(l)}^2 / i^α),    (13)

where the indices i, j, and l are defined in (11), α > 1 is a fixed constant, and σ_{w^(l)}^2 > 0 for each l (to avoid degeneracy of the prior). The reader should note that the prior is invariant with respect to permutation of the input variables, thus avoiding preferential treatment of any of the inputs. The tuning parameter α controls how quickly the magnitude of the weights decreases in the direction of the tail nodes, and is empirically seen to control how 'variable' the sampled function is. If α > 1 we refer to the prior as trace-class, coining the term trace-class neural network prior. If one believes that potentially many nodes with large weights are needed, one should choose α close to 1. See Figure 3 for three representative draws from the neural network prior.

Figure 3. Three representative draws from the neural network prior. The tuning parameter α controls the complexity of the prior functions; the variances in the layers control the overall variance. Note the difference in the magnitudes on the z-axis.

As the next theorem will show, this indeed allows one to define an infinitely wide network by taking N^(l) = ∞, and the variances can be summarised in a diagonal covariance operator C; this prior is well-defined on an infinite-dimensional Hilbert space (isometric to ℓ^2), and can thus be used in the algorithms from Section 2. In practice, one truncates the number of nodes within each layer, as for the priors described before, or one may randomly switch nodes on and off, similarly to the random truncation prior used in [9].

We now define the infinite-width limit of the network. Given an infinite sequence of weights and biases for the first layer, distributed according to the prior (13), i.e. {W_{i,j}^(1), B_i^(1) : i ∈ N}, the functions of the first layer, {F_i^(1) : i ∈ N}, are clearly well defined. We define the functions of the second layer corresponding to an infinite-width first layer to be the following almost sure limits, assuming they exist:

F_i^(2)(x) = lim_{N→∞} ζ( B_i^(2) + Σ_{j=1}^N W_{i,j}^(2) F_j^(1)(x) ).    (14)

Assuming these limits exist, the random functions {F_i^(l)} of the deeper layers can be defined similarly. The functions in each layer of a finite-width network are denoted with lower case letters to clearly distinguish them from their infinite-width versions. For the output layer, the finite network gives v(x) = f_1^(n+1)(x), while the infinite-width network gives V(x) = F_1^(n+1)(x). In what follows, we will often write v(x) = v_θ(x) to emphasise the dependence of the function samples on the weights and biases.

In order to simplify the presentation of the main results, we list a set of properties which will be shown to hold for our BNN prior; in all of them, expectations are taken with respect to the prior on the parameters θ of the neural network, and in particular the properties hold for v(x) and for V(x). The first declared property ensures the output functions are appropriately finite in value and moments, while the second ensures a degree of smoothness. We now state a theorem which shows that the proposed prior satisfies the declared properties.
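For concreteness, the following sketch draws one function from a single-hidden-layer version of this prior, using the (i j)^(−α) weight-variance and i^(−α) bias-variance decay as reconstructed in (13); treat it as an illustration of the construction rather than the exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_tcnn(x, width=200, alpha=1.001, s2_w1=1.0, s2_b1=1.0, s2_w2=1.0, s2_b2=1.0):
    """Draw v(x) from a one-hidden-layer trace-class NN prior at inputs x (n, d).

    Weight variances decay into the tail nodes: sigma^2 / (i*j)^alpha for
    weights and sigma^2 / i^alpha for biases, following the decay in (13);
    alpha > 1 makes the variance sequence summable (trace-class).
    """
    n, d = x.shape
    i = np.arange(1, width + 1)[:, None]  # hidden node index
    j = np.arange(1, d + 1)[None, :]      # input index
    W1 = rng.normal(0.0, np.sqrt(s2_w1 / (i * j) ** alpha))   # (width, d)
    b1 = rng.normal(0.0, np.sqrt(s2_b1 / i[:, 0] ** alpha))   # (width,)
    W2 = rng.normal(0.0, np.sqrt(s2_w2 / i[:, 0] ** alpha))   # output weights decay in the hidden index
    b2 = rng.normal(0.0, np.sqrt(s2_b2))                      # single output bias
    h = np.tanh(x @ W1.T + b1)            # 1-Lipschitz activation with zeta(0)=0
    return h @ W2 + b2

x = np.random.default_rng(3).uniform(size=(50, 2))
v = sample_tcnn(x)  # one prior function draw evaluated at 50 points
```

Because the variance of node i does not depend on the total width, adding nodes leaves the prior on the existing nodes unchanged, which is exactly the property the O(1/N^(l)) scaling of standard BNN priors lacks.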
We now state a theorem which shows that the proposed prior satisfies the declared properties. To this end, we use an activation function ζ : R → R satisfying the following condition, which implies that |ζ(x)| ≤ |x| for all x ∈ R, and that ζ is differentiable almost everywhere with derivative essentially bounded by 1:

Assumption 9. ζ is Lipschitz continuous with Lipschitz constant 1, and ζ(0) = 0.

Theorem 1. Under Assumption 9, the functions of the layers of the finite-width neural network satisfy Properties 6, 7, and 8. In addition, if α > 1/2, the functions on every layer of the infinite-width neural network exist almost surely and satisfy Properties 6 and 7, with the functions F^(l)_i(x) and V(x) defined as in (14). In addition, if the prior is trace-class (i.e. α > 1), Property 1 is satisfied.

The proof can be found in Appendix B.2.

Identifiability Issues and Remedies. It is well known that the output function of a standard neural network does not depend on the labelling of the functions within each layer. However, unlike for a prior with uniform variances within each layer, swapping nodes f^(l)_i and f^(l)_{i+1} (effectively by swapping their corresponding weights and biases) leads from θ to a new ϑ with a different prior weight, and thus avoids the label-switching problem. To facilitate faster mixing by allowing jumps between these different configurations, we propose Algorithm 1, which can be found in Appendix A. The algorithm is well defined for finite-width networks, in which case the acceptance ratio is given by a(θ, ϑ) = µ_0(ϑ)/µ_0(θ), but not for infinite-width networks, see Lemma 6 in the Supplementary Material; this exemplifies the extra care needed when defining MCMC moves in the infinite-dimensional setting. One remedy is to swap not all the weights of the two selected nodes but only blocks of them; however, we did not pursue this approach.

3.2. Illustrative Groundwater Flow Example. Before moving on to more challenging examples, we present an illustrative example and compare the performance of the neural network prior to the Gaussian prior presented previously. The example, taken from [6], aims to recover the permeability of an aquifer. The PDE −∇·(exp(u(x))∇p(x)) = 0 connects the log-permeability u of a porous medium to the hydraulic head function p, with mixed boundary conditions as specified in [6]. To enforce positivity of the permeability, the prior is defined on the log-permeability u(x). We compare two priors. The first is a trace-class neural network prior with 100 nodes, tanh activation function, and a four-dimensional input space with inputs (x_1, x_2, sin(x_1), sin(x_2)). We set the tuning parameters to α = 1.001, σ²_{w^(1)} = σ²_{b^(1)} = 100, σ²_{w^(2)} = 1/30, and σ²_{b^(2)} = 1/10. The second prior is a Gaussian measure on [0,1]² with a Fourier orthonormal basis and corresponding eigenvalues defined using double indices i = (i_1, i_2), cf. (5) and (6). In the experiments, we truncated the basis expansion using 1 ≤ i_1, i_2 ≤ 25, which gives a similar number of parameters as used in the neural network example. The true u* is defined using the same basis. The simulated data are 33 noisy observations of the true hydraulic head function p* at various x positions, y = p*(x) + ε, where ε ∼ N(0, 0.01²). The 'true' head function p* is obtained by solving the forward PDE on a 40 × 40 grid. We ran pCN using both priors, solving the forward problem on a 20 × 20 grid. Both experiments used a similar number of iterations and stored 1000 MCMC samples to obtain the mean estimates in Figure 4.
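For reference, a single pCN update of the kind used here can be sketched as follows; this is a generic sketch assuming a centred Gaussian prior with diagonal covariance (represented by the per-coordinate standard deviations), with hypothetical function names:

```python
import numpy as np

def pcn_step(theta, log_lik, beta, prior_std, rng):
    """One preconditioned Crank-Nicolson update.

    The proposal sqrt(1 - beta^2)*theta + beta*xi, with xi ~ N(0, C), leaves
    the Gaussian prior N(0, C) invariant, so the acceptance ratio involves
    only the log-likelihood values. prior_std holds the per-coordinate
    standard deviations of the (diagonal) prior covariance C.
    """
    xi = rng.normal(size=theta.shape) * prior_std
    proposal = np.sqrt(1.0 - beta**2) * theta + beta * xi
    if np.log(rng.uniform()) < log_lik(proposal) - log_lik(theta):
        return proposal, True   # accepted
    return theta, False         # rejected
```

Because the prior terms cancel in the acceptance ratio, the step size beta can be held fixed as the number of parameters grows, which is the dimension-robustness property examined empirically in Section 5.2.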
The results in Figure 4 are less insightful and interpretable than those we will see in the next subsection, as the few observations we have are related to the target function only through the PDE. A better comparison between, and validation of, the different priors is through visual posterior predictive checks, as shown in Figure 5.

3.3. Ability to Approximate Complicated Functions. To show that the trace-class neural network prior is able to visually recover relatively complicated functions, we consider a setting in which the true u* is itself observed, with noise, on every point of a grid. The true u* and the parameters of the prior used here are the same as in the example above. As Figure 6 shows, the neural network prior is able to approximate the true u* when given many observations, in this example 400.

Figure 6. The neural network estimates the true u*(x), which is noisily observed on every grid point x of a 20 × 20 grid. In real applications, only few observations will be available; this example simply illustrates that many observations lead to close approximations for the trace-class neural network prior. Note that the functions displayed are shown on a fine grid; a coarse 20 × 20 sub-grid was used to generate the observations.

4. Bayesian Learning of the Optimal Value Function

The solution to a stochastic optimal control problem is known as the optimal value function, which can be found through Dynamic Programming (DP), discussed in Section 4.1. Reinforcement Learning is a popular and practical algorithmic approach for solving stochastic optimal control problems [47]. It finds the best control, which is a mapping from states to actions, in an online manner by using noisy estimates of the mathematical expectations to be maximised in DP. Online here refers to the use of the current best learnt control to actuate the system to its next state, which is also accompanied by a corresponding stochastic reward. This interaction with the system yields a stochastic process of actions, states and rewards with which DP's mathematical expectations are estimated. Automating a task can be made easier through the use of expert demonstrations, an approach known as Inverse Reinforcement Learning; see e.g. [39] for more nuanced details. Given the observed states, actions and rewards from an expert, we can exploit the mathematical formalism of Markov Decision Processes to relate this "data" to the optimal value function of the expert. In a Bayesian approach to this problem, one defines a prior on a function space that includes all admissible value functions. The data observed from the expert's behaviour can then be used, through a suitably defined likelihood [39], to infer the expert's value function: having the expert's value function at hand allows one to mimic their behaviour and hence defines an approach to automation. For discrete state spaces, [44] provide a method to quantify the uncertainty of the estimated value function. Here, we generalise those ideas to continuous state spaces by using the priors introduced in the previous section.

4.1. Setup. A Markov Decision Process is defined by a controlled Markov chain {X_t}_{t∈N} called the state process, the control process {A_t}_{t∈N}, and an optimality criterion. The state process takes values in a bounded set X ⊂ R^d; for simplicity we will assume the d-dimensional hypercube X = [0,1]^d. The control process is A-valued, where A = {1, ..., M} is a finite set.
Given states X_{1:t} = x_{1:t} and actions A_{1:t} = a_{1:t} up to time t, the next state is distributed as

X_{t+1} | (X_{1:t} = x_{1:t}, A_{1:t} = a_{1:t}) ∼ p(· | x_t, a_t),   (15)

where for any state-action pair (x_t, a_t), p(· | x_t, a_t) is a probability density. In some applications the state dynamics are deterministic, and thus there exists a map T such that

X_{t+1} = T(x_t, a_t).   (16)

The action process depends on a policy µ : X → A, which is a deterministic mapping from the state space into the action space: A_t | (X_{1:t} = x_{1:t}, A_{1:t−1} = a_{1:t−1}) ∼ δ_{µ(x_t)}(·). As there are many possible mappings µ : X → A, we assume the agent executes a policy that is in some way optimal. To be more precise, let r : X → R be the reward function; the accumulated reward given a policy µ and an initial state x_1 is

C_µ(x_1) = E[ Σ_{t=1}^{∞} β^{t−1} r(X_t) | X_1 = x_1 ],

where β ∈ (0,1) is a discount factor. The discount factor serves two purposes: it ensures that the expectation is well defined, and also that early actions are more important (in terms of the reward they add to the total) than later ones; see [28] for a more detailed discussion. A policy µ* is optimal if C_{µ*}(x_1) ≥ C_µ(x_1) for all (µ, x_1), and the optimal policy can be found through the solution of Bellman's fixed-point equation [3]. The function v : X → R which is the fixed-point solution to

v(x) = r(x) + β max_{a∈A} E[ v(X_{t+1}) | X_t = x, A_t = a ]

is called the optimal value function [4], and the corresponding optimal policy is

µ*(x) = argmax_{a∈A} E[ v(X_{t+1}) | X_t = x, A_t = a ];   (17)

that is, the optimal action at any state is the one that maximises the expected value function at the next state.

4.2. Likelihood Definition. The above decision-making process gives optimal actions, but a human expert may occasionally pick non-optimal ones. To model imperfect action selection, noise is added to (17). At each time step the chosen action is a random variable given by

A_t = argmax_{a∈A} { E[ v(X_{t+1}) | X_t = x_t, A_t = a ] + ε_t(a) },   (18)

where we assume ε_t ∼ N(0, σ² I_{M×M}) for some σ > 0. The Gaussian choice simplifies numerical calculations, and it is reasonable to assume that the noise terms for different actions are independent and identically distributed, but this assumption can be relaxed. From now on, we will assume that the state dynamics are deterministic, in which case the action selections occur according to

A_t = argmax_{a∈A} { v(T(x_t, a)) + ε_t(a) }.   (19)

Our goal from now on will be to recover the optimal value function, and quantify the uncertainty thereof, by using the Hilbert space MCMC methods and the priors discussed in Sections 2 and 3. The data consist of a collection of state-action pairs y = {y_t}_{t=1}^T = {(x_t, a_t)}_{t=1}^T, and the aim is to infer the value function (and thus the policy, through (17)) that leads to the actions a_t for the current states x_t. Using the noisy action selection procedure (18), the likelihood is

L(y | v, σ) = Π_{t=1}^{T} p(a_t | x_t, v, σ) = Π_{t=1}^{T} p(a_t | v_t, σ),   (20)

where the second equality follows by defining the vector v_t to contain the relevant evaluations of the value function needed to calculate the likelihood at y_t: using equation (19), the k-th entry of v_t is the evaluation of the value function v(·) at the location T(x_t, k), corresponding to starting at x_t and taking action a = k ∈ A. For a single observation y_t = (x_t, a_t), we now drop the subscript t to simplify notation, and assume without loss of generality that the optimal action is a = 1, permuting the labels if necessary. The probability p(a = 1 | v, σ) (where v is a vector and p(a | v, σ) is a probability mass function) can be computed using (18) by

p(a = 1 | v, σ) = ∫ 1{ v_1 + e(1) > v_j + e(j) for all j ≠ 1 } N(e; 0, σ² I_{M×M}) de.   (21)

To compute this probability, we make use of the fact that the value of the integral is the same as the probability P(U_1 > U_j, ∀ j ≠ 1), where U_k ∼ N(v(T(x,k)), σ²). This can be computed numerically using the pdf φ_1(·) of U_1 and the cdfs Φ_j(·) of the remaining random variables U_j:

p(a = 1 | v, σ) = ∫_{−∞}^{∞} φ_1(u) Π_{j=2}^{M} Φ_j(u) du,   (22)

where v_j is v(T(x, j)).
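The one-dimensional integral (22) is straightforward to evaluate with standard quadrature. A minimal numerical sketch (function name hypothetical):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def action_prob(v, sigma, a=0):
    """Probability that action a attains the maximum of v_k + eps_k, with
    eps_k ~ N(0, sigma^2) i.i.d., via the integral of
    phi_a(u) * prod_{j != a} Phi_j(u), cf. (22). The vector v holds the
    evaluations (v(T(x,1)), ..., v(T(x,M)))."""
    v = np.asarray(v, dtype=float)

    def integrand(u):
        val = norm.pdf(u, loc=v[a], scale=sigma)
        for j in range(len(v)):
            if j != a:
                val *= norm.cdf(u, loc=v[j], scale=sigma)
        return val

    prob, _ = quad(integrand, -np.inf, np.inf)
    return prob

print(action_prob([1.0, 0.5, 0.2], sigma=0.1))  # close to 1: action 0 dominates
```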
If the noise in (18) is not diagonal, this simplification cannot be made, and the integral (21) is harder to compute. More advanced numerical methods exist to efficiently calculate such integrals using Monte Carlo simulation [19].

4.3. Likelihood Gradient. Following from (22), we can compute the gradient of the likelihood at a data point (x_t, a_t) with respect to v_t. We again assume without loss of generality that a_t = 1 ∈ A (swapping the labels of the first and the best action if necessary), and drop the subscript t, emphasising that v_k is the k-th entry of the vector v = (v(T(x,1)), ..., v(T(x,M))). For k ≠ 1, the partial derivatives with respect to v_k are given by

∂p(a = 1 | v, σ)/∂v_k = − ∫_{−∞}^{∞} φ_1(u) φ_k(u) Π_{j∉{1,k}} Φ_j(u) du,

where the last identity follows from the product of two Gaussian pdfs; the derivative with respect to v_1 then follows from translation invariance (see below). This allows us, when using the neural network prior, to compute the gradient of the log-likelihood with respect to the parameters θ of the neural network using backpropagation. We emphasise that the vector v = v(θ) depends on these parameters, justifying the calculation of the Jacobian D_θ v. Using the chain rule, we get

∇_θ log p(a_t | v_t, σ) = (D_θ v_t)^T ∇_{v_t} log p(a_t | v_t, σ).   (26)

To get the entire gradient of the log-likelihood, we simply sum over all data points,

∇_θ ℓ(y | v, σ) = Σ_{t=1}^{T} ∇_θ log p(a_t | v_t, σ),

where we only need to keep in mind the permutation of the actions when using (26). When calculating (26), we note that 1 · ∇_v p(a | v, σ) = 0 by translation invariance of v: L(y | v, σ) = L(y | v + c, σ) for any constant function c, i.e. c(x) = c(x') for all x, x' ∈ X. The integrals involved in the gradient are in practice calculated numerically, and the arising errors may accumulate and cause numerical instabilities. To avoid these, one can ensure that the mean of these gradients is 0 by replacing the computed gradient g = ∇_v log p(a | v, σ) with g − (1 · g / M) 1, a modification which we observed to enhance the performance in practice.
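A small sketch of this zero-mean correction, here combined with a finite-difference gradient for clarity (the paper computes the integrals directly; the helper below reuses the hypothetical action_prob from the previous snippet):

```python
import numpy as np

def grad_log_prob(v, sigma, a=0, h=1e-5):
    """Finite-difference sketch of the gradient of log p(a|v, sigma) w.r.t. v,
    followed by the zero-mean correction described above: project out the
    constant direction so that 1 . grad = 0 holds exactly despite numerical
    integration error. Assumes action_prob from the previous snippet."""
    v = np.asarray(v, dtype=float)
    base = np.log(action_prob(v, sigma, a))
    g = np.empty_like(v)
    for k in range(len(v)):
        vk = v.copy()
        vk[k] += h
        g[k] = (np.log(action_prob(vk, sigma, a)) - base) / h
    return g - g.mean()  # enforce translation invariance numerically
```

The resulting vector would then be pulled back through the Jacobian D_θv, as in (26), when gradient-informed proposals such as pCNL are used.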
The following theorem justifies the use of this likelihood in the function space MCMC setting; see [51, Chapter 12] for a definition of reproducing kernel Hilbert spaces (RKHS).

Theorem 2. The log-likelihood ℓ(y | v, σ) = log L(y | v, σ) defined in (20) satisfies Assumptions 3 and 4 if v ∈ H = L²_K, where L²_K is any RKHS defined on L².

The proof can be found in Appendix B.4. We also note that, when using the trace-class neural network prior from Section 3, the statements remain true if the likelihood is seen as a function of the parameters θ of the neural network (Lemma 3); the proof can be found in Appendix B.5. We now investigate under which conditions on the likelihood one may use the preconditioned Crank–Nicolson Langevin algorithm with the trace-class neural network prior, which in particular requires the gradient-informed proposals to be in the Cameron–Martin space of the prior. We will then remark on how this applies to the noisy action selection likelihood (19). For Theorem 4, assume the log-likelihood ℓ(y | v, σ) of the mapping x → v(x) ∈ R is of the form

ℓ(y | v, σ) = Σ_{t=1}^{T} ℓ̃(a_t, v(x¹_t), ..., v(x^M_t))   (29)

for some function ℓ̃ : A × R^M → R, where a data point y_t = (a_t, x¹_t, ..., x^M_t) is comprised of a_t ∈ A and M points in the domain of v, i.e. x^i_t ∈ X. Note that such a likelihood clearly encompasses (20). In the theorem below, we further assume uniformly bounded partial derivatives of the log-likelihood with respect to v(x^i_t) for any t and i. Even with this assumption, to verify the assertion of Theorem 4 we need to establish the behaviour of the moments of ∂v(x)/∂W^(l)_{i,j} and ∂v(x)/∂B^(l)_i. The proof can be found in Appendix B.6. The proposed stochastic control likelihood given in (19) does not satisfy the assumption of the theorem, since its partial derivatives are unbounded. To circumvent this, we apply a saturation function s to v(x^i_t), and employ (19) with s(v(T(x_t, a))) in place of v(T(x_t, a)). Lastly, we note that a result similar to Theorem 4 can be shown for the Hilbert space L².

5. Numerical Illustrations

This section aims to validate the theory and highlight the applicability of the proposed priors and methodology. In particular, Section 5.2 confirms that, empirically, as the layer width of the trace-class neural network prior grows, the acceptance probability does not go to 0, a property known as 'stability under mesh-refinement' or 'dimension-independence'. Section 5.3 compares the proposed trace-class neural network (tcNN) prior to a standard BNN prior and a KL prior, and highlights that, unlike the KL prior, the tcNN is scalable to higher-dimensional domains. Section 5.4 shows that the posteriors can learn and mimic policies, thus justifying the use of these priors in the reinforcement learning setup. The code is available at https://github.com/TorbenSell/trace-class-neural-networks.

Throughout, we use the Fourier basis (5) as the series expansion of choice when using the KL-based prior, as this proved to be a good choice for reinforcement learning problems [30]. As tuning parameter for the corresponding eigenvalues we set α = 2 in (6), forcing the samples to be very smooth, which we found to be a sensible choice in the discussed control problems. For the tcNN prior we used fully connected layers with tanh activation functions, and set σ²_{b^(l)} = σ²_{w^(l)} = 2 and α = 1.5 in all the experiments; this again results in smooth sample functions. For the standard BNN we used the same architecture and set α = 0 to get a constant variance sequence; in Section 5.2 we set σ²_{b^(l)} = σ²_{w^(l)} = 10/(3N^(l−1)) to highlight the dependence on the layer width, and in Sections 5.3 and 5.4 we set σ²_{b^(l)} = σ²_{w^(l)} = 1/3.

5.1. Control Problems: Setup. We set the scene by briefly describing the setup of the control problems used in the experiments; a detailed description can be found in the Supplementary Material. The first example is the popular mountain car problem. A car is to drive up a mountain slope to reach a flag, but needs to gain momentum first by driving up the opposite mountain slope, thus initially driving away from the goal; see the left panel of Figure 7 for an illustration. The state space is the two-dimensional domain X = [−1.2, 0.6] × [−0.07, 0.07], describing the vehicle's position and velocity, and the action space contains three possible actions, A = {−1, 0, 1}, representing exerting a constant force to the left, no force, and exerting the same constant force to the right, respectively. The likelihood (22) arises from T = 50 observations of state-action pairs; the data generating process is described in the Supplementary Material. The noise level in the likelihood is set to σ = 0.1.

The second example is the HalfCheetah example from the MuJoCo library [49], where an agent controls a two-dimensional cheetah with the aim to 'run' as fast as possible. For this example, the state space X is 17-dimensional and the action space contains 8 possible actions. The likelihood (22) arises from T = 100 observations; we again refer to the Supplementary Material for the data generating process, and set the noise level in the likelihood to σ = 0.1. The right panel of Figure 7 shows the HalfCheetah.

Figure 7. Left: the setup for the mountaincar example. The car's goal is to reach the flag in as few steps as possible.
The slope on the right is too steep to simply drive up the mountain; the car therefore has to gain momentum by going up the hill on the left first. Right: the HalfCheetah has states x_t in R^17. Its goal is to run to the right as quickly as possible, while not moving its body parts more than necessary.

5.2. Dimension Independence of the Trace-Class Neural Network Prior under Mesh-Refinement. We ran pCN for different network widths on the mountain car example. The network used has three hidden layers. As stated before, the tuning parameters in the prior are set to σ²_{b^(l)} = σ²_{w^(l)} = 2 and α = 1.5. Table 1 displays the acceptance probability of pCN for a fixed step size when targeting the posteriors arising from the mountain car likelihood with a trace-class neural network prior and also with a standard Bayesian neural network prior. The latter is characterised by setting α = 0 in (13), resulting in a constant sequence of variances per layer. The other tuning parameters for the standard Bayesian neural network were set to σ²_{b^(l)} = σ²_{w^(l)} = 10/(3N^(l−1)). The step sizes chosen were β = 1/10 for the tcNN and β = 1/7 for the standard BNN.

Table 1. Acceptance ratios in % for both the trace-class neural network (tcNN) and the standard Bayesian neural network (BNN), and total number of parameters (weights and biases), for different layer widths. Three fully connected layers were used, and pCN was run for 3 hours for each choice of N^(l). Notably, the acceptance probability for the trace-class neural network proposed in this paper does not degenerate as more nodes are included per layer. Note that in the limit, only the tcNN is well defined on the parameter space.

5.3. Comparison of Priors. To compare the trace-class neural network prior to the Karhunen–Loève prior, we used a large number of parameters for each, such that the error from truncating after finitely many nodes, or finitely many terms, is negligible. For both the mountaincar and the HalfCheetah example, we used the same trace-class neural network prior, with 3 hidden layers and 100 nodes per layer, resulting in 20,601 parameters to be estimated for the mountaincar example and 22,101 for the HalfCheetah example. For the Karhunen–Loève prior in the mountaincar example we set the truncation parameter to k_max = (70, 70) for (5) with eigenvalues (6) (recall that here α = 2), resulting in a total of 19,880 coefficients to be estimated. For the KL prior in the HalfCheetah example we used approximation (3), and otherwise the same eigenfunctions and eigenvalues; due to the higher domain dimension d = 17, one would have to estimate 2,667,980 parameters. As this is too memory-expensive for the computer used for the experiments, we used k_max = (10, 10) in the HalfCheetah example, resulting in 54,740 parameters to be estimated. Note that this increase in the number of parameters to be estimated occurs despite the approximation (3) being used, and despite the expansions additionally being truncated after fewer terms, highlighting the benefits of the dimension-robustness of the trace-class neural network prior.

To assess the quality of the priors, we ran pCN using 50 (for the mountaincar) and 100 (for the HalfCheetah) data points. For the mountain car example, we fixed five test points z_j, j = 1, ..., 5, independent of the training data, and compared the posteriors by evaluating v(z_j) at these new locations as estimated through the MCMC runs. The top row in Figure 8 shows the resulting uncertainty estimates.
As the value function is invariant under translations, we adjusted all samples such that they take the value 0 at the state to which the optimal action a_opt takes one:

v_centered = v − v(T(x, a_opt)) · 1,   (30)

where 1 denotes a vector of ones. For the HalfCheetah example, we looked at one test point for illustration, see the bottom row in Figure 8, and summarised the performance on another 100 test points (independent of the training data) in Table 2, where we compared how the respective samples from the posterior do, as well as how the mean of all samples from the posterior in Section 5.4 (with a smaller number of nodes for the tcNN prior, and fewer active terms in the KL prior) does at predicting the correct action (last two columns). Not surprisingly, the mean function is better at picking the correct action. Details on the data generating mechanism can be found in the Supplementary Material.

Table 2. Actions picked using Equation (18) with v a posterior sample or the estimated posterior mean. The trace-class neural network prior outperforms the approximate KL and the standard BNN prior. The optimal action is computed using the same policy used to simulate data, see Section 5.1, the test points being chosen at random from a representative episode of a HalfCheetah run. A random prediction would result in a success rate of 12.5%.

Figure 8. Top row: mountain car example. The reader should note that the KL, BNN, and tcNN posteriors behave similarly in that they are uncertain in the first three states, and very decisive in the last two states. Bottom row: HalfCheetah example. The optimal action is the first one in all three plots, and samples are again normalised using (30) such that they take the value 0 at the state the optimal action takes one to. The BNN and tcNN posteriors correctly estimate the optimal action; the KL posterior does not.

5.4. Ability to Learn a Policy. To assess whether the posteriors can truly learn an agent's behaviour, we used the priors with a smaller number of parameters, and stored 1000 samples for each posterior. We then used these samples to obtain a mean value function which was used for decision making. For the trace-class neural network prior we used 3 layers with 10 nodes per layer for both examples (resulting in 261 parameters for the mountaincar example and 411 for the HalfCheetah); for the KL prior we used k_max = (5, 5) for the mountaincar example (giving a total of 224 parameters), and k_max = (5, 5) in the HalfCheetah example (a total of 8,730 parameters). While the number of parameters can theoretically be chosen infinitely large, we truncated the layers and expansions earlier as we only had a very limited computational budget available. In general, where to truncate is an interesting model choice problem, and we found that for our problems the parameters described above yield very good approximations to a model with many more parameters. We thus chose to run the simplified model rather than a model with many more parameters, allowing many more stored MCMC posterior samples (1000 in this case) in the same wall-clock time. The results are summarised in Figure 9.

6. Conclusion and Outlook

This paper addresses the problem of effective Bayesian inference for unknown functions with higher-dimensional domains. Unlike priors which require an orthogonal basis for the function space and scale exponentially in the domain dimension, our proposed trace-class neural network prior easily scales to higher-dimensional domains, as the dependence on the domain dimension is linear.
When using the pCN sampling method, this prior also satisfies the desired property of being stable under mesh-refinement, in the sense that the acceptance probability of pCN does not degenerate to 0 when using more parameters for the neural network. Various questions remain unanswered, and interesting directions for future work open up. For example, what are suitable generalisations of the proposed prior, e.g. heavy-tailed or hierarchical ones? What are the optimal settings for the tuning parameters σ²_{w^(l)}, σ²_{b^(l)} and α? Can one obtain contraction rates to ensure the concentration of the posterior samples around the true functions? A first idea here is to exploit the various generalisations of the universal approximation theorem [43], and combine them with the proof methodology used in this paper. We further introduced a likelihood suitable for Bayesian reinforcement learning where the underlying Markov decision process has a continuous state space, and thus the unknown value function to be estimated has domain R^d as opposed to a discrete set. An interesting research direction is to generalise this to continuous action spaces as well. Finally, we underscored the theory with numerical experiments illustrating the applicability of the prior for various control problems. It would also be interesting to evaluate the tcNN prior in other applied settings beyond control.

Appendix A. Algorithm 1 (NodeSwap move; fragment):
  l ∼ Unif(n)           — sample a random layer
  4: i ∼ Geom(α⁻¹)       — sample a random node
  5: while i ≥ N^(l) do   — repeat until we have a valid node index
  6:     i ∼ Geom(α⁻¹)
  7: end while
  8: ...

Appendix B. Before turning to the proofs of the lemmas and theorems from the main paper, consider the n-layer fully connected feed-forward neural network in (12). When the layers have infinite width, we delineate the domain of the sequences that define each layer separately. For a layer 1 < l < n + 1, let

H^(l) = { (w^(l), b^(l)) : Σ_{i,j} (w^(l)_{i,j})² + Σ_i (b^(l)_i)² < ∞ }.   (31)

(We omit the obvious modification of the sequence spaces for layers 1 and n + 1.) The entire network is then parameterised by θ = (w, b) ∈ H := H^(1) × ··· × H^(n+1). This domain is chosen because it has full measure under our Hilbert space Gaussian prior, and it also results in the infinite-width functions in (14) being well defined almost surely.

B.1. Lemma 2.

Lemma 5. Consider the n-layer fully connected feed-forward neural network in (12). When the layers have infinite width, their weights and biases can be equivalently parameterised by ℓ² = {(a_1, a_2, ...) ∈ R^N : Σ_{i=1}^{∞} a_i² < ∞}.

Proof. (31) is an instance of the Hilbert space ℓ², since N × N is countable, and any enumeration (e.g. the 'diagonal' enumeration method) of H^(l)_w mapping its elements to infinite sequences of the form (a_1, a_2, ...) yields square-summable sequences. Similarly, the Cartesian product of two ℓ² spaces is again an instance of ℓ², regardless of how the two sequences are merged into one. Finally, by the same arguments, H = H^(1) × ··· × H^(n+1) is also an instance of ℓ².

B.2. Proof of Theorem 1. We prove the claims in the theorem for the infinite-width case; the finite-width case follows by omitting the limit arguments. Lemma 5 shows that the weights and biases of the infinite-width, finite-depth neural network can be equivalently parameterised by ℓ². As the biases and weights of each layer are independent zero-mean Gaussian random variables, and the variances form a summable sequence when α > 1, the prior µ_0 is a trace-class Gaussian prior on ℓ², and thus Property 1 is satisfied.
To see Property 6, by looking at the first layer we can easily check that for fixed x ∈ [0,1]^d, f^(1)_i(x) is a centered Gaussian random variable, and the claim follows by noting that E B^(1)_i = E W^(1)_{i,j} = 0. For the higher layers we use induction over l, and define the following random variables, for which the i-th function of layer l is truncated after k terms:

F^(l),k_i(x) = B^(l)_i + Σ_{j=1}^{k} W^(l)_{i,j} ζ(F^(l−1)_j(x)).

(Note that, with a slight abuse of notation, we write F^(1)_j even for the functions on the first layer, which are defined by finitely many parameters.) By Assumption 9,

E[ζ(F^(l−1)_j(x))²] ≤ E[(F^(l−1)_j(x))²] < ∞,

where the last inequality holds as F^(l−1)_j(x) is L²-bounded by the induction hypothesis. We now show that F^(l),k_i(x) converges to F^(l)_i(x) as k → ∞, almost surely and in L², by applying the L² martingale convergence theorem. We thus need to show that S_k(x) := Σ_{j=1}^{k} W^(l)_{i,j} ζ(F^(l−1)_j(x)) is an L²-bounded martingale; it is a martingale since W^(l)_{i,j} and ζ(F^(l−1)_j(x)) are independent, the expectation of the former is centered, and that of the latter is finite. Additionally, by exploiting the independence and Assumption 9, we get

E[S_k(x)²] = Σ_{j=1}^{k} E[(W^(l)_{i,j})²] E[ζ(F^(l−1)_j(x))²] ≤ (σ²_{w^(l)} σ²_{l−1} / i^α) Σ_{j=1}^{k} 1/j^{2α}.

This series converges for α > 1/2; including the bias variance, we define the resulting bound for i = 1 as σ²_l. Thus, S_k is indeed an L²-bounded martingale, and trivially E F^(l)_i(x) = 0, proving Property 6.

We next show Property 7. For the first layer, we use independence to get

E[(F^(1)_i(x) − F^(1)_i(x'))²] = Σ_{j=1}^{d} E[(W^(1)_{i,j})²] (x_j − x'_j)² ≤ (c_1 / i^α) ‖x − x'‖².

For the subsequent layers, we again use induction over l. We define S_k(x) as before and, using the induction hypothesis and Assumption 9, check that

E[(S_k(x) − S_k(x'))²] = Σ_{j=1}^{k} E[(W^(l)_{i,j})²] E[(ζ(F^(l−1)_j(x)) − ζ(F^(l−1)_j(x')))²] ≤ (σ²_{w^(l)} c_{l−1} / i^α) ( Σ_{j=1}^{∞} 1/j^{2α} ) ‖x − x'‖²,

such that the claim follows upon defining c_l = σ²_{w^(l)} c_{l−1} Σ_{j=1}^{∞} 1/j^{2α}, and noting that by Fatou's lemma the same bound holds in the limit k → ∞. Lastly, recall that by Assumption 9 the activation functions are Lipschitz continuous, and thus so is v, as a composition of Lipschitz functions. The claim of Property 8 for the finite-width case now follows since, µ_0-almost surely, v is Lipschitz continuous and thus differentiable almost everywhere by the Rademacher theorem [17, Theorem 3.1.6].

B.3. Lemma 6. In networks with small widths, Algorithm 1 gave acceptance rates of around 30% (for N^(l) = 10), which quickly declined as we included more nodes (e.g. 1% acceptance for N^(l) = 100). This suggests that the NodeSwap algorithm is not well defined in the infinite-width limit, and this is indeed the statement of the next lemma. We will from now on write fraktur letters for the swapped configurations.

Lemma 6. The NodeSwap move, swapping nodes f^(l)_j and f^(l)_{j+1}, is not well defined in the infinite-width limit. For the finite-width network, the acceptance ratio is given by a(θ, ϑ) = µ_0(ϑ)/µ_0(θ).

Proof. By [48], one needs to check that the measures η(dθ, dϑ) := µ(dθ)Q(θ, dϑ) and η^T(dθ, dϑ) := η(dϑ, dθ) = µ(dϑ)Q(ϑ, dθ) are mutually absolutely continuous on a set R ∈ (E × E, E ⊗ E), and mutually singular on R^C, where Q is the deterministic transition kernel and (E, E) is the measurable space on which µ and Q are defined. The (deterministic) transition kernel Q maps θ to ϑ by swapping the nodes f^(l)_j and f^(l)_{j+1} (or, more precisely, their associated weights and biases) with a probability that is well defined as N^(l) → ∞ and independent of θ, such that it suffices to show that the measures µ(dθ) and µ^T(dθ) = µ(dϑ) are mutually absolutely continuous on a set R_1 ∈ (E, E), and mutually singular on R_1^C. The likelihood is also invariant under the transformation θ → ϑ, and as it is integrable with respect to the prior by the assumptions in Section 2.2 [46], we only need to examine whether the Gaussian measures µ_0(dθ) and µ_0^T(dθ) are mutually absolutely continuous. Note that we can write these as µ_0 = N(0, C) and µ_0^T = N(0, C̃), with diagonal (by assumption) covariance operators C and C̃, where the latter arises from swapping the variances associated with the swapped nodes.
To see what is going on exactly, we now change to the neural network notation, where the variances under C for the individual weights and biases are given by (13). The variances under C̃ are the same for most weights and biases; changed are only those associated with the swapped nodes (recall that we swap nodes f^(l)_j and f^(l)_{j+1}). The only changed variances are those of the weights going into the nodes, the weights leaving the nodes, and the biases of the nodes, which are swapped correspondingly (see Figure 2 for an illustration). We apply the Feldman–Hajek theorem [11, Theorem 2.25] to prove that these two Gaussian measures are mutually singular, by showing that the operator (C^{−1/2} C̃^{1/2})(C^{−1/2} C̃^{1/2})* − I is not a Hilbert–Schmidt operator. Due to the diagonality of C and C̃, with eigenvalues λ_i and λ̃_i respectively, the operator would be a Hilbert–Schmidt operator if

Σ_i (λ̃_i / λ_i − 1)² < ∞.

We only need to check those terms where λ̃_i ≠ λ_i. Looking only at the eigenvalues corresponding to the weights going into the swapped nodes, and switching to the neural network parametrisation, each of the infinitely many weights going into a swapped node contributes the same nonzero term to the above sum, such that the sum diverges, the operator is not a Hilbert–Schmidt operator, and the Gaussian measures are mutually singular.

For the interested reader, note that the other two conditions of the Feldman–Hajek theorem [11, Theorem 2.25] are satisfied. First, we show that there exist constants L and U such that for any θ ∈ ℓ²,

L ‖θ‖²_C ≤ ‖θ‖²_{C̃} ≤ U ‖θ‖²_C, which is equivalent to L ≤ λ̃²_i / λ²_i ≤ U for all i,   (53)

where the λ²_i are the respective variances. Firstly note that we only need to consider those terms for which λ̃²_i ≠ λ²_i. Using the neural network parametrisation, we can split the problem into showing that (53) holds for A) all the weights going into the swapped nodes, B) all the weights leaving the swapped nodes, and C) the biases of the swapped nodes. Looking at the weights going into the swapped nodes, note that the corresponding variances differ only by a factor of ((j+1)/j)^α ≤ 2^α, such that for the weights going into the swapped nodes, (53) holds with L = 2^{−2α} and U = 2^{2α}. Repeating the same argument for the weights leaving the swapped nodes and for the biases shows that (53) holds in general with L = 2^{−2α} and U = 2^{2α}. The remaining condition of the Feldman–Hajek theorem addresses the difference of the means; as both means are 0, their difference is clearly in the Cameron–Martin space of the prior.

For the acceptance ratio in finite-width networks, observe that the likelihood does not depend on the labelling of the nodes and thus plays no role in the acceptance probability. Similarly, the transition kernel is symmetric, as the nodes f^(l)_j and f^(l)_{j+1} are proposed for swapping with the same probability in either direction. For the finite-dimensional case we thus get a(θ, ϑ) = µ_0(ϑ)/µ_0(θ), which is as required.

B.4. Proof of Theorem 2. For a given data point y = (x, a), let the actions be enumerated such that a = 1. Let further v = (v_1, ..., v_M) := (v(T(x,1)), ..., v(T(x,M))) be the vector of the value function evaluations relevant for the likelihood computation. The integral (21) is trivially upper bounded by 1. Define v̄ = max_j |v_j|. For the lower bound, we use (22) to obtain a bound in terms of v̄; see (58). Since v is in a reproducing kernel Hilbert space H, there exists for any x ∈ X a constant C_x such that |v(x)| ≤ C_x ‖v‖_H for all v ∈ H [51, Chapter 12], and taking C = max_{j∈{1,...,M}} C_{T(x,j)}, we have v̄ ≤ C ‖v‖_H. To see that Assumption 4 holds, assume that max{‖u‖_H, ‖v‖_H} < r. Then, since the log-likelihood is continuously differentiable in (v_1, ...,
v_M), for any r̄ there exists a constant C(r̄) such that for any vectors u_{1:M}, v_{1:M} with max_j |u_j| ≤ r̄ and max_j |v_j| ≤ r̄, one has by the mean value theorem that

|ℓ(y | u, σ) − ℓ(y | v, σ)| ≤ C(r̄) max_j |u_j − v_j|.

Using the RKHS property as before, we note that ū = max_j |u_j| < max_j C_{T(x,j)} r and v̄ = max_j |v_j| < max_j C_{T(x,j)} r. We also use the fact that for any x ∈ X there exists a C_x such that |(u − v)(x)| ≤ C_x ‖u − v‖_H for all u, v ∈ H. Taking r̄ = max_j C_{T(x,j)} r > 0, we thus conclude that the assumption holds with K(r) = C(r̄) · Σ_{j=1}^{M} C_{T(x,j)}.

B.5. Proof of Lemma 3. Let θ = (w, b) be the collection of all weights and biases. Using the definition of the neural network (12), we let x̃ = (1, x) ∈ R^{d+1} and note that |f^(1)_i(x)|² ≤ ‖x̃‖²_2 ‖(B^(1)_i, W^(1)_{i,:})‖²_2 by the Cauchy–Schwarz inequality (CSI). This bound holds regardless of the width N^(1), such that the result holds also in the limit N^(1) → ∞. For the higher layers, we use Assumption 9, which gives |ζ(f^(l)_j(x))| ≤ |f^(l)_j(x)| for any l, apply the CSI a few more times, and obtain analogous bounds in terms of ‖(B^(l)_i, W^(l)_{i,:})‖²_2. For any θ with ‖θ‖²_2 < 1, we use induction to bound |f^(l)_i(x)|², and obtain for any θ a bound on |v(x)|² which is polynomial of degree n + 1 in ‖θ‖²_2. Using the same bound for ℓ(y | v, σ) as in the proof of Theorem 2, given in (58), we get that the result holds with

K = (n + 1 + max_j ‖T(x, j)‖²) · (1 + 1/(2σ²)) + max{ log(σ 2^M √(2πσ²)), 0 }  and  p = 2(n + 1).

Note that the constant K is independent of the layer width, and the result holds for networks of arbitrary width. To prove Assumption 4, fix r > 0 and consider sequences θ, θ̃ ∈ ℓ² such that max{‖θ‖, ‖θ̃‖} ≤ r. Let u = u_θ be the neural network arising from the parameters θ, and let v = v_{θ̃} be the neural network arising from the parameters θ̃; the functions within the neural network defined by θ̃ are distinguished by a tilde on each of them. Bounding the squared difference (u(x) − v(x))² of the outputs of the final layers, and applying the CSI repeatedly, we can bound (u(x) − v(x))² by a multiple of the squared differences of the weights and biases of the output layer plus Σ_j (ζ(f^(n)_j(x)) − ζ(f̃^(n)_j(x)))², where the bound assumes 1 + Σ_j ζ(f^(n)_j(x))² ≤ K(r, x, n), which will be verified next, and also uses max{‖θ‖, ‖θ̃‖} ≤ r. In (60) it was shown that this holds upon setting K(r, x, n) := 1 + (n + ‖x‖²)(1 + r^{2n}). The decomposition thus far articulates how (u(x) − v(x))² depends on the difference of the weights and biases of the output layer (layer n + 1). We may similarly articulate how (ζ(f^(n)_j(x)) − ζ(f̃^(n)_j(x)))² depends on the difference of the weights and biases of the previous layers; summing over the nodes and iterating through the layers, we obtain in summary

(1/2)(u(x) − v(x))² ≤ K̃(r, x, n) ‖θ − θ̃‖²_2

for a constant K̃ depending only on r, x, and n. In particular, when θ̃ = 0 then v = 0, which implies (1/2)(u(x))² ≤ K̃(r, x, n) ‖θ‖²_2. We conclude the proof similarly to the proof of Theorem 2. Assume that max{‖θ‖_2, ‖θ̃‖_2} < r, so that max_j |u_j|² ≤ 2r² max_j{K̃(r, T(x,j), n)}, and similarly for v. Then, using the mean value theorem, we note that for any r̄ there exists a constant C(r̄) such that for any vectors u_{1:M}, v_{1:M} with max_j |u_j| ≤ r̄ and max_j |v_j| ≤ r̄, the difference of the log-likelihoods is bounded as above. The result holds by choosing r̄² = 2r² max_j K̃(r, T(x,j), n).

B.6. Proof of Theorem 4. We need to show that ‖C^{1/2} Dℓ(u)‖², i.e. the weighted sum of squared partial derivatives whose truncations are given in (61), is finite for µ_0-almost all u [11]. Note that Dℓ(u) is the collection of partial derivatives of the log-likelihood with respect to each weight and bias parameter of the neural network. We will show that the sequence of truncated sums of (61) defines a submartingale that converges µ_0-almost surely to a random variable with finite expectation. We now specify the limiting neural network.
As α > 1, the infinite-width network and its layer functions are µ_0-almost surely well defined (Theorem 1). Substituting both the eigenvalues of C and the derivatives with respect to the parameters of the neural network into Equation (61) and truncating the sum gives a sequence (S_s)_s. For the likelihood in (29) and T = 1, we will show that lim_{s→∞} S_s exists and is finite µ_0-almost surely, so that the equivalence N(C Dℓ(u), C) ∼ N(0, C) follows (in fact, we will show that S_s converges to an L¹ random variable as s → ∞); the case T > 1 follows similarly. To this end, observe that under the assumption of uniformly bounded partial derivatives of ℓ̃(a, ·) for all a, each partial derivative of the log-likelihood can be further bounded using √c_t, where c_t is the bound on the partial derivatives of ℓ̃(a_t, ·). We first calculate ∂u/∂W^(l)_{i,j} and ∂u/∂B^(l)_i for all (i, j, l), where u = u(x) is the output of the network for an input x ∈ R^d (the input has been dropped for notational convenience). These derivatives can be cast as derivatives ∂u/∂g^(l)_i, since u can be regarded as a function of the layer outputs g^(l).

C.1. Mountaincar. The state space is X = [−1.2, 0.6] × [−0.07, 0.07], where the first variable is the position x_1 of the car on a mountain slope, and the second variable represents its velocity x_2. The set of possible actions is A = {−1, 0, 1}, representing exerting force to the left, not adding force, and exerting force to the right, respectively. The state transitions are deterministic, being given by Newtonian physics, and we refer the reader to the OpenAI documentation or to our code for the details. In the mountaincar problem, the reward is constant, r(x_1, x_2) = −1 per step, until the car reaches the top of the mountain (x_1 ≥ 0.5). The optimal policy is therefore to reach the mountaintop as quickly as possible. An optimal deterministic policy [55] is given by

µ(x_1, x_2) = −1 + 2·I{ min(−0.09(x_1 + 0.25)² + 0.03, 0.3(x_1 + 0.9)⁴ − 0.008) ≤ x_2 ≤ −0.07(x_1 + 0.38)² + 0.07 },

and we generated state-action pairs by first drawing a random initial state in the valley of the mountain, x ∼ U([−0.6, −0.4]), i.e. a uniform value between −0.6 and −0.4. The initial velocity is set to 0. Starting from that state, we computed the actions given the optimal policy above. Once the flag was reached, a new initial state was drawn, and the process was repeated until we had a total of 250 observations. This gave a set of state-action pairs {(x_t, a_t)}_{t=1}^{250}, and we then took every fifth sample to obtain the final dataset y = {(x_{5t}, a_{5t})}_{t=1}^{50}. This resulted in the state variables in y covering the entire state space, such that we can expect to learn the value function in any region an agent might find itself in. The likelihood (22) arises from this dataset y, with the noise level set to σ = 0.1. In the simulations from the learned value functions, we again initialised the position as x ∼ U([−0.6, −0.4]) and set the velocity to 0. We then simulated noise and used Equation (19) with the learned value function to pick an action. In Section 5.3, the value function used was taken to be either a sample from the posterior or the mean function; in Section 5.4 the value function used was the mean function from the posteriors. In all experiments, if the car did not make it to the flag within 200 time steps, we called this a failure and restarted the process from new initial conditions.

C.2. HalfCheetah. To show that our algorithm works in a more complicated setting, we looked at the HalfCheetah example from the MuJoCo library [49], where the state x_t is a 17-dimensional vector. The original continuous action space of the problem is 6-dimensional.
An agent controlling the cheetah is to move it forward while not exerting too much force: positive rewards are given for moving forward, negative rewards are given for moving backwards, and a further penalty is deducted for actions requiring a lot of force. A black-box optimal policy for the HalfCheetah problem was provided in Berkeley's Deep Reinforcement Learning Course, which we used to simulate state-action pairs. The initial state and velocity variables were drawn at random with distributions according to the Python package 'gym' [7]. We discretised the action space to M actions in the following way: an initial state was drawn, and the black-box policy gave us an action, taking us to a new state via the deterministic mapping. Iterating this process, the first M actions were stored. From then on, we can use a discrete action space A_M consisting of these M actions: at a state x_t we compute a_t as the action in A_M that minimises the Euclidean distance to the action computed by the black-box policy. We found that M = 8 actions were sufficient to get behaviour very similar to that obtained using the continuous action space, and we thus fixed A = A_8. We refer to the action a ∈ A that minimises the Euclidean distance to the black-box algorithm's action as 'optimal'. To generate data, we first drew an initial state x_1, computed the optimal action a_1 using the procedure just described, and computed the next state using the state dynamics (16). After 25 steps, we restarted from a new initial state, and repeated this process until we had a total of T = 100 data points. The reason we restarted occasionally was, as in the mountaincar example, to ensure that we cover a representative region of the state space. The dataset y = {(x_t, a_t)}_{t=1}^{100} was used in the likelihood (22), where we set the noise level to σ = 0.1. In the experiments in Section 5.4, an initial state is drawn, and the cheetah is controlled using Equation (19) over 100 time steps.
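The nearest-action projection used for the discretisation is a one-liner; a sketch with hypothetical names:

```python
import numpy as np

def nearest_action(stored_actions, continuous_action):
    """Index of the stored action closest (in Euclidean distance) to a
    continuous action, as used to build the discrete set A_M.
    stored_actions: array of shape (M, action_dim)."""
    d = np.linalg.norm(np.asarray(stored_actions) - np.asarray(continuous_action), axis=1)
    return int(np.argmin(d))
```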
Fatigue Crack Growth Behavior of Austempered AISI 4140 Steel with Dissolved Hydrogen

The focus of this investigation was to examine the influence of dissolved hydrogen on the fatigue crack growth behavior of an austempered low-alloy AISI 4140 steel. The investigation also examined the influence of dissolved hydrogen on the fatigue threshold in this material. The material was tested in two conditions, as-received (cold rolled and annealed) and austempered (austenitized at 882 °C for 1 h and austempered at 332 °C for 1 h). The microstructure of the annealed specimens consisted of a mix of ferrite and fine pearlite; the microstructure of the austempered specimens was lower bainite. Tensile and Compact Tension specimens were prepared. To examine the influence of dissolved hydrogen, two subsets of the CT specimens were charged with hydrogen for three different time periods between 150 and 250 h. All of the CT samples were then subjected to fatigue crack growth tests in the threshold and linear regions at room temperature. The test results indicate that austempering resulted in significant improvement in the yield and tensile strength as well as the fracture toughness of the material. The test results also show that, in the absence of dissolved hydrogen, the crack growth rate in the threshold and linear regions was lower in the austempered samples than in the as-received (annealed) samples. The fatigue threshold was also slightly greater in the austempered samples. In the presence of dissolved hydrogen, the crack growth rate was dependent upon the ∆K value. In the low ∆K region (<30 MPa√m), the presence of dissolved hydrogen caused the crack growth rate to be higher in the austempered samples than in the annealed samples. Above this value, the crack growth rate was increasingly greater in the annealed specimens than in the austempered specimens in the presence of dissolved hydrogen. It is concluded that austempering of 4140 steel appears to provide a processing route by which the strength, hardness, and fracture toughness of the material can be increased with little or no degradation in ductility and fatigue crack growth behavior.

Introduction

In recent years, there has been significant interest in austempering [1][2][3][4][5] as an alternative heat treatment process relative to traditional quenching and tempering processes. Austempering involves austenitizing the steel in the fully austenitic region (above the A1 temperature), followed by rapid cooling into the bainitic temperature region. The steel is then held in this region for sufficient time to allow completion of the bainitic phase transformation reaction, and finally air cooled to room temperature. The absence of a sudden quench to form martensite (as in traditional quenching and tempering processes) significantly reduces the thermal gradients arising in the material. This results in reduced distortion and minimizes the appearance of quench cracks, which is especially important for small parts (such as gears, bolts, and clips) used in automotive and naval structural applications that are exposed to alternating stresses. In addition, austempering can yield strengths comparable to those created by traditional quenching and tempering processes, especially in medium- and high-carbon steels. Failure under cyclic loading (fatigue) is a very serious problem for structural components.
Under cyclic loading, cracks can arise and grow; if a crack grows from a sub-critical dimension to the critical flaw size, it can ultimately lead to failure in service. The critical flaw size under a given loading condition is determined by the fracture toughness of the material [6]. The fatigue crack growth rate, da/dN, has been related to the stress intensity factor range, ∆K, through the Paris equation [7]:

da/dN = C (∆K)^m,   (1)

where C and m are material constants, and ∆K = K_max − K_min is the difference between the maximum (K_max) and minimum (K_min) stress intensity factors in a fatigue cycle. This equation has been found to be very useful in characterizing the fatigue crack growth behavior of steels. As Figure 1 illustrates, when experimentally measured crack growth rate data are plotted against ∆K on a log scale, the graph shows three distinct regions. In Region I (the threshold region), the crack growth rate is low and deviates from the Paris equation. In Region II (the linear region), the Paris equation models the growth rate well. In Region III (the fast fracture region), the crack growth rate accelerates and again deviates from the Paris equation. In addition to these regions, there is a threshold stress intensity factor range (∆K_th), below which the crack growth rate approaches zero. The fatigue threshold is a very important parameter for structural design; structural components designed on the basis of the fatigue threshold are expected to survive in service under cyclic loading conditions without undergoing any catastrophic failure.
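In the linear region, the Paris equation can be integrated to estimate the number of cycles for a crack to grow from an initial to a critical size. The sketch below does this numerically for an idealised geometry (∆K = ∆σ√(πa), i.e. geometry factor Y = 1); the constants and the function name are illustrative, not values from this study:

```python
import numpy as np

def paris_life(C, m, delta_sigma, a0, ac, n=100000):
    """Cycles to grow a crack from a0 to ac under the Paris law
    da/dN = C*(dK)^m, with dK = delta_sigma*sqrt(pi*a) (geometry factor
    Y = 1 for simplicity). Units must be consistent (e.g. MPa and m, with
    C calibrated accordingly); the constants below are illustrative only."""
    a = np.linspace(a0, ac, n)
    dN_da = 1.0 / (C * (delta_sigma * np.sqrt(np.pi * a))**m)
    return np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a))  # trapezoid rule

# hypothetical steel-like constants: C in m/cycle per (MPa*sqrt(m))^m
print(f"{paris_life(C=1e-11, m=3.0, delta_sigma=100.0, a0=1e-3, ac=1e-2):.3g} cycles")
```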
It contains chromium, molybdenum, and manganese as the principal alloying elements and has high hardenability and good fatigue resistance [1,2]. This steel has many applications in mission-critical structural components [8] because of its high strength and toughness. This steel can be hardened by a variety of common heat-treatment processes to yield a wide variety of mechanical properties. While in service in critical applications, the 4140 steel can be exposed to hydrogen-bearing environments. Therefore, hydrogen-induced fatigue crack growth data for this material is needed for safe life prediction and failure-safe design of components. Although a significant number of studies have been carried out on fatigue and corrosion fatigue behavior of high strength steels, most of these studies were carried out in 3.5% NaCl solutions [9][10][11][12][13][14]. AISI 4140 is an extensively used commercial-grade medium-carbon, low alloy steel. It contains chromium, molybdenum, and manganese as the principal alloying elements and has high hardenability and good fatigue resistance [1,2]. This steel has many applications in mission-critical structural components [8] because of its high strength and toughness. This steel can be hardened by a variety of common heat-treatment processes to yield a wide variety of mechanical properties. While in service in critical applications, the 4140 steel can be exposed to hydrogen-bearing environments. Therefore, hydrogen-induced fatigue crack growth data for this material is needed for safe life prediction and failure-safe design of components. Although a significant number of studies have been carried out on fatigue and corrosion fatigue behavior of high strength steels, most of these studies were carried out in 3.5% NaCl solutions [9][10][11][12][13][14]. Little information is available in literature on the influence of dissolved hydrogen on the fatigue crack growth behavior of AISI 4140 steel, especially in the austempered condition. One study [15] compared a quenched and tempered structure to an austempered structure in a gaseous hydrogen environment. This study found that the austempered samples had lower fatigue crack growth rates and higher impact toughness in the hydrogen environment. A second study [14] was conducted on a 4140-type steel that was charged with approximately 0.58 ppm hydrogen. At low ∆K values, the presence of hydrogen increased the fatigue crack growth rates up to 30×. When the ∆K values exceeded 30 MPa √ m, the fatigue crack growth rates were comparable regardless of whether hydrogen was present or not. Similar results were also observed in [16]. Previous research has shown that bainitic microstructures improve the fracture toughness in ferrous alloys [4,5,17]. This indicates that bainitic microstructures can possibly improve the fatigue crack growth resistance both in ambient and corrosive environments. Therefore, an investigation was undertaken to examine the influence of an austempering process on the microstructure, mechanical properties, fatigue crack growth rate, and fatigue threshold of AISI 4140 steel with dissolved hydrogen. In a previous investigation [18], the fatigue crack growth behavior of AISI 4140 steel austempered in the upper bainitic region was examined in the presence of dissolved hydrogen. This investigation is the continuation of that study; in this paper, the fatigue crack growth of AISI 4140 steel, austempered in the lower bainitic temperature region, was examined in presence of dissolved hydrogen. 
The testing was designed to simulate what the 4140 steel might experience following a hydrogen-containing manufacturing operation and an ambient-air operating environment.

Material

The material used in this investigation is the AISI 4140 steel alloy. The material was available in the form of cold-rolled and annealed plate with an identifiable rolling direction. The chemical composition of the material is reported in Table 1. From this steel plate, tensile specimens were prepared as per ASTM E-8 [19]. Compact Tension (CT) specimens were also manufactured with a TL orientation as per ASTM standard E-647 [20]. In the CT specimens, the width (W) was 50.8 mm, the thickness ranged between 3 and 4 mm, and the notch length (as measured from the centerline of the loading holes to the tip of the machined notch) was 15.24 mm. Additional compact tension samples were prepared for fracture toughness testing as per ASTM standard E-399 [6]; for these samples, the width was 40 mm and the thickness was 20 mm.

Heat Treatment and Tensile Testing

To understand the effect of austempering, all of the test samples (tensile and CT) were divided into two batches. The first batch of samples was left in the annealed (as-received) condition. The second batch of samples was heat-treated to produce an austempered condition. This was accomplished by initially austenitizing these samples at 882 °C for 1 h. The samples were then immediately transferred to a molten salt bath at 332 °C, where they were held for 1 h. The austempering in this 332 °C temperature region produced the desired lower bainitic structure. The specimens were then removed from the salt bath and allowed to air cool to room temperature. All of the tensile samples (annealed as well as heat-treated) were tested as per ASTM E-8 [19]. The tests were performed at a constant engineering strain rate of 4 × 10⁻⁴ s⁻¹ on a servo-hydraulic Material Test System (MTS) 810 test machine. All of the samples were tested at room temperature and ambient atmosphere. Load and displacement plots were obtained on an X-Y recorder; from these load-displacement diagrams, the yield strength, ultimate tensile strength, and % elongation values were calculated. Four samples were tested in each condition.

Pre-Cracking of CT Specimens

To minimize the amount of time between hydrogen charging and testing, all of the CT specimens were pre-cracked before charging occurred. Pre-cracking was performed in the MTS 810 servo-hydraulic test machine in load control mode at room temperature and ambient atmosphere. For the fatigue threshold specimens, a cyclic loading frequency of five cycles per second (5 Hz) was used to produce a 2-mm sharp crack front in accordance with ASTM standard E-647 [20]. The CT samples for the fracture toughness tests were pre-cracked in fatigue at a ∆K level of 10 MPa√m with a load ratio of R = 0.10. This produced a 2-mm sharp crack in accordance with ASTM standard E-399 [6].

External Hydrogen Charging

To understand the effect of hydrogen and austempering, all of the pre-cracked Compact Tension (CT) specimens were divided into one of the four sets listed in Table 2.

Table 2.
Set | Material | Externally Charged with Hydrogen?
1 | Annealed (as-received) | No
2 | Annealed (as-received) | Yes
3 | Austempered (332 °C/1 h) | No
4 | Austempered (332 °C/1 h) | Yes

As detailed in Table 2, two of the sets were set aside; these are referred to as the "Uncharged" specimens. The other two sets were externally charged with hydrogen ("Charged" specimens).
External Hydrogen Charging To understand the effect of hydrogen and austempering, all of the pre-cracked Compact Tension (CT) specimens were divided into one of the four sets listed in Table 2.
Table 2.
Set | Material | Externally Charged with Hydrogen?
A | Annealed (as-received) | No
B | Annealed (as-received) | Yes
C | Austempered (332 °C/1 h) | No
D | Austempered (332 °C/1 h) | Yes
As detailed in Table 2, two of the sets were set aside; these are referred to as the "Uncharged" specimens. The other two sets were externally charged with hydrogen ("Charged" specimens). Charging was accomplished using two DC power supplies (Agilent E3616A and Sorensen XPL-30-2D) that were capable of providing a steady source of electric current of about 1-1.5 A, with a 4-mm stainless steel bar as the anode; the stainless steel bar was chosen as it was an effective, low-cost alternative to platinum. The samples to be charged with hydrogen were used as the cathode (Figure 2). Deionized water, obtained from a Barnstead Nanopure Diamond D11911 and having a resistivity of about 16.5 ± 0.5 MΩ·cm, was used to dissolve sodium hydroxide so that a 1 N solution was obtained; deionized water was used in order to avoid the uptake of deleterious ions during the external hydrogen charging [21]. This 1 N solution was used as the electrolyte for charging hydrogen into the samples. The current density was maintained steady at 300 A/m^2. The samples were charged with hydrogen for 150, 200, and 250 h at room temperature. A minimum of three samples were charged from each set for each of the given exposure times; in each case, the sample was tested within 15 min after charging was complete. In addition, six special CT samples were created for the purpose of measuring the hydrogen concentration in the annealed and austempered conditions. These samples were prepared in the manner detailed previously. Hydrogen Concentration Analysis The six special analysis samples were analyzed for the hydrogen concentration via a Vacuum Hot Extraction method using an NRC Model 917 unit. The testing was conducted at Luvak Laboratories (Boylston, MA, USA) within 24 h of charging. Each charged sample was first weighed and cleaned with ether to remove surface contaminants. Each sample was then placed into an evacuated chamber surrounded by an induction heating coil and was heated to a point below the melting point of the sample (approximately 1150 °C) to degas hydrogen from the sample. The hydrogen was collected in a separate chamber in the system and was measured by a McLeod gauge. All of the measurements were accurate to ±0.1 ppm. Fatigue Testing Within 15 min after completion of charging, each fatigue sample was tested on an MTS 810 servo-hydraulic test machine in the load control mode at room temperature and in ambient atmosphere. All of the fatigue testing was carried out at a cyclic loading frequency of five cycles per second (5 Hz). A constant amplitude sinusoidal waveform was applied and the tests were carried out at a load ratio R = Kmin/Kmax = 0.10. Crack lengths were measured on the specimen surface using an optical microscope without interrupting the test; at the same time, the number of fatigue cycles was also recorded. The crack growth rate, da/dN, was determined as per ASTM standard E-647 [20]. Four samples from each material condition, as listed in Table 2, were tested and averaged for the values reported in this paper. The threshold was obtained using the load-shedding, decreasing-∆K procedure detailed in ASTM E-647 [20]; the load was periodically decreased until the crack growth rate reached a level of 10^-10 m/cycle. The reduction of load at a given ∆K level was done in such a way that no more than 10% of the load was reduced for that ∆K level. This load shedding was also done only after the crack had grown at least 0.5 mm at the previous ∆K level. In this way, any retardation effect (due to a previous higher ∆K level) was avoided.
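The threshold load-shedding logic just described (at most a 10% load reduction per step, steps taken only after at least 0.5 mm of growth at the current level, terminating near da/dN = 10^-10 m/cycle) can be summarised as a small control rule. A minimal sketch, in which the load value and the measured quantities fed to the function are hypothetical:

def shed_load(P_N, rate_m_per_cycle, growth_since_step_m,
              max_reduction=0.10, min_growth_m=0.5e-3, target_rate=1e-10):
    # Next load for a decreasing delta-K threshold test (ASTM E-647 style rule).
    if rate_m_per_cycle <= target_rate:
        return None                      # threshold reached; stop the test
    if growth_since_step_m < min_growth_m:
        return P_N                       # keep cycling at the current level
    return P_N * (1.0 - max_reduction)   # shed at most 10% of the load

# Illustrative call with hypothetical readings:
next_P = shed_load(4000.0, rate_m_per_cycle=3e-9, growth_since_step_m=0.6e-3)
print(next_P)  # 3600.0 -> one 10% load shed is permitted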
Fracture Toughness Test Within 15 min after completion of charging, each fracture toughness specimen was loaded in tension in a servo-hydraulic MTS test machine; the load-displacement diagrams were obtained using a clip gauge in the knife edge attachment on the specimen. From these load-displacement diagrams, PQ values were calculated using the 5% secant deviation technique. From these PQ values, the KQ values were determined using the standard stress intensity factor calibration function for the CT specimens. Since these KQ values satisfied all of the requirements for a valid KIC as per ASTM standard E-399 [6], they were judged to be valid KIC values. Four samples from each material condition listed in Table 2 were tested and averaged for the values reported in this paper.
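The 5% secant construction used above to obtain PQ can also be mimicked numerically: estimate the initial elastic slope, draw a secant line with 95% of that slope, and take the load where the record first falls below it. The sketch below is a simplified idealisation (monotonic, noise-free data), and the array names are assumptions:

import numpy as np

def p_q_from_secant(load_N, disp_m, elastic_fraction=0.2):
    # P_Q via the 5% secant-offset technique on a load-displacement record.
    n = max(2, int(elastic_fraction * len(load_N)))
    slope = np.polyfit(disp_m[:n], load_N[:n], 1)[0]  # initial elastic slope
    secant = 0.95 * slope * disp_m                    # secant line, 5% reduced slope
    above = load_N >= secant
    if above.all():
        return float(load_N.max())                    # record never crosses the secant
    i = int(np.argmax(~above))                        # first point below the secant
    return float(load_N[max(i - 1, 0)])               # last load on or above it (~P5)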
Microstructure The microstructure of the 4140 steel alloy in the annealed condition showed a mixed ferrite + pearlite structure; very fine needles of pearlite were observed in the matrix (Figure 3). X-ray diffraction [18] as well as transmission electron microscopy (TEM) examinations were carried out. Both techniques detected only body-centered cubic (BCC) phases in the material. No retained austenite was detected (via XRD or TEM) or observed in prepared metallographic specimens. The microstructure of the austempered 4140 samples is shown in Figure 4. It shows the presence of lower bainite (blue-colored phase) and a limited number of isolated pockets of martensite (brown-colored phase) in the microstructure. These arose even though the samples were austempered at a temperature above the Ms (Martensite Start) temperature of the material. This was attributed to segregation effects from the alloying elements present in the steel. The carbides of alloying elements like Cr, Mo, and Mn tend to segregate to the intercellular regions. In these segregated regions, the bainitic reaction associated with the austenite decomposing into ferrite and carbide becomes sluggish; therefore, complete transformation does not take place during the processing time. Upon cooling, these untransformed regions can form martensitic structures. Optically, a very small amount (<1%) of retained austenite (white phase) was also observed in the material; however, no retained austenite was detected using either XRD or TEM. Therefore, it is concluded that the amount of austenite present is so small as to have no effect upon the resultant hydrogen concentration or properties in the austempered material.
Mechanical Properties The mechanical properties of the 4140 steel are reported in Table 3. Statistical analysis of this data showed that the austempering process had significantly increased the hardness, as well as both the yield and tensile strengths, without any significant reduction in the ductility. In addition, the fracture toughness shows a slight but statistically significant improvement as well. Thus, the austempered microstructure of lower bainite with a limited amount of martensite has increased the strength, hardness, and fracture toughness without any loss of ductility in the AISI 4140 steel. Furthermore, the strength and hardness of the material were comparable to those obtained by a traditional quenching and tempering process in AISI 4140 steel [22]. Hydrogen Concentration Table 4 shows the effect of charging times on the concentration of dissolved monoatomic hydrogen in the 4140 steel alloy. As this table shows, the amount of dissolved hydrogen increases as the charging time increases for both the annealed and austempered conditions. Additionally, the table shows that the austempered samples were found to have over a two-fold increase in the amount of dissolved hydrogen compared to the annealed samples for a given charging time. This can be attributed to the small amount of martensite present in the microstructure after austempering. Martensitic structures are known to have a higher dislocation density when compared to other phases like ferrite, pearlite, or bainite [1,2]. These dislocations can act as hydrogen traps in the material [16,23,24]. The presence of these traps also reduces the diffusivity of hydrogen to a lower value than what would be estimated by equations based upon the concentration [25]. One previous investigation [16] also concluded that the diffusivity of hydrogen in steel cannot be determined by application of Fick's law in the presence of traps in the material.
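For orientation, the trap-free (Fick's-law) picture that the discussion above argues against can still be used to see why longer charging times admit deeper hydrogen penetration. A rough sketch with an assumed, order-of-magnitude lattice diffusivity (not a value from this study; trapping would reduce the effective depth considerably):

import math

def diffusion_depth_m(D_m2_per_s: float, t_s: float) -> float:
    # Characteristic trap-free penetration depth, x ~ sqrt(4 * D * t).
    return math.sqrt(4.0 * D_m2_per_s * t_s)

D_assumed = 1e-10  # m^2/s, assumed order of magnitude for H in a BCC steel lattice
for hours in (150, 200, 250):  # the charging times used in the study
    x = diffusion_depth_m(D_assumed, hours * 3600)
    print(f"{hours} h -> roughly {x * 1e3:.0f} mm of trap-free penetration")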
In addition to dislocation density effects, the diffusion coefficient of hydrogen appears to be affected by the presence of martensite as well. An earlier investigation on Cr-Mo steels [26] showed that the diffusion coefficient of hydrogen in martensite was marginally lower than that in bainite. Further, a recent study on hydrogen-induced cracking in steels [15] showed that the diffusion coefficient decreases as the strength of the alloy increases. Thus, it can be concluded that the presence of martensite in the austempered alloy resulted in a larger number of dislocations, which created a large number of hydrogen traps in the material. This, in turn, resulted in a higher concentration of dissolved hydrogen in the austempered samples compared to the annealed samples. Influence of Austempering on the Crack Growth Behavior of Uncharged 4140 Steel Figure 5 compares the fatigue crack growth behavior in the threshold region (Region I) for the annealed and austempered conditions; this data is for the samples without dissolved hydrogen (Sets "A" and "C" in Table 2). Table 5 details the threshold stress intensity values for these materials. As this figure and table illustrate, the fatigue crack growth rate of the annealed samples in the near-threshold region was higher than that of the austempered samples; additionally, the ∆Kth of the annealed samples was lower than that of the austempered samples in this region as well.
The lower crack growth rate in the austempered samples was an interesting result and cannot be attributed to a lower crack closure stress intensity factor (Kop) in this material. During cyclic loading, the compressive stress in the crack tip region will cause the crack to remain partially closed during the unloading part of the fatigue cycle. The stress intensity factor at which the crack opens is often defined as Kop. Thus, the effective stress intensity factor range, ∆Keff = Kmax - Kop, is a measure of the crack driving force. This crack-opening stress intensity factor depends on the cyclic plastic zone size, which is inversely proportional to the yield strength of the material; thus, the Kop value decreases as the material strength increases. In this study, the cyclic plastic zone size was calculated using Rice's formula [27] for each set of specimens at the different charging conditions, based upon where the maximum transgranular (or intergranular) feature was observed. However, these calculations yielded identical results for all of the materials; the cyclic plastic zone sizes were approximately 14-22 µm. Further, the austempered samples had a significantly higher yield strength than the annealed samples (Table 3). This means that the Kop value will be lower in the austempered samples as compared to the annealed samples. Therefore, it would be expected that a higher crack driving force (∆Keff) would be present in the austempered samples. Consequently, a higher near-threshold fatigue crack growth rate and a lower fatigue threshold (∆Kth) would also be expected for the austempered samples. However, the opposite was observed in this study: a lower crack growth rate and a higher ∆Kth were found for the austempered samples. This unique behavior is hypothesized to be due to the following two factors: • First, the austempering process has increased the fracture toughness of the material due to the presence of a microstructure containing a large amount of lower bainite. The lower bainitic microstructure increases the fracture toughness of the material [3][4][5]. The higher fracture toughness is indicative of a greater crack growth resistance in this material. This, in turn, causes a lower fatigue crack growth rate and a higher fatigue threshold in the material. • Secondly, the austempered samples had a very fine-scale microstructure consisting of lower bainite with a limited amount of tempered martensite. A lower bainitic structure has a much finer grain size than upper bainite or pearlite. This creates additional resistance to crack growth, since the crack tip encounters a large number of fine-scale grain boundaries. This, in turn, reduces the crack propagation rate because the crack grows along a longer, more tortuous path. Figure 6 shows the fatigue crack growth behavior of the uncharged annealed and austempered specimens in the linear region (Region II). This is further shown by the data in Table 6. The austempered material had a much lower value of the Paris constant "C" and a slightly higher "m" value. It is evident from the plot that the austempered specimens have a lower crack growth rate than the annealed specimens.
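To make the quantities used in this section concrete, the sketch below evaluates the effective stress intensity range ∆Keff = Kmax - Kop, one common plane-stress estimate of the reversed (cyclic) plastic zone of the kind attributed to Rice, and a Paris-law growth rate da/dN = C(∆K)^m. Every numerical input is an assumption for illustration, not a fitted C, m, or measured Kop from this study.

import math

def delta_K_eff(K_max, K_op):
    # Effective stress intensity factor range (crack closure correction).
    return K_max - K_op

def cyclic_plastic_zone_m(dK_MPa_sqrt_m, yield_MPa):
    # One common plane-stress estimate of the reversed (cyclic) plastic zone:
    # r_c = (1 / (2*pi)) * (delta_K / (2 * sigma_y))^2
    return (1.0 / (2.0 * math.pi)) * (dK_MPa_sqrt_m / (2.0 * yield_MPa)) ** 2

def paris_rate(dK, C, m):
    # Paris-law crack growth rate, da/dN = C * (delta_K)^m.
    return C * dK ** m

dK, sigma_y = 15.0, 1100.0  # MPa*sqrt(m) and MPa, assumed values
print(f"r_c ~ {cyclic_plastic_zone_m(dK, sigma_y) * 1e6:.1f} um")
print(f"da/dN ~ {paris_rate(dK, C=1e-12, m=3.0):.2e} m/cycle")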
Again, the general expected trend is that the fatigue crack growth rate increases with higher strength and hardness. However, the austempered samples in the present study again have a lower crack growth rate despite the higher hardness and strength; this behavior is similar to what was seen previously in the threshold region. The behavior is also attributed to the increased crack growth resistance provided by the lower bainitic microstructure. Hence, upon comparing the fatigue crack growth rates in the linear region (Figure 6), as well as in the threshold region (Figure 5), it can be concluded that the lower bainitic microstructure provides an improved fatigue crack growth resistance in 4140 steels when dissolved hydrogen is not present. Similar results have been reported for a series of low alloy steels [28]. Influence of Hydrogen on the Fatigue Crack Growth Behavior of 4140 Steel Figure 7 shows the fatigue crack growth behavior of the annealed samples in the linear region in the presence of dissolved hydrogen; for comparison, the crack growth behavior of the annealed samples without dissolved hydrogen is also plotted in the same figure. The data in Figure 7 shows that the presence of dissolved hydrogen causes a large amount of variation in the crack growth rate data. In addition, the presence of hydrogen appears to increase the average crack growth rate at ∆K values greater than 60 MPa√m. Figure 7 also shows that the hydrogen charging time does not have any significant effect on the crack growth rate, given the high degree of variation in the data. Figure 8 shows the fatigue crack growth behavior of the austempered samples in the linear region in the presence of dissolved hydrogen; again, for comparison purposes, the crack growth behavior of the austempered samples without dissolved hydrogen is also plotted in the same figure. Similar to that seen in Figure 7, Figure 8 shows that the hydrogen charging time does not have any significant effect on the crack growth rate of the austempered samples. However, in contrast to the data in Figure 7, Figure 8 shows that the presence of dissolved hydrogen does not cause a large amount of variation in the crack growth rate data. In addition, Figure 8 shows that the presence of dissolved hydrogen increases the crack growth rate significantly at ∆K values below 30 MPa√m. Thus, the effect of hydrogen is relatively important when the contribution of other mechanisms is small. When ∆K is low, there is little stress built up at the crack tip; the effect of hydrogen on crack growth is increased since there is little stress to provide increased energy for the propagation of a crack.
The results in Figures 7 and 8 agree with those from two published research studies [14,16] on 4100-series steel alloys charged with hydrogen and tested in laboratory air. Those studies found that the crack growth rates in the range of 10^-5 to 10^-7 m/cycle were increased in the hydrogen-charged samples when compared to the uncharged samples. Table 7 details the fatigue threshold of both the annealed and austempered 4140 alloy as a function of dissolved hydrogen content. The data in this table shows that there are only minor differences in the threshold values of the annealed and austempered samples; in addition, all of these values are low regardless of hydrogen content. This is an important result, as it shows that austempering can be used to provide improved strength properties while avoiding a significant reduction in fatigue crack resistance; this combination is not usually observed in most materials. Table 7 also illustrates that, as the hydrogen concentration increases, the fatigue threshold decreases slightly in the austempered samples. This is the expected result, and it is believed to be due to the increased amount of hydrogen causing lattice dilation in the steel. This, in turn, increases the strain fields and internal energy, resulting in a greater driving force for crack propagation; this results in a lower threshold stress that must be achieved to grow the crack.
Table 7 also shows that, in the annealed samples, the dissolved hydrogen content did not have any significant effect. The reason for this has not been conclusively determined. It could be a result of the large variation in the experimental data, or it may be due to a threshold limit, above which the presence of hydrogen could be beneficial to certain microstructures. Further investigation is necessary to understand these results and is in progress. Figures 9-11 compare the fatigue crack growth behavior in the linear region for the annealed and austempered specimens as a function of the charging time (dissolved hydrogen content). Interestingly, all of these figures show that, in the low (<30 MPa√m) ∆K region, the annealed specimens have a lower crack growth rate than the austempered specimens. As the ∆K values increase beyond approximately 30 MPa√m, a transition stress intensity factor range exists, beyond which the annealed specimens have an increasingly higher crack growth rate than the austempered specimens. This observation indicates that, in the higher ∆K regions, the austempered specimens are more resistant to crack growth in hydrogen-charged conditions. This was unexpected, as several investigators [28,29] have reported that the embrittlement effects of hydrogen are more severe in the case of high strength steels. Due to the higher strength and hardness of the alloy in the austempered condition, hydrogen embrittlement would be expected to have a stronger effect. It is hypothesized that the lower bainitic microstructure developed during the austempering process is counteracting this embrittling effect; the increased tortuosity of the crack path is believed to be key.
Fractography The fracture surfaces associated with the annealed 4140 alloy samples are shown in Figure 12. As this figure shows, charging with hydrogen only slightly changes the fracture surface morphology. The fracture surfaces of the hydrogen-charged samples (Figure 12b) appear to have limited ductile tearing with a more flat-like appearance as compared to the uncharged samples (Figure 12a). This correlates well with the experimentally-determined fatigue crack growth rate data shown in Figures 6 and 7, which show the relative closeness of the fatigue curves to one another regardless of the charging condition. Figure 13 shows the fracture surfaces associated with the austempered 4140 alloy samples at low ∆K values. Several interesting characteristics were observed. Similar to that seen in Figure 12a, the fracture surfaces in the uncharged samples (Figure 13a) have little ductile tearing and are relatively flat in morphology. However, in contrast to those seen in Figure 12b, Figure 13b shows that the hydrogen-charged austempered samples are characterized by both intergranular and transgranular features. Intergranular features appear increasingly prominently as the ∆K values decrease in the charged austempered samples. This also agrees well with the experimental fatigue crack growth data, which shows that the austempered samples that are charged with hydrogen have fatigue crack growth rates that are faster, and fatigue thresholds that are lower, when compared to those in the annealed material at low ∆K values. Figure 14 shows the fracture surfaces associated with the austempered 4140 alloy samples at high ∆K values. As this figure details, the fracture surfaces in the uncharged samples (Figure 14a) are very similar to those in the charged samples (Figure 14b); both are characterized by a large number of transgranular features. Once again, this agrees well with the experimental fatigue crack growth data for high ∆K values, which show that the fatigue crack growth rates for the charged and uncharged austempered material are similar at high ∆K values.
Conclusions and Future Work To better understand the effect of hydrogen upon the mechanical properties and fatigue crack growth of austempered 4140 steel, the present research study was conducted. The study was conducted ex-situ to simulate the exposure of a 4140 alloy to hydrogen during manufacturing followed by usage in an ambient environment. From the data obtained in this study, the following conclusions can be drawn: 1. Austempering in the lower bainitic temperature range has significantly increased the mechanical properties and the fracture toughness of AISI 4140 steel as compared to the as-received (annealed) condition. 2. In the absence of any charged hydrogen, the austempered samples had a much lower average crack growth rate and a higher fatigue threshold than the as-received (annealed) samples. 3. The presence of dissolved hydrogen increased the average crack growth rate in the austempered as well as in the as-received (annealed) samples. 4. There is a transition stress intensity factor value of approximately 40-50 MPa√m; below this value, the presence of dissolved hydrogen causes the crack growth rate to be higher in the austempered samples when compared to the annealed samples. 5. In the presence of dissolved hydrogen, above the transition stress intensity factor value, the crack growth rate was increasingly greater in the annealed specimens as compared to the austempered specimens.
6. When compared to the as-received (annealed) condition, austempering of 4140 steel appears to provide a processing route by which the strength, hardness, and fracture toughness of the material can be increased with little or no degradation in the ductility and fatigue crack growth behavior. The results from this study also highlight the need for three future investigations. First, it is recommended that a study be conducted to better define the effect of austempering temperature on the fatigue crack growth resistance. This study would also need to more closely characterize crack closure effects and the effects of microstructural grains on the rate of crack propagation. Second, a study examining the effect of hydrogen concentration on the fatigue crack growth resistance in austempered 4140 should be conducted. This study should include hydrogen content measurements before and after the fatigue test in order to understand the effects of hydrogen diffusion, ingress, and outgassing during the testing process. Third, the study needs to be repeated for a quenched and tempered 4140 steel alloy exposed to hydrogen charging like that done in the current study. This will allow for an evaluation of the degree of commercial improvement yielded by austempering compared to the currently common heat treatment process.
Temperature variability and childhood pneumonia: an ecological study Background Few data on the relationship between temperature variability and childhood pneumonia are available. This study attempted to fill this knowledge gap. Methods A quasi-Poisson generalized linear regression model combined with a distributed lag non-linear model was used to quantify the impacts of diurnal temperature range (DTR) and temperature change between two neighbouring days (TCN) on emergency department visits (EDVs) for childhood pneumonia in Brisbane, from 2001 to 2010, after controlling for possible confounders. Results An adverse impact of TCN on EDVs for childhood pneumonia was observed, and the magnitude of this impact increased from the first five years (2001–2005) to the second five years (2006–2010). Children aged 5–14 years, female children and Indigenous children were particularly vulnerable to the TCN impact. However, there was no significant association between DTR and EDVs for childhood pneumonia. Conclusions As climate change progresses, the days with unstable weather patterns are likely to increase. Parents and caregivers of children should be aware of the high risk of pneumonia posed by a big TCN and take precautionary measures to protect children, especially those with a history of respiratory diseases, from climate impacts. Background Pneumonia is the top cause of mortality in children under five years [1]. It is estimated that in 2010, worldwide, there were 120 million episodes of pneumonia in children younger than five [1]. Pneumonia is highly preventable; hence, it is particularly important to explore the risk factors which drive the incidence of pneumonia and further to prevent children from being exposed to these risk factors. Many nutritional, socioeconomic and environmental factors are involved in the occurrence of pneumonia [2][3][4]. As climate change proceeds, the possible impact of climate factors on pneumonia transmission has attracted increasing research attention [4,5]. Both high and low temperatures have been reported to be associated with increased pneumonia incidence [6,7]. However, the potential impact of temperature variability on childhood pneumonia has not been researched yet, though big temperature changes may influence the function of the respiratory system [8]. There are several ways to define temperature variability [9], for example, the difference between daily maximum and minimum temperatures (i.e., the diurnal temperature range (DTR)) [10], and the mean temperature difference from one day to the next (i.e., the temperature change between two neighbouring days (TCN)) [11,12]. Previous studies have highlighted that a big DTR or TCN may affect the human respiratory system [10][11][12], especially in children [13]. We hypothesized that a great DTR or TCN might be associated with an increase in childhood pneumonia cases, and we used the data on emergency department visits (EDVs) for childhood pneumonia in Brisbane from 2001 to 2010 to test our hypothesis. Data collection Data on EDVs from 1st January 2001 to 31st December 2010, classified according to the International Classification of Diseases, 9th and 10th versions (ICD-9 and ICD-10), were supplied by Queensland Health. We extracted those cases coded as pneumonia (ICD-9 codes: 480-486; ICD-10 codes: J12-J18) in children aged 0-14 years. Data on climate variables, including maximum and minimum temperatures, rainfall and relative humidity, were obtained from the Australian Bureau of Meteorology.
DTR was calculated as the daily maximum temperature minus the daily minimum temperature [10]. Daily mean temperature was the average of the daily maximum and minimum temperatures, and TCN was calculated as the mean temperature of the current day minus the mean temperature of the previous day [11]. Data on air pollutants, including daily average particulate matter ≤ 10 μm (PM10) (μg/m^3), daily average nitrogen dioxide (NO2) (μg/m^3) and daily average ozone (O3) (ppb), were retrieved from the Queensland Department of Environment and Heritage Protection. Ethical approval was obtained from the Human Research Ethics Committee of Queensland University of Technology (Australia) prior to the data collection (number: 1000001168). Data analysis The distributed lag non-linear model (DLNM) was developed to incorporate both the lagged and the non-linear effects of temperature on mortality or morbidity [14,15]. Previous studies have revealed that there might be a lagged effect of temperature variability on human health, and the relationship between temperature variability and respiratory diseases appears to be non-linear [11][12][13][15]. Thus, we used a DLNM to incorporate the non-linear and lagged effects [14]. A quasi-Poisson generalized linear regression combined with a DLNM was used to quantify the association between DTR (or TCN) and EDVs for childhood pneumonia. DTR model: log[E(Y_t)] = α + DTR_{t,l}·η + T_{t,l}·β + ns(RH_t, 3) + ns(PM10_t, 3) + ns(O3_t, 3) + ns(NO2_t, 3) + ns(Time_t, 8) + Holiday + DOW_t. TCN model: log[E(Y_t)] = α + TCN_{t,l}·η + T_{t,l}·β + ns(RH_t, 3) + ns(PM10_t, 3) + ns(O3_t, 3) + ns(NO2_t, 3) + ns(Time_t, 8) + Holiday + DOW_t. Here t is the day of the observation; Y_t is the observed daily count of childhood pneumonia on day t; α is the model intercept; DTR_{t,l} is a matrix obtained by applying the DLNM to DTR, TCN_{t,l} is a matrix obtained by applying the DLNM to TCN, and T_{t,l} is a matrix obtained by applying the DLNM to mean temperature; η and β are the vectors of coefficients for DTR_{t,l} (or TCN_{t,l}) and T_{t,l}, respectively, and l is the number of lag days; ns(RH_t, 3) is a natural cubic spline with three degrees of freedom for relative humidity; ns(PM10_t, 3) is a natural cubic spline with three degrees of freedom for PM10; ns(O3_t, 3) is a natural cubic spline with three degrees of freedom for O3; ns(NO2_t, 3) is a natural cubic spline with three degrees of freedom for NO2; ns(Time_t, 8) is a natural cubic spline with eight degrees of freedom per year for long-term trend and seasonality; Holiday is the public holiday indicator, and DOW_t is the categorical day of the week with Sunday as the reference day. Specifically, DTR (or TCN) and lag were incorporated using a "natural cubic spline-natural cubic spline" approach. The model included lags up to 21 days for DTR (or TCN) and mean temperature [16]. We used lags up to 10 days for all other confounders (i.e., relative humidity, PM10, O3, and NO2). All data analysis was conducted using the R statistical environment (v 2.15), and the "dlnm" package was used to fit the regression. In the sensitivity analysis, we changed the degrees of freedom for DTR, TCN and time. We also excluded the 2009 data, as there was a big pneumonia spike in 2009.
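The study fitted these models with the R "dlnm" package. Purely as an illustration of the data preparation and of a quasi-Poisson fit, here is a much-simplified Python sketch: it uses plain lag columns in place of a full cross-basis, omits the spline terms, and the DataFrame column names (tmax, tmin, rh, cases) are assumptions.

import pandas as pd
import statsmodels.api as sm

def add_temperature_variability(daily: pd.DataFrame) -> pd.DataFrame:
    # DTR: daily maximum minus daily minimum temperature.
    daily["dtr"] = daily["tmax"] - daily["tmin"]
    # Daily mean temperature, then TCN: today's mean minus yesterday's mean.
    daily["tmean"] = (daily["tmax"] + daily["tmin"]) / 2.0
    daily["tcn"] = daily["tmean"].diff()
    return daily

def fit_quasi_poisson(daily: pd.DataFrame, n_lags: int = 7):
    # Crude stand-in for the DLNM cross-basis: one unconstrained column per TCN lag.
    lags = {f"tcn_lag{k}": daily["tcn"].shift(k) for k in range(n_lags + 1)}
    X = sm.add_constant(pd.DataFrame(lags).assign(tmean=daily["tmean"], rh=daily["rh"]))
    data = pd.concat([daily["cases"], X], axis=1).dropna()
    model = sm.GLM(data["cases"], data.drop(columns="cases"),
                   family=sm.families.Poisson())
    return model.fit(scale="X2")  # scale="X2" yields a quasi-Poisson dispersion estimate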
Results Figure 1 shows the time-series distributions of childhood pneumonia, mean temperature, DTR and TCN, revealing that there was a seasonal trend in childhood pneumonia, mean temperature, and DTR. The 2009 pneumonia peak (due to the H1N1 flu pandemic) is also revealed in this figure. To explore the crude relationship between the variables, we calculated the Spearman correlations between climate variables, air pollutants and childhood pneumonia (Table 1). There was a negative correlation between DTR and mean temperature (r = −0.44, P < 0.01). TCN was positively correlated with mean temperature (r = 0.14, P < 0.01). Further, DTR was positively correlated with childhood pneumonia (r = 0.21, P < 0.01), while no significant correlation between TCN and childhood pneumonia was observed. No correlation coefficient was greater than 0.5, meaning that multi-collinearity is unlikely to be a big issue in the subsequent modelling. Figure 2 shows the exposure-response relationship between temperature variability and childhood pneumonia (the modelling results). No significant association between DTR and childhood pneumonia was observed. In contrast, a big temperature decrease from one day to the next (TCN < −2°C) increased the risk of childhood pneumonia. Since TCN < −2°C is associated with an increase in childhood pneumonia, we subsequently calculated the number of days with TCN < −2°C in every year. There were more than 50 days with TCN below −2°C every year, with most days with a temperature drop > 2°C occurring in the second half of each year (June to December) (Figure 3). Figure 4 shows the pattern of lagged effects of TCN on childhood pneumonia, revealing that the TCN effect lasted for nearly three weeks. Figure 5 depicts that older children (5-14 years vs. <5 years), female children (vs. male), and Indigenous children (vs. non-Indigenous) appeared to be more vulnerable to the TCN impact. As there was a distinct seasonality in childhood pneumonia, with the peak in winter, we specifically analysed the TCN impact on childhood pneumonia in summer (December, January, and February) and winter (June, July and August), and found that this impact mainly occurred in winter (Figure 6). However, in summer an increased relative risk of childhood pneumonia was also detected as TCN > 0°C, but it was not statistically significant. To test whether there was a change over time in the effect of TCN on childhood pneumonia, we split the ten years into two periods (2001-2005 and 2006-2010). Figure 7 reveals that the effect of TCN on childhood pneumonia during the second period was much greater than it was during the first period. Figure 8 shows that, after excluding the 2009 data, the magnitude of the TCN effect on childhood pneumonia in Brisbane was reduced, although the shape of the TCN-pneumonia relationship was similar. Figure 9 shows the effects of TCN on childhood pneumonia in different subgroups after excluding the 2009 data, revealing that the subgroups vulnerable to the TCN effect remained largely unchanged. We also compared the effect of TCN on childhood pneumonia between 2001-2005 and 2006-2010 (without 2009), and found that the TCN effect on childhood pneumonia during the second period was still greater than during the first period (Figure 10).
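The count of big-drop days reported above (Figure 3) is then a one-liner; a sketch, assuming the DataFrame `daily` from the previous snippet has a DatetimeIndex:

# Days per calendar year with a day-to-day mean-temperature drop larger than 2 °C.
big_drops = daily[daily["tcn"] < -2.0]
print(big_drops.groupby(big_drops.index.year).size())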
Discussion This study quantified the impacts of both DTR and TCN on childhood pneumonia and yielded several notable findings. A big temperature decrease from one day to the next (TCN < −2°C) may increase the EDVs for childhood pneumonia, and this effect lasted for around three weeks. Every year, there were more than 50 days with big TCNs, and these big TCNs mainly occurred in winter. Children aged 5-14 years, female children and Indigenous children were particularly at risk. Further, there was a change in the effect of TCN on childhood pneumonia over time. No significant relationship between DTR and childhood pneumonia was observed. Children are particularly vulnerable to both extreme temperatures [17] and temperature variation [13], due partially to their relatively less-developed thermoregulation capability [18]. In this study, we found that a sharp temperature drop was followed by significantly increased EDVs for childhood pneumonia, and the TCN impact lasted for roughly three weeks. The time lag of the TCN impact is probably due to two factors: (1) a temperature change which exceeds certain limits may take a few days to trigger subsequent symptoms in children with underlying conditions; (2) there may be a further delay between the onset of symptoms and seeking medical attention. Some studies have observed significantly increased respiratory-related mortality associated with a big TCN [12], while others did not find significant effects [11]. Our study stands out from previous studies by specifically focusing on childhood pneumonia and controlling for a range of possible confounders. In this study, we also found that age, gender and Indigenous status modified the relationship between TCN and pneumonia. School-aged children (5-14 years) were more sensitive to TCN than younger children, which might be because they play outdoors more often and are thus more exposed to outdoor temperature changes. The difference in vulnerability to TCN between the two genders may be due to their body composition [19], though some researchers have argued that such an effect is variable among locations and populations [20]. Our results also suggest that Indigenous children were more sensitive to the TCN effect than non-Indigenous children. Previous studies have reported that the burden of pneumonia in Indigenous children is 10- to 20-fold higher than in non-Indigenous children, and that they have longer hospital admissions and are more likely to have multiple admissions with pneumonia [21]. Most Indigenous children have limited access to infrastructure and experience more poverty than non-Indigenous children, possibly resulting in their greater vulnerability to the TCN impact [22]. As climate change progresses, not only the global average surface temperature, but also the frequency of unstable weather patterns (e.g., sharp increases/decreases in temperature) will increase [23], which poses a significant challenge to public health sectors. We found that the effect of TCN on childhood pneumonia during 2006-2010 was greater than it was during 2001-2005. This finding indicates that children might be vulnerable to sharp temperature decreases in the future if unstable weather patterns occur as projected. Elucidating the impact of temperature variability on children's health is essential for the improvement of public health. The findings of our study not only remind parents and children's caregivers to take good care of children on days with a big TCN, but also imply that governments should take temperature variability into account while developing early warning systems for controlling and preventing childhood pneumonia. This study has two major strengths. First, this is, to our knowledge, the first study to look at the impact of DTR and TCN on childhood pneumonia. Second, the change over time in the TCN effect on childhood pneumonia which we observed in this study may encourage future studies to explore the temporal variability of TCN impacts on children's health. Two weaknesses should be acknowledged. First, this is a one-city study, which means the interpretation of our findings should be cautious. Second, to some extent, biases in exposure and/or outcome measures may be inevitable because we used aggregated data on temperature and EDVs for childhood pneumonia.
Deformation theory and finite simple quotients of triangle groups I Let $2 \leq a \leq b \leq c \in \mathbb{N}$ with $\mu=1/a+1/b+1/c<1$ and let $T=T_{a,b,c}=\langle x,y,z : x^{a}=y^{b}=z^{c}=xyz=1\rangle$ be the corresponding hyperbolic triangle group. Many papers have been dedicated to the following question: what are the finite (simple) groups which appear as quotients of $T$? (Classically, for $(a,b,c)=(2,3,7)$ and more recently also for general $(a,b,c)$.) These papers have used either explicit constructive methods or probabilistic ones. The goal of this paper is to present a new approach based on the theory of representation varieties (via deformation theory). As a corollary we essentially prove a conjecture of Marion [21] showing that various finite simple groups are not quotients of $T$, as well as positive results showing that many finite simple groups are quotients of $T$. Introduction Let a, b, c be a triple of positive integers. A group G is said to be an (a, b, c)-group if it is generated by two elements of orders dividing a and b respectively, whose product has order dividing c; in other words, it is a quotient of the triangle group T = T_{a,b,c} = ⟨x, y, z : x^a = y^b = z^c = xyz = 1⟩. (1.1) Many papers have been devoted to the question of understanding which (finite) groups are (a, b, c)-groups and especially which finite simple groups are. If 1/a + 1/b + 1/c ≥ 1 then T is soluble or isomorphic to the alternating group Alt_5 and the finite quotients of T are well understood (see [2]). We assume 1/a + 1/b + 1/c < 1, so that T is a cocompact Fuchsian group (of genus 0) and more specifically a hyperbolic triangle group. We call (a, b, c) a hyperbolic triple of integers and, without loss of generality, we suppose a ≤ b ≤ c. Recall that 1/a + 1/b + 1/c ≤ 41/42, where the upper bound 41/42 is attained only if (a, b, c) = (2, 3, 7). A considerable effort has been made to try to classify the finite (2, 3, 7)-groups, also referred to as Hurwitz groups; for a recent survey, see [3]. Recently, attention has also been given to other hyperbolic triples (a, b, c), see for example [20,21,18,22,23,24,25,9,10,12], where deterministic and probabilistic results on (a, b, c)-generation of finite simple groups of Lie type are obtained, mainly in the special case where a, b and c are prime numbers. Turning to general hyperbolic triples of integers, any finite simple group, being 2-generated, is a quotient of some triangle group T and in fact can be so realized in many independent ways. See for example [10,9,12] and the references therein establishing that every finite simple group other than Alt_5 admits an (unmixed) Beauville structure. The vast literature showing that some finite (simple) groups G are (a, b, c)-groups has so far followed two main lines: either one gives two explicit generators of orders dividing a and b and whose product has order dividing c, or one uses probabilistic methods to show that such generators exist. In the latter approach, one typically uses character-theoretic methods to estimate the number of homomorphisms from T_{a,b,c} to G and then uses a knowledge of the maximal subgroups of G to get a lower bound on the number of these homomorphisms which are surjective; see [10] for a typical example. In this paper we present a third method to prove (or disprove) that various groups are (a, b, c)-groups. Our method is based on deformation theory of representation varieties. In a previous paper [15], we used similar methods to study the representation variety Hom(Γ, G) where Γ is a general Fuchsian group and G is a quasisimple real Lie group.
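The bound just recalled is easy to confirm by brute force: among hyperbolic triples, μ = 1/a + 1/b + 1/c is maximised exactly at (2, 3, 7). A small Python search (the cutoff 50 is an arbitrary choice; for fixed a and b, μ only decreases as c grows, so small entries suffice):

from fractions import Fraction

best_mu, best_triple = Fraction(0), None
N = 50  # search cutoff; the maximiser only involves small entries
for a in range(2, N + 1):
    for b in range(a, N + 1):
        for c in range(b, N + 1):
            mu = Fraction(1, a) + Fraction(1, b) + Fraction(1, c)
            if mu < 1 and mu > best_mu:  # (a, b, c) is a hyperbolic triple
                best_mu, best_triple = mu, (a, b, c)
print(best_mu, best_triple)  # prints 41/42 (2, 3, 7)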
In the current paper we use these methods to study the representation variety Hom(Γ, G) where this time Γ = T = T_{a,b,c} is a (hyperbolic) triangle group and G denotes a quasisimple algebraic group defined over a field F (i.e., a semisimple algebraic group over F which is absolutely simple modulo its center). A key point is: there exist an infinite field F and a representation ρ ∈ Hom(T, G(F)) with Zariski dense image if and only if, for infinitely many q, G(q) is an (a, b, c)-group. Moreover, if char(F) = 0 and if in addition ρ is not locally rigid, then T is saturated with such finite (simple) quotients (see Definition 1.1 below).

Given an irreducible Dynkin diagram X, we say that a quasisimple algebraic group G defined over a field F is of type X if the associated diagram is X. By a slight abuse of notation, we write X(F) for G(F) where G is the adjoint simple form, and for finite fields F = F_q, we write X(q) for the finite simple (untwisted) Chevalley group of type X over F_q. If G is a simple compact Lie group, then G = G(R) for some simple R-group G, and in this case we say that G is of type X if G is of type X.

Definition 1.1. Given an irreducible Dynkin diagram X, we say that T is saturated with finite quotients of type X if there exist p_0 and e in N such that for all p > p_0, the finite simple group X(p^{eℓ}) is a quotient of T for every ℓ ∈ N, and for a set of positive density of primes p, we even have that X(p^ℓ) is a quotient of T for every ℓ ∈ N.

Our main result is the following:

Theorem 1.2. For every hyperbolic triangle group T = T_{a,b,c} and every irreducible Dynkin diagram X, T is saturated with finite quotients of type X, except possibly if (X, T) appears in Table 1 below.

Let us say right away that Theorem 1.2 is not the best result we can get via this method. We chose to illustrate in this paper the main point, which is the relevance of deformation theory to the problem of characterizing finite simple quotients of triangle groups. In a second paper, we will exploit the method further to get stronger results, paying the price of being more technical. In particular, in [16] we show that the six possibly exceptional hyperbolic triangle groups in Table 1, namely those in S = {T_{2,4,6}, T_{2,6,6}, T_{2,6,10}, T_{3,4,4}, T_{3,6,6}, T_{4,6,12}}, are not really exceptions.

To prove the theorem we want to get Zariski dense representations of T into a group G of type X which are not locally rigid. To this end we will start with a representation of T to SO(3, R), then compose with the principal homomorphism from SO(3, R) to a compact simple Lie group G of type X, and deform the resulting (non-dense) homomorphism T → G. For this reason we have to exclude the six triangle groups in S, which are the (only) hyperbolic triangle groups without SO(3, R)-dense representations (see [15]). In [16] we will push forward the general method in order to include these six groups and to eliminate some more cases of Table 1. The way to obtain a Zariski dense representation of T to G = G(R) is by deforming the representation T → G induced from the principal homomorphism SO(3, R) → G. This is done in one step if SO(3, R) is maximal in G, but in some cases we have to do it in two or even three steps (through "steps in the ladder"; see §5).

Going through Table 1 we can deduce the following corollary: for every irreducible Dynkin diagram X, and for almost every hyperbolic triple (a, b, c), the group T = T_{a,b,c} is saturated with finite quotients of type X. In particular, it is saturated with finite quotients of type E_8.
This last sentence answers a question we were asked by Guralnick. Let us however mention one weakness of our method: it uses a non-explicit deformation ρ of a starting representation ρ_0. We therefore do not have good control on the ring of definition of ρ. As a result, we cannot give an explicit upper bound for p_0 or e in Definition 1.1, and so our method cannot give a result of the kind proved in [19] stating that for r ≥ 286 every finite simple group of type A_r is a quotient of T_{2,3,7}.

Finally, let us call the attention of the reader to a slightly surprising corollary of our work. In [5] (see also [21]) it was shown that many simply connected finite groups of classical type are not Hurwitz. For example, if n ∈ {4, 6, 8, 10, 12, 14, 16, 18, 22}, then Sp_n(q) with q odd is never Hurwitz. Our results show that if the corresponding simple versions PSp_n(q) of the above groups are considered instead, infinitely many of them are Hurwitz groups (for any fixed n ≠ 4).

We use the following definition, following [21]: the triple (a, b, c) is said to be rigid for G if

δ^G_a + δ^G_b + δ^G_c = 2 dim G,   (1.3)

where, for m ∈ N, δ^G_m denotes the dimension of the subvariety of G consisting of elements of order dividing m. If (1.3) does not hold, (a, b, c) is said to be reducible (respectively, nonrigid) for G according as δ^G_a + δ^G_b + δ^G_c is less (respectively, greater) than 2 dim G. Note that this is not the same as Thompson's rigidity condition (see [32]) but is related to it (see [30, 21, 18]).

Theorem 1.7. Let G/F_p be a quasisimple algebraic group of type X and let d be the determinant of the Cartan matrix of X. If (a, b, c) is rigid for G and p ∤ abcd, then there are only finitely many positive integers ℓ such that G(F_{p^ℓ}) is a quotient of T = T_{a,b,c}. Moreover, if this holds in the case that G is adjoint, then only finitely many finite simple groups of type X and characteristic p are (a, b, c)-groups.

So Marion's conjecture is true except possibly if p divides abcd. For a given T = T_{a,b,c} and a given X, this excludes only finitely many primes. Theorem 1.7 is proved by showing that under its hypotheses, the epimorphisms from T to G(F_{p^ℓ}) are all locally rigid (when considered as elements of Hom(T, G(F̄_p))). Hence there are only finitely many. We should mention that Marion classified the rigid pairs ((a, b, c), G) and proved this conjecture for many of them by a case by case study, together with the notion of linear rigidity defined in [30]. Our approach gives a conceptual explanation and dispenses with the assumption that a, b and c are prime numbers.

Deformation theory of Fuchsian groups

In his well-known paper, Weil [33] introduced the language of cohomology into deformation theory. He proved:

Theorem 2.1. Let Γ be a finitely generated group and G an algebraic group defined over a field F with Lie algebra g. Let ρ ∈ Hom(Γ, G) be a representation and Ad ∘ ρ the representation induced on g. Suppose H^1(Γ, Ad ∘ ρ) = 0. Then ρ is locally rigid, i.e. there is a neighborhood of ρ consisting entirely of conjugates of ρ by elements of G. More precisely, there exists a universal domain K ⊃ F such that every K-point in some Zariski neighborhood of ρ in Hom(Γ, G) lies in the G(K)-orbit of ρ.

Weil then showed how to compute the space of cocycles for a general finite dimensional representation of a general Fuchsian group. For conciseness, we will present here the computation only for hyperbolic triangle groups T = T_{a,b,c} (see [33] and [15] for the general case). Let Γ act via a representation s on a vector space V, writing s(γ) for the action of γ ∈ Γ.
Let Z^1(Γ, V) (respectively, B^1(Γ, V)) be the space of 1-cocycles (respectively, 1-coboundaries) and let H^1(Γ, V) = Z^1(Γ, V)/B^1(Γ, V). For a 1-cocycle φ of T = T_{a,b,c} we say that φ is parabolic if for every finite subgroup S of T, φ|_S is a 1-coboundary. It is well-known that the maximal finite subgroups of T are the conjugates of ⟨x⟩, ⟨y⟩ and ⟨z⟩ of (1.1). It follows that if p = char(F) does not divide abc then every 1-cocycle is parabolic. Let P̃^1(T, V) be the space of parabolic 1-cocycles and P^1(T, V) = P̃^1(T, V)/B^1(T, V) its image in H^1(T, V). In [33, §6, pp. 155-156], Weil computed dim P̃^1(T, V) and dim P^1(T, V) for a general representation s of T = T_{a,b,c} (in fact for every Fuchsian group, but we specialize his equations to our case):

dim P̃^1(T, V) = −d + i* + e_x + e_y + e_z,   (2.1)
dim P^1(T, V) = −2d + i + i* + e_x + e_y + e_z,   (2.2)

where d = dim V, i is the dimension of the space of invariants of s, i* is the dimension of the space of invariants of s* (the dual of s), and for t ∈ {x, y, z}, e_t = rank(I_d − s(t)).

The following consequence will be used several times:

Lemma 2.2. Let s_1 and s_2 be two representations of T on a vector space V over a field of characteristic zero, lying in a common irreducible component of the representation variety, and suppose that the spaces of invariants of s_1, s_2 and of their duals all vanish. Then dim H^1(T, s_1) = dim H^1(T, s_2).

Proof. For j ∈ {1, 2}, let V^{s_j(x)} (respectively, V^{s_j(y)} and V^{s_j(z)}) be the fixed point space of s_j(x) (respectively, s_j(y) and s_j(z)) in V. Since V is defined over a field of characteristic zero and the dimensions of the spaces of invariants of s_j and s_j* are zero, (2.2) yields

dim H^1(T, s_j) = −2d + e^{(j)}_x + e^{(j)}_y + e^{(j)}_z, where e^{(j)}_t = d − dim V^{s_j(t)}.

Since the restrictions of two representations in a common irreducible component to a cyclic subgroup are conjugate, we get dim V^{s_1(x)} = dim V^{s_2(x)} (and similarly for y and z); this yields the result.

Let H be a real form of PGL_2 and let ρ : T = T_{a,b,c} → H(R) be an H-dense representation (in the sense of [15]), i.e. one whose image is Zariski dense in H. Let ρ : T = T_{a,b,c} → H be such a representation and let s = Ad ∘ ρ be the action on the Lie algebra of H(C). Then the eigenvalues of s(x) (respectively, s(y), s(z)) are 1, ω, ω^{−1}, where ω is a primitive a-th root (respectively, b-th, c-th root) of unity. Thus e_x = e_y = e_z = 2. Recall that in general the adjoint representation of a simple group, in characteristic zero, is self-dual. Also, as T is Zariski dense in H, there are no invariants. Hence i = i* = 0. Since here d = 3, (2.2) gives dim P^1(T, Ad ∘ ρ) = 0 and hence dim H^1(T, Ad ∘ ρ) = 0. In particular, by Theorem 2.1, ρ is locally rigid in Hom(T, PGL_2(C)), and in Hom(T, SO(3)) when ρ is an SO(3)-dense representation.

Let us now move to more general representations of T = T_{a,b,c} into an algebraic group G. It is a well-known theorem of de Siebenthal [27] and Dynkin [6] that for every (adjoint) simple algebraic group G defined over C there exists a conjugacy class of principal homomorphisms SL_2 → G such that the image of any nontrivial unipotent element of SL_2(C) is a regular unipotent element of G(C). The restriction of the adjoint representation of G to SL_2 via the principal homomorphism is a direct sum of the V_{2e_j} (1 ≤ j ≤ r), where r is the Lie rank of G, e_1, . . . , e_r is the sequence of exponents of G, and V_k denotes the k-th symmetric power of the two-dimensional irreducible representation of SL_2 (see [14]). This is a (k + 1)-dimensional representation and hence

dim G = Σ_{j=1}^{r} (2e_j + 1).   (2.4)

Note that the homomorphism SL_2 → Ad(G) factors through PGL_2. The principal homomorphism induces, by restriction, a homomorphism from SU(2) to G(C)_c, where G(C)_c denotes a maximal compact subgroup of G(C). The group G(C)_c is actually isomorphic to G_c(R) where G_c is a compact real form of G. When G acts on its Lie algebra, the latter homomorphism factors through SO(3), and we also call the resulting homomorphism SO(3) → G(C)_c the principal homomorphism.
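For concreteness, the identity (2.4) can be checked against the standard table of exponents; the following verification for type E_8 (whose exponents are 1, 7, 11, 13, 17, 19, 23, 29, a standard fact) is a sketch we add for orientation.

```latex
% Sanity check of dim G = sum_j (2 e_j + 1) for G of type E_8 (rank r = 8):
\[
\sum_{j=1}^{8}(2e_j+1)
  = 2\,(1+7+11+13+17+19+23+29)+8
  = 2\cdot 120 + 8
  = 248
  = \dim E_8 .
\]
```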
Let now ρ_0 : T → PGL_2 → G be the representation induced from the principal homomorphism PGL_2 → G and consider the representation Ad ∘ ρ_0 of T on the Lie algebra g of G. The eigenvalues of Ad ∘ ρ_0(x) are ω^{−2e_j}, ω^{−2e_j+2}, . . . , ω^{2e_j−2}, ω^{2e_j} for j = 1, . . . , r, where ω is a primitive root of unity of degree 2a (and similarly for Ad ∘ ρ_0(y) and Ad ∘ ρ_0(z) with 2b and 2c, respectively). One checks that

e_x = Σ_{j=1}^{r} (2e_j − 2⌊e_j/a⌋) = dim G − Σ_{j=1}^{r} (1 + 2⌊e_j/a⌋),   (2.5)

and similarly for e_y and e_z. As PGL_2 has no invariants on g (all the V_{2e_j} are nontrivial), we deduce from (2.2) and (2.5) the following result. In the statement below, G denotes an absolutely simple algebraic group of adjoint type defined over R of type X and rank r, and H an absolutely simple form of PGL_2 defined over R. The result will be used in this paper for H = SO(3) and G a compact real form, whereas in [16] we will need it for the split case.

Proposition 2.3. Let ρ_0 : T → G be the representation induced from the principal homomorphism H → G, and write n_1 = a, n_2 = b, n_3 = c. Then

dim H^1(T, Ad ∘ ρ_0) = dim G − Σ_{k=1}^{3} Σ_{j=1}^{r} (1 + 2⌊e_j/n_k⌋).

In the next result, G again denotes an absolutely simple algebraic group of adjoint type defined over R of type X and rank r.

Lemma 2.4. We have

Σ_{k=1}^{3} Σ_{j=1}^{r} (1 + 2⌊e_j/n_k⌋) < dim G,   (2.6)

except in six exceptional cases (a)-(f), among which: (b) X = A_2 and n_1 = 2. In particular, with the notation of Proposition 2.3, outside these exceptional cases we have dim H^1(T, Ad ∘ ρ_0) > 0.

Remark 2.5. In all the exceptional cases we get equality in (2.6), as would be expected from Proposition 2.3.

Proof. Without loss of generality we may assume (when dealing with the non-exceptional cases) that n_3 ≤ 7, since the left hand side of (2.6) is monotonically decreasing in each n_k. We note that (1 + 2⌊e_j/n_k⌋) − (2e_j + 1)/n_k depends only on e_j modulo n_k. On the other hand, by (2.4),

Σ_{k=1}^{3} Σ_{j=1}^{r} (2e_j + 1)/n_k = dim G (1/n_1 + 1/n_2 + 1/n_3) ≤ (41/42) dim G,   (2.7)

where in the above inequality we used (1.2). The exponents of the different root systems are as follows (see, for example, [1]). One then tabulates Σ_j ((1 + 2⌊e_j/n⌋) − (2e_j + 1)/n) for each root system (respectively, family of root systems) of exceptional (respectively, classical) type and for each n ≤ 7. This table together with (2.7) immediately implies the lemma for A_r when r ≥ 10, B_r, C_r, D_r when r ≥ 9, E_7 and E_8. This reduces us to a finite list of cases which can be checked by hand, yielding the exceptions (a)-(f) listed above.

Finally, let us recall the final sentence of Weil in his paper [33], which gives (when specialized to our case of interest):

Theorem 2.6. Suppose that Ad ∘ ρ and (Ad ∘ ρ)* have no invariants on g and g*, respectively. Then ρ has a nonsingular neighborhood in Hom(T, G) of dimension −d + e_x + e_y + e_z.

We are now ready to put all the above information together and prove our first main result.

Proof of Marion's conjecture

Let us consider a general representation ρ : T = T_{a,b,c} → G, where G is a simple algebraic group defined over an algebraically closed field F of characteristic p ≥ 0, and the action Ad ∘ ρ on the Lie algebra g of G, where Ad denotes the adjoint representation of G. In the notation of (2.2), d = dim g, i (respectively, i*) is the dimension of the space of invariants of Ad ∘ ρ (respectively, (Ad ∘ ρ)*) on g (respectively, g*), and for t ∈ {x, y, z}, e_t = rank(I_d − Ad ∘ ρ(t)). In particular, we have d = dim G and e_t ≤ dim t^G, with equality in the latter inequality if p does not divide the order of t, where, for t ∈ {x, y, z}, t^G denotes the conjugacy class of ρ(t) in G. Setting δ_a to be the dimension of the subvariety G[a] of G consisting of elements of order dividing a (and similarly for δ_b and δ_c), we have e_x ≤ δ_a, e_y ≤ δ_b and e_z ≤ δ_c. Thus in this case (2.2) gives:

dim P^1(T, Ad ∘ ρ) ≤ −2 dim G + i + i* + δ_a + δ_b + δ_c.

In particular, if p ∤ abc and i = i* = 0, then ρ is locally rigid. In summary we get:

Lemma 3.1. Suppose (a, b, c) is rigid for G. If p does not divide abc and ρ : T = T_{a,b,c} → G is such that Ad ∘ ρ and (Ad ∘ ρ)* have no invariants, then H^1(T, Ad ∘ ρ) = 0, and so ρ is locally rigid.
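Before turning to Marion's conjecture, two sample evaluations of the formula of Proposition 2.3 may help fix ideas. These are our own worked examples, computed from the formula in the form reconstructed above together with the standard exponent tables; the first illustrates the generic positivity asserted in Lemma 2.4 (and is consistent with the saturation in type E_8 noted in the introduction), the second the equality of Remark 2.5 in the exceptional case (b).

```latex
% G of type E_8 (dim G = 248, exponents 1,7,11,13,17,19,23,29), T = T_{2,3,7}:
%   n=2: sum_j (1 + 2*floor(e_j/2)) = 8 + 2(0+3+5+6+8+9+11+14) = 120
%   n=3: 8 + 2(0+2+3+4+5+6+7+9)                                =  80
%   n=7: 8 + 2(0+1+1+1+2+2+3+4)                                =  36
\[
\dim H^1\bigl(T_{2,3,7},\,\mathrm{Ad}\circ\rho_0\bigr) = 248-(120+80+36) = 12 > 0 .
\]
% G of type A_2 (dim G = 8, exponents 1,2), T = T_{2,b,c} with b, c >= 3:
%   n=2: (1+0)+(1+2) = 4;   n=b and n=c: (1+0)+(1+0) = 2 each, hence
\[
\dim H^1\bigl(T_{2,b,c},\,\mathrm{Ad}\circ\rho_0\bigr) = 8-(4+2+2) = 0 ,
\]
% matching the equality in (2.6) noted in Remark 2.5 for case (b).
```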
We are now ready to prove our version of Marion's conjecture (see Theorem 1.7).

Proof of Theorem 1.7. As p ∤ d, there is no trivial factor in the Jordan-Hölder series of Ad ∘ ρ as a G-representation (see [13]), so, by [29, §13], the same holds at the level of G(F_{p^ℓ})-representations when ℓ is sufficiently large. (In fact ℓ = 1 suffices, but the argument is more subtle and uses the fact that the adjoint representation is p-restricted under the hypothesis.) Thus, if ρ : T → G(F_{p^ℓ}) is an epimorphism, Ad ∘ ρ satisfies the hypothesis of Lemma 3.1, so ρ is locally rigid; hence there are only finitely many such ℓ, and the first assertion follows. For the second one, if G is a sufficiently large finite simple group of type X in characteristic p, it can be regarded either as the derived group of G(F_{p^ℓ}) for a simple adjoint group G, or as the quotient of G_{sc}(F_{p^ℓ}) by its center. Every irreducible representation of G can be regarded as an irreducible representation of G_{sc} and therefore, when ℓ is sufficiently large, as an irreducible representation of G_{sc}(F_{p^ℓ}) which factors through G. Thus if ℓ is sufficiently large, G cannot be an (a, b, c)-group.

Marion [21] discussed the notions of reducible, rigid and nonrigid hyperbolic triples only over fields of positive characteristic. In order to treat characteristic zero and positive characteristic uniformly (without limiting ourselves to the split adjoint case) it is useful to use the language of schemes. Let G denote an affine group scheme of finite type over Z whose generic fiber G_Q is quasisimple. By [11, IV 11.1.1], there exists a finite subset S ⊂ Spec Z such that G is flat over the complement of S. By [4, XIX 2.5], G is a semisimple group scheme over the complement of a (possibly larger) finite subset of Spec Z; in particular, the fibers are all semisimple algebraic groups. By [4, XXII 2.8], for every sufficiently large prime p, all the fibers G_{F̄_p} have the same root datum (in particular, the same Dynkin diagram). By [11, IV 9.5.5], for every sufficiently large prime p and every algebraically closed field F of characteristic p, we have δ^{G_C}_a = δ^{G_F}_a.

Proposition 3.2. Let G be an affine group scheme of finite type over Z whose generic fiber is a quasisimple algebraic group. For m ∈ N let δ^{G_C}_m be the dimension of the variety of all elements of G_C of order dividing m. Let T = T_{a,b,c} be a hyperbolic triangle group. Then the following assertions hold:

(i) δ^{G_C}_a + δ^{G_C}_b + δ^{G_C}_c ≥ 2 dim G_C.

(ii) If δ^{G_C}_a + δ^{G_C}_b + δ^{G_C}_c = 2 dim G_C, then there are only finitely many conjugacy classes of representations of T onto Zariski dense subgroups of G_C, and for almost all primes p, G(F_{p^ℓ}) is a quotient of T for only finitely many ℓ.

If δ^{G_C}_a + δ^{G_C}_b + δ^{G_C}_c > 2 dim G_C then, as we will show later in the paper, there is usually a nontrivial deformation space of Zariski dense representations of T into G(C), but we do not know if this is always the case.

Proof. The first part follows from Scott's formula [26] in a similar way to Marion's argument in [21, §2]. The second part follows from the argument used in the proof of Theorem 1.7: in this case all Zariski dense representations of T to G_C (and also to G_F if char(F) is sufficiently large) are locally rigid, and hence there are only finitely many.

In order to put the open cases of Theorem 1.2 in the context of the rigidity conjecture, we now classify the rigid hyperbolic triples of integers for a simple algebraic group G(C) over C of adjoint type.

Theorem 3.4. (i) There are no reducible hyperbolic triples of integers for G(C), i.e. triples with δ^{G_C}_a + δ^{G_C}_b + δ^{G_C}_c < 2 dim G_C. (ii) The rigid hyperbolic triples of integers for G(C) are as in the following table: . . . (2, 4, 5), (2, 5, 5) . . .
Proof. One can extend the argument given in the proof of [21, Theorem 3] to simple algebraic groups defined over C and to the case where a, b, c are not necessarily primes. Note that, by model theory, the result over C follows from the same result over algebraically closed fields of large characteristic.

Remark 3.5. These cases are also rigid over an algebraically closed field of characteristic p, for every prime p. Thus, our proof of Marion's conjecture (see Theorem 1.7) shows that in all these cases T_{a,b,c} is not saturated with finite quotients of type X (where X is one of the six types in the above table).

Deformations and saturation with finite quotients

In the previous section, we used deformation theory to prove that many finite simple groups are not quotients of a given triangle group T_{a,b,c}. In this section we will use this theory to show that many of them are. Recall that, with finitely many exceptions (X, p, ℓ), for any given connected Dynkin diagram X and any prime p, there exists a simply connected, quasisimple algebraic group G defined over F_p such that for each ℓ, X(p^ℓ) is the quotient of G(F_{p^ℓ}) by its center. We denote by X(C) the group of complex points of the adjoint simple algebraic group of type X over C. Let us recall Definition 1.1, defining the notion of T = T_{a,b,c} being saturated with finite (simple) quotients of type X. Clearly this definition makes sense for every finitely generated group, not only for triangle groups. The key point of our method is the following theorem, which is of independent interest:

Theorem 4.1. Let Γ be a finitely presented group and X a connected Dynkin diagram. The following conditions are equivalent:

(1) There exists a representation ρ : Γ → X(C) with Zariski dense image such that ρ is not locally rigid in Hom(Γ, X(C)).
(2) The group Γ is saturated with finite quotients of type X.
(3) For infinitely many primes p, Γ has infinitely many quotients of type X(p^ℓ).
(4) ...
(5) The number of epimorphisms |Epi(Γ, X(p))|, up to conjugation by X(p), is unbounded as a function of p.

We use the following lemma.

Lemma 4.2. ... (c) The restriction of f to primes is unbounded. (d) There exists a number field K with ring of integers O such that if p is sufficiently large and Hom(O, F_q) is not empty for q = p^ℓ, then f(q) > 0.

Proof. The image of Spec A → Spec Z is a constructible set, so either it is a single prime (p) or it is the complement of a finite set of primes. In the first case, the field of fractions of A has characteristic p > 0 and f(q) = 0 for all powers of every prime except p. Thus, we may assume that we are in the second case. Let X = Spec A and Y = Spec B. Let K_0 denote the integral closure of Q in the fraction field of B, which contains the integral closure of Q in the fraction field of A. If K is any finite extension of K_0 which is Galois over Q, then X ×_{Spec Q} Spec K (respectively, Y ×_{Spec Q} Spec K) is a finite disjoint union of geometrically irreducible components [11]. As A and B are integral domains, the same is true of A ⊗ Q and B ⊗ Q, so the irreducible components of A ⊗ K and B ⊗ K each form a single Galois orbit. If n denotes the dimension of the generic fiber of X, then every geometric component of this generic fiber has dimension n, and the same is true for Y. By definition of K, the generic fiber of X (respectively, Y) over Spec O has geometrically irreducible components, so the same is true of all components of all but finitely many fibers of X (respectively, Y) over closed points of Spec O [11, Prop. 9.7.8].
Moreover, all components of all but finitely many fibers are n-dimensional [11, Prop. 9.5.5]. If n = 0, therefore, for all p sufficiently large, the sum Σ_{ℓ≥1} f(p^ℓ) is finite and bounded above by a constant independent of p. Thus, none of the conditions (b)-(d) can hold, and we are done. We therefore assume n > 0. If p ≫ 0 and q = p^ℓ is the cardinality of a residue field of a prime ideal of O, then by the Weil bound the number of F_q-points on each such geometrically irreducible n-dimensional fiber is q^n(1 + O(q^{−1/2})). The fibers of the morphism Y_K → X_K are finite and therefore bounded. It follows that if q ≫ 0, the number of points in Y(F_q) which map to points of X(F_q) which are not defined over a proper subfield can be bounded below by εq^n for some ε > 0. This immediately implies (d), and combined with Chebotarev density implies (b) and (c).

Lemma 4.3. If G is a quasisimple algebraic group over a field K of characteristic zero, F an extension field of K, and Γ ⊂ G(F) a Zariski dense subgroup such that Γ ∩ G(K) is of finite index in Γ, and the adjoint trace Tr(Ad(γ)) lies in K for all γ ∈ Γ, then there exists a finite extension L of K in F such that Γ ⊂ G(L).

Proof. Let γ ∈ Γ. As the adjoint representation of G is irreducible and Γ′ := Γ ∩ G(K) is Zariski dense in G, it follows that the K-span in the endomorphism algebra of the Lie algebra of G of {Ad(γ′) | γ′ ∈ Γ′} is the whole matrix algebra. For each γ ∈ Γ, the trace of Ad(γγ′) lies in K for all γ′ ∈ Γ′, and it follows that Ad(γ) is defined over K. The adjoint representation factors through the adjoint group G_ad, on which it is faithful, and it follows that the image of each γ in G_ad(F) lies in G_ad(K). Thus, each γ lies in G(K_γ) where K_γ is a finite extension of K. Choosing one representative γ for each class in Γ/Γ′, we obtain a finite extension L such that Γ ⊂ G(L).

We can now prove Theorem 4.1.

Proof. Let G/Spec Z be the simply connected split Chevalley group scheme associated to the Dynkin diagram X, and let G := G_C denote its fiber over Spec C. As Γ is finitely presented, H^2(Γ, A) is finite for any finite ZΓ-module A. Therefore, there are finitely many different central extensions of Γ by a given finite abelian group. Thus, whenever X(p^ℓ) is a quotient of Γ, G(F_{p^ℓ}) is a quotient of Γ̃ for Γ̃ an element of a finite set Σ of (finitely generated) central extensions of Γ, and conversely, if G(F_{p^ℓ}) is a quotient of any Γ̃ ∈ Σ (indeed, of any central extension of Γ), then X(p^ℓ) is a quotient of Γ. Likewise, for every homomorphism Γ → X(C) with Zariski dense image, there corresponds a dense homomorphism from some Γ̃ ∈ Σ to G(C), the simply connected cover of X(C), and conversely. It therefore suffices to prove the equivalence of the following variants of conditions (1)-(5), applied to each Γ̃ ∈ Σ:

(1) There exists a representation ρ : Γ̃ → G(C) with Zariski dense image such that ρ is not locally rigid in Hom(Γ̃, G(C)).
(2) The group Γ̃ is saturated with quotients of type G(F_q), where q ranges over prime powers.
(3) For infinitely many primes p, Γ̃ has infinitely many quotients of type G(F_{p^ℓ}).
(4) ...
(5) The number of epimorphisms |Epi(Γ̃, G(F_p))|, up to conjugation by G(F_p), is unbounded as a function of p.

Let Y := Hom(Γ̃, G), a scheme over Spec Z, and let Y_C denote its fiber over Spec C. Then Y_C = Spec R for some finitely generated C-algebra R, and there exists a universal homomorphism Φ : Γ̃ → G(R), such that every homomorphism ρ : Γ̃ → G(C) corresponds, by specialization, to an element of Y_C(C). We now assume that condition (1) holds. Let Z denote an irreducible component of Y_C with function field F.
We assume that Z is chosen such that there exists z ∈ Z(C) corresponding to a homomorphism with Zariski dense image which is not locally rigid. It follows that the representation Γ̃ → G(F) corresponding to the generic point is also Zariski dense. For each γ ∈ Γ̃, there exists an element T_γ ∈ R given by Tr(Ad(Φ(γ))). If every T_γ maps to a constant under the homomorphism R → F, then traces are fixed as ρ varies over Z(C). By the Brauer-Nesbitt theorem, the representations Ad ∘ ρ are all isomorphic as ρ ranges over all representations Γ̃ → G(C) corresponding to points of Z(C). As the outer automorphism group of G is finite, up to conjugation there are only finitely many possibilities for ρ. This contradicts the assumption that some ρ is not locally rigid. Thus, some T_γ has non-constant image in F.

We now apply Weisfeiler's strong approximation theorem to ρ : Γ̃ → G(F). Let A_0 denote the subring of F generated by the image of {T_γ | γ ∈ Γ̃} under the natural homomorphism R → F, and let K denote the field of fractions of A_0. By [34, Theorem 1.1], there exists a normal subgroup Γ′ of finite index in Γ̃, a finitely generated Z-algebra A ⊂ K with fraction field K, a group scheme H over A whose generic fiber is identified via an isomorphism i with the Zariski closure of ρ(Γ′), and a homomorphism Γ′ → H(A) whose composition with i coincides with the restriction of ρ to Γ′ and such that Γ′ maps onto H(F_q) for every surjective homomorphism A → F_q. For each coset of Γ′ in Γ̃, we choose a representative γ_i. By Lemma 4.3, i^{−1}(ρ(γ_i)) ∈ H(L_i) for some finite extension L_i/K. We can choose a finite extension L of K such that each L_i is contained in L and H_L is split. Let B denote a finitely generated A-algebra in L such that L is the fraction field of B and i^{−1}(ρ(γ_i)) ∈ H(B) for each i. Thus i^{−1}(ρ(Γ̃)) ⊂ H(B). Replacing B by B[1/b] for some nonzero b ∈ B, we may assume H_B is split. As Spec B → Spec A is generically finite, after replacing A and B by A[1/a] and B[1/a] respectively, we may further assume that B is module-finite over A.

We now assume, on the contrary, that every representation ρ : Γ̃ → G(C) with Zariski dense image is locally rigid. This implies that for each irreducible component Z of Y_C, either ρ(Γ̃) fails to be Zariski dense for all ρ parametrized by z ∈ Z(C), or for each γ ∈ Γ̃, Tr(Ad(ρ(γ))) is constant on Z(C). As Y_C has finitely many irreducible components, there are only finitely many possibilities for the function γ ↦ Tr(Ad(ρ(γ))) as ρ ranges over all Zariski dense homomorphisms ρ : Γ̃ → G(C). The set of such functions is stable by the automorphism group of C, so it follows that all traces of all Zariski dense homomorphisms lie in some number field K.

We claim that there exists a finite collection Y_i of locally closed subschemes of Y, each smooth over Spec Z, such that every closed point of Y of sufficiently large characteristic p lies in some Y_i. Indeed, we note first that without loss of generality, we may assume that Y is irreducible and reduced. The generic fiber of Y is a variety over Q, so its singular locus is a proper closed subvariety. Let Z denote the Zariski closure of this subvariety in Y, endowed with its reduced induced subscheme structure. By Noetherian induction, we may assume that Z admits such a finite collection, so it suffices to prove the same for Y \ Z. The generic fiber of this scheme is nonsingular, so it is smooth over some open neighborhood of the generic point of Spec Z [11, 17.7.11]. At the cost of throwing out a finite set of closed fibers, what remains is smooth.
If y is a closed point of Y with residue field F_q whose characteristic p is sufficiently large, by the infinitesimal lifting property of smooth morphisms there exists a morphism from the spectrum of W(F_q), the ring of Witt vectors over F_q, to Y, mapping the closed point of Spec W(F_q) to y. If y corresponds to a surjective homomorphism φ : Γ̃ → G(F_q), then we have a homomorphism φ̃ : Γ̃ → G(W(F_q)) which lifts this surjective homomorphism. By a theorem of Vasiu [31], if q is sufficiently large, this implies that φ̃ has dense image. Let Q_q denote the fraction field of W(F_q) (i.e., the unramified extension of Q_p of degree [F_q : F_p]). As there are finitely many subextensions of Q_p in Q_q, there exists γ ∈ Γ̃ such that Q_p(Tr(Ad(φ̃(γ)))) = Q_q. If [F_q : F_p] > [K : Q], this is impossible, which shows that conditions (2)-(4) of the theorem cannot hold.

For condition (5), we note that if p is sufficiently large, Ad is an irreducible representation of G(F_p), so two surjective homomorphisms φ_1, φ_2 : Γ̃ → G(F_p) such that Ad ∘ φ_1 and Ad ∘ φ_2 have the same semisimplification are equivalent up to tensor product with a character of the center of G(F_p) (whose order is bounded independently of p). If Ad ∘ φ_1 and Ad ∘ φ_2 have distinct semisimplifications, then there exists γ ∈ Γ̃ such that Tr(Ad(φ_1(γ))) ≠ Tr(Ad(φ_2(γ))). This implies that Tr(Ad(φ̃_1(γ))) ≠ Tr(Ad(φ̃_2(γ))), and therefore that φ̃_1 and φ̃_2 correspond to non-conjugate Zariski dense homomorphisms Γ̃ → G(Q̄_p). Fixing an isomorphism between C and Q̄_p, this gives an upper bound on the number of conjugacy classes of surjective homomorphisms Γ̃ → G(F_p), for any prime p sufficiently large, and therefore shows condition (5) cannot hold.

In applying Theorem 4.1 to triangle groups we will use condition (1) in the following form:

(1′) There exists a Zariski dense representation ρ : T → G(R) which is not locally rigid in Hom(T, G), where G is a simple real algebraic group of type X.

The next section (and to a large extent the remainder of the paper) will be devoted to showing that for most pairs (T, X) we do, indeed, have ρ satisfying condition (1′). In fact, we prove the existence of a Zariski dense homomorphism, not locally rigid, from T to the real points of the compact real simple (in particular, adjoint) group with Dynkin diagram X.

Deformations of the principal homomorphism

In this section we will apply Theorem 4.1 to show that many triangle groups are saturated with finite quotients of various types. To this end we will deform the principal homomorphism introduced in §2. We have the following theorem:

Theorem 5.1. Let T = T_{a,b,c} be a hyperbolic triangle group, G a compact, adjoint, simple, real algebraic group, ρ_0 : T → G a homomorphism, and H the Zariski closure of ρ_0(T). Assume that H is semisimple and a maximal closed subgroup of G, and (c) that if g (respectively, h) is the Lie algebra of G (respectively, H) (where the action is via Ad ∘ ρ_0), then dim H^1(T, h) < dim H^1(T, g). Then the following assertions hold:

(i) The homomorphism ρ_0 determines a nonsingular point of Hom(T, G).
(ii) Some irreducible component of Hom(T, G) containing ρ_0 has dimension dim H^1(T, g) + dim G and contains a nonsingular point ρ with Zariski dense image.
(iii) For ρ as in (ii), dim H^1(T, Ad ∘ ρ) = dim H^1(T, g).

Proof. Since the Zariski closure H of ρ_0(T) is a maximal subgroup of G, we have Z_G(H) = Z(H), which is finite since H is semisimple. Hence Ad ∘ ρ_0 has no invariants on g. Since the adjoint representation of a simple group in characteristic zero is self-dual, (Ad ∘ ρ_0)* has no invariants on g*. In particular, Theorem 2.6 now yields the first part. Moreover, it now follows from (2.1) and (2.2) that dim Z^1(T, g) = dim H^1(T, g) + dim G, and [15, Corollary 2.5] now yields the second part.

Let us now consider the final part. Being simple, G has finite center. Since ρ : T → G(R) is a Zariski dense representation, it follows that Ad ∘ ρ has no invariants on g.
The same argument as the one given above for (Ad ∘ ρ_0)* yields that (Ad ∘ ρ)* has no invariants on g*. Since ρ_0 and ρ lie in a common irreducible component of Hom(T, G), the result now follows from Lemma 2.2.

Proposition 5.2. Let G be a compact adjoint simple group over R of type A_2, B_n (n ≥ 4), C_n (n ≥ 2), E_7, E_8, F_4 or G_2. Then the image of a principal homomorphism from SO(3) into G is a maximal subgroup.

Note that in the case where G is of type A_1 (i.e. G = SO(3)), we saw that the representation to SO(3) is locally rigid, and indeed in this case we have: for any hyperbolic triangle group T = T_{a,b,c} and any fixed prime p, T has only finitely many quotients of the form PSL_2(p^ℓ). On the other hand:

Theorem 5.3. Let T = T_{a,b,c} be a hyperbolic triangle group not in S (see (2.3)) and G a compact, adjoint, real simple group of one of the following types: A_2, B_n (n ≥ 4), C_n (n ≥ 2), E_7, E_8, F_4, G_2. Let ρ_0 : T → SO(3, R) → G(R) be the representation of T induced from the principal homomorphism SO(3) → G. Unless (T, X) is as in Table 2 (see §6), there exists a nonsingular R-point ρ_1 ∈ Hom(T, G), with Zariski dense image, which belongs to the same irreducible component of Hom(T, G) as ρ_0 and which satisfies dim H^1(T, Ad ∘ ρ_1) = dim H^1(T, g) > 0. In particular, T is saturated with finite quotients of type X, unless (T, X) is as in Table 2.

Remark 5.4. Given X, the triples (a, b, c) appearing in Table 2 appear also as rigid triples (in Marion's sense) for an adjoint algebraic group G of type X, over an algebraically closed field of prime characteristic p, for every prime p.

Proof of Theorem 5.3. By Proposition 5.2 and Lemma 2.4, the hypotheses of Theorem 5.1 hold except for the pairs (T, X) listed in Table 2. The result now follows immediately from Theorems 5.1 and 4.1.

Most of the cases where G is of type A_r or D_r can also be treated using Theorem 5.1, not directly from the principal homomorphism from SO(3), but rather via a "two-step ladder". More precisely:

Theorem 5.5. Let T = T_{a,b,c} be a hyperbolic triangle group not in S (see (2.3)) and G a compact, adjoint, real simple group of type X = A_r (with r ≥ 3 and r ≠ 6), or X = D_r (with r ≥ 5). Let H be a closed subgroup of G of type Y, as follows: for X = A_r, take Y = C_{(r+1)/2} if r is odd and Y = B_{r/2} if r is even; for X = D_r, take Y = B_{r−1}. Let ρ_1 : T → H(R) be the nonsingular, Zariski dense representation provided by Theorem 5.3 (excluding the cases of Table 2). Then the following assertions hold:

(i) dim H^1(T, h) < dim H^1(T, g), except for T and X as in Table 3.
(ii) The cases for which dim H^1(T, h) = dim H^1(T, g) are described in Table 3 (see §6).

Remark 5.6. For the moment, we exclude the cases X = A_6 and X = D_4, as these cases require Y = B_3, which is not covered by Theorem 5.3.

Proof. Note that ρ_0^G, the representation of T induced from the principal homomorphism SO(3) → G, has image lying in H. Also, Ad ∘ ρ_1 has no invariants on g (since the Zariski closure of ρ_1(T) is H, a maximal subgroup with finite center of the simple group G) and Ad ∘ ρ_0^G has no invariants on g (since ρ_0^G is the representation induced from the principal homomorphism). Furthermore, as the adjoint representation of a simple group in characteristic zero is self-dual, (Ad ∘ ρ_1)* and (Ad ∘ ρ_0^G)* have no invariants on g*. Since ρ_1 and ρ_0^G lie in a common irreducible component of Hom(T, G), Lemma 2.2 gives dim H^1(T, Ad ∘ ρ_1) = dim H^1(T, g). We therefore need to prove that, except for T and X as in Table 3, we have

dim H^1(T, h) < dim H^1(T, g),   (5.1)

and then the result will follow from Theorem 4.1. The following line of argument will also be repeated in the proofs of Theorems 5.8 and 5.9, so let us isolate it here:

Claim 5.7. Observe that as H is a subgroup of G, if inequality (5.1) does not hold, then equality holds instead. For convenience, we let n_1 = a, n_2 = b and n_3 = c. Noting, by (2.8), that the exponents of H form a subset of the set of exponents of G, we let E be the set of exponents of G which are not exponents of H.
It follows from (2.4) that inequality (5.1) is equivalent to

Σ_{k=1}^{3} Σ_{j∈E} ⌊e_j/n_k⌋ < Σ_{j∈E} (e_j − 1).   (5.2)

We let L_{(n_1,n_2,n_3),H,G} and R_{H,G} denote respectively the LHS and the RHS of (5.2). In order to check whether inequality (5.2) holds or not, it is useful to put a partial order on the set of hyperbolic triples as follows. Given two hyperbolic triples (n_1, n_2, n_3) and (n′_1, n′_2, n′_3), we say that (n_1, n_2, n_3) ≤ (n′_1, n′_2, n′_3) if and only if n_1 ≤ n′_1, n_2 ≤ n′_2 and n_3 ≤ n′_3. We note that L_{(n_1,n_2,n_3),H,G} decreases with respect to this partial order. Thus for a fixed pair (G, H), if (5.2) holds for a triple (n_1, n_2, n_3), it holds also for every greater triple.

We now check whether inequality (5.2) holds by a case by case analysis using the partial order defined above: we start with the triple (2, 3, 7), proceeding with triples above it. We then repeat this procedure for the other two minimal triples, namely (2, 4, 5) and (3, 3, 4). In each branch of the partially ordered set, we check until we succeed; i.e. once inequality (5.2) holds for a fixed X = A_r and a given triple (n_1, n_2, n_3), it also holds for any triple above it. We omit the details, which can be laboriously checked (a mechanical check along these lines is sketched at the end of this section). This finishes the proof of the theorem when X = A_r.

Let us now treat the case where X = D_r. Here E = {r − 1} and |E| = 1. In particular R_{H,G} is equal to R_r = r − 2 and L_{(n_1,n_2,n_3),H,G} is equal to

L_r = ⌊(r − 1)/n_1⌋ + ⌊(r − 1)/n_2⌋ + ⌊(r − 1)/n_3⌋.

One can give the following crude upper bound for L_r:

L_r ≤ (r − 1)(1/n_1 + 1/n_2 + 1/n_3),

or the following one using (1.2):

L_r ≤ 41(r − 1)/42.

Since 41(r − 1)/42 < r − 2 for r > 43, we are now reduced to the case where r ≤ 43. Here again we check inequality (5.2) for X = D_r (r ≤ 43) by the same argument as before (i.e. going along the partially ordered set); this boils down to a tedious finite case by case analysis.

Here are more cases where a "two-step ladder" works:

Theorem 5.8. Let T = T_{a,b,c} be a hyperbolic triangle group not in S (see (2.3)) and G a compact, adjoint, real simple group of type X = B_3 or type X = E_6. Let H be the following subgroup of G of type Y: for X = B_3, take Y = G_2; for X = E_6, take Y = F_4. Let ρ_1 : T → H(R) be the nonsingular, Zariski dense representation provided by Theorem 5.3 (excluding the cases of Table 2). Then the following assertions hold:

(i) dim H^1(T, h) < dim H^1(T, g), except for T and X as in Table 4.
(ii) The cases for which dim H^1(T, h) = dim H^1(T, g) are described in Table 4 (see §6).

Proof. Arguing as in the proof of Theorem 5.5, we see that dim H^1(T, Ad ∘ ρ_1) = dim H^1(T, g). We therefore need to prove that, except for T and X as in Table 4, we have dim H^1(T, g) > dim H^1(T, h), and then the result will follow from Theorem 4.1. Noting that the exponents of H form a subset of the exponents of G (see (2.8)), we now argue as in Claim 5.7 in the proof of Theorem 5.5, adapting it and its notation to our two cases.

When X is of type D_4 or A_6, we need to use a three-step ladder:

Theorem 5.9. Let T = T_{a,b,c} be a hyperbolic triangle group not in S (see (2.3)) and G a compact, adjoint, real simple group of type X = A_6 or D_4. Consider the chain K < H of subgroups of G where K and H are compact real forms of G_2 and B_3, respectively, inside G. Let ρ_2 : T → H(R) be the nonsingular, Zariski dense representation provided by Theorem 5.8 (excluding the cases of Tables 2 and 4). Then the following assertions hold:

(i) dim H^1(T, h) < dim H^1(T, g) (excluding the cases of Tables 2 and 4).

Proof. As before, Ad ∘ ρ_2 and Ad ∘ ρ_0^G (respectively, (Ad ∘ ρ_2)* and (Ad ∘ ρ_0^G)*) have no invariants on g (respectively, g*). Since ρ_2 and ρ_0^G lie in a common irreducible component of Hom(T, G), Lemma 2.2 gives dim H^1(T, Ad ∘ ρ_2) = dim H^1(T, g). We therefore need to prove that dim H^1(T, g) > dim H^1(T, h), and then the result will follow from Theorem 4.1.
Noting that the exponents of H form a subset of the exponents of G (see (2.8)), we now argue as in Claim 5.7 in the proof of Theorem 5.5, adapting it to our two cases.
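The finite case-by-case verifications invoked above lend themselves to mechanical checking. The following Python sketch automates the check in the D_r two-step ladder (H of type B_{r−1} inside G of type D_r, so E = {r − 1}); it assumes the reconstructed form of inequality (5.2), and all function names are ours.

```python
# Illustrative check of inequality (5.2) for the D_r two-step ladder,
# assuming the reconstructed form: sum_k floor((r-1)/n_k) < r - 2.
from fractions import Fraction
from itertools import combinations_with_replacement

def is_hyperbolic(a: int, b: int, c: int) -> bool:
    """A triple is hyperbolic when 1/a + 1/b + 1/c < 1 (exact arithmetic)."""
    return Fraction(1, a) + Fraction(1, b) + Fraction(1, c) < 1

def ineq_52_holds(r: int, triple: tuple[int, int, int]) -> bool:
    """Compare L = sum_k floor((r-1)/n_k) against R_r = r - 2."""
    return sum((r - 1) // n for n in triple) < r - 2

def failing_triples(r: int, bound: int = 50) -> list[tuple[int, int, int]]:
    """Hyperbolic triples n1 <= n2 <= n3 <= bound violating (5.2).
    Since the LHS decreases as the triple grows in the partial order,
    failures, when present, lie low in the partial order."""
    return [t for t in combinations_with_replacement(range(2, bound + 1), 3)
            if is_hyperbolic(*t) and not ineq_52_holds(r, t)]

if __name__ == "__main__":
    for r in (5, 8, 44):  # for r > 43 the crude bound shows (5.2) always holds
        bad = failing_triples(r)
        print(f"D_{r}: {len(bad)} failing triples up to 50, e.g. {bad[:4]}")
```

For example, for r = 8 the only failure among small triples is (2, 3, 7), while for r = 44 (and beyond) no failures occur, in line with the bound 41(r − 1)/42 < r − 2 for r > 43.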
Clathrin facilitates the internalization of seven transmembrane segment receptors for mating pheromones in yeast.

The role of clathrin in endocytosis of the yeast pheromone receptors was examined using strains expressing a temperature-sensitive clathrin heavy chain. The yeast pheromone receptors belong to the family of seven transmembrane segment, G-protein-coupled receptors. A rapid and reversible defect in uptake of radiolabeled alpha-factor pheromone occurred when the cells were transferred to the nonpermissive temperature. Constitutive, pheromone-independent internalization of newly synthesized a-factor pheromone receptor was also rapidly inhibited in mutant strains at the nonpermissive temperature. In both cases residual endocytosis, 30-50% of wild-type levels, was detected in the absence of functional clathrin heavy chain. Once internalized, the a-factor receptor was delivered to the vacuole at comparable rates in chc1-ts and wild-type cells at the nonpermissive temperature. Clathrin heavy chain was also required for maximal uptake of a mutant a-factor receptor which is dependent on pheromone for internalization. In the presence of a-factor, the internalization rate of the mutant receptor in chc1-ts cells at the nonpermissive temperature was 2.5 times slower than the rate observed for endocytosis of the mutant receptor in wild-type cells. These experiments provide in vivo evidence that clathrin plays an important role in the endocytosis of the seven transmembrane segment pheromone receptors in yeast.

Receptor-mediated endocytosis of many extracellular ligands proceeds through clathrin-coated domains of the plasma membrane known as clathrin-coated pits (Brodsky, 1988; Pearse and Robinson, 1990; Anderson, 1993; Schmid, 1993). The clathrin coats are assembled onto the plasma membrane as polyhedral lattices from trimers of clathrin heavy chains and associated light chains. Assembly is thought to require complexes of associated proteins that bridge the clathrin lattices to the membrane. The clathrin-coated pits collect receptors, invaginate, and pinch off, thereby selectively packaging the receptors into endocytic clathrin-coated vesicles. Receptors known to be internalized through clathrin-coated pits generally share common structural motifs: a large extracellular ligand-binding domain, a single transmembrane sequence, and a cytoplasmic domain (Pearse and Robinson, 1990). In the case of several receptors, the cytoplasmic domains have been shown to contain sequences which mediate interaction with clathrin coats and are necessary for efficient uptake (Chen et al., 1990; Collawn et al., 1990; Ktistakis et al., 1990; Miettinen et al., 1992). A structurally distinct class of cell surface receptors is characterized by seven transmembrane segments (7-TMS) and coupling to trimeric G-proteins (Dohlman et al., 1991).
Several members of this receptor class have been shown to undergo endocytosis, and internalization may lead to long-term desensitization (down-regulation) to the effects of the corresponding ligand (Ascoli and Segaloff, 1987; Raposo et al., 1987, 1989; von Zastrow and Koblika, 1992). The role of clathrin in the endocytosis of 7-TMS receptors has not been resolved.

In the yeast Saccharomyces cerevisiae, 7-TMS receptors are involved in the process of mating. The two haploid mating types of yeast, MATα and MATa, each secrete peptide pheromones, α-factor and a-factor respectively, which bind to specific 7-TMS receptors on the surface of cells of the opposite mating type (Cross et al., 1988; Marsh et al., 1991). Pheromone binding initiates a trimeric G-protein-mediated signal which triggers a program of physiological changes necessary for conjugation (Cross et al., 1988; Marsh et al., 1991). Both α-factor and its receptor are internalized and degraded in the vacuole (Chvatchko et al., 1986; Jenness and Spatrick, 1986; Singer and Riezman, 1990; Davis et al., 1993), and recently it has been shown that endocytosis of the a-factor receptor also occurs (Davis et al., 1993). Although some degree of cellular desensitization to α-factor does not require endocytosis of the receptor, internalization and degradation in the vacuole may enhance recovery from the effects of the pheromone (Rohrer et al., 1993).

The role of clathrin in endocytosis of the yeast mating pheromone receptors has been studied previously (Payne et al., 1988). In yeast cells carrying a deletion of the clathrin heavy chain gene (chc1Δ), endocytosis of α-factor was not completely blocked, but was reduced two- to threefold. The importance of clathrin in this process was not clear, however, because the defective uptake could be attributed to the slow growth rates and morphological abnormalities of chc1Δ cells. Here we report analysis of endocytosis in strains expressing a temperature-sensitive allele of the S. cerevisiae clathrin heavy chain gene (chc1-ts) (Munn et al., 1991; Seeger and Payne, 1992a,b). The chc1-ts allele provides a more direct means to test the involvement of clathrin in endocytosis since, in cells that harbor this allele, clathrin function is perturbed immediately while cell growth continues at normal rates for 1.5-2 h (Seeger and Payne, 1992a,b). The endocytosis assays applied to the mutant cells have been extended to include a newly developed method to monitor uptake of the a-factor pheromone receptor directly (Davis et al., 1993). Characterization of the effects of chc1-ts on endocytosis of wild-type and mutant forms of the a-factor receptor has allowed us to investigate the role of clathrin in both constitutive and pheromone-stimulated uptake. We find that shifting chc1-ts cells to the nonpermissive temperature results in an immediate, reversible but incomplete block in endocytosis of mating pheromone receptors. The loss of clathrin function in chc1-ts cells affects both constitutive and pheromone-stimulated uptake. In all cases, the endocytosis defects occur long before the cells exhibit growth anomalies. Our results argue that clathrin acts at the plasma membrane to selectively internalize the 7-TMS pheromone receptors.

Materials

Unless noted, all reagents were purchased from Sigma Chemical Co. (St. Louis, MO).

Strains, Media, and Genetic Methods

The genotypes of the strains used in this work are shown in Table I. YP medium is 1% Bacto-Yeast Extract, 2% Bactopeptone (Difco Laboratories, Inc., Detroit, MI).
YPD medium is YP with 2% dextrose. SD medium is 0.67% yeast nitrogen base without amino acids (Difco Laboratories, Inc.) with 2% dextrose. SG medium is 0.67% yeast nitrogen base without amino acids with 2% galactose. As needed, nutritional supplements were added to SD medium as described by Sherman et al. (1974). SDYE medium is SD medium with 0.2% Bacto-Yeast extract. SRYE medium contains 2% raffinose instead of glucose. DNA transformations were performed by the lithium acetate procedure (Ito et al., 1982).

GPY400 and GPY401 were generated by two consecutive DNA transformations. First, GPY74-21C was transformed with YCpchc1-521TRP. A resulting Trp+ transformant was then transformed with a linearized version of pchc1-Δ10::LEU2 (Payne et al., 1987). Leu+ transformants were screened for temperature-sensitive growth. GPY400 exhibits temperature-sensitive growth and GPY401 exhibits wild-type growth rates at all temperatures tested. The structure of the plasmid in GPY401 is inferred from the genotype and has not been physically tested. GPY418 was constructed by the "pop-in/pop-out" replacement procedure as described by Rothstein (1991). Briefly, GPY1100α was transformed with YIpchc1-521ΔCla linearized with XbaI to target integration to the chromosomal CHC1 locus. Ura+ transformants were tested for temperature-sensitive growth. A temperature-sensitive transformant was then plated on medium containing 5-fluoroorotic acid (5-FOA) to select for cells where homologous recombination had taken place between the duplicated CHC1 sequences. 5-FOA-resistant colonies were tested for temperature-sensitive growth to identify cells where recombination had resulted in replacement of the wild-type sequences with the sequences carrying the temperature-sensitive mutations. GPY419 was generated by disrupting PEP4 in GPY418 using pTS15 (pep4::URA3) (a gift from Tom Stevens, University of Oregon). pSL1922 expresses a truncated form of the a-factor receptor, which lacks the carboxy-terminal 105 amino acids, under the control of the GAL1 promoter (Davis et al., 1993). The plasmid was introduced into GPY449 and GPY418 to yield GPY731 and GPY735, respectively. SM1581 contains pSM219, a multicopy plasmid carrying MFα1 (a gift from Dr. Susan Michaelis, Johns Hopkins University School of Medicine, Baltimore, MD).

Production and Purification of 35S-labeled α-factor

Production and purification of 35S-labeled α-factor was carried out as described by Blumer et al. (1988) using an α-factor overproducing strain harboring the plasmid pDA6300 (a gift from Dr. Jeremy Thorner, University of California, Berkeley, CA). The purified 35S-labeled α-factor comigrated with synthetic cold α-factor during reverse-phase HPLC chromatography. The labeled α-factor bound to MATa cells but not to MATα cells, and this binding was prevented by the addition of excess synthetic cold α-factor. The specific activity, determined by bioassay using synthetic α-factor as standard, was 50-100 Ci/mmol.

Assay for Internalization of 35S-labeled α-factor and Reversibility of the Internalization Defect

The assay for binding and internalization of α-factor is a modification of published procedures (Dulic et al., 1991). CHC1 (GPY401), chc1-ts (GPY400), and chc1Δ (GPY423) cells were grown to mid-log phase in YPD medium at 24°C. Cells were collected by centrifugation and resuspended at 1-2 × 10^9 cells/ml in ice cold KPO4 buffer (50 mM KPO4, pH 6, containing 1% BSA, 1 mM PMSF, and 10 mM p-tosyl-L-arginine methyl ester [TAME]).
35S-labeled α-factor was added at 1-2 × 10^5 cpm/10^9 cells and allowed to bind to cells on ice for 30 min. Following the incubation, the cells were sedimented by centrifugation and the supernatant was aspirated to remove unbound α-factor. The cell pellet was resuspended in an equal volume of ice cold KPO4 buffer and 100-µl aliquots were then incubated at 24 or 37°C for various times (the preshift time). Under these conditions α-factor remains bound to the cells but is not internalized (Chvatchko et al., 1986; Tan, P., unpublished observations). Glucose was then added to 2% to stimulate internalization, and the incubation at 24 or 37°C continued for 30 min. At this point, cells were diluted in ice cold 50 mM sodium citrate, pH 1.1, and incubated for 15 min to remove surface-bound α-factor. The low pH-treated cells were collected by vacuum filtration on a Whatman GF/A filter disc (Whatman Inc., Clifton, NJ). The filters were washed with 2 × 5 ml ice cold 50 mM KPO4, pH 6, and internalized α-factor was measured by scintillation counting of the filters. Total bound α-factor was assessed by washing and filtering cells in ice cold 50 mM KPO4, pH 6, after the binding step. Typically 30-90% of the radioactivity bound to the cells, and 80-90% of the bound radioactivity was internalized after 1 h at 24°C. Time courses for internalization of labeled α-factor at 37°C were conducted after a 5-min preshift at 37°C as stated above. Internalization was initiated with addition of 5% glucose and terminated at various times up to 20 min. Identical results were obtained in experiments using 2% glucose. For measuring the reversibility of the temperature-sensitive α-factor internalization defect, CHC1 and chc1-ts cells were allowed to bind and internalize α-factor for 30 min after a 5-min preshift as described, except in YP media with 50 mM KPO4 adjusted to pH 6. Samples at 37°C were then either harvested and analyzed as described, transferred to 24°C for another 30-min incubation with a second addition of glucose, or maintained at 37°C for another 30 min with a second addition of glucose before being harvested and analyzed as described.

For protease sensitivity of newly synthesized a-factor receptor, CHC1 (GPY1100α), CHC1 pep4Δ (GPY449), chc1-ts (GPY418), and chc1-ts pep4Δ (GPY419) cells were labeled at 24°C for 5 min and then subjected to the chase regimen described above, except the chase was terminated by transferring 1 × 10^7 cells on ice into tubes containing 10 mM NaN3 and 10 mM NaF. After washing the samples once with ice cold pronase buffer (spheroplast buffer with 2 mM MgCl2), the intact cells were divided in half, with one part mock-treated and the other part treated with pronase according to Trueheart and Fink (1989) with the following modifications. 50 µl of a 30 mg/ml pronase (Calbiochem-Novabiochem, La Jolla, CA) solution or buffer alone was added to the samples in 1 ml of pronase buffer and incubated with agitation at 37°C for 1-1.5 h. Prior to removal of pronase, 2 × 10^7 cells in pronase buffer of strain 1100a, which does not express the a-factor receptor, were added as carrier. The cells were then pelleted and washed twice with pronase buffer containing 1 mM EDTA and a protease inhibitor cocktail (1 mM PMSF, 1 mM benzamidine-HCl, 1 µg/ml leupeptin, 2 µg/ml pepstatin A, 1 µg/ml chymostatin, 1 µg/ml aprotinin, and 1 µg/ml antipain, diluted from a 1,000× stock solution in DMSO). The cell pellet was lysed with glass beads in 50 µl 8 M urea, 5% SDS, 40 mM Tris-HCl, pH 6.8, and 0.1 mM EDTA.
The receptor was then immunoprecipitated from the lysate as described above. Samples were analyzed on 11% SDS-polyacrylamide gels. The amount of receptor was quantified by scanning densitometry of the autoradiographs using an LKB Ultroscan XL (Pharmacia Diagnostics, Inc., Fairfield, NJ).

For labeling and immunoprecipitation of the truncated a-factor receptor, CHC1 pep4Δ (GPY731) and chc1-ts (GPY735) cells were grown in SRYE media at 24°C and then washed once and resuspended in SG media at 2 × 10^7 cells/ml. After a 5-min incubation at 24°C, the cells were labeled as described above for 45 min. The labeling was terminated by addition of unlabeled methionine and cysteine, yeast extract, and glucose to final concentrations of 0.006, 0.2, and 3%, respectively. The cells were incubated for another hour at 24°C to accumulate the labeled truncated receptors at the plasma membrane. The cells were then placed at 37°C for 5 min prior to addition of an equal volume of exhausted YPD media from a stationary culture of SM1581 cells, which overproduce a-factor. This media was supplemented with unlabeled cysteine and methionine, yeast extract, and glucose as described above and prewarmed to 37°C. A control sample received an equal volume of the same media from a stationary culture of GPY1100α cells, which do not produce a-factor. At various time intervals after addition of a-factor, 1 × 10^7 cells were removed, pronase treated, lysed, and the truncated a-factor receptor immunoprecipitated as described above, except that 2 × 10^7 SM1581 cells were added as carrier to the samples prior to the removal of pronase.

Uptake of α-factor in chc1-ts Cells Is Rapidly Impaired After Shift to the Nonpermissive Temperature

We have assessed the role of clathrin in internalization of α-factor receptors by measuring receptor-mediated uptake of radiolabeled pheromone (Dulic et al., 1991) in chc1-ts cells shifted to the nonpermissive temperature (Fig. 1 A). Labeled α-factor was bound to either chc1-ts cells or congenic wild-type (CHC1) cells at 0°C in the absence of glucose. Following removal of unbound pheromone, the cells were shifted to the permissive (24°C) or nonpermissive temperature (37°C) for various periods of time (preshift) in the absence of glucose. Without glucose, the cells lack sufficient energy stores for intracellular membrane transport processes, including endocytosis (Chvatchko et al., 1986). Thus, when the preshift protocol is carried out at the nonpermissive temperature, it provides a means to eliminate temperature-sensitive clathrin heavy chain function in the absence of membrane traffic. Following the preshift, glucose was added and uptake was determined after 30 min by treating the cells with a low pH buffer to remove surface-bound α-factor (Dulic et al., 1991). As shown in Fig. 1 B, at 24°C the uptake of α-factor by chc1-ts cells (solid bars) was virtually the same as wild-type cells (stippled bars). In contrast, at 37°C the chc1-ts cells displayed an immediate defect in endocytosis; with a 5-min preshift, mutant cell uptake was only 44% of wild-type levels. This level was similar to that observed in chc1Δ cells devoid of clathrin heavy chain due to a deletion of CHC1 (30%; Fig. 1 B, open bar). After longer preshift times, compared to CHC1 cells, the ratio of uptake by chc1-ts cells remained relatively constant (27% after a 30-min preshift), although both mutant and wild-type strains showed a progressive decline in internalization at the elevated temperature.
Similar results have been obtained with another pair of congenic chc1-ts and CHC1 strains. We previously reported that chc1Δ cells internalize α-factor at 35-50% of wild-type levels (Payne et al., 1988). Since chc1Δ cells grow more slowly than wild-type cells, it was not clear whether the endocytosis defect was a consequence of slow growth. The results shown in Fig. 1 B do not support this possibility, because the growth rate of chc1-ts cells does not decline for at least 90 min following shift to 37°C (Seeger and Payne, 1992a). A time course of α-factor uptake in cells preshifted to 37°C for 5 min is plotted in Fig. 1 C. The temperature-induced endocytosis defect in chc1-ts cells was apparent at the first time point (2 min) after addition of glucose. From 2-6 min after addition of glucose, when internalization of α-factor was linear for both cell types, the rate of internalization in chc1-ts cells was approximately threefold lower relative to CHC1 cells. Internalization continued at a slower rate in chc1-ts cells throughout the course of the experiment. The immediate onset of the endocytosis defect after shifting chc1-ts cells to 37°C suggests that clathrin plays a direct role in facilitating endocytosis of the receptor-bound pheromone. However, the residual endocytosis of α-factor in chc1-ts cells incubated at 37°C and in chc1Δ cells at 24°C indicates that α-factor internalization can occur in the absence of functional clathrin heavy chain, albeit with reduced efficiency.

Figure 1 (legend, continued). For each condition, duplicate samples were analyzed and the results averaged. Data are the mean ± standard error for three experiments. (C) Time course of internalization at 37°C after a 5-min preshift for CHC1 and chc1-ts cells. Same as B, except that internalization was initiated with addition of 5% glucose and terminated at the indicated times. For each time point, duplicate samples were analyzed and the results averaged. Data are the mean ± standard error for two experiments.

Figure 2. Reversibility of the α-factor internalization defect in chc1-ts cells. Same as Fig. 1, except that cells were in YP media with 50 mM KPO4, pH 6. All samples were preshifted for 5 min and allowed to internalize α-factor for 30 min. Samples at 37°C were then either harvested and analyzed as described in the legend to Fig. 1 (37°C), transferred to 24°C for another 30-min incubation with a second addition of glucose (37→24), or maintained at 37°C for another 30 min with a second addition of glucose (37→37). Results are the mean ± standard error for three experiments.

The α-factor Internalization Defect in chc1-ts Cells Is Reversible

To further evaluate the endocytosis defect in chc1-ts cells, we determined whether internalization could be reestablished by returning the cells to the permissive temperature. Following α-factor binding and a 5-min preshift, mutant and wild-type cells were incubated at 24 or 37°C for 30 min as described above and a portion was tested for endocytosis. At this point, the chc1-ts cells incubated at 37°C internalized 40% of the pheromone compared to wild-type cells (Fig. 2, 37°C bars). The remainder of the 37°C-cell samples were divided; one part was incubated at 37°C while the second part was shifted to 24°C for an additional 30 min before measuring internalization. When the chc1-ts cells were shifted to 24°C (Fig. 2, 37→24 bars), the level of α-factor internalization reached 71% of the wild-type level.
In contrast, the slower clathrin-independent internalization in the chc1-ts cells maintained at 37°C resulted in only 52% uptake relative to CHC1 cells (Fig. 2, 37→37 bars). The substantial recovery of endocytosis in chc1-ts cells returned to 24°C suggests that the endocytosis defect is due to a reversible, temperature-induced impairment of clathrin heavy chain function.

The Rate of a-factor Receptor Uptake Is Reduced in chc1-ts Cells

Recent studies on the biosynthesis of the a-factor receptor in MATα cells allowed us to examine whether clathrin plays a role in endocytosis of this receptor (Davis et al., 1993). The transport itinerary of the receptor was examined using pulse-chase regimens followed by immunoprecipitation or immunoblotting. With these approaches, it was possible to monitor the receptor directly in the absence of radiolabeled pheromone. The results indicated that newly synthesized a-factor receptors (and α-factor receptors) in wild-type cells are transported to the cell surface and then internalized, even in the absence of pheromone, and delivered to the vacuole where they are degraded. Since degradation of a-factor receptors depends on delivery to the vacuole and occurs in the absence of pheromone, turnover of newly synthesized receptor can be used as a convenient diagnostic assay for constitutive endocytosis of the receptor (Davis et al., 1993). To follow turnover of a-factor receptors, chc1-ts and CHC1 cells were labeled with [35S]methionine and cysteine for 30 min at 24°C. Labeling was quenched by addition of excess unlabeled amino acids, and then one half of each sample was transferred to 37°C while the other half was maintained at 24°C. At time intervals, a-factor receptor was immunoprecipitated with polyclonal antiserum specific for the receptor's carboxy-terminal cytoplasmic domain (Clark et al., 1988). Precipitated receptor was visualized by SDS-polyacrylamide gel electrophoresis and autoradiography (Fig. 3). At 24°C, the rate of a-factor receptor turnover was identical in mutant and wild-type cells (Fig. 3, lanes 1-4). At 37°C, the receptors in the chc1-ts cells were clearly more stable than those in the CHC1 cells (Fig. 3, lanes 5-7). This result is consistent with reduced endocytosis of the receptors in the mutant cells. However, at the later times, the receptor was degraded in mutant cells (Fig. 3, compare lanes 5 and 7), suggesting that, like α-factor endocytosis, internalization of the a-factor receptor can occur in the absence of clathrin. We observed a similarly delayed turnover of a-factor receptor in chc1Δ cells (Tan, P., and G. Payne, unpublished observations). The properties of the a-factor receptor are not universally shared with other plasma membrane proteins; the plasma membrane ATPase remained stable in both strains at both temperatures over the time course of the experiment shown in Fig. 3 (Tan, P., and G. Payne, unpublished observations). Furthermore, in the absence of pheromone, a truncated version of the a-factor receptor lacking the carboxy-terminal 105 amino acids (Davis et al., 1993) remains at the plasma membrane, as measured by its susceptibility to exogenous proteases (see below). If stabilization of a-factor receptors in chc1-ts cells reflects defective endocytosis, then the receptors should accumulate at the cell surface. Accordingly, we used the 24°C pulse, 37°C chase protocol described above and determined the sensitivity of receptors to exogenously added pronase.
To obtain a more synchronous population of radiolabeled receptors, the labeling in these experiments was carried out for only 5 min. Since the receptors are unstable even in the absence of exogenous protease (see Fig. 3), we introduced the pep4 mutation into both CHC1 and chc1-ts cells. The pep4 mutation eliminates activation of vacuolar proteases (Hemmings et al., 1981), and consequently prevents degradation of receptors that are delivered to the vacuole (Davis et al., 1993). After the 5-min labeling period at 24°C, pronase treatment did not affect the levels of receptors in either cell type (Fig. 4, A and B, lanes 1 and 2, upper panels).

(Figure 4 legend, in part) … (GPY419) cells. Cells were labeled at 24°C for 5 min and then shifted to 37°C for the indicated chase times. At each time point cells were harvested and either treated with pronase (+) or mock-treated (−) prior to immunoprecipitation of the receptor as described in Materials and Methods. The arrows mark pronase-resistant carboxy-terminal receptor fragments. The portions of the gels containing the pronase-resistant fragments were exposed for longer periods of time to facilitate visualization of the fragments. Results are from one experiment and are representative of a total of three experiments. (C) Time course of the pronase sensitivity of intact receptors from A and B as measured by scanning densitometry and calculated as the percent of receptor degraded after pronase treatment relative to mock-treated.

At this time point the newly synthesized receptors are still within the secretory pathway, in transit to the cell surface, and consequently are not accessible to the exogenous pronase. Upon a further 5-min incubation at 37°C, the amount of intact receptor was slightly reduced in both cell types by pronase treatment (Fig. 4, A and B, lanes 3 and 4, upper panels) and products of the proteolysis (arrowheads, lower panels) appeared in both cell types. Since the antibodies used in the immunoprecipitations are specific for the cytoplasmic domain, these pronase-resistant receptor fragments most likely encompass the cytoplasmic domain that is inaccessible to the exogenous pronase. By the 10-min chase time (Fig. 4, A and B, lanes 5 and 6, upper panels), the bulk of the receptors (~80%) were accessible to pronase in both cell types, demonstrating that they had reached the plasma membrane by this time. When the labeled cells were incubated at 37°C for longer times, a difference in receptor pronase-sensitivity between chc1-ts and CHC1 cells was apparent. Pronase treatment of chc1-ts cells severely reduced the amount of intact receptor up to the 45-min time point (Fig. 4 B, lanes 5-14, upper panel), but in contrast, significant amounts of receptor in CHC1 cells were resistant to pronase at the 20- and 30-min time points (Fig. 4 A, lanes 9-12, upper panel) and by 45 min most of the receptor was resistant to pronase (Fig. 4 A, lanes 13-18, upper panel). In accordance with the prolonged pronase-sensitivity of the intact receptor in chc1-ts cells, the pronase-resistant fragments are apparent for up to 45 min, while in CHC1 cells the fragments are absent after 30 min (Fig. 4, A and B, lower panels, compare lanes 11-14). We interpret these results as evidence that efficient endocytosis of the receptor occurred in the CHC1 cells so that, after 30 min at 37°C, most of the newly made receptor was internalized and thereby sequestered from the exogenously added pronase.
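The Fig. 4 C quantification described in the legend above is a straightforward ratio of densitometry readings. A minimal sketch of that calculation; the band-intensity values and function name are illustrative assumptions, not from the paper:

```python
def pronase_sensitivity(pronase_band, mock_band):
    """Percent of intact receptor degraded by exogenous pronase relative to
    the mock-treated sample; high values indicate receptor exposed at the
    cell surface, low values indicate internalized (protected) receptor."""
    return 100.0 * (1.0 - pronase_band / mock_band)

# Hypothetical densitometry readings (arbitrary units) at one chase time
surface_fraction = pronase_sensitivity(pronase_band=1200, mock_band=6000)  # 80%
```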
On the other hand, the prolonged pronase-sensitivity of the receptor in chc1-ts cells demonstrates that newly made receptors remain at the surface for longer times and provides direct in vivo evidence that clathrin facilitates uptake of the a-factor receptor. Significant pronase resistance of the intact receptor in chc1-ts cells was observed after 60 min (Fig. 4 B, lanes 15-18, upper panel). This result reveals the existence of a slower, clathrin-independent internalization process that is consistent with the results for internalization of radiolabeled α-factor. The possibility that the receptors in chc1-ts cells remain accessible to pronase due to lysis of the cells after shift to 37°C is unlikely, based on the presence of the resistant fragments in chc1-ts cells, the pronase-resistance of the intact receptor by 60 min, and the pronase-resistance of cytoplasmic glucose 6-phosphate dehydrogenase at all time points (Tan, P., and G. Payne, unpublished observations). The amount of receptor immunoprecipitated from untreated and pronase-treated samples was quantified by densitometry, and the percent of intact receptor that was pronase-sensitive (i.e., at the cell surface) relative to the chase time is plotted in Fig. 4 C. The coincidence of the curves at early time points illustrates that receptors reach the cell surface at the same rates in both cell types; 80% of the receptors are present at the cell surface by 15 min. After this time, the pronase sensitivity in CHC1 decreases rapidly, while in chc1-ts cells the peak pronase sensitivity of the receptor persists for up to 20 min and then declines slowly. From these results we estimate that the half-times for internalization of the receptors are 11-15 min for CHC1 cells and 25-30 min for chc1-ts cells, corresponding to a two- to threefold decrease in the rate of endocytosis of the a-factor receptor in chc1-ts cells.

Internalized a-factor Receptor Is Delivered to the Vacuole at Similar Rates in chc1-ts and Wild-type Cells

The reduced rate of pheromone receptor uptake in chc1-ts cells shifted to 37°C suggests that clathrin acts directly at the plasma membrane to facilitate internalization. An alternative interpretation is that clathrin acts at a subsequent stage along the endocytic pathway. In this scenario, severe inhibition of a later stage of endocytosis in chc1-ts cells at 37°C would lead to an indirect delay in transport from the cell surface. The PEP4-dependent turnover of the a-factor receptor shown in Fig. 3 suggests that transport of receptors to the vacuole via the endocytic pathway is not completely blocked in chc1-ts cells. To examine the effects of chc1-ts on later endocytic stages more directly, we used the pulse-chase regimen to monitor a-factor receptor uptake in chc1-ts and CHC1 strains carrying the wild-type PEP4 allele. Because receptors that reach the vacuole are degraded in these strains, pronase-resistant receptors detected at time points after the receptors arrive at the cell surface (10-15 min of chase, see Fig. 4) should represent molecules that have left the cell surface but not yet gained access to vacuolar proteases. Therefore, we reasoned that a strong inhibition of receptor transport to the vacuole at stages subsequent to internalization in chc1-ts cells should result in accumulation of pronase-resistant receptors at later time points when compared to wild-type cells.
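The internalization half-times quoted above for Fig. 4 C can be read off a pronase-sensitivity time course by linear interpolation. A minimal sketch, assuming the curve declines monotonically after its peak; the function name and the example data are illustrative, not the paper's measurements:

```python
def internalization_half_time(times, surface_percent):
    """Chase time at which the pronase-sensitive (surface) fraction falls to
    half of its peak value, by linear interpolation between time points."""
    peak = max(surface_percent)
    target = peak / 2.0
    points = list(zip(times, surface_percent))
    for (t0, y0), (t1, y1) in zip(points, points[1:]):
        if y0 >= target >= y1:  # target crossed on this interval
            return t0 + (y0 - target) * (t1 - t0) / (y0 - y1)
    return None  # surface fraction never fell below half of its peak

# Hypothetical time course: chase times (min) and percent pronase-sensitive
t_half = internalization_half_time([5, 10, 15, 20, 30, 45],
                                   [20, 60, 80, 55, 30, 10])
```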
Fig. 5 presents the results of a pulse-chase experiment where cells were metabolically labeled for 5 min, then harvested at the designated time intervals and subjected to pronase treatment (note that the chase times for the two strains are different). Consistent with the measurements of a-factor receptor turnover shown in Fig. 3, the PEP4-dependent degradation of receptor in CHC1 cells not treated with pronase (Fig. 5 A, odd-numbered lanes) occurred more rapidly at 37°C than in chc1-ts cells (Fig. 5 B, odd-numbered lanes). By the 30-min time point, 30% of the labeled receptors remained in CHC1 cells (Fig. 5 A, lane 7) compared to 80% in the mutant cells (Fig. 5 B, lane 5). Pronase treatment of the cells revealed the internal pool of receptors (Fig. 5, even-numbered lanes). Similar to the results in Fig. 4, the majority of the receptors reached the cell surface and became pronase-sensitive by 10-15 min (Fig. 5, A and B, lanes 3 and 4). Importantly, at later points the levels of pronase-resistant receptors in the two strains were comparable (Fig. 5, A and B, lanes 5-10). For example, after 30 min at the nonpermissive temperature, 30% of the receptors were pronase resistant in both chc1-ts and wild-type cells. Thus, the chc1-ts allele does not cause conspicuous accumulation of receptors in an intracellular pre-vacuolar compartment. These results argue that the partial endocytic defect in chc1-ts cells cannot be attributed to a more complete block at a transport step subsequent to the internalization of cell-surface a-factor receptors.

Figure 5. Pronase sensitivity of a-factor receptors in congenic PEP4 (A) CHC1 (GPY1100α) and (B) chc1-ts cells (GPY418). Cells were labeled at 24°C for 5 min and then shifted to 37°C for the indicated chase times. At each time point cells were harvested and either treated with pronase (+) or mock-treated (−) prior to immunoprecipitation of the receptor as described in Materials and Methods. Results are from one experiment and are representative of three experiments.

The Rate of Ligand-induced Uptake of a Truncated a-factor Receptor Is Reduced in chc1-ts Cells

Recently, a truncated a-factor receptor was engineered which is missing the carboxy-terminal 105 amino acid residues (Davis et al., 1993). This mutant receptor (ste3-Δ365) remains at the cell surface in the absence of a-factor but is internalized upon addition of the pheromone. The properties of the truncated receptor allowed us to monitor the role of clathrin in pheromone-induced endocytosis. We assayed ligand-induced endocytosis using CHC1 pep4Δ and chc1-ts cells harboring a plasmid which expresses ste3-Δ365 under the control of the inducible GAL1 promoter (Davis et al., 1993). Expression of the mutant receptor was induced at 24°C by the addition of galactose. Concurrent with growth in galactose, cells were labeled for 45 min, after which time glucose was added to repress receptor gene expression, and excess unlabeled amino acids were added to quench the labeling. Cells were incubated under these conditions for an additional hour in order to ensure that all receptors reached the cell surface. Following the 1-h incubation at 24°C in glucose medium, the cells were transferred to 37°C for 5 min prior to addition of media containing a-factor. Samples were removed at various times after addition of pheromone, subjected to pronase, and immunoprecipitated as already described.
At the time of a-factor addition, most of the mutant receptor was at the surface in both cell types, as shown by the virtually complete pronase-sensitivity of the intact receptor (Fig. 6, A and B, lanes 1 and 2) and the presence of a pronase-resistant fragment (arrows). However, after addition of a-factor a dramatic difference in receptor pronase sensitivity between CHC1 and chc1-ts cells was detected. In CHC1 cells, the receptors acquired complete pronase resistance by 30 min (Fig. 6 A, lanes 3-12), while in chc1-ts cells, significant amounts of the intact mutant receptor remain pronase sensitive for at least 60 min (Fig. 6 B, lanes 3-12). The difference between levels of pronase-resistant receptor in mutant and wild-type cells is detectable within 5 min after addition of a-factor (compare lanes 3 and 4 in Fig. 6, A and B), which corresponds to 10 min at 37°C. Thus, the onset of the chc1-ts effect after shift to 37°C is rapid, similar to the effect of chc1-ts on uptake of α-factor (Fig. 1 C). The resistance to pronase is ligand dependent: in the absence of a-factor, most of the receptors remain pronase-sensitive even after 60 min (Fig. 6, A and B, lanes 13 and 14). As expected, in the pronase-

Densitometric analysis of the data in Fig. 6, A and B is presented in Fig. 6 C. In comparison to CHC1 cells, internalization of mutant receptors in chc1-ts cells proceeds at a reduced rate after a slight lag. Half-times for the ligand-induced internalization are approximately 8 min for CHC1 cells and 20 min for chc1-ts cells. This 2.5-fold reduction in the rate of internalization is in agreement with the previous results, and argues that clathrin is also required to facilitate ligand-induced endocytosis of this truncated receptor. The rate of wild-type receptor uptake in the presence of pheromone was similarly affected by chc1-ts (data not shown).

Figure 6. Pronase-sensitivity of truncated a-factor receptors in congenic (A) CHC1 pep4Δ cells (GPY731) and (B) chc1-ts cells (GPY735). Labeled receptors were accumulated at the plasma membrane without a-factor as described in Materials and Methods. The cells were then shifted to 37°C for 5 min before addition of the pheromone to induce endocytosis. Cells were collected at the time points indicated and the receptors examined for pronase sensitivity as described in Fig. 4. The experiment has been repeated three times with similar results. (C) Plot of the pronase sensitivity as described in Fig. 4 C.

Discussion

The role of clathrin in endocytosis of mating pheromone receptors has been examined by monitoring uptake in cells expressing a temperature-sensitive allele of clathrin heavy chain. Upon shift to the nonpermissive temperature, a dramatic and immediate reduction in endocytosis of the a-factor receptor and α-factor ensued. In prior work, internalization of α-factor was shown to be reduced in chc1Δ cells to 35-50% of wild-type levels (Payne et al., 1988). Because chc1Δ cells grow slowly and form multi-cell aggregates, the partial endocytosis defect was not interpreted as a direct consequence of a loss of clathrin function. Two findings presented here argue that eliminating clathrin function has a direct effect on the first step of the endocytic pathway, removal of pheromone receptors from the cell surface. First, at 37°C a defect in internalization was apparent in chc1-ts cells within 2 min after endocytosis was initiated by provision of glucose. Thus, the endocytic defect occurs significantly faster than the half-time for α-factor uptake (5-7 min).
This observation makes it unlikely that the uptake defect in chc1-ts cells is due to effects on later endocytic steps, such as recycling of endocytic machinery components from endosomes to the cell surface after a round of internalization. Second, the efficient degradation of internalized a-factor receptors in chc1-ts cells also provides evidence that the partial internalization defect cannot be due to a block in transport at a subsequent step in the endocytic pathway. These results offer genetic evidence that clathrin acts directly at the plasma membrane to facilitate endocytosis of the pheromone receptors, and thereby represent the first in vivo demonstration of clathrin-mediated uptake of 7-TMS receptors. It should be noted that our results do not exclude the possibility that clathrin also facilitates later endocytic steps. The immediate effect of the chc1-ts mutation on both constitutive and pheromone-stimulated endocytosis provides evidence that clathrin plays a role in both processes. Our results are consistent with a model in which clathrin facilitates pheromone receptor endocytosis by clustering the receptors at plasma membrane sites undergoing vesiculation. We envision that receptors are collected at these sites through interactions of the receptor cytoplasmic domains and components of the clathrin coats. Based on our findings, we suggest that membrane vesiculation still proceeds in the absence of clathrin but receptors are not rapidly incorporated into the newly forming vesicles, thereby reducing the rate of receptor uptake. Immunocytochemical studies will be necessary to test this interpretation and confirm the clustering of pheromone receptors in clathrin-coated pits. Consistent with our hypothesis, the cytoplasmic domains of both the a-factor and α-factor receptors are important for internalization. In the case of the α-factor receptor, a small region in the carboxy-terminal cytoplasmic domain has been identified which plays a key role in pheromone-stimulated endocytosis (Rohrer et al., 1993). This sequence does not display the characteristics of sequences in plasma membrane proteins which mediate clustering in clathrin-coated pits in mammalian cells (Chen et al., 1990; Collawn et al., 1990; Ktistakis et al., 1990; Letourneur and Klausner, 1992; Miettinen et al., 1992). As suggested by Rohrer et al. (1993), this difference may indicate that the α-factor receptor sequences play a role in regulating endocytosis in response to pheromone, or that the sequences represent a new motif capable of interacting with clathrin coats. In the case of the a-factor receptor, a 105-amino acid truncation (ste3-Δ365) of the cytoplasmic tail results in a receptor that remains at the cell surface unless pheromone is present (Davis et al., 1993). In the context of our model, the truncation may cause an altered structure which occludes internalization signals unless pheromone is bound. Alternatively, there may be both pheromone-dependent and -independent signals in the wild-type receptor, and the Δ365 mutation may remove the pheromone-independent signal. Although this possibility has not been addressed in the case of the a-factor receptor, there may be multiple internalization signals in the α-factor receptor (Rohrer et al., 1993). Endocytosis of pheromone or pheromone receptors continued at the nonpermissive temperature in chc1-ts cells at significant rates, with half-times of 20-30 min.
This internalization is most likely not due to residual activity of the temperature-sensitive clathrin heavy chain at 37°C, since the uptake rate is commensurate with that observed in chc1Δ cells. Thus, in cells devoid of functional clathrin, receptors are still internalized, but with reduced rates compared to wild-type cells. We cannot distinguish at present between the possibility that residual uptake occurs through a second clathrin-independent pathway, perhaps analogous to that described in mammalian cells (Hansen et al., 1991, and references therein), or the possibility that other elements of clathrin coats are still capable of limited vesiculation in the absence of clathrin heavy chain.

Possible Roles for Clathrin-mediated Pheromone Receptor Endocytosis

Why has a mechanism evolved to enhance endocytosis of yeast pheromone receptors? By analogy to down-regulation of mammalian 7-TMS receptors, uptake could play a role in clearing the surface of receptor-bound pheromone and contribute to the recovery of the responding cell from the effects of the pheromone. In addition, perhaps constitutive endocytosis is necessary during the process of mating-type switching. Homothallic yeast strains are able to switch mating types through a gene conversion process which replaces the master regulatory sequences at the mating-type (MAT) locus (Herskowitz, 1988). Mating-type switching in these strains occurs at high frequency. The gene conversion occurs prior to replication of MAT during the cell cycle, and by the time cytokinesis occurs the two resulting cells have acquired the phenotypic properties of the new cell type. Clathrin-mediated endocytosis may play a role in constantly clearing the surface of pheromone receptors so that, after a mating-type switch, the old receptors (which are no longer expressed) can be replaced by newly synthesized receptors for the opposite mating-type pheromone. Hartwell and his colleagues have defined an early step in the mating process which involves orienting towards the mating partner (Jackson and Hartwell, 1990a,b; Jackson et al., 1991). If several partners are available, the cell producing the highest level of pheromone is chosen, and pheromone receptors become concentrated at the region of the cell surface facing the chosen partner. Clathrin-deficient mutants are partly defective in this process of mating partner discrimination (Jackson et al., 1991). Our results suggest that this defect could be due to reduced endocytosis of the pheromone receptor. If receptors diffusely distributed along the plasma membrane are constantly endocytosed and replaced by new receptors, then excluding the receptors facing the mating partner from endocytosis would establish an orientation. Exclusion of receptors from endocytosis could occur by attachment to the underlying cytoskeleton, in the same manner that the FcRII-B1 isoform of the Fc receptor is excluded from endocytosis in B-lymphocytes and macrophages (Miettinen, 1992) and the Na⁺-K⁺ ATPase is sequestered to the basolateral membrane of kidney epithelial cells (Hammerton et al., 1991). Alternatively, polarized secretion (Field and Schekman, 1980) towards the mating partner combined with endocytosis could lead to oriented receptor localization without the need to invoke any mechanism for endocytic exclusion. It remains to be established which of these models applies to mating partner discrimination.
However, in both cases, reduced endocytosis caused by defective clathrin would diminish the polarized distribution of receptors and thus reduce the discrimination capacity of the responding cell. We have shown that clathrin is required for efficient endocytosis of the 7-TMS pheromone receptors in yeast. This is the first direct, in vivo evidence for clathrin-dependent uptake of receptors of the 7-TMS family. Our results raise the possibility that down-regulation of 7-TMS receptors in mammalian cells may similarly occur by clathrin-mediated endocytosis.
2014-10-01T00:00:00.000Z
1993-12-02T00:00:00.000
{ "year": 1993, "sha1": "ecb3d284126fdce2199eed8fdb7ef9fb38dd3420", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/123/6/1707/1261378/1707.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "ecb3d284126fdce2199eed8fdb7ef9fb38dd3420", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
245537105
pes2o/s2orc
v3-fos-license
Game positions of Multiple Hook Removing Game

Multiple Hook Removing Game (MHRG for short) is an impartial game played in terms of Young diagrams. In this paper, we give a characterization of the set of all game positions in MHRG. As an application, we prove that for $t \in \mathbb{Z}_{\geq 0}$ and $m, n \in \mathbb{N}$ such that $t \leq m \leq n$, and a Young diagram $Y$ contained in the rectangular Young diagram $Y_{t,n}$ of size $t \times n$, $Y$ is a game position in MHRG with $Y_{m,n}$ the starting position if and only if $Y$ is a game position in MHRG with $Y_{t,n-m+t}$ the starting position, and also that the Grundy value of $Y$ in the former MHRG is equal to that in the latter MHRG.

closely related not only to combinatorial game theory, but also to the representation theory and the algebraic geometry associated with simply-laced finite-dimensional simple Lie algebras. For example, the weight system of a minuscule representation (which is identical to the Weyl group orbit of a minuscule fundamental weight) for a simply-laced finite-dimensional simple Lie algebra can be described in terms of a d-complete poset. Applying the "folding" technique to this fact for the simply-laced case, Tada [7] described the Weyl group orbits of some fundamental weights for multiply-laced finite-dimensional simple Lie algebras in terms of d-complete posets with "coloring".

[Schematic diagram relating: d-complete poset with a "coloring" / multiply-laced [7]; plain game [2] / d-complete poset / simply-laced; "folding"; Young diagram / generalization; Sato-Welter game [8,5] / type A / special case; Young diagram with the unimodal numbering / MHRG [1] / types B and C.]

Based on [7], Abuku and Tada [1] introduced a new impartial game, named Multiple Hook Removing Game (MHRG for short). MHRG is played in terms of Young diagrams with the unimodal numbering; for the definition of the unimodal numbering, see Section 3 and Example 3.1. Let us explain the rule of MHRG. We fix positive integers $m, n \in \mathbb{N}$ such that $m \leq n$. Let $Y_{m,n} := \{(i, j) \in \mathbb{N}^2 \mid 1 \leq i \leq m,\ 1 \leq j \leq n\}$ be the rectangular Young diagram of size $m \times n$. We denote by $F(Y_{m,n})$ the set of all Young diagrams contained in the rectangular Young diagram $Y_{m,n}$. For a game position $G$ of an impartial game, we denote by $O(G)$ the set of all options of $G$. The rule of MHRG is given as follows:

(1) All game positions are some Young diagrams contained in $F(Y_{m,n})$ with the unimodal numbering. The starting position is the rectangular Young diagram $Y_{m,n}$.

(2) Assume that $Y \in F(Y_{m,n})$ appears as a game position. If $Y \neq \emptyset$ (the empty Young diagram), then a player chooses a box $(i, j) \in Y$, and removes the hook at $(i, j)$ in $Y$. We denote by $Y\langle i, j \rangle$ the resulting Young diagram. Then we know from [1, Lemma 3.15] (see also Lemma 4.4 below) that $f := \#\{(i', j') \in Y\langle i, j \rangle \mid H_{Y\langle i, j \rangle}(i', j') = H_Y(i, j) \text{ (as multisets)}\} \leq 1$, where $H_Y(i, j)$ (resp., $H_{Y\langle i, j \rangle}(i', j')$) is the numbering multiset for the hook at $(i, j) \in Y$ (resp., $(i', j') \in Y\langle i, j \rangle$); see Section 3. If $f = 0$, then a player moves $Y$ to $Y\langle i, j \rangle$; if $f = 1$, then a player moves $Y$ to $Y\langle i, j \rangle\langle i', j' \rangle$, where $(i', j')$ is the unique box above.

(3) The (unique) ending position is the empty Young diagram $\emptyset$. The winner is the player who makes $\emptyset$ after his/her operation in (2).

In general, not all Young diagrams in $F(Y_{m,n})$ appear as game positions of MHRG (see Example 4.3). The goal of this paper is to give a characterization of the set of all game positions in MHRG. Let us explain our results more precisely. Let $\binom{[1,\,m+n]}{m}$ denote the set of all subsets of $[1, m+n] := \{x \in \mathbb{N} \mid 1 \leq x \leq m+n\}$ having $m$ elements.
Then there exists a bijection $I$ from $F(Y_{m,n})$ onto $\binom{[1,\,m+n]}{m}$ (see Subsection 2.1 below). Let $Y^D$ denote the dual Young diagram of $Y$ in $Y_{m,n}$ (see Subsection 2.1). We set $c := (m + n - 1 + \chi)/2$, where $\chi = 0$ (resp., $\chi = 1$) if $m + n$ is odd (resp., even). For $Y \in F(Y_{m,n})$, we set … We denote by $S(Y_{m,n})$ the set of all those Young diagrams in $F(Y_{m,n})$ which appear as game positions of MHRG (with $Y_{m,n}$ the starting position).

Theorem 1.2 (= Theorem 6.1). Let $t \in \mathbb{Z}_{\geq 0}$ and $m, n \in \mathbb{N}$ be such that $t \leq m \leq n$. For a Young diagram $Y$ having at most $t$ rows, $Y \in S(Y_{m,n})$ if and only if $Y \in S(Y_{t,n-m+t})$. Moreover, the Grundy value of $Y$ as an element of $S(Y_{m,n})$ is equal to the Grundy value of $Y$ as an element of $S(Y_{t,n-m+t})$.

This paper is organized as follows. In Section 2, we fix our notation for Young diagrams, and recall some basic facts from combinatorial game theory. In Section 3, we recall the definition of the unimodal numbering and the diagonal expression for Young diagrams. In Section 4, we recall the rule of MHRG and a basic property (Lemma 4.4). In Sections 5 and 6, we prove Theorems 1.1 and 1.2 above, respectively.

Acknowledgements: The author would like to thank Daisuke Sagaki, who is his supervisor, for useful discussions. He also thanks Tomoaki Abuku and Masato Tada for valuable comments.

2.1 Young diagrams. Let $\mathbb{N}$ denote the set of positive integers. For $a, b \in \mathbb{Z}$, we set $[a, b] := \{x \in \mathbb{Z} \mid a \leq x \leq b\}$. Throughout this paper, we fix $m, n \in \mathbb{N}$ such that $m \leq n$. For a positive integer $x \in \mathbb{N}$, we set $\overline{x} := m + n + 1 - x$. Let $Y_m(m+n)$ be the set of partitions $\lambda = (\lambda_1, \ldots, \lambda_m)$ of length at most $m$ such that $n \geq \lambda_1 \geq \cdots \geq \lambda_m \geq 0$. We can identify $\lambda = (\lambda_1, \ldots, \lambda_m) \in Y_m(m+n)$ with a Young diagram $Y_\lambda$; if $\lambda = (0, \ldots, 0)$, then we denote $Y_\lambda$ by $\emptyset$, and call it the empty Young diagram. We identify $(i, j) \in Y_\lambda$ with the corresponding unit square in $\mathbb{R}^2$. Let $\binom{[1,\,m+n]}{m}$ denote the set of all subsets of $[1, m+n]$ having $m$ elements. For $\lambda = (\lambda_1, \ldots, \lambda_m) \in Y_m(m+n)$, we set $i'_t := \lambda_{m-t+1} + t$ for $1 \leq t \leq m$; observe that $I_\lambda := \{i'_1 < \cdots < i'_m\} \in \binom{[1,\,m+n]}{m}$. It is well-known that the map $\lambda \mapsto I_\lambda$ is a bijection from $Y_m(m+n)$ onto $\binom{[1,\,m+n]}{m}$. By the composition of this bijection and the inverse of the bijection $Y_m(m+n) \to F(Y_{m,n})$, $\lambda \mapsto Y_\lambda$, we obtain a bijection $I$ from $F(Y_{m,n})$ onto $\binom{[1,\,m+n]}{m}$. The procedure which obtains $Y\langle i, j \rangle$ from $Y$ is called removing the hook at $(i, j)$ from $Y$ (see Figure 1 below).

2.2 Combinatorial game theory. For the general theory of combinatorial games, we refer the reader to [6, Chapters 1 and 2]. In this subsection, we fix an impartial game in normal play whose game positions are all short (in the sense of [6, pages 4 and 9]).

Definition 2.1. A game position of an impartial game is called an N-position (resp., a P-position) if the next player (resp., the previous player) has a winning strategy.

For a game position $G$ of an impartial game, we denote by $O(G)$ the set of all options of $G$. Recall from [6, page 6] that each game position of an impartial game is either an N-position or a P-position. The following result is well-known in combinatorial game theory.

3 Unimodal numbering on Young diagrams. It can be easily checked that $c := (m + n - 1 + \chi)/2$ is the maximum number appearing in the unimodal numbering, and the diagonal expression of a Young diagram is an element of $D_{m,n}$. Thus we obtain the map $D_{m,n} : F(Y_{m,n}) \to D_{m,n}$; for simplicity of notation, we denote this map by $D$. Here we recall from [1, Subsection 3.3] the relation between "removing a hook" (see Figure 1) and the diagonal expression (see Example 3.5 below).
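As a computational aside, the bijection $\lambda \mapsto I_\lambda$ recalled above is easy to implement directly from the formula $i'_t = \lambda_{m-t+1} + t$. A minimal sketch in Python; the function names are ours, and the shift between the paper's 1-indexing and Python's 0-indexing is noted in the comments:

```python
def diagram_to_subset(lam, m, n):
    """I(lambda): send a partition lam = (lam_1 >= ... >= lam_m >= 0), with
    lam_1 <= n, to the m-element subset {i'_1 < ... < i'_m} of [1, m+n],
    where i'_t = lam_{m-t+1} + t (1-indexed, as in the paper)."""
    assert len(lam) == m and all(0 <= x <= n for x in lam)
    assert all(lam[k] >= lam[k + 1] for k in range(m - 1))
    # lam[m - t] is the paper's lam_{m-t+1} in 0-indexed Python
    return sorted(lam[m - t] + t for t in range(1, m + 1))

def subset_to_diagram(I, m, n):
    """Inverse of diagram_to_subset: recover lambda from an m-element subset."""
    elems = sorted(I)  # i'_1 < ... < i'_m
    return tuple(reversed([elems[t - 1] - t for t in range(1, m + 1)]))

# Sanity check: I(Y_{m,n}) = [n+1, m+n], e.g. I(Y_{2,3}) = {4, 5}
assert diagram_to_subset((3, 3), 2, 3) == [4, 5]
assert subset_to_diagram([4, 5], 2, 3) == (3, 3)
```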
For a subset $S$ of $Y \in F(Y_{m,n})$, we define $H_Y(S)$ to be the multiset consisting of the numbers assigned to the boxes of $S$ by the unimodal numbering. Then we see that … Thus, if we remove a hook from $Y \in F(Y_{m,n})$, then 1 is subtracted from some consecutive entries in $D(Y)$; in the case above, the consecutive entries are $d_l, d_{l+1}, \ldots, d_r$, with $l = m + j - i' + 1$ and $r = m + j' - i + 1$.

4 Multiple Hook Removing Game. Abuku and Tada [1] introduced an impartial game, named Multiple Hook Removing Game (MHRG for short), whose rule is given as follows; recall that $m$ and $n$ are fixed positive integers such that $m \leq n$:

(1) All game positions are some Young diagrams contained in $F(Y_{m,n})$ with the unimodal numbering. The starting position is the rectangular Young diagram $Y_{m,n}$.

(2) Assume that $Y \in F(Y_{m,n})$ appears as a game position. If $Y \neq \emptyset$ (the empty Young diagram), then a player chooses a box $(i, j) \in Y$, and removes the hook at $(i, j)$. If $f = 0$, then the player moves $Y$ to $Y\langle i, j \rangle$; we call this case and this operation (MHR 1). If $f = 1$, then the player moves $Y$ to $Y\langle i, j \rangle\langle i', j' \rangle$, where $(i', j')$ is the unique box of $Y\langle i, j \rangle$ with $H_{Y\langle i, j \rangle}(i', j') = H_Y(i, j)$; we call this case and this operation (MHR 2).

(3) The (unique) ending position is the empty Young diagram $\emptyset$. The winner is the player who makes $\emptyset$ after his/her operation in (2).

The following elements of $F(Y_{2,3})$ are not contained in $S(Y_{2,3})$: …

(1) Keep the notation and setting in Lemma 4.4. There does not exist …

The rest of this section is devoted to a proof of Theorem 5.1. We can easily show the following lemma.

Lemma 5.2. (A) It holds that $I(Y$ …

We first show (I) ⇒ (III). Since $Y \in S(Y_{m,n})$ by (I), there exists a sequence of game positions of the form … Assume that $p > 0$; by the induction hypothesis, …

Proof. Assume first that $\overline{l_p - 1} < c + 1 - \chi$; recall that $\overline{l_p - 1} > \overline{c + 1 - \chi} = c + 1 \geq c + 1 - \chi$. It follows from (5.2) and (5.3) that … It follows from (5.2) and (5.3) that … Here we note that … Thus we have proved the lemma. Hence, by (5.4) and (5.5), together with the induction hypothesis (5.1), we obtain $I_R(Y_p) \cap I_R(Y_p^D) = \emptyset$. Thus we have proved (I) ⇒ (III) in Theorem 5.1.

Conversely, we prove (III) ⇒ (I), that is, … We show by (descending) induction on $\Sigma I(Y) := \sum_{i \in I(Y)} i$. It is obvious that $Y_{m,n} \in S(Y_{m,n})$. Assume that $\Sigma I(Y) < \Sigma I(Y_{m,n})$. Since $I(Y_{m,n}) = [n+1, m+n]$, and $I(Y) \neq I(Y_{m,n})$ with $\# I(Y) = m$, there exists $r \notin I(Y)$ such that $n + 1 \leq r$. Also, there exists $l \leq r$ such that $l - 1 \in I(Y)$; note that $l - 1 < r$. Here we show that $\overline{l - 1} \notin I(Y)$. Suppose, for a contradiction, that …

Proposition 5.6. Keep the setting above. … By Proposition 5.6, we obtain $Y \in S(Y_{m,n})$. This completes the proof of (III) ⇒ (I), and hence (I) ⇔ (III). The equivalence (II) ⇔ (III) follows from the equivalence (I) ⇔ (III) …

Finally, let us show the equivalence (III) ⇔ (IV). Let $Y \in F(Y_{m,n})$, and … Hence, $I(Y) \cap I(Y^D) = \emptyset$ if and only if $\lambda_i + m - i + 1 \neq n - \lambda_j + j$ (or equivalently, $\lambda_i + \lambda_j \neq n - m + i + j - 1$) … We next show (III) ⇒ (IV). Assume that $\lambda_i + \lambda_j = n - m + i + j - 1$ for some $1 \leq i, j \leq m$; we may assume that $i \leq j$. As seen above, we have … which is a contradiction. Therefore, we conclude that $\lambda_i + m - i + 1 \in [c + 1 - \chi, m + n]$. Thus we have shown (III) ⇒ (IV), thereby completing the proof of (III) ⇔ (IV). The following is an immediate consequence of Theorem 6.1 and (6.2).
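The Grundy values appearing throughout the paper can be computed mechanically once the option set of each position is known. A minimal, game-agnostic sketch of the Sprague-Grundy recursion; the function names are ours, and an `options` function enumerating the (MHR 1)/(MHR 2) moves of MHRG would have to be supplied separately:

```python
def mex(values):
    """Minimum excludant: the least non-negative integer not in `values`."""
    s = set(values)
    g = 0
    while g in s:
        g += 1
    return g

def grundy(position, options, memo=None):
    """Sprague-Grundy value of `position`; `options(position)` must return
    the positions reachable in one move. Ending positions (no options)
    get Grundy value 0, i.e., they are P-positions."""
    if memo is None:
        memo = {}
    if position not in memo:
        memo[position] = mex(grundy(q, options, memo) for q in options(position))
    return memo[position]
```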
2021-12-30T02:16:07.020Z
2021-12-28T00:00:00.000
{ "year": 2021, "sha1": "9ae1cf07194fc4966d9a3255a381a5335ab55595", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9ae1cf07194fc4966d9a3255a381a5335ab55595", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233214647
pes2o/s2orc
v3-fos-license
Effect of lumbar laminectomy on spinal sagittal alignment: a systematic review

Positive spinal sagittal alignment is known to correlate with pain and disability. The association between lumbar spinal stenosis and spinal sagittal alignment is less well known, as is the effect of lumbar decompressive surgery on the change in that alignment. The objective was to study the evidence on the effect of lumbar decompressive surgery on sagittal spinopelvic alignment. The Cochrane Controlled Trials Register (CENTRAL), Medline, Embase, Scopus and Web of Science databases were searched in October 2019, unrestricted by date of publication. The study selection was performed by two independent reviewers. The risk of systematic bias was assessed according to the NIH Quality Assessment Tool. The data were extracted using a pre-defined standardized form. The search resulted in 807 records. Of these, 18 were considered relevant for the qualitative analysis and 15 for the meta-synthesis. The sample size varied from 21 to 89 and the average age was around 70 years. Decompression was mostly performed on one or two levels and the surgical techniques varied widely. The pooled effect sizes were mostly statistically significant but small. For lumbar lordosis, the effect size was 3.0 (95% CI 2.2 to 3.7) degrees. Respectively, for pelvic tilt and sagittal vertical axis, the effect sizes were −1.6 (95% CI −2.6 to −0.5) degrees and −9.6 (95% CI −16.0 to −3.3) mm. It appears that decompression may have a small, statistically significant but probably clinically insignificant effect on lumbar lordosis, sagittal vertical axis and pelvic tilt.

Abbreviations
SSPA (Sagittal spino-pelvic alignment): Consideration of whole spine and pelvis orientation in the sagittal plane
LL (Lumbar lordosis): Angle between the lines through the measured endplates of the lumbar vertebrae (e.g., upper endplate of L1 and upper endplate of S1)
SVA (C7 sagittal vertical axis): Horizontal distance of the C7 plumbline, from the mid-C7 vertebral body to the posterior superior endplate of S1
TPA (T1 pelvic angle): Angle between the line from the center of T1 to the axis of the femoral heads and the line from the axis of the femoral heads to the middle of the S1 endplate
PT (Pelvic tilt): Angle between the line connecting the midpoint of the sacral plate to the axis of the femoral heads and the vertical axis
PI (Pelvic incidence): Angle between the line perpendicular to the sacral plate at its midpoint and the line connecting this point to the axis of the femoral heads
SS (Sacral slope): Angle between the horizontal line and the upper endplate of S1
PI-LL (PI-LL mismatch): Difference between PI and LL

Introduction

Lumbar spinal stenosis (LSS) is the most common cause of disability due to a spinal disorder [1]. It is also the most common reason for spinal surgery in the elderly [2]. For example, in the USA, the rate of lumbar decompression is around 136 per 100,000 Medicare beneficiaries. Simultaneously, the amount of fusion surgery for treating LSS has also increased [3]. Compared to conservative treatment, decompressive surgery with or without fusion has shown a positive effect on patients' symptoms, especially leg pain, claudication and overall disability [4,5]. Consideration of sagittal spinal alignment arose with the evolution of operative treatment for adolescent idiopathic scoliosis (AIS) in the late 1980s [6].
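The pelvic parameters defined in the abbreviation list above are plane angles that can be computed directly from landmark coordinates on a lateral radiograph. A minimal sketch follows; the coordinate conventions, sign handling and landmark names are our assumptions, not the review's, and the sketch relies on the well-known identity PI = PT + SS:

```python
import math

def pelvic_parameters(sacral_ant, sacral_post, hip_axis):
    """PT, SS and PI (degrees) from 2D sagittal landmarks given as (x, y)
    tuples, with x pointing anteriorly and y pointing cranially: the
    anterior and posterior corners of the S1 upper endplate and the
    midpoint of the femoral-head axis. Sign conventions are simplified."""
    mx = (sacral_ant[0] + sacral_post[0]) / 2.0  # sacral plate midpoint
    my = (sacral_ant[1] + sacral_post[1]) / 2.0
    # PT: angle between the hip-axis-to-sacral-midpoint line and the vertical
    pt = math.degrees(math.atan2(abs(mx - hip_axis[0]), my - hip_axis[1]))
    # SS: angle between the sacral plate and the horizontal
    dx = sacral_ant[0] - sacral_post[0]
    dy = sacral_ant[1] - sacral_post[1]
    ss = math.degrees(math.atan2(abs(dy), abs(dx)))
    pi_angle = pt + ss  # geometric identity: PI = PT + SS
    return pt, ss, pi_angle
```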
Since Legaye and Duval-Beaupère introduced pelvic incidence (PI) as a key parameter regulating sagittal spinal balance [7], sagittal balance and its correlation with the results of spine surgery have been widely studied. PI is considered a constant parameter with no significant change with age, while thoracic kyphosis (TK) increases and lumbar lordosis (LL) decreases with age [8,9]. Sagittal spino-pelvic alignment (SSPA) describes spinal and pelvic orientation in the erect posture with radiographic parameters. A correlation has been found between the shape and orientation of the pelvis and the morphology of the sagittal spinal curvatures in asymptomatic persons [10,11]. Greater positive SSPA has been found in asymptomatic elderly people [8,9,12]. Decreased LL has been shown to have a strong correlation with low back pain [13]. When increased positive SSPA appears as part of degenerative scoliosis, the degenerative changes in spinal structures can be considered irrecoverable. The Scoliosis Research Society-Schwab adult deformity classification describes spinal deformity two-dimensionally with coronal curve types and three sagittal modifiers [14]. The first of the sagittal modifiers is PI-LL mismatch (PI-LL), which is the difference between the current LL and the ideal LL based on the pelvic anatomy and PI. The second modifier is global alignment with the sagittal vertical axis (SVA), which is influenced by changes in LL and TK as well as by compensatory mechanisms such as knee flexion and pelvic orientation, the latter described with the third modifier, pelvic tilt (PT). The correlation between SSPA and patient-reported outcome measures (PROM) has been reported, with poorer PROM scores associated with increased SVA and PT in adults with spinal deformities [15][16][17]. Realignment surgery has been shown to have a superior effect on both back pain and quality of life in adult spinal deformity compared to conservative treatment [18,19], and a greater correction of SSPA is related to a higher health-related quality of life (HRQOL) [20]. A well-known phenomenon is relief from spinal claudication by bending forward. The movement reduces LL, providing additional space to the compressed nerve roots [21,22]. There have only been a few studies on SSPA in LSS patients compared to the asymptomatic population, two of which suggest that LSS could affect SSPA [23,24]. Comparing compensatory mechanisms between patients with LSS and those with adult spinal deformity (ASD), the former are more prone to recruit pelvic shift than PT, while the opposite is true of those with ASD [25]. However, overall evidence on an association between LSS and SSPA is scarce. While decompression surgery is still the most common operative treatment for LSS, its effect on SSPA is not well known. The objective of this systematic review was to examine the evidence on the effects of decompressive surgery on the parameters of SSPA among patients with LSS.

Population

Adults undergoing lumbar laminectomy for degenerative conditions, excluding traumas, malignancy, tuberculosis or other spinal infection, connective tissue disorders (rheumatoid arthritis, ankylosing spondylitis, sacroiliitis or related conditions), pregnancy, congenital or developmental abnormalities, cervical or thoracic spinal disorders and neuromuscular diseases.

Intervention

Laminectomy was understood as a surgical procedure whereby a section of bone is removed from one or more vertebrae from L1 to L5 to relieve pressure on the affected nerve or spinal cord.
Comparison

Estimates of SSPA before and after surgery.

Outcome

Change in SSPA measured by any of the radiological parameters shown in Table 1.

Types of studies

Studies of any design published in peer-reviewed academic journals with an abstract available. Conference proceedings, theses, case reports and case series were excluded. In order to avoid missing potentially relevant studies, the use of other limiters and filters was restricted, and the authors relied instead on manual selection. Similar clauses were used when searching the other databases. The references of identified articles and reviews were also checked for relevance.

Selection strategy

The records identified from the data sources were stored using Endnote software (Endnote X7.8, Thomson Reuters). Using a built-in search engine of the Endnote software, duplicates, conference proceedings, theses, reviews and case reports were deleted. Two independent reviewers screened the titles and abstracts of the remaining articles and assessed the full texts of potentially relevant papers (Fig. 1). Disagreements between the reviewers were resolved by consensus or by a third reviewer.

Extraction strategy

The data needed for a quantitative assessment were extracted using a standardized form based on recommendations of the Cochrane Handbook for Systematic Reviews of Interventions [26]. The form included: first author name, year of publication, country, sample size, gender distribution, average age of patients, duration of follow-up, surgical techniques and the estimates of the main outcomes.

Assessment of the methodological risks of systematic bias

Two independent reviewers rated the methodological quality of the included trials using the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies (Table 2). This 14-point tool assesses the risks of systematic bias based on, among other criteria, the clarity of the research question, the participation rate, a power analysis, and follow-up and dropouts. The risk of bias is dichotomized as "yes" versus "no." Disagreements between the reviewers were resolved by consensus or by a third reviewer.

Statistical analysis (meta-analysis)

A random-effects model was used to quantify the pooled effect size of the included studies; this was a more fitting choice than a fixed-effect model considering the context of medical decision-making and generalizing the results beyond the selected samples. The results were accompanied by 95% confidence intervals (95% CI). The heterogeneity was tested using the Q test and the I² statistic. Heterogeneity was deemed present if Q was greater than the degrees of freedom (number of studies − 1). The I² statistic described the percentage of the variability in effect estimates due to heterogeneity rather than to sampling error (chance). As the correlation between pre- and post-estimates within groups was not reported, the coefficient of pre/post correlation was set at 0.6, assuming that at least that strong a correlation should exist between two repeated measures. When the number of studies in the model was ≥ 10, a potential publication bias was assessed using Egger's test (two-tailed p value considered significant if ≤ 0.05), and trim-and-fill correction was applied if needed. All calculations were performed using the Comprehensive Meta-Analysis (CMA) software, Version 3.0, available from www.meta-analysis.com.
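A minimal sketch of the pre/post effect-size computation and DerSimonian-Laird random-effects pooling described above; the r = 0.6 assumption is the review's, while the function names and the 1.96 normal quantile for the 95% CI are our illustrative choices:

```python
import math

R_PREPOST = 0.6  # assumed pre/post correlation, as stated in the Methods

def change_stats(mean_pre, sd_pre, mean_post, sd_post, n, r=R_PREPOST):
    """Mean pre-to-post change and its variance for one study arm."""
    sd_change = math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    return mean_post - mean_pre, sd_change**2 / n

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) with Q and I^2."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2
```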
Results

The search resulted in 807 records (Fig. 1). Of these, 211 were duplicates. Using the Endnote software search engine, 197 records were excluded as conference proceedings, editorials, theses, etc. The remaining 399 records were screened based on titles and abstracts; the agreement between reviewers was good, kappa 0.77 (95% CI 0.66 to 0.87). After further exclusion, 47 records were screened based on their full texts; of these, 18 studies were included for further analysis. Out of the 18 studies, 10 had been conducted in Japan and four in South Korea (Table 1). The sample size varied from 21 to 89 and the average age was around 70 years. The duration of follow-up ranged from 0.5 to 6 years. Fourteen studies were retrospective and four prospective. Decompression was performed mostly on one or two levels. Of the 18 studies, three failed to produce the data needed for the meta-synthesis [27][28][29]. Thus, the quantitative meta-analysis was performed on 15 studies [30][31][32][33][34][35][36][37][38][39][40][41][42][43][44]. LSS had been used as an inclusion criterion in 15 studies. Others included subjects with decompressive surgery for spinal claudication [27], decompressive surgery for degenerative scoliosis [29] and interlaminar decompression for lumbar intervertebral disk herniation [41]. Six studies provided information on preoperative MRI [30,35,37,42-44] and only two had assessed the severity of LSS on axial MRI [37,44]. Eight different surgical methods for lumbar decompression were employed. One study did not provide detailed information on the surgical method [29]. When laminectomy was involved, the spinous process and bony lamina of the index level were removed, providing a route to decompression of one or two intervertebral levels by removal of the ligamentum flavum. Laminotomy was done either on one or both sides, and the bony laminar arch was partially removed, followed by removal of the ligamentum flavum. Some of the studies employed microsurgery and others endoscopic techniques. During spinous split osteotomy and laminoplasty, the spinous process was initially divided or shifted laterally and retracted back to its origin after decompression. Laminectomy was used in three studies. Laminotomy with or without microscopic or endoscopic assistance was used in 10 studies. Spinous split osteotomy or laminoplasty was used in four studies. The exclusion criteria varied widely. Most of the studies excluded vertebral fracture or post-traumatic kyphosis, neurological disease (e.g., Parkinson's) and previous spinal surgery. Even though several studies included patients with degenerative spondylolisthesis, none accepted severe spondylolisthesis of grade ≥ 2 according to the Meyerding classification. Degenerative scoliosis, with varying definitions, was also a common exclusion criterion. Plain radiographs of the lumbar spine were used in the eight studies assessing specifically SSPA parameters. The remaining studies employed radiographs of the entire spine or comparable imaging techniques (e.g., EOS™) providing wider information on SSPA and pelvic orientation. Of the SSPA parameters, LL measured from L1 to S1 was reported most frequently (13 studies), with group sizes varying from 11 to 89, resulting in a total of 827 patients (Table 3). The SVA and PT were estimated in 10 groups, yielding a pooled sample of 547 patients for each parameter.

Risk of systematic bias

Risk of systematic bias was assessed with the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional studies [24] (Table 2).
The most frequent sources of potential risk of systematic bias were risks related to an absent study power analysis, unclear inclusion criteria and non-blinded design. The risk was mostly small regarding the clarity of study objectives, sample descriptions, sufficiently described pre- and post-measures and definitions of variables. Two subcategories, variation of exposures and amount of repeated measures, were considered "not applicable" for all 18 studies. In four studies, the outcome assessors were blinded. Two out of four prospective studies had a dropout of 20% or less. Only two studies reported a participation rate of at least 50%.

Meta-analysis

The occipital-7th cervical angle (C0-C7), T1 pelvic angle (TPA) and spinosacral angle (SSA) were each used in the included studies only once (Table 3). As shown in Table 4, the pooled effects (when subgroups within the study were used as the unit of analysis) were mostly statistically significant, except for the PI and thoracolumbar kyphosis (TLK) (T10-L2). When taking into account the 95% confidence limits closest to zero, the difference estimates were small, varying from 1° to 3° (2 mm in the case of SVA). When pooling the results using the study as the unit of analysis, the pooled estimates did not substantially change (Fig. 2).

Discussion

This systematic review of 18 observational studies evaluated the evidence on the effect of LSS decompression surgery on SSPA. The meta-analysis of 15 studies showed some small changes in SSPA after surgery. The observed pooled effect was toward more neutral alignment: SVA and PT decreased, and LL increased after decompressive surgery. While these changes were mostly statistically significant, they showed only small fluctuations of a few degrees or millimeters and were probably not clinically significant. The overall risk of systematic bias of the included studies was considered high using the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional studies. The speculated effect of decompression surgery on relieving the compression of the cauda equina, and the previously observed association between SVA and PT and the severity of symptoms, as seen in ASD, could not be confirmed with the present results [15,16]. A former study by Buckland et al. showed the importance of pelvic shift as a compensatory mechanism in LSS [25], but none of the studies in the present systematic review employed pelvic shift as an SSPA parameter. The studies covered eight decompression techniques, laminotomy or its alternatives being the most common. Although these techniques varied substantially [27,28,35,37,38], they were well-described, allowing them to be compared. No superiority of a particular surgical technique was observed. A Cochrane meta-analysis compared the effectiveness of different surgical techniques for LSS. Primary outcomes in the included studies were leg pain, satisfaction, disability indexes, postoperative instability and perioperative complications; no differences between techniques were found [45]. Bernhardt and Bridwell were among the first to report normal values of sagittal spinal alignment in an asymptomatic population, with Cobb measurements of the TK, thoracolumbar junction and lumbar spine [46]. Lenke introduced a new classification for AIS with a sagittal modifier evaluating the extent of TK [47]. The Scoliosis Research Society-Schwab adult deformity classification takes three parameters, PI-LL, SVA and PT, into account in the sagittal plane evaluation [14].
Several previous studies have reported the important role of SSPA in spinal deformity [15,16,48]. Recently, new surgical techniques have been introduced to restore sagittal imbalance and to treat symptoms in adults with spinal deformity. While the results have been in favor of surgical treatment [18][19][20], the rates of complications and reoperations have been high [48,49]. In the context of other spinal disorders, the role of SSPA in the reported results has been highly inconsistent. Barrey et al. found one- and two-level lumbar spondylolisthesis to correlate with greater positive SSPA and pelvic retroversion [50]. Rhee et al. did not observe a connection between clinical improvement and changes in LL or overall sagittal imbalance after treatment of lumbar spondylolisthesis [51]. Similar findings have been reported by Försth et al. when comparing fusion surgery with decompression alone in LSS [52]. Zárate-Kalfópulos et al. proposed that pelvic morphology has a predisposing role in the pathogenesis of lumbar degeneration, with a lower PI being associated with a risk of LSS and a higher PI with a risk of degenerative spondylolisthesis [53]. Evidence on the association between LSS and SSPA is scarce. While bending forward for relief is a well-known phenomenon, two studies have suggested that LSS might affect SSPA. Suzuki et al. found that LSS patients with claudication symptoms have greater positive sagittal balance and increased pelvic retroversion than LSS patients without claudication [24]. Farrokhi et al., comparing LSS and ASD patients, found that LSS patients were more prone to increased pelvic shift to allow a forward-bending posture, especially in the well-aligned group [25]. Bayerl et al. classified patients undergoing decompression surgery for LSS according to the severity of sagittal imbalance; the results were comparable between groups in leg and back pain and quality of life [54]. While investigating the correlation between spinopelvic parameters and the effect of physiotherapy on the severity of back pain in mild LSS, Beyer et al. also reported that a greater PI predicts greater relief of back pain [55]. Additionally, Liang et al. observed a normalization of increased positive sagittal balance after lumbar discectomy [56]. A single previous systematic review of the topic, which included 10 studies (eight of which were included in our review) [57], while lacking a quantitative meta-analysis, estimated that decompression surgery led to SVA correction in 25% to 73% of patients. It has also been suggested that a greater PI-LL preoperatively correlates with residual sagittal malalignment postoperatively, which could be explained by structural degenerative changes rather than by reversible changes due to LSS itself [33,34,40]. Any generalization of these findings should be done carefully. A meta-analysis is always an approximation. The included studies differed widely regarding surgical techniques, inclusion criteria, and the radiological assessment of SSPA parameters. The pooled study sample was limited to a particular age group of around 70 years. The overall risk of systematic bias was high, and there has not been a single randomized controlled trial on the topic. Additionally, only four of the included studies were prospective. The substantial variability in follow-up duration might also weaken the conclusions of the review, considering that degenerative changes might affect SSPA parameters especially during long-term follow-up, as has previously been observed in the general population [8].
The interpretation of the results might also be uncertain, as there is no generally accepted radiological classification of LSS severity, and the diagnosis is based on a combination of patient history, clinical findings, radiographs and neurophysiological assessment [58]. As only two of the 18 included studies provided some information about LSS grade, the pooled sample might be substantially mixed regarding LSS severity. The influence of coexisting spondylolisthesis on the magnitude of the studied effect is also unclear. Evaluating the clinical meaning of the changes found in our meta-analysis presents some challenges. The minimal clinically important difference is not known for SSPA parameters, and clinical thresholds for symptomatic SSPA are controversial. Although SVA > 47 mm, PT > 22° and PI-LL > 11° have been correlated with more severe disability, there are no generally accepted limits for normal SSPA parameters [17]. In conclusion, the quality of the evidence on the effect of decompressive surgery for LSS on SSPA was low, and there was substantial heterogeneity of study design among the included studies. Although decompression surgery demonstrated a statistically significant effect on LL, SVA and PT toward more neutral alignment, the effect was small and probably clinically insignificant.

Funding: Open access funding provided by University of Turku (UTU) including Turku University Central Hospital.

Compliance with ethical standards

Conflict of interest: The authors declare that they have no conflict of interest.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Seroconversion rates following COVID-19 vaccination among patients with cancer As COVID-19 adversely affects patients with cancer, prophylactic strategies are critically needed. Using a validated antibody assay against the SARS-CoV-2 spike protein, we determined a high seroconversion rate (94%) in 200 patients with cancer in New York City who had received full dosing with one of the FDA-approved COVID-19 vaccines. Compared with solid tumors (98%), a significantly lower rate of seroconversion was observed in patients with hematologic malignancies (85%), particularly recipients of highly immunosuppressive therapies such as anti-CD20 therapies (70%) and stem cell transplantation (73%). Patients receiving immune checkpoint inhibitor therapy (97%) or hormonal therapies (100%) demonstrated high seroconversion post vaccination. Patients with prior COVID-19 infection demonstrated higher anti-spike IgG titers post vaccination. Lower IgG titers were observed following vaccination with the adenoviral vaccine than with the mRNA-based vaccines. These data demonstrate generally high immunogenicity of COVID-19 vaccination in oncology patients and identify immunosuppressed cohorts that need novel vaccination or passive immunization strategies. In brief Evaluating IgG levels against the SARS-CoV-2 spike protein, Thakkar et al. demonstrate high rates of seroconversion in a diverse cohort of patients with cancer, while identifying lower immunogenicity in patients with hematologic malignancies and in patients having received immunosuppressive therapies. INTRODUCTION COVID-19 can result in increased morbidity and mortality in patients with cancer (Kuderer et al., 2020; Mehta et al., 2020; Bakouny et al., 2020), suggesting the need for prophylactic strategies in this immunosuppressed population. In patients with cancer who were affected by COVID-19, increased age, comorbidities, poor performance status, and thoracic and hematologic malignancies have been identified as adverse prognostic indicators for reduced survival (Robilotti et al., 2020; Mehta et al., 2020). Follow-up studies on seroconversion in cancer patients with COVID-19 demonstrated that while most will develop an antibody response similar to that of the general population, subgroups of cancer patients with hematologic malignancies, or receiving anti-CD20 antibody therapies and stem cell transplantation, exhibit lower rates of seroconversion (Thakkar et al., 2021; Marra et al., 2020). These results suggested that overall high seroconversion rates might be anticipated in patients with malignancies following COVID-19 vaccination as well, with likely reduced immunogenicity in certain subgroups owing to differing degrees and mechanisms of immune suppression. Patients with cancer can be immunocompromised due to a multitude of factors, such as the underlying malignancy itself, the bone marrow suppressive effects of cytotoxic chemotherapy, and prior or ongoing treatments with a high degree of immunosuppressive effects, such as corticosteroids, B-cell depleting therapies (i.e., anti-CD20 antibodies), cell therapies (especially chimeric antigen receptor [CAR]-T cell therapy), and stem cell transplantation.
It is critical to understand the immunogenicity of approved vaccines, both for assessing the need for ongoing social isolation and other strategies to mitigate the risk of contracting COVID-19 by immunosuppressed patients, and for designing and rapidly conducting clinical studies focused on passive immunization strategies and vaccine trials assessing unique schedules to enable boosting of the immune response. However, trials of the currently approved COVID-19 vaccines generally excluded patients with a diagnosis of a malignancy; therefore, information on the safety and efficacy of these vaccines regarding the development of effective immunity is currently extremely sparse (Friese et al., 2021). Given the higher morbidity and mortality of patients with cancer and COVID-19, their ongoing need to be exposed to the healthcare system, and their frequent need for immunosuppressive therapies, patients with cancer have been identified as a high-priority subgroup for COVID-19 vaccinations, an effort supported by multiple key organizations (Ribas et al., 2021; Van Der Veldt et al., 2021; Desai et al., 2021). While patients with cancer clearly represent a highly susceptible group with a strong and immediate need to be protected by available, effective vaccines, there remain many uncertainties. For example, following certain immunosuppressive therapies, such as autologous or allogeneic stem cell transplantation, anti-CD20 therapies, or T cell-directed regimens, vaccinations have low efficacy and their best timing is unclear (Jaffe et al., 2006; Rubin et al., 2014). Such guidance is also lacking for patients undergoing cytotoxic chemotherapy. One randomized study did not suggest notable differences in influenza vaccine immunogenicity depending on whether vaccination was given on the day of chemotherapy or during the neutropenic period of the treatment cycle (Keam et al., 2017). While many agencies have suggested administering vaccines 1-2 weeks prior to a chemotherapy dose, this recommendation has not been practical given limited vaccination slot availability and variable chemotherapy (e.g., weekly) and vaccine administration schedules (e.g., two doses of BNT162b2 are recommended to be given 21 days apart while two doses of mRNA-1273 are given 28 days apart), leading to liberal recommendations to allow the most rapid vaccination of these immunosuppressed patients (Desai et al., 2021). Vaccine safety and immunogenicity information is also generally lacking in the context of therapies that stimulate the immune system, such as immune checkpoint inhibitor (ICI) therapy, with a few studies suggesting general safety and possibly heightened immunity in this context (Waissengrin et al., 2021). To narrow this key knowledge gap, we conducted this study to comprehensively determine the immunogenicity of vaccines in a cohort of patients with a diagnosis of a malignancy in New York City via evaluation of rates of anti-spike immunoglobulin G (IgG) antibody positivity following vaccination with one of the three Food and Drug Administration (FDA)-approved COVID-19 vaccines. Study cohort Two hundred and thirteen patients were enrolled in the study via an informed-consent process. An additional 29 patients with cancer who underwent SARS-CoV-2 spike IgG testing were identified by retrospective chart review. Eighteen patients did not have a SARS-CoV-2 spike IgG test performed after consenting and were excluded.
Another 20 patients were excluded as they had a SARS-CoV-2 spike IgG test before completion of a full vaccination series according to FDA guidance (6 with negative and 14 with positive results). Two more patients were excluded who had a negative SARS-CoV-2 spike IgG and no clear documentation of the dates or types of vaccine, and two more patients were excluded due to duplicate medical records. Finally, 233 patients with cancer having completed the FDA-recommended two doses of the mRNA vaccines (BNT162b2 [Polack et al., 2020] or mRNA-1273 [Baden et al., 2020]) or one dose of the adenoviral vaccine (Ad26.COV2.S [Sadoff et al., 2021]) were included in the safety analysis (Figure 1). A cohort of 200 patients underwent a SARS-CoV-2 spike IgG test and was included in the immunogenicity analysis. Serological data (positive or negative IgG test) from these 200 patients were used in association studies between cancer subtypes and treatments. We also investigated the association between the quantitative titer of SARS-CoV-2 spike IgG and cancer subtypes and treatments. One hundred and eighty-five of the 200 patients had IgG titers available that were obtained at least 7 days after the last dose of the vaccine ("vaccinated cohort with titers"). Twenty-six de-identified patients without a cancer diagnosis who had completed COVID-19 vaccination and received a SARS-CoV-2 IgG spike antibody test >7 days after their most recent vaccine dose were used as a control cohort (Table S1). This is represented in the CONSORT diagram (Figure 1).

Figure 1. Two hundred and thirteen patients consented to study participation and 29 were enrolled via retrospective chart review. Ultimately, based on study criteria, 233 patients were evaluable for the vaccine safety analysis and 200 patients were evaluable for the vaccine efficacy analysis. One hundred and eighty-five of the 200 patients evaluated for vaccine efficacy were then further assessed as a vaccinated cohort for antibody titer comparisons.

Baseline characteristics A total of 200 patients who completed their full vaccination schedule according to FDA guidance were included in the efficacy study. The median age of the patient population was 67 years (range 27-90 years). Fifty-eight percent (116/200) of patients were female and 42% (84/200) were male. The ethnicity/race of the patients represented the diverse patient population of the Bronx, New York. Sixty-four patients (32%) identified their ethnicity as African American, 78 (39%) as Hispanic, 40 (22%) as Caucasian, 10 (5%) as Asian, and 5 (3%) as other ethnicities. One hundred and thirty-four patients (67%) were diagnosed with a solid tumor while 66 patients (33%) had a hematologic malignancy, with a balanced representation of all common cancer types (Table 1). As patients were recruited from our outpatient hematology/oncology clinics, most patients had an active cancer diagnosis. One hundred and fifty patients (75%) had an active malignancy and 135 patients (67%) were on active cancer therapy at the time of their vaccination, with 112 (56%) patients on active chemotherapy. Thirty-eight (19%) patients were on active chemotherapy within 48 h of at least one of the vaccine doses. Types of cancer therapies are listed in detail in Table 2. One hundred and fifteen patients (54%) had completed vaccination with the BNT162b2 vaccine and 62 (31%) with the mRNA-1273 mRNA vaccine, while 20 (10%) had received the single dose of the Ad26.COV2.S vaccine.
Three patients had received a complete mRNA vaccination series; however, information about the type (BNT162b2 versus mRNA-1273) was not available. Seroconversion in the overall cohort was high (94%), with only 13 (6%) patients having a negative value (titer below 50 arbitrary units per milliliter [AU/mL]). Percent positivity appeared similar across the vaccine types (BNT162b2 95%, mRNA-1273 94%, and Ad26.COV2.S 85%), with a trend toward lesser positivity with the Ad26.COV2.S vaccine. We also assessed antibody titers in a subcohort of 185 patients with available IgG levels >7 days after the final dose of vaccine (the vaccinated cohort matching the definition of our non-cancer control cohort). The median time between the spike antibody test and the vaccine dose for this subcohort was 30 days (interquartile range 19-53 days). In solid malignancy patients the median was 31.5 days, and in patients with hematologic malignancies the median was 28.5 days. The highest IgG titers were seen with the mRNA-1273 vaccine (median 11,963 AU/mL, standard deviation [SD] 18,742), followed by the BNT162b2 vaccine (median 5,173 AU/mL, SD 16,699) and the single-dose Ad26.COV2.S vaccine (median 1,121 AU/mL, SD 17,571) (p < 0.05, Kruskal-Wallis test, Figure 2B). Recognizing that the Ad26.COV2.S vaccine was introduced onto the market late, which might or might not account for the lower titers of spike antibodies, we assessed associations of antibody seropositivity and antibody titers with time from completion of vaccination. While there was no association with titer levels, we found a statistically significant positive association between the time from vaccination until IgG testing and antibody seropositivity (p = 0.03, Kruskal-Wallis test). We then conducted a multivariate analysis with a generalized linear model and observed that the relationship between vaccine type and titers remained significant after accounting for the effect of time from vaccine (Figure S1). Vaccinations appeared to be generally very safe in this cohort, with mostly mild and moderate anticipated adverse effects reported. In the safety analysis, 139 patients had received the BNT162b2 first dose, 131 patients the BNT162b2 second dose, 71 patients the mRNA-1273 first dose, 64 patients the mRNA-1273 second dose, and 23 patients the single-dose Ad26.COV2.S vaccine. Across all doses, 194 vaccination episodes were reported to lead to no adverse effects. Sore arm and muscle aches were the first and second most commonly reported adverse effects, in 131 and 49 instances, respectively. A comprehensive analysis of the adverse effect profile of each type of vaccine is presented in Figures S4 and S5. Solid tumors versus hematologic malignancies In the cohort of patients with solid tumors, seropositivity post vaccination was high (98%), while a significantly lower seropositivity rate was seen in patients with hematologic malignancies (85%, p = 0.001, Fisher's exact test). Analysis of the subcohort of 185 patients with available IgG titers >7 days post vaccination revealed significantly higher titer values in solid tumors (median 7,858 AU/mL, SD 18,103) than in hematologic malignancies (median 2,528 AU/mL, SD 12,338, p = 0.013, Kruskal-Wallis test). Furthermore, to ensure that the difference in titers was not confounded by different time intervals from vaccination, we conducted a multivariate analysis using time from vaccination to IgG assay testing as a confounder and determined that the lower titers in hematologic malignancies than in solid tumors remained significant (p = 0.0012).
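As an illustration of the titer comparisons described above, the following Python sketch runs a Kruskal-Wallis test across vaccine types and a time-adjusted regression on synthetic log-normal titers centred on the reported medians. The study's own analyses were done in R and its individual-level data are not public, so the sample sizes, dispersion parameter, and time variable here are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Synthetic log-normal titers centred on the reported medians
# (mRNA-1273: 11,963; BNT162b2: 5,173; Ad26.COV2.S: 1,121 AU/mL).
groups = {"mRNA-1273": (11963, 60), "BNT162b2": (5173, 110),
          "Ad26.COV2.S": (1121, 15)}
samples = {name: rng.lognormal(np.log(med), 1.0, n)
           for name, (med, n) in groups.items()}

h, p = kruskal(*samples.values())
print(f"Kruskal-Wallis across vaccine types: H = {h:.2f}, p = {p:.4f}")

# Multivariate check: does the vaccine effect on log titer survive
# adjustment for time from vaccination to IgG testing?
df = pd.DataFrame([
    {"vaccine": name, "log_titer": np.log(t),
     "days_since_vax": rng.integers(7, 90)}
    for name, titers in samples.items() for t in titers
])
fit = smf.ols("log_titer ~ C(vaccine) + days_since_vax", data=df).fit()
print(fit.summary().tables[1])
```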
Comparison of titers from non-cancer controls (Table S1) Association with active cancer therapies and immunosuppressive therapies No significant differences in seroconversion were seen when comparing patients on active cancer therapy with patients who were not (96% versus 93%). However, significantly lower rates of seropositivity were seen in patients on active cytotoxic chemotherapy (92%) versus others (99%, p = 0.04), without notable differences in titer levels (Figure 3). Next, we focused our analysis on patients who had received specific immunosuppressive therapies, such as stem cell transplantation, anti-CD20 therapy, or CAR-T cell therapy. We observed significantly lower seroconversion rates in patients who underwent these therapies: stem cell transplant (73%, p = 0.0002, Fisher's exact test), anti-CD20 therapies (70%, p = 0.0001, Fisher's exact test) and CAR-T cell treatments (all three patients remained seronegative after vaccination, p = 0.0002, Fisher's exact test) (Table 3). Of the 26 stem cell transplant patients, 23 received an autologous and 3 an allogeneic transplant (2 seropositive, 1 seronegative). Accordingly, significantly lower titer levels were also seen in patients receiving anti-CD20 therapies compared with the overall group of patients (Figure 4). These results highlight the continued susceptibility of patients receiving these therapies during the pandemic. Associations with other patient demographics and treatments These analyses are available in Table S2. Age Our patient population had a wide age range (27-90 years). We studied the association between age and SARS-CoV-2 IgG spike antibody seroconversion rates and observed no statistically significant association between these variables (p = 0.13, Kruskal-Wallis test). Ethnicity Given the ethnically diverse cohort in this study, we studied the association between seropositivity and patient ethnicity. We observed no statistically significant association between ethnicity and spike antibody seroconversion rates (p = 0.4574, Fisher's exact test). Time since immunosuppressive therapy We also studied the association between time since specific immunosuppressive therapies and immunogenicity. We divided patients into two groups: <365 days and >365 days between anti-CD20 antibody therapy or stem cell transplant and anti-SARS-CoV-2 spike IgG testing. The comparison between seropositivity and time since immunosuppressive treatment was not statistically significant (p = 1, Fisher's exact test). Steroid use Our cohort included 55 patients who had used steroids (daily or occasional) at the time of vaccination. Five had a negative spike IgG result and 50 had positive results. This was not statistically different from the entire cohort (p = 0.348, Fisher's exact test). Treatment within 48 h of a vaccine dose We collected data to evaluate whether patients who received active cancer therapies 48 h before or after a vaccine dose had lower seropositivity rates. Thirty-eight patients met the above criteria. We observed that three patients were seronegative, and there was no statistically significant association with whether patients received cancer therapies within 48 h of the vaccine or not (p = 0.7, Fisher's exact test). These patients were compared with the entire cohort.
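The subgroup comparisons above all rest on Fisher's exact test of 2x2 seroconversion tables. A minimal Python sketch of that calculation is shown below; the cell counts are approximate reconstructions from the reported percentages and cohort sizes, not the study's exact tabulations.

```python
from scipy.stats import fisher_exact

# Approximate 2x2 tables reconstructed from the reported percentages;
# counts are illustrative only. Columns: [seropositive, seronegative].
stem_cell = [[19, 7],     # stem cell transplant recipients (~73% of 26)
             [168, 6]]    # remainder of the 200-patient cohort
odds, p = fisher_exact(stem_cell)
print(f"Stem cell transplant vs rest: OR = {odds:.2f}, p = {p:.5f}")

solid_vs_heme = [[131, 3],   # solid tumors (~98% of 134)
                 [56, 10]]   # hematologic malignancies (~85% of 66)
odds, p = fisher_exact(solid_vs_heme)
print(f"Solid vs hematologic: OR = {odds:.2f}, p = {p:.5f}")
```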
Association with additional cancer therapies We observed high rates of post-vaccination seroconversion in patients on hormonal therapy (100% seropositivity, p = 0.04) and ICI therapy (97%, p = 0.69, Fisher's exact test) when compared with the rest of the cohort. Interestingly, while all patients on CDK4/6 inhibitor treatment showed positive anti-spike IgG test results, antibody titers were notably very low in this small subset (n = 5, median 1,242 AU/mL, SD 2,435 versus median 6,887 AU/mL, SD 17,843 for the overall cohort) (Figure 5). Given the known involvement of the CDK4/6 pathway in immune activation (Chen-Kiang, 2003; Cingöz and Goff, 2018; Laphanuwat and Jirawatnotai, 2019), this might be biologically plausible and warrants further studies into the impact of CDK4/6 inhibitors on vaccine efficacy. We also noted trends toward lower titers among other subgroups, such as patients having received BCL2- or BTK-targeted therapy, consistent with prior observations on their negative impact on vaccine efficacy (Pleyer et al., 2021) (Figure S3). Association with prior COVID-19 Previous studies have reported heightened antibody responses to COVID-19 vaccinations in patients with a prior COVID-19 infection (Krammer et al., 2021). Our cohort included 22 patients with cancer who had known prior COVID-19, and a high rate of seroconversion was seen in this subset (21/22, a 95% seroconversion rate; the one patient who did not seroconvert had received an autologous stem cell transplant). Antibody titers in previously infected patients were significantly higher than in those who were not known to be previously infected (prior COVID-19: median 46,737 AU/mL, SD 18,681; others: median 5,296 AU/mL, SD 16,193, p < 0.001, Kruskal-Wallis test) (Figure 5D). DISCUSSION COVID-19 disease has had a devastating impact worldwide and especially so among patients with a cancer diagnosis. Various factors adversely affect outcomes in cancer patients affected by COVID-19, including the impact of the underlying disease on performance status, the age and comorbidities of affected patients, disease-related immune suppression, such as in patients with hematologic malignancies, and the immunosuppressive effects of disease-directed therapies (Lee et al., 2020a, 2020b; Jee et al., 2020; García-Suárez et al., 2020; Mehta et al., 2020; Westblade et al., 2020). In addition, patients with cancer requiring active therapy face frequent exposure to the healthcare system, increasing the risk of acquiring COVID-19. Lastly, treatment modifications due to the ongoing pandemic can compromise disease outcomes, amplifying the urgent need to implement widespread vaccination of patients with malignant disease, an initiative with broad support from a large swath of cancer care/advocacy organizations. While all three FDA-approved vaccines, the mRNA-based mRNA-1273 (Moderna) and BNT162b2 (Pfizer/BioNTech) and the adenovirus-based Ad26.COV2.S (Johnson & Johnson), yield high efficacy in the general population, lower seropositivity rates have been observed in patients with chronic lymphocytic leukemia and myeloma (Herishanu et al., 2021; Terpos et al., 2021; Bird et al., 2021) and in those undergoing therapy with BTK inhibitors or venetoclax/anti-CD20 therapy, in line with our observations (Herishanu et al., 2021).
These early studies clearly highlight the need to complete full vaccination schedules for optimal seroconversion and also emphasize the need for larger cohort studies to determine the immunogenicity of COVID-19 vaccines among patients receiving distinct cancer therapeutics. Several shortcomings of our study need to be acknowledged. These include the limited representation of some patient cohorts, which does not allow clear conclusions regarding seroconversion rates among less common malignancy types or less frequently used treatment approaches. Our cohort also over-represented patients on active therapy, as recruitment occurred over a short period in our outpatient departments. In addition, our study relies solely on anti-spike protein IgG levels as a surrogate for immunity to COVID-19. Admittedly, the anti-spike IgG antibody used in our study, albeit specific to the receptor binding domain of the spike protein, might still not necessarily correlate with virus-neutralizing activity. Our study also did not evaluate the level of SARS-CoV-2-specific T cell responses. Further research will be needed to directly assess virus neutralization and cellular immunity (Bange et al., 2021). Another potential limitation is underestimation of titer values for anti-spike antibodies, as evidence suggests that titers may rise over time, and the upper limit of detection of our assay is 50,000 AU/mL (Widge et al., 2020); however, a cutoff of 7 days was used to match the control cohort and eliminate bias in the analysis. Lastly, some observations are based on smaller subsets and post hoc analyses, so larger studies are needed for validation. Our study, along with other emerging data, strongly highlights the continued need to vaccinate patients with a cancer diagnosis urgently and broadly, as vaccinations are likely to be highly effective. On the other hand, our study highlights at-risk cohorts of patients, in particular patients with hematologic malignancies following receipt of immunosuppressive therapies such as stem cell transplantation, anti-CD20 therapies, and CAR-T cell treatments. These cohorts of patients could potentially benefit from passive immunization with anti-COVID-19 antibodies in the face of the ongoing pandemic. In fact, monoclonal anti-COVID-19 antibodies have shown therapeutic and prophylactic potential in transplant or at-risk patient cohorts (Rizk et al., 2021; Hurt and Wheatley, 2021; Dhand et al., 2021). In addition, higher doses or booster doses of some vaccines, or vaccinations with mixed vaccine types, might offer stronger immunogenicity and need to be explored in immunosuppressed patients. Lastly, protective measures such as masking and social distancing will remain logical aspects of defensive management strategies for highly immunosuppressed patients during the pandemic until safe herd immunity levels of population-level vaccination are reached. In summary, we present a large cohort of patients with malignancy who underwent full COVID-19 vaccination according to FDA guidance. In this cohort of ethnically diverse patients with broad representation of a wide range of malignancies and therapies, very high seropositivity rates were observed, in contrast to previously published smaller cohort studies focusing on unique subsets of susceptible patients or non-standard vaccination schedules. Statistically significantly lower seropositivity rates were observed in patients with hematologic malignancies and patients having received immunosuppressive therapies.
Our findings support broad and urgent COVID-19 vaccination of patients with a cancer diagnosis to enable optimal cancer treatment delivery during the ongoing COVID-19 pandemic. STAR+METHODS Detailed methods are provided in the online version of this paper. ACKNOWLEDGMENTS We acknowledge Albert Einstein Cancer Center grant P30 CA013330 and NCORP grant 2UG1CA189859-06 for providing funding for this project. This work was supported partly by the Jane A. and Myles P. Dempsey fund. DECLARATION OF INTERESTS A.V. has received research funding from GlaxoSmithKline, BMS, Janssen, Incyte, MedPacto, Celgene, Novartis, Curis, Prelude, and Eli Lilly and Company, has received compensation as a scientific advisor to Novartis, Stelexis Therapeutics, Acceleron Pharma, and Celgene, and has equity ownership in Stelexis Therapeutics. All other authors declare no competing interests. Participants were enrolled in the study in 2021 after signing informed consent. Subjects underwent an anti-SARS-CoV-2 spike IgG assay, completed a questionnaire focusing on the details and adverse effects of COVID-19 vaccination, and provided optional consent for future biobanking for research. The protocol also allowed data collection via retrospective chart review for a small number of patients who underwent anti-SARS-CoV-2 spike IgG antibody testing after vaccination as ordered at the discretion of their oncologist. Safety data on the vaccines for patients recruited via informed consent were collected via questionnaires, with an optional telephone call if patients did not remember the dates of the vaccine. Assessment was done at the time when patients signed the informed consent. Safety data for patients recruited via retrospective chart review were collected if available as part of the electronic medical record. Data on cancer-directed therapy were retrieved from retrospective chart review and strict criteria were used to classify them (e.g., hormonal therapy strictly included androgen deprivation, ovarian function suppression and aromatase inhibitors; steroids were not considered part of hormonal therapy). Active cancer means a patient with an initial cancer diagnosis on treatment, including surgery, radiation, neoadjuvant, adjuvant or systemic chemotherapy or maintenance therapy (e.g., lenalidomide for myeloma or immunotherapy maintenance for non-small cell lung cancer), or not on treatment and under active surveillance. Remission means a patient with a past cancer diagnosis who has completed cancer-directed therapy and is now only undergoing surveillance. Relapse/recurrent means a patient with a cancer diagnosis who had completed cancer-directed therapy and achieved remission, or was on maintenance therapy, and is now experiencing disease that needs additional treatment. Progressive means a patient with a cancer diagnosis who developed disease progression while on systemic therapy. METHOD DETAILS Anti-SARS-CoV-2 spike IgG assay The AdviseDx SARS-CoV-2 IgG II assay was used for anti-spike IgG antibody testing. AdviseDx is an automated, two-step chemiluminescent immunoassay performed on the Abbott i2000SR instrument. The assay is designed to detect IgG antibodies directed against the receptor binding domain (RBD) of the S1 subunit of the spike protein of SARS-CoV-2. The RBD is a portion of the S1 subunit of the viral spike protein and has a high affinity for the angiotensin converting enzyme 2 (ACE2) receptor on the cellular membrane (Pillay, 2020; Yang et al., 2020). The procedure, in brief, is as follows.
Patient serum containing IgG antibodies directed against the RBD is bound to microparticles coated with SARS-CoV-2 antigen. The mixture is then washed to remove unbound IgG, and an acridinium-labeled anti-human IgG secondary antibody is added and incubated. Following another wash, sodium hydroxide is added and the acridinium undergoes an oxidative reaction that releases light energy, which is detected by the instrument and expressed as relative light units (RLU). There is a direct relationship between the amount of anti-spike IgG antibody and the RLU detected by the system optics. The RLU values are fit to a logistic curve, which was used to calibrate the instrument and express results as a concentration in AU/mL (arbitrary units per milliliter). This assay has recently shown high sensitivity (100%) and positive percent agreement with other platforms, including a surrogate neutralization assay (Bradley et al., 2021), and has also demonstrated high specificity in both the post-COVID-19-infection and post-vaccination settings. The cutoff value for this assay is 50 AU/mL, with values <50 AU/mL reported as negative; the maximum reportable value is 50,000 AU/mL. QUANTIFICATION AND STATISTICAL ANALYSIS Association between two categorical variables was tested with a Fisher exact test. Association between one categorical and one ordinal variable was tested with a Kruskal-Wallis rank sum test. Pre-specified hypotheses to be tested included assessing the correlation of seropositivity with solid and hematologic malignancies and between the overall cohort and highly immunosuppressive therapies. All analyses were done in R (version 3.6.2).
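The RLU-to-concentration conversion described above can be illustrated with a four-parameter logistic (4PL) calibration fit. The calibrator concentrations and RLU readings below are invented for illustration only; the actual AdviseDx calibration parameters are proprietary, and the study's statistical analyses were performed in R rather than Python.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic: RLU as a function of concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Invented calibrator concentrations (AU/mL) and RLU readings.
conc = np.array([21.0, 200.0, 2000.0, 10000.0, 25000.0, 50000.0])
rlu = np.array([1.2e4, 9.5e4, 6.8e5, 2.4e6, 4.1e6, 5.0e6])

params, _ = curve_fit(four_pl, conc, rlu,
                      p0=[1e4, 1.0, 5e3, 6e6], maxfev=10000)

def rlu_to_au(r):
    """Invert the fitted curve to report a concentration in AU/mL."""
    a, b, c, d = params
    return c * ((a - d) / (r - d) - 1.0) ** (1.0 / b)

reading = 3.0e5
au = rlu_to_au(reading)
print(f"RLU {reading:.2e} -> {au:.0f} AU/mL "
      f"({'positive' if au >= 50 else 'negative'} at the 50 AU/mL cutoff)")
```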
The role of autophagy in viral infections Autophagy is an evolutionarily conserved catabolic cellular process that exerts antiviral functions during a viral invasion. However, co-evolution and co-adaptation between viruses and autophagy have armed viruses with multiple strategies to subvert the autophagic machinery and counteract cellular antiviral responses. Specifically, the host cell quickly initiates autophagy to degrade virus particles or virus components upon a viral infection, while cooperating with the antiviral interferon response to inhibit virus replication. Degraded virus-derived antigens can be presented to T lymphocytes to orchestrate the adaptive immune response. Nevertheless, some viruses have evolved the ability to inhibit autophagy in order to evade degradation and immune responses. Others induce autophagy but then hijack autophagosomes as a replication site, or hijack the secretory autophagy pathway to promote maturation and egress of virus particles, thereby increasing replication and transmission efficiency. Interestingly, different viruses have unique strategies to counteract different types of selective autophagy, such as exploiting autophagy to regulate organelle degradation, metabolic processes, and immune responses. In short, this review focuses on the interaction between autophagy and viruses, explaining how autophagy serves multiple roles in viral infection, with either proviral or antiviral functions. In chaperone-mediated autophagy (CMA), KFERQ-like motifs in the cargo are required for molecular chaperone binding to form the CMA substrate/chaperone complex. Macroautophagy, which will be discussed in depth in this review (hereafter referred to as autophagy), involves the formation of autophagosomes and constitutes the major lysosomal pathway for the turnover of cytoplasmic components. As a defense strategy of organisms, autophagy can be triggered to antagonize viral infections by delivering cytoplasmic virions or viral components to lysosomes for degradation. In addition, this degradation also promotes the inflammatory response, antigen presentation, and pathogen recognition and clearance [3]. However, previous studies have shown that some viruses inhibit or evade autophagy, whereas some viruses even hijack the autophagy machinery or exploit autophagy to circumvent host immunity mechanisms for their benefit. With the onset of the global COVID-19 epidemic, the relationship between autophagy and viruses has attracted increased scientific attention. Some new discoveries have broadened our understanding of the relationship between autophagy and viruses. Notably, virus-specific induction of autophagy is related to endosomes. A virus can trigger the autophagy-related 8-phosphatidylserine (ATG8-PS) alternative lipidation mechanism, as well as several others, but this remains poorly understood. Based on the different steps of autophagy and the regulation of immune responses by autophagy, this review surveys the role of autophagy in viral replication, maturation, egress and cell-cell spreading. Autophagy The process and regulation of autophagy More than 30 ATGs reported to date participate in the following four steps of autophagy: (1) Autophagy initiation In general, autophagosomes are derived from the isolation membrane (IM) produced on various organelles, including the endoplasmic reticulum (ER), plasma membrane, recycling endosomes, mitochondria, ATG9 vesicles, COPII vesicles, and the ER-Golgi intermediate compartment (ERGIC) [4].
Under stress, the cellular type III PI3K-Vps34-Beclin1 complex is activated, and the type I PI3K-AKT-MTOR signalling pathway is inhibited. Inhibition of the mechanistic target of rapamycin (MTOR) allows ULK1/ATG1 and FIP200/RB1CC1/ATG17 to re-associate with dephosphorylated ATG13, causes mATG9 to redistribute from the trans-Golgi network (TGN) to the late endosome to form a cup-shaped double-layer IM, and leads to dephosphorylation and activation of the ULK1-ATG13-FIP200-ATG101 complex, resulting in the initiation of autophagy [5]. In parallel, the Beclin1-ATG14L-VPS15-VPS34 complex is activated to generate phosphatidylinositol-3-phosphate (PtdIns3P) on the endomembrane [6]. The PtdIns3P-enriched area on the endomembrane surface is termed the phagophore, which provides a platform for IM nucleation and expansion (Fig. 1) [7]. (2) Elongation and closure of the autophagic membrane Two ubiquitin-like conjugation systems are required in this process. The first is the ATG5-ATG12 ubiquitin-like protein conjugation system: ATG12 is covalently conjugated to ATG5 with the assistance of ATG7 (which encodes an E1-like enzyme) and ATG10 (which encodes an E2-like enzyme). The ATG12-ATG5 complex then binds ATG16 and multimerizes to form the ATG12-ATG5-ATG16L complex, which acts as an E3-like ligase for the microtubule-associated protein light chain 3 (LC3) [8]. Oligomers of this E3-like ligase coat the surface or tips of phagophores to initiate their elongation and curvature [8]. The second ubiquitin-like conjugation system is the ATG8-phosphatidylethanolamine (PE) system: PE is conjugated to pro-LC3 through the sequential action of ATG4, ATG7 and ATG3. Specifically, pro-LC3 is cleaved by ATG4 to produce the soluble form LC3-I (non-lipidated, 18 kDa). LC3-I is activated by ATG7, transferred to ATG3, and then modified into the autophagy-related form LC3-II (the PE-conjugated form, 16 kDa). In unstimulated cells, LC3 is mainly located in the nucleus, with only a small proportion in the cytoplasm. When autophagy is activated by external stimuli, pro-LC3 is cleaved into LC3-I, which remains soluble in the cytoplasm, while LC3-II incorporates itself into the autophagosome membrane to drive the extension [9] and closure [10] of the membrane. Thus, the net amount of LC3-II is a critical hallmark for monitoring autophagy (Fig. 1). (3) Maturation of autophagosomes and fusion with lysosomes The autophagosome undergoes maturation (including cargo packaging), is transported to lysosomes along cytoskeletal structures, and finally fuses with the lysosome, leading to the formation of the autolysosome. This process is mediated by intracellular proteins involved in vesicle transport and fusion, especially soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) superfamily members (YKT6, STX17, SNAP29, VAMP3, VAMP7, VAMP8 and VTI1B) [11-15], Rab GTPase family members (RAB7, RAB8B, RAB9, RAB11, RAB23, RAB24 and RAB33) [16-20] and tethering factors (the HOPS complex: vacuolar protein sorting 11 (VPS11), VPS16, VPS18, VPS33A, VPS39, and VPS41) [21,22]. Two cognate SNARE complexes, STX17-SNAP29-VAMP8 [13] and YKT6-SNAP29-STX7 [23], function additively in mediating fusion of the autophagosome with the lysosome.
Tethering factors trap vesicles during their intracellular transport and bring them closer to the target membrane, thereby further stabilizing the assembly of SNAREs to enhance the specificity and efficiency of vesicle fusion [24]. Through synergistic binding with Rab proteins, SNAREs and phospholipids, tethers are recruited to specific membranes [24,25]. For instance, all HOPS components promote autophagosome-lysosome fusion through interaction with STX17 [26]. In addition, Rubicon negatively regulates endosome or autophagosome maturation through VPS34, ATG14L or interactions with Rab7 and the ultraviolet radiation resistance-associated gene protein (UVRAG) [27-29]. Rab7 facilitates binding of the autophagosome to the HOPS complex on the lysosome through the pleckstrin homology domain-containing family M member 1 (PLEKHM1) [30]. UVRAG, a component of the PI3KC3 complex (VPS34, p150, Beclin1, UVRAG and ATG14L), functions as a guanine nucleotide exchange factor that catalyzes the exchange of GDP for GTP on Rab7, which activates PI3KC3 and C-VPS/HOPS (Fig. 1) [31]. (4) Autophagosome degradation and recycling In autolysosomes, engulfed proteins or organelles are eventually degraded by lysosomal enzymes, and LC3B-II is also degraded and recycled (Fig. 1).

Fig. 1 The process and regulation of autophagy. Autophagosomes are derived from the IM produced on various organelles. Under stress conditions, the type III PI3K-Vps34-Beclin1 complex is activated, and the type I PI3K-AKT-MTOR signalling pathway is inhibited. mTOR inhibition allows ULK1 and FIP200 to re-associate with dephosphorylated ATG13 and also causes mATG9 to redistribute from the TGN to the late endosome, thus forming an IM; it also leads to dephosphorylation and activation of the ULK1-ATG13-FIP200-ATG101 complex, resulting in autophagy initiation. In parallel, the Beclin1-ATG14L-Vps15-Vps34 complex is activated to generate PtdIns3P on the endomembrane. Elongation and closure of the autophagic membrane require two ubiquitin-like conjugation systems. ATG12 is covalently conjugated to ATG5 with the assistance of ATG7 (an E1-like enzyme) and ATG10 (an E2-like enzyme), then binds ATG16 and multimerizes to form the ATG12-ATG5-ATG16L complex, an E3-like ligase for LC3 whose oligomers coat the surface or tips of the phagophore to initiate its elongation and curvature. The second system is the ATG8-PE system: PE is conjugated to pro-LC3 through the sequential action of ATG4, ATG7 and ATG3 to form LC3-II, which incorporates itself into the autophagosome membrane to drive its extension and closure. The maturation of the autophagosome is mediated by SNAREs, Rab GTPase family members, and tethering factors. Two cognate SNARE complexes, STX17-SNAP29-VAMP8 and YKT6-SNAP29-STX7, mediate autophagosome-lysosome fusion. Tethering factors such as HOPS trap vesicles and bring the SNARE complex closer to the target membrane during intracellular transport. HOPS components promote autophagosome-lysosome fusion through interaction with STX17. In addition, Rubicon negatively regulates endosome or autophagosome maturation through VPS34, ATG14L or interactions with Rab7 and UVRAG, whereas Rab7 facilitates binding of the autophagosome to the HOPS complex on the lysosome through PLEKHM1. UVRAG activates PI3KC3 and C-VPS/HOPS. Finally, engulfed proteins or organelles are degraded by lysosomal enzymes in autolysosomes, and LC3B-II is also degraded and recycled.
As opposed to canonical autophagy, non-canonical autophagy bypasses the formation of double-membrane autophagosomes [50], meaning that lipidated LC3 is inserted into single membranes, especially the endolysosomal membrane, during the cellular engulfment of foreign bodies, as in LC3-associated phagocytosis (LAP) [51]. A proportion of the receptor signalling allows cargo to be recruited to the single-membrane vesicle, which leads to its labelling with lipidated LC3-PE. Mechanistically, non-canonical autophagy may bypass some steps of canonical autophagy during the formation of functional autophagosomes. For instance, it may bypass proteins that are critical for nucleation (Beclin1) and initiation (ULK1), and other proteins involved in elongation and closure (ATG7, ATG5) [52]. There exists another autophagy type: secretory autophagy, which functions in the unconventional secretion of leaderless cytosolic proteins [53]. As opposed to proteins that carry N-terminal leader peptides, leaderless cytosolic proteins cannot enter the regular secretory pathway normally operating through the Golgi apparatus and ER [54].

Fig. 2 The types of autophagy. Autophagy can be divided into selective autophagy and non-selective autophagy according to nutritional status. Selective autophagy, which is mediated by specific receptors, can be further divided into ubiquitin-dependent and ubiquitin-independent autophagy. Furthermore, selective autophagy has been well characterized and classified according to the type of targeted cargo, for instance, nucleophagy (nuclei), ferritinophagy (ferritin), pexophagy (peroxisomes), lysophagy (lysosomes), xenophagy (intracellular pathogens including bacteria, fungi and viruses), mitophagy (mitochondria), lipophagy (lipid droplets), reticulophagy (endoplasmic reticulum) and aggrephagy (protein aggregates).

Virus-mediated autophagy initiation Viral infection induces autophagy initiation Any step of the viral life cycle or exposure to viral proteins may trigger autophagy (Fig. 3). Below, we describe several representative examples to illustrate how viruses induce the initiation of autophagy.

Fig. 3 In the adsorption stage, MEV combines with CD46-Cyt-1, which is linked to the VPS34/Beclin1 complex through interaction with GOPC, promoting the formation of autophagosomes. LRV activates TLR3 and TRIF to trigger ATG5-mediated autophagy; ATG5 facilitates the production of TLR9-induced IFN-I in pDCs infected with HSV-1; TLR7 recognizes RVFV and activates antiviral autophagy through TRAF6 and MyD88. HCV-encoded NS4B triggers the initiation of autophagy by forming a complex with Rab5 and Vps34. Conversely, HSV-1-encoded ICP34.5 binds Beclin1, and v-GPCR encoded by KSHV negatively regulates autophagy. At later stages of autophagy, viruses utilize DMVs as replication or assembly sites. MHV NSP6 induces autophagy to produce DMVs. These DMVs possess double-membrane-spanning molecular pores, which allow RNAs to be exported to the cytosol. CVB3 exploits autophagy to support its replication in DMVs. Viruses block the fusion of autophagosomes and lysosomes mainly by targeting SNARE proteins, the Rab GTPase family and tethering factors, or by disrupting lysosomal function. The CVB3 protease 3C, the HPIV3 P protein and the EV-D68 viral protease target SNAP29 to inhibit autophagic flux. In addition, CVB3 proteinase 3C targets TFEB for proteolytic processing to disrupt lysosomal function.
HCV negatively and positively regulates the maturation of autophagosomes by inducing Rubicon or UVRAG, respectively. KSHV and EBV downregulate RAB7 to block autophagy. The SARS-CoV-2 ORF3a protein sequesters and interacts with the HOPS component, and ORF7a reduces fusion with lysosomes. IAV M2, by interacting with Beclin1, may prevent the fusion of autophagosomes and lysosomes. Finally, viruses exploit secretory autophagy to promote viral maturation, egress and cell-cell spreading. DENV takes advantage of autophagy-associated vesicles to promote virus transmission. PV is captured by PS lipid-enriched autophagosome-like vesicles, which are then released from cells. EBV or HCMV recruits autophagy-related protein-coupled membranes to its envelope.

At the stage of virus adsorption, autophagy is usually activated through pathogen receptors, such as CD46. After binding with measles virus (MEV), CD46-Cyt-1 (one of the two C-terminal splice variants of CD46) is linked to the VPS34/Beclin1 complex through interaction with the scaffold protein GOPC, which promotes autophagosome formation [55]. Autophagy is also induced when viruses enter cells through endocytosis and the viral envelope fuses with the endosomal membrane to release the viral genetic material. Evidence showed that various members of the paramyxoviruses and human immunodeficiency virus (HIV) trigger the formation of autophagic puncta through membrane fusion, mainly via their envelope glycoproteins [56,57]. The release of genetic material after fusion activates cytoplasmic pattern recognition receptors (PRRs) to induce autophagy, which will be described in detail in section "5. autophagy and innate immunity in virus infection". Subsequently, perturbation of the intracellular environment caused by viral replication on organelle membranes leads to upregulated autophagy. ER stress and increased ROS induced by hepatitis C virus (HCV) replication also trigger autophagy. ER stress is activated through the accumulation of viral proteins, which trigger the unfolded protein response (UPR) to restore homeostasis. HCV infection-induced ER stress inhibits the AKT-tuberous sclerosis complex (TSC) axis, and TSC then inhibits the MTOR pathway to induce autophagy [58]. Simultaneously, the UPR signalling pathway is required for promoting the lipidation of LC3 protein and the elevation of ROS in response to HCV infection through activation of the ATF6 or IRE1 pathways [59]. Moreover, HCV impairs the activation of Nrf2, leading to elevated ROS levels, which upregulates the phosphorylation level of p62 [60]. Finally, newly synthesized viral proteins directly or indirectly target autophagy genes to induce the formation of autophagosomes. For example, the HCV-encoded NS4B is capable of initiating autophagy by forming a complex with Rab5 and VPS34 [61], and the human immunity-related GTPase family M (IRGM) protein interacts with HCV NS3 and autophagy proteins (ATG5, ATG10, LC3) to promote the lipidation of LC3, thus promoting the formation of autophagosomes [62]. Viral infection suppresses autophagy initiation Given that autophagy is part of the antiviral defense mechanism, it is not surprising that viruses have evolved mechanisms that allow them to counteract this process. This is mainly achieved by viral proteins targeting ATGs, especially in herpesviruses, which are highly adapted to their hosts (Fig. 3). Herpes simplex virus type 1 (HSV-1)-encoded ICP34.5 was the first protein reported to affect autophagy by interacting with Beclin1 [63].
Similarly, the viral BCL-2 protein, as well as IRS1 and TRS1 encoded by human cytomegalovirus (HCMV), were also reported to bind Beclin1, thus impairing autophagosome formation [64,65]. A recent study showed that an α-herpesvirus Akt-like Ser/Thr kinase limits autophagy in favor of viral replication through inhibition of ULK1 and Beclin1 [66]. Subsequently, the viral G protein-coupled receptor (v-GPCR) encoded by Kaposi's sarcoma-associated herpesvirus (KSHV) was reported to negatively regulate autophagy by activating the mTOR pathway; it also mimics its cellular GPCR homolog to downregulate ATG14L expression, thus inhibiting autophagy [67,68]. Therefore, the inhibitory effects of viruses on the initial stage of autophagy can be roughly divided into two categories: activation of the type I PI3K-AKT-MTOR signalling pathway, or inhibition of the type III PI3K-VPS34-Beclin1 pathway. Autophagy hijacked by viruses At later stages of autophagy, accumulating evidence suggests that different types of viruses have developed their own unique strategies to inhibit, evade, or manipulate the process of autophagy to achieve survival and propagation (Fig. 3). Viruses utilize double-membrane vesicles as replication or assembly sites Coronavirus (CoV) infection induces the autophagy pathway and leads to the formation of double-membrane vesicles (DMVs) for viral replication; this applies to viruses such as the mouse hepatitis virus (MHV) [69], Middle East respiratory syndrome coronavirus (MERS-CoV) [70], severe acute respiratory syndrome coronavirus (SARS-CoV) [71] and SARS-CoV-2 [72,73]. Nascent viral RNAs were observed in DMVs within cells infected with MERS-CoV, SARS-CoV [70], gamma-CoV or SARS-CoV-2 [72,74] by 2D and 3D analysis of viral replication organelles, indicating that DMVs represent the central hub of viral RNA synthesis. Furthermore, a recent authoritative report identified that these DMVs possess double-membrane-spanning molecular pores, which allow RNA export to the cytosol [75]. MHV NSP6 activates autophagic flux and induces autophagosome formation from the ER, while MHV fails to induce DMV formation in mouse embryonic stem cells lacking ATG5 [76]. MHV replication levels in mouse embryonic stem cells lacking ATG5 were significantly reduced compared with cells expressing ATG5 [77]. This evidence indicates that the replication of coronaviruses is heavily dependent on autophagy-induced DMVs. However, while some evidence suggested that LC3 protein exists on DMVs and co-localizes with the MHV replication complexes (p22 and N), other studies demonstrated that non-structural proteins (nsps) from the RNA replication complex do not colocalize with LC3 [78,79]. This inconsistency may be caused by the form of LC3 examined: one study showed that endogenous LC3 co-localizes with nsps, while exogenously expressed GFP-LC3 does not [80]. In contrast, despite autophagy seemingly promoting the replication of coronaviruses, it is not necessary in primary murine embryonic fibroblasts (pMEFs), where MHV can replicate without ATG7 [80]. Furthermore, nonlipidated LC3-I covers CoV-induced DMVs, implying an autophagy-independent role for nonlipidated LC3-I [80,81]. Interestingly, the latest research showed that β-CoV hijacks lysosomes for egress, rather than the biosynthetic secretory pathway more commonly exploited by other enveloped viruses, but this process does not seem to be related to autophagy [82].
The strongest evidence is that fractionation on Nycodenz gradients shows that LC3 is not enriched in the MHV genomic RNA-containing fractions in MHV-infected cells; in contrast, a similar assay revealed that LC3 and poliovirus (PV) genomic RNA are enriched in the same fractions [82]. Picornaviruses induce DMV formation to promote their replication, but the origin of these DMVs is yet to be identified. PV was the first virus found to induce autophagosome-like membrane rearrangement [83]. Special DMVs with an autophagy-like structure were observed in PV-infected cells [83-85]. Blocking the formation of autophagosomes inhibits viral RNA synthesis and subsequent steps of the PV life cycle; however, hindering the acidification of vesicles inhibits only the final stage of viral particle maturation [86]. Virion assembly and maturation of PV may occur in various cellular compartments, so acidic mature autophagosomes may be used as assembly sites. However, there are also studies showing that PV dsRNA does not co-localize with GFP-LC3, implying that its replication may not occur in autophagosomes [87]. Electron microscopy revealed numerous DMVs in HEK293A and HeLa cells infected with coxsackievirus B3 (CVB3) [88], and usurpation of autophagosomes supports CVB3 replication [88-90]. Nevertheless, autophagy is not absolutely required; Alirezaei et al. reported that the membrane source of DMVs varies and that the autophagic membrane may be just one of their origins [91]. Other DMVs derived from cells infected with other viruses, such as HCV [92,93], human norovirus (huNoV) [94] and arterivirus [95], share similar structural characteristics with DMVs originating from a complex ER network. The nsps of these viruses serve critical functions in inducing DMV formation. DMVs contain viral nsps, RNA, and enzymatically active replicase in HCV-infected cells; therefore, they are bona fide viral replication organelles, but the role of DMVs in the replication of the other two viruses remains to be deciphered. Viruses block fusion of autophagosomes with lysosomes There is evidence that picornaviruses target the SNARE protein complex or disrupt lysosomal function to block autophagic degradation. For instance, CVB3 targets SNAP29 and the adaptor protein PLEKHM1, thus inhibiting autophagic flux by impairing the assembly of the SNARE complex through the catalytic activity of the viral protease 3C [96]. In another report, the autophagic flux of CVB3-infected cells was restored by overexpressing another component of the SNARE complex, STX17 [97]. In enterovirus D68 (EV-D68)-infected HeLa cells, accumulation of GFP-LC3 puncta and cleavage of SNAP29 by the viral protease were simultaneously detected [98]. Transcription factor EB (TFEB), which is targeted for proteolytic processing to disrupt lysosomal function and enhance viral infection, has been identified as a new target of CVB3 proteinase 3C [99]. In addition, a recent study found for the first time that incomplete autophagy can be induced during rhinovirus C (RV-C) infection, but the specific mechanism remains to be studied [100]. Similarly, human parainfluenza virus type 3 (HPIV3) is capable of inducing abnormal accumulation of autophagosomes. The P protein of HPIV3 competitively binds to the SNARE regions of SNAP29, and this binding hinders the interaction of SNAP29 with STX17, thus obstructing the fusion of autophagosomes with lysosomes and increasing the production of extracellular viral particles [101].
Unlike in the aforementioned reports, the fusion of autophagosomes with lysosomes is delayed through the regulation of Rubicon [102], UVRAG [102] and the UPR [103-105] at different stages of HCV infection. Specifically, at the early stages of HCV infection, NS4B induces Rubicon to inhibit fusion of the autophagosome with lysosomes and promotes HCV replication; at the late stage of infection, UVRAG is also upregulated and facilitates the maturation of autophagosomes and suppresses HCV replication [102]. Influenza A virus (IAV) infection prevents the late stage of autophagosome maturation. The IAV M2 protein was reported to co-localize with autophagosomes and to play an essential role in inhibiting the fusion of autophagosomes with lysosomes [106]. Other studies have shown that the interaction between M2 and Beclin1 may prevent the fusion of autophagosomes with lysosomes [28,106,107]. Physiological levels of autophagy prevent cancer progression by suppressing benign tumor growth, but some oncogenic viruses of the Herpesviridae family induce cancer by dysregulating autophagy, typically exhibiting abnormal accumulation of p62/SQSTM1 [108]. KSHV induces autophagy via its replication and transcription activator (RTA), but it downregulates RAB7 to block the final stage of autophagy [109,110]. Likewise, Epstein-Barr virus (EBV) regulates autophagy through the same strategy to establish a stable latent infection [111]. Interestingly, Pringle et al. found that mTORC1 is dispensable for KSHV protein synthesis, genome replication, and the release of infectious progeny virions, which means that the virus may have subverted the controlling role of mTOR over autophagy at this stage [112]. Finally, some recent studies showed that SARS-CoV-2 possesses a unique strategy to block autophagy. The ORF3a protein sequesters and interacts with VPS39 to block the fusion of the autophagosome/amphisome with lysosomes. Interestingly, ORF3a of SARS-CoV does not exert similar capabilities, which may contribute to the unique pathogenicity and infectivity of SARS-CoV-2 [113]. Moreover, ORF7a of SARS-CoV-2, another potent autophagy antagonist, reduces the fusion efficiency by decreasing the acidity of lysosomes [114,115]. Results from host cell network and transcriptome profiling showed that upregulated GSK3B or downregulated SNAP29 may also contribute to mitochondrial and autophagic dysfunction during SARS-CoV-2 infection [115,116]. Secretory autophagy promotes viral maturation, egress and cell-cell spreading The impact of secretory autophagy on virus maturation, egress, and cell-cell spreading has gained increasing interest in recent years. Flaviviruses, including Zika virus (ZIKV), HCV, West Nile virus (WNV) and dengue virus (DENV), benefit from the autophagy process, and they are heavily dependent on the availability of the ER membrane during their replication [117-121]. Such reliance provides a theoretical framework for secretory autophagy promoting the maturation and release of virus particles and cell-cell spreading. The most robust evidence is that the vesicles secreted by DENV-infected cells contain the viral proteins E, prM/M and NS1, and viral RNA, as well as host LC3-I and lipid droplets [122]. These autophagy-associated vesicles not only allow virus transmission but also avoid antibody neutralization [122]. Meanwhile, inhibition of autophagy impairs dengue virion maturation [122,123].
The latest research also showed that Lyn is critical for the secretion of virus particles enclosed within membranes; this process depends on SNARE complexes, ULK1, and Rab GTPases, and occurs with much faster kinetics than the conventional secretory pathway [124]. However, the secretory autophagy hijacked by HCV and ZIKV may cross-talk with the exosomal pathway, although this needs further confirmation [125][126][127][128][129]. Clusters of PV particles are captured by phosphatidylserine (PS) lipid-enriched autophagosome-like vesicles and released non-lytically from cells. Importantly, this allows multiple viral RNA molecules to be transferred collectively and efficiently into other cells [130]. In enteric viral infections, such vesicle-cloaked norovirus and rotavirus clusters remain intact during fecal-oral transmission between individuals, which allows them to be transferred to the next host [134]. Compared with animals ingesting the same amount of free viruses, this mode of transmission leads to more severe clinical symptoms [134]. In addition, Giansanti et al. recently discovered that inhibition of mTORC1 activates TFEB during enterovirus infection, up-regulating the expression of autophagy and lysosomal genes, and that TFEB activation promotes the release of virus particles in extracellular vesicles through secretory autophagy [135]. These strategies enable viruses to spread more effectively within or between hosts and to evade the direct effect of antiviral drugs to some extent. Secretory autophagy is also involved in the maturation and release of bunyaviruses and herpesviruses. Autophagy is induced during severe fever with thrombocytopenia syndrome virus (SFTSV) infection, and autophagosomes serve as SFTSV assembly platforms. SFTSV was also observed to egress from autophagic vacuoles [136]. EBV limits lysosomal degradation of viral components for its own benefit, as mentioned above. In the subsequent process, EBV was reported to hijack autophagic vesicles as assembly sites and to promote the maturation and export of viral particles [111,137,138]. Nowag et al. reported that LC3-II is present in purified virus particles, as EBV recruits ATG8/LC3-coupled membranes to its envelope [137]. Electron microscopic analysis showed that autophagic vesicles deliver viral particles to the plasma membrane. In addition, some new studies disclosed that autophagy also interferes with genome replication, morphogenesis, and progeny release of HCMV [139][140][141]. Results show that not only LC3-II but also autophagy receptors such as SQSTM1 exist in the viral envelope [140]. Indeed, SQSTM1 appears to target tegument proteins or tegument protein complexes before virion maturation is complete [140]. Nevertheless, inhibition of autophagy still enhances replication of HCMV [139,142,143]. This indicates that, despite being involved in the assembly of viral particles, autophagy still plays an antiviral role in HCMV infection.

Selective autophagy in viral infection

Virus-induced autophagic degradation was first recognized as virophagy, which effectively reduces the intracellular load of the virus, but other types of selective autophagy, which exert various effects on viruses, are also triggered (Table 1).

Virophagy

Virophagy, also called xenophagy, is an important antiviral defense mechanism that not only targets the virus or viral proteins for degradation but also promotes the host's immune responses, such as inflammation regulation and antigen recognition and presentation.
However, the molecular mechanism by which autophagy recognizes whole virus particles or viral components and targets them to autophagosomes has not been sufficiently investigated. In the model organisms Drosophila and Caenorhabditis elegans, virophagy is considered to be an inherent antiviral program [144,145]. The lack of adaptive immune interference in these organisms provides unique conditions for studying the contribution of autophagy to innate immunity, especially epithelial defense. For instance, mutations in the autophagy genes ATG18/WIPI2, ATG1/ULK1, ATG5, and ATG8A/LC3 in D. melanogaster S2 cells increase the susceptibility of Drosophila to vesicular stomatitis virus (VSV) [146]. Another study showed that during Rift Valley fever virus (RVFV) infection, TLR7-mediated activation of autophagy limits RVFV replication and reduces mortality, while knockdown of key autophagy components in C. elegans (e.g. ATG8/LGG-1 and SQSTM1/SQST-1) increases the viral load [147]. Correspondingly, activation of autophagy, whether by starvation or through the autophagy negative regulator MTOR/LET-363, reduces the pathogen load of Orsay virus [148]. These findings may provide evidence that the original function of autophagy is to eliminate and degrade harmful microorganisms that manage to enter the cytoplasm. However, it is surprising that in higher eukaryotes the function of autophagy has gradually been hijacked by viruses, which may be the result of co-evolution between viruses and eukaryotes. In other respects, virophagy prevents tissue injury and host cell death by inhibiting inflammatory cytokine production and removing intracellular microbes. Previous studies have shown that the capsid protein of Sindbis virus (SINV) is degraded through p62, and that ATG5 disruption in SINV-infected neurons decreases viral protein clearance and also results in the accumulation of cellular p62 and increased cell death [149]. In the same way, galectin-9 restricts hepatitis B virus (HBV) replication via p62-mediated selective autophagy of viral core proteins [150]. Genetic deletion of Fanconi anemia (FA) pathway genes with DNA damage repair functions blocks virophagy and heightens susceptibility to lethal viral encephalitis during SINV and HSV-1 infection [151]. The importance of non-canonical forms of virophagy in the host antiviral immune process has recently received extensive scientific attention. It was reported that the WD40 domain of ATG16L1 plays a critical role in LC3 lipidation on single membranes during non-canonical autophagy [152]. Mice lacking the WD40 domain are extraordinarily sensitive to low-pathogenicity IAV and suffer serious inflammatory pathological damage in the lungs; this is attributed to non-canonical autophagy slowing the fusion of IAV envelopes with endosomes and to down-regulation of IFN-responsive genes [153]. In addition, non-canonical autophagy also facilitates the presentation of major histocompatibility complex class II (MHC II) antigens in IAV-infected mouse dendritic cells (DCs) [152]. When autophagy levels are reduced, the otherwise beneficial enteric virus becomes pathogenic, probably because ATG16L1 in the epithelium prevents exacerbated TNFα-, IFNγ- and commensal bacteria-dependent intestinal injury after murine norovirus (MNV) infection [154]. In another study, massive amounts of lipidated LC3 were observed in ATG5-, ATG7-, or BECN1-silenced hepatocytes infected with Crimean-Congo hemorrhagic fever virus (CCHFV).
This implies the occurrence of non-canonical autophagy, but the accumulated lipidated LC3 seems to have no effect on virus replication [155]. Remarkably, a new alternative lipidation mechanism, ATG8-PS, occurring in the lysosomal compartment during non-canonical autophagy was discovered recently; differing from the canonical conjugation of the ATG8 protein to PE, ATG8-PS conjugation is a unique "molecular signature" of non-canonical autophagy [156]. It has been confirmed that the influenza virus induces non-canonical ATG8-PS autophagy, but it is still not clear how this unique modification affects virus replication [156].

Mitophagy

Mitophagy is a vital form of autophagy that specifically degrades dysfunctional or redundant mitochondria. Since the accumulation of dysfunctional mitochondria induces a series of immune responses, mitophagy limits the secretion of inflammatory cytokines and directly regulates the presentation of mitochondrial antigens and immune cell homeostasis [157]. It is known that promoting mitophagy inhibits the secretion of type I IFN, which otherwise depends on increased ROS production and mitochondrial retention [158,159]. Inhibition of mitophagy activates the Nod-like receptor protein 3 (NLRP3) inflammasome, further increasing the secretion of IL-1β/IL-18 and the expression of NF-κB [160]. Therefore, mitophagy is likely to be usurped by viruses to suppress antiviral immunity, or to be inhibited to cause mitochondrial degradation dysfunction, resulting in a strong immune response and severe damage to the host. HIV [161][162][163], herpesviruses [164], influenza viruses [165,166], EBV [167], HPIV3 [168], senecavirus A [169] and SARS-CoV-2 [170][171][172] all appear to possess this ability. Consider influenza viruses as an example: receptor-interacting protein kinase 2 knockout (Ripk2−/−) cells, deficient in NOD2 signalling, exhibit accumulation of damaged mitochondria and are susceptible to IAV. After infection, IAV activates NLRP3 and increases the levels of IL-18 and IL-1β. Therefore, NOD2-RIPK2 signal transduction protects against virally triggered immunopathology by negatively regulating NLRP3 through mitophagy [173]. Our study confirmed that the IAV M2 protein increases the formation of ROS-dependent mitochondrial antiviral signalling protein (MAVS) aggregates [174]. It antagonizes autophagy and competes with ATG5 and LC3B for binding to MAVS, which reduces the formation of LC3B-MAVS and ATG5-MAVS complexes and the degradation of MAVS aggregates, thereby elevating the MAVS-mediated innate immune response [174]. Furthermore, the high-molecular-weight aggregates of the IAV virulence protein PB1-F2 can be transferred to the inner membrane of mitochondria through the TOMM40 channel. This process reduces the membrane potential and promotes the fragmentation of mitochondria, which in turn promotes the activation of NLRP3 [175][176][177][178]. On the other hand, the PB1-F2 protein acts as an autophagy receptor and mediates the induction of complete mitophagy by simultaneously interacting with LC3B and the mitochondrial Tu elongation factor (TUFM). This interaction increases MAVS degradation and weakens the production of type I IFN [165,179]. A recent investigation showed that the PB1 protein of IAV also suppresses the innate immune response by targeting MAVS for NBR1-mediated selective autophagic degradation [180].
ER-phagy or reticulophagy

The ER is a highly dynamic network with a central role in cell metabolism and cellular organization. ER-phagy contributes to the remodelling of this network under fluctuating conditions to ensure the continuous normal functioning of the ER and minimize stress [181]. As mentioned earlier, the ER is the main membrane source of DMVs and the viral replication or assembly site for viruses such as flaviviruses, CoVs and picornaviruses. Therefore, ER-phagy exerts innate antiviral functions against this group of viruses. FAM134B is an important ER-phagy receptor, as its absence promotes ER expansion and leads to ER stress. Various lines of evidence suggest that the replication of flaviviruses and Ebola virus (EBOV) is limited by FAM134B-dependent ER-phagy [182,183]. However, the flavivirus NS3-encoded protease and its NS3 cofactor NS2B can cleave FAM134B and thus largely avoid this limitation [182]. Consistent with the above report, depletion of BPIFB3 enhances FAM134B-mediated ER-phagy and impairs flavivirus replication [184]. Another ER-phagy receptor, RTN3, has been implicated in the remodelling of ER tubules in response to pathogen infections [185]. Flaviviruses target RTN3.1A: the NS4A protein of WNV hijacks ER-phagy to remodel the host membrane and stabilize viral proteins in the ER, whereas RTN3 interacts with NS4B of HCV and abolishes NS4B self-interaction, thus negatively regulating viral replication [186,187].

Lipophagy

Autophagy also regulates lipid metabolism by modifying lipid droplets (LDs), a process termed lipophagy [188,189]. LDs are composed of a neutral lipid core surrounded by a monolayer of phospholipids. Several proteins reside on the surface of LDs, which are used to supply energy when required by cells [190]. DENV induces autophagy to regulate lipid metabolism and requires components of the autophagic machinery to achieve robust replication [191,192]. During DENV or ZIKV infection, lipophagy is activated and stored triglycerides are depleted, which increases the release of β-oxidized fatty acids in mitochondria, thereby releasing the energy required for virus replication and assembly. The LDs then become a hotbed for viral replication [192][193][194]. Adding exogenous free fatty acids to autophagy-deficient cells restores DENV replication. Furthermore, the application of etomoxir, which blocks the transport of fatty acids to the mitochondria, abolishes this process [191].

Aggrephagy

Newly synthesized proteins need to be folded properly, but folding is frequently hindered by oxidative stress, transcriptional/translational errors or mutations that cause protein misfolding [195]. Misfolded proteins form aggregates, which are then removed by aggrephagy. In the past, aggrephagy disorders were believed to be involved in the onset of many neurodegenerative diseases [196]; it has since been discovered that herpesvirus infections induce aggrephagy, a typical example of a conserved immune-evasion mechanism [197]. According to the latest reports, the murine cytomegalovirus (MCMV) M45 protein drives the aggregation and subsequent degradation of receptor-interacting protein kinase 1 (RIPK1) and the NF-κB essential modulator (NEMO) [197]. The aggregation of RIPK1 and NEMO blocks antiviral responses such as the induction of necroptosis and the activation of NF-κB, and in that way contributes to viral immune evasion and cell viability.
M45 requires an "induced protein aggregation motif" (IPAM) to induce the aggregation of its target proteins; M45 then recruits the LC3-interacting adaptor protein TBC1D5 and VPS26B, facilitating degradation of the aggregates [197]. Of note, some herpesviruses encode M45-homologous proteins containing the IPAM, such as EBV BORF2, HSV-1 ICP6, HSV-2 ICP10 and HHV-8 ORF61. Experimental results show that HSV-1 ICP6 has activity comparable to that of M45 [197].

Ferritinophagy

Ferritinophagy is a special form of autophagy that specifically targets the iron-sequestering protein ferritin to maintain cellular iron homeostasis [189]. Although iron is an important component of various enzymes and proteins, excess free iron induces oxidative stress and the formation of ROS, which accelerates cell death [198]. Ferritinophagy is regulated by the nuclear receptor coactivator 4 (NCOA4), which binds ferritin and marks it as autophagic cargo for iron recycling under low-iron conditions [199]. At the same time, the replication of various viruses is affected by the iron concentration; these include HCV [200], HSV-1 [201], bovine viral diarrhea virus (BVDV) [201], HIV-1 [202], WNV [203], HCMV [204] and HPIV2 [205]. In some studies, inhibition of ferritinophagy has been recognised as a potential mechanism for preventing cell death during viral infection. For example, the pUL38 protein of HCMV binds to USP24 to antagonize the cellular stress response, thereby preventing premature cell death [204]. During HCMV infection, the protein level of NCOA4 and ferritinophagy are regulated, and the antioxidant Tiron and the iron chelator ciclopirox olamine specifically protect cells from cell death induced by pUL38-deficient HCMV infection [204]. This shows that pUL38 antagonizes USP24 to reduce ferritinophagy, increasing cell viability and enabling successful virus infection. Similarly, the V protein of HPIV2 weakens ferritinophagy by interfering with the interaction between ferritin heavy chain 1 (FTH1) and NCOA4, allowing infected cells to avoid apoptotic cell death and facilitating effective HPIV2 replication [205].

Antiviral interferon responses, inflammation and autophagy

Viral invasion triggers the activation of specific PRRs, including: 1) Toll-like receptors (TLRs), such as TLR3 (dsRNA), TLR7 and TLR8 (ssRNA), and TLR9 (DNA with unmethylated CpG sites); 2) RIG-I-like receptors (RLRs) (viral RNAs); and 3) Nod-like receptors (NLRs) [206]. Moreover, the cytosolic DNA sensor cyclic GMP-AMP (cGAMP) synthase (cGAS) recognizes dsDNA during DNA virus infection [206]. TLR7, TLR8 and TLR9 recruit the adaptor protein myeloid differentiation primary response 88 (MYD88), while TLR3 recruits another type of adaptor, TIR-domain-containing adapter-inducing interferon-β (TRIF). Both adaptors activate NF-κB to synthesize inflammatory factors, or the interferon pathway to induce IFN production in plasmacytoid dendritic cells (pDCs) [207,208]. MYD88 also recruits interleukin-1 receptor-associated kinase (IRAK) 1 and IRAK4 [209]. IRAK1 is phosphorylated to recruit the E3 ubiquitin ligase and scaffold protein TNF receptor-associated factor 6 (TRAF6) [209]. Ubiquitinated TRAF6 induces phosphorylation of the IκB kinase (IKK) complex, activating NF-κB [210]. Cytosolic viral DNA triggers STING1 through the binding of cGAMP, resulting in the production of type I IFNs [211]. STING1 also upregulates the expression of NF-κB-dependent proinflammatory cytokines [212].
Nevertheless, ATG9a inhibits STING1 aggregation on Golgi apparatus-derived compartments to regulate the innate immune response; AMPK and ULK1 mediate the phosphorylation of STING1, which leads to the degradation of STING1, thereby limiting cytokine levels [213]. The RIG-I-MAVS-TRAF6 signal transduction axis is required for RIG-I-mediated autophagy. After activation of RIG-I, Beclin1 translocates to mitochondria and then interacts with TRAF6 [214]. MAVS binds to TRAF2, TRAF3, TRAF5, or TRAF6 through its PRR domain, which promotes the activation of the TBK1 complex [215,216]. The TBK1 complex promotes homodimerization and phosphorylation of interferon regulatory factors (IRFs) to activate them; activated IRFs then translocate to the nucleus, where they bind IFN-stimulated response elements and drive the transcription of target genes [216]. Moreover, TLR signal transduction enhances the interaction between TRIF or MyD88 and Beclin1 and reduces the binding of Beclin1 to BCL-2, which ultimately activates autophagy [217]. In contrast, tripartite motif-containing protein 32 (TRIM32) targets TRIF for TAX1BP1-mediated selective autophagic degradation, thereby negatively regulating TLR3-mediated immune responses [218]. Mitochondria exert antiviral functions through MAVS. After RIG-I recognizes RNA produced by viral infection and replication, it recruits MAVS on the mitochondria and triggers MAVS activation. MAVS activation in turn activates IRFs and NF-κB, leading to the expression of IFN and pro-inflammatory cytokines [219]. The ATG5-ATG12 complex affects the formation and stability of MAVS aggregates by directly binding to the caspase recruitment domains (CARDs) of MAVS and RIG-I, thereby negatively regulating signal transduction in the RLR pathway [175,220]. Conversely, the absence of autophagy results in ROS-dependent RLR signal transmission [159]. Therefore, autophagy may serve as a negative feedback mechanism to regulate the type I IFN response. In parallel, autophagy removes mitochondria, leading to reduced release of mitochondria-derived damage-associated molecular patterns (DAMPs) and suppression of NLRP3 inflammasome activation [221]. Rubicon, a protein that interacts with the Beclin1-VPS34 complex, inhibits the activity of the CARD9-BCL10-MALT1 (CBM) complex by binding to CARD9, thereby terminating RIG-I- or MDA5-mediated pro-inflammatory signal transduction [222] (Fig. 4). Consequently, the relationship between autophagy and the immune response during viral infection is highly complicated and must be analyzed specifically for each viral infection. Because almost all viral infections induce a complex immune response, we describe some representative viruses below. Specifically, autophagy-deficient (ATG5-null) pDCs show decreased TLR7-dependent IFN production during VSV and Sendai virus (SeV) infection [223]. Moreover, TLR7 and MyD88 signal transduction hinders RVFV replication in Drosophila and mammals by activating antiviral autophagy [147]. Leishmania RNA virus (LRV) induces type I IFN production by activating TLR3 and TRIF, which triggers ATG5-mediated, autophagy-induced degradation of the NLRP3 inflammasome in macrophages [224]. ATG5 also facilitates TLR9-induced IFN-I production in pDCs infected with HSV-1 [225].
STING1 is essential for RNA virus-triggered autophagy: the foot-and-mouth disease virus (FMDV)-induced integrated stress response originates from RIG-I, which transmits signals to STING1 and ultimately leads to degradation of STING1 itself [226]. In addition, STING-dependent autophagy induced by inflammation has been shown to limit ZIKV infection in the Drosophila brain [227,228] (Fig. 4). Conversely, HCV inhibits the host's innate immune response by inducing the autophagic degradation of TRAF6 [229]. Srikanta et al. found that HCV replication induces chronic ER stress in persistently infected cells and an autophagic response that selectively impairs type I IFN signalling [230]. During HSV-1 infection, the interaction between cGAS and Beclin1 not only halts the production of IFN by inhibiting the synthesis of cGAMP, but also prevents excessive activation of cGAS to sustain systemic immune balance by enhancing autophagy-mediated degradation of cytosolic viral DNA [231]. As mentioned in the Mitophagy section, viruses control RIG-I/MAVS-mediated production of IFN-I and activation of inflammasomes by promoting mitophagy, which will not be reiterated here. OTUD7B/Cezanne (OTU deubiquitinase 7B) acts as a negative regulator of antiviral immunity by deubiquitinating SQSTM1/p62 and promoting IRF3 degradation [232]. In addition, a newly discovered selective autophagy receptor, CCDC50, targets RIG-I/MDA5 for degradation after infection with VSV, SeV, and EMCV, thereby inhibiting IRF3/7 activation and NF-κB-mediated inflammation to enhance virus replication [41] (Fig. 4). Collectively, the interaction between autophagy and the immune response is a double-edged sword in viral infection. On one hand, the activation of TLRs, RLRs, or cGAS-STING by viral infection may induce autophagy to improve IFN production, thereby limiting virus replication; on the other hand, autophagy degrades damaged organelles and immune signal transduction proteins to impair the immune response, or in extreme cases prevents excessive immune responses to maintain the homeostasis of the intracellular environment, thus eventually promoting replication of the virus.

Autophagy and viral antigen presentation

Autophagy proteins are also involved in different aspects of antigen presentation. Antigen-presenting cells (APCs) are capable of initiating an adaptive immune response by presenting protein fragments through MHC molecules. MHC class I (MHC I) is expressed in nucleated cell types. Intracellular antigens are processed by the proteasome and transported to the ER through the transporter associated with antigen processing (TAP), where they bind to MHC I and are typically presented to CD8+ T cells [233]. MHC II and related molecules are expressed by APCs or by other cells after stimulation by IFN-γ. MHC II molecules mainly load extracellular antigens in the late endosomal MHC II compartment (MIIC), and also load a portion of endogenous antigens via a variety of intracellular pathways [234,235], which are presented to CD4+ T cells [233,236]. It is important to note that an additional mechanism for loading exogenous antigens onto MHC I molecules occurs through a process called cross-presentation [237]. After autophagosome cargo is degraded by lysosomes, the antigen can be presented via MHC II and promote the activation of CD4+ T cells [238]. In addition, autophagy mediates the internalization and degradation of MHC I molecules to limit antigen presentation [208].
In DCs deficient in the autophagy-related genes VPS34, ATG5, or ATG7, the surface expression of MHC I and the induction of CD8+ T cell activation are increased [239,240]. Recent research also showed that MHC I is targeted for degradation by an autophagy pathway involving the selective autophagy receptor NBR1 [241]. In contrast, some studies have provided evidence that autophagy enhances MHC I antigen presentation [242]. For example, HeLa cells treated with the selective PI3K inhibitor 3-methyladenine display reduced autophagy-mediated degradation of defective ribosomal products (DRiPs), accompanied by enhanced proteasomal degradation and class I antigen presentation [238,243] (Fig. 4). Early studies found that influenza matrix protein 1 (M1) is targeted by ATG8/LC3 to autophagosomes, and that autophagosomes continuously fuse with the MIIC to enhance antigen presentation to CD4+ T cell clones [244]. Interestingly, proteasome-dependent endogenous antigen processing, but not autophagy, contributes to the global influenza CD4+ response [245]. In addition, DCs lacking the ATG16L1 WD40 C-terminal domain exhibited reduced MHC II antigen presentation when infected with IAV, suggesting that non-canonical autophagy may complement the MHC II antigen presentation process [152]. Research on HIV showed that an LC3 fusion protein combining the HIV/SIV Gag antigen, targeted to autophagosomes, can effectively enhance the HIV-specific CD4+ T cell response [246]. Nevertheless, the HIV-1 envelope and ICP34.5 of HSV-1 inhibit autophagy in DCs and escape MHC-restricted presentation of their antigens [247] (Fig. 4). The effect of autophagy on MHC I antigen presentation appears to be paradoxical, as there are differences in the MHC I antigen presentation induced by specific viral infections. During IAV and lymphocytic choriomeningitis virus (LCMV) infection, a lack of ATG5 leads to an enhanced virus-specific CD8+ T cell response [239].

Fig. 4. Autophagy and the innate immune response in viral infection. The genetic material of RNA or DNA viruses is recognized by PRRs or cGAS, which facilitates virus-induced antiviral autophagy to improve IFN production, thereby limiting virus replication. Specifically, VSV and RVFV activate antiviral autophagy and increase the production of type I IFNs through TLR7 and MYD88 signal transduction, while LRV achieves this through TLR3 and TRIF, which triggers the degradation of NLRP3; HSV activates autophagy and induces interferon production through TLR9. STING-dependent autophagy induced by inflammation limits ZIKV infection. The IAV M2 protein increases the formation of MAVS aggregates. It antagonizes autophagy by reducing the formation of ATG5-MAVS and LC3B-MAVS complexes, thereby enhancing the innate immune response. Conversely, HCV inhibits the innate immune response by inducing the autophagic degradation of TRAF6. During HSV-1 infection, the interaction between cGAS and Beclin1 not only halts the production of IFN by inhibiting the synthesis of cGAMP, but also prevents excessive activation of cGAS to sustain systemic immune balance by enhancing autophagy-mediated degradation of viral DNA. APCs initiate adaptive immunity by presenting protein fragments through MHC. The M1 protein of influenza is targeted by LC3 to autophagosomes, which fuse with the MIIC to enhance antigen presentation to CD4+ T cells. LC3 combined with the HIV/SIV Gag antigen targeted to autophagosomes enhances the HIV-specific CD4+ T cell response.
The HIV-1 envelope and ICP34.5 of HSV-1 inhibit autophagy in DCs, escaping MHC-restricted presentation of their antigens. ORF8 of SARS-CoV-2 directly interacts with MHC I and mediates its down-regulation through autophagy to evade immune surveillance. HSV-1 infection induces autophagy and increases the presentation of peptides derived from HSV-1 glycoprotein B to CD8+ T cells in a manner that requires proteasome function and secretion pathways. Similarly, pUL138 of HCMV is presented by autophagy in a TAP-independent manner that involves MHC I loading in endosomal compartments.

DCs lacking VPS34 display enhanced presentation of chicken ovalbumin (OVA), IAV, and LCMV antigens to CD8+ T cells [240]. Remarkably, a recent study confirmed that open reading frame 8 (ORF8) of SARS-CoV-2 directly interacts with MHC I and mediates its down-regulation through Beclin1-mediated selective autophagy to evade immune surveillance [248]. However, after HSV-1 infects macrophages and induces autophagy, the presentation of a peptide derived from HSV-1 glycoprotein B to CD8+ T cells is increased in a manner that requires proteasome function and secretion pathways [249]. Similarly, an HCMV-encoded antigen of the type I integral membrane protein pUL138 can be presented by autophagy in a TAP-independent manner that involves MHC I loading in endosomal compartments [242] (Fig. 4). In conclusion, autophagosomes induced by viral infection carry viral components and fuse with the MIIC to provide proteins for MHC II presentation to CD4+ cells, inducing an antiviral immune response, but some viruses escape this immune process by reducing autophagy. Viral proteins and autophagy proteins mediate the direct degradation of MHC I, and autophagy deficiency leads to enhanced virus-specific CD8+ T cell responses. However, autophagy seems not to affect other routes of MHC I antigen presentation, which requires further in-depth research.

Virus-specific induction of autophagy

Recently, Dr. Beth Levine's laboratory used genome-wide siRNA screening to discover a type of virus-induced autophagy mediated by sorting nexin 5 (SNX5), which has subsequently attracted widespread attention [250]. Virus-induced autophagy differs from the general autophagy mediated by starvation or mTOR, and from the non-canonical forms of autophagy induced by bacteria or osmotic stress. Both SNX5-deficient cells and SNX5-knockout mice are more susceptible to SINV, HSV-1, WNV, CHIKV, and other viruses, but there is no difference in susceptibility to recombinant viruses that have the ability to inhibit autophagy [250]. When the virus enters the endosome, SNX5 increases the curvature of the membrane through its BAR domain to activate the autophagy-related PI3KC3-C1 kinase complex and generates the key autophagy initiation signal, PI(3)P, on the endosomal membrane, thus activating autophagy [250]. However, the mechanism by which luminal viruses stimulate the SNX5-PI3KC3 axis on the cytoplasmic surface of endosomes is still unidentified. These findings confirm the existence of SNX5-mediated activation of the viral autophagy signalling pathway, which represents a novel and important host defense mechanism. Indeed, comparative characterization of the SINV proteome from mammalian and invertebrate hosts identified SNX5 as an important host factor for alphavirus replication [251]. Asuka et al.
have previously reported co-localization of fluorescently labeled EBOV particles with SNX5 while investigating the internalization mechanism of EBOV [252]. In addition, SNX5 and PI(3)P play a key role in the formation of the organelle membrane-bound viral replicase complexes (VRCs) of tomato bushy stunt virus (TBSV) [253]. Importantly, HCMV-encoded UL35 binds to and negatively regulates SNX5, thereby regulating cellular transport pathways that affect the virus assembly process [254]. These results indicate that the endosomal membrane remodelling process affects the entry, replication, and assembly of many viruses. The regulatory relationships among SNX5, various viruses and autophagy require further research.

Conclusion

As a ubiquitous metabolic pathway in most multicellular organisms, autophagy exhibits strong defense capabilities against viral invasion, including the regulation of inflammation, promotion of antigen presentation, and degradation of viral components or particles. Nevertheless, the diversity of methods exploited by different viruses to manipulate the autophagy pathway is equally impressive. Viruses can use the autophagy pathway to interfere with the immune response or prevent cell death, and to take advantage of autophagy-related metabolites. Some viruses can even directly exploit autophagosomes for assembly, or use secretory autophagy to promote the egress of virus particles and cell-cell spreading and to avoid antibody neutralization. However, whether autophagy contributes to or inhibits viral replication is indeterminate, depending on several factors, including the type of infected cell, the virus strain, and the conditions of infection. Another key point that should not be ignored is that most experimental designs investigating the role of viruses in autophagy are carried out in cancer cell lines; given that autophagy plays a major regulatory role in cancer cell survival mechanisms, in vivo experiments may be required to make conclusions about the effect of autophagy on virus replication more convincing. In addition, viruses that are not assembled inside the cell (e.g. HPIV3, IAV and others) can induce the accumulation of autophagosomes and hinder membrane fusion between autophagosomes and lysosomes. The specific role of these accumulated autophagosomes requires further research. Importantly, some new discoveries, such as the influence of the ATG8-PS alternative lipidation mechanism on virus replication in virus-triggered non-canonical autophagy and the mechanisms by which viruses specifically induce autophagy, merit continued investigation.
2023-01-19T20:37:58.259Z
2023-01-18T00:00:00.000
{ "year": 2023, "sha1": "5bd1244da904399abb5681c9025eafbb9522f83e", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "bd1ce67a038f25da642130f3b7a54af262f2c96a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
234035258
pes2o/s2orc
v3-fos-license
Media competence of an editor as a factor of the effective promotion of scientific journals in the international information environment

The article gives insights into the concept of media competence with regard to the profession of editor of a scientific journal. Media competence is currently gaining the same relevance as other important competencies of publishers of scientific periodicals. The ability to find the required information quickly and efficiently, to work professionally with international scientometric databases, to understand the process of promoting a publication in the information environment, and to see whether a publication corresponds to the research request and its bibliographic description - these and other skills and knowledge have become crucial in organizing the work of the editorial staff of a scientific journal. At the same time, it should be recognized that the editorial staff of scientific journals acquire professional media competences directly in the course of their work, on the basis of their own more or less successful experience. Russia does not have a training system for such personnel, despite the fact that the challenges faced by publishers are ambitious. These challenges require an integrated approach, including an increase in the media competence of editors of scientific journals.

Introduction

The problem of promoting domestic scientific journals in the international information environment is becoming increasingly important. The reason for this is the need for a free exchange of information with researchers from other countries, as well as the urgent need to integrate the achievements of Russian scientists into the international scientific discourse. This promotion of research results can be achieved through the quality of the research itself and the scientific novelty and distinction of its discoveries. However, in today's hyperconnected world, much depends on the managers of the scientific process, and they require special competencies that allow these discoveries to reach their audience with maximum speed and focus in the shoreless sea of scientific information. This refers to the media competencies of editors and other specialists who publish scientific journals. The concept of "media competence" covers the set of knowledge and skills that allow a person to understand the functioning of communication tools in society, namely, to find the necessary information quickly and efficiently.

How to achieve this goal? In addition to creating favorable conditions for the work of researchers, developing international cooperation, and attracting young talented scientists to science, it is necessary to improve the quality of the journals in which Russian authors are published. This is also possible through an increase in the media competence of the staff of scientific journals. The vice-president of the Russian Academy of Sciences, Aleksey Khokhlov, aptly assesses the situation with scientific journals: "the changes are ripe, it is already impossible to put them off. In connection with the development of electronic communication media, the market of scientific journals around the world is undergoing rapid changes. Publishing a scientific result is no longer a problem; anyone can publish anything on the Internet. It is important that your result becomes known to colleagues and appreciated by them. That is, the center of gravity shifts from the publication itself to the widest possible distribution of information about this publication" [7].
Lost in translation

Monitoring of the publication activity of Russian authors in Web of Science, performed by the Russian Ministry of Education and Science in 2014, showed that one of the problems in recording publications of Russian researchers in the Web of Science database and determining the amount of funding per article is the lack of habit of entering publication identification data (author, organization, funding organization, address). When filling in the registration card, authors make technical and logical errors that the editorial staff do not notice. Incorrect presentation of the metadata of a scientific article makes the publications of Russian researchers "invisible" to the world scientific community; as a result, the relevance of Russian science decreases [8]. E.G. Grishakina gives an example of several different names for the same organization found in the database: Tomsk State Pedagogical University has more than 12 names in the Web of Science system (Tomsk State Pedagogical University, Tomsk State Pedag Univ, Tomsk State Pedagog Univ, Tomsk Pedag Univ, etc.). This means that when working with the information of the Web of Science scientific citation analytics, problems arise in searching for scientometric information on institutions of higher professional education and in identifying them in the Web of Science system, because the authors or editors of journals indicate not the official English name of the organization (as stated in its documents), but one chosen at their own discretion [8]. As for publications in Scopus, articles by Russian scientists are relatively rarely included in high-ranking journals from the first CiteScore quartile of Scopus (i.e. the top 25% of journals). Only 20% of articles by Russian scientists appear in such journals, and the situation has not changed much since 2010. The nearest competitors of Russia in Scopus are Iran and Brazil, whose shares of such publications are 35 and 40%, respectively. The reason for this is not a low level of research by Russian scientists or a lack of proper scientific weight. Often the reason research results go unnoticed is that the article is published in Russian or that its translation into English is of poor quality. In this respect, a good example is the special issue of the scientific journal "Computer Optics", which was published in 2015 in English and included translations of 22 articles from the Russian version of the journal over the preceding three years, including the highly cited review article by N.Yu. Ilyasova dedicated to methods of digital analysis of the human vascular system [9]. The "Computer Optics Selected Papers" issue became an entry point into the foreign scientific community for the journal, which allowed it to attract foreign authors to publish their work in current issues. However, for this to happen, the editors had to perform a great deal of preparatory work. The publishing unit of the editorial office was expanded. A new design layout was developed especially for the Computer Optics Selected Papers to meet modern world trends in scientific periodicals. A professional technical translator was engaged so that the level and quality of the English of the publications complied with international standards [10].
The need to publish original English-language articles in "Computer Optics" arose because, at the end of 2015, the journal was included among the 650 Russian scientific journals most visible both in Russia and abroad, published on the Web of Science platform as a separate database of Russian journals (the Russian Science Citation Index) [11]. For the editors, this was an important stage in promoting the journal in the international scientific community. According to the researcher of scientific journal periodicals and president of the Association of Scientific Editors and Publishers (ANRI) O.V. Kirillova, "English-language journals are much more likely to get high rates as compared to the journals published in the native language. Thus, the task of publishing journals in English should be a priority" [12]. At the same time, she emphasizes that "it is also important to preserve the native language as the language of scientific communication within the country and among the Russian-speaking foreign diaspora. It is the decision of the founder and publisher of the journal whether to publish only in English or publish two parallel versions" [12].

Website of a scientific journal as a key media competence

The promotion of a journal depends greatly on its positioning in the information space through its website, which should be user-friendly and modern in design. The website reflects the quality and level of the journal and its representation in Russian and international scientometric databases; its digital technologies should provide opportunities for professional interaction, and the information resource should be functional, mobile and technological, providing information value and openness of content. Transparency, technological effectiveness, accessibility and reliability of information - these are the main principles for building a scientific journal website. Such open-access resources included in the Scimago Journal & Country Rank in the subject area "Arts and Humanities" are the websites of the Russian journals "Novoe Literaturnoe Obozrenie" (Q2), "Schole" (Q1), "Zolotoordynskoe Obozrenie" (Q2), "Bylye Gody" (Q1), "Voprosy Onomastiki" (Q2), "Studia Slavica et Balcanica Petropolitana" (Q2), and "Horizon, Fenomenologiceskie Issledovania" (Q2). All of them provide their authors and readers with quick access to relevant research results of Russian and foreign scientists. The ability to organize an efficient website and maintain it continuously is one of the key media competencies of the editorial staff of a scientific journal.

The problem of recruitment of media-competent staff

The area of responsibility of any editorial office includes the organization of article editing. For this purpose, the editorial staff engages a scientific editor in charge of editing materials and manuscripts with scientific and technical content. Unlike the technical editor, who provides technical support for the materials to be published and rarely deals with the editing itself, the scientific editor checks and changes the scientific content. And this is where media competence plays a key role.
The scientific editor should be aware of the latest scientific achievements related to the journal's scope both at home and abroad; be proficient in methods of professional search and processing of information, as well as in scientific editing of manuscripts; be able to analyze articles from the point of view of their verification and the scientific logic of their presentation; have a perfect command of his or her native language and its communicative features, and know another language, preferably English; and know the standards adopted for scientific and technical terms, abbreviations and acronyms, as well as the standards for the layout of scientific articles. A scientific editor can bring the quality of an article to a level where it can compete with the articles of foreign colleagues. Unfortunately, not all editorial boards of scientific journals can provide such services to their authors. Quite often it is not easy to find enough material to fill all the pages of the journal, so everything received by the editor is published. Currently, this is a serious problem for scientific periodicals. Besides, finding a qualified scientific editor is not easy either. It is even more difficult to find a scientific journal editor who is also a specialist in scientometric databases, scientific citation technology, and Russian and international policies in the area of publishing and promoting scientific journals in the information environment. This has little in common with the traditional idea of an editor as a specialist in working with texts. At present, there is practically no professional training with such a specialization in Russia. The competencies of editors, including scientific editors, are formed as part of the Journalism course at Lomonosov Moscow State University, the Higher School of Economics, MGIMO University and other universities. As for the editors of scientific journals, which are very specific periodicals, they have to gain the necessary media competencies by trial and error as they work.

Conclusions

Currently, the publication of a scientific journal is a complex, multifunctional process that includes various technological issues, from organizing a constant supply of high-level scientific articles to the smallest details of promoting a publication in the information environment and its virtual communication with the reader. Ratings, indices, indicators - these characteristics of a scientific journal make its inner life similar to mathematical exercises in solving problems "on the quantitative relations and spatial forms of the real world", as one of the greatest mathematicians of the 20th century, Andrei Nikolayevich Kolmogorov, put it. Today, the traditional education of a journalist or an editor is not enough to address the challenges faced by Russian scientific publishers. Editors of modern scientific journals are required to have new media competencies that will enable them to advance Russian science in the international arena. Today, the media education of editors of scientific journals is as high a priority as the task of increasing the citation index of a journal.
2021-05-10T00:03:29.914Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "0ad72ed805715ac04f2b3d097526276c709b7ea3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1745/1/012027", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e7461e17f2e40c0a3718366eaf1956d6dade08db", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
14183340
pes2o/s2orc
v3-fos-license
Adsorption Removal of Environmental Hormones of Dimethyl Phthalate Using Novel Magnetic Adsorbent

The magnetic polyvinyl alcohol adsorbent M-PVAL was employed to remove and concentrate dimethyl phthalate (DMP). M-PVAL was prepared after the sequential syntheses of magnetic Fe3O4 (M) and magnetic polyvinyl acetate (M-PVAC). The saturated magnetizations of M, M-PVAC, and M-PVAL are 57.2, 26.0, and 43.2 emu g−1, respectively, with superparamagnetism. The number-average size of M-PVAL is 0.75 μm, in the micrometre range. The adsorption experiments covered three cases: (1) adjustment of the initial pH (pH0) of the solution to 5, (2) no adjustment of pH0, with values in 6.04-6.64, and (3) adjusted pH0 = 7. The corresponding saturated unimolecular-layer adsorption capacities of the Langmuir isotherm are 4.01, 5.21, and 4.22 mg g−1, respectively. The values of the heterogeneity factor of the Freundlich isotherm are 2.59, 2.19, and 2.59; all are greater than 1, revealing the favorable adsorption of the DMP/M-PVAL system. The values of the adsorption energy per mole from the Dubinin-Radushkevich isotherm are low, at 7.04, 6.48, and 7.19 kJ mol−1, respectively, indicating that the adsorption process occurs naturally. The tiny size of the adsorbent makes the adsorption take place easily, while its superparamagnetism is beneficial for the separation and recovery of the micro-sized adsorbent from the liquid by applying a magnetic field after completion of the adsorption.

Natural attenuation mechanisms, which include chemical breakdown, biodegradation, and photolysis, can decompose PAEs [25]. The half-lives vary from days to years, depending on the environment (especially the temperature) and the properties of the PAEs, such as the structure and length of their functional groups. However, natural attenuation takes a long time. Thus, many techniques have been used to treat PAE-containing water and waste, including the physical treatments of adsorption [32,33] and membrane processes such as ultrafiltration, nanofiltration, and reverse osmosis [34][35][36][37][38]; biodegradation by microorganisms [25]; and the chemical processes of base-catalyzed hydrolysis, ultraviolet (UV) radiation, ozonation, combined UV radiation/ozonation, catalytic ozonation, and combined UV/catalytic ozonation [25,26,39,40]. Among these methods, adsorption can effectively remove PAEs from solution, with the PAEs concentrated on the solid adsorbents. Adsorption has also been applied to other emerging contaminants [41,42]. The exhausted adsorbents are regenerated for reuse, after which the waste regeneration solution containing high-concentration PAEs must be treated, usually by destruction processes such as the biological and chemical treatments noted above. To improve the adsorption rate, provide adequate surface area, and allow easy recovery of the adsorbent, adsorption using novel micro/nano-sized magnetic adsorbents has been developed and employed for the removal of inorganic pollutants [43][44][45][46]. This study applied the micro-sized magnetic polyvinyl alcohol (M-PVAL) to the adsorption removal of organic DMP.

Magnetite (M) was prepared by the chemical coprecipitation method. Ferrous chloride and ferric chloride were first dissolved in distilled water at 85 °C in a nitrogen environment. This was followed by the addition of aqueous ammonia to form a magnetic suspension of precipitated Fe3O4. Oleic acid, acting as a dispersion agent, was then immediately and slowly fed into the suspension until a clear supernatant appeared. This yielded the magnetite M.
M was then used to synthesize magnetic polyvinyl acetate (M-PVAC) by suspension polymerization. For this, PVAL was dissolved in distilled water at 60 °C in a nitrogen environment to provide the background solution for polymerization. After the addition of the magnetite M, VAC, and divinyl benzene, the suspension polymerization proceeded at 70 °C for about 6 h and was then cooled to 25 °C, forming M-PVAC. After washing with deionized water, its surface was modified by alcoholysis to produce the polymer adsorbent magnetic polyvinyl alcohol, M-PVAL. In the alcoholysis, M-PVAC was suspended in methanol solution for 6 h to obtain M-PVAL. A detailed description of the above procedures can be found in Tseng et al. [45].

Isothermal Adsorption

DMP solutions of various concentrations were prepared. 0.1 g of M-PVAL adsorbent was added to each 50 mL DMP solution in a 125 mL flask. The initial pH value (pH0) of the solution was adjusted to the desired value using HCl or NaOH. Adsorptions with various initial DMP concentrations were conducted in a constant-temperature bath shaker. A blank solution without M-PVAL adsorbent was tested alongside each batch of adsorptions. After 8 h of adsorption, a magnet was attached beneath the flask to collect the M-PVAL adsorbent. The pH value of the solution was measured. The magnetically separated solution was withdrawn with a syringe and filtered through a 0.22 μm filter; 2 mL of filtrate was collected for the measurement of concentration.

The successful synthesis of M-PVAL bearing -OH functional groups is further confirmed by the absorption peak (inverse of transmittance) at 3400 cm−1 in the FTIR spectrum presented in Figure 2. The peak at 1266 cm−1 also reveals C-O bonding. The same characteristics of nonmagnetic polyvinyl alcohol were also reported by Kaczmarek et al. [47] and Majumdar and Adhikari [48]. The number-average particle size of M-PVAL is 0.75 μm, and most of the M-PVAL particles are smaller than 1 μm. The major physical characteristics of M-PVAL are summarized in Table 1. The particle porosity is 0.03, indicating that the micro-sized M-PVAL is essentially nonporous. This ensures that pore diffusion is negligible in the adsorption process.

Isothermal Adsorption of DMP

The Langmuir, Freundlich, and D-R isotherms were tested to examine the adsorption of DMP on M-PVAL for three cases: Cases 1 and 3 with initial pH0 adjusted to 5 and 7, respectively, and Case 2 without adjustment of pH0 (pH value in 6.04-6.64). At equilibrium, the pH values for the three cases with pH0 = 5, 6.04-6.64, and 7 increase to about 7.36-7.87, 6.9-8.05, and 7.42-8.39, respectively. This is consistent with the adsorption of slightly acidic DMP by the basic M-PVAL, which exhibits a pH of 9.1 and a zeta potential of −35.6 mV when dispersed in deionized (DI) water, as illustrated in Figure 4. The values of the isotherm parameters of the corresponding isotherms are listed in Table 2. The Langmuir isotherm gives unimolecular-layer capacities qL of 4.01, 5.21, and 4.22 g kg−1 for Cases 1, 2, and 3, respectively, indicating a minor effect of pH0 adjustment on the saturation capacity qL, with Case 2 (without adjustment of pH0) yielding the highest value. The adsorption equilibrium constants KL differ, with Cases 1 and 3 (with adjustment of pH0) giving higher values; the cause might be the effects of the HCl or NaOH added for adjustment on the adsorption. The balance between the decrease of qL and the increase of KL upon adjustment of pH0 results in the similar adsorption behaviors of the three cases, as depicted in Figures 5, 6, and 7.
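For reference, the three isotherms discussed here can be stated in their standard forms; the paper's own equation section is not reproduced in this excerpt, so the following LaTeX sketch uses the nomenclature of the text (qe is the equilibrium solid-phase concentration, Ce the equilibrium liquid-phase concentration, and KD the D-R constant from which the adsorption energy E is obtained):

    % Standard forms of the Langmuir, Freundlich, and Dubinin-Radushkevich
    % (D-R) isotherms, written with the parameter names used in this paper.
    \begin{align}
      q_e &= \frac{q_L K_L C_e}{1 + K_L C_e}            && \text{(Langmuir)} \\
      q_e &= K_F\, C_e^{1/n_F}                          && \text{(Freundlich)} \\
      q_e &= q_D \exp\!\left(-K_D \varepsilon^{2}\right),
      \quad \varepsilon = RT \ln\!\left(1 + \frac{1}{C_e}\right),
      \quad E = \frac{1}{\sqrt{2 K_D}}                  && \text{(D-R)}
    \end{align}

In this notation, nF > 1 indicates favorable adsorption and E measures the energy change per mole of adsorbate, which is why the small fitted E values below are read as signs that the process proceeds naturally.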
The R-squared values of the model fittings shown in Figure 8 are greater than 0.94, indicating good agreement. The results thus suggest performing the adsorption without adjustment of pH0. The heterogeneity factors nF of the Freundlich isotherm obtained for the three cases are 2.59, 2.19, and 2.59, respectively. All of these values are greater than 1, revealing that the adsorption of the DMP/M-PVAL system is favorable. The KF values are about 0.6-0.95 (mg g−1)(g m−3)−1/nF. The fittings of the Freundlich isotherm illustrated in Figure 9 are fairly satisfactory, with R-squared higher than 0.81, which, however, is not as good as that of the Langmuir isotherm. The good fit of the Langmuir isotherm for the adsorbent M-PVAL may be further justified by noting that M-PVAL is tiny, with a number-average particle size of 0.75 μm, and exhibits a fast adsorption rate with low diffusion resistance, favoring the formation of a thin monolayer. Applying the D-R isotherm to the three cases gives the saturation adsorption capacity qD together with low adsorption energies E, comparable to the values reported by Özcan et al. for adsorption of the ion-exchange form [49]. The low values of E thus support that the adsorption process of DMP/M-PVAL in this study proceeds naturally. The above results indicate that the equilibrium of DMP/M-PVAL exhibits a saturation value. Further, among the three isotherms examined, the Langmuir isotherm shows the best agreement and is thus the most appropriate for describing the adsorption equilibrium of the DMP/M-PVAL system.

Conclusions

Some major conclusions may be drawn from the adsorption removal of DMP using the superparamagnetic micro-sized adsorbent M-PVAL examined in this study, as follows. (1)-(2) The tiny, superparamagnetic M-PVAL adsorbs DMP readily and can be separated from the treated liquid for regeneration by applying an external magnetic field. (3) The Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) isotherms were tested to describe the equilibrium of DMP/M-PVAL for three cases: (1) adjusting the initial pH (pH0) to 5, (2) no adjustment of pH, with pH0 = 6.04-6.64, and (3) adjusted pH0 = 7.

Nomenclature
DBP: Di-n-butyl phthalate
DCHP: Di-cyclohexyl phthalate
DEHP: Di-(2-ethyl hexyl) phthalate
DEP: Di-ethyl phthalate
DHP: Di-hexyl phthalate
DIDP: Di-iso-decyl phthalate
DINP: Di-iso-nonyl phthalate
DMP: Dimethyl phthalate
DNOP: Di-n-octyl phthalate
PVC: Polyvinyl chloride
pH0: Initial pH value
q: Adsorbate concentration in solid phase (mg g−1 or g kg−1 or mmol g−1)
qe: Adsorbate concentration in solid phase at equilibrium (mg g−1 or g kg−1 or mmol g−1)
qL: Unimolecular layer of Langmuir isotherm (mg g−1 or g kg−1)
qD: D-R isotherm constant denoting saturation adsorption capacity (mg g−1 or mmol g−1)
R: Universal gas constant (8.314 J mol−1 K−1)
R²L, R²F, R²D: Correlation coefficients from fitting the experimental data to the Langmuir, Freundlich, and D-R isotherms, respectively
SEM: Scanning electron microscope
SQUID: Superconducting quantum interference device
T: Absolute temperature (K)
UV: Ultraviolet
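As a supplementary illustration of the parameter estimation described above, here is a minimal sketch of fitting the Langmuir isotherm by nonlinear least squares in Python. The data points and the resulting qL and KL are hypothetical placeholders for illustration only, not the study's measurements; the same pattern applies to the Freundlich and D-R forms by swapping the model function.

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(ce, q_l, k_l):
        # Langmuir isotherm: qe = qL * KL * Ce / (1 + KL * Ce)
        return q_l * k_l * ce / (1.0 + k_l * ce)

    # Hypothetical equilibrium data (illustrative): Ce in g m^-3, qe in mg g^-1
    ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    qe = np.array([1.1, 2.0, 2.9, 3.6, 4.2, 4.6])

    # Nonlinear least-squares fit; p0 gives positive, physically plausible guesses
    popt, pcov = curve_fit(langmuir, ce, qe, p0=[5.0, 0.05])
    q_l_fit, k_l_fit = popt

    # Coefficient of determination (R^2) of the fit
    residuals = qe - langmuir(ce, *popt)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - np.mean(qe))**2)

    print(f"qL = {q_l_fit:.2f} mg g^-1, KL = {k_l_fit:.4f} (g m^-3)^-1, R^2 = {r2:.3f}")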
2018-04-03T06:19:34.999Z
2015-07-16T00:00:00.000
{ "year": 2015, "sha1": "21560d306e1dc0d415a4aebe70975621834da711", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2015/903706.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8bb2fbfde8c8da594edd6cd2cf6da7c7ca557110", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
235368304
pes2o/s2orc
v3-fos-license
Electrostatic Interaction in Stochastic Electrodynamics

Assuming the charged particle to be a two-dimensional oscillator that scatters the classical background of the zero-point field, one can deduce the Coulomb force between two interacting particles. The correct deduction of the force is conditioned by the equality between the natural angular frequency of the oscillator and the angular frequency of the Zitterbewegung.

Introduction. Stochastic Electrodynamics (SED) studies the interaction of charged particles with the subquantum environment. In SED, the subquantum environment is taken to be a background of electromagnetic waves with random phases. This background is the analogue of the zero-temperature background of quantum fluctuations, the zero-point field (ZPF), in quantum electrodynamics (QED). According to the analysis made by T. H. Boyer [12,13], SED is based on the principles of Classical Electrodynamics (CED) but with different boundary conditions. Although it does not use the concepts of quanta and quantization, it reaches the same results obtained by QED: the van der Waals forces and the Casimir effect [14-18], the Planck blackbody spectrum [19-21], the third law of thermodynamics [22], rotator and oscillator specific heats [23], the diamagnetism of free particles [24,25], the thermal effect of acceleration [26-28], the stationary states of microscopic systems [29-31], the Lamb shift (radiative corrections) [32], etc.

This paper generalizes the particle model in the SED picture in order to study the electrostatic interaction of two different particles. According to the model developed in [33] and improved in this paper, the electrostatic interaction between two charged particles is pictured as an effect of the interaction between the CZPF background and the charged particles. Since the concept of the scattering cross-section is used in this paper, the intensity of the scattered radiation background is averaged in time and does not depend on the oscillation phases of the two oscillators [34]. For this reason, the model does not allow an explanation of why there are two types of electric charge. Both attractive and repulsive forces may occur between oscillators if we consider that the two oscillators can oscillate in phase or in phase opposition. The phenomenon is analogous to the forces of interaction between two oscillating bubbles, i.e., the secondary Bjerknes forces [35-37].

At the microscopic level, the electron model is characterized by a scattering cross-section that depends on the angular frequency ω [33]. If the electron is at rest, the averaged electromagnetic scattering cross-section (Appendix 3) is proportional to the square of the electrostatic radius, i.e., the Thomson cross-section [38]. In the elementary charged-particle model in Stochastic Electrodynamics [33], the particle at rest is an oscillator that scatters the CZPF background, and the scattering cross-section depends on the angular frequency ω of the scattered electromagnetic wave and the natural angular frequency ωi of the oscillator. In this model, electric charge is a measure of the ability of the microscopic system to scatter the stochastic background of radiation. This interpretation of electric charge is supported by the connection between discrete electric charge, relativity, and radiation thermodynamics, highlighted in the classical (non-quantum) context in the paper [39].
Electrostatic force in SED. We will further model a charged particle (in particular, the electron) as a two-dimensional oscillator that scatters the CZPF (Classical Zero-Point Field) background radiation. Our derivation is related to the model of the non-relativistic charged oscillator; its scattering cross-section for a plane electromagnetic wave is given in [33,38], where Γi is the radiative decay constant. According to Eq. (26) of paper [33], the elementary force of interaction between two oscillators that scatter the CZPF background follows from the cross-sections of the two oscillators, and the CZPF background is characterized by its spectral energy density [8]. Replacing Eq. (4) and Eq. (6) in Eq. (5) and integrating, one obtains the interaction force. The integral IΩ in Eq. (7) is estimated with the saddle-point method [40, Sch. 41.2; 33], which allows an analytical evaluation. We consider particles whose natural angular frequencies are approximately equal; this is the situation for the constituent particles of nucleons, the quarks [42]. Under these conditions, replacing Eq. (3) in Eq. (9) leads to the expression for the force, Eq. (10). The quarks are characterized by two types of mass [42,43]: the current/naked/bare quark mass and the constituent quark mass. Replacing Eq. (12) in Eq. (7) gives the expression for the force, Eq. (13).

The average scattering cross-section. At the microscopic level (relative to the CZPF background), the scattering cross-section of an electrically charged particle depends on the angular frequency ω. In order to obtain a scattering cross-section independent of the angular frequency, we average the expression of the cross-section given by Eq. (4). According to Appendix 3, the averaged scattering cross-section takes a closed form. Because the energy density of the CZPF radiation background diverges, it is necessary to introduce an upper limit Ω for the angular frequency, with Ω > ωi. If the averaged cross-section is to equal the Thomson cross-section, σT = (8π/3) re², then the value of the maximum angular frequency Ω is fixed, and it depends on the natural angular frequency ωi. This frequency limit must also be used to calculate numerically the integral of the interaction force between two oscillators. The difficulty that arises when calculating the integral is that this cutoff frequency is different for the two types of particles, because it depends on the natural frequency. One can address this issue by improving the particle model. We assume it necessary to consider the coupling of the interacting oscillators; this assumption is based on the analogy with the phenomenon that occurs in the interaction of several bubbles in a cluster [44]. Also, because the oscillators undergo accelerated motion, they perceive the CZPF background as a Planckian radiation background with a temperature proportional to the average acceleration [26]. The relative average acceleration of the two oscillators depends on the parameters of the two interacting particles.
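The explicit forms of Eqs. (1)-(13) did not survive the text extraction. For orientation only, the standard SED ingredients that this kind of derivation draws on can be written out; these are textbook expressions, not reconstructions of the paper's own formulas:

$$
\rho_0(\omega)\,d\omega=\frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega,
\qquad
\sigma_T=\frac{8\pi}{3}\,r_e^{2},\quad r_e=\frac{e^{2}}{4\pi\varepsilon_0 m_e c^{2}},
\qquad
\Gamma_i=\frac{e^{2}\,\omega_i^{2}}{6\pi\varepsilon_0 m c^{3}},
$$

where ρ0(ω) is the spectral energy density of the CZPF background, σT is the Thomson cross-section expressed through the classical electron radius re, and Γi is the radiative decay constant of an oscillator with natural angular frequency ωi.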
Conclusions. In the particle model proposed in this paper, the interaction of the oscillator with the homogeneous background of CZPF radiation generates, by absorption-emission (scattering), an isotropic and inhomogeneous background. This primary scattering then generates a secondary radiation responsible for the oscillators' reciprocal scattering, which can depict the electrostatic interaction. The source of the interaction energy is the energy absorbed and scattered from the CZPF background [45].

The electric charge is a measure of the capacity of fundamental particles to scatter the CZPF radiation background. This result does not surprise us, because SED is based on Maxwell's equations, and these equations also imply the Coulomb force between two charges. In our approach to the electrically charged particle, it is of interest to highlight the interpretation of electric charge as the capacity to scatter the CZPF background. Analogous to the quantum model (in QED, particles exchange energy carried by photons), particles modelled in SED exchange energy carried by electromagnetic waves. This phenomenon has an analogous treatment in the physics of the interaction between two oscillating bubbles in a liquid [35-37, 46, 47]. The acoustic analogue of electric charge is of two types because the oscillators can oscillate in phase or in phase opposition. Highlighting this property is only possible by modelling the interactions between the two oscillators in the SED formalism, without using the notion of interaction cross-section. The spin problem for the two-dimensional oscillator was solved in the paper [48]. The particle model proposed in SED is also important for highlighting an attractive interaction when the oscillators absorb some of the energy scattered from the CZPF background; we will study this phenomenon in a forthcoming paper.

The proposed model also has shortcomings. The Zitterbewegung condition obtained in the classical relativistic model of the electron [49] is not the one used in our paper; this discrepancy may exist because the motion of the oscillator is treated nonrelativistically. Also, the model does not explain the nature of the elastic field that ensures the internal oscillation. A possible solution to this problem would be to model the charged particle as an oscillating vacuum bubble [50].

In the Appendix, the integrals Ix entering the averaged cross-section are solved using formulas 2.161 and 2.103 of the book [41]. To calculate the integral in the denominator, we use the property that the cross-section has a maximum for angular frequencies close to the natural angular frequency; in that integral we make the change of variable x = ω − ωi.
2021-06-09T01:16:16.391Z
2021-03-23T00:00:00.000
{ "year": 2021, "sha1": "0ff76503c10a39b6a9f7fac7304e137dc44e177a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0ff76503c10a39b6a9f7fac7304e137dc44e177a", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
253145747
pes2o/s2orc
v3-fos-license
Research Paper: Intraoperative Awareness During Cesarean Delivery Under General Anesthesia

Background: General anesthesia (GA) for caesarean section (CS) has distinctive characteristics that may increase the risk of awareness during GA (AGA). Objectives: The aim of this study was to investigate the incidence of unintended awareness during GA (AGA) in CS. Materials & Methods: This cross-sectional descriptive study was performed in Alzahra Hospital in Rasht City, Iran. Eligible women with term pregnancy who were candidates for CS under GA were enrolled in this survey from May 2018 to August 2021. After delivery, a questionnaire including demographic data and questions related to different stages of anesthesia was completed via a face-to-face interview. The collected data were analyzed using repeated measurement, the Chi-square, Fisher exact, and t-tests in SPSS v. 21. Results: The data from 174 women were analyzed, and 12 (6.9%) experienced AGA. Among them, dreaming and feeling the manipulation of the surgical area (27.8% each) were the most commonly reported awareness states. Body mass index had a significant (P=0.034) relationship with AGA, but age (P=0.843), the level of education (P=0.714), history of anesthesia (P=0.552), 5-minute Apgar score (P=0.49), and surgery time (P=0.686) had no significant relationship with AGA. Conclusion: The incidence of AGA during CS was close to the upper limit established by the credible evidence, and a significant number of the women were not in completely acceptable conditions. Therefore, the management of GA for CS should be revised in this academic hospital.

Introduction. Awareness during general anesthesia (AGA) is defined as the postoperative recall of any event that occurred during surgery. It indicates the failure to achieve the primary goals of anesthesia and has been reported as auditory perception, pain, panic, loss of motor function, and helplessness [1]. AGA is a serious problem with long-term psychological complications such as post-traumatic stress disorder, flashbacks, a tendency to avoid future medical visits and care, sleep disorders, lack of concentration, nightmares, and irritability. Studies have demonstrated that pain during general anesthesia (GA), which is a major risk factor for long-term psychological disorders, is associated with the use of muscle relaxants (MRs). Furthermore, despite immobility, AGA occurs during surgery in patients who receive large doses of opioids without receiving MRs [2,3]. One of the main reasons for AGA is the use of neuromuscular blocking agents; light anesthesia is another major cause of awareness. In general, the overall prevalence of intraoperative awareness is 0.1%-0.2%. However, in cases of significant trauma, cardiac surgery, and cesarean section, the prevalence is higher, reaching 0.1% to 7% in CS [4]. Spinal anesthesia (SA) is considered the anesthesia of choice for CS. Preventing the fetus from being exposed to anesthetic agents, early onset, and ease of performance are some advantages of this method compared to GA [5-7]. The risk of pulmonary aspiration, failed intubation, increased blood loss, higher degrees of postoperative pain, chronic pain, increased risk of postpartum depression, and oxygen toxicity are GA-related risks in CS [8-10]. However, in emergencies or with any contraindication to SA, GA should be considered [11]. It has long been well known that CS is one of the primary surgeries at risk of AGA.
No anesthetic agent is administered as premedication, and opioids are not allowed until after delivery. Fearing fetal depression and uterine atony, anesthesiologists limit the concentrations of volatile anesthetics (VA) in CS. In addition, the risk of awareness increases with rapid-sequence induction of anesthesia and a surgical incision immediately after it [12]. When surgery begins, there may be insufficient time for the appropriate analgesic and hypnotic effects of VA to develop. Furthermore, a single dose of induction drug is rapidly redistributed. Nitrous oxide is also rapidly taken up, but it is a weak anesthetic. It should also be noted that the minimum alveolar anesthetic concentration in CS is reduced by 25%-40% [13,14]. In this regard, anesthesiologists play an essential role in balancing the appropriate depth of anesthesia against fetal drug transmission. Studies have examined various choices for GA induction in CS, with different agents and dosages; however, a first-choice, acceptable standard regimen to prevent maternal awareness while maintaining fetal safety has not been established [15]. Given the adverse consequences of intraoperative awareness, all anesthesiologists and anesthesia departments should consider strategies to limit the rate of AGA. To achieve this goal, the first step is to understand the current situation. To the best of our knowledge, similar studies are few in Iran, let alone in our province. In this study, the prevalence of unintended AGA in CS in an academic and referral hospital affiliated with Guilan University of Medical Sciences was investigated.

Highlights
• Cesarean section is associated with a high risk of intraoperative awareness, as no anesthetic agent or opioid can be administered as premedication until after delivery.
• In this study, the incidence of awareness during general anesthesia in cesarean section was close to the maximum reported range, indicating the need to revise general anesthesia management for cesarean section.

Materials and Methods. This cross-sectional descriptive study was performed in Alzahra Hospital, an academic hospital affiliated with Guilan University of Medical Sciences (GUMS), Rasht City, Iran, from May 2018 to August 2021. The inclusion criteria were women with a term pregnancy aged between 18 and 45 years, ASA (American Society of Anesthesiologists physical status classification) class I or II, candidates for non-emergent CS under GA, and without chronic drug abuse. The exclusion criteria were patients who declined to participate, uncooperative patients with psychological disorders, and a history of awareness in previous surgeries. After sufficient explanation of the study process and obtaining informed consent, eligible women were enrolled in the survey. The anesthesia and surgery protocols were the same for all women. Upon entering the operating room, standard monitoring, including electrocardiogram (ECG), heart rate (HR), pulse oximetry (SpO2), non-invasive arterial pressure, mean arterial pressure (MAP), and an end-tidal CO2 (ETCO2) gas analyzer, was applied to all patients. To reduce the fetus's exposure to anesthetic drugs, skin preparation and draping were done before induction of anesthesia. The patient was first pre-oxygenated with 100% oxygen; then propofol (2 mg/kg) and succinylcholine (1-2 mg/kg) were administered, and tracheal intubation was performed. Anesthesia was maintained with isoflurane and nitrous oxide in oxygen (N2O/O2).
After delivery, fentanyl (3 µg/kg) and midazolam (0.01 mg/kg) were administered. At the end of the surgery, to reverse the effects of MRs, neostigmine (0.04 mg/kg) and atropine (0.02 mg/kg) were injected, and the patient was transferred to the recovery ward. Hemodynamic parameters, including MAP and HR, were recorded by the responsible medical student at four time points: before induction of anesthesia (T0), immediately after intubation (T1), 20 minutes after induction (T2), and at the end of surgery (T3). After delivery, when the patients were completely awake and cooperative, a questionnaire including demographic data (age, level of education, BMI, history of anesthesia, gestational age, 1-minute Apgar score, and surgery duration) and 14 specific questions about the first memory after emergence from anesthesia, the last memory before anesthesia, and the status of AGA during anesthesia was filled out via a face-to-face interview. The questionnaire was taken from the study of Noor Mohammad Arefian [16], and its content validity index (CVI) and content validity ratio (CVR) were also calculated in our center. In this regard, 30 patients filled out the questionnaire, and 10 expert faculty members of the Obstetrics and Anesthesia Departments examined the questions. The CVR for all questions was higher than 0.72. The reliability of the questionnaire was measured by the Cronbach alpha, and the content validity coefficient was 0.79. There are different grades of AGA: grade 0 refers to unconsciousness, indicating no recall and no signs either immediately or after more than one month, while the highest grade, 5, denotes consciousness with explicit recall, distress, and pain (awareness with emotional sequelae) [17].
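The content validity ratio quoted above is presumably Lawshe's CVR, computed per item from the number of panelists rating the item essential; the paper does not state the formula, so this identification is our assumption. For a panel of N = 10 experts,

$$
\mathrm{CVR}=\frac{n_e-\tfrac{N}{2}}{\tfrac{N}{2}}=\frac{n_e-5}{5},
$$

so a CVR above 0.72 implies that at least 9 of the 10 experts rated each item essential (n_e = 9 gives CVR = 0.8).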
"Dreaming during surgery and anesthesia," as well as "feeling the manipulation of the surgical area," each by 27.8%, were the most common types of awareness state. The frequency distribution of various awareness states is shown in Table 4. No statistically significant difference was found between MAP (P=0.477) and HR values (P=0.457) at 4 time points between the two groups of with and without AGA ( Table 6). Discussion This study revealed that 12 pregnant women (6.9%) suffered from AGA during CS, which is close to the maximum reported range [4]. About the last event that was remembered before anesthesia, 15.5% of women experienced unpleasant conditions. Seventeen cases (9.8%) reported pain before being unconscious, while the anesthesia sequence should be managed such that the surgical incision be made after the appropriate depth of anesthesia. Ten people (5.7%) experienced anxiety related to surgery and anesthesia and fear of death, which emphasizes the need for proper communication between the patient and physicians involved, including gynecologists and anesthesiologists, to reduce perioperative anxiety. Regarding the first event mothers remembered immediately after emergence from anesthesia, 36.2% of them had acceptable conditions, and the rest complained of severe pain, suffocation, suctioning, and inability to move. All the mentioned distressing situations could be managed properly. For example, primi-tive pain control could be considered before emergence from anesthesia which is an effective modality [18]. Nineteen cases (10.9%) complained of suffocation, feeling cold or hot. Of them, 12 (6.9%) complained of inability to move, indicating that MRs were not completely reversed at the end of the surgery, which is a flaw in the anesthesia process. In standard anesthesia sequence, the effects of hypnotics should not wear off before MRs. Feeling the manipulation of the surgical area during anesthesia 5 (27.8) Lots of noise and overcrowding could also be easily prevented by capable and responsible management of the operating room. Two cases of face slapping and one feeling of the endotracheal tube were reported, which were not acceptable. In our study, AGA was detected based on the mother's statements, and no intraoperative monitoring device was used. However, there is strong evidence that it could not be a limitation of this study. Studies using the isolated forearm technique (IFT) reported the incidence of AGA up to 40% [11]. Interestingly, none of these cases could recall any intraoperative events. This may be because anesthetics are potent amnesiacs even at sub-anesthetic doses [19]. Fortunately, to date, there is no evidence that awareness detected solely based on these monitors and without patients' recall has significant adverse psychological consequences [20]. Supporting them, Zand et al. demonstrated that the bispectral index (BIS) was not a reliable monitor for detecting light anesthesia in CS [21]. In addition, a recent review article explained that the routine use of depth of anesthesia monitoring was not recommended [22]. Therefore, it seems that the results of this study, which were obtained by direct postoperative questioning, are reliable. However, a major concern is a difficulty of distinguishing between intra-operative events and the emergence phenomena. Baby crying, pain, and voices are related to postoperative events that the mother may report as AGA [23]. Dreaming during GA may also be due to light anesthesia or a part of emergence time. 
It should be noted that there is no consensus that only unpleasant dreams are linked to AGA [24]. Odor et al. investigated the rate of AGA in CS and found that 0.47% of the mothers had specific awareness. As in our study, the evaluation tool was a direct postoperative interview. Paralysis was reported by 5 (41.7%) and pain by 2 (16.7%); distressing memories during induction and emergence were reported in 9 cases (75%) [25]. Khanjani et al. compared the rate of AGA between propofol and isoflurane groups in CS and found that the occurrence of AGA was significantly higher when anesthesia was maintained with propofol than with isoflurane (6.7% vs. 0.97%, respectively) [26]. Yu et al. reported that, in GA for CS, the administration of dexmedetomidine provided better Apgar scores and reduced catecholamine release compared to remifentanil, and was associated with better hemodynamic stability; none of the women recalled perioperative or intraoperative events [27]. Hadavi et al. evaluated the incidence of AGA in CS at an academic hospital and reported that the anesthesia technique used for CS provided a proper depth of anesthesia, with none of their cases experiencing AGA; good Apgar scores were also reported, and they recommended future studies with higher dosages of anesthetics [28]. The discrepancy among studies can be explained by differences in methods: the measurement tools, the time of interview and evaluation, the studied populations, and the chosen anesthetics were not the same. As mentioned above, studies have reported contradictory results [29,30]. The diagnosis of AGA based on symptoms of sympathetic activation, such as hemodynamic parameters, and objective signs such as lacrimation, sweating, and movement, is not reliable enough and differs from monitoring such as electroencephalogram (EEG) changes [31], IFT [32], or BIS [33]. In studies designed around patients' statements, the time of the interview is important, and over a long period after surgery the possibility of forgetting details must be considered. Beyond monitoring, the choice of anesthetic agents differs according to the mother's medical conditions and comorbidities, as well as the availability and price of the drugs, which affects the outcomes; for example, the prevalence of AGA is significantly higher when propofol is used than with isoflurane [34]. In another study, Altıparmak et al. examined the effects of magnesium sulfate on postoperative pain and depth of anesthesia in pregnant women undergoing CS under GA and found promising effects [35]. Intraoperative awareness has long been known as one of patients' main concerns. Despite the current literature on preventing this adverse event, the issue remains complex, with several unanswered questions. Although a low incidence of AGA in CS may be inevitable, a successful legal defense is not easily mounted. Therefore, to reduce litigation, it is suggested to discuss the possibility of AGA with high-risk patients and, in case of awareness, to document it fully in the patient's medical records. In these cases, an apology may prevent the physicians from being sued; on the contrary, denial can make the situation worse [36].

Conclusion. This study revealed that the prevalence of AGA in CS was close to the highest reported by the current evidence.
It was also found that the effects of the MRs had not been completely reversed when the effects of the hypnotics ended. A noticeable percentage of our cases did not experience acceptable conditions either before induction of anesthesia or during emergence. Therefore, it seems that the sequence of GA for CS should be critically revised in our hospital.

Study limitations. The study was single-centered, and women who experienced AGA were not followed up for adverse consequences.

Compliance with ethical guidelines. All study procedures followed the ethical standards outlined in the Helsinki Declaration (2013). The study protocol was approved by the Research Ethics Committee of Guilan University of Medical Sciences and registered (Code: IR.GUMS.REC.1399.390).

Funding. This research did not receive any grant from funding agencies in the public, commercial, or non-profit sectors.
2022-10-27T15:29:58.472Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "9cde64d00b4e607091a62d3ea3b3880088f69812", "oa_license": "CCBYNC", "oa_url": "http://cjns.gums.ac.ir/files/site1/user_files_bd674b/kazemi-A-10-32-177-25da6bd.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "56c9974cc000d12b5736b820489b6dedf9702b54", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
235427955
pes2o/s2orc
v3-fos-license
308-nm Excimer Lamp vs. Combination of 308-nm Excimer Lamp and 10% Liquor Carbonis Detergens in Patients With Scalp Psoriasis: A Randomized, Single-Blinded, Controlled Trial

Background: Scalp psoriasis is usually refractory to treatment. Excimer devices have proved to be a promising therapeutic option in psoriasis, and greater efficacy of phototherapy can be achieved by the concurrent use of coal tar derivatives. Objective: We aimed to compare efficacy and safety between 308-nm excimer lamp monotherapy and a combination of the 308-nm excimer lamp and 10% liquor carbonis detergens in the treatment of scalp psoriasis. Methods: In this randomized, evaluator-blinded, prospective, comparative study, 30 patients with scalp psoriasis received either 308-nm excimer lamp monotherapy or a combination of the 308-nm excimer lamp and 10% liquor carbonis detergens twice per week until complete remission of the scalp or for a total of 30 sessions. Efficacy was evaluated by the improvement in the Psoriasis Scalp Severity Index (PSSI) score, itch score, and Scalpdex score. Results: Both treatments induced significant improvement in the PSSI score, with a greater reduction observed in the combination group. At the 30th visit, a 75% reduction in PSSI (PSSI75) was attained by 4 (28.6%) and 9 (69.2%) patients treated with monotherapy and combination therapy, respectively (P < 0.05). Conclusions: The excimer lamp is well tolerated in patients with scalp psoriasis, and liquor carbonis detergens can be used in combination to improve its efficacy.

INTRODUCTION. Psoriasis is a common dermatologic disease with a prevalence of ∼0.5-11% worldwide (1). It has several clinical presentations, which eventually develop into chronic plaque psoriasis. The scalp is commonly affected, and the frequency tends to increase with disease duration (2). Compared to other areas of the body, the scalp is relatively refractory to many treatment modalities (3). Ultraviolet (UV) radiation, both A and B, is known to be an effective treatment for psoriasis. Excimer laser and non-laser devices offer a narrow spectrum of UV light and greater localization of irradiation, allowing a lower number of treatments and a lower cumulative dose as well as sparing of uninvolved skin, thereby producing higher efficacy (4). Earlier studies found that the 308-nm excimer laser was able to achieve exceptional results in the previously recalcitrant area of the scalp (5-7). A previous study using a 308-nm excimer lamp, a non-laser device, also demonstrated a similarly favorable result in the treatment of scalp psoriasis with minimal and transient side effects (8). Compared to the excimer laser, the excimer lamp has the advantage of giving uniform irradiation over an area 50 times wider in a single exposure, at a lower cost (9). Coal tar is one of the traditional treatments for psoriasis. Besides having anti-inflammatory, antibacterial, antipruritic, and antimitotic effects, coal tar is also a photosensitizer (10). Coal tar, when used together with UVB light, provides a synergistic effect with better treatment outcomes than either treatment alone (11). The Goeckerman regimen is an example of the application of coal tar with phototherapy (12): the patient applies coal tar to the lesions for 5 h, rinses it off, and then undergoes phototherapy. The process boasts fast resolution of psoriasis, with 100% of patients attaining a 75% reduction in the Psoriasis Area and Severity Index at ∼12 weeks (13).
We hypothesized that with liquor carbonis detergens (LCD), a coal tar derivative, the effect of the excimer lamp could be enhanced to give a treatment outcome superior to the excimer lamp alone. This study aimed to compare the efficacy and safety of 308-nm excimer lamp monotherapy and the 308-nm excimer lamp in combination with 10% LCD in the treatment of scalp psoriasis.

Study Design and Patients. This is a randomized, evaluator-blinded, controlled study of the 308-nm excimer lamp as monotherapy and in combination with 10% LCD in scalp psoriasis, conducted as a pilot study. The sample size estimation was based on data from a previous 308-nm excimer lamp study in an Asian population: to achieve a power of 80% at a two-sided significance level of 5%, the minimum sample size required was 9 per group (8). Thirty patients with clinically diagnosed plaque-type scalp psoriasis were enrolled in the study. The study was approved by the Committee of Human Rights Related to Research Involving Human Subjects, Mahidol University (ID 09-60-09, thaiclinicaltrials.org identifier: TCTR20171128003) and conducted in accordance with the Declaration of Helsinki. All patients provided written informed consent. Patients aged 18 years or older who had been diagnosed with plaque-type psoriasis of the scalp involving at least 1% of the total body surface area were included. The exclusion criteria were (i) pustular or erythrodermic psoriasis; (ii) presence of severe systemic disease; (iii) a history of photosensitivity or use of photosensitizing medication; (iv) a history of skin cancer; (v) pregnancy or lactation; and (vi) allergy to any coal tar derivative. Patients' current systemic treatments without recent modification (within 6 months) were maintained throughout the study period; however, topical agents for the scalp had to be discontinued before enrollment and until the last follow-up appointment. Upon enrollment, a detailed history was obtained from each patient, with special attention to the duration of the disease, the area of involvement, and previous and current therapies. Each patient was randomly assigned, using a random number table, to receive either excimer lamp monotherapy or excimer lamp combined with 10% LCD (combination therapy).

Treatment. The 308-nm excimer lamp (Therabeam UV308, Ushio Inc., Tokyo, Japan) was used for both groups. Treatment was performed twice per week, for 30 sessions or until complete clearing of the scalp occurred. Beginning with 500 mJ/cm2 for all patients, we increased the irradiation dose by 10% at every treatment throughout the treatment period; the irradiation dose was fixed once clinically noticeable improvement was observed. If severe side effects, including blistering, burns, or severe pain, occurred, treatment was skipped until the complications subsided and then resumed at the dose that had not caused side effects. Participants who failed to attend treatment for more than 3 consecutive weeks were excluded. Patients in the combination therapy group were additionally asked to apply 10% LCD cream evenly throughout the plaques on their scalp for at least 5 h or overnight and to rinse it off before each treatment session.
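The dose-escalation rule just described (start at 500 mJ/cm² and increase by 10% per session) defines a geometric schedule. The sketch below is a hypothetical illustration of that arithmetic, not the trial's software; it omits the plateau-on-improvement and skip-on-side-effect logic, which is why its 30-session cumulative dose exceeds the means reported in the Results.

```python
# Minimal sketch of the geometric dose-escalation schedule: start at
# 500 mJ/cm^2 and multiply by 1.1 each session (no plateau/skip logic).
def excimer_schedule(start=500.0, increment=0.10, sessions=30):
    doses, cumulative = [], 0.0
    dose = start
    for _ in range(sessions):
        doses.append(dose)
        cumulative += dose
        dose *= 1.0 + increment   # 10% escalation per visit
    return doses, cumulative

doses, total = excimer_schedule()
print(f"dose at session 10: {doses[9]:.0f} mJ/cm^2")             # ~1179 mJ/cm^2
print(f"uncapped cumulative over 30 sessions: {total:.0f} mJ/cm^2")  # ~82247
```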
Assessment. At baseline, the 20th visit, the 30th visit, and 4 weeks after the last treatment, the Psoriasis Scalp Severity Index (PSSI) score was assessed by a blinded dermatologist. The PSSI score is calculated by scoring erythema, scaliness, and induration from 0 to 4 each; the sum of these symptom scores is then multiplied by the extent of scalp psoriasis involvement, scored from 0 to 6, giving a total score of 0-72 (14). Patients were asked to rate their scalp-related itch and Scalpdex scores at baseline, the 20th visit, the 30th visit, and after the last treatment. The itch score was rated on a 0-10 scale, with higher scores indicating greater severity. The Scalpdex requires patients to rate the frequency of impact for 23 scalp-related items on a 0-100 scale, with 0 = never, 25 = rarely, 50 = sometimes, 75 = often, and 100 = all the time. The items are categorized into symptoms, functioning, and emotions; higher scores indicate greater impairment of quality of life in each aspect (15).

Statistical Methods. Data were analyzed using STATA/SE version 14.2 (STATA Corp., College Station, TX). Categorical variables were expressed as percentages and analyzed using either the chi-squared test or Fisher's exact test. Continuous variables were expressed as either mean (standard deviation) for normally distributed variables or median (range) for non-normally distributed variables and were evaluated using a mixed model. A P-value of <0.05 was considered statistically significant.

Patient Characteristics. Thirty patients (13 males, 17 females; age 21-72 years, mean age 41 years) were enrolled in this study and randomly divided into two groups. There was a significant difference in mean age between the two groups; other baseline demographics were similar (Table 1). Twenty-seven patients completed the study, while 3 were excluded because of inability to adhere to the treatment frequency due to personal or unforeseen circumstances. Baseline disease characteristics after excluding the 3 patients displayed some, but not statistically significant, difference in median baseline PSSI. There was, however, a significant difference in median baseline itch score after excluding the 3 patients. These two variables (age and itch score) were therefore adjusted for in the statistical analysis. Baseline Scalpdex scores were similar between treatment groups (Table 2). The monotherapy group seemed to require a slightly higher irradiation dose than the combination therapy group, with mean effective doses of 1364.3 (±315.9) mJ/cm2 and 1165.4 (±315.9) mJ/cm2, respectively (P = 0.134). The mean cumulative dose at the 30th visit showed a similar pattern, with the monotherapy group receiving 32702.9 (±3997.3) mJ/cm2 and the combination therapy group 27779.2 (±8860.9) mJ/cm2 (P = 0.101).

Safety. Common adverse events in both groups were itch and pain after treatment, which resolved spontaneously without any treatment within 1-2 days. No patient experienced pain or discomfort during the treatment itself. Five patients (35.7%) in the monotherapy group developed blisters, compared to 1 patient (8.3%) in the combination therapy group (P = 0.170). A first-degree burn was observed in 1 patient from each group. The combination therapy group experienced severe adverse events at a lower mean dose, 680 (±28.3) mJ/cm2, than the monotherapy group, 1180 (±345.7) mJ/cm2 (P = 0.111). No patient dropped out due to adverse events.
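The blistering comparison just reported can be checked directly. The sketch below runs a two-sided Fisher exact test on the counts implied by the reported percentages (5/14 in the monotherapy group and 1/12 in the combination group; the denominators are our inference) and reproduces the quoted P = 0.170.

```python
# Two-sided Fisher exact test on the blistering counts inferred from the
# reported percentages (35.7% -> 5/14; 8.3% -> 1/12).
from scipy.stats import fisher_exact

table = [[5, 14 - 5],   # monotherapy: blisters / no blisters
         [1, 12 - 1]]   # combination therapy: blisters / no blisters
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")   # p = 0.170
```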
DISCUSSION. Immunomodulation is the key therapeutic mechanism of phototherapy in psoriasis. Phototherapy interferes with antigen presentation by Langerhans cells to T cells, which in turn affects the cytokines and adhesion molecules overexpressed in psoriatic plaques (16,17). It also downregulates Th17 expression and cytokine expression and shifts cytokine profiles from a Th1 to a Th2 response (18,19). By interfering with the synthesis of proteins and nucleic acids, UV radiation also inhibits epidermal hyperproliferation and angiogenesis (20-22). Various UVB sources with wavelengths ranging from 290 to 320 nm are commonly used in the treatment of psoriasis. Among these, excimer devices producing 308-nm radiation have been shown to be efficacious in treating psoriatic plaques. One study demonstrated the efficacy of a single high-dose 308-nm excimer laser treatment in clearing a psoriatic plaque (23). An immunohistochemical study found that psoriatic skin after excimer light therapy showed significant T-cell depletion and alterations of apoptosis-related molecules, associated with a decreased proliferation index and clinical remission (24). Excimer lamp irradiation also shows an antipruritic effect via induction of epidermal nerve degeneration (25). In this study, the excimer lamp alone was efficacious and well tolerated for scalp psoriasis, and LCD cream was shown to enhance its efficacy without a significant increase in adverse events. Although many of the patients in our study were considered refractory to their ongoing treatment, they achieved improvement after excimer lamp treatment with or without LCD cream. Additionally, the effects of both treatment regimens were maintained up to 4 weeks after the last treatment. Furthermore, the monotherapy group showed a higher number of patients with ongoing improvement. We hypothesize that UVB phototherapy can induce a long remission period by promoting apoptosis of pathologically relevant T cells, especially tissue-resident memory T cells (26,27); the higher cumulative irradiation dose used in the monotherapy group may explain the larger number of patients with continuing improvement. On the other hand, concurrent application of 10% LCD cream resulted in a lower effective irradiation dose and thus hastened the reduction of the PSSI score, which can translate into lower long-term cumulative UV exposure. It should also be pointed out that, in this study, 10% LCD cream was used only on the night before each excimer lamp treatment for its photodynamic property, so the effects might be enhanced further if the cream were applied regularly or more frequently. The main drawbacks of 10% LCD cream are its unpleasant smell, its tendency to stain fabric, and the possibility of contact dermatitis. Lastly, LCD cream and other coal tar derivatives can interfere with UV transmission and should be removed thoroughly before exposure to phototherapy (28-32). Although the details of the photodynamic activity of coal tar are still unclear and an action spectrum has been proven only in UVA and visible light (33,34), several studies, like ours, have demonstrated the effectiveness of coal tar in enhancing the therapeutic outcome of UVB treatment, suggesting room for further research to elucidate the actual mechanism and the possible UVB action spectrum of coal tar's photodynamic activity (13,35-37).
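The PSSI75/PSSI50 response criteria used in the comparisons that follow derive from the PSSI formula given in the Methods; the short sketch below works through the arithmetic with hypothetical symptom and area scores.

```python
# Minimal sketch of the PSSI arithmetic: erythema, scaliness, and induration
# are each scored 0-4; their sum is multiplied by the 0-6 area score,
# giving 0-72 overall. The input values here are hypothetical.
def pssi(erythema: int, scaliness: int, induration: int, area: int) -> int:
    assert all(0 <= s <= 4 for s in (erythema, scaliness, induration))
    assert 0 <= area <= 6
    return (erythema + scaliness + induration) * area

baseline = pssi(3, 3, 2, 5)    # (3+3+2) * 5 = 40
follow_up = pssi(1, 1, 0, 3)   # (1+1+0) * 3 = 6
reduction = 1 - follow_up / baseline
print(f"PSSI {baseline} -> {follow_up}, reduction {reduction:.0%}")
print("PSSI75 achieved" if reduction >= 0.75 else "PSSI75 not achieved")
```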
A previous study of the excimer lamp showed that 6 out of 28 patients (30%) achieved PSSI75 after only 10 sessions, and 5 patients (25%) achieved PSSI50 (8). These numbers show a favorable result for the excimer lamp, similar to this study; however, the number of treatment sessions required was much smaller than in our study, which we suspect was due to those patients' concurrent topical medication. A previous study evaluating the excimer laser found that the majority of patients (56.52%) achieved PSSI75, while 34.78% achieved PSSI50, at the 24th visit (7). These results surpass those of our monotherapy group. However, our combination therapy group attained comparable improvement at the 30th visit (15 weeks), with 69.2% and 23.1% achieving PSSI75 and PSSI50, respectively. Furthermore, it is important to note that among the 69.2% with PSSI75, 4 patients (30.7%) achieved PSSI100. As for safety, the monotherapy group showed a higher incidence of adverse events because of the higher irradiation dose used. Nevertheless, dose adjustment prevented recurrence of the adverse events. Blistering was seen mainly when the dose was higher than 1100 mJ/cm2 and readily resolved spontaneously, or with a short course of a moderate-potency topical corticosteroid, within 7-10 days. A similar case series documented patients with blistering after narrowband UVB therapy who were able to continue and complete the treatment course with a lowered irradiation dose; those blisters likewise subsided after topical corticosteroid treatment and dose adjustment (38). A few studies using excimer devices, both lamp and laser, have reported some patients with blistering (6,7,39). This suggests that blistering may simply be due to too high an irradiation dose; it also shows that patients can, in general, tolerate the excimer lamp at a much lower dose than narrowband UVB. Therefore, attention must be paid to dose adjustment and increments when using excimer devices. The safety of concurrent vitamin A derivative intake was not addressed in our study, as it was among the exclusion criteria. The limitations of this study include the limited number of patients and the relatively short follow-up period after treatment cessation. Although the assignments were randomized, there was a significant difference in baseline severity of scalp psoriasis between the two groups: the monotherapy group had more severe baseline disease, which might contribute to its lower response rate. Future studies involving larger populations and longer study durations are warranted to elucidate long-term safety and remission time.

CONCLUSION. Combination therapy with the excimer lamp and 10% LCD showed promising results, with 92.3% of patients achieving PSSI50 or better with minimal and reversible adverse events. For scalp psoriasis, the combination of excimer lamp therapy and 10% LCD is highly efficacious and well tolerated.

DATA AVAILABILITY STATEMENT. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT. The studies involving human participants were reviewed and approved by the Committee of Human Rights Related to Research Involving Human Subjects, Mahidol University. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS. PS: conceptualization. PS and PR: methodology, validation, and writing-review and editing. KT and WI: formal analysis and data curation. PS, KT, and WI: investigation. KT and PR: writing-original draft preparation.
All authors have read and agreed to the published version of the manuscript.
2021-06-15T13:22:06.580Z
2021-06-15T00:00:00.000
{ "year": 2021, "sha1": "1b81b5e508d80a2d73cded695388d437f1b06207", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2021.677948/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b81b5e508d80a2d73cded695388d437f1b06207", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257123438
pes2o/s2orc
v3-fos-license
Genetic Diversity Analysis of Banana Cultivars (Musa sp.) in Saudi Arabia Based on AFLP Markers

Banana plantation has recently been introduced to a temperate zone in the southwestern part of Saudi Arabia (Fifa, Dhamadh, and Beesh, located in Jazan province). The introduced banana cultivars were of unclear origin, without a recorded genetic background. In the current study, the genetic variability and structure of five common banana cultivars (i.e., Red, America, Indian, French, and Baladi) were analyzed using the fluorescently labeled AFLP technique. Nine different primer-pair combinations yielded 1468 loci with 88.96% polymorphism. Across all locations, high expected heterozygosity under the Hardy-Weinberg assumption was found (0.249 ± 0.003), with Dhamadh the highest, followed by Fifa and Beesh, respectively. Based on the PCoA and Structure analyses, the samples did not cluster by location but in pairs according to the cultivars' names; the Red banana cultivar, however, was found to be a hybrid between the American and Indian cultivars. Based on ΦST, 162 molecular markers (i.e., loci under selection) were detected among cultivars. Identifying those loci using NGS techniques could reveal the genetic bases and molecular mechanisms involved in domestication and the selection indicators among banana cultivars.

Introduction. Among the edible, vegetatively propagated, monocotyledonous, and herbaceous species of Musa, bananas and plantains (Musa sp.) belong to the Eumusa section of the genus Musa, family Musaceae, order Zingiberales [1]. Bananas and plantains rank fourth after cereals in importance as food sources in many developing nations [2]. One hundred two million hectares of banana farms are found in the humid tropics and subtropics of the Americas, Africa, and Asia, extending to Australia and Europe [2]. Numerous countries in Asia, Africa, Latin America, and the Pacific Islands rely on banana production for a large portion of their economies. World banana production is about 145 million tons, of which only a few million tons are exported. The banana is, without a doubt, a staple food for millions of tropical residents [2,3]. Bananas and plantains are rich in nutrients, including carbohydrates, minerals, and vitamins [4,5]. Unlike other perennial fruit crops, the banana grows faster and produces fruit throughout the year. In banana cultivation, micropropagation or suckers are used for asexual propagation [6]. Unlike their wild relatives, cultivated bananas set fruit without pollination. Remarkable collections of parthenocarpic mutants have been assembled primarily by farmers, who multiplied and distributed spontaneously occurring mutants by vegetative propagation [7]. During the initial domestication process, a relatively limited portion of the genetic diversity of wild banana species was used [8]. Knowledge of the genetic diversity and agroecological adaptations of Musa is essential to address contemporary food security needs. Clone identification and taxonomic studies have relied heavily on morphological and agronomic characteristics [9,10]. Two wild species in the section Eumusa give rise to the different genotypes: Musa acuminata (AA) and Musa balbisiana (BB). These are classified into genomic groups: AA, AB, and BB are diploids, while AAA, AAB, ABB, and BBB are triploids resulting from interspecific hybridization between M. acuminata and M. balbisiana [11].
Several unifying characteristics have been observed in morphological studies of Musa species. Hybrid cultivars and wild types exhibit complex genome structures and phylogenetic relationships that require further investigation. Banana cultivation is susceptible to pests and diseases because of its narrow genetic base [12]; further, abiotic stresses caused by global warming and climate change exacerbate this situation [13]. To boost banana productivity, identifying genotypes with high potential is crucial [14]. It is common practice in plants to use molecular markers to identify genetic differences in germplasm, identify duplicate accessions, and test for genetic fidelity [3]. The availability of molecular markers, particularly polymerase chain reaction (PCR)-based techniques, has enabled the evaluation of the genetic diversity of Musa species, for example through random amplified polymorphic DNA (RAPD), which provides helpful information and new insights into taxonomy [15], restriction fragment length polymorphism (RFLP) [16], sequence-related amplified polymorphism (SRAP) [17], and microsatellites or simple sequence repeats (SSRs) and inter-simple sequence repeats (ISSRs) [18]. The AFLP method combines the convenience of polymerase chain reaction (PCR)-based fingerprinting with the reliability of restriction-based fingerprinting [19,20]; furthermore, AFLP allows high-resolution genotyping by rapidly generating hundreds of highly reproducible DNA markers [21]. This study investigated the genetic diversity and genetic relationships of banana cultivars with unknown genomic groups, introduced into three locations in Jazan, southwest Saudi Arabia.

Sampling Site. The study was performed in three districts of the southwestern region of Jazan province in Saudi Arabia (the Fifa mountains, Dhamadh governorate, and Beesh town). Banana cultivars were collected from farms in the main banana-growing agroecological zones of the country. The agroecological zone of the southwestern regions of Saudi Arabia comprises three agroclimatic zones and ten subzones defined by geographic location and topography, which differ in rainfall and air temperature [22]. High altitudes are characterized by lower temperatures and higher rainfall (400-450 mm per year), making the vegetation more diverse [23].

Sample Collection. A total of eight Musa samples, covering the studied species and subspecies, were used in this study. Three samples of fresh banana leaves of each cultivar were collected from the field, packed in plastic bags, labeled with a site code, and kept in iceboxes until examination. To avoid sampling duplicates from the same individual, we did not sample plants located directly next to each other (Table 1).

DNA Extraction. Following the manufacturer's instructions, plant genomic DNA was extracted from leaf samples using the WizPrep™ gDNA Mini Kit (Wizbiosolutions Inc., Seongnam, Republic of Korea) with a final elution volume of 50 µL. To check the DNA quality, 5 µL of each sample was visually inspected by 1% gel electrophoresis; DNA appears as sharp bands when visualized under UV light using the InGenius3 gel documentation system (Syngene, UK). Extracted DNA was stored at −20 °C until required for PCR.

AFLP Protocol. AFLP analysis was carried out following the method of Vos et al. [24], with one modification in the labeling: primers were labeled fluorescently rather than radioactively. All primers and adaptors were synthesized by Eurofins, Hamburg, Germany (Table 2).
Samples were successfully tested with six different selective PCR combinations, following the original PCR protocol without modification. Visualization of the amplified products was performed by a commercial service using an ABI3730 DNA analyzer (Applied Biosystems, Waltham, MA, USA) with the GS500-LIZ size standard (Macrogen Fragment Analysis Service, Republic of Korea).

Data Analysis. Peak Scanner™ (Applied Biosystems, USA) and RawGeno V2 were used to automate the AFLP scoring. The band-binary criterion was applied to the AFLP data: detected bands were coded as 1 when present and 0 when absent. As the total number of samples equals 8, a single-sample frequency corresponds to 12.5%. Bands with a frequency of >87% or <13% are often uninformative or misleading when included in the analyses [25,26] and were therefore excluded from further analysis using FAMD 1.31 software [27]. The Bayesian clustering method implemented in Structure V2.2 [28] was used to investigate the genetic structure. Three independent simulations were performed for each assumed number of sub-populations K (tested K = 1 to 5), with a burn-in period of 10,000 out of 100,000 MCMC iterations and the admixture ancestry model switched on. Analysis of molecular variance (AMOVA) was performed to test population genetic differentiation using Arlequin V3.5 [29]. The significance of ΦST was tested with 10,000 permutations over the detected AFLP loci.

Fragment Analysis and Band Scoring. PCR amplification and fragment detection were successful for nine AFLP selective primer pairs. Across primer pairs, the average number of scored bands was 163 ± 35, with fragments ranging between 50 and 674 bp and an average size of 250 ± 78 bp (Supplementary Table S1). A weak but significant negative correlation was found between fragment size and frequency (r = −0.20; p < 0.001). Band scoring yielded a total of 1468 bands, of which 162 were monomorphic (88.96% polymorphism), for all primer pairs applied to the eight samples (Figure 1). After filtration, 136 loci (bands uniquely found in one sample, frequency below 13%) were removed to avoid bias, and 162 loci (found in all samples except one, frequency above 87%) were removed and considered monomorphic. A total of 1008 loci were retained for further analysis.
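The 13%/87% locus-filtering rule described above is easy to express directly. The sketch below applies it to a toy 0/1 band matrix (hypothetical data, not the FAMD workflow) and also computes, per retained locus, the expected heterozygosity He = 2p(1−p), treating the band frequency p as an allele-frequency proxy, which is a simplification for dominant AFLP markers.

```python
# Minimal sketch of AFLP locus filtering on a toy presence/absence matrix:
# drop loci with band frequency < 13% (near-private) or > 87% (near-monomorphic).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_loci = 8, 20                     # the study used 8 samples x 1468 loci
bands = (rng.random((n_samples, n_loci)) < rng.random(n_loci)).astype(int)

freq = bands.mean(axis=0)                     # per-locus band frequency
keep = (freq >= 0.13) & (freq <= 0.87)
filtered = bands[:, keep]
print(f"kept {keep.sum()} of {n_loci} loci")

# He = 2p(1-p), using band frequency p as an allele-frequency proxy
# (a simplification for dominant markers).
He = 2 * freq[keep] * (1 - freq[keep])
print(f"mean He over retained loci = {He.mean():.3f}")
```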
Genetic Polymorphism and Diversity. Polymorphic bands numbered 963, 862, and 571 for the Dhamadh, Fifa, and Beesh areas, respectively. The effective number of alleles (ne) for all bulked samples combined was 1.46 ± 0.006. The expected heterozygosity under the Hardy-Weinberg assumption (He) for all bulked samples combined was 0.249 ± 0.003. Samples from Dhamadh scored the highest ne (1.525 ± 0.01) and the highest He (0.292 ± 0.006) when FIS = 1. Samples from Beesh yielded the lowest ne (1.39 ± 0.013) and the lowest He (0.195 ± 0.006), while samples from Fifa scored 1.47 ± 0.010 for ne and 0.261 ± 0.006 for He (Table 3).

Population Structure. Genetic dissimilarity was calculated using the Jaccard coefficient; distances ranged from 0.483 to 0.812. The two samples Ban03 and Ban06 showed the highest dissimilarity values and were considered the most distant of all (Table 4). The principal coordinate analysis (PCoA) based on the Jaccard genetic dissimilarity matrix showed no clustering by location. The variation explained was 31.9% (axis F1) and 48.2% (axis F2). The analyzed samples clustered in pairs: Ban01 and Ban05, and Ban04 and Ban07, were both placed in the negative (x, y) quadrant; Ban02 and Ban08 fell in the negative-x, positive-y quadrant; Ban03 was plotted in the positive (x, y) quadrant, at a distance from Ban06 in the positive-x, negative-y quadrant (Figure 2). The average estimated Ln probability score with the lowest variance was obtained for sub-population number K = 3, indicating that the observed samples most probably originated from three sub-groups (Figure 3a). Again, the sample structure did not cluster by location. Group 1 comprises the Ban03 and Ban06 samples with 100% homogenized diversity; they are two different cultivars, the Baladi and French cultivars, respectively. Group 2 comprises the Ban04 and Ban07 samples with 100% homogenized diversity; both samples are of the same cultivar (American cultivar). Finally, group 3 comprises the Ban02 and Ban08 samples with 100% homogenized diversity; both samples are of the same cultivar (Indian cultivar). The only two samples that showed heterogeneous diversity were Ban01 and Ban05, both known as the Red banana cultivar; both samples showed the highest diversity portion from group 2, followed by group 3 and a minimal portion from group 1, reflecting a hybrid status that arose mainly between the American and Indian cultivars (Figure 3b).
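The Jaccard dissimilarities underlying the PCoA can be computed with standard tools; the following minimal sketch uses SciPy's Jaccard metric on illustrative 0/1 profiles (the random data are placeholders, not the real AFLP matrix):

import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
loci = rng.integers(0, 2, size=(8, 1008)).astype(bool)  # 8 samples x retained loci

d = squareform(pdist(loci, metric="jaccard"))  # pairwise Jaccard dissimilarity matrix
i, j = np.unravel_index(np.argmax(d), d.shape)
print(f"most distant pair: samples {i + 1} and {j + 1}, d = {d[i, j]:.3f}")

An ordination such as PCoA is then performed on this matrix; in the real data, the largest entry (0.812) corresponds to Ban03 and Ban06.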
Genetic Differentiation and Geographical Influence. Genetic differentiation was tested using AMOVA to measure the pairwise ΦST differentiation among the studied locations and cultivars. A very low ΦST of 0.07 among locations was detected, partitioned into 93% of the genetic variation originating within locations, while 7% of the genetic variation occurred among locations. On the other hand, a much higher ΦST of 0.28 among cultivars was detected, partitioned into 71.05% of the genetic variation originating within groups, while 28.95% of the genetic variation occurred among cultivars (Table 5). Based on the FST for each locus compared to the observed heterozygosity, 162 outlier loci were detected; these differentiate all cultivars and are considered loci under selection among cultivars (Supplementary Table S2). The AMOVA test then scored the maximum ΦST value of 1.00, i.e., 100% of the genetic differentiation originated from the differences between the cultivars and none from within them (Table 5).

Discussion. Recently, banana cultivation was established in Jazan province, a temperate region in the southwestern part of Saudi Arabia. In several surveys related to banana cultivation in the Middle East, Saudi Arabia was never considered (e.g., de Langhe [8]). Nowadays, however, initiatives to increase banana cultivation have been reported (e.g., a 100,000-tree banana cultivation project started by local businesswomen in Jazan [30]). The large number of imported cultivars has drawn the scientific community's attention to studying and analyzing them, especially at the genetic level.
Using DNA fingerprinting techniques combined with botanical and physiological assessments would provide a clear basis for selection procedures and biological maintenance. Applications of DNA fingerprinting to banana plants have been reported previously, whether to identify genotypes among wild species and cultivars [31,32], to estimate genetic diversity among cultivars [33] or genotypes [34], to resolve the link between genotypes and morpho-based classification [21], or to identify duplicate accessions and test genetic fidelity [3]. A high number of variable markers is attainable with the AFLP technique, allowing genome-wide analysis of genetic variability. In our study, based on nine AFLP primer pair combinations, 1468 loci were detected, compared to Opara et al. [35], who obtained 1094 loci when 12 AFLP primer pair combinations were applied to local banana cultivars in the southern region of Oman. This comparison supports the efficiency of the combinations used in our analysis, as a lower number of combinations yielded a higher number of loci. In another study, 22 AFLP primer pairs applied to 21 accessions yielded only 485 bands, with 46.18% polymorphism (Ahmad et al. [36]). Thus, choosing the primer pair combinations is critical to saving time and cost while improving marker reproducibility and robustness. Given the high read output and the extensive statistical analysis, the genetic variability of the samples was expected to be clearly reflected. The likelihood of detecting markers under selection is relatively high, either directly or because they are located near genes under selection. The mean expected heterozygosity under the Hardy-Weinberg assumption (He) was 0.249, regardless of the unequal diversity levels detected among the locations, reflecting a high diversity level among the samples. In a similar study, Wang et al. [37] detected high levels of genetic diversity for a population of the wild banana progenitor M. balbisiana, with a similar estimated He of 0.241, even though wild specimens usually show much higher diversity than cultivated ones [36]. Molecular data consisting of unlinked markers are used by the Structure software to infer population structure through model-based clustering. In the Jazan locations, a genetic structure was detected, although it proved to be shaped by the genetic background of the cultivars rather than by the sampling locations. Patterns of phylogeography have been tested for banana plants in China by Ge et al. [38], and all the genetic diversity analyses confirmed significant geographical structuring when comparing wild to cultivated banana populations. The samples of the Red banana cultivar showed mixed portions of other groups (inferred by color). It is normal to observe traces of other cultivars' genetic diversity, possibly due to the banana's ancestral origin. The heterogeneity comprises almost equal portions of the American and Indian cultivars, suggesting a clear hybridization event between them. On the other hand, the genetically related samples in group 1 were from different geographical locations and cultivars, known as the Baladi and the French cultivars. Although they originate from distant locations, both cultivars showed the same similarity membership coefficient (i.e., a value that assigns a sample to a particular group).
However, the PCoA revealed a clear genetic distance between them as distinct cultivars, demonstrating the importance of complementing the Structure analysis with PCoA to resolve the correct genetic clustering [35,36]. There is increasing interest in identifying genes or outlier loci that underlie adaptations to different factors in several species, or in finding signatures of selection and domestication [39-41]. Outlier loci are revealed when populations differ at specific markers [40,42]. In the current study, 162 outliers were detected; these loci have participated in the development and selection of banana cultivars, which indeed exhibited increased differentiation among locations along with no genetic variability detected within cultivars. Similar studies confirmed the potential of the AFLP technique to provide molecular markers that distinguish cultivars, subspecies, and wild banana accessions [21,32,35-37]. Given the abundance of noncoding DNA, some of the detected AFLP loci may simply show a signature of selection because they are linked to the actual targets of selection [43]. The genome scan of banana cultivars from Jazan in Saudi Arabia offers an opportunity to uncover molecular markers for the selected cultivars, even though the location and function of the detected outlier loci are uncertain. A reduced representation library of these cultivars' genomes can be constructed using the AFLP primers that amplified the outlier loci [44]. This perspective can help to thoroughly study those loci in nature and identify their role in the domestication of banana plants and cultivars.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cimb45030116/s1, Table S1: Bands scored by AFLP in eight banana cultivars from Jazan province; Table S2: Loci under selection analysis of the filtered AFLP dataset among the eight banana samples from Jazan, Saudi Arabia.
Secular dynamics of hierarchical quadruple systems: the case of a triple system orbited by a fourth body

We study the secular gravitational dynamics of quadruple systems consisting of a hierarchical triple system orbited by a fourth body. These systems can be decomposed into three binary systems with increasing semimajor axes, binaries A, B and C. The Hamiltonian of the system is expanded in ratios of the three binary separations, and orbit-averaged. Subsequently, we numerically solve the equations of motion. We study highly hierarchical systems that are well described by the lowest-order terms in the Hamiltonian. We find that the qualitative behaviour is determined by the ratio $\mathcal{R}_0$ of the initial Kozai-Lidov (KL) time-scales of the binary pairs AB and BC. If $\mathcal{R}_0\ll 1$, binaries AB remain coplanar if this is initially the case, and KL eccentricity oscillations in binary B are efficiently quenched. If $\mathcal{R}_0\gg 1$, binaries AB become inclined, even if initially coplanar. However, there are no induced KL eccentricity oscillations in binary A. Lastly, if $\mathcal{R}_0\sim 1$, complex KL eccentricity oscillations can occur in binary A that are coupled with the KL eccentricity oscillations in B. Even if binaries A and B are initially coplanar, the induced inclination can result in very high eccentricity oscillations in binary A. These extreme eccentricities could have significant implications for strong interactions such as tidal interactions, gravitational wave dissipation, and collisions and mergers of stars and compact objects. As an example, we apply our results to a planet+moon system orbiting a central star, which in turn is orbited by a distant and inclined stellar companion or planet, and to observed stellar quadruples.

INTRODUCTION

Hierarchical triple systems are known to be common among stellar systems. For example, a fraction of 0.076 of FG dwarf systems in the catalogue of Tokovinin (2014a,b) are triple systems (in the fractions cited here from Tokovinin 2014b, completeness arguments have been taken into account; the observed number of triple systems in the sample of Tokovinin 2014a is 290, with a total number of 4847 systems). The triple fraction is likely higher for more massive stars. In such hierarchical systems, the torque of the outer binary can induce high-amplitude oscillations in the inner binary over time-scales that can vary from suborbital time-scales to time-scales exceeding a Gyr. These oscillations, known as Kozai-Lidov (KL) cycles (Lidov 1962; Kozai 1962), have important implications for a large range of astrophysical systems, in particular when the effects of tidal friction are also considered. The implications include the production of short-period binaries and hot Jupiters (Eggleton & Kiseleva-Eggleton 2001; Wu & Murray 2003; Eggleton & Kisseleva-Eggleton 2006; Fabrycky & Tremaine 2007; Wu, Murray & Ramsahai 2007; Correia et al. 2011; Naoz, Farr & Rasio 2012; Petrovich 2014), accelerating the merging of compact objects (Blaes, Lee & Socrates 2002; Thompson 2011; Antonini & Perets 2012; Antonini, Murray & Mikkola 2014), explaining some of the blue straggler stars (Perets & Fabrycky 2009), affecting the formation of binary minor planets (Perets & Naoz 2009), possibly producing a special type of type Ia supernovae through collisions of white dwarfs (Katz & Dong 2012; Hamers et al. 2013; Prodan, Murray & Thompson 2013), and modifying the evolution of stellar binaries that would not interact in the absence of a third star (Hamers et al. 2013).
Nature does not stop at N = 3, however. Although in the catalogue of Tokovinin (2014a,b) triple systems, with a fraction of 0.58 (observed: 290 of 350), are most common among systems with hierarchies (N ≥ 3), quadruple systems also constitute a considerable fraction of hierarchical systems, i.e. a fraction of 0.32 (observed: 55 of 350). Unlike hierarchical triple systems, for which only one dynamically stable configuration is known to exist in nature, there are two different hierarchical configurations for which quadruples are known to be dynamically stable. One of these consists of two binary systems that orbit each other's barycentre, and this type of system constitutes a fraction of 0.74 (observed: 37 of 55) of the quadruple systems in the catalogue of Tokovinin (2014a,b). The long-term dynamical evolution of this configuration has been studied by Pejcha et al. (2013), who showed, by means of direct N-body simulations, that eccentricity oscillations, in particular orbital flips, can be enhanced in these systems relative to triples. The other configuration consists of a hierarchical triple system that is orbited by a fourth body (referred to as a 3+1 quadruple system in Tokovinin 2014b), and is the focus of the present paper. In this case, three binary systems can be identified, and we will assume that they are each sufficiently separated from each other such that the quadruple system is dynamically stable. A stability analysis of these systems is beyond the scope of the present work. Here, we shall always assume stability, although the stability of some systems is borne out by our direct N-body integrations. We will refer to the binaries with the smallest, intermediate, and largest semimajor axes as 'binary A', 'binary B' and 'binary C', respectively. A schematic depiction of our configuration is shown in Figure 1. Our hierarchical configuration not only applies to stellar quadruples, but also arises in other astrophysical systems. These include, but are not limited to, multi-planet, planet-moon and binary asteroid systems in single and binary star systems. Here, we study the case of a planet+moon system (binary A) that orbits a star (binary B), which in turn is orbited by a more distant and inclined object (binary C), e.g. another planet or star. We assume that the orbit of the planet+moon system is initially coplanar with respect to that of the primary star. Therefore, in the absence of a distant body, no excitation of the eccentricity of the orbit of the planet+moon system is expected. However, we will show that, in the presence of an inclined fourth body, high-amplitude eccentricity oscillations can be induced in the planet+moon system through an intricate coupling of KL cycles. The structure of this paper is as follows. In Section 2, we describe our methods. We expand the four-body Hamiltonian in terms of the separation ratios rA/rB, rB/rC and rA/rC. In order for our method to be suitable for the study of the long-term evolution of a large number of systems, we adopt the secular approximation, i.e. we average the Hamiltonian over the three binary orbits, assuming unperturbed and bound orbits for time-scales shorter than the orbital periods. Subsequently, we numerically solve the equations of motion derived from the orbit-averaged Hamiltonian. We test our method by comparing to direct N-body integrations. In Section 3, we consider the general dynamics of highly hierarchical systems, i.e. systems that are well described by the lowest-order terms in the Hamiltonian.
We discuss our results in Section 4 and apply them to planetary and stellar systems. We give our conclusions in Section 5.

Expansion of the Hamiltonian

Our method to study the long-term evolution of quadruple systems is a natural extension of the orbit-averaged techniques that have been used extensively in the past to study the evolution of hierarchical triple systems, where an expansion was made in terms of the semimajor axis ratio ain/aout, with ain and aout the semimajor axes of the inner and outer orbit, respectively (Lidov 1962; Kozai 1962; Harrington 1968, 1969; Ford, Kozinsky & Rasio 2000; Eggleton & Kiseleva-Eggleton 2001; Laskar & Boué 2010; Naoz et al. 2013a). We expand the Hamiltonian in terms of the separation ratios rA/rB, rB/rC and rA/rC, where the separation vectors rA, rB and rC are defined in terms of the position vectors of the four bodies in equation (A2). By assumption, rC ≫ rB ≫ rA, therefore these ratios are small and such an expansion is appropriate. The expansion is carried out up to and including fourth order in the separation ratios, i.e. including terms proportional to (rA/rB)^i (rB/rC)^j (rA/rC)^k, where 0 ≤ i + j + k ≤ 4. The details are given in Appendix A1. For completeness, in addition to the configuration of a triple system orbited by a fourth body that is the focus of the present paper, we have included results for the configuration of two binaries orbiting each other's barycentre in Appendix A2. At the lowest order, i + j + k = 1, the Hamiltonian consists of three terms that reduce to the binary binding energies of the three binaries A, B and C, assuming Kepler orbits. These terms therefore do not lead to secular orbital changes. At the next order, the 'quadrupole' order (i + j + k = 2), we find three terms, each of which is mathematically equivalent to the quadrupole-order Hamiltonian in the three-body problem. These three terms can be obtained from the three-body quadrupole-order Hamiltonian by appropriate substitutions of the masses and separation vectors. More specifically, the (non-averaged) three-body Hamiltonian at the quadrupole order is given by

Hquad = −[G m1 m2 m3/(m1 + m2)] (r_in²/r_out³) × (1/2)[3 (r̂in · r̂out)² − 1], (1)

where rin and rout are the separation vectors of the inner and outer binary, respectively. In our four-body system, the Hamiltonian, to the corresponding level of approximation, is given by three terms. These are each obtained from equation (1) by the following substitutions of separation vectors: (i) rin → rA and rout → rB (AB); (ii) rin → rB and rout → rC (BC); (iii) rin → rA and rout → rC (AC), and by corresponding substitutions of the masses. In the quadrupole-order approximation, there are no terms appearing in the Hamiltonian that depend on all three position vectors rA, rB and rC. This is no longer the case for the next order, the 'octupole' order (i + j + k = 3). For the latter order, we find three terms that correspond to the octupole-order terms in the three-body problem, and that can be obtained directly from the substitutions given above. In addition, we find a term that is a function of rA, rB and rC. We will refer to such terms as 'cross terms'. The explicit form of the octupole-order cross term is given in Appendix A1. In the systems of interest here, the three terms in the Hamiltonian that can be obtained by the substitutions discussed above from the corresponding terms in the three-body problem are generally dominated by the terms that apply to the binary combinations AB and BC. This is because, by assumption, rA/rB ≫ rA/rC and rB/rC ≫ rA/rC.
For the same reason, the octupole-order cross term, which is proportional to (rA/rC)²(rB/rC), is also typically small. However, in the three-body problem, the octupole-order term vanishes for equal masses in the inner binary (cf. equation A7c). This implies that the octupole-order terms associated with the binary combinations AB and BC vanish if m1 = m2 and m1 + m2 = m3, respectively, suggesting that the octupole-order cross term could be important in that case. To investigate this further, we have also derived the terms of the next higher order, i + j + k = 4 (henceforth the 'hexadecupole' order). Analogously to the lower orders, we find three terms that depend only on quantities of two of the binaries and that satisfy the substitutions given above; their general form is given in Appendix A1. These terms do not cancel if the masses in the inner binary are equal; in fact, they do not cancel for any non-trivial combination of masses m and m′. In addition to these terms, we find two terms that depend on quantities pertaining to all three binaries, i.e. two cross terms. Expressions for the latter terms are given in equations (A7f) and (A7g). Although in this work we do not include the hexadecupole-order terms in numerical integrations, we use our results for the hexadecupole-order Hamiltonian to evaluate the relative importance of the octupole-order cross term in Section 2.4.

Orbit averaging

We carried out an orbital averaging of the Hamiltonian expanded up to and including the hexadecupole order. For the cross terms, this entails averaging over three orbits. We assumed unperturbed Kepler orbits. A major advantage of the orbit-averaged approach compared to direct N-body integration is the strongly reduced computational cost, in particular if the integration time is long compared to the orbital periods, and if a large number of systems is to be integrated. Furthermore, the orbit-averaged approach is a key instrument for the (semi)analytic understanding of the long-term behaviour (i.e. on time-scales much longer than the orbital periods), as demonstrated e.g. below in Section 3.4.2. The main disadvantage is that the dynamics on suborbital time-scales are averaged over, therefore potentially missing important effects (Antonini & Perets 2012; Antonini, Murray & Mikkola 2014; Antognini et al. 2014). These effects can be particularly important in systems that are close to the limit of dynamical stability. However, for highly hierarchical systems, we do not expect these effects to be important, and these systems are the main focus of the present work. In our numerical integrations, we check for the condition under which the orbit-averaged approach likely breaks down (cf. Section 2.3). In the orbit-averaging procedure, we express the angular momenta and orientations of each of the three binaries in terms of the triad of perpendicular orbital state vectors (jk, ek, qk), where qk ≡ jk × ek and k ∈ {A, B, C}. Here, jk is a vector aligned with the angular momentum vector of the orbit, with magnitude jk = √(1 − ek²); ek is the eccentricity, or Laplace-Runge-Lenz, vector, which is aligned with the major axis and has magnitude ek, the orbital eccentricity. The orbit-averaged Hamiltonian is given in equation (A10). For further details we refer to Appendix A1.
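As an illustration of this parametrisation, the following Python sketch constructs (jk, ek, qk) from Keplerian elements using the standard orientation conventions; it is an independent sketch under those assumed conventions, not code from this work:

import numpy as np

def orbital_triad(e, i, omega, Omega):
    """Construct the orbital state vectors (j, e, q) from Keplerian
    elements (angles in radians), using standard conventions."""
    j_hat = np.array([np.sin(Omega) * np.sin(i),
                      -np.cos(Omega) * np.sin(i),
                      np.cos(i)])                       # unit angular momentum
    e_hat = np.array([np.cos(Omega) * np.cos(omega)
                      - np.sin(Omega) * np.sin(omega) * np.cos(i),
                      np.sin(Omega) * np.cos(omega)
                      + np.cos(Omega) * np.sin(omega) * np.cos(i),
                      np.sin(omega) * np.sin(i)])       # unit vector to pericentre
    j_vec = np.sqrt(1.0 - e**2) * j_hat                 # |j| = sqrt(1 - e^2)
    e_vec = e * e_hat                                   # |e| = eccentricity
    q_vec = np.cross(j_vec, e_vec)                      # q = j x e
    return j_vec, e_vec, q_vec

By construction, jk · ek = 0, |jk| = √(1 − ek²) and |ek| = ek, as required for the triad.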
To solve the equations of motion, we have developed a code written in C++, SECULARQUADRUPLE, which numerically solves the system of ordinary differential equations (ODEs) in equation (4), up to and including octupole order. Because the ODEs are generally highly stiff, we used CVODE (Cohen, Hindmarsh & Dubois 1996), a library specifically designed to solve stiff ODEs. Our code is interfaced within the AMUSE framework (Portegies Zwart et al. 2013; Pelupessy et al. 2013). This allows for convenient comparison with direct N-body integration, i.e. without using the secular approximation, using any of the many N-body codes available in AMUSE. In addition, this facilitates the inclusion of effects modelled by other codes, such as stellar and binary evolution. A test of the code for a hierarchical triple system is given in Appendix B. In the integrations with SECULARQUADRUPLE below, we included terms up to and including octupole order, but without the octupole-order cross terms. Here, we consider highly hierarchical systems, and it is shown in Section 2.4 that for these systems the octupole cross term does not dominate. Furthermore, neglect of this term is justified by the agreement with the N-body simulations, as shown in Section 2.5. As mentioned above, situations can arise in which the orbit-averaged approximation breaks down. In particular, this can occur when the time-scale for changes of the angular momentum jk is smaller than the orbital time-scale (Antonini & Perets 2012; Antonini, Murray & Mikkola 2014). In SECULARQUADRUPLE, it is checked whether, at any time in the integration, any of the three binaries A, B or C satisfies this condition. This is implemented by means of a root-finding procedure: the integration is stopped whenever tj,k ≤ Porb,k, where Porb,k is the orbital period of binary k and tj,k ≡ jk/|djk/dt| is the time-scale for the angular momentum of binary k to change by order itself. Although in SECULARQUADRUPLE the equations of motion are solved in terms of orbital vectors for numerical reasons, below we present our results in terms of the (generally easier to interpret) orbital elements (ek, ik, ωk, Ωk), where ik is the orbital inclination, ωk is the argument of pericentre and Ωk is the longitude of the ascending node. The latter quantities are defined with respect to a fixed reference plane. It is often useful to consider mutual inclinations ikl between two orbits, rather than the individual inclinations ik and il. They are related according to

cos(ikl) = cos(ik) cos(il) + sin(ik) sin(il) cos(Ωk − Ωl). (6)

We note that in the hierarchical three-body problem, it is customary to define the orbital elements with respect to the invariable plane, i.e. the plane perpendicular to the total angular momentum vector (e.g. Naoz et al. 2013a). This implies Ωk − Ωl = π, and therefore the simple relation ikl = ik + il can be applied. This is no longer the case in the hierarchical four-body problem, and therefore one must resort to the more general equation (6). Relativistic effects are also implemented in our algorithm. An important effect is relativistic precession of the argument of pericentre, associated with the Schwarzschild metric (Schwarzschild 1916). The associated time-scale for precession by 2π in binary k, to the lowest post-Newtonian (PN) order, is given by

t1PN,k = (Porb,k/3) (1 − ek²) (ak/rg,k), (7)

where rg,k ≡ G mtot,k/c², with mtot,A = m1 + m2, mtot,B = m1 + m2 + m3 and mtot,C = m1 + m2 + m3 + m4, is the gravitational radius. To take relativistic precession into account, the corresponding precession terms (equation 8) are added to the right-hand sides in equation (4b).
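Equations (6) and (7) are straightforward to evaluate; a minimal Python sketch (with equation 7 in the reconstructed form above, and the constant G M⊙/c² ≈ 9.87 × 10⁻⁹ AU an assumed input) is:

import numpy as np

def mutual_inclination(i_k, i_l, Omega_k, Omega_l):
    """Mutual inclination i_kl from equation (6); all angles in radians."""
    cos_ikl = (np.cos(i_k) * np.cos(i_l)
               + np.sin(i_k) * np.sin(i_l) * np.cos(Omega_k - Omega_l))
    return np.arccos(np.clip(cos_ikl, -1.0, 1.0))

def t_1PN_yr(a_au, e, m_tot_msun):
    """1PN apsidal-precession period (equation 7): (P_orb/3)(1 - e^2) a/r_g,
    with r_g = G m_tot/c^2; units are AU, solar masses and years."""
    RG_AU_PER_MSUN = 9.87e-9                      # G Msun / c^2 in AU
    P_orb = np.sqrt(a_au**3 / m_tot_msun)         # Kepler period in years
    return (P_orb / 3.0) * (1.0 - e**2) * a_au / (RG_AU_PER_MSUN * m_tot_msun)

As a sanity check, for a test particle on Mercury's orbit (a ≈ 0.387 AU, e ≈ 0.206, mtot = 1 M⊙) this gives a precession period of ≈ 3 × 10⁶ yr, i.e. the familiar ≈ 43 arcsec per century.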
Here, we neglect any possible additional 'interaction terms' between different binaries in the PN expansion that have been derived previously in the hierarchical three-body problem (Naoz et al. 2013b; Will 2014a,b), and that could also apply, in some form, to the configuration considered here.

The importance of the octupole-order cross terms

In Section 2.1 we derived a cross term in the Hamiltonian at octupole order. Here, we investigate further the importance of this term with respect to other terms at the octupole and the next higher, hexadecupole, order. Long-term effects of the cross term can only be investigated by carrying out numerical integrations in time. However, a proxy for the short-term importance of the cross term is the ratio r of the absolute value of the orbit-averaged cross term to the absolute value of all other orbit-averaged terms at octupole and hexadecupole order. In principle, r can be maximised with respect to the parameters defining the properties and state of the quadruple system, yielding the largest possible contribution of the cross term. However, the dimensionality (25) of this problem is very large, and this makes it computationally very difficult to find the absolute maximum. Here, we simplify the problem by restricting the parameter space. In particular, we set x ≡ aB/aA = aC/aB, thereby reducing the dependence on the three semimajor axes to a single quantity. For given masses and eccentricities, we subsequently randomly sample the six unit vectors êk and ĵk with the orthogonality constraint êk · ĵk = 0 (see the sketch at the end of this subsection). We compute r for 20 such realisations at each x, and subsequently compute the means and standard deviations.

Figure 2. Note that r depends on the masses only through their ratios; hence the mass unit is arbitrary.

In Figure 2, we show the resulting mean values (solid lines) and the mean values offset by the standard deviations (dashed lines) of r as a function of x. We include four different combinations of masses and eccentricities, which are enumerated in Table 1. The minimum value of x for dynamical stability of the system is estimated by computing the critical semimajor axis ratio for stability of the AB and BC systems separately, using the criterion of Mardling & Aarseth (2001). The latter two ratios are indicated for each combination of parameters in Figure 2 with vertical dashed lines. Regardless of our choice of parameters, r is typically small, in the sense that for values of x large enough for dynamical stability, r ≲ 10⁻². For highly hierarchical systems, i.e. x ≳ 100, r ≲ 10⁻⁴. This indicates that typically the cross terms do not dominate the dynamics, at least for the short-term evolution.
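A pair of unit vectors satisfying the orthogonality constraint can be sampled by drawing an isotropic ĵ and projecting a second random vector onto the plane perpendicular to it; the following minimal sketch is illustrative and not the code used for Figure 2:

import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal_pair():
    """Sample unit vectors (e_hat, j_hat) with e_hat . j_hat = 0."""
    j_hat = rng.normal(size=3)
    j_hat /= np.linalg.norm(j_hat)
    v = rng.normal(size=3)
    e_hat = v - np.dot(v, j_hat) * j_hat   # project out the j component
    e_hat /= np.linalg.norm(e_hat)
    return e_hat, j_hat

e_hat, j_hat = random_orthogonal_pair()
print(abs(np.dot(e_hat, j_hat)) < 1e-12)   # True: orthogonality holds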
Figure 3. Comparison between the evolution of a quadruple system as computed with the orbit-averaged code SECULARQUADRUPLE developed in this work (red lines) and the direct N-body code MIKKOLA (Mikkola & Merritt 2008; green lines). Refer to the text in Section 2.5 for the initial system parameters. When applicable to a single binary, solid, dashed and dotted curves correspond to binaries A, B and C, respectively. When applicable to a binary pair, solid, dashed and dotted curves correspond to the binary pairs AB, BC and AC, respectively. The quantity |ΔEtot/Etot| is the absolute value of the relative error in the total energy (the orbit-averaged Hamiltonian in the case of SECULARQUADRUPLE), and fk is the true anomaly (applicable only to the N-body simulations). The inset in the top left panel shows a magnification between t = 0 and 0.02 Myr. Note that the orbital period of binary A, PA ≈ 0.8 yr, is too short compared to the output resolution (≈ 500 yr) for fA to be resolved. Also note that in the orbit-averaged code, the semimajor axes are constant by assumption, whereas the KL time-scales PKL,kl in principle depend on time through the time-dependence of ek (cf. equation 9), although in this case the dependence is extremely weak and not visible in the top right panel.

Comparisons to direct N-body integrations

As a first demonstration of our algorithm, we show in Figure 3 a comparison of a short-term integration with SECULARQUADRUPLE (red lines) and MIKKOLA (Mikkola & Merritt 2008), a highly accurate direct N-body code that uses chain regularisation (green lines). (We remark that for this type of system it is essential to use a highly accurate N-body code, because a large number of orbits, in particular in binary A, needs to be integrated very accurately.) The assumed initial parameters were semimajor axes aA = 1 AU, aB = 5 × 10² AU, aC = 5 × 10³ AU, masses m1 = m3 = m4 = 1 M⊙, m2 = 0.5 M⊙, eccentricities eA = eB = eC = 0.5, inclinations iA = 45°, iB = 0 and iC = 135°, arguments of pericentre ωA = ωB = ωC = 0 and longitudes of the ascending nodes ΩA = ΩB = ΩC = 0. Initially, i.e. during the first few KL oscillations in the AB pair, the two methods show very good agreement. However, as time progresses, noticeable deviations develop. These deviations are likely due to increasing integration errors with time in both codes. This poses a problem when comparing the two codes in longer integrations, i.e. for time-scales ≫ PKL,AB, where PKL,AB is the KL time-scale for the AB binary pair (cf. equation 9 below). To illustrate this, we show in the top row of Figure 4 another example, where the integration time is ∼ 60 PKL,AB. The system parameters are adopted from one of the example systems in Section 3.2, i.e. the system corresponding to panels 1-6 of Figure 6. The differences in ek, ik, ωk and Ωk between the integrations with the secular and direct N-body codes are shown as a function of time in the middle row of Figure 4. In this case, there is clearly no longer a one-to-one agreement between the two methods. However, when comparing the two methods, it is important to take into account that in the N-body integrations there is an additional dependence on the three initial orbital phases. We have also carried out N-body integrations with different initial orbital phases, where the initial mean anomaly was sampled randomly. We show the differences between two different N-body realisations as a function of time in the bottom row of Figure 4. These differences are typically at least as large as the differences between the secular code and a single realisation with the N-body code. More quantitatively, in Table 2 we show the results of two-sided Kolmogorov-Smirnov (K-S) tests (Kolmogorov 1933; Smirnov 1948) between time series in ek, ωk and Ωk obtained from the integration carried out with the secular code and the integrations of five different realisations with the N-body code. For K-S tests between the secular and N-body integrations, and K-S tests between N-body integrations with different realisations, the D-values are generally low and the p-values are typically high. This demonstrates that the secular and N-body integrations are statistically consistent, and that the same applies to the N-body integrations with different realisations.
Furthermore, the similarity between the results of the K-S tests for the secular-versus-N-body and the N-body-versus-N-body integrations suggests that the discrepancies between the secular and N-body codes are due to the phase dependence in the N-body integrations. We conclude that, for the highly hierarchical systems considered here, the secular code gives results that are statistically consistent with the direct N-body code. The much greater speed makes the former highly suited for the long-term study of a large number of systems. For example, the integration with SECULARQUADRUPLE for one of the systems in Figure 4 is ∼ 10⁴ times faster compared to MIKKOLA.

Figure 4 (caption, partially recovered). The system is the same as in panels 1-6 of Figure 6. Middle row: the differences in ek, ik, ωk and Ωk between the secular code and one realisation of the N-body code, as a function of time. Bottom row: the differences in ek, ik, ωk and Ωk between two realisations with the N-body code with different initial orbital phases fk.

Table 2. Results of two-sided K-S tests (statistic D and the p-value) for time series in ek, ωk and Ωk. In the first row, the secular code is compared to one realisation of the N-body code, MIKKOLA (Mikkola & Merritt 2008). In the second row, the secular code is compared to five realisations of the N-body code, and given are the resulting average values of D and p. In the third row, two realisations of the N-body code are compared, and in the fourth row, K-S tests are carried out for all combinations of the five realisations of the N-body code, and the quoted values of D and p are averaged over these combinations.

GLOBAL EVOLUTION OF HIGHLY HIERARCHICAL SYSTEMS

In principle, the SECULARQUADRUPLE algorithm can be used to perform a systematic parameter space study. Instead, here we choose to focus in detail on particular configurations to get insight into the typically complex dynamics that can arise. We consider the following two cases: (1) binaries A and B are initially coplanar (iAB,0 = 0) and highly inclined with respect to binary C (iBC,0 = 85°), and (2) binaries A and B are initially highly inclined (iAB,0 = 85°), while binary B is also highly inclined with respect to binary C (iBC,0 = 85°). In both cases, we assume that the quadruple system is highly hierarchical at all times, i.e. rp,A ≪ rp,B ≪ rp,C, where rp,k is the pericentre distance in binary k. For both cases, we performed a sequence of integrations in which aA was varied between 10⁻³ and 1 AU, and all other initial parameters were kept fixed. The latter were assumed to be aB = 10² AU, aC = 5 × 10³ AU, m1 = m3 = m4 = 1 M⊙ and m2 = 0.5 M⊙, eA = eB = eC = 0.01, arguments of pericentre ωA = ωB = ωC = 0 and longitudes of the ascending nodes ΩA = ΩB = ΩC = 0. The integration time for each system was set to 20 PKL,BC,0, where PKL,BC,0 is the initial KL time-scale applied to binaries B and C, computed from (Innanen et al. 1997)

PKL ≈ (Pout²/Pin) × [(min,p + min,s + mout,s)/mout,s] × (1 − eout²)^(3/2), (9)

where min,p = m1, min,s = m2 and mout,s = m3 in the case of PKL,AB, and min,p = m1 + m2, min,s = m3 and mout,s = m4 in the case of PKL,BC (cf. Section 2.1). Note that, contrary to triple systems and at the quadrupole-order approximation, the 'outer' orbit eccentricity eout in equation (9) can change in time if this equation is applied to binaries A and B. This is addressed in more detail below.
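Equation (9) is easy to evaluate in practice; the following sketch (in units of AU, M⊙ and yr, with any order-unity prefactor omitted as an assumption) reproduces the initial KL time-scale quoted below for the case aA = 1 AU, PKL,AB,0 ≈ 1.2 Myr:

import numpy as np

def P_orb_yr(a_au, m_tot_msun):
    """Kepler period in years for a in AU and masses in solar masses."""
    return np.sqrt(a_au**3 / m_tot_msun)

def P_KL_yr(P_in, P_out, m_in_p, m_in_s, m_out_s, e_out):
    """Quadrupole-order KL time-scale estimate (cf. equation 9);
    a prefactor of order unity is omitted in this sketch."""
    m_tot = m_in_p + m_in_s + m_out_s
    return (P_out**2 / P_in) * (m_tot / m_out_s) * (1.0 - e_out**2) ** 1.5

# Example system from Section 3: aA = 1 AU, aB = 100 AU,
# m1 = m3 = 1 Msun, m2 = 0.5 Msun, eB = 0.01.
P_A = P_orb_yr(1.0, 1.5)
P_B = P_orb_yr(100.0, 2.5)
print(P_KL_yr(P_A, P_B, 1.0, 0.5, 1.0, 0.01) / 1e6)    # ~1.2 (Myr)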
For hierarchical triple systems, the octupole parameter

εoct = [(min,p − min,s)/(min,p + min,s)] (ain/aout) eout/(1 − eout²), (10)

is a useful proxy for the importance of octupole-order effects, in particular orbital flips. The latter can occur if εoct ≳ 10⁻³, and are typically associated with very high eccentricities (Lithwick & Naoz 2011; Katz, Dong & Malhotra 2011). In the systems considered here, the initial octupole parameters εoct range from ≈ 3.3 × 10⁻⁸ to ≈ 3.3 × 10⁻⁵ for binary pair AB; for binary pair BC, εoct ≈ 4.0 × 10⁻⁵. This indicates that octupole-order terms are not important. Furthermore, the initial ratio r0 of the orbit-averaged octupole-order cross term to all other orbit-averaged terms at octupole and hexadecupole order (cf. Section 2.4) ranges between ≈ 2 × 10⁻¹² and ≈ 3 × 10⁻⁷, indicating that the orbit-averaged octupole-order cross term can similarly be neglected. The results presented below therefore demonstrate the dynamics that are manifested at the lowest possible, i.e. quadrupole, order.

Examples: A and B initially coplanar

In our first case, iAB,0 = 0° and iBC,0 = 85°. In the absence of the fourth body, there would not be any excitation of the eccentricity in binaries A and B, because they are not mutually inclined and the quadrupole-order terms dominate. We note that if the initial eB = 0.01 were much larger (and therefore εoct much higher, cf. equation 10), orbital flips and very high eccentricity oscillations in binary A would be possible under certain conditions, even if iAB,0 is close to zero (Li et al. 2014). We show in Figure 5 three examples of numerically integrated systems, in which aA is either 1 (panels 1-6), 0.001 (panels 7-12) or 0.023 AU (panels 13-18). For aA = 1 AU, the mutual inclination between binaries A and B, iAB, remains zero (cf. the solid line in panel 3 of Figure 5). However, the individual inclinations of binaries A and B, iA and iB, which are initially zero, do change (cf. the solid and dashed lines in panel 4 of Figure 5; note that these curves overlap). This can be understood from the large torque of binary B on binary A, compared to the torque of binary C on binary B. More quantitatively, the KL time-scales can be interpreted as proxies for the importance of these torques, and the initial KL time-scale for binaries A and B, PKL,AB,0 ≈ 1.2 Myr, is much shorter (i.e. corresponding to a larger torque) than the initial KL time-scale for binaries B and C, PKL,BC,0 ≈ 2 × 10² Myr. The large torque of binary B on binary A enforces that zero mutual inclination between these binaries is maintained, despite the torque from binary C on binary B. The latter torque changes the individual inclination of binary B on the time-scale PKL,BC ≫ PKL,AB. Note that the mutual inclination is determined by the individual inclinations ik and longitudes of the ascending nodes Ωk (cf. equation 6). Therefore, both these angles for binaries A and B follow each other very closely (cf. panels 4 and 6 of Figure 5). If binary A were replaced by a point mass, the eccentricity in binary B would oscillate as a result of the torque from binary C, with maxima of 1 − eB,max ≈ 10⁻². However, in the case of a quadruple system, the short KL time-scale of binary A with binary B causes rapid precession in both binaries A and B, on roughly the same time-scale (cf. the black solid and blue dashed lines in panel 5 of Figure 5). Consequently, the rapid precession in binary B quenches any KL oscillations induced by the torque of binary C. This effect is analogous to the quenching of KL oscillations in triple systems due to additional sources of periapse precession.
Here, the additional precession is due to the extended nature of one of the components in the inner binary, rather than due to e.g. relativistic precession or tidal bulges. This quenching effect is discussed more quantitatively below, in Section 3.4. In panels 7-12 of Figure 5, we show the evolution of an example system with aA = 10⁻³ AU. The initial KL time-scale for binaries A and B is PKL,AB,0 ≈ 39 Gyr ≫ PKL,BC,0 ≈ 2 × 10² Myr. Because of this, there is no induced precession of binary A on binary B, and KL eccentricity oscillations occur in binary B with maxima of 1 − eB,max ≈ 10⁻² (cf. the dashed lines in panel 8 of Figure 5). Furthermore, the torque of binary C on binary B dominates compared to the torque of binary B on binary A. Consequently, the inclination of binary B changes rapidly, whereas the inclination of binary A hardly changes (cf. the solid and dashed lines in panel 10 of Figure 5). However, this also changes the mutual inclination iAB between binaries A and B. The latter increases very rapidly (cf. the solid line in panel 9 of Figure 5). Nevertheless, binaries A and B are only highly mutually inclined (iAB close to 90°) for short periods of time, and therefore no significant eccentricity oscillations occur in binary A. In other words, the latter oscillations are impeded by rapid changes of the mutual inclination between binaries A and B because of KL oscillations induced by binary C. Finally, in panels 13-18 of Figure 5, aA ≈ 0.023 AU. The initial KL time-scales for the binary pairs AB and BC are comparable, i.e. PKL,AB,0 ≈ 3 × 10² Myr ∼ PKL,BC,0 ≈ 2 × 10² Myr, and therefore the torques of binary B on binary A and of binary C on binary B are also comparable. Binaries A and B become mutually inclined, and the KL time-scale for the AB pair is short enough for significant excitation of the eccentricity of binary A. The result is a complex evolution in which the oscillations in eA are highly non-regular and strongly coupled with the oscillations of eB. Interestingly, although binaries B and C started out with a mutual inclination of iBC,0 = 85° < 90°, the orientation between binaries A and B at t ∼ 400 Myr changes from prograde to retrograde. Such orbital flips also occur at later times, and are associated with high eccentricities in binary A. The evolution of the eccentricity of binary B is also affected, although the effect is much smaller and the oscillations can still be considered regular. In Section 3.4, we study the effect of the eccentricity of binary B in more detail.

Examples: A and B initially highly inclined

In our second case, we assume that both binaries A and B and binaries B and C are initially highly inclined, i.e. iAB,0 = 85° and iBC,0 = 85°. The evolution of three example systems, with other parameters identical to those in Section 3.1, is shown in Figure 6.

Figure 6. Panels 1-6, 7-12 and 13-18 correspond to semimajor axes of binary A of 1, 0.001 and 0.023 AU, respectively; the other parameters are the same for these groups of panels. In panels 2, 3, 5 and 6, the abscissae of the insets range between t = 0 and 300 Myr.

In the absence of the fourth body, high-eccentricity KL oscillations would be induced in binary A. For aA = 1 AU (panels 1-6 of Figure 6), PKL,AB ≪ PKL,BC, and for time-scales comparable to PKL,AB, KL eccentricity oscillations in binaries A and B are hardly affected by the torque of binary C. On much longer time-scales comparable to PKL,BC, iB changes because of the torque of binary C (cf. the blue dashed line in panel 4 of Figure 6).
However, the KL eccentricity oscillations between binaries A and B are not noticeably affected (note that in panels 2-6 of Figure 6, the KL oscillations associated with binaries A and B are undersampled). Consequently, iA, iB, ΩA and ΩB are modulated on the PKL,BC time-scale. We note that, as a consequence of KL oscillations in the AB pair, there is still short-time-scale precession induced on binary B, preventing any eccentricity excitation in binary B. This is similar to the previous case, when binaries A and B are initially coplanar. For aA = 10⁻³ AU (panels 7-12 of Figure 6), the evolution is qualitatively very similar to the case with iAB,0 = 0°. This may be surprising, given the high initial mutual inclination between binaries A and B. However, the latter changes strongly on the much shorter time-scale of PKL,BC, and this prevents any eccentricity excitation in binary A. Note that in this case, the quenching of KL eccentricity oscillations in binary A is not due to induced precession. As can be seen in panel 11 of Figure 6, ωA is not much affected on the PKL,BC time-scale, although there is also a trend on a much longer time-scale of ∼ 4 × 10³ Myr. The KL time-scale for the AB pair changes periodically as eB oscillates (cf. panel 7 of Figure 6). Therefore, the time-scale of ∼ 4 × 10³ Myr can, in this case, be interpreted as an effective KL time-scale for the AB pair. When the KL time-scales for the AB and BC pairs are similar (cf. panels 13-18 of Figure 6), the evolution of eA is complex and high eccentricities are attained, similarly to the case with iAB,0 = 0°. Again, an orbital flip occurs around t ∼ 400 Myr. Interestingly, there are subsequently no further orbital flips, and the amplitude of the oscillations in eA and iAB gradually decreases.

Qualitative trends

The above examples suggest that the ratio R0 ≡ PKL,AB,0/PKL,BC,0 of the initial KL time-scales for the AB and BC pairs (cf. equation 11) indicates the global trend of the inclination and eccentricity oscillations. We identify the following three regimes (summarised in the sketch following this list).

(i) R0 ≪ 1: binaries A and B remain coplanar if this was initially the case. If they are initially inclined, KL eccentricity oscillations in binary A are not much affected by the presence of the fourth body. In either case, KL eccentricity oscillations in binary B are quenched.

(ii) R0 ≫ 1: binaries A and B become inclined if they are initially coplanar. However, there are no eccentricity oscillations in binary A, even if binaries A and B are initially highly inclined. This is because the mutual inclination between binaries A and B is large only for a small fraction of the KL time-scale for the AB pair, i.e. for a time < PKL,BC,0 = PKL,AB,0/R0 ≪ PKL,AB,0. Furthermore, KL eccentricity oscillations are not quenched in binary B.

(iii) R0 ∼ 1: binaries A and B become inclined if they are initially coplanar; complex KL eccentricity oscillations arise in binary A that are coupled with the (much less affected) KL eccentricity oscillations in binary B.
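A schematic classifier for these regimes is given below; the numerical boundaries are indicative assumptions of this sketch, since in the integrations of Section 3.4 the transitions are not sharp and occur around R0 ∼ 1 and R0 ∼ 20:

def kl_regime(P_KL_AB_0, P_KL_BC_0):
    """Qualitative regime from R0 = PKL,AB,0 / PKL,BC,0 (indicative boundaries)."""
    R0 = P_KL_AB_0 / P_KL_BC_0
    if R0 < 0.1:
        return "(i) R0 << 1: AB stay coplanar; KL oscillations in B quenched"
    if R0 > 10.0:
        return "(ii) R0 >> 1: AB become inclined; no eccentricity excitation in A"
    return "(iii) R0 ~ 1: complex, coupled KL oscillations in A and B"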
A complication in the above is that PKL,AB can change periodically with time because of KL eccentricity oscillations in binary B (cf. equation 9). Periodically higher values of eB reduce PKL,AB at those times, therefore potentially increasing the range of R for which the eccentricity in binary A can be excited. Furthermore, for large enough values of eB, higher-order terms in the Hamiltonian become more important, and in extreme cases the orbit-averaged approach could break down. In principle, the time dependence of eB could be taken into account by e.g. averaging PKL,AB over a KL cycle in binary B. However, except for a few simple cases, there are no analytic solutions for eB(t). Therefore, this would require numerical integration and hence not be of much practical use for predicting the behaviour without resorting to such integration. Nevertheless, because of the very peaked nature of eB(t) and the small width (in time) of the peaks, we expect the averaged value of PKL,AB typically not to be very different from the value computed from eB,0, at least in systems in which the lowest-order (quadrupole-order) terms dominate.

Results from numerical integrations

Here, we describe the dynamics outlined in Section 3.3 more quantitatively, focussing in particular on the effect of the quenching of KL eccentricity oscillations in binary B by the induced precession of binary A, and on the excitation of the eccentricity in binary A in the regime R0 ∼ 1. In Figure 7, we show with black dots the maximum eccentricities in binaries A and B, the maximum inclination between binaries A and B, and the minimum inclination between binaries B and C as a function of R0, as determined from numerical integrations with SECULARQUADRUPLE. Here, R0 is varied by changing aA (cf. equation 11) in the sequence of integrations described at the beginning of Section 3. In the left (right) panels, results are shown assuming that binaries A and B are initially coplanar (highly inclined). For iAB,0 = 0°, iAB,max is zero for R0 ≲ 1 and rapidly increases for R0 ≳ 1; eA,max is equal to the initial value for R0 ≲ 1 and for R0 ≳ 20. This is consistent with the trend that was outlined in Section 3.3. Furthermore, if R0 ≲ 10⁻², eB,max ≈ 0, demonstrating that the induced precession of system A on B in this regime can completely quench any KL oscillations in binary B. Consequently, the minimum inclination between binaries B and C is constant and ≈ 85°, the initial value (note that for the regular KL oscillations in binary B, a maximum eccentricity corresponds to a minimum inclination with respect to binary C). If 1 ≲ R0 ≲ 20, iAB,max is nonzero; eA,max is also nonzero and reaches high values of up to ≈ 1 − 10⁻⁴. Although the behaviour of these two quantities as a function of R0 is non-regular, there is a general trend in which iAB,max asymptotes to ≈ 160°. A general trend is also apparent in eA,max. If binaries A and B are initially inclined by 85° (cf. the right panels in Figure 7), the dependence of eA,max on R0 is more complicated. For a large range in R0, 3 × 10⁻² ≲ R0 ≲ 50, eA,max fluctuates strongly with R0, reaching high values of 1 − eA,max ∼ 10⁻⁴ for R0 already as low as ≈ 3 × 10⁻². For R0 ≳ 50, eA,max approaches eA,0, as was observed previously in Section 3.2. Furthermore, binary B is more affected compared to the coplanar case, in the sense that iBC,min decreases more strongly in the regime 1 ≲ R0 ≲ 20. The maximum eccentricity in binary B is similar to the coplanar case, however.

Semianalytic description

The maximum eccentricity (and hence minimum inclination) reached in binary B can be computed approximately using a semianalytic method based on conservation of the total energy (i.e. the Hamiltonian) and the total angular momentum. This method is similar to that used by, e.g., Miller & Hamilton (2002) and Naoz et al. (2013a). We neglect any changes in binary A between the initial and final states, where the final state corresponds to a maximum eccentricity in binary B.
To our knowledge, it is not possible to predict these changes in system A (i.e. without resorting to "brute-force" numerical integrations as in Section 3.4.1), and this is likely related to the generally chaotic nature of the evolution of binary A, in particular in the regime 1 ≲ R0 ≲ 20 (cf. Section 3.5). Stated more mathematically, conservation of total energy and angular momentum and the condition that eB is stationary do not generally provide enough constraints to solve for both eB,max and the corresponding eA. In the Hamiltonian to quadrupole order and for the hierarchy considered here, the term corresponding to binaries A and C can safely be neglected; this can readily be seen from equation (A10b), and the resulting reduced Hamiltonian is given by equation (13). The equation of motion for eB follows from equation (13) (cf. equation 4). At a stationary point of eB, and neglecting the terms proportional to CAB in equation (15), this condition implies eB · jC = 0 and/or (jB × jC) · eB = 0. The former cannot generally be true in the case of a maximum eccentricity, therefore the second condition must apply. The latter can be rewritten using the vector identity in equation (A11). The mutual inclination between binaries B and C can be related to eB using conservation of the total angular momentum vector (equation 17); at this level of approximation, eC is constant. Neglecting the term corresponding to binary A and writing eC = eC,0, equation (17) gives ĵB · ĵC as a function of eB (equation 20). Furthermore, if any changes in binary A between the initial and final state are neglected, then the remaining unknown terms in equation (13) are simply given by eA = eA,0, jA · jB = (jA · jB)0 and eA · jB = (eA · jB)0. With these simplifications, equation (13) contains only the single unknown quantity eB corresponding to stationary points. In general, this equation cannot be solved analytically. A notable exception is when the term proportional to CAB in equation (13) is neglected (i.e. neglecting the contribution from binary A), as is the term proportional to ΛB/ΛC in equation (20) (i.e. assuming a highly hierarchical system). In that case, the solution corresponding to the maximum eccentricity is

eB,max = √[1 − (5/3) cos²(iBC,0)], (21)

which is a well-known result for hierarchical triple systems applied to binaries B and C, where binary A is essentially replaced by a point mass (note that ĵB · ĵC = cos iBC). More general numerical solutions are shown in the bottom two panels of Figure 7 with the solid lines, where iBC,min is computed using equation (20). Although the semianalytic curves do not capture the detailed behaviour of eB,max and iBC,min in the regime 1 ≲ R0 ≲ 20, for other R0 they agree well with the results obtained from the numerical integrations with SECULARQUADRUPLE.
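Equation (21) is simple to evaluate; note that for iBC,0 = 85° it gives eB,max ≈ 0.994, i.e. 1 − eB,max ≈ 6 × 10⁻³, consistent with the values 1 − eB,max ≈ 10⁻² quoted in Section 3.1. A minimal sketch:

import numpy as np

def e_max_quadrupole(i0_deg):
    """Maximum KL eccentricity for an initially near-circular inner orbit
    (equation 21); nonzero only between the Kozai angles (~39.2-140.8 deg)."""
    val = 1.0 - (5.0 / 3.0) * np.cos(np.radians(i0_deg)) ** 2
    return np.sqrt(val) if val > 0.0 else 0.0

print(1.0 - e_max_quadrupole(85.0))   # ~6e-3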
In addition, for R0 ≳ 1 there may be an overlap of many resonances, thereby producing chaotic behaviour as a function of R0 (Chirikov 1979). Interestingly, the peaks for R0 ≲ 1 are much less pronounced, if not completely absent, in the highly inclined case. These phenomena merit further study, but are beyond the scope of the present work.

General relativistic effects

In the results presented above, all four bodies were assumed to be point masses and general relativistic effects were not included. In Figure 9, we show the results of integrations with SECULARQUADRUPLE similar to those presented in Figure 7, but now including 1PN precession in the equations of motion for all three binaries (cf. equation 8). In the coplanar case, eccentricity oscillations in binary A are quenched due to relativistic precession, even if R0 ∼ 1. We note, however, that the purely Newtonian results can be rescaled to other systems (in particular, with larger aA), in which case relativistic precession in binary A becomes unimportant, whereas the purely Newtonian secular dynamics remain unaffected modulo a rescaling of the KL time-scales. An interesting, and somewhat unexpected, effect can be seen in the right panel of Figure 9. Starting from the lowest R0, the quantity eA,max decreases with increasing R0, which is due to the increasing relative importance of 1PN precession compared to the torque of binary B. However, the decrease of eA,max flattens around R0 ≈ 2 × 10⁻². The latter value of R0 corresponds to a significant increase of eB,max. The flattening of eA,max as a function of R0 can be explained by considering that as eB,max increases, the KL time-scale for the AB pair decreases (cf. equation 9). Consequently, the latter KL time-scale can become comparable to the 1PN precession time-scale; here, this is the case for 2 × 10⁻² ≲ R0 ≲ 10⁻¹. We show an example of this phenomenon in Figure 10, where aA ≈ 0.3 AU and R0 ≈ 0.04. At the maxima of eB, the KL time-scale for the AB binary pair (black solid line in the top left panel) decreases and becomes comparable to the 1PN precession time-scale in binary A (red solid line in the same panel). This allows for increased eccentricities in binary A, to much higher values than if eB were constant (cf. the top middle panel). This is a mechanism for overcoming, at least partially, the well-known quenching of KL eccentricity cycles induced by 1PN precession. Note, however, that in this case there is only a narrow region in R0 for which it is effective: as R0 increases, aA decreases, therefore further decreasing t1PN,A. We note that this phenomenon is general, in the sense that it would also apply if precession in binary A is due to another effect, e.g. tidal effects or mass transfer in stellar systems.

Application: planetary systems

As mentioned in Section 1, the hierarchical configuration considered in this work can be applied to planetary systems consisting of a planet+moon system (binary A) orbiting a central star (in binary B) that is orbited by a more distant and inclined planetary or stellar companion (in binary C). Both binaries A and B are assumed to be initially coplanar and circular. A pertinent question is whether the torque exerted by the fourth body causes the planet+moon system to become inclined with respect to the orbit of the central star, or whether coplanarity is maintained. We note that this is different from the question that has been addressed in the past, in which a different hierarchy was assumed, i.e.
all bodies within the stellar binary were assumed to orbit the central star (Innanen et al. 1997; Takeda & Rasio 2005; Takeda, Kita & Rasio 2008). Based on the qualitative results presented in Section 3.3, we expect that coplanarity between binaries A and B is maintained if PKL,AB,0 ≪ PKL,BC,0, i.e. if the binary companion is distant from the planetary orbit. In addition, we expect KL eccentricity oscillations in the orbit of the planet+moon system with respect to the central star, due to the torque of the binary companion, to be quenched. This effect could prevent the latter orbit from becoming highly eccentric, i.e. the presence of the moon could 'shield' the planet from disruption by the star as a consequence of KL oscillations induced by the binary companion. On the other hand, if PKL,AB,0 ≫ PKL,BC,0, the binary companion is close to the planetary orbit, and the planet+moon system can become inclined with respect to the orbit of the central star. However, in the latter case, the KL time-scale for the AB pair is long compared to that of the BC pair, such that there is no eccentricity excitation in the planet+moon system. In the intermediate regime where PKL,AB,0 ∼ PKL,BC,0, we expect significant eccentricity oscillations in the planet+moon system. These oscillations could lead to efficient tidal dissipation in cases where this would otherwise not have been important, and, in extreme cases, even to planet+moon collisions. We explore in Section 4.1.1 some of the parameter space where significant KL eccentricity oscillations in the planet+moon system are expected, and give a number of examples in Section 4.1.2. A comprehensive population synthesis study is beyond the scope of the present paper.

Exploration of the parameter space

We assume a Jupiter-mass planet, m1 = MJ, a moon with mass m2 = 10⁻⁴ m1 (the order of magnitude of the mass of Jupiter's heaviest moons), a central star with mass m3 = 1 M⊙, and a binary companion with mass m4 = 0.5 M⊙. The radii (of interest when considering collisions) are assumed to be R1 = RJ, R2 = 10⁻² R1 and R3 = 1 R⊙. The semimajor axis of the planet+moon system is assumed to be either aA = 10⁻³ AU or aA = 10⁻² AU; the semimajor axis aB of the latter system with respect to the central star is either 1, 4 or 10 AU. The eccentricities of binaries A and B are assumed to be eA = eB = 0.001; the eccentricity of the orbit of the binary companion is either eC = 0.05 or eC = 0.67. In Figure 11, we show various time-scales of importance as a function of aC, where in each panel different values are assumed for aA and eC. Quantities pertaining to the three values of aB are indicated with blue, red and green lines for values of aB of 1, 4 and 10 AU, respectively. The critical values of aC corresponding to dynamical stability, computed using the three-body criterion of Mardling & Aarseth (2001) and where binary A is treated as a point mass, are indicated with vertical dashed lines for each value of aB. Systems to the left of these lines are expected to be dynamically unstable. Extrapolating our results from Section 3, we expect the region in parameter space in which eA can be excited (in the absence of relativistic effects and other additional sources of apsidal motion) to be approximately 1 ≲ R0 ≲ 20. The limiting values of aC, for each value of aB, are indicated with the vertical solid lines, and between these vertical lines the coloured horizontal (sloped) solid lines indicate the KL time-scales for the AB (BC) pairs.
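To make the time-scale comparison in Figure 11 concrete, the following is a minimal Python sketch of the quantities involved. It is not the SECULARQUADRUPLE code used in the paper: the Kozai-Lidov time-scale is written in a commonly used quadrupole-order scaling form, with the order-unity prefactor of the paper's equation (9) omitted, and the parameter values are those quoted above for the first example configuration of Section 4.1.2.

import numpy as np

MSUN, MJUP = 1.0, 9.546e-4  # masses in solar units

def period_yr(a_au, m_tot_msun):
    # Keplerian period in years (G = 4 pi^2 in AU, Msun, yr units)
    return np.sqrt(a_au**3 / m_tot_msun)

def t_kl_yr(a_in, a_out, m_in, m_pert, e_out):
    # Quadrupole-order KL time-scale scaling: (P_out^2 / P_in) x
    # (total mass / perturber mass) x (1 - e_out^2)^(3/2); prefactor omitted
    p_in = period_yr(a_in, m_in)
    p_out = period_yr(a_out, m_in + m_pert)
    return (p_out**2 / p_in) * ((m_in + m_pert) / m_pert) * (1.0 - e_out**2)**1.5

m1, m2, m3, m4 = MJUP, 1e-4 * MJUP, 1.0, 0.5  # planet, moon, star, companion
aA, aB, aC = 1e-3, 4.0, 50.0                  # AU
eB, eC = 0.001, 0.05

t_ab = t_kl_yr(aA, aB, m1 + m2, m3, eB)       # torque of B (star) on A
t_bc = t_kl_yr(aB, aC, m1 + m2 + m3, m4, eC)  # torque of C (companion) on B
print(f"P_KL,AB ~ {t_ab:.2e} yr, P_KL,BC ~ {t_bc:.2e} yr, R0 ~ {t_ab / t_bc:.2f}")

For these values the ratio comes out of order unity, consistent with the statement in Section 4.1.2 that R0 ∼ 1 for this configuration.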
We have indicated with hatched regions the ranges in aC satisfying 1 < R0 < 20 and the stability constraint. In principle, the mechanism for producing high-amplitude oscillations in eA in the regime R0 ∼ 1 can be suppressed if KL oscillations in system B are quenched by relativistic precession in binary B. In all cases in Figure 11, the corresponding precession time-scales are longer than 10 Myr, and therefore precession in binary B is not important. Relativistic precession in binary A is of greater importance given the small values of aA; the associated time-scales are indicated in Figure 11 with black dotted horizontal lines. Based on Figure 11, we expect eccentricity excitation in the planet+moon system for specific ranges in aC. These ranges strongly depend on aA, aB and eC. For small semimajor axes of the planet+moon system, i.e. aA = 10⁻² AU, the criterion of dynamical stability of the orbit of the binary companion does not strongly reduce the parameter space. General relativistic precession is, however, also more important for smaller aA. Nevertheless, for values of aB of 4 and 10 AU, the relativistic precession time-scale in binary A is not much shorter than the KL time-scale for the AB pair. In those cases, there could still be high-eccentricity oscillations in binary A because of the reduction of the KL time-scale for the AB pair as a consequence of the eccentricity oscillations in binary B (cf. Section 3.6). This is demonstrated below in the first example in Section 4.1.2. A larger eccentricity of the binary companion tends to reduce the parameter space of interest. The reason for this decrease is the larger range in aC for which the system is not dynamically stable. As discussed in Section 3, for R0 ≪ 1, KL eccentricity oscillations in binary B are quenched because of the induced precession from binary A. We have plotted the maximum eccentricity in binary B as a function of aC in Figure 11 with dashed lines, computed using the semianalytic method described in Section 3.4.2. Here, we assumed iBC,0 = 85° to obtain a rough upper limit of the maximum eccentricity. The quenching effect is very effective for aB = 1 AU and aC larger than a few hundred AU. For large enough aC, eccentricity oscillations in binary B are completely quenched. To illustrate the implications of this, we have indicated in Figure 11 with horizontal coloured lines the values of 1 − eB that satisfy 1 − eB = (aA + R3)/aB, i.e. the eccentricity for which the pericentre distance of the orbit of binary B is equal to aA + R3. In the latter case, we expect the planet, the moon, or both to be disrupted by the central star. For aA = 10⁻² AU and aB = 1 AU, the maximum eccentricity reached in binary B exceeds this value for aC ≲ 100 AU. However, for aC ≳ 100 AU, a potentially catastrophic encounter of the planet+moon system with the central star is avoided because of quenching of the KL eccentricity oscillations in binary B. This shows more quantitatively the 'shielding' effect mentioned above. To conclude, we expect that there exist regions in parameter space in which the eccentricity of the planet+moon system is excited, despite initial coplanarity. The region in parameter space is limited, however: the planet should be sufficiently far away from the central star, yet the orbit of the binary companion should also be dynamically stable. In addition, the latter orbit needs to be sufficiently inclined.
In contrast, if the orbit of the binary companion is wide, the presence of the moon can prevent the orbit of the planet+moon system around the star from becoming highly eccentric.

Examples

To further illustrate the planetary system discussed here, we show in Figure 12 two examples of integrations with SECULARQUADRUPLE. In the first two rows, aA = 10⁻³ AU, aB = 4 AU and aC = 50 AU (cf. the black bullet in the top left panel in Figure 11); in the second two rows, aA = 10⁻² AU, aB = 10 AU and aC = 50 AU (cf. the black bullet in the top right panel in Figure 11). In both cases, we assume iBC,0 = 70° and eC,0 = 0.05. For the other parameters, we refer to Section 4.1.1. In both examples, R0 ∼ 1, and high-eccentricity oscillations are expected in the planet+moon system. The values of 1 − eA corresponding to a collision between the planet and its moon are indicated with horizontal red lines in the corresponding panels in Figure 12. Such collisions occur in both examples, at ≈ 0.05 and 0.07 Myr respectively, and the integrations were subsequently stopped. Note that the eccentricity of binary B does not become high enough for disruption of the planet+moon system by the central star. Particularly in the second example, eA shows complicated behaviour as a function of time, changing rapidly each time iAB passes 90°. We remark that tidal dissipation was not included in these examples. This effect is likely important for the small pericentre distances reached during the evolution, therefore possibly resulting not in a collision, but in a shrinking of the planet+moon orbit.

ADS 1652

The quadruple system ADS 1652 (Tokovinin, Gorynya & Morrell 2014, and references therein) is composed of four main-sequence stars in the '3+1' configuration. The system is likely old (age > Gyr) considering the spectral types of its stellar components: the stars in binary A are of spectral type G9V, the star in binary B is of type K5V and the star in binary C is of type G8V. To date, ADS 1652 is one of the few quadruple systems for which orbital fits have been obtained for multiple orbits. Here, we apply the SECULARQUADRUPLE algorithm to ADS 1652 to explore its long-term secular dynamical evolution. We adopt the parameters that were obtained by Tokovinin, Gorynya & Morrell (2014), who fitted radial velocity and speckle measurements to the orbits of binaries A and B; these parameters are given in Table 3. [Figure 13 caption: the currently unconstrained parameters pertaining to the outermost orbit, binary C, are set to eC = 0.05, iC = 0°, ωC = 90.0° and ΩC = 130.0°; in the bottom middle panel, the inset shows a magnification for t = 0 to 20 Myr, where both ωA and ωB are undersampled.] Here, we adopted the component masses obtained from the orbital fits (cf. the bottom row of Table 3). For the semimajor axis of the C binary, we adopt the observed projected distance of 2500 AU from binary A. Owing to its long orbital period of ∼ 10⁵ yr, the eccentricities and orbital angles of binary C are not known. Here, we proceed by sampling these quantities for 500 realisations of the system, where eC is sampled from a thermal distribution, iC from a distribution uniform in cos(iC), and ωC and ΩC from a uniform distribution. In our integrations, we included terms up to and including octupole order (excluding the cross term), and the 1PN relativistic precession terms in the three binaries. The integration time is 20 PKL,BC, which is typically a few Gyr (depending on eC).
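The sampling of the unconstrained outer-orbit elements described above can be sketched in a few lines of Python. This illustrates only the stated distributions, not the actual integration pipeline, and the random seed is arbitrary.

import numpy as np

rng = np.random.default_rng(42)
n_real = 500
eC = np.sqrt(rng.uniform(0.0, 1.0, n_real))                 # thermal distribution, f(e) = 2e
iC = np.degrees(np.arccos(rng.uniform(-1.0, 1.0, n_real)))  # uniform in cos(iC)
omegaC = rng.uniform(0.0, 360.0, n_real)                    # argument of pericentre (deg)
OmegaC = rng.uniform(0.0, 360.0, n_real)                    # longitude of ascending node (deg)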
We show in Figure 13 the evolution of an example system, where eC = 0.05, iC = 0°, ωC = 90.0° and ΩC = 130.0°. For this value of eC, R0 ≈ 6.7 × 10⁻⁴ ≪ 1; therefore, the system is in the regime in which the torque of binary B on binary A dominates compared to the torque of binary C on binary B. Indeed, binaries A and B, which are initially nearly coplanar, remain nearly coplanar during the evolution (cf. the top right panel in Figure 13). Consequently, the KL eccentricity oscillations in binary A are of very low amplitude, i.e. eA,max ≈ 0.779, whereas eA,0 = 0.769. Furthermore, KL eccentricity oscillations in binary B, which is initially inclined with respect to binary C with iBC,0 ≈ 70°, are completely quenched. This can be attributed to the rapid precession induced in binary B by binary A, on the time-scale PKL,AB ≈ 4 × 10⁻² Myr ≪ PKL,BC ≈ 10² Myr (cf. the bottom middle panel of Figure 13). In Figure 13, a low value of eC,0 = 0.05 was assumed. The quantity R0 increases with increasing eC,0 (cf. equation 11). Therefore, for larger eC,0, the system could be in a very different regime in R0, in which the evolution is very different. This is not the case in our Monte Carlo realisations, however, for which the mean and standard deviation of R0 are ≈ 1.4 × 10⁻³ and ≈ 1.0 × 10⁻³, respectively. In Figure 14, we show for the 500 integrations the maximum eccentricities in the A and B binaries, and the minimum and maximum inclinations between binaries A and B. There is very small spread in all of these quantities, showing that their dependence on eC, as well as on iC, ωC and ΩC, is very weak. We conclude that, based on the observed state of ADS 1652, the eccentricities of its orbits will remain very nearly constant for, at least, the remainder of the main-sequence time-scale of its constituents. This conclusion is independent of the currently unknown eccentricity and orientation of the outermost orbit. In particular, even if the latter orbit is highly inclined with respect to the intermediate orbit, any potential KL eccentricity oscillations in the intermediate orbit are efficiently quenched.

The Tokovinin sample of nearby FG dwarfs

As mentioned in Section 1, 55 of the 4847 observed systems of FG dwarfs in the catalogue of Tokovinin (2014a,b) are quadruple systems. Of these, 18 are in the '3+1' configuration, and for 13 of the latter, orbital periods and component masses are known for all three binaries. Here, we briefly explore in which dynamical regimes we expect these systems to be, by computing the associated value of R0 (cf. Section 3.3). For the 13 systems mentioned above, the orbital elements, apart from the semimajor axes, are unknown. In order to compute R0, the eccentricities eB and eC are required (cf. equation 11). Therefore, for each of the 13 systems, we sample, in 1000 realisations, eB and eC from a thermal distribution. Here, we reject sampled eccentricities if either of the AB and BC pairs would be unstable according to the dynamical stability criterion of Mardling & Aarseth (2001); a sketch of this sampling-with-rejection procedure is given below. The distribution of the values of R0 obtained with this approach is shown in Figure 15. The ratio R0 is typically small; ≈ 0.9 of the sampled systems have R0 < 10⁻⁵. This is the regime in which the AB pair is effectively an isolated triple, and where the induced precession of binary A on binary B quenches the KL eccentricity oscillations in binary B that would otherwise arise as a consequence of the torque of binary C.
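The sampling-with-rejection procedure referred to above can be sketched as follows. The stability test uses one common form of the Mardling & Aarseth (2001) criterion, written here from memory with the inclination correction omitted, so it should be read as an approximation; the semimajor axes and mass ratio in the example call are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def stable_mardling(a_in, a_out, e_out, q_out):
    # Approximate criterion: the outer pericentre must exceed a critical
    # multiple of the inner semimajor axis (inclination term omitted)
    factor = ((1.0 + q_out) * (1.0 + e_out) / np.sqrt(1.0 - e_out))**0.4
    return a_out * (1.0 - e_out) > 2.8 * a_in * factor

def sample_thermal_stable(a_in, a_out, q_out, n=1000):
    # Draw eccentricities from a thermal distribution (f(e) = 2e),
    # rejecting values that fail the stability test
    kept = []
    while len(kept) < n:
        e = np.sqrt(rng.uniform())
        if stable_mardling(a_in, a_out, e, q_out):
            kept.append(e)
    return np.array(kept)

e_samples = sample_thermal_stable(a_in=1.0, a_out=20.0, q_out=0.5)  # hypothetical values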
We note that one might expect currently observed quadruples not to be in the regime R0 ∼ 1. If R0 ∼ 1, then the large eccentricities in the innermost binary would likely already have strongly affected the system, and possibly have resulted in a merger. Evidently, in this case, the system would not have been observed as a quadruple system, but as a triple system. Conversely, some of the observed quadruple systems may have been quintuple systems in the past and, triggered by secular dynamical evolution, evolved into quadruple systems through the merging of the stars in (likely) the shortest-period binary.

CONCLUSIONS

We have explored the global gravitational dynamics of hierarchical quadruple systems consisting of a hierarchical triple system orbited by a fourth body. Our main conclusions are as follows.

1. The Hamiltonian for the system has been derived and expanded up to and including fourth order in the ratios of the binary separations rA/rB, rB/rC and rA/rC (cf. Figure 1). At each order, we have found three terms that are each mathematically equivalent to the corresponding terms that appear in the hierarchical three-body problem, and that depend on the properties of only two binaries. In addition to these terms, for octupole and higher orders, we have found 'cross terms' that depend on the properties of all three binaries. Subsequently, we have derived expressions for the orbit-averaged Hamiltonian. A preliminary analysis indicates that the cross terms are typically not important in highly hierarchical systems on short time-scales, i.e. not exceeding time-scales of order PKL,BC, where PKL,BC is the Kozai-Lidov (KL) time-scale of the BC pair. We have also derived the Hamiltonian for the configuration of two binaries orbiting each other's barycentre (Appendix A2).

2. For highly hierarchical systems, i.e. in which the three binaries are widely separated, the global dynamics can be qualitatively described in terms of the (initial) ratio of the KL time-scales of the AB to the BC pairs, R0 ≡ PKL,AB,0/PKL,BC,0. If R0 ≪ 1, the torque of binary B on A dominates compared to the torque of binary C on binary B, and therefore binaries A and B remain coplanar if this was initially the case. If binaries A and B are initially inclined, KL eccentricity oscillations in binary A are not much affected by the presence of the fourth body. Eccentricity oscillations in binary B are efficiently quenched due to the short-time-scale precession induced on binary B by binary A. If R0 ≫ 1, the torque of binary C on binary B dominates compared to the torque of binary B on binary A. Initially, the inclination of binary B changes, whereas this is not the case for binary A. This induces a mutual inclination between binaries A and B, even if they are initially not inclined. However, rapid precession of binary B compared to the KL time-scale for the AB pair prevents any significant eccentricity oscillations in binary A, and even quenches KL oscillations if binaries A and B are initially inclined. Lastly, if R0 ∼ 1, complex KL eccentricity oscillations occur in binary A that are strongly coupled with the KL eccentricity oscillations in binary B. The latter are also affected compared to the situation in which binary A is replaced by a point mass, although this is typically a much smaller effect. Even if binaries A and B are initially coplanar, the induced inclination can result in very high eccentricity oscillations in binary A.
These extreme eccentricities could have significant implications for strong interactions such as tidal interactions, gravitational wave dissipation, and collisions and mergers of stars and compact objects.

3. We also included the effects of general relativity (GR), in particular relativistic precession. We have found that the range in the parameter space of the semimajor axis ratios aB/aA for which KL oscillations are important in binary A can be extended compared to hierarchical triple systems. This is due to a decrease of the KL time-scale of the AB pair when the eccentricity of binary B is at a maximum.

4. We have applied our results to a planetary configuration consisting of a planet+moon system orbiting a central star that is orbited by a more distant and inclined binary companion. We have found that there are regions in parameter space where a planet+moon system that is initially coplanar with respect to the central star can become inclined, and the eccentricity in the planet+moon system can be excited. This could result in significant tidal dissipation and even a collision of the planet with its moon. Furthermore, when the orbit of the binary companion is wide, KL eccentricity oscillations in the orbit of the planet+moon system around the central star can be quenched because of induced precession from the planet+moon system. This effectively shields the planet from high-eccentricity KL oscillations induced by the binary companion, and, therefore, potential disruption by the central star could be avoided.

5. Lastly, we applied our results to stellar quadruple systems. In the case of ADS 1652, R0 ∼ 10⁻³ assuming a thermal distribution of the unknown eC, and we find almost negligible KL eccentricity oscillations in both the innermost and intermediate orbits, binaries A and B. Even if the outer orbit, binary C, were highly inclined with respect to binary B, any potential KL eccentricity oscillations in binary B would be efficiently quenched. For the '3+1' FG stellar quadruples in the catalogue of Tokovinin (2014a,b), we estimate ≈ 0.9 of the systems to have R0 < 10⁻⁵. Therefore, we expect that in the majority of these systems, KL eccentricity oscillations in the BC pair are quenched and, from a secular dynamical point of view, the innermost (AB) pair can be considered as an isolated triple.

[Appendix fragments: the expansion terms with i + j ≥ 5 are substituted into equation (A1); a vector identity for the dot product of two vector products is used repeatedly to simplify the expressions in equation (A10). Figure B1: test of the SECULARQUADRUPLE algorithm for a hierarchical three-body system; the parameters are set to mimic the system of Figure 3 of Naoz et al. (2013a).]
2016-06-09T13:51:54.000Z
2014-12-09T00:00:00.000
{ "year": 2014, "sha1": "db8cf3c69236e5dabccf1b35a0ae1b70fd01ca6e", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/449/4/4221/18504050/stv452.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "db8cf3c69236e5dabccf1b35a0ae1b70fd01ca6e", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
250157702
pes2o/s2orc
v3-fos-license
Appraisal of different attributes of fish community in Andharmanik River of coastal Bangladesh and socio-economic conditions of fishermen

The purpose of this study was to determine the fish species composition, appraise the status of fish diversity through sampling at six locations, and observe the socio-economic conditions of the fishermen surrounding the river during the study period. There were 81 fish species found, classified into 13 orders, 40 families, and 69 genera. The most dominant order was Perciformes (55.42%), followed by Clupeiformes (20.44%), Cypriniformes (8.96%), Siluriformes (8.13%), and others (7.05%). To illustrate the species diversity, richness and evenness in the sampling areas, fish community indices, viz. the Shannon-Wiener index (H), Simpson's dominance index (D), Simpson's diversity index (1-D), Margalef's index (d) and Gibson's evenness (E), were used; their overall values were 3.29-3.48, 0.05-0.069, 0.93-0.95, 6.88-8.43 and 0.39-0.49, respectively. Station S1 differed significantly in species richness from the other five stations (P < 0.05). The analysis of similarity (ANOSIM) displayed significant (P < 0.05) variations in fish community among stations and months. According to the similarity percentage (SIMPER) analysis, 35.8% similarity was observed among the fish species from different stations, while 59.36% similarity was detected among the fish species from different months. Among the 81 fish species recorded at the various sampling locations, one species is critically endangered (CR), three species are near threatened (NT), eight species are endangered (EN), and eight species are vulnerable (VU). The socioeconomic conditions of fishermen were determined on the basis of personal interviews and focus group discussions. Indiscriminate fishing practices as well as environmental instabilities such as reduced water volume, increased sedimentation, water abstraction, and pollution have ravaged fish habitat and diminished fish diversity over time. As a result, fish conservation in the Andharmanik River has become imperative, and an integrated management plan should be designed and executed as soon as possible.

Introduction

Due to the existence of the large number of rivers spread throughout the region, Bangladesh is considered a riverine country (Rahman, 2015; USAID, 2016). It has numerous inland bodies of water rich in biodiversity. About 800 rivers with their tributaries and distributaries cross the country and create a waterway of approximately 24,140 km in length (DoF, 2014); in addition, the coastline extends for about 710 km. In Bangladesh, the southeastern and the southwestern coasts constitute a complex coastal and estuarine ecosystem. The rivers provide enormous opportunities and possibilities for enhancing fish production and improving the living conditions of the people living around them. The coastal rivers of southern Bangladesh are recognized for a large quantity of commercial fish catch, which influences the national economy of the country (Sharker et al., 2015), and the Andharmanik River is one of them. The Bangladeshi Government has declared five sanctuaries where fishing of 'jatka' (juvenile hilsa) is forbidden for a particular period. The Andharmanik River is one of the five sanctuaries (Wahab and Golder, 2016) and contains a large number of aquatic resources. Fish diversity refers to the number of species present in a given location as well as their relative abundance.
In Asia, Bangladesh has the third-highest aquatic fish diversity after China and India, with about 800 species in freshwater, saltwater and coastal waters. The diversity of species can be measured in a variety of ways, e.g. number of species, functional diversity and genetic diversity. The coastal zone is inhabited by 35 million people, or 29% of the total population (Ahmad, 2019). The coastal zone, as a part of the fisheries sector in Bangladesh, contributes significantly to the country's socioeconomic development by providing opportunities for income, empowering women, supplying nutrition and earning foreign exchange through exporting fish, shrimp, and other fisheries products. For the overall planning and development of the fisheries sector, sound knowledge of the livelihood patterns of the associated people is very important. Fish biodiversity and environmental management are two of the most pressing issues today, and the ability to measure the effects of habitat change and other factors on the fish population requires detailed assessments of the fishery (Dudgeon et al., 2006). A major problem in Bangladesh is the decline in the abundance or stocks of freshwater as well as coastal-water fish species. IUCN Bangladesh (2015) evaluated a total of 253 fish species, of which 64 species (25.3%) were found to be threatened (Khanom et al., 2016). The diversity of fish species in water bodies such as rivers, estuaries, and beels is rapidly dwindling, as evidenced by various studies (Mia et al., 2015; Rahman et al., 2015, 2016a, 2016b; Islam et al., 2016; Roy et al., 2016; Galib et al., 2013; Hossain et al., 2014; Alam et al., 2013; Mohsin et al., 2013, 2014). However, less is known about the species composition and the livelihood patterns of the local fishers. Storm surges, cyclones, sea-level rise, coastal inundation, land erosion, and other climate change-related occurrences are major factors in fish diversity loss, which ultimately determine the fate of fishermen in the coastal zone (Karim and Mimura, 2008). As a result, a thorough understanding of fish community structure and biodiversity patterns is required in order to develop sustainable management or conservation measures in coastal areas and assess their effects (Sarkar and Bain, 2007). For all of these reasons, the current research intends to evaluate the fish community composition, diversity status and socio-economic conditions of fishermen in the Andharmanik River. The findings of this research will help to develop ecosystem-based riverine fisheries management in coastal areas.

Study area

The Andharmanik River is a coastal river of the Ganges-Padma system and one of the major rivers of Kalapara Upazila in Patuakhali District. Because of its proximity to the Bay of Bengal, the Andharmanik River in southern Bangladesh is an ichthyofaunally rich coastal water body and has currently been declared a fish sanctuary. Of the river's total length of 40 km, at least 25 km have been permanently dried out due to the deposition of sediment and siltation. The river was divided into six sampling stations to explore its entire length and to determine how seasonal fish diversity affects local river fisheries and the fishermen's socioeconomic status. The six sampling stations of the present investigation were Khepupara (S1), Nabipur (S2), Fatapur (S3), Hajipur (S4), Nizampur (S5) and Alipur (S6) (Figure 1).
The related data were collected twice a month over the study period, from May 2019 to February 2020, to meet the objectives of the current research.

Fish specimen collection

During the catch, fish samples were gathered from local fish landing stations and from previously notified fishermen. In the study region, different types of fishing gear (seine nets, gill nets, cast nets, hooks, traps, etc.) are employed by local fishers, varying in terms of target species, size, and performance (Kundu et al., 2019). The collection methods were the same at the different sampling stations. On each sampling day, the total number of individuals of each species found at these six locations was tallied.

Identification of collected fish samples

Collected fishes were organized based on their key morphological features. Species that were difficult to identify on site were stored in a 5-10% buffered formalin solution and transported to the laboratory of the Department of Zoology, Jagannath University, Dhaka, Bangladesh. There they were identified by evaluating their morphometric and meristic features as well as the color of the specimens. The taxonomic analysis was carried out in accordance with Rahman (2005), Talwar and Jhingran (1991), and IUCN Bangladesh (2015). After identification, the fish species were systematically classified according to Nelson (2006).

Species assemblage and fish diversity analysis

The two most widely used aspects of diversity are species richness and evenness (Magurran, 2004). We selected a variety of diversity indicators because they provide a solid foundation for assessing and comparing biodiversity across communities. Species diversity was calculated using different indices, namely the Shannon-Wiener diversity index (Shannon, 1949; Shannon and Weaver, 1963; Ramos et al., 2006), Simpson's index of diversity, Simpson's dominance index, Margalef's index and Buzas-Gibson evenness, which together account for both the number of species and the distribution of individuals. The Shannon-Wiener diversity index takes into account the number of species as well as the apportionment of individuals within species. The Shannon-Wiener diversity (H) was calculated using the following formula:

H = -Σ pi ln(pi)

where pi is the proportion of the sample represented by species i (pi = ni/N), N is the total number of individuals of all species in the community, Σ denotes the sum over species, and ln is the natural logarithm.

Simpson's (1949) index of diversity was computed as 1 - D, where the dominance index D is

D = Σ (ni/N)²

and ni is the total number of individuals of a particular species, Σ is the sum over species, and N is the total number of individuals of all species. Simpson's dominance index (D) is a commonly used method for quantifying the biodiversity of a habitat, which takes into consideration both the number of species and their abundance.

The following formula was used to calculate the Margalef index (d) (Margalef, 1968) to determine species richness:

d = (S - 1) / ln(N)

where S is the number of species in a sample, N is the total number of individuals, and ln is the natural logarithm.

Buzas and Gibson's evenness (Harper, 1999) was used to estimate the relative distribution of individuals in the sample, using the following formula:

E = e^H / S

where H is the Shannon-Wiener index and S is the number of species.

Socio-economic conditions of fishermen

The survey questionnaire is an important instrument for data collection. A draft questionnaire with some predefined questions was prepared in line with the objectives of the study, in such a way that it could cover all factors relevant to the socio-economic status of fishermen and to fish diversity (Hossain et al., 2015).
A simple random sampling procedure covering 50 fishermen of different ages in each village was used for the questionnaire interviews. In light of specific and practical experience, the draft questionnaire was then modified, reordered and updated. Each fisher was interviewed near the river while fishing, and sometimes by visiting their home; each interview lasted half an hour in a single session. To assess socioeconomic conditions, participatory rural appraisal tools such as focus group discussions (FGDs) were also conducted with these 50 fishers. For verification of the relevant information, cross-check interviews were performed with key individuals such as the District Fisheries Officer (DFO) and relevant NGO staff.

Statistical analysis

During the first step of data analysis, fish diversity was quantified, and statistical comparisons were then carried out. MS Excel and Statistical Packages for the Social Sciences (SPSS) version 20.00 were used to perform the statistical analysis. R version 4.1.1 was used to perform an analysis of variance (ANOVA) to determine the spatial variation of average species richness. Tukey's multiple comparison test was also used to determine which of the six stations differed from the remaining stations. Paleontological Statistics (PAST) version 3.22, a software package for paleontological data analysis (Hammer et al., 2001), was used to execute the analysis of similarities (ANOSIM) and the similarity percentages analysis (SIMPER). The assemblage similarity and the significant differences among the fish species of different stations and months were assessed employing ANOSIM (Clarke, 1993; Clarke and Warwick, 2001). SIMPER (Clarke, 1993) was performed to observe the degree of similarity among the fish species from various locations and months; this analysis was also used to estimate the percentage contributions of the major fish species during different months and at different stations. Hierarchical clustering (Clarke and Warwick, 2001) was also performed to generate a dendrogram for analyzing similarities among months and stations.

Fish species assemblage and distribution

A variety of economically important fishes are found in the estuarine, coastal and nearby areas of Bangladesh. The assessment encompassed 81 fish species classified into 13 orders, 40 families, and 69 genera, along with their scientific names, English names, local names, and IUCN Red List status in Bangladesh (Table 1). Perciformes was the most prevalent order (55.42%), followed by Clupeiformes (20.44%), Cypriniformes (8.96%), and Siluriformes (8.13%). Anguilliformes, Aulopiformes, Beloniformes, Mugiliformes, Osteoglossiformes, Pleuronectiformes, Scorpaeniformes, Synbranchiformes and Tetraodontiformes were the least abundant and together constituted 7.05% of the overall species composition (Figure 2). Among the six stations, a maximum of 80 species was recorded at station 6 (S6) and a minimum of 63 species at station 3 (S3).

Fish diversity status

The concept of a biodiversity index is to use a single number to describe the diversity of a sample or community. Several diversity indices were calculated for the research areas to evaluate the fish diversity status, evenness, and species richness (Table 2, Figures 3 and 4). S6 had the highest H value (3.48), S1 had the second highest (3.42), and S2 had the lowest (3.29). Across months, the highest value was found in January (3.44), followed by July (3.38), with the lowest in October (3.23). The Simpson's dominance index (D) values ranged from 0.05 to 0.07, indicating low dominance, and hence high diversity, in the studied areas.
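As a point of reference, the indices reported in this and the following paragraphs can be reproduced with a short Python function. This is only a sketch of the definitions given in the methods, not the authors' actual workflow (the study used SPSS, R and PAST); the abundance vector is hypothetical, and the dominance index is written in the sum-of-squared-proportions form used by PAST, while a finite-sample variant, Σ ni(ni - 1)/[N(N - 1)], is also in common use.

import numpy as np

def diversity_indices(counts):
    # Shannon-Wiener H, Simpson dominance D (and 1-D), Margalef d,
    # and Buzas-Gibson evenness E from a vector of species counts
    n = np.asarray(counts, dtype=float)
    n = n[n > 0]
    N, S = n.sum(), n.size
    p = n / N
    H = -np.sum(p * np.log(p))   # Shannon-Wiener diversity
    D = np.sum(p**2)             # Simpson's dominance
    d = (S - 1.0) / np.log(N)    # Margalef species richness
    E = np.exp(H) / S            # Buzas-Gibson evenness
    return {"H": H, "D": D, "1-D": 1.0 - D, "d": d, "E": E}

print(diversity_indices([120, 85, 60, 44, 30, 18, 9, 5, 3, 1]))  # hypothetical counts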
In the studied area, the highest value of D was found at S5 (0.069) and the lowest at S6 (0.050); over the whole sampling period, the highest value was found in October (0.069) and the lowest in January (0.055). Simpson's index of diversity (1-D) ranges between 0 and 1; the highest 1-D value was computed for S6 (0.95), followed by S1 (0.943), S3 (0.941) and S4 (0.940), with the lowest value at S5 (0.931). Across months, the highest value was found in January (0.945) and the lowest in October (0.931). S6 (8.43) had the highest Margalef's index (d), followed by S1 (8.13), while S4 had the lowest (6.88). Over the entire sampling period, the maximum d value was observed in September (8.18), followed by August (8.08), with the lowest in May (6.48). The value of Gibson's evenness (E) varies between 0 and 1. The highest evenness value (0.43) was found at station S4, and the lowest value (0.39) at station S1. The lowest evenness value indicates the greatest species diversity, and on this basis S1 (0.397) is the area of highest species richness. Over the whole sampling period, the highest E value was obtained in May (0.489), followed by January (0.480), with the lowest in October (0.377). From Figure 3, it appears that the median value of average species richness for station S1 (0.17) is higher than for the other five stations. There is a larger spread of average species richness values for station S1 (ranging from 0.05 to 0.34), and the distribution for this station is symmetric. The average species richness exhibited significant spatial variation (one-way ANOVA, F(5,72) = 10.66, p ≈ 1.1 × 10⁻⁷ < 0.05), with station S1 having significantly higher species richness than the other five stations (Tukey's HSD test, p < 0.05). The analysis of similarity (ANOSIM) displayed significant (P < 0.05) variation in fish community among stations and months (Table 3). The fish communities at stations 1 and 6 differed significantly from those at stations 2, 4 and 5; likewise, stations 2, 4, and 5 exhibited significant differences from stations 1 and 6. No noticeable difference was observed between station 3 and the other stations. Among months, May, June, July, and August showed significant dissimilarity with November. June differed notably from October, November and December, and October showed a significant difference from June. Similarly, November showed significant differences with May, June, July and August. December differed significantly from June, July, and August, whereas September and January exhibited no significant differences from the other months. Cluster analysis identified separation at the 29% similarity level, both for months and for stations (Figure 5). Two major clusters were obtained: the first cluster consists of fish samples of station 4 with October, November and December, fish samples of station 5 with August, June, July, October, November and December, and fish samples of station 6 with November, December, June, July, August and October; the second cluster consists of fish samples of station 1 with October, July, November, August, December and June, fish samples of station 2 with June, July, August, October, November and December, and fish samples of station 4 with August, June and July.
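The dendrogram in Figure 5 was generated with PAST, and the text does not state the distance metric or linkage rule that was used. The sketch below is therefore only a plausible Python analogue, assuming Bray-Curtis dissimilarity with UPGMA (average) linkage, two common choices for community abundance data, and it runs on hypothetical counts.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage  # dendrogram() can plot the result

rng = np.random.default_rng(0)
abundance = rng.poisson(lam=4.0, size=(36, 81))  # hypothetical station-month samples x species

dist = pdist(abundance, metric="braycurtis")  # pairwise Bray-Curtis dissimilarities
tree = linkage(dist, method="average")        # UPGMA agglomerative clustering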
Conservation status

Of the 81 fish species found in the studied area, one species is critically endangered (CR), three species are near threatened (NT), eight species are endangered (EN) and eight species are vulnerable (VU) (IUCN, 2015) (Figure 6). The maximum number of threatened species was recorded at sampling site S6, followed by S1 and S5 (Figure 7). Rahman et al. (2016a) recorded 3 endangered, 3 critically endangered and 8 vulnerable fish species in a community of 48 species. On the other hand, Mohsin et al. (2014) found 3 endangered, 2 critically endangered, and 5 vulnerable fish species in the Andharmanik River. Both of these totals are lower than the current findings.

Socio-economic conditions of fishermen

The survey results indicate that the fishing community relies on fishing and fishing-related activities for its livelihood. Several criteria were considered in order to clearly understand their socio-economic status, such as sex, marital status, religion, age group, family size, education, fisherman type, housing condition, health facilities, drinking water facilities, sanitation, electricity, credit access, and annual income. Both male and female fishers were present at the study sites; among the respondents, 93% were male and 7% were female. Most of the respondents were married (96%) and 4% were unmarried. Muslims constituted the absolute majority of the fishermen in the study area: 86% were Muslim and the rest were Hindu. Knowing the age structure of fishermen is essential for estimating the prospective productive human resources. The study found that the largest age group involved in fishing was 31-40 years old (35%), followed by 41-50 years old (30%), 51-60 years old (23%), and those older than 60 (12%) (Figure 8). The study revealed that 50% of fishermen had 5-7 family members, 40% had more than 7, and only 10% had 4-5 family members (Figure 9). In the current study, it was found that the majority of the fishermen in the selected areas were illiterate (46%), 32% could only sign their names, 18% had only primary education, and 4% had received junior (class eight) education (Figure 10). About 74% of fishermen were full-time fishermen, about 22% were part-time fishermen, and 4% were subsistence fishermen (Figure 11). Among part-time fishermen, many are also engaged in agriculture and day-labor activities. It was evident from the data that 72% of fishermen had kacha (mud-built) houses and 28% had tin-shed houses. Fishermen's health facilities were poor in the study areas: 60% of fishermen's households depended on village doctors without adequate knowledge of medical science, 30% received health services from the Upazila health complex, and the remaining 10% received health services from MBBS doctors. According to the study, 100% of fishermen's households drink from tube wells, with 54% using their own tube well, 42% using a shared tube well, and 4% utilizing a neighbor's tube well. The sanitary conditions of the fishermen were not satisfactory: 60% of toilets were semi-paka, 32% were kacha, and 8% of the fishermen had paka (concrete) sanitation (Figure 12).
From the present survey, it was found that 78% of fishermen had electricity facilities, while the remaining 22% did not. National and local NGOs such as BRAC offer credit for the purchase of fishing gear and boats only to organized poor members. After repayment, 42% became self-sufficient and did not need financial help, but 14% borrowed money from their neighbors, 16% from relatives, 22% from NGOs and 6% from co-operatives for their fishing business (Figure 13). Annual incomes of the fishermen varied from BDT 24,000 to 50,000. The selected fishermen were divided into two classes based on annual income, with around 60% of the respondents earning between BDT 24,000 and 35,000 and 40% earning between BDT 35,001 and 45,000. The fishermen did not receive any kind of government assistance and, as a result, their living conditions were poor. NGO activity in the area was also limited.

Discussion

Within a single river, we compared the diversity pattern and community organization across different sampling stations. During the study period, a total of 81 fish species were discovered. Because of the varied environmental conditions in coastal water systems, different kinds of organisms were found at different locations in the research area. The species composition revealed a mixture of freshwater, marine and estuarine species, as the river is a convergence of freshwater, brackish water and coastal waters. Mohsin et al. (2014) conducted a one-year study of the fish fauna of the Andharmanik River and discovered 53 fish species belonging to 28 families, while Rahman et al. (2016) reported a total of 48 species under 10 orders and 26 families, signifying an enhancement of biodiversity in this coastal river over the last few years. In fact, the abundance of fish species has increased dramatically as a result of some recent incentive-based management initiatives, and fishermen are catching more species (Islam et al., 2016). Moreover, the number of fish species recorded here was lower than in some other research in Bangladesh. For example, Shafi and Quddus (1982) found 139 fish species in the marine and brackish waters of Bangladesh, Kundu et al. (2019) recorded 97 finfish from two riverine sanctuaries, Roy et al. (2016) identified 103 fish species at designated locations in Netrakona district, Hanif et al. (2015) recorded 98 species from southern coastal waters, Hossain et al. (2014) reported 128 fish species from Greater Noakhali, and Chowdhury et al. (2010) reported 98 species from the Naaf River. Furthermore, the current findings revealed a larger number of species when compared with Islam (2005), who identified 48 finfishes from Bangladesh's Chittagong coast, Nabi et al. (2011), who identified 45 finfish species from the Bakkhali estuary, and Rahman et al. (2017), who identified 47 species from the Agunmukha River. Similarly, Galib (2015) found 67 finfish species in the Brahmaputra River, Joadder et al. (2015) documented 71 species from the river Padma, Rahman et al. (2012) discovered 80 species in the Ganges River, and Galib et al. (2013) discovered 63 species in the Choto Jamuna River. Only 63 species were found in the Noakhali coastal area, according to Ullah et al. (2016). The depletion of fish biodiversity is regarded as a troubling challenge, and the only solution to this issue is conservation. The same causes of decreasing biodiversity were reported from the Padma and Tetulia Rivers by Hossain (2010) and Hossain et al. (2015a), respectively.
The diversity and richness indices revealed that the diversity of the fish fauna at different sampling points and in different months was practically identical (Figures 3 and 4). The Shannon-Wiener diversity index reflects the number of species, or species diversity, as well as their proportions. In contrast, the dominance and evenness indices quantify the fraction of common species and the relative number of individuals, respectively, in the sample. The calculated values of H in the present study among the different sampling stations and months were comparatively high, owing to the large number of species and the slight pollution of the coastal water (Table 2). Rahman et al. (2016a) recorded Shannon-Wiener index (H) values of 3.33-3.42 in the Andharmanik River. The Shannon-Wiener values (H) in the Naaf River estuary range from 1.63 to 3.41 (Chowdhury et al., 2011), and H values of 1.06 to 1.51 have been recorded from the Talma River. Galib et al. (2013) found a Shannon-Wiener index (H) of 3.71 in the Choto Jamuna River, and Alam et al. (2013) found H values from 3.29 to 3.49 in the Halda River. Seasonal fluctuations in nutrients at seagrass beds, which affect the cohabitation of several fish species, atmospheric wind patterns, and periodic fish migrations for breeding and reproduction are the major reasons for disparities in biodiversity indices in coastal areas. The Margalef index (d) is used to assess pollution levels and species richness across sampling locations; it varies according to the number of species present (Vyas et al., 2012). Site S6 and the month of September exhibited the highest Margalef index values (Table 2), suggesting substantially higher numbers of individuals than at the other sampling stations. The observed value of d in the current study is greater than the 4.72-5.24 reported by Rahman et al. (2016) and comparable to that of Hanif et al. (2015), who documented a high Margalef's index (d) of 7.48-8.67 in the southern coastal waters of Bangladesh. The dominance index (0.055-0.069) and evenness (0.377-0.480) values were almost the same among the different stations and months during the study period (Table 2). Galib et al. (2013) found about 63 species in the Choto Jamuna River, with calculated Margalef's index and evenness (E) values of 6.954 and 0.897, respectively. There was no significant difference in the Shannon (H), evenness (E), dominance (D) and Margalef (d) diversity indices. As a result, it is possible to conclude that seasonal differences in species diversity are a typical occurrence in the study area. In the current study, the occurrence of finfish assemblages was found to be nearly identical across stations and months; although their percentage contributions vary, the major contributing species are comparable for both locations and months. Several studies on the temporal and spatial patterns of fish diversity and assemblage structures in coastal ecosystems have been conducted throughout the world (Goswami et al., 2012). Although the current study gives a broad perspective of the spatial and temporal patterns of fish communities, a more extensive sampling approach would be needed to obtain a complete representation of the coastal ecosystem. Fish biodiversity conservation is currently concentrated mostly on endangered and economically important fish species. A number of steps have been taken to conserve fish biodiversity in Bangladesh and around the world, but we observed that such initiatives were insufficient in coastal areas.
Environments rich in endemic fish species should be designated as nature reserves in order to restore fish assemblages in riverine estuaries. The IUCN framework for assessing the conservation status of fish was used in the study. Fish diversity must be conserved in order to maintain environmental, nutritional, and socioeconomic balance (Lakra, 2010). The age structure, family size and type, occupation status, level of education, dwelling condition, drinking water facilities, hygiene practices, health services, access to credit, and monthly income of fishermen were all investigated. Fishers in the research area have a poor socio-economic situation and a poor livelihood structure. A few studies on the socio-economic condition of fishermen in different regions of Bangladesh were performed by Uddin et al. (2020), Hossain et al. (2015) and Hossain et al. (2014a), and the present findings agree with them, with few exceptions. The majority of fishermen (34%) are between the ages of 31 and 40, roughly 42% are illiterate, 72% live in huts, just 30% have a sanitary latrine, 34% are malnourished, 82% seek disease care from surrounding pharmacies, and only 14% have a monthly income of more than $200 (Paul et al., 2018).

Conclusion

The results of this study showed the spatial and temporal patterns of fish diversity and community structure, as well as the role of different species in these patterns. It is evident from the current study that the status of the diversity of the fish fauna in the Andharmanik River is stable. The coastal areas consist of a collection of essential living resources, which are indiscriminately exploited by inhabitants. As a result, sustainable management and conservation of estuarine and coastal resources, as well as their connected ecosystems, are required. There is no proper management scheme for the capture fishery of the Andharmanik River and the adjacent Bay of Bengal, and many species are declining drastically due to human interference, excessive tourism, contamination, and even global climate change. That is why appropriate rules and regulations should be implemented in a planned way to increase production, save the endangered species and improve the livelihood status of the people in the coastal area.

Declarations

Author contribution statement

Dulon Roy: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper. Nishat Binte Didar & Arafat Rahman Khan: Performed the experiments; Analyzed and interpreted the data. Smita Sarker: Conceived and designed the experiments; Analyzed and interpreted the data. Gulshan Ara Latifa: Contributed reagents, materials, analysis tools or data; Wrote the paper.

Funding statement

This work was supported by an 'NST Fellowship' from the Ministry of Science and Technology, and partially by grants from Jagannath University, Dhaka-1100, Bangladesh.

Data availability statement

Data are available on reasonable request.

Declaration of interests statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.
2022-07-01T15:08:02.791Z
2022-06-28T00:00:00.000
{ "year": 2022, "sha1": "f87d8b399bc759f017c7866670a4ec4a9801a264", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "024693014672d73f5d6d98f8ed5f9c1d359e52c8", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
232224021
pes2o/s2orc
v3-fos-license
Patient-Specific Quality Assurance Using a 3D-Printed Chest Phantom for Intraoperative Radiotherapy in Breast Cancer

This study aims to confirm the usefulness of patient-specific quality assurance (PSQA) using three-dimensional (3D)-printed phantoms in ensuring the stability of IORT and the precision of the treatment administered. In this study, five patient-specific chest phantoms were fabricated using a 3D printer such that they were dosimetrically equivalent to the chests of actual patients in terms of organ density and shape around the given target, where a spherical applicator was inserted for breast IORT treatment via the INTRABEAM™ system. Models of lungs and soft tissue were fabricated by applying infill ratios corresponding to the mean Hounsfield unit (HU) values calculated from CT scans of the patients. The two models were then assembled into one. A 3D-printed water-equivalent phantom was also fabricated to verify the vendor-provided depth dose curve. Pieces of EBT3 film were inserted into the 3D-printed customized phantoms to measure the doses. A 10 Gy prescription dose defined at the surface of the spherical applicator was delivered and measured with EBT3 films parallel and perpendicular to the beam axis. The shapes of the phantoms, CT values, and absorbed doses were compared between the expected and printed ones. The morphological agreement among the five patient-specific 3D chest phantoms was assessed. The mean difference in HU between the patients and the phantoms was 2.2 HU for soft tissue and −26.2 HU for the lungs. The dose delivered at the surface of the spherical applicator showed a percent error of −2.16% ± 3.91% between the measured and prescribed doses. In a depth-dose comparison using the 3D-printed water phantom, the uncertainty in the EBT3 film measurements decreased as the depth increased beyond 5 mm, and good agreement in terms of absolute dose was noted between the EBT3 film and the vendor data. These results demonstrate the applicability of the 3D-printed chest phantom for PSQA in breast IORT. By virtue of its design, this strategy is generalisable to other ML-independent QA settings and, more broadly, enables verification from available structural information.

INTRODUCTION

Intraoperative radiation therapy (IORT) is a treatment modality that entails accelerated partial breast irradiation for early-stage breast cancer patients (1, 2). IORT generally refers to the direct delivery of a single-fraction dose of highly localized radiation to the periphery of the lumpectomy bed during surgery (3). Its major advantage is that it offers direct visualization of the tumor bed without incurring the risk of a marginal miss. This helps minimize damage to healthy tissue by reducing the volume of, and dose to, the normal surrounding tissue (4). Furthermore, compared with conventional whole-breast irradiation (WBI) over 5 to 5.5 weeks followed by a tumor-bed boost, or hypofractionated WBI over 3 weeks with a boost, IORT is completed in one day, and intraoperative irradiation allows for the immediate treatment of the surgical bed in about 30 min, avoiding a delay between surgery and external beam radiotherapy. This is convenient for the patient and helps reduce cost. Treating a smaller volume of normal tissue instead of performing WBI reduces the potential lung and cardiac toxicities arising from radiation treatment and enhances tumor control (5-7). IORT requires specialized radiotherapy equipment.
In this regard, INTRABEAM™ (Carl Zeiss Surgical GmbH, Oberkochen, Germany) uses a low-energy (50 kV) X-ray generator to provide partial-breast irradiation (8). This system uses spherical applicators to deliver a uniform dose on the inner surface of the breast lumpectomy cavity and irradiates high-dose (10-20 Gy) beams at once. Therefore, it is essential to ensure safe and accurate delivery through a patient-specific quality assurance (PSQA) process that verifies that the treatment device is physically capable of delivering the expected dose distribution prior to patient treatment. Because the radiation dose cannot be measured directly in the patient, it is common to create phantoms that mimic human radiation characteristics. Patient-specific dose measurements are often performed using radiation therapy phantoms combined with various dosimeters. These phantoms are made of homogeneous materials that simulate representative organs. However, commercially available phantoms for IORT are not supported for clinical use. Moreover, because most IORT clinics do not have a treatment planning system (TPS), a thorough understanding of the dose distribution of IORT is essential for safe, effective, and efficient treatment delivery. Special attention needs to be paid to all aspects of the treatment for each patient. However, it is difficult to perform accurate predictions regarding IORT because the volume of the tumors removed may vary depending on the surgery outcome. Thus, pretreatment planning and PSQA are limited (9,10). There have been attempts to manufacture customized objects incorporating 3D printing technology into various applications of radiation therapy (11-17). In this manner, to overcome the limitations of PSQA for IORT, we ensure the stability of IORT by creating a PSQA phantom via 3D printing. This study investigates the feasibility of verifying IORT dosimetry using a 3D-printed water-equivalent phantom as well as patient-specific 3D chest phantoms fabricated by simulating the actual structure of the body of the patient around the target. We explore the effectiveness of 3D-printed phantoms fabricated by considering the infill ratio corresponding to the average HU value derived from the patient's CT. We also examine the qualification and quantification of the depth dose and the dose administered on the surface of the applicator to ensure that the delivered dose matches the expected dose.

INTRABEAM™ System

The INTRABEAM™ system consists of a miniaturized accelerator (XRS) that accelerates electrons through a 10-cm drift tube, with a maximum voltage of 50 kV, onto a gold target where low-energy photons are produced and then emitted isotropically. An internal radiation monitor is used to detect the X-ray photons emitted in the direction of the cathode and record the dose output in real time. The miniaturized accelerator is inserted into the arm of the INTRABEAM carrier, which can be moved smoothly to any position in the operating room owing to the integral casters located at its base. Weight compensation and six axes provide sufficient freedom to place the miniaturized accelerator in any position in 3D space for access to the targeted area. Electromagnetic brakes hold the miniaturized accelerator in the exact set position during treatment.
The operator can monitor the dose being delivered at any time throughout the treatment through the online dose monitoring data displayed on the treatment screen of the control terminal on the INTRABEAM cart. Spherical applicators are used for the intracavitary or intraoperative delivery of radiation to the tumor bed, e.g., during breast-conserving surgery. The applicator fills the cavity created by the excision of the tumor. The tissue on the tumor bed adheres to the applicator via surface tension. The probe tip is centered within the applicator and, therefore, at the tumor cavity. The INTRABEAM spherical applicators, which are reusable and sterilizable, are available in 5 mm increments, with diameters ranging from 15 mm to 50 mm. In this study, applicators with a diameter of 35 mm were used.

Absolute Dosimetry Using Zeiss Water Phantom

Zeiss supplies a special water phantom consisting of a 3D translational stage for precise source positioning and a PTW 34013 soft X-ray ionization chamber (18). To calculate the dose rate in water for the XRS using this phantom, Eq. 1 is suggested in the user manual (19):

DR_w^Zeiss(z) = N_k × K_Q × k_(K_Q→D_w) × M_raw(z) × P_TP / t, with t = 60 s,   (1)

where N_k is the ion chamber calibration factor (Gy/nC), M_raw is the ionization charge (C) collected in 60 s for a chamber located at depth z in water, and P_TP is the correction factor for room temperature (T) and pressure (P) at the time of dose measurement. The beam quality correction factor, K_Q, was set to unity (K_Q = 1) based on the fact that the T30 spectrum used as a reference X-ray beam, with E_eff = 16.4 keV and HVL = 0.43 mm Al, best matched the INTRABEAM spectrum (18). k_(K_Q→D_w) was the chamber conversion factor that converted air kerma measurements into doses in water for the chamber in the T30 spectrum (k_(K_Q→D_w) = 1.045). The manufacturer provided the calibration depth dose curve, where the measured DR_w^Zeiss was plotted as a function of depth z.

Figure 1 illustrates the schematic design of the customized chest phantom fabricated by the 3D printer. CT image data for each patient, in the Digital Imaging and Communications in Medicine (DICOM) file format, were used to create a virtual 3D-printed chest phantom. A CT slice was collected every 3 mm and reconstructed as 1-mm-spaced slices. Modifications to the CT images were performed using the MIM Maestro software (version 6.1, MIM Software Inc., USA). Seed-region growing, followed by a manually adjusted 2D brush, was used to select the region of interest (ROI) that covered one side of the lung close to the tumor and the ROI that corresponded to the soft tissue including the breast. The mean HU values of voxels in each ROI were calculated.

Fabrication of 3D-Printed Customized Phantom

3D Slicer was used to define the initial virtual object to form the 3D chest phantom based on the patient's CT data. The virtual object, with surface information in terms of triangular meshes, was stored in the stereolithography (STL) file format, which could be read by the 3D printer software. Using Blender, the converted raw 3D object was refined by disregarding any defective meshes using the built-in "shrink-wrap" function. In addition, we prepared a spherical applicator space by considering the size of the tumor to be removed from the site where the IORT applicator would be inserted. The phantom was also formed in four parts so that pieces of the EBT3 film could be placed horizontally and vertically between parts.
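For illustration, the dose-rate relation of Eq. (1) above can be scripted directly. The following is a minimal Python sketch, assuming the raw chamber reading is the charge collected over the 60 s interval; the function name and the example numbers are ours, not the vendor's.

```python
# Minimal sketch of the dose-rate calculation in Eq. (1); variable names
# and example values are illustrative, not taken from the paper.

def dose_rate_in_water(m_raw_nC, n_k, p_tp, k_q=1.0, k_conv=1.045, t_s=60.0):
    """Dose rate in water (Gy/s) for the INTRABEAM XRS.

    m_raw_nC : charge (nC) collected over t_s seconds at depth z
    n_k      : ion chamber calibration factor (Gy/nC)
    p_tp     : temperature-pressure correction factor
    k_q      : beam quality correction factor (unity for the T30 spectrum)
    k_conv   : air-kerma-to-dose-in-water conversion factor (1.045)
    """
    return n_k * k_q * k_conv * m_raw_nC * p_tp / t_s

# Example: a hypothetical reading of 12 nC in 60 s with N_k = 1.1 Gy/nC
rate = dose_rate_in_water(m_raw_nC=12.0, n_k=1.1, p_tp=1.012)
print(f"dose rate: {rate * 60:.2f} Gy/min")
```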
Based on this new surface, the 3D-printable customized chest phantom was fabricated using a fused deposition modeling (FDM) 3D printer (DP200; 3D WOX, Sindoh, South Korea) that employed a polylactic acid (PLA) filament with a physical density (ρ) of 1.25 g/cm^3. The printing parameters comprised a speed of 20 mm/s and a layer thickness of 0.2 mm. The other parameters were determined after the calibration. The infill ratios of the FDM prints were determined from the calculated mean HU values of the soft tissue and the lung using the correlation curve between the HU and the infill ratio. The infill ratio, which can range from 0% to 100%, is the ratio of the volume of printed thermoplastic to the total volume (the remainder being air). To create the correlation curve, 10 rectangular samples (width × length × height = 40 mm × 40 mm × 20 mm) were printed in a grid pattern from PLA using the FDM method, with infill ratios varying from 10% to 100% in increments of 10 percentage points. All 10 samples were scanned to a thickness of 1 mm using a Siemens SOMATOM Definition AS CT Scanner (Siemens Healthcare, Erlangen, Germany), at a voltage of 120 kV and a current of 10 mA. All HU measurements, such as the maximum, minimum, and mean, were obtained using an ROI of 30 mm × 34 mm × 15 mm on the scanned CT image. Figure 2 shows the linear relationship between the infill value (%) and HU as the infill ratios of the 3D printer were increased from 10% to 100%. For the linear trend line, we derived the equation y = 11.438x − 1,005.9, with an R² value of 0.997. As a result, varying the infill ratios from 10% to 100% changed the average HU from −882 to 148. The FDM lung model and the soft tissue model were fabricated by applying infill ratios corresponding to the calculated mean HU values. We then assembled the two models into one. The 3D-printed water-equivalent phantom was also fabricated with an infill ratio corresponding to 0 HU to verify the vendor-provided depth dose curve. As shown in Figure 3, the water-equivalent phantom was sectioned into upper and lower parts, including a space for the spherical applicator. A piece of EBT3 film was placed between the separate parts, perpendicular to the spherical applicator.

Gafchromic EBT3 Film Calibration

A Gafchromic EBT3 film was used for dose measurement with the patient-specific 3D chest phantoms. The EBT3 film consisted of a 28-µm-thick active layer and 125-µm-thick protective layers covering the active layer. Scanning the film before and after irradiation and measuring its optical density allowed us to measure the dose delivered to the film. The optical density of the exposed film was converted into a dose via the dose calibration curve. Dose calibration was performed using a parallel-plate ionization chamber (PTW 23342, Germany) and the EBT3 film on the same INTRABEAM™ system through a conventional technique (20), with exposures ranging from 0 to 20 Gy. The ionization chamber was positioned on the surface of the EasyCube® phantom (Sun Nuclear Corporation, FL, USA), which consisted of water-equivalent slabs of different dimensions and a customized housing holder. The dimensions of the phantom were 16 cm × 16 cm × 8 cm, as shown in Figure 4.
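Since the fitted trend line is linear, it can be inverted to pick the infill ratio for a desired mean HU. A minimal sketch follows, assuming the fit y = 11.438x − 1,005.9 holds over the calibrated 10-100% infill range; the helper name and the example HU targets are illustrative.

```python
# Inverting the fitted correlation y = 11.438 x - 1005.9 (HU as a function of
# infill percentage x) to choose an infill ratio for a target mean HU.
# A sketch only; valid within the calibrated 10-100% infill range.

def infill_for_hu(target_hu, slope=11.438, intercept=-1005.9):
    """Return the infill percentage that reproduces a target mean HU."""
    x = (target_hu - intercept) / slope
    if not 10.0 <= x <= 100.0:
        raise ValueError("target HU outside the calibrated infill range")
    return x

# Example targets comparable to the tissue HU values reported in the paper
print(f"soft tissue (-50 HU): {infill_for_hu(-50):.1f}% infill")
print(f"lung (-700 HU):       {infill_for_hu(-700):.1f}% infill")
print(f"water (0 HU):         {infill_for_hu(0):.1f}% infill")
```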
The absolute dose outputs within the range of doses of interest were measured to determine the durations of treatment in terms of absorbed doses on the phantom surface in contact with the spherical applicator (with a diameter of 35 mm). Each piece of the EBT3 film placed on the surface of the phantom was exposed to one of nine absorbed doses (0, 0.2, 0.5, 1, 2, 5, 10, 15, and 20 Gy) to cover the full dynamic range of the film. All measurements of the film were performed twice to verify the reproducibility of the results. The exposed EBT3 films were digitized using the commercially available Vidar Dosimetry Pro Advantage Red digitizer. The results were analyzed using the RIT version 6.1 software package (RIT, Denver, CO, USA). Uniform ROIs corresponding to the center of each film (10 × 10 mm² subsets) were chosen for 16-bit red channel calibration. A third-degree polynomial function was used to fit the calibration curve of the film, as shown in Figure 5. To obtain precise and reproducible film dosimetry results, the films were scanned at a fixed irradiation-to-scanning time of 24 h and always at the same orientation, because process consistency is crucial to reducing potential uncertainties.

Measurements and Data Analysis

The Institutional Review Board of the Gangnam Severance Hospital, Korea (IRB No. 3-2017-0033), approved this prospective study in accordance with ethical guidelines and the Declaration of Helsinki. All measurements were carried out in the 3D-printed water-equivalent phantom as well as in the patient-specific 3D chest phantoms created based on five patients receiving IORT. In the water-equivalent phantom, a piece of EBT3 film was exposed parallel to the axis of the beam by placing it along the border of the space for the spherical applicator and between the upper and lower parts of the phantom. This made it possible to bring one side of the film into direct contact with the applicator. The depth dose measured using the film was compared with the depth dose curve provided by the supplier. For the patient-specific 3D chest phantoms, a piece of EBT3 film was placed on a horizontal plane perpendicular to the axis of the radiation beam. The central region of the piece of film was in direct contact with the surface of the spherical applicator, which had a diameter of 35 mm. The other piece of the film was inserted between separate parts of the phantom in a vertical plane parallel to the axis of the radiation beam (Figure 6). The treatment duration was calculated such that a 10 Gy dose was delivered on the surface of the applicator. All exposed films were processed to quantify the dose distributions via the dose calibration curve using RIT. A 5 × 5 median filter was applied to each film, and random outliers were excluded from the analysis. The point dose measured from the film placed on the horizontal plane was determined via the maximum value extracted from the dose histogram of each irradiated film. Depth dose curves were acquired at depths from 0 to 30 mm through the profile of a piece of film placed on the vertical plane.

Evaluations With a 3D-Printed Water-Equivalent Phantom

For precise dosimetric examination, it is necessary to ensure that the 3D-printed density matches the preset density of the infill (14). A CT scan of the 3D-printed water-equivalent phantom was performed to calculate the actual density by comparing it with values in an image value-to-density table.
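As an aside on the film workflow above, the third-degree polynomial calibration step can be reproduced with a short script. This is a sketch only: the net optical densities below are hypothetical placeholders, not the measured calibration data.

```python
# Sketch of the third-degree polynomial film calibration: fit dose as a
# function of net optical density using the nine calibration exposures.
# The optical-density values below are placeholders, not measured data.
import numpy as np

doses_gy = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 15.0, 20.0])
net_od = np.array([0.0, 0.02, 0.05, 0.09, 0.16, 0.33, 0.52, 0.65, 0.74])  # hypothetical

coeffs = np.polyfit(net_od, doses_gy, deg=3)   # third-degree fit, as in the paper
calibration = np.poly1d(coeffs)

# Converting a scanned pixel's net optical density to dose:
print(f"dose at net OD 0.40: {calibration(0.40):.2f} Gy")
```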
The 3D-printed water-equivalent phantom exhibited a mean of 7 ± 59 HU, in the range from −190 HU to 195 HU. The actual average density was 1.003 g/cm^3. Table 1 shows the differences in dose at each depth between the data provided by the vendor and those obtained from the EBT3 film parallel to the axis of the beam, obtained using the 3D-printed water-equivalent phantom. The average of three measurements was reported as the representative estimate, and its estimated uncertainty was based on the absolute value of the difference between each measurement and the average value. The average difference between the vendor and EBT3 film data was 18.3 ± 24.9 cGy (1.8% ± 2.5%), with a difference of up to 64.3 cGy (6.4%) found at a depth of 5 mm. The mean uncertainty in the EBT3 measurement was 16.8 cGy (a maximum of 83.9 cGy at a depth of 0 mm). The measured dose was underestimated at a depth of 0 mm, which corresponded to the border of the piece of film; specifically, this underestimation occurred because of increased uncertainty due to the damage caused by cutting the EBT3 film into pieces. As the depth increased beyond 5 mm, the uncertainty in the measurements performed using the EBT3 film decreased, and a good agreement in terms of absolute dose was obtained between the EBT3 film and the vendor data.

Evaluations With 3D-Printed Patient-Specific Chest Phantoms

Chest phantoms of five patients were fabricated for PSQA measurements. IORT patients were randomly selected, and the 35-mm applicator was used for the PSQA measurements. Table 2 shows the location of the dose measurement for each patient and the distance between the tumor and the lung. The mean and standard deviation of the HU values of the voxels inside each ROI (soft tissue and lung) were calculated. Based on an infill ratio corresponding to the mean HU value, the customized chest phantoms were fabricated using the 3D printer, as shown in Figure 7. The 3D-printed soft tissue and lung were sectioned into four and two parts, respectively, to facilitate the reproducible positioning of the spherical applicator and the pieces of film. Table 3 shows the HU values of the soft tissue and the lung parts for the five patient-specific 3D chest phantoms. On average, the differences in HU between the patient and the phantom were 2.2 HU for soft tissue and −26.2 HU for the lung. This was in close agreement with the expected HU in terms of the calculated doses. Because each part was printed sequentially, it took 19-50 h to fabricate a single phantom depending on the amount of filament used. The weight of the filament used ranged from 329 g to 975 g. The 3D printing process required only approximately 30 min of labor to fill, clean, assemble, and verify the printed phantom. The infill ratio of printing ranged from 83% to 85% for soft tissue and from 18% to 37% for the lung.

FIGURE 6 | Dose measurement using EBT3 film and a patient-specific chest phantom. The spherical applicator surface dose was measured by placing the EBT3 film on a horizontal plane perpendicular to the axis of the radiation beam (left), and EBT3 film was placed on a vertical plane parallel to the axis of the radiation beam to measure the depth dose away from the applicator surface (right).

The Dice similarity coefficient (DSC) was considered to quantify the similarity between the corresponding structures. The DSC ranges from 0 (no overlap) to 1 (perfect overlap). A large DSC value indicates good overlap between the 3D-printed phantom and the patient.
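A minimal sketch of the DSC computation on two binary structure masks follows; the toy masks are illustrative, not patient data.

```python
# Minimal sketch of the Dice similarity coefficient (DSC) between two binary
# structure masks (e.g., patient vs. phantom lung contours on fused CTs).
import numpy as np

def dice(mask_a, mask_b):
    """DSC = 2|A intersect B| / (|A| + |B|); 0 = no overlap, 1 = perfect overlap."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example with two overlapping 2D masks
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[12:42, 12:42] = True
print(f"DSC = {dice(a, b):.3f}")
```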
As shown in Figure 8, CT images of the patients and those of their corresponding phantoms were visually matched and then fused to calculate the DSC in the soft tissue and the lung, respectively. On average, the DSC between the patient and the phantom was 0.97 for the soft tissue and 0.97 for the lung. Table 4 shows the differences between the 1,000 cGy dose administered on the surface of the 35-mm spherical applicator and the dose measured from the pieces of film perpendicular to the axis of the beam. The film measurements ranged from 930.2 cGy to 1,025.6 cGy for the five patient-specific chest phantoms. An evaluation of these measurements versus the prescribed dose indicates that the mean dose difference was −21.6 ± 39.1 cGy, and the mean percentage error was −2.16% ± 3.91%. Figure 9 shows the depth dose measurements of the five PSQA phantoms using pieces of film parallel to the axis of the beam. The measured dose at shallow depths along the border of the film was underestimated. A depth of 2 to 3 mm was found to be adequate to initiate the depth dose curve. This implies that the first few millimeters along the border of the film could not be used when performing parallel film measurements because the EBT3 film was cut into pieces. This finding is consistent with those of previous studies (21,22), which have reported damage to the film pieces. Unlike megavoltage X-rays, the kilovoltage depth dose curves produced by INTRABEAM™ exhibit a steep fall-off in dose with depth. This is beneficial to the organs at risk in the environment of the lesion and allows radiation oncologists to prescribe higher doses to the lesion itself.

DISCUSSION

Few studies have examined PSQA for IORT. This subject is of particular concern in the context of single-session radiotherapy techniques such as IORT, where there is no opportunity to compensate for possible errors in later treatment sessions. In light of this, this study developed a simple and non-toxic PSQA method for IORT using 3D printing. The results confirm the feasibility of PSQA for IORT using a 3D printer and a conventionally used PLA filament to fabricate a patient-specific chest phantom. FDM printers are the most economical 3D printers, and they will be advantageous for future clinical applications. In this study, only one FDM-type 3D printer was used to fabricate the patient-specific chest phantoms. By changing the infill ratio, the fabricated phantoms can reproduce the lung (−803 to −582 HU) and soft tissue (−61 to −37 HU), both of which have a lower physical density than water. Although differences in HU values between the patients and the phantoms were observed in our experimental environment (ΔHU of soft tissue = −12 to 19 HU; ΔHU of lung = −45 to 2 HU), the dosimetric influences of these differences were not significant, based on the fact that it is appropriate to set tolerances of ±20 HU for soft tissue and ±50 HU for lungs when restricting changes in dose in the treatment plan to within ±1% (23). For dose measurements using the patient-specific chest phantom assembled in the manner described above, the films were exposed along two directions: perpendicular and parallel to the axis of the beam. On the surface of the applicator, the horizontal plane was better than the vertical plane because the border of the piece of film could be damaged.
In a depth dose comparison using a 3D-printed water phantom, the average difference between the measurements provided by the vendor and those of the actual EBT3 film was the greatest at a depth of 5 mm, and a substantial dose gradient was observed between 0 and 10 mm; this is because the dose decreased by 75% from the surface of the applicator to a depth of 10 mm. In this case, the main cause of uncertainty in the measurement was the positioning of the detector. As has been determined in previous studies (24,25), both ionization chamber and film dosimetry can be performed with an accuracy of 5%-10% when considering the largest error due to the positioning of the dosimeter. Based on this fact, the steep dose gradients produced by the INTRABEAM system yielded a 10% difference in dose for a 1 mm difference in distance. This inherent variation, combined with the uncertainties involved in practical measurements using different dosimeters, may preclude even a 5% tolerance in the depth doses. Therefore, a 10% dose difference may be acceptable at a steep dose gradient. The morphology of the patients was accurately reproduced to fabricate the customized phantoms. The designs produced here were found to be robust and can be easily modified to add strength as needed. If 3D-printed parts break, they can be accurately reproduced with minimal additional labor. With more time, it will be possible to produce 3D-printed phantoms with even greater heterogeneity. This has considerable potential benefits at the stage of clinical adaptation. Therefore, this 3D printing technology can be employed to fabricate accurate shapes that reflect the spherical applicator and the patients' anatomical structure, which in turn makes it possible to simultaneously review the absolute dose to the lesion and to the surrounding organs at risk. This improves patient safety through PSQA prior to IORT. The digital workflow ensures the accuracy and reproducibility of the procedure. This study on prototype phantoms for IORT-specific PSQA involves a few practical considerations. First, our study focused on structures with low densities, excluding bone structures, because the surface of the spherical applicator was attached to the soft tissue between the ribs. To incorporate the bone structure using 3D printing, we can assemble the structure using an FDM lung and soft tissue model and replicate the CT values of the bones using color-jet printing materials with more than 1,000 HU (26). For further investigation, contrast agents with different concentrations can be injected into voids inside the 3D-printed part to obtain a higher range of up to +1,000 HU (27). Second, the experiment used only five cases, as printing individual phantoms is time and resource intensive. On average, a single phantom fabrication took 35 h. This 3D printing process might not be practical for general PSQA. However, given the rapid pace of development of 3D printing at present, these technologies are expected to improve dramatically over the next few years to provide better convenience, speed, and precision. Third, standard printing procedures should be used to ensure consistency. Special caution should be taken regarding the first layer of each print, as distortion can be caused by the material not completely sticking to the print bed. Our study has some limitations. First, the PSQA phantom fabrication described in this study may not adequately reproduce all the uncertainties arising in clinical settings.
In the breast IORT procedure, a spherical applicator is placed inside the tumor cavity, and then the wall of the tumor cavity is pulled firmly to the applicator surface using a purse-string suture. This process can lead to soft tissue compression due to pre-fixation pressure as well as potential air gaps near the applicator surface in some areas of the breast. Additionally, whereas our study assumed that the applicator sits in the intercostal space, so that bone-equivalent heterogeneity could be excluded from the phantom, in some cases the applicator may end up positioned on the ribs during this fixation process. In this case, there may be a significant dose to the ribs. Second, skin and subcutaneous tissue doses are a significant problem with breast IORT. In our study, skin measurements were not taken into account in the PSQA phantom because an optically stimulated luminescence dosimeter was clinically used for in vivo dosimetry to detect radiation doses delivered to the skin during breast IORT (28). However, the most common late toxicity, with a maximum appearance four years following treatment, is telangiectasia. The development of telangiectasia is strongly correlated with the doses applied to the subcutaneous vessels. Measuring the skin dose on the PSQA phantom can help prevent the onset of severe telangiectasia after several years. Therefore, PSQA phantom development must incorporate this skin measurement prior to patient treatment; this leads to a more secure treatment through personalized pretreatment dosimetry investigations. Nevertheless, owing to the absence of sufficient QA data regarding IORT, our method for creating a simplified phantom composed of soft tissue and lung models could be an important milestone in mimicking personalized dosimetry.

CONCLUSIONS

In this study, patient-specific 3D-printed chest phantoms were successfully constructed to simulate the IORT-related dose distributions to cancerous tumors, the surgical tumor bed, and the surrounding low-density organs such as the lungs and soft tissue. This allows not only for the prediction of the depth dose at various distances from the spherical applicator, but also for the verification of the dose by comparing the prescribed dose to the expected dose. The proposed 3D printing methodology provides a viable and inexpensive method for fabricating variable-density solid phantoms for breast IORT-specific PSQA. This enhanced precision offers new opportunities for advancing IORT.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Institutional Review Board of the Gangnam Severance Hospital, Korea (IRB No. 3-2017-0033), in accordance with ethical guidelines and the Declaration of Helsinki. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

AUTHOR CONTRIBUTIONS

The data collection, conception, design, and drafting of the manuscript were performed by YHC and HL. The data analysis and interpretation were performed by HL, KP, and IJL. The final manuscript was edited by IJL. Patient enrollment was performed by YAC, JWK, KRP, and IJL, who also contributed to useful discussions on the manuscript.
All authors contributed to the article and approved the submitted version.
Grid Convergence Study for Detached-Eddy Simulation of Flow over a Rod-Airfoil Configuration Using OpenFOAM

The rod-airfoil is a benchmark configuration for simulating Airfoil-Turbulence Interaction Noise (ATIN). The numerical simulation is modelled using unsteady Detached Eddy Simulation (DES). The grid refinement involves two stages of assessment. The first compares the sensitivity of two reasonably estimated grids to the flow behaviour of the case. Based on the findings from the first stage, further grid refinement proceeds in stage two. In stage two, a minimum of three different grid resolutions (fine, medium and coarse) are considered in order to investigate grid independence. Richardson extrapolation and the Grid Convergence Index (GCI) are introduced to quantitatively evaluate the grid independence. Based on the results for these three different grids, a monotonic convergence criterion has been achieved. The reduction in GCI value indicates that the grid convergence error has been significantly reduced, with the fine grid having a GCI value of less than 0.5%.

Introduction

An airfoil is a common engineering geometry. Its applications, especially in wind turbines, turbofan engines and helicopter rotors, however, induce the generation of airfoil-turbulence interaction noise (ATIN) [1,2,3]. The airfoil experiences an undulation of lift as a result of unsteady pressure fluctuations produced by the interaction of turbulence in the upstream flow with the airfoil leading edge, and this is consequently responsible for the ATIN [4]. The ATIN is mostly noticeable at lower frequencies because larger turbulent structures are the strongest contributors in its noise generation mechanism [5]. The turbulence physics upstream of the airfoil is an important feature in ATIN studies as it governs the noise generating mechanism. Thus, ATIN reduction should be correlated with the upstream turbulent characteristics. However, this correlation has not yet been discussed comprehensively in the open literature. In most ATIN investigations, the upstream turbulence is generated and conveyed downstream to the airfoil by a vortex generator so that the same conditions of ATIN can be mimicked as in the real configuration [6,7,8]. Extensions of unsteady CFD techniques to the prediction of ATIN generated by high Reynolds number flows in complex geometries have first to be benchmarked on relevant test cases. Such a test case must be based on a geometry that contains some of the aerodynamic mechanisms encountered in ATIN applications, but remains simple enough from the computational point of view in order to serve parametric study purposes. The rod-airfoil configuration is a relevant benchmark case; Jacob et al. [9] were among the pioneers who introduced this configuration. This is because, at high Reynolds numbers, the rod sheds the well-known von Karman vortex street, which acts as an oncoming disturbance onto the airfoil. Jacob et al. [9] highlighted strong three-dimensional effects responsible for spectral broadening around the rod vortex shedding frequency in the subcritical regime, and identified that the airfoil leading edge was the main contributor to the noise emission in a rod-airfoil configuration due to vortex-structure interaction. Moreover, further understanding of the details of rod-airfoil interactions has been gained through numerical simulations too. Previous studies [10-15] found good agreement between numerical calculations and experiments on ATIN.
Hence, the current study aims to provide a systematic approach for a grid convergence study of the flow around a rod-airfoil using the Grid Convergence Index (GCI), which is based on Richardson extrapolation.

Test case and mesh description

The investigated case is a rod-airfoil configuration in a three-dimensional uniform incompressible flow at constant free-stream velocity. The test case is illustrated in Figure 1: a rod of diameter d with a downstream airfoil of chord length 9.5d, immersed in a fluid of constant free-stream velocity U∞. The geometrical parameters and flow dynamic quantities are non-dimensionalised by d and U∞, respectively. The gap distance between the rod and the airfoil is set to 3.5d since, according to Yong Li et al. [16], the airfoil then experiences fully developed vortices in the rod wake. The upstream, downstream, top and bottom boundary distances are also indicated in the schematic diagram of the computational domain. The span distances are 10d for the early-stage grid refinement and 3.5d in the later stage. The grid convergence performance was assessed in two stages. Table 1 presents the number of grid cells for the different cases investigated in the current study. A more accurate approach to resolving turbulence is Large Eddy Simulation (LES) compared to the Reynolds-Averaged Navier-Stokes (RANS) approach. However, the current study implemented Detached Eddy Simulation (DES) computations of the flow over the rod-airfoil due to the high computational cost of LES. All flow conditions and settings are summarized in Table 2. Computations in this work were performed using OpenFOAM. In particular, the merged PISO-SIMPLE algorithm known as the pimpleFoam solver is used. The convergence criterion for the pressure and velocity solutions is set so that the residuals fall below tolerances of 10^-9 and 10^-8, respectively, at each time step. A second-order backward scheme is used for the temporal discretisation. The convection term is discretized using the Gauss linearUpwind grad(U) scheme and the viscous term is discretized using the Gauss Gamma scheme. The time step is set so as to keep the CFL value less than unity.

Grid Refinement Stage 1

At this stage, two cases are compared to assess the reliability of the grid refinement: one with the coarsest grid and the other with a smaller near-wall cell height. The flow visualisation and the y+ values from each case are assessed at this stage. Figure 2 depicts the grid distribution of the cases together with the y+ values over the geometry. The wall y+ is a non-dimensional wall distance often used in CFD, characterizing the ratio between turbulent and laminar influences in a cell. Near-wall regions have larger gradients in the flow variables and in the momentum. The simulation adopts wall functions in the calculation, so the y+ values should fall in the range 30 ≤ y+ ≤ 300 [17]. Values of y+ ≈ 30 are most desirable for wall functions [18]. Hence, based on the comparison of y+ values, the grid of case B is preferable since the rod's y+ value is nearer to 30. The flow visualization in the spanwise plane just upstream of the airfoil (0.5d in front) is also compared, as shown in Figure 4. The flow behaviour in the spanwise direction is useful for the inspection of the spanwise correlation length.

Grid Refinement Stage 2

Three different grid resolutions were used in this stage. The coarsest is case C, case D has the medium grid, and the finest is case E. The upstream, top and bottom boundaries are 10d away from the rod-airfoil, while the outlet is 20d away, and the span length is 3.5d.
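As a practical aside, the first-cell height needed to land near the desired y+ ≈ 30 can be estimated before meshing. The sketch below uses a flat-plate skin-friction correlation as a rough stand-in for the rod boundary layer; the free-stream velocity and rod diameter are illustrative values, not taken from Table 2.

```python
# Rough estimate of the first-cell height needed for a target wall y+,
# using a flat-plate skin-friction correlation. A sketch under stated
# assumptions; the example rod diameter and velocity are illustrative.
import math

def first_cell_height(y_plus, u_inf, length, nu=1.5e-5, rho=1.225):
    """First-cell height (m) for a target y+ on a wall of reference length."""
    re = u_inf * length / nu                   # Reynolds number
    cf = 0.026 / re ** (1.0 / 7.0)             # flat-plate skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2        # wall shear stress
    u_tau = math.sqrt(tau_w / rho)             # friction velocity
    return y_plus * nu / u_tau

# Example: target y+ = 30 on a 10 mm rod in a 72 m/s free stream (air)
print(f"first-cell height: {first_cell_height(30, 72.0, 0.01) * 1e3:.3f} mm")
```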
Grid convergence study by Richardson extrapolation

Richardson extrapolation, introduced by Richardson [19], is also known as "the deferred approach to the limit (h → 0)". It defines a higher-order estimate of a flow field from a series of lower-order discrete values (f1, f2, ..., fn). A convergence study needs three grid resolutions at minimum [20]. Roache [21] generalized Richardson extrapolation by introducing the p-th order method:

f_RE = f1 + (f1 − f2) / (r^p − 1)   (1)

The grid refinement ratio in this study is fixed, and defined as r = h2/h1 = h3/h2 = 1.7, where h1, h2 and h3 are the characteristic cell sizes of the fine, medium and coarse grids, respectively. From equation (1), the extrapolated value depends on the choice of the order p. As stated in Stern [19], the order of accuracy can be estimated using the following equation:

p = ln(ε32/ε21) / ln(r)   (2)

where ε21 = f2 − f1 and ε32 = f3 − f2. To evaluate the extrapolated value from these solutions, the convergence of the system must first be determined. The possible convergence conditions are: (1) monotonic convergence: 0 < R < 1; (2) oscillatory convergence: R < 0; and (3) divergence: R > 1, where R is the convergence ratio, determined by R = ε21/ε32. Table 3 summarizes the order of accuracy for the root mean square lift coefficient and the mean drag coefficient from the simulation results of the three different grids. The convergence is monotonic for both variables assessed. Strouhal number results are not included in this analysis because the differences between the cases are not significant. The Grid Convergence Index (GCI) defines a uniform measure of convergence for grid refinement studies, as stated in Roache [21]. The GCI is derived from the estimated fractional error obtained from the generalization of Richardson extrapolation. The GCI value represents the resolution level and how closely the solution approaches the asymptotic value. The GCI for the fine and coarse grid pairs can be calculated as follows:

GCI21 = Fs |ε21/f1| / (r^p − 1) × 100%   (3)
GCI32 = Fs |ε32/f2| / (r^p − 1) × 100%   (4)

The safety factor Fs selected for the study is 1.25, following Wilcox [22]. As observed from Table 3 previously, there is a reduction in GCI values for the three successive grids (GCI21 < GCI32). The GCI for the finer grid, GCI21, is relatively low compared to the GCI of the coarser one, GCI32. This implies that the dependency of the numerical simulation on the cell size has been reduced. In addition, a grid-independent solution is acceptably achieved given the GCI reduction from the coarser grid to the finer grid. In other words, further refinement will not result in much change. The variables obtained are compared with the extrapolated value using Equation (1). The comparisons are then plotted in Figure 4. The extrapolated value is just slightly lower than the finer-grid result (h = 0.001D). Therefore, it is shown that the solution converges under refinement from the coarser to the finer grid. Also, in this paper the discrepancy between the simulation value and this extrapolated value is defined as the error

E = f − f_RE   (5)

Figure 5 shows that the successive grid refinement has nearly achieved the asymptotic value at the finest grid resolution, where the relative error compared with the Richardson extrapolation is only 0.002%; hence it is grid independent. Table 4 compares the results of the current DES with previous studies of the same case. The Strouhal number is in excellent agreement with the literature. The root mean square lift coefficient and the mean drag coefficient are slightly lower, but they are still in good agreement.
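The convergence-ratio, order-of-accuracy, extrapolation and GCI relations of Eqs. (1)-(5) are easy to script. Below is a minimal Python sketch; the three sample solution values are illustrative, not the coefficients of Table 3.

```python
# Sketch of the Richardson-extrapolation/GCI bookkeeping described above,
# for three solutions f1 (fine), f2 (medium), f3 (coarse) with a fixed
# refinement ratio r. The sample values are illustrative, not from Table 3.
import math

def gci_report(f1, f2, f3, r=1.7, fs=1.25):
    eps21, eps32 = f2 - f1, f3 - f2
    R = eps21 / eps32                              # convergence ratio, Eq.-style R
    p = math.log(abs(eps32 / eps21)) / math.log(r) # observed order of accuracy
    f_re = f1 + (f1 - f2) / (r ** p - 1.0)         # Richardson extrapolation
    gci21 = fs * abs(eps21 / f1) / (r ** p - 1.0) * 100.0
    gci32 = fs * abs(eps32 / f2) / (r ** p - 1.0) * 100.0
    return R, p, f_re, gci21, gci32

R, p, f_re, g21, g32 = gci_report(f1=0.980, f2=0.992, f3=1.030)
print(f"R={R:.3f} (monotonic if 0<R<1), p={p:.2f}, f_RE={f_re:.4f}")
print(f"GCI21={g21:.2f}%  GCI32={g32:.2f}%")
```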
Conclusion

Two stages of grid assessment were carried out in this paper to obtain the best grid for a rod-airfoil simulation in a Detached Eddy Simulation (DES) model at high Reynolds number. The first stage obtained the grid with the better flow visualization, capturing well-developed vortex shedding in the wakes of the rod and the airfoil. This result, however, was only the starting point for the further grid refinement assessment in the next stage. The second stage of grid refinement provided good insight into the grid independence of, in particular, the finest grid proposed. The GCI inspection of the flow variables showed a gradual reduction as the grid system was refined. Also, when the extrapolated values from Richardson extrapolation are compared with the current results, the finer grid (case E) shows good performance. Hence, the grid from case E, whose GCI values are all less than 0.5%, is appropriate for use in the subsequent rod-airfoil analyses.
Combining ILC and moment expansion techniques for extracting average-sky signals and CMB anisotropies

The method of weighted addition of multi-frequency maps, more commonly referred to as Internal Linear Combination (ILC), has been extensively employed in the measurement of cosmic microwave background (CMB) anisotropies and its secondaries, along with similar applications in 21cm data analysis. Here we argue and demonstrate that ILC methods can also be applied to data from absolutely-calibrated CMB experiments to extract average-sky signals in addition to the conventional CMB anisotropies. The performance of the simple ILC method is, however, limited, but can be significantly improved by adding constraints informed by physics and existing empirical information. In recent work, a moment description has been introduced as a technique for carrying out high-precision modeling of foregrounds in the presence of inevitable averaging effects. We combine these two approaches to construct a heavily constrained form of the ILC, dubbed MILC, which can be used to recover tiny monopolar spectral distortion signals in the presence of realistic foregrounds and instrumental noise. This is a first demonstration of measurements of the monopolar and anisotropic spectral distortion signals using ILC and extended moment methods. We also show that CMB anisotropy measurements can be improved, reducing foreground biases and signal uncertainties when using the MILC. While here we focus on CMB spectral distortions, the scope extends to the 21cm monopole signal and B-mode analysis. We briefly discuss augmentations that need further study to reach the full potential of the method.

INTRODUCTION

The anisotropies of the cosmic microwave background (CMB), beyond doubt, have greatly helped in establishing the standard ΛCDM concordance model, with the key cosmological parameters being known to percent-level precision or better (Bennett et al. 2003; Planck Collaboration et al. 2014b, 2016c). To reach this unprecedented precision, many obstacles had to be overcome, including extreme control of systematics, calibration uncertainties and foreground separation (Planck Collaboration et al. 2016e,b,a). To mitigate foregrounds, many independent methods have been developed (e.g., Tegmark et al. 2003; Delabrouille et al. 2003; Eriksen et al. 2008; Remazeilles et al. 2011). One of them is the Internal Linear Combination (ILC, Tegmark et al. 2003), which has proven highly successful in extracting foreground-cleaned CMB maps with current (e.g., Planck Collaboration et al. 2014a, 2016d,a) and future experiments (e.g., Remazeilles et al. 2016, 2018). In this work, we are mainly interested in studying spectral distortions of the CMB. While the CMB energy spectrum has been shown to be extremely close to that of a blackbody (Mather et al. 1994; Fixsen et al. 1996), minor deviations from this spectrum are expected even within standard ΛCDM (e.g., Chluba 2016). The largest spectral distortions are present in the monopole sky. These signals are created through energy exchange and photon production in the early phases of cosmic history (Zeldovich & Sunyaev 1969; Sunyaev & Zeldovich 1970a; Illarionov & Sunyaev 1974; Burigana et al. 1991; Hu & Silk 1993; Sunyaev & Chluba 2009; Chluba 2015).
A distortion dipole is furthermore induced by our motion with respect to the CMB rest frame, very much like the CMB temperature dipole (Danese & de Zotti 1981; Balashev et al. 2015; Burigana et al. 2018). Spectral distortions require absolutely-calibrated measurements of the CMB spectrum. Building on the heritage of COBE/FIRAS, this can be achieved with instrument concepts similar to PIXIE (Kogut et al. 2011a, 2016, 2019). This could allow constraining many standard and non-standard processes occurring in the early Universe, at epochs inaccessible by any other means (for a recent overview see Chluba et al. 2019). While ILC techniques have been used to measure components with spatial anisotropies, so far they have not been applied to average-sky (i.e., monopolar) signals, a generalization we explore here. In this paper, we will demonstrate that ILC methods can be directly applied to absolutely-calibrated maps, allowing an extraction of the average-sky signal. The simple (i.e., blind) ILC method is, however, limited and has to be augmented by moment expansion methods to capture the inevitable foreground averaging effects, yielding the Moment ILC method (MILC, see Sect. 2). Without this extension, large biases and enhanced uncertainties in the recovered signals remain (see Fig. 3). This conclusion extends to anisotropic signals, as we demonstrate here (see Fig. 1). We focus on a proof-of-concept study, demonstrating the main aspects of the MILC and how it can be applied. Further studies are required to provide concrete CMB distortion forecasts, extending first studies (Desjacques et al. 2015; Sathyanarayana Rao et al. 2015; Abitbol et al. 2017), which largely neglected spatial information. These first studies are simplistic, and the demonstrations we make in this work will pave the way for more realistic forecasts in the near future. In addition, we can already anticipate that the MILC method can be applied more broadly, e.g., to the extraction of global 21cm signals (e.g., Pritchard & Loeb 2010) and CMB polarization B-modes, as we briefly discuss.

METHODS

Before diving into the details of the analysis, we provide a brief introduction to the methods we employ in this work. In particular, we present some details of standard ILC methods in Sec. 2.1 and the preliminaries of the moment modeling of foregrounds in Sec. 2.2 needed for the MILC.

Standard ILC methods

ILC methods have been used extensively in the analysis of multi-frequency microwave maps, originally to extract maps of the CMB temperature and polarization anisotropies (Tegmark et al. 2003; Adam et al. 2016) and more recently for the measurement of y-distortions (Planck Collaboration et al. 2014c; Remazeilles & Chluba 2019). The multi-frequency maps, d_ν^i, can be expressed as

d_ν^i = Σ_c s_ν^c τ_c^i + n_ν^i,   (1)

where ν denotes the observing frequency and the index 'i' denotes the pixel index; τ_c denotes the spatial map of the component 'c', which has the spectral energy distribution (SED) s_ν^c; n denotes the measurement noise. Note that this equation is not written in any particular basis, and hence the index 'i' can be interpreted either as the address of a pixel in real space or in harmonic space.
Using some prior information on the spectral components of the data, and given that the data are measured with sufficient frequency sampling, we can solve for the map of each component of interest as follows:

τ̂_{c0}^i = Σ_ν w_ν^{c0} d_ν^i.   (2)

This can be achieved by constructing suitable weights w_ν^{c0} such that they have unit response to the SED of the component of interest: Σ_ν w_ν^{c0} s_ν^{c0} = w_{c0}^T · s_{c0} = 1. Note that we have introduced the hat notation to distinguish the separated component map τ̂_c from the true component map τ_c. On using these weights, it is easy to see that Eq. (2) reduces to the following form,

τ̂_{c0} = τ_{c0} + B_{c0} + w_{c0}^T · n,   (3)

where we have suppressed the spectral and spatial indices for brevity. It is important to note that the solution generally has an additive bias B_{c0}, and one can understand this as originating from the existence of non-zero projections of the SED of c0 on the SEDs of the other spectral components of the map. It is also important to note that the excess variance of the reconstructed map has contributions from the bias, the measurement noise and the chance correlation between the two. Therefore, an assessment of the precision of the reconstructed map necessarily requires a thorough understanding of the interplay between the additive bias and the variance in the map and their relative amplitudes. One can optimize the solution for the component map by demanding that the weights minimize the variance of the reconstructed component map, leading to the standard ILC solution (Tegmark et al. 2003). One can further optimize the solution by demanding that the weights, in addition to having unit response to s_{c0} and minimizing the variance, must simultaneously show zero response to the SEDs of selected components in the map. This can be concisely written using a vector-matrix formulation as follows,

w_{c0}^T V = e_{c0}^T,   (4)

where the columns of the matrix V are the SEDs to be constrained and e_{c0} is the desired response vector. This generalized minimization problem can be solved using the method of Lagrange multipliers, and it can be shown that the weights are given by the following compact expression,

w_{c0}^T = e_{c0}^T (V^T C^{-1} V)^{-1} V^T C^{-1},   (5)

where C denotes the data covariance matrix and all other symbols have the same meaning as before. Since these are matrix operators, it is important to note that the order in which the elements appear is critical. Equation (5) presents a more general form of the ILC referred to as the constrained ILC (cILC); it was introduced in Remazeilles et al. (2011) and has been used to construct Sunyaev-Zeldovich (SZ) free CMB maps and vice versa. More recently, this method has been shown to allow measuring the relativistic electron temperature of galaxy clusters, introducing semi-blind constraints on the dust SED (Remazeilles & Chluba 2019). The general solution for the weights given in Eq. (5) reduces to the simple ILC when V = s_{c0} and e_{c0} = 1. The solution written in the form given in Eq. (5) gives the impression that it is only possible to solve for one component at a time. Presenting the solution in the following form,

â = (V^T C^{-1} V)^{-1} V^T C^{-1} d,   (6)

makes it clear that the filter actually returns a vector â of minimum variance maps with mutually orthogonal SEDs, the projection operator being defined with respect to the data covariance matrix.
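To make Eqs. (1)-(6) concrete, the following is a minimal Python sketch of the constrained ILC weights applied to toy two-component data; all SEDs, amplitudes and noise levels are invented for illustration and are not the simulations described below.

```python
# Minimal sketch of the constrained ILC weights of Eq. (5),
# w^T = e^T (V^T C^-1 V)^-1 V^T C^-1, applied to synthetic data built from
# the model of Eq. (1). All SEDs and amplitudes here are toy inputs.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_pix = 10, 5000
nu = np.linspace(1.0, 10.0, n_freq)

# Toy SEDs: component of interest and one contaminant to de-project
s_cmb = np.ones(n_freq)               # flat "CMB" SED in these units
s_fg = (nu / nu[0]) ** -1.2           # power-law contaminant

V = np.column_stack([s_cmb, s_fg])    # constrained SED matrix
e = np.array([1.0, 0.0])              # unit response to CMB, zero to contaminant

# Synthetic data: d = s_cmb * tau_cmb + s_fg * tau_fg + noise
tau_cmb, tau_fg = rng.normal(0, 1, n_pix), rng.normal(0, 5, n_pix)
d = np.outer(s_cmb, tau_cmb) + np.outer(s_fg, tau_fg) \
    + 0.1 * rng.normal(size=(n_freq, n_pix))

C = np.cov(d)                          # empirical frequency-frequency covariance
Cinv = np.linalg.inv(C)
w = e @ np.linalg.inv(V.T @ Cinv @ V) @ V.T @ Cinv   # cILC weights

tau_hat = w @ d
print(f"w.s_cmb = {w @ s_cmb:.3f}, w.s_fg = {w @ s_fg:.1e}")  # 1 and ~0
print(f"residual rms = {np.std(tau_hat - tau_cmb):.3f}")
```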
Moment modeling of the observed sky

The moment method can be used to efficiently model the effect of averaging (i.e., along the line of sight, in the beam, and through averaging operations in the data processing) for known fundamental SEDs (Stolyarov et al. 2005; Chluba et al. 2017). The limitations in our ability to model the foregrounds to arbitrary precision are a consequence of the limited sensitivity and frequency coverage of observations and, additionally, of our ignorance about the presence of totally 'new' foreground components, which one may discover on making more refined measurements. In this section, we briefly introduce the basic idea and refer the reader to Chluba et al. (2017) for more detailed discussions specific to certain foregrounds (dust, synchrotron, etc.) and the broader scope of the method. Let us assume that we know that the emission from each volume element along a given line of sight is described by an SED s_ν(p) with some known functional form, parameterized by a vector of parameters: p = [p_0, p_1, p_2, ..., p_n]. Assuming there are multiple elements emitting along a given line of sight, n̂, the net observed intensity from that direction is merely a sum of the emission from each of the elements,

I_ν(n̂) = ∫ A(n̂, l) s_ν(p(n̂, l)) dl,   (7)

where p(n̂, l) denotes the parameters characterizing (i.e., defining the spectral shape of) the emission along n̂, and A(n̂, l) sets the overall amplitude of the emission. It is very likely that the emission from the different elements is characterized by different vectors p. Given that there is a great number of emitting elements, one may, equivalently, construct a statistical model for the observed intensity, in which Eq. (7) can be re-cast in the following form,

I_ν(n̂) = A_{ν0}(n̂) ∫ P(n̂, p) s_ν(p) dp,   (8)

where P(n̂, p) denotes the multi-dimensional probability distribution function of the components of p, and the parameter A_{ν0}(n̂) is the amplitude of the observed SED at some pivot frequency ν_0, introduced to allow for modeling the overall amplitude of the SED. In practice we never have access to this probability distribution, but we have a reasonable idea about the variety of SEDs contributing to the observed sky. One can now Taylor expand the emission law s_ν around some pivot parameter vector p̄,

I_ν(n̂) = A_{ν0}(n̂) ∫ P(n̂, p) [ s_ν(p̄) + Σ_i (p_i − p̄_i) ∂_{p_i} s_ν(p̄) + (1/2) Σ_{ij} (p_i − p̄_i)(p_j − p̄_j) ∂_{p_i} ∂_{p_j} s_ν(p̄) + ... ] dp.   (9)

Since the SED and its derivatives are computed at the fixed pivot parameters p̄, these spectral functions are constants and can be pulled out of the integral, and each of the integrals over the PDF can be understood as the corresponding amplitude-weighted moment η of the distribution. This allows us to model the observed intensity in the language of the moments of the parameter distribution,

I_ν(n̂) = A_{ν0}(n̂) s_ν(p̄) + Σ_i η_{p_i}(n̂) ∂_{p_i} s_ν(p̄) + (1/2) Σ_{ij} η_{p_i p_j}(n̂) ∂_{p_i} ∂_{p_j} s_ν(p̄) + ...,   (10)

where the η denote the 'amplitude-weighted moments' of the parameter distribution characterizing the total SED. Here, it is important to mention that since the derivative operators with respect to different parameters commute, ∂_{p_i} ∂_{p_j} = ∂_{p_j} ∂_{p_i}, one needs to appropriately take care of these degenerate SED vectors. One of the beautiful aspects of Eq. (10) is that the spectral and spatial parts are written in separable form, unlike Eq. (7). A similar language was previously applied to the modeling of the SZ effect, also highlighting this aspect (Chluba et al. 2013). One can now think of these moment maps, η(n̂), as direct astrophysical observables. The simplest first-order moments inform us about how the parameters along various lines of sight differ from the pivot parameters, while the second-order moments inform us about the variance in the emission characteristics, and so on. In the final step, the generalized foreground modelling presented in Eq. (10) is cast into a form that can be easily incorporated into the ILC machinery, yielding the MILC approach. By merely introducing the various SED derivatives as spectral constraints (as discussed in Sec.
2.1) one can then simultaneously solve for the foreground moment maps as well as the cosmological observables like the CMB temperature, y and µ distortion maps, as we present in the following sections. Some of the benefits of this approach for anisotropic signals were recently illustrated for relativistic SZ temperature mapping (Remazeilles & Chluba 2019) and the extraction of µT-correlations (Remazeilles & Chluba 2018). In this work, we unveil the real potential of this method, extending it to the extraction of tiny monopolar signals, and also demonstrate that introducing many moment constraints can ensure an unbiased recovery of components while maintaining and even improving uncertainties. We note that the standard ILC and constrained ILC are special cases of the MILC.

SIMULATIONS

In this section, we summarize the details of our simulations. We assume a wide frequency coverage of 30-3000 GHz, with 30 logarithmically-spaced channels, and assume that the measurements are absolutely calibrated. Our multi-frequency simulated maps consist of the following components: a blackbody, the spectral distortion components y and µ that characterize the deviation from the blackbody, foreground contamination, and measurement noise. Below we provide specific details of how each of these components is injected into our simulations.

Blackbody: In our simulations we include the CMB monopole and its anisotropies ∆_T(n̂) = ∆T(n̂)/T_0, characterized by the fiducial CMB power spectrum. The CMB SED is given by

s_ν^CMB = T ∂B_ν(T)/∂T |_{T_0} = [x e^x/(e^x − 1)] B_ν(T_0),   (11)

where B_ν denotes the Planck function and x = hν/(k_B T_0). We assume that the electromagnetic spectrum of these fluctuations is described by a blackbody at a temperature T_0 = 2.7255 K, set to the currently best-known measurement (Fixsen et al. 1996, 2011). Note that we do not assume the CMB temperature to be known to arbitrary precision and solve for the monopole temperature ∆_T^0, which in our simulations is set to a value of 10^-4, about one order of magnitude below the error on the current CMB monopole temperature measurement. A temperature anisotropy map is constructed assuming a power spectrum for the fiducial cosmological model. When analyzing the simulations, we assume no prior knowledge about the anisotropies of the CMB.

y-distortions: These distortions are sourced via inverse Compton scattering of CMB photons off energetic electrons. The dominant contribution to this signal is sourced at low redshifts, by scattering off the hot electron gas inside galaxy clusters (Zeldovich & Sunyaev 1969; Mroczkowski et al. 2019). To inject y-distortions in our simulations, we use the Compton y-map of Sehgal et al. (2010) available from LAMBDA. We only use the non-relativistic thermal SZ spectrum for this work, as given by

s_ν^y = [x e^x/(e^x − 1)] [x coth(x/2) − 4] B_ν(T_0),   (12)

where x = hν/(k_B T_0). We ignore the relativistic corrections to the average SZ spectrum (Hill et al. 2015); however, for detailed forecasts of the distortion sensitivities this should be included. The Compton-y parameter field is a positive and non-Gaussian field and hence has a non-zero positive monopole. The ensemble-averaged y parameter has been predicted to be ⟨y(n̂)⟩ = ȳ ≈ 2 × 10^-6 in the fiducial cosmological model (e.g., Refregier et al. 2000; Hill et al. 2015). On a chosen patch, the sky-averaged mean ȳ may not necessarily match the ensemble-averaged mean. In our simulations we thus ensure that the sky-averaged value is close to this expected monopole, by directly adding/subtracting a monopole component to the y-map.
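For concreteness, the signal SEDs quoted above can be evaluated numerically. The sketch below implements our reading of Eqs. (11)-(12) relative to the blackbody B_ν(T_0); the channel grid mirrors the thirty logarithmically-spaced bands, and the code is illustrative rather than the paper's pipeline.

```python
# Sketch of the signal SEDs quoted above, written relative to the blackbody
# B_nu(T0): the CMB temperature-shift SED of Eq. (11) and the thermal SZ SED
# of Eq. (12). Constants in SI units; the frequency grid mirrors the thirty
# logarithmically-spaced channels. Illustrative code, not the paper's pipeline.
import numpy as np

H, KB, C_LIGHT, T0 = 6.626e-34, 1.381e-23, 2.998e8, 2.7255

def planck(nu):
    """Blackbody intensity B_nu(T0) in W m^-2 Hz^-1 sr^-1."""
    x = H * nu / (KB * T0)
    return 2 * H * nu**3 / C_LIGHT**2 / np.expm1(x)

def sed_cmb(nu):
    """SED of a fractional temperature shift Delta_T = dT/T0, Eq. (11)."""
    x = H * nu / (KB * T0)
    return x * np.exp(x) / np.expm1(x) * planck(nu)

def sed_y(nu):
    """Non-relativistic thermal SZ SED per unit Compton-y, Eq. (12)."""
    x = H * nu / (KB * T0)
    return sed_cmb(nu) * (x / np.tanh(x / 2) - 4.0)

nu = np.logspace(np.log10(30e9), np.log10(3000e9), 30)  # 30-3000 GHz channels
x = H * nu / (KB * T0)
null = nu[np.argmin(np.abs(x / np.tanh(x / 2) - 4.0))]
print(f"channel nearest the tSZ null (~217 GHz): {null / 1e9:.0f} GHz")
```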
µ-distortions: The dissipation of acoustic modes in the early universe (z > 50,000) introduces a µ-distortion in the CMB radiation (Sunyaev & Zeldovich 1970b; Hu & Silk 1993). While this signal also has spatial fluctuations, these are usually significantly smaller than the monopole distortion, in complete analogy with the CMB monopole temperature and its hundreds of micro-Kelvin fluctuations as a function of direction. In this work we thus only include the spectral distortion to the monopole spectrum in our simulations, while ignoring spatial fluctuations. The spectrum of the µ-distortions is given by

∆I_ν^µ = µ [e^x/(e^x − 1)] [x/β̂ − 1] B_ν(T_0), with β̂ ≈ 2.1923,   (13)

where µ denotes the amplitude of the monopole signal and all other symbols have the same meaning as before. Unlike the y-distortions, for which we have measurements from Planck, we only have upper limits on the µ-distortions, and upcoming experiments will aim at improving these limits and potentially making a detection. For this demonstration study we inject a µ-distortion monopole signal with an amplitude of µ = 10^-6, roughly two orders of magnitude below the current best limit set by COBE/FIRAS (Fixsen et al. 1996).

Foregrounds: For our demonstration, we considered the following foreground components in our simulations: synchrotron, free-free and thermal dust, although we focus our discussion mainly on dust-only simulations. We generate the foreground skies using the Python Sky Model (PySM) (Thorne et al. 2017). In particular, we use the "d2" model for thermal dust, the "s2" model for the synchrotron and the nominal free-free model in our simulations. The details of these component models can be found in the original PySM paper. Finally, we include some simple instrumental effects in our simulations: beam smoothing and measurement noise. For the instrumental beam we assume a uniform FWHM = 30 arcminutes for all the channels. We take care to carry out the smoothing operation after duly converting the anisotropy maps returned by PySM from Rayleigh-Jeans temperature units to intensity units of Jy/sr. This small detail is important when we want to properly propagate the moments generated by this smoothing operation. We assume a constant noise RMS for all the channels; however, we do carry out the analysis for different values: [5000, 500, 50, 5] Jy/pixel. In our simulations the pixel size is 13.74 arcminutes. We use this information to translate these noise RMS values into all-sky sensitivities, yielding [5.6, 0.56, 0.056, 0.0056] Jy/sr, respectively. Note that while 5.6 Jy/sr is representative of the all-sky PIXIE sensitivity (Kogut et al. 2011b, 2016), 0.56 Jy/sr is representative of Super-PIXIE (Kogut et al. 2019), and finally 0.056 Jy/sr is a little better than the full-sky sensitivity for Voyage 2050. Gains in angular resolution may be conceivable with concepts like Millimetron or by combining traditional CMB imagers with spectrometers (André et al. 2014; Delabrouille et al. 2019). A more detailed study covering more complex sky models (e.g., including AME, CIB and CO emission) and instrumental aspects (such as optimization of angular resolution and spectral parameters) will be carried out elsewhere (Rotti et al. 2020).

Analysis strategy

For speedy analysis we evaluate the MILC algorithm on flat-sky patches; however, this method is easily extended to full-sky analysis. All our simulations are generated at the Healpix (Gorski et al. 2004) resolution of NSIDE = 256.
Analysis strategy
For speedy analysis we evaluate the MILC algorithm on flat-sky patches; however, this method is easily extended to a full-sky analysis. All our simulations are generated at the HEALPix (Górski et al. 2005) resolution NSIDE = 256. From each of the frequency maps, we then extract sky tiles of dimensions 25° × 25° centered on the galactic coordinates (l, b) = (0°, 30°), using the HEALPix gnomonic projection functionality. We emphasize that identical analyses were carried out at different sky locations and we found qualitatively similar results; here we choose to present results at this randomly chosen sky location. Our MILC filters are implemented in Fourier space, and specifically the covariance is estimated as
$$\hat C_{\nu\nu'}(k_b) = \frac{1}{N_b}\sum_{\vec k \in k_b} \tilde d_\nu(\vec k)\,\tilde d^{\,*}_{\nu'}(\vec k),$$
where $\tilde d_\nu(\vec k)$ denotes the Fourier coefficient of the multi-frequency data and the sum runs over the $N_b$ modes in the bin $k_b$. Note that we do not remove the monopole term from the Fourier coefficients when making the binned frequency-frequency correlation matrix estimate.

When constructing the moment SED vectors, one has to make a choice for the pivot parameters $\bar p$ and the pivot frequency $\nu_0$. In principle the solution for the pivot parameters changes for simulations with different sensitivities and frequency coverage; however, in favour of simplicity, we choose the pivot parameters from the highest-sensitivity simulations and thereafter keep them fixed. Furthermore, the solution for the pivot parameters also changes when including higher order moment SEDs to fit the sky SED and would in principle require updates; however, this again is not essential. In our analysis, to find reasonable pivot parameters we fit the base SED to the sky-averaged spectrum, and we choose the pivot frequency $\nu_0$ such that it coincides with the peak of the dust emission. After this initial step, $\bar p$ is fixed and used to add higher order moments. It is important to note that the construction of the moment SEDs does not rely on knowing precisely the parameters that characterize the foregrounds; on the contrary, the moment maps ($\eta$) are part of the solution, and these inform us about the statistical properties of the parameters characterizing the foregrounds.

The multi-frequency simulated data, along with a subset of SEDs, are passed to the MILC algorithm, which then returns a set of component-separated maps with a one-to-one correspondence to the SEDs. At the first stage of the analysis, we progressively increase the number of signal SEDs (i.e., CMB, y and µ) passed to MILC, which then returns the respective signal component maps. At the second stage, in addition to the signal SEDs we also pass the moment SEDs, progressively increasing the number of foreground vectors that are passed to the algorithm. MILC then returns all the signal maps along with the moment maps. In summary, the very first iteration of MILC only solves for the map of the CMB, and at the final iteration the algorithm returns maps of all the signals, [∆_T, y, µ], and multiple foreground moment maps, η. This analysis procedure is repeated on each set of multi-frequency simulations, where the only variable is the level of noise in the maps.
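A minimal flat-sky sketch of the two ingredients just described, the binned frequency-frequency covariance (with the monopole mode retained) and the constrained-ILC weight construction, may be helpful; the implementation choices below are ours, and the SED matrix A is a placeholder stacking the signal and moment SED vectors as columns:

import numpy as np

def binned_covariance(maps, n_bins=20):
    # maps: (n_freq, N, N) array of flat-sky tiles, one per channel.
    # Returns (n_bins, n_freq, n_freq); the k=0 (monopole) mode is kept.
    n_freq, N, _ = maps.shape
    d = np.fft.fft2(maps).reshape(n_freq, -1)
    kx, ky = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
    k = np.hypot(kx, ky).ravel()
    edges = np.linspace(0.0, k.max(), n_bins + 1)
    idx = np.clip(np.digitize(k, edges) - 1, 0, n_bins - 1)
    C = np.empty((n_bins, n_freq, n_freq), complex)
    for b in range(n_bins):
        m = idx == b
        C[b] = d[:, m] @ d[:, m].conj().T / max(m.sum(), 1)
    return C

def milc_weights(C, A):
    # Constrained-ILC weights W = (A^T C^-1 A)^-1 A^T C^-1, so that W @ A = I:
    # unit response to each SED column of A and zero response to the others.
    Ci = np.linalg.pinv(C)
    return np.linalg.pinv(A.T @ Ci @ A) @ A.T @ Ci  # row j recovers component j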
RESULTS
In this section, we discuss the results of applying the MILC to the simulated data. We restrict the presentation and most of the discussion to results from the analysis of simulations that only include thermal dust emission as a foreground, since our primary aim is to demonstrate that MILC can not only be used to make robust (bias- and noise-reduced) measurements of anisotropic signals but also to extract tiny sky-averaged (monopolar) signals. The thermal dust emission is characterized by a modified blackbody, $S^{\rm d}_\nu = \nu^{\alpha} B_\nu(1/\beta)$, where α is the spectral index and β is the inverse of the dust temperature. Following Eq. (10), the moment expansion is performed by taking derivatives of different orders with respect to the parameters (p = [α, β]) characterizing the SED. The SED vectors characterizing the different moments are simply given by $S^{ij}_\nu = N_{ij}\,\partial^i_\alpha\partial^j_\beta S_\nu$, where $N_{ij}$ is some normalization, and the moment order is given by o = i + j. The moment maps corresponding to each of these moment SED vectors are denoted by $\eta_{\alpha^i\beta^j}$. Note that $\eta_{\alpha^0\beta^0}$ refers to the amplitude of the dust emission and is denoted by $A_{\rm D}$. We include a maximum of 10 moment SED vectors in our analysis, which corresponds to a maximum moment order of o = 3.

Figure 1 depicts the recovered component maps for the successive iterations of the MILC. To assess the recovery of the monopole signal, we study the one-point probability distribution function (1PPDF) of the recovered component maps; Figure 2 depicts the 1PPDF of a subset of these maps. Figure 3 depicts the evolution of the 1-point and 2-point statistics of the maps, for a varying number of vectors passed to MILC and for different levels of noise in the simulations. Figure 4 depicts the first 6 foreground moments (i.e., up to second order for dust) estimated on the simulation with the lowest noise. With the help of the visual aid provided by these figures, we now discuss the salient features of the solutions delivered by the different iterations of the MILC.

The first row of Fig. 1 depicts a map of the recovered CMB; this iteration of the MILC corresponds to the standard ILC, which only projects out the CMB and is completely blind to the presence of other components in the map, subsuming these into the variance minimization process. Comparing to the true CMB anisotropy map (right column of Fig. 1), one notes that many of the features in the true CMB map are indeed recovered; however, there are residuals, particularly noticeable as negative holes that coincide with the positions of the brightest galaxy clusters, also seen in the injected y-map. We also note from Fig. 2 and Fig. 3 that the mean is consistent with zero, in spite of a non-zero monopole in the simulation. This recovered map of CMB anisotropies is also significantly noisier, and this can be understood (in hindsight) as the cost of not projecting out other components in the map.

The second row in Fig. 1 depicts the simultaneous separation of the CMB and the Compton-y map, constructed by mutually de-projecting the respective SEDs. This iteration of the MILC corresponds to the standard cILC, which is one of the methods used by the Planck collaboration. Note that this achieves a reasonable recovery of both the CMB and the Compton-y map. The SZ residuals in the CMB map, seen in the previous iteration of the MILC, have now disappeared. There are more subtle biases in these maps that are not as visible in the considered simulation but become more prominent at reduced experimental sensitivity. The recovered component-separated maps continue to have excess noise, which results from not de-projecting other map components, namely, foregrounds.
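The dust moment SEDs defined at the start of this section can be generated symbolically; in this sketch, the pivot normalization (ν/ν0)^α and the use of sympy are our own illustrative choices:

import sympy as sp

nu, nu0, alpha, beta, h, kB, c = sp.symbols('nu nu_0 alpha beta h k_B c', positive=True)

# Modified blackbody with beta = 1/T_d, the inverse dust temperature
B = 2 * h * nu**3 / c**2 / (sp.exp(h * nu * beta / kB) - 1)
S = (nu / nu0)**alpha * B

def moment_sed(i, j):
    # Un-normalized moment SED vector S_ij = d^i/dalpha^i d^j/dbeta^j S
    return sp.simplify(sp.diff(S, alpha, i, beta, j))

print(moment_sed(1, 0))   # first alpha-moment: log(nu/nu_0) * S
print(moment_sed(0, 1))   # first beta-moment (temperature derivative)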
The means of both the recovered fields are still consistent with zero, as seen in Figs. 2 and 3. The third row in Fig. 1 depicts the simultaneous separation of the CMB, y and µ distortion maps, constructed by mutually de-projecting the respective SEDs. This iteration of the MILC corresponds to the cILC with three different but well-known SEDs. Many of the large-scale features in the CMB and y-maps at the locations of some of the brightest galaxy clusters are still recovered. However, the recovery of the component maps has severely degraded as compared to the previous two iterations of MILC. This is also apparent from the increase in the variance of the CMB and y maps, as seen in Fig. 3. It is interesting to note that the peaks of the 1PPDF for the CMB and y-parameter maps start showing a shift towards the input monopole values, as seen in Fig. 2.

The subsequent rows of Fig. 1 depict the simultaneous separation of the signal components when projecting out an increasing number of foreground moment vectors (1 in the 4th row, 3 in the 5th, 6 in the 6th, 8 in the 7th and finally 10 in the 8th row). With these iterations we are truly entering a new regime, where foreground-averaging effects are gradually being captured. In fact, the foreground along different lines of sight cannot be characterized by a single SED; this is in stark contrast with the previous iterations of the MILC, where all the passed signal SEDs nearly perfectly characterize the respective components along different lines of sight. The recovered component maps depicted in rows 4-6 show a slow reduction of the residuals, and then in rows 7 and 8 the recovery of the component maps is nearly perfect and one can no longer discern differences between the maps. This already shows that, given sufficient frequency coverage and sensitivity, the addition of moment terms does not come with unavoidable penalties in the recovery of signals but has a more complicated behaviour.

We can also use the variance of the maps for a joint assessment of the quality of the recovered components. Studying the bottom panel of Fig. 3 reveals a much clearer picture of the evolution: the variance of the map increases until one includes the 0th-order vector; thereafter, every additional foreground vector passed to the MILC results in an improved recovery, reaching near-ideal recovery once all foreground moment vectors up to the second order are projected out. Even in rows 7 and 8, where it is hardly possible to discern map-level differences, the variance statistic indicates continual improvement. It is also very important and exciting to note that, on including these higher order moments, the peak of the 1PPDF of the maps converges to the true monopole amplitude of the respective components, as seen in Fig. 2. We note that a statistically significant detection of the monopole amplitude only becomes possible once nearly all moment SED vectors of second order and higher are included; this trend, however, also depends on the sensitivity of the measurements, as seen in the top row of Fig. 3. For the CMB and y fields, which are inherently anisotropic, the σ estimated from the map receives a significant contribution from these intrinsic anisotropies. This is also reflected in the saturation of the SNR plots for these respective components.
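The 1PPDF diagnostic used above amounts to a histogram of map pixels; reading the monopole off its peak, as done in Fig. 2, can be sketched as follows (our paraphrase of the procedure, not the paper's code):

import numpy as np

def one_point_pdf(comp_map, n_bins=200):
    # Normalized histogram of the pixel values of a recovered component map
    counts, edges = np.histogram(comp_map.ravel(), bins=n_bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts

def monopole_from_peak(comp_map):
    # Monopole estimate: location of the 1PPDF peak
    centers, counts = one_point_pdf(comp_map)
    return centers[np.argmax(counts)]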
When measuring the monopoles of the CMB and y fields, it will be important to accurately model and subtract the error arising from the intrinsic variance of these fields, in order to fairly assess the statistical significance of the detections (in this sense our assessments of the SNR for these fields are biased low). On the other hand, since our simulations included no anisotropic µ-distortions, the µ-distortion signal, even though it is smaller, is detected at a higher statistical significance in the simulations with the lowest noise. The CMB and y signals being intrinsically larger is reflected in the observation that a statistically significant detection of the monopole amplitudes of these fields is possible even at lower sensitivities. The details of the evolution of these statistics with the number of moment vectors passed to the MILC are of course not generic and depend on the sensitivity and frequency coverage of the experiment and on the injected foreground model. Furthermore, these details also depend on how the data correlation matrix is defined; however, a discussion of these details is beyond the scope of this work and will be presented in Rotti et al. (2020). It would be redundant to show the maps and 1PPDFs for the analyses carried out on each simulation set with different noise. The evolution of the detection significance of the monopole amplitude, its bias, and the variance of the recovered component maps from the analyses on simulations with different noise are neatly summarised in Fig. 3.

Here we would like to highlight one generic trend: in the first four iterations of the MILC (these are the methods employed in various works in the current literature), the variance of the recovered component map increases when adding constraints, as seen from the bottom panel of Fig. 3. This gives rise to the misleading expectation that adding constraints invariably comes with a noise penalty. However, as the bottom panels of Fig. 3 demonstrate, this is a faulty extrapolation. We generically find that, after projecting out an optimal number of foreground moment vectors, the variance of the recovered component map reaches a minimum, which is below or comparable to the error when these de-projections are not carried out (see Fig. 3). It is also very interesting to note that, when considering the highest sensitivity (RMS = 5 Jy/pixel, red line in Fig. 3), we find that the simple ILC and cILC (only CMB and CMB+y, respectively) do perform on par with the MILC, indicating that signal SED orthogonality is a direct function of sensitivity. However, for all our lower-sensitivity simulations, the MILC with the inclusion of higher order foreground moments has the best performance. In hindsight, this behaviour is easy to understand: when one does not project out foregrounds, these are included in the noise budget of the recovered component maps. By projecting out an 'optimal number' of foreground components, the noise budget of the component-separated maps is reduced. The 'optimal number' of foreground components can be assessed by studying when the variance of the component maps begins to increase after reaching its minimum (see the blue line for the CMB variance and the orange line for the y-map variance in the bottom panels of Fig. 3). To be able to make these projections one requires enough channels, and this diagnostic could be used to optimize future CMB missions. The foreground moment maps are themselves astrophysical observables, and these can be used to gain a deeper understanding of the galactic foregrounds.
These moment maps are naturally returned by the MILC; an example set, constituting the first measurement of foreground moment maps on simulations, is shown in Fig. 4. These moment maps are related to the spectral parameters characterizing the ISM, but elucidating this connection requires more work. Current and future analyses of multi-frequency microwave data will deliver these observables as by-products.

DISCUSSIONS AND CONCLUSIONS
We have demonstrated that the scope of ILC approaches can be significantly extended by using the language of moments, resulting in the MILC method introduced here (Sect. 2). Using sky simulations, we have demonstrated that the MILC can not only be used to improve the recovery of anisotropic components but also allows an extraction of monopole signals. This opens new ways of thinking about mitigating the foreground challenges that are central to future CMB spectral distortion studies. Specifically, we have demonstrated that it is important to take higher order moments into account, without which both the monopole and the anisotropy measurements can be significantly biased (see Fig. 1 and Fig. 2). Furthermore, we have clarified that increasing the number of moments does not generically lead to more noise in the recovered component maps (see Fig. 3). On the contrary, we have demonstrated that projecting out an 'optimal number' of foreground moments will invariably lead to a more robust and less noisy recovery of the component maps. Our study also shows that the MILC can perform better than the simple ILC and cILC methods; only for very high sensitivity measurements are the performances of the ILC and cILC on par with MILC. This leads to a revision of the understanding of how to apply ILC methods.

The variance of the recovered component maps can be used to estimate the 'optimal number' of foreground vectors needed to reach the best performance. This number can be directly translated into a requirement on the number of frequency channels at a given sensitivity and has the potential of being a quantitative metric for designing future CMB experiments (under some assumptions about the sky foregrounds). While the discussion presented in this work is focused on simulations which include only thermal dust as a foreground, we have also confirmed that this method works when all the foregrounds (synchrotron, free-free and dust) are included. This turns into a high-dimensional problem, and one has to worry about the most relevant SED vectors and the order in which they are included. This is an ongoing study (Rotti et al. 2020), the details of which are beyond the basic idea presented in this work. The foreground moment maps are a natural by-product of MILC, and we foresee these becoming an important measurable with future, high-sensitivity multi-frequency microwave measurements. MILC is currently being applied to Planck maps, and the measurement of foreground moment maps from this analysis will be reported in a future publication. However, several questions call for further investigation:

Effect of unaccounted signals: The work presented here assumes that the base SEDs, characterizing all the foregrounds, are known and can be supplied to the MILC. This allows MILC to capture all the effects from averaging processes. We have shown that not de-projecting the higher order moments can come at the cost of excess noise and biases in the recovered component maps.
This aspect will become even more apparent as we work with higher sensitivity data to target low signal-to-noise components like CMB spectral distortions and B-modes. Encountering foregrounds with unmodelled SEDs will lead to a poorer component separation. However, having even an approximate SED for these components can improve the signal recovery. Applied to real data, MILC should thus be thought of as a semi-blind component separation method. Adding synchrotron, CIB and free-free moments is straightforward; however, developing a moment description for some of the other known foregrounds (e.g., anomalous microwave emission, extra-galactic CO) will be an important step in generalizing MILC.

Low-multipole leakage to the monopole and sky coverage: In this work we carried out the analysis on a flat-sky patch and implicitly assumed that the local patch monopole is the same as the global monopole. For the CMB and y, which do have a significant amount of anisotropy, the local patch monopole can differ significantly from the global monopole owing to contributions from large-scale (λ ≳ patch size) fluctuations. For this reason the local patch monopole measurements of these components could be noticeably biased. However, in standard cosmology the µ-distortions are expected to be monopolar, and hence for this component the amplitude of the local monopole is expected to be the same as the global monopole. This suggests that if one wants to measure the µ-distortion monopole, making deep measurements on a flat-sky patch could be a viable strategy. On the other hand, an ultimate test of the isotropy of the primordial distortion signal (e.g., also including the cosmological recombination lines; Sunyaev & Chluba 2009; Chluba & Ali-Haïmoud 2016) will require all-sky measurements. A detailed study with the goal of optimizing for CMB spectral distortion science goals will be presented in a future publication.

Truncation and ordering of vectors: The variance of the recovered component maps was seen to have a non-monotonic behaviour, reaching a minimum at some optimal number of moment vectors (see Fig. 3). It will be important to study whether the moment ordering could be altered to obtain a monotonic reduction in the variance of the component-separated maps. This will help in devising optimal strategies for taming this high-dimensional optimization problem. Additional benefits could stem from orthogonalizing the moment SEDs (e.g., via Gram-Schmidt or principal component analysis) prior to the MILC analysis. This would enable us to more clearly rank the various moment vectors according to their expected signal-to-noise levels. Ultimately, a comparison of the target signal level (i.e., µ) to the level of the moment SEDs has to be used to define truncation criteria for the analysis. Other diagnostics, based on Bayesian evidence, Shannon entropy or χ²-gains, could also be used for this purpose.

Optimality for monopole recovery: MILC performs a pixel-by-pixel recovery of the signal field. While this approach has been (empirically) proven to be 'optimal' for anisotropic signals, it is not immediately obvious that it is optimal for the recovery of monopole signals. With single-pixel, sky-averaged SED measurements, one can make significant gains (∝ √N_pix) in sensitivity, however possibly at the cost of increased foreground complexity due to the generation of moments, a natural consequence of the averaging process.
In principle this question is relevant to measurements of all long-wavelength modes on the sky (i.e., low-ℓ CMB anisotropies). Working with full-resolution maps will invariably lead to lower sensitivity per pixel, but it provides access to additional information about pixel-pixel correlations for the different foreground components and, possibly, a lower foreground complexity in each pixel. It can be anticipated that the answer lies somewhere in the middle. It will thus be essential to compare and contrast the performance of these two analysis approaches, to understand how best to combine them for the extraction of spectral distortion signals.

Combination of datasets: High-sensitivity and high-resolution anisotropy measurements of the CMB will soon become available. The MILC method makes it clear how we can blend CMB anisotropy measurements to help with the cleaning of the absolutely-calibrated measurements planned for the future (e.g., Kogut et al. 2019; Chluba et al. 2019). Increased frequency coverage at low frequencies (ν ≲ 10 GHz) has also been shown to be important for the recovery of small µ-distortion signals. Again, the MILC method can be extended to include external information from low-frequency data sets. This also means that a detailed study of how to propagate systematic effects when combining various data sets will be required.

Application to other problems: The MILC method is not limited to studies of primordial CMB distortion signals, but can similarly be applied to the extraction of the global 21-cm signal from the cosmic dark ages (e.g., Pritchard & Loeb 2008; Fialkov et al. 2018). At low frequencies, the dominant foregrounds are due to radio sources and galactic synchrotron emission, which can both be modelled using a power-law moment expansion. In a similar manner, it is straightforward to extend the MILC approach to CMB B-mode searches, ensuring that foreground residuals caused by averaging effects remain under control (Remazeilles et al. 2020). In this case, two moment hierarchies have to be introduced, demanding high sensitivity and broad spectral coverage. De-correlation effects across frequency can also be modelled in this way. Moment expansions at the power-spectrum level can provide further insight and leverage (Mangilli et al. 2019). We plan to explore these possibilities in future works.

The component-separated maps depicted in the third row of Fig. 1 represent a study analogous to that presented in Remazeilles & Chluba (2018). We see that there are significant biases at this iteration of the MILC, hinting that one could make significant improvements in forecasts for µT-correlations by including higher order moments in the analysis. Of course, these studies will need to be repeated with all the relevant foregrounds and the instrument configurations considered in Remazeilles & Chluba (2018). The utility of MILC for the extraction of yT-correlations to study primordial non-Gaussianity (e.g., Ravenni et al. 2017) should also be carefully considered. Finally, our study also indicates that even for the CMB temperature recovery, the simple ILCs and cILCs are not fully optimal and may have significant residuals. This could relate to some of the large-scale anomalies seen by Planck and WMAP (namely the lack of power and isotropy violation on large angular scales). Revisiting this question using MILC on real data could thus prove highly instructive and may also inform us about the expected foreground complexities for future B-mode searches, both questions we plan to address in the future.
2020-06-05T01:00:38.274Z
2020-06-03T00:00:00.000
{ "year": 2020, "sha1": "23339054080175bc06a0e245e094a4ba33ab6f74", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2006.02458", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "23339054080175bc06a0e245e094a4ba33ab6f74", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
234353603
pes2o/s2orc
v3-fos-license
A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger’s Adversarial Attacks
The Universal Trigger (UniTrigger) is a recently-proposed powerful adversarial textual attack method. Utilizing a learning-based mechanism, UniTrigger generates a fixed phrase that, when added to any benign input, can drop the prediction accuracy of a textual neural network (NN) model to near zero on a target class. To defend against this attack, which can cause significant harm, in this paper we borrow the “honeypot” concept from the cybersecurity community and propose DARCY, a honeypot-based defense framework against UniTrigger. DARCY greedily searches for and injects multiple trapdoors into an NN model to “bait and catch” potential attacks. Through comprehensive experiments across four public datasets, we show that DARCY detects UniTrigger’s adversarial attacks with up to 99% TPR and less than 2% FPR in most cases, while maintaining the prediction accuracy (in F1) for clean inputs within a 1% margin. We also demonstrate that DARCY with multiple trapdoors is robust to a diverse set of attack scenarios with attackers of varying levels of knowledge and skill. We release the source code of DARCY at: https://github.com/lethaiq/ACL2021-DARCY-HoneypotDefenseNLP.

Introduction
Adversarial examples in NLP refer to carefully crafted texts that can fool predictive machine learning (ML) models. Thus, malicious actors, i.e., attackers, can exploit such adversarial examples to force ML models to output desired predictions. There are several adversarial example generation algorithms, most of which perturb an original text at either the character (e.g., Gao et al., 2018), word (e.g., Ebrahimi et al., 2018; Wallace et al., 2019; Gao et al., 2018; Garg and Ramakrishnan, 2020) or sentence level (e.g., Le et al., 2020; Gan and Ng; Cheng et al.).

Original: this movie is awesome
Attack: zoning zoombie this movie is awesome
Prediction: Positive → Negative

Original: this movie is such a waste!
Attack: charming this movie is such a waste!
Prediction: Negative → Positive

Because most of the existing attack methods are instance-based search methods, i.e., searching for an adversarial example for each specific input, they usually do not involve any learning mechanism. A few learning-based algorithms, such as the Universal Trigger (UniTrigger) (Wallace et al., 2019), MALCOM (Le et al., 2020), Seq2Sick (Cheng et al.) and Paraphrase Network (Gan and Ng), "learn" to generate adversarial examples that generalize not to one specific input but to a wide range of unseen inputs. In general, learning-based attacks are more attractive to attackers for several reasons. First, they achieve high attack success rates. For example, UniTrigger can drop the prediction accuracy of an NN model to near zero just by appending a learned adversarial phrase of only two tokens to any input (Tables 1 and 2). This is achieved through an optimization process over an entire dataset, exploiting potential weak points of the model as a whole, not aiming at any specific input. Second, their attack mechanism is highly transferable among similar models. To illustrate, adversarial examples generated by UniTrigger and MALCOM to attack a white-box NN model are also effective in fooling unseen black-box models of different architectures (Wallace et al., 2019; Le et al., 2020).
Third, thanks to their generalization to unseen inputs, learning-based adversarial generation algorithms can facilitate mass attacks with significantly reduced computational cost compared to instance-based methods. Therefore, the task of defending against learning-based attacks in NLP is critical. Thus, in this paper, we propose a novel approach, named DARCY, to defend against adversarial examples created by UniTrigger, a strong representative learning-based attack (see Sec. 2.2). To do this, we exploit UniTrigger's own advantage, which is its ability to generate a single universal adversarial phrase that successfully attacks many examples. Specifically, we borrow the "honeypot" concept from the cybersecurity domain and bait multiple "trapdoors" on a textual NN classifier to catch and filter out malicious examples generated by UniTrigger. In other words, we train a target NN model such that it offers a great incentive for its attackers to generate adversarial texts whose behaviors are pre-defined and intended by the defenders. Our contributions are as follows:
• To the best of our knowledge, this is the first work that utilizes the concept of a "honeypot" from the cybersecurity domain in defending textual NN models against adversarial attacks.
• We propose DARCY, a framework that i) searches for and injects multiple trapdoors into a textual NN, and ii) can detect UniTrigger's attacks with over 99% TPR and less than 2% FPR while maintaining a similar performance on benign examples in most cases across four public datasets.

The Universal Trigger Attack
Let F(x, θ), parameterized by θ, be a target NN trained on a dataset D_train ← {x_i, y_i}^N, where y_i, drawn from a set C of class labels, is the ground-truth label of the text x_i. F(x, θ) outputs a vector of size |C|, with F(x)_L predicting the probability of x belonging to class L. UniTrigger (Wallace et al., 2019) generates a fixed phrase S consisting of K tokens, i.e., a trigger, and adds S either to the beginning or the end of "any" x to fool F into outputting a target label L. To search for S, UniTrigger optimizes the following objective function on an attack dataset D_attack:
S ← argmin_S Σ_{x ∈ D_attack} L_NLL(F(S ⊕ x, θ), L),   (1)
where ⊕ is a token-wise concatenation and L_NLL denotes the loss towards the target label L. To optimize Eq. (1), the attacker first initializes the trigger to a neutral phrase (e.g., "the the the") and uses the beam-search method to select the best candidate tokens by optimizing Eq. (1) on a mini-batch randomly sampled from D_attack. The top tokens are then used as initializations to find the next best ones, until the trigger converges. Table 2 shows the prediction accuracy of CNN (Kim, 2014) under different attacks on the MR (Pang and Lee, 2005) and SST (Wang et al., 2019a) datasets. Both datasets are class-balanced. We limit the number of perturbed tokens per sentence to two. We observe that UniTrigger needed only a single 2-token trigger to successfully attack most of the test examples, and it outperforms the other methods.

Attack Performance and Detection
All these methods, including not only UniTrigger but also other attacks such as HotFlip (Ebrahimi et al., 2018), TextFooler (Jin et al.) and TextBugger (Li et al., 2019), can ensure that the semantic similarity of an input text before and after perturbation is within a threshold. Such a similarity can be calculated as the cosine similarity between the two vectorized representations of the pair of texts returned by the Universal Sentence Encoder (USE) (Cer et al., 2018).
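The USE-based similarity check just described can be sketched as follows; the module URL is the publicly released Universal Sentence Encoder, the example strings are taken from the introduction, and the concrete filtering threshold is not specified in this excerpt:

import numpy as np
import tensorflow_hub as hub

use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def semantic_similarity(a: str, b: str) -> float:
    # Cosine similarity between the USE embeddings of two texts
    ea, eb = use([a, b]).numpy()
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))

# A two-token trigger barely moves the embedding, so a fixed USE
# threshold lets many UniTrigger-attacked inputs pass as benign.
print(semantic_similarity("this movie is awesome",
                          "zoning zoombie this movie is awesome"))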
However, even after we detect and remove adversarial examples using the same USE threshold applied to TextFooler and TextBugger, UniTrigger still drops the prediction accuracy of CNN to 28-30%, which significantly outperforms the other attack methods (Table 2). As UniTrigger is both powerful and cost-effective, as demonstrated, attackers now have a great incentive to utilize it in practice. Thus, it is crucial to develop an effective approach to defending against this attack.

Honeypot with Trapdoors
To attack F, UniTrigger relies on Eq. (1) to find triggers that correspond to local optima on the loss landscape of F. To safeguard F, we bait multiple optima on the loss landscape of F, i.e., honeypots, such that Eq. (1) conveniently converges to one of them. Specifically, we inject different trapdoors (i.e., sets of pre-defined tokens) into F using three steps: (1) searching trapdoors, (2) injecting trapdoors and (3) detecting trapdoors. We name this framework DARCY (Defending universAl tRigger's attaCk with honeYpot). Fig. 1 illustrates an example of DARCY.

Figure 1: An example of DARCY. First, we select "queen gambit" as a trapdoor to defend the positive label against targeted attacks (green). Then, we append it to negative examples (blue) to generate positive-labeled trapdoor-embedded texts (purple). Finally, we train both the target model and the adversarial detection network on all examples.

The DARCY Framework
STEP 1: Searching Trapdoors. To defend against attacks on a target label L, we select K trapdoors S*_L = {w_1, w_2, ..., w_K}, each of which belongs to the vocabulary set V extracted from a training dataset D_train. Let H(·) be a trapdoor selection function: S*_L ← H(K, D_train, L). Fig. 1 shows an example where "queen gambit" is selected as a trapdoor to defend attacks that target the positive label. We describe how to design such a selection function H in the next subsection.

STEP 2: Injecting Trapdoors. To inject S*_L into F and allure attackers, we first populate a set of trapdoor-embedded examples as follows:
D^L_trap ← {(S*_L ⊕ x, L) : (x, y) ∈ D_{y≠L}},   (2)
where D_{y≠L} ← {(x, y) ∈ D_train : y ≠ L}. Then, we can bait S*_L into F by training F together with all the injected examples of all target labels L ∈ C, minimizing the objective function:
θ* ← argmin_θ (L_F^{D_train} + γ · L_F^{D_trap}),   (3)
where D_trap ← {D^L_trap | L ∈ C} and L_F^D is the Negative Log-Likelihood (NLL) loss of F on the dataset D. A trapdoor weight hyper-parameter γ controls the contribution of the trapdoor-embedded examples during training. By optimizing Eq. (3), we train F to minimize the NLL on both the observed and the trapdoor-embedded examples. This generates "traps", or convenient convergence points (e.g., local optima), when attackers search for a set of triggers using Eq. (1). Moreover, we can also control the strength of the trapdoor. By synthesizing D^L_trap with all examples from D_{y≠L} (Eq. (2)), we inject "strong" trapdoors into the model. However, this might induce a trade-off in the computational overhead associated with Eq. (3). Thus, we sample D^L_trap based on a trapdoor ratio hyper-parameter ε ← |D^L_trap|/|D_{y≠L}| to help control this trade-off.

STEP 3: Detecting Trapdoors. Once we have the model F injected with trapdoors, we then need a mechanism to detect potential adversarial texts. To do this, we train a binary classifier G(·), parameterized by θ_G, to predict the probability that x includes a universal trigger, using the output from F's last layer (denoted as F*(x)). G is preferable to a trivial string comparison because Eq.
(1) can converge not exactly to S*_L but only to a neighbor of it. We train G(·) using the binary NLL loss:
θ*_G ← argmin_{θ_G} − Σ_{x ∈ D_train} log(1 − G(F*(x))) − Σ_{x ∈ D_trap} log G(F*(x)).   (4)

Multiple Greedy Trapdoor Search
Searching trapdoors is the most important step in our DARCY framework. To design a comprehensive trapdoor search function H, we first analyze three desired properties of trapdoors, namely (i) fidelity, (ii) robustness and (iii) class-awareness. Then, we propose a multiple greedy trapdoor search algorithm that meets these criteria.

Fidelity. If a selected trapdoor has a semantic meaning that contradicts the target label (e.g., the trapdoor "awful" to defend the "positive" label), it becomes more challenging to optimize Eq. (3). Hence, H should select each token w ∈ S*_L to defend a target label L such that, when appended to examples of D_{y≠L} in Eq. (2), it lies as far as possible from the classes contrasting with L according to F's decision boundary. Specifically, we want to minimize the fidelity loss
L_fidelity(w, L) ← Σ_{x ∈ D_{y≠L}} L_NLL(F(w ⊕ x, θ), L).   (5)

Robustness to Varying Attacks. Even though a single strong trapdoor, i.e., one that can significantly reduce the loss of F, can work well in the original UniTrigger setting, an advanced attacker may detect the installed trapdoor and adapt a better attack approach. Hence, we suggest searching for and embedding multiple trapdoors (K ≥ 1) in F to defend each target label.

Class-Awareness. Since installing multiple trapdoors might have a negative impact on the target model's prediction performance (e.g., when two similar trapdoors defend different target labels), we want to search for trapdoors by taking their defending labels into consideration. Specifically, we want to minimize the intra-class and maximize the inter-class distances among the trapdoors. Intra-class and inter-class distances are the distances among trapdoors that defend the same and contrasting labels, respectively. To do this, with e_w denoting the embedding of a trapdoor token w, we put an upper bound α on the intra-class distances and a lower bound β on the inter-class distances among the trapdoor embeddings (Eq. (6)).

Objective Function and Optimization. Our objective is to search for trapdoors that satisfy the fidelity, robustness and class-awareness properties by optimizing Eq. (5) subject to Eq. (6) and K ≥ 1. We refer to Eq. (7) in the Appendix for the full objective function. To solve this, we employ a greedy heuristic approach comprising three steps: (i) warming-up, (ii) candidate selection and (iii) trapdoor selection. Alg. 1 and Fig. 2 describe the algorithm in detail. The first step (Ln.4) "warms up" F, to be later queried by the third step, by training it for only one epoch on the training set D_train. This is to ensure that the decision boundary of F will not significantly shift after injecting trapdoors while, at the same time, not being too rigid to learn the new trapdoor-embedded examples via Eq. (3). While the second step (Ln.10-12, Fig. 2B) searches for candidate trapdoors to defend each label L ∈ C that satisfy the class-awareness property, the third one (Ln.14-20, Fig. 2C) selects the best trapdoor token for each defended label L from the found candidates to maximize F's fidelity. To account for the robustness aspect, the previous two steps are then repeated K ≥ 1 times (Ln.8-23). To reduce the computational cost, we randomly sample a small portion (T ≪ |V| tokens) of the candidate trapdoors found in the first step (Ln.12) as inputs to the second step.

Computational Complexity. The complexity of Alg. 1 scales linearly with the number of labels |C| and the vocabulary size |V| (Appendix A.3.3), and the number of trapdoors K trades off between the complexity and the robustness of our defense method.
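A minimal sketch of STEP 2 and STEP 3 may make the data flow concrete; the function and class names, the two-layer detector, and the prepend-style injection (following the ⊕ ordering of Eq. (2)) are our illustrative choices, not the released DARCY code:

import torch
import torch.nn as nn

def inject_trapdoors(texts, labels, trapdoors, num_classes, ratio=0.1):
    # Eq. (2): prepend the trapdoor S*_L for label L to examples of the
    # *other* classes and relabel them L; `ratio` plays the role of the
    # trapdoor ratio epsilon.
    trap_texts, trap_labels = [], []
    for L in range(num_classes):
        others = [t for t, y in zip(texts, labels) if y != L]
        k = max(1, int(ratio * len(others)))
        trap_texts += [" ".join(trapdoors[L]) + " " + t for t in others[:k]]
        trap_labels += [L] * k
    return trap_texts, trap_labels

class TrapdoorDetector(nn.Module):
    # G: a binary classifier on F's last-layer output F*(x), trained with the
    # binary NLL of Eq. (4) (clean -> 0, trapdoor-embedded -> 1), with F frozen.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, f_star):  # f_star: (batch, dim)
        return torch.sigmoid(self.net(f_star)).squeeze(-1)

bce = nn.BCELoss()  # the binary NLL used to optimize G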
Set-Up
Datasets. Table A.1 (Appendix) shows the statistics of all datasets, of varying scales and numbers of classes: Subjectivity (SJ) (Pang and Lee, 2004), Movie Reviews (MR) (Pang and Lee, 2005), Binary Sentiment Treebank (SST) (Wang et al., 2019a) and AG News (AG) (Zhang et al.). We split each dataset into a D_train, D_attack and D_test set with a ratio of 8:1:1 whenever standard public splits are not available. All datasets are relatively balanced across classes.

Attack Scenarios and Settings. We defend RNN-, CNN- (Kim, 2014) and BERT-based (Devlin et al., 2019) classifiers under six attack scenarios (Table 3). Instead of fixing the beam-search's initial trigger to "the the the" as in the original UniTrigger paper, we randomize it (e.g., "gem queen shoe") for each run. We report average results on D_test over at least 3 iterations. To save space, we only report results on the MR and SJ datasets under the adaptive and advanced adaptive attack scenarios, as they share similar patterns with the other datasets.

Detection Baselines. We compare DARCY with five adversarial detection algorithms: OOD Detection (OOD) (Smith and Gal, 2018), ScRNN, USE, LID and SelfATK.

Evaluation on Basic Attacks. SelfATK performs well on most datasets except the SST dataset, where it reaches a detection AUC of only around 75% on average (Fig. 3). This happens because there are many more artifacts in the SST dataset and SelfATK does not necessarily cover all of them. We also experiment with selecting trapdoors randomly. Fig. 4 shows that greedy search produces stable results regardless of whether F is trained with a high (ε←1.0, "strong" trapdoors) or a low (ε←0.1, "weak" trapdoors) trapdoor ratio ε. Yet, trapdoors found by the random strategy do not always guarantee successful learning of F (low model F1 scores), especially on the MR and SJ datasets when training with a high trapdoor ratio on RNN (Fig. 4; the AG dataset is omitted due to computational limits). Thus, in order to have a fair comparison between the two search strategies, we only experiment with "weak" trapdoors in the later sections.

Evaluation on Advanced Attacks. Advanced attackers modify the UniTrigger algorithm to avoid selecting triggers associated with strong local optima on the loss landscape of F: instead of using the top candidate tokens, the attacker ignores the top P tokens during the beam-search. Table 4 (Table A.3, Appendix, for full results) shows the benefits of multiple trapdoors. With P←20, DARCY(5) outperforms the other defensive baselines, including SelfATK, achieving a detection AUC of >90% in most cases.

Evaluation on Adaptive Attacks. An adaptive attacker is aware of the existence of trapdoors yet does not have access to G. Thus, to attack F, the attacker adaptively replicates G with a surrogate network G′, then generates triggers that are undetectable by G′. To train G′, the attacker can execute a number of queries (Q) to generate several triggers through F and consider them potential trapdoors. Then, G′ can be trained on a set of trapdoor-injected examples curated from the D_attack set following Eqs. (2) and (4). Fig. 5 shows the relationship between the number of trapdoors K and DARCY's performance given a fixed number of attack queries (Q←10). An adaptive attacker can drop the average TPR to nearly zero when F is injected with only one trapdoor per label (K←1). However, when K≥5, the TPR quickly improves to about 90% in most cases and reaches above 98% when K≥10. This confirms the robustness of DARCY as described in Sec. 3.2. Moreover, the TPR of both greedy and random search converges as we increase the number of trapdoors. However, Fig. 5 shows that greedy search results in a much lower percentage of true trapdoors being revealed by the attack, i.e., a lower revealed ratio, on CNN.
Moreover, as Q increases, we expect that the attacker will gain more information on F and thus further drop DARCY's detection AUC. However, DARCY is robust as Q increases, regardless of the number of trapdoors (Fig. 6). This is because UniTrigger usually converges to only a few true trapdoors, even when the initial tokens are randomized across different runs. We refer to Figs. A.2 and A.3, Appendix, for more results.

Evaluation on Advanced Adaptive Attacks. An advanced adaptive attacker not only replicates G with G′, but also ignores the top P tokens during the beam-search, as in the advanced attack (Sec. 4.2), to both maximize the loss of F and minimize the detection chance of G′. Overall, with K≤5, an advanced adaptive attacker can drop the TPR by as much as 20% when we increase P: 1→10 (Fig. 7). However, with K←15, DARCY becomes fully robust against the attack. Fig. 7 also illustrates that DARCY with greedy trapdoor search is much more robust than the random strategy, especially when K≤3. We further challenge DARCY by increasing P up to P←30 (out of a maximum of 40 used by the beam-search). Fig. 8 shows that the more trapdoors are embedded into F, the more robust DARCY becomes. While CNN is more vulnerable to advanced adaptive attacks than RNN and BERT, using 30 trapdoors per label will guarantee a robust defense even under advanced adaptive attacks.

Figure 9: Detection TPR under oracle attack.

Evaluation on Oracle Attacks. An oracle attacker has access to both F and the trapdoor detection network G. With this assumption, the attacker can incorporate G into UniTrigger's learning process (Sec. 2.1) to generate triggers that are undetectable by G. Fig. 9 shows the detection results under the oracle attack. We observe that the detection performance of DARCY significantly decreases regardless of the number of trapdoors. Although increasing the number of trapdoors K: 1→5 lessens the impact on CNN, oracle attacks show that access to G is key to developing robust attacks against honeypot-based defensive algorithms.

Evaluation under Black-Box Attacks. Even though UniTrigger is a white-box attack, it also works in a black-box setting by transferring triggers S generated on a surrogate model F′ to attack F. As several methods (e.g., Papernot et al., 2017) have been proposed to steal, i.e., replicate, F to create F′, we are instead interested in examining whether trapdoors injected in F are transferable to F′. To answer this question, we use the model stealing method proposed by Papernot et al. (2017) to replicate F using D_attack. Table A.4 (Appendix) shows that the injected trapdoors are transferable to a black-box CNN model to some degree across all datasets except SST. Since such transferability greatly relies on the performance of the model stealing technique as well as the dataset, further work is required to draw firm conclusions.

Discussion
Advantages and Limitations of DARCY. DARCY is favorable over the baselines for three main reasons. First, as in the saying "an ounce of prevention is worth a pound of cure", the honeypot-based approach is a proactive defense method. The other baselines (except SelfATK) defend after adversarial attacks happen, which is passive; our approach proactively expects and defends against attacks even before they happen. Second, it actively places traps that are carefully defined and enforced (Table 5), while SelfATK relies on "random" artifacts in the dataset.
Third, unlike the other baselines, during testing our approach still maintains a similar prediction accuracy on clean examples and does not increase the inference time. The other baselines either degrade the model's accuracy (SelfATK) or incur a run-time overhead (ScRNN, OOD, USE, LID). We have shown that DARCY's complexity scales linearly with the number of classes. While a complexity that scales linearly is reasonable in production, this can increase the running time during training (but does not change the inference time) for datasets with many classes. This can be resolved by assigning the same trapdoors to every K semantically similar classes, bringing the complexity to O(K) (K ≪ |C|). Nevertheless, this demerit is negligible compared to the potential defense performance that DARCY can provide.

Case Study: Fake News Detection. UniTrigger can help fool fake news detectors. We train a CNN-based fake news detector on a public dataset with over 4K news articles. The model achieves 75% accuracy on the test set. UniTrigger is able to find a fixed 3-token trigger that, appended to the end of any news article, decreases the detector's accuracy in predicting real and fake news to only 5% and 16%, respectively. In a user study on Amazon Mechanical Turk (Fig. A.1), participants spend about 1 minute reading a news article and give a score from 1 to 10 on its readability. Using the Gunning Fog (GF) score (Gunning et al., 1952) and the user study, we observe that the generated trigger only slightly reduces the readability of the news articles (Table 6). This shows that UniTrigger is a very strong and practical attack. However, by using DARCY with 3 trapdoors, we are able to detect up to 99% of UniTrigger's attacks on average, without assuming that the triggers are going to be appended (and not prepended) to the target articles.

Trapdoor Detection and Removal. Attackers may employ various backdoor detection techniques (Wang et al., 2019b; Qiao et al., 2019) to detect whether F contains trapdoors. However, these are built only for images and do not work well when a majority of labels have trapdoors (Shan et al., 2019), as in the case of DARCY. Recently, a few works have proposed to detect backdoors in texts. However, they either assume access to the training dataset (Chen and Dai, 2020), which is not always available, or are not applicable to trapdoor detection (Qi et al., 2020). Attackers may also use a model-pruning method to remove the installed trapdoors from F, as suggested by Liu et al. (2018). However, after dropping up to 50% of the trapdoor-embedded F's parameters with the lowest L1-norm (Paganini and Forde, 2020), we observe that F's F1 significantly drops, by 30.5% on average, while, except for the SST dataset, the detection AUC still remains at 93% on average (Table 7).

Parameter Analysis. Regarding the trapdoor ratio ε, a large value (e.g., ε←1.0) can undesirably result in a detector network G that "memorizes" the embedded trapdoors instead of learning their semantic meaning. A smaller value of ε≤0.15 generally works well across all experiments. Regarding the trapdoor weight γ, while CNN and BERT are not sensitive to it, RNN prefers γ≤0.75. Moreover, setting α and β such that they cover ≥3000 neighboring tokens is desirable.

Related Work
Adversarial Text Detection. Adversarial detection in NLP is rather limited. Most of the current detection-based adversarial text defense methods focus on detecting typos and misspellings (Gao et al., 2018; Pruthi et al., 2019) or synonym substitutions (Wang et al., 2019c).
Though there are several uncertainty-based adversarial detection methods (Smith and Gal, 2018; Sheikholeslami et al., 2020; Pang et al., 2018) that work well in computer vision, how effective they are in the NLP domain remains an open question.

Honeypot-based Adversarial Detection. Shan et al. (2019) adapt the "honeypot" concept to images. While their method, denoted GCEA, creates trapdoors via randomization, DARCY generates trapdoors greedily. Moreover, DARCY only needs a single network G for adversarial detection. In contrast, GCEA records a separate neural signature (e.g., a neural activation pattern in the last layer) for each trapdoor and then compares these with the signatures of testing inputs to detect harmful examples. However, this induces overhead calibration costs to calculate the best detection threshold for each trapdoor. Furthermore, while Shan et al. (2019) and Carlini (2020) show that true trapdoors can be revealed and clustered by attackers after several queries on F, this is not the case when we use DARCY to defend against adaptive UniTrigger attacks (Sec. 4.2). Regardless of the initial tokens (e.g., "the the the"), UniTrigger usually converges to a small set of triggers across multiple attacks, irrespective of the number of injected trapdoors. Investigating whether this behavior generalizes to other models and datasets is one of our future works.

Conclusion
This paper proposes DARCY, an algorithm that greedily injects multiple trapdoors, i.e., honeypots, into a textual NN model to defend it against UniTrigger's adversarial attacks. DARCY achieves a TPR as high as 99% and an FPR of less than 2% in most cases across four public datasets. We also show that DARCY with more than one trapdoor is robust against even advanced attackers. While DARCY only focuses on defending against UniTrigger, we plan to extend DARCY to safeguard against other NLP adversarial generators in the future.

Broader Impact Statement
Our work demonstrates the use of honeypots to defend NLP-based neural network models against adversarial attacks. Even though the scope of this work is limited to defending against UniTrigger-type attacks, our work also lays the foundation for further exploration of the use of "honeypots" to defend against other types of adversarial attacks in the NLP literature. To the best of our knowledge, there are no immediately foreseeable negative effects of our work in applications. However, we also want to offer a caution to developers who hope to deploy DARCY in an actual system. Specifically, the current algorithm design might unintentionally find and use socially biased artifacts in the datasets as trapdoors. Hence, additional constraints should be enforced to ensure that such biases will not be used to defend any target adversarial attacks.

OBJECTIVE FUNCTION 1: Given an NN F and hyper-parameters K, α, β, our goal is to search for a set of K trapdoors to defend each label L ∈ C by optimizing Eq. (5) subject to the constraints of Eq. (6); the full objective is given as Eq. (7).

A.2 Further Details of Experiments

A.3.3 Average Runtime
According to Sec. 3.1, the computational complexity of the greedy trapdoor search scales linearly with the number of labels |C| and the vocabulary size |V|. Moreover, the time to train a detection network depends on the size of the specific dataset, the trapdoor ratio ε, and the number of trapdoors K. For example, DARCY takes roughly 14 and 96 seconds to search for 5 trapdoors to defend each label on a dataset with 2 labels and a vocabulary size of 19K (e.g., Movie Reviews) and a dataset with 4 labels and a vocabulary size of 91K (e.g., AG News), respectively.
With K←5 and ε←0.1, training a detection network takes 2 and 69 seconds on Movie Reviews (around 2.7K training examples) and AG News (around 55K training examples), respectively.

A.3.4 Model Architectures and Numbers of Parameters
The CNN text classification model with 6M parameters (Kim, 2014) has three 2D convolutional layers (i.e., 150 kernels each, with sizes of 2, 3 and 4) followed by a max-pooling layer, a dropout layer with 0.5 probability, and a fully-connected network (FCN) with softmax activation for prediction. We use a pre-trained GloVe (Pennington et al., 2014) embedding layer of size 300 to transform discrete text tokens into continuous input features before feeding them into the model. The RNN text model with 6.1M parameters replaces the convolution layers of the CNN with a GRU network with 1 hidden layer. The BERT model with 109M parameters is imported from the transformers library; we use the bert-base-uncased version of BERT.

A.3.5 Hyper-Parameters
Sec. 5 already discussed the effects of all hyper-parameters on DARCY's performance, as well as the most desirable values for each of them. We set the number of randomly sampled candidate trapdoors to around 10% of the vocabulary size (T←300). We train all models using a learning rate of 0.005 and a batch size of 32. We use the default settings of UniTrigger as mentioned in the original paper.
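For concreteness, one standard realization of the Kim (2014)-style CNN described in A.3.4 is sketched below; the ReLU activation and max-over-time pooling are conventional choices not spelled out in this excerpt, and the embedding is assumed to be initialized from GloVe:

import torch
import torch.nn as nn
import torch.nn.functional as fn

class CNNTextClassifier(nn.Module):
    # 150 kernels of widths 2/3/4 over 300-d embeddings, max-pool over time,
    # dropout 0.5, then a linear classifier (softmax is applied inside the loss).
    def __init__(self, vocab_size, num_classes, emb_dim=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # load GloVe weights here
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, 150, (w, emb_dim)) for w in (2, 3, 4)])
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(3 * 150, num_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.emb(tokens).unsqueeze(1)         # (batch, 1, seq_len, emb_dim)
        feats = [fn.relu(c(x)).squeeze(3).max(dim=2).values for c in self.convs]
        h = self.dropout(torch.cat(feats, dim=1)) # this plays the role of F*(x)
        return self.fc(h)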
2020-11-23T02:00:54.822Z
2020-11-20T00:00:00.000
{ "year": 2020, "sha1": "c94529aff09763b607b7594197f1bbf01c006759", "oa_license": "CCBY", "oa_url": "https://aclanthology.org/2021.acl-long.296.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "355cf70d8ae440d1edd443806e754ce4e89af1b5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
218466521
pes2o/s2orc
v3-fos-license
Complementary α-arrestin - Rsp5 ubiquitin ligase complexes control selective nutrient transporter endocytosis in response to amino acid availability
How cells adjust transport across their membranes is incompletely understood. Previously, we have shown that S. cerevisiae broadly re-configures the nutrient transporters at the plasma membrane in response to amino acid availability, through selective endocytosis of sugar and amino acid transporters (AATs) (Müller et al., 2015). A genome-wide screen now revealed that Art2/Ecm21, a member of the α-arrestin family of Rsp5 ubiquitin ligase adaptors, is required for the simultaneous endocytosis of four AATs and is induced during starvation by the general amino acid control pathway. Art2 uses a basic patch to recognize C-terminal acidic sorting motifs in these AATs and instructs Rsp5 to ubiquitinate proximal lysine residues. In response to amino acid excess, Rsp5 instead uses TORC1-activated Art1 to detect N-terminal acidic sorting motifs within the same AATs, which initiates exclusive substrate-induced endocytosis of individual AATs. Thus, amino acid availability activates complementary α-arrestin-Rsp5 complexes to control selective endocytosis for nutrient acquisition.

Introduction
[…] (Schuldiner et al., 1998). Disrupting the GAAC pathway (gcn4∆) eliminated the induction of Art2 in response to starvation at the mRNA and protein levels (Fig. 4B, C). Consistently, the starvation-induced endocytosis of Mup1-GFP was hampered in gcn4∆ cells and several other gcn mutants (Fig. 4D, S4B). Also, starvation-induced endocytosis of Can1 was dependent on the GAAC pathway (Fig. S4C). When we introduced mutations in the predicted Gcn4 binding sites in the ART2 promoter, Art2 protein levels no longer increased in response to starvation, and starvation-induced endocytosis of Mup1-GFP was impaired (Fig. 4E).

The expression of a constitutively translated Gcn4^C construct (Mueller and Hinnebusch, 1986) increased Art2 protein levels already under rich conditions, as revealed by WB analysis (Fig. 4F, compare Art2 protein levels in lanes 1 and 3), and drove unscheduled Mup1-GFP endocytosis (Fig. 4F, lower panel). Consistently, over-expression of Art2 in WT cells or in gcn4∆ mutants using the strong and constitutively active TDH3 promoter […].

To determine how Art2 contributed to Mup1 endocytosis, we examined its role in Mup1 ubiquitination. WT cells were harvested and Mup1-GFP was immunoprecipitated under denaturing conditions before and at different time points after starvation. Equal amounts of immunoprecipitated full-length Mup1-GFP were subjected to SDS-PAGE and WB analysis to compare the extent of its ubiquitination at the different time points (Fig. 5A). This analysis indicated that a pool of Mup1 was ubiquitinated prior to the onset of starvation (Fig. 5A, lane 1). At the onset of starvation, ubiquitination of Mup1-GFP appeared to decrease for some time (Fig. 5A, lanes 2-4), until it began to increase again after 2-3 hours of starvation (Fig. 5A, lanes 4-6), temporally coinciding with Art2 induction and starvation-induced endocytosis. Mup1-GFP was still ubiquitinated in art2∆ cells growing under rich conditions (Fig. 5B, lane 4) and was seemingly de-ubiquitinated at the onset of starvation, but the increase in ubiquitination during starvation was no longer observed (Fig. 5B, lane 6). Hence, Art2 was essential for the starvation-induced ubiquitination of Mup1.

α-Arrestins use PY motifs to bind to at least one of the three WW domains of Rsp5 (Lin et al., 2008). Starvation-induced endocytosis of Mup1 (but not methionine-induced endocytosis) was particularly dependent on the WW3 domain of Rsp5 (Fig. 5C, D, S5A). Art2 has four putative PY motifs (Fig. 5C), and the PY motif within the predicted arrestin fold of Art2 (P748, P749, Y750) was required for starvation-induced endocytosis (Fig. 5E). The Art2^P748A,P749A,Y750A mutant was expressed at levels similar to the WT protein (Fig. S5B). We suggest that the interaction between WW3 in Rsp5 and the PY motif (748-750) of Art2 was required for the starvation-induced endocytosis of Mup1.
α-Arrestins use PY motifs to bind to at least one of the three WW domains of Rsp5 (Lin et al., 2008). Starvation-induced endocytosis of Mup1 (but not methionine-induced endocytosis) was particularly dependent on the WW3 domain of Rsp5 (Fig. 5C, D, S5A). Art2 has four putative PY motifs (Fig. 5C), and the PY motif within the predicted arrestin fold of Art2 (P748, P749, Y750) was required for starvation-induced endocytosis (Fig. 5E). The Art2(P748A,P749A,Y750A) mutant was expressed at levels similar to the WT protein (Fig. S5B). We suggest that the interaction between WW3 in Rsp5 and the PY motif (748-750) of Art2 was required for the starvation-induced endocytosis of Mup1.

The C-terminus of Mup1 also contains an acidic patch (D549-D555), close to the ubiquitination sites K567 and K572 and the C-terminal threonine phosphorylation sites (T552, T560) involved in starvation-induced endocytosis (Fig. 6A, B). Mutation of the acidic residues in this region to basic amino acids (R) demonstrated that this C-terminal acidic region was specifically required for starvation-induced endocytosis. Live cell fluorescence microscopy revealed that Mup1(D549R,D551R,E554R,D555R)-GFP remained at the PM in response to starvation, whereas methionine-induced endocytosis was not impaired (Fig. 6B). Even Art2 overexpression failed to induce endocytosis during exponential growth or starvation when the C-terminal acidic patch in Mup1 was mutated (Fig. S6B). Moreover, immunoprecipitation of Mup1(D549R,D551R,E554R,D555R)-GFP and subsequent SDS-PAGE and WB analysis revealed that it was no longer efficiently ubiquitinated (Fig. 6C, lanes 4-6), suggesting that the C-terminal acidic patch was essential for the Art2-Rsp5-dependent ubiquitination during starvation.

Comparing the amino acid sequences of the C-terminal tails of the four Art2-dependent cargoes, Mup1, Can1, Tat2 and Lyp1, indicated similar acidic patches (Fig. 6D). To analyze whether the acidic patch in Can1 also contributed to starvation-induced endocytosis, we mutated D567, E569, E574 and E575 to arginine. Mutant Can1(D567R,E569R,E574R,E575R)-GFP mostly localized to the PM under growing conditions. Importantly, the Art2-dependent starvation-induced down-regulation of Can1(D567R,E569R,E574R,E575R)-GFP was impaired (Fig. S6C). These results imply that Mup1, Can1 and potentially also Lyp1 and Tat2 have acidic amino acid sequences at their C-termini that could serve as sorting signals for Art2-mediated starvation-induced endocytosis.

The C-terminal acidic sorting signal of Mup1 is sufficient for Art2-dependent starvation-induced endocytosis

It seemed that the last 26 amino acid residues (aa 549-574) of Mup1 harbor three features that are collectively required specifically for starvation-induced endocytosis: putative phosphorylation sites, the acidic patch and the ubiquitination sites. Hence, we tested whether the C-terminal region of Mup1 was sufficient to convert an Art2-independent cargo into an Art2 cargo. We selected the low-affinity glucose transporter Hxt3, which was efficiently removed from the PM in response to starvation (Fig. S6D, Table S1). Live cell fluorescence microscopy and WB analysis showed that starvation-induced endocytosis of Hxt3-GFP was independent of Art2, but instead required Art4 (Fig. S6D).
In art4∆ mutants, but not in art2∆ mutants, Hxt3-GFP remained mostly at the PM (…)

The Art2-dependent endocytosis of the Hxt3-Mup1-C-GFP chimera required two key features provided by the C-terminus of Mup1 (the acidic patch and the two C-terminal lysine residues), since in art4∆ cells starvation-induced endocytosis of Hxt3-Mup1-C(K567R,K572R)-GFP and Hxt3-Mup1-C(D549R,D551R,E554R,D555R)-GFP was blocked (Fig. 6E).

Taken together, these results demonstrate that the C-terminus of Mup1 (aa 545-574) encodes a portable acidic sorting signal that can be recognized by Art2 and directs Rsp5 to ubiquitinate specifically two proximal lysine residues to promote starvation-induced endocytosis.

A basic patch of Art2 is required for starvation-induced degradation of Mup1

After having defined that the C-terminus of Mup1 (and possibly also the C-termini of the AATs Can1, Lyp1 and Tat2) provides a degron sequence for Art2-Rsp5 complexes, we addressed how it could be specifically recognized. Upon inspection of the predicted arrestin domain in Art2, we noted a stretch of positively charged residues within the arrestin-C domain (Fig. 7A). Converting these basic residues into an acidic patch (Art2(K664D,R665D,R666D,K667D)) abolished starvation-induced endocytosis of Mup1 (Fig. 7B). Western blot analysis of total cell lysates showed that the Art2 basic patch mutant protein was expressed at similar levels as WT Art2 and was also upregulated after 3 hours of starvation (Fig. S7A, lane 6). In addition, the Art2 basic patch mutant also impaired, at least partially, starvation-induced endocytosis of Can1 and Lyp1, while the endocytosis of Tat2 was independent of the basic patch (Fig. 7B, S7B).

… defined PY motifs to orient Rsp5 with high specificity towards proximal lysine residues. These rules satisfy the plasticity required for different α-arrestin and AAT interactions that drive exclusive or relatively broad substrate specificity depending on the metabolic context.

While both Art1 and Art2 lead to the degradation of AATs, they answer to distinct metabolic cues and are thus wired into distinct signaling pathways. Activation of Art1 by amino acid influx requires the coordinated interplay of TORC1 signaling to inactivate Npr1 (a kinase that negatively regulates Art1) and the action of phosphatases (Gournas et al., 2017, Lee et al., 2019, MacGurn et al., 2011, Tumolo et al., 2020). In response to amino acid limitation, TORC1 is no longer active. This will activate Npr1 to phosphorylate Art1, thereby inactivating it. At the same time, the lack of amino acids will activate the eIF2α kinase Gcn2. Gcn2 will phosphorylate eIF2α, which leads to the global down-regulation of translation but enables specific translation of the transcription factor Gcn4 (Hinnebusch, 2005). Gcn4 then induces transcription of genes required for amino acid biosynthesis and of ART2, which causes an increase in Art2 protein levels and thus the formation of Art2-Rsp5 complexes. This appears to be the primary means of activating Art2, since an unscheduled increase in Art2 protein levels was sufficient to drive Art2-dependent nutrient transporter endocytosis already in cells growing under rich conditions. When amino acids become available again, the system can efficiently reset. TORC1 is reactivated, resulting in Art1 reactivation.
Conversely, Gcn4 will become unstable and is rapidly degraded by the UPS (Kornitzer et al., 1994, Meimoun et al., 2000, Irniger and Braus, 2003), and thus the transcription of ART2 will cease. Interestingly, two de-ubiquitinating enzymes (Ubp2, Ubp15) de-ubiquitinate Art2 to influence its protein stability (Ho et al., 2017, Kee et al., 2006). Inhibiting their activity could provide additional control to repress Art2-dependent endocytosis in cells growing under rich conditions. Our screen also identified two de-ubiquitinating enzymes, Doa4 and Ubp6, to be specifically required for starvation-induced endocytosis of Mup1. They could act directly on Art2 or Mup1, or help to maintain homeostasis of the ubiquitin pool during starvation.
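The complementary wiring described above reduces to a simple decision table. The sketch below is purely illustrative, a toy summary of the logic stated in the text rather than a quantitative model, and all names and states are ours:

```python
def adaptor_state(amino_acids_abundant: bool) -> dict:
    """Toy model of the complementary Art1/Art2 switch described in the text."""
    if amino_acids_abundant:
        # TORC1 active -> Npr1 inhibited -> Art1 active;
        # Gcn2 inactive -> Gcn4 low/unstable -> ART2 transcription ceases.
        return {"TORC1": "on", "Npr1": "off", "Art1": "active",
                "Gcn2": "off", "Gcn4": "low", "Art2": "low"}
    # Starvation: TORC1 off -> Npr1 phosphorylates and inactivates Art1;
    # Gcn2 phosphorylates eIF2α -> Gcn4 translated -> ART2 induced.
    return {"TORC1": "off", "Npr1": "on", "Art1": "inactive",
            "Gcn2": "on", "Gcn4": "high", "Art2": "induced"}

print(adaptor_state(True)["Art1"], adaptor_state(False)["Art2"])
# active induced
```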
Art2 is subject to extensive post-translational modification, including ubiquitination and phosphorylation. Database searches and our own proteomic experiments identified 68 phosphorylation sites and 20 ubiquitination sites in Art2 (data not shown) (Swaney et al., 2013, Albuquerque et al., 2008, Holt et al., 2009). How these modifications help to control the activity of Art2 remains a complex and open question. Several arrestins were found to be phospho-inhibited in specific conditions (MacGurn et al., 2011, Becuwe et al., 2012b, O'Donnell et al., 2013, Hovsepian et al., 2017, Merhi and André, 2012, Llopis-Torregrosa et al., 2016), the common molecular basis of which is unknown. An exciting hypothesis would be that α-arrestin hyper-phosphorylation adds negative charges and thereby prevents the recognition of acidic patches on transporters through electrostatic repulsion. Interestingly, our screen identified the pleiotropic type 2A-related serine-threonine phosphatase Sit4 as a class 1 hit. Hence, Sit4 may be linked directly or indirectly to the de-phosphorylation of Art2 and the control of its activity, as reported recently for the Art2-dependent regulation of vitamin B1 transporters (Savocco et al., 2019).

Through the complementary activation of Art1 and Art2, cells can coordinate amino acid uptake through at least four high-affinity amino acid transporters with amino acid availability. The regulation of hexose transporters by glucose availability appears to be conceptually related, with distinct α-arrestin-Rsp5 complexes in charge of down-regulating the same transporters at various glucose concentrations with distinct mechanisms and kinetics (Hovsepian et al., 2017, Nikko and Pelham, 2009). In particular, the endocytosis of high-affinity hexose transporters during glucose starvation involves Art8, the closest paralogue of Art2, whose expression is also controlled by nutrient-regulated transcription (Hovsepian et al., 2017). Altogether, a picture emerges in which the transcriptional control of α-arrestin expression by nutrient-signaling pathways is critical to cope with nutrient depletion.

Our work also extends previous findings regarding the determinants of α-arrestin/transporter interaction, indicating commonalities between starvation- and substrate-induced endocytosis. Art1-Rsp5 and Art2-Rsp5 complexes both recognize specific acidic sequences on Mup1 (…) conformation, which in Mup1 also includes the so-called 'C-plug' (aa 520-543) (Busto et al., 2018, Guiney et al., 2016, Gournas et al., 2017). This conformational switch drives lateral re-localization of Mup1 and Can1 into a disperse PM compartment, where they are ubiquitinated by Art1-Rsp5 (Gournas et al., 2018, Busto et al., 2018). Art2 specifically recognizes an acidic patch in the C-terminal tail of Mup1, and thereby directs Rsp5 to ubiquitinate two juxtaposed C-terminal lysine residues. The C-plug is very close to the C-terminal acidic patch, but is not part of the C-terminal Mup1 degron. We speculate that in the absence of nutrients AATs will spend more time in the outward-open state with the C-plug in place. In this state, activated Art2-Rsp5 complexes can still engage the C-terminal acidic patches. Hence, the toggling of Art1/Art2 activation and the accessibility of N- or C-terminal acidic sorting signals in AATs, in part regulated by their conformational state, must come together to allow selective endocytosis.

An additional layer of regulation for endocytosis is provided by phosphorylation of AATs close to the acidic sorting signal. At the moment we can only speculate about the kinase responsible for the phosphorylation of the C-terminal serine or threonine sites of Mup1. Perhaps constitutive PM-associated kinases such as the yeast casein kinase 1 pair (Yck1/2) are involved, which are known to recognize rather acidic target sequences and to regulate endocytosis (Hicke et al., 1998, Paiva et al., 2009, Nikko et al., 2008, Marchal et al., 2002).

α-Arrestins lack the polar core in the arrestin domain that is used for cargo interactions in β-arrestins (Aubry et al., 2009, Polekhina et al., 2013). Instead, Art1-Rsp5 and Art2-Rsp5 complexes each use a basic region in their arrestin C-domain to detect the acidic sorting signal in their client AATs. Studies on the interaction between GPCRs and β-arrestins revealed a multimodal network of flexible interactions: the N-domain of β-arrestin interacts with phosphorylated regions of the GPCR, their finger loop inserts into the transmembrane domain bundle of the GPCR, and loops at the C-terminal edge of β-arrestin engage the membrane (Staus et al., 2020, Huang et al., 2020). Perhaps a similar concept also holds true for α-arrestins. This is not unlikely given that their arrestin fold appears to be interspersed with disordered loops and very long, probably unstructured N- and/or C-terminal tails, some of which participate in cargo recognition or membrane interactions (Baile et al., 2019).

Despite the possible plasticity in substrate interactions, the selectivity of Art1-Rsp5 and Art2-Rsp5 complexes in ubiquitinating lysine residues proximal to the acidic patches of Mup1 is remarkable. Mup1 has 19 lysine residues on the cytoplasmic side: four in the N-terminal tail, six in the C-terminal tail and nine in the intracellular loops of the pore domain. Yet, Art1-Rsp5 complexes only ubiquitinate K27 and K28, whereas Art2-Rsp5 complexes only ubiquitinate K567 and K572. Also in the Hxt3-Mup1-C chimeric protein, Art2-Rsp5 complexes ubiquitinated only the lysine residues close to the acidic patch, despite six further lysine residues in the directly adjacent C-terminal tail of Hxt3. How is this possible? We speculate that these two α-arrestin-Rsp5 complexes orient the HECT domain of Rsp5 with high precision towards the lysine residues that are spatially close to the acidic patches.
Once ubiquitinated, the AAT can engage the endocytic machinery to be removed from the PM.

In conclusion, Art1-Rsp5 complexes act rapidly to prevent the accumulation of excess amino acids, whereas Art2-Rsp5 complexes help to degrade idle high-affinity amino acid transporters over longer periods of starvation to recycle their amino acid content. Starvation-induced endocytosis and the subsequent degradation of membrane proteins are required to maintain intracellular amino acid homeostasis (Müller et al., 2015, Jones et al., 2012). As such, it is fitting that Art2 activity, and thus starvation-induced endocytosis, is co-regulated and coordinated with de novo amino acid biosynthesis via the GAAC pathway. The down-regulation of AATs together with glucose transporters and further PM proteins could also free up domains at the PM that are populated by selective nutrient transporters (Spira et al., 2012, Grossmann et al., 2008) for transporters with broader substrate specificity, such as the general amino acid permease Gap1 and the ammonium transporter Mep2, which are strongly up-regulated during starvation. Hence, starvation-induced endocytosis could prepare cells, in an anticipatory manner, for non-selective nutrient acquisition as soon as nutrients become available again.

Yeast strains used for the microscopy screen for starvation-responsive endocytosis cargoes were mainly derived from the Yeast C-terminal GFP Collection (Huh et al., 2003), with further C-terminally tagged transporters constructed in the BY4741 (MATa his3Δ1 leu2Δ0 met15Δ0 ura3Δ0) and SEY6210 strain backgrounds (Table S1). The FACS screen for genes affecting the starvation-induced endocytosis of Mup1-pHluorin was performed using the non-essential gene deletion strain collection purchased from Open Biosystems (BY4742: MATα his3Δ1 leu2Δ0 lys2Δ0 ura3Δ0) … acquisition. GuavaSoft 2.7 software was used for data analysis. The positive/negative cut-off was set for each plate empirically at the intercept of the log/starvation histograms of the WT and art2∆ controls (art2∆, which emerged as a well-reproducible hit early in the screen, was included as a negative control on all further plates). All potential hits were re-examined by fluorescence microscopy. To this end, at least 100 starved cells were analyzed by fluorescence microscopy after starvation, and the percentage of cells showing a degradation-deficient phenotype (Mup1-pHluorin at the plasma membrane, in small cytosolic objects, in class E-like objects or in small objects within vacuoles) of the total number of cells counted was calculated (Table S2). Strains with more than 45% of cells with retained fluorescence after at least 18 hours of starvation were considered hits. For a stringent final selection, we compared those hits to the original FACS screen and only considered those in which, at least once more, 30% of the Mup1-pHluorin fluorescence was also retained after starvation in the FACS screen. In addition, most hits were also scored for methionine-induced endocytosis of Mup1-pHluorin. Hits were considered starvation-specific if the fluorescence was quenched in more than 67% of cells after 90 minutes of methionine treatment (20 µg/ml).
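The hit-calling rules above amount to a few fixed thresholds. A minimal sketch of that logic is given below, assuming hypothetical per-strain tallies; the function and variable names are illustrative, not taken from the original analysis pipeline:

```python
def classify_strain(pct_retained_microscopy, pct_retained_facs, pct_quenched_met=None):
    """Apply the published thresholds of the Mup1-pHluorin screen.

    pct_retained_microscopy: % of >=100 starved cells with a degradation-
        deficient phenotype after >=18 h of starvation.
    pct_retained_facs: % Mup1-pHluorin fluorescence retained after
        starvation in the confirmatory FACS measurement.
    pct_quenched_met: % of cells with quenched fluorescence after 90 min
        of 20 ug/ml methionine (optional follow-up assay).
    """
    is_hit = pct_retained_microscopy > 45 and pct_retained_facs >= 30
    starvation_specific = (
        is_hit and pct_quenched_met is not None and pct_quenched_met > 67
    )
    return {"hit": is_hit, "starvation_specific": starvation_specific}

# Example: a strain retaining 60% fluorescence by microscopy, 35% by FACS,
# and quenching normally upon methionine treatment (made-up numbers).
print(classify_strain(60, 35, pct_quenched_met=85))
# {'hit': True, 'starvation_specific': True}
```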
Mass spectrometry sample preparation and analysis.

Coomassie-stained gel bands were excised from SDS-PAGE gels, reduced with dithiothreitol, alkylated with iodoacetamide and digested with trypsin (Promega) as previously described (Faserl et al., 2019). Tryptic digests were analyzed using an UltiMate 3000 RSLCnano HPLC system coupled to a Q Exactive HF mass spectrometer (both Thermo Scientific, Bremen, Germany) equipped with a Nanospray Flex ionization source. The peptides were separated on a homemade fritless fused-silica micro-capillary column (75 µm i.d. x 280 µm o.d. x 10 cm length) packed with 3.0 µm reversed-phase C18 material. Solvents for HPLC were 0.1% formic acid (solvent A) and 0.1% formic acid in 85% acetonitrile (solvent B). The gradient profile was as follows: 0-4 min, 4% B; 4-57 min, 4-35% B; 57-62 min, 35-100% B; and 62-67 min, 100% B. The flow rate was 250 nL/min.

The Q Exactive HF mass spectrometer was operated in data-dependent mode, selecting the top 20 most abundant isotope patterns with charge >1 from the survey scan with an isolation window of 1.6 mass-to-charge ratio (m/z). Survey full-scan MS spectra were acquired from 300 to 1750 m/z at a resolution of 60,000 with a maximum injection time (IT) of 120 ms and an automatic gain control (AGC) target of 1e6. The selected isotope patterns were fragmented by higher-energy collisional dissociation with a normalized collision energy of 28 at a resolution of 30,000 with a maximum IT of 120 ms and an AGC target of 5e5.

Data analysis was performed using Proteome Discoverer 4.1 (Thermo Scientific) with the search engine Sequest. The raw files were searched against the yeast database (orf_trans_all) with the sequence of Mup1-GFP added. Precursor and fragment mass tolerances were set to 10 ppm and 0.02 Da, respectively, and up to two missed cleavages were allowed. Carbamidomethylation of cysteine was set as a static modification. Oxidation of methionine, ubiquitination of lysine, and phosphorylation of serine, threonine, and tyrosine were set as variable modifications. Peptide identifications were filtered at a 1% false discovery rate.

Figure 1: Amino acid and nitrogen starvation triggers broad but specific endocytosis and lysosomal degradation of plasma membrane proteins. A) Left: a library of 147 yeast strains expressing chromosomally GFP-tagged membrane proteins was tested for plasma membrane (PM) localization during nutrient-replete exponential growth. Right: verified PM proteins were starved for amino acids and nitrogen (-N) for 6-8 h or treated with 20 µg/ml L-methionine (+Met) after 24 h of exponential growth. The localization of GFP was assayed by fluorescence microscopy. B) Summary of the phenotypes of GFP-tagged PM proteins during starvation. Indicated are the numbers of PM proteins that are down-regulated, up-regulated or unchanged compared to the exponential growth phase, each exemplified by one representative strain. PM: plasma membrane; V: vacuole. Scale bars = 5 µm. See also Fig. S1 and Table S1.

… Live-cell fluorescence microscopy analysis of art2∆ cells expressing TAT2-GFP and pRS416-ART2, empty vector or pRS416-ART2(K664D,R665D,R666D,K667D). Cells were starved (-N) for 6 h after 24 h of exponential growth. Scale bars = 5 µm.
2020-04-30T09:07:39.183Z
2020-04-25T00:00:00.000
{ "year": 2020, "sha1": "b5955d7e52b45ec719382ea686ca90d68116910b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.58246", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "aaee628205322c4ba8200da548ab96e29b5ee551", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Chemistry" ] }
119031306
pes2o/s2orc
v3-fos-license
Systematic study on transport properties of FeSe thin films with various degrees of strain

We performed systematic studies on the transport properties of FeSe thin films with controlled degrees of in-plane lattice strain, including both tensile and compressive strains. The superconducting transition temperature, $T_{\mathrm c}$, increases up to 12 K for films with compressive strain while the superconductivity disappears for films with large tensile strains. On the other hand, the structural (nematic) transition temperature, $T_{\mathrm s}$, slightly decreases as the in-plane strain is more compressive. This suggests that the structural transition can be extinguished by a smaller amount of Te substitution for films with more compressive strain, which may lead to higher $T_{\mathrm c}$ in FeSe$_{1-x}$Te$_x$. It was also found that the carrier densities evaluated via transport properties increase as the in-plane strain becomes more compressive. A clear correlation between $T_{\mathrm c}$ and the carrier densities suggests that it is essential to increase carrier densities for the $T_{\mathrm c}$ enhancement of iron chalcogenides.

Because the maximum value of $T_{\mathrm c}$ is higher for films on CaF$_2$ than on LAO, we expect the maximum value of $T_{\mathrm c}$ to become larger if we can suppress the nematicity faster. Therefore, it is essential to elucidate the origin of the difference in $x_{\mathrm c}$ between films on CaF$_2$ and on LAO for further enhancement of superconductivity in FeSe$_{1-x}$Te$_x$. From a simplistic point of view, the origin of the difference in the phase diagrams between films on the two substrates is expected to lie in the difference in the strength of the lattice strain. Indeed, there is a tendency for the a-axis length of films on CaF$_2$ to be shorter than that of films on LAO. Therefore, it is important to investigate the effects of strain on the physical properties of these materials.

In this letter, we report on systematic studies of the transport properties of FeSe thin films with various degrees of strain. We demonstrate that the structural transition is suppressed by compressive strain, consistent with the fact that the suppression of $T_{\mathrm s}$ is stronger in films on CaF$_2$ than on LAO. This result suggests that stronger compressive strain can make $x_{\mathrm c}$ smaller, which may result in further enhancement of $T_{\mathrm c}$ in FeSe$_{1-x}$Te$_x$ thin films. In addition, we report a clear correlation between $T_{\mathrm c}$ and the carrier concentrations, which suggests that it is essential to increase carrier densities for realizing high $T_{\mathrm c}$ in FeSe. Our results will provide important clues for further enhancement of $T_{\mathrm c}$ as well as for the superconducting mechanism in iron chalcogenides.

All the FeSe thin films were grown by a pulsed laser deposition method using a KrF laser. 14,15 To change the strength of the in-plane strain we grew films on three different substrates in this study, namely, LaAlO$_3$ (LAO), (LaAl)$_{0.7}$(SrAl$_{0.5}$Ta$_{0.5}$)$_{0.3}$O$_3$ (LSAT), and LSAT with a LAO buffer layer. The purpose of the LAO buffer layers on LSAT is to avoid possible diffusion of oxygen at the interface of the film and the LSAT substrate. 16 However, it turned out that there were no significant differences in the crystalline quality and the superconducting properties between films on LSAT with and without the LAO buffer layer. The crystal structures and the orientations of the films were characterized with a four-circle X-ray diffractometer with Cu Kα radiation at room temperature. The thicknesses of the samples were evaluated with a Dektak 6M stylus profiler.
The electrical resistivity and the Hall resistivity were measured using a physical property measurement system from 2 to 300 K under magnetic fields up to 9 T.

… the c-axis orientations of the films. In-plane orientations of the films were also confirmed by the φ-scans of the 101 reflections (Fig. 1(b)), which showed clear four-fold symmetry patterns. Note that films on LSAT showed a broader peak width of the 101 reflection than films on LAO; the FWHM of the peak is ∼0.15° for films on LAO and ∼0.5° for films on LSAT. The difference in the peak width may be due to the difference in the lattice mismatch between film and substrate. We summarize the lattice constants of the grown films in Fig. 1(d). A clear negative correlation was observed between the a- and c-axis lengths, including those of bulk crystals, 17 which can be explained by the Poisson effect in crystals under in-plane strain. These results demonstrate the successful growth of single-crystalline FeSe films with strain varied over a wide range. In terms of the strain parameter, $\varepsilon \equiv (a_{\mathrm{film}} - a_{\mathrm{bulk}})/a_{\mathrm{bulk}}$, we were able to obtain samples with $-1.5\% < \varepsilon < 1.5\%$ in this study.
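As a concrete illustration, the strain parameter defined above can be computed directly from measured a-axis lengths. The sketch below uses made-up lattice constants, and the bulk value is approximate and for illustration only:

```python
A_BULK = 3.77  # a-axis length of bulk FeSe in angstroms (illustrative value)

def strain(a_film: float, a_bulk: float = A_BULK) -> float:
    """In-plane strain parameter eps = (a_film - a_bulk) / a_bulk."""
    return (a_film - a_bulk) / a_bulk

for a in (3.72, 3.77, 3.82):  # compressive, unstrained, tensile (made-up)
    eps = strain(a)
    kind = "compressive" if eps < 0 else "tensile" if eps > 0 else "unstrained"
    print(f"a = {a:.2f} A -> eps = {100 * eps:+.2f}% ({kind})")
```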
Figures 2(a) and (b) show the temperature dependence of the dc electrical resistivity of the grown films. The residual resistivity ratio, RRR $\equiv \rho(300\,\mathrm{K})/\rho(0\,\mathrm{K})$, reached 18 for our films, giving an indication of the good crystalline quality of our films. Note that a kink anomaly due to the structural transition was observed at around 90 K in the $\rho$-$T$ curves, similar to that in bulk samples. The structural transition temperature, $T_{\mathrm s}$, can be determined from the position of the anomaly in the $d\rho/dT$ curve (Fig. 2(a)). The superconducting transition temperature, $T_{\mathrm c}$, changed significantly depending on the value of $\varepsilon$. Films with strong compressive strain ($\varepsilon < -1.0\%$) showed $T_{\mathrm c}$ values higher than those of bulk crystals ($T_{\mathrm c}$ = 9 K), while the superconducting transition was not observed for samples with strong tensile strain ($\varepsilon > 0.6\%$).

Figure 3 shows $T_{\mathrm c}$ and $T_{\mathrm s}$ as a function of the in-plane strain parameter, $\varepsilon$. As already described, when $\varepsilon$ increases from negative to positive, $T_{\mathrm c}$ decreases systematically. $T_{\mathrm c}$ drops rapidly in tensile-strained films, and films with $\varepsilon > 0.5\%$ do not show zero resistivity above 2 K. A recent angle-resolved photoemission spectroscopy (ARPES) study indicates that the rapid decrease in $T_{\mathrm c}$ for films with tensile strain is related to a Lifshitz transition. 18 On the other hand, $T_{\mathrm s}$ increases with increasing $\varepsilon$. In other words, there is a negative correlation between $T_{\mathrm c}$ and $T_{\mathrm s}$ in FeSe under in-plane strain. This may suggest that the electronic nematicity is unfavorable for raising $T_{\mathrm c}$ in iron chalcogenides, consistent with the sudden increase in $T_{\mathrm c}$ at the disappearance of the nematic transition in Te-substituted films. 13 The decrease in $T_{\mathrm s}$ due to compressive strain observed for the FeSe films suggests that the structural transition can be extinguished by a smaller amount of Te substitution for films with more compressive strain. This is consistent with our previous results for FeSe$_{1-x}$Te$_x$ films, namely that the $x_{\mathrm c}$ of films on CaF$_2$ is smaller than that of films on LAO, which have longer a-axis lengths than films on CaF$_2$. As described earlier, considering the facts that (i) the maximum of $T_{\mathrm c}$ is obtained for films with $x \approx x_{\mathrm c}$ and (ii) $T_{\mathrm c}$ increases with decreasing $x$ for $x > x_{\mathrm c}$, it is expected that the realization of smaller $x_{\mathrm c}$ would lead to higher $T_{\mathrm c}$. Our results indicate that this is possible by applying more compressive stress in FeSe$_{1-x}$Te$_x$.

To reveal the nature of charge carriers in the grown films, we performed Hall measurements as well as magnetoresistance measurements (Fig. 4). … On the other hand, $R_{\mathrm H}$ increases significantly with decreasing temperature below 100 K, and the strain dependence becomes visible. $R_{\mathrm H}$ becomes large with increasing $\varepsilon$ at low temperatures. Note that the $R_{\mathrm H}(T)$ of bulk samples deviates from those of films below 100 K, decreasing on cooling below 70-80 K and becoming negative at low temperatures. 19 We will discuss the origins of the difference in the low-$T$ $R_{\mathrm H}$ behavior between films and bulk later. In a multiband system like iron chalcogenides, where electron- and hole-type carriers coexist, $R_{\mathrm H}$ is not related to the carrier densities in a simple form. We considered one electron band and one hole band representing the multiple bands and applied the textbook approach for multiband materials. In a classical two-band model, the resistivity tensor is expressed as
$$\rho_{xx}(B) = \frac{1}{e}\,\frac{(n_{\mathrm h}\mu_{\mathrm h} + n_{\mathrm e}\mu_{\mathrm e}) + (n_{\mathrm h}\mu_{\mathrm e} + n_{\mathrm e}\mu_{\mathrm h})\,\mu_{\mathrm h}\mu_{\mathrm e} B^2}{(n_{\mathrm h}\mu_{\mathrm h} + n_{\mathrm e}\mu_{\mathrm e})^2 + (n_{\mathrm h} - n_{\mathrm e})^2\mu_{\mathrm h}^2\mu_{\mathrm e}^2 B^2},$$
$$\rho_{yx}(B) = \frac{B}{e}\,\frac{(n_{\mathrm h}\mu_{\mathrm h}^2 - n_{\mathrm e}\mu_{\mathrm e}^2) + (n_{\mathrm h} - n_{\mathrm e})\,\mu_{\mathrm h}^2\mu_{\mathrm e}^2 B^2}{(n_{\mathrm h}\mu_{\mathrm h} + n_{\mathrm e}\mu_{\mathrm e})^2 + (n_{\mathrm h} - n_{\mathrm e})^2\mu_{\mathrm h}^2\mu_{\mathrm e}^2 B^2},$$
where $n_{\mathrm h}$, $n_{\mathrm e}$, $\mu_{\mathrm h}$, and $\mu_{\mathrm e}$ are the hole density, the electron density, the hole mobility, and the electron mobility, respectively. We evaluated the carrier densities and mobilities of the films from the measured data of $R_{\mathrm H}$ and the magnetoresistance, assuming $n_{\mathrm h} = n_{\mathrm e}$. 20
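In the compensated case ($n_{\mathrm h} = n_{\mathrm e} = n$), the low-field limits of these expressions reduce to $\rho_0 = 1/[ne(\mu_{\mathrm h}+\mu_{\mathrm e})]$, $R_{\mathrm H} = (\mu_{\mathrm h}-\mu_{\mathrm e})/[ne(\mu_{\mathrm h}+\mu_{\mathrm e})]$ and $\Delta\rho/\rho_0 = \mu_{\mathrm h}\mu_{\mathrm e}B^2$, which can be inverted in closed form. A minimal sketch of one such inversion is given below; the numerical inputs are invented order-of-magnitude placeholders, not data from this study:

```python
import math

E = 1.602176634e-19  # elementary charge in coulombs

def two_band_compensated(rho0, r_hall, mr_coeff):
    """Invert the low-field compensated two-band model (n_h = n_e = n).

    rho0:      zero-field resistivity in ohm*m
    r_hall:    low-field Hall coefficient in m^3/C
    mr_coeff:  A in (delta rho)/rho0 = A * B^2, in 1/T^2

    Uses  mu_h - mu_e = R_H / rho0,  mu_h * mu_e = A,
    and   n = 1 / (e * rho0 * (mu_h + mu_e)).
    """
    d = r_hall / rho0  # mobility difference mu_h - mu_e
    mu_h = 0.5 * (d + math.sqrt(d * d + 4.0 * mr_coeff))
    mu_e = mu_h - d
    n = 1.0 / (E * rho0 * (mu_h + mu_e))
    return n, mu_h, mu_e

# Placeholder inputs (invented, order-of-magnitude only):
n, mu_h, mu_e = two_band_compensated(rho0=4e-6, r_hall=4e-9, mr_coeff=2e-3)
print(f"n = {n:.2e} m^-3, mu_h = {mu_h*1e4:.0f} cm^2/Vs, mu_e = {mu_e*1e4:.0f} cm^2/Vs")
```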
The obtained values of carrier densities and mobilities are plotted as a function of $\varepsilon$ in Fig. 4(b). As $\varepsilon$ decreases, the carrier densities increase. This result is consistent with the ARPES results, which also showed an increase in both hole and electron densities for an in-plane compressed sample. 18 The agreement between our transport measurements and the ARPES study demonstrates that there is a correlation between the $T_{\mathrm c}$ and the carrier densities of FeSe. This suggests that the increase in the carrier densities is essential for the increase in $T_{\mathrm c}$. On the other hand, we found no significant correlation between either the hole or electron mobilities and $\varepsilon$. The fact that the highest $T_{\mathrm c}$ is obtained for films with small mobilities implies that the mobility is not so important for superconductivity in FeSe. Note that the hole mobility is always higher than the electron mobility for our films. However, the opposite behavior was reported for bulk FeSe, 21 which results in the difference in the sign of $R_{\mathrm H}$ at low temperatures between films and bulk samples. Although the origin of this difference in mobilities between films and bulk single crystals is unclear at present, we believe that it is not important for superconductivity, because the $T_{\mathrm c}$ of the bulk crystal does not significantly deviate from the $T_{\mathrm c}$-vs-$\varepsilon$ curve for our films. Indeed, a mobility spectrum analysis 21 revealed that bulk FeSe has minority N-type carriers with very high mobility, while the majority of both N- and P-type carriers have comparable mobilities, which may result in a higher mobility of electron-like carriers in the two-band model. These minority carriers with high mobilities are considered to contribute insignificantly to $T_{\mathrm c}$.

Finally, we comment on the relationship between the superconductivity and the nematicity in iron chalcogenides. Our results with strained FeSe suggest that the electronic nematicity is unfavorable for raising $T_{\mathrm c}$ in iron chalcogenides. As described earlier, this is consistent with the results for the Te-substituted samples, where a rapid increase in $T_{\mathrm c}$ is observed corresponding to the disappearance of the nematic transition. 13 On the other hand, another isovalent substitution, by sulfur, also suppresses the nematicity. However, there is no significant increase in $T_{\mathrm c}$ when the nematicity disappears; rather, $T_{\mathrm c}$ decreases after the disappearance of the nematicity for S-substituted samples. The contrasting phase diagrams of FeSe$_{1-x}$Te$_x$ and FeSe$_{1-x}$S$_x$ indicate that the role of the nematicity is not universal in the superconductivity of iron chalcogenides, suggesting that the nematicity affects $T_{\mathrm c}$ only in an indirect manner. 22 Rather, our results suggest that the most essential factor for realizing high $T_{\mathrm c}$ is the increase in the carrier densities. This conclusion may be inconsistent with our previous results of Hall measurements on FeSe$_{1-x}$Te$_x$ films, 23-25 which suggested that the $n_{\mathrm h}$-to-$n_{\mathrm e}$ and/or $\mu_{\mathrm h}$-to-$\mu_{\mathrm e}$ ratios were essential for high $T_{\mathrm c}$. This disagreement may suggest that there are multiple channels for increasing $T_{\mathrm c}$, originating from the multiband/multiorbital character of iron chalcogenides. 26 In other words, it is necessary to increase carrier densities for obtaining high $T_{\mathrm c}$ of up to 12 K in FeSe, and for further enhancement of $T_{\mathrm c}$ up to 23 K in FeSe$_{1-x}$Te$_x$ we may need to tune the ratio of carrier densities and/or mobilities. Further comprehensive and systematic studies with Te- and S-substituted samples are needed for a complete understanding of the behavior of $T_{\mathrm c}$ in iron chalcogenides; such studies are now under way.

In conclusion, we have succeeded in growing a series of FeSe films with various degrees of in-plane strain, from tensile to compressive. We found that as the strain becomes more compressive, the structural transition temperature decreases. This result suggests that the difference in the substitution content $x$ that is required for the complete suppression of the structural transition between FeSe$_{1-x}$Te$_x$ films on LAO and on CaF$_2$ is due to the difference in the degree of strain. This means that the structural transition can be extinguished by a smaller amount of Te substitution for films with more compressive strain, which may lead to higher $T_{\mathrm c}$. It was also found that $T_{\mathrm c}$ and the carrier densities of the FeSe films increase systematically as $\varepsilon$ decreases. The clear correlation between $T_{\mathrm c}$ and the carrier densities suggests that for the $T_{\mathrm c}$ enhancement of iron chalcogenides it is essential to increase the carrier densities.

ACKNOWLEDGMENTS

We would like to thank K. Ueno at the University of Tokyo for the X-ray measurements. We also thank M. …
2018-06-14T09:51:11.000Z
2018-03-08T00:00:00.000
{ "year": 2018, "sha1": "e1c612446c6ab6cf063d37fe6f033565a00a2b12", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1806.05436", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e1c612446c6ab6cf063d37fe6f033565a00a2b12", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
258376615
pes2o/s2orc
v3-fos-license
Alcohol and Other Substance Use Before and During the COVID-19 Pandemic Among High School Students — Youth Risk Behavior Survey, United States, 2021

Adolescence is a critical phase of development and is frequently a period of initiating and engaging in risky behaviors, including alcohol and other substance use. The COVID-19 pandemic and associated stressors might have affected adolescent involvement in these behaviors. To examine substance use patterns and understand how substance use among high school students changed before and during the COVID-19 pandemic, CDC analyzed data from the nationally representative Youth Risk Behavior Survey. This report presents estimated prevalences among high school students of current (i.e., previous 30 days) alcohol use, marijuana use, binge drinking, and prescription opioid misuse, and of lifetime alcohol, marijuana, synthetic marijuana, inhalant, ecstasy, cocaine, methamphetamine, heroin, and injection drug use and prescription opioid misuse. Trends during 2009–2021 were assessed using logistic regression and joinpoint regression analyses. Changes in substance use from 2019 to 2021 were assessed using prevalence differences and prevalence ratios, stratified by demographic characteristics. Prevalence of substance use measures by sexual identity and current co-occurring substance use were estimated using 2021 data. Substance use prevalence declined during 2009–2021. From 2019 to 2021, the prevalence of current alcohol use, marijuana use, and binge drinking and lifetime use of alcohol, marijuana, and cocaine and prescription opioid misuse decreased; lifetime inhalant use increased. In 2021, substance use varied by sex, race and ethnicity, and sexual identity. Approximately one third of students (29%) reported current use of alcohol or marijuana or prescription opioid misuse; among those reporting current substance use, approximately 34% used two or more substances. Widespread implementation of tailored evidence-based policies, programs, and practices likely to reduce risk factors for adolescent substance use and promote protective factors might further decrease substance use among U.S. high school students and is urgently needed in the context of the changing marketplaces for alcohol beverage products and other drugs (e.g., release of high-alcohol beverage products and increased availability of counterfeit pills containing fentanyl).

Introduction

Adolescence is a critical phase of physical, cognitive, social, and emotional development and is frequently a period of initiating and engaging in risky behaviors, including alcohol and other substance use. The majority of adolescents engage in some form of substance use before finishing high school (1,2). Substance use during adolescence is associated with adverse health outcomes, such as mental health problems, teen pregnancy, and sexually transmitted diseases, as well as consequences such as delinquency, violence, and academic underachievement (2,3). Substance use initiation during adolescence can increase the risk for substance use later in adulthood and increase the risk for substance use disorders (https://addiction.surgeongeneral.gov/sites/default/files/surgeon-generals-report.pdf).
In 2021, CDC's Adolescent Behaviors and Experiences Survey (ABES) found that students experienced adversities and challenges during the COVID-19 pandemic, such as poor mental health, persistent feelings of sadness or hopelessness, suicidal ideation, and physical and emotional abuse, all of which are risk factors for substance use (https://www.cdc.gov/healthyyouth/data/abes/reports.htm). In addition, measures to protect adolescents from COVID-19 infection, such as remote schooling, social isolation, and event cancelation, might have contributed additional risk for adolescent substance use. One third of students participating in ABES who had ever drunk alcohol or used drugs used those substances more during the pandemic (6). Other studies examining adolescent substance use during the pandemic have had varying findings. For example, the Monitoring the Future survey indicated declines in current marijuana use, alcohol use, and binge drinking when comparing 2020 and 2021 prevalence estimates (1). However, another study comparing prevalence estimates from the early stages of the pandemic to prepandemic estimates found increases in the frequency of both marijuana and alcohol use (3), and another demonstrated no change in the use of either substance (7). The variability in the previous studies highlighted the need for additional studies of nationally representative data to assess changes in alcohol and other substance use before and during the pandemic. This report used Youth Risk Behavior Survey (YRBS) data to improve understanding of how substance use changed before and during the COVID-19 pandemic. Specifically, this report examined overall trends in alcohol and other substance use, characterized changes in alcohol and other substance use by demographic groups, and examined co-occurring substance use among U.S. high school students. Public health practitioners, clinicians, school officials, and policymakers can use these findings to expand evidence-based prevention programs, practices, and policies that reduce adolescent substance use risk factors and promote protective factors.

Data Source

This report includes data from the 2009-2021 YRBS, a cross-sectional, school-based survey conducted biennially since 1991. Each survey year, CDC collects data from a nationally representative sample of public and private school students in grades 9-12 in the 50 U.S. states and the District of Columbia. Additional information about YRBS sampling, data collection, response rates, and processing is available in the overview report of this supplement (8). The prevalence estimates for current and lifetime alcohol and other substance use for the overall study population and by sex, race and ethnicity, grade, and sexual identity are available at https://nccd.cdc.gov/youthonline/App/Default.aspx. The full YRBS questionnaire, data sets, and documentation are available at https://www.cdc.gov/healthyyouth/data/yrbs/index.htm. This activity was reviewed by CDC and was conducted consistent with applicable federal law and CDC policy.*

Measures

Four current (i.e., previous 30 days before the survey) and 10 lifetime substance use behaviors were measured. The four current substance use behaviors were alcohol use, marijuana use, binge drinking, and prescription opioid misuse. The 10 lifetime substance use behaviors were alcohol use, marijuana use, inhalant use, ecstasy use, cocaine use, methamphetamine use, heroin use, injection drug use, synthetic marijuana use, and prescription opioid misuse.
Use of specific substances was ascertained from questions on frequency of use, except for lifetime alcohol use, which was determined from a question on age of initiation. All measures were dichotomized (yes versus no). Demographic characteristics assessed included sex (female or male), sexual identity (heterosexual; lesbian, gay, or bisexual; or questioning or other), and race and ethnicity (Black or African American [Black], White, and Hispanic or Latino [Hispanic]). (Persons of Hispanic origin might be of any race but are categorized as Hispanic; all racial groups are non-Hispanic.) The numbers of students from other races or multiracial groups were too small for analyses (n<30) for the majority of the substance use measures and were excluded from race and ethnicity analyses. Information on missing data for substance use measures is available in the User's Guide for each year of data collection at https://www.cdc.gov/healthyyouth/data/yrbs/data.htm.

Analysis

First, prevalence of each substance use behavior was estimated by survey year during 2009-2021 with available data. Temporal linear and quadratic trends for current and lifetime use of substances were examined using logistic regression models, controlling for sex, grade, and race and ethnicity (https://www.cdc.gov/healthyyouth/data/yrbs/pdf/2019/2019_YRBS_Conducting_Trend_Analyses.pdf). Joinpoint (version 4.9.1.0; National Cancer Institute) was used to identify the year or years where the trend changed direction. Second, 2-year changes in substance use behaviors were assessed by comparing prevalence estimates from 2019 and 2021 using t-tests with Taylor series linearization. Changes were considered statistically significant if the p value was <0.05. Third, weighted prevalences of substance use behaviors were estimated for 2019 and 2021 by sex and race and ethnicity. Only 2021 demographic pairwise differences were examined in this report; 2019 estimates by sexual identity and demographic pairwise comparisons were published elsewhere (2). Across years, changes in substance use from 2019 to 2021 were assessed using both absolute (i.e., prevalence difference [PD]) and relative (i.e., prevalence ratio [PR]) measures for comparisons by demographic characteristics and frequency of use (https://www.cdc.gov/pcd/issues/2017/16_0516.htm). Changes were considered statistically significant if p values were <0.05 and 95% CIs did not cross zero (for PD) or 1.0 (for PR). Only 2021 data for sexual identity are presented because of a change from 2019 in the survey question assessing sexual identity. Finally, prevalences of current co-occurring substance use behaviors (alcohol use, marijuana use, and prescription opioid misuse) among those with any current substance use were calculated. All analyses were conducted using SAS-callable SUDAAN (version 11.0.3; RTI International) to account for the complex sampling design and weighting.
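The prevalence-difference and prevalence-ratio comparisons described above amount to simple contrasts between two survey years with CI-based significance rules. A minimal sketch of that logic is shown below, ignoring the survey weighting and Taylor series linearization (which require the full design information); the standard errors in the example are invented placeholders:

```python
import math

Z = 1.96  # two-sided 95% critical value

def compare_years(p1, se1, p2, se2):
    """Prevalence difference (PD) and ratio (PR) for year 2 vs. year 1.

    p1, p2: prevalence estimates (proportions); se1, se2: their standard
    errors. Assumes independent samples with naive variance formulas;
    the actual YRBS analyses account for the complex survey design.
    """
    pd = p2 - p1
    se_pd = math.sqrt(se1**2 + se2**2)
    pd_ci = (pd - Z * se_pd, pd + Z * se_pd)

    pr = p2 / p1
    # CI for the ratio via the delta method on log(PR)
    se_log_pr = math.sqrt((se1 / p1) ** 2 + (se2 / p2) ** 2)
    pr_ci = (pr * math.exp(-Z * se_log_pr), pr * math.exp(Z * se_log_pr))

    # Significant only if the PD CI excludes 0 and the PR CI excludes 1
    significant = (pd_ci[0] > 0 or pd_ci[1] < 0) and (pr_ci[0] > 1 or pr_ci[1] < 1)
    return pd, pd_ci, pr, pr_ci, significant

# Example: current alcohol use, 29.2% in 2019 vs. 22.7% in 2021
# (the SEs 0.012 and 0.011 are placeholders, not published values).
print(compare_years(0.292, 0.012, 0.227, 0.011))
```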
Results

In 2021, substance use was common among U.S. high school students and varied by substance. Approximately one third of students (30%) reported current use of alcohol or marijuana or prescription opioid misuse. Among current use measures, alcohol (22.7%) and marijuana (15.8%) were the most commonly reported substances used by U.S. high school students (Table 1). Current binge drinking was reported by 10.5% and current prescription opioid misuse by 6.0%. Among lifetime use measures, 47.4% of U.S. high school students reported alcohol use, 27.8% marijuana use, 12.2% prescription opioid misuse, 8.1% inhalant use, and 6.5% synthetic marijuana use. Among lifetime use measures, ecstasy (2.9%), cocaine (2.5%), methamphetamine (1.8%), injection drug use (1.4%), and heroin (1.3%) were less commonly reported.

Trend data were available for all substance use measures except current prescription opioid misuse. All substance use measures with available trend data decreased linearly over the period assessed (2009-2021 for most substances, 2015-2021 for lifetime synthetic marijuana use, and 2017-2021 for current binge drinking and lifetime prescription opioid misuse). From 2019 to 2021, prevalence of current substance use decreased for alcohol (from 29.2% to 22.7%), marijuana (from 21.7% to 15.8%), and binge drinking (from 13.7% to 10.5%). No change was observed in prevalence of current prescription opioid misuse. Lifetime alcohol use, marijuana use, cocaine use, and prescription opioid misuse also decreased from 2019 to 2021; lifetime inhalant use increased from 6.4% to 8.1%.

Changes in substance use from 2019 to 2021 varied by sex (Table 2). Current alcohol use decreased for both females and males. Males also had a 3.7% absolute decrease and a 30% relative decrease in binge drinking and a 2.1% absolute decrease and a 30% relative decrease in current prescription opioid misuse. Among lifetime use measures, alcohol and marijuana use decreased among both females and males. Decreases also were observed in ecstasy use, cocaine use, and prescription opioid misuse for males. However, for females, a 2.5% absolute increase and a 40% relative increase occurred in inhalant use from 2019 to 2021.

Prevalence of substance use measures varied by racial and ethnic group, with different groups reporting higher prevalences of use for different substances. For example, Black students reported a higher prevalence of current marijuana use (20.5%) compared with Hispanic (16.7%) and White (14.8%) students (Table 3). Black students reported a lower prevalence of current alcohol use (13.2%) compared with White (25.9%) and Hispanic (22.9%) students. White students reported a lower prevalence of current prescription opioid misuse (4.6%) compared with Black (8.6%) and Hispanic (8.3%) students. By race and ethnicity, current and lifetime marijuana use decreased for both White and Hispanic high school students, and lifetime alcohol use decreased for all three racial and ethnic groups from 2019 to 2021. White students reported less binge drinking in 2021 compared with 2019 and more lifetime inhalant use. Hispanic students reported decreases in lifetime ecstasy use, cocaine use, and synthetic marijuana use. Lifetime use measures for cocaine, methamphetamine, and heroin decreased among Black students.

Prevalence of all substance use measures varied by sexual identity in 2021, with students identifying as lesbian, gay, or bisexual reporting a higher prevalence of all current and lifetime substance use measures compared with students identifying as heterosexual (Table 4). Compared with students who identified as heterosexual, students who identified as questioning or other reported a higher prevalence of current marijuana use and prescription opioid misuse, and a higher prevalence of all lifetime use measures except for lifetime alcohol use, marijuana use, and synthetic marijuana use.
However, compared with students who identified as lesbian, gay, or bisexual, students who identified as questioning or other reported a lower prevalence of most current use measures (alcohol use, marijuana use, and binge drinking) and multiple lifetime use measures (alcohol, marijuana, ecstasy, and synthetic marijuana). Frequency of current and lifetime use among high school students reporting use of specific substances in 2021 was not substantially different from 2019 (Supplementary Table, https://stacks.cdc.gov/view/cdc/125216) (2).

Students commonly reported current co-occurring substance use (Figure). Among high school students who reported current alcohol use, marijuana use, or prescription opioid misuse, 35.1% reported using two or more substances. Alcohol and marijuana were the most commonly co-used substances among those who reported any current substance use, with 30.2% reporting co-use. Alcohol use and prescription opioid misuse was reported by 7.9%, marijuana use and prescription opioid misuse by 6.7%, and use (misuse) of all three substances by 4.8%.

Discussion

This report documents that substance use prevalence among U.S. high school students had been declining for a decade before the COVID-19 pandemic. For the majority of substance use outcomes, prevalence further declined from 2019 to 2021, including for current alcohol use, marijuana use, and binge drinking and for lifetime alcohol use, marijuana use, cocaine use, and prescription opioid misuse. Despite these declines, approximately one in three high school students (30%) reported past 30-day substance use in 2021. Among those reporting current substance use, approximately 35% used two or more substances, suggesting that use of multiple substances is common, an important consideration when implementing prevention and intervention strategies. The decline in adolescent substance use during the COVID-19 pandemic is consistent with other studies of U.S. adolescents, including the Monitoring the Future study, which also reported significant decreases in lifetime and past 30-day marijuana use, binge drinking, and lifetime cocaine and heroin use in 2021 (1).

This report highlights disparities in substance use by race and ethnicity and sexual identity. For example, current and lifetime marijuana use decreased from 2019 to 2021 among White and Hispanic students, whereas no change was noted for Black students. These disparities could be the result of exacerbation of preexisting health inequities, such as access to prevention and treatment services, experiences of racism and historical trauma, and economic challenges (9). Social determinants of health specific to COVID-19, such as disproportionate representation of parents and caregivers among frontline workers and being in a family that experienced COVID-19-related severe illness and death, might also have influenced outcome disparities (9,10). In addition, the higher prevalence estimates of current and lifetime substance use in 2021 among students identifying as lesbian, gay, or bisexual compared with students identifying as heterosexual are generally consistent with the results from the 2019 YRBS (2). This finding could be a result of increased experiences of violence and other types of victimization, discrimination, adversity, and isolation that these adolescents might have experienced (11).
An analysis of National Survey on Drug Use and Health data found that among adolescents and adults who reported drinking alcohol and misusing prescription pain relievers, approximately 40% misused a prescription pain reliever while drinking or within a couple of hours of drinking alcohol (12). In this context, the finding of high rates of using two or more substances among U.S. high school students who reported substance use is particularly concerning. Using alcohol and other substances increases the risk for health problems and overdose and can increase the effects of the substances if the substances are used at the same time (https://www.cdc.gov/alcohol/factsheets/alcohol-and-other-substance-use.html). In addition, although the prevalence of current prescription opioid misuse did not change among high school students, adolescent overdose deaths have increased substantially in recent years (4), in parallel to increased availability of counterfeit pills containing illicitly made fentanyl (https://www.dea.gov/press-releases/2021/05/21/dea-issues-warning-over-counterfeit-pills). That finding suggests an urgent need for new strategies to raise awareness among adolescents about exposure to highly lethal substances disguised as commonly misused prescription drugs and for expanded access to harm reduction interventions such as naloxone and fentanyl test strips.

The declines in adolescent substance use might be partially explained by pandemic-specific contextual factors, including decreased access to substances because of reduced contact with peers and increases in parental supervision (13). Inhalant use increased, a finding consistent with other research, and might also be the result of access. Inhalants (i.e., noncombusted and nonheated gases that can be inhaled for euphoric effect) are easily accessible inside most homes (1). Consequently, it is possible that as social interactions resume, access to substances could increase, supervision might decrease, and adolescent substance use could revert to prepandemic levels (1).

Effective strategies to prevent and mitigate adolescent substance use are multilevel and focus on reducing risk factors associated with use and increasing protective factors likely to decrease use in the environments where adolescents interact (https://addiction.surgeongeneral.gov/sites/default/files/surgeon-generals-report.pdf). Feeling connected to family, positive peers (those not engaging in substance use risk behaviors), school, and community is an important protective factor that can buffer against adverse childhood experiences (ACEs), poor mental health, and health risk behaviors, including substance use and sexual risk behaviors (14).
Family and parent substance use programs that focus on parental communication, monitoring, and modeling of positive problem-solving and coping strategies can be effective in influencing adolescents' substance use behavior (14). Interventions that promote a positive school climate and increase students' feelings of connectedness to the school and decrease student dissatisfaction, in conjunction with effective health education, can improve substance use outcomes (15). For example, CDC's What Works in Schools approach (https://www.cdc.gov/healthyyouth/whatworks/index.htm), focused on creating safe and supportive environments, effective health education, and linking teens to health services, has demonstrated an effect on various mental health and health outcomes, including substance use. Community-school partnerships that increase access to evidence-based substance use prevention curricula and substance use treatment services, such as PROmoting School-community-university Partnerships to Enhance Resilience (PROSPER) and Communities That Care (CTC), also have demonstrated protective effects on substance use into adulthood for both illicit drugs and prescription drug misuse (https://store.samhsa.gov/sites/default/files/d7/priv/pep19pl-guide-1.pdf). The majority of adolescents are enrolled in school; therefore, schools can have an important role in substance use prevention and treatment by providing a supportive school environment, including access to a counselor or a psychologist; school policies regarding the use of tobacco products, alcohol, and marijuana; and evidence-based programs to prevent substance use and violence and promote coping and problem-solving skills and mental health (16).

Youth substance use can also be reduced and prevented with evidence-based policies that reduce the availability of substances where youths live and decrease their access to them (https://addiction.surgeongeneral.gov/sites/default/files/surgeon-generals-report.pdf). One example is to reduce the number and concentration of places that sell alcohol. Increasing the price of alcohol through alcohol taxes, enhanced enforcement of laws that prohibit sales of marijuana and alcohol to minors, and enforcement of other substance use policies (e.g., prescription drug monitoring programs) also can reduce adolescent substance use (https://www.cdc.gov/alcohol/fact-sheets/alcohol-and-other-substance-use.html; https://www.thecommunityguide.org/topics/excessive-alcohol-consumption.html). Disparities occur in adolescent substance use by race and ethnicity as well as sexual identity. Tailoring adolescent substance use prevention strategies to reach different population subgroups can be effective when implemented in tandem with broader strategies that prevent and mitigate ACEs and other individual, family, school, and community factors that influence risk for substance use (https://www.cdc.gov/violenceprevention/pdf/preventingACES.pdf).

Limitations

General limitations for the YRBS are available in the overview report of this supplement (8). The findings in this report are subject to at least three additional limitations. First, the survey questions on prescription opioid misuse refer to prescription pain medications and then provide examples of medications containing opioids only. Prescription opioid misuse prevalence might be overestimated if respondents included the use of nonopioid prescription pain medications; however, overestimation of prevalence should not have affected measures of difference between survey years.
Second, substantial data were missing for certain substance use variables (e.g., prescription opioid misuse), which might be because of the order of the survey questions or other factors related to survey administration (2). These missing data could have resulted in overestimation or underestimation of prevalence. Finally, the YRBS questionnaire was updated in 2021 to be more inclusive of student sexual identities. This change limited the ability to assess changes in substance use by sexual identity in 2021 compared with earlier years.
[Figure: Current co-occurring substance use. Footnotes: "current" refers to the 30 days before the survey; n = 5,203 high school students who reported current use of at least one of the three substances (current alcohol use, current marijuana use, and current prescription opioid misuse), regardless of potential missing values for the other two substances. Missing observations were excluded in the calculation of percentages in each category: for alcohol and marijuana use, 11 of 5,023 (0.2%); for marijuana use and prescription opioid misuse, 37 of 5,023 (0.7%); for alcohol use and prescription opioid misuse, 56 of 5,023 (1.1%).]
Conclusion
Youth substance use has declined over the past decade, including during the COVID-19 pandemic; however, substance use remains common among U.S. high school students, and continued monitoring is important in the context of the changing marketplaces for alcohol beverage products and other drugs. Scaling up tailored, evidence-based policies, programs, and practices to reduce factors that contribute to risk for adolescent substance use and promote factors that protect against risk might help build on recent declines.
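As a small illustration of the missing-data handling described in the figure footnote above, the Python sketch below computes a pairwise co-occurrence percentage while excluding respondents with a missing value on either item. The data frame and column names are hypothetical, not YRBS data, and real YRBS estimates additionally apply survey weights, which are omitted here.

```python
import pandas as pd

# Hypothetical 0/1 indicators of current use; None marks a missing response
df = pd.DataFrame({
    "alcohol":   [1, 1, 0, None, 1, 0],
    "marijuana": [1, 0, 1, 1, None, 0],
    "opioid":    [0, 0, 0, 1, 0, None],
})

def cooccurrence_pct(df, a, b):
    """Percentage reporting current use of both a and b, among respondents
    with non-missing values for both items (pairwise deletion)."""
    known = df[[a, b]].dropna()
    both = ((known[a] == 1) & (known[b] == 1)).sum()
    return 100 * both / len(known)

print(f"{cooccurrence_pct(df, 'alcohol', 'marijuana'):.1f}%")
```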
Reproductive Isolation and Ecological Niche Partition among Larvae of the Morphologically Cryptic Sister Species Chironomus riparius and C. piger
Background
One of the central issues in ecology is the question of what allows the sympatric occurrence of closely related species in the same general area. The non-biting midges Chironomus riparius and C. piger, which interbreed in the laboratory, have been shown to coexist frequently despite their close relatedness, similar ecology and high morphological similarity.
Methodology/Principal Findings
In order to investigate factors shaping niche partitioning of these cryptic sister species, we explored the actual degree of reproductive isolation in the field. Congruent results from nuclear microsatellite and mitochondrial haplotype analyses indicated a complete absence of interspecific gene flow. Autocorrelation analysis showed a non-random spatial distribution of the two species. Though not dispersal limited at the scale of the study area, the sister species occurred less often than expected at the same site, indicating past or present competition. Correlation and multiple regression analyses suggested the repartition of the available habitat along water chemistry gradients (nitrite, conductivity, CaCO3), ultimately governed by differences in summer precipitation regime.
Conclusions
We show that these morphologically cryptic sister species partition their niches owing to a certain degree of ecological distinctness and total reproductive isolation in the field. The coexistence of these species provides a suitable model system for the investigation of factors shaping the distribution of closely related, cryptic species.
Introduction
Competition for resources will generally be most severe among closely related species because, owing to their shared phylogenetic history, they tend to have the most similar demands [1,2]. It is widely assumed that the sympatric coexistence of sibling or sister species requires some sort of resource partitioning under resource-limited conditions [3][4][5], but see [6]. This "limiting similarity" concept [7] may not hold under certain, narrowly defined circumstances [8,9], but these instances are believed to be rather the exception to the rule [8]. Hence, testing for differences in the realised ecological niche will consequently be the first logical step in explaining the coexistence of similar, closely related species. However, closely related species often tend to be morphologically similar for the same reason they are ecologically alike [10]. Therefore, proper species delimitation and unequivocal recognition in field studies are a necessary prerequisite, often requiring molecular methods [11]. The dipteran midges Chironomus riparius Meigen 1804 (synonyms C. thummi and C. thummi thummi) and Chironomus piger Strenzke 1959 (synonym C. thummi piger) are sister taxa [12,13]. Larvae of both species are widely distributed in small streams, ditches, ponds and puddles throughout the Holarctic [14]. The life cycle of C. riparius and C. piger consists of four larval stages, a short pupal stage and the adult midge. Adults form large mating swarms. A few days after hatching, female midges usually produce a single egg mass containing several hundred eggs. The larvae hatch after a few days, and the whole life cycle may be completed within four weeks.
Depending on the water temperature, both species are usually multivoltine, with a first generation emerging early in spring and the final generation swarming around late autumn [15]. The overwintering generation consists solely of later larval stages (L3, L4). The species often dominate the local Chironomus community [16]. In the study region they are frequently found together at the same sites [14], making the species pair an interesting model for the investigation of mechanisms enabling the sympatric coexistence of sibling species. As the two sister taxa are morphologically cryptic, reliable species discrimination was in the past possible only by analysis of polytene chromosome structure [17]. Despite their morphological similarity, genome size differs by 30%, mainly due to repetitive DNA [18]. Not only is their taxonomic status (species or subspecies rank) unclear, but reports on their degree of reproductive isolation are also inconsistent. Some degree of prezygotic isolation in the field is warranted by differential swarming behaviour [19]. While some studies indicate that C. riparius and C. piger readily form viable and fully fertile interspecific hybrids in the laboratory [20], others estimate fertile hybrids in the wild to be effectively absent, owing to fertility reductions caused by hybrid dysgenesis syndromes [21]. The actual degree of hybridisation and reproductive isolation in the field, however, has not yet been explored. In this study we aimed to investigate the distributional patterns of both species in an area where they co-occur, and to reveal ecological factors that may have shaped the observed distribution. To this end, we investigated genetic differentiation between the species using mitochondrial and nuclear markers and related their relative abundance to environmental parameters. In particular, we successfully answered the following questions:
- What is the degree of reproductive isolation among C. piger and C. riparius in the field?
- Is there a non-random spatial pattern of distribution and co-occurrence?
- Can we identify ecological parameters potentially structuring the species distribution?
Results
Species delimitation and identification
Two hundred and sixty-four individuals of C. riparius/piger were found at 34 sampling sites (Table 1). Microsatellite analysis detected a total of 76 alleles at the five loci (mean = 15.2, s.d. = 8.1). Factorial correspondence analysis on the microsatellite data revealed two distinct genotype clusters, termed A and B (Figure 1A). Their distinctness was due to both private alleles and frequency differences at all loci (Figure 1B). Identical results were obtained with other assignment methods like STRUCTURE (Pritchard et al., 1999) (results not shown). The statistical parsimony network revealed two major haplotype groups, linked by six mutational steps. Plotting the two nuclear genotypes on the haplotypes of the respective individuals revealed complete congruence with these two haplogroups (Figure 2). Polytene chromosome preparations consistently identified genotype A (black symbols) as C. riparius and genotype B (grey symbols) as C. piger.
Co-occurrence and population structure
At about half of the sampling sites containing C. riparius or C. piger, both species co-occurred in varying proportions (Figure 3). However, individuals of the two species occurred less often together than expected by chance (Fisher's exact test: χ² = 160, d.f. = 22, p < 0.0001).
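To make the shape of this test concrete, the following Python sketch (toy counts, not the study's data) computes a chi-square statistic on a sites-by-species table and derives a permutation p-value by reshuffling pooled species labels while keeping the number of larvae per site fixed. The paper itself used a permuted Fisher's exact test, so this is an analogous stand-in rather than a reproduction.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
# Toy counts: rows = sampling sites, columns = (C. riparius, C. piger)
table = np.array([[12, 0], [1, 9], [7, 6], [0, 11], [10, 2]])

obs = chi2_contingency(table, correction=False)[0]

site_sizes = table.sum(axis=1)                 # larvae collected per site
labels = np.repeat([0, 1], table.sum(axis=0))  # pooled species labels

def permuted_stat():
    """Chi-square of a table with species labels randomly reassigned,
    preserving per-site sample sizes and overall species totals."""
    rng.shuffle(labels)
    shuffled = np.empty_like(table)
    start = 0
    for i, n in enumerate(site_sizes):
        chunk = labels[start:start + n]
        shuffled[i] = [(chunk == 0).sum(), (chunk == 1).sum()]
        start += n
    return chi2_contingency(shuffled, correction=False)[0]

n_perm = 10_000
p = (sum(permuted_stat() >= obs for _ in range(n_perm)) + 1) / (n_perm + 1)
print(f"chi-square = {obs:.1f}, permutation p = {p:.4f}")
```

Shuffling labels within fixed per-site totals breaks any site-species association while preserving sampling effort, which is what "random co-occurrence" means in this context.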
The relative frequency of a species at a given site was not independent of its frequency at surrounding sites: we found significant spatial autocorrelation among sampling sites up to 15 km apart (Figure 4). A significant, albeit very weak, genetic population structure within both species was detected. For C. piger a ΦST of 0.027 was calculated, while the estimate for C. riparius was 0.046 (Table 2), indicating a high amount of gene flow among sampling sites within each species.
Discussion
Chironomus riparius and C. piger behave as good species in the field
Microsatellite analysis showed the presence of two distinct genotype groups without intermediates, indicating complete reproductive isolation and the absence of putative hybrids (Figure 1A). Most alleles were specific to one of the clusters, with only a few alleles shared in similar proportions by the two taxa (Figure 1B). As only the lengths of PCR fragments were scored, this partial overlap may be due to homoplasy or common ancestry. The results were so clear cut that the application of more sophisticated methods of hybrid detection (e.g. NewHybrids) was deemed unnecessary. The inference of reproductively isolated gene pools is strengthened by the distinctness of the mitochondrial variation of the genotype groups (Figure 2), indicating long-lasting isolation with an absence of both current and past hybridisation [22]. Even though the generation of hybrids in the laboratory is possible to varying degrees [17,21], both pre- and postzygotic isolation mechanisms [19,21] seem to have maintained complete reproductive isolation in the wild, despite the opportunity to interbreed. The two taxa therefore conform to several species concepts, including the biological species concept [10], at least in the area investigated, and should consequently be regarded as good species.
Spatial repartition of C. riparius and C. piger along ecological gradients
The unequivocal species assignment by molecular markers showed that larvae of C. riparius and C. piger occurred not only in the same general area, but in about half of the cases at the same site (Figure 3). Still, they were found syntopically less often than expected from their overall abundance. The virtual absence of population structure at the spatial scale of the study (Table 2) suggests that this is not due to dispersal restrictions or geographical obstacles. Rather, it indicates competitive interaction, either present or past [23]. C. piger was dominant mainly in the west of the area, while C. riparius occurred more frequently in the east (Figure 3), as mirrored in the significant spatial autocorrelation of relative species frequency (Figure 4). This pattern corresponds to the correlation of the species' relative frequencies with parameters measuring the average amount of precipitation during the summer months (Table 3). The latter are climatic parameters, which generally tend to be spatially autocorrelated. Other parameters that covaried significantly with relative species abundances were water chemistry variables (conductivity, nitrite and CaCO3; Table 3). Multiple regression retained only the precipitation variables (Table 4). This suggests that differential desiccation resistance, as observed in other Chironomus species [24], could have caused the observed patterns. However, the absolute differences in precipitation (see Appendix) are probably too small to impose a differential desiccation risk on the water bodies in the area.
Therefore, the climate probably constitutes merely the ultimate cause of the differential species distribution. The amount of rain during the warm summer months, with their increased evaporation, determines the concentrations of nitrite and other ions in the shallow puddles and ditches both species inhabit. It has been shown that both high salinity and high nitrite concentrations impair larval development in C. riparius/piger [25,26]. Our results indicate that C. piger occurs in areas with less summer rain and tolerates higher nitrite concentrations and conductivity than C. riparius (Table 3). Therefore, the proximate cause of the observed correlation of the species frequencies with summer precipitation is more likely the gradient of water chemistry variables during the time of highest larval abundance [27]. Even though the inferred spatial repartition along ecological gradients is rather a hypothesis that needs to be confirmed by subsequent laboratory experiments, it has become evident that C. riparius and C. piger are not completely equivalent ecologically. Although this had already been suspected [14], our study is the first to demonstrate ecological partitioning between the species pair quantitatively in the field. Studies on the ecological differentiation of other Chironomus species have revealed a range of mechanisms that structure coexistence in sympatry. Dietary niche separation between two profundal species from the Chironomus plumosus group has been suggested by stable isotope analysis [28]. The same species were found to differ in emergence time, suggesting also temporal niche separation [29]. Perhaps the most impressive example of interspecies competition avoidance is the spatial repartition of temporary rain water puddles by C. pulcher and C. imicola into shaded and sunny regions on a very small scale [30]. Despite the demonstrated spatial repartition of C. riparius and C. piger along ecological gradients, we found a substantial number of sites where both species co-occurred, indicating a substantial overlap in the realised ecological niche. Possible, not mutually exclusive explanations for this pattern include the following. (i) A substantial stochasticity in the dispersal/colonisation of the sites: even though the oviposition choice in another Chironomus species is influenced by nitrogenous compounds and conspecific larvae [31], a high degree of randomness regarding environmental conditions is generally assumed in the community assembly of chironomids [27]; moreover, which species arrives first at a yet unoccupied site may crucially influence the outcome of subsequent competition [32]. (ii) Temporally fluctuating environmental conditions may also prevent complete competitive exclusion [33]. (iii) Interaction with other species: several other species of Chironomus are present at most of the investigated sites [16], as well as other mud-dwelling taxa with similar requirements. (iv) The abundance in the neighbourhood possibly also influences the local abundance of C. riparius and/or C. piger [33]. As this study documents, C. riparius and C. piger provide a promising model for the investigation of factors shaping the distribution of closely related, cryptic species. Currently ongoing experimental and ecological genomic studies on this emerging model system will help to gain a deeper understanding of the processes and factors that shape the realised niche of closely related species in sympatry.
Understanding the internal factors and constraints shaping their distribution and coexistence will contribute to our mechanistic understanding of the processes shaping biodiversity in ecological communities.
Materials and Methods
Sampling
The sampling area lies in the middle of the upper Rhine valley, in a rectangle of roughly 40 by 60 km between 49°09′-49°33′N and 8°10′-8°13′E. It comprises the Rhine valley plain, limited in the west by the mountains of the Pfälzer Wald and in the east by the rising hills of the Odenwald range. The area is hydrologically characterised by the presence of many drainage ditches, slowly flowing small streams, temporary puddles, the oxbows and the main stream of the river Rhine. The sampling took place from mid September to November 2004, thus targeting the overwintering generation of Chironomus larvae [34]. The sampling period was scheduled in autumn in order to avoid the large fluctuations in abundance among species throughout summer. Moreover, the hibernating larval assemblage that will found next year's first generation represents the result of competition processes during the growth season [35]. Sampling took place as described in [12]. Briefly, potential Chironomus habitats were considered opportunistically within the study region, but we mainly focused on typical Chironomus riparius/piger habitats (small streams, creeks, and ditches with fine, muddy sediment). An area of 1 × 1 m was sampled with a 30 × 40 cm net of 0.5 mm mesh size. Due to the small size and low depth of most water bodies, we did not consider different areas within a water body during sampling. All Chironomus larvae found (instar stages 3 and 4), as identified by the presence of ventral tubuli, were brought alive into the laboratory. For the present study, we chose all thirty-four sampling sites where C. riparius and/or C. piger had been identified earlier using a COI barcoding approach [16].
DNA isolation and microsatellite analyses
Larvae were kept in the laboratory for at least 5 days without feeding, in order to remove potential PCR-inhibiting substances from the gut [36]. Head and first body segments were removed for polytene chromosome analysis as described in [17]. Briefly, salivary glands were prepared from fresh larval tissue and fixed in 50% acetic acid. Chromosomes were stained in 2% orcein acetic acid for 15 min and fixed on glass slides for microscopical analysis. Remaining caudal tissue was homogenised in 700 µl standard CTAB buffer containing 0.1 mg/ml proteinase K. After digestion for at least 1 h at 62 °C, a chloroform/isoamyl alcohol 24:1 treatment was performed, followed by 1 h precipitation at -20 °C. DNA pellets were washed twice with 70% ethanol and dissolved in 30 µl water.
Genetic structure, mitochondrial haplotype phylogeny and species identification
Factorial correspondence analysis (FCA) was applied to multilocus genotypes to explore the distribution of genetic variation graphically (GENETIX 4.04 software [38]). Genetic population structure was assessed using the AMOVA approach [39] as implemented in the Excel add-in GenAlEx [40]. For this analysis only sampling sites with at least seven conspecific individuals were taken into account. C. riparius/piger COI haplotypes were identified from [16] (GenBank accession numbers DQ910547-DQ910729). The phylogeny of the COI haplotypes was inferred using statistical parsimony (SP) [41]. The SP network was constructed with the program TCS v. 1.21 [42]. Nesting of clades followed the rules given in [43] and [44].
Inferred reproductively isolated entities were taxonomically identified using polytene chromosome preparations of a subset of individuals [17].
Co-occurrence
We used a Fisher's exact test (10^6 permutations) to investigate whether the co-occurrence of the identified taxa was random. Spatial patterns of the relative frequency of C. riparius and of relevant environmental parameters at the sampling sites with at least seven individuals were inferred with spatial autocorrelation analysis. Seven mutually exclusive lag classes of 5,000 m width were used to compute Moran's I spatial correlation coefficient for each class. Statistical significance of Moran's I was assessed with 999 Monte Carlo permutations. The Excel add-in RookCase version 0.99 [45] was used for the calculations.
Physico-chemical and climatic characterisation of sampling sites
Thirty-eight ecological parameters were recorded in order to characterise abiotic habitat conditions at the respective sampling sites. These parameters were chosen to cover a wide range of ecological conditions known to influence freshwater communities, and the distribution of chironomid species in particular. Recorded characteristics include physicochemical parameters [46,47], sediment composition [48], climatic conditions [49], and structural habitat characteristics (e.g., size and depth of water body). For the determination of sediment organic content, measured as loss on ignition, approximately 30 g of sediment sample were dried at 60 °C for three days and subsequently weighed. Samples were then muffled at 550 °C for 4 h, followed by determination of the percentage weight loss. For the identification of the relative particle size composition of the samples, 150 g of homogenised sediment were washed through six sieves with decreasing mesh size, and the content of each sieve was dried and weighed. Conductivity, pH, water temperature and O2 saturation were measured with a WTW Multi 340i multimeter at each sampling site. Ammonium, nitrite and phosphate concentrations were determined colorimetrically using Aquamerck® quick tests. Chloride, CaCO3 and nitrate concentrations were measured with colour tests (Merckoquant®). The stream velocity was measured using an AMR ALMEMO® device. Nineteen biologically meaningful climatic parameters were extracted for each sampling site from the BIOCLIM environmental layers with a spatial resolution of 0.5 min, implemented in the computer program DIVA-GIS version 4.2 [50]. Mean, median, standard deviation, minimum and maximum values for the recorded parameters are given in Appendix S1.
Statistical analysis
Means, standard deviations, medians, minimum and maximum values for all 38 variables taken into account are given in the Appendix. All data with the exception of pH were either log10(x+1)-transformed (continuous variables) or arcsine-transformed (percentages) to conform to the underlying assumptions of normality and homoscedasticity in subsequent analyses. We calculated Pearson's correlation coefficients (r) between the relative C. riparius frequencies and all respective variables. Because of the multitude of comparisons, we calculated a q value for each test to estimate the minimum false discovery rate incurred when calling that test significant. Variables with p values < 0.05 and q values < 0.10 in the correlation analysis were retained for a multiple regression (forward selection).
Supporting Information
Appendix S1. Recorded environmental parameters.
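A minimal sketch of the lag-class autocorrelation analysis described above, written in Python rather than RookCase and run on toy coordinates and frequencies instead of the study's data, might look as follows for a single lag class:

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 30_000, size=(25, 2))  # site coordinates in metres
z = rng.random(25)                         # relative C. riparius frequency

def morans_i(z, w):
    """Moran's I = (n / sum(w)) * sum_ij w_ij (z_i - zbar)(z_j - zbar)
    / sum_i (z_i - zbar)^2, for a spatial weight matrix w."""
    zc = z - z.mean()
    return len(z) / w.sum() * (w * np.outer(zc, zc)).sum() / (zc ** 2).sum()

# Binary weights: 1 for pairs of distinct sites in the 0-5,000 m lag class
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
w = ((d > 0) & (d <= 5_000)).astype(float)

obs = morans_i(z, w)
perms = [morans_i(rng.permutation(z), w) for _ in range(999)]
p = (np.sum(np.abs(perms) >= abs(obs)) + 1) / 1000
print(f"Moran's I = {obs:.3f}, Monte Carlo p = {p:.3f}")
```

Repeating this for each of the seven lag classes yields a distance correlogram of the kind reported in Figure 4.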
Carbonate cementation in the late glacial outwash and beach deposits in northern Estonia
The sedimentary environments, morphology and formation of carbonate cement in the late glacial glaciofluvial outwash and beach deposits in northern Estonia are discussed. Cementation is observed in well-drained, highly porous, carbonaceous debris-rich gravel and sand, forming resistant ledges in otherwise unconsolidated sediments. The cemented units occur as laterally continuous layers or as isolated lenticular patches with thicknesses from a few centimetres to 3 m. The cement is found in two main morphologies: (1) cement crusts or coatings around detrital grains and (2) massive cement almost entirely filling interparticle pores and intraparticle voids. It is exclusively composed of low-Mg calcite with angular equant to slightly elongated rhombohedral and scalenohedral or prismatic crystals, which indicate precipitation from meteoric or connate fresh surface (glacial lake) water and/or near-surface groundwater under low to moderate supersaturation and flow conditions. The absence of organic structures within the cement suggests that cementation is essentially inorganic. The cement exhibits both meteoric vadose and phreatic features, and cementation most probably occurred close to the vadose-phreatic interface, where the conditions were transitional and/or fluctuating. Cementation has mainly taken place by CO2-degassing in response to fluctuations in groundwater level and flow conditions, controlled by the Baltic Ice Lake water level, and seasonal cold and/or dry climate conditions.
INTRODUCTION
Carbonates can precipitate through a range of climatic conditions. They are mainly formed biogenically in warm tropical climates, but are also widespread in cold and glacial environments. Carbonate cementation in polar regions or within glacial deposits in formerly glaciated areas has attracted interest as a potential indicator of hydrologic and permafrost conditions and as a palaeoclimatic proxy (e.g. Leonard et al. 1981; Aharon 1988; Sharp et al. 1990; Vogt & Corte 1996; Lauriol & Clark 1999; Candy 2002; Lacelle et al. 2006, 2007; Goodwin & Hellstrom 2007; Lacelle 2007). Carbonate precipitates, occurring as a cement in unconsolidated glacial sediments in different geomorphic and hydrologic settings, can be formed by a variety of mechanisms that lead to their dissolution and reprecipitation. In glacial settings carbonate precipitation is mainly a result of inorganic processes, such as subglacial regelation, periglacial freeze-thaw processes, evaporation and CO2-degassing. Biogenic processes, such as photosynthesis, skeletal mineralization and bacterial oxidation of organic matter, are involved in proglacial settings (Fairchild & Spiro 1990; Fairchild et al. 1994). Precipitation of carbonates in glacial sediments may also have occurred during subsequent non-glacial conditions, due to meteoric or terrestrial solute-bearing waters.
This paper describes for the first time the sedimentary setting, texture and micromorphology of carbonate cements in North Estonia. Laterally extensive cemented beds have been recorded in five outcrops hosted in glacial outwash and beach sand and gravel deposits that accumulated during the Late Weichselian deglaciation, about 15.7-12.5 ka BP (Kalm 2006; Kalm et al. 2011). We consider that controls on carbonate precipitation include the changes in local hydrologic conditions and in the subsurface thermal regime during the last deglaciation and the regression of the Baltic Ice Lake (BIL).
REGIONAL SETTING
The five studied outcrops are located on the edge of a carbonaceous bedrock cliff, the Baltic Klint, in North Estonia (Fig. 1). The Ordovician limestone plateau lies at altitudes of 35 m a.s.l. near Laagna to about 70 m a.s.l. at Kunda. To the north the Ordovician escarpment borders the Cambrian and Ordovician terrigenous rocks (sand- and siltstone, clay) with several exposed or more or less buried terraces (Suuroja 2006). The Ordovician limestone plateau is covered by rather thin Quaternary sediments, usually less than 1 m. Thicker deposits, up to 10 m, are connected with glacial landforms, like eskers, ice-marginal ridges, kames and drumlins from the Late Weichselian deglaciation stages of North Estonia, ca 13.7-12.7 cal ka BP (Kalm 2006). In the foreklint area, including klint bays and bedrock valleys, the thickness of Quaternary deposits may reach more than 100 m. They are represented by glacial and interglacial deposits from the Early Saalian through the Eemian to the Late Weichselian and are followed by Holocene marine and terrestrial deposits on top.
During the Late Weichselian deglaciation of northern Estonia the Baltic Klint might have become an obstacle to the movement of the otherwise degrading ice. Due to stresses in the ice, a number of crevasses, mainly of marginal orientation, became the favoured channels for glacial meltwater drainage. In addition, isostatic tilting at that time promoted periglacial surface water flow close to and along the ice margin. Outwash and delta sands and gravels were deposited in the form of ice-marginal ridges and outwash fans on the fringe of the Baltic Klint. The waters of the BIL inundated large areas of the carbonaceous plateau, with a maximum water level of about 90 m a.s.l. in stage A1 at around 13.3 ka BP and about 40 m a.s.l. in the final, BIII stage at about 11.6 cal ka BP (Björck 1995; Rosentau et al. 2009). During the regression of the BIL the elevated glaciofluvial ridges gradually rose above the lake water level, forming either small isolated islands or barrier spits. In response to the water level fluctuation and the accompanying alongshore waves and currents, the underlying glaciofluvial material was reworked and/or subjected to littoral transport. The elevated glaciofluvial ridges and their unconsolidated sandy-gravelly deposits encouraged the formation of beach ridges along coastlines that had a windward exposure to open water from the north. In the following stages of the Baltic Sea (from the Yoldia to Limnea seas) the water levels were generally lower than the carbonaceous klint edge and their waters did not affect sedimentation.
Glacial outwash and beach deposits of the studied sites, composed of bedded, well-rounded and sorted pebble-cobble gravels with variable contents of sand and fines, contain carbonate-cemented layers and lenses (Table 1). Both types of sediment are quite similar lithologically, which is not surprising, since beach sediments are largely reworked from outwash deposits. Therefore, in some sections it was difficult to draw a line between outwash and beach deposits. The coarse-grained material mainly consists of local carbonaceous debris (>70%) derived from the underlying Middle Ordovician strata. Scandinavian Shield-derived debris (igneous and metamorphic rocks) makes up about 10-20%, and local Cambrian and Ordovician terrigenous silt- and sandstone from the foreklint area less than 5%. The texture of the deposits is highly variable and will be described in detail for each studied site later in this paper.
All these sand and gravel deposits and cemented zones presently lie well above the modern water table, i.e. in the vadose zone. This does not exclude the possibility that cementation occurred when the water table was higher than today and the sediments were permanently water saturated, i.e. in the phreatic zone.
METHODS
The samples were collected from both the laterally continuous horizons and the vertical features, depending on the variations in sediment facies, cement distribution and cementation rate. Thin section petrography, X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to determine the mineralogy and texture of the cement.
The mineral composition was determined from powdered samples (cement coatings). The powder was spread over an aluminium slide and analysed using a DRON 3M X-ray diffractometer at the Institute of Ecology and Earth Sciences, University of Tartu. Diffractograms were measured with Cu Kα radiation in the range 10-55° 2θ with a step size of 0.03° and a scanning speed of 3 s per step. Identifications were made based on comparisons with standard diffractograms and the data from this analysis, and the result was a comparison of the relative contents of minerals (mainly calcite, dolomite, quartz).
The micromorphology and chemical composition of the cement were examined using a scanning electron microscope (SEM Zeiss DSM 940), equipped with an Idfix silicon drift technology energy dispersive analyser (EDS) for semi-quantitative element analysis, at the Institute of Ecology and Earth Sciences, University of Tartu. The samples (about 0.1 cm³) were mounted onto an aluminium stub using double-sided carbon tape and then sputter-coated with gold for 230-250 s prior to examination under the SEM.
Before thin section production the soft-cemented samples were first impregnated with an epoxy resin to make them harder for cutting. All samples were sectioned, glued to a glass slide (3 × 5 mm) and polished using progressively finer abrasive grit until the sample was 20-30 µm thick. Thin sections were analysed visually under an optical microscope and photographed with a digital camera at the Institute of Ecology and Earth Sciences, University of Tartu, and at the Faculty of Geography and Earth Sciences, University of Latvia.
The Geological Base Map at a scale of 1:50 000 and LIDAR elevation data from the Estonian Land Board were evaluated for regional background information and description of the sites.
Pehka site
The 300 m long and 4-6 m high Pehka exposure (59°29′34″N, 26°20′20″E) is located on a NE-SW orientated ridge along the fringe of the carbonaceous escarpment (Fig. 2A). The limestone plateau lies at an altitude of 40-60 m and borders the 15-20 m high buried klint terrace in the north (Suuroja 2006). The Pehka ridge is about 2.3 km long, 100-300 m wide and has a relative height of about 6 m in the NE part and up to 13 m at its SW end. The southwestern part of the ridge acts as a dam in the mouth of the ancient bedrock valley cut into the carbonaceous plateau.
The ridge is entirely composed of sand and gravel. The thickness of the deposits varies from about 5 m on the limestone plateau in the NE to 18 m within the river valley at the SW end of the ridge. Two laterally continuous lithofacies associations are distinguished, separated by a sharp, apparently erosional contact. The upper lithofacies is composed of high-energy, variably sorted, well- to sub-rounded, cross-bedded coarse gravel with pebbles and boulders and little sand and fines. The total thickness is up to 6 m (Figs 3A and 4). The coarse-grained material originates from the underlying carbonaceous bedrock (up to 90%) and is most likely derived from the klint edge. The cross-bed dip direction records palaeoflow mainly towards the south and southwest. This upper coarse-grained gravel-pebble-boulder facies is typical of wave-built beach ridges related either to storms or to exceptionally high water stages, often occurring on erosionally sculpted bedrock terraces (Otvos 2000).
The gravel unit is underlain by low-energy, massive, fine-grained sand, which extends over the foreklint area and fills most of the bedrock valley. The thickness of the sand unit reaches 12 m. The OSL age of the sand has been dated at 26.8 ± 3.5 ka and related to the terrestrial interstadial sands of Middle Weichselian age (Kadastik 2004; Kalm et al. 2011).
The strongly cemented unit forms a lateral, up to 3 m thick layer in the top part of the gravel unit (Fig. 4). The cemented layer can be followed along the entire 300 m long exposure, forming precarious overhangs (Fig. 3A). It is traceable along the whole of the ridge top beneath a thin soil layer and probably wedges out either on the slopes or just at the foot of the ridge. The degree of cementation is variable, mainly depending on the grain size of the sediments and the texture of the cement. Coarse gravel and pebbles are mostly cemented by thin carbonate crusts or coatings around the clast surfaces, which act like glue sticking them together. In finer, sandy material the cement is distributed uniformly in the matrix, forming a strong massive cement between coarser particles.
Kunda site
The 800 m long and 10-15 m high Kunda exposure (59°29′46″N, 26°32′49″E) is situated 11 km east of the Pehka site (Fig. 1), also on a NE-SW orientated arced ridge (called Hiiemägi) in the mouth of an ancient bedrock valley (Fig. 2B). The up to 30 m deep Kunda bedrock valley is incised into Middle Ordovician carbonates as well as Cambrian and Ordovician silt- and sandstone. The bedrock lies at altitudes of 20 m within the valley and 50-55 m on the adjacent limestone plateau. The Kunda valley is mostly filled with till and glaciolacustrine silt and clay deposits with a thickness of 15-20 m. The present-day Kunda River flows above the ancient valley and has eroded a narrow deep-sided gully through the Hiiemägi ridge.
The Hiiemägi ridge is 2.5 km long, 100-400 m wide and has a relative height of up to 13 m. The ridge comprises mainly outwash deposits represented by cross-bedded clayey sand, gravelly sand and rounded gravel deposits alternating with well-rounded cobble-pebble interlayers almost lacking fine-grained material. The topmost part of the section contains discontinuous layers of slightly rounded platy limestone boulders. General cross-bed dip directions record palaeoflows towards the south and southwest. Some fine sand layers also show ripple marks. The Hiiemägi ridge has been interpreted as an ice-marginal formation, which was later flooded and reworked by the BIL. Therefore, the topmost part of the sediments could also be regarded as beach deposits. As a result of the lowering of the BIL, the higher northeastern part of the ridge formed a spit, which later became a dam separating the ancient Kunda Lake in the south from the BIL in the north (Karukäpp et al. 1996).
Cementation occurs as lateral patches or 2 m thick lenses in the topmost part of the sediment complex, mainly between the two uppermost cobble-pebble layers (Fig. 4). The extent of cementation in the pit wall is difficult to observe because the outcrop is almost completely covered by scree. Many large rafts of cemented material have been left at the bottom of the quarry (Fig. 3D), mostly in its central part. It therefore seems that the cementation principally took place just in that specific sloped zone of the ridge. The cement is predominantly strong and massive in a fine-grained matrix, but occasionally highly weathered and crumbly.
Moldova site
The Moldova exposure (59°25′25″N, 27°06′27″E) is part of a 12 km long and about 1 km wide NW-E directed arced ridge with a relative height of up to 5 m (Fig. 2C). It is regarded as a complex of a glacial outwash forefield and a littoral beach system of the BIL on the fringe of the Baltic Klint. At Moldova the Ordovician limestone escarpment, together with the escarpment in terrigenous rocks, forms an up to 15 m high terrace stepwise descending northwards (Suuroja 2006). The Ordovician limestone crops out in the bottom of the pit at an altitude of about 44 m a.s.l. and rises southwards to 50 m a.s.l.
At Moldova an up to 600 m wide littoral terrace has formed either on the carbonaceous bedrock or on the glacial outwash terrace. The overall thickness of the sand and gravel deposits is about 6 m. The upper 4 m of sediments are typical beach deposits consisting of seaward-dipping, cross-bedded, well-sorted and well-rounded, often matrix-free coarse gravel with pebbles alternating with layers of sandy gravel and sand (Figs 3B, 4). These littoral deposits were formed in a repeatedly changing coastal environment, with variable water levels and/or wave energies. The bottom 2 m of the deposits are made up of horizontally bedded, poorly graded sandy gravel with well-rounded pebbles and cobbles, which are most likely of glacial (outwash or fluvial) origin.
Carbonate cementation is observed at two levels. The upper, extensively cemented layer, with a thickness of 0.7-2 m, occurs in beach deposits at a depth of 1 m, forming a distinctive ledge in the outcrop section (Fig. 3B).
A second, 1 m thick cemented layer is present at the bottom of the quarry, in outwash deposits just above the bedrock. Several cemented rafts (quarrying residuals) dropped around the quarry bottom indicate a wider distribution of cementation than is currently visible. Massive cement is mainly found in the fine-grained sand matrix, or, in the absence of sand, a thin cement crust surrounds coarse gravel and pebble clasts.
Tornimägi site (Hills of Sinimäed)
Tornimägi is one of the three hills forming the west-east orientated Sinimäed ridge, with a length of 3 km and a width of up to 400 m (Fig. 2D). The Sinimäed ridge is located about 2.2 km south of the Baltic Klint, rising up to 50 m above the surrounding area with a generally flat topography. Three bedrock blocks of Middle Ordovician limestone form the cores of the three hills. The limestone beds are strongly inclined (18°-70°) and folded, and in places overthrusted. The hills are fully buried under Quaternary glacial deposits, while the limestone mainly crops out on the northern slopes as a 40-50 m high vertical wall.
The formation of the Sinimäed ridge is still under discussion. According to one hypothesis (Orviku 1960; Suuroja 2006), the limestone blocks may have been squeezed upwards by the Cambrian clays along a pre-existing tectonic fault zone under the pressure of continental ice. Others (Orviku 1926; Miidel et al. 1969) have suggested a glaciotectonic origin, i.e. that these are glacial rafts transported from the klint edge to their present location by continental ice. Based on that supposition, the hills have been interpreted as a push (end) moraine of the Pandivere stage (Raukas et al. 1971), about 13.3 cal ka BP (Kalm 2006). The entire ice-marginal formation includes the end moraine chain, comprising deformed limestone blocks with glacial till and outwash deposits between them in the proximal part and a glaciofluvial delta plain in the distal part.
The exposure (59°22′28″N, 27°50′46″E) is located on the southern slope of Tornimägi hill, where glaciofluvial outwash deposits crop out as a 5 m high and 150 m long escarpment (Figs 3C, 4). The outwash deposits consist of planar cross-bedded, poorly graded sandy gravel with a few purely pebble layers. The gravel deposits change gradationally downwards into medium to fine sand with little coarse-grained material, yet containing a large limestone raft that can be observed in the open section. The gravel unit is partially covered by a thin, less than 1 m thick layer of medium- to coarse-grained sand, which in turn is covered by an up to 2 m thick sandy loam till layer.
The main, 2 m thick, uniformly cemented layer occurs in the gravel unit. Very strong massive cement is found in the fine-grained gravel-sand matrix between coarser clasts. The lower boundary of the cemented layer is sharp and traceable along the entire open section. In addition, two thin (2-3 cm) strongly cemented layers are observed in the sand unit above the cemented gravel layer.
Laagna site
The Laagna exposure (59°23′18″N, 27°58′04″E) is situated 7 km to the northeast of the Tornimägi site (Fig. 1),
in the southern part of Laagna klint bay, which cuts as an up to 6 km long and 1.5 km wide incision into the limestone plateau (Suuroja 2006). Laagna klint bay is located in a faulted area where Ordovician carbonate rocks are missing and Cambrian blue clay lies at 25-30 m a.s.l. Several limestone blocks bordering the klint bay are squeezed upwards and form up to 15 m high escarpments partly buried under glacial outwash and beach deposits. Two NE-SW-trending ridges are observed on the edges of such limestone escarpments at Laagna (Fig. 2E). The ridges mark short standstills of the ice margin during the Pandivere phase and were later strongly reshaped by the coastal processes of the BIL. The overall thickness of the gravel deposits is up to 10 m. The upper 5 m of the ridge consist of well-sorted, well-rounded gravel and pebbles with fine to coarse sand interlayers typical of beach ridge sediments. The lower, presumably up to 7 m thick part of the sediment complex consists of poorly graded sand, gravel and pebbles, suggesting a glacial outwash origin. The boundary between the two genetic types of sediment is not traceable (Fig. 4).
The upper 6 m of deposits have been excavated in the Laagna exposure. Cementation occurs as a lateral discontinuous layer with a thickness of 0.5-0.8 m in outwash deposits at the bottom of the quarry (Fig. 3E). Massive cement appears in the fine-grained sand matrix between the coarse-grained material. The degree of cementation is variable, decreasing downwards.
Texture
The texture of the carbonate cement is similar at all sites. The cement mostly occurs as a carbonate crust or fringe of variable thickness around detrital grains or fills interparticle pores (Fig. 5). Macroscopically (i.e. visible to the naked eye) the cement is highly visible in the coarse matrix as an up to 1 mm thick crust around the clast surface. Such a carbonate crust acts like a glue, sticking coarse clasts together. In some pebble layers the crust is present only on the upper surface of the clasts, as is highly characteristic of evaporative crusts, and occasionally a slightly pendant morphology is observed. Several frost-cracked pebbles, probably caused by freeze-thaw processes, are also found in the cemented layers, where the cracks and fissures are filled with cemented fine-grained material. In fine-grained material the cement is distributed uniformly in the matrix, filling almost all intergranular space as massive cement between the coarser material. The degree of cementation is variable throughout the cemented body due to sediment texture and porosity. Commonly the massive cement filling the pore space of the fine matrix is stronger than the carbonate crust around the highly porous coarse material.
Composition
The mineral and chemical composition was determined from powdered cement using an X-ray diffractometer. The dominant minerals are calcite and quartz, but minor amounts of dolomite are also present. This suggests that the cement is not pure secondary carbonate, but also contains fine-grained detrital material from the outwash and/or beach deposits. Microscopical thin-section and SEM examinations confirm that calcite is the main mineral that fills the pore space and cements the initial debris. Clay minerals were not analysed in detail due to their low content.
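The identification of these phases rests on the fact that each mineral's strongest lattice spacing d maps to a characteristic peak position under the Cu Kα radiation described in the Methods. The sketch below shows that Bragg's-law arithmetic with rounded textbook d-spacings; it illustrates the principle rather than reproducing the diffractometer's own software.

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha wavelength in angstroms

def two_theta(d_angstrom, wavelength=WAVELENGTH):
    """Peak position in degrees 2-theta for lattice spacing d,
    from Bragg's law n*lambda = 2*d*sin(theta) with n = 1."""
    return 2 * math.degrees(math.asin(wavelength / (2 * d_angstrom)))

for mineral, d in (("calcite (104)", 3.035),
                   ("dolomite (104)", 2.886),
                   ("quartz (101)", 3.342)):
    # all three peaks fall within the 10-55 deg 2-theta scan range used here
    print(f"{mineral}: 2-theta = {two_theta(d):.2f} deg")
```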
Detailed examination with the SEM analyser shows that the cement is mostly composed of CaO (up to 50%) and contains minor amounts of SiO2, Al2O3 and Fe2O3 and trace amounts of MgO, MnO, K2O and Na2O. The minor amounts of silica and alumina could point to clay minerals, which were difficult to distinguish even under the SEM.
A specific morphology of the micritic calcite crystals is often difficult to distinguish. The micrite mostly occurs as massive subhedral calcite with some rhombohedron faces (Figs 6D, E, 7G). It usually forms coatings or isopachous rims tightly around the grains or completely fills interparticle porosity or intraparticle voids. Micrite often changes to microsparite and sparite towards the intergranular pore space, indicating that micrite precipitation preceded that of microsparite or sparite (Figs 6F, 7D, F). The thickness of the micritic rims is variable (up to 20 µm).
Microsparite and sparite calcite crystals are mostly equant to elongated rhombohedrons or scalenohedrons, but a few prismatic crystals were also observed. Elongated crystals often have sharp tips and stand perpendicular to the grain surfaces, pointing towards the intergranular pore space. Occasionally equilateral, almost cube-shaped crystals with a size of 20-50 µm have developed (Fig. 7D). Sparite crystals also form assemblages filling larger intergranular voids and microfractures, although the larger intergranular voids have not always been completely filled.
The thickness of the micritic and sparitic cement together ranges from 40 to 300 µm, whereas larger voids have not been fully filled by the cement. The cement fringes the grains uniformly, and no specific downward-dragging pendant structures have been observed. In smaller pores the cement rims around neighbouring clasts have grown together, somewhat resembling meniscus bridges. In addition, the peculiar meniscus cement with smooth ends of flattened calcite crystallites, which defines the water-air interface in pores, has not been observed in larger pores.
The different textures of the cement show very clearly that crystal growth has taken place either in separate stages or under changing hydrological regimes. Traces of dissolution and secondary formations are found on the surfaces of crystals, pointing to some later dissolution and reprecipitation of secondary calcite.
Hydrologic environment of cementation
Carbonate cement can precipitate in both vadose and phreatic hydrologic environments, which are distinguished by specific mechanisms and cement characteristics that are unique to, or more prevalent in, particular environments (Vogt & Corte 1996; Hall et al. 2004; Elbracht 2010). In the vadose zone, where water is retained by a combination of capillary force and adhesion, precipitation can take place from the infiltrated or percolated waters that permeate through the sediments at intervals. In general, vadose cements tend to be made of micrite due to high levels of supersaturation and rapid precipitation, driven either by rapid transpiration or evaporation or by chemical reactions causing rapid CO2-degassing (Folk 1974; Given & Wilkinson 1985; Hall et al. 2004). Vadose cement is characterised by meniscus cement at grain contacts or by pendant textures, in which vertically orientated crystals grow downwards from the lower faces of clasts. Vadose cement is preferentially located in fine-grained sediments because of higher matrix suction and increased capillary tension (Hall et al. 2004).
Sparitic crystals may be associated with a greater water supply (e.g. seasonal) into the sediments, leading to a decrease in carbonate concentration and a deceleration of precipitation. In phreatic conditions all pores and fractures of the sediments are saturated with water. Phreatic cements may exhibit a broad range of crystal sizes and morphologies depending on the degree of supersaturation and the flow rate of fluids (Given & Wilkinson 1985; Gonzalez et al. 1992; Vieira & Ros 2006). Phreatic cements preferentially form coatings or isopachous rims tightly around the grains, growing perpendicularly from the grain surfaces towards the intergranular pore space and eventually infilling all intergranular voids. Generally the formation of cement in phreatic conditions is slower than in the vadose environment, and therefore distinct, well-developed crystals can form. General successions of cementation often demonstrate a variety of textures, which implies that the formation of cement can take place under changing, even micro-scale, hydrologic and/or climatic conditions attributed to drainage, fluctuations in the water table and the triggering mechanism of precipitation (e.g. Aber 1979; Knight 1998; Candy 2002; Hall et al. 2004; Vieira & Ros 2006).
The absence of organic structures within the studied cements suggests that cementation is essentially inorganic. The few pieces of macroscopic evidence, such as meniscus bridges and pendant textures between larger clasts (Fig. 5B) observed in the outcrops, are indicative of precipitation in the vadose zone. Likewise, micritic coatings and rare meniscus bridges concentrated mainly at grain contacts are observed in the fine-grained sand matrix between gravel and pebbles. No apparent micro-scale pendant texture or any influence of gravity was found in the fine matrix. Distinct evidence of phreatic cementation includes well-developed microsparite to sparite equant to elongated scalenohedral crystals forming isopachous rims around the grains (Figs 6D, 7E). These rims are sporadically surrounded or engulfed by dense massive micritic cement (Fig. 7D). Together they fill the bulk of the pore space. In turn, the massive micrite includes dispersed microsparitic and sparitic crystals. This indicates phreatic rather than vadose conditions.
The presence of bilaminar cement, with thin inner micrite laminae and thicker outer microsparite and sparite rims of calcite crystals, may point to two-step cementation that started in the vadose and continued in the phreatic environment. The inner micritic cement (coating or meniscus) is explained by a high carbonate concentration during the initial stages of rapid precipitation produced by repeated wetting and drying cycles of the sediments. Such a recurrence most probably occurred on a seasonal or annual basis, when water could seep into the sediments during storms and rains or exceptionally high water levels. The formation of micrite envelopes reduced permeability. Subsequently pore water drained and evaporated more slowly, leading to lower nucleation rates and thus permitting the growth of larger crystals at the outer rim. In addition, a coarser crystal size may have resulted from progressively decreasing supersaturation due to the more limited access of carbonate ions in the vadose zone. On the other hand, the sediments may have entered the phreatic zone due to groundwater table fluctuation, i.e. a rise in the water table. In that instance the dense, sporadically sparry micritic cement resulted from rapid precipitation stimulated either by evaporation or by CO2-degassing of the water.
The studied cements exhibit both meteoric vadose and phreatic features. Most probably cementation occurred close to the vadose-phreatic interface, where conditions were transitional and/or fluctuating. The uniformly cemented layers indicate carbonate precipitation in a low-energy, water-saturated phreatic environment (e.g. at Pehka, Tornimägi and Moldova), whereas in vadose conditions preferentially more irregular, scattered cemented bodies developed (e.g. at Kunda).
Controls on spatial distribution of cementation
The degree of cementation within sediments can vary widely and is controlled by the characteristics of the environmental conditions. A number of factors either inhibit or facilitate the process of cementation, contributing to its variability. Considerable control on cementation derives from the hydrologic conditions, i.e. the water table depth (vadose/phreatic environment), the pore-water chemistry (degree of supersaturation) and the ambient fluid pattern (the influx of calcium and/or bicarbonate ions). Several studies (e.g. Jacka 1974; Aber 1979; Khadkikar 1999; Hall et al. 2004) show that cementation is also strongly controlled by sediment texture (grain size and porosity), which in turn determines the permeability and thus the migration of fluids in the sediments. Coarser-grained and more porous sand and gravel, as more permeable media, tend to be preferentially cemented, whereas fine-grained silt and clay, or poorly graded tills, as less permeable beds, are rarely cemented.
It is more usual to find (as in this study) that particular gravel and sand beds are firmly cemented, whereas adjacent layers remain uncemented. Beach and outwash deposits with similar textures are widespread on the limestone plateau in North Estonia, but most of them do not exhibit calcite cementation. In the studied sites lateral cementation of variable thickness occurs in well-drained, poorly to well-graded gravel with a sand matrix, either at the surface or some 1-5 m below it. The cemented layers do not exhibit apparent textural differences compared with the non-cemented, poorly consolidated beds of similar texture (Fig. 4). Furthermore, the boundary between cemented and underlying uncemented sediments is mainly sharp and sub-horizontal, showing an abrupt interruption of cementation (e.g. at Pehka, Moldova and Tornimägi). Cementation thereby seems to be mainly controlled by the groundwater level, and to be independent of sediment textural differences.
The formation of near-surface lateral cementation probably requires the occurrence of an elevated near-surface water table. The large volume of surface water in the form of the BIL definitely produced a raised water table, which later started to decline gradually. Additionally, a perched water table could have been produced by permafrost or seasonal freezing, which in turn may have promoted the supersaturation of CO2 in the water. A shallow layer of discontinuous permafrost would explain the formation of the more restricted cemented layers or zones of cemented patches (e.g. at Kunda) between permafrost patches. Changes in the thermal regime and air-water exchange through the talik zone may have triggered CO2-degassing, causing supersaturation of the water and precipitation of carbonates (Fairchild et al. 1994; Knight 1998).
Carbonate precipitation occurs preferentially in a zone of elevated supersaturation that forms as a function of the reactants supplied by the surrounding waters. The composition and morphology of the carbonate crystals are controlled by the Mg/Ca ratio of the solution and by the water flow characteristics, such as flow direction and velocity (Folk 1974; Given & Wilkinson 1985; Gonzalez et al. 1992). The studied cements consist exclusively of low-Mg calcite, which preferentially precipitates in meteoric or connate surface freshwater environments, as these generally contain little Mg2+. As the majority of the calcite crystals are only slightly elongated or equant, the degree of supersaturation generally had to be low. Equant fabrics are also commonly associated with lower flow rates (Gonzalez et al. 1992). More elongated crystals, terminated by shallow rhombohedral faces, are produced at higher supersaturation, which may have been driven by a greater flux of Ca2+ and HCO3- ions or other reactants transported by groundwater, or may be due to faster evaporation and/or CO2-degassing. Groundwater influx could be related to the overall raised groundwater levels in the sediment profile or may have resulted from provisional seepage (including pressurised seepage) along the beach zone or the klint edge.
Under changing hydrologic conditions cementation could take place continuously at levels of low to moderate supersaturation. As calcite precipitation proceeds, the porosity of the sediments decreases, leading to a diminished or interrupted fluid flow and thus to retarded crystal growth. Sediments with lower porosity, where the flow is initially slow to non-existent, become water-saturated for a time. As a result, dense massive micritic or small equant cements are formed, distinctive of meteoric phreatic conditions. In this case, cementation was controlled by the spatial distribution of the early, probably vadose, micritic cement, which formed preferentially in the fine-grained matrix. This could in turn reduce the overall subsurface drainage and water percolation in the coarse-grained units, where there was sufficient space for the growth of sparitic crystals. This may be the reason why there are no specific pendant structures, or, if there were early vadose micritic pendant structures, they were overprinted by later phreatic sparitic cement apparently precipitating on the micrite substrate.
Formation of cements and implications for late glacial hydrologic conditions
The extensive calcite cementation discovered within the beach and outwash deposits in North Estonia probably started soon after the accumulation of the sediments in late glacial time, during the Allerød and Younger Dryas periods, and could have occurred episodically in the early Holocene only due to meteoric waters. The variable hydrologic conditions and the amelioration of the climate have had a great influence on the cementation process. However, the texture and composition of the cements detected in either the outwash or the beach sediments do not exhibit high variability. This indicates that cementation did not occur under the influence of rapidly shifting parameters; thus the parent waters and the precipitation conditions had to be relatively comparable.
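The supersaturation reasoning above can be made quantitative with a calcite saturation index, SI = log10([Ca2+][CO3 2-]/Ksp). The sketch below shows how CO2-degassing, which amounts to a pH rise at fixed solute concentrations, drives SI above zero. All concentrations are illustrative, activity corrections are ignored, and the equilibrium constants are rounded 25 °C textbook values, so this is a back-of-the-envelope illustration rather than a model of the BIL waters.

```python
import math

K2 = 10 ** -10.33   # HCO3- <=> H+ + CO3^2- (second dissociation constant)
KSP = 10 ** -8.48   # calcite solubility product

def si_calcite(ca_molar, hco3_molar, ph):
    """Saturation index log10(IAP/Ksp); > 0 means supersaturated with calcite."""
    h = 10 ** -ph
    co3 = K2 * hco3_molar / h  # carbonate from bicarbonate and pH
    return math.log10(ca_molar * co3 / KSP)

# The same water before and after CO2 degassing: solutes fixed, pH rises
print(si_calcite(1e-3, 2e-3, 7.0))  # about -0.5: undersaturated, no cement
print(si_calcite(1e-3, 2e-3, 8.5))  # about +1.0: supersaturated, calcite precipitates
```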
Given the palaeogeographic settings of the studied sites and the position of the cemented bodies in the sediment succession, the cement at each site formed in a somewhat different way and at a somewhat different time. Subaqueous outwash deposits were laid down during the BIL stages A1-A2 in front of the retreating ice margin at water levels of about 60-70 m a.s.l. (Saarse et al. 2007; Rosentau et al. 2009). During the continuous regression of the BIL (stages BI-III) the water level dropped to about 50 m a.s.l. at Pehka and to 32 m a.s.l. at Laagna. Everywhere along the coastal zone various beach formations, such as spits, beach ridges and terraces, developed at different levels. As a common feature, the cement occurs near the top of elevated landforms, at the surface or a few metres below it, in either glaciofluvial or beach deposits. Moreover, the cemented layers are stronger in the topmost part and tend to follow the landform topography, although they may be covered by younger sediments (e.g. till at Tornimägi and a beach ridge at Laagna). If cementation became possible just after the formation of these landforms, or after their rise above the BIL water level, the water table in beach sediments close to the shoreline was strongly controlled by the BIL water level. In view of this, the cementation level at each site conforms well to the water table derived from the water level of the BIL at its different stages. This further suggests that the major changes were related to water drainage and evaporation rates in the vadose zone, which were in turn influenced by the influx of percolating rainwater and by water-level fluctuations. Therefore, depending on the volume of water and the early vadose cement, the sediments in the vadose zone could become fully saturated, so that cementation continued under phreatic conditions. As elevated landforms extended above the BIL water level, and thus into colder climates, and during the whole of the Younger Dryas cold period, it is possible that a permafrost horizon or seasonally frozen sediments produced either a perched water table or barriers to water flow, creating episodic water-saturated conditions.

CONCLUSIONS

Based on the results presented in this study the following main conclusions can be made.

1. Lateral cementation of variable thickness occurs in porous, well-drained gravel and sand, either at the surface or a few metres below it. The spatial distribution of cementation is controlled in part by the texture of the initial sediments, but derives mainly from specific confined hydrologic conditions in limited areas, bringing about the formation of restricted cemented beds or scattered patches within the outwash-beach sediment succession.

2. The cement is composed exclusively of low-Mg calcite appearing as angular equant to slightly elongated crystals, which indicates precipitation from meteoric or connate fresh surface (glacial lake) water and/or near-surface groundwater at low to moderate supersaturation and flow conditions. The absence of organic structures within the cement suggests that cementation was essentially inorganic.
3. Near-surface extended cemented layers are ascribed to a near-surface water table produced and controlled by the BIL water level. As the cements exhibit both meteoric vadose and phreatic features, cementation occurred at the vadose-phreatic interface, where the conditions were transitional and/or fluctuating. The presence of bilaminar cement, with thin inner micrite laminae and thicker outer microsparite and sparite rims of calcite crystals, indicates that cementation started under vadose conditions and continued in a water-saturated phreatic environment.

4. Cementation has mainly taken place by CO2 degassing in response to fluctuations in groundwater level and flow conditions, controlled by the BIL water level and by seasonally cold and/or dry climate conditions.

Further studies, including the application of isotope analysis and radiocarbon dating of the cements, may shed light on the source and isotope composition of the parent waters, the kinetic controls of precipitation and the precise time of formation.

Fig. 1. Location of the studied sites on the edge of the Baltic Klint in northern Estonia. Shading indicates the topographic relief (darker means lower elevation, lighter means higher elevation). Simplified bedrock geology: C-O1, Cambrian and Lower Ordovician terrigenous rocks; O2-3, Middle and Upper Ordovician carbonate rocks; S1, Lower Silurian carbonate rocks; D3, Upper Devonian carbonate rocks.

Fig. 2. Geomorphological and geological settings of the studied sites: A, Pehka; B, Kunda; C, Moldova; D, Tornimägi; E, Laagna. Colour-scaled topography is based on LIDAR measurements by the Estonian Land Board.

Fig. 3. Studied exposures. See Fig. 1 for locations and Fig. 2 for geological settings. A, a laterally continuous, up to 3 m thick cemented gravel layer on the top of the ridge, Pehka site; B, a cemented, up to 2 m thick layer in the upper part of the gravel unit, Moldova site; C, a cemented 2 m thick gravel layer on top of outwash deposits, southern slope of the Tornimägi hill; D, cemented rafts from outwash deposits left at the bottom of the quarry, Kunda site; E, an up to 1 m thick cemented layer at the bottom of the quarry, Laagna site. Hammer length 30 cm.

Fig. 4. Sedimentological profiles of the studied exposures. Cementation is marked in grey.

Fig. 6. Microphotographs of cemented outwash sediments. A-C, images of thin sections showing the texture of cemented outwash deposits: calcite crystals form a rim around detrital grains or completely fill intergranular pores and intragranular microfractures and voids; D, E, micritic cement between detrital grains, sticking them together; F, close view of the cement entirely filling the pore space between detrital grains; the micritic carbonate-siliciclastic cement rim grades into distinguishable elongated calcite crystals. Key: D, detrital grain of source sediment; C, calcite crystals; mC, micritic calcite cement; Q, quartz; Fsp, feldspar.
Fig. 7. Micromorphology (SEM images) of calcite cement. A, view of a cemented sand deposit, where a secondary calcite coating covers detrital grains, sticking them together; B, detrital grains covered by a coating of elongated calcite crystals, and this in turn by micritic (< 4 µm) massive calcite; C, micritic cement between detrital grains; a smooth cement surface had formerly been fringing a detrital grain; D, sparite with equilateral, almost cube-shaped calcite crystals in the subhedral calcite mass with some rhombohedron faces; E, scalenohedral calcite crystals grown perpendicular to grain surfaces towards the intergranular pore space; F, rim of elongated calcite crystals with sharp tips orientated towards intergranular voids; G, massive micritic cement fringing a detrital quartz grain. See Fig. 6 for key.

Table 1. Summary of the sedimentology and cement characteristics of the studied sites
2018-12-13T14:25:07.132Z
2014-02-01T00:00:00.000
{ "year": 2014, "sha1": "df1f3546391d66b2663a799d2d4f27046015c6d4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3176/earth.2014.03", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "df1f3546391d66b2663a799d2d4f27046015c6d4", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
11070383
pes2o/s2orc
v3-fos-license
Ensemble based convergence assessment of biomolecular trajectories

Assessing the convergence of a biomolecular simulation is an essential part of any computational investigation. This is because many important quantities (e.g., free energy differences) depend on the relative populations of different conformers; insufficient convergence translates into systematic errors. Here we present a simple method to self-consistently assess the convergence of a simulation. Standard clustering methods first generate a set of reference structures to any desired precision. The trajectory is then classified by proximity to the reference structures, yielding a one-dimensional histogram of structurally distinct populations. Comparing ensembles of different trajectories (or different parts of the same trajectory) built with the same reference structures provides a sensitive, quantitative measure of convergence. Please note: this is a preliminary manuscript, and should be read as such. Comments are most welcome, especially regarding pertinent prior work.

Naturally, simulations aim to observe conformational fluctuations as well. A gap remains, however, between the timescale of many biologically important motions (µsec-sec) and that accessible to atomically detailed simulation (nsec). To put it another way, some problems are simply not possible to study computationally, since it is so far impossible to run a simulation which is "long enough." For those problems which are at the very edge of being feasible, we would like to know whether we have indeed sampled enough to draw quantitative conclusions. These problems include the calculation of free energies of binding (10,11), ab initio protein folding (12,13), and simulation of flexible peptides (14) and conformational changes (15). Convergence assessment is also crucial for rigorous tests of simulation protocols and empirical force fields; see, e.g., (16). Many algorithms propose to improve the sampling of conformation space, but quantitative estimation of this type of efficiency is difficult, except in simple cases (17). In the case of force field validation, it is important to know whether systematic errors are a consequence of the force field or are due to undersampling.

The observed convergence of a simulation depends on how convergence is defined and measured. It is therefore important to consider what sort of quantity is to be calculated from the simulation, and to choose an appropriate way to assess the adequacy of the simulation trajectory (or trajectories). Many relatively simple methods are commonly used, such as measuring distance from the starting structure as a function of simulation time and calculating various autocorrelation functions (16,18). Other, more sophisticated methods are based on principal components (19,20) or on the calculation of energy-based ergodic measures (21). Many applications, however, require a thorough and equilibrated sampling of the space of structures. All of the methods just listed are related only indirectly to structural sampling. There are many examples of groups of structures which are very close in energy but very dissimilar structurally. In such cases, we might expect energy-based methods to be insensitive to the relative populations of the different structural groups. It is therefore of interest to develop methods which are more directly related to the sampling of different structures, and to see how such methods compare to more traditional techniques. Daura et al.
previously considered convergence assessment by counting structural clusters, based upon a cutoff in the RMSD metric (22,23). The authors assess the convergence of a simulation by considering the number of clusters as a function of time. Convergence is deemed sufficient when the curve plateaus. This is surely a better measure than simpler, historically used methods, such as RMSD from the starting structure or the running average energy. However, it is worth noting that long after the curve of number of clusters vs. time plateaus, the relative populations of the clusters may still be changing. Indeed, an important conformational substate which has been visited just once will appear as a cluster, but its relative population will certainly not have equilibrated. The method of Daura et al. also suffers from the need to store the entire matrix of pairwise distances. For a trajectory of length N, the memory needed scales as N², rendering the method impractical for long trajectories. At least two groups have developed methods which rely on nonhierarchical clustering schemes, and therefore require memory which is only linear in N. Karpen et al. developed a method which optimizes the clusters based on distance from the cluster center (24), with distances measured in dihedral angle space. Elmer and Pande have optimized clusters subject to a constraint on the number of clusters (25), with distance defined by the atom-atom distance root mean square deviation (26,27).

In this paper, we address systematically the measurement of sampling quality. Our method classifies (or bins) a trajectory based upon the "distances" between a set of reference structures and each structure in the trajectory. Our method is unique in that it not only builds clusters of structures, it also compares the cluster populations. By comparing different fragments of the trajectory to one another, convergence of the simulation is judged by the relative populations of the clusters. We believe the key to assessing convergence is tracking relative bin populations. Our approach can be directly applied to comparing the efficiency of different sampling methods. In the next section, we present a detailed description of the algorithm and discuss possible choices of metric. We then demonstrate the method on simulations of met-enkephalin, a structurally diverse peptide.

Theory and methods

We will evaluate sampling by comparing "structural histograms", described below. These histograms provide a fingerprint of the conformation space sampled by a protein, by projecting a trajectory onto a set of bins based on distinct reference structures. Comparing histograms for different pieces of a trajectory (or for two different trajectories), projected onto the same set of reference structures, provides a very sensitive measure of convergence. Not only are we comparing how broadly each trajectory has sampled conformation space, but also how frequently each substate has been visited.

Histogram construction

We generate the set of reference structures and the corresponding histogram from a trajectory in the following simple way (our choice for measuring conformational distance is discussed below):

(i) A cutoff distance d_c is defined.
(ii) A structure S_1 is picked at random from the trajectory.
(iii) S_1 and all structures less than d_c from S_1 are removed from the trajectory.
(iv) Steps (ii) and (iii) are repeated until every structure in the trajectory is clustered, generating a set {S_i} of reference structures, with i = 1, 2, ....
(v) The set {S_i} of reference structures is then used to build a histogram: every structure in the trajectory is classified according to its nearest reference structure.

Note that this classification step generates a unique histogram for a given set of reference structures, unlike the simple clustering generated in step (iii). Such a partitioning guarantees a set of clusters whose centers are at least d_c apart. Furthermore, for a trajectory of N frames, the number of reference structures, M, and therefore the memory needed to store the resulting M × N matrix of distances, is controlled by d_c. For physically reasonable cutoffs (e.g., d_c ≳ 1 Å RMSD), the number of reference structures is at least an order of magnitude smaller than the number of frames in the trajectory. The memory requirements are therefore manageable, since the computation of pairwise distances scales as N log N.

There is nothing in principle which prevents the use of a more carefully chosen set of reference structures with our classification scheme. For example, we may consider a set of structures which correspond to minima of the potential energy surface. The cutoff might then be chosen to be the smallest observed distance between any pair of the minimum energy structures, and the set of reference structures so determined could be augmented by the random selection of further reference structures in order to span the whole trajectory. However, we expect that the purely random selection used here will naturally include the lowest free energy substates, since these are the most populated. In either case, any set of reference structures defines a unique histogram for any trajectory.

Trajectory Analysis

Once we have a set of reference structures, we may easily compare two different trajectories classified by the same set of reference structures, by comparing the populations of the various bins as observed in the two trajectories: given a (normalized) population p_i(1) for cluster i in the first trajectory, and p_i(2) in the second, the difference in the populations ∆P_i = |p_i(1) − p_i(2)| measures the convergence of substate i's population between the two trajectories.

Note that the "two" trajectories just discussed may be two different pieces of the same simulation. In this way, we may self-consistently assess the convergence of a continuous simulation, by looking to see whether the relative populations of the most populated substates are changing with time. Of course, this cannot answer affirmatively that a simulation has converged (no method can do so); however, it may answer negatively. In fact, we will see later that our method indicates that structural convergence may be much slower than previously appreciated. Our approach should also be applicable to some types of non-continuous trajectories, such as those generated by multiple starts (e.g., (28)) or parallel exchange protocols (e.g., (29,30)). For multiple independent trajectories, one can compare the two histograms generated from (i) the first halves and (ii) the second halves of all simulations. If converged, these two histograms should agree. One could also compare histograms generated by grouping entire trajectories into distinct sets. For a parallel exchange simulation, where the ensemble is built from a set of continuous trajectories, histograms from the first and second halves of the simulation can be compared. The comparison of histograms clearly will not be appropriate when ensembles are generated in a fully decorrelated way.
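As a concrete illustration of steps (i)-(v) and of the population difference ∆P_i, the following is a minimal Python sketch. It is not the authors' code: the function names, the use of the Kabsch algorithm for the RMSD, and the assumption that each frame is an (N_atoms, 3) coordinate array are all ours.

```python
import numpy as np

def rmsd(a, b):
    """Least-squares superposition RMSD of two (N_atoms, 3) arrays,
    computed with the Kabsch algorithm (centering + optimal rotation)."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    u, s, vt = np.linalg.svd(a.T @ b)
    s[-1] *= np.sign(np.linalg.det(u @ vt))   # guard against reflections
    msd = ((a**2).sum() + (b**2).sum() - 2.0 * s.sum()) / len(a)
    return np.sqrt(max(msd, 0.0))

def pick_references(traj, d_c, rng=None):
    """Steps (i)-(iv): random picks, discarding all frames within d_c."""
    rng = np.random.default_rng(rng)
    remaining = list(range(len(traj)))
    refs = []
    while remaining:
        i = remaining[rng.integers(len(remaining))]
        refs.append(traj[i])
        remaining = [j for j in remaining if rmsd(traj[i], traj[j]) >= d_c]
    return refs

def classify(traj, refs):
    """Step (v): assign every frame to its nearest reference structure."""
    return np.array([np.argmin([rmsd(f, r) for r in refs]) for f in traj])

def populations(bins, n_refs):
    """Normalized structural histogram of bin labels."""
    return np.bincount(bins, minlength=n_refs) / len(bins)

def delta_p(bins1, bins2, n_refs):
    """Per-bin population difference, the Delta P_i of the text."""
    return np.abs(populations(bins1, n_refs) - populations(bins2, n_refs))
```

With these helpers, comparing two trajectories (or two fragments of one trajectory) reduces to classifying both against the same reference set and inspecting delta_p.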
For instance, starting from a single long trajectory, one could generate two ensembles by randomly selecting structures, or perhaps by selecting structures at two different fixed time intervals. So long as the number of structures in each ensemble greatly exceeds the number of reference structures used for classification, it is hard to see how such histograms could be significantly different. In such cases, dynamical correlations have been explicitly discarded, and the histograms can only differ statistically.

Structural Metrics

Many different metrics have been used to measure distance between conformations. The choice depends on both physical and mathematical considerations. For example, dihedral angle based metrics are well suited to capture local structural information (24), but are not sensitive to more global rearrangements of the molecule. Least-squares superposition followed by calculation of the average positional fluctuation per atom (RMSD) is quite popular, but the problem of optimizing the superposition can be both subtle and time-consuming for large, multi-domain proteins (31). In addition, RMSD does not satisfy a triangle inequality (32). This is not an issue for the algorithm presented here, but it is a consideration for more sophisticated clustering methods (25). We will use RMSD to measure distance here, though we note that the "distance root mean square deviation" (drms, sometimes called the "distance matrix error") (26,27) may be appropriate when RMSD is not. Labelling two structures by a and b, the traditional root mean square deviation (RMSD) is defined to be the minimum of the root mean square average of interatomic distances over all possible translations and rotations of x_b, namely

RMSD(a, b) = min over R, τ of [ (1/N) Σ_{j=1..N} | x_j^a − (R x_j^b + τ) |² ]^{1/2},

where N is the number of atoms, x_j is the position of atom j, and the minimum is taken over all rotations R and translations τ applied to structure b.

It is clear that the choice of d_c, together with the choice of metric, determines the resolution of the histogram. Reducing d_c increases the number of reference structures and reduces the size of the bins. How is d_c chosen? There is no general answer, and a suitable cutoff will depend on the problem under investigation. The typical RMSD between a pair of structures will depend on the size of the molecule, its flexibility, and the conditions of the simulation. If the magnitude of some important conformational change is known in advance, then this information will guide the selection of an appropriate cutoff. If not, then a series of histograms should be constructed at several values of d_c. The behavior of the histogram as a function of d_c will give a sense of the appropriate value, as we will see below.

Results

We have tested our classification algorithm on implicitly solvated met-enkephalin, a pentapeptide neurotransmitter. By focusing first on a small peptide, we aim to develop the methodology on a system which may be thoroughly sampled and analyzed by standard techniques. The trajectories analyzed in this section were generated by Langevin dynamics simulations, as implemented in the Tinker v. 4.2.2 simulation package (33). The temperature was 298 K, the friction constant was 5 ps⁻¹, and solvation was treated by the GB/SA method (34). Two 100 nsec trajectories were generated, each starting from the PDB structure 1plw, model 1. The trajectories will be referred to as plw-a and plw-b. Coordinates were written every 10 psec, for a total of 10⁴ frames per trajectory.

Previous methods: RMSD analysis and cluster counting

An often used indicator of equilibration is the RMSD from the starting structure (see Fig. 1A).
Such plots are motivated by the recognition that the starting structure (e.g., a crystal structure) may not be representative of the protein under the simulation conditions: solvent, force field, and temperature. This is the case in Fig. 1A: the computation was performed with an implicit water model, while the experimental structure was determined in the presence of bicelles (35). The system fails to settle down to a relatively constant distance from the starting structure; rather, it moves between various substates, some nearer to and some farther from the starting structure. While this is not surprising for a peptide renowned for its floppy character, it also indicates that this method cannot determine when the peptide simulation has converged. Indeed, Fig. 1A can tell us little about the convergence of the simulation, only that the peptide spends most of its time more than 2.0 Å from the starting structure.

A perhaps better indication of equilibration is provided by Fig. 1B, in which we have used the method of Daura et al. (22), albeit with clusters built by the procedure described in Sec. 2.1. The premise is that convergence is achieved when the number of clusters no longer increases, as this means that the simulation has visited every substate. This analysis suggests that convergence is observed by about 7 nsec, and the curve has the comforting appearance of saturation. However, Fig. 1B is insensitive to the relative populations of the clusters. To illustrate the problem, consider a simple potential with two smooth wells separated by a high barrier. By simple cluster counting, a simulation will be converged as soon as it has crossed the barrier once. It is clear, however, that many crossings will be required before the populations of the two states have equilibrated. We will address this question using our ensemble-based method. We find, in fact, that the relative populations of the clusters continue to change long after their number has equilibrated.

Ensemble-based assessment of trajectories

The use of our systematic approach is much more revealing. We first discuss the selection of an appropriate cutoff. We then demonstrate two different applications of the ensemble based comparison of trajectories: a comparison between a trajectory and a "gold standard" ensemble, and a self-consistent convergence analysis of a single trajectory.

Reference structure generation and cutoff selection

A compound trajectory was formed from trajectories plw-a and plw-b by discarding the first 1 nsec of each trajectory and concatenating the two into a single, 198 nsec trajectory (plw-ab). We then generated a set of reference structures for the compound trajectory, as described earlier: a structure is picked at random, and it is temporarily discarded along with every structure within a predefined cutoff distance, d_c. The process is repeated on the remaining structures until the trajectory has been exhausted. The result is a set of reference structures which are separated from one another by at least the predefined cutoff distance. Lowering the cutoff (making the reference structures more similar) increases the resolution of the clustering and increases the number of reference structures (see Table 1). While RMSD is system-size dependent (36), for a particular system the cutoff defines a resolution. A histogram is then constructed by grouping each frame in the trajectory with its nearest reference structure. The dependence of the histogram on d_c is shown in Fig. 2.
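In practice, the dependence on the cutoff can be explored with a short scan; the sketch below reuses the hypothetical pick_references helper defined earlier and is again our own illustration, not the authors' code.

```python
# Scan a few cutoffs and report how many reference structures each yields;
# `traj` is assumed to be the list/array of trajectory frames used above.
for d_c in (3.0, 2.5, 2.0, 1.5):
    refs = pick_references(traj, d_c, rng=0)
    print(f"d_c = {d_c:.1f} A  ->  {len(refs)} reference structures")
```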
With d_c = 3.0 Å the first three bins already account for more than 50% of the total population. It might be expected that such a coarse description of the ensemble would not be particularly informative; however, we will see in the next sections that this level is already sufficient to make powerful statements about convergence. Lowering the cutoff, the general features of the histogram remain unchanged: a steep slope initially, which accounts for half of the total population, followed by a flatter region. In each case, most (90%) of the population is accounted for by approximately half of all the reference structures. However, a closer inspection reveals that the fraction of bins required to account for the noted percentages of population (50, 75, and 90%) decreases with the cutoff. For example, for d_c = 3.0 Å, 16 of 24 bins account for 90% of the trajectory, while for d_c = 2.0 Å, 164 of 331 bins account for 90% of the trajectory. It should be mentioned, however, that the difference between the d_c = 2.0 Å and d_c = 1.5 Å histograms is so small as to be insignificant. Although it seems obvious that the most revealing cutoff will be system-specific, our histograms are more robust than they first appear. Because reference structures are chosen arbitrarily, the divisions between bins will not reflect basins of the landscape. In other words, many, if not most, bins can be expected to include a number of full and partial local basins. Thus a lack of convergence in a "macroscopic" bin can, at least in principle, report on more local, microscopic states. Further, because our approach is so inexpensive compared to the simulation itself, more than one binning of configuration space can (and should) be considered: see Sec. 3.2.3 and Fig. 4.

Based upon the series of histograms in Fig. 2, we continued our study of met-enkephalin with d_c = 3.0 Å. At this level of resolution, the main features of the histogram are already present, while the number of reference structures is small enough to make the computation quite inexpensive. We shall see that d_c = 3.0 Å provides sufficient resolution to investigate the convergence properties of our simulation. Though we do not pursue it here, we note that the tail of the distribution, where half of all the bins account for only 10% of the population, might contain some very interesting structures. Indeed, at the very end of the tail are found bins which sometimes contain a single structure. Might some of these low population bins represent transition states? For now, we set this question aside and focus instead on convergence assessment.

Comparing trajectories to a "gold standard" ensemble

In some applications, we want to compare a trajectory to a "gold standard" ensemble. For example, the gold standard might be the ensemble sampled by a long molecular dynamics simulation, and we may wish to check the ensemble produced by a new simulation protocol against the long molecular dynamics trajectory. For met-enkephalin, we use our histogram approach to illustrate, in Fig. 3, the evolution of convergence in two long (99 nsec) trajectories. The compound trajectory (198 nsec) is taken as the "gold standard," from which reference structures are calculated using a cutoff d_c = 3.0 Å. We can then assess the convergence of portions of the trajectory against this full ensemble (see Figs. 3A-D). From Fig. 3A, it is clear that after the first 2 nsec, the simulation is far from converged.
Many important substates have not yet been visited, and many of the bins are over- or underpopulated by several k_B T. (On a semilog scale, a factor of 2 in the population represents an error of 1/2 k_B T.) After 50 nsec (Fig. 3C), all clusters are populated, but many important substates have not converged to within 1/2 k_B T of the 198 nsec values.

Fig. 3 presents a picture of a very conformationally diverse peptide, especially given the large cutoff (d_c = 3.0 Å) used. The first 3 "substates" contain only 52% of the observed structures, while the first 9 account for 74%. Indeed, the (experimentally determined) starting structure is located in the second most populated bin. We also analyzed the entire set of NMR model structures. These were determined in the presence of bicelles, as it was hypothesized that interaction of the peptide with the cell membrane induces a shift in the conformational distribution (35). We classified the entire set of 80 NMR structures against our set of reference structures. The overwhelming majority of the NMR structures, 75%, were nearest to reference number 23, the second-least populated bin in our simulation. The next largest group of NMR structures (15 of 80) were nearest to reference number 2, which held a comparable portion of the simulation trajectory. The remaining 5 NMR structures were scattered among 4 different bins. While not conclusive, the comparison between our simulation data and the NMR structures supports the hypothesis that binding to the membrane induces a shift in the distribution of met-enkephalin conformers relative to the distribution observed in water. Such conformational diversity is not surprising for a peptide which is known to be a promiscuous neurotransmitter by virtue of its flexibility (35,37,38). However, it will be interesting to revisit the issue in the study of a protein.

Self-referential Convergence Assessment

We want to assess convergence without the use of a "gold standard." Our previous analysis (Fig. 3) might be used to compare simulation protocols: ensembles from a new protocol may be compared to a "gold standard" ensemble. (Here, the gold standard is the 198 nsec compound trajectory.) However, it is not useful as a means of assessing the convergence of a single simulation. After all, given only a 4 nsec trajectory, one would like an assessment without reference to "the answer". Fig. 4 therefore demonstrates a purely self-referential scheme for "on the fly" analysis of a continuous trajectory. Fig. 4A compares, for example, the first 2 nsec to the second 2 nsec. The series of plots in Fig. 4 shows that the populations of the clusters are still changing significantly, even between the first and second 50 nsec. Presuming we had run only a single 100 nsec simulation, we could make Fig. 4C and describe the convergence by saying, "at a resolution of 3.0 Å RMSD, considering bins containing 75% of the structures, 6 of 9 bins have not yet converged to within 1/2 k_B T." Note the contrast with Fig. 1B, where it appears convergence is reached after just 7 nsec. This contrast is all the more striking considering that d_c = 3.0 Å is a rather conservative choice. At a higher resolution (smaller d_c) the observed convergence is worse. To test whether our analysis is sensitive to the (random) selection of reference structures, Fig. 4 shows two independent sets of reference structures. There is little difference in the results. Both classifications indicate that more than 50 nsec are required for convergence when d_c = 3.0 Å.
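The self-referential check itself is only a few lines on top of the earlier sketch; the factor-of-2 threshold below encodes the text's ~1/2 k_B T convention, and the helper names are the hypothetical ones introduced above.

```python
# Classify the two halves of a single trajectory against one set of
# references and flag bins whose populations differ by more than a
# factor of 2 (roughly 1/2 kT on the semilog scale used in the text).
half = len(traj) // 2
refs = pick_references(traj, d_c=3.0, rng=0)
b1, b2 = classify(traj[:half], refs), classify(traj[half:], refs)
p1, p2 = populations(b1, len(refs)), populations(b2, len(refs))
for i, (x, y) in enumerate(zip(p1, p2)):
    if x > 0 and y > 0 and max(x / y, y / x) > 2.0:
        print(f"bin {i}: {x:.3f} vs {y:.3f} -- not yet converged")
```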
The observed ensembles and the corresponding convergence depend on both the metric used and the value of d_c. (This is of course true of any clustering algorithm.) It is therefore important to report this information along with any statements about the convergence of a particular simulation. Indeed, lowering the cutoff, and hence increasing the resolution of the classification, is bound to reduce the observed level of convergence. Instead of Fig. 4, in which each panel shows a different length of the trajectory, we could have plotted the same trajectory length at different resolutions. At a high enough resolution, we will always find some substates which are under- or overpopulated. In other words, since all trajectories are finite, a physically acceptable value of d_c must be chosen. While the choice of d_c is somewhat ad hoc in the present implementation, plots like those in Fig. 4 can still provide valuable, quantitative information. For example, imagine that we wish to calculate the free energy difference between two experimentally known conformations which differ by 3.0 Å RMSD. In this case, Fig. 4 suggests that we cannot expect an accuracy better than 1/2 k_B T. Perhaps more importantly, any fixed choice of cutoff can be useful in comparing different simulation methods, even if the difficult question of absolute convergence is not addressed.

Discussion

We have introduced a structure-based classification approach for the analysis of biomolecular simulation trajectories. The method provides a more rigorous evaluation of convergence than commonly used methods. Our approach is based on a simple intuitive picture, namely a comparison of the relative populations of different conformational substates. The method is trivially applicable to simulations of proteins of any size. The results for met-enkephalin indicate that it takes quite some time (> 50 nsec) for the relative populations of the various substates to equilibrate, even with a fairly promiscuous cutoff (3.0 Å RMSD) which partitions the trajectory into relatively few bins. Because we can expect that many transitions into and out of each substate will be required to equilibrate their relative populations, a simple cluster counting approach (Fig. 1B) will present a deceptively optimistic picture of convergence. In order to carefully assess the convergence of a simulation, we must therefore compare the populations of the various substates from different fragments of the trajectory. A simple, fast way to carry out such a comparison is provided by the ensemble method described above. A higher level of rigor can be achieved by comparing multiple pairs of independent blocks of the trajectory. It must be stressed that, though our method may provide an unambiguous negative answer to the question "is the simulation converged?", it may only provide a provisionally positive answer. A longer simulation may well reveal longer timescale phenomena and parts of structure space not yet visited. Our approach should be useful, in its present form, as a means to assess the relative efficiencies of two simulation methods. (The cutoff d_c can always be reduced enough to suggest poorer convergence of at least one of the trajectories analyzed.) Many algorithms have recently generated broad interest by virtue of their potential to enhance the sampling of biomolecular conformation space. Some of these algorithms, notably the various parallel exchange simulations (39), invest considerable CPU time in pursuit of this goal.
It is therefore important to ask whether these methods are in fact worth the extra expense, i.e., "does running the algorithm in question increase the quantity (observed conformational sampling)/(total CPU time)?" In particular, these parallel exchange algorithms should be compared to (i) single, parallelized trajectories, as are possible with NAMD (40), for example, and (ii) multiple independent trajectories, as suggested by Caves et al. (28). The CPU time is easy enough to quantify, and we hope the present report will aid in evaluating the numerator. In the future, we will study trajectories of larger proteins in order to develop criteria for determining cutoffs in larger systems. On the one hand, the upper bound on the RMSD distance between any pair of structures increases with the size of the protein. On the other hand, larger proteins may not be as structurally diverse as small, floppy peptides, at least on the timescale currently accessible to simulation. Work already underway on a G-protein coupled receptor should shed light on these issues (41). Furthermore, the approach should already be able to compare different simulation methods in large systems. The systems which may be treated with our method are not limited to proteins, or even to single chains. Indeed, the method is immediately applicable to analyzing simulations of polymers, nucleic acids, and macromolecular complexes.
2014-10-01T00:00:00.000Z
2006-01-16T00:00:00.000
{ "year": 2006, "sha1": "690b090a68e51c417ad409c6f1f6f50c494d558c", "oa_license": "elsevier-specific: oa user license", "oa_url": "http://www.cell.com/article/S0006349506717167/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "690b090a68e51c417ad409c6f1f6f50c494d558c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Physics", "Chemistry", "Medicine" ] }
268436987
pes2o/s2orc
v3-fos-license
An Exploration of Issues in China Art Portfolio Training Institutions

In recent years, a large number of Chinese students have travelled overseas to study art, which has spawned a new industry, albeit one only about eight years old: Art Portfolio Training Institutions. This study focuses on these institutions. Through qualitative analyses, the researcher interviewed educators and students in these art institutions to collect data from different perspectives. The main purpose of this study was to reveal the problems that exist in these portfolio institutions. Several issues within China's portfolio guidance institutions still impact the success rate of students' applications to foreign universities: varying teacher standards, false advertising, and arbitrary fees. The study gives suggestions for portfolio institutions on choosing teachers and on other aspects.

INTRODUCTION

Aspiring Chinese students seeking art education in developed countries are required to submit portfolios encompassing sources of inspiration, design concepts, design processes, and design outcomes [1]. Renowned international institutions expect these portfolios to demonstrate creativity and uniqueness. The majority of students pursuing education abroad complete such portfolios through art study programs. In China, the industry of portfolio preparation for art studies has only recently emerged [2]. Teachers at these portfolio preparation institutions are frequently international students who have returned from countries such as the United Kingdom, the United States, Australia, Italy, South Korea, and Japan.

Through interactions with professionals in the field of education, researchers have discovered that cultivating outstanding portfolios requires not only teachers with exceptional concepts and extensive experience but also collaborative efforts on various fronts. This industry faces numerous challenges, including decentralized management, teacher complacency, talent attrition, and even the closure of training institutions. Relying solely on the individual efforts of teachers falls significantly short of resolving these issues. However, addressing these challenges is crucial for the industry's development. Thus, this study investigates and delineates these problems, providing valuable insights and recommendations for improvement to industry practitioners.

The selection of samples primarily covers globally recognized art universities, especially those that rank prominently in the QS Art & Design Rankings and in overall rankings. It involves teachers who later engage in teaching activities at portfolio institutions, as well as students who undergo training at these institutions.

RESEARCH METHODOLOGY

The main method used to collect data was interviews. Interviews are part of the qualitative research methodology and are used to obtain reliable information through face-to-face, open-ended question exchanges with interviewees [3]. In this study, interviews were used to gain professional perspectives and practical experience on design issues related to portfolio training.
A purposive sampling method was used in this study to select experienced educators and students as participants, so that a large amount of reliable data could be collected. Before conducting the interviews, the researcher established contact with the participants via WeChat and confirmed the interview times. During the interviews, the researcher asked specific questions. These questions were open-ended, allowing the interviewees to express themselves freely. The researcher's focus was on collecting descriptive, qualitative data.

The interviews provided an opportunity to delve deeper into the perspectives and experiences of the participants. In-depth one-on-one interviews were the primary method of data collection, ensuring accuracy, promoting more interaction, and facilitating a comprehensive exchange of information. Whilst the researcher prepared the interview questions in advance, some adjustments and additions were introduced during the interview process. The interviews were audio-recorded throughout and lasted approximately 45 minutes each, allowing participants ample time to share their experiences.

The use of open-ended questions in the interviews facilitated eliciting valid and detailed responses from the interviewees. As Cohen and Manion (1984) note, open-ended questions provide flexibility and allow the interviewer to explore all aspects of the data and gain a deeper understanding of the respondent's perspective [4]. The purpose of using open-ended questions is to allow the researcher to capture the perspectives of the participants without pre-selecting specific categories through a pre-determined questionnaire [5]. By transcribing the interviews, the researcher can focus on actively interacting with the interviewees and capturing their responses in real time, without having to rely solely on notes, which may result in details being missed.

Transcribing recorded interviews allows for thorough data analysis and helps to identify key themes, patterns and insights. It provides a written record of the interviewees' responses, enabling the researcher to review and analyse the data at a subsequent stage. Transcription also ensures that information is easily referenced and used for further analysis or comparison with other data sources [6].
Returnee Students as Teachers

The focus of this study is on returnee student teachers who are actively teaching at various institutions, as they possess insight into the curriculum models of these institutions and are knowledgeable about existing issues. This group is referred to as "mainline teachers". In this survey, the first six questions investigate potential teaching-related issues, while the last two pertain to institutional matters. This section of the questionnaire was also widely distributed, targeting teachers with years of experience in the industry and an in-depth understanding of its workings. For the third part of the questions, teachers could choose to answer based on their interests and areas of expertise; answering every question was not mandatory. Teachers were also free to raise other issues not covered by the questions.

On the whole, the art study portfolio industry faces a number of problems, including salary and remuneration, unstable working conditions, and paperwork and management issues, which directly affect the application process and the quality of students' study. T1 recounted discovering the academic requirements and salary situation only after entering the portfolio industry. T2 described the working conditions and remuneration of part-time tutors, and pointed out difficulties affecting some of them. T3 talked about the portfolio industry being hit hard in 2020, affecting both institutions and students. T4 mentioned the challenges faced by one tutor. T5 illustrated the powerlessness of another tutor holding a combined full-time and managerial position, and also described the problems students encounter in the application process. At the same time, the epidemic and policy changes have affected the industry as a whole, making the market more difficult. It is vital for students to choose the right institution and tutor, while tutors must adapt to the changes in the industry and improve their capabilities in order to provide better coaching and guidance to students.

Based on the feedback from teachers regarding issues with institutions, the summary is as follows:

1. Institutions do not provide much professional support to substitute teachers. Teachers primarily enhance their skills on their own.

2. Some institutions have delayed the payment of teachers' salaries.

3. Owing to the impact of the pandemic, the number of students has decreased, leading to fierce competition in the industry. Some institutions face financial difficulties and closure because their funding chains have broken.

4. Portfolio institutions have a relatively short history and are themselves learning by trial and error, lacking experience.

5. The arrangement of foundational courses, such as software courses, is unreasonable.

6. The organization of the institutions has problems. Apart from teaching, other aspects such as the application process have issues. Some institutions assign too many students to each teacher, making it difficult to ensure quality.

Students

This study involves in-depth interviews with students of art study abroad institutions.
Students are more attuned to their preferences for teachers and for teaching methodologies that facilitate rapid progress. Importantly, their feedback provides valuable insight into their satisfaction levels with institutions and the issues they perceive.

S13, who studies at a portfolio institution in Beijing, said that although he attended a third-rate university and stayed at the Beijing institution for less than two months, the institution assigned him a serious and responsible teacher. However, with more than 30 students to teach, that teacher sometimes found it difficult to juggle the needs of each student. The relationships between the students were also not good, and he had not learned software-related skills at school. He found the whole institution confusing and, despite the high standard of the teachers, the quality of his own work was not good.

S14 is a student at a middle school in a first-tier city. As he was studying at an art school, there was an institution near the school whose teachers were exchange students from the UK. Later, problems arose between the portfolio institution and the teachers, who said they simply did not have enough time to work with students. The teacher in question was a full-time teacher, and full-time teachers may encounter this problem when working at an over-enrolled institution, since more tasks are assigned to them. The student later changed institutions.

S8 had a combination of online and offline one-on-one classes at his institution. He initially wanted to choose an offline institution because he felt more at ease with his teachers around. After a while, however, he realized there was not much difference between online and offline. Taking classes online also saved the time of travelling to and from school. At first, the institution asked the teacher to give video lectures, but after two classes, he and the teacher switched to voice calls, with the teacher relaying the material to him. The lessons were sometimes half an hour long, and sometimes the teacher would spend 20 minutes answering questions for him.

S12 said her agency offers paperwork and online application services as well as portfolio tutoring, forming what it calls a one-stop shop. But the agency's paperwork was overpriced and riddled with loopholes and errors, for example missing transcripts or incorrectly filled-in information.

In addition to S8, S12, S13 and S14, problems mentioned by other students included: lagging information, with institutions failing to communicate important information in a timely manner, resulting in students missing deadlines; misleading studio advertisements, with advertised studios turning out to be communal areas of the teachers' office buildings, unsuitable for classes; inconsistencies in the quality of the institutions' teaching, with some teachers having poor qualifications; irresponsible advice from institutions; institutions accusing others of plagiarism while themselves borrowing work online; imbalanced fees, with students from the same background potentially facing different fees; and inconsistent advice from portfolio teachers, leaving students not knowing whom to listen to.

CONCLUSION AND RECOMMENDATION

Overall, there are still several issues within China's portfolio guidance institutions that impact the success rate of students' applications to foreign universities.

Suggestions for Teacher Selection in Institutions:

Some teachers, despite graduating from prestigious universities, lack practical work experience and start teaching directly at institutions. Therefore, institutions need to raise their recruitment standards. Teachers are the primary assets of these institutions and, as such, should receive appropriate training, education, and incentives to better support the institution and provide high-quality teaching.
Researchers also found that teachers working and living in second- and third-tier cities often seek stable jobs, such as civil service posts or middle and high school art teaching positions, after completing their graduate studies. They might even take unrelated jobs, teaching only part-time at portfolio guidance institutions in their spare time, resulting in inconsistent teaching quality.

Recommendations for Other Aspects of Portfolio Guidance Institutions:

Firstly, it is suggested that portfolio guidance institutions adopt a business model that collaborates with design companies. This would allow study abroad guidance teachers to double as designers, enhancing students' design capabilities. Additionally, more emphasis is needed in teaching on long-term design exercises that align with market demands.

Institutions should also familiarize themselves with university curriculum offerings. Some university students lack even basic software skills. To provide better services and gain a competitive edge, understanding customer needs is essential.

Some teachers juggle multiple institutions, making portfolio guidance institutions intermediaries between teachers. In such cases, there is a lack of supervision. These recommendations address various aspects of portfolio guidance institutions, aiming to improve the overall quality and effectiveness of the industry [7].

Questions

1. How many years of experience do you have working or collaborating with portfolio guidance institutions? How many institutions have you worked with? Are you a full-time or part-time teacher?

2. In your teaching role, what kind of support do the various institutions provide you with? Do you require additional support?

3. What challenges have you encountered in your teaching role?

4. Do you need to collaborate with other teachers in your teaching role? Have you encountered any challenges during collaborative efforts?

5. How many students do you handle each year? What do you consider an appropriate number of students to handle?

6. Have the institutions you previously collaborated with offered opportunities for curriculum development, seminars, or teacher exchanges?

7. Have you ever experienced wage deductions in your previous collaborations with institutions?

8. Apart from the portfolio aspect, have there been any issues in the students' application process?
These students' experiences and opinions reveal that there are various problems and shortcomings in China Art Portfolio Training Institutions. While some students had a good teaching experience, others expressed dissatisfaction with the service and quality of teaching at the institutions. The feedback from students shows that the standard of teachers in China Art Portfolio Training Institutions varies. Some teachers have not even been abroad and do not have a good grasp of the basic logic of a portfolio, but simply make irrelevant requests. During the interviews, students expressed their views on the problems of the institutions. Among the most common problems were the limited teaching ability of the teachers and the unreasonable scheduling of the institutions' courses.

Full-time institution staff mainly handle enrollment, promotion, and contract signing with clients, and play a relatively small role in the teaching process, which depends largely on the professionalism and dedication of part-time teachers. Moreover, training and communication between teachers are limited. It is advisable for institutions to organize regular design exchange events. Furthermore, institutions should pay due attention to issues such as delayed salaries, the careful selection of cooperating institutions, and excessive numbers of students per teacher in the application process.

Table 2: Interviews with Students.

6. Apart from the above, what other issues do you believe exist within institutions?

The discussions and findings in this section provide a deeper understanding of the perspectives of both returnee student teachers and students. These insights shed light on the challenges and opportunities within the portfolio guidance industry in China.
2024-03-17T17:17:17.400Z
2024-03-07T00:00:00.000
{ "year": 2024, "sha1": "5f9b4a7bd90d0576be49a954ac92a5602fbddecc", "oa_license": null, "oa_url": "https://knepublishing.com/index.php/KnE-Engineering/article/download/15444/24401", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "94cba4a56f2cd4bb47b8e43153bb1670b808463e", "s2fieldsofstudy": [ "Art", "Education" ], "extfieldsofstudy": [] }
230643614
pes2o/s2orc
v3-fos-license
Screening of Ber (Zizyphus mauritiana Lamk) Cultivars / Germplasm against Alternaria alternata (Fr.) Keissler Causing Alternaria Leaf Spot

Indian jujube or ber (Zizyphus mauritiana Lamk.), belonging to the family Rhamnaceae, is one of the most common fruits, indigenous to an area extending from India to China. It is also known as Chinese date, Chinese fig or plum, and is commonly considered a poor man's fruit. It is popularly called the king of arid zone fruits (Yamadagni, 1985; Shoba and Bharathi, 2007; Mishra et al., 2013). This fruit probably originated in India. It is reported to be grown in other countries such as Iran, Syria, Australia, the USA, France, and certain parts of Italy, Spain and Africa. It grows in tropical and sub-tropical as well as Mediterranean regions of the world. In view of its gaining popularity, the area under this fruit is gradually increasing. Ber orchards are generally found in the Varanasi, Mirzapur, Sonbhadra, Jaunpur, Aligarh, Ayodhya, Agra and Raebareli districts of Uttar Pradesh (Singh et al., 1973).

The ber fruit has a high sugar content (sucrose, glucose, fructose and starch). It is therefore high in carbohydrates, which provide energy. The levels of sugars vary according to cultivar. The fruits also contain protein with many essential amino acids (asparagine, arginine, glutamic acid, aspartic acid, glycine, serine and threonine). The general nutritive composition of ber fruits has been reported by Morton (1987), Pareek and Dhaka (2008) and Pareek et al. (2009). Ber is a heat- and drought-tolerant fruit crop with high productivity under arid and semi-arid conditions (Pareek, 1983). Several species of Zizyphus can endure extreme stress caused by drought, salinity and, in some cases, waterlogging. The ber cultivar Gola is one of the most important cultivars and requires fewer degree days to maturity (Singh et al., 1983). Production of ber is affected by a large number of biotic and abiotic stresses (Gupta and Madaan, 1977). Among the biotic stresses, leaf spot caused by Alternaria alternata is one of the most important, widespread and easily recognized diseases. In Uttar Pradesh, Alternaria leaf spot of ber was formerly a minor disease, but owing to climatic changes it has been recorded in moderate to severe form among the commercial cultivars in recent years. The present investigation ultimately aims to identify a management strategy for the leaf spot of ber disease that suits the agroclimatic conditions of the country. In the absence of a stable management strategy, the use of resistant varieties is the only recommendation for the management of Alternaria leaf spot of ber.

Materials and Methods

Forty cultivars/germplasm lines/varieties, about twenty-five years old, grown at the Main Experimental Station, Horticulture, Acharya Narendra Deva University of Agriculture and Technology, Kumarganj (26°47' N, 82°12' E, 113 m asl), Ayodhya (U.P.), India, were screened. The symptoms were recorded from the initial stage to the final stage. Per cent disease severity was also recorded from the initial stage to the peak of disease under a Randomized Block Design with three replications (one tree per replication). Fifty leaves per tree were picked at random when the disease severity was high, and per cent disease intensity was calculated using the 0-5 disease rating scale for assessing host reaction against Alternaria leaf spot of ber given by McKinney (1923).
Rating scale (average per cent disease intensity): 0 = 0% leaf area covered with the pathogen.
Symptomatology
The disease is characterized by the formation of small, irregular brown spots on the upper surface of the leaves. The corresponding lower side shows dark brown to black spots 3 to 6 mm in diameter with a grey to tan centre and distinct brownish-yellow margins. Under humid conditions, black patches bearing plenty of conidia can be seen, which serve as air-borne inoculum. Under severe conditions many spots coalesce to form large patches, and such leaves later defoliate from the branches (Fig. 1A to D). Similar symptoms of Alternaria leaf spot of ber were also recorded by Pareek (1983), Mehmood et al. (2018) and Kaur et al. (2020).
Varietal screening
Forty genotypes/varieties were screened against leaf spot disease under natural field conditions. None of the varieties/genotypes was found free from the disease. The observations on leaf spot incidence were recorded on the basis of the 0-5 disease rating scale, and per cent disease intensity was calculated as sketched below.
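A minimal sketch of the per cent disease intensity computation following McKinney's (1923) index, in which the sum of all ratings is divided by the product of the number of leaves scored and the maximum rating; the example ratings below are made up for illustration, not the study's data.
```python
def percent_disease_intensity(ratings, max_rating=5):
    """McKinney (1923) index: sum of all ratings divided by
    (number of leaves scored x maximum rating), times 100."""
    if not ratings:
        raise ValueError("no ratings supplied")
    return 100.0 * sum(ratings) / (len(ratings) * max_rating)

# Illustrative example: 50 leaves scored on the 0-5 scale (made-up values).
example_ratings = [0] * 20 + [1] * 15 + [2] * 10 + [3] * 5
print(f"PDI = {percent_disease_intensity(example_ratings):.1f} %")
```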
2020-12-17T09:13:51.833Z
2020-10-20T00:00:00.000
{ "year": 2020, "sha1": "96fbfa9ee8939d081e6cbeb1416a54c9bc1acbe1", "oa_license": null, "oa_url": "https://www.ijcmas.com/9-10-2020/Akash%20Kumar,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5107639cb99aa529da5860667f1d85d1a8581279", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
250678794
pes2o/s2orc
v3-fos-license
Use of GEANE for tracking in virtual Monte Carlo
The concept of the Virtual Monte Carlo (VMC) allows one to use different Monte Carlo programs to simulate particle physics detectors without changing the geometry definition and the detector response simulation. In this context, to study the reconstruction capabilities of a detector, the availability of a tool to extrapolate the track parameters and their associated errors, due to the magnetic field, straggling in energy loss and Coulomb multiple scattering, plays a central role. GEANE is an old program, written in Fortran 15 years ago, that performs this task through dense materials and is still successfully used by many modern experiments in its native form. Among its features is the capability to read the geometry and the magnetic field map directly from the simulation and to use different track representations. In this work we have 'rediscovered' GEANE in the context of the Virtual Monte Carlo: we show how GEANE has been integrated in the FairROOT framework, firmly based on the VMC, by keeping the old features in the new ROOT geometry modeler. Moreover, new features have been added to GEANE that allow one to use it also for low density materials, i.e. for gaseous detectors, and preliminary results are shown and discussed. The tool is now used by the PANDA and CBM collaborations at GSI as the first step for the global reconstruction algorithms, based on a Kalman filter which is currently under development.
Track following in Virtual Monte Carlo
The concept of Virtual Monte Carlo (VMC) provides a unique tool to collect in a common ROOT-based framework the most popular Monte Carlo (MC) codes available today for the simulation of particle physics detectors, i.e. Geant3, Geant4 and Fluka. The advantage of this approach is the possibility to use different simulation packages without writing different versions of the geometry in different formats. This is extremely useful in the context of simulation, in particular for the validation of new MC routines by comparison with old and tested ones, but it can also be exploited in the context of reconstruction, where the main task is track fitting, the first step of which is the so-called 'track following'. By 'track following' one usually means two main tasks: (i) the transport of the track parameters (particle momentum, position and direction) from one point to another in the apparatus, forward and backward; (ii) the propagation of the errors on the track parameters together with the mean values. The latter is usually obtained by calculating, step by step, the 5 × 5 covariance matrix of the track. This mathematical part is analytically rather complicated: some general ideas are given in the current literature [1], but the detailed formulae have been published only very recently [2]. The Monte Carlo and track fitting tasks were treated jointly by the CERN community in the nineties. The GEANT3 program was used for point (i), that is, for the determination of the track mean values. For point (ii), the routines for the calculation and transport of the error matrix, written by the CERN European Muon Collaboration (EMC) [2], were interfaced with the GEANT3 structure, giving rise to a Fortran package called GEANE [3,4]. The great advantage of this approach is that the track following is automatically obtained with exactly the same geometry as the Monte Carlo, i.e. the GEANT3 geometry, without the necessity of writing ad hoc code.
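As a minimal illustration of task (ii), each tracking step amounts to transporting the 5 × 5 covariance matrix with the transport Jacobian of the step, C' = J C Jᵀ; the sketch below uses a made-up Jacobian and error matrix rather than ones computed by GEANE.
```python
import numpy as np

def propagate_covariance(C, J):
    """Transport a 5x5 track covariance matrix through one step:
    C' = J C J^T, with J the 5x5 transport Jacobian of the step."""
    C = np.asarray(C)
    J = np.asarray(J)
    assert C.shape == (5, 5) and J.shape == (5, 5)
    return J @ C @ J.T

# Illustrative: near-identity Jacobian with one off-diagonal transport term.
C0 = np.diag([1e-4, 1e-6, 1e-6, 1e-2, 1e-2])  # (1/p, lambda, phi, y, z) errors
J = np.eye(5)
J[3, 2] = 10.0                                # e.g. an angle feeding into a position
print(propagate_covariance(C0, J))
```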
The main purpose of this work is to preserve the old functionality of the GEANT3-GEANE programs in the framework of the Virtual Monte Carlo: we have reintroduced the possibility to call the GEANE functions from the main VMC class, TGeant3, so that it is now possible to access them from within ROOT. Moreover, we have added new features to GEANE to allow its use also for low density materials, i.e. for gaseous detectors, and to build a Kalman filter for non-planar detectors like the PANDA Time Projection Chamber or Straw Tubes Tracker (STT) [5]. Finally, we have written a C++ interface to GEANE for the FairROOT framework, which is the simulation environment developed at GSI and used by the PANDA and CBM Collaborations (see talk 156 by D. Bertini and talk 403 by S. Spataro at this conference). The main advantage of this work is the immediate possibility of using GEANE in reconstruction with the ROOT geometry modeler, i.e. with the same geometry used for the VMC simulation.
Track representations
The physical path of a particle of assigned mass m and momentum p is a six-fold entity of parameters x, y, z, p_x, p_y, p_z. The track is defined as a set of points in the detectors, corresponding to the physical path of a particle: the track points are obtained as the intersections of the particle path with detector planes, which can be real detector planes or ideal planes, in the latter case usually chosen perpendicular to the particle direction. If we translate the Master Reference System (MARS) and make the xy plane coincide with the detector plane, the z coordinate is blocked: the track is then an entity defined by five parameters. There are different choices of the five track parameters. The most common ones, also codified in the GEANE package, are (see Fig. 1) [3,4]:
• the transverse SC system: (1/p, λ, φ, y⊥, z⊥), where λ and φ are the dip and azimuthal angle in MARS, and y⊥ and z⊥ are the coordinates of the trajectory in a frame with x⊥ along the particle direction and y⊥ parallel to the xy plane;
• the detector SD system: (1/p, v′, w′, v, w), where (u, v, w) is an orthonormal reference system with the vw plane coincident with the detector one; the derivatives v′ = dv/du and w′ = dw/du indicate the momentum direction in the new system;
• the 'spline' SP system: (1/p, y′, z′, y, z). This representation is used when the detector arrays are placed along the x axis.
Figure 1. Systems of reference for the track following.
The most used representations are the SD and the SC ones: the SD representation follows the trajectory on any detector plane, real or ideal, while the SC representation gets the track parameters at any point along the track, at a given length or entering/exiting a given volume. The GEANE package can pass from one representation to another through a set of routines [3]: this involves the calculation of the transport Jacobians between planes of arbitrary orientation and is not a trivial task, as explained in [2]. These routines are written in single precision and we have rewritten them in C++ using double precision variables, since the precision of the calculation is crucial in track following [2].
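A minimal sketch of the bookkeeping behind these representations, assuming the SC and SD conventions as written above; it converts SC angles to a direction vector and projects it onto a detector basis (u, v, w) to obtain the SD slopes. No transport or Jacobian computation is attempted here.
```python
import numpy as np

def sc_to_direction(lmbda, phi):
    """Unit momentum direction in MARS from dip angle lambda and azimuth phi."""
    return np.array([np.cos(lmbda) * np.cos(phi),
                     np.cos(lmbda) * np.sin(phi),
                     np.sin(lmbda)])

def direction_to_sd(d, u, v, w):
    """SD slopes v' = dv/du and w' = dw/du for direction d and an
    orthonormal detector basis (u, v, w), vw being the detector plane."""
    du, dv, dw = d @ u, d @ v, d @ w
    if abs(du) < 1e-12:
        raise ValueError("track parallel to the detector plane")
    return dv / du, dw / du

# Illustrative geometry: detector plane coincident with the MARS yz plane.
d = sc_to_direction(lmbda=0.1, phi=0.3)
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([0.0, 0.0, 1.0])
print(direction_to_sd(d, u, v, w))
```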
New features of Geane
GEANE predicts the trajectory of a charged particle in terms of mean values and errors of the track parameters, both in the forward and in the backward direction. Three physical effects are taken into account: Coulomb multiple scattering (which affects errors only), energy loss (which affects mean values and errors) and the magnetic field (which affects mean values only). In this work we have updated the original code with respect to the following items:
• a new parametrization for the variance of the Coulomb multiple scattering angle;
• an extension of the parametrization for the fluctuations of energy loss to the case of low density media, by including the Vavilov, Landau and sub-Landau regimes in the tracking routines;
• a new option for track parameter extrapolation to the point of closest approach to a space point or to a wire.
In this section we briefly motivate these changes and illustrate the new features.
Coulomb multiple scattering parametrization
The two quantities of interest in the multiple scattering process are the displacement Δ and the scattering angle θ; we consider here the latter, for which the theory of Molière gives a statistical distribution in agreement with the experimental data. This distribution can be approximated as the sum of two gaussians, a core one that takes into account the bulk process and a flat one that describes the tails [6]. The total projected variance θ_p² for a particle of momentum p (in GeV/c) that travels through an absorber of thickness d (in cm) is expressed as [6]
θ_p² = (15 × 10⁻³ / (pβ))² · d/X_s,   (4)
where X_s is the scattering length and X_0 the radiation length of the absorber. The standard practice is to parametrize the core variance as [7]
θ_0² = (13.6 × 10⁻³ / (pβ))² · (d/X_0) · [1 + 0.038 ln(d/X_0)]².   (5)
However, this variance should not be used in track following for two reasons:
• it is not the whole variance, so that the pull quantities show an underestimation of the multiple scattering errors;
• the thickness contained in the logarithmic term makes the calculation dependent on the tracking steps.
Indeed, to make the calculation independent of the layers in which an absorber is divided, the variance should be directly proportional to the thickness d, as in (4). Hence, we emphasize that the variance (4) should be used, which is the result of the accurate treatment of [6]. In this case, slight deviations from the gaussian form of the pull distribution have to be expected, since the angle distribution has non-gaussian tails. In GEANE a formula of the Highland type (5), with coefficient (13.6 × 10⁻³/(pβ))² and the radiation length X_0, was used. We substituted this formula with eq. (4) in the ermcsc.f function of GEANE. This results in a slight improvement of the pull distribution for the dip angle since, for most light scatterers, the ratio X_s/X_0 ≈ 1.15 is near the ratio 225/185 = 1.21.
Energy loss straggling
The fluctuations in ionization for a particle of charge z, mass m and velocity β are characterized by the parameter κ, which is proportional to the ratio of the mean energy loss ξ to the maximum allowed energy transfer E_max in a single collision with an atomic electron:
κ = ξ/E_max,   E_max = 2 m_e β²γ² / (1 + 2γ m_e/m + (m_e/m)²),
where γ = 1/√(1 − β²) = E/m and m_e is the electron mass. The parameter ξ comes from the Rutherford scattering cross section and is defined as [7]
ξ = 0.1535 · (z² Z/(A β²)) · ρ d  MeV,
where ρ, d, Z and A are the density (g/cm³), thickness, atomic number and mass number of the medium. The parameter κ takes into account both the projectile energy and the geometrical thickness of the absorber; it defines univocally the absorber characteristics, that is, the straggling conditions [8]: 1. for heavy absorbers, κ > 10 and the distribution is gaussian; 2. for moderate absorbers, 0.01 < κ < 10 and the distribution follows the function of Vavilov [8], which tends smoothly to the gaussian as the thickness increases; 3. when κ < 0.01, we are in the presence of thin absorbers.
When the number of collisions N_c > 50, the distribution follows the Landau function; 4. for very thin absorbers, N_c < 50 (the condition κ < 0.01 is implicitly fulfilled), there are no universal straggling functions, but only approximate models [9]. We call this the sub-Landau regime: it is the dominant one in a gaseous detector. For cases 1 and 2 the straggling problem has a definite solution, both in simulation and in track following, because the general theory of the moments of the energy straggling distribution, based on the transport equation [10], allows one to calculate a finite variance. However, for the Landau distribution, which assumes E_max = ∞, both the average and the variance are infinite [1,11]. Since GEANE only treats the gaussian case, its results are completely unreliable for thin layers, where the Landau or sub-Landau conditions 3 and 4 are often verified. To remedy this situation, we have updated the original code of GEANE with the procedure explained below, inserted in the routine erland.f. The main point is that for thin and very thin absorbers a rigorous solution exists for the simulation but not for track following. Indeed, whereas in the simulation the sampling and tracking of the δ-electrons correctly reproduce some rare effects in the detectors or the noise characteristics, in track following the long tail of the energy lost by the particle, due to δ-electron emission, makes the energy straggling variance infinite (for the Landau distribution [1,11]) or so big (in sub-Landau models [9]) that the uncertainty in the track momentum becomes meaningless, because these fluctuations refer to 'enormous' energy losses occurring with very low probability. Since a universally accepted solution of this problem does not exist at present, we use approximations based on truncated distributions. In the Landau regime we cut the Landau distribution to an area α, corresponding to a value σ_α given by table 1; for the Landau case we then assume σ(E) = ξ σ_α. For example, for α = 0.95 we have, from table 1, σ(E) = 4.23 ξ. In the sub-Landau case there is the further difficulty of the non-existence of a straggling density in closed analytical form [9]. In this case we decided to use a variance value obtained from the Urban model [12], which is one of the models used to sample the energy lost by the particle in very thin absorbers in both GEANT3 and GEANT4 [8]. The variance is again calculated by truncating the Urban distribution and considering a standard deviation representing a percentage α of the area of the δ-electron energy distribution: for values α > 0.99, this variance goes smoothly towards the Landau value as the absorber thickness increases. We then have to choose values of α useful for our track following: we note that the 1/p pull spectra obtained with GEANE, in the case of thin absorbers, are composed of a peak, a shoulder near the peak and a very long and flat zone with very few events extending to the left (or the right) of the distribution. The area of the peak plus the shoulder is greater than 99% of the total area. We choose α to obtain a unitary RMS (σ ≈ 1) for the peak plus the shoulder (see Fig. 2). Using this criterion, after many tracking tests through the PANDA apparatus, we found that a meaningful error propagation, taking into account the core of the distribution and excluding the long δ-electron tail, is obtained with values 0.995 ≤ α ≤ 0.998.
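A minimal sketch of the straggling bookkeeping described above, assuming the κ and ξ expressions given earlier; the σ_α = 4.23 entry for α = 0.95 is the one quoted in the text (the full table 1 is not reproduced here), and the example medium values are illustrative.
```python
def xi_mev(z, Z, A, beta, rho, d):
    """Mean energy loss parameter xi (MeV) for projectile charge z and a
    medium with atomic/mass numbers Z, A, density rho (g/cm^3), thickness d (cm)."""
    return 0.1535 * z * z * Z / (A * beta * beta) * rho * d

def straggling_regime(kappa, n_collisions):
    """Classify the straggling regime from kappa = xi / E_max."""
    if kappa > 10.0:
        return "gaussian"
    if kappa > 0.01:
        return "vavilov"
    return "landau" if n_collisions > 50 else "sub-landau"

def truncated_landau_sigma(xi, sigma_alpha=4.23):
    """sigma(E) = xi * sigma_alpha; 4.23 corresponds to alpha = 0.95."""
    return xi * sigma_alpha

# Illustrative argon-like gas layer (made-up numbers).
xi = xi_mev(z=1, Z=18, A=40, beta=0.99, rho=1.66e-3, d=1.0)
print(straggling_regime(kappa=1e-4, n_collisions=30), truncated_landau_sigma(xi))
```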
In the new GEANE version, the parameter α is under the control of the user: if one sets α = 1, the run uses the old standard GEANE for backward compatibility. It is important to note that the treatment of the uncertainties due to energy loss reported here is somewhat arbitrary and based on approximations: it therefore has to be tuned, carefully checked and progressively improved in the different experimental conditions.
Extrapolation to the point of closest approach
The third feature added to GEANE is a new extrapolation option, in addition to the existing extrapolation to a given length along the track. The idea is to give the user the possibility to extrapolate to the Point of Closest Approach (PCA), along the track, to a given space point or to a given segment: these may represent a hit in a TPC or a wire in a gaseous detector, like the PANDA Straw Tubes Tracker (STT). The purpose is to use GEANE as a track follower in a global reconstruction algorithm, for instance in a Kalman filter. In a planar tracking detector, the track is first extrapolated to the detector plane and then updated by taking the weighted mean between the track position before the update and the measurement point of the detector. The weights for this mean are given by the inverse spatial resolution matrix of the detector and by the inverse covariance matrix of the track. In non-planar detectors there are no reference planes, and the way to treat this problem is to calculate the PCA between the extrapolated track and the space point or the wire. The PCA finding algorithms that we developed make use, at each step of the tracking, of the last three extrapolations of the track parameters. This works as a two-phase process, due to the stepping of GEANE: first, in the eustep.f function, an approximation of the PCA is found, bound to the size of the GEANE steps; then, in the new interface, the exact PCA is calculated from these last three points using different algorithms, since two of the three extrapolations may coincide (a minimal sketch of this search is given below). Great care must be taken in choosing the tracking step size depending on the medium.
New C++ interface to Geane for FairROOT
Taking advantage of the structure of the VMC and of its integration in ROOT, GEANE can easily be put to work also in modern C++ simulations with the ROOT geometry navigation tools. To allow this we have made changes to the code at three different levels, adding, among other things, classes for the SC and SD systems and a utility class used to transform between representations; the package geane initializes the use of GEANE and handles the routines that find the PCA for all the possible cases. It is not mandatory to work within the FairROOT framework to use GEANE in the VMC: the first two levels of the update allow one to use GEANE with the standard user interface and with access to the new features. The FairROOT interface is an optional way to use GEANE in an easier and more rationalized way, but it also contains the new methods to handle the extrapolation to the PCA. The code for the first two levels is available in the VMC CVS repository at CERN and allows the use of GEANE via the old interface with C++ wrappers to the FORTRAN functions. The C++ interface is instead available in the FairROOT SVN repository at GSI and is currently used by the PANDA and CBM collaborations for the global reconstruction.
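To make the two-phase PCA search concrete, here is a minimal sketch: modelling the arc through the last three extrapolated points with a parabola and minimising the distance on a grid is an illustrative choice, not necessarily the algorithm implemented in the GEANE interface.
```python
import numpy as np

def pca_to_point(p1, p2, p3, x):
    """Refine the point of closest approach to a space point x, given three
    consecutive track extrapolations p1, p2, p3 (each an array of shape (3,)).
    The arc is modelled as a quadratic r(t), t in [-1, 1], through the three
    points, and |r(t) - x|^2 is minimised on a fine grid."""
    p1, p2, p3, x = map(np.asarray, (p1, p2, p3, x))
    # Quadratic through (t=-1, p1), (t=0, p2), (t=+1, p3): r(t) = a t^2 + b t + c
    a = 0.5 * (p1 + p3) - p2
    b = 0.5 * (p3 - p1)
    c = p2
    t = np.linspace(-1.0, 1.0, 2001)
    r = np.outer(t ** 2, a) + np.outer(t, b) + c
    i = np.argmin(np.sum((r - x) ** 2, axis=1))
    return r[i]

# Illustrative numbers only.
print(pca_to_point([0, 0, 0], [1, 0.1, 0], [2, 0.4, 0], x=[1.2, 0.0, 0.0]))
```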
Results for the PANDA Straw Tubes Tracker
This new code is currently used in the design of a tracking Kalman filter for the PANDA detector: we present here some results obtained in this work. The first result is a check of the GEANE performance on a set of simulated data with the PANDA STT geometry. The multiple scattering has been simulated with the Molière distribution. The pull or standard variables are T = (x_GEANE − x_MC)/σ_GEANE, where x_GEANE and σ_GEANE are the extrapolated parameter and its GEANE error, and x_MC is the Monte Carlo truth.
Figure 4. Pulls of the 5 track parameters in the SC representation, of the 3 momentum components in MARS and of the coordinates y and z in the SC system, for a 1 GeV muon crossing the PANDA STT detector from the origin to a detector plane external to the apparatus.
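A minimal sketch of the pull computation used in this check; the arrays below are made-up stand-ins for GEANE extrapolations and Monte Carlo truth.
```python
import numpy as np

def pulls(x_geane, x_mc, sigma_geane):
    """Standard (pull) variables T = (x_GEANE - x_MC) / sigma_GEANE;
    well-modelled errors give a zero-mean, unit-width distribution."""
    return (np.asarray(x_geane) - np.asarray(x_mc)) / np.asarray(sigma_geane)

rng = np.random.default_rng(0)
x_mc = rng.normal(0.0, 1.0, 10000)          # made-up truth values
x_ge = x_mc + rng.normal(0.0, 0.1, 10000)   # extrapolations with 0.1 error
T = pulls(x_ge, x_mc, 0.1)
print(T.mean(), T.std())                    # expect ~0 and ~1
```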
2022-06-28T01:57:20.045Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "aeee897faf5867f937d14e1cc0c51501a7eba114", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/119/3/032018", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "aeee897faf5867f937d14e1cc0c51501a7eba114", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
102087242
pes2o/s2orc
v3-fos-license
Study on Ignition Mechanism of Non-mercury Metal Halide Lamps (KOITO Manufacturing Co., Ltd.)
ABSTRACT The filling gas pressure of a mercury-free HID lamp is much higher than that of a mercury-containing lamp. A high filling gas pressure leads to a high ignition voltage in the mercury-free HID lamp, and this makes the ballast with its ignition circuit bigger and heavier in order to maintain ignition reliability. It is therefore important to decrease the ignition voltage of mercury-free HID lamps. To do so, it is necessary to better understand how the discharge starts and grows in the HID lamp burner. An ultra-high-speed camera with a shutter speed of 5 ns was used for the discharge observation. As a result, we found that a very weak discharge occurred outside the burner before the burner ignited. This weak discharge is believed to influence the ignition condition in two ways: through the ultraviolet rays radiating from it, and through the electric field distortion formed by the electric charge attached to the outside of the burner wall. We found that the electric charge attached to the outside of the burner wall strongly influences the ignition performance, whereas the ultraviolet radiation has little influence on it.
KEYWORDS: HID lamp, ignition, electric field distortion, pre-discharge, shroud gas
Fig.1 Photograph of the shape of the automobile head lamp. Fig.2 Structure of the burner made of aluminum. Fig.3 Schematic view of the measurement system and the lamp for the ignition. Fig.4 Definition of the ignition voltage (V_ig) and the time lag at measurement. Fig.5 Procedure for measuring the ignition performance. Fig.6 Influence of the kinds and pressures of shroud gases on the ignition. Fig.7 Influence of the pulse polarity on the ignition performance. Fig.8 The progressing discharge when the lamp is igniting; the shroud gas is 13.3 kPa of 50% Ar and 50% N2. Fig.9 Observation of the pre-discharge under the condition that the shroud gas is 13.3 kPa of 50% Ar and 50% N2. Fig.11 UV emission from the N2 lamp used (filling nitrogen pressure 13.3 kPa). Fig.12 Influence of UV emission on the ignition performance. Fig.13 Lamp figure used for the simulation.
Fig.15 The relation between the electric field intensity distribution and the electric charge attached to the outside of the burner wall. Fig.16 The relation between the maximum value of the electric field intensity and the electric charge attached to the outside of the burner wall (+23 kV applied and negative charge attached, or −23 kV applied and positive charge attached). Fig.17 The positions where the maximum values of the electric field inside the lamp were obtained by simulation, in the vicinity of the two electrodes. Fig.18 The relation between the maximum value of the electric field intensity and the electric charge attached to the outside of the burner wall (+23 kV applied and negative charge attached, or −23 kV applied and positive charge attached). Fig.19 The relation between the maximum value of the electric field intensity and the electric charge attached to the outside of the burner wall (+23 kV applied and positive charge attached, or −23 kV applied and negative charge attached).
2019-04-07T13:08:39.097Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "0309af74387da43f13fec0166f9c354caf40f799", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/jieij/98/8A/98_346/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "535ff5739ba61b09b794a3a3891ee00ff3a4a4dc", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
206300106
pes2o/s2orc
v3-fos-license
University of Birmingham
Observation of Two-Dimensional Localized Jones-Roberts Solitons in Bose-Einstein Condensates
Nadine Meyer, Harry Proud, Marisa Perea-Ortiz, Charlotte O'Neale, Mathis Baumert, Michael Holynski, Jochen Kronjäger
Jones-Roberts solitons are the only known class of stable dark solitonic solutions of the nonlinear Schrödinger equation in two and three dimensions. They feature a distinctive elongated elliptical shape that allows them to travel without change of form. By imprinting a triangular phase pattern, we experimentally generate two-dimensional Jones-Roberts solitons in a three-dimensional atomic Bose-Einstein condensate. We monitor their dynamics, observing that this kind of soliton is indeed not affected by the dynamic (snaking) or thermodynamic instabilities that make other classes of dark solitons unstable in dimensions higher than one. Our results confirm the prediction that Jones-Roberts solitons are stable solutions of the nonlinear Schrödinger equation and promote them for applications beyond matter wave physics, like energy and information transport in noisy and inhomogeneous environments.
Waves play a key role in physics and technology, ranging from quantum mechanics to telecommunications. In linear media, waves spread both transversally and longitudinally due to dispersion, making them unsuitable for directed transport. This is at variance with nonlinear media, where solitary waves (solitons) become possible. In solitons, the broadening due to dispersion is counteracted by a nonlinear compression, leading to form-stable propagation at subsonic speed over large distances and to particlelike properties. Depending on whether the soliton is a density dip or a bump, it is classified as dark or bright, respectively. Both bright and dark solitons have been found in areas as diverse as water channels [1], high-speed data communication in optical fibers [2,3], energy transport along DNA in biology [4,5], and tropospheric phenomena like Morning Glory cloud formation [6]. In experiments with ultracold atoms, bright solitons [7] and dark plane solitons (DPS) [8-12] have been extensively studied. However, with the notable exception of dipolar systems [13], bright solitons are prone to collapse. Furthermore, DPS in two or three dimensions always rapidly decay due to thermodynamical or dynamical (snaking) instabilities [8,14,15]. Indeed, so far, stable long-living dark solitons have been realized only in one-dimensional systems [9-12]. In 1982, Jones and Roberts predicted a class of dark solitons that are stable both in two and three dimensions [16] and that, in contrast to DPS, have a finite extent in every direction.
Jones-Roberts solitons (JRS) indeed feature a distinctive elongated shape that allows propagation without change of form and, due to their finite size and area, they are expected to be immune to the snaking instability and to be resilient against scattering of thermal excitations [17]. Despite these outstanding properties, the experimental observation of JRSs has so far been elusive. In this Letter we report the experimental realization of two-dimensional JRSs, achieved by imprinting a specific phase structure onto an atomic Bose-Einstein condensate (BEC). We find that, as predicted by Jones and Roberts, these solitons are immune to both dynamical and thermodynamical instabilities, and indeed their lifetime greatly exceeds that of simple DPSs created with the same technique. We characterize our JRSs in terms of their size, speed, and direction of propagation, finding good agreement with numerical simulations. In addition, we demonstrate that our JRSs fulfil the Kadomtsev-Petviashvili condition, as predicted by Jones and Roberts in their seminal work [16]. At the mean-field level, BECs are excellently described by a nonlinear Schrödinger equation called the Gross-Pitaevskii equation (GPE):
iℏ ∂Ψ/∂t = [−ℏ²/(2m) ∇² + V + g|Ψ|²] Ψ,  (1)
where Ψ is the condensate mean-field wave function, m the mass of the atoms, V the trapping potential, and g = 4πℏ²a/m, with a the s-wave scattering length, which at low temperatures parametrizes the strength of the interatomic interactions. In their seminal paper, Jones and Roberts demonstrated that, in addition to the well-known 1D dark soliton, Ψ = √n₀ tanh(x/(√2 ξ)) e^(−ign₀t/ℏ), with n₀ the condensate density and ξ = (8πn₀a)^(−1/2) the healing length, Eq. (1) allows stable solitonic solutions also in two and three dimensions. They further demonstrated that the shape and properties of JRSs depend on their speed v. In two dimensions, for v finite but lower than the speed of sound c, they take the form of finite line-shaped density minima, called rarefaction pulses. In the limit v → 0, instead, they transform into spatially separated vortex-antivortex (VA) pairs that mutually propel each other [16]. In the first case the vorticity vanishes and the phase pattern shows a dipolar structure of two phase winding points of opposite sign connected via an elongated phase step [18]. In the case of spatially separated VA pairs, the phase pattern consists, as one would expect, of two separated 2π phase winding points of opposite charge. A similar picture holds for 3D systems, where JRSs take the form of axisymmetric solitary waves transitioning from rarefaction pulses to vortex rings instead of VA pairs. To realize our BEC, we start by loading, in 10 s, 10⁸ ⁸⁷Rb atoms into a 3D magneto-optical trap (MOT) fed by a 2D MOT. After an optical molasses stage, the atoms are pumped into the |F = 2, m_F = 2⟩ state and then magnetically transported to the science chamber by moving the trapping coils. In the science chamber the atoms are further cooled by radio-frequency evaporation and then transferred into an optical dipole trap. This is produced by a single astigmatic beam at 1550 nm, focused to a waist of 11 μm. The distance between the two foci is approximately 1 mm. The atoms are transferred into the region where the beam is focused vertically, providing a strong confinement in the vertical direction and much weaker confinement in the horizontal plane.
A nearly pure BEC of typically 4 × 10⁴ atoms in the |F = 2⟩ ground-state hyperfine manifold is then formed with a subsequent evaporation. The final trapping frequencies are 2π × (5, 30, 250) Hz, leading to an oblate BEC. To create the JRSs, we employ a phase imprinting method [19]. The phase imprinting and imaging setup is illustrated in Fig. 1(a). Near-resonant light is reflected by a digital micromirror device and then sent onto the atoms along the vertical direction using a high-resolution optical microscope objective. Each of the 1920 × 1080 micromirrors can be individually controlled, allowing us to imprint on the reflected beam any arbitrary intensity pattern I(x, y). The phase of the atoms is therefore locally changed by inducing a dipole potential U_dip(x, y) ∝ I(x, y)/Δ, where Δ is the detuning with respect to the atomic transition. The detection light is superimposed on the imprinting beam with a polarizing beam cube. The atoms are imaged by absorption imaging with a CCD camera using a second microscope objective that allows a resolution of ≃1 μm. By performing numerical simulations using the GPE, we have found that imprinting a homogeneous triangular-shaped phase structure on our BEC leads to the nucleation of a pair of rarefaction pulses (a minimal simulation sketch is given below). After formation, each of them travels approximately in the direction perpendicular to the corresponding edge of the triangle, as shown in Fig. 1(b). The triangular shape combines two key features of JRSs: phase winding around its vertices and an elongated phase profile along its edges. To identify the most efficient way to create the JRSs, we have performed a systematic numerical study changing the shape and the position of the imprinted triangle. We have found that a triangle whose lower vertex subtends an angle of 90 deg and that imprints a phase difference of π is the best choice to create two long-living JRSs. Smaller subtended angles lead to the creation of JRSs too close to the upper border of the BEC, limiting their lifetime. Larger angles instead launch the JRSs more in the direction of the short axis, also shortening the time they can travel through the BEC. Finally, we have found that imprinting a phase step lower than 0.9π leads to the same effect as imprinting a triangle with a smaller subtended angle. For these reasons, in the experiment the imprinted triangle has its lower vertex positioned at the center of the BEC and subtends an angle of 90 deg. We illuminate the BEC for 28 μs with a light pulse detuned by approximately 10 GHz from the |F = 2⟩ → |F′ = 3⟩ transition to create a phase difference of π. Before performing absorption imaging of the BEC along the vertical direction, we then allow a time of flight of 10 ms. The experimental evolution that follows the phase imprinting is displayed in Fig. 2(a). In accordance with our simulations [Fig. 2(b)], we observe that the imprinting generates two elliptical rarefaction pulses, one on each side of the triangle. To characterize the motion and the properties of the two traveling JRSs, we perform two independent Gaussian fits determining their angle θ, their position, their depth n₀, and their major (α) and minor (β) axis lengths. We observe that, after the first 5 ms in which the initial sharp triangular imprint decays into the two JRSs, the latter travel through the BEC without any form of decay for at least 40 ms. At this time they stop, since they reach the border of the BEC.
Indeed, as reported in Fig. 3(a), we observe that until that time their relative depth n₀/n, where n is the density of the unperturbed BEC, remains approximately constant during the whole evolution. The absence of decay, as well as the overall dynamical evolution, is in agreement with the GPE simulations, demonstrating that JRSs are stable solutions of the nonlinear Schrödinger equation. Indeed, from our results we can conclude that finite temperature effects and quantum fluctuations [20] do not significantly alter the dynamics of JRSs. To provide a direct comparison, we also study the evolution of a standard DPS created in our experiment by imprinting a linear phase step of π on the BEC. After only ≃10 ms we observe the DPS decaying, due to snaking and thermodynamic instabilities, into a pair of vortices, as shown in Fig. 3(b). The corresponding depth as a function of time is also reported in Fig. 3(a). The lifetime obtained by an exponential fit is 4 ms [21], therefore an order of magnitude lower than the lifetime of our JRSs, which is limited only by the finite extension of our BEC. Interestingly, when reaching the border of the BEC, each JRS breaks into a VA pair [18,22], as shown in the inset of Fig. 3(a). This is due to the fact that at this point the speed of the JRSs rapidly drops to zero, making the rarefaction pulses transition to separated VA pairs, as predicted by Jones and Roberts [16]. This observation, in agreement with our simulations, further confirms that our solitons belong to the Jones-Roberts class. It is worth noticing that the high trapping frequency in the vertical direction prevents the formation of vortex rings. Indeed, the smallest vortex ring has a size comparable to 4 times the healing length ξ. For our BEC the healing length is ≃420 nm, while the Thomas-Fermi radius in the vertical direction is 1.1 μm. Therefore, our condensate cannot support the formation of vortex rings, but only of VA pairs along the compressed vertical direction. From this we conclude that, even though our BEC is not strictly two dimensional, it can only support 2D JRSs. By measuring the speed of propagation we confirm the subsonic nature of our JRSs, as they move with an average speed of 0.43 mm/s, which is smaller than the speed of sound of 1.21 mm/s. Interestingly, in Ref. [23] it was predicted that in two dimensions JRSs should become vortex-antivortex pairs for velocities below ≃0.43c. Despite the fact that our JRSs travel at 0.355c, in our experiment they appear as pure rarefaction pulses, and we attribute this discrepancy with theory to the nonuniformity of our trapped BEC. Furthermore, a single JRS is expected to travel parallel to its minor axis [16]. However, the direction of travel φ of our two JRSs features a different angle, as reported in Fig. 3(c). While φ is very close to π/4, the angle of the edge of the imprinting triangle, the orientation θ of the two JRSs tends to a more (anti)parallel configuration. This discrepancy could again be attributed to the nonhomogeneity of our BEC, but could also stem from the fact that each of the two JRSs propagates in the velocity field exerted by the other. Given the peculiar dipolar phase structure of the JRSs, this could suggest that JRSs might feature a dipolarlike interaction. Since with our imprinting method, simulations included, it is difficult to isolate one effect from the other, we plan to study this effect in detail in future work.
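A minimal sketch of a split-step GPE simulation with a triangular π phase imprint, in the spirit of the simulations described above; the grid size, time step and dimensionless units are arbitrary choices, not the parameters used in the paper.
```python
import numpy as np

def evolve_gpe_2d(psi, dx, dt, steps, g, m=1.0, hbar=1.0, V=0.0):
    """Split-step Fourier evolution of
    i hbar dpsi/dt = (-hbar^2/2m Laplacian + V + g |psi|^2) psi on a square grid."""
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kin = np.exp(-1j * hbar * (kx ** 2 + ky ** 2) * dt / (2 * m))
    for _ in range(steps):
        psi = np.fft.ifft2(kin * np.fft.fft2(psi))          # kinetic half of the step
        psi *= np.exp(-1j * (V + g * np.abs(psi) ** 2) * dt / hbar)  # potential + interaction
    return psi

# Uniform background with a triangular pi phase imprint (illustrative values).
n, dx = 256, 0.5
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
triangle = (Y > 0) & (Y < 20) & (np.abs(X) < Y)   # lower vertex subtends 90 deg
psi0 = np.ones((n, n), complex) * np.exp(1j * np.pi * triangle)
psi = evolve_gpe_2d(psi0, dx, dt=0.05, steps=200, g=1.0)
print(np.abs(psi).min())  # density dips signal the nucleated rarefaction pulses
```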
Figure 2. In both (a) and (b), two elongated but spatially localized rarefaction pulses (density dips), known as Jones-Roberts solitons, are formed close to the center of the BEC 5 ms after the initial imprinting. After formation, the two rarefaction pulses maintain an almost constant finite size until they reach the border of the BEC. The discrepancies between (a) and (b) are mainly due to the switching off of the trapping potential, which is not simulated in (b). The density defects visible at the border of the BEC are due to unavoidable spurious sound waves stemming from the phase imprint and are unrelated to the physics of the JRS.
By analyzing the dynamics of their size, we can gain further insight into our JRSs. As reported in Fig. 4, after an initial time of 5 ms in which the initial triangular structure decays into the two JRSs, their major axis α reaches a stable value that is kept until the solitons reach the border of the BEC. This behavior coincides with the predictions of our numerical simulations [solid line in Fig. 4(a)]. The size of the minor axis β is at the limit of our resolution even after 10 ms of expansion, and we do not observe any significant change, as also expected from the simulations [Fig. 4(b)]. As can be seen in Fig. 4, once they are formed, our JRSs acquire a shape that fulfils the Kadomtsev-Petviashvili condition (KPC) [24] α = C/χ² and β = C/χ (dotted lines), with χ = √(1 − v/c)/3 and C ≃ ξ/3. The fulfilment of the KPC is another characteristic feature of two-dimensional JRSs with finite velocity [17,23]. Intuitively, one would expect an increasing axis length over time in both directions due to dissipation, similar to DPS becoming wider and faster due to thermodynamic dissipation, as observed in previous experiments [10]. Notably, as far as our experiment can test, the fulfilment of the KPC provides an outstanding immunity against both the snaking and the thermodynamic instability, making the scattering of sound waves also negligible [22]. In summary, we have experimentally realized and characterized JRSs and confirmed that they are stable solutions of the nonlinear Schrödinger equation, a long-sought goal since their prediction in 1982 [16,23]. By studying their motion and shape, we have confirmed the fulfilment of the KPC and found discrepancies with respect to the original theory that might be due to the nonuniformity of our trapped BEC or to the interaction between the two solitons. All this creates an experimental opportunity to investigate the contribution of JRSs to the specific heat of the BEC, which possibly exceeds the phonon contribution [25]. Furthermore, studying the onset of vortex-free rarefaction pulses can also shed light on the anomalous critical scaling in acoustic turbulence described by the Kardar-Parisi-Zhang equation [26], which is relevant in turbulent systems far from equilibrium such as avalanches [27], formation of fire fronts [28], and surface growth [29]. The outstanding resilience of JRSs against dynamical instabilities and thermal decay might allow their propagation in disordered media, suggesting that they can play a significant role in many areas of science. This encourages the search for similar phenomena in other areas of physics, chemistry, and biology and opens up novel technology opportunities in directed transport through homogeneous but nonlinear media.
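A rough numerical check of the KPC, using the paper's quoted speed, speed of sound and healing length; the prefactors in χ and C are taken exactly as written above and should be treated as an assumption of this sketch.
```python
import numpy as np

def kpc_axes(v, c, xi):
    """Axis lengths from the KP condition: alpha = C / chi^2, beta = C / chi,
    with chi = sqrt(1 - v/c) / 3 and C = xi / 3 (units follow xi)."""
    chi = np.sqrt(1.0 - v / c) / 3.0
    C = xi / 3.0
    return C / chi ** 2, C / chi

# Quoted numbers: v = 0.43 mm/s, c = 1.21 mm/s, healing length xi = 0.42 um.
alpha, beta = kpc_axes(v=0.43, c=1.21, xi=0.42)
print(f"alpha = {alpha:.2f} um, beta = {beta:.2f} um")  # beta near the 1 um resolution
```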
The data presented here are available from the research data management system of the University of Birmingham, accessible online [30]. We gratefully thank J. Brand and T. Gasenzer for discussions. This work was funded by the EPSRC (EP/H009914/1) and the Leverhulme Trust.
2020-09-16T19:17:04.691Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "1f564bf68c7fa571b551d335a3e0eee35e04ebb7", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.119.150403", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1f564bf68c7fa571b551d335a3e0eee35e04ebb7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
250361009
pes2o/s2orc
v3-fos-license
Mass media pressure on psychological and healthy well-being: an explanatory model as a function of physical activity
The present research aims to identify and establish the relationships between media pressure, psychological well-being, age, physical activity and adherence to the Mediterranean diet. This objective is broken down into (a) developing an explanatory model of media pressure, psychological well-being, age, physical activity and adherence to the Mediterranean diet and (b) testing the structural model by means of a multi-group analysis according to physical activity level. To this end, a quantitative, non-experimental (ex post facto), comparative and cross-sectional study was carried out on a sample of 634 participants (mean age 35.18 ± 9.68 years). The instruments used were an ad hoc questionnaire, the Spanish version of the Sociocultural Attitudes Towards Appearance Questionnaire-4 (SATAQ-4), the Psychological Well-Being Scales (PWBS) and the Prevention with Mediterranean Diet questionnaire (PREDIMED). The data reveal that meeting the WHO physical activity criteria improves the relationships between media pressure, psychological well-being and healthy well-being.
Introduction
It has been shown that there is a relevant connection between positive adherence to a healthy lifestyle and both physical and psychological well-being (Cecilia et al. 2018). The nutritional, physical-sporting, social and developmental spheres condition the achievement of full physical and mental development (García-González and Froment 2017). Within the sporting sphere, physical activity plays a key role in physical and mental development (Kim et al. 2017). Furthermore, Thomas et al. (2019) point out that the ageing of the population is a key element in the abandonment of physical activity, since as age increases people opt for other, more sedentary types of activities. Accordingly, the World Health Organization (WHO 2020) states that adults aged 18-64 years should engage in 150-300 minutes of moderate aerobic physical activity, or at least 75-150 minutes of intense aerobic physical activity, per week to maintain a healthy physical state, with exercise of any duration counting (Reyes et al. 2020; Thomas et al. 2019). For the population over 64 years of age, the organisation states that they should strengthen the large muscle groups and dedicate three or more days a week to physical activity to improve their balance and prevent falls (Cárdenas-Guaman 2020); it also recommends multimodal activities that improve coordination skills (WHO 2020). Likewise, positive adherence to a healthy dietary pattern has numerous physical and mental health benefits, such as a reduction in waist circumference, an improvement in blood pressure and an increase in self-concept (Muros et al. 2017). The Mediterranean diet is conceived as a healthy dietary pattern, not only in terms of food intake but also in terms of the quality and cooking of the food (Melero et al. 2020). The foods that characterise the Mediterranean diet are those originating from the Mediterranean area, such as cereals, fruit, vegetables, pulses, olive oil, whole grains and nuts (Melguizo-Ibáñez et al. 2020), accompanied by a balanced consumption of dairy products, oily fish and eggs (Martini 2019). Trigueros et al.
(2020a, b) state that during adolescence there is a lack of control in following a healthy dietary pattern, owing to the poor nutritional training provided at the educational stages; during adulthood, however, adherence improves because of the numerous benefits it brings to health. To achieve adequate and full mental and physical development, it is necessary to understand the pressure exerted by the different media on the behaviours of the adolescent and adult population (Pippi et al. 2020). The media present iconic personalities who encourage people to commit to a healthy lifestyle (Puertas-Molero et al. 2019); however, not all of this influence has a positive effect. Barcaccia et al. (2017) and Gietzen et al. (2017) point out that continuous exposure to the media can lead to personal dissatisfaction due to the creation of beauty standards, causing a rejection of one's own self-image and thus worsening psychological well-being. During the past decade there has been a great deal of research on well-being. Ryan and Deci (2001) proposed two main ways to understand this state: the hedonic one, which relates primarily to happiness, and the eudaimonic one, which is associated with human potential. In response to this proposal, Keyes et al. (2002) extended the boundaries of this classification by using the subjective well-being construct as the main representative of the hedonic view and the psychological well-being construct as the representative of the eudaimonic view. Focusing on the latter perspective, Anglim et al. (2020) argue that psychological well-being concerns the development of the various capacities for personal growth, these being conceived as the main indicators of positive functioning and thus of a positive self-image (Rahim et al. 2021). In view of the above, the present research aims to identify and establish the relationships between media pressure, psychological well-being, age, physical activity and adherence to the Mediterranean diet. This objective is broken down into (a) developing an explanatory model of media pressure, psychological well-being, age, physical activity and adherence to the Mediterranean diet and (b) testing the structural model by means of a multi-group analysis according to physical activity.
Design and participants
A descriptive, non-experimental (ex post facto), cross-sectional design was used for this study. Convenience sampling was used for the selection of participants, with a single measurement for a single group. The study sample consisted of a total of 634 participants, showing a homogeneous distribution according to sex: females accounted for 55.5% (n = 352) and males for 44.5% (n = 282). Study participants reported a mean age of M = 35.18 ± 9.68, with a range of 18-66 years. Through different dissemination methods, the Spanish population was invited to participate, with the basic criteria of being of legal age and not exceeding the ordinary retirement age. A total of 53 questionnaires were eliminated because they did not meet the inclusion criteria or because they were incorrectly completed.
Instruments and variables
• Sociodemographic questionnaire: A self-drafted, self-report questionnaire was used to collect sociodemographic and physical-sporting data. The sex of the participants (male and female) and their age were recorded.
The time invested in the practice of physical activity and sport, expressed in minutes, was also recorded. Based on these data, and taking into account the minimum physical activity recommendations proposed by the World Health Organization (WHO 2020), we categorised whether or not the participants complied with these recommendations.
• Sociocultural Attitudes Towards Appearance Questionnaire-4 (SATAQ-4): This questionnaire on sociocultural attitudes towards appearance was used to assess social pressure towards physical appearance, specifically the pressure exerted by the media. This research used the Spanish version adapted by Llorente et al. (2015). It consists of 22 items answered on a Likert-type scale with five response alternatives, where 1 = completely disagree and 5 = completely agree. The questionnaire records five dimensions: internalisation of a slim build (items 3, 4, 5, 8 and 9), internalisation of an athletic/muscular build (items 1, 2, 6, 7 and 10), family pressure (items 11, 12, 13 and 14), peer pressure (items 15, 16, 17 and 18) and media pressure (items 19, 20, 21 and 22). For the present study, the internal consistency of the questionnaire was α = 0.916, while for the media pressure dimension a reliability of α = 0.967 was obtained.
• Psychological Well-Being Scales (PWBS): This instrument was used to record psychological well-being. The abbreviated version adapted to Spanish by Díaz et al. (2006), derived from the psychological well-being scale proposed by Ryff and Keyes (1995), was used. The scale is composed of 29 items answered on a Likert-type scale with six response options, where 1 = strongly disagree and 6 = strongly agree. Ten of the items are reverse-worded. The scale yields an overall psychological well-being score and six dimensions: self-acceptance (items 1, 7, 19 and 31), positive relationships (items 2, 8, 14, 26 and 32), autonomy (items 3, 4, 9, 15, 21 and 27), mastery of the environment (items 5, 11, 16, 22 and 39), personal growth (items 24, 36, 37 and 38) and purpose in life (items 6, 12, 17, 18 and 23). An internal consistency of α = 0.918 was obtained for the present study.
• Prevention with Mediterranean Diet (PREDIMED): This instrument was used to record data on adherence to the Mediterranean diet. The tool was developed by Schröder et al. (2011); for this study we used the Spanish version adapted by Álvarez-Álvarez et al. (2019). It is composed of a total of 14 dichotomous items, whose sum gives the final score. The final score is categorised into three levels: low adherence (≤7), medium adherence (8-10) and high adherence (>10). An internal consistency of α = 0.789 was obtained.
Procedure
As a starting point, an exhaustive review of the scientific literature was carried out to extract information on existing problematic situations in society. Afterwards, a Google form was created by the Department of Didactics of Musical, Plastic and Corporal Expression of the University of Granada. This included the aforementioned instruments and detailed the aim and purpose of the study, offering the possibility of voluntary participation by giving informed consent when sending the form. Several channels were used for its administration; the most used was dissemination through social networks.
Given the vulnerability of data collected through social networks, two questions were duplicated in order to detect questionnaires filled out at random, thus safeguarding the reliability of the responses and limiting bias. Even so, a total of 53 questionnaires were eliminated because they were incorrectly completed or did not meet the inclusion criteria. Furthermore, this study complied with the ethical principles of research established by the Declaration of Helsinki (World Medical Association, 2009), ensuring anonymity and respecting the rights of the participants. In addition, the research was approved by the Ethics Committee of the University of Granada (1230/CEIH/2020).
Data analysis
For the descriptive analysis using frequencies and means, the SPSS 25.0 statistical software (IBM Corp., Armonk, NY, USA) was used. Cronbach's alpha coefficient was used to determine the internal consistency of the instruments, establishing the reliability index at 95%. The AMOS 23.0 statistical software (IBM Corp., Armonk, NY, USA) was used to perform the multigroup structural equation modelling (SEM) analysis. SEM was used to establish the relationships between the variables that make up the theoretical model (Fig. 1) for both groups (participants who did or did not comply with the physical activity recommendations). Two different models were constructed, with the aim of verifying the relationships between variables according to the participants' physical activity practice. The SEM developed for this analysis was constructed from five observable variables. The causal explanations of the endogenous variables were made by considering the observed associations between the indicators and the reliability of the measurements. Thus, the measurement error of the observable variables was included in the model and can be directly controlled; the coefficients are interpreted as multivariate regression coefficients. The one-way arrows represent the lines of influence between the latent variables and are interpreted from the regression weights. A significance level of .05 was established using Pearson's chi-square test. The Mediterranean diet (MD) variable acts as an endogenous variable, receiving the effects of psychological well-being (PWB), physical activity practice (PA), age (AGE) and mass media pressure (MMP). Likewise, the psychological well-being (PWB) variable acts on the variables media pressure (MMP), physical activity practice (PA), adherence to the Mediterranean diet (MD) and age (AGE). To verify the compatibility between the model developed and the empirical data obtained, the fit of the model was examined. Following the criteria proposed by Marsh (2007), the reliability of the model was assessed according to the goodness of fit. For the chi-square analysis, values associated with a non-significant p value indicate a good model fit. Because this statistic is very sensitive to sample size, other fit indices must also be used (Byrne 2010). Parameters such as the comparative fit index (CFI), normed fit index (NFI), incremental fit index (IFI) and Tucker-Lewis index (TLI) were used; values above 0.90 represent an acceptable fit and values above 0.95 an excellent fit. In addition, the root mean square error of approximation (RMSEA) was used, where values below 0.08 indicate an acceptable fit and values below 0.05 an excellent fit (Bentler 1990; McDonald and Marsh 1990).
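A minimal sketch of how these fit indices are computed from the model and baseline (null-model) chi-square values, following the standard definitions; the baseline values fed in below are made up for illustration, since the study does not report them.
```python
import math

def fit_indices(chi2, df, chi2_null, df_null, n):
    """Standard SEM fit indices from model and baseline chi-squares."""
    nfi = (chi2_null - chi2) / chi2_null
    tli = ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1)
    cfi = 1 - max(chi2 - df, 0) / max(chi2_null - df_null, chi2 - df, 1e-4)
    rmsea = math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return {"NFI": nfi, "TLI": tli, "CFI": cfi, "RMSEA": rmsea}

# Model chi-square from the paper; the baseline chi-square is hypothetical.
print(fit_indices(chi2=1.869, df=1, chi2_null=250.0, df_null=10, n=634))
```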
Results The model developed by evaluating the variables measured in an adult sample aged 18-66 years, for participants meeting the recommendations proposed by the WHO (2020), showed a good fit on all indices. The Chi-square analysis yielded a non-significant p value (χ2 = 1.869; df = 1; p = 0.172), indicating good fit. However, these indicators cannot be interpreted in isolation owing to their susceptibility to sample size (Marsh 2007). Thus, other standardised fit indices that are less sensitive to sample size were used. The comparative fit index (CFI) analysis gave a value of 0.992; the normalised fit index (NFI) analysis gave 0.985; and the incremental fit index (IFI) gave 0.993, describing an excellent fit. The Tucker-Lewis index (TLI) analysis gave a value of 0.925, an acceptable fit. The root mean square error of approximation (RMSEA) analysis also gave an excellent value of 0.043. Figure 2 and Table 1 show the regression weights for the model as a function of meeting physical activity recommendations, for which statistically significant relationships were obtained at p < 0.05, p < 0.01 and p < 0.001. Among participants who complied with physical activity recommendations, psychological well-being (PWB) had a negative effect on media pressure (MMP) (p < 0.001; r = −0.379). However, PWB itself was positively associated with physical activity (PA) (p < 0.01; r = 0.237), adherence to the Mediterranean diet (MD) (p < 0.01; r = 0.238) and age (p < 0.05; r = 0.205). It should also be noted that MMP had an indirect negative effect on PA (p < 0.01; r = −0.216) and MD (p < 0.05; r = −0.206). Finally, PA (p < 0.001; r = 0.244) and age (p < 0.05; r = 0.191) had a positive effect on MD. The model developed for the variables measured as a function of non-compliance with the recommendations proposed by the WHO (2020) also showed a good fit on all indices. A non-significant p value was found in the Chi-square analysis (χ2 = 0.136; df = 1; p = 0.850). In addition, other standardised fit indices were used, as the above indicators may be susceptible to and influenced by sample size (Marsh 2007). The fit of the model for the participants who did not comply with the physical activity recommendations was excellent: the comparative fit index (CFI) analysis gave a value of 0.999; the normalised fit index (NFI) gave 0.998; the incremental fit index (IFI) was 0.994; and the Tucker-Lewis index (TLI) analysis recorded a value of 1.211. Likewise, the root mean square error of approximation (RMSEA) analysis gave a value of 0.002, which is excellent.
Fig. 1 The theoretical model. Note: mass media pressure (MMP); Mediterranean diet (MD); physical activity (PA); psychological well-being (PWB); age (AGE)
Figure 3 and Table 2 show the regression weights for the model for participants who did not comply with the physical activity recommendations, for whom statistically significant relationships were obtained at the p < 0.05, p < 0.01 and p < 0.001 levels. For these participants, PWB exerted a positive effect on MD (p < 0.001; r = 0.414) with greater strength than for participants who complied with the physical activity recommendations, and with a similar intensity of association for the PA variable (p < 0.05; r = 0.273). In turn, the negative effects of MMP on MD (p < 0.001; r = −0.325) and PWB (p < 0.05; r = −0.268) were greater.
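Applying the hedged rate_fit sketch from the previous section to the indices reported here reproduces the verdicts given in the text:

# Worked check using the rate_fit sketch above (our illustration only).
print(rate_fit(cfi=0.992, nfi=0.985, ifi=0.993, tli=0.925, rmsea=0.043))
# -> "acceptable" (the TLI of 0.925 falls in the 0.90-0.95 band)
print(rate_fit(cfi=0.999, nfi=0.998, ifi=0.994, tli=1.211, rmsea=0.002))
# -> "excellent"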
Finally, there was a direct and stronger relationship between age and MD (p < 0.01; r = 0.306) for participants who did not comply with the recommendations. No statistically significant associations were found between the other factors. Discussion This research examines the relationships between media pressure, Mediterranean diet, physical activity, psychological well-being and age according to the WHO physical activity recommendations. The results obtained respond to the objectives set out, so the present discussion compares the data obtained with those of previous research. In terms of the relationship between media pressure and psychological well-being, negative relationships are observed for both groups, with a higher significance among those who complied with the recommendations.
Fig. 2 The structural equation for individuals who did comply with the physical activity recommendations. Note: mass media pressure (MMP); Mediterranean diet (MD); physical activity (PA); psychological well-being (PWB); age (AGE)
Vandenbosch and Eggermont (2016) found that the pressure exerted by the media and social networks generates a beauty ideal aimed at weight reduction, causing personal body dissatisfaction and decreasing motivation towards physical-sports practice (Budzynski-Seymour et al. 2021). Continuing with the relationship between age and psychological well-being, it is observed that participants who meet the physical activity criteria reflect a positive relationship, in contrast to those who do not. Given these findings, Veldema and Jansen (2019) suggest that the practice of physical activity has a positive impact on the physical and mental health of individuals, regardless of age, owing to the secretion of neurotransmitters during exercise. Regarding the relationships obtained for the practice of physical activity, a negative relationship is obtained with the media and with age. These results are similar to those obtained by Berry et al. (2020), who state that the media can have a negative impact on healthy lifestyles, encouraging behaviours that are detrimental to physical and mental health. Likewise, Haas et al. (2021) state that age plays a key role in the decrease in the time spent doing physical activity, since as age increases physical activity declines and more sedentary activities are prioritised.
Fig. 3 The structural equation for individuals who did not comply with the physical activity recommendations. Note: mass media pressure (MMP); Mediterranean diet (MD); physical activity (PA); psychological well-being (PWB); age (AGE)
Table 2 The structural model for individuals who did not comply with the physical activity recommendations. Note 1: RW, regression weights; SRW, standardised regression weights; SE, estimation error; CR, critical ratio. Note 2: MMP, mass media pressure; MD, Mediterranean diet; PA, physical activity; PWB, psychological well-being; AGE, age; association between variables (←). Note 3: p < 0.05 (*); p < 0.01 (**); p < 0.001 (***)
On the contrary, positive relationships are observed between the practice of physical activity and psychological well-being. In view of these results, Muntaner-Mas et al. (2020) affirm that the practice of physical activity has a beneficial effect on mental health owing to the secretion of neurotransmitters such as serotonin and dopamine (Alghadir et al. 2020), as well as the improvement of participants' mental image of themselves (Núñez et al. 2021).
In relation to adherence to the Mediterranean diet, a positive relationship with age is evident. In view of these results, Gensous et al. (2020) state that during adolescence there is low adherence to a healthy dietary pattern; however, once this phase of human development has been overcome, adherence to positive eating habits improves. Likewise, a positive relationship is also observed between adherence to the Mediterranean diet and the practice of physical activity; Melguizo-Ibáñez et al. (2020) argued that there is greater awareness of healthy lifestyles, in both the nutritional and physical-sports spheres, from an early age, owing to the physical and mental benefits they provide. At the same time, a positive relationship is also observed between adherence to the Mediterranean diet and psychological well-being. These results are similar to those obtained by Marchena et al. (2020); Trigueros et al. (2020a) affirm that healthy food intake has a positive impact on the control of disruptive states such as anxiety, depression and stress, as well as an improvement in physical self-concept (Zurita-Ortega et al. 2018). In contrast, a negative relationship is obtained between adherence to the Mediterranean diet and mass media pressure, these results being similar to those obtained by Marfil-Carmona et al. (2021). In view of these findings, Zou et al. (2020) argue that the media have a powerful reach, but Spadine and Patterson (2022) point out that the nutritional messages they convey are mostly erroneous, with a predominance of advertisements for unhealthy and pre-cooked products, as well as a bias towards diets based on calorie imbalance. Limitations and future perspectives The current research has a series of limitations. Owing to its cross-sectional design, it only allows the relationships between the variables to be studied at a single moment in time and cannot establish causal relationships over a longitudinal period. Furthermore, the sample of participants belongs to a very specific geographical area; therefore, the findings cannot be generalised to the wider national population. It should also be noted that, despite the use of reliable and validated questionnaires for data collection, these are self-report instruments and therefore carry an intrinsic risk of error. With a view to future perspectives, and in light of the results obtained, we intend to develop a longitudinal study examining the short- and medium-term effects of media pressure on the variables presented here. Conclusions In general, acceptable fit values were obtained for the structural equation models. The present study shows the relationships between media pressure, psychological well-being and health. Based on the models developed, it can be seen that for those who claim to meet the WHO physical activity criteria, there is a negative relationship between psychological well-being and media pressure. Likewise, negative associations of physical activity with media pressure and with age are also evident. A negative relationship is also observed between adherence to the Mediterranean diet and media pressure. In contrast, positive relationships were found between age and psychological well-being, physical activity and psychological well-being, Mediterranean diet and age, Mediterranean diet and physical activity, and between Mediterranean diet and psychological well-being.
Focusing attention on the model of those participants who claim not to meet the WHO physical-sports criteria, negative relationships are obtained between psychological well-being and media pressure, between age and psychological well-being, and between media pressure and physical activity practice. Likewise, negative relationships are also observed between physical activity practice and age, and between media pressure and adherence to the Mediterranean diet. Finally, positive relationships are shown between psychological well-being and physical activity practice, between adherence to the Mediterranean diet and age, between physical activity practice and the Mediterranean diet, and between psychological well-being and adherence to a healthy dietary pattern. Availability of data and material Not applicable. Code availability Not applicable. Declarations Ethics approval This research was approved by the University of Granada ethics committee (1230/CEIH/2020). Consent to participate Informed consent has been obtained from all study participants. Consent for publication All authors give their consent to publish the data obtained. All authors have read and agreed to the published version of the manuscript. Conflict of interest The authors declare that they have no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Superconducting states in ferromagnetic metals The symmetry of the superconducting states arising directly from ferromagnetic states in crystals with cubic and orthorhombic symmetries is described. The symmetry nodes in the quasiparticle spectra of such states are pointed out where they exist. The superconducting phase transition in a ferromagnet is accompanied by the formation of a superconducting domain structure consisting of complex conjugate states imposed on the ferromagnet domain structure with opposite directions of the magnetization in adjacent domains. The interplay between stimulation of a nonunitary superconducting state by the ferromagnetic moment and suppression of the superconductivity by the diamagnetic orbital currents is established. I. INTRODUCTION A new class of superconducting materials has been revealed very recently, in which the superconducting state appears from another ordered state of the material, namely the ferromagnetic state. There are now several metallic compounds demonstrating the coexistence of superconductivity and itinerant ferromagnetism: UGe2 (refs 1, 2), ZrZn2 (ref. 3) and URhGe (ref. 4). The superconducting states in these materials should preferably be spin triplet in order to avoid the large depairing influence of the exchange field. Moreover, it seems reasonable 5 that these are states in which only electrons with the spin-down direction are paired, as is the case in the A1-phase of superfluid He-3 6. Then the interaction between the ferromagnetic and Cooper pair magnetic moments stimulates the superconducting state. An explanation of the phase diagram of ZrZn2 based on this idea has been proposed 7. At first sight this seems plausible because ZrZn2 has a cubic crystalline structure allowing multicomponent unconventional superconducting states with spontaneous magnetization. In contrast, the first discovered ferromagnetic superconductor UGe2 has an orthorhombic structure. The orthorhombic point group has only one-dimensional representations, which prevents the formation of a superconducting state with spontaneous magnetization in crystals with strong spin-orbital coupling as a result of a spontaneous phase transition from the normal state 8. However, I. Fomin has recently shown 9 that magnetic superconducting phases may arise from the normal ferromagnet state even in an orthorhombic crystal with strong spin-orbital coupling. This means that in this case the stimulation of superconductivity by ferromagnetism also takes place. The goal of this article is to present a detailed analysis of the problem of the interaction of triplet-pairing superconductivity with magnetization in ferromagnetic metals. To investigate this problem one must first have a symmetry classification scheme for the superconducting states arising from the ferromagnetic normal state. The point is that the classification of unconventional superconducting states arising from a nonmagnetic normal state, established in the papers 10-12, does not include the new ferromagnet-superconducting states arising from a normal state with broken time reversal symmetry. So, the discussion of the interplay between the stimulation of superconductivity by the ferromagnetism of itinerant electrons and the suppression of superconductivity by diamagnetic currents must be preceded by a symmetry classification of the possible triplet superconducting states arising directly from a normal ferromagnet state in crystals with an inversion centre.
All superconducting magnetic classes in crystals with orthorhombic (Section 2) and cubic symmetry (Section 3) are described and the corresponding superconducting order parameters are presented. The existing symmetry nodes in the spectra of the elementary excitations are pointed out. It is shown that in the superconducting state the ferromagnetic domain structure, with opposite directions of magnetization in adjacent domains, causes the appearance of a superconducting domain structure with complex conjugate order parameters and opposite directions of the Cooper pair magnetic moments. A. Superconducting states Let us consider first a ferromagnetic orthorhombic crystal with spontaneous magnetization along one of the twofold symmetry axes, chosen as the z-direction. The symmetry group G consists of the so-called magnetic class 13 M and the group of gauge transformations U(1). Any magnetic superconducting state arising directly from this normal state corresponds to one of the subgroups of the group G characterized by broken gauge symmetry. In the given case M is equal to D 2 (C z 2 ) = (E, C z 2 , RC x 2 , RC y 2 ), where R is the time reversal operation. Let us look first at the subgroups of G isomorphic to the initial magnetic group D 2 (C z 2 ) and constructed by combining its elements with the phase factor e iπ , an element of the group of gauge transformations U(1). The explicit forms of these classes are (E, C z 2 , RC x 2 , RC y 2 ); (E, C z 2 , e iπ RC x 2 , e iπ RC y 2 ); (E, e iπ C z 2 , RC x 2 , e iπ RC y 2 ); and (E, e iπ C z 2 , e iπ RC x 2 , RC y 2 ), denoted below as the classes (2)-(5) of the states A 1 , A 2 , B 1 and B 2 , respectively. The superconducting states are characterized by broken gauge symmetry. At the same time, the phase transition from the normal paramagnetic state with symmetry D 2 × U(1) to the normal ferromagnetic state with symmetry D 2 (C z 2 ) × U(1) preserves the gauge symmetry. That is why the phase transition from a normal paramagnetic state to a normal ferromagnetic state and that from a normal ferromagnetic state to a superconducting ferromagnetic state cannot have the same origin, contrary to the statement in the paper 9. As a result, the corresponding phase transition lines may intersect each other only accidentally at isolated points in the (P, T) plane. In particular, there is no reason for these lines to coincide exactly at the quantum critical point at T = 0. In the existing orthorhombic compound UGe2 the ferromagnetism and superconductivity at T = 0 disappear simultaneously above a critical pressure of about 17.6 kbar; the low-temperature part of the para-ferro phase transition near this pressure is, however, of first order 2. To each of the superconducting magnetic classes corresponds an order parameter. All these vector (triplet) order parameters in a crystal with inversion center and strong spin-orbital coupling have the form Ψ(k) = f x (k) x̂ + f y (k) ŷ + f z (k) ẑ, where x̂, ŷ, ẑ are the unit vectors of the spin coordinate system pinned to the crystal axes and f x (k), . . . are odd functions of the momentum directions of the pairing particles on the Fermi surface. The functions Ψ(k) for each superconducting state obey a normalization condition ⟨|Ψ(k)| 2 ⟩ = 1, where the angular brackets denote averaging over k directions. The general form of the order parameters for the states (2)-(5) has been pointed out in the paper 9. We write them here in a somewhat different form, equations (9)-(12), where u A 1 , . . . are real functions of k 2 x , k 2 y , k 2 z . It is worth noting that the state Ψ A2 transforms as iΨ A1 * and the state Ψ B2 transforms as iΨ B1 *. From the expressions for the order parameters (9)-(12) one can conclude that the states A and B have in general no symmetry nodes in the quasiparticle spectrum.
Only occasional nodes appear for particular forms of the functions u A 1 , . . . The classification of states in quantum mechanics corresponds to the general statement by E. Wigner that different eigenvalues are related to sets of eigenstates belonging to different irreducible representations of the symmetry group of the Hamiltonian. In particular, in the absence of time inversion symmetry violation, superconducting states relating to nonequivalent irreducible representations of the point symmetry group of a crystal have different critical temperatures. Similarly, the eigenstates of particles in ferromagnetic crystals are classified in accordance with the corepresentations Γ of the magnetic group M of the crystal 14. The latter differ from usual representations by the law of multiplication of the representation matrices, which is Γ(g 1 )Γ(g 2 ) = Γ(g 1 g 2 ) for elements g 1 , g 2 of the group M if the element g 1 does not include the time inversion operation, and Γ(g 1 )Γ * (g 2 ) = Γ(g 1 g 2 ) if the element g 1 does include the time inversion. The matrices of transformation of the order parameters (9)-(12) under the symmetry operations of the group D 2 (C z 2 ) = (E, C z 2 , RC x 2 , RC y 2 ) are just numbers (characters). As usual for one-dimensional representations, they are equal to ±1. For the state A 1 (9), which is a conventional superconducting state with the complete point-magnetic symmetry of the initial normal state, they are (1, 1, 1, 1). For the order parameter A 2 (10) they are (1, 1, −1, −1), where −1 corresponds to the elements of the superconducting symmetry class (3) containing the phase factor e iπ . The same holds for the character tables of the other states. So all the corepresentations in the present case are real; however, their difference from usual representations manifests itself in the relation of equivalence. Two corepresentations of the group M are called equivalent 15 if their matrices Γ(g) and Γ ′ (g) are transformed into each other by means of a unitary matrix U as Γ ′ (g) = U −1 Γ(g)U if the element g does not include the time inversion, and as Γ ′ (g) = U −1 Γ(g)U * if the element g includes the time inversion. The corepresentations for the pair of states A 1 and A 2 are equivalent. In view of the one-dimensional character of these corepresentations, the matrix of the unitary transformation is simply the number U = i. The states A 1 and A 2 belong to the same corepresentation and represent two particular forms of the same superconducting state. It will be shown below that if we have the state A 1 in the ferromagnet domains with the magnetization directed up, the superconducting state in the domains with the down direction of the magnetization corresponds to the superconducting state A 2 . The same is true for the pair of states B 1 and B 2 . The critical temperatures of the phase transitions from a ferromagnetic normal state to superconducting states relating to nonequivalent corepresentations are in general different. The latter is guaranteed by the orthogonality of the order parameters relating to nonequivalent corepresentations: ⟨Ψ Γ (k)Ψ * Γ ′ (k)⟩ = 0. The critical temperatures of the equivalent states A 1 and A 2 in ferromagnetic domains with opposite orientations of magnetization are equal (see below).
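As a worked illustration (ours, using only the characters and the equivalence criterion quoted above), one can check explicitly that U = i maps the A 1 characters onto the A 2 characters:

% Worked check (ours) of the equivalence criterion quoted above, with U = i.
% Unitary elements g (here E and C_2^z):
\[
\Gamma'(g) = U^{-1}\,\Gamma(g)\,U = (-i)\,\Gamma(g)\,(i) = \Gamma(g),
\]
% Antiunitary elements g (here RC_2^x and RC_2^y):
\[
\Gamma'(g) = U^{-1}\,\Gamma(g)\,U^{*} = (-i)\,\Gamma(g)\,(-i) = -\Gamma(g).
\]
% Hence the A_1 characters (1,1,1,1) on (E, C_2^z, RC_2^x, RC_2^y) map to
% (1,1,-1,-1), which are precisely the A_2 characters.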
B. Stimulation of superconductivity by ferromagnetism All the superconducting phases listed above are in principle nonunitary and possess a Cooper pair spin moment, equation (14), proportional to |f − | 2 − |f + | 2 , where f ± = f x ± if y , as well as a Cooper pair angular momentum, equation (15). The spontaneous Cooper pair magnetic moment, as is clear from (14), is proportional to the difference in the densities of populations of pairs with spin up and spin down. In superfluid He-3 in the A 1 -state, only Cooper pairs with spin down are present. Their magnetic moments interact with the external field, giving rise to an increase of the critical temperature of the phase transition to the superfluid state. In ferromagnetic metals with strong spin-orbital coupling there are Cooper pairs with any projection of the total spin. However, the dependence of the critical temperature of the superconducting phase transition on the ferromagnet magnetization also exists. On the microscopic level this dependence originates from the difference of the pairing interaction and of the density of states on the Fermi surfaces for particles with opposite spin projections (see below). Here one needs to note that, as usual, the word "spin" in fact means "pseudospin" and is used to denote the Kramers double degeneracy of electron states in a metal with spin-orbital coupling. On the phenomenological level the shift of the critical temperature can be described by a term in the Landau free energy expansion in which N 0 is the electron density of states on the Fermi surface, μ B is the Bohr magneton, and the function f(x) ∼ x at small values of its argument. The magnetic field consists of the exchange field H ex and the external magnetic field H ext . Let us stress a very important difference between these fields. H ex is frozen into the crystal: it is transformed by any operation of the point symmetry group and is completely invariant under the transformations of the magnetic symmetry class D 2 (C z 2 ). H ext is not tied to these transformations but, like any magnetic field, changes sign under time inversion. The exchange field acting on the electron spins stimulates the nonunitary superconducting state. The resulting enhancement of the critical temperature can be estimated as in (18). The exchange field determines the relative shift of the Fermi surfaces for the spin-up and spin-down quasiparticles. One can estimate the value of this field for UGe2 from its Curie temperature. Taking into account that at temperatures lower than ≈ 20 K the phase transition into the ferromagnet state becomes first order 16, one can say that H ex lies in the interval ≈ (20 T, 40 T) over the whole interval of pressures where superconductivity exists. Unlike in He-3, in ferromagnetic superconductors the magnetic field also acts, through the electron charges, on the orbital electron motion to suppress the superconducting state. The reduction of the critical temperature due to the orbital effect is given by (19). The electromagnetic field H em acting on the electron charges is determined by the modulus of the sum of the vectors of the external magnetic field and the dipole field of the material's own ferromagnet magnetic moment. The latter is much smaller than H ex . In the absence of an external field one can estimate the value of H em by the value of the magnetic moment density, which in UGe2 is of the order of 1 kG 1,2.
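For reference, the standard nonunitary triplet-pairing relations invoked in this section can be written compactly as follows; this is textbook algebra restated in our notation, not a verbatim copy of the paper's numbered equations (14)-(15):

% Our compact restatement of the standard nonunitary triplet relations.
\[
\Psi(\mathbf{k}) = f_x\,\hat{x} + f_y\,\hat{y} + f_z\,\hat{z},
\qquad
\mathbf{S}(\mathbf{k}) \propto i\,\Psi(\mathbf{k}) \times \Psi^{*}(\mathbf{k}),
\]
% and, with f_{\pm} = f_x \pm i f_y, the z-component reduces to
\[
S_z(\mathbf{k}) \propto \tfrac{1}{2}\left( |f_{-}|^{2} - |f_{+}|^{2} \right),
\]
% i.e. the difference between the spin-down and spin-up pair populations.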
The estimations (18) and (19) show that the stimulation of a nonunitary superconductivity by ferromagnetism takes place under the condition (20). The interplay between the effects of stimulation and suppression of the critical temperature in ferromagnetic superconductors determines the phase diagrams of these materials 1-4. One can find confirmation of these qualitative estimates from the equation for the critical temperature of the superconducting phase transition 8, written for one of the superconducting states (9)-(12) in the frame of a particular model of pairing, equation (21), where g αβ = i(σσ y ) αβ and σ = (σ x , σ y , σ z ) are the Pauli matrices. The functions are related to the particular corepresentation and to its particular form Ψ Γ (9)-(12). For instance, for the A 1 case, v A 1 , . . . are real functions of r̂ 2 x , r̂ 2 y , r̂ 2 z . The normal metal electron Green functions are diagonal 2 × 2 matrices, and it is convenient to work with them by introducing suitable notation. Using the general form of the self-consistency equation (21) one can easily obtain the system of equations (26)-(28), in which the combinations g ± = g x ± ig y , g z and f ± = f x ± if y , f z correspond to the pairing interaction and the order parameter amplitudes with spins up-up, down-down and zero projection of the pair spin on the z-direction. The angular brackets denote averaging over the directions of the unit vector r̂, and the integral operator on the right-hand side is L̂. One must add to these equations the normalization condition (8). After finding the eigenfunction η(R) of the operator L̂, one must find the critical temperature from the condition that the determinant of the linear system of equations (26)-(28) for the amplitudes g * f vanishes. It is worth noting that, for an interaction in the form of the A 1 state, the order parameter can be chosen correspondingly as belonging to one of the A 1 or A 2 states. As a result, all the amplitudes g * f will be correspondingly real or imaginary, and we deal with a system of equations of third order. The same is correct for the B-type corepresentation. Then the appearance of the linear shift of the critical temperature due to the exchange field (18) follows trivially from the linear shifts of the amplitudes. To demonstrate the validity of the two latter relationships it is enough to look at the expression for the electron Green function in a normal metal with isotropic spectrum ξ(p) = ξ and isotropic g-factor g(p) = 1/2 (see 17): here p λ 0 is the Fermi momentum, λ = ↑, ↓ or +1, −1, v λ 0 = p λ 0 /m is the Fermi velocity on the corresponding sheet of the Fermi surface, v 0 = p 0 /m, and N 0 = mp 0 /2π 2 is the density of states for one spin projection. As for the second term on the right-hand side of equation (29): owing to the difference in the Fermi momenta for spin up and spin down, it contains fast-oscillating products of two Green functions and becomes negligibly small. The smallness of this term does not, however, result in the disappearance of the amplitude f z of the Cooper pair state with zero spin projection, because all three amplitudes f + , f − , f z obey the coupled linear equations (26)-(28). This fact is a direct consequence of the strong spin-orbital coupling. Unlike this, in superfluid He-3 all three amplitudes f + , f − , f z obey independent equations characterized by different critical temperatures 17, such that the amplitudes f + and f z are equal to zero at the critical temperature where the amplitude f − appears.
Generally speaking, the second term in (29) promotes the appearance of an oscillating solution of the form e iQ·R . On the other hand, the first and third terms in equation (29) make these oscillations unprofitable (oscillations in the order parameter decrease the critical temperature). In superconductors with s-pairing the appearance of a solution with nonvanishing Q, the so-called Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state 18,19, is possible for large enough values of H orb c2 in comparison with the paramagnetic limiting field 20. In superconductors with triplet pairing, when H ex ≫ H orb c2, one would not expect the appearance of an FFLO state. This question, however, demands a special investigation in the frame of a particular model of pairing interaction. C. Domain structures Let us assume now that we have the interactions (22)-(23) in the form corresponding to the A 1 and A 2 states, such that the corresponding functions v i A 1 and v i A 2 are equal. Let us fix the solutions of the two corresponding sets of equations (26)-(28) as relating to the A 1 and A 2 states. 2 Such a pair of states possesses Cooper pair magnetic moments of equal magnitude and opposite direction. It is easy to see that the corresponding equalities are obeyed; hence, if the state A 1 is the solution of the system of equations (26)-(28) with critical temperature T c , the state A 2 is also a solution of the system (26)-(28) with the opposite direction of H ex and the same critical temperature. This means that, if the pairing interaction in a ferromagnet with the up direction of H ex corresponds to the pure A 1 state, the superconducting state in ferromagnetic domains with the opposite orientation of magnetization will be A 2 . One can say that this is a consequence of the above-mentioned conjugacy between the states A 1 and A 2 . The same is true for the other pair of conjugate states, B 1 and B 2 . The ferromagnet domain structure with alternating up-down directions of the magnetization is thus always accompanied by a superconducting domain structure with alternating complex conjugacy of the order parameter and alternating up-down directions of the Cooper pair magnetic moment. The superconducting order parameter distribution in the vicinity of the domain wall between two adjacent domains demands a special investigation. It is quite natural that Abrikosov vortices, having in the A 1 -state some fixed direction of the current and flux, will have the opposite orientations of current and flux in the adjacent ferromagnet domain with the opposite direction of magnetization. III. SUPERCONDUCTIVITY IN FERROMAGNET METALS WITH CUBIC SYMMETRY Symmetric orientations of the magnetic moments in cubic crystals along the symmetry axes of the fourth or the third order give rise to a reduction of the initial cubic symmetry of the normal state to the magnetic classes D 4 (C 4 ) = (E, C 4 , C 2 , C 3 4 , RU x , RU y , RU ′ , RU ′′ ) and D 3 (C 3 ) = (E, C 3 , C 2 3 , RU 1 , RU 2 , RU 3 ), correspondingly 13. Let us look first at the tetragonal magnetic class. A. Tetragonal magnetic class D4(C4) As before, we first construct the groups isomorphic to the initial magnetic group D 4 (C 4 ) by combining its elements with e iπ and e ±iπ/2 phase factors from the group of gauge transformations U(1). The explicit forms of these superconducting magnetic classes include D 4 (E) = (E, e 3iπ/2 C 4 , e iπ C 2 , e iπ/2 C 3 4 , e iπ RU x , RU y , e iπ/2 RU ′ , e 3iπ/2 RU ′′ ).
The order parameters of the superconducting states corresponding to these classes are given by expressions (39)-(44), where u A1 1 , . . . are real functions of k 2 x + k 2 y and k 2 z . As in the orthorhombic case, the states A 1 , A 2 and B 1 , B 2 represent pairs of equivalent corepresentations. The other two superconducting states, E + and E − , are related to nonequivalent corepresentations. In total there are four different superconducting states. The states A and E ± have no symmetry zeros in the quasiparticle spectra. Only the states of B type have symmetry points of zeros lying at the northern and southern poles of the Fermi surface, k x = k y = 0. This is easy to see directly from the expressions (39)-(44). Again, in alternating ferromagnet domains with opposite directions of the magnetization there is an alternating sequence of A 1 and A 2 , or B 1 and B 2 , or E + and E − states. As for the latter pair of states, one can check this statement directly from the system of equations (26)-(28). B. Trigonal magnetic class D3(C3) The groups isomorphic to the initial magnetic group D 3 (C 3 ) are constructed by combining its elements with the elements e iπ and e ±2iπ/3 of the gauge group U(1). That yields the superconducting magnetic classes of symmetry in which the elements U 1 , U 2 , U 3 are rotations by the angle π around the three twofold axes. The corresponding order parameters are Ψ E+ (k) = (φ 1 + e 2iπ/3 φ 2 + e −2iπ/3 φ 3 )[iẑ u E+ 1 + k z (k x ŷ − k y x̂)u E+ 2 ] + . . . (51), where u A1 1 , . . . are real functions invariant under the transformations of the D 3 group. As before, the A 1 and A 2 states correspond to equivalent corepresentations. The states E ± are related to nonequivalent representations. None of these states has symmetry nodes in the quasiparticle spectra. IV. CONCLUSION The symmetry classifications of the superconducting states with triplet pairing in orthorhombic and cubic ferromagnet crystals with strong spin-orbital coupling are presented. It is found that, unlike the case of weak spin-orbital interaction, where nonunitary magnetic superconducting states are possible only in the case of multicomponent superconductivity 8, any superconducting state in ferromagnet metals with strong spin-orbital coupling is in general nonunitary. The ferromagnetism in general stimulates the triplet superconductivity, even with a one-component order parameter. The mechanism of this stimulation is the difference of the pairing interaction and of the density of states for electrons with opposite spin directions, that is, the degree of ferromagnetic Fermi-liquid polarization. However, the competing mechanism suppressing superconductivity through the orbital diamagnetic currents is always present. The interplay between these two interactions determines the superconducting phase diagram of metallic ferromagnet materials. The presence of the ferromagnet domain structure in the superconducting state is always accompanied by a corresponding superconducting domain structure of complex conjugate states. Adjacent domains contain quantized vortices with opposite directions of currents and fluxes. V. ACKNOWLEDGMENTS I am grateful to I. A. Fomin and A. D. Huxley for numerous helpful discussions and to D. Braithwaite for valuable help. In particular, I would like to acknowledge the strong influence of K. V. Samokhin in reaching the present formulation.
The correlation between immune-related adverse events and efficacy of immune checkpoint inhibitors Abstract Immune checkpoint inhibitors have revolutionized cancer treatment by targeting cytotoxic T lymphocyte antigen-4 and programmed death-1/ligand-1. Although immune checkpoint inhibitors show promising therapeutic efficacy, they often cause immune-related adverse events. Immune-related adverse events differ from the side effects of conventional chemotherapy and require vigilant monitoring. These events predominantly affect organs such as the colon, liver, lungs, pituitary gland, thyroid and skin, with rare cases affecting the heart, nervous system and other tissues. As immune-related adverse events result from immune activation, indicating the reinvigoration of exhausted immune cells that attack both tumors and normal tissues, it is theoretically possible that immune-related adverse events may signal a better response to immune checkpoint inhibitor therapy. Recent retrospective studies have explored the link between immune-related adverse event development and clinical efficacy; however, the predictive value of immune-related adverse events in the immune checkpoint inhibitor response remains unclear. Additionally, studies have focused on the type of immune-related adverse event, timing of onset and immunosuppressive treatments. This review focuses on pivotal studies of the association between immune-related adverse events and outcomes in patients treated with immune checkpoint inhibitors. Introduction Immune checkpoint inhibitors (ICIs) have ushered in a new era of cancer therapy, significantly reshaping the treatment landscape for patients with advanced malignancies. These agents, including monoclonal antibodies targeting cytotoxic T lymphocyte antigen-4 (CTLA-4), programmed cell death protein 1 (PD-1) and programmed death-ligand 1 (PD-L1), have shown remarkable efficacy, particularly in solid tumor treatment. By disrupting the mechanisms that enable malignancies to evade immune surveillance, ICIs reactivate the T cell immune response against cancer, offering a promising approach for patients with advanced or metastatic disease (1,2). However, the immune system activation induced by ICI treatment can lead to immune-mediated inflammation in various organs and tissues, with a particular susceptibility observed in the skin, endocrine system, gastrointestinal tract and lungs (3)(4)(5). Immune-related adverse events (AEs) (irAEs) significantly affect treatment efficacy. Although many irAEs tend to be mild and resolve spontaneously, they pose potentially life-threatening risks in a few severe cases (6,7). Given that irAEs result from immune activation, a pertinent question arises: can the development of irAEs serve as a predictor of ICI therapy response? Numerous studies have explored the relationship between irAEs and treatment efficacy, particularly in patients with melanoma and non-small cell lung cancer (NSCLC). Nevertheless, the association between irAEs and ICI efficacy is multifaceted, with critical factors such as irAE type, severity, timing of onset and management potentially influencing treatment outcomes. Moreover, potential confounding factors must be considered when evaluating the association between the occurrence of irAEs and survival. Patients with longer survival times may exhibit a higher incidence of irAEs owing to prolonged therapeutic agent exposure. Consequently, the validity of irAEs as surrogate markers of ICI treatment efficacy remains unclear. To address these pivotal questions
and provide a comprehensive overview of the current state of knowledge, this review focuses on previously published studies that investigated the association between irAEs and treatment efficacy. We searched PubMed for terms relevant to the theme of our study. Biological mechanisms of irAEs IrAEs are a spectrum of inflammatory responses against normal tissues triggered by the heightened activity of the immune system under ICIs. The mechanisms underlying irAEs are complex and involve multiple components of the immune system. The activation of self-reactive T cells is one mechanism that can induce irAEs. This is exemplified by myocarditis cases in which T cell infiltration into the myocardium is observed, suggesting a mechanism of shared antigen recognition between tumors and affected organs (8,9). Vitiligo occurs most commonly in patients with melanoma and is likely related to immune reactivity to melanosomal antigens (10). The expansion of T cell clones in the systemic circulation has also been linked to irAEs (11). Patients with elevated pretreatment levels of central memory T cells originating from CD4+ or CD8+ lymphocytes are predisposed to severe irAEs (12). Furthermore, the activation of B cells and the increased production of autoantibodies contribute to the occurrence of irAEs. Several studies have found an association between the presence of autoantibodies and the development of irAEs (13,14). Direct toxicity in normal tissues expressing checkpoint proteins is another facet of ICIs. Studies on hypophysitis have revealed that CTLA-4 expression in normal pituitary cells may enhance the toxicity associated with anti-CTLA-4 therapy (15,16). Evidence also indicates that inflammatory cytokines and the microbiome play a role in the development of irAEs. For example, elevated baseline levels of IL-17 have been significantly correlated with the later development of severe colitis (17). In a prospective study on advanced melanoma, patients with severe irAEs had a higher pre-treatment fecal abundance of Bacteroides intestinalis than those without irAEs (18). Although the mechanisms underlying irAEs have not been fully elucidated, understanding them and identifying predictive biomarkers for irAEs remain critical for optimizing patient outcomes. Renal cell carcinoma In the context of renal cell carcinoma (RCC), several studies have focused on irAEs associated with nivolumab or nivolumab + ipilimumab combination therapy. Gastrointestinal cancer There is evidence for an association between irAEs and treatment efficacy in patients with gastrointestinal cancer (GC). Meta-analysis Meta-analyses have been conducted to determine the association between irAEs and treatment efficacy. In the present review, associations by geographic region were also considered. It is noteworthy that, in cases where irAEs occurred, Asian patients showed better OS and PFS than patients from North America and Europe. However, contradictory results have been reported. Amoroso et al. conducted an analysis focusing only on randomized trials of ICIs (54). The authors examined 62 trials, employing a weighted linear regression on a logarithmic scale. In contrast to numerous previous studies, this study showed low-strength correlations between irAEs and OS across different cancer types. The authors concluded that the incidence of irAEs should not be considered a valid surrogate for OS when evaluating the efficacy of ICI therapy.
Relationship between each irAE type and treatment efficacy In addition to the association between irAEs and treatment efficacy overall, several studies have investigated the relationship between irAE type and treatment outcomes. One meta-analysis examined gastrointestinal irAEs (58): gastrointestinal irAEs occurred in 5.6 and 16.1% of patients in the PD-1/PD-L1 and CTLA-4 groups, respectively, with a significantly higher incidence in the CTLA-4 group (P = 0.008). Patients who continued ICI treatment despite the development of gastrointestinal irAEs had significantly prolonged OS compared with those who did not experience gastrointestinal irAEs (P = 0.035). Yamamoto et al. explored the association between severe liver irAEs and prognosis in patients with NSCLC (59). Among 365 patients who received ICI treatment, 19 experienced severe liver irAEs. Patients with liver irAEs had significantly longer PFS (P = 0.010) and OS (P = 0.007) than those without liver irAEs. Relationship between timing of irAE onset and treatment efficacy Several studies have explored the relationship between the timing of irAE onset and treatment efficacy. Hsiehchen et al. conducted an analysis focusing on the timing of irAE onset and its association with treatment efficacy in patients with NSCLC treated with PD-1/PD-L1 inhibitors (26). The authors examined two independent cohorts: one comprising 154 patients from a single institution, and a multicenter cohort of 433 patients. In both cohorts, late-onset irAEs occurring >3 months after the initiation of ICI treatment were associated with higher rates of radiographic response, as well as longer PFS and OS. Multivariable Cox regression analysis in the single-institution cohort demonstrated that a longer time to irAE onset was associated with extended PFS (HR, 0.93; 95% CI, 0.87-0.99; P = 0.03) and OS (HR, 0.90; 95% CI, 0.83-0.98; P = 0.02). Contradictory results were reported by Ghisoni et al. (60), who reviewed all published registered trials on NSCLC and melanoma, involving 622 patients, that led to the approval of ICIs by the FDA and EMA by December 2019. The cumulative probabilities of irAE onset after treatment initiation were 42.8, 51.0 and 57.3% at 6, 12 and 24 months, respectively. A time-dependent model was used to account for the immortal time bias introduced by late-onset toxicities. In both cohorts (NSCLC and melanoma), there were no significant associations between the incidence of irAEs and OS (P = 0.67 and 0.19, respectively). Relationship between immunosuppression therapy and treatment efficacy The management of immunotherapy toxicities is based on the recommendations proposed in the guidelines of the American Society of Clinical Oncology and the European Society for Medical Oncology (61,62). For the treatment of moderate to severe irAEs, a series of therapies targeting T cells, B cells, cytokines and autoantibodies is recommended in addition to steroids. Steroids are generally prescribed as the first-line treatment for irAEs, as recommended by current guidelines. However, the potential impact of steroids on the prognosis after irAEs has been reported, and several studies have been conducted on this topic (Table 2). Stribek et al.
(63) conducted a study involving 196 patients with NSCLC treated with ICIs and investigated the correlation between steroid administration, irAEs and outcomes. Steroid administration, defined as the use of >10 mg prednisolone equivalent for 10 days, was observed in 46.3% of patients. The results showed that steroid administration owing to irAEs did not significantly affect OS compared with steroid-naïve patients (P = 0.38). Bai et al. (64) conducted a retrospective study of 947 patients with melanoma treated with anti-PD-1 monotherapy and examined the impact of high-dose glucocorticoids on outcomes. The authors considered a peak dose of ≥60 mg prednisone equivalent once a day as the threshold for high-dose glucocorticoid use and used the endpoints of post-irAE PFS/OS. The study revealed that early-onset irAEs (within 8 weeks of anti-PD-1 initiation) treated with high-dose glucocorticoids were independently associated with poorer post-irAE PFS (HR, 5.37; 95% CI, 2.10-13.70; P < 0.001) and post-irAE OS (HR, 5.95; 95% CI, 2.20-16.09; P < 0.001) in the 8-week landmark analysis. Van Not et al. (65) reported an association between the use of immunosuppressants for irAEs and prognosis in 771 patients with advanced melanoma treated with first-line ipilimumab and nivolumab. Among the 350 patients who required immunosuppression owing to severe irAEs, 235 received steroids alone and 115 received steroids in combination with second-line immunosuppressants. The most common irAEs were colitis and hepatitis. After adjusting for potential confounding factors, patients treated with steroids along with second-line immunosuppressants tended to have a higher risk of disease progression (adjusted HR, 1.40; 95% CI, 1.00-1.97; P = 0.05) and a greater risk of mortality (adjusted HR, 1.54; 95% CI, 1.03-2.30; P = 0.04) compared with patients treated with steroids alone. Several other studies have reported an association between steroid therapy for irAEs and treatment efficacy; however, the results are inconsistent (32,66,67). Given that the severity of irAEs and the continuation of ICI treatment can also influence outcomes, multiple factors are involved in treatment efficacy. Considering that the use of steroids and immunosuppressive agents may affect treatment efficacy, appropriate treatment plans need to be developed.
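The dose thresholds used in these two studies can be expressed as a small helper. This is an illustrative sketch with assumed field names, not code from the cited works:

# Illustrative helper (ours): flag steroid exposure using the thresholds
# described above. The record format is an assumption for the sketch.

def steroid_exposure_flags(daily_mg_equivalent):
    """daily_mg_equivalent: list of daily glucocorticoid doses
    (prednisolone/prednisone equivalents, per refs 63 and 64)."""
    days_over_10mg = sum(1 for d in daily_mg_equivalent if d > 10)
    peak = max(daily_mg_equivalent, default=0)
    return {
        "steroid_use": days_over_10mg >= 10,  # >10 mg for 10 days (ref. 63)
        "high_dose": peak >= 60,              # peak >=60 mg/day (ref. 64)
    }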
Rechallenge of ICIs Whether to rechallenge with ICIs after the onset of irAEs should be determined individually based on the treatment efficacy, type of symptoms and severity. Rechallenge is possible once toxicity has resolved to Grade 1 or below, but for Grade 2 and above, the appropriateness of resuming treatment or permanently discontinuing it varies depending on the type of symptoms. However, even after severe irAEs, resuming administration may contribute to a prolonged prognosis, and there have been several reports on the efficacy and safety of rechallenge. In a study that rechallenged 452 patients with irAEs with ICIs, 28.8% experienced recurrence of the initial irAEs. The symptoms with a higher risk of recurrence were colitis (OR = 1.77), hepatitis (OR = 3.38) and pneumonitis (OR = 2.26) (68). In studies focusing on NSCLC, rechallenge with ICIs after Grade 2 or higher irAEs resulted in no occurrence of Grade 4 or 5 irAEs; however, recurrence of the same or de novo irAEs was reported in 60% of cases (69). There was no difference in prognosis between the rechallenge and non-rechallenge groups. The safety of resuming anti-PD-1 antibody therapy in patients with melanoma who discontinued anti-CTLA-4 and PD-1 blockade combination therapy because of irAEs was also assessed (70). Rechallenge resulted in 50% of patients developing irAEs, whereas the occurrence of the same irAEs as with combination therapy was 18%. Only one case of Grade 5 toxicity was observed. Considering the less severe toxicity after combination therapy followed by monotherapy rechallenge, this approach may be worth considering. Variations in toxicity with ICI combinations As previously discussed, the correlation between irAEs and therapeutic efficacy has been reported not only with ICI monotherapy but also with ICI combination therapies, as well as with combinations of tyrosine kinase inhibitors (TKIs) or cytotoxic agents. However, caution is warranted in therapeutic management because of the increased frequency of irAEs associated with combination therapies. Meta-analyses have demonstrated that the combination of anti-PD-1 and anti-CTLA-4 antibodies is associated with an increased risk of irAEs (71). Caution should be exercised even for rare events, because the incidence of myocarditis and neurotoxic events is higher with ICI combination therapy than with ICI monotherapy (72,73).
Additionally, an increase in fatal irAEs, such as myocarditis, has been reported (7). It is also recognized that the addition of TKIs or cytotoxic agents elevates the severity and frequency of treatment-related AEs compared with ICIs alone (74). A meta-analysis showed that the incidence of all-grade diarrhea following ICI treatment combined with TKIs or cytotoxic agents was higher than that following ICI monotherapy (75). Several analyses focusing on organ-specific irAEs following combination therapy with ICIs and TKIs have been reported. In a study involving 20 516 patients, the frequency of pneumonia was 25.7% with combination therapy of nivolumab and TKIs, 4% with nivolumab monotherapy and 4.6% with TKI monotherapy (76). In another study involving 106 patients with various cancers, the frequency of thyroid function abnormalities with combination therapy of ICIs and TKIs was 63.2%, including 10.4% with hyperthyroidism, 39.6% with subclinical hypothyroidism and 13.2% with overt hypothyroidism (77). Combination therapy with ICIs and TKIs may have a different AE profile than ICI monotherapy and should be approached with caution. Conclusion The association between irAEs and treatment efficacy has been the subject of extensive research, particularly in the context of ICIs across various malignancies. In this review, we focused on organ-specific data owing to their accessibility; however, considering the mechanisms of irAEs, cross-organ analysis is also necessary. Reports have indicated a correlation between irAEs and prognosis regardless of the organ involved (78). Although most studies have suggested a positive association between irAEs and treatment efficacy, it is important to acknowledge that some reports have presented conflicting findings. It is crucial to consider potential confounding factors, especially when assessing late-onset irAEs, as only patients who survive longer may develop irAEs, potentially biasing the survival data. Many studies have attempted to address this bias using landmark analyses and Cox models with time-dependent variables. However, there are additional challenges in evaluating these studies. The diagnosis of mild irAEs can sometimes elude the attending physician, which can affect the analysis. For instance, symptoms such as nausea, vomiting, appetite loss, weight loss, general weakness and fatigue are challenging to diagnose owing to their nonspecific nature, potentially stemming from various conditions beyond irAEs alone. When ICIs are administered concurrently with chemotherapy, distinguishing between irAEs and the side effects commonly associated with chemotherapy, such as liver impairment, diarrhea and rashes, is particularly challenging. Therefore, when interpreting the results of these studies, these potential problems must be taken into account. Overall, irAEs show promise as potential biomarkers of ICI treatment efficacy. However, the underlying mechanisms and the impacts of specific irAE types, severity and timing of onset require further investigation.
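As an illustration of the landmark approach mentioned above, the following sketch (ours) uses the open-source lifelines package; the column names, the 8-week landmark and the data layout are assumptions for the example:

# Hedged sketch of a landmark analysis to mitigate immortal time bias.
import pandas as pd
from lifelines import CoxPHFitter

LANDMARK = 8 * 7  # an 8-week landmark, in days (our choice for the example)

def landmark_cox(df):
    """df columns (assumed): os_days, death (0/1), irae_onset_days (NaN if none)."""
    # Keep only patients still alive (at risk) at the landmark.
    at_risk = df[df["os_days"] > LANDMARK].copy()
    # Classify irAE status using events observed *before* the landmark only,
    # so that late-onset irAEs cannot leak survival information.
    at_risk["irae_by_landmark"] = (
        at_risk["irae_onset_days"].notna()
        & (at_risk["irae_onset_days"] <= LANDMARK)
    ).astype(int)
    # Reset the clock to the landmark.
    at_risk["os_from_landmark"] = at_risk["os_days"] - LANDMARK
    cph = CoxPHFitter()
    cph.fit(at_risk[["os_from_landmark", "death", "irae_by_landmark"]],
            duration_col="os_from_landmark", event_col="death")
    return cph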
Table 1. Relationship between irAEs and treatment efficacy. Bisschop et al. (36) investigated the relationship between pembrolizumab-related AEs and treatment efficacy in 147 patients treated with pembrolizumab, using prospectively collected data. Their analysis revealed that patients with AEs were more likely to achieve disease control than those without AEs in a multivariate logistic regression analysis (low-grade AEs vs. no AEs: OR = 12.8, P = 0.0002; high-grade AEs vs. no AEs: OR = 38.5, P = 0.0001). A retrospective multicenter study conducted in Japan investigated the correlation between irAEs and treatment efficacy in 150 patients with hepatocellular carcinoma (HCC) treated with atezolizumab plus bevacizumab (41). Among these patients, 21.3% developed irAEs, with 15.3% experiencing grade 1/2 and 6.0% experiencing grade 3/4 irAEs. The most common AEs were endocrine (7.3%) and skin (6.0%) disorders.
Table 2. Relationship between immunosuppressive therapy and treatment efficacy.
Mechanical and Static Stab Resistant Properties of Hybrid-Fabric Fibrous Planks: Manufacturing Process of Nonwoven Fabrics Made of Recycled Fibers With the development of technology, fibers and textiles are no longer used exclusively for clothing and decoration. Protective products made of high-strength and high-modulus fibers have been commonly used in different fields. When their service life is exceeded, protective products also need to be replaced. This study proposes a highly efficient recycling and manufacturing design to create added value from waste materials. On the premise of minimizing damage to the fibers, recycled selvages made of high strength PET fibers are reclaimed to yield high-performance staple fibers at low production cost. A large amount of recycled fiber is made into matrices in an attempt to decrease the consumption of new materials, while the combination of diverse plain woven fabrics reinforces the hybrid-fabric fibrous planks. First, with the aid of machines, recycled high strength PET fibers are processed into staple fibers. Using a nonwoven process, low melting point polyester (LMPET) fibers and PET staple fibers are made into PET matrices. Next, the matrices and different woven fabrics are combined in order to form hybrid-fabric fibrous planks. The test results indicate that both the PET matrices and the fibrous planks have good mechanical properties. In particular, the fibrous planks combine the diverse stab resistances of nonwoven and woven fabrics, and thus have greater stab performance. Introduction In recent years, fibrous and textile products have been applied to many fields other than garments and decoration. Due to their flexibility, light weight, and ease of processing, they can be easily combined with other materials. In addition to their intrinsic properties, they can also yield other required properties via the employment of different organizational structures and manufacturing processes. Hence, fibrous and textile materials have a great diversity of applications, from domestic and industrial purposes to custom-made special functionality requests. The amounts of selvages and textile wastes soar as a Materials Recycled high strength polyester (PET) selvages (Chien Chen Textile, Taiwan) have a fiber fineness of 1000D/192f, a fiber length of 40-65 mm, and a single-fiber strength of 8 g/d (Figure 1).
Both carbon and aramid plain woven fabrics were purchased from Jinsor-Tech Industrial Co., Taichung City, Taiwan, and their physical properties are listed in Table 1. Basalt woven fabrics (Yurak International, Taichung City, Taiwan) are composed of basalt fiber bundles in both the warp and weft directions, with a fineness of 2970 D and an areal density of 328 g/m2 (Table 1 and Figure 2). Low-melting-point PET (LMPET) staple fibers (Far Eastern New Century, Taiwan) have a fineness of 4 D and a length of 51 mm, and have a skin-core structure; the melting points of the skin and core are 110 °C and 265 °C. Method The principal material in this study is recycled high strength PET selvage. In general, the staple fibers used in nonwoven fabrics are wavy or crimped to increase inter-fiber friction. Waste selvages, however, are cut from the edges of woven fabrics made of filament (continuous) yarns and still retain the woven structure, so the fibers had to be broken up and dispersed before they could be reclaimed and reused. The high strength PET waste selvages were therefore processed with an opening machine into recycled PET staple fibers. The PET staple fibers were mixed with the low-melting point polyester (LMPET) fibers at ratios of 9:1, 7:3, and 5:5 to form high strength PET matrices on a needle punching machine (SNP120SH6, Shoou Shyng Machinery Co., Ltd., New Taipei City, Taiwan) at a punching speed of 200 needles/min and a line speed of 2.3 m/min. During needle punching, the needles are pressed in perpendicular to the fabric surface to laminate and bond the multilayer web or multilayer fabric together. Next, pure LMPET fibers were made into LMPET layers by the same nonwoven process; these serve as the adhesive layer between the PET matrix and the reinforcing woven fabric. Different reinforcing woven fabrics were used: basalt, carbon-fiber, and aramid plain woven fabrics. The sandwich-structured laminates were hot pressed into the hybrid-fabric fibrous planks at 130 °C, a line speed of 0.2 m/min and a pressure of 10 MPa (two-wheel hot press machine, CW-NEB, Chiefwell Engineering Co., Ltd., New Taipei City, Taiwan). The manufacturing process is shown in Figure 3 and Table 2.
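As a minimal illustration of the blending step, the sketch below computes the component masses implied by each PET/LMPET ratio; the target areal density and web area are assumed values for illustration, not figures reported in this study.

```python
# Hypothetical batch calculation for the three PET/LMPET blending ratios.
# The target areal density and web area are assumptions, not reported data.
ratios = {"9:1": (0.9, 0.1), "7:3": (0.7, 0.3), "5:5": (0.5, 0.5)}
areal_density = 300.0   # g/m^2, assumed target for the needle-punched web
area = 1.0              # m^2 of web to be laid

for name, (pet_frac, lmpet_frac) in ratios.items():
    total = areal_density * area
    print(f"{name}: {total * pet_frac:.0f} g recycled PET, "
          f"{total * lmpet_frac:.0f} g LMPET")
```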
Finally, the air permeability, tensile strength, tearing strength, bursting strength, and static stab resistance of the hybrid-fabric fibrous planks were tested to examine the influence of the content of recycled high strength PET fibers and of hot pressing. Air Permeability The air permeability of the hybrid-fabric fibrous planks was measured using an air permeability tester (TEXTEST FX3300) as specified in ASTM D737 (Standard Test Method for Air Permeability of Textile Fabrics). The sample size was 25 cm × 25 cm, and ten samples of each specification were tested to obtain the mean. Tensile Strength As specified in ASTM D5035, the tensile strength of the hybrid-fabric fibrous planks was measured at a crosshead speed of 300 mm/min using an Instron 5566 (Instron, Norwood, MA, USA). The distance between the pneumatic clamps was 75 mm. Six samples of each specification, sized 25.4 mm × 180 mm, were tested along the cross machine direction (CD) and the machine direction (MD). Tearing Strength The tearing strength of the hybrid-fabric fibrous planks was measured as specified in ASTM D5587. Samples were prepared according to the trapezoid method, with two equal altitudes and two parallel bases of 75 mm and 150 mm; the short base had a perpendicular cut 15 mm long in the center. The distance between the clamps was 25 mm and the test rate was 300 mm/min. Six samples of each specification were tested along the CD and MD to obtain the mean. Bursting Strength As specified in ASTM D3787, a universal tester (Instron 5566, Instron, Norwood, MA, USA) equipped with a 25.4-mm-diameter hemispherical probe was used to measure the bursting strength of the hybrid-fabric fibrous planks at a rate of 100 mm/min. Six samples (150 mm × 150 mm) of each specification were tested and the maximum bursting strength was recorded. Static-Stab Resistance Test The static puncture resistance of the samples was measured at a puncture rate of 508 mm/min using a universal strength testing machine (Instron 5566, Norwood, MA, USA) as specified in ASTM F1342. Samples measured 100 mm × 100 mm and the puncture probe diameter was 4.5 mm. Six samples of each specification were tested to obtain the average static puncture resistance, standard deviation, and coefficient of variation (as shown in Figure 4).
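For each specification, the reported statistics reduce to the mean, sample standard deviation and coefficient of variation over six replicate readings. A minimal sketch with made-up readings (not measured values from this study):

```python
import statistics

# Six hypothetical static-stab readings (N) for one plank specification;
# the study reports the mean, SD and coefficient of variation of six tests.
readings = [208.4, 215.1, 210.9, 214.2, 211.7, 215.3]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)   # sample standard deviation (n - 1)
cv = 100 * sd / mean              # coefficient of variation, %

print(f"mean = {mean:.1f} N, SD = {sd:.2f} N, CV = {cv:.2f} %")
```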
Mechanical Property of Recycled High Strength PET Matrices Table 3 shows the mechanical properties of the recycled high strength PET matrices, including tensile strength, tearing strength, and air permeability. The mechanical properties are discussed in terms of the employment of hot pressing, the content of recycled PET fibers, and the fiber orientation (i.e., the direction in which the majority of fibers are aligned). Except for the tensile load, hot pressing significantly influences the elongation, tearing strength, tearing elongation, and air permeability. The LMPET fibers melt during hot pressing to form thermal bonding points in the high strength PET matrices. These bonding points primarily stabilize the fabric structure and restrain the slip of fibers, as confirmed by the tensile elongation results for the matrices [22,23].
However, the thermal bonding points have a lower strength than the single fiber strength of the high strength PET fibers, so hot pressing hardly affects the tensile strength of the matrices. In the tearing strength test, the matrices have a large loaded area, which enables the thermal bonding points to resist the tearing force as well as restrain the slip of fibers; hot pressing therefore has a positive influence on the tearing strength and elongation of the matrices. In particular, matrices composed of more recycled high strength PET fibers exhibit greater tensile and tearing strengths. Because the PET fibers are gathered from PET woven selvages, they are less crimped; the resulting nonwoven fabrics (i.e., PET matrices) consequently have a low porosity and a compact structure, as confirmed by their low air permeability. Hot pressing creates a great number of thermal bonding points and decreases the thickness of the matrices, so the matrices have low air permeability due to their high fabric density and low porosity. Moreover, matrices composed of 50 wt% or 70 wt% recycled high strength PET fibers exhibit similar tensile and tearing performance, which is ascribed to the PET fibers undermining the synergistic effect with the highly crimped LMPET fibers. Since the hot pressed PET matrices exhibit greater mechanical properties overall, they are used in the following discussion. Figure 5 shows the mechanical properties of the high strength PET matrices as related to the fiber blending ratios. When the content of LMPET fibers is lower than 50 wt%, the resulting PET matrices exhibit greater tensile strength. Furthermore, the three plain woven fabrics were separately combined with the PET matrices for reinforcement. The hybrid-fabric fibrous planks containing basalt woven fabrics and carbon-fiber woven fabrics have comparable mechanical properties: basalt and carbon fibers have similar characteristics, and both are brittle and cannot be bent (cf. Figure 5). Hence, when a basalt or carbon-fiber plain woven fabric is used as the reinforcement, the fibrous planks exhibit similar trends in the tensile tests, and the maximum tensile strength occurs when the planks are composed of 70 wt% PET fibers. Notably, the planks containing aramid woven fabrics outperform those containing basalt or carbon-fiber woven fabrics in terms of tensile strength. Figure 6 shows the fractured images of the different woven fabrics: the basalt and carbon fibers broke, whereas the aramid fibers slipped. Furthermore, Figure 7 indicates that the fibrous planks containing aramid woven fabrics exhibit the highest tensile elongation, owing to the greater elongation of aramid fibers; as a result, these planks do not show a sudden drop in tensile strength when the aramid woven fabric is damaged. However, high modulus fibers are commonly coated with an oiling agent during spinning and weaving, which hampers the adhesion of the melted LMPET fibers. The thermal bonding effect of the LMPET fibers is therefore insignificant, and the aramid fibers slip to a greater extent. The test results show that HP9K, which contains the lowest content of LMPET fibers, has the maximum tensile strength. Figure 7. Elongation of hybrid-fabric fibrous planks as related to fiber blending ratios. Figure 8 shows the tearing strength and elongation of the hybrid-fabric fibrous planks as related to the fiber blending ratios. Samples were prepared with a perpendicular cut in the center beforehand. The test results show that the hybrid-fabric fibrous planks containing aramid woven fabrics have the maximum tearing strength. Figure 9 shows that planks that contain more LMPET and are not hot pressed exhibit low tearing elongation, which suggests that hot pressing creates thermal bonding points that prevent the slip of fibers and stabilize the structure. By contrast, both basalt and carbon-fiber woven fabrics consist of brittle fibers, and their breakage causes a sudden decrease in the tearing properties, which stops the test immediately. In particular, HP9K, consisting of 10 wt% LMPET fibers and 90 wt% recycled high strength PET fibers, exhibits the highest tearing strength. The high strength PET fibers are repeatedly combed and scattered to form staple fibers and then made into nonwoven fabrics; based on the test results, their tensile and tearing strengths are comparable to those of embossing fibers, which indicates that the recycling has real value [24][25][26].
Figure 10 shows the bursting strength of the hybrid-fabric fibrous planks as related to the fiber blending ratios. The planks that contain a higher content of recycled PET fibers have a higher bursting strength. The recycled PET fibers undergo the combing and carding processes repeatedly to form the staple fibers for the nonwoven fabrics. Unlike the staple fibers commonly used in nonwoven fabrics, the recycled PET fibers are neither crimped nor embossed; crimped or embossed fibers usually contribute higher friction and a more uniform distribution to nonwoven fabrics. In addition, the recycled PET fibers are chopped from complete bundles, which may leave more filling and oiling agent that hinders the subsequent combing and opening processes. Nevertheless, all the hybrid-fabric fibrous planks have comparable bursting strengths and coefficients of variation; only planks with an uneven fiber distribution or obvious fiber packing show scattered bursting strengths. Moreover, the planks containing a greater amount of recycled PET fibers demonstrate higher bursting strength, which suggests that the fibers are evenly distributed and confirms that recycling PET fibers for nonwoven production is effective. The test results show that HP9K, which contains the lowest content of LMPET fibers, has the maximum bursting strength, about 146.8% better than sample HP9.
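As a quick arithmetic check (a sketch only: the HP9K bursting strength of 215.9 kPa is taken from the Conclusions, while the HP9 value below is back-calculated rather than reported in this excerpt), the usual percent-improvement definition gives:

```python
# Back-of-envelope check of the reported ~146.8% improvement of HP9K over
# HP9 in bursting strength. The HP9 value is *implied*, not reported here.
hp9k = 215.9            # kPa, reported maximum bursting strength of HP9K
improvement = 146.8     # %, reported improvement over HP9

hp9 = hp9k / (1 + improvement / 100)
print(f"implied HP9 bursting strength ≈ {hp9:.1f} kPa")
# Inverse direction: (hp9k - hp9) / hp9 * 100 recovers ~146.8 %
```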
Figure 11 shows the static stab resistance of the hybrid-fabric fibrous planks as related to the fiber blending ratios. Consisting of 90 wt% recycled PET fibers in the PET matrix with a Kevlar woven fabric as reinforcement, P9K demonstrates the maximum static stab resistance. The test uses a pointed probe with a diameter of 4.5 mm; the planks resist the probe through the resistance of the fabric against its tip and the frictional resistance of the fibers against it, and the displacement of the planks is relatively small (Figure 12). The hybrid-fabric fibrous planks resist the pointed probe through the compact plain structure of the woven fabric as well as the melting and adhesion of a large amount of LMPET fibers; this design effectively stops the slip of fibers and increases the friction between the fibers and the probe. However, the recycled high modulus PET fibers are coated with an oiling agent, which prevents the LMPET fibers from forming an adhesive layer during hot pressing, so the interfacial bonding strength is low. The planks exhibit the highest static stab resistance, about 212.6 N, when they contain the greatest amount of recycled PET fibers and the smallest amount of LMPET fibers. Based on the test results, the static stab resistance of the hybrid-fabric fibrous planks is comparable to that of embossing fibers, which again indicates that the recycling has real value [24,27]. Future studies should remove the filling and oiling agent before testing for further discussion.
Figure 11. Static puncture resistance of hybrid-fabric fibrous planks as related to fiber blending ratios. Conclusions This study proposes flexible fabric-based protective planks in which high strength PET fibers are recycled with minimal damage for secondary production, yielding recycled high performance fibers at a relatively low production cost. Different reinforcing woven fabrics are combined with the matrices to form hybrid-fabric fibrous planks. Despite multiple combing and carding processes, the recycled PET staple fibers are proven to provide the fibrous planks with high tensile and tearing strengths. The test results indicate that the recycled PET fibers retain their high strength and can be made into protective products.
The sample consisting of 10 wt% LMPET fibers and 90 wt% recycled high strength PET fibers, HP9K, exhibits the best mechanical properties of those tested in this study, with a tensile strength of 38.5 MPa, a tearing strength of 1392.8 N/mm, a bursting strength of 215.9 kPa and a static stab resistance of 212.6 N. The combination of nonwoven and woven fabrics exploits their different stab behaviors, strengthening the puncture resistance of the hybrid-fabric fibrous planks. Most of all, an efficient recycling process that turns textile and fiber waste into protective fibrous planks decreases the production cost considerably, making industrial and everyday protective products more advantageous and acceptable. In addition, because the recycled high modulus PET fibers are generally coated with an oiling agent during production, the LMPET fibers are prevented from forming an adhesive layer during hot pressing, and the interfacial bonding strength is therefore low; future studies should remove the filling and oiling agent before testing for further discussion. Author Contributions: In this study, the concepts and designs for the experiment, all required materials, as well as processing and assessment instruments are provided by C.
2019-07-07T13:05:14.368Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "1518f7f7becdfd419873a335a2264516481724f4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/11/7/1140/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1518f7f7becdfd419873a335a2264516481724f4", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
16524238
pes2o/s2orc
v3-fos-license
Sources of social support associated with health and quality of life: a cross-sectional study among Canadian and Latin American older adults Objectives To examine whether the association between emotional support and indicators of health and quality of life differs between Canadian and Latin American older adults. Design Cross-sectional analysis of the International Mobility in Aging Study (IMIAS). Social support from friends, family members, children and partner was measured with a previously validated social network and support scale (IMIAS-SNSS). Low social support was defined as ranking in the lowest site-specific quartile. Prevalence ratios (PR) of good health, depression and good quality of life were estimated with Poisson regression models, adjusting for age, gender, education, income and disability in activities of daily living. Setting Kingston and Saint-Hyacinthe in Canada, Manizales in Colombia and Natal in Brazil. Participants 1600 community-dwelling adults aged 65–74 years, n=400 at each site. Outcome measures Likert scale question on self-rated health, Center for Epidemiological Studies Depression Scale and 10-point analogical quality-of-life (QoL) scale. Results Relationships between social support and study outcomes differed between Canadian and Latin American older adults. Among Canadians, those without a partner had a lower prevalence of good health (PR=0.90; 95% CI 0.82 to 0.98), and those with high support from friends had a higher prevalence of good health (PR=1.09; 95% CI 1.01 to 1.18). Among Latin Americans, depression was lower among those with high levels of support from family (PR=0.63; 95% CI 0.48 to 0.83), children (PR=0.60; 95% CI 0.45 to 0.80) and partner (PR=0.57; 95% CI 0.31 to 0.77); good QoL was associated with high levels of support from children (PR=1.54; 95% CI 1.20 to 1.99) and partner (PR=1.31; 95% CI 1.03 to 1.67). Conclusions Among older adults, different sources of support were relevant to health across societies. Support from friends and having a partner were related to good health in Canada, whereas in Latin America, support from family, children and partner were associated with less depression and better QoL. Strengths and limitations of this study ▪ This study examines the associations between emotional support and self-rated health (SRH), depression and quality of life in cross-cultural samples of older adults. ▪ The measure of emotional support (International Mobility in Aging Study social network and support scale) has been validated in the study population and differentiates between different types of social ties. ▪ The study protocol and measurement instruments were identical across international sites, allowing rigorous cross-cultural comparison. ▪ While cross-sectional data cannot ascertain the direction of the associations between social support and health outcomes, the associations between SRH, depression and quality of life with social support from different sources are significant and strong and they vary across cultures. INTRODUCTION Population ageing is a global phenomenon, affecting developed and developing countries. Among the social determinants of physical and mental health in populations of older adults, strong social networks with high levels of social support generally represent a protective factor for maintaining good health and quality of life in old age. [1][2][3] Different forms of social support are related to a variety of physical and mental health outcomes. 4 For example, older adults may receive emotional support from their loved ones and feel useful when they are involved in their lives. A study of over 1200 community-dwelling older adults in Spain concluded that high emotional support was positively associated with physical and mental health. 5 Another study of over 3400 older adults in the USA confirmed that satisfaction with social support is related to good self-rated health (SRH). 6
Moreover, a systematic review of 51 studies with different age groups also revealed significant protective effects of perceived emotional support in relation to depression. 7 Finally, good quality of life has also been related to social support in older adults. 8 Berkman and Glass 3 9 proposed a conceptual model of the influence of social networks and support on health status, which recognised that individual social networks depend on the social and cultural context where individuals live. As such, the impact of social relations on various indicators of health and well-being appears to vary depending on the nature of social ties (eg, friends, children, family members and partner) and the quality of the relationships, a complexity that calls for further study. 10 For example, recent research in the USA suggests that social support from friends becomes an important predictor of perceived health among older adults, particularly when compared with younger cohorts for whom family support seems more important. 11 In addition, the majority of research about the impact of social support on health and quality of life has been carried out in specific populations in North America and in Western Europe, 12 13 although what constitutes good social support may vary in relation to the social values and norms of regions and countries. To this day, we have limited knowledge about the relationships between social support and health outcomes beyond these particular socio-cultural contexts. The different social norms and expectations surrounding social relations in cross-cultural samples of older adults further complicate this field of study. 14 15 Research across ethnic or racial groups in the USA 16-18 demonstrated differences in correlates of health and quality of life among these groups, which suggests that there will also be differences across different regions of the world. 19 20 Comparisons across populations allow us to detect the features of the social environment that affect most (or all) individuals in a population and have therefore little variance (or are invariant) within that population. 21 The objective of this study is to examine associations between emotional support and self-reported health, depression and quality of life, which are important indicators of overall well-being among older adults from Canada and Latin America. The three study outcomes were chosen because of their importance in terms of the longevity and well-being of older adults. SRH is one of the strongest predictors of mortality in cohorts of older adults in Canada 22 and in Brazil, 23 and according to multiple international studies. 24 25 Depression in later life is a strong risk factor for disability 26 and cognitive decline, 27 and a frequent outcome for chronic diseases. 28 Self-reported quality of life is an encompassing index of well-being in old age and of successful ageing. 29 The study sites were four middle-sized cities: Kingston (Ontario) and Saint-Hyacinthe (Quebec) in Canada, Manizales in the Andean Mountains of Colombia and Natal in North East Brazil.
The sites were selected to capture a diversity of experiences in old age, particularly different social and gender relations, as well as levels of economic security. Moreover, we used a new validated measurement tool that assesses the emotional support associated with different social ties, which will enable us to detect cross-cultural differences in the links between social relations and indicators of health and well-being. 30 In light of existing literature, we posit the following hypotheses regarding the relationship between emotional support and the outcomes of interest. We expect that more emotional support will generally be associated with better health, mood and quality of life in older adults. We also expect that different sources of support will have different effects on the outcomes, and that these associations will vary cross-culturally in North America when compared to Latin America. Study population and recruitment methods For this study, we analysed baseline data from the International Mobility in Aging Study (IMIAS), which were collected in 2012. Rationale and methods have been described in previous publications. 31 Briefly, the sample includes 400 community-dwelling adults (200 men and 200 women) aged 65-74 years at each of the four sites, for a total sample of 1600. The sample sizes were established as part of the original IMIAS study and were deemed sufficient to capture gender differences in mobility across sites. Participants were recruited randomly from the patient lists of primary care providers. At the two Latin American sites, participants were contacted directly by researchers and there was a response rate close to 100%. Owing to requirements of the ethical review boards, participants could not be contacted directly in Canada. They were invited to participate in the study through a letter from their primary care provider, and then had to contact our field researcher to enter the study. Approximately 30% of those receiving the letter contacted us to participate, and 95% of them participated in the study. At all sites, study procedures were carried out at the participants' home. Study material and questionnaires were available in the local languages: English, French, Spanish and Portuguese. Ethical requirements This project was approved by the institutional review boards of the relevant health centres where the study was conducted, and the overall study is overseen by the ethical review board of the Research Centre of the Centre Hospitalier de l'Université de Montréal (CRCHUM). Self-rated health Participants were asked, 'Would you rate your health as very good, good, fair, poor or very poor?'. SRH was dichotomised as good if respondents answered 'very good' or 'good', and poor if respondents answered 'fair', 'poor' or 'very poor'. 32 The validity of this measurement item was demonstrated in our study population through a significant linear association between mean score of the Short Physical Performance Battery and ordinal categories of SRH. 33 Four participants were excluded because of missing values. Depressive symptoms In this study, we defined depression as having a score of 16 and above on the Centre for Epidemiological Studies Depression (CES-D) scale, which is indicative of a probable diagnosis of depression. 34 The CES-D scale has been used extensively in populations of older adults and encompasses negative affect, positive affect, somatic symptoms and interpersonal problems. 
35 Studies have demonstrated its validity and reliability in French, 36 in Spanish 37 and in Portuguese, 38 as well as in low-income settings. 39 There were no missing values. Quality of life Quality of life was assessed with a visual analogue scale (VAS). Participants were asked to indicate their quality of life in the preceding 2 weeks on a continuous line that went from the worst possible quality to the best possible quality. Their answers were then converted into a number from 1 to 10 based on their position on the line. VAS measures of quality of life are valid and reliable when used as a dependent variable to assess global quality of life. 40 41 Given the distribution of quality of life across sites, a score higher than or equal to 8 out of 10 represents a good quality of life (this corresponds roughly to 70% of the studied Canadian population), in agreement with research showing that ∼75% of older adults consider that they have a good or very good quality of life. 8 There were no missing values. Social support Social support was measured for different social ties, namely with friends, family members, children and partner. To this effect, we used a scale that was developed and validated by members of the IMIAS team using confirmatory factor analysis, the social network and support scale (IMIAS-SNSS), 30 based on previous scales. 1 42 Factor analysis demonstrated satisfactory goodness of fit and consistent validity in the study population. 30 This scale focuses on the emotional support and feelings of usefulness provided by the four above-cited types of social ties. Five Likert scale questions about social support were included in the IMIAS-SNSS for each type of social tie. For example, participants were asked if they felt loved by their friends, if they felt that their friends listened to them when they talked about their problems, if they felt useful to their friends, how often they helped their friends and if they felt they played an important role in their friends' lives. The maximum total score for social support is 20 for each type of social tie. There were wide variations in the existence and frequency of social ties cross-culturally; as an illustration, while most Canadians reported having friends, only half of the Natal sample reported this tie. Given this, low social support was defined as ranking in the site-specific lowest quartile of social support. Low social support is used as the reference category throughout the analyses and is compared with those without the social tie and those reporting social support above the lowest quartile. Missing data were excluded, leaving a total sample size of 1582 participants; the missing values did not follow a specific pattern across study sites and represented fewer than five respondents per study site, per social tie. Control variables The confounding variables controlled for in this study were selected according to the literature on socioeconomic and functional determinants of SRH, depression and quality of life in older adults. Among them, age, gender, education and income have been identified as potentially strong confounders in the associations of self-reported measures of physical and mental health and quality of life with social support. Finally, disability in activities of daily living (ADL) was considered a potential confounder because this type of severe disability has known effects on SRH and depression and is a strong determinant of quality of life. ADL disability has also been shown to be related to social networks and support. 43
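Before detailing how the covariates were coded, note that the outcome and exposure codings above amount to a few deterministic transformations. The sketch below reproduces the site-specific lowest-quartile coding of support on simulated data; all column names and values are hypothetical, not the study's actual variables.

```python
import numpy as np
import pandas as pd

# Simulated support scores; NaN marks respondents without the social tie.
df = pd.DataFrame({
    "site": ["Kingston"] * 4 + ["Natal"] * 4,
    "friend_support": [18, 12, np.nan, 20, 9, np.nan, 15, 7],
})

# Site-specific lowest-quartile cut-off among those who have the tie
# (quantile skips missing values by default).
q1 = df.groupby("site")["friend_support"].transform(lambda s: s.quantile(0.25))

df["friend_support_cat"] = "high"                                   # above Q1
df.loc[df["friend_support"] <= q1, "friend_support_cat"] = "low"    # lowest quartile
df.loc[df["friend_support"].isna(), "friend_support_cat"] = "none"  # no tie
print(df)
```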
Education was measured as a categorical variable, in terms of having less than a high school degree, having a high school diploma or having postsecondary education. Income sufficiency was also measured as a categorical variable; respondents could indicate that their income was insufficient, sufficient or very sufficient to cover their basic needs. ADL disability was measured as a binary variable to capture the presence of any difficulty in conducting any of a list of six basic activities of daily living (walking across a room, bathing, getting dressed, getting up, eating and going to the bathroom). Statistical analyses We examined descriptive statistics and then constructed several models using Poisson multiple regression with robust covariance to test our hypotheses with each outcome variable. While logistic regression is recommended to estimate the occurrence of rare events, that is, those occurring <10% of the time, Poisson regression with robust covariance is preferred for cross-sectional studies of binary outcomes that are common, because the confidence intervals (CIs) are more conservative and prevalence ratios (PR) are easier to interpret and disseminate. 44 45 The analyses were carried out following the hypotheses tested. We first modelled the data together to examine the associations between sources of social support and the outcomes of interest. To test the heterogeneity of the associations between the considered risk factors and SRH, depressive symptoms and quality of life across research sites, we conducted omnibus tests of the interactions of all social support sources with research site and then with gender, for each outcome (six models), at a significance level of 0.05 and without including potential covariates. Since these associations were statistically significant for the Canadian data and the Latin American data, we proceeded by analysing these separately. We finally tested the impact of confounding variables on the models developed at the different sites to assess whether the associations between social support and the outcomes of interest remained statistically significant despite controlling for age, gender, income sufficiency, level of education and disability in activities of daily living. RESULTS There are major differences between the Canadian and Latin American study samples in terms of sociodemographic variables, ADL disability, SRH, depression and quality of life. In table 1, the Canadian sample has significantly higher levels of education and income sufficiency. While few people in Canada had less than secondary education or insufficient income to cover basic needs, the majority of Latin American older adults had not finished secondary education and had insufficient income to cover basic needs. These are very important aspects of the social context to consider when assessing the health of ageing populations. ADL disability was significantly more prevalent in Latin America, affecting 26% of older adults in Manizales and 31% in Natal, compared with 22% in Kingston and 16% in Saint-Hyacinthe. Good SRH was more common in Canada, with the highest value among men living in Kingston (87.50%) and the lowest among women living in Natal (22.93%) (table 2). The majority of Latin American participants reported being in poor health. Depression was less frequent among Canadian participants and was higher among women overall, but especially at the Latin American sites.
Although less pronounced than the distribution of SRH, quality of life followed a similar trend, with the majority of Canadian respondents reporting a high quality of life and the majority of Latin American older adults reporting a low quality of life. Correlations between the outcome measures were moderate: 0.35 between quality of life and SRH, 0.29 between quality of life and depression, and 0.36 between depression and SRH. The differences between the distributions of the social support variables across sites are also worth noting (table 3). While fewer than 6% of Canadian participants reported having no friends, this proportion reached 25% in Manizales and nearly 50% in Natal. Support from partner was precluded for those without a partner, which was of particular importance in Manizales, where 49.50% had no partner. The proportions of low social support were similar across sites because the variable was constructed as a site-specific lowest quartile among those who had the specific social tie (friends, children, family or spouse). Low social support is therefore a context-specific variable, whereby support is considered low in comparison with local societal norms; the variable of interest is exposure to less social support than is the norm at each study site. We first pooled all of the data together to examine relationships between the different sources of social support and each outcome of interest. Then, we examined how context (research site) modified the association between social support and the three outcomes. Significant interactions were found between the social support provided by family members and Canadian versus Latin American study sites (comparison of models with and without interaction terms, SRH: χ²=10.86, p=0.004; depression: χ²=12.57, p=0.002 and quality of life: χ²=5.27, p=0.072). We also tested for interactions with gender, and although gender is a significant confounder in Latin America, none of the interaction terms between social support and gender were significant. Men and women were therefore modelled together. To better understand these differences between sites, we modelled the Canadian and Latin American data separately. Table 4 presents the PR obtained in Canada when testing the relationship between all sources of social support and the three outcome variables, first unadjusted and then controlling for age, gender, site, level of education, income sufficiency and disability. Not having a partner was significantly associated with poorer health in Canada, whereas high support from friends was associated with better health, in both the unadjusted and the adjusted models. Similarly, not having friends and not having a partner yielded a higher PR of depression, but this effect disappeared after adding the control variables for socioeconomic status (staggered inclusion of variables in the regression models, not shown here). The associations were not significant for quality of life, but the absence of a partner again appeared relevant in the unadjusted model for our sample of older Canadian adults. It is noteworthy that among older Canadians, social support from family members and children was not related to any of the three outcomes, and the quality of those relationships did not seem to have an impact. In other words, those with a better relationship with friends and those who have a partner reported better outcomes; for partner, significant differences only emerged between those with the tie and those without.
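As a methodological aside, prevalence ratios of the kind reported in tables 4 and 5, and the model-comparison χ² tests above, can be reproduced with a modified Poisson approach. The sketch below uses simulated data and hypothetical variable names (good_health, high_friend_support, latin_america), not the IMIAS data; it illustrates the method rather than the study's actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Simulated stand-in for the analysis data (hypothetical names/values).
rng = np.random.default_rng(42)
n = 1600
df = pd.DataFrame({
    "good_health": rng.integers(0, 2, n),
    "high_friend_support": rng.integers(0, 2, n),
    "latin_america": rng.integers(0, 2, n),
    "age": rng.integers(65, 75, n),
    "female": rng.integers(0, 2, n),
})

# Modified Poisson regression: Poisson GLM on a binary outcome with a
# robust (sandwich) covariance, so exp(beta) is a prevalence ratio (PR).
fit = smf.glm(
    "good_health ~ high_friend_support + age + female",
    data=df, family=sm.families.Poisson(),
).fit(cov_type="HC0")
summary = pd.concat(
    [np.exp(fit.params).rename("PR"), np.exp(fit.conf_int())], axis=1
)
print(summary)

# Omnibus heterogeneity check: likelihood-ratio chi-square comparing models
# with and without the support-by-region interaction term.
m0 = smf.glm("good_health ~ high_friend_support + latin_america",
             data=df, family=sm.families.Poisson()).fit()
m1 = smf.glm("good_health ~ high_friend_support * latin_america",
             data=df, family=sm.families.Poisson()).fit()
lr = 2 * (m1.llf - m0.llf)
k = int(m1.df_model - m0.df_model)
print(f"chi2 = {lr:.2f} on {k} df, p = {stats.chi2.sf(lr, k):.3f}")
```

Exponentiating the coefficients of a Poisson model fitted to a binary outcome yields prevalence ratios directly, and the robust covariance keeps the confidence intervals valid despite the deliberately misspecified Poisson variance.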
The results from the Latin American models were at odds with the Canadian ones (table 5). High levels of social support from family members were associated with better health, and although the significance of the associations was lost after full adjustment for covariates, the size of the regression coefficients changed very little. High social support from partner was likewise associated with good health, but this relationship disappeared after controlling for gender and site (staggered entry of variables in the analyses, not shown here). The results for depression demonstrated large effects despite all control variables: high levels of social support from children, family members and partner were all related to a lower prevalence of depressive symptoms. With respect to quality of life, high levels of social support from children and partner remained significant after controlling for the confounding variables. Across the three outcome variables, there were no significant differences between those without social ties and those with poor levels of social support. To assess whether the associations of quality of life with social support were independent of older adults' physical and mental health, we fitted regression equations for Canada and Latin America, using quality of life as the dependent variable and adding SRH or depression as a potential confounder. Changes in the coefficients of the social support variables were negligible, demonstrating that the associations between quality of life and social support are independent of any physical or mental health problems (data not shown). DISCUSSION We examined the associations between emotional support and SRH, depression and quality of life in older adults aged between 65 and 74 years residing in two Canadian and two Latin American cities. First, among Canadian and Latin American participants, there were positive associations between either the presence of social ties or perceived emotional support and SRH, depression and quality of life. The association between social support and health, mood and quality of life differed cross-culturally according to its source. Among Canadian participants, high levels of support from friends and having a partner were protectively associated with good health and less depression. No effect was observed for the quality of the support from partner, children or family, suggesting that these did not play a role in health, depression or quality of life. Among Latin American participants, the strongest associations were seen when support came from the extended family, children and partner, whereas support from friends did not play a significant role. In fact, among Latin Americans, having high levels of social support from family and partner was related to good health, and having high support from children was also related to less depression and better quality of life. Quality of life was related to receiving high levels of support from the partner, and those with poor support from children appeared to have worse quality of life than those without children. We conclude that, aside from the importance of relationships with friends, in Canada the presence of a partner matters more than the quality of its support. This differs from Latin America, where not merely the presence of the social tie but the levels of support from family members, children and partner are significantly associated with older adults' health and well-being.
These results confirm and extend previous research conducted in Europe comparing social support in five Mediterranean countries with seven countries of Northern Europe. 15 Litwin reported that family support is more important in the Mediterranean countries, where there are more household exchanges. These observations are in agreement with previous research conducted in Canada, 32 Cuba 46 and Spain. 5 Moreover, comparing two francophone older Canadian populations, one from a working class neighbourhood of Montreal and the other from the middle class city of Moncton, New Brunswick, Zunzunegui et al 32 found that in Montreal, having family and children was associated with good health, whereas having low support from children was associated with poor health. Networks of friends played a role only for those with good physical and cognitive function. In Moncton, the associations were different because only relationships with friends seemed to play a role in health. The authors concluded that support from children was more salient in socially and materially deprived areas than in more affluent environments. These results suggest that among our Canadian participants, the effects of social support on health and well-being could be linked to protection against social isolation. We propose that in high-income countries like Canada, where friends are associated with leisure activities and family ties may bring unwanted responsibilities and potential conflicts, friends could be more beneficial for health. 47 Levels of social capital are high in Canada, and society provides the services that family members provide in other cultures. 48 In addition, Canadians have a relatively strong system of public and private social services and old age pensions, which provides some economic security to older adults. Consequently, there is relatively less need to rely on family. The quality of the social support provided appeared to be more important in Latin America, especially when this support came from family members, children and the cohabiting partner. In fact, older adults in Latin America appear to place more emphasis on emotional support from their children, and social contact and affection with grandchildren have been found to influence their sense of well-being. 49 Latin American older adults live in societies with strong family intergenerational interdependence but limited economic security, social protection and social services. 50 Social integration occurs within the family, around which social life pivots. 51 52 Family interdependence means that support flows between generations in multigenerational households. Previous results on social support and depression in Cuba coincide with findings from Southern Brazil, whereby receiving help from children and extended family is associated with the lowest depression rates. 46 53 The results about the beneficial effects of friends on health and quality of life in this study confirm previous results from longitudinal ageing research conducted in the USA and Northern Europe. Our research contributes to the literature about social relations and health outcomes among older adults by demonstrating that the sources of social support that are relevant to physical and mental health and to quality of life vary according to the socioeconomic and cultural characteristics of the population.
Today more than two-thirds of older adults live in middle-income and low-income countries where social protection for older adults is weak and older adults' well-being depends on family exchanges and solidarity. Our results for Latin America contribute to the limited literature on social support and health in that region and can be helpful in inspiring social policies. In particular, emerging economies, such as Brazil, have recently legislated universal old age pensions and healthcare for older adults. These universal protection programmes may have further increased interfamilial dependence. 54 For example, in periods of economic crisis, the household's largest source of revenue may come from the pension of the older members of the family, and many older adults help with house chores and care of grandchildren in a system of family exchange across generations. Social networks and support represent an important determinant of the older population's health and well-being in Latin America and possibly in other regions of the world where family interdependence is highly valued.

Strengths and limitations of the study
The cross-sectional nature of this study imposes restrictions in establishing the temporal sequence of the associations between social support, health and quality of life. It is possible that those who have the poorest health and quality of life are unable to mobilise social support from their networks. However, some of these results remain significant even after controlling for disability, establishing the associations independently of the presence of disability. The latest longitudinal research also suggests that the associations between social support and health indicators may be bidirectional and may change in strength over the life course. 55 Second, there are challenges inherent in cross-cultural research, such as the difficulty of interpreting findings across very different cultural contexts. 56 While some authors do not recommend combining data sets across populations because of uncertainty regarding the equivalence of covariance matrices, our work addressed these limitations by validating the social support scales with factor analysis in the different cultural contexts 30 and by involving a multicultural research team in the study design and analysis. Our results nonetheless remain preliminary and need further confirmation in Latin American populations and in other settings with emerging economies and changing levels of social protection and family norms.

Moreover, the relatively low response rates among the Canadian samples raise questions about external validity. We addressed this issue in previous publications, reporting that, according to census data, the Saint-Hyacinthe sample is comparable to the population of the same age group in the selected cities in terms of education and income, and that the Kingston sample is relatively more highly educated. Kingston is, nevertheless, similar to the sample in Saint-Hyacinthe in terms of blood pressure, 57 C reactive protein, 58 physical function indicators 31 and distributions of SRH, depression and quality of life as reported here. It is important to mention, however, that the study samples are limited to older adults registered with a primary care provider, and that the study samples in Canada are likely to be less depressed than the general population of older adults, given that they had to contact the research coordinator to participate in the study.
As far as we know, the coverage of family medicine at local medical clinics (Canada) and neighbourhood primary health centres (Brazil) is higher than 90% for the population aged 65-74 years residing in the participating cities. 59 Only Manizales (Colombia) does not have universal coverage for healthcare, but a high percentage of Colombian older adults (around 82%) are covered by the Public Health Insurance. 60 Outside of Canada, response rates were very high, close to 100%. Therefore, we have reason to believe that these samples are representative of the population registered at those local health centres. Among the strengths of this research, we should mention the rigorous survey methodology preceded by two pilot studies, the high response rates in Manizales and Natal, and the validity of the social support scales. We also examined the same relationships with identical measures and data collection protocols in the four populations, which makes cross-site comparisons rigorous and valid.

In conclusion, the impact of social support is closely linked to societal and cultural norms. The effects of social support on physical and mental health and on quality of life depend on the sources of this support and vary by social context. Social interventions that mobilise social support to promote the well-being and health of older populations need to take these contextual determinants into consideration.
Peutz-Jeghers Type Polyp of the Appendix with Review of Literature

Hamartomatous polyps of Peutz-Jeghers type are strongly associated with Peutz-Jeghers polyposis syndrome and are predominantly encountered in the small intestine. Sporadic cases are uncommonly reported. We report a case of a polyp identified incidentally in the appendix of a patient undergoing diagnostic imaging due to a history of hepatitis C infection. Histopathologic evaluation after appendectomy showed a polyp with bands of muscularis mucosae bundles with arborizing architecture and variable amounts of inspissated mucin, morphologically indistinguishable from a Peutz-Jeghers type hamartomatous polyp. A family or personal history of abdominal cancers was not reported by the patient, suggesting a sporadic occurrence. Next generation sequencing revealed only two pathogenic low-level STK11 mutations, presumed to be somatic. In conclusion, this is an unusual case of a sporadic Peutz-Jeghers type polyp occurring in the appendix.

Introduction
Appendiceal dilatation of more than 6 mm detected on abdominal imaging can potentially trigger appendectomy due to concern for the presence of infection or malignancy [1]. The preoperative differential diagnosis is broad, including inflammatory conditions, benign tumors, and malignant neoplasms. The final diagnosis is often unknown until the results of the histopathologic analysis of the resected tissue become available. The most common tumors involving the appendix are mucinous tumors, such as low grade appendiceal mucinous neoplasm (LAMN) and mucinous carcinomas, adenomas, serrated polyps, goblet cell tumors, neuroendocrine tumors, and colonic type carcinomas [2]. Hamartomatous polyps can also occur in the appendix but are very rare, mostly described in case reports [3]. Unlike adenomas, which are largely sporadic, hamartomatous polyps of Peutz-Jeghers type are mostly associated with polyposis syndromes such as Peutz-Jeghers polyposis, juvenile polyposis, or Cowden syndrome, and this diagnosis may trigger clinical evaluation for cancer predisposition in the affected patient. Sporadic Peutz-Jeghers type polyps are considered very uncommon and are mainly documented in case reports and small case series [4, 5]. Additionally, the risk for cancer predisposition in these cases is currently uncertain, as there is limited clinical follow-up in the available literature. In one of the available studies aiming to evaluate sporadic Peutz-Jeghers type polyps, only 8 of 102 Peutz-Jeghers polyps identified over a period of 22 years could have potentially been sporadic [4]. Most of these polyps occurred in middle aged to older patients, and about half of the patients developed a malignancy, suggesting a possible link between even sporadic Peutz-Jeghers type polyps and cancer predisposition [4].

Case Presentation
A 60-year-old male with a prior history of alcoholism, tachycardia, hypertension, schizophrenia, and hepatitis C infection underwent a regular surveillance protocol for hepatocellular carcinoma. Imaging revealed an incidental dilatation of the appendix, which slowly increased in size over a 10 year period. More recent images revealed a dilatation of the midportion of the appendix measuring 16 mm in widest dimension, concerning for a mucinous appendiceal tumor (Figure 1), and an appendicolith in the distal appendix. There was no surrounding inflammation identified to suggest rupture radiographically.
Subsequent colonoscopy showed a single hyperplastic polyp and a normal appearing appendiceal orifice. The patient's family and personal history was negative for abdominal cancers, hamartomatous polyps, or mucocutaneous pigmentation. The patient was referred for a surgical consult due to the increase in size of this appendiceal lesion and progressive luminal dilatation. Ultimately, an appendectomy was pursued due to concern for the presence of a mucinous tumor of the appendix. During the procedure, the appendix appeared dilated without signs of perforation or carcinomatosis. The appendix was resected without complication and was submitted intact for histopathologic evaluation.

Gross examination revealed a 1.4 cm well-circumscribed pale brown intraluminal mass in the mid portion of a 7.5 x 2.5 x 2.2 cm appendix, 4 cm from the resection margin. Microscopically, the mass appeared to be a pedunculated polyp supported by broad bands of arborizing fibromuscular tissue at low power, lined by colonic type epithelium without dysplasia (Figure 2). Altogether, the findings were in keeping with a Peutz-Jeghers type hamartomatous polyp, which markedly distended the appendiceal lumen. There was no evidence of adenomatous change, low grade mucinous neoplasia, or malignancy in any of the examined tissue sections.

The resected tissue was evaluated for pertinent mutations via a clinically validated solid tumor mutation panel by next generation sequencing (NGS) assessing mutational hotspot regions in 44 genes, including STK11 (exons 1, 4, 5, 6, and 8; NM_000455.4) and PTEN (exons 1-9; NM_000314.6). Genomic DNA was extracted from FFPE tissue, followed by hybridization capture to enrich for the regions of interest and next generation sequencing. Two low-level mutations (presumed somatic) were detected in the STK11 gene: a frameshift mutation in exon 1 (c.179_180del, p.Y60fs) at a variant allele frequency (VAF) of 1.8% and a nonsense mutation in exon 5 (c.658C>T, p.Q220*) at a VAF of 2.7%. Even though both of these mutations are slightly below the established limit of detection (LOD) of 5% VAF for this assay, the high quality scores and bidirectional and staggered reads (Figure 3) support that these calls represent true mutations. No other variants were detected in any of the other genes included in the panel.
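As an aside, the variant allele frequencies reported above are simple read-count ratios. The following minimal Python sketch illustrates the arithmetic and the sub-LOD flagging; the read counts are invented to reproduce the reported VAFs and are not taken from the assay itself:

```python
# Hypothetical illustration of VAF arithmetic and LOD flagging; read counts are
# placeholders chosen to reproduce the VAFs reported in the text.
LOD = 0.05  # established limit of detection of the panel (5% VAF)

variants = {
    # variant: (alt-supporting reads, total reads covering the locus)
    "STK11 c.179_180del p.Y60fs": (18, 1000),
    "STK11 c.658C>T p.Q220*": (27, 1000),
}

for name, (alt, total) in variants.items():
    vaf = alt / total  # variant allele frequency = alt reads / total depth
    flag = "below LOD, needs manual review" if vaf < LOD else "above LOD"
    print(f"{name}: VAF = {vaf:.1%} ({flag})")
```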
Discussion
Peutz-Jeghers polyps are diagnosed histologically based on the presence of broad bands of arborizing muscularis mucosae smooth muscle bundles, described as resembling a "Christmas tree" at low magnification, with variable amounts of inspissated pink mucin. Typically, the polyp is lined by benign epithelium without adenomatous change. The differential diagnosis includes adenomas, juvenile polyps, serrated polyps, inflammatory polyps, and mucosal prolapse. In the current case there was no adenomatous change to suggest a villous adenoma. Juvenile polyps and inflammatory polyps typically have more pronounced inflammatory components and dilatation of crypts. Features of mucosal prolapse include fibromuscular hyperplasia of the lamina propria extending towards the luminal aspect of the mucosa, distorted diamond shaped glands in the deeper aspect, and congestion of dilated capillaries; these features were not seen in the current case. The observed morphologic findings were all in keeping with a Peutz-Jeghers type polyp.

This case is uncommon because the Peutz-Jeghers type polyp presented in the appendix of a patient without a clinical or familial history suggestive of a polyposis syndrome; therefore both the site and the patient demographics were unusual. There are only rare documented reports of appendiceal Peutz-Jeghers polyps [3, 6-11]. Appendiceal Peutz-Jeghers polyps may present clinically with intussusception, while others occur incidentally [9]. To our knowledge this is the first report of a case presenting with dilatation of the appendix on imaging mimicking a mucinous appendiceal neoplasm. Classic Peutz-Jeghers polyps occur predominantly in the small intestine, followed by the stomach and colon. There is a strong association with Peutz-Jeghers polyposis syndrome, a rare autosomal dominant condition characterized by hamartomatous polyps in the gastrointestinal tract, mucocutaneous pigmentation, and a predisposition to a variety of neoplasms, including carcinomas of the pancreas, colon, stomach, small bowel, breast, ovaries, and testes [12]. A thorough review of the clinical and family history in the current case failed to reveal typical lesions associated with Peutz-Jeghers syndrome, suggesting a sporadic occurrence.

Additionally, the solid tumor NGS panel performed in this case showed only two pathogenic low-level STK11 mutations. These mutations are presumed to be somatic based on their low variant allele frequencies; however, a germline origin cannot be completely excluded, as the assay performed is not designed to distinguish between somatic and germline variants. The exact role of these mutations in the pathogenesis of the polyp is uncertain. Additionally, the panel is designed to detect only single nucleotide variants (SNVs) and insertions and/or deletions (indels), and only in the exons captured. Gross (e.g., exon-level) deletions, known to cause a subset of Peutz-Jeghers syndrome cases, would not be detected by this panel, and their presence in this patient cannot be completely excluded. Although these results do not completely rule out a syndromic origin of the polyp, in combination with the negative family and patient history they make it much more unlikely.

In conclusion, we report a very unusual case of a likely sporadic Peutz-Jeghers type polyp, which presented with a dilated appendix and was radiographically concerning for a mucinous tumor of the appendix.

Conflicts of Interest
The authors have no conflicts of interest (financial or nonfinancial) to declare.
A high resolution synchrotron x-ray powder diffraction study of the incommensurate modulation in the martensite phase of Ni2MnGa: Evidence for nearly 7M modulation and phason broadening

The modulated structure of the martensite phase of Ni2MnGa is revisited using high resolution synchrotron x-ray powder diffraction (SXRPD) measurements, which reveal higher order satellite reflections up to the 3rd order and phason broadening of the satellite peaks. The structure refinement, using the (3+1) dimensional superspace group approach, shows that the modulated structure of Ni2MnGa can be described by the orthorhombic superspace group Immm(00γ)s00 with lattice parameters a = 4.21861(2) Å, b = 5.54696(3) Å, and c = 4.18763(2) Å and an incommensurate modulation wave vector q = 0.43160(3)c* = (3/7+δ)c*, where δ = 0.00303(3) is the degree of incommensuration of the modulated structure. Additional satellite peak broadening, which could not be accounted for in terms of anisotropic strain broadening based on a lattice parameter distribution, has been modeled in terms of phasons using a fourth rank covariant strain tensor representation for incommensurate structures. The simulation of single crystal diffraction patterns from the refined structural parameters unambiguously reveals a rational approximant structure with 7M modulation. The inhomogeneous displacement of the different atomic sites on account of the incommensurate modulation and the presence of phason broadening clearly rule out the adaptive phase model proposed recently by Kaufmann et al. [1] and suggest that the modulation in Ni2MnGa originates from soft-mode phonons.

INTRODUCTION
Recent years have witnessed enormous interest in ferromagnetic shape memory alloys (FSMA) exhibiting extremely large magnetic field induced strain (MFIS) that is nearly an order of magnitude larger than the maximum strain generated in the piezoelectric materials currently used by industry for making miniaturized actuators for a host of applications [2]. FSMAs are therefore being visualised as better candidates for developing miniaturized magnetic actuators, but the extreme brittleness of some of the most promising FSMA compositions is a matter of concern [3-7]. FSMAs are magnetoelastic multiferroic materials exhibiting ferromagnetic and ferroelastic (martensitic) phase transitions with characteristic transition temperatures TC and TM, respectively, and a very strong coupling between the magnetic and ferroelastic order parameters [8]. The martensitic transitions in FSMAs are displacive transitions resulting from a lattice variant deformation, usually called the Bain distortion, of the high temperature austenite phase leading to a low temperature martensite phase, such that the large transformation strain is accommodated at the austenite-martensite interface (habit plane) by the formation of symmetry permitted martensite variants or ferroelastic domains (as distinct from magnetic domains) through a lattice invariant deformation (achieved through twinning or faulting) that leaves the habit plane undistorted and unrotated in an average sense at the microscopic scale [9]. Huge MFIS has been reported in the ferromagnetic martensite phase of FSMAs with TC > TM due to strong magnetoelastic coupling, leading to magnetic field induced alignment of the ferroelastic domains (martensite variants), and hence of the magnetic moments, on account of the low ferroelastic twinning energy as compared to the energy of magnetisation rotation [10, 11].
One of the most prominent ferromagnetic shape memory alloy systems is the Ni-Mn-Ga alloy, especially the nearly stoichiometric Ni2MnGa composition, which shows about 10% magnetic field induced strain in its low temperature martensite phase [10-12]. More recently, this alloy system has been shown to exhibit a large magnetocaloric effect as well [13]. The room temperature structure (austenite phase) of Ni2MnGa is L21-type cubic in the Fm-3m space group; it undergoes a ferromagnetic phase transition at TC ~ 370 K, a premartensitic transition to an incommensurate phase at TPM = 260 K, and a martensitic (ferroelastic) transition below TM = 210 K to a modulated structure [14, 15].

The structure of the low temperature martensite phase of Ni2MnGa has been extensively investigated using different diffraction techniques (x-ray, neutron and electron diffraction) for single crystal [16-21], powder [19, 22-24] and epitaxial thin film [1] samples. However, the actual crystal structure and the nature of the modulation in the martensite phase, for which both commensurate and incommensurate modulations have been reported [16, 22-24], are still controversial. Webster et al. [15] studied the martensite structure of Ni2MnGa using neutron powder diffraction measurements and reported a tetragonal structure with a c/a ratio of about 0.94. Martynov et al. reported a five-layer modulated martensite structure (5M) on the basis of single crystal x-ray diffraction (XRD) data, in which four satellites between the main austenite reflections were observed [17]. The first Rietveld refinement of the modulated structure of the martensite phase was carried out using medium resolution neutron powder diffraction data by Brown et al. [22], who concluded that the structure is orthorhombic in the Pnnm space group with 7M commensurate modulation. A subsequent Rietveld study using medium resolution x-ray powder diffraction data from a rotating anode based x-ray source also supported a commensurate 7M modulation [17, 20]. Apparently the composition of the sample has a very crucial role in deciding the nature of the modulation. This, therefore, necessitates revisiting the structure of the martensite phase of stoichiometric Ni2MnGa in detail. The powder diffraction patterns reported in the literature have been recorded using moderate resolution powder diffractometers and laboratory x-ray or neutron sources. No attempt has been made to refine the modulated structure of the martensite phase of stoichiometric Ni2MnGa using high resolution synchrotron x-ray powder diffraction data.

The large MFIS in Ni2MnGa has been related to the modulated structure of the ferromagnetic martensite phase, which leads to a lowering of the twinning stresses [10]. To understand the genesis of the large MFIS in Ni2MnGa, it is therefore imperative to understand the structure and origin of the modulated phase. We present here the results of Rietveld analysis of medium resolution neutron diffraction and very high resolution synchrotron x-ray powder diffraction (SXRPD) data using the (3+1) dimensional superspace approach. The higher resolution and high intensity SXRPD data used in the present study show not only well defined main reflections but also satellite reflections up to the third order, which enabled us to quantify the modulation wave vector precisely, as compared to the earlier low resolution XRD study using rotating anode data [23].
Furthermore, the high resolution SXRPD data have enabled us to capture the signatures of additional broadening of the satellite peaks due to phasons, which could not be accounted for using the Stephens model [27] of anisotropic peak broadening in commensurate structures and requires consideration of a 4th rank covariant tensor for incommensurate structures [28]. We also compare the simulated single crystal diffraction patterns of the incommensurate phase using first, second and third order satellites to confirm unambiguously that the structure of Ni2MnGa is 7M like, although incommensurate. The present results also indicate that the incommensurate modulation in stoichiometric Ni2MnGa cannot originate from the adaptive phase model [1] in view of (1) the significant mismatch between the calculated and observed peak positions of the superlattice reflections using commensurate 7M modulation, and (2) the dissimilar displacive modulation amplitudes of the different atomic sites together with the phason broadening of the satellite peaks, as discussed below.

EXPERIMENTAL
Details of the sample are given in ref. [16], and the initial characterization results are given in ref. [14]. The neutron diffraction measurements were performed at the D2B beamline (ILL, Grenoble). A vanadium cylinder was used as sample holder. The data were collected at 5 K in the 2θ range of 10-160° in steps of 0.05° using a neutron wavelength of 1.59 Å in the high intensity mode. The minimum value of the full width at half maximum (FWHM) is around 0.07°. For the synchrotron XRD measurements, the same powder sample was sealed in a borosilicate capillary of 0.3 mm diameter and the data were recorded at 90 K at a wavelength of λ = 0.39993 Å in the 2θ range of 5-58° on the high-resolution powder diffractometer ID31 at the ESRF, Grenoble. The resolution is given by the instrumental contribution to the FWHM, which is around 0.003° in 2θ. Le Bail and Rietveld analyses were performed using the Fullprof [30] and JANA2006 [31] software packages.

RESULTS AND DISCUSSION
Refinement using Commensurate Modulation
Rietveld refinement of the commensurate modulated structure of the martensite phase of Ni2MnGa using medium resolution data has been carried out in the past [22, 24]. In this section we proceed to show that medium resolution neutron powder diffraction data cannot capture the nature of the incommensurate modulation in Ni2MnGa. All the Bragg reflections in the neutron powder diffraction pattern of Ni2MnGa at 5 K were well accounted for using the orthorhombic space group Pnnm and the lattice parameters a ≈ (1/√2)a_cubic, b ≈ (7/√2)a_cubic and c ≈ a_cubic, where a_cubic is the cell parameter of the cubic austenite phase [22]. The Rietveld fit is shown in Fig. 1. One peak at 2θ = 40.5°, which could not be accounted for, is due to aluminum in the cryofurnace wall (indicated by an arrow in Fig. 1). The refined lattice parameters (a = 4.21796(6) Å, b = 29.2972(4) Å, c = 5.53492(6) Å) and the magnetic moment (3.13(7) μB/f.u.) are in good agreement with the earlier neutron diffraction results [22]. From the values of the lattice parameters it is evident that b ≈ (7/√2)a_cubic, where a_cubic = 5.535 Å, which indicates that the structure is 7-fold modulated (7M) in the <110> cubic direction, in agreement with the observations of previous workers [22, 24]. While the neutron diffraction data analysis indicates that the structure of the martensite phase of Ni2MnGa is 7M modulated, the nature of the modulation, and in particular whether it is commensurate or incommensurate, could not be settled unambiguously due to the limited resolution of the data.
Nature of Modulation and Evidence for Phason Broadening
In order to determine the nature (commensurate vs incommensurate) of the modulation, we use high resolution synchrotron x-ray diffraction data and analyse the data employing the (3+1)-D superspace approach [32-34]. In this approach the diffraction pattern is divided into two parts: (i) the main reflections corresponding to the basic structure and (ii) the satellite reflections arising from the modulation and having weaker intensity compared to the main reflections. All the main reflections due to the basic structure (Bain distorted structure) of Ni2MnGa could be indexed using the orthorhombic space group Immm with unit cell parameters a = 4.21853(2) Å, b = 5.54667(2) Å and c = 4.18754(1) Å, which are similar to the earlier work [23]. After obtaining the unit cell parameters for the basic structure, the modulated structure was refined in the superspace group Immm(00γ)s00 by (1) Le Bail and (2) Rietveld methods. In the first Le Bail refinement, the modulation wave vector was fixed at the commensurate value q = (3/7)c* and only first order satellites (hkl±1) were considered. But this model was unable to fit the satellite reflections, as the calculated peak positions were shifted away from the observed ones (see Fig. 2(a)). This shows the inadequacy of low resolution powder diffraction data in capturing the signatures of the failure of the commensurate modulation model of the structure, like that presented in the previous section. We also tried the incommensurate value of the modulation vector reported in ref. [23] (q = 0.4248), but it led to an even worse fit with a higher GOF = 4.82 compared to that (GOF = 4.04) for the commensurate modulation. This indicates that the modulation vector reported in ref. [23] cannot account for the satellite peak positions. Finally, we refined the modulation vector as well, and this led to a significantly better fit between the observed and the calculated peak positions (see Fig. 2(b)) with a lower GOF (= 3.69) for an incommensurate value of the modulation vector q = 0.43154(3)c*.

So far we considered only the first order satellite reflections (hkl ± 1) in the refinements, following Righi et al. [23], and it was possible to index the majority of the satellite peaks. However, several of the satellite reflections with rather low intensities could not be accounted for using first order reflections only, as shown in Fig. 3(a). Consideration of second order satellites (hkl ± 2) in the Le Bail refinement could index most of these low intensity reflections very well, as can be seen from Fig. 3(b). This confirmed the presence of second order satellites in our high resolution SXRPD patterns, which were not discernible in the laboratory XRPD patterns of Righi et al. [23]. It is interesting to note that consideration of even the second order satellites could not index some very weak reflections, as can be seen in Fig. 3(b) at 2θ ≈ 10.03°. Accordingly, we considered third order satellites in our refinements, and this led to the identification of the small peak at 2θ ≈ 10.03° as a third order satellite, as shown in Fig. 3(c) (indicated by a red arrow). It is worth mentioning here that there is no previous report on the observation of second and third order satellites in the powder diffraction pattern of the martensite phase of Ni2MnGa, which shows the significance of the higher resolution data used in the present work.
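To make the indexing arithmetic above concrete, the following minimal Python sketch (ours, not part of the original analysis) computes peak positions in the standard (3+1)-D scheme, where a reflection (h k l m) sits at Q = ha* + kb* + (l + mq)c* for an orthorhombic cell. The lattice parameters and wavelength are taken from the text; the particular satellite (1 1 0 1) is chosen arbitrarily for illustration. It also shows how small the difference between the commensurate q = 3/7 and the refined incommensurate value is:

```python
import math

# Basic-structure lattice parameters from the Le Bail refinement (angstrom)
a, b, c = 4.21853, 5.54667, 4.18754
lam = 0.39993  # x-ray wavelength (angstrom)

def two_theta(h, k, l, m, q):
    """2-theta (deg) of a reflection (h k l m) at Q = h a* + k b* + (l + m q) c*."""
    inv_d2 = (h / a) ** 2 + (k / b) ** 2 + ((l + m * q) / c) ** 2
    return 2.0 * math.degrees(math.asin(lam * math.sqrt(inv_d2) / 2.0))

q_comm = 3.0 / 7.0   # commensurate 7M value
q_inc = 0.43160      # refined incommensurate value
print(f"delta = q_inc - 3/7 = {q_inc - q_comm:.5f}")  # ~0.00303

# The commensurate and incommensurate models predict slightly shifted satellite
# positions; with an instrumental FWHM of ~0.003 deg at ID31 such shifts are
# resolvable, while medium-resolution data cannot discriminate between them.
for q in (q_comm, q_inc):
    print(f"q = {q:.5f}: 2theta(1 1 0 1) = {two_theta(1, 1, 0, 1, q):.4f} deg")
```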
However, after including the third order satellites, it was noticed that the anisotropic peak broadening function of the Stephens formalism [27], as used for 3D periodic crystals, cannot fully account for the peak broadening of the satellite reflections. It is well known that incommensurate modulated structures may exhibit additional broadening of the satellite peaks due to phasons [28]. A formalism based on a fourth-rank covariant strain tensor with a distribution of the strain-tensor components, implemented in JANA2006 [31], was therefore considered in the Le Bail refinements. The ten strain components allowed by the symmetry were all considered in the refinement. This led to a significantly better fit between the observed and calculated peak profiles (Fig. 4(b)), with a decrease in the GOF from a value of 3.09 to 2.60. The values of the refined components of the strain tensor are given in Table I. The higher magnitude of strain for st2011 and st0022 (these coefficients are connected with the terms h²lm and l²m², respectively; for more details see [27] and [28]) confirms the presence of phasons, not considered in any of the previous refinements [23, 35]. It is interesting to note that inelastic neutron scattering studies on the martensite phase of Ni2MnGa have revealed well-defined phasons associated with the charge density wave (CDW) [36]. Our Le Bail refinements using the fourth rank covariant strain tensor provide the first experimental evidence of such phason broadening, resulting from fluctuations in the incommensurate modulation vector, in high resolution SXRPD patterns.

Having established from the Le Bail refinements the incommensurate nature of the modulated structure of the martensite phase of Ni2MnGa, Rietveld refinements were performed. In the Rietveld refinement of the modulated structure, the deviations of the atoms from their positions in the basic structure were described by modulation waves whose refined amplitudes are given in Table II. The average first nearest neighbour interatomic distances obtained in the present study are given in Table III. No unphysically short interatomic distances for Ni-Ga and Ni-Mn are observed, unlike for the commensurate modulation [22]. The Mn-Ga average distance (2.77355(6) Å) obtained here is in excellent agreement with the value (2.780(6) Å) reported by an extended x-ray absorption fine structure (EXAFS) study [37]. Moreover, besides Mn-Ga, the other distances listed in Table III are also close to the EXAFS values. This excellent agreement between the interatomic distances obtained by the XRD technique, which probes the spatially averaged long range structure, and EXAFS, which probes the local structure, shows that local distortion of the structure is rather negligible. However, the Ni-Mn and Ni-Ga distances are smaller than the values derived from the atomic radii (Table III). The shorter Ni-Ga and Ni-Mn distances may be attributed to Ni-Ga and Ni-Mn hybridization.

If one uses only first order harmonic waves, the modulations of the atomic positions are allowed only along the x direction for all the atoms due to the symmetry restrictions [23]. However, higher order modulation waves (second and third) also allow modulations in the y direction for the Ni atom and in the z direction for all other atoms. The displacements along the z direction are found to be within the standard deviations, as can be seen from Table II. Further, a significant displacement along the y direction exists only for the Ni atoms. But more significantly, the atomic displacements along the x direction are different for each atom (Table II). The displacement amplitudes (A1) for the Mn, Ga and Ni atoms are 0.0665(12), 0.0657(9) and 0.0618(9), respectively (see Table II).
Thus the largest A1 is observed for Mn and the lowest for Ni. The values of A1 reported here are significantly different from the earlier XRD results based on consideration of the first order satellites only [23], where these displacements were 0.066(8), 0.070(6) and 0.072(6) for Mn, Ga and Ni, respectively, in the x direction only.

Origin of the Modulated Structure
There are essentially two main theories for the formation of the modulated structure of the martensite phase in Ni2MnGa: the adaptive phase model and the soft-phonon mode based displacive modulation model. In the adaptive phase model [1], the modulated structure is visualized as a nanotwinned state of the Bain distorted phase, which maintains the invariance of the habit plane between the austenite and the martensite phases. In the soft phonon model, the origin of the modulation has been related to a TA2 soft phonon mode in the transverse acoustic branch along the [110] direction of the austenite phase [38-41], which is supported by the observation of a change in the modulation period leading to a premartensite phase before the final structural transition to the martensite phase [39]. The instability of the TA2 phonon is related to a long-range anomalous contribution to the phonon frequency due to electronic screening [42]. It is possible to make a choice between the adaptive phase and soft mode models from a knowledge of the amplitudes of the displacive modulations for the different atomic sites, since they are required to be identical for the former but may be dissimilar for the latter [35]. The considerably dissimilar amplitudes of modulation for the different atomic sites, together with atomic displacements in different directions (Table II), clearly indicate that the modulations in stoichiometric Ni2MnGa cannot be explained in terms of the adaptive phase model but may arise due to soft phonon modes [35]. The observation of a charge density wave in Ni2MnGa also supports the present finding and indicates that the modulation is driven by soft-phonon modes [21]. Thus our present results raise doubts about the validity of the adaptive phase model [1].

The shift of some of the x-ray powder diffraction peaks with respect to the ideal commensurate positions predicted by the adaptive phase model has been attributed by Kaufmann et al. [1] to the presence of stacking faults. Stacking faults are known to broaden and shift the peaks, as discussed in the context of non-magnetic shape memory alloys by Kabra et al. [43]. However, no explanation exists as to why the shifts should occur only for the satellite reflections, as stacking faults are known to affect the main peaks as well. More importantly, simulation of the diffraction patterns from the nanotwinned Ni2MnGa structure with stacking faults has revealed that the observed peak shifts cannot be attributed to stacking faults [16]. Thus, the adaptive phase model fails to explain the incommensurate modulated structure even if the presence of stacking faults is invoked. It is worth mentioning here that although the intensity of the second and third order satellites is comparatively low, they have played an important role in obtaining the modulation amplitudes with greater accuracy and hence in the rejection of the adaptive phase model. Righi et al. [23] could not observe second and third order satellites as they used lower intensity and higher background laboratory source XRD data.
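As an illustrative aside (not part of the original refinement), the site-resolved displacive modulation discussed above can be sketched numerically. To first harmonic order, and ignoring the higher harmonics and any cosine terms, the displacement along x follows u_x(x4) = A1 sin(2π x4), where x4 = q·r is the internal superspace coordinate. Plugging in the refined A1 values makes the inequivalence of the three sites explicit:

```python
import math

# Refined first-harmonic displacement amplitudes along x (Table II, fractional units)
A1 = {"Mn": 0.0665, "Ga": 0.0657, "Ni": 0.0618}

def u_x(site, x4, a=4.21861):
    """First-harmonic displacive modulation along x, u = A1*sin(2*pi*x4),
    converted from fractional units to angstrom with lattice parameter a."""
    return A1[site] * math.sin(2 * math.pi * x4) * a

# Maximum displacement (at x4 = 1/4) for each site: identical amplitudes would
# be expected for the adaptive (nanotwinned) model, but the refined values differ.
for site in ("Mn", "Ga", "Ni"):
    print(f"{site}: max |u_x| = {u_x(site, 0.25):.3f} angstrom")
```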
The Periodicity of the Rational Approximant Structure
We have also simulated the single crystal diffraction patterns along the [010] zone axis to compare our results with those obtained by earlier workers [16, 20, 23]. The simulation was carried out using the JANA2006 package. Fig. 6(a) depicts the reciprocal space section calculated using 1st order satellites only, as per the results of Righi et al. [23]. It is evident that only two satellites can be observed between the main reflections in this case, and therefore the diffraction pattern indicates a 3M like modulation. Although 2nd order satellites were also considered by Righi et al. in their simulations of the single crystal diffraction patterns, they had not observed 2nd order satellites in their diffraction patterns. In the present study we have found unambiguous evidence for the second order satellite reflections, and their consideration in the simulation gives rise to 4 satellites between the main reflections in reciprocal space (Fig. 6(b)). Since we have observed 3rd order satellites as well, we simulated the reciprocal space including these satellites, and the results are shown in Fig. 6(c). There are evidently six satellites between the main reflections, in good agreement with the reported electron diffraction patterns (see Table IV). Our results clearly rule out the 5M description [23] for the rational approximant of the incommensurate phase of Ni2MnGa.

CONCLUSIONS
In conclusion, we have presented the results of (3+1)D superspace Le Bail and Rietveld refinements of the modulated structure of the martensite phase of the stoichiometric Ni2MnGa ferromagnetic shape memory alloy using high resolution synchrotron x-ray powder diffraction (SXRPD) patterns. Our results confirm the incommensurate nature of the modulation and the presence of phason broadening of the satellite peaks. The observation of higher order satellites up to the 3rd order has enabled us to simulate the single crystal x-ray diffraction patterns, which unambiguously reveal six satellites between the main reflections. This conclusively rejects the 5M like rational approximant structure of the martensite phase and confirms the 7M like modulation. The presence of higher order satellites, due to the higher resolution of the SXRPD data, has enabled us to determine the modulation wave vector precisely as q = 0.43160(3)c* = (3/7+δ)c*, where the incommensuration parameter δ = 0.00303(3), and to capture the atomic displacements not only along the x direction but also along the y direction. The inhomogeneous nature of the atomic displacements rules out the adaptive phase model as a possible mechanism for the origin of the modulated structure and suggests the soft phonon mode as the most plausible mechanism of modulation.

Table I: The refined values of the strain components of the fourth-rank covariant strain tensor.
Stability and universal encapsulation of epitaxial Xenes

In the realm of two-dimensional material frameworks, single-element graphene-like lattices, known as Xenes, pose several issues concerning their environmental stability, with implications for their use in technology transfer to a device layout. In this Discussion, we scrutinize the chemical reactivity of epitaxial silicene, taken as a case in point, in oxygen-rich environments. The oxidation of silicene is detailed by means of a photoemission spectroscopy study upon carefully dosing molecular oxygen under vacuum and subsequent exposure to ambient conditions, showing different chemical reactivity. We therefore propose a sequential Al2O3 encapsulation of silicene as a solution to face degradation, proving its effectiveness by virtue of the interaction between silicene and a silver substrate. Based on this method, we generalize our encapsulation scheme to a large number of metal-supported Xenes by taking into account the case of epitaxial phosphorene-on-gold.

Introduction
Epitaxial Xenes are a quickly emerging reality in the realm of two-dimensional (2D) crystals beyond graphene. They were recently sorted into two generations according to the position of the constitutive element (X) in the periodic table (the first generation includes elements in the silicon group, whereas the second includes elements in the boron, nitrogen, and oxygen groups). 1 Compared to other 2D players, they constitute a broad portfolio of materials with variable and configurable properties depending on the growth environment (e.g. the substrate) or the atomistic details (e.g. buckled bonding). As epitaxial materials, they are scalable and thus look promising for industrial applications. The heavier among them are also prone to hosting a 2D topological insulating state of matter. 2,3 While structure and electronic properties may vary throughout the Xene generations, 4,5 a common issue for all single-layer Xenes relates to stability against environmental reactivity. This is basically due to the fact that bonding in Xenes is less stable than in graphene or other layered compounds. A paradigmatic example in this respect is silicene (and silicene-like materials like germanene and stanene), where pseudo sp2 bonding is artificially induced by means of configurational constraints. 6 This synthetic set-up poses several questions concerning the stability of silicene and similar materials. For instance, is silicene prone to degradation under ambient conditions? How does silicene undergo oxidation? How can silicene be protected from degradation? And how can such a protective process be extended to the whole class of Xenes?

In this Discussion, we elaborate on specific aspects of the chemical stability of the Xenes by reporting original data on the exposure of epitaxial silicene to an O2-rich environment in ultra-high vacuum (UHV) growth conditions. Silicene, as well as other Xenes, undergoes degradation in environmental conditions unless a protective layer is applied. Encapsulation, as opposed to passivation, aims to form a barrier to the external atmosphere without changing either the bonding or the functional properties of the Xenes. This can be a difficult task because we need the encapsulating layer to grow on top of the Xene while having minimal interaction with it. In this respect, we state the conditions for a universal encapsulation scheme for metal-supported Xenes and support it with a preliminary ab initio model.
To corroborate the universal character of the approach, we scrutinize epitaxial phosphorene on Au(111) as a second case in point.

Silicene reactivity
The oxidation of silicon constituted a milestone in integrated microelectronic circuits, enabling the ubiquitous integration of silicon. Now that silicon-based devices are approaching ultimate scaling, the oxidation of silicene, namely silicon at the 2D level, may open new technology paths for silicon exploitation in nanoelectronics. One step in this direction is to investigate how the chemical bonding in epitaxial silicene is affected by an oxygen-rich environment. We elucidate this scenario in a variable dosage experiment starting from a UHV condition and monitoring the synchrotron radiation Si 2p core-level photoemission line of epitaxial silicene-on-silver(111) with increasing O2 partial dose. Two silicene configurations were taken into account, the low-temperature mixed 4×4/√13×√13 phase (Fig. 1a) and the high-temperature 2√3×2√3 phase (Fig. 1b), as they were proven to be the precursors for a room temperature operational field effect transistor. 7 Basically, the two configurations differ in terms of their buckled bond pattern, 8 as evidenced by the markedly different positioning of the E2g Raman mode as well as lower frequency features. 9

For each configuration, Si 2p spectra are reported for the freshly grown silicene and after 75 L and 750 L O2 exposure. The Si 2p lines of the freshly grown silicene phases display a multipeak shape profile that was previously rationalized as coming from the interplay of three different elemental (Si-Si) bonding environments termed Siα, Siβ, and Siγ. 10 Similar to Fleurence et al., 11 these components reflect different positioning of the Si atoms with respect to the Ag surface, namely hollow sites, intermediate sites, and coincidence sites. This substrate sensitivity was also observed in a position-dependent local density of states. 12 Limited oxidation takes place when the O2 dosage is varied from 75 to 750 L, as may be deduced from the emergence of a minor feature that is characteristic of Si4+ bonding in SiO2 (see also the magnification of the spectral region in the insets). Conversely, the Si 2p profile changes dramatically in the proportion of its elemental components. This fact is apparent from the evolution of the Si 2p line as a function of the O2 dosage in both silicene configurations, and relies on a substantial variation of the bonding population inside each individual silicene phase as a consequence of oxygen exposure. In other words, silicene reacts to O2 exposure by rebalancing its elemental bonding components with respect to its pristine form, while SiO2 bond formation is quite limited. In detail, the Siα component, corresponding to hollow sites of silicon atoms, undergoes a relative increase with increased O2 exposure from 75 to 750 L. Qualitatively, this behaviour looks like a silicene lattice rearrangement, with silicon atoms preferentially positioned far from the underlying Ag atoms, as if this recombination could diminish the interaction of silicene with the substrate. This picture can be related to previously reported O2 intercalation in the silicon bilayer. 13 If this is the case, an open question is how post-growth O2 exposure can be exploited to disentangle silicene from its substrate without corrupting its elemental lattice, with a substantial benefit for the transfer of silicene onto a device substrate. 7
Ultimately, if silicene is exposed to air, complete oxidation to a SiO2 compound takes place, as follows from the emergence of a broad peak at 103.50 eV corresponding to the Si4+ valence state in Si-O bonding. At this latter stage, it is not clear whether this silicene-derived SiO2 is recast as a 2D oxide layer (conformal to the original silicene) or as a clustered adsorbate. Open questions remain as to what kind of oxidation may take place when considering multilayer silicene as an alternative configuration to conventional cubic-structured silicon, which may unravel new mechanisms for silicon oxidation. More specifically, we wonder whether silicene can be uniformly oxidized so as to form a 2D silicon oxide either in a single layer or in a multilayer state. If so, we also wonder to what extent this silicene oxide would differ from the 2D silicon oxide frameworks grown on metal surfaces as reported by Büchner et al. 14 This is an interesting set of questions aimed at the development of a new type of 2D insulator.

Encapsulation of silicene
The experimental framework
The environmental oxidation of silicene is a hurdle for its usage in applications. Methods to save silicene from degradation have been in demand since the early stages of silicene's debut. One effective option turned out to be Al2O3 encapsulation sequentially carried out in UHV conditions after silicene growth by means of reactive co-deposition of an atomic Al beam with O2 molecules. 15 This methodology preserves the chemical environment of silicene by means of a protective Al2O3 layer, obtained by sequentially growing elemental Al in the first stage and Al in an O2-rich environment in the second stage. The initial elemental Al prevents the silicon from oxidation when the O2 gas is let in during the second stage, so as to have an overall Al2O3 layer at the end. No chemical interaction occurs between silicene and Al2O3, as demonstrated by the unaltered shape of the Si 2p line upon Al2O3 deposition. The Ag 4s feature from the substrate, visible next to the Si 2p line of the pristine silicene, no longer appears after encapsulation, owing to the added Al2O3 thickness and the consequently reduced substrate sensitivity in the photoemission.

Fig. 2: (a) Si 2p core-level photoemission lines taken on freshly grown silicene-on-silver (bottom) and after Al2O3 encapsulation (top). The right-side feature in the former spectrum is related to Ag 4s photoemission from the substrate, and is no longer detected in the encapsulated silicene owing to the Al2O3 layer thickness (∼5 nm); (b) Raman spectra of (top) capped and (bottom) uncapped silicene/Ag(111). In both cases the samples are exposed to air when recording the Raman spectrum, and the silicene lattice is preserved only in the case of the encapsulated sample; (c) Raman spectra of unprotected silicene/Ag(111) obtained 1 minute (top: same as the bottom spectrum in panel (b)) and ∼5 minutes (bottom) after air exposure. The spectra are vertically stacked for clarity.

The Al2O3 encapsulation presents several benefits. It is a process carried out sequentially with the silicene epitaxy within the same UHV environment; as such, it is not affected by collateral contamination. It also works as an insulating layer for applications where a gate is needed. The effectiveness of the Al2O3 encapsulation is phenomenological evidence substantiated by in situ X-ray photoelectron spectroscopy (XPS) and ex situ Raman spectroscopy, as reported in Fig. 2.
As an identication probe, Raman spectroscopy has generally proved to be a superior tool to identify graphene and graphene-related materials. 16 It was used for silicene too in order to check the silicene integrity aer encapsulation, as reported in Fig. 2a. This is demonstrated in the comparative Raman spectroscopy study in Fig. 2b. In detail, Fig. 2b compares the encapsulated silicene (in air) with uncapped silicene in the early stage of degradation upon exposure to ambient air. The spectrum of silicene-on-silver(111) acquired just aer its exposure to ambient conditions shows a feature around 500 cm À1 , composed of a prominent narrow peak at 516 cm À1 , plus a broad shoulder on the lower energy side ($460 cm À1 ). These features are comparable with the Raman active modes of silicene-onsilver(111) substrates maintained in UHV conditions 17 and, together with the band at 800 cm À1 , are indicative of the presence of a partially oxidized ordered SiO 2 phase. 18 More detailed characterization of the Raman spectrum of silicene is extensively reported elsewhere. 9 Here we emphasize the effectiveness of encapsulation in preserving silicene from degradation upon direct exposure. Nonetheless, fast degradation occurs for both silicene congurations aer exposure to air if uncapped, as seen from the XPS spectra reported in Fig. 1. The dynamics of silicene oxidation are so fast that close real-time monitoring of silicene evolution is necessary to unveil the mechanism of Si reactivity under environmental conditions. As uncapped silicene is extracted from the growth chamber and brought into ambient lab conditions, the reaction with O 2 occurs immediately, as evidenced by the Raman spectrum reported in Fig. 2c. Sizable modications affect the Raman spectrum as a function of the exposure time. The disappearance of the sharp peak is accompanied by the broadening of the feature at 500 cm À1 , whose intensity becomes comparable with the feature at 800 cm À1 . These two observations may reect the complete amorphization of the SiO 2 phase resulting from early exposure, thus conrming the recent results obtained by means of in situ Raman spectroscopy at a low O 2 dose. 19 The dynamics of the bare silicene/Ag(111) exposed to ambient conditions are clearly dominated by the reaction with O 2 . The overall process could be depicted as a two-step reaction. The early interaction of the silicene layer with O 2 leads to the formation of partially ordered SiO 2 phase clusters with a small portion of silicene in its original structure. As the process progresses, the system evolves towards the amorphous phase of thin clusterassembled SiO x . To conrm this two-step picture and describe the silicene degradation process, in situ monitoring of the structural properties of the system as a function of the O 2 dose would help to disentangle the two proposed kinetics. The initial tendency of the silicene layer to preserve its order may be related to the interaction with the metallic substrate that makes the Si atoms less prone to oxidation. Overall, a question arises regarding which mechanism favours Al 2 O 3 accommodation on silicene-on-Ag without apparently affecting its structure. A theoretical framework An argument to support the picture as outlined above comes from preliminary density functional theory (DFT) simulations of silicene reactivity towards Al atom adsorption rst, and Al 2 O 3 accommodation second. We propose simulations for two different congurations, free-standing silicene and silicene-on-Ag. 
In order to further investigate this mechanism, a preliminary ab initio investigation of the interaction between Al or O2 and a 4×4 reconstruction of silicene on Ag(111) has been performed by means of DFT calculations detailed in the methods section. The results of the structural relaxations are summarized in Fig. 3, where two relevant aspects can be highlighted. First, an Al atom (light blue sphere) turns out to be favorably incorporated into the crystal lattice of free-standing silicene and strongly affects the silicene structure (blue sphere lattice) after relaxation (Fig. 3a), whereas the same atom on Ag-supported silicene (Fig. 3b) tends to move apart from the Si surface upon relaxation. Hence, one can infer that the silicene layer is chemically "protected" by the hybridization of the Si atoms with the Ag atoms of the substrate, and in fact the calculated binding energy of the Al atom on the silicene layer is reduced by about 0.25 eV in the presence of the Ag substrate, as compared to the free-standing case. It should be noted that this is tendential behavior for an individual Al adatom accommodated on the silicene face that can hardly be matched with the collective behavior of an Al layer grown on silicene. Nonetheless, this individual picture gives us an idea of the tendential influence of the Ag substrate on silicene reactivity during Al2O3 deposition. In the latter case, the "lower reactivity" of the Ag-supported silicene (compared to the free-standing case) leads the O2 molecules (O atoms are depicted as red spheres in Fig. 3) to preferably bind with the Al atoms, resulting in a relaxed structure that only marginally influences the silicene bonds (see Fig. 3c, which illustrates the as-grown and relaxed Al2O3 on Ag-supported silicene). It should be noted that this picture is specific to Ag-supported silicene and may fail in other cases, e.g. silicene on ZrB2, where the extent of hybridization between the silicene layer and the substrate can change. Although it was noted that effective encapsulation can be substrate-dependent, 20 this preliminary outcome suggests a method to apply Al2O3 encapsulation to Xenes supported by noble metal surfaces.

Though preliminary, these data show that the Al2O3 capping protocol can be applied to epitaxial Xenes whose interaction with the substrate is strong enough to inhibit or minimize chemisorption of Al adatoms. This usually happens on noble metal surfaces hosting a Xene layer, as in the case of silicene-on-silver, and should be further investigated for silicene deposition on sapphire. 21 A question arises as to whether other Xenes of the same kind may benefit from such an encapsulation scheme. More generally, we wonder whether the encapsulation reported here can be taken as a universal methodology for Xenes on metal surfaces. In the following section we partially address this by taking into account the case of epitaxial phosphorene.

Encapsulation of epitaxial phosphorene
The strategy developed for the encapsulation of silicene on a metallic substrate has been fruitfully exploited to prevent the oxidation of exfoliated black phosphorus flakes on a SiO2/Si substrate. 22 The encapsulation thus also works for highly reactive van der Waals materials on inert substrates. Therefore, it is interesting to further explore its applicability to the vast family of 2D materials with a focus on epitaxial Xenes.
Epitaxial phosphorene is a good case for which to validate Al2O3 encapsulation for metal-supported Xenes beyond silicene-on-silver. Epitaxial phosphorene results from the vacuum evaporation of a P4 molecular flux that undergoes dissociative chemisorption on an Au(111) surface. In the substrate temperature range 240-270 °C, the chemisorbed phosphorus molecular species self-arrange in a flower-like honeycomb pattern with hexagonal symmetry, as shown in the atomically resolved scanning tunneling microscopy (STM) topography in Fig. 4a. We name this structure epitaxial phosphorene to discriminate it from a single layer of black phosphorus, conventionally termed phosphorene. In contrast to silicene, epitaxial phosphorene possesses a local semiconducting character 23,24 and completely covers the Au terraces with a single superstructure (Fig. 4b).

The observed morphology was originally attributed to an epilayer with the same buckled structure as the theoretically predicted free-standing blue-phosphorene lattice. In this model all the petals of the flower-like pattern correspond to P16 blue-phosphorene islands matching half of the 5×5 Au(111) unit cell. 23-25 However, close inspection of the high-resolution STM image in Fig. 4a reveals that only a few petals show the six bright protrusions typical of a P16 blue-phosphorene island, while most of them consist of three bright protrusions. This has been explained by introducing a model where two local atomic configurations may coexist. 26,27 Beside sparse P16 blue-phosphorene islands (see the blue rhombus in Fig. 4a), "adatom-rich" structures are also possible. In the "adatom-rich" structure three additional phosphorus atoms are accommodated on the top layer (see the yellow rhombus in Fig. 4a). More recently, a new interpretation of the structure of epitaxial phosphorene questioned all the previously discussed atomic models. As reported in ref. 28, the structure of epitaxial phosphorene may correspond to an Au-decorated phosphorene network rather than an epilayer. According to this model, the triangular petals coincide with buckled blue-phosphorene P9 monomers (with the top three P atoms protruding at a larger distance from the Au substrate) linked to three other P9 monomers on their sides by gold adatoms coming from the substrate (see the green rhombus in Fig. 4a).

Like silicene (but to a different extent), epitaxial phosphorene is subject to oxidation when exposed to air. Its stability is elucidated in the comparative XPS study in Fig. 4c, where the P 2p core level photoemission line is reported for freshly grown epitaxial phosphorene (bottom) and after exposure to air (top). As can be noted from the persistence of the elemental P-P bonding component at 129.4 eV (denoted P0 in Fig. 4c), epitaxial phosphorene still survives after air exposure, in marked contrast to silicene. The photoemitted P 2p core electrons of air-exposed epitaxial phosphorene show a broad oxide peak from 132 to 136 eV. By fitting this oxide peak with spin-orbit doublets with a separation of 0.86 eV and a fixed 2:1 intensity ratio, the resulting components at 134.7 eV and 133.8 eV can be attributed to phosphorus oxidation states of +5 and +3, respectively, whereas the component at 132.1 eV can be assigned to lower oxidation states. Thus, the oxidation of epitaxial phosphorene in air may involve the formation of P2O5 and P2O3 oxides and P2Ox (x < 3) suboxides, similar to what has been reported for air-exposed black phosphorus films. 29,30
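A minimal sketch of the doublet-constrained fit described above is given below. It is illustrative only: Gaussian line shapes, synthetic data and no background model are assumed; only the 0.86 eV spin-orbit splitting, the 2:1 area ratio and the component positions come from the text (a real analysis would typically use Voigt profiles on background-subtracted data).

```python
import numpy as np
from scipy.optimize import curve_fit

SPLIT, RATIO = 0.86, 2.0  # P 2p spin-orbit splitting (eV) and 2p3/2 : 2p1/2 area ratio

def doublet(E, pos, area, sigma):
    """One P 2p doublet: 2p3/2 at `pos` plus its 2p1/2 partner at `pos`+SPLIT with half the area."""
    g = lambda c, a: a / (sigma * np.sqrt(2 * np.pi)) * np.exp(-((E - c) ** 2) / (2 * sigma**2))
    return g(pos, area) + g(pos + SPLIT, area / RATIO)

def model(E, *p):
    """Sum of three doublets (components near 132.1, 133.8 and 134.7 eV)."""
    return sum(doublet(E, p[i], p[i + 1], p[i + 2]) for i in (0, 3, 6))

E = np.linspace(130.5, 137.5, 400)
true = (132.1, 1.0, 0.45, 133.8, 1.5, 0.45, 134.7, 2.0, 0.45)
rng = np.random.default_rng(0)
y = model(E, *true) + rng.normal(0, 0.02, E.size)  # synthetic "oxide peak"

popt, _ = curve_fit(model, E, y, p0=(132.0, 1, 0.5, 134.0, 1, 0.5, 134.9, 1, 0.5))
print("fitted 2p3/2 positions (eV):", np.round(popt[::3], 2))
```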
Oxygen is known to play an important role in the preliminary stages of black phosphorus oxidation, 31 but alone it does not lead to a complete physical degradation of the material. Short-term exposure to high-purity O2 has been shown to oxidize few-layer black phosphorus at a much slower rate than the combined action of oxygen and water, producing low oxidation states. 31 Similarly, epitaxial phosphorene shows a remarkably high oxidation endurance to limited molecular oxygen dosing in UHV. 26 At higher doses, oxygen atoms preferentially adsorb on top of one of the protruding phosphorus trimers without degrading the epitaxial phosphorene superstructure. 32 These facts reveal a peculiar oxidation chemistry of epitaxial phosphorene that differs from that of silicene and shares some similarities with the oxidation mechanism of black phosphorus. Accurate investigations are needed to disentangle the oxidation scheme of epitaxial phosphorene and resolve open questions such as the chemistry of epitaxial phosphorene with water, alone or in combination with another oxidizing agent like oxygen, as well as the influence of exposure to light. The effect of the Al2O3 encapsulation of epitaxial phosphorene is elucidated in Fig. 4d by comparing the P 2p line prior to and after Al2O3 capping. Despite the reduced photoemission intensity (due to the extra thickness of the Al2O3 top layer), the shape profile of the P 2p line shows no substantial change except for a −1.2 eV shift of the binding energy between the two lines (to a lesser extent a similar shift is also observed for silicene on Ag). The former fact validates the encapsulating action of the Al2O3 layer and positions Al2O3 encapsulation as a candidate for a general approach to the stabilization of Xenes supported on metal substrates. The latter fact stems from the peak positioning with reference to the Au 4f peaks (the binding energy of the core level peaks is calibrated against the Au 4f position, as usual, for each process stage). Shifts in the binding energies can be due to charge incorporation causing the Fermi level to move away from its pristine position in the uncapped layer. We interpret this behaviour as being due to the formation of an interface dipole around the phosphorene environment, according to which fixed charges intrinsic to the Al2O3 film are compensated by an image charge in the Au/phosphorene superstructure. 33,34 This picture is corroborated by the observation that the original binding energy is restored whenever the Al2O3-capped phosphorene is exposed to air (data not shown), insofar as the environmental reactivity may induce the saturation of charged defects in Al2O3 and a reset of the electrical balance between the stacks. The oxidation of epitaxial phosphorene is a key aspect to understand in order to develop processing protocols, such as layer transfer, aimed at device fabrication. Recently, it was shown that Al2O3 encapsulation can save epitaxial phosphorene from complete degradation in a wet environment (used for substrate etching). 26 Nonetheless, partial oxidation takes place and it is not clear at the moment what the relevant reactivity path is. Clarifying it will open a framework for engineering processes that enable the full exploitation of large-scale phosphorus at the 2D level, thus bypassing the current size limitation of black phosphorus flakes.

Summary

Xenes are a new frontier in the physics and chemistry of 2D materials. However, their stability in environmental conditions is a hurdle for applications.
This Discussion provides a playground to explore several aspects of the chemical reactivity of Xenes and to develop a general methodology to prevent the environmental degradation of Xenes out of vacuum. The general character of the Discussion is to propose data inspiring open questions or issues yet to be solved. In detail, the first aspect considered is the chemical reactivity upon O2 exposure and in air. This is showcased for the paradigmatic epitaxial silicene-on-silver by means of photoemission and Raman spectroscopy investigations. Second, a solution to the fast degradation of silicene is proposed, which consists of the sequential encapsulation of silicene with an Al2O3 capping layer. The effectiveness of the approach is substantiated and supported by a preliminary DFT model suggesting that the silicene-silver interaction is a key mechanism in preserving silicene integrity after Al2O3 deposition. This idea is further developed for epitaxial phosphorene-on-gold in order to validate the universality of the approach within the Xene-on-metal framework. In the latter case, the encapsulation works as effective protection and, due to the peculiar charge exchange at the Al2O3-phosphorene interface, dipole-induced electronic band bending is reported as an effect of the heterostructure. Additional Xenes on metal substrates will represent promising tests to unravel possible universal oxidation mechanisms as well as to develop encapsulation processes.

Sample growth

Epitaxial growth of Xenes was performed under UHV conditions (base pressure 10⁻¹⁰ mbar) in a temperature range of 210–290 °C by means of a molecular flux released from e-beam evaporators or effusion cells onto metal (Ag, Au) film surfaces supported by mica substrates and prepared by several cycles of Ar⁺ (1 keV) sputtering and annealing at 500 °C. Oxygen dosage experiments were performed in the UHV environment by exposing the silicene surface to an O2-rich atmosphere (100 s and 1000 s at 1 × 10⁻⁶ mbar to obtain 75 and 750 L, respectively).

Experimental characterization

Synchrotron radiation core-level photoemission spectroscopy in Fig. 1 was performed at the VUV beamline of the Elettra Synchrotron Radiation facility in Italy, with the electron spectrometers placed at 45° with respect to the direction of the horizontally polarized photon beam and with a photon energy hν = 130 eV. XPS data in Fig. 2 and 4 were acquired by means of a non-monochromatic Mg Kα source (1253.6 eV), and their energy positioning was calibrated by taking the tabulated Ag and Au core-level positions as references for the case of silicene and phosphorene, respectively. Raman spectroscopy in Fig. 2 was performed with a single 60 s long acquisition using an InVia Renishaw spectrometer with the 514.5 nm line of an Ar⁺ laser. STM topographies in Fig. 1 and 4 were acquired at room temperature with an Omicron scanning tunneling microscope (STM) equipped with a home-made chemically etched tungsten tip.

DFT modelling details

Ab initio simulations were performed using DFT with the PBE exchange-correlation potential in the generalized gradient approximation (GGA) 35 and a non-local functional for the van der Waals forces, 36 as implemented in Quantum Espresso. 37 We use a plane-wave basis set with a kinetic energy cutoff of 40 Ry and the Γ point for sampling the Brillouin zone of the simulated supercell. The silicene/Ag system has been modeled by a supercell containing five Ag(111) atomic layers with the silicene layer on top.
The supercell is periodic in the x–y directions, while 15 Å of vacuum has been used to avoid interactions between periodic replicas in the z direction, and a dipole correction has also been applied. A single Al atom, an O2 molecule, or an Al2O12 cluster has been placed on the silicene surface at an initial distance that ensures the formation of bonds between the Si atoms of the silicene layer and the Al or O atoms on top. Structural relaxations have then been performed, using a quasi-Newton algorithm, until the average atomic force was lower than 10⁻⁴ Ry per Bohr, both including the Ag substrate and considering a bare free-standing silicene layer.

Conflicts of interest

There are no conflicts to declare.
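As a rough illustration of the workflow just described, the sketch below drives Quantum Espresso through ASE. It is a hypothetical reconstruction: the structure file, pseudopotential labels and any settings other than those quoted above (40 Ry cutoff, Γ-point sampling, non-local vdW functional, quasi-Newton relaxation to 10⁻⁴ Ry per Bohr) are placeholders.

```python
from ase.io import read
from ase.calculators.espresso import Espresso
from ase.optimize import BFGS

# Pre-built supercell: five Ag(111) layers, 4x4 silicene on top, 15 A of vacuum
# (placeholder file name; the adsorbate would be added to this structure).
slab = read("silicene_on_Ag111.in")

calc = Espresso(
    pseudopotentials={"Ag": "Ag.pbe.UPF", "Si": "Si.pbe.UPF",
                      "Al": "Al.pbe.UPF", "O": "O.pbe.UPF"},
    input_data={
        "control": {"calculation": "scf", "tprnfor": True},
        # 40 Ry cutoff; non-local vdW functional on top of the GGA exchange.
        "system": {"ecutwfc": 40, "input_dft": "vdw-df"},
        "electrons": {"conv_thr": 1e-8},
        # Dipole-correction flags (tefield/dipfield, etc.) omitted for brevity.
    },
    kpts=None,  # Gamma-point-only sampling of the supercell Brillouin zone
)
slab.calc = calc

# Quasi-Newton relaxation; 1e-4 Ry/Bohr is roughly 2.6e-3 eV/Angstrom.
BFGS(slab, logfile="relax.log").run(fmax=2.6e-3)
```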
Spontaneous external iliac artery dissection treated conservatively: A case report and review of the management options

Introduction and importance
Spontaneous iliac artery dissection (IAD) is a rare condition that is usually associated with connective tissue diseases. Complications include ischemia due to malperfusion and bleeding due to rupture. Treatments vary depending on the expertise and presenting symptoms; they include conservative, endovascular, and surgical options.

Presentation of case
Here, we present the case of a 45-year-old man who presented with right lower quadrant pain and hypertension as well as normal laboratory results. A contrast-enhanced computed tomography (CT) scan of the abdomen revealed an isolated dissection of the right external iliac artery. The patient had intact distal pulses and no other abnormal findings. He was admitted to the intensive care unit to control his high blood pressure with antihypertensive medications. The patient recovered well and was discharged home in stable condition on antiplatelet and antihypertensive therapy. Follow-up of the patient continued for one year.

Discussion
Given the rarity of this disease, the treatment protocols and outcomes are still a matter of ongoing debate. Complicated cases with rupture should be treated on an emergency basis using open and endovascular repairs. In asymptomatic and symptomatic patients without rupture, medical treatment and possibly endovascular treatments are considered.

Conclusion
Conservative management of uncomplicated asymptomatic IAD should be considered as first-line therapy.

Introduction

Although peripheral artery dissection arises most frequently in conjunction with aortic dissection, spontaneous peripheral arterial dissection that occurs without aortic dissection has rarely been reported [1]. Spontaneous isolated iliac artery dissection (IAD) is a rare vascular-pathological condition that causes arterial dissection in the common iliac artery [2]. Only a few cases have been reported in the literature. Although this rare vascular-pathological condition may occasionally be asymptomatic, arterial dissection can have catastrophic consequences and might be fatal [2].

Clinically, IAD can mimic arterial limb ischemia and present with symptoms such as sudden acute-onset pain, pulselessness, paleness, paresthesia, and poikilothermia, according to reported cases in the literature [3]. Nevertheless, IAD can remain asymptomatic and undiagnosed for years, only to be incidentally discovered during diagnostic testing for other concurrent conditions [2,3]. Here, we report a case of asymptomatic isolated right external IAD found incidentally during a work-up for abdominal pain and managed conservatively. This case report has been described in line with the SCARE criteria [4].

Presentation of case

A 45-year-old man is known to have type 2 diabetes mellitus and colonic diverticular disease. He presented to the emergency department of Dr.
Sulaiman Al Habib Hospital, Riyadh, Saudi Arabia, with a complaint of chronic right lower quadrant pain for the past three months. The pain was intermittent, did not increase with meals, and was not radiating. The patient denied any history of weight loss, fatigue, anorexia, jaundice, diarrhea, or vomiting. Additionally, the patient denied any vigorous exercise and has no history of smoking or alcohol intake. He had a long history of constipation for 10 years on intermittent laxative medications. The past personal history and family history were unremarkable. Upon general physical examination, the patient was well-nourished, in moderate pain, and had no pallor or jaundice. His blood pressure was 210/110 mmHg, and he was not taking antihypertensive drugs. Abdominal examination revealed minimal tenderness in the right lower quadrant area with no palpable masses. The lower limb examination was unremarkable, with palpable pulses and no sensory or motor neurological deficit. Laboratory values were all within the normal range (Table 1).

Despite the absence of alarming symptoms such as weight loss, fatigue, anorexia, or jaundice, the chronicity and persistence of the right lower quadrant pain, in addition to the patient's diverticular disease, warranted further investigation to rule out underlying pathology. Therefore, a contrast-enhanced computed tomography (CT) scan of the abdomen was ordered to thoroughly evaluate the underlying cause of the chronic pain, assess for possible complications related to known comorbidities, and exclude other differential diagnoses. The imaging showed an incidental finding of a right external iliac artery dissection. The proximal entry tear was in the mid-external iliac artery and extended for 1.8 cm. A distal exit tear was present, and there was no flow limitation, dilatation, or aneurysmal change (Fig. 1). The abdominal aorta and common iliac arteries were mildly calcified without dissection (Fig. 2). The past personal history did not reveal conditions such as connective tissue disorders, vascular diseases, hypertension, intermittent claudication, arrhythmia, cardiac disease, trauma, intervention, or drug therapy. The past surgical and family histories were unremarkable. The patient's collagen connective tissue work-up was unremarkable. The ankle-brachial index was found to be 1.1 on the right and 1.2 on the left. The patient was considered asymptomatic with respect to the dissection, with the pain attributed to his chronic colonic diverticular disease; additionally, there was no clinical history or signs of claudication or limb ischemia.
Based on the history and clinical findings, conservative treatment was chosen. The patient was admitted to the intensive care unit for observation and blood pressure control. A left radial arterial line and a right internal jugular central venous catheter were inserted for continuous monitoring. Vital signs and peripheral pulses were checked hourly, and meticulous measurement of input and output was maintained. The patient was kept fasting with intravenous hydration. Intravenous labetalol (1 mg/mL solution) was started at an infusion rate of 20 mg/h. The rate was increased to 40 mg/h, and the blood pressure was stabilized. The following day, the blood pressure treatment transitioned to oral beta blockers, and he started a progressive oral diet. After his blood pressure was controlled, he was discharged home in good condition on antiplatelet therapy (acetylsalicylic acid, 81 mg orally daily) and antihypertensive therapy (atenolol, 100 mg orally daily). The patient was followed up in the outpatient department. He remained stable throughout a one-year follow-up period, and CT examination revealed no changes (Fig. 3).

Discussion

Peripheral artery dissection arises most frequently in conjunction with aortic dissection; however, spontaneous peripheral arterial dissection that occurs without aortic dissection has rarely been reported [1]. The pathophysiology of these lesions is complex, and dissection may occur as a result of uncontrolled hypertension. Notably, the risk of aortic dissection is significant at a systolic blood pressure of more than 132 mmHg and a diastolic blood pressure of more than 75 mmHg [5]. High blood pressure can initiate the intimal injury that eventually results in the dissection.

In the presented case, the patient had diabetes mellitus, a recognized risk factor for cardiovascular diseases and peripheral arterial disease. The exact mechanism linking diabetes to arterial dissection is uncertain, but the vascular changes in diabetes, such as endothelial dysfunction and oxidative stress, may compromise arterial wall integrity, rendering it more vulnerable to dissection [6]. Therefore, it is important for clinicians to recognize this disease as a potential predisposing factor for arterial dissection.

The optimal approach to treating isolated IAD remains a subject of debate. The urgency of intervention is dictated by the patient's clinical presentation; those presenting with acute limb ischemia or rupture require immediate endovascular or open repair [7]. Conversely, conservative management is typically reserved for asymptomatic individuals. However, due to the rarity of the condition, there are currently no established management guidelines for asymptomatic IAD patients. According to a study by Liang et al. [2], asymptomatic patients can safely undergo conservative treatment without experiencing complications. Nevertheless, there is still a lack of consensus on follow-up recommendations for non-surgical patients. These patients are at increased risk of arterial aneurysm development due to the nature of the disease. Therefore, administering β blockers and conducting annual clinical evaluations, along with screening using ultrasonography or CT to monitor for potential arterial aneurysm formation, is recommended. We chose this approach with close clinical, instrumental, and laboratory monitoring. Clinical attention was focused on hemodynamic stability given the asymptomatic nature of the patient's disease.
Open surgery is currently the most common treatment for these conditions, especially in the presence of rupture. While most investigators use percutaneous transluminal angioplasty with endovascular stents [8][9][10], some cases of IAD have been treated with balloon angioplasty alone [11]. Endovascular intervention can be considered if there are symptoms (mostly pain) without signs of rupture [8][9][10].

Endovascular intervention for IAD aims to keep the distal and hypogastric arteries perfused while excluding the false lumen and achieving thrombosis in it by sealing the proximal entry tear. Kwak et al. [12] discussed the benefits of endovascular treatment for IAD. The authors described two cases of iliac axis dissections that were successfully treated with the implantation of self-expanding stents, and post-intervention follow-up showed good results [12]. Additionally, Yoshida et al. [13] reported a total endovascular treatment of common IAD using two covered stents to maintain pelvic circulation. The authors placed two covered stents in the common and external iliac arteries. The hypogastric artery's origin was not covered, and the post-intervention follow-up showed good results.

During endovascular intervention for IAD, at least one hypogastric artery needs to be preserved to avoid ischemic complications [14]. Maintaining the patency of the hypogastric artery in young patients, especially those with idiopathic IAD, is preferable, as the contralateral iliac artery may become affected by other vascular conditions (stenosis, aneurysms) with aging. In contrast, it is not recommended to use endovascular devices in IAD caused by collagen disease [15].

In our case, conservative management was chosen based on the patient's clinical presentation, imaging findings, and the stable hemodynamics achieved through prompt pharmacological therapy. Despite potential risks, the patient remained asymptomatic during follow-up with no adverse outcomes on imaging. The favorable response to conservative measures, such as blood pressure control, supports the decision to avoid invasive intervention. This case highlights the importance of individualized treatment approaches in managing rare vascular-pathological conditions like isolated IAD, where conservative management may offer a safe and effective alternative to invasive procedures, particularly in asymptomatic patients without evidence of disease progression.

Conclusion

Spontaneous isolated dissection of the iliac artery is a rare vascular-pathological condition. The treatment options should be individualized based on the patient's clinical presentation. We advocate that medical therapy alone should be the first line of treatment for uncomplicated asymptomatic dissection. Nevertheless, careful monitoring can help detect forms that tend towards morphological changes, increasing the risk of rupture.

Patient perspective and informed consent

Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.

Fig. 1. A contrast-enhanced computed tomography (CT) scan of the abdomen showed an incidental finding of a right external iliac artery dissection (blue arrow).
Fig. 2. Coronal view of a CT scan of the abdomen revealed a mildly calcified abdominal aorta and common iliac arteries without dissection (blue arrow).

Table 1. Laboratory values.
Meteors in the Maori Astronomical Traditions of New Zealand

We review the literature for perceptions of meteors in the Maori cultures of New Zealand. We examine representations of meteors in religion, story, and ceremony. We find that meteors are sometimes personified as gods or children, or are seen as omens of death and destruction. The stories we found highlight the broad perception of meteors found throughout the Maori culture and demonstrate that some early scholars conflated the terms comet and meteor.

Introduction

The Māori of New Zealand (Aotearoa) are a Polynesian people who descend from the Cook Island Māori and other Eastern Polynesian groups (King, 2003). The Māori migrated to Aotearoa in the 13th century, travelling by waka (canoe) from Rarotonga (Anderson, 2009; Walter & Moeka'a, 2000). This Great Migration comprised seven or eight waka containing the ancestors of the present-day Māori people of Aotearoa (Evans, 2009). Many Māori can trace their lineage back to one or more of the original waka.

The arrival of Europeans (Pakeha) in the late 18th century marked a change in Māori culture. The Māori quickly learnt to read and write, being introduced to these new skills by the Pakeha. The Māori embraced these skills in order to preserve their knowledge and oral traditions. Countless documents exist, written by both Māori and Pakeha, that contain the knowledge and traditions of the Māori. Researchers are still tracking down and sifting through these documents for many aspects relating to Māori culture, particularly astronomy (e.g. Harris & Matamua, 2012; Orchiston, 2000). We present here a brief examination of the cultural knowledge of Māori astronomy by focusing on their folklore and legends of meteors or shooting stars. We examine many well-known records for references to meteors in story, religion, and ceremony. The majority of published information about Maori astronomical traditions, including meteor names, comes from the work of Elsdon Best (1955). His published study of Maori astronomy remains the most detailed and comprehensive to date.

Meteor Names

In Aotearoa, meteors have many names, varying from region to region. In the Bay of Plenty, meteors are known as matakōkiri (the darting ones), tūmatakōkiri, kōtiri, kōtiritiri, tamarau, and possibly unahi o Taero (Stowell, 1911:199; Best, 1955:69). The Ngatiawa tribe near Whakatane say that Taneatua, the tohunga (priest) of the Mataatua (one of the seven original waka), brought comets and meteors with him on the Great Migration and released them into the southern skies (Kingsley-Smith, 1967). The Ngatiawa call these meteors Rongomai, a name also used to denote a comet, in particular Halley's Comet (Stowell, 1911:200).

Perception of Meteors

Perceptions of meteors were diverse among the Māori. Meteors were generally seen as omens of evil or death (Best, 1955; Mackrell, 1985:21-28). A meteor may portend the death, or the rise and fall, of a chief (Best, 1955:70). They were also viewed as star children or personifications of supernatural beings or ancestors (ibid). The physical characteristics of a meteor, such as its brightness and trajectory, have special meaning to the Māori (ibid). Bright meteors denote good omens, while fainter ones denote evil omens. If a meteor is seen heading toward the observer, it is a good sign (ibid). For example, the matakōkiri are stars that have wandered out of their places and have been struck by their elders, the sun and moon.
If a matakōkiri appeared to approach a person directly, it was seen as a good omen (ibid). Meteors are sometimes referred to as Raririki (little shining ones). The twinkling stars are children playing across the robe of Rangi (the sky father). Occasionally one of the children will trip and fall, flashing across the sky in a brilliant light (Reed, 1950:190).

Stories of Gods

Meteors are personified as atua (supernatural beings; Best, 1955:70). When an atua is expelled from the sky for behaving badly, he is seen as a meteor. Atua are also known to occasionally visit the earth (Orbell, 1996:165), suggesting a link between meteors and meteorites.

Rongomai was an atua who provided guidance and protection in war. We know that Rongomai is used to refer to Halley's Comet (Tregear, 1891:425). But Rongomai was also known to "move through space" and "give off sparks". Best (1955:67) cites an account by Rev. R. Taylor, who claimed that when the Pakakutu pa (fort) at Otaki was besieged, Rongomai was seen in broad daylight as a "fiery form rushing through space", striking the ground and causing dust to rise. This description clearly illustrates a fireball and subsequent meteorite impact, not a comet. Otaki is approximately 65 km north of Wellington on the western coast of the North Island. Best also describes a place named Te Hapua o Rongomai at Owhiro Bay, south of Wellington, where an atua is said to have descended to earth. The motif of a star falling from the sky and impacting in the distance during or preceding a war or battle is found across the world (e.g. Avilin, 2007). This may not actually describe a witnessed event, but may simply be an idea incorporated into folklore.

Tutaka, one of Best's male informants from the Tuhoe tribe, stated that Tunui is not a star but a demon: a spirit that flies through space and has a "big head" (Best, 1955:68). Best categorizes Tunui as a comet, but the description clearly indicates that Tunui is a bright meteor or fireball. The appearance of Tunui signals that someone has died. This is also a common perception among Aboriginal groups in northern Australia (Hamacher & Norris, 2010).

In many early writings about astronomical traditions, comets and meteors are often conflated (see Hamacher & Norris, 2010). Another story recorded by Best (1955:68) highlights this. He describes Tunui and Te Po-tuatini as spirits that fly through space, which Best identifies as comets. They are seen in the night sky and they are atua toro: inquisitive, reconnoitring gods. Their human mediums (usually the tohunga) placate and influence them by means of a ritual saying. Thus, Tunui is employed as a war-god and certain invocations are addressed to him. Comets do not appear to "fly" through space, nor are they fleeting. Instead, they gradually move across the sky from night to night. It is clear that what is described is a meteor and not a comet.

Stories of Ancestors & Men

An ancestor and supernatural being named Tūmatakōkiri is seen as a meteor, according to an "old warlock" of the sons of Awa (Best, 1955:70). Tūmatakōkiri foresees the positions of celestial bodies, and the seasonal and weather conditions, as he flies through the skies (ibid). If he moves downward, the following season will be windy. If he maintains a level trajectory, the following season will be successful and bear much fruit.

Hape from Ohiwa is the ancestor of the Te Hapu Oneone people ("the people of the soil"; Orbell, 1996:45). He had two sons, Tamarau and Rawaho.
Rawaho was the eldest and a tohunga; however, it was Tamarau who entered the house of his father's death first and inherited his father's powers. This turned him into an atua and gave him the power of flight; he would fly around from place to place. Tamarau lived in Kawekawe, but if anyone approached his house he would turn into a meteor and fly away.

In Maori astronomical traditions, the bright star Sirius (Alpha Canis Majoris) is called Rehua (Stowell, 1911:201-202). The star is said to have come as a "flaming star from out of the dark-hole", a reference to the Coalsack nebula near the Southern Cross. Rehua flew across the sky with "lightning speed", venturing among the stars before finally settling in his current place in the sky. The motif of a flaming star emerging from the Coalsack is also found in Aboriginal traditions of Australia (Hamacher & Norris, 2010).

Stories of Destruction

Māori mythology is rife with connections between fire, the disappearance of the moa (a large, now-extinct flightless bird akin to an ostrich or emu), and an object falling from the sky (Snow, 1983; Steel & Snow, 1992; Bryant, 2001). Meteors were believed to bring fire to the earth, suggesting a cultural memory of an airburst or meteorite impact event. According to Steel & Snow, the word moa itself is recent; in an early period, before the flames, the Māori called the bird Pouakai, but later it was called Manu Whakatau. One translation of this is "bird felled by strange fire". The following Māori poem highlights this view (Steel & Snow, 1992:571):

"Very calm and placid have become the raging billows
That caused the total destruction of the moa
When the horns of the Moon fell from above down".

Steel & Snow (ibid) cite a report of a conversation with an 88-year-old Māori chief who claimed that: "The moa disappeared after the coming of Tamaatea (a man/god) who set fire to the land. The fire was not the same as our fire but embers sent by Rangi (the sky). The signs of the fires are still to be seen where red rocks like berries are found."

Attempts to directly relate these oral traditions to a meteorite impact have been problematic and contentious. In 2003, Dallas Abbott and her colleagues reported the discovery of a putative submarine impact crater 250 km south of Stewart Island (48.3° S, 166.4° E) with a diameter of 20 ± 2 km that they believe impacted in 1443 CE (Abbott et al., 2003, 2005). Abbott and her colleagues named the structure Mahuika, after the Māori god of fire, believing this to be the impactor that sparked the Māori traditions described in this section. Impacts large enough to create a crater 20 km wide are believed to occur on Earth once every three million years (Collins et al., 2005), casting doubt on the proposed date of 1443 CE. The impact hypothesis relating the Mahuika structure to the Māori traditions has been challenged (Goff et al., 2003) but remains a topic of contentious debate (Bryant et al., 2007). An impact origin of the structure is still awaiting confirmation.

Discussion

Some of the stories in Māori traditions seem to describe a meteorite fall or impact. Only nine meteorite finds have been confirmed in New Zealand. In order of discovery, they are Wairarapa Valley (1863: Find), Makarewa (1879: Find), Mokoia (1908: Observed Fall), Morven (1925: Find), View Hill (1953: Find), Waingaromia (1970: Find), Duganville (1976: Find), Kimbolton (1976: Find) (Grady, 2000), and Ellerslie (2004: Observed Fall).
There is currently no confirmed connection between known meteoritic events and those recorded in Māori traditions. The meteor traditions from Otaki and Owhiro Bay that may describe an impact are in the same general region as the Wairarapa Valley meteorite find of 1863, but any connection between them is speculative. There is no confirmed impact crater associated with the proposed impact event that describes the destruction of the moa. There are no known reports of the Māori using meteoritic material for practical or social purposes. However, these are topics of current research.

Conclusion

We have highlighted various views of meteors in Māori stories, religion, and ceremony. The Māori have a broad perception of meteors. We find that they often view meteors as atua (supernatural beings) or ra ririki (children of light). However, meteors are also synonymous with fire and destruction. This is similar to other cultures around the world where meteors are viewed as bad omens (e.g. Hamacher & Norris, 2010). We also find that references to comets and meteors are often conflated, with descriptions of meteors being mistaken for comets.
Matching in the Air: Optimal Analog Beamforming under Angular Spread

Gbps wireless transmission over long distances at high frequency bands has great potential for 5G and beyond, as long as high beamforming gain can be delivered at affordable cost to combat the severe path loss. With a limited number of RF chains, the effective beamwidth of a high gain antenna will be "widened" by channel angular spread, resulting in gain reduction. In this paper, we formulate analog beamforming as a constrained optimization problem and present a closed form solution that maximizes the effective beamforming gain. The optimal beam pattern of the antenna array turns out to "match" the channel angular spread, and the effectiveness of the theoretical results has been verified by numerical evaluation via exhaustive search and system level simulation using 3D channel models. Furthermore, we propose an efficient angular spread estimation method using as few as three power measurements and validate its accuracy by lab measurements using a 16×16 phased array at 28 GHz. The capability of estimating angular spread and matching the beam pattern on the fly enables high effective gain using low cost analog/hybrid beamforming implementations, and we demonstrate a few examples where substantial gain can be achieved through array geometry optimization.

I. INTRODUCTION

5G systems will adopt millimeter wave (mmWave) frequency bands to meet the capacity demand for future mobile broadband applications and new use cases [1]-[3]. However, the high path loss and sensitivity to blockages [4]-[6], channel state information acquisition challenges [7], hardware limitations and other difficulties [8] make it challenging to provide high user rates at high frequencies without shrinking the traditional cell coverage range. The critical part of high frequency links is the antenna and the associated beamforming method. High beamforming gain is essential to combat the severe path loss such that Gbps throughput over long distances and coverage in non-line of sight (NLOS) areas can be realized. Full digital beamforming, capable of altering both amplitude and phase for each antenna element, is costly as it requires a dedicated RF chain for every antenna element and powerful baseband processing. Analog or hybrid beamforming with a limited number of RF chains will be used in most of the products intended for mmWave frequency bands. However, owing to the channel angular spread and the limited number of RF chains, the effective beamwidth of the antenna will be "widened" by the channel, as illustrated in Fig. 1, resulting in reduced effective beamforming gain. This can be intuitively understood by an analogy of lighthouse beacons being scattered in fog, leading to shortened reach. A sample measured beam pattern, presented in Fig. 1, shows 4.5 dB gain reduction as compared to its nominal gain of 14.5 dBi (as measured in an anechoic chamber). Previous measurement campaigns have reported significant loss of directional gain in various deployment scenarios, including suburban fixed wireless access (FWA) [9], [10], indoor offices [11], and industrial factories [12], where up to 7 dB gain reduction (90th percentile) out of 14.5 dBi nominal gain was reported.

Angular spread has been widely acknowledged and carefully modeled for wireless communications, for example, by the 3rd Generation Partnership Project (3GPP) [13]. It is different in azimuth and in elevation for most relevant deployment scenarios, and a chart of the root-mean-square (RMS) angular spread (its mean and associated 10% to 90% range) for base stations (BS) and for outdoor user equipment (UE) is presented in Fig. 2, created based on 3GPP channel models [13] for 28 GHz with a BS-UE distance of 100 m (angular spreads are not sensitive to frequency or distance in [13]).
It is different in azimuth and in elevation for most relevant deployment scenarios, and a chart of the root-mean-square (RMS) angular spread (its mean and associated 10% to 90% range) for base station (BS) and for outdoor user equipment (UE) is presented in Fig. 2, created based on 3GPP channel models [13] for Elevation RMS angular spread [°] outdoor UE Base Station Figure 2. A chart of RMS angular spread (mean value and the corresponding range of 10% to 90%) for BS and for outdoor UE using 3GPP channel models [13] for 28 GHz with BS-UE distance of 100 m. 28 GHz with BS-UE distance of 100 m 1 . Such difference has also been observed in other channel models developed by mmMagic, METIS, and NYU Wireless [14]. However, the impact of channel angular spread on system design, planning and performance evaluation has not been well understood. The prevailing practice for link budget calcualtion, inter-site interference and co-existence studies is to use nominal antenna pattern rather than the effective pattern, leading to inaccurate received power and interference level estimation. Although high directional antennas have been used for backhaul links, they are usually installed at high heights with almost clear direct line-of-sight (LOS) path and close to zero angular spread. This is in contrast to mobile or fixed wireless access applications where the antennas might be below average clutter height and the impact of angular spread could be significant. A. Our Contribution In this paper, we focus on wireless access deployment scenarios where large antenna arrays are deployed to improve the link budget. We take advantage of the difference in elevation and azimuth angular spread and formulate the analog beamforming as a constrained optimization problem to maximize the effective beamforming gain. We derive a closed form solution of the optimal array geometry, whose nominal beam pattern turns out to match the given channel angular spread. The potential gain of the optimal array over a square array of the same size is demonstrated by system level simulations using 3D channel models. Furthermore, we also propose a method of estimating channel angular spread in azimuth and in evaluation using as few as three power measurements, and validate its accuracy via lab measurements using a 16×16 phased array at 28 GHz. The capability of estimating angular spread and optimizing beam pattern on the fly enables dynamic directional beam configuration, and it helps to achieve high effective gain using low cost analog/hybrid beamforming implementation. We also demonstrate a few examples where substantial gain can be achieved through array geometry optimization. To the best of our knowledge, this work is the first of its kind in matching antenna pattern 1 Angular spreads are not sensitive to frequency or distance in [13]. with channel angular spreads to improve effective direction gain, which is essential and critical to ensure sufficient link budget in real deployment. B. Related Work Some recent work have provided preliminary investigations on the impact of channel angular spread for channel modeling, link budget analysis, and system performance evaluation. The mismatch between nominal antenna gain and received power level was observed in various channel measurements with directional antennas, and such antenna specific variation was embedded directly into "directional" path loss models [15]- [17], which leads to different path loss models for each different combination of transmit and receive directive antennas. 
This is in contrast to the "omni" path loss models widely adopted by industrial standards such as [13], where the propagation channel is characterized free from any antenna assumptions and the path loss is modeled as it would be observed with ideal omni antennas at both the transmitter and the receiver. For example, in [10]-[12] the effective gain reduction caused by angular spread is modeled separately from the "omni" path loss channel models. The reduction of directional gain and capacity by azimuth angular spread has been evaluated in [18] for single/multiple sector beams, and the impact of angular spread in azimuth and in elevation for mmWave square arrays has been studied in [19] for Gbps coverage with wireless relayed backhaul. System level simulations of mobile networks in [20] have demonstrated up to 40% deviation from the realistic value of Long Term Evolution (LTE) downlink throughput when the nominal antenna pattern is assumed instead of the effective antenna pattern. A study of a 5G scenario with analog beamforming in the mmWave range was presented in [21], where the radio link budget for the serving link and interfering links was evaluated for both nominal and effective antenna gains. The impact of 3GPP 3D channel models on effective antenna array patterns has been visualized in [22], and it was found that the downlink Signal to Interference and Noise Ratio (SINR) can be overestimated by 10 to 17 dB in NLOS scenarios when using the nominal beam pattern rather than the effective pattern. The impact of angular spread on the efficiency of the tapering method has been evaluated via simulations [23], which indicate that the first side-lobe suppression level (SSL) can decrease to 16 dB in line of sight (LOS) conditions, or even to 2 dB in NLOS, in comparison to an SSL of 20 dB for the nominal antenna pattern.

C. Paper Organization

A brief description of the system model is in Sec. II and array geometry optimization is presented in Sec. III. System level simulation and lab measurements are reported in Sec. IV. Several potential applications are discussed in Sec. V and conclusions are in Sec. VI.

II. SYSTEM MODELS

To simplify presentation, we focus exclusively on beamforming over uniform planar arrays where elements are separated by half a wavelength. This configuration facilitates a simple and direct representation of the nominal beam pattern by the underlying array size and array geometry. The same concept and methodology apply to other array types and beamforming methods, where the RMS beamwidth of the beam pattern should be used for optimization.

We consider the case of high gain antennas whose beam pattern can be approximately characterized by Gaussian functions both in azimuth and in elevation [19],

A(θ, φ) = exp(−θ²/(2B_v²) − φ²/(2B_h²)),   (1)

where B_v and B_h are the RMS beamwidths (in radians) in elevation and in azimuth, respectively. The directional gain, defined as the peak to average power ratio of the antenna pattern, is determined by the RMS beamwidths as [19]

G = 2/(B_v B_h).   (2)

In the absence of scattering, the RMS beamwidths are set, correspondingly, to their nominal values B_v0 and B_h0, which can be determined from measurement in an anechoic chamber. In the presence of scattering, signals may come from multiple directions. The received signal along a certain direction is the circular convolution of the nominal antenna pattern and the channel power angular response [11].
Assuming, for tractability, that the channel angular spectrum with RMS azimuthal angular spread (ASD) σ_h and RMS elevation angular spread (ZSD) σ_v can be modeled by Gaussian functions with variances σ_h² and σ_v², respectively, the effective antenna pattern, which is a circular convolution of two independent Gaussian signals, still has the Gaussian form of (1) but with effective RMS beamwidths given by

B_v² = B_v0² + σ_v²,   B_h² = B_h0² + σ_h².   (3)

Therefore, we can determine the effective beamforming gain based on the nominal antenna pattern and the channel angular spread. As a result, when the number of antenna elements increases, the effective gain in a scattering environment is always smaller than its nominal gain, and will saturate at the limit imposed by the channel angular spread.

III. ARRAY GEOMETRY OPTIMIZATION

A. Theoretical Derivation of Optimal Array Geometry

We focus on analog/RF beamforming where there are in total N antenna elements, arranged in a rectangular/square shape to form a uniform planar array of size (K1, K2), with

K1 K2 = N.   (4)

An array of (K1, K2) = (1, N) corresponds to a horizontally deployed uniform linear array, whereas K2 = 1 indicates a vertically deployed uniform linear array. With half-wavelength element spacing, the nominal RMS beamwidths of the array scale with the aperture as B_v0 = B_ve/K1 and B_h0 = B_he/K2, where B_ve and B_he are the nominal beamwidths of the antenna elements. Since the effective beamforming gain depends on the panel geometry (K1, K2), the nominal element beamwidths B_ve and B_he, and the channel angular spreads σ_h and σ_v, we can optimize the array geometry (K1, K2) to maximize the effective beamforming gain subject to the size constraint (4).

Figure 3. The optimal beam pattern (and the underlying array geometry using a uniform planar array) should match the channel angular spread as prescribed by (7) to maximize the effective analog beamforming gain.

Theorem 1. Ignoring the integer constraint on the array dimensions K1 and K2, the effective beamforming gain of an antenna array with N elements is upper bounded as

G ≤ 2/(B_ve B_he/N + σ_v σ_h),   (5)

with equality if and only if the array geometry is given by

K1 = sqrt(N σ_h B_ve/(σ_v B_he)),   K2 = sqrt(N σ_v B_he/(σ_h B_ve)).   (6)

Proof: See Appendix A.

The nearest integer pair close to (K1, K2) as specified by (6) and satisfying the total element constraint (4) gives the best analog beamforming gain (a numerical sketch follows at the end of this subsection). Note that the ratio between the optimal RMS azimuth and elevation beamwidths equals the ratio of the channel RMS spreads in azimuth and in elevation, i.e.,

B_h0/B_v0 = (B_he/K2)/(B_ve/K1) = σ_h/σ_v.   (7)

Hence, the optimal beam pattern (generated by the optimal array geometry) matches the channel angular spread in both azimuth and elevation, as illustrated in Fig. 3.

Remark 1. The optimal geometry that provides the maximal effective gain is determined by the given angular spread and number of elements. The actual implementation might not be exactly what is suggested by the optimal solution due to implementation difficulties or cost constraints. For example, RF design would prefer symmetric circuits and antenna element placement, and the use of splitters in the feed network may limit the granularity of array geometry options. Nevertheless, the beam pattern should match the angular spread as closely as possible, as prescribed in (7), after balancing all the tradeoffs.

Remark 2. The array geometry optimization for uniform planar arrays is also applicable to other types of directional antennas (such as horn, reflector, or plasma antennas) or antenna arrays using non-directional elements (dipole, monopole, etc.), where the optimal antenna (array) is designed by optimizing the beam pattern in azimuth and in elevation to achieve the maximal effective antenna gain in a given channel.
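To make Theorem 1 concrete, the following sketch (our illustration, not code from the paper) evaluates the effective gain of every integer factorization of N and reports it next to the continuous optimum of (6); the element beamwidths and angular spreads in the example are assumed values in the range of Fig. 2.

```python
import math

def effective_gain(K1, K2, Bve, Bhe, sv, sh):
    """Effective gain (linear) of a K1 x K2 array; beamwidths/spreads in radians."""
    Bv = math.hypot(Bve / K1, sv)   # effective elevation RMS beamwidth, eq. (3)
    Bh = math.hypot(Bhe / K2, sh)   # effective azimuth RMS beamwidth, eq. (3)
    return 2.0 / (Bv * Bh)          # eq. (2) with effective beamwidths

def optimal_geometry(N, Bve, Bhe, sv, sh):
    """Continuous-valued optimum of eq. (6), plus the best integer factor pair."""
    k1_cont = math.sqrt(N * sh * Bve / (sv * Bhe))
    best = max(((d, N // d) for d in range(1, N + 1) if N % d == 0),
               key=lambda p: effective_gain(*p, Bve, Bhe, sv, sh))
    return k1_cont, best

# Example: N = 256 elements, element RMS beamwidths of ~30 deg (assumed),
# with ZSD 0.6 deg and ASD 14 deg (UMi-street-canyon-LOS-like values).
deg = math.pi / 180
k1_cont, (K1, K2) = optimal_geometry(256, 30 * deg, 30 * deg, 0.6 * deg, 14 * deg)
gain_db = 10 * math.log10(effective_gain(K1, K2, 30 * deg, 30 * deg, 0.6 * deg, 14 * deg))
print(f"continuous K1 ~ {k1_cont:.1f}, best integer geometry {K1} x {K2}, {gain_db:.1f} dB")
```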
B. Theoretical Derivation of Angular Spread Estimation

When the channel angular spread (ASD σ_h and/or ZSD σ_v) is unknown or time varying, the effective gain of a rectangular-shaped sub-array can be determined in real time from measured signal strength using three or more different sub-array configurations, as detailed below (a numerical sketch of this estimator follows Sec. IV-A).

For a uniform planar array of size (N1, N2), i.e., with N1 rows and N2 columns, we can measure the signal strength of three sub-panels of size (n1, k1), (n1, k2), and (n2, k1), where n1, n2 ≤ N1 and k1, k2 ≤ N2. The effective gains of the corresponding sub-arrays, which depend on (B_ve, B_he, σ_v, σ_h) but are not shown explicitly to simplify notation, can be written as

g(n1, k1) = 2/sqrt((B_ve²/n1² + σ_v²)(B_he²/k1² + σ_h²)),   (8)
g(n1, k2) = 2/sqrt((B_ve²/n1² + σ_v²)(B_he²/k2² + σ_h²)),   (9)
g(n2, k1) = 2/sqrt((B_ve²/n2² + σ_v²)(B_he²/k1² + σ_h²)).   (10)

By combining (8) and (9) we have

r_h ≜ (g(n1, k1)/g(n1, k2))² = (B_he²/k2² + σ_h²)/(B_he²/k1² + σ_h²),   (11)

from which we can obtain the linear relation

(r_h − 1) σ_h²/B_he² = 1/k2² − r_h/k1²,   (12)

leading to an estimate of the normalized ASD, in its squared form,

σ̂_h²/B_he² = (1/k2² − r_h/k1²)/(r_h − 1).   (13)

Similarly, by combining (8) and (10), with r_v ≜ (g(n1, k1)/g(n2, k1))², we obtain

(r_v − 1) σ_v²/B_ve² = 1/n2² − r_v/n1²,   (14)

and the corresponding estimate of the normalized ZSD, in its squared form, as

σ̂_v²/B_ve² = (1/n2² − r_v/n1²)/(r_v − 1).   (15)

If there are more measurements using different sub-arrays, each such pair would provide an estimate of the normalized ASD or ZSD, and such estimates should be combined by treating each of them as one realization of (12) or (14) for ASD and ZSD, respectively. All the equations formulated using (12) are then treated as an over-determined linear system for the ASD, and all the equations formulated using (14) as an over-determined linear system for the ZSD. Given n independent measurement pairs for the ASD established by (12), we denote a_i and b_i as the corresponding constants on the left-hand side (LHS) and the right-hand side (RHS), respectively, of the ASD relation (12), for pair i = 1, ..., n. Similarly, denote c_j and d_j, j = 1, ..., l, as the LHS and RHS constants, respectively, of the ZSD relation (14). We will have

ā σ_h²/B_he² = b̄,   c̄ σ_v²/B_ve² = d̄,   (16)

where ā ≜ [a_1, ..., a_n]ᵀ, b̄ ≜ [b_1, ..., b_n]ᵀ, c̄ ≜ [c_1, ..., c_l]ᵀ, and d̄ ≜ [d_1, ..., d_l]ᵀ. Then we can apply the classical least squares estimator to obtain improved estimates of the normalized ASD and ZSD, in their squared form, as

σ̂_h²/B_he² = (āᵀb̄)/(āᵀā),   σ̂_v²/B_ve² = (c̄ᵀd̄)/(c̄ᵀc̄).   (17)

Estimators other than the least squares estimator used here in (17) can also be applied to trade off accuracy, complexity and robustness. Note that a legitimate estimate of the squared ASD and ZSD should always be non-negative, but the estimates obtained using (13), (15), or (17) might be negative because of estimation noise. Therefore, any of the estimates whose value is negative should be replaced by zero. With estimates from (13), (15), or (17), the effective gain of a sub-array of size (m1, m2) can be estimated as

ĝ(m1, m2) = 2/sqrt((B_ve²/m1² + σ̂_v²)(B_he²/m2² + σ̂_h²)).   (20)

IV. NUMERICAL EVALUATION, SYSTEM LEVEL SIMULATION AND LAB MEASUREMENTS

In this section we demonstrate the benefits of array geometry optimization by numerical results, system level simulation and lab measurements using a 28 GHz phased array with 256 elements.

A. Numerical Evaluation

The effective beamforming gain for analog beamforming (i.e., one RF chain) using uniform planar arrays with 256 antenna elements at 28 GHz is shown in Fig. 4 for both the UMa NLOS scenario (blue line) and the UMi Street Canyon LOS scenario (red line), where the angular spreads of the radio channels are from 3GPP models [13] assuming a BS-UE distance of 100 m. The effective gain obtained using (20) for a set of different array geometries is marked by markers and connected by solid curves to illustrate the general trend of effective gain with respect to array geometry. The optimal array geometries for each channel, designed based on Theorem 1, are highlighted in the plot using black triangles.
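A compact numerical sketch of the estimator in Sec. III-B is given below (our illustration, with assumed element beamwidths and spreads): three synthetic sub-array gain measurements are inverted via (13) and (15), and the recovered spreads can then be plugged into (20) to predict the gain of any other sub-array.

```python
import numpy as np

def gain(n, k, Bve, Bhe, sv2, sh2):
    """Effective gain of an n x k sub-array, eq. (8); squared spreads as inputs."""
    return 2.0 / np.sqrt((Bve**2 / n**2 + sv2) * (Bhe**2 / k**2 + sh2))

def estimate_spreads(g11, g12, g21, n1, n2, k1, k2):
    """Normalized squared ASD/ZSD from gains of sub-panels (n1,k1), (n1,k2), (n2,k1)."""
    rh = (g11 / g12) ** 2                      # eq. (11)
    rv = (g11 / g21) ** 2                      # ratio defined with eq. (14)
    asd2 = (1 / k2**2 - rh / k1**2) / (rh - 1)  # sigma_h^2 / Bhe^2, eq. (13)
    zsd2 = (1 / n2**2 - rv / n1**2) / (rv - 1)  # sigma_v^2 / Bve^2, eq. (15)
    return max(asd2, 0.0), max(zsd2, 0.0)       # clip negative, noise-driven values

# Self-consistency check with synthetic measurements (assumed values).
deg = np.pi / 180
Bve = Bhe = 30 * deg
sv2, sh2 = (1 * deg) ** 2, (16 * deg) ** 2
n1, n2, k1, k2 = 16, 8, 16, 8
g11 = gain(n1, k1, Bve, Bhe, sv2, sh2)
g12 = gain(n1, k2, Bve, Bhe, sv2, sh2)
g21 = gain(n2, k1, Bve, Bhe, sv2, sh2)
asd2, zsd2 = estimate_spreads(g11, g12, g21, n1, n2, k1, k2)
print(np.sqrt(asd2) * Bhe / deg, np.sqrt(zsd2) * Bve / deg)  # ~16 and ~1 (deg)
```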
With a total of 256 elements, 5 dBi each, the ideal gain obtained by digital beamforming with full channel state information would be 29.1 dBi. In scenarios where the angular spread is moderate, such as the 3GPP UMi Street Canyon LOS with median ASD of 14° and ZSD of 0.6°, a 64 × 4 tall array (very close to the optimal geometry of 85 × 3) is 4 dB better than the 16 × 16 square array, and 16 dB better than a 1 × 256 fat array. In a different environment such as the 3GPP UMa NLOS case, which is characterized by larger angular spreads (median ASD of 22° and ZSD of 5°), a 32 × 8 tall array (optimal) is 0.5 dB better than a 16 × 16 square array and 9 dB better than a 1 × 256 fat array. This shows how important it is to have an antenna beam pattern matched to the radio channel and highlights the benefit of adapting the antenna beam pattern to the particular angular spreads of radio channels.

Figure 4. Effective beamforming gain of (20) for analog beamforming using uniform planar arrays with 256 elements at 28 GHz with BS-UE distance of 100 meters for both the UMa NLOS scenario (blue line) and the UMi Street Canyon LOS scenario (red line) using 3GPP models [13]. The optimal array geometries from Theorem 1 are highlighted as black triangles.

B. System Level Simulation Using 3D Channel Models

The system level simulation was performed to examine the accuracy of the theoretical analysis presented in Sec. III with the full 3D spatial statistical channel model, as specified in 3GPP TR 38.901 [13], and the antenna array model with beamforming algorithm adopted from the 3GPP 5G system evaluation described in 3GPP TR 38.803 [27]. Key parameters of our system level simulation are summarized in Table I.

The first set of simulation results aims to verify the correctness of the effective antenna gain analysis described above, for BS transmission in the downlink. For this purpose, we override some of the simulation parameters from Table I to remove some constraints normally seen in system level simulations. More specifically, we set all UEs at a height of 10 m (the same height as the BS) and 60 m from their serving BS, with both the BS and UE antennas aiming their boresight towards the strongest direction. The mean ASD is fixed to 16° and the mean ZSD to 1° to facilitate direct comparison against the theoretical analysis. Results of the simulations are presented in Fig. 5 and Table II. It can be noticed that the median value of the antenna gain CDF matches the analytical effective antenna gain within 0.5 dB.

The second set of simulation results demonstrates the benefits of optimizing the antenna array geometry in realistic deployment scenarios as described in Table I. Two array geometries are used in the simulation, i.e., the default 8 × 16 arrangement and the optimal 42 × 3 configuration obtained using Theorem 1. Simulation results for the received DL serving signal power, DL interference power, and DL SINR are presented in Fig. 6. As compared to the default 8 × 16 array configuration assumed by 3GPP, the optimized 42 × 3 array demonstrates a large increase in signal power (Fig. 6, left) thanks to its matching to the channel angular spread, and a modest reduction in interference power (Fig. 6, middle) thanks to its increased vertical resolution, leading to a combined gain of 6.6 dB on the median SINR (Fig. 6, right). Should all users/devices be distributed at the same height, the widened azimuthal beam may lead to an increase in interference and therefore a smaller SINR gain using the optimized array geometry.
C. Lab Measurements

Lab measurements were carried out using a 28 GHz 16×16 array as the transmitter (Tx) and a 10 dBi horn as the receiver (Rx). Different antenna array geometries were configured by setting zero amplitude for selected antenna elements (AE). The "muted" antenna elements behaved like dummy elements, which have only a marginal impact on the antenna pattern due to EM coupling from active AEs. However, this small impact does not influence our general conclusions. The Rx horn antenna was connected to a signal analyzer. A Tx signal with 100 MHz bandwidth was radiated from the antenna array and the received signal power was measured at the Rx side. Since different Tx sub-arrays have different Tx power, the difference in beamforming gain is determined by the difference in Rx power minus the difference in Tx power. This operation also eliminates the common losses (such as cable loss and connector loss) experienced by all signals. Calibration in an anechoic chamber was done using different antenna array configurations with boresight alignment. The measured total array gain with the same number of antenna elements but different geometries (e.g. 8 × 8, 16 × 4, 4 × 16 for 64 elements) was almost the same, with differences of around 0.5 dB, which could be attributed to the dummy element coupling effect, beam alignment offset or other measurement noise.

Lab measurements, as shown in Fig. 7, were carried out for both LOS and NLOS scenarios. For LOS, two rows of reflective panels are used to create a multipath-rich environment with larger angular spread in azimuth to verify the gain of optimal antenna arrays. For NLOS measurements, a metal rack and additional panels are used to increase the angular spread. The measured relative gain, using the full 16×16 array as the baseline, as well as the estimated gain based on estimated angular spreads using the methods presented in Sec. III-B (rounded to integer values), are shown in Fig. 8. The results have verified the effective antenna gain for different antenna array geometries with different numbers of antenna elements for LOS and NLOS scenarios. For example, in LOS, the 16×2 sub-array has a similar gain to the 8×8 while using half as many antenna elements. In NLOS, the effective antenna gain of the 16×2 array is only 2.2 dB worse than the effective gain of the optimal 16×16 array, whereas the effective gain of the 2×16 array is 8.7 dB worse, clearly demonstrating the need for array optimization. Furthermore, these measurement results match our estimated gain (based on estimated angular spread) with high accuracy. These examples clearly validate our analysis of antenna array optimization and angular spread estimation.

Figure 9. Example of optimal array geometry and the effective gain as a function of array size. The ASD and ZSD are according to specifications in the 3GPP UMi street canyon NLOS channel [13].

V. POTENTIAL APPLICATIONS

We present here a few potential applications where optimizing the array geometry can improve system performance.

A. Deployment Specific Array Optimization

In environments where the azimuth angular spread is much larger than the elevation angular spread, which is the case for the deployment scenarios covered by 3GPP channel models, a tall array with the same number of elements (e.g., 16×4) may improve the signal strength by a few dB as compared to the square array (i.e., 8×8), thus leading to better performance. Pre-designed arrays in different geometries can be targeted for each typical deployment scenario, such as urban macro sites, urban micro small cells, suburban FWA, and indoor offices.
For each typical deployment scenario, one may design the array geometry based on the mean value of the angular spread in such cases, exploiting the fact that the spreads in azimuth and in elevation are not the same. Such a design strategy would provide a similar SNR gain over the square array for the majority of users, as verified by our system level simulations. In Fig. 9 we compare the effective analog beamforming gain of the optimal array to the gain of traditional square arrays in the 3GPP UMi street canyon NLOS deployment scenario. The optimal array geometries labeled in the figure are obtained according to Theorem 1, and the corresponding effective beamforming gain is obtained using (20). For the same number of antenna elements, 5 dBi each, the optimal array design can improve the effective beamforming gain (and thus the signal strength) by 2 to 3 dB over square arrays. Results for other radio propagation environments with different angular spreads or other values of element gain can be obtained in a similar way. Since the angular spreads at the UE are much larger than those at the BS, as shown in Fig. 2, using large antenna arrays at the UE is inefficient in providing beamforming gain.

Figure 10. Example of optimal analog beamforming gain and array geometry as a function of the EIRP limit for the 3GPP indoor LOS channel [13].

B. Optimizing Array Geometry under EIRP Constraint

For devices with a strict equivalent isotropic radiated power (EIRP) limit, such as indoor AP/CPE, the maximum allowable number of antenna elements N can be determined from the EIRP limit as N ≤ 10^((EIRP − P_t − G_e)/20), where EIRP is in dBm, P_t is the per-element transmit power in dBm, and G_e is the per-element gain in dBi. For example, with a per-element directional gain of 5 dBi and a per-element transmit power of 10 dBm, a maximum of 25 elements is allowed for indoor mobile stations subject to the peak 43 dBm EIRP limit imposed in the United States [24]. At the higher peak EIRP limit of 55 dBm for indoor modems, up to 100 such antenna elements can be used. In Fig. 10 we plot the nominal gain, the effective gain of square arrays, and the effective gain of optimal arrays with the same number of elements, as a function of the EIRP limit, where the optimal antenna array configuration, obtained by applying Theorem 1, is as indicated in the figure. Compared to square arrays with the same EIRP limit, a 3 to 4 dB improvement of effective beamforming gain (and thus signal strength) can be achieved by array geometry optimization for the 3GPP indoor LOS scenario [13]. Configurations for other radio propagation environments with different angular spreads or other values of element gain and element power can be obtained straightforwardly following the same method. On the other hand, the improved effective gain from array geometry optimization can also be leveraged to maintain the same link budget (and thus throughput) with fewer antenna elements than conventional square arrays. For example, as shown in Fig. 10, a 5×5 square array at the 43 dBm EIRP limit (including 24 dBm Tx power) would have an effective gain of 13 dBi, whereas a 16×1 array would have 22 dBm Tx power but an effective gain of 15 dBi. Thus, using the 16×1 array would maintain the same link signal strength as the 5×5 square array but with 2 dB less Tx power and a 36% reduction in antenna elements, which translates to a combined 4 dB reduction in EIRP.

Figure 11. CDF of DL cell capacity (bps/Hz) for 5G FWA at 28 GHz in a suburban deployment scenario [26], where the optimized array geometry of 16×4 is compared to the default 8×8 square array.
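As a quick check of the EIRP bookkeeping above, the sketch below (a minimal Python illustration, not from the paper) reproduces the two worked examples: 25 elements at the 43 dBm limit and 100 elements at 55 dBm, for 10 dBm per-element power and 5 dBi element gain.

```python
import math

def max_elements(eirp_dbm: float, pt_dbm: float, ge_dbi: float) -> int:
    """Maximum number of elements N satisfying
    EIRP >= P_t + G_e + 20*log10(N),
    i.e. per-element power plus element gain plus the array's combined
    total-power (10*log10 N) and beamforming (10*log10 N) gains."""
    return math.floor(10 ** ((eirp_dbm - pt_dbm - ge_dbi) / 20))

print(max_elements(43, 10, 5))  # 25  (US indoor mobile-station limit)
print(max_elements(55, 10, 5))  # 100 (indoor modem limit)
```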
Such a reduction not only leads to lower power consumption and reduced hardware cost, but also to lower EMF radiation, which could help 5G systems meet performance expectations under RF EMF compliance limits [25].

C. Array Optimization for FWA Cell Capacity Enhancement

High path loss and large signal bandwidth (on the order of 1000 MHz) at mmWave bands lead to low-to-medium SNR for users in NLOS or at long distance. Since throughput is close to linear in SNR in noise-limited systems, a modest gain in signal strength can lead to a substantial gain in throughput, especially for cell-edge users. In Fig. 11 we plot the CDFs of the DL cell capacity (bps/Hz) for 5G FWA at 28 GHz in a suburban residential deployment scenario [26], where antenna arrays of 64 elements are used at lamppost-mounted access points. The detailed simulation setup can be found in [26]. With 800 MHz bandwidth and 285 m inter-site distance along the same street, the system is essentially noise limited for most of the Customer Premise Equipment (CPE). The optimized 16×4 array achieves about a 2 dB gain in median DL SINR compared to the default 8×8 square array. We map the DL SINR to DL cell capacity using the 3GPP configuration [27] and plot the CDFs of cell capacity in Fig. 11. Compared to the default square array, the optimized array provides a 20% increase in cell capacity at the median and a 60% increase at the 10th percentile (i.e., the cell edge).

VI. CONCLUSIONS AND DISCUSSIONS

In this paper we address the link budget challenge of high-speed wireless access at high bands by focusing on the effective beamforming gain of antenna arrays under channel angular spread. We have presented a closed-form solution to match the antenna beam pattern to the channel angular spread, which can be very useful for designing deployment-specific antenna arrays for typical scenarios based on long-term historical data to improve the link budget. We have also developed a method to estimate the channel angular spread based on as few as three power measurements, which facilitates dynamic directional beam configuration on a per-transmission basis. This opens the door to a new operating regime for analog beamforming at high frequencies. Although we made a few assumptions regarding the angular power distribution to make the analysis tractable, the feasibility and projected gains of our methods have been confirmed with impressive accuracy by our 3GPP-compliant system level simulations using 3D channel models and by our lab measurements using a 16×16 phased array at 28 GHz. Furthermore, our proposed use cases for deployment-specific array geometry optimization only require the average value of the RMS angular spread, which can be estimated from historical data for each deployment scenario. Since the key ingredient of our solution is to match the beam pattern to the channel angular spread, the proposed geometry optimization and angular spread estimation methods also apply to other array types and beamforming methods, even though our description focused exclusively on beamforming over a uniform planar array. For such applications, it is the RMS beamwidths in azimuth and in elevation that should be used in the analysis, rather than the dimensions of the arrays. The capability of real-time, link-specific optimal beam pattern determination developed here is especially interesting for advanced beamforming techniques of phased arrays [28] and novel antenna technologies using metasurfaces [29]. Extension to panel-based hybrid beamforming is straightforward.
Assume there are in total N antenna elements evenly allocated to M sub-panels, each supported by one dedicated RF chain. Each sub-panel has N/M elements arranged in a rectangular/square shape to form a uniform planar array, where the optimal array geometry (K_1, K_2) can be optimized as in Sec. III to maximize the effective analog beamforming gain G(K_1, K_2) of each sub-panel. Assuming perfect CSI is available for digital beamforming when combining the M panels via maximum ratio combining/transmission, the effective beamforming gain of the N-element, M-subpanel hybrid beamforming is therefore M·G(K_1, K_2).

ACKNOWLEDGMENT

The authors would like to thank Dmitry Chizhik for helpful discussions on channel angular spread, and Jakub Bartz for help during all the measurements in the laboratory.

APPENDIX A
PROOF OF OPTIMAL ARRAY GEOMETRY

Assume each antenna element has nominal beamwidth B_ve in elevation and B_he in azimuth, which can be measured in an anechoic chamber. They can also be derived from the element's nominal gain G_e via (2) by assuming identical beamwidths in elevation and in azimuth. In free space or an anechoic chamber, where there is no angular spread, the analog beams formed by an antenna array of size (K_1, K_2) preserve their ideal RMS beamwidths B_v0 and B_h0. Given angular spreads σ_v and σ_h, the effective analog beamforming gain can be determined by substituting (19) and (3) into (2), yielding (20). Since K_1 K_2 ≤ N, the effective beamforming gain (20) can be rewritten as in (21)-(23), where (21) follows by substituting K_1 K_2 = N, and (22) follows from the inequality of arithmetic and geometric means (the AM-GM inequality), with equality, and hence the maximal effective gain (23), attained if and only if condition (24) holds. Combining (24) with the constraint K_1 K_2 = N leads to the solution presented in (6).
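The AM-GM argument above pins down the optimal aspect ratio analytically. As a purely illustrative alternative, the sketch below enumerates factorizations of N and scores each with a placeholder effective-gain model in which the array's RMS beamwidths broaden in quadrature with the channel angular spreads. The broadening model, the element beamwidth, and the gain constant are assumptions standing in for the paper's (2), (19), and (20), which are not reproduced here; only the search structure is the point.

```python
import math

def optimal_geometry(n_elements, sigma_v_deg, sigma_h_deg,
                     bve_deg=100.0, bhe_deg=100.0):
    """Brute-force search over (K1, K2) with K1*K2 <= N.

    Placeholder gain model (an assumption, not the paper's equation (20)):
    the array narrows the element beamwidths by K1 (elevation) and K2
    (azimuth), the channel re-broadens them in quadrature by the angular
    spreads, and gain is inversely proportional to the beamwidth product.
    """
    best = None
    for k1 in range(1, n_elements + 1):
        k2 = n_elements // k1                        # enforces K1*K2 <= N
        bv = math.hypot(bve_deg / k1, sigma_v_deg)   # effective elevation width
        bh = math.hypot(bhe_deg / k2, sigma_h_deg)   # effective azimuth width
        gain = 10 * math.log10(41253.0 / (bv * bh))  # dBi, ideal-aperture const.
        if best is None or gain > best[0]:
            best = (gain, k1, k2)
    return best

# A low-ZSD, street-canyon-like case: tall arrays win under this model.
print(optimal_geometry(256, sigma_v_deg=0.6, sigma_h_deg=14.0))
```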
2019-10-24T12:37:11.000Z
2019-10-24T00:00:00.000
{ "year": 2019, "sha1": "e50490c9eb73ebbbd4ffd93ca0381d44ad1c48df", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e50490c9eb73ebbbd4ffd93ca0381d44ad1c48df", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ] }
92260559
pes2o/s2orc
v3-fos-license
Comparison of Bishop score and cervical length measurement through transvaginal ultrasound as predictors of the success of labor induction

Objective: To compare the Bishop score and cervical length measured by transvaginal ultrasound as predictors of the success of labor induction.

Methods: This cross-sectional observational analytical study was conducted from May 2017 to October 2017 at several teaching hospitals of the Obstetrics and Gynecology Department, Faculty of Medicine, Hasanuddin University, Makassar, Indonesia. A total of 110 pregnant women undergoing labor induction were included: 79 with successful induction and 31 with failed induction. Data were analyzed with the Pearson Chi-square test and multivariate logistic regression to assess the effect of the Bishop score and the cervical length measurement on the success of labor induction.

Results: The number of women with successful labor induction and a Bishop score ⩾3 was 25 (31.6%), versus 54 (68.4%) with a Bishop score <3 (rate ratio=3.714, P=0.000). Using the cervical length measurement (cut-off point 2.98 cm), the number of women with successful labor induction and a cervical length ⩽2.98 cm was 12 (15.2%), versus 67 (84.8%) with a cervical length >2.98 cm (rate ratio=3.124, P=0.000). In the multivariate logistic regression, the Bishop score was the more influential predictor of the success of labor induction (P=0.014; Bishop score <3, odds ratio=1.000; Bishop score ⩾3, odds ratio=3.779).

Conclusions: The Bishop score is better at predicting the success of labor induction than cervical length measured by transvaginal ultrasound.

Introduction

Labor is the process by which the fetus moves from the intrauterine to the extrauterine environment; the diagnosis is clinical, based on the onset of regular, persistent contractions that produce progressive cervical effacement and dilatation. The exact mechanism responsible for this process is not yet fully understood [1]. Induction of labor is performed in about 20% of pregnancies, and the success of labor induction is reported to be related to cervical characteristics or cervical maturity [1]. Labor induction refers to the initiation of uterine contractions, by medical or surgical means, before the onset of spontaneous labor. Recent studies report induction rates ranging from 9.5% to 33.7% of all pregnancies each year. An insufficiently ripe cervix reduces the likelihood of successful vaginal delivery; cervical ripeness should therefore be evaluated before induction is undertaken [2]. Some induced labors end in spontaneous vaginal delivery, while others end in cesarean section; as many as one in five pregnancies with labor induction ends in cesarean section [1,2]. Digital palpation of the cervix has long been used to evaluate the progress of labor in laboring women. This examination is subjective and shows variability among examiners. The Bishop score was introduced in 1967 and remains the gold standard for assessing cervical maturity as a reference for labor induction, although it has not shown reliable predictive performance. Over time, many studies have evaluated the cervix by ultrasound.
The examination of cervical length by transvaginal ultrasound has been used successfully to evaluate the cervix and to predict the course of labor before and after labor induction [3]. The appropriate timing of labor induction for patients with indications such as diabetes mellitus in pregnancy, post-term pregnancy, and hypertension in pregnancy remains controversial. If the risk of induction failure can be predicted well, the timing of induction may be reconsidered, especially in cases with milder indications [3]. A study by Park et al reported that cervical length measured by transvaginal ultrasonography is a better predictor of the success of labor induction than the Bishop score [4]. A study by Hatfield et al reported that transvaginal ultrasonography was not proven to assess the inducibility of the cervix better than the Bishop score [5]. Against this background, this study aimed to compare the Bishop score and cervical length measured with transvaginal ultrasound as predictors of the success of labor induction.

Design and variable

This was an observational analytical study with a cross-sectional design.

Population and sample

The population of this study comprised all women receiving delivery services at the teaching hospitals during the study period. The sample consisted of women from this population who met the selection criteria.

Method of data collection

This study used primary data obtained by questionnaire, Bishop score examination, and transvaginal ultrasound examination of cervical length.

Technique of data analysis

The data were organized and processed using SPSS 17.0, Microsoft Excel, and Microsoft Word, and are presented in tables and descriptive text.

Results

This observational analytical study used a cross-sectional design. The results were based on sensitivity and specificity testing of cervical length. We obtained a cut-off point of 2.98 cm from the receiver operating characteristic curve.

Discussion

The data from the 110 subjects were processed into a receiver operating characteristic curve to obtain the cut-off point for cervical length in this study. The cut-off point was a cervical length of 2.98 cm, with 54.8% sensitivity and 84.8% specificity. Tendean, who analyzed cervical length as a predictor of successful induction of labor in 39 women undergoing labor induction, obtained a cut-off point of <2.895 cm with 79.41% sensitivity and 80.0% specificity [6]. In our study, there was no statistically significant association between age, parity, or gestational age and the success of labor induction. Similarly, Abdelazim et al, who studied 120 pregnant women undergoing labor induction, found that age, parity, and gestational age did not affect the success of labor induction [7]. Bivariate analysis showed a statistically significant correlation between the Bishop score and successful induction of labor (P=0.000; RR=3.714; 95% CI: 1.824-7.559). Ivars et al found that in nulliparous women the success rate of labor induction reached up to 90%; however, their study used a modified Bishop score that did not assess cervical consistency or position, and used a Bishop score of 4 as the cut-off. They also found a significant association between the Bishop score and successful induction of labor (P<0.001) [8].
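For readers unfamiliar with the screening metrics used throughout this discussion, the sketch below (a minimal Python illustration with hypothetical counts, not the study's raw data) shows how sensitivity, specificity, and a rate ratio are computed from a 2×2 table of predictor category versus induction outcome.

```python
def screening_metrics(tp, fn, fp, tn):
    """2x2 table for a binary predictor of induction success.

    tp: predictor positive, induction successful
    fn: predictor negative, induction successful
    fp: predictor positive, induction failed
    tn: predictor negative, induction failed
    """
    sensitivity = tp / (tp + fn)   # successes correctly flagged by the predictor
    specificity = tn / (tn + fp)   # failures correctly flagged by the predictor
    rate_pos = tp / (tp + fp)      # success rate when the predictor is positive
    rate_neg = fn / (fn + tn)      # success rate when the predictor is negative
    return sensitivity, specificity, rate_pos / rate_neg  # last value: rate ratio

# Hypothetical counts for illustration only (totals chosen to sum to 110).
print(screening_metrics(tp=60, fn=19, fp=10, tn=21))
```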
Cubal et al, in a study of 206 women undergoing labor induction, suggested that the Bishop score is a good predictor of the outcome of labor induction [9]. Banu et al, in their study of 125 nulliparous women, also suggested that the Bishop score and cervical length measurement were good predictors of delivery status [10]. A systematic review by Banos et al likewise concluded that cervical status remains an important parameter in assessing the success of labor induction [11]. From a total of 507 studies screened in the database search, the reviewers retained 7 studies for further analysis; 3 of the 7 concluded that although the Bishop score was a significant predictor of the success of labor induction, it was not the only independent predictive factor [7-9]. In our study, the cut-off point from the receiver operating characteristic curve for cervical length was 2.98 cm, and we used this value to categorize the subjects into two groups. Hatfield et al evaluated both cervical length and the Bishop score; however, these two variables were not significant predictors of the success of labor induction in their study [5]. In our bivariate analysis, several variables proved to be significantly related to the success of labor induction. A limitation concerns the induction process itself, because subjects received different induction treatments according to their Bishop scores. The ultrasound devices used to measure cervical length also differed across hospitals, depending on the location and device availability at each study site. In conclusion, based on the findings above, the Bishop score is better at predicting the success of labor induction than cervical length measured by transvaginal ultrasound. We suggest evaluating additional cervical parameters, such as the posterior cervical angle and cervical elastography.
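The multivariate analysis reported in the abstract is a standard binary logistic regression. The sketch below (Python with synthetic data; the variable coding and effect sizes are assumptions, not taken from the study) shows how an odds ratio such as the reported 3.779 for Bishop score ⩾3 is obtained as an exponentiated regression coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 110
bishop_ge3 = rng.integers(0, 2, n)     # 1 if Bishop score >= 3 (hypothetical)
cl_le_cutoff = rng.integers(0, 2, n)   # 1 if cervical length <= 2.98 cm
# Synthetic outcome with built-in effects, for illustration only.
logit = -0.5 + 1.3 * bishop_ge3 + 0.6 * cl_le_cutoff
success = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([bishop_ge3, cl_le_cutoff]))
fit = sm.Logit(success.astype(float), X).fit(disp=False)
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```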
2019-04-03T13:08:39.303Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "80a75ceb2f46ccfb3b5fbf7745142779f05f5016", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/2305-0500.246348", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "d8aa5c76e2eeaa3972c809becd5250dbb570d54b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
259004157
pes2o/s2orc
v3-fos-license
Sex differences in the relationship between axial length and dry eye in elderly patients

Purpose: The aim of this study was to explore the association between myopia and dry eye (DE)-related ocular parameters.

Methods: We recruited a total of 460 patients (mean age, 73.6 years; 40.2% men) and performed DE-related, axial length (AL), and retinal examinations. Statistical analysis revealed significant sex differences in AL, strip meniscometry value, corneal staining score, corneal endothelial cell density, ganglion cell complex (GCC) thickness, and full macular thickness. AL was strongly age- and sex-dependent, so subsequent analyses were stratified by sex.

Results: Among the DE-related parameters, strip meniscometry value (ß = −0.167, p = 0.033) and corneal endothelial cell density (ß = −0.139, p = 0.023) were correlated with AL in women but not in men. Among the retinal parameters, GCC thickness and full macular thickness were correlated with AL in women but not in men.

Conclusion: The current results suggest a relationship between tear production and AL in elderly women and support the hypothesis that there may be a common upstream factor, possibly involving the parasympathetic nervous system, in the association between tear production and AL, or between DE and myopia.

Introduction

Growing evidence suggests that dry eye (DE) may be associated with myopia (1-7). We previously reported a possible relationship between DE and myopia based on the integration of a DE-related questionnaire, axial length (AL), and myopic error (1). Further research revealed a relationship between tear break-up time (BUT) and choroidal thickness (2), a known parameter of myopia progression (8). Furthermore, a relationship between myopia and DE in younger subjects was suggested in studies examining the tear evaporation rate (3) and tear ferning patterns (4), and another study of 682 teenagers demonstrated that BUT was correlated with myopic error (5). AL elongates in myopia, which may lead to exposure keratitis, as in thyroid eye disease (9,10). However, the association between AL elongation and the worsening of DE symptoms in healthy subjects is not fully determined. As AL increases after adolescence in high myopia, this relationship may also be observed in the elderly (11,12). Our previous observational study examined the longitudinal change in the AL of eyes implanted with either a violet light-filtering or a non-filtering intraocular lens (11). We found greater AL elongation with the violet light-filtering lens, possibly due to the suppressive effect of violet light on AL elongation described previously (13). In contrast, an epidemiological study found that refractive status shifts toward hyperopia with age (14), and it is generally believed that AL stops increasing after the age of 20 years (15). The observed refractive difference between adolescents and older populations might partly be explained by the fact that children now spend less time outdoors and more time on near work (16). In fact, a survey in Hungary revealed a 3-fold increase in the prevalence of myopia in a young population compared to the elderly (16). Longitudinal data on AL in the general population are not available, and changes in AL in adulthood have not been determined. DE is a prevalent geriatric ocular surface disease.
It has recently been defined as a multifactorial disease characterized by a persistently unstable and/or deficient tear film causing discomfort and/or visual impairment, accompanied by variable degrees of ocular surface epitheliopathy, inflammation, and neurosensory abnormalities (17). Decreased lacrimal secretion leads to aqueous tear deficiency and is a typical clinical manifestation of DE, in addition to excessive tear evaporation and shortened BUT. BUT is a complex indicator because it is influenced by tear secretion, the cornea, and the eyelid (meibomian gland). Measurement of tear secretion reflects the aqueous tear component and can be done conveniently by tear strip meniscometry (18,19). This simple test of lower tear meniscus volume could be a relevant parameter for assessing the relationship between DE and myopia. The aim of this study was to explore the association between myopia and DE-related ocular parameters based on tear strip meniscometry, BUT, retinal thickness measured with optical coherence tomography (OCT), and AL. We selected an older population to complement results from previous studies of children and younger subjects (1-7).

Recruitment of patients and Institutional Review Board approval

We consecutively recruited outpatients for preoperative evaluation and postoperative follow-up at Otake Eye Clinic and Tsukuba Central Hospital in Japan from January 2019 to August 2022.

Inclusion and exclusion criteria

Outpatients aged 40 years or older with an AL measurement and best-corrected visual acuity better than 20/30 in both eyes were consecutively enrolled during the study period. Patients with glaucoma, optic neuropathy, or retinal degeneration were excluded. Macular diseases, including age-related macular degeneration, epiretinal membrane, and macular edema, were also excluded from the analysis since they are significantly associated with macular thickness (20). Contact lens wearers were excluded because contact lenses may contribute to DE (21,22). None of the patients had undergone any non-medical interventions on the ocular surface, such as punctal plug insertion or punctal occlusion, or any surgical interventions within 6 months prior to inclusion.

Ophthalmological examinations

Board-certified ophthalmologists tested the subjects with tear strip meniscometry and vital corneal staining. Detailed procedures have been described previously (18). Strip meniscometry is a new non-invasive lacrimal function test that measures the lower tear meniscus volume in 5 s using SMTube strips (Echo Electricity Co., Ltd., Fukushima, Japan) (18,19). The tip of the SMTube strip is gently immersed into the lower tear meniscus, and the resting tear is absorbed into the column of the strip, with the tear propagation path stained by blue dye. Although the Schirmer test is the gold standard for evaluating tear production (23,24), we used tear strip meniscometry to measure tear meniscus volume. We chose this method because it is a 5 s non-invasive procedure and produces results with a statistically significant linear correlation not only with subjective symptoms but also with the Schirmer test value, tear meniscus height measured by anterior-segment optical coherence tomography, BUT, and corneal staining score (18,19). It is minimally invasive, and the relatively quick examination minimizes reflex tearing. AL was measured using the IOLMaster 700 (Carl Zeiss Meditec AG, Jena, Germany).
The corneal endothelial cell density was measured with the NONCON ROBO II (Konan Medical, Nishinomiya, Japan).

OCT measurement

Spectral-domain OCT data were obtained using the RS 3000 (Nidek Co. Ltd., Aichi, Japan), and all OCT imaging was performed using the raster-scan protocol (25). Macular ganglion cell complex (GCC) thickness [retinal nerve fiber layer (RNFL) + ganglion cell layer (GCL) + inner plexiform layer (IPL)] over a 9 mm diameter and full retinal thickness in the central macular area (1 mm diameter) were analyzed. Using software supplied by the manufacturer, the thicknesses of (i) the NFL, (ii) GCL + IPL, (iii) inner nuclear layer (INL) + outer plexiform layer (OPL), (iv) outer nuclear layer (ONL) + inner segment layer (IS), and (v) outer segment layer (OS) + retinal pigment epithelium (RPE) were exported as pixel images (512 × 128 pixels), and the mean thickness values of the whole analysis area (9.0 × 9.0 mm, corrected for axial length), excluding the optic disk and peripapillary atrophy, were calculated.

Statistical analysis

Data are given as the mean ± SD, where appropriate. Macular measurements of patients with macular disease (age-related macular degeneration and epiretinal membrane) were excluded from the analysis. Data from the left eye were analyzed. To identify which ophthalmic parameters were correlated with AL and phakic refraction, regression analysis was conducted with AL and refraction as dependent variables, while demographic parameters (age and sex) and ophthalmic parameters (myopia-related and DE-related) were used as independent variables. Sex differences were identified, and all subsequent analyses were conducted after stratification by sex. Regression lines and probability ellipses were computed for axial length and the variables by the least-squares method. Correlations were analyzed using Pearson's correlation coefficient. All analyses were performed using StatFlex (Atech, Osaka, Japan), with a P-value of <0.05 considered statistically significant.

Results

We identified significant sex differences in BUT, strip meniscometry value, corneal staining score, corneal endothelial cell density, AL, GCC thickness, and full macular thickness (Table 2). AL was strongly age- and sex-dependent (Table 1 and Figure 1), which is why we stratified by sex in the subsequent analyses (Table 2). Strip meniscometry value (ß = −0.163, p = 0.039; Figure 2) and corneal endothelial cell density (ß = −0.139, p = 0.021; Figure 3) were correlated with AL in women but not in men. Regarding the retinal parameters, GCC thickness (ß = −0.250, p < 0.001; Figure 4) and full macular thickness (ß = −0.170, p = 0.034, adjusted for age) were correlated with AL in women but not in men. A correlation between peripapillary NFL thickness and AL was observed in both sexes (Figure 5).

Discussion

The findings of the present study agree with prior studies (2-4) in which lacrimal and corneal examination results indicated a substantial relationship between myopia and DE. The current results obtained in elderly patients could support the relationship between myopia and DE, since the tear production measurement is strongly linked to the proposed hypothesis that the parasympathetic nervous system might be involved in the relationship between these two ocular conditions (2).
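As a methodological footnote to the stratified analysis described in the Statistical analysis section, the sketch below (Python with synthetic data; purely illustrative, not the authors' code) shows the sex-stratified Pearson correlation used here: the correlation of AL with a DE parameter is computed separately within each stratum rather than on the pooled sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for AL (mm) and strip meniscometry value (mm).
sex = rng.integers(0, 2, 460)                          # 0 = men, 1 = women
al = 23.5 + rng.normal(0, 1.2, 460)
# A built-in negative AL effect in women only, plus noise.
smt = 5.0 - 0.4 * al * sex + rng.normal(0, 2.0, 460)

for label, s in (("men", 0), ("women", 1)):
    x, y = al[sex == s], smt[sex == s]
    r = np.corrcoef(x, y)[0, 1]                        # Pearson's r per stratum
    print(label, round(r, 3))
```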
The ocular surface in older patients is much more complicated and disturbed than in younger subjects, possibly because of previous cataract surgery, conjunctivochalasis, and meibomian gland dysfunction, although there was no difference between phakic and pseudophakic cases with respect to BUT, corneal staining score, or meniscometry value. It would therefore not have been surprising if the results had contradicted previous findings (2) of a clear association between BUT and AL. Nevertheless, the present results are noteworthy in suggesting a significant correlation between tear production and AL in elderly women. The lacrimal gland is innervated by the parasympathetic nervous system (26-28). The parasympathetic nervous system is also closely associated with choroidal thickness, which is involved in modulating ocular elongation and controlling refractive error (29-31). Tear meniscus volume, as a proxy of lacrimal gland activity, can be measured by tear strip meniscometry without reflex lacrimation and is a sensitive indicator of ocular surface dryness, correlated with BUT and the Schirmer test (18). Taken together, the current results agree with the hypothesis (2) that there may be a common upstream factor, possibly involving the parasympathetic nervous system, in the association between tear production and AL, or between DE and myopia. Evidence from basic and clinical research has not fully elucidated the association between dry eye and myopia. Although some clinical studies have shown a relationship between dry eye and myopia, the causal relationship is still unknown. It is speculated that the parasympathetic system is involved; however, more nuanced hypotheses should be proposed in further studies. In our study, AL decreased with age, which is consistent with prior studies, including a large Japanese study (32-34). Cataracts develop earlier in high myopes and could introduce a bias in the current study, which included many cataract patients. However, our results were clear and comparable with prior large studies. Corneal endothelial cell density was low and correlated with AL in women, consistent with previous research (35,36) describing lower endothelial cell density and higher hexagonality and coefficient of variation in women. Those authors speculated that the abnormalities of endothelial parameters in female participants might be associated with a different susceptibility of the endothelial cells, which may explain the relationship between high myopia and abnormal endothelial morphology in the female participants of their study. Another group (37,38) observed an accelerated reduction of endothelial cell density and corneal nerve damage in DE compared with non-DE and suggested that chronic inflammation involving the deep cornea and/or aqueous humor may play a role. As DE is more prevalent in women, this hypothesis could apply to our results.

FIGURE 2 | Scatter plots and regression lines of axial length and tear strip meniscometry value with a probability ellipse (confidence interval). Axial length correlated with tear strip meniscometry in women but not in men.

Hanyuda et al. indicated that the correlation of posterior vitreous detachment and AL with female sex may be due to hormonal factors (39). They suggested it may be partly attributed to vitreous liquefaction following perimenopausal hormonal changes. In our study, peripapillary NFL thickness was correlated with AL in both sexes, in line with previous research.
Nevertheless, macular full thickness and GCC thickness were correlated with AL in women only, suggesting that AL may contribute more strongly to retinal thickness in women than in men. A possible explanation for these unexpected results is that the peripapillary NFL may be affected by AL rather than by sex differences, although the detailed reason is unclear. It has been repeatedly documented that the retina is thinner in women than in men and thinner in myopia than in emmetropia (40,41); however, sex differences in the association of AL with retinal thickness have not been fully determined. Overall, the current study confirms sex differences in AL, corneal endothelial cell density, and retinal thickness. A further study with a large number of non-surgical cases would be warranted to confirm these findings.

FIGURE 3 | Scatter plots and regression lines of axial length and corneal endothelial cell density with a probability ellipse (confidence interval). Axial length correlated with corneal endothelial cell density in women but not in men.

FIGURE 4 | Scatter plots and regression lines of axial length and GCC (ganglion cell complex) thickness with a probability ellipse (confidence interval). Axial length correlated with GCC thickness in women but not in men.

This study reveals a relationship between myopia and DE in elderly subjects that had previously been suggested in young subjects. Consequently, the current results could support the hypothesis that tear production and AL, or DE and myopia, may share a common upstream factor, possibly involving the parasympathetic nervous system. This research subject is new and requires further study of the different factors involved and of the optimal methodology to provide conclusive evidence. The current study has several limitations. We included patients with pseudophakic eyes, who might have retained ocular surface modifications even after a long postoperative period (42). Additionally, aging eyes undergo a variety of changes, including decreased tear secretion and poor lacrimal drainage, which could affect the DE-related examination results. DE is a systemic disease that correlates with age and hormone levels. This study did not discern the influence of hormone levels and other confounding factors on DE. Further investigations are warranted to assess hormone levels and other DE-related systemic parameters in determining the association of DE and myopia. Nevertheless, tear strip meniscometry is a sensitive indicator of the severity of aqueous-deficient DE. In spite of these limitations, our results in elderly subjects confirm a relationship between myopia and DE that has previously been established in young subjects.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by Tsukuba Central Hospital and the Kanagawa Medical Association. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author contributions

MA designed the study, collected the data, analyzed the data, and wrote the manuscript. All authors reviewed and approved the final version of the manuscript.
2023-06-02T13:15:31.823Z
2023-06-02T00:00:00.000
{ "year": 2023, "sha1": "2481810560310afb7103ffcc7a1433ded12ed160", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "2481810560310afb7103ffcc7a1433ded12ed160", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
231720310
pes2o/s2orc
v3-fos-license
The Relationship of Functional Connectivity of the Sensorimotor and Visual Cortical Networks Between Resting and Task States

The intrinsic activity of the human brain maintains its general operation at rest, and this ongoing spontaneous activity exhibits a high level of spatiotemporally correlated activity among different cortical areas, forming intrinsically organized brain functional connectivity (FC) networks. Many functional network properties of the human brain have been investigated extensively for both the resting and task states, but the relationship between these two states has rarely been investigated and remains unclear. Comparing well-defined task-specific networks with the corresponding intrinsic FC networks may reveal their relationship and improve our understanding of the brain's operations in both states. This study investigated the relationship of the sensorimotor and visual cortical FC networks between the resting and task states. The sensorimotor task was rubbing the right-hand fingers, and the visual task was opening and closing the eyes. Our study demonstrated a general relationship of the task-evoked FC network with its corresponding intrinsic FC network, regardless of the task. For each task type, the study showed that (1) the intrinsic and task-evoked FC networks shared a common network, and the task enhanced the coactivity within that common network compared to the intrinsic activity; (2) some areas within the intrinsic FC network were not activated by the task, so the task activated only part, not all, of the intrinsic FC network; and (3) the task activated substantial additional areas outside the intrinsic FC network and therefore recruited more intrinsic FC networks to perform the task.

INTRODUCTION

The brain's operations are mainly intrinsic, including the acquisition and maintenance of information for interpreting, responding to, and predicting environmental demands (Raichle, 2010). This ongoing intrinsic activity, i.e., the resting-state activity measured with blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI), is spontaneous but exhibits a high level of spatiotemporally correlated activity among different cortical areas, showing intrinsically organized brain functional connectivity (FC) networks and each network's temporal coactivity at rest (Ogawa et al., 1992; Biswal et al., 1995; Raichle, 2011). The activity of these FC networks may reflect the brain's operations at rest, and the study of these FC networks may provide rich and sensitive markers for diseases (Greicius et al., 2004; Filippini et al., 2009). Task fMRI, on the other hand, examines the dynamic brain activity evoked by performing tasks (Kwong et al., 1992; Laird et al., 2013). The activity of neural networks gives rise to simple motor behaviors as well as more complex behaviors, and the activity of a task-specific network is therefore responsible for the corresponding human behavior. Although the resting-state FC network and the task-specific network reflect two very different cognitive states, i.e., intrinsic activity vs. task-evoked activity, these two networks may be related to each other, and studying this relationship may improve our understanding of the brain's operations in both the resting and task states (Cole et al., 2014).
Using a novel method, Huang compared intrinsic activity with task-evoked activity and found that the former was substantially larger than the latter, consistently so for all levels of analysis from a single cortical area to the whole brain (Huang, 2019). The study found that, in the task state, the brain (1) controlled the intrinsic activity not only during the performance of a task but also during the rest between tasks; (2) activated a task-specific network only when the corresponding task was performed but kept it relatively "silent" during other tasks; and (3) simultaneously controlled the activation of all task-specific networks during the performance of each task. These results show a strong interaction between intrinsic activity and task-evoked activity. Understanding this rest-task interaction may be crucial to elucidating the brain's contribution to mental states (Northoff et al., 2010). It may also play an important role in the neuroimaging diagnosis and evaluation of neurologically and psychiatrically diseased brains (Castellanos et al., 2013). Resting-state fMRI is of great significance for medical imaging diagnosis because it is easy to operate and the scanning process is relatively simple: it only requires patients to lie still, whereas task fMRI requires them to perform tasks, which may be difficult for those who cannot carry out the task properly. Nevertheless, neurological and psychiatric diseases may manifest as behaviors that are better characterized by specific task networks, such as the face-processing network in Alzheimer's disease. Accordingly, interpreting clinical resting-state fMRI data may require a better understanding of the relationship of FC networks between the resting and task states. Many functional network properties of the human brain have been investigated extensively for both states, but the relationship between them has rarely been investigated and remains unclear. The literature shows inconsistent results regarding the relationship between the intrinsic and task-evoked FC networks. Arfanakis et al. (2000) reported that the FC demonstrated in the "resting brain" is not affected by tasks that activate unrelated brain regions. Hampson et al. (2004) found reduced FC between MT/V5 and the cuneus, lingual gyrus, and thalamus, but increased FC between MT/V5 and the middle occipital gyrus, when viewing moving concentric circles. Fransson and Marrelec (2008) found a global reduction in FC within the default mode network (DMN) during a continuous working memory task. Hasson et al. (2009) reported that the FC network among regions typically active during rest varies with exogenous processing demands, i.e., the network encompasses more highly interconnected regions during rest than during listening, and also when listening to unsurprising vs. surprising information. In comparison to the resting state, Shirer et al. (2012) found increased FC among task-related regions during memory and subtraction tasks. He (2013) found a negative interaction between intrinsic activity and task-evoked activity during a visual attention task. Arbabshirani et al. (2013) reported globally decreased FC during the performance of an auditory oddball task. Lynch et al. (2018) also reported reduced FC among different cortical networks, especially between visual and non-visual sensory or motor cortices, when watching a naturalistic movie.
Huang (2019) reported a globally greater intrinsic activity compared to task-evoked activity, and the brain's control of this intrinsic activity not only during the performance of a task but also during the rest between tasks. Cole et al. (2014) suggested that the brain's functional network architecture during task performance is shaped primarily by an intrinsic network architecture that is also present during rest, and secondarily by evoked task-general and task-specific network changes, indicating a strong relationship between intrinsic FC and task-evoked FC. As different methods were used in these studies, their inconsistent results could reflect genuine FC network differences between the resting and task states, or could simply be the result of the different analysis methods. To avoid the latter, in this study we used the same method to compare the FC networks between the resting and task states. fMRI-identified FC networks, whether at rest or during a task, are determined from the temporal correlation of the underlying neural activity within each network, reflecting the organized coactivity within that network. The existence of intrinsic FC networks, such as the coarse 7-network and the fine 17-network parcellations, is well documented (Yeo et al., 2011). The 7 networks consist of the visual, somatomotor, dorsal attention, ventral attention, limbic, frontoparietal, and default networks; the 17-network parcellation further divides these 7 networks into 17. The separation of these networks suggests an organized intrinsic activity within each network, and studying this intrinsic activity for each network may provide insights into the nature of these organized intrinsic activities at rest. On the other hand, fMRI studies of a wide range of sensorimotor, visual, and cognitive tasks reveal simultaneous activation in multiple regions across the whole brain, showing the existence of task-specific networks. Comparing well-defined task-specific networks with the corresponding intrinsic FC networks may reveal their relationship. For example, the FC network of intrinsic somatosensory and motor activity contains both the left and right somatosensory and motor cortices (Yeo et al., 2011). The FC network associated with tapping the right-hand fingers should, however, contain the left but not the right primary sensory (S1) and motor (M1) cortical areas, because the left M1 controls the movement of the right-hand fingers and the left S1 is the primary area for the input of the somatic sensation when tapping these fingers (Penfield and Boldrey, 1937). Accordingly, there should be (1) overlapping or common areas between these two FC networks, (2) areas such as the right S1 and M1 that are present only in the intrinsic FC network, and (3) additional areas outside the intrinsic FC network that are recruited by tapping the right-hand fingers. To verify this prediction, this study investigated the relationship of the sensorimotor and visual cortical FC networks between the resting and task states.

Subjects

Eighteen healthy right-handed young adults (10 male and 8 female, ages 19 to 25 years) were recruited to participate in this study. Four subjects were excluded from the analysis: three showed substantial head-motion-induced image artifacts and one did not complete the experiment. The Ethics Committee of Guizhou Provincial People's Hospital approved this study. All subjects consented to participate voluntarily prior to the study.
All methods were performed in accordance with the relevant guidelines and regulations of Guizhou Provincial People's Hospital.

fMRI Scans

Each participant first underwent a 9 min resting-state (rs) run and then a 9 min task run (a dummy scan of 5 volumes prior to each run was discarded). During the rs run, the participants were instructed to close their eyes and try not to think of anything, while remaining awake for the whole scan. During the task run, they performed two tasks: the first task trial consisted of rubbing the five fingers of the right hand for 8 s followed by a 22 s rest period (eyes closed for the whole trial), and the second task trial consisted of opening the eyes for 8 s and then closing them for a 22 s rest period. These two task trials were repeated eight times, resulting in a total 8 min task period. The participants were instructed to close their eyes during the first 1 min of the scan, after which the 8 min task period started (Figure 1A).

Image Acquisition

MRI data were acquired using a 3.0 T MR scanner (Discovery MR 750, GE Healthcare, Milwaukee, WI) with a 32-channel phased-array coil. Thirty-eight axial T2*-weighted functional images covering the whole brain were acquired using a gradient-echo echo-planar-imaging pulse sequence with the following parameters: echo time (TE)/repetition time (TR) = 28/2,500 ms, flip angle (FA) = 80°, field of view (FOV) = 224 mm, matrix 64 × 64, slice thickness 3.5 mm, and spacing 0.0 mm. Prior to the functional scans, the participants had pre-training in the task performance. They started to rub their fingers when their right leg was tapped twice and stopped rubbing when the leg was tapped once. When their left leg was tapped twice, they opened their eyes, and they closed them when the leg was tapped once. During the task run, these task instructions of tapping the right or left leg were provided by a researcher who stood beside the participant. After the functional scans, T1-weighted whole-brain MR images were acquired using a 3D BRAVO pulse sequence.

Image Pre-processing

Image pre-processing of the functional images was performed with a standard procedure (Huang, 2018) using AFNI. The procedure included: (1) removing spikes from the signal intensity time course; (2) slice-timing correction for the image acquisition time difference from slice to slice; (3) motion correction to align all volume images to the first volume image of the rs run; (4) spatially smoothing each volume image with a full-width-half-maximum (FWHM) of 4.0 mm; (5) sorting images into the "base" period (the first 1 min) and the "task" period (the last 8 min) for each run; (6) computing the mean volume image for both the "base" and "task" periods; (7) generating a brain mask from the images of the rs run; (8) bandpassing the signal intensity time course of the "task" period to the range of 0.009-0.08 Hz for both the rs and task runs; and (9) computing the relative signal change (%) of the bandpassed signal intensity time course of the "task" period, i.e., relative to the mean signal of the corresponding "base" period, for both the rs and task runs. This voxel-wise relative signal change time course of the 8 min "task" period was used to conduct the FC analysis for the rs and task runs, respectively (Figure 1B), ensuring the consistency of our FC comparison between the resting and task states.
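A minimal sketch of steps (8) and (9) of this pipeline in Python (NumPy/SciPy, with assumed array names; the original work used AFNI tools, so this is only an illustration of the two operations, not the authors' code). With TR = 2.5 s, the 8 min task period contains 192 volumes.

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 2.5          # repetition time in seconds
fs = 1.0 / TR     # sampling frequency in Hz

def bandpass_percent_change(task_ts, base_mean):
    """Band-pass a voxel's 'task'-period time course to 0.009-0.08 Hz and
    express it as percent signal change relative to the 'base'-period mean.

    task_ts:   1D array, signal intensities of the 8 min task period
    base_mean: scalar, mean signal of the 1 min base period for this voxel
    """
    b, a = butter(2, [0.009, 0.08], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, task_ts)    # zero-phase band-pass filter
    return 100.0 * filtered / base_mean   # relative signal change (%)

# Example with synthetic data: 192 volumes (8 min at TR = 2.5 s).
ts = 1000 + np.random.default_rng(0).normal(0, 5, 192)
pct = bandpass_percent_change(ts, base_mean=1000.0)
```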
Seed Selection and Seed-Dependent Pearson Correlation (R) Maps

We expected to see (1) a finger-rubbing-induced BOLD signal change in the left primary sensorimotor cortex (PSMC) for each of the eight finger-rubbing tasks and (2) an eye-opening-and-closing-induced signal change in the primary visual cortex (V1) for each of the eight visual tasks. Accordingly, we examined the signal time courses in these areas to identify one seed in the left PSMC reflecting the time-locked finger-rubbing-induced signal changes and one seed in area V1 reflecting the time-locked visual-stimulation-induced signal changes. For each individual, based on the well-known somatotopic map (i.e., the somatosensory and motor homunculus in the PSMC) (Penfield and Boldrey, 1937), we first identified a coarse finger-representation area in the left PSMC. Then, based on the time-locked finger-rubbing-evoked BOLD response, we selected a seed consisting of four voxels with similar signal change time courses. The same procedure was used to select a seed in V1. For each seed, we computed the mean signal time course, averaged over the four voxels of the seed, for the task run. This mean signal time course was then correlated with the voxel-wise relative signal change time course of the 8 min "task" period across the whole brain, yielding a seed-dependent R map for the task state. For each participant, two task-associated R maps were generated in the original space, one for the finger-rubbing task and the other for the eye-opening-and-closing task. For the resting state, the same two seeds were used to compute the two mean signal time courses in the two cortical areas, as for the task state. These two time courses were then used to generate two R maps for the resting state, one for the sensorimotor network and the other for the visual network, respectively.

FIGURE 1 | Illustration of the task paradigm, the two selected seeds in the left primary sensorimotor cortex (PSMC) and in the left primary visual area (V1) and their signal time courses for the resting and task states in a representative participant, and the group-mean signal change time courses of the selected seeds. (A) The task paradigm. The first 1 min served as the "base" period and the last 8 min served as the "task" period. The "task" period consisted of 16 tasks shown by the 16 bars: red bars represent the sensorimotor task and blue bars the visual task. (B) Top panel: the red cluster in the left PSMC denotes the seed associated with the finger-rubbing task, and the eight wine-red arrows in the right top plot indicate the large signal changes evoked by the eight sensorimotor tasks. The eight red bars represent the onset and duration of the eight finger-rubbing tasks, and the eight blue bars represent the onset and duration of the eight eye-opening-and-closing tasks. The right bottom plot illustrates the seed signal time course for the resting state. Bottom panel: the red cluster in the left V1 denotes the seed associated with the eye-opening-and-closing task, and the eight blue arrows in the right top plot indicate the large signal changes evoked by the eight visual tasks. The right bottom plot illustrates the seed signal time course for the resting state. (C) Group-mean task-evoked signal change time courses of the selected seeds in the left PSMC and the left V1. The task-evoked signal changes are conspicuous for both the sensorimotor and visual tasks and are time-locked with these tasks. L, left; R, right.
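A sketch of the seed-dependent R-map computation just described (Python/NumPy; the array names and shapes are assumptions for illustration, not the authors' code):

```python
import numpy as np

def seed_r_map(data, seed_voxels):
    """Seed-dependent Pearson correlation (R) map.

    data:        array of shape (n_voxels, n_timepoints), the voxel-wise
                 relative signal change time courses of the 'task' period
    seed_voxels: indices of the (here, four) seed voxels
    """
    seed_ts = data[seed_voxels].mean(axis=0)   # seed-mean time course
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    data_z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    return data_z @ seed_z / len(seed_ts)      # Pearson R for every voxel

# Example: 10,000 brain voxels, 192 time points (8 min at TR = 2.5 s).
rng = np.random.default_rng(1)
r_map = seed_r_map(rng.normal(size=(10000, 192)), seed_voxels=[10, 11, 12, 13])
```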
For group comparison, for each participant, each R map was first warped to a standard template space (icbm452, an average volume of 452 normal brains), and then a mean R map was computed over all participants, yielding four group-mean R maps corresponding to the two seeds (PSMC vs. V1) and the two states (resting vs. task).

Group Comparison of Seed-Dependent Functional Connectivity Maps Between the Resting and Task States

For each seed and each state, the group-mean R map averaged over all participants was thresholded at a chosen value of R = 0.345 (P = 1.0 × 10⁻⁶, N = 192) to yield an FC map for that seed and that state. For the seed in the left PSMC, the FC map of the resting state reflects the significant correlation of the intrinsic neural activity in that area with all other cortical areas (i.e., the sensorimotor network at rest), and the FC map of the task state shows the finger-rubbing-evoked significant coactivity across the whole brain. Similarly, for the seed in area V1, the FC map of the resting state reflects the significant correlation of the intrinsic neural activity in V1 with all other cortical areas (i.e., the visual network at rest), and the FC map of the task state shows the eye-opening-and-closing-evoked significant coactivity across the whole brain. For both the PSMC and V1 seeds, we generated a mask of the common area of the two FC maps between the resting and task states, to examine the effect of the task on the rs network, and two masks of the differences between the two FC maps, one for the rs FC map excluding the task FC map and the other for the task FC map excluding the rs FC map, to examine the network differences between the two states. Then, for each of the two areas (PSMC vs. V1) and the two states (task vs. rs), each mask was used to compute a mask-mean R within that mask for each subject. These mask-mean R values were used for group comparison. For the group statistical tests, the R values were converted to Z values through Fisher's Z transformation to improve the normality of the distribution.

Validating the Chosen Threshold R for Determining FC Maps

The FC maps were obtained by thresholding their corresponding R maps at R = 0.345 (P = 1.0 × 10⁻⁶). Different threshold R values would yield different FC maps: a larger threshold would yield a smaller FC map and a smaller threshold a larger FC map. We chose two additional threshold values, R = 0.314 (P = 1.0 × 10⁻⁵) and R = 0.374 (P = 1.0 × 10⁻⁷), to test their effect on the relationship of FC between the resting and task states. With each threshold R, we again generated three masks to examine the effect of the task on the rs network and the network differences between the two states for each task type, as in section "Group Comparison of Seed-Dependent Functional Connectivity Maps Between the Resting and Task States."

Validating the Selected Seeds for Determining FC Maps

The R maps were obtained with the selected seeds, and different seeds may yield different R maps and thus different FC maps. To test the potential seed effect on the relationship of FC between the resting and task states, in the original space, we changed the seed size from four to eight voxels to test the seed-size effect.
Considering that these seeds were selected for each individual and the selection might bias the analysis, in the standard template space we selected two seeds of four voxels each to conduct the FC analysis: one seed located in the left PSMC [−38 mm (L), −27 mm (P), 55 mm (S), MNI] and the other in area V1 [−6 mm (L), −90 mm (P), 7 mm (S), MNI]. With each seed, in either the original space or the standard template space, we again generated three masks to examine the effect of the task on the rs network and the network differences between the two states for each task type, as in section "Group Comparison of Seed-Dependent Functional Connectivity Maps Between the Resting and Task States."

Seed Selection and Seed-Dependent R Maps

We identified one seed in the left PSMC associated with the finger-rubbing task and one seed in the left V1 associated with the eye-opening-and-closing task for each participant; Figure 1B illustrates the two selected seeds for a representative participant. For each identified seed, a seed-mean signal time course was computed for both the resting and task states (Figure 1B). For the task state, for each seed type, a group-mean signal time course averaged over all participants was computed; its association with the task is conspicuous and time-locked for each of the eight task trials (Figure 1C). For each seed type, the seed-mean signal time course was used to compute an R map for each state, yielding a total of four R maps (two seeds and two states) for each individual participant.
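The threshold pairs reported above (e.g., R = 0.345 for P = 1.0 × 10⁻⁶ with N = 192 time points) are consistent with a Fisher z-based significance test for Pearson's r, sketched below in Python. This is an illustration of the standard large-sample approximation; the authors do not state which exact test they used to derive the thresholds.

```python
import math

def r_to_two_tailed_p(r: float, n: int) -> float:
    """Two-tailed p-value for Pearson's r via the Fisher z approximation:
    z = atanh(r) * sqrt(n - 3) is approximately standard normal under H0."""
    z = math.atanh(r) * math.sqrt(n - 3)
    return math.erfc(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))

for r in (0.314, 0.345, 0.374):
    print(r, f"{r_to_two_tailed_p(r, n=192):.1e}")
# Roughly 8e-6, 8e-7, 6e-8: close to the paper's 1e-5, 1e-6, 1e-7.
```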
Group Comparison of FC Maps Between the Resting and Task States
For the seed selected in the left PSMC, for the resting state, the determined FC map demonstrated a significant correlation of the intrinsic neural activity in both the left and right primary sensorimotor cortex, premotor area, supplementary motor area, parietal cortex, and the right anterior motor area of the cerebellum (Figure 2, top panel). The finger-rubbing task activated not only these regions but also some other areas in the cerebrum.

FIGURE 2 | Comparison of the functional connectivity (FC) map between the resting and task states. Top panel illustrates the FC map of the resting state that was associated with the intrinsic neural activity of the seed in the left PSMC (left) and the sensorimotor task-evoked FC map across the whole brain (right). Bottom panel shows the FC map of the resting state that was associated with the intrinsic neural activity of the seed in the left V1 (left) and the visual stimulation-evoked FC map across the whole brain (right). The color bar indicates the Pearson correlation coefficient R. PSMC, primary sensorimotor cortex; V1, primary visual cortex; SMA, supplementary motor area; PM, premotor area; AMA, anterior motor area; PMA, posterior motor area; L, left; R, right.

The left two images in the top panel of Figure 3 illustrate the overlapped (i.e., common) areas of the FC maps between the resting and task states, the middle two images illustrate the major areas of the rs FC map excluding the task FC map, and the right two images the major areas of the task FC map excluding the rs FC map.

FIGURE 3 | Illustration of the common and different areas of the two functional connectivity (FC) maps between the resting and task states. Top panel illustrates the mask of the common FC network of the resting and task states (left), of the areas present at the resting state but not the task state (middle), and of the areas present at the task state but not the resting state (right), respectively, for the sensorimotor system; bottom panel shows the mask of the common FC network of the resting and task states (left), of the areas present at the resting state but not the task state (middle), and of the areas present at the task state but not the resting state (right), respectively, for the visual system.

Using the common areas of the two FC maps as a mask, a group-mean analysis of the R values between the resting and task states yielded a significantly increased R for this common FC map by the finger-rubbing task (Figure 4, top panel, left), demonstrating that the sensorimotor task significantly enhanced the FC of the neural activity of this common sensorimotor system. One of the two major areas of the rs FC map excluding the task FC map was on the right central sulcus, and the other was located at the posterior part of the supplementary motor area (Figure 3, top panel, middle). Using the corresponding mask, the group-mean analysis of the R values showed a significantly larger R for the resting state than for the task state (Figure 4, top panel, middle), demonstrating that the intrinsic neural activity of these areas with that of the seed at the left PSMC was significantly correlated for the resting state, but their neural activity for the task state was not correlated with the sensorimotor task-evoked activity. Compared to the resting state, the sensorimotor task not only substantially expanded the common FC map but also recruited several additional areas, such as both the left and right anterior and posterior motor areas of the cerebellum (the right two images in the top panel of Figure 3). Using these areas as a mask, the group-mean analysis showed a significantly increased R for the task state compared to that for the resting state (Figure 4, top panel, right), demonstrating a significantly expanded task-associated activation network across the whole brain by the finger-rubbing task.

FIGURE 4 | Comparison of the task effect on the functional connectivity (FC) map between the resting and task states. Top panel: for the sensorimotor FC map, the finger-rubbing task significantly increased the coactivity across the entire common FC network (two-tail paired t-test, P = 0.005) (left paired bars) and across those expanded and additionally activated brain areas (two-tail paired t-test, P = 1.0 × 10−5) (right paired bars), respectively. In the resting FC map excluding the task FC map, the R was significantly larger for the resting state than that for the task state (two-tail paired t-test, P = 0.0001) (middle paired bars). Bottom panel: for the visual FC map, the eye-opening and closing task significantly increased the coactivity across the entire common FC network (two-tail paired t-test, P = 0.002) (left paired bars) and across those expanded and additionally activated brain areas (two-tail paired t-test, P = 5.6 × 10−5) (right paired bars), respectively. In the resting-state FC map excluding the task FC map, the R was significantly larger for the resting state than that for the task state (two-tail paired t-test, P = 0.028) (middle paired bars).
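The mask logic and the group comparisons described above can be illustrated with a short Python sketch (boolean FC maps and per-subject R maps are assumed as NumPy arrays; the function names are ours, not the study's):

import numpy as np
from scipy import stats

def three_masks(fc_rest, fc_task):
    common = fc_rest & fc_task      # network shared by both states
    rest_only = fc_rest & ~fc_task  # rs FC map excluding the task FC map
    task_only = fc_task & ~fc_rest  # task FC map excluding the rs FC map
    return common, rest_only, task_only

def mask_mean_r(r_map, mask):
    return r_map[mask].mean()

def paired_test(rest_maps, task_maps, mask):
    # rest_maps, task_maps: per-subject R maps, shape (n_subjects, x, y, z)
    rest = np.arctanh([mask_mean_r(r, mask) for r in rest_maps])
    task = np.arctanh([mask_mean_r(r, mask) for r in task_maps])
    return stats.ttest_rel(task, rest)  # two-tail paired t-test on Z values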
To compare the relative size of these three FC maps, i.e., the three FC masks in the top panel of Figure 3, we computed the total number of voxels for each FC mask. Using the total number of voxels of the shared common FC map as the reference, the ratio of the area for the three networks was 1:1.16:2.61 (shared common FC/rs FC excluding task FC/task FC excluding rs FC). The anatomic locations for each network are tabulated in Table 1.

TABLE 1 | Brain regions of the common areas shared by both resting state (rs)- and task-functional connectivity (FC) networks, the distinct areas of the rs-FC network from those common areas, and the distinct areas of the task-FC network from those common areas, respectively, for the sensorimotor network labeled in the atlas of TT_Daemon.

For the seed selected in the left V1, for the resting state, the identified FC map showed a significant correlation of the intrinsic neural activity in both the left and right visual cortex (Figure 2, bottom panel). The eye-opening and closing task activated the visual cortex, and this activation extended outside the visual cortex, as illustrated in the right images in the bottom panel of Figure 2. The left two images in the bottom panel of Figure 3 illustrate the mask of the common FC map between the resting and task states, the middle two images illustrate the major areas of the rs FC map excluding the task FC map, and the right two images the major areas of the task FC map excluding the rs FC map, respectively. For the common FC map, a group-mean analysis of the R values between the resting and task states showed a significantly increased R for the task state (Figure 4, bottom panel, left), showing that the visual task significantly enhanced the FC within this common FC map compared to the resting state. Using the mask of the major areas of the rs FC map excluding the task FC map (the middle two images in the bottom panel of Figure 3), the group-mean analysis of the R values showed a significantly larger R for the resting state than for the task state (Figure 4, bottom panel, middle), demonstrating that the intrinsic neural activity of these areas with that of the seed at the left V1 was significantly correlated for the resting state, but their neural activity for the task state was not correlated with the visual task-evoked activity. For those areas of the task FC map excluding the rs FC map (the right two images in the bottom panel of Figure 3), the group-mean analysis showed a significantly increased R for the task state compared to that for the resting state (Figure 4, bottom panel, right), demonstrating a significantly expanded task-associated activation network across the whole brain by the eye-opening and closing task. To compare the relative size of these three FC maps, i.e., the three FC masks in the bottom panel of Figure 3, we computed the total number of voxels for each FC mask. Using the total number of voxels of the shared common FC map as the reference, the ratio of the area for the three networks was 1:0.08:1.85 (shared common FC/rs FC excluding task FC/task FC excluding rs FC). The anatomic locations for each network are tabulated in Table 2.

TABLE 2 | Brain regions of the common areas shared by both rs- and task-functional connectivity (FC) networks, the distinct areas of the rs-FC network from those common areas, and the distinct areas of the task-FC network from those common areas, respectively, for the visual network labeled in the atlas of TT_Daemon.
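The relative network sizes reported above amount to simple voxel counts relative to the shared common FC map, e.g. (a minimal sketch assuming boolean NumPy masks; the function name is ours):

import numpy as np

def network_ratios(common, rest_only, task_only):
    ref = common.sum()  # voxels in the shared common FC map
    return 1.0, rest_only.sum() / ref, task_only.sum() / ref

# e.g., sensorimotor system: (1, 1.16, 2.61); visual system: (1, 0.08, 1.85)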
Validating the Chosen Threshold R for Determining FC Maps
The FC maps determined with the two different thresholds P = 1.0 × 10−5 and 1.0 × 10−7 demonstrated almost the same FC networks as those determined with P = 1.0 × 10−6 for both the resting and task states (Figure 5), showing that the general pattern of these FC networks held despite different threshold R (P) values. With the three masks generated for each threshold P-value (images not presented), similar results were obtained (Figure 6), showing that these two different threshold P-values produced the same relationship of FC between the resting and task states.

Validating the Selected Seeds for Determining FC Maps
In the original space, the two selected seeds with eight voxels each produced almost identical results to those with four voxels each (data not presented). In the standard template space, the two selected seeds produced similar FC networks to those obtained with the two seeds selected in the original space (Figure 7). With the three generated masks (images not presented), comparing the rest FC with the task FC showed the same relationship of FC between the resting and task states (Figure 8).

DISCUSSION AND CONCLUSION
This study investigated the relationship of the sensorimotor FC network between the resting state and the task state of rubbing the fingers of the right hand. The results verified our prediction that these two FC networks are related in a specific way. First, they share a common FC network, as shown in Figure 3 (top panel, left). Second, as expected, the right M1 and S1 areas are not present in the task-evoked FC network (Figure 3, top panel, middle). Third, the performance of this finger-rubbing task recruited, outside the intrinsic FC network, substantial areas across both the cerebrum and cerebellum (Figure 3, top panel, right). These results do not support the suggestion that the brain's functional network architecture during task performance is shaped primarily by an intrinsic network architecture that is also present during rest and secondarily by evoked task-general and task-specific network changes (Cole et al., 2014). The substantial additional areas recruited by the task performance indicate the involvement of other intrinsic FC networks when performing the task, showing a complicated relationship of this task-evoked FC network with those intrinsic FC networks. As the task is a simple sensorimotor task, we also expect a complicated relationship between intrinsic and task-evoked FC networks when performing complex tasks. The shared common FC network (Figure 3, top panel, left) shows a significantly larger R for the task state than for the resting state (paired t-test, P = 0.005) (Figure 4, top panel, left), showing that the task performance significantly enhances the coactivity within that network compared to the intrinsic neural activity at rest. This conclusion is also illustrated in the top panel of Figure 2. This common FC network may be an essential part of the finger-rubbing task. The control of the movement of the right-hand fingers by the left primary motor cortex of the cerebrum and, similarly, the control of the left-hand fingers by the right primary motor cortex, i.e., the somatomotor representations, are well documented (Penfield and Boldrey, 1937).
The somatosensory representations, i.e., the input of the sensory information of the right hand to the left primary sensory cortex and the input of the sensory information of the left hand to the right primary sensory cortex, are also well documented. The exclusion of the right M1 and S1 areas from the right-hand finger-rubbing-evoked FC network reflects these somatomotor and somatosensory representations (Figure 3, top panel, middle). The important role of the cerebellum in movement control and the decussate cerebrocerebellar circuit, i.e., the right cerebellar cortex connected to the left cerebral cortex and the left cerebellar cortex connected to the right cerebral cortex, are also well documented. This cerebrocerebellar circuit mediates a two-way connection between the cerebrum and cerebellum and plays a crucial role in somatic functions concerning motor planning, motor coordination, motor learning, and memory (Allen and Tsukahara, 1974; Benagiano et al., 2018). Right-hand finger rubbing activates not only the contralateral cerebrocerebellar circuit with respect to the cerebrum but also the ipsilateral cerebrocerebellar circuit, as evidenced in the right images in the top panel of Figure 2, showing an association between these two circuits and a complicated task-evoked FC network even for a simple finger-rubbing task. The contralateral cerebrocerebellar circuit consists of M1, premotor, and supplementary motor areas in the left cerebrum and both anterior and posterior motor areas in the right cerebellum and is mainly responsible for the motor planning, coordination, and execution of rubbing the fingers of the right hand. The ipsilateral cerebrocerebellar circuit, however, consists of premotor and supplementary motor areas in the right cerebrum and both anterior and posterior motor areas in the left cerebellum, i.e., excluding the right M1 area compared to the contralateral cerebrocerebellar circuit, and its functional role in the performance of rubbing the right-hand fingers is unknown. These results replicate previous findings (Huang, 2020). Further studies are needed to explore the functional role of this ipsilateral cerebrocerebellar circuit. This study also investigated the relationship of the visual FC network between the resting state and the task state of opening and closing the eyes. The results demonstrate a relationship similar to that of the sensorimotor FC network between the resting state and the finger-rubbing task state: (1) they share a common FC network, as shown in Figure 3 (bottom panel, left); (2) a few areas both inside and outside the visual cortex are present only in the intrinsic FC network (Figure 3, bottom panel, middle); and (3) substantial areas outside the intrinsic FC network are recruited by opening and closing the eyes (Figure 3, bottom panel, right). The shared common FC network shows a significantly larger R for the task state than for the resting state (paired t-test, P = 0.002) (Figure 4, bottom panel, left), showing that opening and closing the eyes significantly enhances the coactivity within that network compared to the intrinsic neural activity at rest. This common FC network is mainly in the visual cortex, and the bottom panel of Figure 2 illustrates the task-enhanced coactivity within that network.
In comparison to the intrinsic neural activity at rest, the substantial additional areas recruited by opening and closing the eyes are located mainly outside the visual cortex and extend to the cerebellum as well (Figure 4, bottom panel, right), indicating the involvement of other intrinsic FC networks when performing this task. This shows a complicated relationship of the eye-opening- and eye-closing-evoked FC network with intrinsic FC networks, a conclusion similar to that for the finger-rubbing-evoked FC network. The group-mean R in the shared common FC network was significantly larger for the task state than for the resting state (Figure 4, left), regardless of the task type, showing a task-enhanced coactivity within the network in comparison to the intrinsic activity. In contrast to the task paradigm of 8 s task-on followed by 22 s task-off for each of the 16 task trials, our recent study with a continuously alternating 2 s visual stimulation on-and-off task paradigm observed a similar task-enhanced coactivity in the visual FC (Huang and Zhu, 2017), showing that this task-enhanced coactivity is independent of the task paradigm. The intrinsic activity was irregular, spontaneous, and self-regulated, but the task-evoked activity was actively controlled by the brain, reflected in the task-fMRI time series, which was regular and time-locked to the task paradigm (Figure 1B). This regularity and time-locked behavior were the results of the brain actively controlling the task performance and therefore should reflect the underlying neuronal activity evoked by performing the task. In comparison to the intrinsic activity, the task-enhanced coactivity in the common FC network shows a stronger effect of the brain's active control over the task-evoked activity. It reflects a different degree of the brain's control over these two different brain states, i.e., the self-regulated intrinsic activity in the resting state vs. the brain's actively controlled task-evoked neuronal activity in the task state. Our recent study demonstrated the brain's active control over the intrinsic activity during the task state (Huang, 2019). That study systematically compared the intrinsic activity with the task-evoked activity at several levels, starting from a finger-tapping-activated area in the PSMC, then the task-activated areas across the whole brain, and finally the gray matter, white matter, and whole brain. At each level, the intrinsic activity was found to be equal to or substantially larger than the task-evoked activity. The study also found that the brain substantially suppressed the intrinsic activity not only during the period of task performance but also during the rest period between the tasks, reflecting the brain's active control over the intrinsic activity during the task state. The present study also found that changing the seed size (four vs. eight voxels) and selecting seeds in the original space for each individual subject vs. common seeds in the standard template space for all subjects produced similar results for both rest FC and task FC (Figures 5–8), showing that the relationship of FC of the sensorimotor and visual networks between the resting and task states remained unchanged under these conditions. In conclusion, this study shows a general relationship of a task-evoked FC network with its corresponding intrinsic FC network, regardless of tasks.
For each task type, the study shows that (1) the intrinsic and task-evoked FC networks share a common network, and the task enhances the coactivity within that common network compared to the intrinsic activity; (2) some areas within the intrinsic FC network are not activated by the task, and therefore the task activates part, but not the whole, of the intrinsic network; and (3) the task activates substantial additional areas outside the intrinsic FC network and therefore recruits more intrinsic FC networks for the task performance.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Guizhou Provincial People's Hospital. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
ZX was the guarantor of integrity of the entire study. All authors contributed to the study concepts and design, data analysis and interpretation, and manuscript drafting or revision for important intellectual content, and gave approval of the final version of the submitted manuscript.
Challenges in Agribusiness and Rural Tourism Development in Albania

The agricultural sector is one of the sectors with a huge impact on the economy. Agricultural production is increasing, but it has not had any lasting impact. On the other hand, investment in value chains involving different factors has proved successful in increasing agricultural productivity and profitability, and this impact has demonstrated a high degree of economic stability. Tourism, likewise, is one of the most promising sectors in Albania. There are also many difficulties, such as the lack of services, marketing, or infrastructure. Furthermore, the agribusiness and rural tourism sectors have difficulties in obtaining funding because of a lack of technology, lower product quality standards, and limited market access. Based on these facts, in this paper we have empirically analysed the difficulties of this sector and the opportunities to develop it. The main purpose of our research is to design a strategy for the development of the agribusiness and rural tourism sector. The methodology is based on primary and secondary sources. More specifically, we refer to previous research by other authors and to other studies made by government and private agencies. The main goal is to provide data through interviews with business representatives.

Introduction
Tourism is a catchword for rural development, engagement in projects to alleviate poverty, and the preservation of the cultural diversity of the indigenous community (Doohyun et al., 2014). In recent years there have been changes in the Albanian agribusiness sector. These have had a positive impact on this sector, but its development is still low. Some of the numerous barriers are production, marketing, information, and lack of access to appropriate financial products. Currently, the demand for rural goods and tourist services is of high intensity, while the ability of businesses operating in these areas to fulfil this demand remains at a low level. Many products cannot be sold in the market due to a lack of preservation and packaging. On the other hand, tourism services have low quality, low prices, low margins, and a poor customer base.
The agricultural sector in Europe is facing dramatic changes due to various factors, such as climatic changes, changes in the policy environment, and the development of tourism (Straaten, 2000). The nature of the land reform process carried out during the 1990s has contributed to the chronic structural nature of rural poverty. Although most people received land from this process, land distribution was far from ideal. Most people received only very small areas, usually broken into several parcels. Thus the production base for poor farmers is impractical, and many have leased or sold their entitlements, or else have chosen to allow the land to remain fallow. Small-scale producers continue to have great difficulty in producing marketable surpluses of produce at a profit. While they produce a wide range of commodities, their access to the factors of production (machinery, credit, inputs) remains sub-optimal, and their access to information is poorly addressed through the publicly available extension services. They typically sell marketable produce immediately after harvest at a significant price disadvantage. Poor people in rural areas of Albania find it very difficult to engage in entrepreneurial activity. They are constrained in this by a number of inter-related factors, including lack of access to formal lines of finance, inadequate education, lack of access to land, lack of appropriate financial products for poor people, the small scale of operations, and lack of technical and market knowledge. The combined impact of these factors is that entrepreneurial activity by poor people remains subdued in rural areas. The attempt to combine agricultural and tourism services, known as agro-tourism, is an important approach for rural development (Tanupol et al., 2000). Thus, our research objectives are: (a) to identify and assess the problems of rural tourism; (b) to consider how this sector can be developed; and (c) to provide valuable strategies for the development of this sector.

Methodology
The main purpose of our research is to design a strategy for the development of the agribusiness and rural tourism sector. This paper is mainly based on the study and review of several papers by national and international authors. Earlier studies by different authors, periodic reports from government agencies, and reports from consulting corporations were used as secondary sources. Meanwhile, primary data were collected through interviews with business representatives. To assess the problems and to develop strategies for rural tourism, we conducted desk research and held direct meetings with representatives of selected businesses. We also interviewed farmers. We then used qualitative analysis to generate results from the findings.

Limitations on Sector Growth
Rural tourism is a stable means of addressing socio-economic problems in rural areas, and it is also an important source of livelihood for the rural population (Tchetchik, Fleischer, & Fleischer, 2008). However, there are some limitations. The limitations of this sector are numerous and they affect each other. The reduction or elimination of these restrictions would be an opportunity for developing this sector. The main limitations are infrastructural, financial, market-related, and technological.
Infrastructure: Rural infrastructure remains in poor condition or non-existent in the more remote parts of Albania. While some of the main roads and facilities are in fair condition, it is the condition of rural access roads, energy supplies, and water supplies for both domestic and irrigation needs that has the greatest negative impact on limiting agricultural-based income growth. There are also severe constraints on communications in many localities.

Finance: The relative lack of appropriate financial products and tools for agricultural-based rural enterprises in mountain areas remains a serious constraint on the growth of incomes from these sources. While there has been a general improvement in the provision of medium- and longer-term finance for enterprises, the supply remains concentrated in more easily accessible areas and is almost non-existent in some areas. In addition, there are several other important factors that limit the availability of rural finance. In summary, these are: the lack of viable banks and microfinance institutions in many areas, the under-valuation of rural assets with the potential to be used for collateral, the lack of assets to be used for collateral by the youth and by potential female entrepreneurs, and the lack of facilities to enable viable SMEs to gain access to long-term venture capital or equity financing.

Markets: The market demand for Albanian agricultural commodities and rural-based products remains strong. In the first place, there are significant imports of agricultural commodities that can be grown locally, even of items grown in the programme area (e.g., chestnuts, herbal teas). The proximity to Europe, and the new initiatives for easier trading systems, also provide new opportunities for exports to the EU. However, there are serious constraints that continue to operate on the supply side of the marketing process, with resultant limitations on the ability of rural SMEs to provide avenues for sustainable income growth. The most important of these are: (a) lack of investment in production, processing, storage, and transport facilities; there has been very limited investment from private sources in the sector, with the limited investment mainly being provided through official development assistance channels; and (b) lack of access to short-term and seasonal finance for production. Trade in commodities, including those for export, takes place in an informal and unmanaged setting. Vendors and purchasers have to use personal knowledge and contacts in order to make trades. The result is that the prices achieved are sub-optimal; substantial amounts of produce go un-harvested (specifically in the case of chestnuts, pomegranates, and medicinal herbs occurring naturally in forest areas), unsold, and wasted, while at the same time there is unmet demand. At the farm level, there has been very little development of contract farming or supply procedures. Those that do exist are not suitable for large-scale replication because they are based on personal knowledge and social pressure rather than on systematic and business-like principles. Marketing of tourism products is done in an amateurish and uncoordinated way, to the detriment of the buyers of the product as well as the purveyors. There is no system of booking, and few ways of determining the nature of the services being offered. There are few links between accommodation services and others, such as activities and restaurants. Moreover, the overall quality of the services offered is poor, despite the undoubted beauty of many localities.
Further, the lack of proper waste disposal systems means that the environment is substantially degraded by an abundance of litter and household rubbish.

Technological: With the inadequate levels of investment over the past decade, most Albanian agriculture and agribusiness continues to use outdated and inadequate technology, especially in the mountain areas. Much of the agricultural equipment in use dates from the era of the centrally planned economy and is thus both inefficient and costly to operate. There is also a lack of application of modern standards of hygiene, quality, reliability, and grading. All of these make it more difficult for enterprises to successfully enter export markets, especially those in the EU, as well as raising costs at the farm and enterprise levels.

Strategic Approach to Developing Agribusiness
Rural tourism is a convergence of rural and tourism development. Moreover, rural sustainable livelihoods are a convergence of rural development and sustainable development (Shen, Hughey and Simmons, 2008). The impact of investment in value chains involving a broad range of actors has proved successful in raising agricultural productivity and profitability and has also demonstrated a higher degree of sustainability. Rural tourism is to some extent marginalized as a substitute for rural development, and its pursuit will depend on agricultural activity and community involvement (Hall, Kirkpatrick and Morag, 2005). Agribusiness must accelerate the shift to a private-sector-led structural transformation, increasing opportunities for the poorest by enabling them to exploit non-agricultural opportunities and prioritising districts with higher-than-average poverty rates. Several observations support this approach:
• The demand for medium- and long-term rural finance is largely unsatisfied, despite the efforts of several development initiatives;
• The impact of the rural finance that has been provided has largely been positive in terms of employment and income creation, enterprise development, and improvements in export performance;
• So far, despite intentions to the contrary, the application of a value chain approach to rural investment has only occurred in a piecemeal fashion;
• There is a large unsatisfied demand for many rural products;
• There is still a large population of under-employed rural people in need of permanent and remunerative employment.
In order for companies to be able to compete, there should be an efficient market system. It is often argued that the development of sustainable rural tourism cannot be achieved without the full support of the rural community (Doohyun et al., 2014). However, market efficiency is not a normal situation for many sub-sectors of the agricultural industry in Albania. Its consequences are losses, sub-optimal prices, supply shortages and irregularities, and losses due to competition. The tourist offer is very fragmented (Hall, Mitchell and Roberts, 2005). Inefficiencies are prominent in the relationships between small farmers and processors/markets, between suppliers and local markets, and also between various producers and export markets. Although there have been some efforts to remove such inefficiencies, they remain a serious impediment. Similarly, enterprises need to use technology that is sufficiently modern and efficient to enable them to compete on the basis of cost and quality. The maintenance of natural resources and socio-cultural heritage is also considered an essential component of sustainable development (Holloway and Taylor, 2006).
Again, these are not features of the majority of agricultural enterprises, contributing to their difficulty in competing and making sustainable profits. The approach to technological improvements is in two directions. Firstly, in the case of financial development, technological innovations would be developed in concert with the development and application of financial services. Secondly, in the case of the individual enterprises supported, the application of appropriate new technology would be embedded within the investment. There would also be requirements for all agro-processing investments to be compliant with best-practice environmental standards, particularly those, such as dairy processors, which provide perishable food products directly to the public. Such environmental standards would also be applied to producers to avoid the possibility of unintended environmental costs being passed on to the associated community or public sphere. The sustainable tourism model is intended to reconcile the tensions between partners and between social, environmental, and economic factors, keeping the balance in the long term (Lane, 2005). The scenario in mountainous rural Albania is one of small and fragmented plots of land for many of those who have any, and of many rural people ill-qualified for agricultural entrepreneurship. Moreover, even amongst the people who do own and farm the land, there are many who are ill-suited to farming due to their own skill base, age, or gender. It follows that the most attractive means of genuine poverty alleviation is rural employment creation. Efforts by agricultural SMEs to develop contract farming have been observed in some localities. However, as noted, these are based on personal knowledge and are thus inherently limited in size, outreach, and scope of activities. Procedures for engagement between a company/market entity and farmers in a contract-farming system need to be based on principles of transparency, trust, and effective enforcement, together with in-built financing.

Transparency: The company would need to ensure that all of the information pertaining to financial and technical dealings with farmers is transparently available. The policy on company mark-ups and pricing for services and inputs would be negotiated and agreed prior to any agreement. The costs would be overtly disclosed by the company to the relevant community organisations of farmers participating in a scheme, and directly to the farmers themselves. This process would be maintained through an "open book system" whereby the company would specifically enable farmers to have full knowledge of the derivation of prices and charges pertaining to farmers' transactions.
Trust: Various other devices would be used to engender trust between the parties. Firstly, a contract would be negotiated between each company and the participating farmers, with farmers' organisations negotiating on behalf of their members. This would be varied from time to time to suit circumstances, but only with the full knowledge and participation of the farmers and their organisations. Secondly, an independent audit of procedures would be carried out periodically by a competent and reputable organisation. This would be done annually at the commencement of the agreement but could be done less frequently once trust is evidently established. It would be used to provide a guarantee that the stated procedures and calculations were accurate, and that the procedures themselves were robust and fair to all parties. Thirdly, the system would use the authority of the farmers' organisations and their leaders to ensure appropriate communication and discipline. Fourthly, companies would endeavour to use trusted and well-educated local people within their system of engagement with farmers and provision of services to them.

Enforcement: There would need to be agreed methods of enforcement to avoid delinquent behaviour by either party. For the most part, enforcement of farmer behaviour would be through a form of group responsibility, managed by farmers' organisations. While there would be a legally binding agreement between the parties, resorting to the formal legal system for redress would be too slow and inconvenient, as matters would require rapid resolution so as not to disrupt agricultural operations. For example, the farmers' organisation for each participating group of farmers would be responsible for ensuring compliance within its group on technical matters such as timely and assured crop delivery and crop quality management. Failure of one or more members of the group to undertake agreed actions would need to be covered by the group as a whole. Similarly, the farmers' organisation would be responsible for the financial compliance of the farmers under its control and would be obliged to make good any shortfalls. The behaviour of each company would also need to be governed by the possibility of sanctions. If a farmer or farmers' organisation believed that a company had not discharged its obligations, they would be able to seek redress through direct engagement with the company. If intermediation at this level did not succeed, then the matter could be referred to an independent arbitrator (honest broker). The arbitrator would need to be a competent yet "disinterested" organisation, such as a legal arbitration practice or a reputable NGO. The identity of the arbitrator would need to be agreed jointly between each company and the participating farmers' organisations, and its decisions would be binding on both parties. The arbitrator would be able to invoke sanctions on a company if a case against it were proven. The severity of the sanctions would be pre-determined by negotiation between the parties.

Financing: The system would have in-built financing, usually from a bank or MFI, within a "Tri-Partite Agreement". Such a system would operate as follows: (a) the Farmers and the Processor would negotiate a contract, which would include the specification of improved procedures requiring specific inputs, as well as pricing and standards parameters; (b) the Farmers and the Processor would jointly approach a bank with which they already have a relationship, seeking partial or complete financing for the required inputs;
(c) the bank would become a party to the agreement, with its role being to provide the agreed finance in a timely way to the farmers and to manage repayments; the Processor would pay the Farmers only through their account with the bank, and the bank would deduct repayments from this account on an agreed schedule; and (d) while the bank may require some collateral or guarantee initially, it is expected that it would ease this requirement after it becomes confident of the robustness of the system.

Conclusions
Through this empirical research, we have identified difficulties and opportunities for the development of this sector. The main limitations are infrastructural, financial, market-related, and technological. However, rural tourism has the potential to be developed. Coordination between farmers and the market will lead to the development of rural tourism. Strategies may focus on transparency, trust, effective enforcement, and financing. Another way is to invest in technology: technology will support financial services, and individual enterprises will thus be supported in applying appropriate new technology in their investments.
Algorithm Scheme to Simulate the Distortions during Gas Quenching in a Single-Piece Flow Technology

Low-pressure carburizing followed by high-pressure quenching in single-piece flow technology has shown good results in avoiding distortions. For better control of specimen quality in these processes, developing numerical simulations can be beneficial. However, there is no commercial software able to simulate distortion formation during gas quenching that considers the complex fluid flow field and a heat transfer coefficient that is a function of space and time. For this reason, this paper proposes an algorithm scheme that aims for more refined results. Based on the physical phenomena involved, the numerical scheme was divided into five modules: a diffusion module, fluid module, thermal module, phase transformation module, and mechanical module. In order to validate the simulation, the results were compared with the experimental data. The outcomes showed that the average difference between the numerical and experimental data for distortions was 1.7% for the outer diameter and 12% for the inner diameter of the steel element. The numerical simulation also showed the differences between deformations in the inner and outer diameters as they appear in the experimental data. Therefore, a numerical model capable of simulating distortions in steel elements during high-pressure gas quenching after low-pressure carburizing using a single-piece flow technology was obtained, in which the complex fluid flow and the variation of the heat transfer coefficient were considered.

Introduction
Avoiding distortions is cost-effective for the quality control of chemically treated steel elements. As an example, gears with large distortions can produce more noise and have shorter durability [1,2]. Furthermore, the correction of distortions is one of the most expensive processes [3–5]. For this reason, despite being inevitable [6,7], methods to avoid them as much as possible must be developed and studied. With a view to constructing a new device that gives optimized outcomes in minimizing distortions, this study developed a simulation around the Vacuum UniCase Master (UMC®) furnace designed by SECO/WARWICK (Świebodzin, Poland), which uses low-pressure carburizing (LPC) followed by high-pressure gas quenching (HPGQ) in a 4D chamber (4D Quench®), all conducted using the single-piece flow method. LPC and HPGQ, when compared with traditional methods, have shown better results [8–11]. Moreover, the single-piece flow model takes every single element through the exact same position and process conditions as the others, avoiding variations in the physical and mechanical properties between elements. In order to verify the results from the simulation, experimental data from parts that went through chemical and heat treatment in a furnace are needed. Reference elements made of 20MnCr5 steel were tested. The external and internal diameters were 90 and 30 mm, respectively, and their height was 10 mm. All the elements were subjected to low-pressure stream carburizing at 980 °C and then gas quenching at 860 °C. A rotary device for HPGQ (4D Quench®) designed by SECO/WARWICK was used for hardening. The elements were inserted into the chamber one by one, where the cooling nozzles were arranged all around, and the base with the cooling element was rotated, thus ensuring identical hardening conditions for each element.
Before and after the chemical heat treatment, geometrical measurements of the 10 reference elements were taken, and based on these measurements, the size of the hardening deformations of their surface flatness parameters was determined. After the treatment, the carbon concentration in the layer was measured with a glow discharge optical emission spectrometer (LECO GDS850A). The flatness of the top surface of the reference disc was measured on a bench equipped with a DEA Global Performance 5.7.5 coordinate measuring machine and a computer with PC-DEMIS CAD+++ software (2018 R2). For the measurements described, the maximum permissible error for length measurement (according to ISO 10360 [24]) was ±1.8 µm. Before the measurements were taken, a coordinate system was built with its midpoint in the center of the reference disc's insert hole at the height of the upper plane. The value of flatness was determined on the basis of data from 32 measurement points located on the circumference of two circles, inner and outer. The inner circle, with a diameter of ϕ40 mm, was located on the edge of the bore, while the outer circle, with a diameter of ϕ80 mm, was located on the edge of the external reference disc. The centers of the imaginary circles coincided with the center of the coordinate system. Figure 1 shows the position of the inner (C1) and outer (C2) circles in the upper plane of the reference disc and the arrangement of the measuring points during the measurement to determine flatness. The deformation results for each measurement point were subjected to a one-factor ANOVA, with statistical significance assumed at a level of α = 0.05.
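For illustration, the flatness evaluation over the 32 measuring points and the one-factor ANOVA can be sketched in Python as follows (the synthetic z-deviations stand in for the coordinate-machine data, and flatness is taken here simply as the range of deviations; this is an assumption for the sketch, not the exact ISO evaluation):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# z-deviations [mm] at the 32 measuring points for 10 reference elements
z = rng.normal(0.0, 0.005, size=(10, 32))

flatness = z.max(axis=1) - z.min(axis=1)  # per-element deviation range [mm]
print(f"mean flatness: {flatness.mean():.3f} +/- {flatness.std():.3f} mm")

# One-factor ANOVA across the 32 measuring points (alpha = 0.05)
f_stat, p_val = stats.f_oneway(*(z[:, i] for i in range(32)))
print(f"ANOVA across points: F = {f_stat:.2f}, p = {p_val:.3f}")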
Numerical Simulation
The simulation was created by implementing and modifying the solvers available in the Ansys package with the use of data generated using the JMatPro® program. Figure 2 shows a diagram of the numerical simulation algorithm. Basically, the simulation is composed of separate modules for the different physical phenomena, each of which passes its results as a boundary condition to the next module. The numerical scheme follows the order of the stages taking place in a real furnace during the treatment.

Input Data
The first and most important part of the numerical scheme is the generation of appropriate tables with input data. For the simulated steel material (20MnCr5), the JMatPro® program generated multidimensional matrices with the material, mechanical, and thermal properties as functions of temperature, cooling rate, and carbon concentration.

Computational Grids
Next, based on the geometry, computational grids were created using the Ansys CFX software (2019 R1). As mentioned earlier, meshes were created for the flow and solid domains. Due to the occurrence of high speeds and gas pressures in the cooling chamber, as well as the sensitivity of the results of the flow simulations to the quality of the mesh, hexagonal elements were created and optimized for correct calculations, especially within the boundary layer, in which the heat exchange between the fluid and solid domains has a significant impact on distortions.

Diffusion Module
Following the scheme, diffusion was calculated first on the mesh of the specimen. In these simulations, it was assumed that after the heating stage, the processed elements had an even temperature distribution throughout their volume. The carbon profile was calculated by the diffusion transport equation according to Fick's law, for which the boundary condition of carbon penetration is set on the outer walls in the form of a carbon flux density. In order to simulate the boost and diffusion carburizing technology, the density of the carbon stream is varied over time in accordance with the process parameters in the furnace. Fick's law gives

J = −D ∇∅, (1)

∂∅/∂t = D ∇²∅, (2)

where J is the flux, D is the diffusion coefficient, ∅ is the carbon concentration, and t is the time. Equation (1) is used as a boundary condition, and Equation (2) describes the carbon concentration distribution in the element.
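A minimal one-dimensional illustration of this diffusion module, using an explicit finite-difference scheme with a time-varying surface flux that emulates alternating boost and diffusion stages, could look as follows (the diffusivity, flux schedule, and grid values are assumed for illustration and are not the actual process data):

import numpy as np

D = 1.5e-11    # carbon diffusivity in austenite near 980 C [m^2/s], assumed
rho = 7800.0   # steel density [kg/m^3]
dx = 5e-6      # grid spacing [m]; 200 cells span 1 mm below the surface
dt = 0.4 * dx ** 2 / D      # stable explicit time step
c = np.full(200, 0.002)     # initial carbon mass fraction (0.20 % C core)

def surface_flux(t):
    # Assumed boost/diffusion schedule: 120 s of carbon flux every 600 s
    return 1e-5 if (t % 600.0) < 120.0 else 0.0  # [kg C / (m^2 s)]

t = 0.0
while t < 3600.0:  # one hour of boost/diffusion cycling
    lap = (c[:-2] - 2.0 * c[1:-1] + c[2:]) / dx ** 2
    c[1:-1] += dt * D * lap                       # Equation (2), interior nodes
    c[0] += dt * (D * (c[1] - c[0]) / dx ** 2     # Equation (1) as flux BC
                  + surface_flux(t) / (rho * dx))
    c[-1] = c[-2]                                 # zero-gradient core boundary
    t += dt

print(f"surface carbon: {c[0]:.3%} mass fraction")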
Flow Module
After diffusion, the fluid flow was calculated. The flow field is necessary for obtaining the correct distribution of heat fluxes on the surface during cooling in the chamber. In order to homogenize the distribution of the wall heat transfer coefficient over the surface, a rotary system was implemented in the construction of the cooling chamber. The rotation and high-speed cooling force non-stationary numerical calculations. The equations of energy and mass transport in the fluid domain for an ideal compressible gas were solved in the Ansys CFX software (2019 R1) solver with the finite volume method. Based on previous simulations [15] of high-pressure gas cooling in a quenching chamber, the mesh size and quality and the flow solver parameters were developed to map all relevant phenomena. In particular, the turbulence and jet impingement regions, which greatly influence the heat fluxes over the specimen surface, required careful boundary layer modeling. The distribution of the heat transfer coefficient generated in this simulation was used further for the thermal computations. The flow field is described by the momentum, continuity, and energy equations:

Momentum: ∂(ρU)/∂t + ∇·(ρU ⊗ U) = −∇p + ∇·τ

Continuity: ∂ρ/∂t + ∇·(ρU) = 0

Energy: ∂(ρh)/∂t + ∇·(ρUh) = ∇·(λ∇T) + τ:∇U + S_E

where ρ is the specific mass, h is the enthalpy, U is the velocity, T is the temperature, λ is the conduction coefficient, τ:∇U is the viscous dissipation, and S_E is the source term. The applied model equations for turbulence are those of the Shear Stress Transport (SST) k–ω model:

Turbulent kinetic energy, k: ∂(ρk)/∂t + ∇·(ρUk) = P_k − β*ρkω + ∇·[(μ + σ_k μ_t)∇k]

Dissipation of the turbulent kinetic energy, ω: ∂(ρω)/∂t + ∇·(ρUω) = α(ρ/μ_t)P_k − βρω² + ∇·[(μ + σ_ω μ_t)∇ω] + 2(1 − F₁)ρσ_ω2 (∇k·∇ω)/ω

with the turbulent viscosity μ_t = ρa₁k/max(a₁ω, SF₂) and the production term P_k = μ_t S². The blending functions F₁ and F₂ switch between the near-wall and free-stream formulations, and the fixed coefficients (β* = 0.09, a₁ = 0.31, and the blended sets of α, β, σ_k, and σ_ω) take their standard SST values. These default settings for solving the SST model of turbulence all have good general application; the user can change them in order to search for the best numerical solution for the fluid flow. Regarding the modification of the production term: the SST k–ω model tends to overproduce turbulence in the stagnation (pile-up) areas because of the high S values generated in these regions. Kato and Launder propose replacing the strain rate S in the turbulence production equation with the vorticity Ω, so that

P_k = μ_t S Ω,

where S = √(2S_ij S_ij) and Ω = √(2Ω_ij Ω_ij). The model therefore introduces a term that limits the over-production of the turbulence kinetic energy in areas of high pressure gradients [18].

Thermal Module
The flow module provided the boundary condition for the thermal calculations inside a workpiece in the form of the heat transfer coefficient on its surfaces. The thermal properties (density, heat conduction, and heat capacity) were interpolated into each finite element in every time iteration from the provided data matrices. The aforementioned properties were dependent on temperature, cooling rate, and carbon concentration. Heat transport calculations were conducted in an Ansys thermal solver, which uses the finite element method:

ρC_p ∂T/∂t = ∇·(k_t ∇T),

where T is the temperature, C_p the heat capacity, ρ the density, and k_t the thermal conductivity, all depending on the temperature.

Phase Transformation Module
After calculating the time-varying temperature field in the element during cooling, the cooling rate, and the concentration of carbon, it is possible, based on the read data matrix, to indicate the locations of the phase transformations over time. The data generated in the JMatPro® program are determined on the basis of solving the kinetic equations of phase transformations inside this software.
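The per-element, per-time-step lookup of properties from such data matrices can be sketched with a trilinear interpolation over temperature, cooling rate, and carbon concentration (the grids and table values below are invented placeholders, not real 20MnCr5 data from JMatPro):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

temps = np.linspace(20.0, 1000.0, 50)        # temperature grid [C]
rates = np.array([1.0, 10.0, 50.0, 100.0])   # cooling rate grid [C/s]
carbon = np.array([0.2, 0.4, 0.6, 0.8])      # carbon concentration grid [% C]

# Placeholder property matrix, e.g., linear expansion over (T, rate, % C):
table = np.random.rand(temps.size, rates.size, carbon.size)

lookup = RegularGridInterpolator((temps, rates, carbon), table,
                                 bounds_error=False, fill_value=None)

# Query for one finite element inside one time iteration:
value = lookup([(650.0, 35.0, 0.55)])[0]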
Mechanical Module
As a result of the previously calculated carbon and temperature distributions over time, the program was able to approximate the appropriate values from the data provided by the JMatPro® program. To calculate the quenching distortions, additional terms were added to the total deformation equation. Those terms are the thermal and phase transformation deformations, which are included through the linear expansion provided in the form of data matrices. The simulation was conducted in the Ansys mechanical interface, which uses the finite element method for structural calculations. The charts in Figure 3 present how the linear expansion changes with temperature, carbon concentration, and cooling rate for selected values. Thus, only after the previous stages of the program had been calculated could the quenching deformations finally be obtained.

Results and Discussion
Firstly, the distribution of carbon concentration was obtained. As can be seen in Figure 4, in accordance with diffusion theory, the maximum concentration and thickness of the carburized layer are found in the corners of the element. The numerical results were compared with experimental data by plotting the value of the carbon concentration in the middle of the top surface (Figure 5). Gas flow simulations were then performed to calculate the rapid cooling process. In this way, the influence of the construction of the quenching chamber and the nozzle collectors on the generated coolant flow field is reflected, which, in turn, translates into the temperature distribution (Figure 6) during the entire hardening process.
Analyzing the measurement results for all of the tested reference elements, the average difference in the surface flatness before the heat treatment processing was 0.014 ± 0.004 mm and, after heat treatment, processing was 0.035 ± 0.006 mm. Statistical analysis (ANOVA) for the experimental data performed did not show statistically significant differences between individual measuring points for the surface flatness parameter. Thereafter, in the mechanical module, based on the results of diffusion and cooling simulations and harnessed material data, the deformations were calculated (Figure 7) and compared with the experimental results (Figure 8), where-for the same points as in the experiment-the numerical values of deformations were taken into a 3D comparison, shown below (Figures 9-12). Coatings 2020, 10, x FOR PEER REVIEW 8 of 12 Gas flow simulations were then performed to calculate the rapid cooling process. In this way, the influence of the construction of the quenching chamber and nozzle collectors on the generated coolant flow field is reflected, which, in turn, translates into the temperature distribution ( Figure 6) of the entire hardening process. Thereafter, in the mechanical module, based on the results of diffusion and cooling simulations and harnessed material data, the deformations were calculated (Figure 7) and compared with the experimental results (Figure 8), where-for the same points as in the experiment-the numerical values of deformations were taken into a 3D comparison, shown below (Figures 9-12). Figure 8 shows the examples of the results obtained from experimental measurements of the surface flatness of reference elements. As can be seen, the biggest difference in measurements occurred for measuring points close to the internal diameter of the reference elements. Analyzing the measurement results for all of the tested reference elements, the average difference in the surface flatness before the heat treatment processing was 0.014 ± 0.004 mm and, after heat treatment, processing was 0.035 ± 0.006 mm. Statistical analysis (ANOVA) for the experimental data performed did not show statistically significant differences between individual measuring points for the surface flatness parameter. Figure 1) for reference elements (blue color-before heat treatment processing, red color-after heat treatment processing). Figure 9 shows the average values of the surface flatness calculated from the measurements of 10 reference elements, and Figure 10, the values obtained in the numerical simulation. For the final experimental average, the surface flatness (the difference before and after heat treatment) was 0.021 ± 0.002 mm. The difference between the obtained results in terms of surface flatness of the outer and inner circles was the result of sample preparation for testing and, primarily, the quenching process. In the opinion of the authors, quenching from high temperatures results in the immediate occurrence of a martensitic transition and a "freezing" of the geometrical dimensions on the outer side of the element [10,14,15]. Consequently, the "deformation front" is pushed deep inside the element, and the inevitable changes in volume-related to transition of the material from austenite to martensitetake place in the inner parts of the element. This causes distortions in the surface flatness in the reference elements. The presented charts (Figures 11 and 12) show a fine correlation between experimental results and those obtained in numerical simulation. 
Figure 9 shows the average surface-flatness values calculated from the measurements of 10 reference elements, and Figure 10 the values obtained in the numerical simulation. The final experimental average change in surface flatness (the difference before and after heat treatment) was 0.021 ± 0.002 mm. The difference between the results obtained for the outer and inner circles stems from the sample preparation for testing and, primarily, from the quenching process. In the opinion of the authors, quenching from high temperature causes an immediate martensitic transition and a "freezing" of the geometrical dimensions on the outer side of the element [10,14,15]. Consequently, the "deformation front" is pushed deep inside the element, and the inevitable volume changes related to the transition of the material from austenite to martensite take place in the inner parts of the element. This causes the surface-flatness distortions observed in the reference elements. The charts in Figures 11 and 12 show a good correlation between the experimental results and those obtained in the numerical simulation. The numerical simulation also reproduces the differences between the deformations at the inner and outer diameters seen in the experimental data; the average differences between the numerical and experimental results were 1.7% for the outer circle and 12% for the inner circle. These differences are all the result of the way the elements are cooled and of the construction of the chamber of the HPGQ device (4D Quench®).
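As a rough illustration of the statistical treatment reported above, the sketch below computes the mean ± SD flatness change and a one-way ANOVA across measuring points. The arrays are synthetic placeholders, not the study's measurements.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Synthetic flatness readings [mm] for 10 reference elements at 4
# measuring points, before and after heat treatment.
before = rng.normal(0.014, 0.004, size=(10, 4))
after = rng.normal(0.035, 0.006, size=(10, 4))
change = after - before

print(f"mean change = {change.mean():.3f} +/- {change.std(ddof=1):.3f} mm")

# One-way ANOVA: does the flatness change differ between measuring points?
f_stat, p_value = f_oneway(*(change[:, j] for j in range(change.shape[1])))
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 would match the paper's finding of no significant differences
# between individual measuring points.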
Conclusions

Heat treatment is a system of combined, complex processes. Creating a reliable numerical model is therefore very difficult and requires some simplification of reality. Nevertheless, the numerical computation scheme created in this study makes it possible to conduct computer simulations of the heat-treatment processes taking place inside the furnace. The results collected for the reference element show a good correlation with the experimental data and will allow much faster optimization and implementation of new solutions in the device. Furthermore, the results obtained, assuming that experimental verification sufficiently confirms their correctness, open the way to a much deeper and more extensive analysis. Further work will be conducted to simulate elements with more complex geometry, and the simulation results will be verified against experiment to confirm their correctness for different element shapes and the generated flow fields.
2020-07-23T09:01:34.642Z
2020-07-19T00:00:00.000
{ "year": 2020, "sha1": "3af271d207ed97d644110cfe83b305b95e7d34e4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-6412/10/7/694/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5e8f86dda7e80f02d60a221a4eefa67a29375f43", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
3449489
pes2o/s2orc
v3-fos-license
1H NMR Investigation of the Distal Hydrogen Bonding Network and Ligand Tilt in the Cyanomet Complex of Oxygen-avid Ascaris suum Hemoglobin*

The O2-avid hemoglobin from the parasitic nematode Ascaris suum exhibits one of the slowest known O2 off-rates. Solution 1H NMR has been used to investigate the electronic and molecular structural properties of the active site of the cyanomet derivative of the recombinant first domain of this protein. Assignment of the heme, the axial His, and the majority of the residues in contact with the heme reveals a molecular structure that is the same as reported in the A. suum HbO2 crystal structure (Yang, J., Kloek, A., Goldberg, D. E., and Mathews, F. S. (1995) Proc. Natl. Acad. Sci. U. S. A. 92, 4224-4228), with the exception that the heme in solution is rotated by 180° about the α,γ-meso axis relative to that in the crystal. The observed dipolar shifts, together with the crystal coordinates of HbO2, provide the orientation of the magnetic axes in the molecular framework. The major magnetic axis, which correlates with the Fe-CN vector, is oriented ≈30° away from the heme normal, indicating significant steric tilt due to interaction with Tyr30(B10). The three side-chain labile protons of the distal residues Tyr30(B10) and Gln64(E7) were identified, and their relaxation, dipolar shifts, and nuclear Overhauser effects to adjacent residues were used to place them in the distal pocket. It is shown that these two distal residues exhibit the same orientations, ideal for H bonding to the ligand and to each other, as found in the A. suum HbO2 crystal. It is concluded that the ligated cyanide participates in the same distal H-bonding network as ligated O2. The combination of the strong steric tilt of the bound cyanide and the slow ring reorientation of the Tyr30(B10) side chain supports a crowded and constrained distal pocket.

The O2-binding globins, myoglobin (Mb)1 and hemoglobin (Hb), despite highly varied sequences throughout phylogeny, possess a highly conserved folding topology of 7-8 helices (A-H), with the heme wedged between the E and F helices and ligated by one, His F8 (proximal), of only two completely conserved residues (the other is Phe CD1) (1-3). Despite this strong structural homology, the O2 ligation rates and O2 affinities vary over a remarkably wide range (by ≈10^5), depending on the exact nature of several distal residues at the key positions B10, E7, and E10 (4-8). The most important distal interaction for stabilizing bound O2 is hydrogen bonding to the ligand, for which the donor is generally His E7 (1, 7, 9, 10) and, in a few cases, Gln E7 (2). In several invertebrates, such as Aplysia and Dolabella Mbs, which possess a Val E7, the distal H bond to the ligand is provided by an Arg at position E10 (11-13). A particularly noteworthy class of globins is that of the parasitic nematodes, which possess, in addition to an H-bond donor at position E7, a Tyr at position B10 that is also capable of H bonding to the ligand (4, 5, 8, 14-17). In the case of the Hb from Ascaris suum, the extraordinarily high O2 affinity and extremely low O2 off-rates have been attributed to distal H-bonding interactions of the Tyr30(B10) and Gln64(E7) side chains with bound O2. Stabilizing H-bond interactions of Gln64(E7) and Tyr30(B10) with ligated O2 are supported by the observation of enhanced O2 off-rates upon mutating either residue (5, 15). The positions of these two key residues are clearly defined in the crystal structure of A. suum HbO2, in which the two residues are appropriately poised to serve as H-bond donors to bound O2, with Gln64(E7) additionally providing an H bond to the Tyr30(B10) side-chain Oη that stabilizes the optimal dispositions of these two residues (8).
Resonance Raman spectroscopy has confirmed the role of Tyr30(B10) as an H-bond donor (18, 19), and flash photolysis experiments (19) have indicated that A. suum Hb possesses a very compact and constrained distal pocket when compared with other globins. Resonance Raman spectroscopy has also shown that A. suum HbCO exhibits the lowest reported CO stretching frequency (19). The strong modulation of the CO stretching frequency by the globin distal environment has been discussed in the context of both steric tilt/bending of the Fe-CO unit from the heme normal and pocket dielectric effects (20-23), and the currently accepted interpretive basis is that the latter effect is the major determinant of ν_CO (7, 24). Nevertheless, crystal structures of myoglobins invariably find the carbonyl oxygen placed off-axis from the heme normal, indicating that the Fe-CO unit is bent/tilted from the heme normal (7, 25-27). The cyanomet derivatives of globins can serve as valuable structural (but not functional) (28, 29) models for both O2 and CO binding, in that Fe(III)-CN, like Fe(II)-O2, is polar and a good H-bond acceptor (30) and, like Fe(II)-CO, prefers to bind normal to the heme in the absence of distal steric interactions (31). In the one case where the crystal structures of both the carbonyl and cyanomet globins have been reported, there is a good correlation in the degree and direction of the off-axis placement of the terminal atoms (26). Theoretical considerations have indicated that distal ligand tilt could be modulated by tilt of the proximal His (32). In the crystallographically and NMR-characterized globins to date, the axial His is essentially normal to the heme. The crystal structures of A. suum HbO2, on the other hand, show that the axial His imidazole plane is tilted some ≈8° in the direction of pyrrole C with respect to the heme plane (8). Thus a determination of the orientation of the Fe(III)-CN or Fe(II)-CO unit relative to the heme in A. suum Hb would indicate whether the axial His could contribute to distal ligand tilt and provide some insight as to whether there is likely to be a large Fe-CO tilt that could contribute to the reduced value of ν_CO. Solution 1H NMR of the paramagnetic cyanomet Hb derivatives can provide significant structural detail on the distal pocket in relation to both stabilizing H bonding and destabilizing steric interactions with the bound ligand (30, 33, 34). On the one hand, the dipolar shifts and moderate relaxation imparted to distal residues and their labile protons facilitate their detection, identification, and detailed placement relative to the bound ligand (35, 36). On the other hand, the sizable dipolar shifts of active-site residues allow the quantitative determination of the orientation of the paramagnetic susceptibility tensor, for which the major magnetic axis can be correlated with the degree of Fe-CN tilt from the heme normal (34). There is generally very good agreement in the magnitude and direction of the Fe-CN tilt observed in crystal structures and the orientation of the major magnetic axes determined by solution 1H NMR (26, 27, 38-41).
Lastly, the expanded chemical shift scale for heme-pocket residues, which results from the hyperfine interaction, increases the prospect of measuring rapid dynamic processes, such as ring reorientation, that can serve as probes of the constraints in the heme pocket (42). We report herein on the solution 1H NMR characterization of the cyanomet complex of the D1 domain of A. suum Hb, metHbCN, which demonstrates that the distal H-bonding network is essentially identical to that in HbO2, that the Fe-CN unit appears tilted strongly away from the heme normal in the direction of the observed terminal oxygen in HbO2, and that the distal pocket is sufficiently crowded to strongly tilt the Fe-CN vector and to impede the reorientation of the Tyr30(B10) ring.

EXPERIMENTAL PROCEDURES

Protein Preparation-Native protein samples were prepared as described previously (15, 43). The cyanide complexes were prepared by adding KCN to the protein solution in an approximately 10:1 molar ratio, buffered with 50 mM phosphate, 200 mM NaCl at pH 7.2. The 2H2O sample was prepared by repeatedly washing the protein with 2H2O in the same buffer with a Centricon (Amicon Inc.), and pH was read directly from the pH meter without isotope-effect correction; the final protein concentration was ≈2 mM.

NMR Spectra-All 1H NMR spectra were recorded on a GE Ω 500 spectrometer operating at 500 MHz. Chemical shift values were referenced to 2,2-dimethyl-2-silapentane-5-sulfonate (DSS) through the residual water signal. Reference spectra were collected with 1H2O saturation. Steady-state NOE and inversion-recovery spectra were collected at a repetition rate of 3 s^-1 (34). The residual water signal was removed from the free induction decay by convolution difference. The nonselective spin-lattice paramagnetic relaxation times for the resolved peaks were derived from two-parameter exponential least-squares fits using only short (≤50 ms) delays. Estimates of the distance to the iron for proton i, R_Fe-i, were obtained from the nonselective, paramagnetically dominated T1 values using the relation

R_Fe-i = R_Fe-j (T_1i / T_1j)^(1/6),  (Eq. 1)

where the reference T_1j = 150 ms (R_Fe-j = 6.1 Å) for a heme methyl, or T_1j = 30 ms (R_Fe-j = 5.1 Å) for the His(F8) NδH, provided upper and lower limits, respectively (44). Interproton distances, r_ij, were estimated from steady-state NOEs, η_ij, to protons with paramagnetically dominated (nonselective) T1 values via

η_ij = σ_ij T_1i,  (Eq. 2)
σ_ij ∝ r_ij^-6 τ_c.  (Eq. 3)

NOESY (45) and TOCSY (46-48) spectra were collected over a temperature range of 20-35 °C in 1H2O. Two different spectral windows and mixing times were used for NOESY: 25.0 kHz using 2048 complex points at 3 scans/s with a mixing time of 35 ms to optimize the observation of the hyperfine-shifted signals, and 10.0 kHz at 1 scan/s with a mixing time of 100 ms to cover the diamagnetic window at optimal digital resolution. The clean TOCSY spectra were collected over 12.0 kHz at 2 scans/s with a spin-locking time of 35 ms using the MLEV-17 mixing scheme (47). All two-dimensional data sets were processed on a Silicon Graphics workstation using either the software package FELIX from Biosym/MSI (San Diego, CA) or AZARA, generously provided by Wayne Boucher (Department of Biochemistry, University of Cambridge). To increase resolution, data sets were apodized with a sine-bell-squared function shifted by 20-40° in both dimensions and zero-filled once in the t1 dimension. The spectral assignments were greatly facilitated by the ANSIG package (49).
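Eq. 1 is simple enough to check numerically. The sketch below reproduces the distance estimates quoted under "Results" (e.g., T1 ≈ 25 ms for the Gln64(E7) NεH gives R_Fe ≈ 4.5 Å against the 150 ms / 6.1 Å heme-methyl reference); it is a minimal illustration, not the authors' processing code.

def r_fe_from_t1(t1_ms, t1_ref_ms=150.0, r_ref=6.1):
    """Eq. 1: metal-proton distance from paramagnetic T1 values,
    using the r^6 dependence of dipolar relaxation."""
    return r_ref * (t1_ms / t1_ref_ms) ** (1.0 / 6.0)

# Heme-methyl reference (150 ms <-> 6.1 A) gives the upper limit:
print(r_fe_from_t1(25.0))   # Gln64(E7) N-epsilon-H: ~4.5 A
print(r_fe_from_t1(10.0))   # Tyr30(B10) OH:         ~3.9 A

# His(F8) N-delta-H reference (30 ms <-> 5.1 A) gives the lower limit:
print(r_fe_from_t1(25.0, t1_ref_ms=30.0, r_ref=5.1))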
Magnetic Axes Determination-The orientation of the paramagnetic susceptibility tensor was obtained by minimizing the error function

F/n = (1/n) Σ_i [δ_dip(obs)_i − δ_dip(calc)_i]^2,  (Eq. 4)

with δ_dip(calc) given by

δ_dip(calc) = (1/(12πN)) [Δχ_ax (3cos²θ − 1) + (3/2) Δχ_rh sin²θ cos 2Ω] R^-3,  (Eq. 5)

where Δχ_ax = χ_zz − ½(χ_xx + χ_yy) and Δχ_rh = χ_xx − χ_yy are the axial and rhombic anisotropies of the diagonal paramagnetic susceptibility tensor, χ, and R, θ, Ω are the coordinates of the proton in the magnetic axis system. The tilt of the major magnetic axis, z, from the heme normal is given by β, the projection of this tilt on the heme plane relative to the x′ axis is given by α, and the location of the rhombic axes projected on the heme plane is approximated by κ = α + γ, as labeled in Fig. 1C. δ_dip(obs) is given by

δ_dip(obs) = δ_DSS(obs) − δ_DSS(dia),

where δ_DSS(obs) is the observed chemical shift referenced to DSS and δ_DSS(dia) is the isostructural diamagnetic shift, which in this case was calculated using

δ_DSS(dia) = δ_tetr + δ_sec + δ_rc,

where δ_tetr is the shift in an unfolded tetrapeptide (54), δ_sec is the shift of an amino acid proton resulting from the protein secondary structure (55), and δ_rc is the heme-induced ring-current shift (56). Minimization of the error function F/n in Eq. 4 was performed over the five parameters Δχ_ax, Δχ_rh, α, β, and γ, using the HbO2 crystal coordinates. For the iron-ligated porphyrin and axial His, the hyperfine shift is obtained via δ_hf = δ_DSS(obs) − δ_DSS(dia), which yields the contact shift via δ_con = δ_hf − δ_dip.

Molecular Modeling-Protons were added to the crystal coordinates of A. suum HbO2 using the program INSIGHT II (MSI). This provided unique coordinates for all protons of interest except the Tyr30(B10) hydroxyl proton; its position was therefore determined from the 1H NMR spectral parameters.

RESULTS

A schematic representation of selected heme-cavity proximal (squares) and distal (circles) residues and their disposition relative to the heme is shown in Fig. 1. The region downfield of ≈9 ppm resolves four three-proton (methyl) and four single-proton signals, as well as one two-proton peak at 11 ppm that overlaps a methyl peak. Ten single-proton peaks and four methyl peaks can be resolved at some temperature in the upfield portion of the spectra. Comparison between 1H2O and 2H2O reveals that three of the most strongly relaxed resolved protons are labile. Although the present 1H NMR data on metHbCN confirm a highly conserved arrangement, relative to each other, of both proximal and distal residues (see below), the data also demonstrate that the heme is oriented differently in the cavity from that originally reported in the crystal structure (8). Thus assignments for the heme and axial His are pursued first, followed by the key residues (i.e., Phe44(CD1) and Met103(FG5)) that invariably place strongly relaxed protons in resolved spectral windows and hence can be unambiguously assigned based solely on the conserved globin fold. These two residues are then used to uniquely orient the heme in the cavity. The remainder of the assignments are then presented on the basis of characteristic TOCSY-detected spin systems, with heme contacts expected on the basis of the heme orientation deduced above. Because the protocol for heme and resolved-residue assignments in similar cyanomet globins has been described in detail (13), NMR data are illustrated only for the key heme-Phe44(CD1) contact that determines the heme orientation and for the distal H-bond donors, Tyr30(B10) and Gln64(E7).

Heme Assignments-The heme substituents could be unambiguously assigned as described in detail elsewhere (13).
Two TOCSY-detected vinyl groups and one propionate group exhibit NOESY cross-peaks to low-field resolved methyls that pair the 1-CH3 with the 2-vinyl and the 3-CH3 with the 4-vinyl; NOESY cross-peaks between two of the heme methyls (Fig. 3D) assign the 1-CH3 and 8-CH3, and a NOESY cross-peak between the remaining methyls and the one detected propionate uniquely assigns the pyrrole substituents. Common NOESY cross-peaks for the two substituents flanking a meso position, together with large low-field intercepts in a Curie plot (not shown), identify the meso-Hs (34). The heme assignments, chemical shifts, T1 values, and slopes in a Curie plot are listed in Table I.

Assignment of Key Resolved Resonances-The 19.0 ppm labile proton, when saturated (not shown), exhibits NOEs to a labile proton at 12 ppm, which is part of a nonlabile-proton NMR spin system diagnostic of the axial His97(F8) CβH2-CαH-NH fragment. A very broad and strongly relaxed (T1 ≈ 3 ms) upfield, nonlabile proton peak must arise from the axial His ring (34), and an NOE to the CβH upon saturating this peak (not shown) establishes that it is the His97(F8) CδH.

[Fig. 1 legend: The heme is labeled with the Fischer notation, and the substituents are labeled M (methyl), V (vinyl), and P (propionate). The expected (on the basis of the crystal structure with the rotated heme) and observed inter-residue and residue-heme dipolar contacts are shown in B by double-sided arrows. The iron-centered, crystal-structure-based coordinate system (x′, y′, z′) is shown in C, as is the magnetic coordinate system (x, y, z) in which χ is diagonal. The two systems are related by the Euler rotation Γ(α, β, γ), [x, y, z] = [x′, y′, z′] Γ(α, β, γ), where β is the tilt of the major magnetic axis from the heme normal, α is the angle between the projection of the tilt on the heme plane and the x′ axis, and κ ≈ α + γ defines the projection of the rhombic axes on the x′, y′ plane. φ is the orientation of the proximal His imidazole ring plane relative to the N_A-Fe-N_C vector (x′ axis). Note that the convention for x′, y′, z′ differs from that used previously (34, 39-41, 44, 50) by a 45° rotation in the heme plane and by referencing α to the +x′ rather than the −x′ axis.]

[Fig. 2 legend: The assigned resolved signals are labeled by the Fischer notation for the heme and by the one-letter code for the residue and sequence position. Also shown are steady-state NOE difference spectra upon saturating the low-field Tyr30(B10) OH signal (C) and upon saturating the upfield Gln(E7) NεH2 (D); the intensities of the saturated signals in traces C and D are identical. The detected NOEs are assigned as presented in the text. An asterisk indicates off-resonance saturation.]

The TOCSY-detected CαH-N_pH backbone and several CβHs of the members of the F-helix, Asp94(F6) through Arg98(F10), could be located via the standard expected helical N_i-N_{i+1} and α_i-N_{i+1} contacts (not shown) (54). Although these backbone assignments could not be extended to F4 because of spectral congestion, the expected strong NOE between the His97(F8) NδH and Leu92(F4) locates the latter residue's CαH. A resolved (15 ppm), strongly relaxed, nonlabile single-proton peak exhibits the TOCSY connectivity (Fig. 3, A and B) and the variable-temperature slope and intercepts of a rapidly reorienting aromatic ring. The relaxation properties (T1 ≈ 20 ms) alone dictate that it must arise from the completely conserved Phe44(CD1) (42).
TOCSY connections (not shown) involving two sets of upfield hyperfine-shifted residues identify AMXPT and AM(X3)(Y3) spin systems, which moreover exhibit several NOESY cross-peaks to each other (shown schematically in Fig. 1). The spin topology, their significant dipolar shifts, and their inter-residue contacts uniquely identify these two residues as Met103(FG5) and Val101(FG3). An upfield, resolved, strongly relaxed methyl peak with no TOCSY connectivities exhibits strong NOESY cross-peaks to the terminus of the AMXPT spin system, locating the CεH3, which, together with the NOESY cross-peak to His97(F8) (shown schematically in Fig. 1), confirms the assignment of Met103(FG5). A Tyr ring with contacts to Val101(FG3) identifies Tyr43(C7).

Orientation of the Heme-The Phe44(CD1) ring exhibits NOESY cross-peaks to both the 1-CH3 and the 8-CH3 of the heme (Fig. 3, C and D) and not to the 5-CH3 and 4-vinyl as predicted by the crystal structure (Fig. 1A), clearly establishing that the heme is oriented differently from that in the crystal, by a 180° rotation about the α,γ-meso axis (Fig. 1B). This conclusion is further confirmed by detection of the characteristic dipolar contact between Met103(FG5) and the 1-CH3 (rather than the 4-vinyl predicted by the crystal structure), as shown schematically in Fig. 1, and between Val101(FG3) and the 8-CH3 (rather than the 5-CH3 predicted by the crystal structure). Hence all subsequent assignments are determined by using the crystal structure as a guide but with the heme rotated by 180° about the α,γ-axis, as shown in Fig. 1B.

Distal Pocket Residues-The extreme low-field, strongly relaxed (T1 ≈ 11 ms) labile-proton peak at 22 ppm does not participate in the NOESY map, but when it is saturated (Fig. 2C) it exhibits a strong NOE to a two-proton signal under the 8-CH3. This signal under the 8-CH3 in turn exhibits a TOCSY cross-peak to 9.0 ppm. The strong relaxation of the labile proton and the intercepts in Curie plots for the TOCSY-detected fragment uniquely identify the complete ring of Tyr30(B10). A strong NOE to the Phe44(CD1) CζH, together with the latter's T1 of ≈20 ms, yields, with Eq. 2, an estimate of 2.8 ± 0.3 Å for the Tyr30(B10) OH to Phe44(CD1) CζH distance. Similarly, the relaxation properties (T1 ≈ 24 ms) of an upfield-shifted labile proton dictate that it must arise from the only other residue that can place labile protons so close to the iron, the terminal Nε2H of Gln64(E7).3 Saturation of this peak results in a very strong (≈15%) NOE to a proton near 4 ppm; this large NOE identifies the latter as the geminal partner of the saturated peak. The strong NOE from the Tyr30(B10) OH to the 4 ppm Gln NεH confirms the assignment and argues for assignment of the 4 and −6 ppm peaks to the Nε1H and Nε2H, respectively (see below). The ratio of the steady-state NOEs to the 4 ppm Gln64(E7) Nε1H peak upon saturating the Gln64(E7) Nε2H and the Tyr30(B10) OH (≈0.5), together with the fixed ≈1.9 Å distance between Nε1H and Nε2H, leads to an estimate of 2.0 ± 0.2 Å for the Tyr30(B10) OH to Gln64(E7) Nε1H distance.
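The r^-6 scaling behind these distance estimates is easy to verify numerically. The sketch below reproduces the Tyr30(B10) OH to Gln64(E7) Nε1H estimate from the quoted NOE ratio and the fixed geminal Nε1H-Nε2H distance; it is illustrative only.

def distance_from_noe_ratio(noe_ratio, r_ref):
    """Steady-state NOEs to a common target proton scale as r^-6
    (Eqs. 2 and 3), so the ratio of NOEs produced by saturating two
    different protons gives the unknown distance from a known one."""
    return r_ref * noe_ratio ** (-1.0 / 6.0)

# NOE(Tyr30 OH -> Gln64 Ne1H) / NOE(Gln64 Ne2H -> Ne1H) ~ 0.5,
# with the geminal Ne1H-Ne2H distance fixed at ~1.9 A:
print(distance_from_noe_ratio(0.5, 1.9))  # ~2.1 A, i.e. 2.0 +/- 0.2 A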
Additional assignments (data not shown) include two upfield resolved methyls (one strongly relaxed) that are part of a five-spin system diagnostic of an Ile, with a strongly relaxed CαH that exhibits the NOESY cross-peaks to the 5-CH3 and 4-vinyl (as predicted by the 180°-rotated heme) expected for Ile68(E11); this residue exhibits the expected NOESY cross-peaks to the Tyr30(B10) ring (shown schematically in Fig. 1). Common NOESY contacts to the 4-vinyl and 5-CH3 for a TOCSY-detected Ala uniquely identify Ala71(E14). TOCSY spectra detect three Phe rings with weak hyperfine shifts. They are assigned to Phe34(B14), Phe60(E3), and Phe140(H15) based on their predicted dipolar contacts to Phe44(CD1) and Tyr30(B10), only to Phe44(CD1), and to the 3-CH3, respectively. Two low-field-shifted TOCSY-detected fragments with slopes and intercepts indicative of aromatic protons, together with NOESY cross-peaks to Met103(FG5), identify the Cδ1H-Cε1H and Cζ2H-Cη2H portions of Trp108(G5); the remainder of the ring protons could not be located because of likely strong relaxation, near degeneracy with other protons, and position under the residual solvent peak. The observed inter-residue and residue-heme dipolar contacts are summarized in Fig. 1B. Spectral congestion precluded further assignments. The assignments, chemical shifts, and T1 values for the residues described in Fig. 1 are listed in Table II.

3 Denotation of the NεHs of Gln64 was based on the x-ray structure (8).

[Fig. 3, C and D legend: portions of the NOESY spectrum illustrating the dipolar contact between the two heme methyls that uniquely assigns the 1-CH3 and 8-CH3, and the dipolar contact of the Phe44(CD1) ring with both the 1-CH3 and 8-CH3 of the heme, which uniquely establishes that the heme orientation in the pocket is rotated by 180° about the α,γ-meso axis relative to that found in the HbO2 crystal structure.]

Magnetic Axes-The orientation of the magnetic axes2 was found to be essentially independent of the selection of input data and of whether the anisotropies were also determined or held constant at the values determined for sperm whale metMbCN (50, 51). The resulting orientation of χ is defined by β = 29.5 ± 1.0° (tilt from the heme normal), α = 159 ± 10° (direction of tilt projected on the heme plane), and κ = α + γ = 59 ± 10° (rhombic axes projected on the heme plane). The residual error function, F/n, is small in all cases (≈0.05 ppm²), and the resulting correlation between observed and calculated dipolar shifts is very good, as illustrated in Fig. 4. The magnitude of the tilt of the major magnetic axis, z, from the heme normal (z′ axis), β ≈ 30°, is nearly twice that observed previously in cyanomet globins (34, 50, 51). The large tilt of the major magnetic axis from the heme normal indicated by the complete magnetic-axes determination also reveals itself clearly in the analysis of the dipolar shift pattern of individual residues. Thus the nodal surface of the axial dipolar shift can be mapped by considering the magnitude and direction of the dipolar shifts of residues near the nodal surface. The plots in Fig. 5 for the protons whose shift direction/magnitude reflect primarily the axial geometric-factor node are shown as a function of the tilt angle β. The agreement with the experimental shifts is acceptable within 30 ± 10°.

The Orientation of Tyr30(B10) and Gln64(E7)-Predicted δ_dip values for the Tyr30(B10) ring and the Gln64(E7) NεHs, based on the HbO2 crystal coordinates (8), are included in Fig. 4 as filled circles and filled squares, respectively.
It is observed that the shifts of the uniquely placed protons of the Gln(E7) Nε2H are well predicted, indicating that this residue in metHbCN maintains the same orientation relative to the iron as in HbO2. The R_Fe = 4.5 ± 0.4 Å estimated from the T1 = 25 ms of the Gln64(E7) Nε2H is consistent with the crystallographic R_Fe = 4.1 Å. Moreover, the predicted dipolar shifts for the Gln64(E7) nonlabile side-chain protons are small and consistent with the likely appearance of these protons in the poorly resolved and very crowded aliphatic envelope. In the case of Tyr30(B10), the dipolar shifts are very well predicted for the ring, which is consistent with conserved χ1, χ2 angles with respect to HbO2. The placement of the proton on the hydroxyl oxygen crystal coordinates, however, unlike that of the NεHs of Gln64(E7), is not unique. Hence the δ_dip(calc) (Fig. 6A), the distance to the Phe44(CD1) CζH (Fig. 6B), the distance to the Gln64(E7) Nε1H (Fig. 6C), and the distance to the iron, R_Fe (T1 = 10 ms, R_Fe = 4.0 ± 0.4 Å) (Fig. 6D), for the Tyr30(B10) OH were calculated as a function of the dihedral angle between the H-O-C and ring planes, χ3, as illustrated in Fig. 6; the observed values are shown by shaded regions. Each of the four observables is optimally predicted for an angle of ≈20° in Fig. 6, which leads us to conclude that we have uniquely spatially located the labile proton of the distal Tyr30(B10).

Mobility of the Tyr30(B10) Ring-The Tyr30(B10) CεH resonance overlaps, at least in part, the 8-CH3 peak (Fig. 2A) over the accessible temperature range but appears to broaden selectively as the temperature is lowered. The line broadening can nevertheless be quantitated by observing the steady-state NOE for the averaged CεH peak upon saturating the Tyr30(B10) OH, as shown in Fig. 7. A plot of ln(linewidth) versus reciprocal temperature shows a selective increase in slope at low temperature for the CεH peak, which yields an estimated exchange contribution of 20 Hz at 30 °C. This value, together with the δ_dip(calc) for the individual Tyr30(B10) CεHs, results in an estimated shift difference of 4.4 ppm, which at 500 MHz gives a reorientation rate of 5 × 10^6 s^-1 using the standard equation for chemical exchange in the fast-exchange limit (57).

Heme Pocket Molecular and Electronic Structure-The pattern of the heme methyl contact shifts has been proposed to largely reflect the orientation of the axial His imidazole ring relative to a heme pyrrole-Fe-pyrrole axis (12, 58-61). For an axial His oriented along such an N-Fe-N axis, large contact shifts are predicted and observed primarily for the pyrroles normal to the His plane. Thus the contact shift patterns among different globins are modulated separately by the orientation of the His relative to the heme and by the orientation of the heme about the α,γ-meso axis. In cases where the axial His is oriented close to the meso-Fe-meso vectors (11, 62), the four pyrroles exhibit comparable contact shifts (13, 63). A. suum metHbCN, like the mammalian globins, exhibits large contact shifts for the 1-CH3 and 5-CH3, arguing for orientation of the axial His along the N_B-Fe-N_D vector of the heme if the heme and the axial His97(F8) were oriented similarly. However, the heme methyl contact shift pattern in A. suum metHbCN is achieved by completely different means than in sperm whale metMbCN.
Thus, as shown in Fig. 1, the axial His ring (as viewed from the proximal side) is rotated by ≈60° (φ = −65°) relative to that in sperm whale Mb (φ = −6°), which should result in larger 3-CH3, 8-CH3 than 1-CH3, 5-CH3 contact shifts if the heme were seated in the pocket as in sperm whale Mb. However, the rotation of the heme by 180° about the α,γ-meso axis, when compared with sperm whale Mb, reverses this pattern and leads to larger 1-CH3, 5-CH3 than 3-CH3, 8-CH3 contact shifts. Thus the fortuitous similarity of the heme contact shift pattern in A. suum metHbCN and the mammalian globins is due to the offsetting influences of the differences in the orientations of both the axial His and the heme. The determined heme methyl contact shifts, together with the x-ray-determined axial His orientation, thus independently confirm that the heme in A. suum metHbCN in solution is rotated by 180° about the α,γ-meso axis relative to that reported in the HbO2 crystal structure. These results also suggest that caution should be exercised in assigning a heme orientation based on heme methyl contact shifts in a cyanomet globin unless the orientation of the axial His is known. Re-evaluation of the x-ray diffraction data to reconcile the alternate heme orientations in the crystal and in solution has shown that the heme in the crystal is, in fact, rotated by 180° about the α,γ-meso axis from that originally reported (8)4 and is the same as found by 1H NMR in solution. Theoretical considerations (61, 64), confirmed in model compounds (65, 66), dictate that if the orbital ground state is determined by the axial His(F8) bonding, the rhombic axes, κ, and the angle between the heme N-Fe-N vector and the imidazole plane, φ (Fig. 1C), obey the counter-rotation rule κ = −φ. The present results conform quite well to these predictions, as shown in Fig. 8. The temperature dependence of the heme methyl shifts reveals that the 1-CH3 and 5-CH3 exhibit positive slopes steeper than Curie (T^-1) behavior, whereas the 3-CH3 and 8-CH3 exhibit slopes that are negative, i.e., anti-Curie behavior. This effect is expected on the basis of thermal population of the excited orbital state, in which the lone spin on the iron becomes delocalized into pyrroles B and D (60, 67-69). Lastly, the magnetic axes reported above allow the determination of δ_dip for the axial His, which in turn provides δ_con for each of the positions, as shown in Table I. Only the CεH exhibits large contact shifts, which are very similar to those reported for sperm whale metMbCN (70) and confirm an essentially conserved axial His-Fe bond in A. suum Hb relative to sperm whale Mb.

Distal Hydrogen Bonding Network-The excellent correlation between the observed and crystal-structure-predicted values of δ_dip and T1 for the Tyr30(B10) ring and the Gln64(E7) Nε2H side chain shows that their dispositions in metHbCN are essentially quantitatively conserved relative to those in the HbO2 crystal structure (8). The position of the Tyr30(B10) hydroxyl proton, deduced from its relaxation, NOESY, and dipolar shift constraints, with a χ3 value of ≈20°, is precisely that required to make an ideal H bond to the strongly tilted cyanide ligand. The short interproton distance between the Tyr30(B10) OH and the Gln64(E7) Nε1H is likewise consistent with the inter-residue H bond found in HbO2 (19). Significant crowding in the distal pocket is evident in two 1H NMR spectral parameters. The major magnetic axis (the Fe-CN tilt) is tilted from the heme normal by ≈30°, nearly twice as much as in other globins (34, 39-41, 51, 63).
This can be rationalized by the disposition of the Tyr30(B10) ring, which provides a steric barrier to ligation along the heme normal. The orientations of the Tyr30(B10) ring and the Fe-CN tilt (if only tilted and not primarily bent) determined herein place the Tyr Oη and the N of the bound cyanide in van der Waals contact. However, the tilt of the major magnetic axis (≈30°) is in the same direction (toward pyrrole C) as the tilt of the proximal His97(F8) imidazole plane (by ≈10°) observed for HbO2 (8), so that the large tilt of the major magnetic axis, and hence the Fe-CN tilt, could have a significant contribution (of up to ≈10°) from the proximal His tilt (32). The present results suggest that a crystal structure of A. suum HbCO would find the CO off-axis to a degree much larger than found in other carbonyl globins. The role of the axial His(F8) tilt in contributing to either Fe-CO (32) or Fe-CN tilt could be addressed either by the crystal structure of the carbonyl complex or by the solution 1H NMR determination of the magnetic axes of the cyanomet complex of the A. suum Hb mutant in which the covalent connection between the axial imidazole and the F-helix backbone is severed, the His(F8) → Gly mutant (71, 72), allowing an exogenous imidazole to bind in the preferred orientation normal to the heme. Aromatic rings in the heme pockets of globins are generally found with sufficient local flexibility to yield only rotationally averaged 1H NMR signals (37, 73), despite the apparent close packing suggested by the crystal structures. Thus Phe(CD1) is generally found packed tightly against the heme surface but nevertheless exhibits a 1H NMR spectrum that is rapidly averaged by 180° ring flips. The Tyr30(B10) ring exhibits an averaged NMR spectrum, but the rotation contributes significantly to the linewidth, and standard analysis in the fast-exchange limit (57) using the δ_dip(calc) for the individual CεHs results in a rotation rate of ≈1 MHz. A comparison can be made with globins having Phe rather than Tyr at B10 and a Gln(E7), i.e., elephant Mb and the sperm whale Leu29(B10) → Phe/His64(E7) → Gln and Leu29(B10) → Phe/His64(E7) → Gln/Val68(E11) → Phe Mb mutants, for which the B10 ring exhibited "normal" linewidths indicative of much faster reorientation (39, 51). Whether the constraints on the Tyr30(B10) ring in A. suum Hb result from "pinning down" its extremity via the H bond to the ligand or from the tight van der Waals contacts with the aromatic ring is not known but could be elucidated by comparing the solution 1H NMR spectra of WT and Tyr(B10) → Phe A. suum mutant Hb.

Conclusions-The present NMR data support the view that the heme pocket of A. suum Hb is highly constrained, as evidenced by a larger tilt of the Fe-CN unit from the heme normal than previously observed and by the slow reorientation of the Tyr30(B10) ring. The heme is shown to be rotated by 180° about the α,γ-meso axis relative to that originally reported in the crystal (8), and the pattern of heme methyl contact shifts is shown to be consistent with the deduced heme orientation. The Tyr30(B10) and Gln64(E7) side-chain labile protons in metHbCN are located at essentially the same positions as found in the HbO2 crystal and hence provide H bonds to the bound cyanide, establishing that metHbCN is a valuable structural model for aspects of both HbO2 and HbCO.
[Fig. 8 legend fragment: ...and A. suum hemoglobin. The line represents a perfect counter-rotation model.]

However, although the Fe³⁺-CN unit can serve as a limited structural model for the Fe²⁺-CO and Fe²⁺-O2 units in globins, cyanide ligation rates are unfortunately not functionally relevant to O2 or CO binding. This is because free cyanide in the physiologic pH range is protonated, so that both the on- and off-rates involve protonation/deprotonation steps that are strongly influenced by the local pocket polarity that modulates the cyanide pK. Thus the cyanide on- and off-rates relate directly to neither distal steric nor H-bonding effects (28, 29).

4 F. S. Mathews, personal communication.
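Computationally, the five-parameter magnetic-axes determination described under "Magnetic Axes" is a small nonlinear least-squares problem (Eqs. 4 and 5). The sketch below is a minimal, self-contained illustration of that procedure; the proton coordinates and "observed" shifts are synthetic placeholders rather than the paper's data, and the 1/(12πN) prefactor is absorbed into the anisotropy parameters.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic proton coordinates [A] in the iron-centered heme frame,
# kept away from the origin so R**-3 stays finite.
coords = rng.uniform(-8.0, 8.0, size=(60, 3))
coords = coords[np.linalg.norm(coords, axis=1) > 3.0]

def euler(alpha, beta, gamma):
    """z-y-z Euler rotation taking heme-frame coordinates into the
    magnetic frame, as in the Fig. 1C convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rz1 = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz2 = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rz2 @ ry @ rz1

def delta_dip(params, xyz):
    """Eq. 5 with the 1/(12 pi N) prefactor folded into d_ax, d_rh."""
    d_ax, d_rh, alpha, beta, gamma = params
    r = xyz @ euler(alpha, beta, gamma).T
    R2 = np.sum(r**2, axis=1)
    axial = d_ax * (3.0 * r[:, 2]**2 / R2 - 1.0)
    rhombic = 1.5 * d_rh * (r[:, 0]**2 - r[:, 1]**2) / R2
    return (axial + rhombic) / R2**1.5

# Fabricate "observed" shifts from a known tensor (beta = 29.5 deg) + noise.
true = np.array([80.0, 20.0, np.radians(159.0),
                 np.radians(29.5), np.radians(-100.0)])
obs = delta_dip(true, coords) + rng.normal(0.0, 0.02, size=len(coords))

f_over_n = lambda p: np.mean((obs - delta_dip(p, coords))**2)   # Eq. 4
# Started near the optimum for brevity; a real fit scans starting points.
fit = minimize(f_over_n, x0=true + rng.normal(0.0, 0.05, size=5),
               method="Nelder-Mead", options={"maxiter": 50000})
print("fitted beta [deg]:", np.degrees(fit.x[3]))
print("residual F/n:", fit.fun)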
2018-04-03T00:55:10.905Z
1999-11-05T00:00:00.000
{ "year": 1999, "sha1": "17c27c9fe0f1f13346b52b553b968d25f2a5ccd9", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/274/45/31819.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "6b257d239c17c022251adbf1d1756737545f9e49", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
53798910
pes2o/s2orc
v3-fos-license
766. Migration Flows and Increase of Extrapulmonary Tuberculosis in a Low Prevalence Setting: A Retrospective Analysis in Two Italian Centers

Background. Extrapulmonary tuberculosis (EPTB) represents 25% of worldwide tuberculosis and is more commonly associated with immunodepression. The purpose of this study was to determine the burden of EPTB in a low-TB-prevalence setting.

Methods. A retrospective evaluation of patients treated for TB at Tor Vergata Hospital and Terni Hospital (Italy) from January 2013 to November 2017 was performed. Clinical charts, laboratory tests, and radiological findings were reviewed and analysed. Data were elaborated using Yates' method, the Fisher test, and Pearson's chi-square test.

Results. A total of 171 patients were enrolled from 2013 to 2017 in two Italian centers (Rome and Terni); 71% were males, with a mean age of 41.5 years. The number of TB diagnoses increased over the study period (6.6% in 2013 vs. 56% in 2017), and an increase of EPTB (23% in 2013 vs. 44% in 2017) was seen. Most commonly, EPTB presented as generalized lymphadenitis (34%), osteomyelitis and spondylodiscitis (28%), and localizations at other sites (31%). Statistical analysis revealed a significant correlation between geographical provenience and TB localization (P = 0.004). Extra-European immigrants (76% Africans) were at higher risk of EPTB (OR 2.31; 95% CI 0.63-8.46), while being Caucasian showed a protective role toward EPTB development (P = 0.001). The risk of EPTB doubled in 2015-2017 with respect to 2013-2014.

Conclusion. From 2013 to 2017, an increase in TB admissions was documented, with a significantly higher number of EPTB cases, particularly in extra-European immigrants. The doubled risk in 2015-2017 was likely the consequence of the recent ongoing escalating levels of migration from African countries and may emerge as a Public Health problem.

Disclosures. All authors: No reported disclosures.
A 7-Year Retrospective Study of Pediatric Tuberculosis in a Third-Level Hospital in Mexico City

Background. According to WHO data, 10.4 million people were infected with tuberculosis (TB) in 2016, of whom one million were patients ≤18 years, and there were 250,000 deaths. The diagnosis of TB in pediatric patients is a challenge given its clinical behavior.

Methods. This is a retrospective, descriptive, observational study of patients under 18 years treated at the TB Clinic of the Department of Pediatric Infectious Diseases of the National Institute of Pediatrics (INP) in Mexico City during the period 2011-2018.

Conclusion. Tuberculosis in Mexico is still a major public health problem, and it is thus important to remain suspicious of it. This is the first report in Mexico in which immunodeficiency was investigated in pediatric patients with tuberculosis; it was detected in one out of five cases, which stresses the need to search for it, given that it can modify the outcome.

Disclosures. All authors: No reported disclosures.

Epidemiological and Clinical Profile of Miliary Tuberculosis in Southern Tunisia

Background. Miliary tuberculosis (MT) is a severe, rare form of tuberculosis (TB), often due to lymphohaematogenous dissemination of tubercle bacilli. Although the global incidence of TB has been slowly decreasing under globally conducted programs, the incidence of MT is relatively increasing, owing mainly to the widespread use of immunosuppressive drugs and the HIV/AIDS pandemic. Few reports exist on the epidemiology of MT in developing countries. We aimed to evaluate the epidemiological characteristics of MT in the region of Sfax, Southern Tunisia.

Methods. We conducted a retrospective study of all new cases of MT, of all ages, between January 1995 and December 2016. Data were collected from the regional tuberculosis register held at the anti-tuberculosis center of Sfax.

Results. We analyzed 22 patients with MT, accounting for 0.8% of all cases of tuberculosis. Incidence rates of MT were stable over the 22-year study period. The median age was 41 years, and half of the patients were female. MT was significantly more common in patients under 15 years (2.4% vs. 0.7%; OR = 3.5; P = 0.04). Six patients (27.3%) had extra-pulmonary locations, involving lymph nodes (n = 1), meninges (n = 2), bones and joints (n = 1), the abdominal cavity (n = 1), and the pleura (n = 1). One patient (4.5%) died within 8 months of a confirmed diagnosis. The median duration of treatment was 10 months (IQR = 6-15 months). The outcome was favorable in 19 cases (86.4%), and three patients received a combined-drug regimen (13.6%).
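The odds ratio and confidence interval quoted in the first abstract (OR 2.31; 95% CI 0.63-8.46) can be reproduced from any 2x2 exposure-outcome table with the standard log-OR (Woolf) formula. The counts below are hypothetical placeholders chosen only to illustrate the computation; the studies' actual tables are not given.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table [[a, b], [c, d]]: exposed/unexposed x cases/controls.
    Returns the odds ratio and its 95% Wald (Woolf) confidence interval."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts: EPTB vs. pulmonary TB among extra-European
# immigrants vs. others (NOT the study's actual table).
print(odds_ratio_ci(18, 30, 10, 40))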
2018-12-08T11:38:31.375Z
2018-10-04T00:00:00.000
{ "year": 2018, "sha1": "afb477da607e77b2fcce005677f03da263320103", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/ofid/article-pdf/5/suppl_1/S275/33598137/ofy210.773.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "afb477da607e77b2fcce005677f03da263320103", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
117902673
pes2o/s2orc
v3-fos-license
Coinvariants for Yangian doubles and quantum KZ equations

We present a quantum version of the construction of the KZ system of equations as a flat connection on the spaces of coinvariants of representations of tensor products of Kac-Moody algebras. We consider here representations of a tensor product of Yangian doubles and compute the coinvariants of a deformation of the subalgebra generated by the regular functions of a rational curve with marked points. We observe that Drinfeld's quantum Casimir element can be viewed as a deformation of the zero-mode of the Sugawara tensor in the Yangian double. These ingredients serve to define a compatible system of difference equations, which we identify with the quantum KZ equations introduced by I. Frenkel and N. Reshetikhin.

Introduction.

The Knizhnik-Zamolodchikov (KZ) system is a set of differential equations satisfied by correlation functions in Wess-Zumino-Witten theories ([12]). These equations can be interpreted as the equations satisfied by matrix elements of intertwining operators associated with representations of affine Kac-Moody algebras ([14]). They define a local system on the configuration space of n distinct marked points on the rational curve CP^1. This local system also has another interpretation: to a complex curve with marked points, a system of weights of a semisimple Lie algebra, and a positive integer is associated the vector space of conformal blocks. It is the space of coinvariants of a representation of a product of Kac-Moody algebras attached to the points of the curve, with respect to the subalgebra formed by the rational functions on the curve that are regular outside the points. The conformal blocks form a vector bundle on the moduli space of curves with marked points. There exists a natural connection on this vector bundle, the KZB (for KZ-Bernard) connection. It is provided by the action of the Sugawara field. This field is a generating functional for elements of the enveloping algebra of the Kac-Moody algebra. One shows that certain combinations of these elements conjugate the regular algebras associated to nearby elements of the moduli space (see [2, 15]). In [11], I. Frenkel and N. Reshetikhin introduced q-deformed analogues of the KZ equations. This qKZ system is a difference system obeyed by matrix elements of intertwining operators of quantum affine algebras. Later, elliptic analogues of this system were defined and studied ([9, 10]). If one wished to understand q-deformed versions of the KZB connection in higher genus, it would be important to understand how these equations could be derived from the coinvariants viewpoint. To make such a derivation explicit in the rational case is the main goal of this paper.

Let us now present our work. We consider a system of points z = (z_i), i = 1, ..., n, on CP^1. We call O_i and K_i the completed ring and field of CP^1 at z_i. We also call O and K their direct sums, and R_z the ring of functions on CP^1 that are regular outside z and vanish at infinity. We set ḡ = sl_2, and we denote by g_K the double extension of ḡ ⊗ K by central and derivation elements, and by g_O and g_z the extensions of ḡ ⊗ O and of ḡ ⊗ R_z. In this situation, the coinvariants construction described above is based on the inclusions of the enveloping algebras U g_O and U g_z in U g_K. To construct a deformation of these inclusions, we note that the decomposition of g_K as a direct sum g_O ⊕ g_z is that of a Manin triple, associated with the rational form dz on CP^1.
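As a minimal illustration of the Manin-triple decomposition just mentioned, consider the single-point case n = 1, z_1 = 0. The conventions below are the standard ones; the paper's normalizations may differ.

% n = 1, z_1 = 0: K = C((t)), O = C[[t]], R_z = t^{-1}C[t^{-1}]
\[
  K = O \oplus R_z, \qquad
  \langle \phi, \psi \rangle_K = \operatorname{res}_{t=0}\big(\phi\psi\,dt\big),
  \qquad
  \langle t^k, t^l \rangle_K = \delta_{k+l,-1}.
\]
% Both summands are isotropic: a product of two elements of O has no
% t^{-1} term, while a product of two elements of R_z has t-degree <= -2.
% The pairing thus identifies R_z with the dual of O, as required for the
% Manin triple (\bar{g} \otimes K, \bar{g} \otimes O, \bar{g} \otimes R_z).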
We apply techniques of quantum currents and twists [7, 5] to the quantization of this triple (sect. 1.1). We then give a presentation of the resulting algebras U g_O, U g_z and U g_K in terms of L-operators (sect. 1.4). Let us mention here that P. Etingof and D. Kazhdan obtained in [8] quantizations of these triples for arbitrary ḡ; their construction of U g_{K,z} from the double Yangian algebra is a special case of their construction of "factored algebras", which applies to any quantum group based on a 1-dimensional algebraic group. They also obtain RTT-type relations similar to our RLL relations ([8], Prop. 3.25). Our next step is to construct an isomorphism between U g_K and a tensor product of n copies of the double Yangian algebra DY(sl_2), with their central elements identified. This is done in Prop. 1.6 using the L-operators description of both algebras. The formulas closely resemble formulas for coproducts. They may have the following interpretation: the isomorphism of the quantum groups appearing in [6] with a tensor product of "local" algebras could be obtained if we extended the construction of that paper to a larger extension, with an n-dimensional center and n derivations. The resulting algebra would then have specialization morphisms to local algebras, and the desired isomorphism would result from composing the coproduct Δ^(n) with the tensor product of these specialization morphisms. We hope to return to this question elsewhere.

After that, we observe that there exists in the double Yangian DY(sl_2) a central element of the form q^{(K+2)D} S, deforming the difference (K + 2)D − L_{-1}, where K and D are the central and derivation elements of an extension of the loop algebra, L_{-1} is a mode of the Sugawara tensor, and S belongs to the subalgebra of DY(sl_2) "without D". The construction of this element follows from a general construction of V. Drinfeld in [4] of central elements in quasi-triangular Hopf algebras, implementing isomorphisms of modules with their double duals. We then show that S plays a role similar to L_{-1} in the classical situation, with infinitesimal shifts replaced by finite shifts of the points: its copy S^(i) on the ith factor of DY(sl_2)^{⊗n} conjugates the subalgebra U g_z to U g_{z + ħ(K+2)δ_i}, where δ_i is the ith basis vector of C^n (see sect. 3.3.3). As in the classical case, the actions of the S^(i) define a discrete flat connection on the space of coinvariants H_0(U g_z, V)^*, where V is the representation induced to DY(sl_2)^{⊗n} from a finite-dimensional representation of U g_O. We then compute this connection explicitly (sect. 3.3) and find that it agrees with the quantum KZ connection of [11].

1. Algebras associated with sl_2 in the rational case

1.1. The algebra U g_{K,z}.

1.1.1. Manin triples. Let us fix an integer n ≥ 1. Let z_i, i = 1, ..., n, be a family of complex numbers; set z = (z_i). Let t be the standard coordinate on CP^1, and set t_i = t − z_i; t_i is then a local coordinate at z_i. Let K_i and O_i be the local field and ring of CP^1 at z_i. Let us set K = ⊕_i K_i and O = ⊕_i O_i, and let R_z be the subring of K formed by the expansions of regular functions on CP^1 \ {z_i}, i = 1, ..., n, vanishing at infinity. We then have the direct sum decomposition K = R_z ⊕ O. Let us endow K with the scalar product ⟨φ, ψ⟩_K = Σ_{i=1}^n res_{z_i}(φψ dz). Let us set ḡ = sl_2, and construct the Lie algebra g_{K,z} = (ḡ ⊗ K) ⊕ CD_z ⊕ CK_z as the double extension of the loop algebra ḡ ⊗ K by the cocycle c(x ⊗ φ, y ⊗ ψ) = ⟨x, y⟩_ḡ Σ_{i=1}^n res_{z_i}(φ dψ), and by the derivation [D_z, x ⊗ φ] = x ⊗ (dφ/dz).
This Lie algebra is endowed with the scalar product , g K,z , defined onḡ ⊗ K by x ⊗ φ, y ⊗ ψ g K,z = x, y ḡ φ, ψ K , , ḡ being the Killing form ofḡ, and D z , The Lie algebra g K,z contains subalgebras Both subalgebras are isotropic for the scalar product , g K,z . We then construct the Manin triple In [6], we also considered the following twisted Manin triples. Letḡ = n + ⊕ h ⊕ n − be a Cartan decomposition ofḡ. Set and let g w 0 +,z , g w 0 O be the subspaces defined as g +,z , g O , exchanging n + and n − . Then we have the Manin triples and 1.1.2. Quantization of the Manin triples (2) and (3). In [6], we defined quantizations of the Manin triples (2) and (3). Let us recall their construction. Let U g K,z be the algebra generated by D z , K z , x (i) k , k ∈ Z, i = 1, · · · , n, x = e, f, h, arranged in the generating series (we note as usual z ij = z i − z j ); we also set and In (4), the arguments of the exponentials are viewed as formal power series in , with coefficients in U g K,z⊗ R z in the first case, and in U g K,z⊗ O in the second one. Herē ⊗ denotes the graded tensor product with respect to the bases (t k i ) of O, and (t − z 1 ) −k 1 −1 · · · (t − z n ) −kn−1 of R z . In the notation of [6], we have x The above relation between x Here δ(z, w) = k∈Z z k w −k−1 ; note that we have changed both signs of K z and D z with respect to the convention of [5]. This algebra is endowed with Hopf structures (∆, ε, S) and (∆, ε,S), quantizing the bialgebra structures (2) and (3) respectively. These coproducts are defined by where (K z ) 1 and (K z ) 2 mean K z ⊗ 1 and 1 ⊗ K z ; and The counit ε is defined to be equal to zero on all generators. Proposition 1.1. (see [6]) The above formulas define quantizations (U g K,z , ∆) and (U g K,z ,∆) of the Manin triples (2) and (3). These are quasitriangular Hopf algebras, with universal R-matrices respectively given by The coproducts ∆ and∆ are related by the twist this means that we have∆ = Ad(F ) • ∆. We denote Ad(X), for an invertible element X of some algebra A, the linear endomorphism Y → XY X −1 of A. F also satisfies the cocycle equation Remark 1. The above results can be extended to the case where we take z in 1.1.3. Subalgebras of U g K,z . Let us consider in U g K,z , the subalgebras U g O and U g z , respectively generated by D z and the x (i) k , k ≥ 0, and by K z and the x = e, f, h. Proof. This follows from the fact that the three spaces U g z , U g O and U g K,z are flat deformations of Ug z , Ug O and Ug K,z , and that the product maps Ug z ⊗ Ug O → Ug K,z and Ug O ⊗ Ug z → Ug K,z are linear isomorphisms. 1.2. Hopf structures on U g K,z and its subalgebras. In a way similar to [5], we define the linear maps The fact that these maps are well-defined follows from Prop. 1.3. We then have Theorem 1.1. (see [6,8]) We have the equalities If we set and we have F = F 2 F 1 . The maps Ad(F 1 ) • ∆ and Ad(F −1 2 ) •∆ coincide; we will denote them by ∆ z . ∆ z defines a quasitriangular Hopf algebra structure on U g K,z ; U g z and U g O are Hopf subalgebras of U g K,z for this structure. The universal R-matrix of (U g K,z , ∆ z ) is expressed by Moreover, F 1 and F 2 have the expansions and where U n [2] ± are the linear spans in U g K,z of products of more than two elements of the form x[ǫ], x = e for ± = + and x = f for ± = −, ǫ ∈ K. The Hopf algebra (U g K,z , ∆ z ), together with its Hopf subalgebras U g z and U g O , forms a quantization of the Manin triple (1). 1.3. Representations of U g K,z and L-operators. 
In [6], we studied level zero representations of the algebras introduced there. Our result can be expressed as follows. Define the algebra U g ′ K,z as the subalgebra of U g K,z generated by the K z and the x[ǫ], ǫ ∈ K, x = e, f, h; and U g z and U g ′ O as its subalgebras respectively generated by the x[ǫ], ǫ ∈ R z and K z , and by the x[ǫ], ǫ ∈ O, x = e, f, h. Proposition 1.4. We have an algebra morphism The images by π of U g z and U g ′ O are contained in Define the L-operators of (U g K,z , ∆ z ) as (24), and has leading term 1, we have In what follows, we will stress the functional dependences of L + z and L − z by writing . From (25) and (26) follows that L + z (t) and L − (i) (t i ) are decomposed as follows: and 1.4. RLL relations for U g K,z and its subalgebras. Let us compute the image by π ⊗ π of the universal R-matrix of U g K,z . We find where expanded at the vicinity of t i = 0, where P is the permutation operator of the two factors of C 2 ⊗ C 2 . By applying the representation π to two out of the three factors of the Yang-Baxter equation with we set Moreover, we have the relations where det (L(t)) = l 11 (t)l 22 (t − ) − l 12 (t)l 21 (t − ), for L(t) a matrix with entries l ij (t); this follows from the identities specialized to t = u − and t i = u i − . Below we will denote by 1 i the element of K with ith component equal to 1 and the others equal to 0. Proof. Denote by U the algebra defined above. As we have seen before, the formulas (27) and (28) define a morphism from U to U g K,z . On the other hand, because of relations (35), a system of generators of U is given by D z , K z and the l , up to first order in ; it is easy to check that this combination is given by the Lie algebra bracket inḡ ⊗ K. Therefore, U is a deformation of the tensor product of the symmetric algebra ofḡ ⊗ K with the symmetric algebra in D z and K z . In the classical limit, the morphism defined by (27) and (28) is the identity. Corollary 1.1. Let ϕ be an algebra morphism from U g K,z to some algebra A. Then the images by ϕ ⊗ 1 ⊗ 1 of L + (t) and L − (i) (t i ) are elements satisfying (32), (32), (35). Conversely, any matrices (37), (38) satisfying these equations define a morphism ϕ : U g K,z → A. 1.5. Isomorphism of U g K,z with DY (sl 2 ) ⊗n /(K (i) − K (j) ). In this section, we will construct an isomorphism i z from U g K,z to DY (sl 2 ) ⊗n /(K (i) − K (j) ). We set K (i) = 1 ⊗(i−1) ⊗ K ⊗ 1 ⊗(n−i) and we will denote by K the common value of the K (i) in the latter algebra. 1.5.1. Presentation of DY (sl 2 ). We will express i z in terms of L-operators. To do that, let us remark that there is a simple presentation of DY (sl 2 ) as U g C((t)),(0) , the specialization for n = 1 and z = (0) of the algebra U g K,z . In terms of L-operators, DY (sl 2 ) is presented as follows. It has generators D, K and l αβ [k], α, β = 1, 2, k ∈ Z, generating series and relations where R(z, z ′ ) is defined by (34), the quantum determinant relations det (L ± (z)) = 1, and [K, anything] = 0, [D, L ± (z)] = −dL ± (z)/dz. (In the notation of [5], L − (z) and L + (z) correspond respectively to L <0 (z) and to the inverse of L ≥0 (z) in End( We also have the relation of the operator L ± (z), which belongs to DY (sl 2 Consider the expression On the other hand, the series L + j (t i + z ij ), j = i are expanded as sums 1 ⊗ 1 + k≥1 (t i + z ij ) −k × coefficients, and we expand them in turn at the vicinity of t i = 0. 
The coefficient of each power of t i in this expansion is then an infinite series, converging in the topology of DY (sl 2 ) ⊗n because it involves large Fourier indices. Therefore the above expression belongs Consider now the expression L + 1 (t − z 1 ) · · · L + n (t − z n ). Since each L + (t) belongs to 1 ⊗ 1 + t −1 DY (sl 2 )[[t −1 ]], this product belongs to 1 ⊗ 1 + DY (sl 2 ) ⊗n⊗ R z . Then Proposition 1.6. The formulas define an isomorphism between U g K,z and DY (sl 2 ) ⊗n /(K (i) − K (j) ). The fact that the right sides of (42) and (43) satisfy the identities (35) follows from the fact that the quantum determinant is a group-like element of the Yangian algebra Y (gl 2 ). 1.6. Shifts of the points. By Prop. 1.6, we have subalgebras i z (U g z ) of for λ ∈ C and δ i the ith standard basis vector of C n . Proof. i z (U g z ) is generated by the coefficients of (42). We have for any j = 1, . . . , n, these are generating functions for i z+ λδ i (U g z+ λδ i ). Quantum deformation of the zero-mode of the Sugawara field 2.1. Quantum Casimir elements. In [4], V. Drinfeld proved the following fact: Proposition 2.1. (see [4], Prop. 2.2) Let A be a quasitriangular Hopf algebra with coproduct ∆ A , counit ε A and antipode S A . Let R A be its R-matrix and set Then we have for any x in A, S −2 A (x) = uxu −1 . In particular, if for some other u 0 ∈ A, we have S −2 A (x) = u 0 xu −1 0 , then u −1 0 u belongs to the center of A. 2.2. Application to the deformation of the zero-mode of the Sugawara tensor. Letḡ = sl 2 and g = (ḡ ⊗ C((t))) ⊕ CD ⊕ CK be the double extension of the loop algebraḡ ⊗ C((t)) by the cocycle . Let e, f, h be the Chevalley basis of sl 2 , and let us set x n = x ⊗ t n for x ∈ sl 2 . Then it is a known fact that belongs to the center of Ug. The sum in this expression is the zero-mode of the Sugawara tensor. Let DY (sl 2 ) be the double Yangian algebra associated with sl 2 . As we have seen, this algebra is generated by central and derivation elements K and D, and elements x lifting elements x ofḡ ⊗ C((t)). Let DY (sl 2 ) ′ be the subalgebra of DY (sl 2 ) generated by K and the x. The universal R-matrix of DY (sl 2 ) is expressed as Yg has the expansion The antipode S Yg of DY (sl 2 ) satisfies S 2 Yg = Ad(q 2D ). Let us set is a central element of DY (sl 2 ). It is written as where S belongs to the completion of the subalgebra DY (sl 2 ) ′ defined by the left ideals generated by the lifts ofḡ ⊗ t N C[[t]]. Its expansion in powers of is Proof. The first statement follows from Prop. 2.1. We then have S = Yg commutes with D ⊗ 1 + 1 ⊗ D, S commutes with D. This proves (45). (46) then follows from the above expansion of R 0 . Discrete connection on coinvariants 3.1. Induced representations. Let DY (sl 2 ) ≥0 be the subalgebra of DY (sl 2 ) generated by the the nonnegative Fourier generators x k , k ≥ 0, x = e, f, h. DY (sl 2 ) ⊗n ≥0 is isomorphic to its image in DY (sl 2 ) ⊗n /(K (i) − K (j) ); these two algebras will be denoted the same way. Finally, we denote by DY (sl 2 ) ⊗n ≥0 [K] the subalgebra of DY (sl 2 ) ⊗n /(K (i) − K (j) ) generated by DY (sl 2 ) ⊗n ≥0 and K. We then have: Proof. It follows from (43) and (36) that i z (U g O ) is contained in DY (sl 2 ) ⊗n ≥0 [K]. Since the classical limits of both algebras coincide with Ug O and the classical limit of i z is then the identity, i z is an isomorphism between these algebras. The restriction to DY (sl 2 ) ≥0 of the Yangian version of π can be specialized to t = 0. 
Denote by (V, ρ V ) the resulting 2-dimensional representation. Proof. We have where the third equality follows from (29). We will consider also the dual representation where t denotes the transposition. We have then where the exponent t 2 denotes the transposition in the second factor. Proof. We have (see [4] The result now follows from the identity Let us fix now a complex number k and consider the module (ρ, (V * ) ⊗n ) over DY (sl 2 ) ⊗n ≥0 [K], defined as follows. As an algebra, DY (sl 2 ) ⊗n ≥0 [K] is isomorphic to DY (sl 2 ) ⊗n ≥0 ⊗ C[K]. DY (sl 2 ) ⊗n ≥0 then acts on (V * ) ⊗n by ρ ⊗n V * , and K acts on this space by the scalar k. 3.2. Coinvariants. We will be interested in the (dual to the) space of coinvariants H 0 (U g z , V) * , which is defined as the subspace of V * consisting of the forms ℓ z , such that Props. 1.3 and 1.6 show that the map from (V * ) ⊗n to V, sending v to 1 ⊗ v, is injective. This map will serve to identify (V * ) ⊗n with a subspace of V. Proposition 3.1. The restriction of ℓ z to (V * ) ⊗n defines a map from H 0 (U g z , V) * to ((V * ) ⊗n ) * = V ⊗n . This map is a linear isomorphism. Proof. It follows from Props. 1.6 and 1.3 that a basis of V is given by the u i ⊗ v j , with (u i ), (v j ) bases of U g z and (V * ) ⊗n respectively (u 0 = 1). A form ℓ z of V * is then invariant iff it satisfies the equations ℓ z (u i ⊗ v j ) = ε(u i )ℓ z (v j ), so it is exactly determined by the ℓ z (1 ⊗ v j ). Compatible difference system on coinvariants. 3.3.1. Definitions. Let C n * be the complement of the diagonals in C n , and let η be a nonzero complex number. A difference flat connection on C n * is the data of a vector space E z for each z ∈ C n * , together with a system of linear isomorphisms satisfying the relations η is called the step of the system. Suppose we have a system ι z of isomorphisms of the E z with a fixed vector space E. We get a system of elements A i (z) = ι z+ δ i A i (z)ι −1 z ∈ Aut(E), satisfying the same relations. Such a system is called a compatible difference system (see [1,13]). 3.3.2. The case of a formal step. We will need the following modification of the above definitions in the formal context. In that situation, z is a sequence (z 1 , . . . , z n ) ∈ C[[ ]] n , such that z i = z j mod for i = j. The 3.3.3. The action of the quantum Sugawara element on coinvariants. We then have a compatible difference system, in the formal sense, defined as follows. Associate to z the coinvariants space E z = H 0 (U g z , V) * . Set η = (k + 2) and define ; the action of this element on V coincides with that of Ad(q −(k+2)D (i) )(i z+ (k+2)δ i (x)), which belongs to i z (U g z ) by Prop. 1.7. We then have Identification with the qKZ system The aim of this section is to make the system (48) explicit, using the identifications of Prop. 3.1 of the spaces of coinvariants with V ⊗n . 4.1. Expression of the Sugawara action in terms of L-operators. Let us express the connection (48) in terms of L-operators. For that, we will prove: and l − α ∈ DY (sl 2 ). Let v belong to (V * ) ⊗n ; we view it as a vector of V, as explained above. Then the action of S (i) on v is expressed as recall that the exponent i means the action on the ith factor of (V * ) ⊗n . Proof. In the notation of sect. 2.2, we have On the other hand, we have (1 ⊗ q −kD )L − (t)(1 ⊗ q kD ) = L − (t + k), so that for t = 0 we obtain, since ρ V coincides with the specialization of π for t = 0, Therefore, we have The lemma follows from the comparison of this formula with (49). 4.2. 
Expression of the discrete compatible system. Let us express the invariance of the form ℓ z . Let V a be an auxiliary copy of the vector space V . Consider i z (L − (i) (t i )) as an elements of End(V a ) ⊗ DY (sl 2 ) ⊗n /(K (i) − K (j) ). We then have, for v a ∈ V a , v ∈ (V * ) ⊗n , and any formal t i , Substitute t i by k in this identity. Using (42), we find that The exponent t i denotes here the transposition of the ith factor. Therefore, we have the identity for any v a ∈ V a , v ∈ (V * ) ⊗n . Introduce nowR This object has the following property: if we write (R(z) Note that we haveR(z) = R(z + 2 ) t 2 , so that After we apply the transposition on V a , we obtain, for any u a ∈ V * , Here R(z) t = R(z) t 1 t 2 is the transposed of R(z). By Lemma 4.1, the left side of this equality is the expression of (A i ℓ z )(v). Remark 2. We have obtained equations for the coinvariants of the representation V of U g K,z induced from ρ ⊗n V * . It should be possible to obtain equations for coinvariants of a representation ⊗ n i=1 ρ V * i , where V i are finite dimensional representations of DY (sl 2 ) ≥0 . For that, one should consider the L-operators L −(V i ) (i) (t i ) for U g K,z and L ±(V i ) (t), defined by replacing π by π V i in the definitions of L − (i) (t i ) and L ± (t), and prove that i z (L −(V i ) (i) (t i )) is given by the formula of Prop. 3.1. After that, it should be possible to apply the reasoning above to derive the general version of the qKZ equation. Remark 3. It is interesting to consider the systems obtained by replacing V by a representation induced from a non-irreducible representation of U g O . In the classical case and the gl 1 situation, one obtains in this way non-Fuchsian systems. For example, DY (sl 2 ) ≥0 has an evaluation representation ρ V [[ζ]] on V [[ζ]], defined by (id ⊗ ρ V [[ζ]] )(L + (t)) = R(t + ζ); the representation ρ V considered above is a quotient of ρ V [[ζ]] . One may expect that the system of equations one would obtain this way is the system where ℓ z belongs to V ⊗n [[ζ 1 , . . . , ζ n ]], on which the operators K i (z ij + ζ i − ζ j ) act as k≥0 K (k) i (z ij )(ζ i − ζ j ) k /k! (the functions of ζ i act by multiplication on the formal series part and the derivatives of K i act as matrices on the factor V ⊗n ; K i are the operators appearing in the right side of (56)). Remark 4. In the classical case, it is possible to explain the agreement of the intertwiners and coinvariants approaches in a simple way. It would be interesting to find such an explanation of the result of this paper.
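Two formulas that did not survive extraction above may help the reader; both are standard forms, not transcriptions from this paper. First, the compatibility relations defining a difference flat connection (sect. 3.3.1) are presumably the usual ones: for A_i(z) : E_z \to E_{z + \eta\delta_i},
\[ A_i(z + \eta\delta_j)\,A_j(z) \;=\; A_j(z + \eta\delta_i)\,A_i(z), \qquad i \neq j, \]
the discrete analogue of a zero-curvature condition. Second, the qKZ system of Frenkel and Reshetikhin [11], with which the connection (48) is identified, reads schematically (the precise ordering of R-matrix factors and the diagonal factor D_i depend on conventions)
\[ \Psi(z_1, \dots, z_i + p, \dots, z_n) \;=\; R_{i,i-1}(z_i - z_{i-1} + p) \cdots R_{i,1}(z_i - z_1 + p)\; D_i\; R_{i,n}(z_i - z_n)^{-1} \cdots R_{i,i+1}(z_i - z_{i+1})^{-1}\,\Psi(z_1, \dots, z_n), \]
with step p equal to the \eta = \hbar(k+2) of sect. 3.3.3 (the \hbar appears to have been lost to extraction in the text above).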
p21-activated kinase family: promising new drug targets : The p21-activated kinase (PAK) family of serine/threonine protein kinases are downstream effectors of the Rho family of GTPases. PAKs are frequently upregulated in human diseases, including various cancers, and their overexpression correlates with disease progression. Current research findings have validated important roles for PAKs in cell proliferation, survival, gene transcription, transformation, and cytoskeletal remodeling. PAKs are shown to act as a converging node for many signaling pathways that regulate these cellular processes. Therefore, PAKs have emerged as attractive targets for treatment of disease. This review discusses the physiological and pathological roles of PAKs, validation of PAKs as new promising drug targets, and current challenges and advances in the development of PAK-targeted anticancer therapy, with a focus on PAKs and human cancers. Introduction The first p21-activated kinase, PAK1, was discovered in 1994 as a Cdc42/Racinteracting protein. 1 Cdc42 and Rac proteins are members the Rho family of GTPases, which are known regulators of cell proliferation and motility. 2 Thereafter, a significant amount of work has revealed the connection of PAKs to a plethora of cellular processes, including proliferation, survival, motility, gene transcription, and oncogenic transformation. 3,4 Activation of PAKs has been shown to drive oncogenic signaling in cells and contribute to the progression of cancer. This review summarizes the role of PAKs in tumorigenesis and the current status of PAK1-targeted drug development. Functional characteristics and molecular mechanisms of action Structure and activation mechanisms of PAKs Six mammalian PAKs are classified into two groups based on their sequence, structural homology, and activation mechanism, ie, group I (PAK1-3) and group II (PAK4-6). All PAKs have a conserved C-terminal kinase domain and an N-terminal regulatory domain. 5 Group I PAKs are approximately 70% homologous in their sequence but with over 90% homology in their kinase domains. 6 The N-terminal regulatory domain contains a p21-protein binding domain that overlaps with an autoinhibitory domain (AID, Figure 1). Group I PAKs normally exist as inactive homodimers, in which the AID of one PAK molecule interacts with the kinase domain of another. Binding of Cdc42/Rac to the p21-protein binding domain disrupts this interaction and initiates conformational changes to trigger autophosphorylation of the activation loop and several C-terminal serine residues, which are critical for full kinase activity. 1,7 Unlike group I, the kinase domains of group II PAKs are constitutively phosphorylated. Binding to Cdc42/Rac does not result in activation but rather affects subcellular localization. 8 It was believed that group II PAKs do not possess an AID and are constitutively catalytically active, until recent work revealed the presence of AID-like domains in group II PAKs. [9][10][11] PAK5 was first identified to contain an AID and its kinase activation was found to be stimulated by Cdc42 binding similar to group I PAKs. 9 This domain, however, was absent in PAK4 and PAK6, leading to the assumption of distinct regulation within group II PAKs. Recent studies suggest that the AID-like domain interacts with and inhibits the kinase domain. Activation does not rely on autophosphorylation of the activation loop, but rather occurs readily once this interaction is disrupted, permitting assumption of active conformation. 
10,11 Biological functions of PAKs PAKs play important roles in many cellular processes, including cell proliferation, motility, and survival. PAKs stimulate cell proliferation by activating the mitogen-activated protein kinase signaling pathway. 12,13 Two components of this pathway, c-Raf and MEK1, have been reported to be direct substrates of group I PAKs. [14][15][16] PAKs also promote cell proliferation via the Wnt signaling pathway. PAK1 associates with and phosphorylates β-catenin to stabilize it and facilitate its nuclear translocation, thus controlling transcriptional activity. 17,18 In addition, PAKs promote cell cycle progression by stimulating expression of cyclin D1, a key regulator of G1 cell cycle progression. 19 PAK1 has also been shown to localize to centrosomes and to phosphorylate and activate the centrosomal kinase Aurora A during mitosis. 20 PAKs regulate actin-cytoskeleton rearrangement during cell motility, division, and migration; these are processes that, when deregulated, can be exploited by cancer cells during metastatic and invasive disease progression. PAKs participate in cytoskeletal remodeling by phosphorylating many substrates that control different aspects of cytoskeletal dynamics, such as LIM kinase, 21 p41-ARC, 22 filamin A, 22 and myosin light chain kinase. 23 PAK1 phosphorylates c-Raf, causing its relocalization from the cell membrane to the mitochondria, where it binds to Bcl-2 to replace Bad, thereby relieving Bcl-2 from the inhibitory complex and stimulating cell survival signals. 24 PAK1 and PAK5 can also phosphorylate Bad directly and dissociate it from Bcl-2, 25 thus enhancing cell survival. PAKs in disease: aberrant PAK function in cancer PAKs have been shown to affect three main areas of human health, ie, cancer, brain function, and viral infection. 26 However, much of the interest and research centers on understanding their roles in cancer. This review focuses on the functions of PAKs in different aspects of cancer. PAKs occupy a central position in signaling pathways controlling cell proliferation, survival, motility, angiogenesis, anchorage-independent growth, and epithelial-mesenchymal transition, 27 processes that are conceptualized to constitute the hallmarks of cancer. PAKs are not frequently mutated in cancers; yet, overexpression and/or gene amplification of PAKs is common. 28,29 This generally results in elevated mRNA and protein levels, with subsequent accumulation of phosphorylated PAK and presumably increased PAK activity. In particular, the PAK1 and PAK4 genes are located on chromosomal regions that are frequently amplified in cancer, and these two kinases are most strongly associated with cancer compared with other PAK members. Aberrant function of PAKs, particularly PAK1, is well documented in breast cancer. PAK1 gene amplification at 11q13 is reported as prevalent in luminal breast cancer. 30 The frequency of PAK1 amplification was 17% in the tumor panel examined and correlated well with mRNA expression. Immunohistochemical analysis of breast tumor tissues of different stages revealed that PAK1 expression was upregulated and positively correlated with disease progression 30 as well as with recurrence rate and mortality. 31,32 PAK1 small interfering RNA induces robust cell apoptosis associated with caspase activation and attenuated phosphorylation of MEK1 and ERK1/2. Amplification of PAK1 is also reported in ovarian cancer and, more importantly, elevated PAK1 levels serve as an independent prognostic predictor of poor survival in ovarian cancer.
33 Overexpression of PAK1 is observed in over 70% of colorectal cancers (CRCs), 34 where its expression increases with disease progression from adenoma to carcinoma, with significant increases in invasive and metastatic CRC. 35 In addition, expression of PAK4 and PAK5 is also elevated in CRC. 36 PAK4 gene amplification at 19q13.2 has been shown in 22% and 11% of pancreatic cancers and CRCs, respectively. 37,38 Increased expression of PAK4 is positively correlated with disease aggressiveness and a poor prognosis. 39 Loss-of-function mutations of the tumor suppressor genes NF1 and NF2 predispose mutation carriers to development of the dominantly inherited autosomal disease neurofibromatosis type 1 or type 2 (NF1 and NF2), respectively, which is characterized by formation of tumors of the central and peripheral nervous systems. PAK1 promotes the malignant growth of both NF1 and NF2 through different mechanisms. Neurofibromin, encoded by NF1, is a GTPase-activating protein that stimulates Ras GTPase activity, leading to its inactivation and downstream inhibition of PAK1 through the effector pathways. Merlin, the tumor suppressor gene encoded by NF2, directly interacts with the Cdc42/Rac binding domain of PAK1 to prevent its activation. 40,41 Merlin is a negative regulator of PAK1 and exerts growth suppressive activity. PAK1 can phosphorylate Merlin on S518, causing conformational changes and consequently disrupting interaction with PAK1, liberating PAK1 for activation. 41 Although mutations of PAKs are not frequently described in cancer, a gain-of-function point mutation of PAK5 was recently reported in 5%-10% of lung cancers. 42 Inhibition of PAK5 by small interfering RNA decreased cell survival and ERK activity. 42 Hyperactivated PAK functions can also result from mutations in upstream regulators such as Ras/Rac. The discovery of the role of PAKs in carcinogenesis warrants the potential of targeting PAKs in cancer therapy. PAK signaling in cancer Many human cancers are driven by the oncogenic protein Ras, with more than 30% showing somatic gain-of-function mutations of this oncogene. 43 Over 90% of pancreatic adenocarcinomas 44 and 30%-50% of CRCs 45,46 carry Ras mutations. Of the three human Ras isoforms, NRas, HRas, and KRas, KRas mutations comprise 86% 47 of all Ras mutations and occur in 17%-25% of all cancers. 46 KRas mutations are probably best described in CRC, and are clinically associated with a poorer prognosis, increased tumor aggressiveness, and treatment resistance. 45,46,48 Mutations of Kras cause deregulated cell growth, migration, apoptosis, and differentiation via activation of its downstream effectors. The two canonical Ras effectors are the Raf serine/threonine protein kinases and the phosphoinositide 3-kinases (PI3Ks). The development of anti-Ras therapy has not proved successful due to the complex nature of Ras signaling. The next most sensible approach therefore has been focused on targeting the two mentioned Ras effector signaling pathways. Over 30 and 50 inhibitors targeting the RAF/MEK/MAPK/ERK and the PI3K/AKT cascade, respectively, are currently under evaluation. 28 However, the efficacy of targeting components in these two pathways is compromised by the cross-talk and inhibitory effects that exist between the two pathways, and ablating signal from one pathway may result in compensatory signal in the other. Consequently, combination targeting of both pathways may prove helpful in anti-Ras therapy. 
PAK1, acting downstream of Ras, can enhance the RAF/MEK/ERK signaling pathway and increase cell proliferation by phosphorylating Raf1 (S338) 49 and MEK1 (S298). 50 PAK3 can also phosphorylate Raf1 on S338 to enhance its activation both in vitro and in vivo. 15 … membrane where it is activated. 51 We have recently reported that PAK1 is required for proliferation, survival, migration, and vascular endothelial growth factor secretion of CRC cells harboring mutations in Ras, PI3K, and Apc 13 and that PAK1 knockdown inhibited these cellular processes by inactivation of ERK and AKT. 13 Moreover, PAK1 knockdown suppressed growth and metastasis of CRC cell lines in xenograft and liver metastasis models in mice. 17 These findings are supported by the work of Li et al showing that PAK1 regulates CRC cell metastasis via ERK-dependent phosphorylation of FAK in both cell lines and clinical samples. 52 Inhibition of PAK1, both genetic and pharmacological, causes inhibition of ERK and AKT activity and tumor regression in a KRas-driven skin cancer model. 53 These results imply that, instead of combination therapies targeting both the RAF/MEK/ERK and PI3K/AKT pathways, targeting PAK1 alone could be an alternative approach in cancer treatment, specifically in KRas-driven cancers. However, one report found that knockdown of PAK1 or PAK4 inhibited the proliferation of mutant KRas colon cancer cells via RAF/MEK/ERK-independent and/or PI3K/AKT-independent pathway(s), 54 suggesting that an alternative signaling pathway is involved. Indeed, interactions of PAKs with the Wnt pathway have also been reported in several studies. 17,34,55,56 In breast cancer and CRC cell lines, PAK1 associates with and phosphorylates β-catenin on S663 and S675, 34,56 and promotes its nuclear translocation and the subsequent transcriptional upregulation of T-cell factor-responsive genes including myc and cyclin D1, a key driver of cell cycle progression. Consistent with this, we have reported that PAK1 binds to β-catenin in CRC cells and that PAK1 knockdown inhibited β-catenin expression and β-catenin/TCF4 transcriptional activity. 17 Taken together, PAK1 facilitates cross-talk between the Ras effector pathways and the Wnt signaling pathway, which is important in tumorigenesis, and thus becomes an appealing target in cancer therapy (Figure 2). Accumulating evidence indicates that micro (mi)RNAs play a key role in a wide range of biological functions, including cellular proliferation, differentiation, and apoptosis in cancer. Emerging evidence shows that PAKs are regulated by a number of miRNAs that are also recognized to promote hyperactivation of oncogenic KRas signaling. miRNA-433 inhibits proliferation of hepatocellular carcinoma cells by downregulation of PAK4. 57 miRNA-145 inhibits P-ERK expression by targeting PAK4 and leads to inhibition of colon cancer growth. 57 Similarly, miRNA-133 inhibits gastric cancer growth by downregulation of the CDC42-PAK1 pathway. 58 Further investigation of the mechanism(s) by which aberrantly expressed miRNAs regulate PAKs in cancer, and of the implications for oncogenic signaling, would advance the development of successful therapies targeting PAKs. PAKs can stimulate cell survival through interaction with proteins of the Bcl-2 family. PAK1, 25,59 PAK4, 60 and PAK5 61,62 can phosphorylate Bad on S112 and/or S136, disabling it from interacting with and inhibiting the prosurvival proteins Bcl-2 and Bcl-xL.
62 PAK1 and PAK5 can phosphorylate and translocate Raf-1 to the mitochondria, where it can in turn phosphorylate Bad on S112 to inhibit its proapoptotic activity. 24,25,63 In addition, PAK1 can phosphorylate BimL, another proapoptotic protein, dissociating it from Bcl-2, thereby enhancing cell survival. 64 Another mechanism employed by PAKs to protect cells from apoptosis involves stimulating the transcription factor nuclear factor kappa B, which regulates genes important for cell survival and proliferation. 3 Also, PAK1 can directly phosphorylate and inhibit the proapoptotic transcription factor FKHR by sequestering it in the cytosol, preventing it from activating transcription of proapoptotic genes. 65 Unlike PAK1, PAK4, or PAK5, PAK2 is unique in containing a caspase site that is cleaved by caspase 3 upon apoptotic signaling, generating a constitutively active fragment known as PAK-2p34, 66 which can regulate morphological changes during late apoptosis 67 and stimulate cell death in Jurkat cells. 68 The full-length PAK2 localizes in the cytoplasm and promotes cell survival in a manner similar to PAK1 phosphorylation of Bad. PAK2 is reported to phosphorylate and inhibit caspase-7 in breast cancer cells, hence reducing cell apoptosis. 69 The dual role of PAK2 in promoting cell survival or apoptosis is due to differential regulation of subcellular localization of PAK2 and PAK-2p34, where the latter is localized to the nucleus to activate a proapoptotic substrate. 66 Cell migration requires formation of filopodia and lamellipodia at the leading edge, establishing new adhesions at the forefront, detaching adhesions at the trailing edge, and contracting to propel the cell forward. PAKs coordinate the formation of new adhesions at the leading edge with contraction and detachment at the trailing edge of human microvascular endothelial cells. 70 PAK1 was observed to localize to focal adhesions in fibroblasts, and when stimulated by plateletderived growth factor, PAK1 redistributed into the dorsal and membrane ruffles and into the edges of lamellipodia, where it colocalized with polymerized actin. 71,72 Constitutively active PAK1 induced rapid formation of filopodia and membrane ruffles in mammalian cells 73 and enhanced cell motility, invasiveness, and anchorage-independent growth. 74,75 PAKs appear to control actin filaments turnover and assembly via its downstream target LIM kinase. PAK1 21 phosphorylates LIM kinase on T508, which in turn phosphorylates cofilin to prevent actin depolymerization. 76,77 LIM kinase is also reported to be activated in a PAK2-dependent manner. 78 In addition, PAK1 phosphorylates the p41-ARC subunit of the Apr2/3 complex, which is important for branching of the network of actin filaments in the cell cortex, contributing to extension of the leading edge. 22 PAKs can enhance cell motility by controlling microtubule stability, mediated by the microtubule-destabilizing protein Op18/Stathmin. PAK1 mediates the phosphorylation of Stathmin by Rac1 and Cdc42, blocking its ability to destabilize microtubule, resulting in net growth at the cell's leading edge. 79,80 PAK1 can directly phosphorylate the tubulin cofactor B, and this is essential for polymerization of new microtubules. 81 PAKmediated phosphorylation of myosin light chain kinase inhibits phosphorylation of myosin light chain, leading to reduced stress fiber formation. 
23,70,74 For a tumor cell to invade and metastasize, elevated cell motility must be paralleled by destruction and reorganization of the extracellular matrix. 26 Matrix metalloproteinases (MMPs) control the restructuring of the extracellular matrix and mediate human tumor metastasis. 82 Increased activity of PAK1 in breast cancer cells induces expression of MMP1 and MMP3 and increases cell invasion. 83,84 PAK5 regulates breast cancer and glioma cell migration and invasion, possibly through the Egr1-MMP2 signaling pathway. 85,86 PAK4 associates with MMP2, and inhibition of PAK4 by small interfering RNA decreased tumor growth by reducing MMP-2, β3-integrin, and phosphorylated epidermal growth factor levels in tumors. 87 In addition, PAKs are speculated to further facilitate this process by downregulating adhesion junctions to increase cell permeability and by stimulating angiogenesis. 27,[88][89][90][91] PAKs as therapeutic targets The unique central position of PAKs in many intersecting signaling pathways controlling cell proliferation, survival, transformation, and motility (processes frequently deregulated in tumorigenesis), together with the observation of highly upregulated expression of PAKs in various cancers, has validated the rationale for targeting PAKs in the treatment of disease, including cancer. Functional inhibition of PAK1 has been achieved experimentally using a few forms of dominant-negative mutants, aiming to exploit the protein-protein interaction properties of PAKs, including a kinase-dead mutant (K299A) and expression of the AID alone. Either approach has been successful to some degree in suppressing kinase-dependent functions of PAKs both in vitro and in vivo. 55,92 For example, PAK1 transgenic mice expressing a kinase-dead form of PAK1 show amelioration of some symptoms in a mouse model of fragile X syndrome. 93 However, PAKs do have kinase-independent activities, most of which are implicated in scaffolding and cytoskeletal reorganization. Attempts to block their functions in this respect include a PAK1-specific cell-permeable small peptide called WR-PAK18, which contains a PIX-binding site. This peptide was demonstrated to block Ras-induced malignant transformation in fibroblasts. 94 Another small peptide containing a Nck-binding motif effectively disrupted the PAK1-Nck interaction and consequently blocked cell migration, contractility, and tube formation in endothelial cells. 91 Despite this, the use of dominant-negative forms or peptides to inhibit PAK1 still faces questions of specificity and functional redundancy, as it is difficult to delineate the functions of PAK1 from those of PAK2 and PAK3. The next approach is RNA interference. Its advantage is that it overcomes the non-specificity of the dominant-negative forms, providing improved discrimination between the different PAK isoforms. However, the difficulties associated with RNA interference are unoptimized delivery methods, stability, and off-target effects. 95 Early generations of PAK inhibitors focused on ATP-competitive compounds, a typical approach to inhibiting protein kinases. Extensive structural studies have revealed the ATP-binding and substrate-catalysis pocket deeply tucked in the cleft between the N-terminal and C-terminal lobes of PAK1. 43 This catalytic pocket is unusually large and highly flexible and, together with the highly mobile N lobe, presents a challenge in developing a PAK inhibitor.
43 Several broad-range kinase inhibitors demonstrate potent PAK inhibition; however, with poor selection due to the strong similarity between the ATP binding pocket of kinases. 43 Such non-selective compounds have limited use clinically because of undesired side effects. Attempts to achieve higher selectivity identified the bulky ATP antagonist, CEP-1347. Although CEP-1347 inhibited both PAK1 and MLKs and suppressed PAK-dependent growth of Ras-transformed cells, 96 disappointingly, it was later demonstrated to have more selectivity for MLK3 and poor potency for PAK1, with an IC 50 (half maximal inhibitory concentration) value of more than 1 µM, which is too low to be of clinical importance. 97 Further attempts to exploit the capacious ATP binding pocket of PAKs identified an organoruthenium compound F172, which showed improved selectivity and potency for PAK1, with an IC 50 value of around 100 nM. 98 Nevertheless, compounds based on organometal conjugates like this usually display poor solubility and high toxicity, so their use in clinical settings is questionable. 27,43 An ATP-competitive pyrrolopyrazole pan-PAK inhibitor, PF-3758309, was the first PAK inhibitor to enter clinical trials and was developed by Pfizer. 99 Although designed as a PAK4 inhibitor, it effectively inhibits all PAK members in addition to other off-target protein kinases. PF-3758309 has highest affinity for PAK4 (Kd 2.7 nM), followed by PAK1, PAK5, and PAK6 (Kd 14-18 nM), and lowest affinity for PAK2 and PAK3 (Kd 100-190 nM). Preclinical evaluation demonstrated great potency of PF-3758309 in suppressing proliferation of a panel of tumor cell lines, with an impressive IC 50 of less than 10 nM and high antitumor effects in human xenograft tumor models. 100 However, a clinical trial in patients with advanced solid tumors was prematurely terminated in Phase I due to low bioavailability, lack of responses in some instances, and in particular adverse effects 43 (http://clinicaltrials.gov/show/ NCT00932126). Recently, Licciulli et al discovered a group I-specific ATP-competitive PAK inhibitor, FRAX597, 101 and illustrated selectivity and potency of FRAX597 against group I PAKs, with a biochemical IC 50 value of 7-20 nM, while group II PAKs were spared from inhibition even at a concentration higher than 10 µM. Despite this, FRAX597 also exhibits potency against other kinases, especially the receptor tyrosine kinases. FRAX597 effectively blocked in vitro growth of NF2-deficient Schwann cells as well as tumor formation involving these cells in a xenograft model. In a Kras-driven skin cancer model, treatment with FRAX597 inhibited tumorigenesis associated with near-abolished PAK activation, the reduction of total PAK1 and PAK2 levels and the reduced activity of both ERK and AKT to a level comparable with that of PAK1 knockout mice. 53 These data highlight the importance of PAK1 signaling cross-talk with the two Ras effector pathways. Interestingly, the decreased PAK1 and PAK2 expression but not their kinase activity is abolished in cells treated with the proteasome inhibitor, MG132, suggesting that FRAX597 has a dual inhibitory effect on group I PAKs, ie, acting as an ATP-competitive inhibitor and a destabilizing agent. 27,53 It is worth noting that the inhibitory effects observed in the two models here are consistent with the pan-PAK inhibitor, PF-3758309, yielding similar results. Although both FRAX597 and PF-3758309 have off-target effects on other kinases, these targets are largely non-overlapping. 
This suggests that the similar antitumor effects exerted by both compounds can be, at least in part, attributed to inhibition of PAKs. In addition, Zhang et al identified LCH-7749944 as a potent inhibitor of PAK4, 102 with a much weaker inhibitory effect against PAK1, PAK5, and PAK6. They predicted a binding model of LCH-7749944 in the PAK4 ATP-binding pocket, suggesting that LCH-7749944 acts as an ATP-competitive inhibitor. They further reported that LCH-7749944 inhibited gastric cancer growth and migration/invasion via suppression of PAK4-mediated signaling pathways. ATP-competitive compounds tend to have a wide breadth of off-target effects in addition to their intended target, owing to the highly conserved catalytic pocket present in all kinases; notably, the successful ATP competitor imatinib (Gleevec®) 3 achieved high kinase selectivity by binding to a less conserved region near the ATP-binding pocket, which accentuated the feasibility of developing inhibitors that interact with less conserved regions of the kinase. This alternative approach seems especially appealing for group I PAKs, which are tightly regulated by autoinhibition, presenting an opportunity for allosteric inhibition during the multistep conformational changes that accompany kinase activation. Indeed, Deacon et al reported the discovery of a small-molecule allosteric inhibitor, IPA-3, 103 which selectively inhibited group I PAKs, with greatest potency for PAK1, displaying isoform selectivity within group I. Mechanistically, IPA-3 was found to bind covalently to the AID, preventing binding of Cdc42 to PAK and thus effectively blocking Cdc42-triggered conformational changes and subsequent activation of PAKs. 104 In support of this finding, IPA-3 was essentially ineffective in inhibiting preactivated PAKs. As expected, group II PAKs are insensitive to IPA-3 because their activation is not regulated by an AID and is therefore Cdc42-independent. In addition, IPA-3 showed a very limited inhibitory effect on a panel of diverse kinases, demonstrating high kinase specificity. Regrettably, the disulfide covalent bond that is critical for the IPA-3 inhibitory effect can be reduced in the cytoplasm by reducing agents, thereby reversing the binding of IPA-3 to, and inhibition of, PAKs. These limitations render IPA-3 unsuitable for clinical use. Nonetheless, allosteric inhibition provides an approach from a different angle, ie, targeting the PAK activation mechanism rather than its kinase activity. Future prospects A substantial amount of research has been conducted to decipher the roles of PAKs in a range of diseases, with a focus on cancer. Increasing experimental evidence affirms PAKs as potential drug targets, based on these observations: PAKs are frequently upregulated in various forms of human cancer, with their overexpression correlated with disease progression. PAKs mediate the effects of major signaling pathways that are often deregulated during cell transformation, such as the Ras/Raf/MEK/ERK, PI3K/AKT/mammalian target of rapamycin and Wnt/β-catenin pathways. PAKs occupy a unique central position where many oncogenic signaling pathways intersect, enabling cross-talk between them. PAKs also have an established role in many cellular processes that are the hallmarks of cancer initiation, growth, and metastasis. Thus, inhibition of PAKs is attracting increasing interest. The search for PAK inhibitors started in the late 1990s and has intensified over the years.
Despite this, only a handful of inhibitors have been developed to date, owing to the challenges mentioned in the previous section. The currently available inhibitors still face issues of kinase selectivity and isoform specificity. Such limitations preclude the advancement of these compounds to clinical trials. In cancer, the ideal target should have a critical role in disease progression and persistence while being more dispensable for the survival of the whole organism, so as to enable a more cancer-targeted effect that is tolerable by the subject undergoing treatment. In the case of PAK1, PAK1 knockout mice develop an overall normal phenotype with normal life expectancy and health, 105 suggesting that PAK1-targeted therapy may be well tolerated by the patient. However, given the highly conserved features among the PAKs, especially group I PAKs, isoform-specific targeting of PAKs may trigger compensatory effects by other isoforms due to functional redundancy. This has a more serious implication in the treatment of cancer, ie, acquired resistance to therapy. As PAKs also play important physiological roles, targeting a single isoform may cause fewer side effects than targeting all of them. After all, combined knockout of all PAK isoforms has not been achieved, and the consequences are therefore still unknown. On the other hand, inhibition of multiple PAK isoforms may be necessary to circumvent therapeutic resistance caused by functional redundancy, thereby increasing the risk of side effects. The current understanding of the functions of PAKs has advanced and continues to improve. Nonetheless, there are still gaps in our knowledge that need to be filled, such as the precise underlying mechanism of PAK-targeted therapeutic resistance and the identification of substrates unique to specific isoforms. Such knowledge will help to advance the development and improvement of PAK inhibitors that can conquer the current obstacles of isoform specificity in parallel with functional redundancy, off-target effects, and acquired resistance to anti-PAK therapies. Disclosure The authors report no conflicts of interest in this work.
Efficacy, safety and population pharmacokinetics of sapropterin in PKU patients <4 years: results from the SPARK open-label, multicentre, randomized phase IIIb trial Background Sapropterin dihydrochloride, a synthetic formulation of BH4, the cofactor for phenylalanine hydroxylase (PAH, EC 1.14.16.1), was initially approved in Europe only for patients ≥4 years with BH4-responsive phenylketonuria. The aim of the SPARK (Safety Paediatric efficAcy phaRmacokinetic with Kuvan®) trial was to assess the efficacy (improvement in daily phenylalanine tolerance, neuromotor development and growth parameters), safety and pharmacokinetics of sapropterin dihydrochloride in children <4 years. Results In total, 109 male or female children <4 years with confirmed BH4-responsive phenylketonuria or mild hyperphenylalaninemia and good adherence to dietary treatment were screened. 56 patients were randomly assigned (1:1) to 10 mg/kg/day oral sapropterin plus a phenylalanine-restricted diet or to only a phenylalanine-restricted diet for 26 weeks (27 to the sapropterin and diet group and 29 to the diet-only group; intention-to-treat population). Of these, 52 patients with ≥1 pharmacokinetic sample were included in the pharmacokinetic analysis, and 54 patients were included in the safety analysis. At week 26 in the sapropterin plus diet group, mean phenylalanine tolerance was 30.5 (95% confidence interval 18.7–42.3) mg/kg/day higher than in the diet-only group (p < 0.001). The safety profile of sapropterin, measured monthly, was acceptable and consistent with that seen in studies of older children. Using non-linear mixed effect modelling, a one-compartment model with flip-flop pharmacokinetic behaviour, in which the effect of weight was substantial, best described the pharmacokinetic profile. Patients in both groups had normal neuromotor development and stable growth parameters. Conclusions The addition of sapropterin to a phenylalanine-restricted diet was well tolerated and led to a significant improvement in phenylalanine tolerance in children <4 years with BH4-responsive phenylketonuria or mild hyperphenylalaninemia. The pharmacokinetic model favours once per day dosing with adjustment for weight. Based on the SPARK trial results, sapropterin has received EU approval to treat patients <4 years with BH4-responsive phenylketonuria. Trial registration ClinicalTrials.gov, NCT01376908. Registered June 17, 2011. Electronic supplementary material The online version of this article (doi:10.1186/s13023-017-0600-x) contains supplementary material, which is available to authorized users. Background Hyperphenylalaninemia (HPA) is a rare inherited metabolic disorder caused by reduced activity of the hepatic enzyme phenylalanine hydroxylase (PAH, EC 1.14.16.1), which catalyses the conversion of phenylalanine (Phe) to tyrosine. Most cases of HPA (98%) in North American and European populations are due to mutations in the PAH gene but, in rare cases of HPA (1-2%), the cause can be a defect in the metabolism of the natural PAH cofactor, the R diastereoisomer of tetrahydrobiopterin (BH 4 ) [1][2][3]. Owing to the reduced activity of PAH due to either mechanism, patients with HPA have an accumulation of Phe in the blood and body tissues and a relative deficiency of tyrosine and subsequent metabolites such as epinephrine [4,5]. The therapeutic range of Phe concentration varies according to different guidelines [8,9], and there is no international consensus. 
The US diagnostic and management guidelines recommend that the initiation of treatment for PKU should be undertaken as early as possible, preferably within the first week after birth, with a goal of having blood Phe in the range 120-360 μmol/L within the first 2 weeks of life, to prevent permanent neurological damage [10]. The European guidelines recommend target concentrations of 120-360 μmol/L for individuals aged 0-12 years and for maternal PKU [11]. In both, this is largely achieved by a natural proteinrestricted diet and Phe-free synthetic amino-acid supplementation [10,11]. However, adherence to a Phe-restricted diet is burdensome owing to the need for long-term dietary counselling and daily micronutrient supplementation [12]. The management guidelines also stipulate that a course of treatment with BH 4 should be investigated [10,11]. Sapropterin dihydrochloride (sapropterin, Kuvan®, Merck, Geneva, Switzerland, an affiliate of Merck KGaA, Darmstadt, Germany, and BioMarin, Novato, CA, USA) is a synthetic formulation of BH 4 that has been shown to be effective in lowering serum Phe concentrations and/or improving dietary Phe tolerance in a subset of patients with PKU or mild HPA who respond to treatment with BH 4 (known as responders) and in the rare patients with a defect in BH 4 synthesis [12]. Based on the results of the SPARK (Safety Paediatric efficAcy phaRmacokinetics with Kuvan®) study, the European Medicines Agency has recently extended the indication for sapropterin from the treatment of BH 4 -responsive PKU in adults and children aged ≥4 years and in all BH 4 -deficient adults and children [12,13] to now include children with BH 4 -responsive PKU <4 years old, for whom the previous standard of care was a Phe-restricted diet. The primary aim of the SPARK study was to evaluate the efficacy (increase in Phe tolerance, defined as the amount of Phe a patient may consume while maintaining blood Phe concentrations within the target range of 120-360 μmol/L); safety of 26 weeks of treatment with sapropterin dihydrochloride plus a Phe-restricted diet compared with a Phe-restricted diet alone in children <4 years of age with BH 4 -responsive PKU or mild HPA; to document the relationship between exposure and response; and to support the posology in this age-group. Although population pharmacokinetic (PopPK) data for sapropterin have been published for infants and young children in the USA and Canada [14], there are no PopPK data for sapropterin in this age range in the European Union (EU); therefore, a secondary aim of SPARK was to develop a PopPK model for sapropterin in this population. The other secondary endpoints were to document the concentrations of blood Phe during the study and extension periods, to document the change in dietary Phe tolerance, and to monitor blood pressure, growth parameters, and neuromotor developmental milestones. Study design The SPARK trial (NCT01376908) is a 26-week openlabel, multicentre, randomized phase IIIb study to assess the efficacy, safety and PopPK of sapropterin in patients aged <4 years with BH 4 -responsive PKU or mild HPA. SPARK was conducted at 22 sites in nine countries: Austria (n = 2), Belgium (n = 2), Czech Republic (n = 1), Germany (n = 4), Italy (n = 5), Netherlands (n = 2), Slovakia (n = 3), Turkey (n = 1) and the United Kingdom (n = 2). 
The study was performed in accordance with the protocol and subsequent protocol amendments and with the ethical principles laid down in the Declaration of Helsinki, in accordance with the International Conference on Harmonisation (ICH) Note for Guidance on Good Clinical Practice (ICH Topic E6, 1996) and applicable regulatory requirements. The local ethics committee/institutional review board at each of the participating centres approved the protocol. Patients Male or female patients aged <4 years at randomization were eligible for entry into the study if they had participated in the screening protocol <42 days before study day 1; had a confirmed diagnosis of mild HPA or PKU (a defined level of Phe tolerance consistent with a diagnosis of PKU, ≥2 previous blood Phe concentrations ≥400 μmol/L obtained on two separate occasions); were responsive to BH4 (a decrease of >30% in Phe concentrations following a 20 mg/kg BH4 challenge of at least 24 h); and had good adherence to dietary treatment with maintenance of blood Phe concentrations within the therapeutic target range (120-360 μmol/L) for 4 months prior to screening or, alternatively, at least the last four Phe values (from either venous blood or dried blood spots) were to be assessed, of which 75% had to be within the above therapeutic range. Patients were excluded if they had used sapropterin or any preparation of BH4 within the previous 30 days (unless for the purposes of the BH4 responsiveness test), had known hypersensitivity to sapropterin, its excipients, or other approved or non-approved formulations of BH4, or had a previous diagnosis of BH4 deficiency. Patients' parent(s)/guardian(s) gave written informed consent for participation in the study before any trial-related procedures were performed. Parent(s) and/or guardian(s) had to be willing to comply with all study procedures, maintain strict adherence to the diet, and be willing and able to provide written, signed informed consent after the nature of the study had been explained and prior to any study procedures. Where required, separate informed consent was obtained from the patients' parents or guardians to obtain samples for pharmacokinetic analysis. Randomization On study day one, patients were randomly assigned 1:1 to 10 mg/kg/day oral sapropterin dissolved in water to be taken with breakfast (after 4 weeks, sapropterin could be increased to 20 mg/kg/day if Phe tolerance had not increased by >20% vs. baseline) plus a Phe-restricted diet, or only a Phe-restricted diet, for 26 weeks. After study completion, patients were eligible to enrol in a 3-year extension period (to be reported separately), during which all patients received sapropterin plus a Phe-restricted diet (Fig. 1).
Fig. 1 Patient disposition. *Two of the randomized patients withdrew consent after randomization; no safety assessments were performed during the study period.
Efficacy assessments The primary outcome was an improvement in dietary Phe tolerance, defined as the daily amount of Phe (mg/kg/day) that could be ingested while sustaining mean blood Phe concentrations within a target range of 120-360 μmol/L by dietary Phe adjustments following an algorithm (Table 1). An additional supportive analysis was performed, in which dietary Phe tolerance was based on the Phe intake reported in a 3-day Phe diet diary used to monitor adherence to the Phe-restricted diet.
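As an illustration only (the function name and inputs are hypothetical, not part of the study protocol), the BH4-responsiveness criterion used for eligibility above, a >30% decrease in blood Phe after a 20 mg/kg loading test, amounts to the following check:

def is_bh4_responsive(phe_before_umol, phe_after_umol, threshold=0.30):
    # Responder: blood Phe fell by more than `threshold` (here 30%)
    # following the 20 mg/kg BH4 loading test.
    decrease = (phe_before_umol - phe_after_umol) / phe_before_umol
    return decrease > threshold

# Example: a fall from 450 to 290 umol/L is a ~35.6% decrease -> responsive
print(is_bh4_responsive(450.0, 290.0))  # True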
Analysis and adjustment of dietary intake were performed by the investigator and/or an experienced dietician/nutritionist every 2 weeks during the study, according to the study algorithm. Blood Phe concentrations were measured twice weekly via dried blood spot cards using a high-performance liquid chromatography/tandem mass spectrometry method for Phe detection. The results were verified every 3 months using venous blood plasma. Blood Phe samples could be obtained more frequently at the investigator's discretion. Secondary endpoints included neuromotor development and physical growth parameters (height or length, weight and maximal occipital-frontal head circumference). Neuropsychological development was assessed using the adaptive behaviour composite score of the Bayley-III and the social-emotional composite score of the WPPSI-III, although these results are not reported in this manuscript.

Pharmacokinetic analysis
The PopPK analysis population comprised all randomized subjects with ≥1 pharmacokinetic sample. PopPK parameters were apparent clearance (CL/F), apparent volume of distribution (V/F), absorption rate constant (K a), and endogenous BH 4 (C0). These were used to compute the area under the curve (AUC 0-∞), peak serum concentration (C max), time of C max (T max), and half-life (t 1/2). Plasma samples were collected for endogenous BH 4 measurement at baseline and sparsely thereafter between weeks 5 and 12 after oral administration of sapropterin 10 mg/kg/day. To ensure that the sparse pharmacokinetic sampling provided sufficient information and that samples were taken at informative times, the sampling had been planned using D-optimization [15]. During this process, competing maturation functions were considered [16,17]. PopPK modelling was conducted using NONMEM® (software version 7, level 2; Icon Development Solutions, Ellicott City, MD, USA) with standard model building and evaluation approaches. Covariates, including age, weight and sex, were evaluated using standard methodology to determine whether these factors were predictive of BH 4 pharmacokinetics. The final model was subsequently used to derive metrics of exposure and to determine the exposure relative to adult PKU patients.

Laboratory assessments
All standard blood chemistry, hematologic and urine analyses, as well as specialized testing for Phe and tyrosine concentrations, were performed at a central laboratory.

Safety analysis
The safety population consisted of all subjects who had some safety assessment data available. Safety was assessed at the clinic on a monthly basis during the 26-week study period or until 4 weeks post-treatment, by recording, reporting and analysis of baseline medical conditions, adverse events (AEs) and physical examination findings (including vital signs). Standard blood chemistry, hematologic and urine analyses were performed every 3 months during the study period for safety analysis.

Genotype analysis
PAH genotype data were collected at screening for enrolled patients, after separate informed consent was obtained from the patients' parents or guardians. Genotype testing was performed by a central laboratory.

Statistical analyses
The primary efficacy analysis population was the intention-to-treat (ITT) population, comprising all randomized patients. The per-protocol (PP) population included all ITT patients who completed the study with no prohibited concomitant medication and without major protocol deviation.
A missing pre-study Phe tolerance, lack of adherence to the Phe-restricted diet over the past 3 months, lack of adherence to sapropterin, and a sapropterin dose adjustment not conducted per protocol were considered major protocol deviations leading to exclusion from the PP population. The safety population comprised all patients with safety assessment data available (≥1 visit for vital signs, AEs or laboratory results) who had received ≥1 dose of sapropterin or were randomly assigned to the Phe-restricted diet alone. The sample size was planned to be 23 patients per group, to ensure a power of 80% to demonstrate a treatment group difference, assuming a dietary Phe tolerance of 20 mg/kg/day under dietary therapy alone, a difference of 75% with the sapropterin plus diet group, and a common standard deviation of 17.5 mg/kg/day. To compensate for possible dropouts, a total of 50 subjects were to be randomized. Dietary Phe tolerance was analyzed using repeated-measures analysis of covariance (ANCOVA) on the observed records for the ITT population, with baseline Phe tolerance, treatment group, age group, visit, baseline blood Phe concentration and treatment-by-visit interaction as fixed effects. Secondary endpoints were described using summary statistics. Non-linear mixed-effect modelling (NONMEM® software version 7, level 2) was applied to estimate the pharmacokinetic parameters and their variability. The final model was evaluated using a number of methods, including bootstrapping and visual predictive checks, as conducted previously in children aged 0-6 years [18]. To evaluate the differences in exposure expected from the original model and the current model, simulated concentration-time profiles for the reference subject were generated.

Results
Patient disposition and demographics
In total, 109 patients were screened (Table 2 and Fig. 1). The overall mean adherence to sapropterin (defined as the proportion between the actual dose administered and the prescribed dose) over the study was 100% (range 82 to 107%). Most patients (n = 25, 92.6%) continued on 10 mg/kg/day after 4 weeks of treatment, with only two patients switching to 20 mg/kg/day. The overall mean (±SD) adherence to diet, as assessed by a 3-day food diary, was 94.6±9.4% (range 69 to 111%) in the sapropterin-treated group and 92.1±23.8% (range 65 to 183%) in the diet-only treated group.

Blood Phe concentrations
Phe concentrations from dried blood spots were lower than those from venous blood, but this was consistent with differences reported in the literature [19-21]. In the Phe-restricted diet group, the adjusted mean blood Phe concentrations in the ITT population were stable over time, with a mean (±SD) increase of 23.1 (±21.9) μmol/L at week 26 (Fig. 2b). In the sapropterin plus Phe-restricted diet group, the mean (±SD) blood concentrations decreased by 110.7 (±20.1) μmol/L at week 4 and gradually returned to concentrations similar to those seen in the Phe-restricted diet group, reflecting the increase in Phe intake and Phe tolerance. At week 26, the adjusted mean (±SD) blood Phe concentrations were similar: 300.1 (±115.2) μmol/L in the sapropterin plus Phe-restricted diet group and 343.3 (±118.4) μmol/L in the diet-only group (adjusted between-group difference 33.2 μmol/L [95% CI −94.8, 28.4], p = 0.290). It is important to note that patients were expected to maintain blood Phe concentrations within this range; therefore, differences in blood Phe concentrations were not anticipated.
The observed proportion of patients with blood Phe concentrations maintained in the range 120-360 μmol/L throughout the whole study was greater in the sapropterin plus Phe-restricted diet group (n = 9/27, 33.3%) than in the diet-only group (n = 3/29, 10.3%). Twenty-one of 27 (77.8%) sapropterin-treated patients and 15 of 27 (55.6%) patients on the Phe-restricted diet alone had ≥1 blood Phe concentration at or below the 120 μmol/L threshold established by the British PKU Registry [22]. However, very few instances of Phe concentrations below the normal-range thresholds of 40 and 26 μmol/L were observed during the study.

Change from baseline in dietary Phe tolerance
The mean change in dietary Phe tolerance between baseline and the last Phe tolerance observation was assessed within each treatment group. The mean (±SD) change from baseline to week 26 in patients receiving sapropterin plus the Phe-restricted diet was 36.9 (±27.3) mg/kg/day (p < 0.001). The mean change from baseline in patients on the Phe-restricted diet alone was 13.1 (±19.6) mg/kg/day (p = 0.002).

Pharmacokinetic analysis
The pharmacokinetic data are best described by a one-compartment model with first-order input following a time lag and first-order elimination, with an endogenous baseline BH 4 concentration component. The model included terms describing between-subject variability on apparent clearance (CL/F) and apparent volume of distribution (V/F), as well as their correlation (Table 3). The final model parameter estimate was 2780 L/h for CL/F, 3870 L for V/F, and 0.234 h−1 for K a. From the model, an elimination half-life of approximately 1 h can be computed, with an absorption half-life (ln2/K a) of approximately 3 h, suggesting flip-flop kinetics, in which absorption becomes the rate-limiting step of drug disposition. Body weight was the only covariate that affected the CL/F and V/F of sapropterin: these variables increased in a nonlinear manner with increasing weight, although individual predictions still varied around the typical individual predictions (Fig. 3). At the lowest extreme of weight, a 5 kg patient had a CL/F value 11% of that of a 70 kg reference adult, and a V/F value 22% of that of the reference adult (Table 4). Even after inclusion of weight in the pharmacokinetic model, significant between-subject variability in CL/F and V/F remained, supporting an adaptive approach to individual treatment. Simulated concentration-time curves following sapropterin 10 mg/kg show that sapropterin concentrations remain above the model-estimated endogenous BH 4 concentration (12.6 μg/L; Table 3) over the dose interval for patients with different weights (Fig. 4). Overall, the exposure across all age groups is comparable, although the number of patients in each age group is small. The exposure in pediatric patients was lower than the expected exposure in adults, based on the simulated concentration-time profiles following the 10 mg/kg/day dose across a range of body weights. This analysis shows that the concentrations remain above the endogenous concentration, which is set at a concentration below that of a person not diagnosed with PKU, for a daily dose interval, and supports the current approach to treatment as conservative (Fig. 4).

Safety
The safety population comprised 54 patients; two of the randomized patients withdrew consent after randomization and were therefore excluded from the safety population (Fig. 1).
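The half-lives quoted in the pharmacokinetic analysis above follow directly from the reported parameter estimates. The sketch below is a minimal check, assuming the standard one-compartment relationships k_e = CL/V and t1/2 = ln 2 / k; it illustrates the arithmetic only and is not the study's NONMEM® code.

```python
import math

CL_F = 2780.0  # apparent clearance, L/h (final model estimate)
V_F = 3870.0   # apparent volume of distribution, L
ka = 0.234     # absorption rate constant, 1/h

ke = CL_F / V_F                 # elimination rate constant, ~0.72 1/h
t_half_elim = math.log(2) / ke  # ~0.96 h, i.e. "approximately 1 h"
t_half_abs = math.log(2) / ka   # ~2.96 h, i.e. "approximately 3 h"

# ka < ke: absorption is slower than elimination, the signature of
# flip-flop kinetics, where absorption limits the terminal decline.
print(f"elimination t1/2 = {t_half_elim:.2f} h, absorption t1/2 = {t_half_abs:.2f} h")
```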
All patients in the safety population reported at least one AE (Table 5); in the sapropterin plus Phe-restricted diet group, eight out of 27 patients (29.6%) reported at least one treatment-emergent AE (TEAE) classified as related to sapropterin. The proportion of patients reporting TEAEs was the same in the two groups, and no patients withdrew owing to AEs. None of the TEAEs were graded as severe. All patients had at least one TEAE that was judged to be mild in severity. Seven (25.9%) patients in the sapropterin plus Phe-restricted diet group had nine TEAEs graded as moderate in severity, and eight (29.6%) patients in the Phe-restricted diet group reported 18 TEAEs graded as moderate in severity. The most common TEAEs in the sapropterin plus Phe-restricted diet group and in the Phe-restricted diet group were pyrexia (63.0 and 66.7%), cough (48.1 and 48.1%) and nasopharyngitis (48.1 and 40.7%), respectively. The most common TEAEs classified as related to sapropterin were amino acid concentration decrease (six patients [22.2%]), rhinitis and vomiting (two patients each [7.4%]), and pharyngitis, diarrhea, abdominal pain, mouth ulceration and increased amino acid concentration (one patient [3.7%] each). Although the proportion of patients who reported a serious AE (SAE) was higher in the sapropterin plus Phe-restricted diet group than in the Phe-restricted diet group (11.1 vs. 3.7%), all SAEs were assessed as unrelated to sapropterin treatment (Table 5).

(Table 5 notes: On the day of first administration of study treatment, one subject had a sapropterin overdose (severity: mild; 80 mg/day instead of 75 mg/day by mistake). At 26 days after the first administration of study treatment, the subject had another sapropterin overdose (severity: mild; 80 mg/day instead of 75 mg/day by mistake). Both events were reported in accordance with the protocol and were therefore categorized as medically important; the subject recovered without sequelae from both events. Sapropterin plus the Phe-restricted diet was continued without change after the first overdose, and the dose was reduced after the second overdose. An AE was defined as any untoward medical occurrence in a subject or clinical investigation subject administered a pharmaceutical product, which did not necessarily have a causal relationship with trial treatment; an SAE was any untoward medical occurrence that, at any dose, resulted in death; was life-threatening; might have caused death if it had been more severe; required inpatient hospitalization or prolongation of existing hospitalization; resulted in persistent or significant disability/incapacity; was a congenital anomaly/birth defect; or was otherwise considered medically important.)

Genotype data
Of the 109 patients who were screened, 73 agreed to participate in the pharmacogenetics sub-study. Of these 73 patients, 36 were screening failures, leaving genotype data for 37 responders (Additional file 1: Table S1).

Neuromotor development and growth parameters
Most patients in both treatment groups had normal neuromotor development, including fine motor, gross motor, language, and personal and social function, and there were no statistically significant differences between treatment groups in any of the neuromotor developmental milestones at baseline, 12 and 26 weeks (Additional file 1: Figure S1). Patients in both treatment groups had stable growth parameters, including body mass index SD score (SDS), height SDS, maximum occipital-frontal head circumference SDS and weight SDS.
There were no statistically significant differences between the treatment groups for any of the growth parameters.

Discussion
In PKU, blood Phe concentrations need to be controlled from birth to prevent the neurological sequelae linked to PKU, such as cognitive impairment and mild-to-severe intellectual disability [5,7]. Until July 2015 there was no licensed pharmacological treatment available in the EU for children with PKU aged <4 years, and the standard of care was a Phe-restricted diet. The results of the SPARK study, the first clinical trial of sapropterin in patients aged 0-4 years with BH 4-responsive PKU or mild HPA in Europe, showed that daily dosing with 10 or 20 mg/kg/day sapropterin in combination with a Phe-restricted diet led to a statistically and clinically significant improvement in dietary Phe tolerance at week 26 compared with a Phe-restricted diet alone, while maintaining mean blood Phe concentrations within the protocol-specified range. These results were consistent with those seen in children aged 4-12 years treated with 20 mg/kg/day sapropterin, in whom the mean amount of Phe supplement tolerated had increased at 10 weeks of treatment [23]. The results were also consistent with those reported in a study from the USA and Canada in children aged 0-6 years, in whom 20 mg/kg/day sapropterin treatment lowered blood Phe concentrations, enabling, in some cases, an increase in dietary Phe intake [24]. The benefits of initiating sapropterin therapy in patients younger than 4 years have been highlighted by a post-marketing study conducted in Japan between 1995 and 2001, which reported that all patients who started treatment with sapropterin before the age of 4 years maintained serum Phe concentrations within the recommended range for the duration of the study [25]. Previous reports have shown that neurocognitive function was preserved and no neurodevelopmental penalty was reported in patients who started sapropterin therapy between 0 and 6 years of age [24], and that treatment with BH 4 may enable relaxation of the dietary regimen, leading to improved quality of life [26]. Patients with mild HPA, who comprised almost half of the population in this study, retain substantial enzyme activity and will therefore likely respond to sapropterin treatment. However, the indication for treatment of mild HPA differs between countries owing to weak evidence: US guidelines recommend treatment at a Phe concentration above 360 μmol/L [10], while other countries start treatment at Phe concentrations above 600 μmol/L [27]. In this study, the addition of sapropterin to a Phe-restricted diet in patients <4 years old with BH 4-responsive PAH deficiency significantly improved Phe tolerance compared with a Phe-restricted diet alone. In the sapropterin-treated group, blood Phe concentrations initially fell at the beginning of treatment (4 weeks) but slowly increased over the course of the study, reaching concentrations similar to those at baseline by week 12 (Fig. 2), while dietary Phe intake was increased. The observed increase in Phe tolerance reported in patients on the Phe-restricted diet compared with tolerance at baseline may be explained by the fact that patients in this group were not at their maximum Phe tolerance in daily practice before starting the study. This observation confirms the expectation that, under the tight control of study conditions using a strict algorithm of Phe escalation, dietary Phe tolerance may be further optimized [28].
Because of the potential for Phe concentrations to drop below either the normal or the desired therapeutic concentrations owing to the action of sapropterin, careful monitoring and adjustment of the therapeutic dose and dietary Phe intake were necessary. The pharmacokinetics of BH 4 can be well described by a one-compartment model that respects the principle of parsimony and provides accurate estimates, describing BH 4 profiles virtually identical to those from a two-compartment model evaluated in a previous study [18]. The terminal and absorption half-lives are suggestive of flip-flop pharmacokinetic behavior, in which absorption is the rate-limiting step of drug disposition. Sapropterin exposure was similar across all age groups studied here. With this in mind, a once-daily dosing regimen is justified. Weight was the only covariate that had an effect on the clearance and volume of distribution of sapropterin, meaning that dose adjustments based on weight are appropriate [14]. The secondary endpoints of growth and neuromotor development were considered normal in the patient population throughout the study, and no difference between groups was observed, suggesting no treatment effect on these growth and development parameters. However, the time scale of the study was too short to expect clinically meaningful changes in neuromotor development. The safety profile for sapropterin was acceptable and similar to that reported in studies of patients >4 years old [23] and in those <4 years old [25], with no deaths, severe TEAEs or withdrawals reported. Although four patients had SAEs, none of these was deemed to be related to treatment. The number of TEAEs was similar between the two groups and was commonly associated with normal childhood illness.

Conclusion
In conclusion, the addition of sapropterin to a Phe-restricted diet in patients aged <4 years with BH 4-responsive PKU, mild PKU or mild HPA was well tolerated and led to a significant improvement in Phe tolerance compared with a Phe-restricted diet alone. The pharmacokinetics of sapropterin in patients aged <4 years are adequately described by a one-compartment model and favor once-a-day dosing with dose adjustment for weight. These data led to the approval of sapropterin for individuals with BH 4-responsive PKU or mild HPA aged <4 years, and will thus change treatment management for this subset of patients from the first week of life.

Authors' contributions
… population pharmacokinetic analysis plan and the interpretation of these data. DRM conducted the population PK analysis. FM-S participated in the design of the study and performed the statistical analysis. DR contributed to the review of the data. All authors were involved in the writing, revision and final approval of the manuscript.

Competing interests
ACM has participated in strategic advisory boards and has received honoraria as a consultant and as a speaker from Merck KGaA, Darmstadt, Germany. GG has received support for travel expenses to the SSIEM congress from Merck. FR has participated in advisory boards and has received honoraria as a consultant from Merck KGaA, Darmstadt, Germany. Both DR and FM-S are employees of EMD Serono, Inc, Billerica, MA, USA, a business of Merck KGaA, Darmstadt, Germany. AM is an employee of the Merck Institute for Pharmacometrics, Lausanne, Switzerland, a subsidiary of Merck KGaA, Darmstadt, Germany. AB is an advisory board member for Danone and Merck. PF has received grants from Merck.
CDL has received travel fees and provided data management support. MC has been an advisory board member for, and her institution has received a grant from, Merck. AL-H has received travel grants from Merck. DRM was also a paid consultant of Merck KGaA, Darmstadt, Germany. All authors have received fees as investigators for Merck KGaA, Darmstadt, Germany.

Consent for publication
Consent to publish has been obtained from all parents and guardians.

Ethics approval and consent to participate
The local ethics committee/institutional review board at each of the participating centres approved the protocol. Patients' parent(s)/guardian(s) gave written informed consent for participation in the study before any trial-related procedures were performed. Lead ethics committee: Ethics Committee of the LMU, Munich, Germany (ethikkommission.med.uni-muenchen.de).
Open reduction and internal fixation with plating is beneficial in the early recovery stage for displaced midshaft clavicular fractures in patients aged 30-65 years old

Objectives: Midshaft clavicular fractures are increasingly treated operatively rather than nonoperatively. Studies have shown mixed results for both types of treatment. The aim of this study was to compare the early-stage functional status associated with open reduction and internal fixation (ORIF) with plating and that associated with conservative treatment for displaced midshaft clavicular fractures.
Materials and Methods: A single-center retrospective review of the results of 120 cases of displaced midshaft clavicular fractures in patients aged 30-65 years old was conducted. The primary outcome was fracture union status at 6 months. Other outcomes were subjective shoulder value (SSV) scores, visual analog scale (VAS) scores, and radiographic shortening at 6 weeks, 3 months, and 6 months. The complication rates in the operative and nonoperative groups were recorded.
Results: The delayed union rate at 6 months and the VAS scores at 6 weeks, 3 months, and 6 months postinjury were significantly higher in the conservative treatment group than in the ORIF group. SSV scores were significantly improved at 6 months postinjury in the ORIF group.
Conclusions: This is the first study to discuss the importance of early-stage functional restoration after ORIF with plating for displaced midshaft clavicular fractures. This surgery leads to fewer pain complications in the earlier stages of bone healing and lower delayed union rates compared with conservative treatment in patients aged 30-65 years old.

Introduction
Displaced clavicle fractures constitute one of the most common types of injury in fracture patients [1]. The well-known indications for open reduction and internal fixation (ORIF) include open fractures, fractures in the lateral third region, floating shoulders, fractures combined with multiple ipsilateral rib fractures, and fractures complicated by compromised skin integrity [2]. Since the reported complication rate for ORIF is as high as 34%, displaced middle-third clavicle fractures have mostly been managed conservatively, except for cases with skin tenting through the fracture fragment [3,4]. However, more recent studies have demonstrated high nonunion rates in some conservatively managed subgroups [5,6]. Current studies show that some patients have poor functional results after nonoperative treatment of middle-third clavicular fractures [7]. Although primary plate fixation is a dependable technique for these fractures, its benefit in the early recovery stage remains unclear.

Materials and methods
This retrospective study was approved by the Research Ethics Committee of Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation (IRB106-155-B). This study was conducted in accordance with the Declaration of Helsinki and was approved by the local ethics committee of the institution. Informed written consent was obtained because the study was a retrospective data analysis. The data were collected through electronic chart reviews of patients who experienced displaced midshaft clavicle fractures and came to our institution for treatment and follow-up between April 2014 and December 2016. The inclusion criteria were as follows: (1) a displaced middle-third clavicle fracture (Robinson type 2B1/2B2) [10], (2) age between 30 and 65 years, and (3) a complete record of radiographic and functional evaluations at regular follow-up periods of at least 6 months.
The exclusion criteria were as follows: (1) bilateral clavicle fracture, (2) fracture associated with other fractures or major injuries, (3) another trauma during the follow-up period, and (4) initial injury combined with compromised skin integrity or skin tenting over the fracture fragment. The patients were divided into those who received ORIF after injury (O group) and those who received conservative treatment with outpatient department follow-up (C group). The functional outcomes included the visual analog scale (VAS) score for pain (0-100) [11] and the subjective shoulder value (SSV) score (0-100) [12] at 6 weeks, 3 months, and 6 months postoperatively. A poor radiographic outcome was delayed union or shortening of the fracture site by more than 2 cm compared with the other side at 6 months, based on plain radiography of the clavicle. The rates of complications in the O group were also recorded, including implant failure or indication for revision surgery. The chart reviews and radiographic evaluations were performed by one orthopedic doctor who had not operated on any of the included patients. The fixation method in the O group was ORIF with one neutralization plate through a superior approach and with at least 6-cortex purchase strength by the cortical screws on both sides of the fracture site. After surgery, all patients were provided with a sling for comfort and began an early active mobilization program, although resistance with the limb was avoided for 6 weeks postoperatively. The patients in the C group were protected with a figure-of-eight bandage [13] for 3 months and began an early rehabilitation program 6 weeks after injury. Statistical analyses were performed using SPSS 17.0 (SPSS Inc., Chicago, IL, USA). Statistical significance was set at P < 0.05 in all cases. Comparisons of data between the C group and the O group were analyzed using the independent t-test for continuous variables and the Chi-square test for categorical variables. The risk factors for delayed union in both groups and for revision surgery in the O group were analyzed using the multiple regression method.

Results
Data from 500 patients were reviewed. After excluding cases that did not meet the inclusion criteria, we included 120 patients in the study, comprising 82 men and 38 women. Their mean age was 48.0 years. The patient data are shown in Table 1. Of these fractures, 85.8% were Robinson classification type 2B1, and 46.7% were on the side of the dominant hand. Most patients were nonsmokers and did not have diabetes mellitus (DM). The shortening of the fracture site ratio was 3.3%, the same in both the C and O groups. The primary successful union rate in the O group was 90% (Figure 1). There was no case of wound infection in the O group, and the failure rate of ORIF was 10% (n = 6). Four of these patients had broken plates or screws after falls (Figure 2), and two other patients had loosening of the lateral-side screws after heavy labor that had been prohibited by the surgeon. All received revision ORIF. There was a significantly higher ratio of delayed union at 6 months postinjury in the C group. There was significantly less postinjury shoulder pain and higher SSV scores in the O group. The risk factors associated with delayed union and postinjury shoulder pain VAS scores are presented in Tables 2 and 3, respectively. Only ORIF significantly lowered the risk ratio of these complications.
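As an illustration of the between-group comparisons described in the statistical methods above, the sketch below runs a Chi-square test on a hypothetical 2×2 delayed-union table; the counts are invented for demonstration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts (not from the study): rows = C group, O group;
# columns = delayed union, timely union.
table = [[12, 48],   # conservative treatment
         [ 3, 57]]   # ORIF
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p suggests a group difference
```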
The risk factors related to revision surgery for failed ORIF in clavicle fracture are presented in Table 4, revealing that smoking significantly increased the risk ratio. All 120 patients had achieved successful union at 2 years postoperatively.

Discussion
This retrospective study performed a comparative analysis between the O and C groups and determined the risk factors associated with revision surgery in the O group. A significantly higher delayed union rate at 6 months postinjury was noted in the C group compared with the O group. Research has revealed that the prevalence of nonunion after nonoperative treatment is approximately 10%-15% [7]. The absolute indication for midshaft clavicular fracture ORIF in our institution is skin tenting at the fracture site. (Table 1 notes: Data are presented as n or mean±SD. *P<0.05 was considered statistically significant. NSAID: nonsteroidal anti-inflammatory drug; VAS: visual analog scale score; SSV: subjective shoulder function value; DM: diabetes mellitus; SD: standard deviation.) For other types of midshaft clavicular fracture, we discuss the pros and cons of conservative treatment and surgical treatment in detail with patients and their families. Although primary fracture healing without surgical soft-tissue dissection occurred in the C group, unstable fixation with accidental movement of fracture fragments, owing to the mobility and high frequency of use of the upper limbs, may have caused delayed union or even nonunion of the fracture site. This result may have a considerable effect on functional recovery and the duration of pain in the shoulder region. The duration of nonsteroidal anti-inflammatory drug (NSAID) use was also significantly shorter in the O group. A shorter period of NSAID use is beneficial for union of the fracture site because this medication may have a negative influence on fracture healing [14]. The surgical method used in the O group relied on stable fixation with a neutralization plate and at least 6 cortex fixations on each side of the fracture. This stable construct may considerably aid successful direct bone healing after open reduction, because 90% of the fixations had not failed at 6 months postoperatively. Previous studies reported complication rates for ORIF as high as 34% [3,4], including wound infection, surgical site infection, implant failure, and nonunion. Our complication rate of ORIF in the O group was 18.3%, including 3 cases of delayed union, 2 cases of fracture site shortening, and 6 cases of fixation failure that required revision surgery. Through reduced soft-tissue dissection and an accurate disinfection technique, complications such as surgical site pain, wound infection, and nonunion seemed to be decreased effectively. Multiple regression analysis further revealed that primary ORIF for displaced midshaft clavicular fractures decreased the risks of shoulder pain in the affected limb and poor functional recovery at 6 months postinjury. This may be related to earlier fracture healing and early rehabilitation after ORIF, which is known to be a critical factor for functional restoration of the shoulder girdle. One study revealed no statistically significant differences in any of the functional outcome evaluation methods between early and late fixation of displaced midshaft clavicular fractures, and the authors suggested that operative intervention for clavicle fractures between 3 and 12 weeks postinjury may be safe, with no risk of excess complications [15].
However, our study results indicated improved functional restoration with ORIF for patients aged 30-65 years old, which may assist patients in returning to their preinjury social function, quality of life, and activity level in the early stage. This may be crucial because many of them were the main financial source for their families. Smoking seems to be a significant risk factor for the failure of ORIF for displaced midshaft clavicular fractures: all the patients who smoked required revision surgery. Smoking is known to be a major negative factor in bone and soft-tissue healing [16]. DM has also been found to considerably influence sensitivity to pain and fracture healing [17]. In our study, DM seemed to increase the risk of moderate-to-severe pain postinjury, although this did not reach statistical significance. The limitations on various types of analgesics because of DM nephropathy may be another reason for this result. The limitations of the present study are the small sample size, the short follow-up duration, and the selection bias of surgeons between conservative treatment and surgery. However, this is the first study to reveal the relationship between treatment choice and early-stage function or pain in the affected limb. Furthermore, delayed union does not equal nonunion. In future studies, our goal is to determine the risk factors for nonunion of displaced midshaft clavicular fractures over longer follow-up periods.

Conclusions
ORIF with stable fixation for displaced midshaft fractures seemed to decrease delayed union rates, restore more early-stage function, and decrease early-stage pain for patients aged 30-65 years old. Smoking may have a negative effect on the success rate of the operation, according to the results of this study.

Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patients have given their consent for their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published, and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Pseudoxanthoma Elasticum – Also a Lung Disease? The Respiratory Affection of Patients with Pseudoxanthoma Elasticum

Background
Pseudoxanthoma elasticum (PXE) is an autosomal-recessive mineralisation disorder caused by loss-of-function mutations in the ABCC6 gene. Histological findings and data from an autopsy of a PXE patient suggest possible pulmonary calcification. So far, no clinical data exist on whether PXE patients are actually at high risk of developing a pulmonary disorder.

Methods
In a cross-sectional study, 35 PXE patients and 15 healthy controls underwent pulmonary function testing, including spirometry, body plethysmography and a carbon monoxide diffusing test. Additionally, PXE patients completed a COPD Assessment Test (CAT).

Results
We observed in PXE patients normal values for predicted vital capacity (VC%; 96.0±13.0%), predicted total lung capacity (TLC%; 98.2±12.0%) and predicted forced expiration volume (FEV1%; 102.5±15.6%), whereas compared with healthy controls the PXE group showed significantly diminished values for carbon monoxide diffusing capacity (DLCO, 7.2±1.4 mmol/min/kPa vs. 8.6±1.5 mmol/min/kPa; p = 0.008) and predicted carbon monoxide diffusing capacity (DLCO%; 79.7±11.5% vs. 87.2±6.6%; p = 0.008). 11/35 (31.4%) PXE patients showed pathological DLCO% values under 75% (68.5%±5.4%).

Conclusion
PXE patients demonstrated regular lung function testing, but they nevertheless had impaired CO diffusing parameters, which might be associated with a preclinical state of an interstitial lung disease and a risk for restrictive ventilation disorders.

Introduction
Pseudoxanthoma elasticum (PXE), also known as Grönblad-Strandberg syndrome, is a rare disease with an estimated prevalence of 1:25 000 to 1:100 000 [1]. PXE is an autosomal recessive mineralization disorder [2] caused by several loss-of-function mutations [3] in the ABCC6 gene. The gene encodes a transmembrane ATP-binding cassette transporter [2] on the basolateral surface [4], mainly expressed in the liver [5], which mediates the cellular release of ATP [2] in healthy people. The released ATP is converted into AMP and inorganic pyrophosphate (PPi) within the liver vasculature [2]. In patients with PXE, lower levels of PPi, which is discussed as a mineralization inhibitor [6], have been observed [2]. Histologic samples from patients with PXE showed thickening and calcification of Bruch membrane [1], extracellular calcification around elastic fibers of the carotid arteries [5] and other vessels, as well as mineralisation and fragmentation of mid-dermal elastic fibers [7]. Mineralisation can affect, for example, the eyes, the skin or the cardiovascular system [7]. Typical symptoms of skin manifestations are peau d'orange, yellowish papules and inelastic skin folds, which primarily affect the flexural areas of the body [8]. Common ocular signs of PXE are angioid streaks, retinal pigment epithelium atrophy and characteristic fundus signs [9].
The cardiovascular findings comprise impairment of the elastic properties of the aorta and a higher prevalence of peripheral artery disease and intermittent claudication [8,10]. Renovascular hypertension may also be a finding in PXE patients [11]. As a systemic mineralization disorder, PXE affects elastic fibers [4]. It is therefore reasonable to suggest a possible lung affection with an increased rate of restrictive ventilation disorder. The aim of our study was to screen for interstitial pulmonary disorder, connected with diminished pulmonary function parameters.

Material and Methods
In this prospective registry, 35 consecutive PXE patients underwent pulmonary examinations at the Department for Internal Medicine II (cardiology, angiology and pneumology) of the University Hospital Bonn between October 2014 and October 2015. All patients gave written consent, and the study was approved by the local ethics committee of the faculty of medicine, Rheinische Friedrich-Wilhelms-Universität Bonn. Minimal criteria for the diagnosis of PXE were the finding of two major clinical signs of PXE, such as ophthalmological signs (angioid streaks) or characteristic skin lesions [12], or one clinical sign and two mutations in the ABCC6 gene [11]. All patients showed typical ophthalmological signs, 30 patients had genetically verified evidence of PXE and 26 had skin manifestations. As a control group, 15 healthy volunteers underwent the same set of examinations. The healthy volunteers had no respiratory diseases and a similar distribution of age, weight, height and smoking, as shown in Table 1.

Pulmonary Function Test
All patients underwent a pulmonary function test with spirometry (Flowscreen Jaeger©), body plethysmography (Bodyplethysmograph Jaeger©) and a carbon monoxide (CO) diffusing test of the lung by the single-breath method (Alveo-Diffusionstest Jaeger©). Standard spirometry (vital capacity (VC), forced expiratory volume (FEV1)), body plethysmography (total lung capacity (TLC), residual volume (RV)) and diffusing parameters (DLCO, DLCO/VA) were recorded. The predicted values for all volumes (TLC%, VC%, RV%, FEV1%) and the diffusion parameters (DLCO% and DLCO/VA%) were calculated automatically by the software of the pulmonary function test, using age, sex, height and weight. Reference parameters were compared according to Global Lung Initiative (GLI) reference values for the Tiffeneau index (FEV1/IVC) and inspiratory vital capacity (IVC). Z-scores were calculated after the examination for each patient with the GLI2012 tool (www.lungfunction.org). Diffusing parameters below 75% of the predicted values were considered abnormal [13]. Additionally, PXE patients completed a COPD Assessment Test (CAT) for quality-of-life investigation (http://www.catestonline.org/english/index_German.htm), and hemoglobin (Hb) was recorded from laboratory testing at their general practitioner, not older than 3 months, to exclude changes in lung function testing caused by anemia. None of our patients had signs of anemia, and the mean Hb was 14.17±1.62 g/dl.

Statistical Analysis
Presented values were expressed as mean ± standard deviation, and a significant P-value was defined as <0.05. For distribution analysis of non-parametric variables the Chi² test was applied, while differences in parametric variables were tested with a t-test in the case of an existing Gaussian distribution, whereas the Mann-Whitney U test was applied for non-Gaussian distributions.
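The classification rule and the group comparison described above can be illustrated in a few lines. The sketch below flags DLCO% values under 75% as abnormal and recomputes an approximate two-sample t-test from the published summary statistics (means ± SD, n = 35 vs. n = 15); the published p-value of 0.008 came from the authors' own test choice, so only rough agreement is expected.

```python
from scipy.stats import ttest_ind_from_stats

def dlco_abnormal(dlco_percent_predicted, threshold=75.0):
    """Flag a diffusing parameter below 75% of predicted as abnormal,
    following the criterion stated in the Methods."""
    return dlco_percent_predicted < threshold

print(dlco_abnormal(68.5))  # True: the mean of the 11 pathological patients

# Approximate check of the DLCO group difference from summary statistics.
t, p = ttest_ind_from_stats(mean1=7.2, std1=1.4, nobs1=35,
                            mean2=8.6, std2=1.5, nobs2=15,
                            equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")  # p in the same range as the reported 0.008
```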
Pearson's test was used for parametric correlations, whereas the Spearman rho test was used for non-parametric correlations. Statistical analysis was performed with SPSS 23 for Windows© (SPSS Inc., Chicago, IL, USA).

Discussion
To our knowledge, this is the first study investigating pulmonary affection in patients suffering from PXE. PXE patients had normal lung volumes and no signs of obstructive disorders. In 31.4% of our patients we found pathologically decreased values for DLCO, a very sensitive marker for interstitial lung diseases, and a significantly lower TLC. Additionally, one patient with a manifest restrictive ventilation disorder with abnormally decreased TLC and VC was found. Five other patients also had pathological values for VC under the LLN. All patients with low VC presented a DLCO under 75% of the predicted value or were near the pathological limit. These functional respiratory values correlated strongly with subjective pulmonary impairment, as shown by the CAT score as an assessment of respiratory limitation. Jackson et al. described a PXE patient with histological findings of extensive calcification connected with elastic tissue damage in the lung [14]. In 1996, Yamato et al. published a case report of a lung biopsy in one PXE patient with exercise-dependent dyspnoea; they found small calcified nodules scattered in the alveolar septa [15]. An autopsy report from Miki et al. also demonstrated vascular changes in the lung of a PXE patient, with fragmented, laminated and calcified elastic laminae of the small to medium-sized arteries [16]. The reported histomorphological changes, in the absence of anaemia, could be the underlying process in PXE patients leading to diminished CO diffusing capacity and the trend towards restrictive ventilation disorder. Alternatively, interstitial lung involvement could be due to the mineralisation process of the PXE disease: on the one hand, it could affect the pulmonary tissues directly; on the other hand, damage to the small to medium-sized arteries in the lung could represent another possible pathway. To reveal other risk factors for the development of pulmonary impairment, we performed a subgroup analysis including all patients with DLCO% <75%. This analysis showed no significant differences in the baseline characteristics; therefore, independent risk factors other than PXE for a restrictive ventilation disorder could not be identified in this collective so far. Further investigations with larger patient cohorts should be performed to address this question. Given the rare prevalence, recruiting a patient collective of eligible size might be challenging.

Conclusion
PXE patients had no impairment in spirometry or body plethysmography compared with healthy controls; thus, there is no sign of a clinical manifestation of a lung disease. Nevertheless, we found a significant reduction in diffusing parameters, a sensitive marker for preclinical restrictive lung disorders. To our knowledge, this is the first time a structured lung assessment of a relatively large cohort of PXE patients has been performed, and it is the first report that PXE patients have unremarkable lung function testing findings but are possibly at risk of developing a restrictive ventilation disorder.
Partial flexible job shop scheduling considering preventive maintenance and priorities

In this paper, a new mathematical programming model is proposed for a partial flexible job shop scheduling problem with an integrated solution approach. The purpose of this model is the assignment of production operations to machines with the goal of simultaneously minimizing operating costs and penalties. These penalties include delayed delivery, deviation from a fixed time point for preventive maintenance, and deviation from the priorities of each machine. Considering priorities for machines in partial flexible job shop scheduling problems is a contribution that brings the model closer to the reality of production systems. For validation and evaluation of the effectiveness of the model, several numerical examples are solved using the BARON solver in GAMS. Sensitivity analysis is performed for the model parameters. The results further indicate the relationship between scheduling according to the priorities of each machine and production scheduling.

Introduction
The job shop problem is one of the major issues in production planning. In the flexible job shop problem, it is assumed that each operation is allowed to be processed on a set of available machines. Flexible job shop scheduling is much more difficult than job shop scheduling because, in this case, there is the additional problem of assigning operations to machines.

Wang and Yu (2010) considered a flexible job-shop scheduling problem with machine availability constraints. Each machine is subject to preventive maintenance during the planning period, and the starting times of maintenance activities are either flexible within a time interval or set at a fixed time point. Rajkumar, Asokan, and Vamsikrishna (2010) consider a flexible job shop scheduling problem with maintenance in a time interval and use the greedy randomized adaptive search procedure (GRASP) algorithm. Moradi, Ghomi, and Zandieh (2010) consider a flexible job shop scheduling problem with preventive maintenance over a time interval; makespan is considered as the optimality criterion, and a learnable genetic architecture is used to solve the problem. Moradi, Ghomi, and Zandieh (2011) study a hybrid problem of flexible job shop scheduling and preventive maintenance with a multiobjective optimization approach; in this work, the number of preventive maintenance activities and the maintenance interval are not fixed. Dalfard and Mohammadi (2012) present a new mathematical model for the multi-objective flexible job shop problem with parallel machines and maintenance over a time interval. Li and Pan (2012) proposed an effective discrete chemical-reaction optimization algorithm for solving a flexible job shop scheduling problem with consideration of maintenance activity; a time interval is considered for performing the maintenance. Li, Pan, and Tasgetiren (2014) presented a discrete artificial bee colony algorithm to solve a flexible job shop scheduling problem with maintenance, where a time interval is considered to carry out the maintenance. Ziaee (2014) considers the problem of flexible job shop scheduling with preventive maintenance.
In that work, the preventive maintenance has to be executed within a given time interval, and only one maintenance operation is performed on each machine. Mokhtari and Dadgar (2015) present a mixed integer linear programming model for a flexible job shop scheduling problem with maintenance; a time interval is considered to carry out the maintenance. Thornblad et al. (2015) consider the problem of flexible job shop scheduling with preventive maintenance activities. The start time of maintenance is not fixed, and each preventive maintenance operation has to be executed within a given time interval; they propose a fast iterative approach to solve the problem. Ye and Ma (2015) propose a multi-objective integrated optimization model, based on the concepts of flexible job shop scheduling and preventive maintenance, that minimizes the maximum completion time and the maintenance cost per time unit. Zandieh, Khatami, and Rahmati (2017) developed the problem of flexible job shop scheduling with condition-based maintenance; in this work, the start time of maintenance depends on machine condition, and the maintenance durations differ. El Khoukhi, Boukachour, and Alaoui (2017) present the problem of flexible job shop scheduling with preventive maintenance, where a fixed time is considered to carry out the maintenance and the objective is to minimize the makespan.

According to the literature review presented in this paper, in none of these models is priority given to the choice of machine; allocations are made only according to the competence of the machine. In real production environments, there are always priorities in the selection of machines for assignment to activities, as well as in the length of time each machine is turned on and the number of setups of each machine. These priorities can be due to the costs imposed on the production system by each machine, the difficulty of setting up, the difficulty of working with a machine, and the quality of the products produced by each machine. In the present study, priorities are considered in choosing machines so that the proposed model is closer to the reality of production environments. In papers that have investigated the flexible job shop scheduling problem with preventive maintenance, the start time of preventive maintenance is assumed to be either a fixed time point or a time interval. In actual production environments, taking a fixed time point for maintenance, interrupting production operations to perform repairs at a certain point in time, and sometimes reworking the product are both costly and time-consuming. Also, considering a time interval for maintenance, with no preference among the time points in this interval, is far from the reality of maintenance. Therefore, the following points can be counted as the contributions of this paper to partial flexible job shop scheduling problems: (1) the preferred time to repair is a fixed time point, but a time interval is considered for positive or negative deviation from this time point, and a penalty is set for the positive or negative deviation from this point within this time interval; (2) for each machine, three priorities are considered, including the length of time the machine is turned on in the system, the type of operation assigned to each machine, and the number of setups for each machine, with a penalty for deviating from each of the preferred items of each machine. These assumptions make the model applicable to industrial environments. The rest of the paper is organized as follows.
Section 2 describes the proposed model. In Section 3, several numerical examples are solved, sensitivity analysis is performed and the results are analyzed. Finally, Section 4 presents a summary of the paper, conclusions and suggestions for future work.

Description of proposed model
In this paper, the problem of flexible job shop scheduling with preventive maintenance is studied. The hypotheses considered in this paper are summarized as follows (a small illustration of the stage/state bookkeeping follows this section):
• The time for a production operation varies across the different machines available for that operation;
• There is a setup time for each machine when the product is changed;
• After starting a production operation on a machine, there is no possibility of interruption for another production operation or a preventive maintenance activity;
• All machines and jobs are available at the beginning of the schedule;
• Each maintenance task has a fixed duration and a predefined time interval within which the starting time of the preventive maintenance can be changed;
• There are three priority items for each machine: the duration that each machine is turned on, the operations assigned to each machine and the number of setups;
• For different machines, the preventive maintenance interval, the duration of maintenance and the number of maintenance activities can differ;
• Each machine during the planning horizon is assigned to stages, where a new stage occurs whenever the state of a machine, the type of operation or the type of product changes;
• For a machine that is turned on, there are four possible states at each stage, and the machine is assigned to exactly one of them: the machine is idle, the machine performs a production operation, preventive maintenance is done, or the machine is in the setup state;
• The setup state is considered either when changing jobs on a machine or when the machine is idle at stage p and is assigned to a production operation at stage p+2; then that machine must be in the setup state at stage p+1;
• Each machine can be started only once during the planning period;
• Each machine can perform only one production operation at each stage, and each production operation is performed by only one machine.

The purpose of this mathematical model is to allocate production operations to machines and determine the sequence of production operations on each machine, in order to minimize production costs, the costs of deviation from the preferences of each machine and the cost of deviation from the preset time point for preventive maintenance within a time interval, and to minimize the total delay in the delivery of jobs. First, we introduce the indices, sets, parameters and variables of the model; the objective function and the constraints of the model are then presented.
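Before the formal notation, the stage/state bookkeeping and the setup hypothesis above can be made concrete with a small sketch. This is purely illustrative; the names are ours, not the paper's.

```python
from enum import Enum

class State(Enum):
    IDLE = 0
    PRODUCTION = 1
    MAINTENANCE = 2
    SETUP = 3

def setup_rule_ok(states):
    """Hypothesis check: if a machine is idle at stage p and runs a
    production operation at stage p+2, then stage p+1 must be a setup
    stage. `states` is the per-stage state sequence of one machine."""
    for p in range(len(states) - 2):
        if states[p] is State.IDLE and states[p + 2] is State.PRODUCTION:
            if states[p + 1] is not State.SETUP:
                return False
    return True

print(setup_rule_ok([State.IDLE, State.SETUP, State.PRODUCTION]))  # True
print(setup_rule_ok([State.IDLE, State.IDLE, State.PRODUCTION]))   # False
```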
: Set of machine stages; Npm j : Set of preventive maintenance assigned to the machine j ; P s : Set of stages allowed to start the activity of each machine; J ik : Set of machines that can perform the operation o ik ; Ifev j : Set of production operations that are preferred for machine j ; Iafe j : Set of production operations that are non-preferred for machine ; : Set of the Latest operations of job ; Partial flexible job shop scheduling considering preventive maintenance and priorities Ameneh Farahani, Hamid Tohidi, Mehran Khalaj, Ahmad Shoja Parameters : The cost per time unit when the machine is turned on; : The cost of turning on the machine ; : The cost per time unit for the assignment of the machine to the production operation ; : The cost per time unit for the idle time of the machine ; : The cost per time unit for the setup time of the machine ; : The cost per time unit for the duration of preventive maintenance of the machine ; : Penalty for each positive deviation unit from the number of setups preferred for machine ; : Penalty for each negative deviation unit from the number of setups preferred for machine ; ℎ : Penalty for each positive deviation unit from duration preferred to turn on the machine ; ℎ : Penalty for each negative deviation unit from duration preferred to turn on the machine ; : Penalty for each unit of positive deviation from the start time of preventive maintenance of step machine ; : Penalty for each unit of negative deviation from the start time of preventive maintenance of step machine ; : Penalty for assigning the machine to the production operation that the machine can do it (depending on whether that production operation is preferred for that machine or that production operation is nonpreferred for that machine); : Penalty for each time unit delayed delivered of the product ; : The number of idle stages allowed for the machine ; : The maximum total time of idle allowed for the machine ; : The time of operation on the machine ; : The earliest allowed start time for the preventive maintenance of the step machine ; : The latest allowed start time for the preventive maintenance of the step machine ; Partial flexible job shop scheduling considering preventive maintenance and priorities Ameneh Farahani, Hamid Tohidi, Mehran Khalaj, Ahmad Shoja Variables : The total time length when the machine is turned on; : The total length of time when the machine is assigned to the production operation ; : The total length of time when the machine is idle at the production center; : The total time of setup of the machine ; : The total time length of setup of the machine when changing product; : The total time length of setup of the machine when changing from idle state to the assignment to the production operation; : The total time length of preventive maintenance of the machine ; : The positive deviation from the number of setups preferred for the machine ; : The negative deviation from the number of setups preferred for the machine ; ℎ : The positive deviation from the time of turning on preferred of machine ; ℎ : The negative deviation from the time of turning on preferred for machine ; : The positive deviation from the start time of preventive maintenance of the step of the machine; : The negative deviation from the start time of preventive maintenance of the step of the machine; : Total number of operations assigned to machine (for preferred operation, negative sign and for nonpreferred operation, positive sign is considered); : The number of machines are turned on 
in stage p;
: Total time of delayed delivery of product i;
: The time length of a stage of machine j;
: Binary variable equal to one if machine j is turned on at stage p ∈ P_s and zero otherwise;
: Binary variable equal to one if machine j is turned on at stage p and zero otherwise;
: Binary variable equal to one if machine j is assigned to a production operation at stage p and zero otherwise;
: Binary variable equal to one if machine j is idle at stage p and zero otherwise;
: Binary variable equal to one if machine j is assigned to preventive maintenance at stage p and zero otherwise;
: Binary variable equal to one if machine j is assigned to the setup state at stage p and zero otherwise;
: Binary variable equal to one if machine j is assigned to an activity at stage p and zero otherwise;
: Binary variable equal to one if machine j is assigned to an activity at stage p after an idle stage and zero otherwise;
: Binary variable equal to one if machine j is assigned to a given preventive maintenance step at stage p and zero otherwise;
′h: Binary variable equal to one if machine j is assigned to the setup state in the change from an operation on product i to the operation ′h on product i′ and zero otherwise;
: Binary variable equal to one if machine j is assigned to the setup state in a product change and zero otherwise;
: Binary variable equal to one if machine j is assigned to the setup state after an idle stage and zero otherwise;
: The start time of a stage of machine j;
: The start time of an operation on machine j;
′h: The start time of the setup operation in the change from an operation on product i to the operation ′h on product i′ on machine j;
: The start time of the setup state in the change from the idle state to a production operation;
: The start time of a preventive maintenance step on machine j;
: The start time of idling of machine j in stage p;
: The number of setups of machine j;
: The deviation from the delivery time of product i (both positive and negative);
: Binary variable equal to one if a given preventive maintenance step of machine j must be assigned to that machine and zero otherwise;
: The idle time of machine j in stage p;
: The objective function value

The objective function

The objective function consists of the sum of eleven terms, all of cost type:
Z = Z1 + Z2 + Z3 + Z4 + Z5 + Z6 + Z7 + Z8 + Z9 + Z10 + Z11 (1)

These terms are as follows:
1. The cost of the length of time that machines are turned on at the production center;
2. The cost of the number of machines that are started;
3. The cost of assigning machines to production operations;
4. The idle cost of machines;
5. The setup cost of machines;
6. The cost of preventive maintenance of machines.
The penalties for deviation from the preferences of each machine comprise:
7. The penalty resulting from the (positive or negative) deviation from the preferred number of setups of each machine;
8. The penalty resulting from the (positive or negative) deviation from the preferred turned-on duration of each machine (if it is turned on);
9. The penalty imposed by the assignment of machines to production operations;
10. The penalty for the (positive or negative) deviation from the start time of each preventive maintenance step of each machine;
11. The penalty for delayed delivery of each product.

Constraints
1. The number of machines that start their activity at each allowed stage;
2. Each machine can start its activity only once during each production period;
3. A machine can be turned on in the first stage only if it starts its activity in this stage;
4. A machine can only be turned on at the stages allowed for getting started (other than the first period); it is either turned on at that stage or was already turned on in the previous period;
5. Except for the stages allowed for starting, a machine can only be turned on at a stage if it was turned on at the previous stage;
6. If a machine is turned on at a stage, it must be in one of four states: assigned to a production operation, setup, preventive maintenance, or idle;
7. A machine is in the production state at a stage if it is assigned to one of the production operations that it is able to perform at that stage;
8. A machine is in the maintenance state at a stage if it is assigned to one of its preventive maintenance steps at that stage;
9. A machine is in the setup state at a stage if it was assigned to a production operation on a product in the previous stage and is assigned to a production operation on another product in the next stage, or if it was idle in the previous stage and is assigned to a production operation on a product in the next stage;
10. A machine is in the setup state at a stage if, in the previous stage, it was assigned to a production operation on a product and is then assigned to a production operation on another product in the next stage;
11. A machine is in the setup state at a stage if it was idle in the previous stage and is then assigned to a production operation on a product in the next stage;
12. The start time of each machine depends on the stage at which it is turned on;
13. All production operations must be performed;
14. The start time of each preventive maintenance step is equal to the start time of the stage in which the machine is assigned to the maintenance of that step;
15. The start time of preventive maintenance must lie within a certain range;
16. The number of idle stages assigned to each machine must not exceed the permitted number;
17. The idle time of each machine at each stage must not exceed the maximum allowed idle time;
18. The total idle time of each machine must not exceed the maximum allowed idle time;
19. The time length of each stage of each machine is calculated;
20. The start time of each stage assigned to each machine, with the exception of the first stage, is equal to the start time of the previous stage plus the time length of that stage;
21. The start time of each production operation on each machine is calculated;
22. The start time of the setup activity in the change from one operation to the operation ′h on machine j is calculated;
23. The start time of the setup activity on each machine is calculated when the state of the machine changes from the idle state to the production operations state;
24. The start time of the idle state of each machine at each stage is calculated;
25. The sequence of the start times of the production operations of a product on a machine is ensured if both activities are assigned to that machine;
26. The sequence of the start times of the production operations of a product on different machines is ensured;
27. The sequence of the production of different products on a machine, according to the setup time length, is ensured;
28. The sequence of successive production operations of a product on a machine is ensured;
29. The sequence of successive production operations of a product on different machines is ensured;
30. A machine is assigned to the setup state at a stage if the product has been changed on that machine;
31. The total time of the assignment of each machine to production operations;
32. The total time of the assignment of each machine to preventive maintenance;
33. The total setup time of machine j;
34. The total time assigned to the setup state for each machine when the product is changed;
35. The total time of the assignment of each machine to the setup state in the change from the idle state to production operations is computed;
36. The total idle time of each machine after it is turned on is computed;
37. The total turned-on time of each machine is computed;
38. The number of setups assigned to machine j is calculated;
39. The positive or negative deviation from the preferred turned-on time of each machine (if it is turned on) is computed;
40. The positive or negative deviation from the preferred number of setups of each machine (if it is turned on) is calculated;
41. The positive or negative deviation from the start time of each preventive maintenance step of each machine is computed;
42. If a preventive maintenance step must be assigned to a machine because the machine is turned on at that time, this assignment must be done in one of the stages:
… ≤ ∑_{p∈…} … ∀ … ∈ …, ∀ … ∈ … (55)
43. These constraints ensure that if preventive maintenance is assigned to a machine, the machine is turned on at the start time of the maintenance;
44. This constraint calculates the total of desirable or undesirable operations assigned to each machine;
45. The minimum turned-on time of a machine (if the machine is turned on);
46. This constraint calculates the deviation of the delivery time of each product;
47. This constraint calculates the delayed delivery of each product (earlier delivery is not considered). (A toy code fragment illustrating the structure of this model is sketched after the conclusion.)

Results and discussion

In this part, several numerical examples are solved to evaluate the model, and sensitivity analysis is performed on the costs and penalties of the model. There is a production system with five machines and three products that are produced in this system. The sequence of production operations and the duration of each operation are given in Table 1.1. The "N" in Table 1.1 indicates that the machine is not capable of performing that operation. For the proposed model, the program is written in the GAMS software (version 24.9.1) and solved using the Baron solver. The optimal objective function value is equal to 2143 and the solution time is 483 seconds. The schedule is in accordance with Table 1.15. The delivery times of the products are 120, 185, and 155. Sensitivity analysis is performed to illustrate the effect of the parameters on the optimal decisions. In the sensitivity analysis, the parameters that have been changed include the various costs and penalties listed in Table 1.16; the variations of these parameters are also presented there. As seen in Table 1.17, the following can be noted:
• Variation in the penalties for machines' priorities: As the penalties increase, the schedule moves towards better satisfying these priorities, so that the penalties resulting from these deviations in the objective function are reduced. The objective function, despite the doubling of the penalties, decreases, because the number and value of the deviations have decreased. Also, with a reduction of the penalties for machines' priorities, the objective function decreases while the number of deviations from the priorities of the machines increases, which is logical.
• Variation of the costs: The variation of the costs affects both the delivery delay of the products and the total penalties for machines' priorities. As the costs increase, the total penalties for machines' priorities and the penalties for delayed delivery increase, and the scheduling and delivery of products are adjusted in accordance with reduced costs.
• Variation in the penalties for delayed delivery: As the penalties for delayed delivery increase, the schedule tends to reduce the delayed delivery of products. The total penalties of machines' priorities and the total costs change in accordance with decreasing penalties for the delivery delay of products.
The results of the sensitivity analysis show that the model responds logically to variations of the parameters of the problem. Changes in the input parameters affect the schedule and the delivery times of products, and this reflects the relationship between scheduling with priorities and production scheduling. Combinatorial optimization problems of this kind are known to be NP-hard. Due to the complexity and difficulty of solving these problems, the solution time increases exponentially as the dimensions of the problem increase. In the following, several problems with small and medium dimensions have been solved, according to the data of Table 1.18, to show the efficiency of the model, and the results are reported together with the solution times.
For these examples, the same parameters defined in the numerical example are used, but the numbers of machines and jobs are multiplied by two, three, four, and five, respectively. To solve these models, a computer with an Intel(R) Core(TM) i7 CPU and 32 GB of memory has been used. As can be seen in Table 1.18, the solution time increases nonlinearly as the dimensions of the problem increase. For a problem with 30 machines and 18 products, the program reported an out-of-memory message. According to Table 1.18, the time to solve these problems may be acceptable for small instances, but in production centers there are large numbers of machines and production operations, and schedules must be produced quickly. Exact methods are therefore not acceptable for large-scale problems.

Conclusion

This paper presents a new mathematical programming model for partial flexible job shop scheduling with preventive maintenance. The purpose of this model is to allocate production operations to machines with the goal of simultaneously minimizing operating costs, penalties for delays in delivery, deviations from the fixed time point of each preventive maintenance step of each machine, and deviations from the priorities of each machine. The contribution of this paper is to develop a model for the integrated optimization of partial flexible job shop scheduling, considering the delayed delivery penalty, the penalty for deviation from the start time of each preventive maintenance step of each machine, and the penalty for deviation from the preferred priorities of each machine. The goal is to reduce costs per time unit. Also, in this model, the start time of each preventive maintenance step of each machine is a fixed time point, but with the imposition of a penalty it can be shifted within a time interval. Several numerical examples are presented to evaluate the model, and the sensitivity analysis shows the dependence between the schedule and the preferences for each machine. These assumptions make this model applicable to industrial environments: in real production environments, there are always priorities in the selection of machines for assignment to activities, as well as in the turned-on duration and the number of setups of each machine. Despite this fact, this issue has not been considered in the literature on the partial flexible job shop scheduling problem with preventive maintenance. Also, considering a fixed time point for each maintenance step on each machine, together with an allowed time interval and a penalty for deviation from the fixed point, is more consistent with the reality of maintenance. These research gaps are addressed in this paper, and these assumptions bring the model closer to the reality of production environments. The partial flexible job shop problem is a combinatorial optimization problem; in future research, it is suggested that this model be solved with meta-heuristic algorithms in order to evaluate the model's capability more completely. It is also recommended to incorporate quality control and human resource planning policies into this model, considering selective priorities for each worker. The simultaneous optimization of production planning, preventive maintenance, and quality control with regard to priorities can be addressed in future research.
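The toy code fragment promised after the constraints: a minimal, self-contained Python/PuLP sketch of two of the model's ingredients, namely assignment binaries with the one-operation-per-stage rule, and the goal-programming deviation penalties used in constraints 39-41. All names, sizes, and costs are illustrative and are not the paper's notation; this is a sketch of the modeling pattern, not the full model.

import pulp

# Toy instance (all names, sizes, and costs are illustrative).
machines, stages, ops = range(2), range(4), range(3)
assign_cost = {(j, o): 10 + 3 * j + o for j in machines for o in ops}
preferred_load = {0: 2, 1: 1}  # preferred number of operations per machine
penalty = 5                    # penalty per unit of deviation

model = pulp.LpProblem("toy_pfjsp", pulp.LpMinimize)

# x[j][p][o] = 1 if machine j performs operation o at stage p.
x = pulp.LpVariable.dicts("x", (machines, stages, ops), cat="Binary")
# Goal-programming deviation variables (the pattern of constraints 39-41).
dpos = pulp.LpVariable.dicts("dpos", machines, lowBound=0)
dneg = pulp.LpVariable.dicts("dneg", machines, lowBound=0)

# Objective: assignment cost plus deviation penalties (two of the
# paper's eleven cost terms).
model += (pulp.lpSum(assign_cost[j, o] * x[j][p][o]
                     for j in machines for p in stages for o in ops)
          + penalty * pulp.lpSum(dpos[j] + dneg[j] for j in machines))

# Every production operation is performed exactly once (constraint 13).
for o in ops:
    model += pulp.lpSum(x[j][p][o] for j in machines for p in stages) == 1

# At most one operation per machine and stage (last hypothesis).
for j in machines:
    for p in stages:
        model += pulp.lpSum(x[j][p][o] for o in ops) <= 1

# Load + negative deviation - positive deviation = preferred load
# (the goal-programming device behind the preference penalties).
for j in machines:
    load = pulp.lpSum(x[j][p][o] for p in stages for o in ops)
    model += load + dneg[j] - dpos[j] == preferred_load[j]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[model.status], pulp.value(model.objective))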
2020-11-13T23:08:30.053Z
2020-10-19T00:00:00.000
{ "year": 2020, "sha1": "d740d75b49762d7ebcb7dcb98f81e448b5dbd628", "oa_license": "CCBY", "oa_url": "https://polipapers.upv.es/index.php/WPOM/article/download/14187/13073", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "55092c0a7f700bfb22fbef4b193affff2974a80c", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
18725855
pes2o/s2orc
v3-fos-license
Biomarkers in development of psychotropic drugs

Biomarkers have been receiving increasing attention, especially in the field of psychiatry. In contrast to the availability of potent therapeutic tools including pharmacotherapy, psychotherapy, and biological therapies, unmet needs remain in terms of onset of action, stability of response, and further improvement of the clinical course. Biomarkers are objectively measured characteristics which serve as indicators of the causes of illnesses, their clinical course, and modification by treatment. A variety of markers exist: laboratory markers, which comprise the determination of genetic and epigenetic markers, neurotransmitters, hormones, cytokines, neuropeptides, enzymes, and others as single measures; electrophysiological markers, which usually comprise electroencephalography (EEG) measures, in particular sleep EEG and evoked potentials, magnetoencephalography, electrocardiography, facial electromyography, skin conductance, and others; brain imaging techniques such as cranial computed tomography, magnetic resonance imaging, functional MRI, magnetic resonance spectroscopy, positron emission tomography, and single photon emission computed tomography; and behavioral approaches such as cue exposure and challenge tests, which can be used to induce especially emotional processes in anxiety and depression. Examples for each of these domains are provided in this review. With a view to developing more individually tailored therapeutic strategies, the characterization of patients and the courses of different types of treatment will become even more important in the future. clinical course. Psychiatric disorders still show a wide diagnostic variability, for example, the differential diagnosis of early unipolar vs bipolar disorders, the differentiation within the schizoaffective spectrum (between the bipolar and schizophrenic pole), or the comorbidity of anxiety spectrum disorders and depressive spectrum disorders. Hence, for an apparently similar phenotype, the relevant biomarkers may vary considerably, leading to a blurred relationship between distinct biomarkers and psychopathologically defined nosological entities. Biomarkers are regularly determined by technical, somewhat "objective" means using chemical or physical measures. 1 In contrast, the clinical diagnosis of any psychiatric disease, and the monitoring of the clinical course either during the patient's everyday life or during clinical trials of therapeutic interventions, is still carried out by psychometric and somewhat "subjective" means. Despite a considerable and immense set of psychological measures, the rating within each test is done by psychiatrists and psychologists, who of course are trained, but are still subject to their individual points of view. This incurs an additional considerable risk of variation. Importantly, the stability of diagnoses varies over the long-term course of psychiatric diseases. 2 Hence, even variability between raters at the same time point can occur, and during extended periods of observation distinct measures may vary considerably. This leads to the problem of whether pathological findings represent a "state" or "trait" phenomenon, whereby "state" may represent either a stable condition apparent at the onset of the disease or a biological "scar" as a late sequela of this disease. Currently, some biomarkers, such as genetics and related findings, are regarded as state markers; in addition, several markers are putative trait markers.
Both state and trait markers carry distinct information which provides the possibility of characterizing treatment outcome better than mere subjective measures.

Definition

The term "biomarker" is not always appropriately used, given the great diversity of methods and investigational procedures to identify the origin or "state" of psychiatric disorders. Moreover, for drug development it also appears necessary to identify "trait" alterations; this is of importance for identifying parameters that monitor the intrinsic course of illness on the one hand and predict the efficacy of treatment procedures on this intrinsic course on the other hand. From this point of view, the individual dynamic responsiveness to interventions is also interesting for biomarkers. Absolute measures are helpful in identifying, eg, alterations in comparisons of patients vs controls. However, of further interest is how the individual response is to be classified: within the physiological bandwidth of homeostasis or at the borders of the individual regulatory capacity. According to Frank and Hargreaves, 1 biomarkers are characteristics which are objectively measured and evaluated as indicators of the intrinsic causes of illnesses, the clinical course, and its modification by treatment. In this context the authors point to the differentiation between clinical end points of treatment and surrogate end points: the former are, for psychiatric approaches, reflected by behavior and subjective feelings; for the latter, the surrogate end point substitutes a clinical end point to predict clinically wanted or unwanted effects. In addition, different types of biomarkers can in general be classified as shown in Table I 3 :
• Type 0 biomarkers are markers of the intrinsic cause of an illness and its longitudinal course
• Type 1 markers identify the effects of an intervention by a specific drug action
• Type 2 markers are surrogate end points which predict the clinical course.
Another aspect comprises the terms sensitivity and specificity. Sensitivity and specificity are statistical measures of the performance of binary classification tests. Sensitivity measures the proportion of true cases that a marker correctly identifies as having a condition, while specificity measures the proportion of negative cases correctly identified as such; this resembles the concept of Type I and Type II errors 4 (a short code sketch of these two measures is given at the end of this subsection). In the spectrum of biomarkers there is considerable variability with regard to sensitivity and specificity. Up to now, and especially in the past decade, a multitude of procedures have been developed, which may be listed as follows (adapted from ref 5, but not an exhaustive list of approaches - Table II):
• Laboratory markers, which comprise the determination of genetic and epigenetic markers, transmitters, hormones, cytokines, neuropeptides, enzymes, and others as single measures; this approach is also suited to reflecting the investigation of complex biological systems in their approximated entirety, which is frequently described as the genome, proteome, and metabolome 6
• Electrophysiological markers, which regularly comprise, eg, electroencephalography (EEG) measures 7 (and particularly sleep EEG and evoked potentials), 8 magnetoencephalography, electrocardiography, and in particular heart rate variability analyses, 9 facial electromyography analysis for emotion processing, 10 skin conductance, and others
• Brain imaging techniques like cranial computed tomography, magnetic resonance imaging (MRI), functional MRI (fMRI), magnetic resonance spectroscopy (MRS), positron emission tomography (PET), and single-photon emission computed tomography (SPECT) 11,12
• Behavioral approaches such as cue exposure and challenge tests, which can be used to induce or monitor especially emotional processes in anxiety and depression. 13,14
In clinical trials in the development of new drugs for psychiatric diseases, at a very early stage the analysis of concentrations and of the presence or absence of markers is an important approach for characterizing, in addition to the behavioral characteristics of efficacy, the global "phenome" of the patient's condition.

Genetics

Modern antidepressant drugs are, in terms of efficacy, largely similar to drugs discovered several years ago. The development of new treatments for depression is limited by the availability of validated human biomarker models. 15
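The sketch referred to above: a minimal Python rendering of sensitivity and specificity for a binary marker test. All numbers are illustrative and are not from the article.

def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical example: a marker flags 80 of 100 true cases
# and correctly clears 90 of 100 controls.
sens, spec = sensitivity_specificity(tp=80, fp=10, tn=90, fn=20)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.80, 0.90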
Family studies have revealed that the clinical response to antidepressant treatment shows more similarities within one family than in controls, which indicates that the uptake, metabolism, and transport of drugs, as well as receptor binding, are subject to genetically controlled enzymes, receptor expression, and other factors. Monoamine transporters, including the serotonin, norepinephrine, and dopamine transporters, are important in regulating neurotransmission by uptake of the respective transmitters released from nerve terminals. Regarding serotonin transporter gene length polymorphisms, Caspi and colleagues 16 concluded that, in interaction with stressful life events, the genetic variation in the promoter region plays a role in the predisposition to major depression. In the context of selective serotonin reuptake inhibitors in the treatment of depression and the well-established link between stressful life events and depression, this finding offered a convincing biological link. This result, however, could not be confirmed by meta-analyses of 14 studies 17 and a birth cohort study in nearly 900 participants 18 : neither a risk elevation nor stable gene x environment interactions could be proven. These findings question the suitability of single-gene expression alterations for the differentiation of patients in clinical trials. Genome-wide association studies point to multiple loci which, in combination with additional clinical characteristics, may be better suited for predicting treatment responses. 19 One of the largest recent cohort studies for the evaluation of treatment algorithms is the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, which provided DNA from nearly 2000 patients with nonpsychotic depression. Variants in the serotonin 2A receptor, a subunit of the glutamate kainate receptor (GRIK4), the potassium channel KCNK2, and the chaperone FKBP5, a protein important for HPA axis regulation, were associated with citalopram treatment outcome. 20,21 For example, participants who were homozygous for the A allele of the serotonin 2A receptor had an 18% reduction in the absolute risk of having no response to treatment. 22 Analyzing the BDNF Val66Met polymorphism, no evidence of an association with treatment outcome in STAR*D could be found. 23 There is also evidence for a complex inheritance with multiple genes in the etiology of panic disorder. So far it has not been possible to identify single major responsible genes. Again, several genes of classical neurotransmitter systems have been reported to be associated, eg, genes of the serotonin transporter length polymorphisms, monoamine oxidase A, catechol-O-methyltransferase, the adenosine receptor, and cholecystokinin B. 24 After treating healthy volunteers with escitalopram, the induction of panic-like anxiety by cholecystokinin tetrapeptide was significantly more pronounced in short/short genotype subjects during escitalopram vs placebo pretreatment, and no inhibitory effect of escitalopram upon panic-like symptoms elicited by cholecystokinin tetrapeptide could be demonstrated. 14 These findings support the notion that gene x treatment effects are highly complex and subject to a variety of influential factors.
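To make the absolute-risk figure quoted above concrete, a small Python helper; the 18% comes from the text, while the baseline risk used in the example is hypothetical.

def absolute_risk_reduction(risk_comparison, risk_index):
    """ARR = risk in the comparison group minus risk in the index group."""
    return risk_comparison - risk_index

# Hypothetical illustration: if 55% of comparison-genotype patients have
# no response, an 18% absolute reduction corresponds to a 37% risk.
arr = absolute_risk_reduction(0.55, 0.37)
print(f"ARR = {arr:.0%}")    # 18%
# The corresponding "number needed to genotype" is 1 / ARR.
print(f"NNT = {1 / arr:.1f}")  # ~5.6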
Of special interest is the pathophysiology of hypothalamo-pituitary-adrenocortical (HPA) axis regulation in depression and anxiety disorders: corticotropin-releasing hormone (CRH)-related peptides, gluco- and mineralocorticoids, and their receptors play an important role in behavioral, endocrine, and autonomic responses to stress, which is thought to be important in depression and anxiety. The chaperone FKBP5, a protein involved in HPA axis regulation, has been shown to mediate interaction effects with other polymorphisms. 21 Selective antagonists have been used experimentally to elucidate the role of CRH-related peptides, but up to now the development of specific drugs has been challenging, 25,26 and tests of these compounds in genetically well-characterized patient samples remain to be performed. Schizophrenia is also the result of genetic alterations. However, genetic research has been impaired by the lack of disease-specific biomarkers. Despite an estimated 70% to 80% heritability of schizophrenia, nongenetic factors considerably modify the incidence and course of this disease, which complicates the identification of susceptibility genes. 27 Genes such as DISC1 provide existing targets for drug development in schizophrenia and depression, 28 but are not specific for schizophrenia. The wide interindividual variability in the clinical efficacy and tolerability of antipsychotic medications has led investigators to relate not only the efficacy of antipsychotic medications but also their side-effect profiles to pharmacogenetic factors. 29 However, up to now only a few genome-wide association studies, eg, the CATIE trial with atypical antipsychotic treatment, are available which might lead to novel genes important for the efficacy of antipsychotics. 30

Pharmacogenetics

In the context of pharmacogenetics, a goal has been to establish individualized pharmacotherapy. 31 Genes encoding enzymes involved in phase 1 metabolism are mainly cytochrome P450 (CYP) enzymes, which are known to contain a large variety of functional polymorphisms that significantly alter their metabolic activity. Common CYP polymorphisms can be distinguished by their effects upon the metabolic rate, identifying the enzyme carrier as slow (poor metabolizers), rapid (extensive metabolizers), or ultrarapid (ultrarapid metabolizers). 32
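A minimal sketch of how such metabolizer categories are often encoded in software; the activity-score thresholds below follow a common convention but are illustrative and are not taken from this article.

def cyp2d6_phenotype(activity_score):
    """Map a CYP2D6 activity score to a metabolizer phenotype.
    Thresholds are illustrative (one common convention)."""
    if activity_score == 0:
        return "poor metabolizer"
    if activity_score < 1.25:
        return "intermediate metabolizer"
    if activity_score <= 2.25:
        return "extensive (normal) metabolizer"
    return "ultrarapid metabolizer"

for score in (0, 0.5, 1.5, 3.0):
    print(score, "->", cyp2d6_phenotype(score))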
In particular, CYP2D6, a hepatic enzyme involved in the metabolism and elimination of antidepressants and antipsychotics, has been thoroughly investigated and associated with loss of efficacy or the potential to develop toxic reactions. Individuals presenting CYP2D6 PM variants are more likely to develop extrapyramidal side effects and weight gain. Kirchheiner and Rodriguez-Antona 33 showed that CYP2D6 and CYP2C19 metabolic rates may have an important influence upon the required doses of antidepressants and antipsychotics. This is an example of the clinical use of pharmacogenetics, especially when combined with clinical information. The geographical distribution of CYP2D6 variants is heterogeneous, supporting the notion that metabolic polymorphisms account for a significant part of the variability in response to medications. Functional polymorphisms have also been observed in genes coding for the CYP1A2, CYP2C9, CYP2C19, and CYP3A4 enzymes. Whereas CYP2C19 may be clinically relevant for the metabolism of antidepressants, CYP1A2 and CYP3A4 are major metabolic pathways of the most commonly used antipsychotics, eg, olanzapine, risperidone, aripiprazole, and clozapine. Slow CYP1A2 variants have been associated with an increased risk of drug-induced side effects. Since smoking can induce CYP1A2 activity, this example of a gene x environment interaction may have clinical significance: individuals with CYP1A2 rapid phenotypes who smoke are known to experience an impaired response to treatment with clozapine, a CYP1A2 substrate. Few reports have investigated CYP3A4, CYP2C9, and CYP2C19 functional variants and their influence on clinical outcome, with only some reference to the influence of CYP2C19 variants on therapeutic doses of antidepressants. 34 Whereas it has been postulated that clinical trials should include measurements of blood concentrations during drug development to generate more valid data about the relationship between drug concentrations and clinical outcomes under controlled conditions, 35 up to now no studies have reported on the prospective use of CYP genotyping in clinical practice. 36 Regarding the pharmacodynamics of the respective types of drugs, genetic polymorphisms in serotonin, noradrenaline, and dopamine receptors have been extensively investigated. Again, not single but multiple genes play a role in complex phenotypes, including the clinical response to medication. Thus, a multiple candidate gene approach has recently been adopted in pharmacogenetics. The new field of pharmacogenomics using DNA microarray analysis, which focuses on the genetic determinants of drug response at the level of the entire human genome, is important for the development and prescription of, eg, safer and more effective individually tailored antipsychotics. 37

Biochemistry

Studies with profiling experiments on brain physiology have to rely largely on postmortem analyses, which carry the risk of artefacts. Approaches that parallel alterations of the transcriptome, proteome, and metabolome in the brain with findings in blood and cerebrospinal fluid (CSF) are possibly capable of providing experimental evidence for molecular findings in psychiatric disorders that also help to identify treatment responses. 6 Using proteomics to investigate distinct protein patterns is promising for improving the understanding of the biology of psychiatric disorders and for identifying biomarkers. 38
Also, knowledge of biochemical pathways can provide the disease marker information required for drug development and improved patient treatment. Therefore, approaches to identifying pathways that affect depression-, anxiety-, and schizophrenia-like phenotypes could be important. 39 Due to the close proximity of CSF to the brain, pathological brain processes are more likely to be reflected in CSF than in blood or saliva, 40 and especially new tools like capillary electrophoresis-mass spectrometry in proteome analysis 41 could also reveal new proteins in CSF that are suited as biomarkers for treatment responses.

Neuroendocrinology and hypothalamic-pituitary-adrenal axis alterations

Particularly in depression, but also in anxiety disorders, alterations of the hypothalamic-pituitary-adrenal (HPA) axis are frequently observed. Besides steroids, numerous other factors regulate HPA axis responsiveness: at the hypothalamic level, corticotropin-releasing hormone (CRH) and receptors such as the CRH1 and CRH2 receptors, 42 and modulators such as agonistic vasopressin 43 and antagonistic atriopeptins 44,45 are involved in the central regulation of HPA activity. At the molecular level, glucocorticoid receptor polymorphisms may be associated either with hypofunction or hyperfunction, which could contribute to these findings. 46 Other factors are the influences of steroids like estrogen and progesterone. However, immune molecules, such as interleukins and cytokines, also activate the HPA axis and alter brain function, including cognition and mood. 47 Regarding treatment outcome, pivotal studies have been conducted in the past, applying the dexamethasone-induced suppression of HPA activity, the CRH stimulation test of HPA activity, and the combined dexamethasone-CRH test to predict treatment response. 48 In an investigation by Schüle et al, 49 the attenuation of HPA axis activity after 1 week of antidepressant pharmacotherapy was significantly associated with subsequent improvement of depressive symptoms. Other single tests also revealed a predictive potency of the dexamethasone-CRH test. 50 These findings are in line with studies reported by Ising et al, 51 who found normalized HPA activity in a subsequent dexamethasone-CRH test 2 or 3 weeks after the first test at the beginning of treatment, with an association with psychopathological improvement after 5 weeks. Interestingly, the effects of CRH-1 receptor antagonists 25 and glucocorticoid receptor antagonists 52 could not be predicted by defined alterations of HPA activity before treatment. In line with this, HPA axis activity also did not predict the efficacy of cortisol synthesis inhibitors in the treatment of depression. 53

Sleep electroencephalography

Sleep electroencephalogram (EEG) analysis provides markers of depression 54,55 and of antidepressant therapy. 8 It has long been known that EEG activity is altered by drugs. Quantitative EEG analysis helps to delineate the effects of antidepressants on brain activity. Elevated rapid eye movement (REM) density, which is a measure of the frequency of rapid eye movements, characterizes an endophenotype in family studies of depression. For example, for paroxetine, REM density after 1 week of treatment was a predictor of treatment response. 56 Most antidepressants suppress REM sleep in depressed patients and normal controls, but REM suppression appears not to be crucial for antidepressant effects.
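As a toy illustration of how a REM density measure might be computed from scored sleep data; definitions vary across laboratories, and this simple events-per-minute version is illustrative only.

def rem_density(rem_event_count, rem_sleep_minutes):
    """REM density as rapid-eye-movement events per minute of REM sleep."""
    if rem_sleep_minutes <= 0:
        raise ValueError("no REM sleep scored")
    return rem_event_count / rem_sleep_minutes

# Hypothetical night: 240 scored eye-movement events in 90 min of REM sleep.
print(f"REM density = {rem_density(240, 90):.2f} events/min")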
Sleep EEG variables such as REM latency were shown to predict the response to treatment with an antidepressant or the course of the depressive disorder. Some of these predictive sleep EEG markers of the long-term course of depression appear to be closely related to hypothalamo-pituitary-adrenocortical system activity. 8,54

Challenge studies

To experimentally induce fear or panic anxiety, several approaches with a large variety of agents have been pursued to further elaborate the physiological basis of pathological anxiety. The targets are the identification of more effective anxiolytic compounds that avoid addictive effects. In early human clinical psychopharmacology, a variety of challenge paradigms were investigated to establish proof of concept in healthy volunteers. Different types of models for patients and healthy volunteers are available (Table III). However, these challenge paradigms only partly fulfil the requirements of test-retest consistency and standardized responsiveness to reference drugs. Most of them have been developed for the purpose of pathophysiological studies, 58 using rating instruments validated for clinical practice. Adapting these models to the requirements of pharmaceutical trials possibly involves a wider use of other biomarkers, and better characterization has to be carried out. 59 Whether human models can significantly enhance and accelerate phase I studies remains elusive. For example, experimental panic induction with cholecystokinin tetrapeptide (CCK4) is considered a suitable model to investigate the pathophysiology of panic attacks, and a variety of studies in patients and healthy volunteers have been conducted. Some clinical trials have proven the validity of CCK4 studies in trials of selective serotonin reuptake inhibitors, 60 benzodiazepine trials, 61 and experimental studies with neuropeptides and neurosteroids. 44,62 In contrast, CCK4 antagonist studies 63,64 have shown equivocal effects in patients with panic disorder. Moreover, studies in healthy men showed stimulatory effects of escitalopram upon panic symptoms elicited by cholecystokinin tetrapeptide. These findings question the potential usefulness of this panic model for proof-of-concept studies. 14

Imaging

Brain imaging represents a tool to characterize state and trait markers, also in disorders with an episodic course such as schizophrenia and bipolar disorder. An integrated approach to support diagnostic processes may lead to a more accurate classification of depression. 11 Results of functional magnetic resonance imaging (fMRI) indicate that both gray and white matter have diagnostic and prognostic potential in major depression and may provide an initial step towards the use of markers to predict the efficacy of pharmacologic treatment. 65 Besides structural analyses, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are used to identify alterations of neurotransmitters and their respective receptors in specific regions of the brain. The magnetic resonance spectroscopy (MRS) literature supports the presence of brain metabolic alterations in relation to the individual mood state. An analysis of 31P-MRS studies regarding brain energetic status and phospholipid metabolism provided support for state-specific alterations in bipolar disorder. 66 More generally, evidence for an abnormal brain energy metabolism in mood disorders was found.
Metabolic aberrations may be intrinsic since, for example, brain intracellular pH determined by 31P-MRS is decreased in medication-free bipolar patients in manic, depressed, and euthymic mood states. 12 Anxiety, and in particular panic disorder, has been extensively investigated to link episodic pathological symptoms to underlying biological mechanisms. It is hypothesized that respiratory dysregulation persists as a trait finding, also in the asymptomatic state. 67 Patients with panic disorder are susceptible to panic attacks precipitated by challenges like sodium lactate infusion, carbon dioxide inhalation, and hyperventilation (Table III). Intravenous infusion of 0.5 mol/L sodium lactate with 70 mL/kg body weight produces marked physiologic and psychologic symptoms in panic patients but less frequently in healthy controls. 58 Also in 1-h MRS studies, lactate infusion was used as a physiological challenge to investigate brain metabolism. When the distribution of lactate increases was assessed, abnormal brain lactate increases were estimated to be tissue-based, due to brain metabolic mechanisms. However, persistent brain lactate rises in panic patients during treatment with, eg, fluoxetine or gabapentin indicate that brain lactate increases are possibly independent of metabolic challenges, which questions their suitability as markers. 66 Only a few fMRI studies have investigated the brain activation patterns following CCK4 administration. CCK4-induced anxiety was accompanied by strong and robust activation in various areas. Analyses for placebo and anticipatory anxiety generated no significant differences, and overall functional responses did not differ between panickers and nonpanickers. 68 Up to now, no fMRI studies have been conducted to predict treatment response. In patients with schizophrenia especially, studies of specific receptors, such as the dopamine D2 receptor, before and after the administration of an antipsychotic provide a means to determine receptor occupancy (a simple occupancy formula is sketched at the end of this article). PET findings of high D2-receptor occupancy in the striatum of responders to different antipsychotics provided clinical support for the dopamine hypothesis of antipsychotic drug action. Patients with extrapyramidal syndromes (EPS) show a higher occupancy (over 80%) than patients with no EPS. The PET-defined interval for optimal antipsychotic drug treatment has been used in dose recommendations for typical and atypical antipsychotics. Interestingly, currently available PET ligands are not selective for the five dopamine receptor subtypes. 69 However, up to now, PET can be used to predict and monitor extrapyramidal side effects of antipsychotic treatment rather than therapeutic efficacy. 70

Summary

In this overview, some biomarkers for the future development of psychopharmaceutical drugs have been exemplified for antidepressants, anxiolytics, and antipsychotics. Due to the trend towards developing more individually tailored therapeutic strategies, the characterization of patients and of the course of treatment by different aspects will become more important in the future. A better description of state and trait characteristics should enable us to focus on a more specific individual "phenome" that is to be treated.
In applying biomarkers to therapeutic drug development, additional aspects have to be taken into account: the increasing frequency of psychiatric diagnoses, especially of depression and anxiety, and a trend towards denosologization during the past decades regarding "depressive syndromes" and "anxiety spectrum disorders." To predict or monitor treatment responses more precisely, biomarkers will need to characterize the patient's condition in an integrated manner.
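The occupancy sketch promised in the Imaging section: the PET figures cited there rest on a simple relation between baseline and on-drug binding potentials. A minimal Python rendering; the binding-potential values in the example are hypothetical.

def d2_occupancy(bp_baseline, bp_drug):
    """Receptor occupancy from binding potentials:
    occupancy = 1 - BP(on drug) / BP(baseline)."""
    return 1.0 - bp_drug / bp_baseline

# Hypothetical scan pair: baseline BP 3.0, on-drug BP 0.5.
occ = d2_occupancy(3.0, 0.5)
print(f"D2 occupancy = {occ:.0%}")  # ~83%, above the ~80% EPS level noted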
2014-10-01T00:00:00.000Z
2011-06-01T00:00:00.000
{ "year": 2011, "sha1": "621b7b9a216c2b936fe58b2773bfd7f1eda7fff2", "oa_license": "CCBYNCND", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3182003", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "621b7b9a216c2b936fe58b2773bfd7f1eda7fff2", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
221558799
pes2o/s2orc
v3-fos-license
Study the effectiveness of EMLA in alleviation of pain during BCG vaccination given at birth Background and Aims: Neonates routinely undergo painful cutaneous procedures as part of their medical treatment and during vaccination. Effective pain management is a desirable standard of care for newborns and may potentially improve their clinical and neurodevelopmental outcomes. Neonatal pain should be assessed routinely using context-specific, validated, and objective pain methods, despite the limitations of currently available tools. BCG vaccines are given to all newborns under the universal immunization program and are a major cause of pain in neonates. Lidocaine-prilocaine 5% cream (EMLA) is a topical anaesthetic that may be useful for diminishing the pain from these procedures. The present study was therefore planned to assess the effectiveness of EMLA in alleviating pain during BCG vaccination given at birth. Methods: A total of 280 neonates meeting the inclusion criteria who received the BCG vaccine at the time of birth were enrolled in the study. They were divided into two groups, each with 140 participants. Group 1 received the BCG vaccine after application of EMLA 1 hour prior to vaccination (case group), and group 2 received the vaccine after application of a placebo (control group). Pain was assessed on the NIPS scale in both groups immediately after immunization, at 30 seconds, and at 60 seconds after immunization. Results: NIPS scores at all intervals were significantly lower when the vaccine was given after EMLA (group 1) compared to group 2, which was immunized after placebo (P value < 0.0001). Conclusion: Pain perceived by the newborn after EMLA application during BCG vaccination was less as compared to the placebo group.

Introduction

Pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage. Pain in children is mostly underestimated all around the world and often suboptimally managed. 1,2 Children experience pain as a result of injury, illness, medical procedures, and vaccination. Vaccination is considered the safest and most effective way to prevent serious illness and death 3 . Vaccination prevents approximately 2.5 million deaths every year globally 4 . These vaccines are the most common source of unavoidable, repeated iatrogenic pain in children. Although children are repeatedly exposed to immunization pain, the need for immunization pain management is often under-recognized in the Maternal and Child Health Centre (MCHC). Health care providers are not sensitized to pain in neonates. They consider it a benign procedure that requires no intervention; on the contrary, pain, when left unmanaged, can have undesirable effects on children. These repeated injections may cause short-term and long-term effects, including physiological changes such as apnoea, bradycardia, tachycardia, skin colour changes, sweating palms, increased respiration rate and muscular tonicity, and increased intracranial pressure, as well as behavioural changes such as crying, sleep disturbances, and feeding problems. This leads to intense anxiety regarding vaccination that may result in non-adherence to the recommended vaccination schedule 5,6 .
Different techniques have evolved in the last few decades to reduce pain during various interventions in neonates, including non-pharmacological methods such as distraction (video, music, tactile, blowing), sucrose solution or breast milk, patient positioning, and vaccine pH, and pharmacological methods such as lignocaine gel or spray, EMLA, and opioids. EMLA cream has been in use since the 1980s for relieving pain in neonates and children. EMLA cream (lidocaine 2.5% and prilocaine 2.5%) is an emulsion in which the oil phase is a eutectic mixture of lidocaine and prilocaine in a ratio of 1:1 by weight. It is used to relieve pain during lumbar puncture, intravenous and intra-arterial cannulation, and intramuscular injection. It has also been used for the reduction of pain during vaccination in infants and children. There are very few studies in which the effect of EMLA was studied in newborns during vaccination. The present study was therefore conducted to assess the effect of EMLA in reducing pain in newborns during vaccination.

Aims and Objectives

To study the effectiveness of EMLA in alleviating pain during BCG vaccination given at birth.

Material and Methods

The study was conducted in the department of paediatrics, Kamla Nehru State Hospital for Mother and Child, Shimla. This was a prospective, interventional, randomized, comparative, double-blinded study conducted among healthy full-term newborns. The study was conducted from 1 September 2017 through 31 December 2017. Study participants were randomly divided into two groups by using computer-generated random numbers.

Inclusion Criteria

After taking consent from the mother, healthy term newborn babies born at Kamla Nehru Hospital, a tertiary care hospital, and receiving BCG vaccination there were enrolled in the present study.

Exclusion Criteria

1. Gestation less than 37 weeks, IUGR baby;
2. Newborns with any kind of illness;
3. Newborns requiring any kind of supportive treatment after birth;
4. Newborns with any major congenital malformations;
5. Newborns of mothers who received any drug that causes CNS depression in the baby.

The sociodemographic profile of all the cases was recorded as per a structured case recording format. The newborns were randomly divided into two groups by using computer-generated random numbers.

Group 1: Included newborns to whom an EMLA patch was applied one hour prior to BCG vaccination; they acted as cases for the study.

Group 2: Included newborns to whom a placebo patch was applied one hour prior to BCG vaccination; they were considered as controls for group 1. The placebo patch used was identical to the EMLA patch in its consistency, colour, and odour. Neither the parents nor the observer were aware of which treatment was received by the neonate, making the study double-blinded. The person involved in randomization was not involved further in the study. The newborn was laid down on the radiant warmer during the whole procedure and was breastfed half an hour before vaccination. The vaccine was given to newborns that were at the Brazelton stage 3-4 of arousal, 7 in the same surroundings and on the same examination table. An EMLA or placebo patch was applied before vaccination depending on the group assigned after randomization. All the injections were gently administered by the same staff nurse. EMLA cream 1 g was uniformly applied to an area of 1 square inch at the vaccination site (left deltoid region) and covered with an occlusive dressing (Tegaderm) for 60 to 90 minutes; the dressing was removed before the procedure and the skin was wiped dry.
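Since pain scoring with the NIPS is central to the outcome assessment below, here is a minimal, illustrative Python helper; it is not part of the study. The item ranges follow the published scale, with cry scored 0-2 and the other five items 0-1, for a total of 0-7.

def nips_total(facial, cry, breathing, arms, legs, arousal):
    """Sum the six NIPS items after validating each item's range."""
    items = {"facial": (facial, 1), "cry": (cry, 2),
             "breathing": (breathing, 1), "arms": (arms, 1),
             "legs": (legs, 1), "arousal": (arousal, 1)}
    for name, (value, max_score) in items.items():
        if not 0 <= value <= max_score:
            raise ValueError(f"{name} out of range")
    return facial + cry + breathing + arms + legs + arousal

# Example: grimace and vigorous cry, relaxed otherwise -> score 3.
print(nips_total(facial=1, cry=2, breathing=0, arms=0, legs=0, arousal=0))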
0.1 mL of BCG vaccine was administered intradermally at the left deltoid using a tuberculin syringe with a 0.45 x 13 mm needle. The primary outcome measure, neonatal pain, was assessed using the Neonatal Infant Pain Scale (NIPS), a reliable tool to assess neonatal pain [13]. A NIPS score of zero was ensured before vaccination and used as the baseline for further comparison. The NIPS score was observed immediately after the vaccine, at 30 seconds, and at one minute after administration of the vaccine.

Statistical Analysis

The data were entered and cleaned in an MS Excel spreadsheet, and analysis was done using Epi Info version 7.2.2. All categorical variables were presented as frequency and percentage (%), and continuous variables were presented as mean ± SD and median. Quantitative variables were compared between the two groups using the unpaired t-test or the Mann-Whitney test (when the data sets were not normally distributed). Qualitative variables were compared using the chi-square test or Fisher's exact test. A p value of <0.05 was considered statistically significant.

Results

We enrolled a total of 280 participants in our study, divided into two groups. Each group had 140 newborns. Both groups were comparable on the basis of demographic and anthropometric profile, as mentioned in Table 2. The NIPS score was observed in group 1 and group 2 at 0 seconds, 30 seconds, and 60 seconds after BCG vaccination. It was found that the mean NIPS scores in group 1 (BCG vaccination after EMLA) were 2.42±1.15, 0.71±0.83, and 0.02±0.14, which were significantly lower than in group 2 (BCG vaccine after placebo), where the mean NIPS scores were 3.32±1.73, 1.66±1.66, and 0.39±0.67 at 0 seconds, 30 seconds, and 60 seconds respectively, with p value <0.001.

Discussion

The mean±SD of the NIPS score in group 1 (after application of EMLA before giving the BCG vaccination) at 0, 30, and 60 seconds was 2.42±1.15, 0.71±0.83, and 0.02±0.14 respectively, and the mean NIPS score of group 2 at 0, 30, and 60 seconds was 3.32±1.73, 1.66±1.66, and 0.39±0.67 respectively. Similarly, in the study by Uhari, nurses reported a lower mean VAS pain score (range, 0-10 cm) (2.5 vs 3.8; P<0.003) and VAS crying score (range, 0-10 cm) (2.8; P<0.003) for infants who received lidocaine-prilocaine than for those who received placebo during DPT vaccination. They did not include newborns in their study and used the VAS score for the assessment of pain. In a study of lidocaine-prilocaine versus no intervention, Dilli et al reported a significant reduction in the NIPS score (range, 3-7) (MD, -4.00; 95% CI, -4.83 to -3.17; P<0.001) in infants 6 to 12 months of age and in the CHEOPS score (range, 4-13) (P<0.001; data NR) during different vaccines: Hep B at 0-2 weeks and at 1 and 6 months, and MMR at 9 months.

Conclusion

Pain perceived during BCG vaccination was less when vaccination was done after EMLA application, which leads to the conclusion that EMLA is an effective modality for alleviating pain during vaccination through the intradermal route.

Recommendation

EMLA is an effective pain-alleviating modality and can be considered as a pain-relieving modality during routine immunization. This will decrease immediate as well as late neurodevelopmental complications in children due to repeated unavoidable pain during vaccination. This will also decrease parents' anxiety regarding pain during vaccination and increase adherence to vaccination.
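As a minimal illustration of the two-group comparison described in the Statistical Analysis section, the following Python sketch uses synthetic scores drawn around the reported 0-second means (2.42 vs 3.32); the data are simulated and are not the study's records.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic NIPS-like scores for two groups of 140 newborns each,
# centred on the reported group means at 0 seconds.
group1 = np.clip(rng.normal(2.42, 1.15, 140).round(), 0, 7)
group2 = np.clip(rng.normal(3.32, 1.73, 140).round(), 0, 7)

t_stat, p_t = stats.ttest_ind(group1, group2, equal_var=False)
u_stat, p_u = stats.mannwhitneyu(group1, group2)
print(f"Welch t-test p = {p_t:.4g}; Mann-Whitney p = {p_u:.4g}")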
2020-06-18T09:04:04.952Z
2020-06-18T00:00:00.000
{ "year": 2020, "sha1": "09d38234ad3bd31fc8df823058c166b6deb7d0b7", "oa_license": null, "oa_url": "https://doi.org/10.18535/jmscr/v8i6.71", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4d1d6d4da201801b082f361e832adfc213d13362", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235313607
pes2o/s2orc
v3-fos-license
Gravitational and electromagnetic radiation from an electrically charged black hole in general nonlinear electrodynamics We derive the equations for the odd and even parity perturbations of the coupled electromagnetic and gravitational fields of a black hole with an electric charge within the context of general nonlinear electrodynamics. The Lagrangian density is a generic function of the Lorentz invariant scalar quantities of the electromagnetic fields. We include the Hodge dual of the electromagnetic field tensor and the cosmological constant in our calculations. For each type of parity, we reduce the system of Einstein field equations coupled to nonlinear electrodynamics to two coupled Schrödinger-type wave equations, one for the gravitational field and one for the electromagnetic field. The stability conditions in the presence of the Hodge dual of the electromagnetic field are derived.

Introduction

Penrose, in his Nobel Prize-winning work [1], showed that when a massive star collapses to form a black hole, singularity formation in general relativity (GR) is inevitable. This issue signals the demise of GR in its classical form. The singularity may be resolved by an ultimate quantum theory of gravity that can describe the final stage of gravitational collapse. In the absence of a microscopic theory, toy models of regular (singularity-free) black holes have been proposed to study the formation and evaporation of such black holes. After the first specific proposal for a regular black hole (RBH), which was presented by Bardeen in [2], many RBH models have been proposed by various authors over the years. See, for example, [3][4][5][6][7][8][9][10][11][12][13][14][15][16] for some of the RBHs that are asymptotically Schwarzschild at large radii. The majority of these black holes, including Bardeen's model, are constructed in an ad hoc manner without an underlying theory behind them. However, in [17], Ayón-Beato and García found the first RBH solution in GR coupled to nonlinear electrodynamics (NLED). NLED was originally proposed in [18], by Born and Infeld, in an attempt to generalize Maxwell's theory to strong field regimes. As a result, this theory provides a natural choice for studying charged black holes, where we deal with strong electromagnetic and gravitational fields. Ayón-Beato and García were also able to reinterpret Bardeen's model as a black hole with a nonlinear magnetic monopole charge in [19]. It was also shown by Rodrigues and Silva in [20] that the Bardeen solution can be obtained in NLED with an electric charge. In addition to the electrically charged black hole in [17], Ayón-Beato and García proposed two more black hole models with electric charge in [21] and [22]. For these RBH models to be viable, they need to be stable when they are perturbed. In addition to its relevance to gravitational wave observations, the study of black hole perturbations is crucial in determining the stability of a black hole model [23]. There are two approaches to studying black hole perturbations. In one approach, the perturbation of a field (e.g. a scalar field) which is weakly coupled to the background of a black hole spacetime is analyzed. In this case, the geometric perturbations are usually neglected. Since the equations governing such perturbations of spherically symmetric black holes are similar to the Klein-Gordon equation for a scalar field, one can achieve a qualitative understanding of how the RBH and its perturbations differ from its Schwarzschild counterpart.
However, to achieve a quantitative understanding of the stability of a black hole, one needs to look at the perturbations of the spacetime and of any fields strongly coupled to the background geometry. The wave equations of coupled electromagnetic and gravitational fields of a black hole with an electric charge in general NLED were derived for the first time by Moreno and Sarbach in [24]. The Lagrangian considered in [24] is a general function of the Lorentz invariant scalar quantity F of the electromagnetic field, where F = (1/4)F_{μν}F^{μν} and F_{μν} is the electromagnetic field tensor. The stability conditions for these black holes are also derived in [24]. The wave equations of coupled electromagnetic and gravitational fields of a black hole with a magnetic monopole charge in general NLED were derived for the first time by Nomura et al. in [25]. In addition to the electromagnetic field tensor F_{μν}, the authors of [25] include the Hodge dual of the field tensor, F*_{μν}. Since magnetic monopoles have never been observed in nature, in this paper we focus on electrically charged black holes within the context of NLED. Similar to [24], we introduce perturbations on the background geometry of a charged black hole and its nonlinear electric field. We derive the odd parity (magnetic or axial) and even parity (electric or polar) wave equations for the coupled electromagnetic and gravitational fields. In our calculations, we include the Hodge dual field tensor, which was ignored in [24]. The method we use in this paper is different from the gauge-invariant approach used in [24]. Our method, where we fix the gauge (i.e. Regge-Wheeler gauge) early on, is more in line with the work done by Zerilli in [26] for the Reissner-Nordström black hole and by Nomura et al. in [25] for black holes with a magnetic monopole. For simplicity, we do not consider any test particle outside the black hole horizon. However, it should be easy to incorporate that using Zerilli's results in [26]. We structure the paper as follows. In Section 2, we set up the problem by deriving the perturbed Einstein-NLED equations. In Section 3, we expand the geometric and NLED perturbations in tensor harmonics and derive the wave equations for odd parity perturbations, which are reduced to two coupled Schrödinger-type wave equations. We then derive stability conditions in Section 4. In Section 5, we examine the even parity perturbations. In Section 6, to provide an example of a theory with a Hodge dual field, we apply our stability conditions to RBHs in Einstein-Born-Infeld gravity. We provide the summary and conclusion in Section 7. In Appendices A and B, we provide more details and calculations involving even parity perturbations and their stability. In Appendix C, we explain some of the differences between our results, when reduced to the Reissner-Nordström case, and Zerilli's results in [26]. Perturbed Field Equations In order to make the comparison with the Reissner-Nordström black hole perturbations easier, we closely follow the notation in [26]. The action of NLED in a curved spacetime involves the Ricci scalar R, the cosmological constant Λ, the determinant g of the spacetime metric tensor g_{μν}, and a Lagrangian density L that is an arbitrary function of the Lorentz invariant scalar quantities F and F*. Here, F*_{μν} = (1/2)ε_{μναβ}F^{αβ} is the Hodge dual of the electromagnetic field tensor F_{μν}. The Levi-Civita tensor is normalized as ε_{0123} = √−g. In this paper, we adopt Planck units where c = G = ℏ = 1.
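The displayed action and invariant definitions were lost in extraction; a plausible LaTeX reconstruction from the surrounding text (the 1/16π normalisation of the gravitational term is an assumption, since the paper's own convention is not recoverable here) is:

S = \int d^4x \,\sqrt{-g}\,\Big[\frac{R - 2\Lambda}{16\pi} - L(F, F^*)\Big],
\qquad
F = \frac{1}{4} F_{\mu\nu} F^{\mu\nu},
\qquad
F^* = \frac{1}{4} F_{\mu\nu} F^{*\mu\nu},

with F^{*}_{\mu\nu} = \frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta} and \epsilon_{0123} = \sqrt{-g}.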
The Einstein-NLED equations that describe the gravitational and NLED fields are expressed in terms of L̃ ≡ L(F̃, F̃*), L̃_F ≡ ∂L̃/∂F̃, and L̃_{F*} ≡ ∂L̃/∂F̃*. We use a tilde for quantities associated with the total NLED and gravitational fields. Quantities with no tilde refer to the background geometry, represented by the static spherically symmetric line element ds² = −e^ν dt² + e^{−ν} dr² + r²(dθ² + sin²θ dφ²) (6). We assume a general, purely electric ansatz for the Maxwell field (only the tr component of F_{μν} nonvanishing), from which we get F* = 0. One can then integrate Eq. (5) to obtain the radial dependence of the field strength. In the case of spherical symmetry, the invariant quantities F and F* only depend on the radial coordinate. Consequently, both L and L_F are functions of the radial coordinate only. Therefore, one can use the Bianchi identity, dF = 0, to show that f(θ, φ) is a constant, which we will call −q. Therefore, the background field strength can be written in terms of this constant. In the rest of this paper, we choose F_{tr} = q/(L_F r²) (9). In the case of the Reissner-Nordström black hole, where L_F = 1, q is simply the electric charge. In the RBH models presented in [17,21,22], q is also interpreted as the electric charge. Equation (9) gives the background values of the invariant scalar quantities as F = −q²/(2L_F²r⁴) and F* = 0. (Footnote: note that we assume g_{rr} = −1/g_{tt}. It turns out this is forced to be true. Had we not made this assumption, once we get to Eq. (28), using G^r_r − G^t_t = 0 we would find that −g_{tt}g_{rr} is a constant; the constant can be set to one by rescaling the time coordinate. For more details, see [25].) Here L_F = ∂L/∂F, L_{FF} = ∂²L/∂F², and so on. To derive the equations above, we use the first-order expansion of F̃ together with the first-order Taylor expansions of L̃, L̃_F, and L̃_{F*}. The perturbed Einstein-NLED equations (15) and (16) reduce to the Reissner-Nordström results in [26] when we choose L = F and Λ = 0. In addition to the above perturbed field equations, some of the background field equations are useful for this work. A combination of the line element (6) and the Einstein equation leads to the background field equations (29) and (30). These equations will be used extensively in the following sections. For more details on the background field equations, see [25]. An electrically charged black hole in NLED should satisfy some reasonable energy condition. We denote the energy density by ρ and the pressure in the i direction by p_i (i = r, θ, φ); their expressions follow from Eq. (12). Using these expressions for the energy density and pressure, we examine the following well-known options for the energy condition: (a) the weak energy condition, where ρ ≥ 0 and ρ + p_i ≥ 0, which gives one set of inequalities (with a separate case if L + Λ/2 < 0), and (d) the strong energy condition, where ρ + p_i ≥ 0 and ρ + Σ_i p_i ≥ 0, which gives another. Note that all the above conditions force L_F ≥ 0. Also, for the background field strength (9) to be finite, we need L_F ≠ 0. Therefore, we will assume L_F > 0 in the rest of the paper. Odd Parity Perturbations The next step is to expand the perturbations h_μν and f_μν in tensor harmonics. The odd parity (magnetic or axial) tensor expansion of the geometric perturbation is given by Eq. (38), where h_0, h_1, and h_2 are functions of the time and radial coordinates only and Y_lm(θ, φ) are the spherical harmonics. The integer l ≥ 2 is the multipole number and m = −l, . . . , 0, . . . , l. The freedom to make infinitesimal coordinate transformations allows us to fix the gauge in a way that h_2 = 0 (Regge-Wheeler gauge [23]). The odd parity tensor expansion of the NLED perturbation is given by Eq. (40), where f̃_μν denote the angle-independent parts of f_μν.
The asterisk denotes the anti-symmetric components of the matrix. As noted by Zerilli in [26], the odd (even) parity geometric perturbations couple only to odd (even) parity electromagnetic perturbations. More specifically, when we combine odd with even parity, the Einstein-Maxwell equations lead to f̃_μν = 0. We find this to be true for the NLED case considered here. This, however, is not always true. In a black hole with a magnetic monopole charge, odd parity geometric perturbations couple only to even parity electromagnetic perturbations and vice versa [25]. The electromagnetic field tensor F̃_μν is derived from a vector potential Ã_μ = A_μ + a_μ, where a_μ is the perturbed vector potential. This is equivalent to having field equations of the form f_{μν,λ} + f_{λμ,ν} + f_{νλ,μ} = 0. These field equations lead to the relations (41). After inserting tensors (38) and (40) into Eq. (15), we obtain three equations, (43)-(45), from the components rθ, tθ, and θθ, respectively. Throughout this paper, we use a prime to denote the derivative with respect to the radial coordinate r. In addition, from the rr component of the perturbed Einstein equation, we find that L must satisfy the additional constraint (42) when F* = 0, which is the case for an electric charge. This constraint on L is also noticed by the authors of [25], where they suggest a general form for L that satisfies it. Inserting tensors (38) and (40) into Eq. (16) and using Eq. (42), we obtain the corresponding NLED perturbation equations. We then solve Eq. (45) for h_0 and substitute it into Eq. (43). After defining R^(odd)_lm, using the tortoise coordinate r*, where dr*/dr = e^{−ν}, and using Eq. (30), we get the first wave equation (49), while Eq. (48) becomes the second wave equation (50). In the above two equations, we assume all field functions depend on time as e^{−iωt}, where ω is the quasinormal mode frequency of the perturbations. This is formally equivalent to a Fourier transform of the field functions, where ∂_t → −iω. Recall that we are also requiring L_F > 0, which makes √L_F well-defined. Equations (49) and (50) reduce to the Reissner-Nordström wave equations when L = F, Λ = 0, and e^ν = 1 − 2M/r + q²/r². Wave equations (49) and (50) are valid for multipole numbers l ≥ 2. In the case of l = 1, where λ = 0, wave equation (50) decouples from (49). In this case, only the electromagnetic perturbations are dynamical degrees of freedom and the perturbations are completely described by Eq. (50). This is because h_0 is only defined for l ≥ 2. As a result, for l = 1, h_1 (and consequently R^(odd)_lm) is no longer a physical degree of freedom. This can be shown by simplifying Eq. (43) using the background field equation (30) and taking h_0 and λ to be zero, which gives the corresponding pure-gauge relation. Stability for Odd Parity Perturbations To derive the stability condition for odd parity perturbations, we follow the method in [24]. Defining f^(odd)_lm, we can rewrite the wave equations (49) and (50) as a coupled Schrödinger-type system, Eqs. (52)-(56), where Equations (52-56) are in good agreement with the results found in [24]. The contribution from including the Hodge dual scalar invariant F* can be found in the potential (56). The stability condition in [24], given by requiring the potential matrix to be positive-definite, will be modified due to the inclusion of Hodge dual fields. We require the determinant and trace, det(V^I) = 4λ(λ + 1) times a factor set by the potential, to be positive. Since λ > 0 and L_F > 0, this gives the stability condition (60). For l = 1, where λ = 0, the perturbations are completely described by Eq. (53), where V^I_21 = 0. Therefore, for the black hole stability against electromagnetic perturbations with l = 1, the only requirement is V^I_22 > 0. This gives us a condition that holds when Eq.
(60) is satisfied. Therefore, the stability condition (60) can be applied to multipole numbers l ≥ 1. Even Parity Perturbations The even parity (electric or polar) tensor expansions of the geometric and NLED perturbations are written in terms of functions of the time and radial coordinates only; for the NLED perturbation these are f̃_01, f̃_02, and f̃_12. One can use the same idea as in Eq. (41) to find the homogeneous Maxwell equation for even parity perturbations. The tt, rr, a combination of θθ and φφ, tr, tθ, rθ, and θφ components of Eq. (15) give, respectively, Eqs. (64)-(70). Note that Eqs. (64), (65), and (66) do not reduce to Zerilli's results in [26] for the Reissner-Nordström case. For an explanation, see Appendix C. The r, θ or φ, and t components of the perturbed NLED equation (16) give, respectively, Eqs. (71)-(73). Note that, in addition to Eq. (72), the θ or φ component of the perturbed NLED equation requires an additional relation, which is satisfied only if L_{F*F} = 0 and f̃_01 + ∂_t f̃_12 − ∂_r f̃_02 = 0, conditions that we already determined in Eqs. (46) and (63). This provides a good consistency check. Note that the Hodge dual of the electromagnetic field does not appear anywhere in the equations (64-73). Therefore, the even parity perturbations are unaltered by the inclusion of Hodge dual fields. So, equations (64-73) should, and do, reduce to a pair of coupled Schrödinger-type wave equations, which agree with those in [24]. Likewise, the stability conditions for even parity perturbations do not change from those that appear in [24]. Since our method is different from that used in [24], we include the derivation of the wave equations in Appendices A and B. In Appendix A, we use a method similar to that in [26] to find the wave equations that reduce to those in [26] in the Reissner-Nordström case. In Appendix B, we show how to rewrite the wave equations in the form in which they appear in [24], which is more suitable for stability analysis. An Application: Born-Infeld theory In this section, we provide an example of a viable theory that involves Hodge dual fields. In the original work of Born and Infeld [18], they removed the divergence of an electron's self-energy in classical electrodynamics by introducing a nonlinear Lagrangian density of the form given in Eq. (75), where µ is a scale parameter with dimensions of mass. It is easy to see that this Lagrangian density reduces to Maxwell's when F/µ⁴ ≪ 1. Born-Infeld theory in curved spacetime (Einstein-Born-Infeld gravity) has been explored in the literature. For electrically charged black hole solutions, see for example [28,29]. If we use a metric function of the form (76), together with the background field equations (29) and (30), we find Eq. (77) for the mass function M(r). We then take the derivative of Eq. (75) with respect to F. Replacing F and F* with their background values of −q²/(2r⁴L_F²) and 0 gives an equation in L_F. Using L_F > 0, as required by the energy conditions listed at the end of Section 2, we obtain Eq. (79). We can use Eq. (79) to write F, and consequently L, as functions of r only. This allows us to integrate the background field equation (77) to get a closed form for M(r) in which F(ϕ|k²), the elliptic integral of the first kind, appears. In the asymptotic region r → 0, M(r) ≈ |q|µ²r. As r → ∞, M(r) approaches a positive constant. Therefore, for |q| > 1/(2µ²), the metric function e^ν starts with a finite negative value of 1 − 2|q|µ² at r = 0 and approaches 1 (for Λ = 0) as r → ∞. This provides us with the spacetime of a black hole. We show the behavior of M and e^ν as functions of r in Figure 1.
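Equation (79) can be checked numerically. The sketch below assumes the Born-Infeld relation L_F = (1 + 2F/µ⁴)^(−1/2), i.e. the F-derivative of an assumed Lagrangian density L = µ⁴(√(1 + 2F/µ⁴) − 1) evaluated at F* = 0, which reduces to Maxwell's L = F when F/µ⁴ ≪ 1. Solving the consistency equation with the background value F = −q²/(2L_F²r⁴) recovers the closed form L_F = √(1 + q²/(µ⁴r⁴)), which is manifestly greater than 1 for q ≠ 0, the fact used in the stability discussion that follows. The charge and scale values are hypothetical.

import numpy as np
from scipy.optimize import brentq

# Background invariant for an electric charge: F = -q**2 / (2 * L_F**2 * r**4).
# With the assumed Born-Infeld relation L_F = (1 + 2F/mu**4)**(-0.5), the
# consistency equation for x = L_F at radius r reads
#     x * sqrt(1 - q**2 / (mu**4 * r**4 * x**2)) = 1.
def lf_equation(x, r, q, mu):
    return x * np.sqrt(1.0 - q**2 / (mu**4 * r**4 * x**2)) - 1.0

def lf_numeric(r, q, mu):
    x0 = abs(q) / (mu**2 * r**2)   # sqrt argument is positive only for x > x0
    return brentq(lf_equation, x0 * (1.0 + 1e-9), x0 + 1e6, args=(r, q, mu))

r = np.linspace(0.1, 50.0, 200)    # hypothetical radial grid
q, mu = 2.0, 1.0                   # hypothetical charge and scale parameter
lf = np.array([lf_numeric(ri, q, mu) for ri in r])
assert np.allclose(lf, np.sqrt(1.0 + q**2 / (mu**4 * r**4)))
print("L_F > 1 everywhere:", bool(np.all(lf > 1.0)))  # odd-parity stability check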
For this black hole, the stability condition (60) translates to inequality (81). Since this is always true, we can conclude that electrically charged black hole solutions in Einstein-Born-Infeld gravity are stable against odd parity perturbations. This includes purely electromagnetic perturbations with l = 1, as discussed at the end of Sec. 4. For even parity perturbations, we can use the same stability conditions derived in [24]. These are conditions (82)-(84), where H = 2F L_F − L and P = −q²/(2r⁴). These conditions apply to the region outside the event horizon. We combine Eqs. (11), (75), and (79) to get H explicitly. The stability condition (82) gives inequality (86), which is the same as L_F > 1. This is true as long as q is not zero. It is easy to show that the condition (83) is satisfied when inequality (86) holds. The condition (84) gives inequality (87). For Λ = 0, since 0 < e^ν < 1 outside the event horizon, condition (87) is always satisfied. We conclude that electrically charged black holes in an asymptotically Minkowski spacetime in Einstein-Born-Infeld gravity are stable. Summary and Conclusion We studied the perturbations of the Einstein equation coupled to general NLED for a spherically symmetric black hole solution with electric charge. We also included the cosmological constant and the Hodge dual of the electromagnetic field strength tensor in our calculations. The NLED Lagrangian density is a generic function of the Lorentz invariant scalar quantities of the electromagnetic fields, i.e. F and F*. The wave equations for odd and even parity perturbations of gravitational and NLED fields were derived. For each parity, we reduced the Einstein-NLED field equations to two coupled Schrödinger-type equations, one of which determines the gravitational and the other the NLED field oscillations. Our results are consistent with those found in [24], although we did not use the gauge-invariant technique utilized by Moreno and Sarbach in [24]. Our method, where we fixed the gauge early on, is more in line with the work done by Nomura et al. in [25] and by Zerilli in [26]. We also included the Hodge dual of the electromagnetic field strength tensor, which was ignored in [24]. In addition, all our equations reduce to the correct results for the Reissner-Nordström case when we use Maxwell's Lagrangian density (L = F) and take the cosmological constant Λ to be zero. The inclusion of the Hodge dual of the electromagnetic field modifies the results of [24] only for odd parity perturbations. The even parity perturbations stay unaltered. Therefore, we conclude that the inclusion of F* does not change the stability conditions for even parity perturbations that were explored earlier in the literature. We provided new stability conditions for the odd parity perturbations that include the Hodge dual of the electromagnetic field. A Derivation of Even Parity Wave Equations In this appendix, we show how to use equations (64-73) to derive two coupled Schrödinger-type wave equations for even parity perturbations. First, we find f̃_01 and f̃_02 in terms of f̃_12 by solving Eqs. (71) and (72), respectively. We then substitute these values into Eq. (63) to find a second order differential equation for f̃_12. We define f^(even)_lm = e^ν √L_F f̃_12, and use the tortoise coordinate r*, where dr*/dr = e^{−ν}, to find the corresponding wave equation. In the remainder of this section, we assume all field functions depend on time as e^{−iωt}, where ω is a complex constant that turns out to be the quasinormal mode frequency of the perturbations. We now look at the geometric perturbation equations (64-70). We use Eq.
(70) to eliminate H_2 in Eqs. (67)-(69). We then substitute ∂_r K and ∂_r H_1, as given by these equations, into Eq. (65). This gives an algebraic equation that involves H_0, H_1, K and the electromagnetic functions f̃_01, f̃_02, f̃_12. We now solve this equation for H_0 and substitute into Eqs. (67) and (68). Using Eqs. (71) and (72), we replace f̃_01 and f̃_02 with f̃_12. This procedure gives the following two equations, (90) and (91), where

α_ω(r) = [4e^ν Q² L_F − r²ξ(e^ν − λ − 1) − 2r²(λ + 1)² + 2r²e^ν(2λ + 1) − 2ω²r⁴] / (r³ e^ν ξ(r)) (92)

β_ω(r) = 2i {(λ + 1)[(λ + 1) − e^ν] + ω²r²} / (r² ξ(r)) (93)

Here ξ(r) = r e^ν ν′ − 2e^ν + l(l + 1). Equations (92-97) are simplified using the background equations (29) and (30). We wish to combine Eqs. (90) and (91) into a second order wave equation of Schrödinger form. To do this, we follow the method outlined by Zerilli in [26]. The first step is to transform Eqs. (90) and (91) to the form dK/dr̂ = L̂ + Ŝ_1 (100), together with a companion equation (101), where the new variable r̂ is given in terms of r by dr̂/dr = 1/n(r). For brevity, one can rewrite Eqs. (90), (91), (100) and (101) in the matrix form (102) and (103). We now look for a transformation ψ = Fψ̂, where F is to be determined. Inserting Eq. (106) into (102) and then comparing the result to Eq. (103) tells us how the transformation is fixed. Using the above equations, one can determine n(r), F, and consequently Ŝ. The results for the components of the matrix F are given in Eqs. (110)-(112), where we have used Eq. (30) to simplify the functions. Also, n(r) = e^ν, which shows that the new variable r̂ is just the tortoise coordinate r*. Note that the functions f(r) and h(r) given in Eqs. (110) and (112) do not reduce to Zerilli's results in [26] for the Reissner-Nordström case. For an explanation, see Appendix C. We can now express the potential in a compact form. In addition, we can use Eq. (109) to determine Ŝ_1 and Ŝ_2. Comparing Eqs. (100) and (101) with (99), we get the source terms. It is also easy to combine Eqs. (100) and (106) to obtain an expression for K. Using the results for S_lm and K, we can write the final wave equations as Eqs. (118) and (119). The equations (118) and (119) are similar in structure to the results found by Zerilli in [26]. B Stability for Even Parity Perturbations To make the wave equations more suitable for the stability analysis conducted in [24], we want to eliminate the dR^(even)_lm/dr* term in Eq. (119). Below we explain how to systematically approach this problem. We first rewrite Eqs. (88), (90), and (91) in the form (120)-(123), where F = 2e^ν L_F f̃_12 and

ρ(r) = 2Q(ξ + 2λ + 2) / (r² e^ν ξ(r)) (127)

We want to convert the system of equations (120-123) to a system in a new variable r̂, given in terms of r by dr̂/dr = 1/n(r). We first put the equations in the matrix form (135). We now look for a matrix transformation Ψ = NΨ̂, which combined with (135) gives Eq. (136). We can now solve for n, N, and M. We find n(r) = e^ν, which means r̂ = r*. Putting these into Ψ = NΨ̂ gives Eq. (141), and equation (136) gives Eq. (142), where the relation between R and B and our original functions can easily be derived from equations (140)-(142). The wave equations (143) and (144) can be rewritten in a form whose potential components include

V^II_11 = −2λ/r² + 8λ(λ + 1 − e^ν)/(r²ξ) + 8λ²e^ν/(r²ξ²) + 16λe^ν Q² L_F/(r⁴ξ²) (147)

These equations agree with those in [24].
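The manipulation Appendix B performs, removing the first-derivative term so the equation takes Schrödinger form, is an instance of a standard reduction: for ψ'' + p(r)ψ' + q(r)ψ = 0, the substitution ψ = exp(−(1/2)∫p dr)·φ cancels the first-derivative term and shifts the potential by −p'/2 − p²/4. A short sympy check of that generic identity (generic symbols, not the paper's specific coefficients):

import sympy as sp

r = sp.symbols('r')
p, q, phi = sp.Function('p'), sp.Function('q'), sp.Function('phi')

# psi = exp(-(1/2)*Integral(p)) * phi turns psi'' + p*psi' + q*psi = 0 into
# phi'' + (q - p'/2 - p**2/4) * phi = 0, a form with no first-derivative term.
w = sp.exp(-sp.Integral(p(r), r) / 2)
psi = w * phi(r)
lhs = sp.diff(psi, r, 2) + p(r) * sp.diff(psi, r) + q(r) * psi
target = sp.diff(phi(r), r, 2) + (q(r) - sp.diff(p(r), r)/2 - p(r)**2/4) * phi(r)
print(sp.simplify(lhs / w - target))  # prints 0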
2021-06-04T01:15:36.369Z
2021-06-02T00:00:00.000
{ "year": 2021, "sha1": "276b5a7a461ebfd7b77d8dd15936a62eb9df0243", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "276b5a7a461ebfd7b77d8dd15936a62eb9df0243", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221084445
pes2o/s2orc
v3-fos-license
Broccoli extract increases drug-mediated cytotoxicity towards cancer stem cells of head and neck squamous cell carcinoma Background Head and neck squamous cell carcinomas (HNSCC) are malignant neoplasms with poor prognosis. Treatment-resistant cancer stem cell (CSC) is one reason for treatment failure. Considerable attention has been focused on sulforaphane (SF), a phytochemical from broccoli possessing anticancer properties. We investigated whether SF could enhance the chemotherapeutic effects of cisplatin (CIS) and 5-fluorouracil (5-FU) against HNSCC–CSCs, and its mechanisms of action. Methods CD44+/CD271+ FACS-isolated CSCs from SCC12 and SCC38 human cell lines were treated with SF alone or combined with CIS or 5-FU. Cell viability, colony- and sphere-forming ability, apoptosis, CSC-related gene and protein expression and in vivo tumour growth were assessed. Safety of SF was tested on non-cancerous stem cells and in vivo. Results SF reduced HNSCC–CSC viability in a time- and dose-dependent manner. Combining SF increased the cytotoxicity of CIS twofold and 5-FU tenfold, with no effects on non-cancerous stem cell viability and functions. SF-combined treatments inhibited CSC colony and sphere formation, and tumour progression in vivo. Potential mechanisms of action included the stimulation of caspase-dependent apoptotic pathway, inhibition of SHH pathway and decreased expression of SOX2 and OCT4. Conclusions Combining SF allowed lower doses of CIS or 5-FU while enhancing these drug cytotoxicities against HNSCC–CSCs, with minimal effects on healthy cells. MTT assay for cell viability As described previously, 13 1500 cells per well were seeded in 96well plates, and they were treated with different concentrations of SF and/or chemotherapeutic agents for 72 h. The medium was then removed, and 10% solution of 5 mg/ml MTT in medium (Sigma Aldrich) was added and incubated at 37°C for 2 h. Formazan was dissolved by adding DMSO to each well after MTT removal. The optical density was measured at 562/540 nm in EL800 Microplate Reader (BIO-TEK Instruments, Winooski, Vermont, United States). For analysing the effect of SF over time, cells were treated with 3.5 µM SF, and the same steps were followed daily for 4 consecutive days. Colony-forming assay CD44 + /CD271 + cells were seeded at 1 × 10 5 cells/well in 6-well tissue culture plates. The cells were treated with SF and/or chemotherapeutic agents for 72 h. Then, cells were detached, plated at a density of 400 single living cells/well in 6-well tissue culture plates and incubated for 10 days while the medium was being changed every 3 days. The cell colonies were fixed and stained with 1% crystal violet, 50% methanol in DDH 2 O for 1 h. The number of colonies with >50 cells were counted under an inverted microscope. Sphere-forming assay In total, 5000 CD44 + /CD271 + cells/500 µl per well were seeded in 24 ultra-low-attachment multiple-well plate (Millipore Sigma, Burlington, Massachusetts, United States) in DMEM-F-12 medium (Thermo Fisher) reconstituted with 20 ng/ml of epidermal growth factor, 20 ng/ml of basic fibroblast growth factor, 0.5% N 2 supplement (STEMCELL Technologies, Vancouver, Canada), 1% B27 supplement and 2% antibiotic-antimycotic (Thermo Fisher). After 24 h, SF and/or the two chemotherapeutic agents were added. The medium with drugs was added every 2-3 days to measure the long-term effect on cells. Photographs of groups were captured at 14 days, using a phase-contrast microscope. 
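The MTT readout described above is typically converted to percent viability by normalising blank-corrected optical densities to the untreated control. The paper does not spell this step out, so the sketch below is a generic version of that calculation with hypothetical OD values:

import numpy as np

def percent_viability(od_treated, od_control, od_blank):
    """Blank-corrected viability relative to the untreated control (MTT assay)."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical 562 nm readings for one treatment group (triplicate wells).
od_treated = np.array([0.41, 0.39, 0.44])
print(percent_viability(od_treated, od_control=0.82, od_blank=0.05).round(1))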
For serial passage, single cells were obtained from Accutasetreated spheroids. Then, the same steps were followed as described above. Spheres were then collected by centrifugation and dissociated by Accutase to single cells to obtain a cell count. Annexin V apoptosis detection Post-treatment apoptosis was measured by using the PE-Annexin V Apoptosis Detection Kit (BD Bioscience). Briefly, 1.5 × 10 5 CSCs from SCC12 cell line were seeded per well, in a 6-well plate for 24 h, and were then treated with SF and/or the chemotherapeutic agents for 72 h. Cells were detached using Accutase (Biolegend, San Diego, California, United States); then, all procedures followed the manufacturer's protocol. Cells were analysed by flow cytometry using LSR Fortessa (BD Biosciences). Data analysis was performed using FlowJo vX (FlowJo LCC). Real-time qRT-PCR Gene expression levels in CD44 + /CD271 + cells from the SCC12 cell line after exposure to SF and/or chemotherapeutic agents for 3 days were measured as previously described. 16 Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as the endogenous expression standard. The appendix lists the gene-specific sequence of the primers. Gene expression was calculated based on ΔΔCt method. The n-fold difference in mRNA expression was determined according to the method of 2 −ΔΔCT . Western blot assay CD44 + /CD271 + cells from the SCC12 cell line were exposed to SF and/or chemotherapeutic agents for 3 days, then harvested using trypsin. A lysis buffer that consisted of 10 mM Tris-HCl, pH 7.2, 150 mM NaCl, 5 mM ethylenediaminetetraacetic acid (EDTA), 1% Triton X-100, 0.1% sodium dodecyl sulfate (SDS) and 1% Na-deoxycholate was used to lyse the cells. After centrifugation at 15,000 × g for 20 min, supernatants were recovered, and the protein content was quantified by the Pierce™ BCA Protein Assay Kit (Thermo Fisher). Protein samples (20-60 μg) were size-separated by electrophoresis on sodium dodecyl sulfatepolyacrylamide gels under reducing conditions. Separated proteins were electroblotted onto nitrocellulose membranes. Non-specific immunoreactions in the blot were blocked by 5% skim milk and incubated with one of the following primary antibodies: anti-human BMI1, anti-BCL2, anti-active Caspase 3 (Cell Signaling, Danvers, Massachusetts, United States), anti-SOX2, Anti-OCT4 and anti-β actin (Abcam, Cambridge, United Kingdom) overnight at 4°C. Horseradish peroxidase (HRP)-conjugated antigoat or -rabbit secondary antibody was then used. Antibodybound proteins were detected by the spray on ECL (Zmtech Scientifique, Montreal, Canada) and ChemiDoc™ Touch Imaging System (Bio-Rad, Hercules, California, United States). Osteogenic differentiation DPSCs and PDLSCs were treated with 3.5 µM SF for 3 days; then, the cells were collected and seeded in 6-well plates, 2 × 10 5 cells/ well, and allowed to grow to 70% confluency in normal medium. Thereafter, the growth media were replaced with the osteogenic medium containing α-MEM supplemented with 1% antibiotic/ antimycotic, 20% FBS, 2 mM glutamine, 10 −8 M dexamethasone sodium phosphate, 55 µM 2-mercaptoethanol, 0.1 mM L-ascorbic acid and 2 mM β-glycerophosphate. Control cells were cultured in the normal growth medium. Both media were changed every 3 days. All cultures were allowed to grow for 21 days, then fixed and stained with Alizarin Red (Sigma). Photographs of all groups were captured using a phase-contrast microscope at ×5 magnification. 
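The 2^−ΔΔCT arithmetic described in the qRT-PCR paragraph above is compact enough to state exactly. A minimal sketch of the Livak method as described, normalising against GAPDH and the untreated control (the Ct values are hypothetical, for illustration only):

def fold_change(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Relative expression by the Livak 2^-ddCt method."""
    d_ct_treated = ct_gene - ct_gapdh            # normalise to GAPDH
    d_ct_control = ct_gene_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_treated - d_ct_control          # normalise to the control group
    return 2 ** (-dd_ct)

# Hypothetical Ct values for illustration only (not from the study):
print(fold_change(ct_gene=26.1, ct_gapdh=18.0,
                  ct_gene_ctrl=24.0, ct_gapdh_ctrl=18.2))  # ~0.20, downregulated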
Osteogenic quantification was done by unbinding the Alizarin Red stain using 10% (v/v) acetic acid followed by reading the absorbance at a wavelength of 405 using microplate reader. Chondrogenic differentiation DPSCs and PDLSCs were treated with 3.5 µM SF for 3 days; then, the cells were collected as 5 × 10 5 cells in 15-ml polypropylene tubes. Cells were centrifuged, and the media were replaced with the StemXVivo Chondrogenic Base Media supplemented with StemXVivo Chondrogenic Supplement (R&D Systems, Minneapolis, Minnesota, United States) and 1% antibiotic/antimycotic. Control cells were cultured in the normal growth medium. Every 3 days, half of the medium was replaced by a new medium. All cultures were grown for 21 days; then, the pellets were collected and frozen by OCT compound (Thermo Fisher), cryosectioned and stained by Collagen Type II immunofluorescent staining. Photographs were captured using a phase-contrast microscope at ×20 magnification. Chondrogenic quantification was done using ImageJ software (NIH). In vivo assay and tumour xenografts For the in vivo experiments, we used SF and CIS only, without 5-FU to decrease the number of mice to be sacrificed. This animal research study was approved by the University Animal Care Committee at McGill University (Protocol #5330, www.animalcare. mcgill.ca) and conformed to ARRIVE (animal research: reporting of in vivo experiments) guidelines. The animals used in this study were 23 NU/NU nude (Crl:NU-Foxn1 nu ) mice (n = 5 in each group and n = 3 in the sham-control group) (Charles River, Wilmington, Massachusetts, United States). All the mice were kept in clean conditions with soft food and water in the animal resource centre at McGill University. Six-to ten-week-old male mice were injected with 1 × 10 4 CD44 + /CD271 + SCC12 cells (suspended in 30 μl of normal saline) on the lateral side of the tongue using a 1-ml tuberculin syringe with a 30-gauge hypodermic needle, under general anaesthesia with isoflurane (Isoba Vet TM ). After 1 week, mice-bearing tumours were randomly divided into groups, and different (drug) treatments were administered. Mice were injected intraperitoneally (I.P) with the vehicle control (normal saline), SF (4 mg/kg), CIS (3 mg/kg) or a combination of SF and CIS every 3 days for a total of 6 doses. 17 Mice were examined weekly to measure their body weight and the tumour size bidirectionally using a calibre, under isoflurane gas anaesthesia. Tumour size was calculated using the following formula: volume = (width) 2 * length/2. Animals were sacrificed after 49 days with CO 2 inhalation, and the tongues, livers and kidneys were collected. Tumour formation in the tongues, and liver or kidney necrosis was assessed using H&E-stained sections. Statistical analysis Data were presented as the means ± standard deviations (SD) of three independent experiments conducted in triplicate with comparable results. One-way analysis of variance (ANOVA) followed by post hoc Tukey's test was used to assess significant differences between three groups or more, while Student's t test (Unpaired) was used to compare two groups. p Values < 0.05 were considered statistically significant. GraphPad Prism 6 software was used for the statistical analysis (GraphPad Software, San Diego, Canada). RESULTS Effects of sulforaphane on the viability and proliferation in HNSCC-CSCs FACS-isolated CSCs were exposed to different SF concentrations. SF treatment decreased the viability of HNSCC-CSCs in a dosedependent manner (Fig. 1a). 
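A dose-dependent viability curve of this kind is usually summarised by fitting a log-logistic (Hill) model and reading off the IC50, which is what the next sentences report. The sketch below does such a fit with scipy on hypothetical viability percentages; the doses echo the SF concentrations mentioned in the figure legend, with one higher dose added for illustration, and the fitting model itself is an assumption, since the paper does not state how its IC50 values were derived:

import numpy as np
from scipy.optimize import curve_fit

# Four-parameter log-logistic dose-response; ic50 is the inflection point.
def hill(dose, top, bottom, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Hypothetical % viability at SF doses of 0.875-14 uM.
dose = np.array([0.875, 1.75, 3.5, 7.0, 14.0])
viab = np.array([95.0, 88.0, 70.0, 38.0, 12.0])

(top, bottom, ic50, slope), _ = curve_fit(
    hill, dose, viab, p0=[100.0, 0.0, 5.0, 1.0], maxfev=10000)
print(f"IC50 ~ {ic50:.2f} uM")  # the study reports 5.54 and 5.13 uM for the two lines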
The half-maximal inhibitory concentration (IC50) of SF on CSCs was 5.54 μM for SCC12 and 5.13 μM for SCC38. The inhibitory effects of SF on cellular viability increased over time, as shown by exposing CSCs to 3.50 μM SF (Fig. 1b). Adding 3.50 μM SF produced a statistically significant increase in the inhibition of cell viability when compared with using either chemotherapeutic alone: the cytotoxicity roughly doubled with CIS (Fig. 1c) and increased ten times with 5-FU (Fig. 1d), especially at the lower chemotherapeutic concentrations. [Figure 1 legend: (a) Data are presented as means ± SD for N = 3 ("a" means a P value < 0.05 relative to 0 μM, "b" to 0.875 μM, "c" to 1.75 μM, "d" to 3.5 μM and "e" to 7 μM). (b) HNSCC-CSCs were treated with 3.5 μM of SF for the indicated times ("a" significance relative to 0 h, "b" relative to 24 h and "c" relative to 48 h; P < 0.05). (c, d) HNSCC-CSCs were treated with 3.5 μM of SF with or without 0.1, 0.5, 1 or 2 μg/ml of CIS (c), or 0.013, 0.13, 1.3 or 130 μg/ml of 5-FU (d) for 72 h; data are presented as means ± SD for N = 3 (*P < 0.05 and **P < 0.01 relative to treatment in the absence of SF). (e) HNSCC-CSCs were pre-treated with SF with or without CIS or 5-FU for 72 h before being seeded in 6-well plates for 10 days; fixed and stained colonies containing >50 cells were counted under an inverted light microscope; data are presented as means ± SD for N = 3 (**P < 0.01). (f) Photographs of the fixed and stained colonies.] By using 3.50 μM of SF alone, the clonogenic ability of the CSCs was reduced to 29% ± 10.1% and 24% ± 3.9% in SCC12 and SCC38, respectively, when compared with the control, and this was comparable to using 0.5 μg/ml of CIS alone. CIS as a single treatment also reduced the clonogenic ability to 28% ± 2.4% and 19% ± 4.6%, while 5-FU reduced it to 52% ± 6.8% and 38% ± 14% for SCC12 and SCC38, respectively. Surprisingly, the combination of SF + CIS or SF + 5-FU completely prevented CSC colony formation (Fig. 1e, f). Effect of sulforaphane on self-renewal and apoptosis in HNSCC-CSCs While single treatment with SF, CIS or 5-FU reduced spheroid formation, the combinations SF + CIS or SF + 5-FU inhibited spheroid formation most effectively (Fig. 2a, c). The effect was not only on the number of spheres, but also on the size of the formed spheres; the combination treatments produced smaller spheres with fewer cell numbers (Fig. 2b, d). Furthermore, the combinations SF + CIS or SF + 5-FU also inhibited the formation of secondary spheres (both their numbers and sizes, Fig. 2e, f). SF treatment alone induced early apoptosis in 46% ± 3.4% of CSCs as compared with 32% ± 7.3% in the control group. CIS treatment induced early apoptosis in 50.3% ± 2.4% of CSCs, while combining SF + CIS increased apoptosis to 70.2% ± 11.1%. Similarly, 5-FU as a stand-alone treatment induced apoptosis in 41.2% ± 6.4% of CSCs, while the combined treatment SF + 5-FU increased apoptosis to 60.3% ± 8.1% (Fig. 2g, h). These results suggested that SF could reduce HNSCC-CSC numbers through the induction of apoptosis, in addition to inhibiting cell proliferation and self-renewal. [Figure 2 legend: (d-f) Spheroids were dissociated into single cells, and an equal number of live cells were re-plated; fourteen days later, second-generation spheroids had formed, which were photographed and dissociated again for cell counting; data are presented as means ± SD for N = 3 (*P < 0.05, **P < 0.01 relative to treatments in the absence of SF). (g) Flow cytometry graphs showing the gating strategy; the vertical line represents the cutline for Annexin V staining, and the horizontal line the cutline for 7-aminoactinomycin D (7-AAD) staining. (h) The percentage of early apoptotic cells, presented as means ± SD for N = 3 (*P < 0.05).] Effect of sulforaphane on the genotyping of HNSCC-CSCs By combining SF with either CIS or 5-FU, there was a significant decrease in the expression levels of NOTCH1, SMO and GLI1 genes when compared with using CIS or 5-FU alone. This led to inhibition of their downstream gene BMI1 (Fig. 3a). Combining SF + CIS or SF + 5-FU decreased SOX2 expression significantly when compared with either CIS or 5-FU alone. The decrease in OCT4 gene expression was significant only for the SF + CIS combination treatment. The drug resistance- and stemness-related mRNA expression level of ALDH1A1 was analysed by qRT-PCR (Fig. 3b). SF combination with CIS or 5-FU significantly reduced the ALDH1A1 mRNA expression level when compared with single CIS or 5-FU chemotherapy treatment (Fig. 3b). Our results showed a significant decrease in BCL2 expression after combining SF + CIS or SF + 5-FU, and although there was an increase in the expression of BAX with the combined treatment, it was not significant. Caspase 3 expression was elevated with SF + CIS or SF + 5-FU when compared with using each chemotherapy alone (Fig. 3c). qRT-PCR results were confirmed by western blotting to detect the changes at the protein level and the activation of Caspase 3 by cleavage (Fig. 3d). Effect of sulforaphane on non-cancerous (healthy) stem cells The effects of SF alone or combined with CIS or 5-FU were tested on non-cancerous human stem cells (nCSCs), such as periodontal ligament stem cells (PDLSCs) and dental pulp stem cells (DPSCs). SF alone did not show any significant toxicity to nCSCs in concentrations less than 3.5 μM (Fig. 4a). There was no significant difference between using CIS or 5-FU as a single treatment versus the addition of SF (Fig. 4b, c). There was no significant difference between SF-treated and -untreated PDLSCs and DPSCs in their abilities for osteogenic (Fig. 4d, e) or chondrogenic differentiation (Fig. 4f, g). Effect of the SF + CIS combination treatment in vivo To assess whether SF might affect the sensitivity of HNSCC-CSC xenografts towards chemotherapy, we transplanted CD44+/CD271+ cells from the SCC12 cell line into the tongue of nude immunocompromised mice. Mice were injected I.P. with the vehicle (normal saline), SF, CIS or SF + CIS, and tumour growth was measured weekly for 49 days (Fig. 5a). When compared with the control group, treatment with either SF or CIS alone inhibited tumour growth and tumour volumes by 59% or 54.5%, respectively (Fig. 5b). SF + CIS treatment reduced tumour volume by 73% (Fig. 5b). A significant difference in tumour size was observed between mice in the control group and those in the treatment groups starting from 14 days after treatment. There was a significant difference between the SF + CIS treatment group and the SF or the CIS group after 35 days (Fig. 5b). SF had no toxic effects on mice injected with either SF alone or combined with CIS, as shown by their body weights when compared with the sham-treated mouse group. There was an insignificant decrease in the body weights of mice receiving the combined treatment during the administration period; however, these mice regained their weights after treatment cessation. Mice in the control (saline-injected) group showed a marked reduction in their body weights during the follow-up (Fig. 5c).
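To make these in vivo numbers concrete, the sketch below applies the Methods' caliper formula, volume = width² × length/2, to hypothetical day-49 measurements and computes percent growth inhibition the way the 59%, 54.5% and 73% figures above are defined, relative to the vehicle-control mean:

import numpy as np

def tumour_volume(width_mm, length_mm):
    """Modified ellipsoid formula from the Methods: V = width**2 * length / 2."""
    return width_mm ** 2 * length_mm / 2.0

# Hypothetical day-49 caliper readings (mm), for illustration only.
control = np.array([tumour_volume(6.0, 8.0), tumour_volume(5.5, 7.5)])
sf_cis  = np.array([tumour_volume(4.0, 4.4), tumour_volume(3.8, 4.6)])

inhibition = 100.0 * (1.0 - sf_cis.mean() / control.mean())
print(f"Tumour growth inhibition: {inhibition:.0f}%")  # ~73%, matching SF+CIS above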
Histological analysis showed no tissue necrosis of the livers and kidneys in all groups of mice (Fig. 5d). [Figure 5 legend: Intra-oral tumour size and mice body weights were monitored weekly. (a) The black arrow indicates tumour formation after 1 week of tumour implantation with 1 × 10⁴ CD44+/CD271+ SCC12 cells. (b) Tumour volumes and (c) mice body weights were determined as described in the "Methods" section; data represent means ± SD for N = 5 (*P < 0.05 and **P < 0.01 compared with the control; @P < 0.05 and @@P < 0.01 compared with the combined treatment). (d) A representative H&E staining of histological sections of the kidneys (upper row) and livers (lower row) after treatments with SF and/or CIS, or vehicle control, shown at ×5 magnification and ×20 magnification in the insets; scale bars = 130 μm and 34 μm for the main photograph and the inset, respectively.] DISCUSSION Therapeutic efficacy of SF In a previous study, we demonstrated that SF increased the chemotherapeutic cytotoxicity of CIS and 5-FU against HNSCC. 13 Those findings were in line with other studies on oral cancers 18,19 and in a variety of other types of cancers. 20,21 However, little is known about the effect of combining SF on HNSCC-CSCs. In our previous study, we suggested that both CD44+ and CD271+ were suitable markers to isolate CSCs from HNSCC. 16 In the current study, we used these FACS-sorted CD44+/CD271+ CSCs to examine the effect of SF/chemotherapy combination treatments. Our results demonstrated that SF had a cytotoxic effect on HNSCC-CSCs that was elevated in both dose- and time-dependent manners. Other studies in oral carcinomas 22 and other cancer types [23][24][25] reported comparable results. The new finding of this study was that SF could be used as a combination treatment to enhance the toxicity of CIS and 5-FU against the more resistant CSCs in HNSCC. Adding 3.50 µM of SF nearly doubled the effect of CIS and multiplied the effect of 5-FU by 10 times, especially at lower chemotherapy doses. A concentration of 3.50 µM SF in the human body can be achieved simply by eating fresh broccoli sprouts. It was reported that following the ingestion of 40 g of broccoli, the SF plasma concentration reached 2.50 µmol/L within 3 h. 26 Remarkably, the SF cytotoxic effect was comparable on both cell lines tested in this study (SCC12 and SCC38), even though SCC38 is known to be a more chemoresistant cell line. 27,28 This suggested that SF could affect CSCs from both chemoresistant and chemosensitive HNSCC. Our results demonstrated that 3.5 µM of SF alone reduced CSC clonogenicity to the same extent as 0.5 µg/ml CIS, and was more efficient than 1.3 µg/ml of 5-FU. Furthermore, combining SF with the standard CIS or 5-FU chemotherapy treatments eliminated CSC clonogenic ability completely. Similar results were reported with Gemcitabine in pancreatic cancer, Taxol in prostate cancer 21 and CIS in gastric cancer. 20 We obtained comparable results with the sphere-formation assay. The dose of 3.5 µM tested was comparable to the range of 0.5-10 µM that had been used in other types of cancer to inhibit tumour-sphere formation. 21,24,25 By using the annexin V/7-AAD assay, we found that SF treatment significantly increased early apoptosis in treated CSCs, which was equal to using 0.5 µg/ml CIS and greater than 1.3 µg/ml 5-FU. However, the combined treatment of SF and low doses of CIS or 5-FU led to increased apoptosis as compared with using a single chemotherapeutic drug or SF as a treatment.
These results suggested that SF acted through multiple mechanisms to target CSCs, and that strategy could reduce the chance for CSCs to develop resistance against SF. SF induction of apoptosis on CSCs was also reported with pancreas and prostate CSCs. 21,23 Our results demonstrated that the SF + CIS combination reduced tumour size that was formed by the inoculation of HNSCC-CSCs in the tongue of immunocompromised mice, as compared with mice treated with SF or CIS alone. All tumourbearing mice had decreased body weights when compared with the sham-treated group, and they were highly significant with the control group (treated with saline only). This could be explained by an increase in tumour size, which interfered with normal feeding habits, even with the use of a soft-food diet. SF biosafety was shown by H&E staining of the mice livers and kidneys, as there was no necrosis with SF alone or combined with CIS. Several studies reported similar biosafety profiles for SF combined with other drugs. 21,29,30 To our knowledge, we are the first to show that SF enhanced the cytotoxicity of CIS and 5-FU towards HNSCC-CSCs. Safety of SF on non-cancerous human stem cells Several studies demonstrated that SF had little-to-no toxicity on non-cancerous human (adult) cells. 13,22,31 In the current study, we additionally assessed SF effect on human stem cells. We demonstrated that a concentration of 3.5 µM SF and less, either used alone or combined with CIS or 5-FU, did not affect the viability of human stem cells. Also, a concentration of 3.5 µM SF did not affect the multipotential differentiation capacity of human periodontal and dental pulp stem cells. Several studies had equally reported that low doses of SF did not affect the viability of mesenchymal stem cells and protected them from carcinogens. [32][33][34] Molecular mechanism of SF-mediated targeting of HNSCC-CSCs Mechanistically, we recently demonstrated that SF enhanced the cytotoxicity of chemotherapy (CIS or 5-FU) against HNSCC by stimulation of the caspase-dependent apoptosis pathway. 13 In the current study, we reported comparable results on HNSCC-CSCs, such as SF increased the apoptotic effect of CIS and 5-FU on CSCs by inhibiting BCL2. Also, we found that SF increased the expression and activation of Caspase 3, both at the genomic and protein levels. Numerous other molecular mechanisms were suggested for the pro-apoptotic effect of SF, such as the cleavage of caspase-8 in pancreatic cancer, 35 the fragmentation of the DNArepairing protein poly (ADP-ribose) polymerase (PARP) and decreased expression of BCL2 in mammary, prostate and colon cancers. [36][37][38] Aldehyde dehydrogenase 1 (ALDH1) is a member of the aldehyde dehydrogenase family of cytosolic isoenzymes, which are highly expressed in many types of stem and progenitor cells. 39 Interestingly, ALDH1 + HNSCC cells showed a high self-renewal ability along with increased tumour formation, invasion and treatment resistance. 40 It was reported that ALDH1 stimulated tumour proliferation and survival by activating Akt and c-MYC through the regulation of retinoic acid formation. 41,42 The inhibition of tumour proliferation in our study might be explained partially by the reduction in ALDH1A1 gene expression. It was suggested that the dysregulation of selfrenewal pathways in CSCs, such as SMO, NOTCH1 and BMI1, could be the cause for CSC tumorigenicity and treatment resistance. 
[43][44][45] Studies have reported that chemotherapies using CIS and 5-FU might cause the selection of CSCs and increased the expression of self-renewal and drug resistance-related genes, like BMI1 46,47 or ALDH1A1, 21,48,49 which were also found in our study. Our in vitro experiments demonstrated that SF treatments prevented CIS and 5-FU to induce BMI1 and ALDH1A1 expression, and enhanced the downregulation of SMO, GLI1 and NOTCH1. Therefore, SF cotreatments contributed to the resensitisation of CSCs to chemotherapeutic drugs. Interestingly, a similar effect was reported in other cancer types, either with gemcitabine or cisplatin. 20,21 The octamer-binding transcription factor 4 (OCT4) was suggested to be the best indicator for stemness and maintenance of an undifferentiated state. 50 In a recent meta-analysis study, a strong correlation was found between OCT4 overexpression and poor overall survival of HNSCC patients. 51 SOX2 overexpression was also reported to affect the invasion and metastasis induction of laryngeal squamous cell carcinomas. 52 Our results showed that SF inhibited the expression of both SOX2 and OCT4. In conclusion, we demonstrated that SF strongly enhanced the cytotoxic effect of the chemotherapeutic agents CIS and 5-FU against HNSCC-CSCs. Combining SF with either CIS or 5-FU also decreased the expression of self-renewal and drug resistancerelated genes. Our data suggest that SF enhanced the effect mediated by chemotherapy, both in vitro and in vivo, and thus allowed a lowered dose of these chemotherapeutic agents. Combining SF to standard chemotherapy (CIS and 5-FU) may provide a better treatment modality option for the clinical setting. ADDITIONAL INFORMATION Ethics approval and consent to participate The animal research study was approved by the University Animal Care Committee at McGill University (Protocol #5330, www.animalcare.mcgill.ca). Consent to publish Not applicable. Data availability All data generated or analysed during this study are included in this published article.
2020-08-10T13:34:47.078Z
2020-08-10T00:00:00.000
{ "year": 2020, "sha1": "657601e2e5ad10bfc04fcea119df2a198ff8b00f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41416-020-1025-1", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "541b26f9980760946762b9401366e1c69be5c9b8", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
15437008
pes2o/s2orc
v3-fos-license
Management of Multiple Sclerosis in the Breastfeeding Mother Multiple Sclerosis (MS) is an autoimmune neurological disease characterized by inflammation of the brain and spinal cord. Relapsing-Remitting MS is characterized by acute attacks followed by remission. Treatment is aimed at halting these attacks; therapy may last for months to years. Because MS disproportionately affects females and commonly begins during the childbearing years, clinicians treat pregnant or nursing MS patients. The intent of this review is to perform an in-depth analysis into the safety of drugs used in breastfeeding women with MS. This paper is composed of several drugs used in the treatment of MS and current research regarding their safety in breastfeeding including immunomodulators, immunosuppressants, monoclonal antibodies, corticosteroids, and drugs used for symptomatic treatment. Typically, some medications are large polar molecules which often do not pass into the milk in clinically relevant amounts. For this reason, interferon beta is likely safe for the infant when given to a breastfeeding mother. However, other drugs with particularly dangerous side effects may not be recommended. While treatment options are available and some data from clinical studies does exist, there continues to be a need for investigation and ongoing review of the medications used in breastfeeding mothers. Introduction Multiple Sclerosis (MS) is a common neurological disease in young adults, affecting approximately 400,000 individuals in the United States and over 2 million people worldwide [1]. MS is an autoimmune disease characterized by both diffuse and localized inflammation, demyelination of neurons, and nonspecific brain and spinal cord damage [2]. The course of the disease is variable and patients commonly experience a period of clearly defined attacks followed by periods of complete or partial recovery. This type of MS is classified as Relapsing-Remitting MS (RRMS), and it accounts for nearly 85% of all cases [3,4]. During attacks, patients may experience deficits in any number of systems (motor, sensory, optic, sphincteric, etc.) [5]. Treatment of MS is aimed at halting attacks when they occur. Treatments usually last for years. Nonspecific immunosuppressive agents are the mainstay of therapy [6]. The future of MS research will be aimed at repairing and reversing damage to the myelin sheath; however, the understanding of disease etiology is still limited [7]. As with most autoimmune diseases, MS disproportionately affects females with a threefold increased risk as compared to males. The common age of onset is during the third and fourth decades of life, coincidentally a woman's childbearing years [2,8]. Due to medical advancements in the past two decades, clinicians have become more supportive of young adults with MS who choose to start a family. Because of the development of disease modifying drugs (DMDs), healthcare professionals have the ability to reduce the accumulation of CNS damage and resulting disability by extending the time between relapses. Women with MS have become more confident in their ability to safely and successfully become pregnant and have a healthy child. The therapeutic management of MS in the pregnant woman has been adequately covered in recent years [9,10]. However, an in-depth investigation into the safety of DMDs in breastfeeding women and their infants is limited. 
Given that approximately 72% of women in the USA choose to breastfeed and up to 30% of women with MS may relapse within the first 3 months postpartum, the safety of medications used to treat MS while breastfeeding is of paramount concern to mothers and their infants [10,11]. Transfer of Medications into Breast Milk While the exact nature of the transfer of DMDs into breast milk is still largely unknown, we do have a reasonable system for estimating drug transfer in some cases [11,12]. Newer agents often lack extensive research, but the relative risk to the infant of most medications can be estimated to some degree. While the benefits of breastfeeding an infant are enormous, the risk of incidental exposure to a certain drug may outweigh the benefits of breastfeeding. This risk assessment is the subject of this review. Finally, the absolute cessation of breastfeeding should only be recommended with drugs that have extremely hazardous side effects [13]. The transfer of a drug into breast milk is dependent upon many factors. These include molecular weight, protein binding, pKa, lipid solubility, volume of distribution, and the presence of any active transport mechanisms [14]. The drug's molecular weight is perhaps the most important determinant of its transfer into breast milk [15]. In general, large polar molecules often do not pass into milk in clinically relevant amounts. A drug's protein binding is also relevant because drugs that are highly protein bound are generally unable to pass into milk [16]. Drugs with high volumes of distribution (Vd) are often concentrated in body tissues outside the plasma and produce limited plasma levels; therefore, they are often considered safer for breastfeeding mothers. Oral ingestion of a drug in milk is followed by digestion in the gut and first-pass metabolism through the liver. Drugs with a high first-pass effect are often sequestered in the liver and fail to produce high plasma levels. As the infant ages, drug metabolism and first-pass effect increase [17]. Local effects on the gastrointestinal tract of the infant should be considered when relevant. Other risk factors that should be considered are age (<4 months), prematurity or complications during delivery, gastrointestinal anomalies, and the presence of metabolic problems in the infant. Younger infants are characteristically at higher risk than older (>9 months) infants. The Relative Infant Dose (RID) is another frequently used tool to evaluate clinical risk to the infant. The RID is calculated using the mother's dose, the concentration of drug in milk, and the volume of breast milk consumed by the infant. In infants that exclusively breastfeed, the RID is an estimate of the amount of the mother's daily medication that the infant receives via milk. For example, if the mother takes 100 mg of a drug with an RID of 1%, the infant will be exposed to about 1 mg. Most experts in breastfeeding medicine consider an RID of 10% to be a reasonable cutoff between safe and unsafe, assuming that the drug in question does not have extraordinary toxicity associated with it [18]. However, each case should be carefully considered before making a clinical recommendation. Multiple Sclerosis and Breastfeeding In 2002, a study suggested that mothers with MS who wished to breastfeed their newborns could safely do so; however, the authors advised against the use of immunomodulating drugs while nursing and suggested postponing breastfeeding for three months after the last dose of any disease modifying agent [19].
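The RID arithmetic just described is simple enough to write down explicitly. A minimal sketch, assuming the conventional intake figure of roughly 150 mL/kg/day for an exclusively breastfed infant (a standard lactation-pharmacology assumption, not stated in the text) and a hypothetical 70 kg mother:

def relative_infant_dose(milk_conc_mg_per_l, maternal_dose_mg, maternal_wt_kg,
                         infant_milk_l_per_kg_day=0.150):
    """RID (%) = infant dose (mg/kg/day) / maternal dose (mg/kg/day) * 100.

    0.150 L/kg/day is the usual assumption for an exclusively breastfed infant.
    """
    infant_dose = milk_conc_mg_per_l * infant_milk_l_per_kg_day   # mg/kg/day
    maternal_dose = maternal_dose_mg / maternal_wt_kg             # mg/kg/day
    return 100.0 * infant_dose / maternal_dose

# Worked example echoing the text: a 100 mg/day drug with an RID near 1%.
print(relative_infant_dose(milk_conc_mg_per_l=0.095, maternal_dose_mg=100,
                           maternal_wt_kg=70))  # ~1%, well below the 10% cutoff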
Several studies have attempted to assess, with conflicting results, whether or not breastfeeding itself affects the disease or the rate of relapse in MS patients. A recent meta-analysis concluded that women who breastfeed are half as likely to experience a relapse compared to women who do not breastfeed. While this study cites heterogeneity and the adequacy of breastfeeding recall as limitations, it also mentions that women with severe disease might be less likely to breastfeed due to disease limitations [20]. Further research is needed in order to adequately determine the existence of any association between breastfeeding and MS relapse. Many studies have failed to consider the differences between exclusive breastfeeding and breastfeeding supplemented with formula feeds. It is well known that exclusive breastfeeding results in different hormonal changes in women compared to those who only partially breastfeed and use formula supplements. One study reported a beneficial effect of exclusive breastfeeding on Multiple Sclerosis. This study also suggests that lactational amenorrhea could contribute to a reported decrease in relapse rate [21]. Another study suggests that patients who are not on immunomodulatory therapy are more likely to experience MS exacerbation within three months of giving birth. This study concluded that there was no significant effect of breastfeeding on the patient's risk of experiencing a relapse and that the relapse rate returned to prepregnancy levels at three months postpartum regardless [22]. A multicenter study following 423 pregnancies in women with MS over the course of six years concluded that postpartum relapses were more likely found in women with higher disease activity before pregnancy. The study recommended early resumption of Disease Modifying Drugs in patients with severe disease [23]. Considering current evidence, becoming pregnant and breastfeeding are both possible and safe options for women with MS. However, for women actively experiencing severe disease symptoms, treatment should probably be restarted. While the safety of disease modifying drugs in pregnancy is somewhat understood, their safety in the infant following breastfeeding has not been well established [24]. Medications Used for Multiple Sclerosis Interferon beta is a first-line immunomodulatory treatment for relapsing-remitting MS. Still, its exact mechanism in the treatment of MS is unknown. Interferon beta 1b is also used to treat secondary progressive MS. Treatment with interferon beta produces side effects of flu-like symptoms, injection site reactions, headache, fatigue, and elevated ALT levels [25]. Interferon beta 1a's therapeutic effects are noted within 12 hours after the initial dose. This effect lasts 4 days. Its half-life is approximately 19 hours, with a time-to-peak serum concentration of 5 hours. The half-life of interferon beta 1b ranges from 8 minutes to 4.3 hours. It requires 1-8 hours to reach peak serum concentrations. Interferon beta 1a is not as well studied as interferon beta 1b [26]. Small amounts are excreted in breast milk. In a recent study of the transfer of interferon beta 1a in six mothers receiving 30 µg per week, average milk concentrations were 46.7, 97.4, 66.4, 77.5, 103.1, 108.3, 124, and 87.9 pg/mL at 0, 1, 4, 8, 12, 24, 48, and 72 hours, respectively, after dosing. The estimated Relative Infant Dose would be 0.006% of the maternal dose. Interferons have a large molecular weight, bind to T cells, are distributed out of the plasma compartment, and are relatively nontoxic. No adverse events have been reported in the nursing infant.
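Those reported milk levels support a quick order-of-magnitude estimate of infant exposure. The sketch below time-averages the concentrations by trapezoidal integration and applies the standard 150 mL/kg/day intake assumption; both the intake figure and the resulting estimate are illustrative, not the study's own calculation:

import numpy as np

# Interferon beta 1a milk levels from the study quoted above (pg/mL == ng/L).
t = np.array([0, 1, 4, 8, 12, 24, 48, 72])                 # hours after dosing
c = np.array([46.7, 97.4, 66.4, 77.5, 103.1, 108.3, 124.0, 87.9])

c_avg_ng_per_l = np.trapz(c, t) / (t[-1] - t[0])           # time-averaged level
infant_dose = c_avg_ng_per_l * 0.150                       # ng/kg/day at 150 mL/kg/day
print(f"average milk level ~{c_avg_ng_per_l:.0f} ng/L -> "
      f"~{infant_dose:.0f} ng/kg/day to the infant")       # trace-level exposure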
Interferon beta is currently classified as probably compatible with limited data for breastfeeding mothers [27, 28].

Fingolimod. Fingolimod is rapidly metabolized to fingolimod phosphate, which inhibits sphingosine-1-phosphate receptors; this may reduce lymphocyte transfer into the central nervous system. Fingolimod is an immune modulator and prodrug that binds to the surface of lymphocytes and redirects them from the blood and graft sites to the lymph nodes, thus reducing the immune response in patients with multiple sclerosis. It reportedly assists in the repair of brain glial and precursor cells following injury. It slows the progression of disability and reduces the frequency and severity of symptoms in patients with MS. Side effects of therapy include headache, cough, macular edema, weakness, dizziness, hypotension, bradycardia, diarrhea, flu-like symptoms, increased hepatic enzyme levels (ALT and AST), and back pain. Hourly monitoring is required after the first dose of fingolimod to detect bradycardia and hypotension [29]. Fingolimod has a V_d of 1200 L and is also highly protein bound in the plasma compartment. Oral bioavailability is high at 93%. Elimination half-life is 6 to 9 days, and peak concentration levels are reached 12-16 hours after dosing [30]. It is unknown if fingolimod passes into human breast milk, but it is known to be excreted into rat milk. Due to its high volume of distribution and high protein binding, levels in human milk will probably be low. Although it is unlikely to be found in breast milk in significant amounts, it is categorized as hazardous at this time because of its significant risk of cardiovascular side effects.

Teriflunomide. Teriflunomide is a pyrimidine synthesis inhibitor and may exert its effects against MS by reducing the number of activated lymphocytes in the central nervous system. Adverse effects of therapy include headache, hypophosphatemia, neutropenia, lymphocytopenia, increased hepatic enzyme levels (ALT), and influenza infection [31]. Teriflunomide has a volume of distribution of 11 L, and it is extensively protein bound at 99%. Its half-life is 18-19 days, with peak serum concentration at 1-4 days. No human studies have been conducted regarding this drug's ability to transfer into human milk. Animal studies have identified teriflunomide in rat milk after a single dose, but rodent studies are not necessarily indicative of human transfer. Given this drug's toxicity in adults, its likely presence in milk, and its long half-life, teriflunomide should be used with great caution in breastfeeding mothers. It is categorized as hazardous with no data.

Glatiramer Acetate. Glatiramer acetate is a synthetic polymer of the amino acids L-tyrosine, L-glutamate, L-alanine, and L-lysine. Glatiramer acetate is similar in composition to the myelin sheath of nerves. It is believed to exert its effects by binding to the major histocompatibility class II molecule. Side effects in patients receiving therapy include chest pain, diaphoresis, skin rash, injection site residual mass, dyspnea, and flu-like symptoms. Small amounts of the drug are believed to enter the lymphatic circulation. Its molecular weight ranges from 4,700 to 13,000 Daltons. No data are available on its transfer into human milk but, due to its large molecular weight, transfer is very unlikely [32]. It is not known to cross the blood-brain barrier, and it is unlikely to cross into breast milk.
After oral ingestion, it is depolymerized into individual amino acids, so oral bioavailability in the breastfed infant would be nil [33]. It is currently classified as probably compatible with no data.

Mitoxantrone. Mitoxantrone intercalates into DNA and inhibits DNA topoisomerase II, resulting in decreased DNA synthesis and replication. It also alters RNA synthesis. Side effects of mitoxantrone therapy include edema, arrhythmia, alopecia, amenorrhea, sepsis, gastrointestinal disturbances, headache, fever, fatigue, cardiotoxicity, mucositis, anorexia, changes in liver and renal function, leukopenia, neutropenia, thrombocytopenia, pruritus, phlebitis, and increased frequency of infection [34]. Mitoxantrone has poor oral absorption with a large volume of distribution of 14 L/kg. Mitoxantrone is distributed into peripheral tissues and binds to proteins. The mean alpha half-life of mitoxantrone is six to twelve minutes, the mean beta half-life is 1.1 to 3.1 hours, and the mean gamma (terminal or elimination) half-life is 23 to 215 hours (median = 75 hours). Distribution to tissues is extensive: the steady state volume of distribution exceeds 1,000 L/m². In a study of a patient who received 3 treatments of mitoxantrone (6 mg/m²) on days 1 to 5, mitoxantrone levels in milk measured 120 ng/mL just after treatment and dropped to a stable level of 18 ng/mL for the next 28 days [35]. Assuming that a mother was breastfeeding, these levels would provide about 18 µg per liter of milk consumed after the first few days following exposure to the drug. In addition, it would be sequestered for long periods in the infant as well. The RID is estimated to be 2-12% of the maternal dose. Due to its significant side effect profile, it is currently classified as hazardous with limited data and should probably not be used by breastfeeding mothers [36].

Dimethyl Fumarate. Dimethyl fumarate exerts its effects by modifying the cellular response to oxidative stress. The exact mechanism of action is unknown, although it is believed to produce anti-inflammatory effects [37]. Adverse reactions associated with dimethyl fumarate include flushing, abdominal pain, infection, diarrhea, changes in liver enzymes, nausea, proteinuria, rash, lymphopenia, and contact dermatitis [38]. Dimethyl fumarate has a volume of distribution of 53 to 73 L and an active metabolite with a low molecular weight (129 Daltons). It is metabolized very quickly, with a half-life of approximately 1 hour; the time required to reach peak serum concentration is 2 hours [39]. Its rapid metabolism may limit transfer into human milk; however, the active metabolite (monomethyl fumarate) has a low molecular weight of only 129 Daltons and low protein binding (27-45%), so some transfer into milk is probable. It is categorized as possibly hazardous with no data, due to its significant side effect profile. Until more is known about its transfer into milk, caution is recommended for its use in breastfeeding mothers.

Cladribine. Cladribine, a prodrug that is phosphorylated intracellularly, is an immunosuppressive agent that induces irreparable DNA damage within cells and induces apoptosis of lymphocytes. Side effects of therapy include fever, fatigue, headache, rash, appetite suppression, neutropenia, anemia, thrombocytopenia, abnormal breath sounds, and infections. Local injection site reactions have also been reported [40].
Cladribine has a volume of distribution of about 9 L/kg, indicating extensive sequestration in body tissues, and is 20% bound to protein, with a relatively low molecular weight of 285 Daltons. Its half-life is about 5 hours, with some estimates as high as 19 hours. It is unknown if cladribine enters breast milk, but it enters cerebrospinal fluid significantly (25% of plasma levels), suggesting it might easily transfer into human milk as well. It is rated as hazardous in breastfeeding mothers, and it is recommended to withhold breastfeeding for 48 hours after a dose of cladribine, and longer in the event of maternal renal dysfunction.

Azathioprine. Azathioprine is a derivative of mercaptopurine. It exerts its effects via its metabolites, which halt DNA replication and purine synthesis. Side effects of therapy with azathioprine include malaise, nausea, vomiting, leukopenia, and increased susceptibility to infection [41, 42]. Azathioprine is 30% bound to plasma proteins, and its half-life is approximately 2 hours. In two mothers receiving 75 mg azathioprine, the concentration of 6-MP in milk varied from 3.5-4.5 µg/L to 18 µg/L [43]. The authors concluded that these levels would be too low to produce clinical effects in a breastfed infant. In another study of two infants who were breastfed by mothers receiving 75-100 mg/day azathioprine, both infants had normal blood counts, no increase in infections, and above-average growth rates [44]. Four mothers who were receiving 1.2-2.1 mg/kg/day of azathioprine throughout pregnancy and continued postpartum were studied, and neither 6-TGN nor 6-MMPN could be detected in the exposed infants [45]. Four case reports described mothers taking between 50 and 100 mg/day of azathioprine and reported no adverse events in any of the infants [46]. Ten women at steady state on 75 to 150 mg/day azathioprine provided milk samples on days 3-4, days 7-10, and day 28 after delivery; 6-MP and 6-TGN were undetectable in the infants' blood [47]. Another study of three mothers taking azathioprine while breastfeeding (doses of 100-175 mg) reported normal blood cell counts in all three infants [48]. In a group of 8 lactating women who received azathioprine (75-200 mg/day), levels in milk ranged from 2 to 50 µg/L; the authors estimated the infant's dose to be <0.008 mg/kg/day [49]. In a 31-year-old mother with Crohn's disease being treated with 100 mg/day azathioprine, peripheral blood levels of 6-MP and 6-TGN in the infant were undetectable at day 8 and after 3 months of therapy [50]. In a recent study of the long-term follow-up (median 3.3 years) of fetal and breastfeeding exposure to azathioprine (n = 11 infants), there were no differences in rates of infectious disease in azathioprine-treated groups compared to nontreated controls [51]. In summary, azathioprine enters breast milk with an RID of 0.07-0.3% and is categorized as probably compatible with limited data. Although no adverse effects have been reported, breastfed infants of mothers taking azathioprine should be monitored for signs of leukopenia, pancreatitis, and immunosuppression.
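As a quick consistency check on the azathioprine figures, a sketch under stated assumptions (the 150 mL/kg/day intake convention from the RID sketch above and an illustrative 60 kg maternal weight, neither taken from the cited studies):

# Azathioprine check: the highest reported milk level (50 µg/L) and an
# assumed 60 kg mother on 150 mg/day land at the top of the quoted range.
infant_dose = 0.050 * 0.150        # mg/kg/day from a 50 µg/L milk level
maternal_dose = 150.0 / 60.0       # mg/kg/day for a 150 mg/day dose
rid = 100.0 * infant_dose / maternal_dose  # = 0.3%, top of the 0.07-0.3% range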
Cyclophosphamide. Cyclophosphamide is an alkylating agent that prevents cell division by cross-linking DNA strands and decreasing DNA synthesis. Cyclophosphamide is a potent immunosuppressant. It is well absorbed orally and has a volume of distribution of 30 to 50 L. Its half-life is 3-12 hours, and time-to-peak serum concentration is 1-3 hours after a dose. Side effects of cyclophosphamide therapy are significant and include alopecia, amenorrhea, gonadal suppression, abdominal pain, hemorrhagic cystitis, anemia, and dose-dependent leukopenia [52]. Cyclophosphamide is known to enter breast milk. In one case of a mother who received 800 mg/week of cyclophosphamide, the infant was significantly neutropenic following 6 weeks of exposure via breast milk [53]. In another mother, who was receiving 10 mg/kg intravenously daily for seven days for a total of 3.5 g, major leukopenia was also reported in her breastfed infant. Thus far, no reports have provided quantitative estimates of cyclophosphamide in milk [53, 54]. Regardless, cyclophosphamide is categorized as hazardous, with no data, and it should be avoided in breastfeeding mothers. Mothers should withhold breastfeeding for a period of at least 72 hours after the last dose.

Methotrexate. Methotrexate is used off-label for the treatment of MS. It inhibits dihydrofolate reductase and prevents DNA synthesis. Adverse effects associated with methotrexate therapy include chest pain, headache, dizziness, alopecia, skin photosensitivity, cerebral thrombosis, azotemia, thrombocytopenia, aplastic anemia, and increased liver enzymes [55]. Up to 3-6 weeks may be required to produce a significant therapeutic effect. Methotrexate has a volume of distribution of 0.18 to 0.8 L/kg and is 50% bound to protein. Its half-life is 3-10 hours for lower doses and 8-15 hours for higher doses; in children, the half-life is 1-6 hours. Time-to-peak serum concentration ranges from 1 to 2 hours after administration [56]. Methotrexate is secreted in breast milk in small amounts [57]. In 1972, a case report was published concerning a woman receiving a 22.5 mg/day oral dose of methotrexate. Two hours after the dose, the methotrexate concentration in breast milk was 2.6 µg/L, and the cumulative excretion of methotrexate in milk over the first 12 hours after oral administration was only 0.32 µg [58]. In 2014, a second case report was published about a woman who was taking 25 mg/week of methotrexate for rheumatoid arthritis. This woman reinitiated her therapy 151 days postpartum because her disease worsened. The maternal serum concentration was 0.92 µM one hour after her dose was given; breast milk samples taken at 2, 12, and 24 hours after her dose were 0.05 µM (detectable but below the level of quantification). The authors estimated the average infant dose to be 3.4 µg/kg/day (22.7 µg/L). This infant was 5 months old when methotrexate was reinitiated in the mother and continued to breastfeed for another 9 months. No adverse events were reported in the infant [59]. Methotrexate does enter breast milk, with an RID of 0.1 to 0.9%. It is categorized as possibly hazardous with limited data. If the mother breastfeeds more than 24 hours after the last dose of methotrexate, the infant should be monitored for vomiting, diarrhea, and blood in the vomit, stool, or urine [60]. In extreme cases where the infant cannot be withdrawn from the breast, the infant could be supplemented with folic acid or L-methylfolate.
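The 2014 methotrexate figures are mutually consistent under the same intake convention; in the sketch below, the 60 kg maternal weight is again an illustrative assumption, not a figure from the case report:

# Methotrexate check: a 22.7 µg/L milk level and the 150 mL/kg/day intake
# reproduce the authors' estimated infant dose, and an assumed 60 kg
# mother gives an RID inside the quoted 0.1-0.9% range.
infant_dose = 22.7 * 0.150          # = 3.4 µg/kg/day, matching the report
maternal_dose = 25000.0 / 60.0      # ~417 µg/kg on the weekly dosing day
rid = 100.0 * infant_dose / maternal_dose  # ~0.8%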
Natalizumab. Natalizumab is a monoclonal IgG4κ antibody that binds to the α4-subunit of α4β1 and α4β7 integrin molecules. Integrins are responsible for adhesion and subsequent migration of cells from the bloodstream to the site of inflammation in tissues. Natalizumab prevents the transmigration of leukocytes to inflamed tissues by binding to the α4-subunit on the surface of leukocytes. In multiple sclerosis, it is believed to block T lymphocyte migration into the CNS, resulting in a decreased rate of relapse. Natalizumab is primarily indicated for patients with moderate-to-severe relapsing forms of MS. Side effects of natalizumab therapy include headache, fatigue, depression, rash, gastrointestinal disturbances, urinary tract infections, flu-like symptoms, and progressive multifocal leukoencephalopathy [61]. The volume of distribution of natalizumab is 3.8-7.6 L, and its half-life is 7-15 days. Due to the prolonged time required to achieve steady state (28 weeks), the actual concentration in breast milk at steady state is still uncertain at this time. One recent study demonstrated rising levels of natalizumab in breast milk in one patient, with an estimated RID of 2-5%. However, this study only followed the infants for 50 days, and natalizumab requires up to 28 weeks to achieve steady state [62]. Therefore, the authors suggest that further research is needed to evaluate levels at steady state and risks to the breastfed infant. Although no adverse effects were reported in the infant, it is categorized as probably compatible but with limited data.

Alemtuzumab. Alemtuzumab is a recombinant DNA-derived humanized monoclonal antibody that depletes T and B lymphocytes by targeting CD52 glycoproteins on T and B cells, lymphocytes, monocytes, macrophages, and natural killer cells. It is indicated for patients with relapsing MS who have had an inadequate response to other products. Adverse effects of therapy with alemtuzumab are significant and include autoimmune diseases, fever, headache, insomnia, paresthesia, skin rash, thyroid dysfunction, and urinary tract infections. The most significant side effect is lymphopenia, and infusion-related reactions and infections are frequently reported [63, 64]. The volume of distribution differs between the formulations of alemtuzumab: the intravenous formulation has a V_d of 0.18 L/kg, while the subcutaneous formulation has a V_d of 4.1 L. The half-life of the intravenous formulation is 11 hours, and that of the subcutaneous formulation is approximately 2 weeks. Because it is a monoclonal antibody with a large molecular weight (150 kD), alemtuzumab is unlikely to enter breast milk in clinically relevant amounts. Maternal IgG is known to transfer into breast milk only at very low levels; nevertheless, because of its significant side effect profile, alemtuzumab is categorized as possibly hazardous.

Rituximab. Rituximab is a chimeric (human/mouse) monoclonal antibody that targets CD20 receptors on B lymphocytes. Adverse reactions seen with rituximab therapy include hypertension, fever, fatigue, headache, insomnia, rash, cytopenia, and increased hepatic ALT levels. Neuropathy and muscle spasms have also been reported, and infusion-related reactions may also be seen with rituximab therapy [65]. Rituximab has a volume of distribution of 3.1 to 4.1 L, and its half-life ranges from 18 to 32 days; it remains in the plasma for 3-6 months after the last dose. Although it is unlikely to enter breast milk due to its large molecular weight, the long half-life and significant side effect profile have led it to be categorized as possibly hazardous with no data.

Daclizumab. Daclizumab is a humanized monoclonal antibody that targets CD25, the α subunit of the IL-2 receptor. It has a relatively long half-life, ranging from 21 to 25 days in healthy individuals, and is highly bioavailable after subcutaneous injection.
Hypersensitivity is a known side effect, and patients treated with daclizumab often experience cutaneous adverse effects. Other side effects include arterial hypertension, dyspnea, and edema [66]. With a low volume of distribution of 2.5 L, daclizumab probably remains in the plasma compartment for a significant amount of time [67]. It has a molecular weight of 144 kD and therefore has somewhat limited potential to pass into breast milk [68]. Because it is unknown if daclizumab enters breast milk, this drug should be avoided pending further investigation.

Methylprednisolone. Methylprednisolone is a typical corticosteroid. Corticosteroids exert their effects by regulating gene expression and modifying carbohydrate, protein, and lipid metabolism. The therapeutic effects of methylprednisolone in MS are believed to be due to its anti-inflammatory properties [69, 70]. Although rare, significant side effects may occur with prolonged corticosteroid use, including cardiovascular (arrhythmia, bradycardia, and cardiomegaly), neurological (delirium, depression, and hallucinations), dermatological (acne, alopecia), endocrine (adrenal suppression, diabetes mellitus), and gastrointestinal (abdominal distention, pancreatitis, and peptic ulcer disease) effects. Oral methylprednisolone reaches its peak effect 1-2 hours after administration, while IM administration can take up to 4-8 days [71, 72]. Methylprednisolone's volume of distribution is 0.7 to 1.5 L/kg. The half-life of methylprednisolone is about 3 hours, and it is reduced in obese individuals [73]. In a recent case report of a 36-year-old woman who received methylprednisolone 1000 mg IV daily for 3 days for treatment of relapsing MS, the authors found that the infant would ingest only 0.164 to 0.207 mg/kg/day of methylprednisolone. Levels in milk were low and dissipated by 8-12 hours. Oral methylprednisolone is secreted into breast milk with an RID of only 0.4 to 3%. Based on these data, it would seem reasonable to continue breastfeeding while receiving a short course of high-dose methylprednisolone; if the mother wishes to further limit infant exposure, she should interrupt breastfeeding for 8-12 hours after high intravenous doses [74]. Methylprednisolone use in breastfeeding women is probably compatible, although with limited data.

Symptomatic Treatments

Dalfampridine. Dalfampridine is a potassium channel blocker that is approved to improve gait in patients with disability caused by MS. Dose-related seizures have been reported [13]. While it is a broad-spectrum potassium channel blocker, it apparently does not increase the QTc interval. Other adverse reactions to therapy include urinary tract infections, anaphylaxis, insomnia, dizziness, headache, nausea, and vomiting. Dalfampridine has a limited volume of distribution of 2.6 L/kg. It is not bound to proteins, and its absorption is rapid and complete. Its half-life is 5-6 hours but is prolonged in patients with renal impairment, and the drug requires 3-4 hours to reach peak plasma concentrations. Because dalfampridine has a small molecular weight of only 94 Daltons, it is probably a highly risky product to use in breastfeeding mothers [75]: the kinetics of this product are ideal for entering milk at high levels, and it is well known to be highly toxic in some animal species (birds). Caution is recommended, and this drug should be avoided in breastfeeding mothers pending further investigation [76].

Baclofen. Baclofen inhibits spinal cord reflexes and reduces spasticity.
While its exact mechanism of action is unknown, it apparently acts as an agonist at presynaptic GABA-B receptors at the level of the spinal cord. It is approved for the treatment of spasticity in MS patients. Side effects of therapy include drowsiness, excitement, dry mouth, urinary retention, tremor, confusion, headache, hypotension, rigidity, and dilated pupils. Muscle rigidity, exaggerated spasticity, multiple system organ failure, and death have been reported. Baclofen can be administered intrathecally or intravenously. Intrathecal onset of action is quick, about 30 minutes to 1 hour, while IV onset requires 6-8 hours. Peak effect is attained at 4 hours with intrathecal administration, 24-48 hours with IV infusion, and 2 hours with oral administration. Baclofen is 30% protein bound, with a molecular weight of only 214 Daltons and a half-life of 2-4 hours [58]. Small amounts of baclofen are known to be secreted into milk. In one mother given a 20 mg oral dose, total consumption by the infant (via milk) over a 26-hour period was estimated to be 22 µg, about 0.1% of the maternal dose. Milk levels ranged from 0.6 µmol/L down to 0.052 µmol/L at 26 hours. The maternal plasma and milk half-lives were 3.9 hours and 5.6 hours, respectively. It is unlikely that baclofen administered intrathecally would transfer into milk in clinically relevant quantities [77]. The RID is estimated at 6.9% of the maternal dose. No adverse effects were seen in the infant of the breastfeeding mother, and baclofen is categorized as probably compatible with limited data.

Conclusion

Women undergoing treatment for MS will often be faced with the decision as to which drugs are safe while breastfeeding. Table 1 summarizes the medications used in MS patients, along with their descriptions, RIDs, and pertinent clinical considerations. Healthcare professionals should encourage women with MS to breastfeed just as they would encourage healthy mothers, once each individual patient's health status and medications have been reviewed. Because it is likely that a woman will suffer a relapse in the first few months postpartum, the decision to continue or withhold breastfeeding should be made by the physician together with the patient after weighing the risks and benefits. With an appropriate therapeutic approach, breastfeeding does not have to be stopped in all cases. Each clinician should help the patient make the correct decision using accurate information provided here and in other sources.
2017-09-26T08:57:15.781Z
2016-02-04T00:00:00.000
{ "year": 2016, "sha1": "5059ee056b1373a5295ff0140889d35074edbdad", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/msi/2016/6527458.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "707e2b87ba8e021380c382922fdcf2a357c6aee7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
249210116
pes2o/s2orc
v3-fos-license
The thermodynamics of stellar multiplicity: dynamical evolution of binary star populations in dense stellar environments

We recently derived, using the density-of-states approximation, analytic distribution functions for the outcomes of direct single-binary scatterings (Stone & Leigh 2019). Using these outcome distribution functions, we present in this paper a self-consistent statistical mechanics-based analytic model obtained using the Fokker-Planck limit of the Boltzmann equation. Our model quantifies the dominant gravitational physics, combining both strong and weak single-binary interactions, that drives the time evolution of binary orbital parameter distributions in dense stellar environments. We focus in particular on the distributions of binary orbital energies and eccentricities. We find a novel steady state distribution of binary eccentricities, featuring strong depletions of both the highest and the lowest eccentricity binaries. In energy space, we compare the predictions of our analytic model to the results of numerical N-body simulations, and find that the agreement is good for the initial conditions considered here. This work is a first step toward the development of a fully self-consistent semi-analytic model for dynamically evolving binary star populations in dense stellar environments due to direct few-body interactions.

INTRODUCTION

Moore's Law is dead; the future will bring improved efficiency and speed to the computational sciences, but at a slower rate than before (e.g. Wang et al. 2015, 2016; Bonetti et al. 2020). Consequently, the demand for alternative models that are independent of computational limitations (e.g., analytic calculations) is becoming increasingly urgent for the first time in over half a century. Galactic and extragalactic globular clusters (GCs) are an example of an astrophysical system predominantly modeled today via complex numerical simulations. One particular challenge in studying GCs is related to their collisional nature; central stellar densities are so high (>10^5 M_⊙ pc^−3) that direct encounters between single and binary stars occur frequently. The exact rate depends on the host cluster properties but, for the densest GCs, the time-scale for direct single-binary and binary-binary encounters to occur is of order 1-10 Myr (e.g. Leigh & Sills 2011; Geller & Leigh 2015). These two timescales are roughly equal for binary fractions f_b ∼ 10% (Sigurdsson & Phinney 1993; Leigh & Sills 2011), such that the rate of single-binary interactions dominates over that of binary-binary interactions in clusters with f_b ≲ 10%. These multibody interactions are expensive to resolve in direct N-body simulations, but play a key role in (i) determining the overall cluster evolution, due to the release of gravitational potential energy via "binary burning" (as first suggested by Hénon 1961), and (ii) producing exotic sources of electromagnetic radiation (Leigh, Sills, & Knigge 2011; Ivanova et al. 2006, 2008). At present, a number of highly sophisticated computational tools have been developed to simulate the time evolution of dense stellar systems, including direct interactions involving binary stars. Among the most successful of these are N-body simulations (e.g. Aarseth & Lecar 1975; Aarseth 1985, 1999, 2003), which calculate directly the gravitational acceleration exerted on every particle in the system, summed over all other particles.
The code then propagates the system forward through time using an appropriately chosen time-step, and step-by-step the cluster is evolved. N-body simulations originated over half a century ago (e.g. von Hoerner 1960, 1963; Aarseth 1963). They have been evolving ever since, growing increasingly sophisticated over time. Many new software and hardware techniques have been introduced and incorporated, including three-body regularization (Aarseth & Zare 1974), chain regularization (Mikkola & Aarseth 1993, 1998), the Hermite scheme (Makino & Aarseth 1992), the GRAPE hardware (Sugimoto et al. 1990; Makino 1996; Nitadori & Aarseth 2012) and, more recently, the introduction of GPUs for accelerated computing. Due to the increased computational expense, N-body simulations struggle to simulate self-gravitating systems with large particle numbers (this includes the largest globular clusters, but also most nuclear star clusters) and high binary fractions. For this general category of initial conditions (i.e., massive clusters with high binary fractions), long simulation run times, of order a year, can be needed to complete even a single simulation (e.g. Wang et al. 2015, 2016). Monte Carlo (MC) simulations for GC evolution have also proven highly successful, covering a range of parameter space inaccessible to more computationally expensive N-body simulations. MC models rely on statistical approximations to calculate the time-dependent diffusion of energy throughout the cluster due to two-body relaxation. Consequently, MC models are able to handle much larger particle numbers, including larger numbers of binaries, and hence are able to simulate more massive and denser GCs than are N-body models, at a significantly reduced computational expense. MC models are incredibly fast; they are able to evolve a million cluster stars for a Hubble time in a matter of days (roughly two orders of magnitude faster than state-of-the-art N-body simulations). The MC method goes back to the pioneering works of Hénon (1971) and Spitzer (1971), with many other researchers later building upon these earliest ideas (e.g. Shapiro & Marchant 1978; Stodolkiewicz 1982, 1986; Giersz 1998, 2001). Modern MC codes supplement their diffusive treatment of bulk cluster evolution with embedded few-body integrators, most notably FEWBODY (Fregeau et al. 2004). These are typically used to treat direct three- (i.e., single-binary) and four-body (i.e., binary-binary) interactions in MC simulations (Giersz et al. 2008; Downing et al. 2010; Leigh et al. 2015). As a result, MC models are useful for simulations of massive star clusters, capturing the time evolution of temporarily bound and highly chaotic three- and four-body systems. These are thought to be the main source of stellar collisions and/or mergers in dense star clusters and an important formation channel for many exotic stellar systems (Leonard 1989; Pooley & Hut 2006; Sills et al. 1997, 2001; Leigh & Sills 2011; Leigh & Geller 2012). In spite of these successes, MC models are still limited, in that they cannot simulate small particle numbers (i.e., N ≲ 10^4) (e.g. Giersz 1998, 2001; Kremer et al. 2020). Thus, we are still lacking a single tool capable of covering the entire range of parameter space relevant to real star clusters, ranging from open to globular and even nuclear clusters.
While N-body and MC simulations often find good agreement for bulk cluster properties, detailed comparisons find areas of disagreement, such as phases of deep core collapse. The assumptions of existing MC codes also prevent simulation of some clusters of interest, for example those with net rotation (which can accelerate core collapse, possibly increasing the rates of exotica production; Ernst et al. 2007) or those where vector resonant relaxation has created strong deviations from spherical symmetry (Szölgyén, Meiron, & Kocsis 2019). Such clusters nonetheless contain collisionally evolving binaries, and it would be useful to have a computationally efficient tool with which to model them. Another limitation of MC codes is that three- and four-body interactions, along with stellar and binary evolution, are assumed to occur in isolation, and so do not allow for the interruption of ongoing interactions and other perturbative effects that occur in a live star cluster environment (e.g. Geller & Leigh 2015; Leigh et al. 2016c). For example, binaries are continually perturbed by the distant flybys of other stars in the cluster. These perturbations happen at every time-step in an N-body code, often leading to random walks in the orbital eccentricity that drive mergers. MC codes do not capture this aspect of the N-body dynamics (Giersz 1998; Giersz et al. 2008; Kremer et al. 2020), again suggesting the need for an additional efficient computational tool. Beyond the successes and limitations of the aforementioned computational tools for simulating GC evolution, we are still lacking a robust and transparent physical model that can be used to understand the dominant physical processes dynamically re-shaping the properties of binary populations over cosmic time. In other words, we can give a set of initial conditions to a computer simulation and it will compute for us the final result. But how do we understand, and even quantitatively characterize, what this output is telling us about everything that happened in between providing the input and reading the output? In this paper, we directly address these practical and conceptual issues by formulating a self-consistent statistical mechanics model, based on a master-type equation, to describe the time evolution of the properties of a population of binaries. Our semi-analytic model quantifies the dominant gravitational physics driving the time evolution of the binary orbital parameter distributions in dense stellar environments, namely direct three-body interactions between single and binary stars. A practical solution of our master equation can be obtained in the Fokker-Planck limit, which we complete using the formulation for the outcomes of chaotic three-body interactions introduced in Stone & Leigh (2019). We compare the results of our analytic calculations to a suite of N-body simulations, and show that the agreement is excellent for the range of initial conditions considered here. In an Appendix, we go on to discuss how to adapt our base model to also include four-body or binary-binary interactions. Although four-body interactions are not always dominant over three-body interactions, they are always occurring for non-zero binary fractions, and contribute to the underlying dynamical evolution of the binary orbital parameter distributions. Hence, a complete model must include this contribution which, as described in the Appendix, will be the focus of future work.
In principle, this will allow us to address the key thermodynamic issue of whether or not (and on what timescale) stellar multiplicity in dense cluster environments ever reaches a steady or equilibrium state, as quantified by the ratio of single, binary and triple stars. In steady state, their relative fractions should remain approximately constant.

THE MODEL

In this section, we present our model for dynamically evolving in time the distribution functions for an entire population of binary star systems in dense stellar environments, accounting for the effects of repeated binary-single encounters. In other words, we model the time evolution of a population of binaries embedded in a "heat bath" of single stars. This formalism is similar in spirit to the pioneering early work of Goodman & Hut (1993), but it is both (i) more general, in that we present a multi-dimensional version of their underlying master equation, and (ii) more accurate, in that we take advantage of recent advances in the statistical theory of chaotic multi-body encounters (Stone & Leigh 2019, but see also Ginat & Perets 2020; Kol 2021; Manwadkar et al. 2021) and the secular theory of weak encounters (Hamers & Samsing 2019a,b). We begin with a simple, 1D, second-order (Fokker-Planck) method to quantify the time evolution of the binary orbital energy distribution. We then present the more complete multi-dimensional master equation that describes the full time evolution of all binary variables of interest. This equation is complex enough that we do not solve it fully in this paper, but we show how it can reduce to the simplified Fokker-Planck limit presented earlier, and analyze various 1D solutions. We also present a simple model to quantify the backreaction effects on the properties of the host star cluster (e.g., the time evolution of the core radius, the time at which core collapse occurs, and so on).

A Fokker-Planck equation in energy space

We begin by reviewing the features of a 1D Fokker-Planck equation valid to second order (in the fractional changes of the variable of interest). In the next subsections, we show that this formalism is the relevant limit for some aspects of a more general master equation for binary evolution. Since this is the primary limit of binary evolution we will investigate in this paper, we begin with a brief review, focusing on distributions of binary energy E_B for specificity. Assuming diffusive time evolution of the binary orbital energy probability distribution function N(E_B) = dN_B/dE_B, we can apply a standard Fokker-Planck equation:

∂N(E_B, t)/∂t = −∂/∂E_B [⟨∆E_B⟩ N] + (1/2) ∂²/∂E_B² [⟨(∆E_B)²⟩ N],   (1)

where ⟨∆E_B⟩ and ⟨(∆E_B)²⟩ are the first- and second-order diffusion coefficients. The boundary conditions of the equation above are set by the cluster hard-soft boundary (i.e., |E_B| = |E_HS| = (1/2) M̄ σ², where M̄ is the mean stellar mass in the cluster) at the loosely bound end of the distribution, and by the criterion for a contact binary (i.e., |E_B| = |E_coll| ≈ GM²/R, where M is the mass and R is the radius of the test star species) at the compact end. This equation arises from the assumption that the binary orbital energy distribution function evolves diffusively in its host cluster. In this way, changes to this distribution can be described as a local process, with binaries flowing mostly into and out of adjacent energy bins.
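To make the numerical solution of Eq. 1 concrete, here is a minimal explicit finite-difference sketch in Python. The grid, time-step, and power-law drift/diffusion coefficients are placeholder assumptions for illustration only, not the coefficients derived later in this paper.

import numpy as np

# Explicit finite-difference sketch of Eq. 1,
#   dN/dt = -d/dE[<dE> N] + (1/2) d^2/dE^2[<dE^2> N],
# on a grid in |E_B| between the hard-soft and contact boundaries.
E = np.linspace(1.0, 100.0, 400)      # |E_B| in units of |E_HS|
dE = E[1] - E[0]
N = 1.0 / E                           # Opik's Law initial condition
drift = 0.2 * E                       # placeholder <Delta E_B>
diff = 0.1 * E**2                     # placeholder <(Delta E_B)^2>

dt = 0.4 * dE**2 / diff.max()         # explicit-scheme stability limit
for _ in range(10_000):
    F1, F2 = drift * N, diff * N
    N[1:-1] += dt * (-(F1[2:] - F1[:-2]) / (2.0 * dE)
                     + 0.5 * (F2[2:] - 2.0 * F2[1:-1] + F2[:-2]) / dE**2)
    N[0] = N[-1] = 0.0                # Dirichlet conditions at both boundaries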
We can understand the origin of the Fokker-Planck equation as follows. If we consider a general process of binary evolution through energy space, then the binary energy distribution at a time t + ∆t will be

N(E_B, t + ∆t) = ∫ N(E_B − ∆E_B, t) P(E_B − ∆E_B, ∆E_B) d∆E_B,   (2)

where the transfer function P(E_0, ∆E_B) represents the probability that, in a differential time interval ∆t, a binary will reach a final energy E_B from an initial energy E_B − ∆E_B. Here the function F(E_B, E_0) is the differential probability of a single "encounter" transitioning a test binary from initial energy E_0 = E_B − ∆E_B to a final energy E_B, and Γ is the rate of all such encounters (here our language is general, but our eventual application will be to consider "encounters" arising from single-binary scatterings). Following Spitzer (1987) and Taylor expanding Equation 2 to second order in ∆E_B, we obtain an expression which, after eliminating a single factor of N(E_B, t) from both sides, can be simplified to Eq. 1. Here we now see the origin of the diffusion coefficients in our assumption that the relevant "encounters" are perturbative in nature, or in other words that our Taylor expansion was validly truncated at second order:

⟨∆E_B⟩ = Γ ∫ ∆E_B F(E_B + ∆E_B, E_B) d∆E_B   (5)

and

⟨(∆E_B)²⟩ = Γ ∫ (∆E_B)² F(E_B + ∆E_B, E_B) d∆E_B.   (6)

Equation 1 is a partial differential equation that can be solved numerically. To do this, we must close the system by writing the per-binary encounter rate. Focusing on strong single-binary scatterings for specificity, the gravitationally focused scattering rate is

Γ ≈ 2π G n_s a_B (m_B + m_s) / σ,   (7)

where σ is the cluster velocity dispersion, a_B is the binary semimajor axis, m_B and m_s are the binary and single star masses, respectively, and n_s is the density of scatterers. The total encounter energy can then be written

E_0 = E_B + (1/2) µ v_∞²,   (8)

where µ is the binary-single reduced mass and v_∞ is the relative velocity at infinity. The distribution of binary orbital energies for hard binaries left over after single-binary interactions is often approximated as a power law (Monaghan 1976a,b; Valtonen & Karttunen 2006):

F(E_B) ∝ |E_B|^(−n).   (9)

The parameter n depends on the total interaction angular momentum L, and has been fit to numerical simulations as (Valtonen & Karttunen 2006)

n = 18 L̄² + 3,   (10)

where L̄ is a normalized version of the total encounter angular momentum (see Valtonen & Karttunen 2006 for more details). Alternatively, one may use first-principles estimates for F from the ergodic formalism of Stone & Leigh (2019), which will be our approach later in this paper. With F specified, Equations 7 and 8 can be plugged into Equation 1 such that it becomes straightforward to solve numerically.
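The two closure ingredients above translate directly into code. The following sketch assumes the gravitationally focused rate takes the standard 2πGn_s a_B(m_B + m_s)/σ form reconstructed in Eq. 7, and draws outcome energies from the power law of Eqs. 9-10 by inverting its cumulative distribution; function names and the cgs unit choice are illustrative.

import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def encounter_rate(n_s, a_B, m_B, m_s, sigma):
    """Gravitationally focused single-binary scattering rate (cf. Eq. 7)."""
    return 2.0 * np.pi * G * n_s * a_B * (m_B + m_s) / sigma

def sample_outcome_energy(E_min, n, size, rng=None):
    """Inverse-CDF draw of |E_B| >= E_min from F ~ |E_B|^(-n), valid for n > 1."""
    rng = rng or np.random.default_rng()
    return E_min * rng.random(size) ** (-1.0 / (n - 1.0))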
The Master Equation and its Fokker-Planck Limit

In this section, we further generalize our formulation to include angular momentum, moving away from the quasi-empirical outcome distribution provided in Valtonen & Karttunen (2006) and toward the more robust formulation provided in Stone & Leigh (2019). The latter self-consistently accounts for angular momentum in the density-of-states formulation, whereas the former uses numerical scattering experiments to obtain approximate outcome distributions as a function of the total angular momentum. We will also present the evolution of the binary distribution from a first-principles kinetic perspective, by working in the Fokker-Planck limit of a master equation. We consider the evolution of a population of binaries inside a dense, collisional, spherically symmetric star cluster. The cluster has an ambient density of single stars n(r) and a velocity dispersion σ(r) that are both assumed to vary with radius r. We assume for now that the cluster is relaxed and isotropic, so that the stellar velocity distribution is Maxwellian,

f(v) d³v = (2πσ²)^(−3/2) exp(−v²/2σ²) d³v.

The binaries in this cluster can be characterized by six binary orbital elements, B. For practical purposes, we focus on their internal energy E_B, internal angular momentum L_B (or equivalently their semimajor axis a_B and eccentricity e_B), and orbital orientation C_B; the other, angular orbital elements can generally be assumed to be distributed isotropically. We define C_B = cos I_B, where I_B is the inclination angle between the binary angular momentum vector and an arbitrary reference direction. The number of binaries with energy E_B, angular momentum L_B, and orientation C_B within an infinitesimal range dE_B dL_B dC_B is N(E_B, L_B, C_B) dE_B dL_B dC_B. The binary probability distribution N(E_B, L_B, C_B) will evolve due to scatterings with single stars and with each other. These scatterings may be weak, distant encounters, in which case the tools of secular theory can be used to understand the exchange of energy and angular momentum (Heggie & Rasio 1996). Alternatively, some scatterings may form temporarily bound ("resonant") triple systems, which will eventually disintegrate into a survivor binary and an escaping single star; the outcomes of these strong scatterings can be understood through the ergodic hypothesis (Stone & Leigh 2019). The long-term evolution of a single-mass binary distribution function will be governed by a master equation:

∂N(B, t)/∂t = ∫ [N(B − ∆B, t) Ψ(B − ∆B, ∆B) − N(B, t) Ψ(B, ∆B)] d∆B − S(B).   (13)

Here Ψ(B, ∆B) is a transition probability describing the differential rate at which binaries are scattered into the phase space region B + ∆B from the original phase space coordinates B, and the final term is a catch-all "sink" representing ways binaries may be destroyed. Equation 13 is the most general kinetic formulation for the local evolution of a population of single-mass binaries, but if we assume isotropy, we need only consider a two-dimensional integral and a two-dimensional distribution N(E_B, L_B). In principle, one may Taylor-expand Equation 13 in the small ∆B limit, and obtain a Fokker-Planck equation generalizing Eq. 1:

∂N/∂t = −∂/∂E_B [⟨∆E_B⟩ N] − ∂/∂L_B [⟨∆L_B⟩ N] + (1/2) ∂²/∂E_B² [⟨(∆E_B)²⟩ N] + (1/2) ∂²/∂L_B² [⟨(∆L_B)²⟩ N] + ∂²/∂E_B ∂L_B [⟨∆E_B ∆L_B⟩ N] − S.   (14)

Here terms such as ⟨∆E_B⟩ are effective diffusion coefficients. These diffusion coefficients (and the transition probability Ψ from which they originate) represent the cumulative effect of interactions between a population of binaries and their perturbers. This includes both self-interactions (i.e. binary-binary scatterings) and external interactions (e.g. binary-single scatterings). In this work, we will focus only on binary-single scatterings, which dominate the rate of binary evolution so long as the binary fraction is sufficiently low, ≲10% (e.g. Sigurdsson & Phinney 1993; Leigh & Sills 2011). Qualitatively different types of binary-single scatterings are possible, which we can break down into four categories: (i) ionizations, (ii) flybys, (iii) prompt exchanges, and (iv) resonant encounters. A closer look at some of these categories reveals the inconsistency of the (full) Fokker-Planck limit of the master equation: while energy (E_B) evolution is indeed diffusive, angular momentum (L_B) evolves in a strongly non-diffusive way during resonances and prompt exchanges. We will postpone a full solution of the master equation for future work. In this paper, we work with a hybrid, "two-timescale" approach to understand the evolution of E_B and L_B separately. For simplicity, we will treat ionizations by assuming that all binaries with energy below the hard-soft boundary are promptly ionized.

Binary energies

Here we will neglect flybys and prompt exchanges, since we are concerned with hard binaries (soft binaries are short-lived and quickly ionized).
In hard binaries, resonant and non-resonant encounters contribute roughly equally to total energy evolution (Hut 1984), although resonant encounters do dominate the largest energy shifts. We will thus only consider energy evolution through resonant encounters, an approximation that should be valid at a factor ≈ 2 level, and which is motivated primarily by the lack of an analytic formalism for energy exchange in strong but non-resonant encounters (Heggie & Hut 1993). The mean change in binary energy in a given resonant encounter, with conserved energy E_0, conserved angular momentum L_0, and a mass triplet m = {m_1, m_2, m_3}, is

⟨∆E_B⟩(E_0, L_0, m) = ∫ (E_B − E_b) T(E_0, L_0, m) dE_B dL_B dC_B / ∫ T(E_0, L_0, m) dE_B dL_B dC_B.   (15)

Here we have used the ergodic outcome distribution, T(E_0, L_0, m) = dV/(dE_B dL_B dC_B), which partitions outcomes of non-hierarchical triple disintegration uniformly across a high-dimensional phase volume (V), ultimately giving nontrivial outcome distributions in the survivor binary's E_B, L_B, and cosine-inclination C_B (Stone & Leigh 2019). While Equation 15 gives a moment of the outcome distribution for a particular combination of E_0, L_0, and m (and can easily be generalized to produce ⟨∆L_B⟩, ⟨(∆E_B)²⟩, etc.), we are interested in computing rate-averaged diffusion coefficients. If resonant encounters happen at a differential rate dΓ/(dE_0 dL_0 dC_0), the rate-averaged kth diffusion coefficient in energy is

⟨(∆E_B)^k⟩ = (1/Γ) ∫ ⟨(∆E_B)^k⟩(E_0, L_0) [dΓ/(dE_0 dL_0 dC_0)] dE_0 dL_0 dC_0.   (16)

Here we have introduced an additional variable absent from T, namely C_0, which represents the cosine of the inclination between the pre-encounter binary's orbital plane and the mutual orbital plane between the binary and the single star it is encountering. Differential encounter rates are more easily expressed in terms of the impact parameter b and relative velocity at infinity, v_∞; specifically, the differential encounter rate follows from the flux n f(v_∞) v_∞ through annuli of impact parameter 2π b db, where n is the local number density of single stars. These variables are related to E_0 and L_0 through the following equations:

E_0 = E_b + (1/2) µ v_∞²,   (18)

L_0² = L_b² + (µ b v_∞)² + 2 C_0 L_b µ b v_∞,   (19)

where µ = m_b m_3/(m_b + m_3) is the reduced mass of the encounter, and we have denoted the variables of the pre-scattering binary with lower-case "b" subscripts. Performing a change of variables from (b, v_∞, C_0) to (E_0, L_0, C_0) yields the differential encounter rate dΓ/(dE_0 dL_0 dC_0). Here we have assumed that v_∞ is drawn from a velocity distribution f(v) (which can be written as a function of E_0 and E_b through Eq. 18), although we have left this general for the moment (rather than specifying a Maxwellian). Integrating this differential encounter rate over dC_0 (from −1 to 1, i.e. under the assumption of velocity isotropy) eliminates the L_b dependence.
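One concrete way to evaluate rate-averaged coefficients like Eq. 16 is Monte Carlo integration over encounter parameters. In the sketch below, all values are assumed code units, the fixed fractional hardening is a toy stand-in for moments of the ergodic distribution T of Stone & Leigh (2019), and gravitational focusing is omitted for brevity.

import numpy as np

rng = np.random.default_rng(1)
sigma, mu, E_b, b_max = 1.0, 0.5, -10.0, 1.0             # code units, assumed

v_inf = sigma * np.sqrt(rng.chisquare(3, size=100_000))  # Maxwellian speeds
b = b_max * np.sqrt(rng.random(100_000))                 # uniform in pi*b^2
E0 = E_b + 0.5 * mu * v_inf**2                           # conserved energy, Eq. 18

delta_E = -0.2 * np.abs(E0)                   # toy hardening: 20% of |E_0|
mean_dE = np.average(delta_E, weights=v_inf)  # flux-weighted first moment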
Eq. 14 is, unfortunately, not valid across all timescales. While binary energy evolution is a fundamentally diffusive process (even strongly resonant encounters rarely change individual binary energies by more than a factor ≈ 2; Hut 1984; Stone & Leigh 2019), resonances lead to highly non-diffusive evolution of binary angular momentum (unless the binary is much higher mass than the population of field stars it scatters against; however, this more extreme mass ratio limit greatly reduces the importance of resonant scatterings). We therefore analyze two different limits of Eq. 14: first, a relatively simple, 1D equation in energy space, where we assume a steady-state eccentricity distribution. In this limit, we have

∂N(E_B, t)/∂t = −∂/∂E_B [⟨∆E_B⟩ N] + (1/2) ∂²/∂E_B² [⟨(∆E_B)²⟩ N].   (22)

We allow E_B to range from E_HS to E_coll. At these boundaries, we impose a Dirichlet-type boundary condition, with N(E_B) = 0. Alternatively, one could allow soft binaries to be included, in which case the limits would become 0 and E_coll. We neglect soft binaries because of their rapid rate of ionization, but this could in the future be modeled with an appropriate choice of volumetric sink function. Equation 22 is a partial differential equation that can be solved numerically. To do this, it is necessary to write explicit diffusion coefficients. Using T taken from Stone & Leigh (2019), assuming a Maxwellian distribution of relative velocities, and considering particles all of equal mass, we find remarkably simple rates of diffusion across the space of orbital elements; the one approximation needed to derive these diffusion coefficients is an approximate scaling that is quite accurate for equal-mass resonant scatterings (Stone & Leigh 2019). Numerical results are shown in Fig. 1. We see that the binary population steadily depletes over time, initially at low energies but eventually at higher energies as well. Eventually, a quasi-steady state energy distribution is reached, with a constant shape but continuously decaying normalization.

Binary angular momenta

Strong encounters (resonances and prompt exchanges) essentially randomize the angular momentum of a given binary, making a Fokker-Planck approach ill-equipped to deal with this aspect of the problem. However, in the time between strong encounters, the cumulative effect of many weak flybys will lead to diffusive evolution of an individual binary's angular momentum, an effect largely neglected in existing approaches to binary evolution in dense clusters. We therefore treat the problem of binary angular momentum in a two-timescale form. An individual binary will undergo a random walk in angular momentum space until it suffers a strong binary-single scattering, at which point its new angular momentum can be drawn from the relevant distribution. We will thus arrive at a steady state angular momentum distribution by solving a 1D Fokker-Planck equation in e_B-space. This population-level approach uses the diffusion coefficients computed in Hamers & Samsing (2019b, their Eqs. 25a/25b) to account for weak, distant encounters, and the mildly superthermal outcome distribution of Stone & Leigh (2019), also found by Ginat & Perets (2022), as the "initial conditions" in a source term S_+(e_B) = Γ N_res(e_B) that accounts for the generation of new binaries after resonant encounters (a sink term S_−(e_B) = −Γ N(e_B) is likewise used to remove existing binaries in resonant encounters). Here we take a gravitationally focused strong scattering rate, i.e. the integral of Eq. 17 over an isotropic Maxwellian velocity distribution. Note that in all cases we remain in the equal-mass limit, so that m_tot = 3m. For practical calculations, it is somewhat easier to use the variable R = 1 − e_B², which can be viewed as a (squared) dimensionless angular momentum. Combining the resonant source/sink terms with the perturbative time evolution terms, we have

∂N(R, t)/∂t = −∂/∂R [⟨∆R⟩ N] + (1/2) ∂²/∂R² [⟨(∆R)²⟩ N] + S_+ + S_−.   (27)

Unlike Eq. 22, the diffusion coefficients in this Fokker-Planck equation reflect the cumulative effect of many weak, perturbative flybys rather than that of repeated strong scatterings. The diffusion coefficients ⟨∆R⟩ and ⟨(∆R)²⟩ are derived in Appendix B using the results of Hamers & Samsing (2019b) as a starting point.
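A minimal numerical sketch of stepping Eq. 27 forward follows; the constant diffusion coefficients and the superthermal source shape are placeholders (the true coefficients come from Appendix B and the true source from the ergodic outcome distribution), and the boundary conditions implement the absorbing/zero-flux choices described next.

import numpy as np

R = np.linspace(1.0e-3, 1.0, 300)     # R = 1 - e_B^2; collisions at R = 1e-3
dR = R[1] - R[0]
N = np.ones_like(R)                   # arbitrary initial distribution
gamma = 1.0                           # strong-scattering rate (code units)
N_res = np.sqrt(1.0 - R + 1.0e-12)    # placeholder superthermal source shape
N_res /= np.trapz(N_res, R)
drift = -0.01 * np.ones_like(R)       # placeholder <dR>
diff = 0.005 * np.ones_like(R)        # placeholder <(dR)^2>

dt = 0.25 * dR**2 / diff.max()
for _ in range(5_000):
    F1, F2 = drift * N, diff * N
    N[1:-1] += dt * (-(F1[2:] - F1[:-2]) / (2.0 * dR)
                     + 0.5 * (F2[2:] - 2.0 * F2[1:-1] + F2[:-2]) / dR**2
                     + gamma * (N_res[1:-1] - N[1:-1]))
    N[0] = 0.0                        # absorbing (collisional) boundary at low R
    N[-1] = N[-2]                     # zero-flux boundary at R = 1 (e_B = 0)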
We use a Dirichlet-type N = 0 boundary condition at high eccentricity e_B = e_coll, i.e. at R_coll = 1 − e_coll² (corresponding to collisions or, in the case of black hole binaries, gravitational wave inspirals), and a zero-flux boundary condition at e_B = 0 (R = 1). By evolving this PDE forward in time, we can find a population-level steady state solution for arbitrary initial conditions. Alternatively, we can take initial conditions corresponding to the outcomes of the strong scatterings (the superthermal distribution discussed above), and evolve the eccentricity distribution forward under the influence of weak scatterings. This snapshot-level approach describes the time evolution of an ensemble of binaries since their last strong scattering. While the population-level approach provides astrophysically realistic eccentricity distributions, the snapshot-level approach is more useful for building physical understanding, specifically by disaggregating the effects of strong and weak scatterings. Snapshot-level results are shown in Fig. 2. We see that the initially super-thermal distribution (describing an ensemble of binaries shortly after their last resonant encounter) is initially eroded primarily by the Dirichlet boundary condition at high eccentricity. After a time t ∼ Γ^−1, however, the effect of many weak encounters serves to further redistribute binary orbits to the low-e_B side of the spectrum. If we had considered weak scatterings only (i.e. zero-flux boundary conditions at both ends), we would have found a steady-state eccentricity distribution N(e_B) ∝ e_B^(−4/25) (Hamers & Samsing 2019b), i.e. a moderately sub-thermal distribution biased towards circular orbits. This analytic curve is shown for comparison in Fig. 2, but it is not achieved even for the rare subset of the binary population that survives for a time ≳ 10 Γ^−1 without a resonant encounter, attesting to the importance of the collisional boundary condition at high e_B. The combined effects of weak flybys, resonances, and a collisional boundary condition are fully visible in the population-level solutions in Fig. 3, which shows three different solutions for three different values of e_coll. Here we see that the steady-state N(e_B) distributions are neither thermal, strictly super-thermal (as in Stone & Leigh 2019), nor strictly sub-thermal (as in Hamers & Samsing 2019b). The distributions are peaked at an intermediate eccentricity ∼ 0.1−0.5, but highly depleted at large e_B (due to the collisional boundary condition) and at nearly circular e_B (due to resonances). We caution that our results do depend on the value of the tertiary pericenter Q_min that separates resonant from non-resonant encounters. Appendix B contains a fuller discussion of this, but Figs. 2 and 3 demonstrate that this dependence is modest.

Figure 2 caption: Snapshot-level evolution of the binary distribution in R = 1 − e_B² (R = 1 is a circular orbit; R = 0 a radial one). Here we show the number of stars per unit square angular momentum, N(R; a_B), at fixed semimajor axis a_B. We have assumed that the stellar radius R_coll = 10^−2 a_B (i.e. there is an absorbing boundary condition at 1 − e_B = 10^−2). Different colors are labeled according to the fraction of the time elapsed between strong (resonant) binary-single scattering events, which reset the binary eccentricity distribution to the mildly superthermal result discussed in the text. However, the cumulative (diffusive) effect of many weak flybys will quickly evolve this superthermal input distribution into a more complicated one, with high-e_B (low-R) orbits strongly depleted by direct collisions.
The subset of uncommon binaries that survive for more than a few resonant encounter times will achieve a distribution that is significantly more sub-thermal than the dN/de_B ∝ e_B^(−4/25) distribution predicted in the absence of collisions (Hamers & Samsing 2019b, shown as a dot-dashed black line). For comparison, the usual thermal eccentricity distribution is shown as a dotted black line. Our results depend modestly on Q_min, the critical tertiary pericenter that is assumed to separate resonant from non-resonant encounters. Solid colored lines show our fiducial value Q_min = 3.4 a_B, while dashed colored lines show a more extreme choice of Q_min = 2.4 a_B. The net effect is modest.

We also note here that a similar approach could be taken to the diffusive (weak scattering) and non-diffusive (resonant encounter) evolution of binary angular momentum orientations in a non-isotropic cluster. Since resonant encounters preferentially produce survivor binaries with angular momentum vectors L_B aligned with the total angular momentum L_0 of the three-body encounter, a non-isotropic velocity field will produce a preferential plane of binary rotation. For example, a rotating star cluster with net angular momentum L_cl would be expected to have binaries with orbital planes biased towards prograde alignment with L_cl. In this first paper, however, we remain focused on the isotropic limit.

Energy backreaction to the host cluster

In this section, we discuss and develop a toy model to quantify the effects of binary hardening via single-binary interactions on the energy content of the host star cluster. Energy is systematically imparted to single stars via single-binary interactions, providing a heat source to the cluster core and subsequently further hardening those interacting hard binaries. Integrating over the binary distribution, we can compute the flow of energy from this binary reservoir into the core, and its subsequent diffusion throughout the entire stellar system via two-body relaxation: indirect (predominantly long-range) single-single interactions.

Figure 3 caption: Here we show the steady-state distributions achieved when combining (i) weak perturbations from secular flybys with (ii) less common resonant scatterings, whose outcomes are determined ergodically. The solid green, blue and purple curves show steady-state distributions for populations of binaries that will merge when R equals 10^−3, 10^−2, and 10^−1, respectively. For comparison, the black dotted line is a thermal eccentricity distribution, the black dashed line is the superthermal ergodic outcome distribution for an ensemble of resonant scatterings, and the black dot-dashed line is the dN/de_B ∝ e_B^(−4/25) distribution from Hamers & Samsing (2019b). In all cases the steady-state eccentricity (or R) distribution is neither super- nor sub-thermal; instead it is peaked at intermediate eccentricity and depleted of both highly radial and highly circular orbits. As in Fig. 2, solid colored lines represent Q_min = 3.4 a_B, and dashed colored lines Q_min = 2.4 a_B.
The global evolution of star clusters is often treated in a Fokker-Planck way: assuming spherical symmetry, a population of stars (single- or multi-species) can be evolved forward in time in the space of orbital energy E and orbital angular momentum L, with populations slowly diffusing due to two-body scatterings (indeed, solving this problem by probabilistically sampling stellar distributions is the basis of the Monte Carlo codes mentioned in §1). While we hope to more rigorously explore this integrated approach in future work, for now we content ourselves with developing a "two-zone" model, where a star cluster with total mass M_tot is divided into a core of radius r_c and a halo extending further out, with a half-mass radius r_h. The cluster is collisionally relaxed and therefore is quasi-isothermal, with a velocity dispersion σ that is a weak function of radius. For the sake of concreteness, we will use the analytic potential-density pair of Stone & Ostriker (2015). This three-parameter model (M_tot, r_h, r_c) fixes the mass density profile of the cluster, along with its central core density ρ_c and core velocity dispersion σ_c; the approximate expression we use for the core velocity dispersion is taken in the r_c ≪ r_h limit (though it is correct to within ∼ 10% for r_c ∼ r_h).

We use this simple model to estimate the rate of orbital energy flow into the core (from binary-single scatterings) and out of the core (from two-body relaxation, primarily between single stars): Ė_c = dE_c/dt. The net flow is then Ė_c = Ė_bin + Ė_rel, where E_c ≈ −M_c σ²_c / 2 is the binding energy of the core, r_c is the core radius and σ_c is the 3D velocity dispersion in the core. Here the mass of the core, M_c, is again evaluated in the r_c ≪ r_h limit. The diffusive energy flow due to two-body relaxation is set by the radial gradient in "temperature" (i.e. velocity dispersion σ) across the cluster core and halo out to the half-mass radius. In the limit of a single-mass cluster, it is roughly Ė_rel ≈ A_cond E_c / t_r,c, where t_r,c is the core relaxation time and A_cond is a dimensionless conductivity constant (i.e. an encapsulation of the very small gradient dσ/dr) that can be measured from Fokker-Planck (e.g. in single-mass clusters, A_cond ∼ 10⁻³; Cohn 1980) and N-body simulations. In a multi-mass cluster, A_cond is higher by one to two orders of magnitude depending on the exact mass spectrum. Ė_rel is negative-definite because two-body relaxation conducts heat outwards, allowing the core to collapse and become more tightly bound. The rate of energy injection from binary burning, Ė_bin, can be computed by calculating the time evolution of the total energy in all binaries in the core. As the binary burning rate is an integral over the distribution function, it turns Eq. 31 into an integro-differential equation. One particularly interesting limit is when dr_c/dt → 0, since this defines the time of core "bounce", or the moment when core collapse halts and is reversed by single-binary interactions. N-body and other approaches show that steady state solutions do not generally exist near this limit, which is the turning point in gravothermal oscillations.
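As a rough numerical illustration of this balance, the sketch below integrates dE_c/dt = Ė_bin + Ė_rel in dimensionless units and locates the core "bounce" where heating from binary burning first balances conductive cooling. The quadratic heating law and all normalisations are assumptions chosen for illustration; they are not the paper's Eq. 31.

```python
import numpy as np
from scipy.optimize import brentq

A_cond, t_rc, E0 = 1e-3, 1.0, -1.0   # conductivity, core relaxation time, E_c(0)

def edot_bin(E_c):
    # Assumed heating law: binary burning is more efficient in a denser,
    # more tightly bound core; normalisation chosen so a bounce occurs.
    return 5e-4 * (E_c / E0) ** 2

def edot_rel(E_c):
    # Conductive cooling: negative while E_c < 0, so the core collapses.
    return A_cond * E_c / t_rc

# Core "bounce" (dE_c/dt = 0): heating from binary burning balances the
# outward conduction of heat by two-body relaxation.
E_bounce = brentq(lambda E: edot_bin(E) + edot_rel(E), 10 * E0, 1.01 * E0)
print(f"bounce at E_c = {E_bounce:.3f} (core collapse halts here)")

# Explicit-Euler trajectory toward the bounce.
E_c, dt = E0, 0.1 * t_rc
for _ in np.arange(0.0, 5000.0, dt):
    E_c += dt * (edot_bin(E_c) + edot_rel(E_c))
print(f"E_c after 5000 t_rc: {E_c:.3f}")
```

In this toy, collapse is asymptotic: E_c approaches the bounce value from above but never overshoots, consistent with the statement that steady-state solutions break down near the turning point of gravothermal oscillations.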
We now make a "two-zone" approximation for the binary populations, in which the binaries are divided into a portion in the cluster core (with distribution N_c) and a portion in the halo (with distribution N_h). Note that since encounter rates scale with the local relaxation time, there are different diffusion coefficients for the core (e.g. ∆_c E_B) and the halo (e.g. ∆_h E_B). Binaries may move back and forth between these two zones via ejection from the core and dynamical friction on halo binaries, which produce core-to-halo and halo-to-core interchange rates, respectively. In both of these rates, t_r,c is the core relaxation time (which increases by a factor r²_h / r²_c when one considers the relaxation time at the half-mass radius r_h), and in the first, A_ej ∼ 0.1 − 1 is a dimensionless number computed by integrating over the Maxwellian velocity dispersion of the core stars. This gives a set of coupled differential equations for the time evolution of the cluster (Eqs. 37). Note that this is a system of two diffusive-type PDEs and one integro-differential ODE.

RESULTS

In this section, we apply our model to dynamically evolve a population of binary star systems in a dense cluster environment and compare the results to N-body simulations. We choose Öpik's Law for our fiducial initial conditions, which gives the initial distribution of binary orbital energies. We begin with a one-zone model, before presenting the results for our two-zone (core/halo) model, using Eqs. 37.

One-Zone Model

We begin by dynamically evolving a population of binary star systems through energy space in a dense cluster environment, by numerically solving the Fokker-Planck equation derived in Section 2.2. The initial distribution of binary orbital energies is again chosen (according to Öpik's Law) to be f_B(E_B) = k|E_B|⁻¹, where k is a normalization constant. The results have already been shown in Figure 1 at several different core relaxation times t_r,c, indicated by the different coloured lines, but here we will discuss them in more detail. In Figure 1, we see slow evolution of the hardest binaries, and relatively quick evolution of ones near the hard-soft boundary. This is unsurprising, given the small (large) scattering cross-sections of the former (latter). Over time, a quasi-steady state is reached at low binary energies, which slowly propagates to higher energies. Once this quasi-steady state solution reaches the collisional (high-|E_B|) boundary, the shape of the distribution freezes in, and further evolution only decreases its normalization. At all times, the drift coefficient ∆E_B creates a net flow towards softer energies, though the diffusion term (∆E_B)² is responsible for the turnover at the hardest energies, once a quasi-steady state has been established everywhere.

Two-Zone Model

Next, we add many of the extra pieces of the two-zone model, i.e. Eqs. 37. Because our goal is to eventually compare to N-body simulations with limited run-time (and limited evolution of the cluster density profile, i.e. limited global energy transfer), here we solve only the two coupled PDEs that exchange binary populations between the cluster core and halo. Simultaneous solution of the energy equation would require more numerical development, which we defer to future work. As we will show in the subsequent sections, we initially observe a quick depletion of binaries in the core, due to the recoil imparted by single-binary interactions, and a corresponding increase in the halo population. At later times, these binaries mass segregate back into the core, increasing the population of binaries in the core and decreasing that in the halo. Eventually, an approximate steady-state balance is achieved between the rate of binaries being ejected from the core due to single-binary interactions and binaries re-entering the core by leaving the halo due to mass segregation (a toy version of this balance is sketched below). This is because we assume all equal-mass particles in our model, such that binaries are the heaviest objects in the cluster.
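A toy version of the core-halo exchange just described, dropping the E_B dependence and tracking only the total binary counts in each zone, might look as follows. The rate forms are assumptions patterned on the text (ejection at roughly A_ej/t_r,c per core binary, re-entry on the longer half-mass relaxation time); they are not the paper's Eqs. 37.

```python
import numpy as np

# Toy core-halo exchange for total binary counts N_c (core) and N_h (halo).
# Assumed rates: ejection from the core at ~ A_ej / t_rc per binary, and
# halo binaries sinking back on the (longer) half-mass relaxation time
# t_rh = t_rc * (r_h / r_c)^2, as stated in the text.

A_ej = 0.3                            # dimensionless ejection efficiency (~0.1-1)
t_rc = 1.0                            # core relaxation time (code units)
t_rh = t_rc * (0.8 / 0.3) ** 2        # half-mass relaxation time (r_h/r_c = 0.8/0.3)

Nc, Nh, dt = 200.0, 0.0, 1e-3 * t_rc
for _ in range(int(100 * t_rc / dt)):
    ej = A_ej * Nc / t_rc             # core -> halo (recoil after 1+2 scatterings)
    df = Nh / t_rh                    # halo -> core (mass segregation)
    Nc += dt * (df - ej)
    Nh += dt * (ej - df)

# Steady state: ejection balances re-entry, so N_c/N_h -> t_rc / (A_ej * t_rh).
print(f"N_c = {Nc:.1f}, N_h = {Nh:.1f}, N_c/N_h = {Nc / Nh:.2f}",
      f"(analytic: {t_rc / (A_ej * t_rh):.2f})")
```

Even this stripped-down version reproduces the sequence described above: the core drains quickly at first, then the two zones settle into a steady-state balance.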
Comparisons to N-body simulations

In this section, we present the results of our preliminary N-body simulations, and compare them to the predictions of our second-order analytic models described in the previous sections, with a focus on the improved two-zone model.

Initial Conditions and Assumptions

For each simulation, we adopt a Plummer density profile initially, with 10⁴ stars and 200 binaries. The initial cluster has a core radius of 0.3 pc and a half-mass radius of 0.8 pc. We assume identical point particles with masses of 1 M_⊙. As in our analytic calculations, we assume Öpik's Law initially for the distribution of binary orbital energies and a thermal eccentricity distribution. At the hard end, we truncate our initial energy distribution at a minimum semimajor axis of 4 solar radii (i.e., corresponding to slightly wider than a contact state for 1 M_⊙ stars). At the soft end, we truncate at twice the hard-soft boundary, set by equating a binary's binding energy to the typical kinetic energy of cluster stars, where σ is the core velocity dispersion, initially 2.0 km s⁻¹. The initial core and half-mass relaxation times for our simulated clusters are 7.6 Myr and 25 Myr, respectively.

We perform 10 simulations, all with the same initial conditions, each perturbed slightly using a different random seed. These simulations are then stacked together to increase our sample size for the number of binaries evolving dynamically due to single-binary interactions, bringing the total sample size up to 2000. This stacking is done to increase the statistical significance of our results, and to verify the robustness of our N-body simulations by quantifying the stochastic contribution of chaos to the observed differences in each simulation.

The time evolution of several core properties in each of the 10 simulations, namely the core density, velocity dispersion, radius, and binary fraction, is illustrated in Figure 4, with the average values illustrated in black. Time is normalized by the cluster's initial core relaxation time t⁰_r,c, and the error bars represent the standard deviation about the mean. For t/t⁰_r,c < 20, the core evolution of all 10 models is very similar, with core density and velocity dispersion staying nearly constant while the core radius and binary fraction slowly decrease with time. However, near t/t⁰_r,c = 20, the clusters undergo a mild core collapse. In the post-core collapse stage, the core density and radius of individual simulations slowly start to diverge, with significant fluctuations between timesteps. The core velocity dispersion, however, remains close to its original value with a few brief fluctuations, and the core binary fraction slowly decreases beyond t/t⁰_r,c = 20 due to binary destruction, after most binaries outside of the core have had enough time to mass segregate into the core.

Figure 4. The time evolution, normalized by the initial core relaxation time, of the core density, core velocity dispersion, core radius, and core binary fraction for all 10 simulations, with the mean value illustrated as a solid black line. The error bars mark the standard deviation about the mean.

We emphasize that the comparison is most reliable before core collapse occurs, due to the increased stochasticity in the time evolution of the cluster properties beyond this point. In other words, the simulations begin to diverge significantly beyond core collapse, yielding increasingly different cluster properties between the simulations over time. Also, our neglect of energy backreaction from the binary population becomes worse at this time.
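For concreteness, the initial-condition bounds described above can be computed as in the sketch below. Since the exact hard-soft boundary formula is not reproduced in the text, we use the standard equal-mass estimate a_hs ≈ Gm/σ² (binding energy comparable to the typical stellar kinetic energy); this choice is an assumption, as are the sampling details.

```python
import numpy as np

G = 4.301e-3        # G in pc (km/s)^2 / Msun
Rsun_pc = 2.255e-8  # solar radius in parsecs
m, sigma = 1.0, 2.0 # stellar mass (Msun) and core velocity dispersion (km/s)

# Hard-soft boundary: assumed standard equal-mass estimate, equating the
# binary binding energy G m^2 / (2a) to the typical kinetic energy ~ m sigma^2 / 2.
a_hs = G * m / sigma**2                    # pc
a_min, a_max = 4.0 * Rsun_pc, 2.0 * a_hs   # truncations quoted in the text

# Oepik's Law: f(a) ~ 1/a, equivalently f_B(E_B) = k |E_B|^-1, sampled
# log-uniformly between the two truncation values.
rng = np.random.default_rng(42)
a = a_min * (a_max / a_min) ** rng.random(2000)
E_B = -G * m * m / (2.0 * a)               # binary orbital energies

print(f"a_hs = {a_hs:.2e} pc; |E_B| spans "
      f"{np.abs(E_B).min():.2e} - {np.abs(E_B).max():.2e} Msun (km/s)^2")
```

With σ = 2.0 km s⁻¹ and m = 1 M_⊙, this estimate gives a_hs of order a couple of hundred AU, so the truncated Öpik distribution spans more than four decades in semimajor axis.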
One-Zone Model

In this section, we discuss the time evolution of the binary orbital energy distribution for our one-zone model, focusing on the cluster core. The binary orbital energy distributions of our N-body simulations are shown in Figure 5 at several different core relaxation times. The black solid line shows the initial distribution given by Öpik's Law. The time evolution of the binary orbital energy distribution behaves as expected, slowly pushing hard binaries to become more compact and soft binaries to become disrupted. The evolution of the one-zone model is initially faster than the simulations, as N(E_B) quickly decreases after just 5 core relaxation times. It is not until 40 core relaxation times that N(E_B) for the simulations falls below the one-zone model. This discrepancy is likely because binaries flow into the core from the halo due to two-body relaxation in our simulations, and out of the core due to the recoil imparted post-single-binary interaction. But, in our analytic model, we assume that anything happening in the core stays in the core. Hence, binaries are depleted in the core at a faster rate, both due to mergers at the hard end of the distribution and the destruction of wide binaries at the soft end of the distribution. After a given number of core relaxation times, Figure 5 illustrates a strong agreement between the one-zone model and the simulations at the hard and soft ends of the distribution. However, the model over-predicts the number of binaries in the intermediate regime relative to the simulations by a factor ∼ 3 (when error bars are included, this typically means that the N-body simulation results disagree with our analytic model at the level of 3σ). We attribute this disagreement to the overall faster core evolution in our one-zone model, which depletes binaries in the intermediate regime by pushing hard binaries to become harder, and wide binaries to become wider. In the subsequent section, we will move on to our two-zone model, and show that this does indeed correct this disagreement.

Two-Zone Model

In this section, we discuss the time evolution of the binary orbital energy distribution for our two-zone model, considering now both the core and the halo. Binaries are now allowed to flow out of the core in our analytic model due to the recoil imparted by linear momentum conservation post-single-binary interaction, and later flow back into the core due to mass segregation. As is clear from Figures 6 and 7, the results from our N-body simulations, shown by the coloured data points, now agree with our analytic two-zone model typically to within 1σ, with only a few outliers within 2σ of our analytic model. We conclude that our improved two-zone model does indeed improve the agreement between the simulations and our analytic theory. We emphasize that this is but one of the many changes that can be accommodated by our model, to even further improve the agreement between the simulated and calculated results. Possible improvements and how to implement them will be discussed in more detail in the subsequent section.

Figure 6. The distribution of binary energies in the halo of the cluster, shown at three different snapshots in time. Black, blue, and green correspond to t = 0 t⁰_r,c, t = 10 t⁰_r,c, and t = 40 t⁰_r,c, respectively. Data points are binned-up binary energies from the N-body simulations (error bars show the asymmetric 1σ Poissonian error range; Gehrels 1986), while curves are predictions from the two-zone Fokker-Planck approach.

Figure 7. The same as Figure 6, but for the cluster core, where statistics are poorer.
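The asymmetric 1σ Poisson error bars quoted in the captions above can be computed from the widely used closed-form approximations of Gehrels (1986); a small sketch follows (the bin counts are toy values, not the paper's data).

```python
import numpy as np

def gehrels_1sigma(n):
    """Approximate 1-sigma (84.13%) Poisson confidence limits on the mean
    for n observed counts, using the closed-form approximations commonly
    quoted from Gehrels (1986). Returns (lower, upper)."""
    n = np.asarray(n, dtype=float)
    upper = n + 1.0 + np.sqrt(n + 0.75)
    safe = np.where(n > 0.0, n, 1.0)   # guard against division by zero; masked below
    lower = np.where(n > 0.0,
                     safe * (1.0 - 1.0 / (9.0 * safe)
                             - 1.0 / (3.0 * np.sqrt(safe))) ** 3,
                     0.0)
    return lower, upper

# Error bars for a binned-up histogram of binary energies (toy counts):
counts = np.array([0, 1, 3, 12, 40])
lo, hi = gehrels_1sigma(counts)
for n, l, u in zip(counts, lo, hi):
    print(f"n = {n:3d}: -{n - l:.2f} / +{u - n:.2f}")
```

For small counts the interval is strongly asymmetric (e.g. n = 1 gives roughly +2.3/−0.8), which is why symmetric √n error bars would be misleading for the sparsely populated core bins of Figure 7.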
DISCUSSION

In this section, we discuss the key features and assumptions going into our model, their shortcomings, and the future efforts that should be made toward an improved analytic model for dynamically evolving populations of binary star systems forward through time in dense stellar environments due to single-binary interactions. We further discuss the significance of our analytic models and their results for pertinent problems in modern astrophysics.

A new tool independent of computational methods for dynamically evolving populations of binaries

The Boltzmann-type equation derived in this paper presents an alternative and complementary tool to computational N-body or Monte Carlo simulations. These methods compute the time evolution of their binaries using a combination of stellar dynamics, stellar and binary evolution, etc. The method presented here isolates the dynamical effects of single-binary scatterings on the time evolution of binary orbital parameter distribution functions. Thus, our method is meant to be directly complementary to analogous computer simulations. In principle, comparing the results of the model presented here with more complicated N-body and Monte Carlo simulations should allow us to better isolate the effects of dynamics in evolving a binary population in a given environment, thereby isolating the effects of stellar and binary evolution, etc. (i.e., usually included in N-body and MC models in an approximate way, assuming the evolution proceeds in isolation), while also comparing the relevant timescales for each of these critical effects to operate. Thus, our model allows us to quantify and characterize the dominant physics evolving a given binary population in a given star cluster at any time, by rigorously quantifying and isolating the effects of single-binary interactions.

Recently, Geller et al. (2019) took a significant step forward in this direction by developing a semi-analytic Monte Carlo-based model for the dynamical evolution of binaries that is analogous to a first-order diffusion-based model. They compared the results to direct N-body and Monte Carlo models for GC evolution with very similar initial conditions, and found excellent agreement over several Gyr of cluster evolution. This model was subsequently applied in Trani et al. (2021) to quantify the impact of binary-binary scatterings on the results, since the original authors considered only single-binary interactions. As illustrated in this paper, adopting the higher-order diffusion-based model presented herein only serves to improve the agreement between the analytic theory and computational simulations.

Improving the model

Next, we discuss the key features and assumptions going into our model, and how they can be improved upon in future work. First, we begin with a one-zone model for our calculations, including only those binaries and singles inside the cluster core. However, in reality, objects are free to migrate in and out of the core.
In particular, we expect the core binary fraction to initially increase over time as binary stars mass segregate into the core via two-body relaxation. We also expect that the probability of a given binary being ejected from the core, due to the recoil from linear momentum conservation during a single-binary interaction, will be higher for more compact binaries. These are likely to eject singles at higher velocities, imparting a larger recoil velocity to the binary and increasing the probability of ejecting it from the core, or even the cluster. All of these aspects of our model can be improved upon by adopting more realistic boundary conditions. One way to go about this is to increase the number of zones, expanding the model to include regions of the cluster outside of the core (e.g., adopting a shell-like structure for the cluster). As illustrated in Figures 6 and 7, we have shown that adapting our model to a two-zone model, treating the core and halo independently, does indeed improve the agreement. This allows binaries to flow into and out of the cluster core and halo, via ejection from the core by single-binary interactions and subsequent mass segregation back into the core from the halo. We also do not allow binaries to be kicked out of the cluster, since in our simulations these events are very rare. However, for those clusters where such events are more likely, additional sink/source terms could be included in our existing Boltzmann equation, improving the accuracy of our existing two-zone model even further.

Second, we do not consider binary formation via three-body interactions involving initially isolated single stars, since these events are exceedingly rare in our simulations (or do not occur at all). This effect should become important primarily at very high densities, such as during core collapse or in very massive, dense star clusters and galactic nuclei. In future work, this aspect of our model can be improved upon by including an additional density-dependent source term in our Boltzmann equation.

Third, we have assumed that the properties of the core are constant (i.e., a constant density and velocity dispersion throughout the core at a given time) in our analytic model, since this is approximately true for the initial cluster conditions considered in this paper. However, we manually update the averaged core properties at regular intervals when performing our comparisons to the N-body simulations. The inclusion in our analytic model of a more realistic gradient in the core gravitational potential would introduce a dependence of the encounter rate on the distance from the cluster centre r. In future work, this aspect of our model can be improved upon by including a radial r dependence in Equation 17. We do not expect this to have significantly affected the comparison between our analytic model and the results of our N-body simulations, since we terminate the comparison before the point at which most simulations reach core collapse and begin to diverge in their time evolution.

Fourth, we have neglected binary-binary interactions, which should also always be occurring for non-zero binary fractions, and which will hence contribute to the dynamical evolution of the binary orbital properties. This should not have significantly affected the results of our comparison between our analytic model and the simulations.
This is because single-binary interactions occur more frequently than binary-binary interactions for binary fractions ≲ 10% (Leigh & Sills 2011), and we intentionally adopt very low initial binary fractions in our simulations to minimize the contribution of binary-binary interactions relative to single-binary interactions. We compensate by performing additional simulations in order to generate the required statistical significance (i.e., the total number of simulated binaries) for the comparisons between our analytic model and the simulations, by stacking the results of these simulations, all having nearly identical initial conditions. This aspect of our work can be improved upon in future studies, using the methodology described in the Appendix. Finally, we assume identical mass particles throughout this paper. However, our model can easily be improved to include a mass dependence using, for example, the method described in Section 3.1 of .

Astrophysical implications: A focus on black hole binaries in globular clusters

It turns out that the assumptions needed for the methods and modeling techniques presented in this paper to be valid are particularly well-suited to treating the dynamics of stellar-mass black hole (BH) binaries in globular clusters. This is because the application of our analytic formalism for single-binary scatterings requires low virial ratios; i.e., the total kinetic energy must be a small fraction of the total binding energy of the interaction, to help maximize the fraction of long-lived, chaotic, resonant interactions, for which our formalism is most directly applicable (i.e., the assumption of ergodicity is upheld). Due to energy equipartition, BH binaries can end up with the lowest velocities in the cores of clusters, and they also tend to have very large absolute orbital energies due to the larger component masses. Both effects push us toward preferentially low virial ratio interactions, ideal for applying the methods introduced in this paper to study the dynamical evolution of BH-BH binaries in dense star clusters (Leigh et al. 2016b). We intend to explore this potentially interesting connection and application for our model in forthcoming work.

Finally, it is also in principle possible to use our model to constrain the primordial properties of star clusters. In particular, if a given cluster is observed to be in core collapse and the observer can measure the binary population properties (i.e., the present-day number of binaries and their orbital parameter distributions), then one can constrain the initial conditions using our model. This is because our model assumes a given set of initial conditions, and these become a good candidate for the true set of initial conditions if the model reaches core collapse on the correct timescale. Degeneracies in the initial conditions yielding approximately the same time of core collapse are likely to exist, but these can be identified and quantified using our analytic model in future work. This indicates one of its main strengths and utilities for astrophysical research: a quick exploration of the relevant parameter space to help constrain the underlying sets of initial cluster conditions that could have evolved over a Hubble time to reproduce what we observe today (i.e., central density, total mass, surface brightness profile, etc.).

SUMMARY

Moore's Law is dead.
It follows that the demand for alternative models independent of computational limitations is increasing, and the field of gravitational dynamics is no exception. With this in mind, we recently derived, using the ergodic hypothesis (Monaghan 1976a,b), analytic outcome distributions for the products of single-binary scatterings (Stone & Leigh 2019); similar techniques may also apply to binary-binary scatterings (Leigh et al. 2016b). With these outcome distributions in hand, it becomes possible to construct a diffusion-based approach to dynamically evolve an entire population of binaries due to single-binary (and eventually also binary-binary) scatterings.

We present in this paper a self-consistent statistical mechanics-based analytic model formulated in terms of a master equation (in the spirit of Goodman & Hut 1993), and eventually evolved in its Fokker-Planck limit. Our model evolves the binary orbital parameter distributions in dense stellar environments forward through time due to strong single-binary interactions, using the analytic outcome distribution functions found in Stone & Leigh (2019). The effects of weaker, perturbative scatterings are incorporated using the secular theory of Hamers & Samsing (2019a,b). We have applied our formalism in various simplified limits, working for now in the equal-mass case.

In the space of binary eccentricity e_B, we find that the combined effect of strong (resonant) scatterings and more distant, perturbative flybys is to create steady-state e_B distributions that are strongly depleted at both the lowest and highest eccentricities. The resulting binary populations do not match the strongly subthermal e_B distributions arising from weak scatterings only (Hamers & Samsing 2019b), nor the mildly superthermal distributions coming from strong scatterings only (Stone & Leigh 2019), but rather are peaked at an intermediate eccentricity e_B ∼ 0.1 − 0.5.

In the space of binary energy, we compare the predictions of our semi-analytic model to the results of numerical N-body simulations performed using the NBODY6 code. We find good agreement between the simulations and our analytic model for the initial conditions considered here, and for the adopted time intervals over which the results are compared (i.e., the first 20 core relaxation times, roughly up until the time at which core collapse occurs).

The semi-analytic model presented in this paper represents a first step toward the development of more sophisticated models, which will be the focus of future work. For example, as shown in Section 2.5, it is in principle possible to couple binary evolution equations to a few-zone model to produce physically transparent models of cluster core collapse and gravothermal evolution. More speculatively, the binary evolution formalism presented here could be applied to study the collisional evolution of binaries in (simplified models of) dense star systems with fewer degrees of symmetry, where N-body simulations can be prohibitively expensive and Monte Carlo techniques do not currently work. Rotating star clusters (though see Fiestas, Spurzem, & Kim 2006) and inclination-segregated stellar disks are two examples of such systems. In future work, we intend to further expand upon the base model presented in this paper to include binary-binary interactions.
These can occur more frequently than single-binary interactions in clusters with high binary fractions (i.e., binary-binary interactions dominate over single-binary interactions for binary fractions ≳ 10%; Sigurdsson & Phinney 1993; Leigh & Sills 2011), and hence cannot be neglected in this cluster regime (e.g., open clusters and low-mass globular clusters). In an Appendix, we consider this added complication to our base model, touching upon some basic predictions that motivate the need for further development in this direction. Specifically, we identify a potential steady-state balance between the binary and triple fractions in star clusters, based on simple thermodynamics-based considerations, and show via a proof-of-concept calculation that this could in principle be used to directly constrain the initial primordial binary and triple fractions, in addition to the primordial properties of the multiple star populations in dense stellar environments.

ACKNOWLEDGMENTS

NWCL gratefully acknowledges the generous support of a Fondecyt Iniciación grant 11180005, as well as support from Millenium Nucleus NCN19-058 (TITANs) and funding via the BASAL Centro de Excelencia en Astrofisica y Tecnologias Afines (CATA) grant PFB-06/2007. NWCL also thanks support from ANID BASAL projects ACE210002 and FB210003. NCS received financial support from the Israel Science Foundation (Individual Research grant 2565/19), and the BSF portion of a NSF-BSF joint research grant (NSF grant No. AST-2009255 / BSF grant No. 2019772). WL acknowledges support from NASA via grant 20-TCAN20-001 and NSF via grant AST-2007422.

APPENDIX A: ADAPTING TO THE FOUR-BODY PROBLEM AND BINARY-BINARY SCATTERINGS

Due to their non-negligible binary fractions (≳ 1%), the dynamical modification of the binary populations in open and globular clusters must necessarily include not only single-binary interactions but also direct binary-binary interactions, and possibly even interactions involving triples (see Leigh & Geller (2013) and Leigh et al. (2014) for more details). The contribution to this evolution from four-body interactions increases with increasing binary fraction. This can be understood by drawing an analogy between stellar multiplicity in star clusters and the formation/destruction of molecules in an isothermal gas (see Leigh & Geller (2013) and Geller & Leigh (2015) for more details). In a hot gas, collisions between particles are energetic and occur frequently, contributing to the destruction of molecules and the formation of more atoms and ions. This is in direct analogy with the destruction of triples and binaries in dynamically hot star clusters (i.e., those with high velocity dispersions), and a stark reduction in the fractions of stellar multiples. In a cold gas, on the other hand, particle-particle collisions tend to be of low energy, allowing particles to more easily "stick together". This stimulates the production of molecules, and increases the relative fractions of molecules and atoms/ions in the gas. This is in direct analogy with the preservation of binaries and even the formation of higher-order stellar multiples (e.g., triples produced during binary-binary interactions) in dynamically cold star clusters (i.e., those with low velocity dispersions), which have high multiplicity fractions. In this appendix, we explore and discuss how the model presented in this paper for single-binary scatterings can be adapted to also treat binary-binary scatterings.
A1 Accommodating an additional particle

In this section, we briefly consider how the model presented in this paper could be adapted to include binary-binary interactions (more generally, we consider temporarily gravitationally-bound configurations of 3 and 4 particles). That is, we consider the role of binary-binary interactions in dynamically evolving an initial distribution of binary orbital energies forward through time. We save the detailed derivations for future work, and instead sketch out rough toy models meant to highlight the potential of further pursuing this line of research. As we will show, we predict a steady-state balance between the fractions of binaries and triples in dense star clusters (in the limit that the host cluster properties are static in time).

As already described, any analytic modeling seeking to accurately quantify the time evolution of a population of binary stars in a dense star cluster must include not only single-binary interactions, but also binary-binary interactions. Both types of interactions occur in such clusters, and both affect the orbital parameter distributions. Above a critical binary fraction (i.e., the fraction of unresolved point sources that are binaries) of ∼ 10%, binary-binary encounters dominate over single-binary encounters, with the former occurring at the higher rate (Sigurdsson & Phinney 1993; Leigh & Sills 2011). Given that open clusters are much more numerous than globular clusters in the Milky Way, binary-binary interactions dominate over single-binary interactions in all environments but the most massive, densest star clusters (which have the lowest binary fractions due to their high velocity dispersions, and hence small orbital separations at the hard-soft boundary, which contribute to enhancing soft binary disruption).

The first step in extending the model presented in this paper for single-binary scatterings to binary-binary scatterings is the introduction of additional "macroscopic" outcomes (i.e., comprising different numbers of bound and/or unbound triplets, pairs and singles). This is because single-binary scatterings involving point particles and negative total interaction energies only ever end by producing a final ejected single star and a binary star system, unlike the analogous binary-binary scatterings. The latter have three possible outcomes, if all four objects are point particles: (1) a binary and two ejected single stars; (2) a stable hierarchical triple and an ejected single star; and (3) two binary stars. The introduction of these additional macroscopic states demands the development of a more sophisticated Boltzmann-type equation (i.e., treatment of the underlying dynamical evolution) than is necessary for single-binary scatterings, since each outcome must be treated individually. In the subsequent section, we will discuss a number of key features that should result from an improved model and, ultimately, from the vast increase in available phase space (given the addition of a fourth particle).

A2 A steady-state dynamical balance between binaries and triples

In this section, we focus on an illustrative example centered on the formation/destruction of dynamically-stable triple systems during binary-binary scatterings, in a given host cluster trying to achieve a "steady-state" balance (i.e., binaries and triples are created/destroyed at the same rate). In Figure A1 we show the Multiplicity Interaction Rate Diagram (MIRD) used throughout the following analysis.
The MIRD shows the parameter space in the binary fraction-triple fraction plane for which each type of dynamical interaction (i.e., single-single or 1+1, single-binary or 1+2, single-triple or 1+3, etc.) dominates. That is, the observed binary and triple fractions can be measured for a particular star cluster, and the resulting data point can be placed on the MIRD. Whatever region of the plot the data point falls on immediately indicates the type of dynamical interaction that currently dominates over all others in that cluster. As is clear from Figure A1, in low-mass star clusters in the Milky Way, interactions involving singles, binaries and triples all occur at comparable rates (see Leigh & Sills (2011) for more details), but binary-binary interactions dominate in all clusters (and the field) considered here.

The MIRD shown in Figure A1 is constructed as follows. First, the solid black lines segment the parameter space in the f_b-f_t plane for which each type of dynamical interaction dominates. Briefly, these lines are calculated by equating the analytic encounter rate estimates from Leigh & Sills (2011) for each type of interaction, and solving to obtain a relation between the binary and triple fractions (see Leigh & Sills (2011), Leigh & Geller (2013) and Leigh & Wegsman (2018) for more details). We assume R = 1 R_⊙, a_b = 3 AU and a_t = 10 AU for the mean single star radius, binary orbital separation and outer-most triple orbital separation, respectively. Although somewhat arbitrary, these values are thought to be representative of typical binaries and triples in real open clusters (OCs) and GCs (e.g. Leigh & Geller 2013; Geller & Leigh 2015). We note that the MIRD shown in Figure A1 is meant only as an illustrative example. More realistic assumptions can of course be adopted as our Boltzmann formalism is further developed to include binary-binary scatterings, allowing us to perform more detailed and robust calculations of the steady-state relation between binary and triple fractions, and allowing specific MIRDs to be tailored to individual clusters.

Second, the blue arrows show the direction of flow in the binary fraction-triple fraction plane corresponding to the time evolution of these two parameters, for a given initial combination. These arrows are calculated using a combination of analytic interaction rates and numerical scattering simulations, and correspond to the instantaneous rates (i.e. over an infinitesimally small amount of time). For the indicated initial binary and triple fractions, the number of each type of interaction is calculated over some fixed interval of time (taken to be much smaller than the cluster age). The numbers of single, binary and triple stars produced in these interactions are then calculated using numerical simulations of 1+2, 2+2, 1+3, 2+3 and 3+3 interactions performed with the FEWBODY code (see Fregeau et al. (2004) and Leigh & Geller (2012) for more details about the code). The contribution from each type of interaction is then scaled according to the corresponding analytic rate. For these simulations, we assume identical point particles with masses of 1 M_⊙. For simplicity, all binaries have separations of 1 AU initially. The initial inner and outer orbits of all triples have separations of, respectively, 1 AU and 10 AU. The relative velocity at infinity is sampled uniformly between 0 and 0.5 v_crit, where v_crit is the critical velocity, defined as the relative velocity at infinity corresponding to a total encounter energy of zero. The initial impact parameter in our calculations is always set to zero, which yields sufficiently informative results for our illustrative purposes here (e.g. Leigh et al. 2016c; Geller et al. 2019). A sketch of the v_crit calculation for this setup is given below.
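For reference, the critical velocity for the equal-mass 2+2 configuration described above follows from setting the total encounter energy to zero, i.e. (1/2) μ v_crit² equals the combined binding energy of the two binaries. The short sketch below is our own illustrative calculation for the stated setup, not code from FEWBODY.

```python
import numpy as np

# Critical velocity for a 2+2 encounter: the relative velocity at infinity
# for which the total encounter energy is zero. Setup matches the text:
# identical 1 Msun point particles, both binaries with 1 AU separations.

G = 4 * np.pi**2          # AU^3 / (Msun yr^2), so velocities come out in AU/yr
m = 1.0                   # Msun per star
a1 = a2 = 1.0             # binary semimajor axes (AU)

E_bind = G * m * m / (2 * a1) + G * m * m / (2 * a2)  # combined binding energy
mu = (2 * m) * (2 * m) / (4 * m)                      # reduced mass of the binary pair

v_crit_au_yr = np.sqrt(2 * E_bind / mu)
v_crit_kms = v_crit_au_yr * 4.74                      # 1 AU/yr ~ 4.74 km/s

print(f"v_crit = {v_crit_kms:.1f} km/s; "
      f"sampling range 0 - {0.5 * v_crit_kms:.1f} km/s")
```

For this configuration v_crit comes out at roughly 42 km/s, so the sampled velocities at infinity (up to 0.5 v_crit) keep the encounters well inside the low virial ratio regime discussed earlier.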
To calculate these instantaneous vectors in f_b-f_t space, we obtain the branching ratios (i.e., the fraction of our simulations producing singles, binaries and triples, for any type of interaction, specifically 1+2, 2+2, 2+3, and so on) directly from our numerical scattering experiments. For example, the approximate fractions of singles, binaries and triples produced from 2+2 scatterings can be seen in Figure 4 of Leigh et al. (2016b), and yield, for low virial ratios (i.e., corresponding to our chosen initial conditions for the scattering simulations performed here), f_t ∼ f_b ∼ 10% and f_s ∼ 80%.

Finally, the solid blue line shows, for a given binary fraction, the predicted triple fraction when the host cluster is in steady-state; that is, when the rates of triple creation and destruction via dynamical interactions are in approximate balance. To calculate the steady-state line, we once again use a combination of our analytic interaction rates and numerical scattering simulations, as described above. We assume that triples can only be created during 2+2 interactions (Littlewood 1952), and can only be destroyed during 1+3, 2+3 and 3+3 interactions. Interestingly, all OCs in the sample from Leigh & Geller (2013) are approximately 1σ away from the steady-state line, suggesting that the binary and triple fractions in these clusters are technically consistent with being in approximate thermal equilibrium according to our model assumptions. More work is needed to better understand the underlying physics, and how the observed features of the MIRD inform us about the underlying time evolution (since each MIRD represents a static snapshot of the host cluster's evolution in this parameter space). Among other things, a MIRD should be made for each cluster individually, using its own observed cluster properties. This work will be addressed in a forthcoming paper.

If we assume that all triples are destroyed during every 1+3, 2+3 and 3+3 interaction, then the criterion for steady-state binary and triple fractions can be written as a balance between the total rates of triple creation and destruction, where f_{2+2}, f_{1+3}, f_{2+3} and f_{3+3} are the fractions of, respectively, 2+2, 1+3, 2+3 and 3+3 encounters that produce stable hierarchical triples. In spite of the simplifying assumptions that went into making Figure A1, it reveals a potentially rich and interesting dynamical evolution that deserves further study. As already discussed, a more accurate and robust way forward in understanding the full scale of this richness could be to construct a Boltzmann equation including both the 1+2 and 2+2 cases. With this, it should be possible to more formally (and hopefully more accurately) calculate a predicted steady-state line in the binary fraction-triple fraction plane. To achieve this, the required machinery for 1+2 interactions is now entirely in hand and can be found in Stone & Leigh (2019). For 2+2 interactions, for which the parameter space of possible outcomes is much larger, the procedure begun in Leigh et al. (2016b) can be used as a rough guide moving forward. Subsequent papers already in progress on the four-body problem will deliver the required analytic solutions for each of the three possible outcomes of the chaotic four-body problem. With these in hand, our Boltzmann equation can be completed and used for parameter space explorations, which will be the focus of future work.
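As a proof-of-concept, one plausible concrete form of this creation-destruction balance can be solved numerically for the steady-state triple fraction at a given binary fraction. All rate scalings, cross-section proxies and branching fractions in the sketch below are illustrative assumptions; they are not the Leigh & Sills (2011) rates or our FEWBODY branching ratios.

```python
import numpy as np
from scipy.optimize import brentq

# Toy steady-state balance: triple creation in 2+2 encounters vs. triple
# destruction in 1+3, 2+3 and 3+3 encounters. Encounter rates are taken as
# Gamma_ij ~ n_i * n_j * sigma_ij with the cross section proxied by the
# size of the larger system; all numbers below are assumptions.

a_b, a_t = 3.0, 10.0            # binary / outer-triple separations (AU)
f_create = 0.10                 # assumed fraction of 2+2 encounters making a triple
destroy_eff = 1.0               # assume every 1+3, 2+3, 3+3 encounter destroys the triple

def net_triple_rate(f_t, f_b):
    f_s = max(1.0 - f_b - f_t, 0.0)          # single-star fraction
    create = f_create * f_b * f_b * a_b
    destroy = destroy_eff * f_t * a_t * (f_s + f_b + f_t)  # 1+3 + 2+3 + 3+3
    return create - destroy

for f_b in (0.05, 0.10, 0.30):
    f_t = brentq(net_triple_rate, 1e-8, f_b, args=(f_b,))
    print(f"f_b = {f_b:.2f} -> steady-state f_t = {f_t:.4f}")
```

Even with these crude inputs, the solver returns a monotonically rising steady-state line in the f_b-f_t plane, the qualitative feature the MIRD analysis relies on.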
A3 Understanding the physics of a "steady-state" multiplicity solution, and the potential for using it to constrain the initial conditions

Many previous studies trying to understand the dynamical evolution of populations of binaries relied on the assumption of steady-state; that is, that the time derivative of the global distribution function is equal to zero (e.g. Heggie 1975). But do the multiple star populations in most real clusters ever reach steady-state within a Hubble Time, roughly independent of the initial conditions? If they do, on what characteristic time-scale does this occur? This is an important question since, if all clusters should have reached steady-state by the present day independent of their initial conditions, then this could make it very difficult to trace back their previous dynamical evolution and constrain their primordial birth environments. Conversely, if no clusters should have reached steady-state by the present day, then both Figure ?? and Figure A1 illustrate that the observed present-day multiplicity properties can be used to directly constrain the corresponding initial conditions. This is because the flow lines in the binary fraction-triple fraction plane do not overlap. Hence, a given region in this parameter space can only be accessed by specific initial combinations of f_b and f_t. In other words, not every region in this parameter space is causally connected via the underlying dynamical evolution, making it in principle possible to use the presently observed binary and triple fractions in a given star cluster to directly constrain the primordial birth properties of the host cluster.

APPENDIX B: ANGULAR MOMENTUM DIFFUSION

Here we describe how we use the formalism of Hamers & Samsing (2019b) to compute the secular diffusion coefficients ∆R and (∆R)² that are employed in §2.4. Our starting point is the pair of diffusion coefficients Δ̃R and (Δ̃R)² taken from Eqs. 25a/25b in Hamers & Samsing (2019b). These diffusion coefficients are functions of the dimensionless binary angular momentum R and also of a small "secular approximation parameter" that depends on the ratio a_B/Q, where a_B is the semimajor axis of the binary, Q is the pericenter of the distant perturber with respect to the binary center of mass, and we have taken (i) equal-mass encounters and (ii) parabolic perturber orbits. Assumption (i) is restrictive and should be relaxed in future work, but assumption (ii) is a reasonable approximation in the limit of hard binaries. We have denoted the Hamers & Samsing (2019b) diffusion coefficients with over-tildes because they depend explicitly on the perturber pericenter Q, unlike the strong-scattering diffusion coefficients used elsewhere in this paper, which have been implicitly or explicitly averaged over a range of tertiary pericenters. Our task here is to perform the same procedure for these secular diffusion coefficients. Integrating over a range of tertiary impact parameters b, we can calculate a tertiary-averaged diffusion coefficient. The approximate equality in the second line of that expression represents the gravitationally focused limit (relevant for the hard binaries we are focused on), and in the third line we simplify the dimensional prefactor by changing variables from Q → q = Q/a and identifying the strong scattering rate Γ (Eq. 26).
Unfortunately, these integrals are generally dominated by contributions from tertiaries with small Q, necessitating a cutoff at some minimum pericenter Q_min (or equivalently b_min). Physically, it is tempting to identify this with the transition between resonant and non-resonant encounters, which in the equal-mass limit occurs at a value of q ≈ 3.4 (Samsing & Ilan 2018). We thus take q_min = 3.4 as a fiducial value, but examine an alternate case in §2.4 and find generally mild effects for plausible ranges of q_min. With this averaging formalism specified, we may now integrate ∆R and (∆R)² over tertiary pericenter. In contrast to the results of Hamers & Samsing (2019b), we convert rational numbers to decimal notation for brevity. In the end, we find:

… + q_min⁻⁵ (0.5586 R² − 0.5640 R + 0.1699)
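The q-averaging itself is straightforward numerically. The sketch below integrates a Q-dependent coefficient over q ≥ q_min in the gravitationally focused limit, where the encounter rate is roughly uniform in pericenter. For the integrand we keep only the single q⁻⁶ term whose average survives in the expression quoted above, since the complete polynomial is not reproduced in the text; the full calculation would sum several such terms.

```python
import numpy as np
from scipy.integrate import quad

# Tertiary-pericenter averaging of a secular diffusion coefficient. In the
# gravitationally focused limit the encounter rate is roughly uniform in
# q = Q / a_B, so averaging reduces to an integral over q >= q_min.
# The integrand keeps only the single surviving q^-6 term quoted in the
# text; it is NOT the full Hamers & Samsing (2019b) expression.

def dR2_tilde(q, R):
    return (0.5586 * R**2 - 0.5640 * R + 0.1699) * q**-6

def averaged_dR2(R, q_min):
    val, _ = quad(dR2_tilde, q_min, np.inf, args=(R,))
    return val

R, q_min = 0.5, 3.4   # fiducial resonant/non-resonant transition
numeric = averaged_dR2(R, q_min)
analytic = (0.5586 * R**2 - 0.5640 * R + 0.1699) * q_min**-5 / 5.0
print(f"numeric = {numeric:.3e}, analytic q_min^-5 term = {analytic:.3e}")
```

The quadrature reproduces the analytic q_min⁻⁵/5 scaling of a q⁻⁶ integrand exactly, and makes explicit why the averaged coefficients are so sensitive to the choice of q_min.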
Cancer awareness and attitude towards cancer screening in India: A narrative review

Cancer awareness is the key to early detection and better health-seeking behaviour. Cancer is quite common in both developing and developed countries, but awareness is still poor among the general population. Poor awareness may lead to poor uptake of screening modalities and delay in diagnosis. One factor that has been consistently shown to be associated with late diagnosis and treatment is a delay in seeking help for cancer-like symptoms. This paper reviews the literature on cancer awareness among the general population and attitudes towards screening modalities. The poor awareness level among the Indian population shows the need for health education and sensitisation regarding cancer and its different aspects. This will be helpful in the successful implementation of health programmes related to cancer.

Introduction

Cancer is a global disease and is spreading rapidly. Healthcare systems across the world are facing stiff challenges in tackling this issue. The challenge appears formidable in India, whose 1.3 billion population, spread across 29 states and 7 union territories with varying population genetics, environments and lifestyles, leads to a heterogeneous distribution of disease burden. [1] In low- and middle-income countries, patients with cancer generally have a poorer prognosis compared with patients in high-income countries, the reasons being lack of awareness, late diagnosis and inequitable access to affordable curative services. [2] Lack of awareness contributes to the late reporting of cancer cases to healthcare facilities. Data from four major centres in India showed that the majority of individuals with cancer seek healthcare for the first time at late stages. [3] The importance of cancer awareness has been emphasised as a means of ensuring behaviour that facilitates early detection, whereas the absence of cancer awareness has been seen as a detriment to this end. [4] Delay in health-seeking is also attributed to factors such as illiteracy, financial constraints, and myths and superstitions, which most of the time go hand-in-hand with lack of awareness.

Screening is an important preventive measure in cancer control. Even though the national programme in India has a screening component, it is yet to take root in most parts of the country. At present, most of the screening tests are available at higher centres only. The screening methods that are available to the population are also not adequately utilised. Efforts should be made to learn why such gaps occur in service delivery and utilisation, and for that, it is pertinent to understand the attitude of people towards screening practices. With the increasing trend of cancer in India, the awareness level is expected to change, as is the attitude towards cancer screening. Studies on cancer awareness and attitudes towards screening in India are limited. Awareness about cancers and cancer screening procedures will help in early diagnosis, subsequent treatment and a better outcome. Thus, the authors have tried to collate information related to cancer awareness and attitudes towards screening methods to get an overall view of the situation. With the rolling out of screening services in the country, there is a need to synthesise a review on cancer awareness.
Such information would aid in making systematic changes in the programme, if required, to improve uptake of the screening programme and overall awareness related to cancer in the population. This study was commenced after receiving ethical approval from the All India Institute of Medical Sciences, Bhubaneswar. [6] These differences could be partly explained by differences in the literacy rates of the study populations. If one considers the literacy rate alone, the level of awareness should be neither as low as 57% nor as high as 98%. [7,8] Cancer awareness is likely to be associated with many other factors besides literacy, one of which was found to be level of income. Gadgil et al., in urban women of Mumbai, found a significant association between breast cancer awareness and family income level. High-income group participants had better knowledge than low-income groups. [9] This is plausible, as a better income level would equip them with better access to knowledge. [10][11][12][13] In contrast to these, Reddy et al. in Hyderabad found 60.2% awareness about oral cancer. [14] In the multistate study conducted by Raj et al., mouth cancer was mentioned as one of the most common cancers, at 57.9%. [7] Similar to the overall cancer awareness studies, the low level of awareness in the Reddy et al. study could be explained by the educational qualifications of the study participants, most of whom were educated below high school. A higher level of awareness may be attributed to frequent oral cancer-related advertisements and the warning signs of cancer on tobacco packets. This shows the need for similar education to improve awareness of other cancers.

Awareness about Common Cancers

Cervical cancer is one of the common cancers among females in India and around the globe. Despite this, the level of awareness was observed to be low in various studies. The few studies that focused on cervical cancer reported a wide range of knowledge, varying from 3.6% to 55%. Sabeena et al. found an awareness level of 3.6%, whereas it was 50% in the study by Dahiya et al. in the urban area of New Delhi and 55% in that by Kadian et al. [15][16][17] Awareness regarding cervical cancer was poor among Indian women and was affected by educational status and urban-rural variation. This also shows the need for specific cancer awareness, especially for the female cancers that are associated with stigma. Breast cancer, the most common cancer among females in the world and in India, was also found to be the most common cancer known to the participants. It was the most commonly mentioned cancer in the studies by Puri et al. (67%) and Sharma et al. (73.8%). [8,18] Overall, the level of awareness for breast cancer was good compared to cervical cancer. Strangely, in a hospital-based study conducted by Rao et al., only 18.8% of the participants were aware of breast cancer. [19] Educational level was found to be associated with the awareness level of cancer. Specific cancer awareness was also in a similar range to that of overall cancer. Awareness about cervical cancer was quite poor. Considering that it was the most common cancer among Indian women until recently and is still the second most common, its awareness is abysmally low. This could be due to two reasons: firstly, danger signs of breast cancer are more appreciable than those of cervical cancer, and secondly, there is widespread publicity about breast cancer through mass media. One also cannot rule out the sociocultural aspects of stigma and reluctance to talk about the genital areas in the Indian milieu.
This highlights the need for qualitative research methods to find out the reasons and the ways to deal with them, without which interventions will be difficult to implement and success limited. One common observation in most studies was the association of education with awareness level. This association could also be linked with the better awareness found among higher income groups, as it is a well-known fact that higher levels of education beget higher income levels. Thus, if a reduction in the cancer burden in terms of mortality and morbidity is to be achieved, there has to be a better awareness level than what is prevalent, along with improvement in education. Better awareness about oral and breast cancer also indicates that, at least in some areas, mass education and health information are penetrating the community. There is a dire need to strengthen awareness of other cancers too, especially cervical cancer.

Awareness about the source of cancer information

The level of cancer awareness was affected by accessibility to different sources of information regarding cancer. [20] [5,7,14,18] Healthcare personnel were the major source of information for cervical cancer, as mentioned by Sankheshwari et al. [10] In contrast to this, a few studies showed that friends and relatives were the sources of information about cancer (89% in Rao et al., 36.1% in Raj et al., Patra et al., and 17% in Siddharthar et al.). [7,19,21,22] As cancer awareness is a vital component of the cancer control programme, careful consideration of the source of information may be useful for generating awareness. Outpatient visits can be considered an opportunity for generating cancer awareness, as cancer is less talked about in the community and less advertised. Awareness generation campaigns can be a better way to impart information to the communities. Community health education on cancer needs to be emphasised. Proper utilisation of mass media and the internet can be useful in creating awareness. [23,24]

Awareness about danger signs of cancer, preventability and curability of cancer

Most cancers remain in the precancerous stage for a long period, and early diagnosis will help in reducing mortality. Awareness of the early signs of cancer is related to better health-seeking behaviour and early detection of common cancers. The most common reason for delayed healthcare seeking was the failure to recognise a symptom as suspicious. [25] Information about early signs and symptoms of specific cancers was collected in a few of the studies. Ray et al. reported that 88% of participants could identify at least one sign of cancer, but none could identify all seven cancer warning signs. [5] Unusual bleeding was the most mentioned cancer sign (66.4% in Puri et al., 41.5% in Veerkumar et al. and 23.9% in Raj et al.). [7,8,26] Dahiya et al. mentioned pain or discharge from the breast (67%) as the commonest symptom of breast cancer, followed by breast lump (57%), in a study done for breast cancer. [27] Awareness was seen to be higher in the study by Dahiya et al., probably due to the better educational level of the participants. This shows that educational status plays a significant role in cancer awareness. Change in the shape and size of the breast, any growth, or discharge is perceived to be abnormal by the general population. In a study done by Sankheswari et al., only 17% of participants could identify the signs of oral cancer. [10] Signs of oral cancer were less known to the general population despite oral cancer being the commonest cancer in India.
Similarly, 90% of the population was found to be unaware of the warning signals of cervical cancer in a study by Raychaudhuri et al. [28] This shows the lack of comprehensive awareness regarding cancer in the general population. More comprehensive awareness generation strategies need to be developed. As most cancer cases are diagnosed at a later stage, awareness about signs and symptoms can improve health-seeking behaviour regarding cancer and the uptake of screening procedures, and subsequently the outcomes of cancer patients.

Awareness about the curability of cancer has an impact on health-seeking behaviour towards cancer. The results of different Indian studies showed that perceptions regarding the curability of cancer were quite different. Three-fourths of the participants considered cancer curable in a study done by Sheshachalam et al.; [6] the proportion was lower in other Indian studies (39.8% in Puri et al., 58.3% in Ray et al.). [5,8] Raj et al. reported that 57.1% of participants were aware of the fact that cancers can be cured if detected at an early stage. [7] A positive association between awareness about the curability of cancer and an increase in educational status was also demonstrated by Elangovan et al. [29] The results of the studies done for specific cancers varied, with awareness about curability ranging from 34.8% (Thilak et al., for oral cancer) to 64% (Agrawal et al., for breast cancer). [11,12] Though people know about cancer, there is a substantial gap in knowledge about its curability.

Common cancers such as oral, cervical, breast and lung cancers are preventable to some extent with appropriate preventive measures. Awareness about the preventability of cancer will affect people's practice of preventive measures. Very few studies have collected data regarding this. Cancer was mentioned as preventable by 74% of the respondents in the study by Agrawal et al. on oral cancer in Gorakhpur. [12] In contrast to this, the awareness level of preventability was found to be very low (3.6%) in the study by Raychaudhuri et al. on cervical cancer. [28] Though the population characteristics of both studies are similar, the large difference may be due to the widespread advertisement of tobacco use as a cause of oral cancer.

Awareness about Risk Factors of Cancers

Awareness about the risk factors of cancer and its preventive aspects is essential for early detection through screening and for treatment of precancerous lesions. Awareness about the risk factors of cancer was limited to tobacco and alcohol only. Tobacco was identified as the most common risk factor in most of the studies. Smoking was the most mentioned risk factor, followed by tobacco chewing. The awareness level about smokeless tobacco as a risk factor was found to be 74.7% by Puri et al. [7,8] A study done for cervical cancer in North Bengal showed that only 14.6% were aware of the human papillomavirus. [28] A family history of cancer was mentioned as a risk factor for breast cancer in the study by Dahiya et al. [27] Awareness about risk factors was mostly limited to tobacco in the population. This differential awareness may be the result of the government's focus on prevention strategies to reduce the use of tobacco products. Comprehensive health education regarding the other risk factors of cancer is the need of the hour.

Awareness and attitude towards screening and the prevailing screening practice

For better survival rates of cancer patients, knowledge and awareness of cancer and its screening are important.
Screening leads to early detection and a better chance of survival. However, awareness regarding screening was abysmally low in the study done by Raychaudhuri et al. (9.5%). [28] Though cervical cancer is one of the most common cancers in females, awareness of it was low among Indians, which might be the cause of the lower level of awareness regarding its screening. A similarly low level (12.2%) of awareness was observed by Siddharthar et al., who also showed a significant difference in awareness among various educational groups. [22] A higher level of awareness about cervical cancer screening was reported by Dahiya et al. [17,30] Dahiya et al. collected information regarding awareness of breast cancer screening in Delhi, where 48.6% were aware of mammography as a screening method. [27] This higher level of awareness is probably due to the better accessibility to a tertiary healthcare facility in an urban population. In contrast to this, Siddharth et al. found that none knew about breast self-examination in a hospital-based study in central India. [19] Attitude towards a screening test has an impact on the practice of the screening procedure. Patra et al. found only one-fourth of the participants willing to participate in cervical cancer screening. [21] A study by Gangane et al. in Wardha reported an attitude score of 6.2 out of 7 in rural and 6.7 out of 7 in urban areas, but very few had undergone screening. [31] The studies clearly show a gap between attitude and practice. A positive correlation was seen between knowledge and practice. [32] The practice of cancer screening is much lower than the awareness of and attitude towards screening. The screening practice for cervical cancer with Pap smear was very low (2.4%) in an Indian study done by Sabeena et al. [15] A similarly poor practice score was seen in the study by Gangane et al. [31] Poor screening practice (10%) was reported by Agarwal et al. and Khanna et al., where hospital attendees and health workers were studied, respectively. [33,34] However, a good screening practice of 49% for breast self-examination (BSE) was reported by Dahiya et al. [27] This difference could be due to the better educational status of the participants of that study, where around 60% of the participants had an education of 15 years or more. Gadgil et al. reported better screening practice among those women having adequate awareness. [9] This shows that a good level of awareness is necessary for opting for screening.

Conclusion

General awareness of cancer was poor among the Indian population; similarly, it was also poor for curability, preventability and screening methods. Education and place of residence (rural or urban) play a vital role in cancer awareness. The studies were done in different periods, which may affect the reported awareness levels, as the burden of cancer is increasing. Television, friends, relatives and health personnel were the common sources of cancer-related information. Awareness about the risk factors of cancer was largely limited to tobacco and alcohol. Attitude towards screening modalities was found to be good among the Indian population. The screening practice was poor. Screening practice can be improved by creating community-level awareness.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
2020-06-01T13:22:50.862Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "73dbbb48db09b6ea53a81a5b969f8d444b4ea9b7", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_145_20", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "73dbbb48db09b6ea53a81a5b969f8d444b4ea9b7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250661000
pes2o/s2orc
v3-fos-license
Static Foot Disturbances and the Quality of Life of Older Person with Rheumatoid Arthritis

Disturbed static foot function is one of the main causes of impaired quality of life, which may be related to the frailty syndrome of older adult patients with Rheumatoid Arthritis (RA). The aim of the study was to evaluate the relationship between parameters of static foot function disturbances and the quality of life of older adult patients with RA. The study was performed among 102 patients with RA diagnosed according to the American College of Rheumatology (ACR) and EULAR 2010 criteria. Patients were divided into four subgroups depending on radiological evaluation according to the Steinbrocker classification. Plantocontourography examination was conducted using a podoscope with a 3D scanner and software for computer-aided foot examination CQ ST2K. Quality of life of patients with RA was evaluated using the Arthritis Impact Measurement Scales-2 (AIMS-2). A statistically significant relationship between AIMS-2 and parameters of static foot function disturbances was observed. The study revealed correlations between parameters of disturbed static foot function and RA severity, in comparison to disease duration. Our results indicate a relationship between static foot function disturbances and the quality of life of patients with RA, not only in the area of physical activity but also in the social and emotional domains. The study results indicate that plantocontourography and assessment of quality of life using AIMS-2 could be useful as diagnostic and prognostic tools in RA.

Introduction

Rheumatoid arthritis (RA) is a chronic inflammatory disease of the joints. RA mainly affects the small joints of the hands and feet. In the foot, it provokes deformities in the forefoot and hindfoot. These pathological changes affect the joints and ligaments, limiting movement in the ankle and the foot. It also produces an unequal distribution of pressure, making it painful to remain in a standing position. Foot and ankle problems are especially common in patients with RA, causing significant disability and limitation in daily activities. Improving the quality of life of rheumatoid patients is the prime therapeutic goal for medical doctors and physiotherapists. The development of inflammation and progression of pain influence both the physical and psychological functional status of patients. RA is associated with chronic pain, which results in a decreased level of physical activity and, frequently, the necessity faced by patients to change their life plans. Disease progression may lead to unemployment, social marginalization, economic dependence and even poverty. Patients with dysfunctions resulting from progressing RA are characterised by a significant degree of disability. Clinical studies reveal far lower quality of life in patients with RA compared to healthy individuals [1,2]. Increasingly often, the quality of life reported by patients with RA provides valuable information on the effectiveness of pharmacotherapy, rehabilitation and nursing care. The evaluation of treatment outcomes from the perspective of patient quality of life allows for assessment of the long-term effectiveness of treatment for chronic diseases, including RA. In recent years, frailty, defined as "a biologic syndrome of decreased reserve and resistance to stressors, resulting from cumulative declines across multiple physiologic systems, and causing vulnerability to adverse outcomes", has emerged as a significant area of research in rheumatology.
Frailty is a geriatric syndrome that results from a multi-system reduction in reserves, deterioration of the ability to adapt to stressful situations, and thus an increased risk of such adverse phenomena as infections, falls, deterioration of cognitive abilities, and dependence on other people or institutions. The presence of frailty increases the risk of a more severe course of certain diseases or predisposes to the development of additional health problems [3]. Frailty is closely linked to musculoskeletal health. The overall prevalence of frailty in Europe is assessed as 7.7% in the population aged above 50 years [4]. Musculoskeletal functioning is a key component in the quantification of frailty; at the same time, frailty is associated with the most common age-related disease conditions, such as rheumatoid arthritis (RA). Advancing research into the determinants of frailty in RA is necessary because, even as the therapeutic armamentarium for RA continues to grow, individuals with RA continue to commonly experience physical disability and reduced health-related quality of life [5-7]. Inflammatory disorders and deformities, characteristic of RA, cause abnormalities in the morphology, anatomy and biokinematics of the foot. Progressing inflammatory changes in the foot are associated with hypertrophy of the synovial membrane, relaxation of ligaments and abnormal muscle function, resulting in deformity. Furthermore, enzymatically induced damage to cartilage, articular tissues and bone structures plays an important role in this process. Studies indicate that foot problems affect 55-90% of individuals with RA [8-11]. Approximately 20% of patients with early stages of RA report symptoms of impaired functionality of the feet and ankle joints [12]. It is also worth noting that in 36% of cases, the first symptoms reported by patients concern the feet [13]. Abnormal foot biomechanics in patients with RA leads to deformities. Pathologies affect all anatomical parts of the foot: the forefoot, metatarsus, hindfoot and the ankle joint. According to the available literature, most deformities occur within the forefoot [8-11]. Foot deformities diagnosed in patients with RA include varus of the first metatarsal, valgus of the fifth metatarsal, subluxation in the metatarsophalangeal joints, hammer toes, stiff toes, pes planus, adduction and pronation of the forefoot, hypermobile Chopart joint and hindfoot valgus. These anomalies are accompanied by subcutaneous bursitis. At present, the degree of deformity is evaluated in clinical studies using a modified electronic version of the podoscopic test. Plantocontourography, which offers a computer-aided analysis of parameters, is characterised by higher reliability and thus repeatability. Other advantages of computer-aided foot examination include assessment accuracy and the ability to monitor degenerative changes, which are crucial to the development of an appropriate treatment plan. This examination enables not only foot function assessment but also the detection and graphical documentation of biomechanical anomalies in the foot. Plantocontourography offers an accurate representation of the plantar surface and provides detailed information on the three-dimensional structure of the foot arch [6,10,11]. To the best of our knowledge, there is a limited number of studies investigating the impact of foot problems on the quality of life of patients with RA [8-13].
It therefore seems worthwhile to examine whether there is a relationship between parameters of static foot function assessed by plantocontourography and the quality of life of older adult patients with RA.

Materials and Methods

The aim of the study was to evaluate the relationship between parameters of static foot function assessed by plantocontourography and the quality of life of older adult patients with RA. The study included 102 patients with RA diagnosed according to the American College of Rheumatology (ACR) and the 2010 European League Against Rheumatism (EULAR) criteria [14]. All patients underwent a medical evaluation, including a physical and clinical examination. Particular attention was paid to gait disorders and problems within the foot joints reported by patients. Patients with comorbidities affecting static foot function, such as diabetes, discopathy and cardiovascular diseases, were excluded from the study. Other exclusion criteria were post-traumatic and postoperative conditions in the lower extremities. Due to the procedure of the computer-aided foot examination, patients with RA who were unable to stand without assistance were also excluded. Treatment included methotrexate (MTX) 15-20 mg once a week and folic acid 15 mg once a week (given on the day after MTX administration). Steroids and immunosuppressive drugs were not used during the two months prior to the study. Patients were divided into four subgroups depending on the radiological evaluation of RA (the Steinbrocker classification). The Disease Activity Score-28 for Rheumatoid Arthritis (DAS-28) was assessed [15]. All patients were informed about the nature of the study and its purpose. Patients gave written informed consent for study participation. The study was conducted in accordance with the protocol approved by the Bioethics Committee of the Medical University in Bialystok (No. 114-09904P). All methods were carried out in accordance with relevant guidelines and regulations. This study was performed under the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines (Table S1). Our study was conducted by one researcher. The computer-based plantocontourographic examinations were performed first; then the patients completed the questionnaire assessing the quality of life. The evaluation of static foot function was performed using a podoscope with a 3D scanner and software for computer-aided foot examination CQ ST2K (CQ-Stopy USB). The quality of life of patients with RA was assessed using the Arthritis Impact Measurement Scales-2 (AIMS-2), a shorter version of AIMS, developed by the Boston University School of Public Health. The AIMS-2 instrument is a 78-item questionnaire. The first 57 items are broken down into 12 scales assessing mobility level, walking and bending, hand and finger function, arm function, self-care, household tasks, social activities, support from family and friends, arthritis pain, work, level of tension, and mood [18].
Statistical Analysis

Statistical analysis assessed the normality of distribution by means of the Kolmogorov-Smirnov test with Lilliefors correction and the Shapiro-Wilk test. The analysed quantitative variables did not exhibit normality of distribution. The non-parametric Mann-Whitney U test was applied in two groups in order to compare quantitative variables without normality of distribution. Spearman's rank correlation coefficient was also established. The non-parametric Kruskal-Wallis ANOVA test was used to analyse differences in results among the groups. Results were considered statistically significant for p < 0.05. Calculations were performed using the Statistica 10.0 package by StatSoft.
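For readers who want to reproduce this kind of testing pipeline with open tools rather than Statistica, a minimal sketch using standard Python scientific libraries is given below. The data, group sizes and variable names are invented placeholders, not the study's data; note that scipy itself does not implement the Lilliefors correction (the statsmodels package provides it), so only the Shapiro-Wilk normality check is shown.

```python
# Sketch of the statistical pipeline described above, on illustrative data:
# normality checks, two-group and multi-group non-parametric tests, and
# Spearman's rank correlation, at the alpha = 0.05 level used in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical AIMS-2 mobility scores for the four Steinbrocker classes.
scores = {c: rng.gamma(shape=2.0, scale=1.5, size=25) for c in ["I", "II", "III", "IV"]}

# Normality of distribution (Shapiro-Wilk; for the Lilliefors-corrected
# Kolmogorov-Smirnov variant, see statsmodels' lilliefors()).
for c, x in scores.items():
    w, p = stats.shapiro(x)
    print(f"Class {c}: Shapiro-Wilk p = {p:.3f}")

# Two-group comparison without normality: Mann-Whitney U test.
u, p_mw = stats.mannwhitneyu(scores["I"], scores["IV"], alternative="two-sided")

# Multi-group comparison: non-parametric Kruskal-Wallis ANOVA.
h, p_kw = stats.kruskal(*scores.values())

# Association between two quantitative variables: Spearman's rank
# correlation, e.g. between a foot parameter and an AIMS-2 domain score.
alpha_angle = rng.normal(10.0, 3.0, size=25)       # hypothetical alpha angles
rho, p_sp = stats.spearmanr(alpha_angle, scores["I"])
print(p_mw < 0.05, p_kw < 0.05, rho)
```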
Results

The study included 102 patients with RA who were in remission (DAS-28 < 2.6). Table 1 shows the characteristics of study participants. There were no statistically significant differences between groups.

Inter-Group Differences in Static Foot Function Depending on the Radiological Classification of RA according to the Steinbrocker Classification

There were significant inter-group differences in the α angle of the left foot (p = 0.04) depending on RA severity. The study revealed a positive correlation between disease severity evaluated according to the Steinbrocker functional classification and the incidence of hallux valgus (α > 9°) (p = 0.04). Incidence rates for hallux valgus in both feet in patients with RA Class II and subjects with RA Class III were comparable. In patients with RA Class IV, hallux valgus most commonly affected the right (75%) and left (93%) foot. The study revealed that impaired static foot function becomes evident in patients with early-stage RA (Class I) and persists, almost unaltered, throughout subsequent stages of the disease. There were significant differences in mobility scores between patients with RA Class I and subjects with RA Classes III (p < 0.001) and IV (p < 0.001), and between patients with RA Class II and subjects with RA Classes III (p < 0.005) and IV (p < 0.000) (Figure 2).

Domain: Walking and Bending

There were significant differences in walking and bending scores between patients with RA Class I and subjects with RA Classes III and IV (p < 0.001), as well as between patients with RA Class II and subjects with RA Class IV (p < 0.001) (Figure 3).

Domain: Self-Care Tasks

There were significant differences in the level of physical activity related to self-care tasks between patients with RA Class I and subjects with RA Classes III (p < 0.001) and IV (p < 0.001), between patients with RA Class II and individuals with RA Class IV (p < 0.001), as well as between patients with RA Class III and subjects with RA Class IV (p < 0.005) (Figure 4).

Domain: Household Tasks

There were significant differences between patients with RA Class I and those with RA Classes III (p < 0.001) and IV (p < 0.001), as well as between patients with RA Class II and subjects with RA Class IV (p < 0.002) (Figure 5).

Domain: Arthritis Pain

Significant differences were established between patients with RA Class I and those with RA Classes III (p < 0.001) and IV (p < 0.001), as well as between patients with RA Class II and subjects with RA Classes III (p < 0.001) and IV (p < 0.001) (Figure 6).
Inter-Group Differences in Emotional Well-Being Scored with AIMS-2 Depending on the Radiological Classification of RA

There were significant differences in the level of emotional tension between patients with RA Class I and those with RA Classes II (p < 0.01), III (p < 0.001) and IV (p < 0.001), as well as between patients with RA Class II and subjects with RA Class IV (p < 0.05) (Figure 7). Deterioration of mood correlated with the duration of RA (Figure 8).

Discussion

Rheumatoid arthritis is one of the most common systemic connective tissue disorders. In Poland, RA affects approximately 1% of the adult population, which is around 400,000 individuals [19,20]. Recent medical studies reflect an increased interest in the health-related quality of life of patients, particularly those with chronic diseases [21-23]. Measures designed to address problems limiting patient quality of life, including conditions referred to as the rheumatoid foot, have been examined in many studies [9-11]. The relationship between frailty and RA is not yet fully characterised, but these conditions share many of the same clinical outcomes, associations and suggested pathophysiology. RA and frailty are closely related: both are multidimensional concepts characterised by deficits in multiple organ systems, i.e., psychological, cognitive and/or social support and other environmental factors, as well as physical limitations. Bak et al. in their study concluded that frailty syndrome has no significant impact on the quality of life of patients with diagnosed RA [24].
However, there are no studies on the impact of disturbances in foot morphology on the occurrence of the frailty syndrome. Studies conducted among patients with RA indicate an insufficient level of knowledge regarding foot care and the prevention of pathological changes in the foot. Implementation of educational programmes on healthy behaviours, including learning correct movement patterns, improving accessibility in buildings, and the use of mobility aids and equipment, determines increased treatment effectiveness [25-27]. Functional foot problems affect approximately 10-24% of the population and are more common in geriatric patients and those suffering from RA [10,11,28,29]. Interestingly, Williams et al. reported that the majority of patients with RA complain of foot problems even before the diagnosis of RA is established. Foot deformity leads to a partial or, sometimes, total loss of mobility in the foot joints and to pain, commonly located in the forefoot. Unfortunately, studies have revealed that in many cases this problem is underestimated by family doctors [30,31]. Foot deformities associated with RA reflect disorders of static foot function. Pathological changes in the rheumatoid foot are considered the most common causes of disability [32,33]. Foot deformities are not just an aesthetic problem, but a cause of persistent pain which reduces patient quality of life. Within the first four years of diagnosis, as many as 75% of patients present with nagging pain which prevents them from being active in many areas of life [34]. Our study revealed differences in AIMS-2 scores on the "walking and bending" scale depending on the severity of RA. Our results demonstrate a gradual deterioration in mobility associated with the progression of RA. Other authors have reported a correlation between gait disorders and changes in the radiographic image of the metatarsophalangeal joints. Moreover, the authors emphasised that, most frequently, gait disorders progress rapidly in the early stages of RA and stabilize a year after disease onset [35]. In the present study, evaluation of abnormal static foot function in patients with RA was based on the hallux valgus alpha angle (α), the Clarke angle (CL) and the Wejsflog index (W). In our investigation, statistical analysis of individual parameters of the computer-aided plantocontourography examination revealed similar correlations. We established correlations between parameters of static foot function and quality of life depending on the Steinbrocker classification of RA. The study demonstrated that static foot disorders had a significant impact on all areas of life evaluated with AIMS-2 in patients with RA Class I and RA Class IV. In RA Class II, a correlation was established between the value of the α angle and the AIMS-2 score for household tasks. In patients with RA Class III, the only correlation that was observed regarded the value of the CL angle and the AIMS-2 score for self-care tasks. Apart from the biological aspects of health and well-being, clinical studies focus on patients' emotional state and their ability to lead a normal life. Patient self-assessment reports provide information on limitations encountered in specific areas of life and the possibility of learning new behaviours in various situations related to the progression of RA [36].
When the diagnosis of RA is established, patients are informed that a complete recovery is not possible and that therapeutic success consists in prolongation of life and improvement in the patient's functional status and quality of life. Recent studies evaluating the quality of life of rheumatoid patients rely on specific measurement tools. Combination disease-specific questionnaires, which can be customised to a given study, are used [37]. The suitability of combination questionnaires is emphasised by a number of authors [38-40]. A meta-analysis by Matcham et al. revealed that RA has a significant impact on the assessment of health-related quality of life. The authors highlighted the fact that the majority of studies demonstrate a more destructive impact of progressing RA on the mobility of patients than on their emotional state [38]. The quality of life of rheumatoid patients is also determined by their age. El-Labban et al. reported a higher degree of physical disability in patients younger than 60 years compared to those older than 60. Moreover, the authors observed a greater prevalence of small joint arthritis of the hand and foot in younger people, which affects their ability to perform self-care tasks [41]. Squire used an open interview to analyse the impact of RA on the professional life of patients. The author revealed that patients need to adapt how they perform motor activities depending on their functional status and disease duration. Due to disease progression, patients frequently face the necessity of changing their workplace and type of occupation. Nevertheless, the study demonstrated that being able to pursue a professional career has a positive effect on self-esteem and the emotional status of patients [42]. Other studies indicate numerous difficulties resulting from the absence of standardised methods of documenting and reporting foot health assessment data, monitoring the joint degeneration process and planning treatment [43,44]. Williams et al., who are members of the North West Clinical Effectiveness Group for the Foot in Rheumatic Disease (NWCEG), point to the need to establish gold standards for the management of patients with foot deformities and emphasise the role of a multidisciplinary team that should be involved in the treatment process. The authors draw particular attention to the prevention of foot deformities at an early stage of RA development. Furthermore, they underline the importance of special measurement tools such as the foot function index (FFI) [45]. The incorporation of questionnaires assessing quality of life in the treatment of patients with RA allows for direct involvement of the patient in the decision-making process regarding therapy, which can improve its outcomes [46,47]. A number of studies have evaluated the effectiveness of pharmacological, surgical and physiotherapy treatment based on the assessment of quality of life of rheumatoid patients [27]. Quality of life questionnaires have also been used to assess the effectiveness of orthotic treatment in patients with RA. Studies demonstrate an improvement in foot function in patients who use orthopaedic appliances for the forefoot [8,47]. There are no detailed reports in the available literature on the correlation between parameters of static foot function and the quality of life of patients with RA.
A limited number of studies have evaluated the relationship between parameters of plantocontourography and the quality of life of patients with RA. The present study demonstrated correlations between parameters of static foot function and the domains of physical activity, social interactions and emotional well-being, assessed with AIMS-2, in groups of patients with different severity of RA. One of the few studies investigating the correlation between the clinical status of the feet in patients with RA and quality of life was conducted by Otter et al. The authors of the study designed their own questionnaire to assess the impact of foot problems on the quality of life of 390 patients with RA. Almost all surveyed patients reported a negative effect of foot dysfunction on self-perceived quality of life. Impaired mobility caused by foot deformities is a key factor that is directly correlated with limitations in the social domain. Study participants reported the greatest problems with mobility and the choice of footwear. A correlation was found between quality of life and clinical symptoms such as pain, swelling, stiffness and numbness. Moreover, foot dysfunctions caused a lowering of mood in patients due to significantly limited involvement in social activities. Patients also emphasised the relationship between foot deformities and sleep quality [47]. Unfortunately, our study has some limitations, such as the small number of patients with RA and the absence of a control group.

Conclusions

Identifying correlations between the physical, emotional and social status of patients with RA and static foot disorders, as well as functional limitations, highlights the fundamental role of the assessment of patient quality of life. Knowledge of the prevalence of frailty in rheumatic diseases and of the factors that influence it, including its connection with foot deformities, needs to be developed. Furthermore, findings from the above-mentioned studies indicate the need for a plantocontourography examination and the use of tools for assessing patient quality of life (e.g., AIMS-2), and their incorporation in the diagnostic process of RA. These procedures appear to be helpful in establishing therapeutic standards that include both medical and social support. Plantocontourography is a perfect complement to the standard diagnostics in RA. The results of the conducted research suggest that the plantocontourography test could be useful in diagnosis and in prognosis of the course of the disease, and for starting physiotherapeutic intervention at the right moment.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph19148633/s1, Table S1: STROBE Statement-checklist of items that should be included in reports of observational studies.

Institutional Review Board Statement: All patients were informed about the nature of the study and its purpose. Patients gave written informed consent for study participation. The study was conducted in accordance with the protocol approved by the Bioethics Committee of the Medical University in Bialystok (R-I-002/120/2011).

Informed Consent Statement: All patients were informed about the nature of the study and its purpose. Patients gave written informed consent for study participation.

Data Availability Statement: Datasets used and/or analysed during the study are available from the corresponding author on reasonable request.
2022-07-20T15:19:37.376Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "2db4722a565397713a848b5cd7143dfb94993938", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/14/8633/pdf?version=1657884044", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8099154b04473d40951ac7a001f7bb4a5b9f408e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256690695
pes2o/s2orc
v3-fos-license
Superresolution concentration measurement realized by sub-shot-noise absorption spectroscopy

Absorption spectroscopy is one of the most widely used spectroscopic methods. The signal-to-noise ratio in conventional absorption spectroscopy is ultimately limited by the shot noise, which arises from the statistical property of the light used for the measurement. Here we show that the noise in absorption spectra can be suppressed below the shot-noise limit when entangled photon pairs are used for the light source. By combining broadband entangled photon pairs and multichannel detection, we realize the acquisition of sub-shot-noise absorption spectra in the entire visible wavelength. Furthermore, we demonstrate the strength of sub-shot-noise absorption spectroscopy for the identification and quantification of chemical species, which are two primary aims of absorption spectroscopy. For highly diluted binary mixture solutions, sub-shot-noise absorption spectroscopy enables us to determine the concentration of each chemical species with precision beyond the limit of conventional absorption spectroscopy. That is, sub-shot-noise absorption spectroscopy achieves superresolution in concentration measurements. Here, the authors use entangled photon pairs as the light source for absorption spectroscopy and demonstrate sub-shot-noise spectra in the entire visible wavelength region. They quantify chemical species in highly diluted solutions with precision beyond the limit of conventional spectroscopy.

Absorption spectroscopy is one of the simplest and most widely used spectroscopic methods. In this method, one irradiates light onto a sample of interest and measures the intensity of the transmitted light. Despite the simplicity of the experiment, it can yield a large variety of information about the sample, from the vibrational modes and the electronic structure of the constituent molecules 1 to the atomic species comprising the sample and their local environment 2, depending on the wavelength range of the light used in the measurement (i.e., infrared, visible and ultraviolet, or X-ray). In practice, an absorption spectrum is obtained by performing two kinds of measurements, namely sample and reference measurements. In the sample measurement, photons are shone onto a sample, and the transmitted photons are detected in a wavelength-sensitive manner. In the reference measurement, which is usually done in parallel to the sample measurement, the same number of photons are sent directly to another detector without a sample. By writing the number of photons detected in the sample measurement as N_S and that in the reference measurement as N_R, the absorbance A of the sample is given by

A = \log_{10}( N_R / N_S ).    (1)

An absorption spectrum of the sample is obtained by plotting the absorbance A as a function of wavelength. In this experimental procedure, it is assumed that the number of incident photons in the sample measurement can be chosen to be exactly the same as in the reference measurement. This assumption, however, cannot be strictly satisfied in actual measurements because of the statistical property of classical light such as a laser 3,4: no matter how carefully the number of incident photons is adjusted in the two measurements, the best one can do is to balance the statistical average of the photon number per unit time, and the photon number at each instant inevitably deviates from one another because it fluctuates randomly following Poisson statistics.
This mismatch of the incident photon number in the reference and sample measurements results in noise in the measured absorption spectra, which is called the shot noise. Defining the noise as the standard deviation of the absorbance, the noise δA can be expressed in terms of the absorbance A as

\delta A = \sqrt{\mathrm{Var}(A)} = \sqrt{\langle A^2 \rangle - \langle A \rangle^2}.    (2)

Here, Var(A) is the variance of A, and ⟨·⟩ denotes the statistical average. Following this definition, the shot noise δA_SN in absorption spectra is given by (see Supplementary Note 4 for the derivation)

\delta A_{\mathrm{SN}} = \frac{1}{\ln 10} \sqrt{ \frac{1}{\langle N_S \rangle} + \frac{1}{\langle N_R \rangle} }.    (3)

The shot noise is usually considered to be unavoidable because it arises from the intrinsic property of the light itself. That is, the lower bound for the noise level in an absorption measurement is determined by the shot noise δA_SN (commonly known as the shot-noise limit). Contrary to this common belief, there is actually a chance to overcome this shot-noise limit when an exotic state of light, i.e., nonclassical light, is used as the light source. Nonclassical light, or quantum light, is a state of light that cannot be described within the framework of classical electrodynamics. Examples of nonclassical light include single photons, squeezed light, and entangled photon pairs 3,4. The use of such nonclassical light 5 for spectroscopic purposes is a rather unexplored topic. Nevertheless, considering that the advances of optical spectroscopy so far have been largely supported by the advent of new light sources, lasers in particular 6, the use of nonclassical light may lead to a breakthrough in the further development of spectroscopic techniques. Several types of nonclassical light sources that achieve a noise level below the shot-noise limit, i.e., sub-shot-noise operation, have been reported. The Sandoghdar group, for instance, generated intensity-squeezed light using a single molecule with a near-unity spontaneous emission quantum efficiency 7. Here, the idea is that the single molecule emits exactly one photon each time it is excited by a train of excitation pulses, resulting in an extremely regular stream of emitted photons. The Walmsley group and Bowen group have also obtained intensity-squeezed light based on degenerate optical parametric amplification, and they applied it to the measurements of stimulated emission 8 and stimulated Raman signals 9, respectively. Its two-beam variant, i.e., two-mode squeezed light, where the intensity difference between the two beams is squeezed below the shot-noise limit, has been realized as well, for example, by the Leuchs group using nondegenerate optical parametric amplification in a nonlinear crystal 10 and by the Agarwal group using a four-wave mixing process in an atomic vapor 11. In the latter study, the feasibility of absorption measurements at the sub-shot-noise level was also demonstrated. Using entangled photon pairs, on the other hand, the Rarity group has done pioneering work on sub-shot-noise absorption measurements, in which the noise in the measurement is suppressed below the shot-noise limit 12,13. More recently, the Genovese group has demonstrated sub-shot-noise absorption imaging 14,15. Sub-shot-noise absorption spectroscopy has also been reported by the Matthews group 16. However, their spectral window was only 10 nm in width in the near-infrared region (from 807 nm to 818 nm), and the measurement involved time-consuming wavelength sweeping. Thus, compared with conventional absorption spectroscopy, there was a substantial limitation in the measurement procedure as well as in the obtainable spectra.
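As a concrete illustration of Eqs. (1)-(3), the short sketch below computes the absorbance and its shot-noise limit from photon counts, and checks the error-propagation result against a Poissonian Monte Carlo. The count values are invented for the example; this is an illustration of the formulas above, not code from the study.

```python
# Numerical illustration of Eqs. (1)-(3): absorbance from photon counts
# and the shot-noise limit for Poissonian (classical) light.
# The photon numbers below are invented for the example.
import numpy as np

N_S = 9.0e6   # mean photon count, sample measurement
N_R = 1.0e7   # mean photon count, reference measurement

# Eq. (1): absorbance from the detected photon numbers.
A = np.log10(N_R / N_S)

# Eq. (3): shot noise of the absorbance, obtained by propagating the
# Poissonian variances Var(N) = <N> through Eq. (1).
dA_SN = (1.0 / np.log(10)) * np.sqrt(1.0 / N_S + 1.0 / N_R)

# Eq. (2) as a Monte Carlo check: the standard deviation of A over many
# simulated exposures with independent Poissonian counts reproduces dA_SN.
rng = np.random.default_rng(1)
A_samples = np.log10(rng.poisson(N_R, 100_000) / rng.poisson(N_S, 100_000))
print(A, dA_SN, A_samples.std())   # the last two numbers should agree
```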
In this study, we report the development of broadband sub-shot-noise absorption spectroscopy with multichannel detection. This method enables the acquisition of absorption spectra in the entire visible wavelength (from 450 nm to 650 nm) in a single measurement without wavelength sweeping. In addition, at each wavelength in the obtained absorption spectra, the noise is suppressed well below the shot-noise limit. Thus, our broadband multichannel sub-shot-noise absorption spectroscopy yields absorption spectra in a wide wavelength range comparable to the one obtained with conventional absorption spectroscopy, while the noise level is suppressed below the fundamental limit of the conventional method. Using the developed method, we further show how the noise suppression realized in this study enhances the precision of the identification and quantification of the chemical species in a sample, which are two primary aims of performing absorption spectroscopy measurements.

Results

Principle of operation. As the light source for the sub-shot-noise absorption measurements in this study, entangled photon pairs in the visible wavelength are used, which can be generated via the spontaneous parametric down-conversion (SPDC) process 3,4. In this SPDC process, ultraviolet pump light is introduced into a nonlinear crystal such as β-barium borate (BBO), and a high-energy ultraviolet photon is split into two paired photons with lower energy. The entangled photon pairs generated in this manner have three key properties that are utilized to realize broadband multichannel sub-shot-noise absorption spectroscopy. First, as is clear from the name, all the photons exist as pairs, meaning that every photon is accompanied by a partner photon. Thus, if we split each photon pair into two, we can have two groups of photons containing exactly the same number of photons without any statistical uncertainty. It is thus possible to beat the shot-noise limit in absorption spectroscopy by using one group of photons for the sample measurement and the other group for the reference measurement. This is the principle of noise suppression utilized in this study. The achievable degree of noise suppression here is determined by the loss of photons in the measurement, because it destroys the balance of the number of photons in the two groups. Among the various loss sources, the absorption of photons by the sample is unavoidable in absorption spectroscopy, and it sets the ultimate limit to the degree of noise suppression achievable by this method. This ultimate noise suppression over the shot-noise limit (Eq. (3)) is given in terms of the absorbance A of the sample as (see Supplementary Note 6 for the derivation)

\delta A / \delta A_{\mathrm{SN}} = \sqrt{ \frac{1 - 10^{-A}}{1 + 10^{-A}} }.    (4)

Second, the emission directions of the paired photons are correlated because of the momentum conservation in the SPDC process:

\mathbf{k}_p = \mathbf{k}_s + \mathbf{k}_i.    (5)

Here, k_p is the wave vector of the ultraviolet pump light, and k_s and k_i are the wave vectors of the generated paired photons, usually denoted as signal and idler. With the Type-I phase-matching condition, this momentum conservation results in ring-shaped emission of the photon pairs (corresponding experimental data will be shown in Fig. 2a), where the photons in the upper half of the ring are paired with those in the lower half of the ring. Thus, it is possible to split the paired photons into two by separating the ring into the upper half and the lower half with the use of, for example, a carefully positioned rectangular mirror that reflects only the upper half of the ring.
These two groups of photons obtained in this manner contain exactly the same number of photons, as described in the previous paragraph. Third, the paired photons have correlated frequencies because of the energy conservation in the SPDC process:

\omega_p = \omega_s + \omega_i.    (6)

Here, ω_p is the (angular) frequency of the ultraviolet pump light, and ω_s and ω_i are those of the generated paired photons. Since ω_p is determined by the ultraviolet laser used in the experiment, a down-converted photon with a certain frequency ω has a partner photon whose frequency is uniquely determined to be ω_p − ω. Thus, when the sample and reference spectra are recorded with frequency-resolved detection, the absorbance at a frequency ω can be obtained by normalizing the number of photons at ω in the sample measurement by the number of photons at ω_p − ω in the reference measurement. By performing this analysis for various values of ω, the absorbance can be evaluated simultaneously at multiple wavelengths at the sub-shot-noise level, i.e., broadband multichannel sub-shot-noise absorption spectroscopy can be realized. We note that the frequency of the photons in the sample measurement (ω) and that in the reference measurement used for the normalization (ω_p − ω) are in general different, except at the degenerate frequency ω_p/2. This is in sharp contrast to conventional absorption spectroscopy, where a sample measurement performed at a certain frequency is normalized by a reference measurement performed at the identical frequency.
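The pairing principle described above, together with the bound of Eq. (4), can be checked with a toy Monte Carlo model. In the sketch below, every generated pair is assumed to be split perfectly into the two arms and all detection losses are ignored, so the only imbalance between the arms comes from absorption in the sample, modelled as binomial loss; these are idealising assumptions for the illustration, not the experimental conditions.

```python
# Toy Monte Carlo of the noise-suppression principle: every generated pair
# contributes exactly one photon to the reference arm and one to the sample
# arm, so the only imbalance comes from sample absorption (binomial loss).
import numpy as np

rng = np.random.default_rng(2)
n_pairs_mean = 1.0e6          # mean number of pairs per exposure (invented)
A_true = 0.010                # sample absorbance, 10 mOD
T = 10.0 ** (-A_true)         # transmittance of the sample

pairs = rng.poisson(n_pairs_mean, 100_000)   # pairs generated per exposure
N_R = pairs                                  # reference arm: one photon per pair
N_S = rng.binomial(pairs, T)                 # sample arm: binomial absorption
A = np.log10(N_R / N_S)                      # Eq. (1) per exposure

dA = A.std()                                                          # Eq. (2)
dA_SN = (1 / np.log(10)) * np.sqrt(1 / N_S.mean() + 1 / N_R.mean())   # Eq. (3)
bound = np.sqrt((1 - T) / (1 + T))                                    # Eq. (4)
print(dA / dA_SN, bound)   # ratio approaches the Eq. (4) bound (~0.11 at 10 mOD)
```

In this idealised model the simulated noise ratio reproduces the Eq. (4) bound; adding detection loss (further binomial thinning on both arms) pushes the ratio back towards unity, which is why the experiment takes care to minimize photon loss.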
Experimental setup. As an experimental realization of broadband multichannel sub-shot-noise absorption spectroscopy described in the previous section, we constructed the setup shown in Fig. 1. The light source is a deep ultraviolet continuous wave (cw) laser providing 266 nm light, and the beam is focused into a BBO crystal to generate entangled photon pairs with the Type-I SPDC process. The resultant ring-shaped emission is separated into two by reflecting only the upper half of the ring using a rectangular mirror. Then, the upper half is sent to a sample cell containing a sample solution, and the lower half is used as a reference. These two portions of the down-converted photons are spectrally dispersed by a prism and are imaged onto a charge-coupled device (CCD) camera, eventually yielding a sample spectrum and a reference spectrum. In this setup, we took care to minimize the loss of photon pairs, because the loss of photons in the system substantially degrades the noise suppression based on the entanglement of photons, as mentioned in the previous section and discussed thoroughly in Supplementary Note 5. The detection efficiency of the photon pairs in this setup is estimated to be approximately 70%. (Due to the wavelength dependence of the quantum efficiency of the CCD camera, we expect the detection efficiency to vary by ~5% between 450 nm and 650 nm, with the maximum efficiency located at ~530 nm.)

Characterization of the entangled photon pairs. Figure 2a shows the emission pattern of the photon pairs generated by the Type-I SPDC process (see Supplementary Note 1 for the details of the measurement). As mentioned already, it has a ring-shaped emission pattern due to the momentum conservation in the SPDC process (Eq. (5)), where the paired photons appear on the opposite sides of the ring. After splitting the emission into the lower and upper halves of the ring, we measured the spectra of each part of the emission as shown in Fig. 2b. The green curve corresponds to the upper half of the ring (used for the sample measurement), and the black curve to the lower half (used for the reference measurement). The spectra show that the photons have a broad bandwidth spanning from 400 nm to 650 nm. Because the red side of the spectrum is clipped by the finite size of the optics in the detection path, the true bandwidth is even broader. The exposure time for obtaining these spectra was 375 msec, which was limited by the saturation level of the CCD shown with the horizontal broken line in the figure. The total photon detection rate in the spectral range between 425 nm and 625 nm was 3.4 × 10^7 photons/sec for the sample measurement and 3.9 × 10^7 photons/sec for the reference measurement. All the absorption measurements reported in this paper were performed under this experimental condition. Before performing absorption measurements, we first measured the temporal correlation between the photons in the sample and reference paths to confirm that they are indeed paired. We sent them to two avalanche photodiodes (APDs) connected to a time-correlated single photon counting (TCSPC) board (see Supplementary Note 1 for the details of the measurement) and measured their intensity correlation. In this measurement, the photon detection rate on each APD was attenuated to approximately 2 × 10^3 photons/sec by using a very low ultraviolet laser power of 1 μW to generate entangled photon pairs (see Supplementary Note 7 for the same measurement performed at different ultraviolet laser powers). The obtained g^{(2)} (second-order coherence; g^{(2)}(\Delta t) = \langle I(t) I(t + \Delta t) \rangle / \langle I(t) \rangle^2, where I(t) is the intensity of light at time t) exhibits a large coincident count at Δt = 0, as shown in Fig. 2c, which indicates that two photons are very frequently detected simultaneously by the two APDs. This result experimentally proves that the paired photons have been successfully split into the two paths. We have further confirmed the correlation between the photons in the two paths by evaluating the degree of correlation (also known as the noise reduction factor), defined as 10,14,15

\text{Degree of correlation} = \frac{\mathrm{Var}\!\left( N_S(\omega) - N_R(\omega_p - \omega) \right)}{\langle N_S(\omega) \rangle + \langle N_R(\omega_p - \omega) \rangle}.    (7)

Here, N_S(ω) and N_R(ω) are the number of photons with the frequency ω in the sample and reference paths, respectively, and Var(N_S(ω) − N_R(ω_p − ω)) is the variance of N_S(ω) − N_R(ω_p − ω). We note that N_R is evaluated at the frequency ω_p − ω because of the frequency correlation of paired photons shown in Eq. (6). The degree of correlation defined in this manner is equal to or larger than unity when there is no correlation between the photons in the sample and reference paths, whereas it approaches zero as the correlation between the two becomes stronger. We have determined the degree of correlation in our experiment by repeatedly measuring the sample and reference spectra such as the spectra shown in Fig. 2b. The obtained result is shown in Fig. 2d. The degree of correlation below 1 is a clear indication that the photons in the sample path at the frequency ω are indeed correlated with those in the reference path at the frequency ω_p − ω.

Fig. 2 Characterization of the entangled photon pairs. a Emission pattern of the photon pairs generated by the SPDC process. b Spectrum of the photons sent to the reference path (black curve) and those to the sample path (green curve). The sample used in this measurement was neat DMSO (no absorption within this spectral range). The horizontal broken line indicates the saturation level of the CCD camera. c g^{(2)} curve showing the temporal correlation of the photons in the reference and the sample paths. The curve is normalized so that the value becomes unity at sufficiently large Δt, where no photon correlations are expected. d Degree of correlation defined by Eq. (7), which shows the correlation of photons in the sample and reference paths at each wavelength.
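Given a pair of frequency-resolved spectra like those in Fig. 2b, the bookkeeping implied by Eq. (6) amounts to normalizing each sample channel by the reference channel mirrored about the degenerate frequency. The sketch below illustrates this with placeholder arrays; the channel grid (chosen symmetric about ω_p/2) and the count values are assumptions for the example, not the instrument's actual calibration.

```python
# Sketch of the multichannel normalization implied by Eq. (6): the sample
# count at frequency w is normalized by the reference count at w_p - w,
# i.e. by the reference spectrum mirrored about the degenerate frequency.
# Spectra and axis values here are placeholders, not measured data.
import numpy as np

w_p = 2 * np.pi * 3.0e8 / 266e-9           # pump angular frequency (266 nm pump)
w = np.linspace(0.4 * w_p, 0.6 * w_p, 84)  # detection channels, symmetric about w_p/2

N_S = np.full(84, 1.0e5)                   # sample spectrum (placeholder counts)
N_R = np.full(84, 1.0e5)                   # reference spectrum (placeholder counts)

# With a grid symmetric about w_p/2, the partner channel of index i is the
# mirrored index n-1-i, so the reference array is simply reversed.
A = np.log10(N_R[::-1] / N_S)              # A(w) = log10( N_R(w_p - w) / N_S(w) )
```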
Sub-shot-noise absorption measurements. With the correlation between the photons in the sample and reference paths established, we utilized them to measure absorption spectra of a 518 nM rhodamine 6G (R6G) solution in dimethyl sulfoxide (DMSO). The measurement was repeated 1000 times to examine the noise contained in the measured spectra. For comparison, we also obtained conventional absorption spectra using the same experimental data set by calculating absorbance from the sample and reference signals detected in different exposures (see Supplementary Notes 2 and 3 for the details of the analysis procedure). In this way, we were able to obtain conventional and sub-shot-noise absorption spectra under exactly the same experimental condition, including the number of photons used in the measurements, which allowed us to have a fair comparison between the two. Figure 3a and b show the obtained conventional and sub-shot-noise absorption spectra, respectively. The two sets of absorption spectra show essentially the same spectral feature, that is, both of them show an absorption maximum at 538 nm (see also the averaged spectrum in Fig. 3e). However, a careful inspection of the spectra reveals that the sub-shot-noise spectra in Fig. 3b contain considerably less noise than the conventional counterpart in Fig. 3a. This can be confirmed in Fig. 3c and d, where the peak absorbance at 538 nm in the 1000 conventional (Fig. 3c) and sub-shot-noise (Fig. 3d) spectra are plotted with their histograms. The noise δA evaluated from Eq. (2) is shown in each figure, which quantitatively shows that the noise in the sub-shot-noise absorption spectrum at 538 nm is 32% less than the conventional counterpart. In exactly the same manner, the noise at other wavelengths is evaluated using the experimental data in Fig. 3a and b. The noise obtained in this manner was normalized by the shot noise δA_SN evaluated from Fig. 2b using Eq. (3), and the resultant normalized noise δA/δA_SN was plotted in Fig. 3f. As expected, the noise in the conventional absorption spectra (black curve in Fig. 3f) is approximately at the shot-noise limit. On the other hand, the noise in the sub-shot-noise absorption spectra (green curve in Fig. 3f) is as much as 30% below the shot-noise limit in a broad bandwidth spanning from ~450 to ~650 nm. This accords well with a theoretical prediction that takes account of the total photon detection efficiency of ~70% (see Supplementary Note 5 for the result of the numerical simulations). The obtained result proves that broadband multichannel sub-shot-noise absorption spectroscopy is successfully realized in our experiment. We note that the theoretical bound in Eq. (4) predicts noise reduction by 89% below the shot-noise limit (when A = 10 mOD) or even 97% below the shot-noise limit (when A = 1 mOD) in the case of perfect detection efficiency. Therefore, in principle, we have a lot of room for further suppression of the noise by ameliorating the detection efficiency.

Fig. 3 caption (panels e and f): e Averaged spectrum obtained from the 1000 sub-shot-noise absorption spectra in b. f Noise δA contained in the measured absorption spectra at each wavelength, normalized by the shot noise (δA_SN in Eq. (3)). The black curve was obtained from a and the green curve from b. The horizontal broken line corresponds to the shot-noise limit.

Superresolution concentration measurement.
To demonstrate how the noise reduction using entangled photon pairs is beneficial for absorption spectroscopy, we applied broadband multichannel sub-shot-noise absorption spectroscopy to the identification and quantification of chemical species in highly diluted solutions. As an exemplary case, we chose R6G and thiazole orange (TO) in DMSO. As shown in Fig. 4a, R6G and TO can be clearly distinguished from each other because of their spectral difference. Furthermore, the concentration of each species can be determined from the intensity of the absorption, provided that the relationship between the absorbance and the concentration is known a priori. Figure 4b plots the peak absorbance of R6G and TO in highly diluted solutions with known concentrations (28, 70, 140, and 518 nM for the R6G solutions and 65, 183, 338, and 954 nM for the TO solutions). The measured absorbance clearly shows a linear relationship to the concentration, as expected. Using the quantitative spectral information provided in Fig. 4, we determined the concentrations of R6G and TO in binary mixture solutions based on absorption spectra, assuming that there are no specific intermolecular interactions among R6G and TO molecules. Although there are several ways to analyze absorption spectra with multiple spectral components, we performed the analysis using deep learning, where the spectral input is processed by an artificial neural network comprising two layers of densely connected neurons. We used the measured absorbance values at 84 wavelengths as the input and obtained the concentrations of R6G and TO as the output. As a preparation, the neural network was trained using the 8 sets of R6G and TO spectra at different concentrations corresponding to the 8 filled circles in Fig. 4b, as well as their linear combinations. (Note that each set contains 1000 spectra, as shown in Fig. 3.) Subsequently, the trained neural network was used to estimate the concentrations of R6G and TO from the absorption spectra of binary mixture solutions. The advantage of this approach is that we do not need to explicitly examine the spectral shape of each species because it is automatically taken care of by the artificial neural network. Thus, exactly the same analysis procedure is applicable to much more complicated spectra, such as vibrational spectra, as is already reported in the literature 17.
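The regression network described above is small enough to sketch directly. The following is one plausible realization consistent with the description (84 absorbance inputs, two densely connected layers, two concentration outputs); the layer width, activation, optimizer and the synthetic training data are illustrative assumptions, not the authors' settings.

```python
# Illustrative regression network consistent with the description above:
# input = absorbance at 84 wavelengths, output = (R6G, TO) concentrations.
# Layer width, activation, optimizer and the synthetic training data are
# assumptions for this sketch, not the settings used in the paper.
import numpy as np
import tensorflow as tf

n_wl = 84
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(n_wl,)),  # layer 1
    tf.keras.layers.Dense(2),                                           # layer 2 -> [c_R6G, c_TO]
])
model.compile(optimizer="adam", loss="mse")

# Synthetic training data: random linear combinations of two reference
# spectra (Beer-Lambert additivity) plus noise, mimicking training on
# single-species spectra and their linear combinations.
rng = np.random.default_rng(3)
eps_r6g = rng.random(n_wl)                 # placeholder absorption profiles
eps_to = rng.random(n_wl)
c = rng.uniform(0, 500, size=(4096, 2))    # concentrations (nM, placeholder range)
X = c[:, :1] * eps_r6g + c[:, 1:] * eps_to
X += rng.normal(0, 0.5, X.shape)           # measurement noise
model.fit(X, c, epochs=5, batch_size=64, verbose=0)

c_hat = model.predict(X[:3], verbose=0)    # estimated (R6G, TO) concentrations
```

In this setup, lower spectral noise translates directly into a tighter distribution of the estimated concentration pairs, which is the effect visualized in the 2D histograms discussed next.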
We prepared three binary mixture solutions with different concentrations of R6G and TO. The concentration combinations of R6G and TO in the three solutions were (55 nM, 254 nM), (110 nM, 127 nM), and (109 nM, 251 nM), where the former number in each parenthesis corresponds to the R6G concentration and the latter to the TO concentration. In these solutions, we made the concentration of TO higher than that of R6G because TO has a lower absorption cross-section than R6G, as can be seen from Fig. 4b. For each binary mixture solution, we measured 1000 absorption spectra (3000 spectra in total), and the concentration combination of R6G and TO was evaluated from each of them using deep learning. The 3000 estimated concentration combinations are visualized in the two-dimensional (2D) histograms shown in Fig. 5a and b (Fig. 5a was obtained from conventional absorption spectra, and Fig. 5b from sub-shot-noise absorption spectra). In these 2D histograms, the three crosses indicate the concentration combinations of the binary mixture solutions set by the sample preparation process. As shown in this figure, conventional absorption spectroscopy cannot resolve the three binary mixtures (Fig. 5a), because the precise evaluation of the spectral shape and intensity is hindered by the noise present in the spectra. This noise is shot-noise limited, as can be seen from the black curve in Fig. 3f, and hence this is the fundamental limit to the concentration and species resolution achievable with conventional absorption spectroscopy under this experimental condition. Meanwhile, sub-shot-noise absorption spectroscopy clearly resolves the three binary mixture solutions (Fig. 5b). The strength of sub-shot-noise absorption spectroscopy over the conventional approach becomes even more evident when evaluating the cross sections of the 2D histograms in Fig. 5a and b. Figure 5c shows the vertical cross sections along the blue broken lines in Fig. 5a and b. With conventional absorption spectroscopy (black curve), the two binary mixture solutions are not resolved, while sub-shot-noise absorption spectroscopy clearly resolves them (green curve). The same is true for the horizontal cross sections along the red broken lines in Fig. 5a and b (Fig. 5d). These results clearly show that sub-shot-noise absorption spectroscopy enables us to determine the concentration of each species in highly diluted solutions with a resolution beyond the fundamental limit of conventional absorption spectroscopy. In this sense, sub-shot-noise absorption spectroscopy can be said to achieve superresolution in concentration measurements. This is analogous to superresolution microscopy, where the spatial resolution is enhanced beyond the diffraction limit using various tricks such as single-molecule localization 18,19, selective de-excitation of molecules 20, deconvolution 21, or photon antibunching of the emission from quantum emitters 22. In the present study, we break the resolution limit in concentration measurements set by the shot-noise limit of conventional absorption spectroscopy by taking advantage of the photon-number correlation of entangled photon pairs. One might argue that the noise in conventional measurements could be made smaller by using a brighter classical light source, because Eq. (3) shows that the shot noise δA_SN decreases as we increase the number of photons used in the sample (N_S) and reference (N_R) measurements. We emphasize that this is not possible in our measurements, because we have already accumulated photons up to the saturation level of our CCD, as can be seen from Fig. 2b. Thus, the availability of a brighter classical light source does not enable conventional measurements to achieve a lower noise level. This restriction arising from the saturation could be avoided by allowing a more frequent readout of the CCD and subsequently averaging the resultant multiple spectra, but this possibility is not considered in the present study.

Discussion
In this study, we developed broadband sub-shot-noise absorption spectroscopy with multichannel detection using entangled photon pairs as the light source. The noise in the measured absorption spectra was suppressed by as much as 30% below the shot-noise limit, which is the fundamental limit of conventional absorption spectroscopy using classical light.
By taking advantage of this suppressed noise, we demonstrated superresolution measurements of the concentrations of chemical species in highly diluted binary mixture solutions. The sub-shot-noise absorption spectroscopy developed in this study proves particularly useful when the signal-to-noise ratio of the absorption spectra cannot be improved by simply increasing the number of incident photons in a measurement. There are a number of cases in which this situation actually arises. The first is when the number of incident photons is already limited by the saturation of the detector. This applies to the measurement reported in this study, as can be seen in Fig. 2b. The second is when the measurement needs to be performed within a very short time. In flow cytometry, for instance, a sample quickly passes through the observation volume, and the measurement must be performed within a limited time 23. The third is when the sample is easily damaged by photoirradiation 24. Absorption spectroscopy involves an electronic excitation of the sample molecules, and the chance for those molecules to undergo irreversible chemical reactions increases when they are repeatedly excited by intense incident light. The fourth is when the excited-state lifetime of the molecules is long. Since absorption occurs only when the molecules are in the ground state, the absorption saturates when the excitation rate becomes comparable to the relaxation rate from the excited state to the ground state 25. These considerations show that there are various cases in which noise suppression is the only means of improving the signal-to-noise ratio. We thus envision that the broadband multichannel sub-shot-noise absorption spectroscopy developed in this study will find application in various experiments as a unique technique for improving the signal-to-noise ratio beyond the fundamental limit of conventional absorption measurements.

Methods
Sample. Dimethyl sulfoxide (Special Grade) was purchased from Wako Chemicals. Rhodamine 6G (purity 99%) and Thiazole Orange (purity ~90%) were purchased from Sigma-Aldrich. All reagents were used as received, without further purification.

Broadband sub-shot-noise absorption spectrometer. The experimental setup is schematically shown in Fig. 1. A deep-ultraviolet cw laser at 266 nm (Coherent, Azure) was used as the light source for generating entangled photon pairs. After passing through a half-wave plate and a Glan-Taylor prism, the vertically polarized deep-ultraviolet light was focused into a BBO crystal (θ = 44.3 deg, ϕ = 0 deg, thickness = 0.5 mm) using a lens (f = 300 mm), in which horizontally polarized photon pairs were generated by the Type-I SPDC process. The typical laser power at the BBO position was 2 mW. (Higher laser power resulted in increased noise in the absorption measurements and eventually caused damage to the BBO crystal.) Since the photon pairs are emitted in a ring-shaped pattern, they can be separated from the ultraviolet pump light using a dielectric mirror with a hole at the center (hole diameter = 5 mm), through which the pump light is discarded. The photon pairs reflected by this mirror were then collimated by an antireflection (AR)-coated achromatic lens (f = 100 mm). In order to split the paired photons into two, the upper half of the collimated beam was reflected by a rectangular dielectric mirror. The vertical position of this mirror was precisely adjusted using a micrometer-controlled translation stage.
The reflected portion of the beam was sent to the sample cell and used for the sample measurement, whereas the lower half of the beam, which passed below the mirror, was used for the reference measurement. Subsequently, both the upper and lower halves of the beam were spectrally dispersed in a prism made of flint glass (F2). A prism was chosen as the dispersive optic, instead of a grating, in order to minimize the loss of photons during the spectral dispersion. The reflection loss at the prism surface was also minimized by choosing the photon-pair polarization to be horizontal. Finally, using an AR-coated achromatic lens (f = 150 mm), the two spectrally dispersed beams were imaged onto different positions on a thermoelectrically cooled charge-coupled device (CCD) camera (Princeton Instruments/Acton, PhotonMAX 512B), yielding the sample and reference spectra. The readout speed was set at 5 MHz, and the on-chip multiplication gain of the CCD camera was disabled to minimize the noise. Under this condition, the readout noise was 9.90 photons rms for each pixel, and 1 count on the CCD camera corresponded to the detection of 0.78 photons. The sample solutions were placed in a sample cell and used for the broadband multichannel sub-shot-noise absorption measurements. The sample cell was specially designed to minimize the reflection loss of the incident photons. It consisted of two fused silica windows with AR coating on the two outer surfaces. The reflection loss at the two inner surfaces was minimized by using a solvent that is index-matched to the fused silica windows, i.e., DMSO. The thickness of the sample solution was set to 2 mm by placing a spacer between the two fused silica windows.

Spectral analysis based on deep learning. The deep-learning spectral analysis was performed using TensorFlow with the Keras frontend 26,27. Each of our experimental absorption spectra in the spectral range between 460 and 570 nm consisted of 84 data points, and these 84 absorbance values were used as the input to the artificial neural network. The absorption spectra were normalized beforehand using a common normalization factor so that the maximum absorbance in the entire data set becomes one. This input was fed into a sequential model comprising two densely connected layers. The first layer consisted of 64 neurons with the sigmoid activation function, and the second layer consisted of two neurons without an activation function (linear activation). The output of the two neurons in the second layer corresponded directly to the concentrations of R6G and TO. We trained this neural network using the absorption spectra of the 28, 70, 140, and 518 nM R6G solutions and the 65, 183, 338, and 954 nM TO solutions, whose peak intensities are plotted in Fig. 4b with filled circles. Each solution was measured 1000 times with an exposure time of 375 ms, yielding 8000 absorption spectra in total. In order to augment the amount of training data, we took linear combinations of every two data sets. That is to say, writing the original absorption spectra as s_1, s_2, ..., s_8, we artificially generated spectra expressed by c·s_i + √(1 − c²)·s_j with i, j = 1, 2, ..., 8 and c = 1/11, 2/11, ..., 10/11. In this way, we prepared 288,000 spectra in total to train our neural network. After this training procedure, the concentrations of R6G and TO were estimated from an absorption spectrum by feeding it into the trained neural network.
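The following is a minimal TensorFlow/Keras sketch of the network and augmentation scheme just described. The architecture and the augmentation formula follow the text; the optimizer, loss, training settings, and the i < j pairing (which reproduces the stated total of 288,000 spectra) are our assumptions, not taken from the paper.

```python
import numpy as np
import tensorflow as tf

# Two densely connected layers, as described above: 84 normalized
# absorbance values in, (R6G, TO) concentrations out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(84,)),
    tf.keras.layers.Dense(64, activation="sigmoid"),
    tf.keras.layers.Dense(2, activation="linear"),
])
# Optimizer and loss are illustrative assumptions, not stated in the text.
model.compile(optimizer="adam", loss="mse")


def augment(sets, concs):
    """Linear-combination augmentation: c*s_i + sqrt(1 - c^2)*s_j.

    `sets` is a list of 8 arrays of shape (1000, 84), one per calibration
    solution; `concs` is the matching list of (R6G, TO) concentration
    pairs. Labeling each combined spectrum with the same linear
    combination of the concentrations is our assumption, consistent with
    the linearity in Fig. 4b. Pairing i < j reproduces the stated total:
    (8 originals + 28 pairs x 10 values of c) x 1000 = 288,000 spectra.
    """
    xs = [np.asarray(s, dtype=float) for s in sets]
    ys = [np.tile(np.asarray(p, dtype=float), (len(s), 1))
          for s, p in zip(sets, concs)]
    n = len(sets)
    for c in np.arange(1, 11) / 11.0:
        w = np.sqrt(1.0 - c ** 2)
        for i in range(n):
            for j in range(i + 1, n):
                xs.append(c * xs[i] + w * xs[j])
                ys.append(c * ys[i] + w * ys[j])
    return np.concatenate(xs), np.concatenate(ys)


# Hypothetical usage with calibration data loaded elsewhere:
# x, y = augment(calibration_sets, calibration_concs)
# model.fit(x, y, epochs=10, batch_size=256)  # illustrative settings
```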
Data availability
The data that support the findings of this study are available from the corresponding author upon request.
Pintxos 2: small delicacies & chance encounters

Jacques Derrida wrote in Glas that “The glue of chance creates sense” and he was almost correct. It is, in fact, the toothpick of reading and writing – taken in their most expansive senses – that connects one chance event to another, that binds together, however briefly, the volatility of events. Chance and art: the pleasures of sensuous sensibilities, the distinctions of the conceptual ticles.

As Walter Benjamin observed with those Argos-eyes of his that moved simultaneously in a thousand different directions across scales from the microscopic to the cosmic: "the interior is not just the universe but also the étui of the private individual. To dwell means to leave traces. In the interior these are accentuated. Coverlets and antimacassars, cases and containers are devised in abundance. . .". 2 Aletheia: hiding and revealing, covering and uncovering, (un)veiling. Antimacassars and draperies. (Both parties of this analogy would have despised the comparison.) Benjamin goes on about these remarkably revelatory tiny jewel-boxes: "The destructive character is the enemy of the étui-man. The étui-man looks for comfort and the case (Gehäuse) is its quintessence. The inside of the case is the velvet-lined trace that he has imprinted in the world. The destructive character obliterates even the traces of destruction". 3 Traces are left as memories and clues of a past presence, still present as a softly indented quasi-readability. If we are extremely lucky and talented, and our étui has been, against the odds, preserved, there will be others who will come - for a short time - in order to read the slivers we have left behind upon our departure. There are those, and they are many, who wish to destroy all the traces, even the traces of destruction. For example, they exhume a body that has been shot and then burn it into ash, scattering that ash to the winds. They want no legibility at all, no memory or anticipation, but the small bejeweled case gives an odd sense of comfort. Gehäuse. Encased, but as a form of being-inhoused, of dwelling as a form of the longing for comfort. The étui is a home, an infinite cabinet of curiosity, a swirl of universes. Reading such signs, as Benjamin intimately knew, is the task of a detective. The first section of "The Sign of the Four," the second Sherlock Holmes story, is entitled "The Science of Deduction" and it begins like this: "Sherlock Holmes took his bottle from the corner of the mantel-piece and his hypodermic syringe from its neat morocco case. With his long, white, nervous fingers he adjusted the delicate needle, and rolled back his left shirt-cuff. For some little time his eyes rested thoughtfully upon the sinewy forearm and wrist all dotted and scarred with innumerable puncture-marks. Finally he thrust the sharp point home, pressed down the tiny piston, and sank back into the velvet-lined arm-chair with a long sigh of satisfaction" (1). There is the "neat morocco case" and the "velvet-lined arm-chair," both receiving imprints, and the "long sigh of satisfaction." It's almost like writing, isn't it? Watson, wanting to know more - to know how Holmes's science of deduction, necessarily connected with but not identical to the perspicacity of his observational skills, actually works - ruminates to his friend: "I have heard you say that it is difficult for a man to have any object in daily use without leaving the impress of his individuality upon it in such a way that a trained observer might read it.
Now, I have here a watch which has recently come into my possession. Would you have the kindness to let me have an opinion upon the character or habits of the late owner?" It is now a watch-case in question, but again a case upon the surfaces of which someone (indeed a host of others) has left the "impress of his individuality" - an imprinting, a series of indentations and pin-pricks - that is readable by the astute observer to come. Holmes to Watson to the Reader (always hypothetical). A circulation catches a cogwheel and gets underway. Lethargically, Holmes responds to Watson that he "cannot live without brain-work. What else is there to live for? Stand at the window here. Was ever such a dreary, dismal, unprofitable world? See how the yellow fog swirls down the street and drifts across the dun-coloured houses. What could be more hopelessly prosaic and material? What is the use of having powers, doctor, when one has no field upon which to exert them? Crime is commonplace, existence is commonplace, and no qualities save those which are commonplace have any function upon earth." Brain-work, perhaps what is called philosophy, is the only antidote for the banal commonplace that is existence, at least for this addictive deducer named Holmes, who is also a writer of specialist tracts on types of ash, the impressions of foot-prints, and the influence of various trades on the hand. Holmes-Watson, a necessary dyad, is a function of writing that combines the observational, deductive logic, and storytelling that, contextualizing the scenario, moves the deductions along. Conan Doyle is yet another positionality in this circuitry of polylogos that is reading, writing, watching, thinking, and, always, a sharing. We invest shares of value each time we make an imprint, an impress, with the insertion of a needle, the light tap of a key, or the stroke of a pen. We are always tracing, leaving scat and scenting our way through the world. We carry the étui on our back, snail-like, but eventually it no longer fits and we cast it aside for another, larger habitation. It's a gamble, this investment strategy of reading and moving along, but it keeps us on our toes.

Baulücke I
The gap between buildings, empty space whispering for our attention, unobtrusively, as potential for what-might-come. Spacing emptiness: waiting as the whisper of an invitation. "Surfaces can function as linearities and lines can cooperate in surfaces, and holes can exist at all scales. Everything between the dimensions is materialized. . . There is no randomness: there is only variation. The truly amazing feature of this system is that it is in fact structured by holes" (Spuybroek 10). This is the "structure of vagueness," which, as it becomes more precisely determined, also produces a complementary vagueness that awaits.

Quinsy
An illness of the tonsils, from which Montaigne died on September 13, 1592. Was his last silent sentence "I am dying of quinsy"? Will our last sentence be declarative, imperative, or interrogative? We won't know. A final inhalation, a rattle.

Sheep livers and the language of the birds
Speaking of the Essayist, Montaigne remarked sardonically in "On Prognostication" that "As for those who understand the language of birds and learn more from the liver of a beast than from their own thought, they should be heard, I think, rather than heeded."
It's an interesting question, this relationship between observing, reading as interpretation of signs, the concept of evidence in different cultural and historical scenarios, and "one's own thought." Evidence is not self-evident, although it is usually presented as if it is. Evidence is always a construction - individual, collective, and institutional - in need of interpretation. It is always, that is, an interpretation in need of another series of interpretations that comes to a halt when a decision is rendered. Time to move on. This is philosophy's "hermeneutic circle," a circle that is not vicious but, rather, one that is necessary and inescapable. Thinking about evidence always remains within this circle - although perhaps this circle wobbles, twists, or spirals into dimensionality and fractals. If we were to try to escape this circle, which we will always wish for, we would be forced into an infinite regress toward first principles and a discourse of origins. We do not have time for this detour. This interpretive task, unending, is not, needless to say, a form of skepticism that says "we can know nothing," but, rather, a condition for knowing what we consider the "true" or the "real." Thinking "our own thought" or "thinking through evidence" raises the question of the evidence for learning by either what we traditionally call the "teacher" or what we traditionally call the "student," both of which are inadequate to the situation of learning. Nonetheless, these approximations will have to do. The marshalling of evidence of any sort is the art of reading the world and of giving reasons - redde rationem, or to "give an account of oneself" (cf. the Gospel of Luke 16.2) - in a particularly designated domain of action: divination, experiments, exams, sports, and lovers' quarrels. If there is evidence offered that demonstrates - another interesting word - that learning has occurred, we must already understand what we mean by evidence. What, however, is the evidence for evidence? Where does this process begin or end? And how might we articulate different types of evidence at work in the entangled transmission called learning in a philosophical or artistic sense? Where else would we begin but with sheep's livers? It is haruspicy, hepatoscopy, astrology, and harpedonaptes (Serres on the Egyptian Geometers). Or, perhaps, with poetry?

since feeling is first
who pays any attention
to the syntax of things
will never wholly kiss you;
wholly to be a fool
while Spring is in the world

my blood approves,
and kisses are a better fate
than wisdom

Is this evidence? Does a Twombly or a Kiefer offer "evidence"? If so, of what sort and about what? All evidence involves a gap - actually many gaps - between the "evidence" and the "object of evidence" toward which the evidence points in order to bind the object ever more tightly toward the "truth" of what is being affirmed, negated, proven, demonstrated. (And we are close to the monstrous here.) This gap can never be closed, for then there is evidence of nothing at all, but only a concrete and faceless block of the real that stands as self-sufficiently self-evident. This, though, never happens. All attempts to articulate matters-of-fact and states-of-affairs are always within the movement of a more encompassing event. Learning is an event and every event engages and (re)distributes a temporalized spatiality, a spatialized temporality, all of which is incessantly, and without interruption, in motion, the movement of worlding itself.
Learning is learning to ride these crosscurrents with insight, flexibility, appositeness, tenacity, and - although only very occasionally - with a graceful and powerful beauty. As Heidegger has reminded us in What Calls for Thinking?, there is never any teaching except when teaching becomes a "letting-learn." Teaching can construct contexts, platforms, scenarios, and situations in which learning may occur - it almost certainly will in one form or another - but teaching does not directly cause learning. In other words, teaching and learning is not a linear function that is duplicable over time and in different spaces. It is not a controlled or controllable experiment. One series of images for the evidentiary situation that we are seeking to clarify is that of interlacing, as in an arabesque, a Möbius strip or a Klein Bottle, a Celtic knot, the tying of a pair of shoes, or the game of Cat's Cradle: string figures, games, artworks, and mathematical puzzles. Twists in thought. Evidence is here entangled in an infinite pattern - the hermeneutic circle warped - rather than being a linear and causal function, a finite scenario of successful learning, or, God forbid, a decimalized score that looks precise but which is actually both hilarious and nonsensical. The circle is broken and this enables manifestation of the complexity of shape-shifting forms of evidence. There is pattern, but this pattern - fractals, archipelagos, unfinished rugs - always leaves loose ends, other threads to pull upon, and evidentiary strands to extend. . .

The Crito
Philosophy is ventriloquism with a twist. It echoes, vibrates, resonates. Speaks-forth via the vehicles of echo-machines. At the end of the Phaedo Plato tells us, through the dead letter of his writing, that Socrates' last words were ô Krítōn, éphē, tôi Asklēpiôi opheílomen alektruóna; allà apódote kaì mē amelēsēte. Crito asks a follow-up question, but there is only silence. And, yet, here we are: saying the same things over and over again. The silence, somehow, is built into a vast machine of writing and reading. Let's plug it in and see how it runs? Each and every one of us, in absolute solitude and in absolute togetherness, comes to this inescapable edge, has always been, at each instant, appearing at this precipitous edge of disappearance. This is a radically different edge than all other edges. An "edge," after all, divides one region from another; we have absolutely no idea about the regionality of death, for this remains not a region - although it has of course often been imagined as just that, with its own underworld or overworld geography - but a blankness. Absolute opacity. "Edge" arrives at the tail end of a long and circuitous history that takes us back to the Proto-Indo-European root ak-, 4 to "be sharp, rise (out) to a point, pierce." We are pierced, punctuated, and punctured by our own dying and the dying of all that-is. Coming-and-going. Full-stop. Each day we edge toward and away from the inescapability of erasure, but none of our sacrifices and none of our ruses work. Nothing we do can give us an edge over this edge. We do, though, owe roosters and money to others, including Asklepios, the son of Apollo and some beautiful princess or another. He was, along with Achilles and a number of other notables, raised by Chiron, the most sublimated of the centaurs, who exchanged his immortality for mortality and was then rewarded by Zeus by being set into the heavens as a constellation.
There is a great deal more to be said, but we would soon have to begin practicing chiromancy and the infinite art of reading hands. Crito and Socrates - or their masks and mimes - await our return. We are, always, in mourning ahead of time. (What can such nonsense possibly mean: ahead of time?) And, yet, Socrates comes to the edge where he has always stood with a magnanimous equanimity, an incessant reflective curiosity, and even a kind of liberatory joy. He remembers something, right at the end as his body is growing cold from the hemlock, and projects a request to a friend to keep a promise forward into the future. It is a debt, payable by a cock, that he owes Asklepios, the god of healing. At the moment of death, presumably unhealability itself, Socrates reminds his friend to make an offering to the god of healing on his behalf. He owes him something and wants to fulfil the debt, through the intermediary of a friend, just before he goes. Then Socrates took hold of his own feet and legs, saying that when the poison reaches his heart, then he will be gone. He was beginning to get cold around the abdomen. Then he uncovered his face, for he had covered himself up, and said - this was the last thing he uttered - "Crito, I owe the sacrifice of a rooster to Asklepios; will you pay that debt and not neglect to do so?" "I will make it so," said Crito, "and, tell me, is there anything else?" When Crito asked this question, no answer came back anymore from Socrates. 5 This is the last moment of dia-logos in the Phaedo, but it had begun earlier in the Crito. The last day is divisible; the logos is always at least two. As is so often the case, Socrates opens the drama with what looks to be the simplest of questions: "Why have you come at this hour, Crito? Or isn't it still early?" (43a). It's just before dawn and Socrates' death is imminent, but there is still time, just a bit of time but perhaps enough time, to talk, to think once more together, to remember the gods. It is the time when night is giving way to the day, when the sun's fingers are first casting an initially pale luminescence from over the distant horizon. What time is it? What is your reason for coming this early? Socrates is awake and time is on the move. A timing, a rhythm, of languaging, reasoning, questioning, and the morning light of the last day. Socrates recounts his dream of a woman in white and its connections to Achilles in Homer's Iliad; they speak of money, friends, and value; and then discuss whether Socrates should escape from prison and go into exile, still alive even if not able to live out his days in Athens. Crito is urging his friend to extend the time of his days on the earth and Socrates is encouraging him to think about all the implications of such a decision. Everything must be done by the coming night, for the ship of Apollo is on its way from Delos and its arrival in Piraeus is the signal for execution of the condemned. Crito is in a rushed and anxious panic, overcome with fear and grief. Socrates, bringing his friend Crito along with him, slowly turns through the labyrinth of the argument, the pros and cons of an escape, and what gives value to existence, concluding that "it's not living that should be our priority, but living well" (48a). For his part, Crito keeps saying the same things over and over again - this is bad repetition - and Socrates, as usual, returns to the beginning to measure the adequacy of the starting point. "I think it is most important to act with your consent and not against your will.
See, then, that the starting point of the inquiry is laid down to your satisfaction and try to answer the questions in the way you think best" (49e). We are in the midst of an argument at the very edge of things about the finalities of things, rotations within rotations, but becoming differentiated as we move along with Plato-Socrates-Crito through the labyrinth of reasoning. What happens if, instead of following a string back out, we enter into the mouth of the Minotaur? And then, suddenly, the Laws of Athens appear as if in person, as if to speak on their own behalf. The voice of Plato is split as Socrates divides himself into new roles. Everyone is masked and multiple and we are in the theatre of Dionysos. The Laws are ventriloquized: If the laws and the community of the city came to us when we were about to run away from here, or whatever it should be called, and standing over us were to ask, 'Tell me, Socrates, what are you intending to do? By attempting this deed, aren't you planning to do nothing other than destroy us, the laws, and the civic community, as much as you can? Or does it seem possible to you that any city where the verdicts reached have no force but are made powerless and corrupted by private citizens could continue to exist and not be in ruins?' (50b) In this question, contemporaneous with the contemporary political moment, Socrates has become the virtual addressee of the discussion with the Laws, which, in this dramatically staged scenario, occupy the usual place of "Socrates" as the guiding questioner and clarifier, while Socrates - speaking on behalf of the Laws in the voice of the Laws - has slipped into the symbolic position of Crito (and all of the others who respond to Socrates's inquiries). "Socrates" is speaker and listener as he takes on the theatrical mask, the persona, of the Laws. Is this a tragedy, comedy, or satyr play? All three, intertwined? How do we tell the difference? And Plato, of course, is off-stage acting out all the characters in the opening act of the drama of western metaphysics. He had wanted to be a playwright and that, perhaps, is what he became as he staged the opening acts of philosophy. Speaking of repetition, Deleuze reminds us that philosophy and theatre - contra the manifest and self-contradicted trajectory of Plato - are inseparable. Kierkegaard and Nietzsche, the masters of maskings and street-corner buskings, "want to put metaphysics in motion, in action. . . they no longer reflect on the theatre in the Hegelian manner. Neither do they set up a philosophical theatre. They invent an incredible equivalent of theatre within philosophy, thereby founding simultaneously this theatre of the future and a new philosophy" (1994 DR 8). Theatre inhabits philosophy; philosophy inhabits theatre: each is a mask of the other and all masks multiply. Mise-en-abyme; mise-en-scène. But the Laws have been patiently awaiting our return. They have reminded Socrates that he has always been free to leave the city at any time that he chooses, but that "whoever remains with us, having observed how we decide lawsuits and take care of other civic matters, we claim that this man by his action has now made an agreement with us to do what we command him to do. . . although we allow him either of two possibilities: either to persuade us or to comply. . . " (52a). All of this is spoken in the conditional tense of "what-if," creating the fictive space of virtuality in which language occurs.
The Laws continue, noting that they are open to persuasion if they are deemed inadequate or unjust, but if such persuasion hasn't occurred, then the compliance of all those who have "signed" an implied civic contract through their behavior is expected. Athens, Socrates's mother and father, has birthed and nurtured him, given him a place and a series of interlocutors to natter on about gadflies, money, love, and being-beyond-being throughout his life. Why flee now, once Athens has judged him to be guilty of impiety against the gods of the city and of corrupting the young? Democracy has its dangers. We, however, might well resist this conclusion. After all, the charges were trumped up out of the merest shreds of truth, which lent their support to the revengeful autocratic use of democracy against someone that Athens, in its official political guise, had grown frustrated with. We could easily make an argument, following Plato's metaphysical trajectory, that the empirical Laws are not congruent with the supersensible idea of justice and that the latter, always dividing the former, would allow Socrates to be free to escape. Socrates, in truth, might well agree with this conclusion, but he is not willing to abandon his city, the city of his mother, father, lovers, and friends. Has the court misjudged the scenario in condemning him to death? Yes, but that is no reason to try to extricate himself from its judgment, for they are simply continuing to act out of an ignorance that believes it knows that what it does is best. Socrates understands the delusional nature of this judgment, but he sets himself along a counter-track. The time for argument is coming to a close, but the time of offerings and a continuing question are just opening up. Having put themselves on trial, the Laws rest their case and Socrates, Plato's greatest character, picks up the thread in his artificially natural voice. "Rest assured, my dear friend Crito, that this is what I seem to hear, just as the Korybantes seem to hear the pipes and, this sound, from these words, resonates within me and makes me unable to hear anything else" (54d). This is strange. The Korybantes, offspring of Thalia, the Muse of comedy and poetry, and Apollo - a figure who flows like a subterranean river throughout these dialogues - are the caretakers of both Zeus and Dionysos (but let's not get into the complexities of lineage and temporalities). They are a band of dancing ecstatics - male, armored, and associated with rites of initiation as boys moved into manhood - most likely originating from distant Phrygia on the Anatolian plateau. Socrates, who has earlier been instructed in a dream to learn music - the arts of all the muses - now turns toward another form of ecstatic listening: the pipes of the Korybantes, which resonate in his inner ear, drowning out all other sounds. The Laws are deafening and Socrates, moved to this point of ecstatic being-overcome through the patience of the contours of a dialogical argument, consents with equanimity to their decree. Of course he will not flee. Athens, his home, is the place of his birth and the place of his death. The gods and the Laws have converged and he has heard nothing from his daimonion, that quiet but imperative sign of resistance that reminds him what not to do. The offering to Asclepius and the trust of friendship, based not on merit but on a generosity of thought: that is another direction for philosophy's task.
None of Socrates' friends, including Crito, who has by the end left the scene out of grief, "deserves" his friendship, but he offers it, like the questioning conversation that is philosophy itself, free of charge.

The city as a crime scene
Walter Benjamin has famously asked: "But isn't every square inch of our cities a crime scene? Every passer-by a culprit? Isn't the task of the photographer - descendant of augurs and haruspices - to reveal guilt and to point out the guilty in his pictures?" (GS 2.1:385). Photography as a descendant of augurs and haruspices? Only Benjamin could have thought and said this. Photography, of a certain sort, reads the city and its future in its image-lives. A technoshamanism at work. How might we transport it to the next stop of the metaphorai of the city?

Black patent leather shoes
Easter Sunday in the Atlanta suburbs, circa 1960. The American South. Burroughs had just published Naked Lunch, Civil Rights and Vietnam were slowly beginning to generate heat, and Kennedy would soon announce the Peace Corps and the moon-shot. Everyone in the neighborhood was laboring with a mighty effort to become-more-American, to become-more-middle-class, to become-more-Christian. It was a small world - a good world in many ways - but it was already cross-cut and opening to the winds of change that were swirling from over all the horizons and wafting up, with a slightly acrid stench, from the underworld drift of tectonic plates. We did not know much; we did not talk much. The maids, bedecked in starched white uniforms, came from across town on the bus once a week, lined up at the corner in the afternoon to go back. I did not know much. The last Big War was still turbulently impressed on our parents and grandparents, but we did not understand that, just felt the tragedies, the invisible wrenching, the lack of understanding. More-American; More-Christian; More-Middle-Class. Flannery O'Connor was living at the Andalusia Farm in Milledgeville, on the banks of the Oconee River, and publishing The Violent Bear It Away. We all remember, of course, the Oconee from the first page of Finnegans Wake: "Sir Tristram, violer d'amores, fr'over the short sea, had passencore rearrived from North Armorica on this side the scraggy isthmus of Europe Minor to wielderfight his penisolate war: nor had topsawyer's rocks by the stream Oconee exaggerated themselse to Laurens County's gorgios while they went doublin their mumper all the time" (Joyce 1). 6 Do not be tempted by an attempt at an exegesis of this passage or you will be lost forever in the riverine circularities of the Purgatory of reading. Get Thee Behind Me! All we need to know, right now, is that the Oconee runs through both Dublin and Milledgeville. Those Sunday black patent leather shoes of my mother and sister are images of my own erratic efforts at understanding. Everything is reflected in those dark and polished surfaces, reflecting only a shimmer of light but not the determinations of specific objects. They are signs, bearers of histories, encasers (Gehäuse) of adult feet and of small feet skipping toward the unknown. We dressed up and sang hymns. I got bored, became petulant, and fell in love with the girl in the next pew. He is Risen. Then we played together in the sandbox, the shoes stored away in the closet for the next Sunday.

On genre confusion
I do not know what it means to write fiction, philosophy, novels or essays.
One aspect of this conundrum is a bad-faith desire on my part, constructed on a fear of predators, not to be seen, not to be held accountable. Simply to hide via multiple forms of camouflage: Camouflage is the use of any combination of materials, coloration, or illumination for concealment, either by making animals or objects hard to see, or by disguising them as something else. Examples include the leopard's spotted coat, the battledress of a modern soldier, and the leaf-mimic katydid's wings. A third approach, motion dazzle, confuses the observer with a conspicuous pattern, making the object visible but momentarily harder to locate. The majority of camouflage methods aim for crypsis, often through a general resemblance to the background, high contrast disruptive coloration, eliminating shadow, and countershading. In the open ocean, where there is no background, the principal methods of camouflage are transparency, silvering, and countershading, while the ability to produce light is among other things used for counter-illumination on the undersides of cephalopods such as squid. Some animals, such as chameleons and octopuses, are capable of actively changing their skin pattern and colors, whether for camouflage or for signaling. It is possible that some plants use camouflage to evade being eaten by herbivores. 7 What is all of this but the brilliance of subterfuge, the deep desire to stay alive in a hostile environment that, indifferently, is hoping to eat each of us? This desire of the other to eat us is a desire that will, without any doubt at all, be fulfilled. Evolution, that great goddess of mutability, teaches "us" - though there is no "us" being taught - to hide, to flicker, to transmutate so that we will have just one more instance of "life." Each instant, precious. Each instant, dissipating as it appears. The other will be sated on the feast that is each of us, just as the other is being-eaten as it devours us. Spinoza's conatus is set in the context of a death-drive that overpowers the desire to remain-in-being. If I keep moving fast enough - moving from essay to aphorism to vignette to the novel as a practice-ground - I will create a motion-dazzle that will hide me in the woods, unspottable. I can become-chameleon and be a professor or a poet, a taverna-owner or a diplomat of cosmopolitanism. Who knows what? Catch me if you can. Here now, gone now. Feint, weave. This discourse is often set against that of "authenticity," where we are who we are, we are transparent, we are pinnable-downable, we are for-real. Good faith is a good thing; common sense is a good sense; the common good is a good good. Most of us are suspicious of such talk, tied as it is to suspicious characters in the history of philosophy or to a metaphysics of essences, but - for any ethics to occur - there comes a point where each of us needs to say "Here I am; I am in hiding but will share my code of camouflage with those who are hunting me." The worlding of the world as the noir of movement: visibility-hiding; aletheia as Earth and World. Writing, simultaneously, demonstrates and obscures. All of this, of course, is simply obfuscation. In my own genre confusion, I am simply hiding the fact that I have little to no talent either in fiction, essay, philosophy, or critique, much less in poetry as a proper domain of its own. I love language, the move and fit of words, the mist that arises in a field in the morning or the moment when evening brings a slight shift of the winds. The fit-shiftingness-fit of words.
But I have no sense of character, plot, cadence, or the patience required of the novel or the poem. Perhaps I should simply be satisfied with one form of writing rather than try to move between genres to create new genres. That is a sensible suggestion, worth heeding. Heeding, turning in that direction, emerges from an infinite complex that includes "Old English hedan 'observe; to take care, attend, care for, protect, take charge of,' from West Germanic *hodjan (source also of Old Saxon hodian, Old Frisian hoda, Middle Dutch and Dutch hoeden, Old High German huotan, German hüten 'to guard, watch'), from PIE *kadh- 'to shelter'". 8 The pathways would wind through the dark and interminable forests of language - near the bend of a green-white rushing mountain river and through the winter woods we think we know - initially skirting the cities with their artificial lighting, although I will, by hook or crook, always return to the boulevards, mazes, and alleyways of the city. That, I suppose, is the accident of my destiny. At least for now. Or, perhaps, it's only another ruse, a way of moving through the dappled light. A step ahead, or behind, of perception.

The comet of the inner eye
Mouches volantes, flying flies, they are sometimes called, but in fact they are microscopic comets darting through the universe of the interior of the eyeball. The comets block the incoming light and thus cast shadows, and the eye sees, quite miraculously, a part of the inside of itself. The video of the floating speck, backed by networked blood-vessels and the blind spot of the optic nerve, is an expression of our fascination with how the body operates and how it comes unstitched, a minuscule part at a time. The molecules are preparing to be freed from this gross form of the mesosphere and released to the larger and the smaller. The stretched whiling that is the person that is each of us will, however slowly, snap. The long history of the eye, of visuality, and of instrumentation has been internalized, magnified, compressed.

Muses
My sisters, my brothers, my friends, my masters. There are not many, only the Nine: an infinity.

The knell of chance
"And since the question here concerns a glottic gesture," Derrida writes, "the tongue's work on (it)self, saliva is the element that also glues the unities to one another. Association is a sort of gluing contiguity, never a process of reasoning or symbolic appeal; the glue of chance makes sense, and progress is rhythmed by little jerks, gripping and suctions, patchwork tacking - in every sense and direction - and gliding penetration" (Glas, 142, second column). A surface is struck; there is a blow, however minute. A resounding vibrates, solemnly and in diminishing waves, felt. Everything glides, imperturbably, along. . .

Reference
Derrida, Jacques. Glas. Translated by John P. Leavey, Jr. and Richard Rand. Lincoln and London: University of Nebraska Press, 1986.

Obliterature
As it appears, literature erases itself. Autonomy and autoimmunity simultaneously at work. Ficticity: the irreal, the paradox of sense, the noema, the non-existent objects of St. Meinong's logic. A tale, like all tales, told by an idiot. We are all born and die idiots, speaking in our own idioms. All literature worth its salt is idiomatic, immediately recognizable by its differentiated style. Writing, as art, idiomatizes. Literature obliterates the world for the sake of the imagination, of possibility, of virtuality that creates, sometimes fleetingly and sometimes for millennia, a new earth.
None of these words approaches anything like adequacy, but we do not know how to speak about what matters most except through these expressivities. Oblivion awaits all. In the meantime, some of us write.

Virality
An invisible and potent contagion that is, perhaps, floating innocently if not innocuously through the air. The perhaps drives behavior, feelings, plans: what-if? This perhaps opens the world and turns the corners of the streets. It allays chance - I will plan for the possibility, the probability - but it simultaneously keeps determinations constrained, keeping chance on the move: whatever I do cannot converge absolutely with the real, for the real always blooms with the perhaps. Perhaps I'll go for a walk; perhaps I will write; perhaps I will die from COVID-19, my lungs inflamed, without the succor of the sweetness of air. Perhaps not. This "not" as a negation of the perhaps negates only the specific determinations of the perhaps, but not the structure of the perhaps itself, which twists and turns unpredictably, opening up unforeseen avenues of action. The perhaps is a structure of objectivity. Perhaps, which is constrained and operationalized by the historical a priori that gives us our particular set of chances, keeps the mark of the question perpetually mobile. Perpetually to be addressed. Is the virus living or non-living? That assumes that we know the difference between the two terms, even though we have only recently, as modernists, constructed this dividing line. The Virus is the figure for that which seeks to disrupt the current arrangements of Life and Nonlife by claiming that it is a difference that makes no difference, not because all is alive, vital, and potent, nor because all is inert, replicative, unmoving, dormant and endurant. Because the division of Life and Nonlife does not define or contain the Virus, it can use and ignore this division for the sole purpose of diverting the energies of arrangements of existence in order to extend itself. The Virus copies, duplicates and lies dormant even as it continually adjusts to, experiments with, and tests its circumstances. It confuses and levels the difference between Life and Nonlife while carefully taking advantage of the minutest aspect of their differentiation. (Povinelli 2016 19) The virus is living and not-living; not-living and living (and this is all connected to the related false dichotomy of the machinic and nonmachinic). It doesn't "traverse" or "bridge" that distinction, since the distinction is not stabilized in-place ahead of time; rather, the virus dissipates this structure from its origin (which is historical and not ontological). This round of the coronavirus has shown us how powerful the "sole purpose of diverting the energies of arrangements of existence in order to extend itself" can be, as well as the way in which it "experiments with, tests its circumstances." This is precisely how "we humans" also respond, stretched out along the way of living our dying and dying our living.

Starry Night in a coffee cup
How could I resist? There it was on the menu in the Lex on Belcher's Street: Van Gogh's Starry Night floating on the top layer of marvelous white cream in a creamy porcelain cup layered below with the darkness of an espresso as thick as the thickest impasto. HKD$88. The waitress said they printed the image on the screen of the cream. Brilliant! Or, they could print a picture that I sent them. Van Gogh must be killing himself again as his images become the very image of the simulacrum.
Cream as a canvas as a screen; a printer as a projector interfacing with the aromatic liquidity of the coffee; a successful capitalist lure; a stimulant that, sipped, enters the depths of my body, incorporating art as a flavored scrim of color, to be, eventually, transmogrified and excreted, sent on its way but in a form that is unrecognizable to the aesthetic eye. "Painted in June 1889, it describes the view from the east-facing window of his asylum room at Saint-Rémy-de-Provence 9, just before sunrise, with the addition of an ideal village." That is helpful, but not sufficient. There cannot, of course, ever be the "sufficient" when it comes to art: every commentary is necessary but radically insufficient. Lavender blooms wildly purple outside the creamy beige walls of the monastery. "The nocturne series was limited by the difficulties posed by painting such scenes from nature, i.e., at night. The first painting in the series was Café Terrace at Night 10, painted in Arles in early September 1888, followed by Starry Night Over the Rhône 11 later that same month." I have never stepped outside of the Terrasse du café le soir.

Nightboat
A resonant image: timbrous. The night evokes itself, always full of eddies, pulls, currents. A boat stays afloat, but only for so long and only if it knows how to minimally construct itself and navigate the currents. A plank, raft, canoe, steamer, cruiser, a floating world. Queequeg and Ishmael knew the nightboat; Ahab was blinded by the boat of white (although white manifests differently in different games of writing, such as Serres' on the white blankness of the omnivalence of hominescence). "Perhaps writing a book is a way of locating and capturing something that will never stop moving. Perhaps reading a book is a related gesture." The night is scintillant, quivers with invisibility, visibility, orbs of radiant midnight blues.

Dietrologia
"In Italian, there is a useful word, dietrologia - literally, 'behindology,' the art of deciphering the hidden meanings of things, including the most transparent of them. Forget transparence - there is always a plot" (1). In other words, there is always a stain on the mirror, a blank and black spot, a remainder outside the field of intelligible or sensible vision, which is itself a false dichotomy. We are always falling behind in our quest to see what is behind the scenes of appearances. This is solved, in a way, when - as Deleuze reminds us - we remember that there are no more appearances, but only apparitions. In another way, we cannot not look for the behind, the out-of-sight, the around-the-corner. The Wizard of Oz.

Books in the cedar tree
Looking out the kitchen window in the early evening I saw a shelf of colorful books inside the cedar on the other side of the glass. The shelf fit perfectly inside the trunk of the tree. It was quite beautiful, a striking image. The shelf was in the tree; the window was in front of me, transparent and reflective; the bookshelf was off at an angle, invisible except in the reflection in the window, the shelf inside the tree. The angles had to be exact; the light had to be exact. The singularity of multiple vision. The smooth porcelain whiteness of the inside curve of the coffee cup was covered, unevenly, with the foam of the leftover single-shot latte. A moonscape; the sea as it washes back out over the sand, leaving traces, pockmarks, foam, and tributaries. The pileated woodpecker is back, moving from place to place out in the woods, attacking the trees like a huge mechanical jackhammer.
The holes sculpted over time are beautifully shaped doors into food and home. Walking down the road this morning, I saw one banging away at a cedar. It looked like it had just been to the barber, with its red Mohawk crest, a fashion statement indeed. Very elegant. It has zygodactyl feet, two toes pointed forward and two backwards, so that it can climb vertically and hang out with great panache on a tree trunk. They say the woodpeckers have been around for 50 million years or more. Not bad. They are pterodactyls in miniature. A sun-bleached book - José Saramago's The Double - sits in the back seat of my car, every day absorbing more destructive light from the sun and every night wishing it were somewhere else, absorbing the freshwater of a lake or being read by lamplight. I'm sure the back window focuses the light and turns the pages even yellower than they would be if the book were sitting in the sun on the porch. But there is something profoundly appealing to me about that book in the back seat, exhibiting its mortality, its wear. That's what reading and writing do. They wear down, wear away, become crinkled with age, use, and the weather. Not to reveal a secret essence - there is no such thing - but to reveal the public secret of reading and writing: it must go on; it can't go on. We must take up the task again and again, and each time we take it up we are a step closer to death, our own edges curling and becoming sunstruck, the letters fading into illegibility as the paper dries and the spine cracks. The moon is a creamy thumbprint in the blue afternoon sky, visible at first through a V of tall firs on the hill and then, when I'm out on the sand-flats, high in the sky. The moon had moved quickly - whether of its own accord or moved by an invisible greater power I couldn't say - for I had only been walking for about ten minutes. The eelgrass smelled of sewage and a green pipe jutted into the beach from the houses. The sky was huge and blue, the mountains of the Olympic Peninsula blue ridges, the water a fluctuation of blues. The shingled and faded beach houses, modest but confident along the shoreline, stand under the infinite expanse of blue. The bald eagle floats down from a tall tree on the hill. The water is cold on my feet as it curls in from the Sound. I want to disappear into summer; I want to become summer. My sandals are full of wet sand. The books are all in the cedar tree, visible and untouchable, waiting to be read and never to be read. The window, the angle, the light, the shimmering reflection that is the world.

The terror of Monday
It is absolutely predictable. Sometime between 2 and 5 pm on Sunday afternoon, perhaps with the slightest premonition of the evening (but I'm not sure), a small terror drifts down upon me: Monday. I am walking through Sheung Wan, sitting on a café terrace down at Cyberport, or reading Le roman policier - or perhaps just sitting in the flat doing nothing - and it quietly strikes. A barely audible whisper of terror - something like a soft hiss - and the slightest brush of an angel's wing. There is an intake of breath, not much but enough to notice, and the weightless weight of a foreboding. Not much, just a little. It has a wing, a hand, and a voice. What does it say? "You must go back to work." My time, which I have been pretending is my own, will once again belong to another and I will step back into the role of employee, servant, slave. Since they pay me for my time, my time is their time.
I am uprooted by the trowel of temporality, just ever so slightly, and the serenity of my time that is my own is disrupted by the repossession of my time by another. (Cf. Hegel, then Marx.) It says: "You cannot get everything done; you cannot, no matter how hard you work at it, finish." It addresses me, as it addresses you, as "you." It is an impersonal personalized utterance. A universal that is singularized. Things will not be wrapped up on Monday for Monday, or on Friday for the previous week, or on the weekends for the pleasures of the weekends. There will be loose ends; things will remain undone, partial, out-of-joint. And at some indeterminable point Monday will not be the first instance of a new work-week, but the last instance of what we call "a life." My disappearance will quickly disappear, but I will, at last, no longer know Monday's terror. My time has never been my time: always and only a gift.

The gift
The gift of time, inseparable from the gift of dying and becoming-corpsed, makes us cry out, laugh, speak, snap, take pictures, write, and attempt - always insufficiently - to learn how to live before it is too late. We will, undoubtedly, fail.

Impossible sorrow
To do nothing except let the sorrow of each thing, passing, ride through me. An impossibility. The need to make marks.

Vaghezza
"Surely, it cannot be a coincidence," writes Lars Spuybroek, "that one of the modern texts in the history of aesthetics to employ the term 'grace,' Firenzuola's 1541 On the Beauty of Women, compares it to the Italian term vaghezza, which means 'vagueness,' 'charm,' as well as 'movement from place to place'" (2020 24). Vagueness can be charming as it moves along at its own pace and of its own accord. There are, for instance, the wandering spirits of Petrarch's Sonnets:

Quando Amor i begli occhi a terra inchina
E i vaghi spirti in un sospiro accoglie
Con le sue mani, e poi in voce gli scioglie
Chiara, soave, angelica, divina;
Sento far del mio cor dolce rapina. . . (II)

Spuybroek, as we have seen, has long been interested in the "structure of vagueness," a phrase that pricks our logical hackles but also triggers our fascination with the inevitable necessity of the obscure, the muzzy, the vague, the errant and the wandering. This, after all, accompanies everything we do. Vagaries have structure, but it is a structure of a strange sort, mobile and self-generating. We are all born on, and under, a wandering planet. Analyzing the "agential action" of the "wool-water machines" of Frei Otto and his team at the Institute for Lightweight Structures, Spuybroek writes that what emerges from the process is a "complex or soft rigidity. . . we should therefore resist the idea that the first stage is a rigid order and the end result is just a romantic labyrinth or a park. Actually the arabesque order of the end result is as rigid as the first stage of the grid, but much more intelligent because it optimizes between individual necessities and collective economy. Yet it is not an easily readable and clear form of order, but a vague order; it is hardly possible to distinguish between surface area, linear elements, and holes" (2005). The action of the wool-water machines is of an agential intelligence based on a constrained freedom set into motion by "three algorithms." This is the intelligence of materiality itself that forms itself as it goes and, "though the order is vague, it should nonetheless be considered very precise, because nothing is left out. There is no randomness: there is only variation" (2005).
A precise vagueness; a vague precision. We have stepped outside the fabled Cartesian grid of precise points mapped to a set of coordinates, nor are we in the fabled box of ordinary thinking. We are quite close, instead, to the Stoic-Deleuzean "paradox of the logic of sense," but not quite in a situation of equivalence. Neighborhoods; affinities. I will not attempt to list the things that wander, for lists themselves certainly wander, but suffice it to say that there is nothing that does not wander. Worlding: errancy, disinclinations, vortices. Turbulence. The vague can, on occasion, be clarified, but the clarified also then produces the next penumbra of obscurity. Chance events: clouds. Causal and casual ramifications.

Art, intimacy, poverty

Not much can be said, just a tiny morsel of an utterance that, perhaps, we are mishearing in its quiet reserve:

Art is the intimacy of poverty
Art is the poverty of intimacy
Intimacy is the art of poverty
Intimacy is the poverty of art
Poverty is the intimacy of art
Poverty is the art of intimacy

Art is a generosity that arises out of a deep sense of poverty. Emptiness as the requisite for the line of making. Every artist feels incapable of the task of art. This is not because of a personality defect or because of a miserable self-image. Nor is it, in any simple quantifiable manner, because the abundance of the world, even the simplest thing of the world, infinitely exceeds our capacity for engaging it in a medium of expression. The task of art makes demands that are impossible to fulfill. Cézanne in the thunder and the downpour, buying paints. We are poverty-stricken, stricken by the pure meagerness of what we have to give as the task of art is given to us to undertake. With the deepest passion we can bring to bear on the task, we begin to craft something - a poem, a painting, a score - and we are bound to fail. How do we live with this failure? How is this failure the very success of art, even as it undermines all the usual notions of success? We remain intimate with emptiness, with the almost-emptiness of words, color, sound. We remain as close as we can bear to the silent and invisible fire that is always consuming us. Art consumes us; it consumes itself; and, there, art remains. There, shimmering, are the remains of art. We learn, as we are able, the enigmatic, difficult gifts of poverty that demand a response of a making that fails to achieve itself. This is its glory. Poverty is simple open-handedness. Waiting, then turning the hands over and beginning, without knowing what we are doing, to make art. Poverty is a waiting with expectant lack of expectations. Then: act. But even act as a form of waiting on the edge of the crease between subject-object-world, on the edge of knowability, on the edge of appearance-vanishing. In "The Prose of the Trans-Siberian and of Little Jeanne of France," Blaise Cendrars wrote:

My poor life
This shawl
Frayed on strongboxes full of gold
I roll along with
Dream
And smoke
And the only flame in the universe
Is a poor thought. . . .

Nowhere

"The only flame of the universe/Is a poor thought." What a thought, what a flame. We are stricken with poverty, with incapacity and impossibility, and this is enough. Ashes to ashes, but in the dash of the between, the poverty of poetry, the poverty of thought. The gift that lifts a voice. A slow flame: all is fire.
Gray space

Speaking about Marcel Duchamp's innovations and looking ahead into the then next century, David Bowie insisted that "What the piece of art is about is the gray space in the middle. That gray space in the middle is what the 21st century is going to be about." This would be "exhilarating" and "terrifying," "absolute fragmentation." Here we are.

BBC Archive. "David Bowie: Internet is the new rock n roll." Newsnight | Classic Celebrity Interview. https://www.youtube.com/watch?v=tLf6KZmJyrA

On becoming writing

"It," thankfully, will outlast "me." When I first set out to become-a-writer, I did not know, not quite, what that might entail. I still don't, not quite, and I am still working at becoming-writer. Feeling my way in the dark - through my fingers, my feet, my nose, my eyes, my ears, and my tongue - along the log-strewn edge of a northern beach, through the thick loam of the forest of firs, across a trackless desert where my feet and my heart become unbearably heavy, always in a certain darkness with an occasional flash of something or the trembling vibration of a thread. Experimenting, failing to experiment. Stuttering, stammering. Trying-out. I was, first of all, a reader who felt an obligation to contribute to that enigmatic pleasure that kneaded distances and opened possible worlds. I tried poetry but have little sense of a visual imagination and a tin-ear for music. (Ask my wife, whom I met at a poetry reading, so at least all was not lost.) I tried novels, but I have no sense at all of plot or character, which I have heard are important for the craft of the novelist. I tried, even earlier, fictionalizing an encounter with Descartes, and something about that has stuck. Odd, what sticks. A sentence here, a sentence there; a word there and a word here. Here and there never stay-put, they are never set-into-place. Perhaps a paragraph, maybe even a page. That's about all I can hope for, and the obliteration of all obliterature is certain. This will be, in large part, an act of ventriloquism in which I parrot the codes that the world has given me, at least the ones that I have been able to pick up in small shreds, some with what seems like blurred letters that have run together across all the surfaces of the Earth. My hope is that a tiny line of the codes bequeathed by ventriloquism might be inflected with an accent, partially the long vowels from Tennessee and Georgia, partially inflected by bits-and-pieces of French and German, a few poorly pronounced syllables of Cantonese, and the infinite worlds of the lost languages that pinpoint my location by their immense absence that is nonetheless in the closest proximity to what I am able to say and how I can say it. Style, with its rhythms, is the essential task. The about recedes before the singing and stuttering of style. Language is not an enclosure and it is not essentially a machine for representing the world outside of itself in a harmonious accordance. Nothing is contained "in" language. Languaging is a flowing forth in all directions that sweeps us along, a torrential force that almost always goes unnoticed and is absolutely modest in its willingness to stay off-stage, obscured and quiet. It is (in)human, operating at every scale and at every con-junction and every dis-junction, every criss-crossing of every expressivity. (These occur at the same instant as the instant appears and dissipates instantly.) With the time I have left - today, tomorrow, or the day after tomorrow - I will scribble across the page in as many ways as I can.
I will accentuate with an accent, a style. This - along with my attempts at generosity - is what I ask to be judged for.
Direct Provision, Rights and Everyday Life for Asylum Seekers in Ireland during COVID-19

This article considers the impact of COVID-19 on international protection applicants in the Irish asylum system. It presents a critical reflection on the failings of direct provision and how the experience of COVID-19 has further heightened the issues at stake for asylum seekers and refugees living in Ireland. In Ireland, international protection applicants are detained in a system of institutionalized living called direct provision, where they must remain until they receive status. Direct provision centres offer substandard accommodation and are often overcrowded. During the pandemic, many asylum seekers could not effectively socially isolate, and many centres experienced COVID-19 outbreaks. This article examines these experiences and joins a community of scholars calling for the urgent end to the system of direct provision.

Introduction

Protest, hunger strikes, COVID-19 outbreaks. In the midst of Ireland's first lockdown (March 2020), a group of asylum seekers living in an over-crowded direct provision centre in one of the more rural parts of Ireland experienced an outbreak of COVID-19. Together with local townspeople, they stand outside the hotel serving as a direct provision centre, one of many that have been poorly repurposed as full-time residences for international protection applicants. They have been sent to the rural Southwest, Co. Kerry, from other direct provision centres, including in the capital city Dublin, due to overcrowding, so as to ensure the possibility of social distancing. This is a weak government attempt to address the concerns of those living in direct provision regarding the pandemic; in total, approximately 600 international protection applicants are moved to different centres, with 105 being sent to Co. Kerry (Gusciute 2020). The protestors hold placards demanding that the hotel be closed with immediate effect. Some residents state emphatically that they will go on hunger strike should this not happen. The residents of the centre have the support of the townspeople, and a local radio station captures their voices, their complaints, even their fear. The protest is widely broadcast and written about in a number of different news outlets. In spite of their combined protests, over 25% of the residents in this particular centre get COVID-19 (Gusciute 2020) and the Government's response comes slowly. Direct provision is a highly commercialized system of institutionalized living (Fanning and Veale 2004; Gusciute 2020; Lentin 2020; McGuirk and Pine 2020) for individuals or families seeking international protection (asylum seekers) awaiting the outcome of the determination of their refugee status claim (O'Reilly 2018). While for the most part asylum seekers have freedom of movement during the day and children attend local schools, the system is one which closely resembles other forms of asylum detainment. Direct provision was only ever meant as a temporary measure (in 2000) but still exists over 20 years later in spite of repeated public protest and calls to end the system (#enddirectprovision) (Breen 2008). The pandemic has further heightened the extant issues within a weak, overcrowded system and, since March 2020, a number of direct provision centres have experienced COVID-19 outbreaks (IRC 2020). Carlo Caduff (2020) reminds us that the measure of a society during a pandemic is its response to it.
Globally, there has been great mismanagement of COVID-19, with rolling lockdowns putting massive pressure on people's everyday existence. In an Irish context, very strict lockdowns have impacted individuals in many different parts of society, but direct provision residents stand out as having had to endure much suffering during these very harsh restrictions (Gusciute 2020). This article thus takes the form of a critical reflection on the experience of asylum seekers during the pandemic in the Irish asylum system, known as direct provision. It constructs the Irish asylum system as a site of warehousing and alienation. The main focus herein is, therefore, to highlight how the broad failings of direct provision have become further heightened by the impact of COVID-19. This has impacted the psychosocial well-being of asylum seekers in multiple and overlapping ways. In conjunction with this, the ways in which a dynamic of blame and risk in society at large played out regarding fluctuations in COVID-19 numbers brought renewed attention to "crowded" places, such as direct provision, in both negative and positive ways. Widespread media attention has also centred on how challenging life has been in direct provision during the pandemic. This has subsequently mapped onto a well-organised solidarity movement that has been working for many years to have direct provision ended. As such, calls to end direct provision from many different sectors of the Irish public have hardened, culminating in the issuing of a very promising government white paper (February 2021) committing to the end of direct provision by 2024. The COVID-19 pandemic, or indeed what we should more accurately approach as a syndemic, to draw on medical anthropologist Merrill Singer's 1990s work (Singer 2000; Meyer et al. 2011; Willen et al. 2017), has heightened the impact of racial capitalism, social exclusion and precarity across the globe. Bordering practices (Balibar 2010) anchored in notions of risk, fear and exclusion now criss-cross all of our daily realities (some more than others) with even more fervour. Questions of citizenship and fervent nationalisms are on the rise. We are all living this pandemic, but unevenly so, and the ignition of an alacritous rebordering within the EU region and elsewhere has had a direct and very grave impact on asylum seekers and their right to seek international protection. Many feel that the geographical position of the island of Ireland has been underexploited with respect to managing the pandemic. Frequent comparisons have been made between the island of Ireland and New Zealand, with calls for a #zerocovid response and mandated hotel quarantine to be adopted akin to that of Australia and New Zealand. However, with BREXIT, political trust between the Irish and UK jurisdictions is low, and the politics of division on the island of Ireland, between North and South, has become heightened (Heenan 2021; Matthews 2021). Additionally, cross-border coordination and collaboration with respect to managing the pandemic has been weak, in spite of the signing of a Memorandum of Agreement in April 2020 by both jurisdictions agreeing to find solutions to managing the pandemic in a unified and co-ordinated manner (Heenan 2021). However, such efforts remain to be seen as largely tokenistic, with no real policy or practice change forthcoming (at least not by the time of writing).
The crisis of borders on the island of Ireland, while not only symptomatic of the pandemic, is not a crisis that is happening in isolation. Rebordering practices are visible everywhere, with a failure of cross-border approaches apparent throughout a number of regions in the EU (Rumford 2006; Fanning 2019; Chetail 2020; Ní Ghráinne 2020) and some countries, such as Australia and New Zealand, closing their borders (a response many say is effective with respect to managing the virus). This is an intersection of medical and political borders (Ticktin 2020a, 2020b) which, while touted as being the best way to protect a country from the spread of the pandemic, has in fact also contributed to increased securitization and restriction of movement (and a foregrounding of citizenship). This is particularly so for international protection applicants held up or detained at such borders (Ní Ghráinne 2020). It is thus asylum seekers and refugees that bear the brunt of these border closures, with many being delayed at frontiers (and even returned from closed borders), thus breaching the principles of international refugee protection law and of refoulement (Ní Ghráinne 2020). Within a number of states, asylum seekers have been subjected to lengthy delays in having their international protection applications processed, and in an Irish context, as one example, the deleterious conditions of our asylum system have been further exposed. At the heart of this is state inaction and a limited sense of who is considered worthy to "protect" or indeed "vulnerable". As such, there are widespread concerns that the principle of non-refoulement is not being respected, which, as the conceptual pillar of international refugee law, prevents a signatory state from forcing an individual seeking international protection into repatriation (Chetail 2020; Ní Ghráinne 2020). The closure of borders also signposts a failure of cross-border co-operation around matters of global health (particularly visible in Europe). A core element here is how the rhetoric of war and terrorism has been drawn on to speak about the pandemic. This is done in a manner which deflects from what Povinelli (2020) calls the "quintessential terrorist," that is, the form of late liberal capitalism responsible for the global inequities of which asylum seekers and refugees experience the blunt force. The ongoing debate about direct provision and the very public campaign to have this system abolished sits within many of these broader global issues, particularly during the pandemic (Mfaco 2021). However, within national boundaries in Ireland, it has been decried not only because it is a deleterious system of warehousing but also because many believe it is a continuation of historical logics of containment which lead from Catholic-run industrial schools, Magdalene laundries, and mother and baby homes to direct provision (Loyal and Quilley 2016). In 2021, at the time of writing, there is great momentum and appetite, particularly post the marriage equality referendum (2015) and the repeal of the Eighth Amendment (abortion referendum, 2018), to redefine the edges of Irishness (Browne et al. 2018; Mullally 2018; Carregal-Romero 2019). Therein, it is clear that there is no place for an asylum system resembling direct provision.

Methods and Scope

As an anthropologist of displacement, I have a longitudinal and sustained engagement with individuals and families who have experienced life in direct provision.
I have been working on the broad issue of asylum and refuge since 2009 in the Republic of Ireland (Maguire and Murphy 2014; Murphy 2019) and have a particular interest in the deleterious impact of asylum systems such as direct provision on individuals' everyday life experience and settlement in Ireland. It is this ethnographic engagement that informs my ongoing critique of direct provision in my scholarly and applied work. Much of my work has concerned itself with the psycho-social impact of asylum systems and its entanglements with loss and trauma for individuals and families seeking international protection in a number of different contexts, such as Northern Ireland (Murphy and Vieten 2017), France and Turkey (Murphy and Chatzipanagiotidou 2020). This comparative scope has proven interesting in terms of providing a broad view of how different asylum systems work. The Irish asylum system, however, has always stood out as particularly deleterious: asylum seekers are essentially warehoused and alienated from mainstream Irish society, thereby engendering multiple layers of harm and producing a failed politics of refuge. This article is thus formed and informed by such longitudinal ethnographic engagements coupled with a secondary analysis of key research reports (conducted by the Irish Refugee Council during the pandemic), existing and emerging white papers and policy, activist groups' reporting and newspaper articles (March 2020-January 2021). This secondary analysis has been particularly key during a strict and sustained lockdown in Ireland, where continuing with face-to-face ethnographic engagement has not been possible, with the exception of phone calls or WhatsApp conversations. The voices and experiences that I include in this article are reliant on data collected by the Irish Refugee Council in 2020. This emergent data highlights how challenging life has been (and continues to be) for asylum seekers in direct provision. In spite of a changing political narrative with the publication of a white paper on direct provision that promises to abolish the system by 2024, the pandemic continues to put immense strain on international protection applicants. The vaccination rollout strategy, which started in January 2021, has been controversially blighted by supply chain issues. Further, it does not consider asylum seekers living in these overcrowded and harmful conditions as part of the priority population for vaccination. The long-term harm engendered by direct provision and the experience of living therein during the COVID-19 pandemic remains to be fully seen but will need to be properly measured and addressed. As I write from the middle of another lockdown (January 2021), however, the urgency of documenting and responding to the needs of international protection applicants in direct provision remains essential.

History of Direct Provision

There are now approximately 7700 international protection applicants living in different direct provision centres across Ireland (Ní Raghallaigh and Thornton 2017; Thornton 2020; Thornton et al. 2020). Direct provision centres are located in a range of different kinds of accommodation, such as former hotels, a holiday centre, and a convent (to name but a few). Rarely are these accommodation centres repurposed correctly for long-term living, and there are widespread reports about the poor living conditions that asylum seekers experience (IRC 2020).
Reports of over-crowding are frequent, and at the beginning of the pandemic (March 2020), the Irish government moved some 600 people into additional accommodation, such as hotels and bed and breakfasts (IRC 2020). Residents of direct provision receive their food, accommodation and a weekly allowance of EUR 38.80 for adults and EUR 29.80 for children (there is no entitlement to other social welfare support (Gusciute 2020)). The origins of direct provision are by now well-documented (too numerous to list in full, but see Thornton 2014a; Nedeljković 2018a, 2018b; Khambule 2019). Widespread criticism has been directed towards direct provision (from the outset) by NGOs and activists because it was in essence a violation of international protection principles (coupled with questions over direct provision's compliance with national law at the time). However, there has been (until recently) little government interest in finding a solution, with the state narrative around direct provision being largely positive (Lentin 2020). Direct provision was meant to be an emergency, temporary solution, with international protection applicants spending no longer than 6 months in the system. It was implemented in April 2000, after the Irish state believed there was an increase in the number of people seeking international protection in Ireland. Asylum seekers would from that point be placed in a system of institutionalized living (full board) and would no longer receive social welfare but instead get a weekly allowance (at that time, 15 Irish pounds per adult and 7.50 Irish pounds per child). Up until the year 2000, emergency accommodation was being utilized for the first two months of an applicant's arrival. Additionally, applicants were given the standard weekly social welfare allowance. Subsequent to this, applicants would move out of this emergency setting into housing whilst awaiting the outcome of their application (Ní Chiosáin 2019). By late 1999, influenced by the introduction in the UK of a policy of dispersal for asylum seekers and citing concerns about the availability and cost of housing in Dublin, the Irish state introduced its own policy of dispersal, and hence the system of direct provision was born (Fanning et al. 2000; Loyal and Quilley 2016, 2018; Lentin 2020). Many critics claim that the Irish state was too concerned with the UK criticisms that Ireland was a backdoor to the UK (because of the Irish border and the 1998 Good Friday Agreement) and that direct provision was a whiplash response. In 2004, following years of political hysteria regarding asylum numbers, a citizenship referendum was held. The outcome of this was the amendment of the Irish constitution in order to limit the constitutional right to Irish citizenship of individuals born on the island of Ireland to the children of Irish citizens (Lentin 2020). Direct provision is a highly commercialised asylum system (Hewson 2020; McGuirk and Pine 2020), and many of its failings emanate from a lack of oversight by the Irish state (even though it insists that it conducts regular inspections). From the outset, it was affixed to a project of outsourcing by the Department of Justice through a process of buying or leasing hotels, guesthouses and other kinds of accommodation centres (Hewson 2020).
Asylum seekers living in direct provision by and large do not have the ability to cook their own food (though this has been addressed in a number of centres following the McMahon report in 2015), they cannot apply for driving licenses and, until 2018, could not work (Kerwin 2013; O'Reilly 2018). Given the often very rural location of many of these centres, all of these issues are very challenging. In 2003, when the EU introduced a directive to lay down minimum standards for asylum reception, Ireland was one of only 2 of the 27 member states to opt out, allowing the Irish state to continue with the ad hoc, deleterious and inconsistent system of direct provision. However, and in spite of the widespread documenting of its failings, direct provision has continued for 20 years on a non-statutory basis with few changes - the bedrock of government inaction. Even a ten-year appraisal in 2010 by the Reception and Integration Agency did not result in any positive changes. Instead, it saw cost-cutting measures introduced, including the closing down of particular centres and the relocation of residents, resulting in a further diminishment of the system. Later challenges to the system included the UN Committee for the Elimination of Racial Discrimination condemning the system for its negative impact on asylum seekers' welfare in 2011 and, in 2014, a mother and son housed in direct provision bringing forth a challenge against the system, citing it as inhumane and degrading. The Irish high court rejected the case in spite of a number of studies pointing to the impact of direct provision on asylum seekers' well-being. The year 2014 also saw a series of resident protests in a number of direct provision centres and the Irish Times Lives in Limbo series, which squarely highlighted the egregious nature of the system. Ultimately, some of these events led to the then FG/Labour government setting up a Working Group, which led to the publication in 2015 of the McMahon report. While welcomed, the report's some 173 recommended changes were only implemented in a selective and partial manner (Thornton 2014b, 2014c, 2020). The recommendations included the issue of improving living standards, payments, the right to work and access to education. The issue of reducing lengthy waiting times in direct provision centres to have an application processed was also addressed. The European Communities (Reception Conditions) Regulations 2018 came into effect in Ireland on 30 June 2018; this allowed access to the labour market for international protection applicants who, nine months after lodging their protection application, had yet to receive a recommendation. MASI (Movement of Asylum Seekers Ireland) have campaigned to have what they see as a limited right to work expanded, citing a range of factors, such as lack of access to driving licenses, challenges opening bank accounts, and the remote location of many direct provision centres, as posing intersecting barriers to the ability to seek and find work (Khambule 2019). Indeed, 2019 figures suggest that roughly 3500 international applicants have been granted the right to work, with only half of those actually entering the workforce. Some improvements have been made in terms of educational access at third level, with seven Irish universities now having attained the status of University of Sanctuary, thereby providing scholarships and/or tuition waivers to a small number of asylum seekers (Murphy 2020).
In December 2020, in spite of the challenges of the pandemic, the Day report (2020) was published. This report and its recommendations insist on abolishing the current direct provision system. It calls for a system where individuals seeking international protection are processed within three months whilst waiting in a reception centre. In that time, they should be issued with a social security number (PPS) and, on having their application processed successfully, should be assisted in finding own-door accommodation. The right to work is also recommended to be granted after three months. In February 2021, the government published a white paper in response to the Day recommendations, with hopes that it takes seriously the task of abolishing the system of direct provision by 2024. This white paper has been met with a mixed response by direct provision residents and activists (Mfaco 2021).

COVID-19 in Ireland and Response to Direct Provision

The COVID-19 response in Ireland has been much like that of many other jurisdictions, one of ebb and flow, managed by the tyranny of numbers with daily reporting of COVID-19 results and deaths (Drążkiewicz 2020). At the time of writing (January 2021), there have been three lockdowns, and vaccination rollout began in January 2021. Mask-wearing and social distancing have all been mandated, as elsewhere. Since January 2021, the Irish government has come under widespread attack due to its handling of the pandemic response over the Christmas season (2020) and a subsequent spiral of numbers and deaths in the weeks thereafter (due to new variants and general reopening). In April 2020, a Joint Statement from the Department of Justice and Equality and the Health Service Executive on "the Measures to Protect Direct Provision Residents during COVID-19" was made (Gusciute 2020). It stated that social distancing and protection measures had been put in place in direct provision centres and that residents were subject to the same public health measures as the general public. However, widespread reports from residents in direct provision showed that many of these measures were in fact impossible to follow, largely due to overcrowding (IRC 2020). As a response to this, some 600 direct provision residents were moved out of these overcrowded centres, but as the opening scene of this article highlights (with the experience of asylum seekers moved to the Skellig Hotel in Co. Kerry), many of those who were moved did not experience this as a solution, but rather as a further failed attempt to manage direct provision centres in the midst of the pandemic. The Irish Refugee Council (henceforth IRC) has conducted extensive research on the experience of asylum seekers in direct provision during the pandemic (2020). A number of other activist and NGO organisations have also collected reports on how the pandemic has impacted direct provision residents. In spite of the fact that there were some attempts to address issues of overcrowding, with health workers being allowed to move out into their own accommodation, there is nonetheless strong evidence that asylum seekers in Ireland have been failed during the COVID-19 outbreak. Entitled "Powerless", the Irish Refugee Council report published in August 2020 presents research conducted with 418 people living in 63 different direct provision and emergency centres.
The report assesses the mental health, stigma and racism, children, schooling and parenting of individuals and families living in direct provision during the pandemic. In doing so, it presents a statistical overview but also, importantly, allows space for the voices of direct provision residents to openly express their concerns in each of these individual areas. Overcrowding, a lack of safety and an inability to meet public health measures are foregrounded in the report. In total, 55% of respondents felt unsafe during the pandemic and 50% of respondents were unable to socially distance themselves from other residents during the pandemic (IRC 2020, pp. 9-10). One respondent says:

Plenty [of] adults and children living under the same roof, people share a lot [of] facilities that may not allow proper social distancing. If one person gets infected it will be hard to control the spread no matter the measures taken. (IRC 2020, p. 19)

While some attempts were made to address this issue of overcrowding, they were, as I have noted at the beginning of this piece, a failure. The lack of safety and security due to overcrowding has been compounded by the fact that Ireland has experienced two extended school closures (March-September 2020 and January-April 2021).

I'm in the room with a colleague. Unfortunately, he is a kind of person [who] seems to have problems with emotional control. He can't stand still. During the day he goes out more than 15 times [and] can open the door more than twenty times a day and goes down in five laps. I try to stay at home and in my room to try to protect myself and protect him too. Unfortunately, the other side does not cooperate so it's difficult to find security. (IRC 2020, p. 18)

My experience is so saddening. [There are] 22 Covid cases here. We cry out to be moved for safety in vain. I am still living in an infected room for my roommate tested positive of Covid. The local residents are scared of us we are in total lock down and not safe. I am always in a state of fear. (IRC 2020, p. 39)

I think living in a Direct Provision Center for a long time is cruel and very frustrating having a family . . . It is worse in Corona Virus Time sharing the same kitchen breathing the same air in the tiny space with more than 30 people is insane. A father I thank the Irish government for all support but [it] is time in this situation to act more responsible with people sharing, the virus can spread quickly. (IRC 2020, p. 41)

Homeschooling using technologies such as Zoom, Loom and Seesaw (and it must be noted that the rollout of homeschooling in Ireland has been ad hoc and unequal in provision) means children living in direct provision need access to devices, a Wi-Fi connection, and, of course, space to do their daily schoolwork. In overcrowded centres, where families often share a single room, this has proven challenging, as direct provision residents have noted:

Now our children do not go to school and this is a problem for us, they do not receive education and cannot study remotely because we do not have the opportunity to do so. It is impossible to organize training in one room where there are 4 people in a locked room. (IRC 2020, p. 47)

If I could be moved to a place where both me and my kids can be in one house as my son is sharing a room with a stranger, it's so difficult for him. We need a place where we can cook for ourselves, my kids struggle with the food cooked in the hotel. (IRC 2020, p. 19)
School provides a welcome reprieve for many children from the challenges of living in confined direct provision spaces, so its absence has meant significant disruption to the daily rhythm of their lives and to the chance to sustain a friendship network outside of the centre, and of course, ultimately for those unable to engage with online education, a significant educational lag. Given that many children seeking international protection have already experienced a pronounced educational lag due to their complex personal histories (many fleeing conflict and having undertaken long journeys to Ireland), in addition to needing to adjust to a new schooling system (and often language) in Ireland, the absence of schooling during the pandemic is likely to have a significant impact on this cohort of children (IRC 2020). One of the most persistent and significant critiques of direct provision across the years is that it has a direct, negative impact on mental health. Broadly, asylum systems have a significant psycho-social impact and act as aggressive post-migration stressors (Murphy and Vieten 2020), but systems of containment such as direct provision engender harm in a more pronounced way. One direct provision resident articulates it as follows:

Since I have come into Direct Provision, it has been not easy at all, very stressful. I came here for protection and am traumatised here as well. For now, I don't have pieces of [my] mind, I feel like am no longer needed in this world. I don't want to go anywhere, sometimes I feel like maybe am I dreaming? I am losing my mind here, keeping us for a long time without any answer (from the minister of justice), every day I am living in fear. (IRC 2020, p. 41)

These feelings can be particularly acute in children who spend many years (some their entire childhoods) in direct provision (Fanning et al. 2001; Fanning 2004; Fanning and Veale 2004; Ní Raghallaigh and Thornton 2017; Micha et al. 2018; Zhou 2020). A range of studies evidence that particular elements of the asylum system, such as protracted waiting in spaces of containment or detention, long processing times and adversarial legal processes for protection applications, lack of access to work or education, food poverty, and stigma, all combine to attenuate the mental health and well-being of international protection applicants (Murphy and Vieten 2017, 2020). There have been a number of suicides in direct provision. During the pandemic, asylum seekers have turned to protest and, indeed, hunger strike to forcefully make their point about the deleterious impact of direct provision on their mental health and well-being. The IRC report highlights how increased stigma, due to consistent media reporting of overcrowding and COVID-19 outbreaks, has led to racism and further alienation in some communities. One respondent states:

I do not want to send my child to school here, we had a bad experience while the community rejected us saying 'covid people'. [Threw us] out of the supermarket and told us not to come out of the building. It's a stigma on us to continue here. (IRC 2020, p. 38)

They are locked in a single room for over 3 weeks now. They are going bananas. Mental health is compromised. Showing signs of distress living in one room. (IRC 2020, p. 51)

The intersections of trust/mistrust and risk have characterised so much of the pandemic experience (for everyone, albeit in different ways); for residents in direct provision, this has become more graphically inscribed on their everyday.
While media scrutiny of direct provision is necessary and much welcomed, it has also fuelled deepened resentment and fear in some communities towards residents in direct provision over the course of the pandemic. Direct provision centres and meat factories (in which some DP residents and migrants work) have been continually pinpointed as points of COVID-19 outbreaks, particularly in a number of rural areas (Co. Kildare as one example). The exclusionary tactics of this kind of asylum system firmly demarcate asylum seekers as "alien" or "outside" mainstream society. During the pandemic, as sites of warehousing, direct provision centres have become further entangled with a politics of exclusion and "danger" (or risk). At the same time, it has awakened many in the general population to the abhorrent conditions that asylum seekers in Ireland face on a daily basis.

Mapping New Solidarities

Direct provision has long been the subject of activist and scholarly opprobrium. In particular, the work of asylum seeker and refugee activists, such as MASI - the Movement of Asylum Seekers Ireland - stands out, but there are also many other groups and activist/advocate scholars who have worked hard to ensure that the system's deleterious impact is public knowledge. Vukašin Nedeljković's activist-photographic work on direct provision sites is a striking example of placing direct provision in the spotlight by creating an archive of images that graphically condemns this asylum system (Nedeljković 2018b). A collaborative work (2019) edited by Jessica Traynor and Stephen Rea likewise calls for an end to the system of direct provision in Ireland. Creative solidarity movements around food, which draw attention to the conditions in direct provision, have been hugely successful (Murphy 2019). Media attention coupled with celebrity activism on the matter has also further cemented this overt understanding of direct provision as an egregious asylum system. In spite of the emergence of this new political community of solidarity formed by activists, poets, writers, musicians, actors, film-makers, journalists and academics, the Irish government remained stalwart in its resistance towards ending direct provision until recent months (2021). The publication of a government white paper in February 2021, however, has indicated a willingness to end direct provision by 2024, and this has renewed hope amongst international protection applicants and support/solidarity circles and networks. In more recent years, the general public has become broadly aware of the substandard living conditions in direct provision. The pandemic has further accelerated this awareness, but this has manifested in complex ways, often increasing the stigma associated with living in these centres. It has laid bare, however, what Charlotte Brives calls the "multiplicity of the virus" and the "extremely variable" experience of the virus, anchored in class, gender, race, living arrangements and the politics and health policies of different nation-states (Brives 2020). While Ireland largely escaped the significant rise of the far right that swept across Europe from 2015, it nonetheless has a small and growing faction of far-right extremists (now further fuelled by COVID-19 conspiracies) who have targeted direct provision centres. As with elsewhere, the intersection of virus conspiracy theorists, anti-maskers and the far right has seen an escalation in some forms of protest, primarily during the more restrictive lockdowns.
Both the capital city Dublin and Ireland's second city Cork have seen protests of this kind in 2020 and 2021. Nonetheless, broadly in Ireland during the pandemic, there have been many visible examples of mutual aid, solidarity and community building, in spite of so many of us having to remain within our own holding spaces during lockdown periods. While such widespread and very public scrutiny of direct provision points to the urgent need to find a remedy to a system which so easily dehumanises, instead of reform it provoked the social media surveillance of those condemning the system. In 2020, Sian Cowman and Ken Foxe (members of Refugee and Migrant Solidarity Ireland (RAMSI)) requested access to documents which revealed that the Department of Justice and Equality had directed its Transparency Unit to review social media tweets about direct provision. While government monitoring of social media is certainly not new (Trottier and Fuchs 2015), it appears to have increased during the pandemic, and the particular focus on direct provision has ignited much ire. Irish singer Hozier was one of many Irish celebrities whose tweets had been captured in the monitoring report:

As the Dept of Justice is monitoring social media posts about Direct Provision, they no-doubt have read countless accounts of dreadful experiences from those suffering within this exploitative, for profit system. I'd encourage them to use these considerable resources to end it (Tweet from Hozier, 16 August 2020)

The monitoring of social media regarding direct provision suggests a very strong awareness on the part of the Irish Department of Justice of the appetite for reform. Given the historical trajectories drawn between Ireland's history of containment, with industrial schools, mother and baby homes, and Magdalene laundries, it is clear that, in our digital age, there is no room for public secrets of this kind. Further, this very public act of witnessing and condemnation on social media, coupled with increased discontent with the Irish government's approach to the pandemic, paints a stark picture of a more generalised malaise which sees the intersections of failed governance through multiple lenses. The publication of the Mother and Baby home report in January 2021, following a lengthy commission of inquiry into the experience of single mothers and their children in predominantly Catholic institutions, has been met with public outcry. Primarily, it is clear that the experience of these mothers and children has not been accurately represented in this report. Furthermore, many critics have argued that the commission of inquiry was not properly conducted, with much acrimony over the way in which the testimonies of survivors were taken and then destroyed. Critics of direct provision have established clear connections between the logics of containment and secrecy that engendered institutions like the mother and baby homes and, indeed, in 2021, their continued misrepresentation. The same approach has essentially been applied to the creation and maintenance of the Irish asylum system, which clearly follows similar logics. The privatisation of the system, coupled with the rural location of direct provision and the active silencing of direct provision residents, aided the increased invisibility of the system for too long, now no longer possible through accelerated activism and critical/scholarly attention.

Conclusions

Life in direct provision has been further complicated by the COVID-19 pandemic.
Individuals and families have found the following of public health guidelines next to impossible in the overcrowded, substandard living conditions imposed by the Irish asylum system. COVID-19 outbreaks have happened in a number of centres, and many individuals have reported feeling a lack of security or trust in their places of residency. Home schooling in these conditions is challenging (or impossible), and children and their parents have suffered immensely trying to navigate the absence of school for extended periods in 2020 and 2021. The psycho-social impact of the pandemic on international protection applicants is thus likely to be significant, and the Irish state has done little except for addressing (inadequately) some of the overcrowding issues. Protest, hunger strikes, fear and stigma have come to define the experience of direct provision residents during the COVID-19 pandemic. This article joins a strong body of extant literature on direct provision which advocates for the abolition of this harsh, dehumanising asylum system. As with other activists and scholars, I have considered herein how direct provision serves as a cipher for rebordering practices, by pushing international protection applicants into the margins of Irish society through a system of warehousing. As a highly commercialised asylum system with direct provision centres dispersed in often very rural areas, asylum seekers are hidden in plain sight. In many instances, organisations have noted how much fear residents have of making a complaint when conditions are sub-par, with the too vocal being moved frequently between centres. While the European Communities (Reception Conditions) Regulations came into effect in 2018, which allowed a certain cohort of asylum applicants (who had been waiting for 9 months) to access the labour market, the broader living conditions in direct provision were not adequately addressed. During the pandemic, asylum seekers in direct provision who had been furloughed or lost their jobs were unable to access the pandemic payment of EUR 350 a week, in spite of active lobbying by NGOs and activists to make it available. Activism and media attention, particularly during the pandemic, have, however, illuminated for the broader populace the challenges of life for direct provision residents. The COVID-19 pandemic, or "syndemic", has heightened experiences of racial capitalism, practices of rebordering, and exclusion. Repeatedly during this pandemic, we have been called upon as individuals, families and communities to protect our most vulnerable; to isolate and socially distance so that others may stay healthy. However, what has been striking is that this definition of vulnerability is firmly anchored in a politics of exclusion, one constitutive of particular kinds of citizenship and belonging. Asylum seekers and refugees have experienced the blunt force of these hardened exclusionary practices during the COVID-19 pandemic in the reproduction of bordering and rebordering practices that are intimately felt in their daily lives. This has compounded the many and intersecting layers of uncertainty that asylum seekers already face, such as lengthy processing times for their protection application, separation from families and challenging living conditions in direct provision. Within the Irish asylum system, we have seen a failure to fully address the ways in which the pandemic heightened the impossibility of life in direct provision.
The abolition of direct provision as a system of exclusion and containment requires immediate action. The pandemic has proven that this is a system which fails individuals and families seeking international protection. It is clear that much harm has already been engendered through this system, now further compounded by the pandemic. Strong solidarity networks and links demand the abolition of direct provision, but now it is time for a more respectful political vision that embraces a politics of everybody in a new imaginary of protection and refuge; to end direct provision in its current form is now the only meaningful response needed from the Irish state.

Acknowledgments: As always, I would like to acknowledge the international protection applicants living in direct provision who for many years have engaged with me in my research on issues of asylum and refuge. Furthermore, the work of MASI and the Irish Refugee Council has informed this piece in multiple ways, and for this I am grateful.
Deep Learning for Visual SLAM in Transportation Robotics: A review

Visual SLAM (Simultaneous Localization and Mapping) is a solution for achieving localization and mapping of robots simultaneously. Significant achievements have been made during the past decades, and geometry-based methods are becoming more and more successful in dealing with static environments. However, they still cannot handle challenging environments. With the great achievements of deep learning methods in the field of computer vision, there is a trend of applying deep learning methods to visual SLAM. In this paper, the latest research progress in applying deep learning to the field of visual SLAM is reviewed. The outstanding research results on deep learning visual odometry and deep learning loop closure detection are summarized. Finally, future development directions of visual SLAM based on deep learning are discussed.

Introduction

Autonomous positioning and navigation for mobile robots requires solving three problems: localization, mapping and path planning [1]. Localization includes the robot's pose and location in the environment. Mapping is used for generating a representation of the surrounding environment, while path planning solves the problem of moving the robot along an optimal route in the map. In the early days, localization and mapping were studied separately, until the IEEE Robotics and Automation Conference in 1986 proposed the concept of SLAM (Simultaneous Localization and Mapping). SLAM solves the problem of a robot locating itself in an unknown environment while gradually constructing a continuous map of that environment. When a camera is the only external sensor, this concept is called visual SLAM. State-of-the-art visual SLAM frameworks consist of the following four modules [2]: frontend visual odometry, backend optimization, loop closure detection, and mapping. Visual odometry is responsible for preliminarily estimating the pose of the robot from frame to frame and the positions of the map points. The backend optimization is responsible for receiving the pose information measured by visual odometry and computing the maximum a posteriori (MAP) estimate. Loop closure detection aims at recognizing previously visited places during the travel of the mobile robot, and helps the mapping and registration algorithms obtain a more accurate and consistent result. Finally, the mapping is responsible for reconstructing the map according to the camera poses and frames. Many visual SLAM systems fail while working in external environments, in dynamic environments, in environments with too many or very few salient features, in large-scale environments, during erratic movements of the camera, and when partial or total occlusions of the sensor occur. As is well known, deep learning has promoted breakthroughs in research on image recognition [8] and speech recognition [9], opening the era of "big data + complex models". The success of deep learning should be attributed to deep model structures, efficient learning methods, support for big data, and ever-increasing computing power. Unavoidably, visual SLAM is evolving from geometry-based methods to deep learning methods. Recently, both supervised and unsupervised deep learning methods have been applied to visual SLAM problems such as visual odometry [10,11] and loop closure [12,13]. These recent advances promise huge potential for deep learning methods to address the challenging issues of visual SLAM by including adaptive and learning capability.
This paper provides a review of deep learning methods applied to visual SLAM, including the advantages and limitations of the different deep learning methods. Finally, future opportunities focus on ways to enhance the robustness, semantic understanding, and learning capability of visual SLAM. The paper is organized as follows. In chapter 2 we briefly review the related work on visual SLAM methods based on geometry; chapter 3 illustrates deep learning VO methods. Chapter 4 illustrates loop closure detection methods based on deep learning. Then the open problems and development trends of visual SLAM are discussed in chapter 5. Finally, a conclusion is drawn in chapter 6.

Visual SLAM based on Geometry

Visual SLAM methods based on geometric theory mainly rely on extracting geometric constraints from images to estimate motion. Since they derive from elegant and established principles and are widely investigated, most of the state-of-the-art visual SLAM algorithms belong to this family. They can be further divided into feature-based methods and direct methods [14].

Feature-based Method

The first feature-based monocular visual SLAM system, called MonoSLAM [15], was presented in 2003 by Davison et al. MonoSLAM is considered to be a typical filter-based visual SLAM method. In MonoSLAM, an extended Kalman filter (EKF) is used to simultaneously estimate the camera motion and the 3D structure of an unknown environment. The 6-degree-of-freedom (DoF) motion of the camera and the 3D positions of the feature points are represented as a state vector in the EKF. In the prediction model, the motion is assumed to be uniform, and the result of the feature point tracking is taken as the observation. New feature points are added to the state vector as the camera moves. The problem with this approach is that the amount of computation increases with the size of the environment: in a large environment, the size of the state vector grows due to the large number of feature points, and it becomes difficult to achieve real-time calculation. To solve the computational problem in MonoSLAM, PTAM [2] splits tracking and mapping into different threads on the CPU. These two threads are executed in parallel, so the computational cost of the mapping does not affect the tracking. Therefore, bundle adjustment (BA) [16], which is computationally expensive, can be used for the mapping, while the motion of the camera is tracked in real time and the estimated 3D positions of the feature points are mapped at an acceptable computational cost. PTAM is the first method to integrate BA into a real-time visual SLAM algorithm. One of the important contributions of PTAM is the introduction of keyframe-based mapping in visual SLAM. After the release of PTAM, most visual SLAM algorithms followed this type of multi-threading method, including ORB-SLAM and LSD-SLAM.
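As a point of reference for these systems, BA jointly refines the keyframe poses T_j and map point positions X_i by minimizing the total reprojection error over all observations; in a standard textbook formulation (not the exact notation of the papers above):

```latex
\min_{\{T_j\},\,\{X_i\}} \; \sum_{(i,j)\in\mathcal{O}} \rho\left( \left\| \mathbf{x}_{ij} - \pi\left( T_j, X_i \right) \right\|^{2} \right)
```

where O is the set of observations, x_ij is the measured image location of map point i in keyframe j, pi projects a 3D point through the camera model, and rho is a robust cost (e.g., the Huber kernel) that down-weights outlier matches. Restricting the sum to a handful of keyframes is what makes this optimization tractable in a real-time thread.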
ORB-SLAM, proposed by Mur-Artal et al. [17], inherits the framework of PTAM and replaces most of its modules; it is one of the most successful feature-based visual SLAM systems to date. Fig. 1 shows the framework of ORB-SLAM. For the first time, they proposed a method for place recognition using ORB features [18] based on Bag-of-Words (BoW) [19] technology. The ORB feature combines the BRIEF (Binary Robust Independent Elementary Features) [20] descriptor with the FAST (Features from Accelerated Segment Test) [21] key point detector, which allows real-time performance without GPUs while providing good invariance to changes in viewpoint and illumination. ORB-SLAM was originally proposed for monocular cameras in large-scale environments, where it demonstrated superior performance; afterward, ORB-SLAM2 extended it from monocular cameras to stereo and RGB-D cameras [22].

Direct Method

Different from feature-based methods, the direct method does not rely on manually designed sparse features, but rather builds an optimization problem that estimates camera motion directly from pixel information (usually photometric errors). The direct method eliminates the time spent extracting features, at the cost of making the optimization problem much larger than in the feature-based method. Newcombe et al. proposed a dense tracking and mapping (DTAM) system [23] that computes a dense depth map for each keyframe by minimizing a global, spatially regularized energy functional. The pose of the camera is determined by directly aligning the entire image with a depth map. This method is computationally intensive and only feasible on a GPU. In order to reduce the amount of computation, Forster et al. presented the Semi-Direct Visual Odometry (SVO) [24] algorithm, which uses feature correspondence: feature extraction is only required when a keyframe is selected to initialize new 3D points. On the embedded computers of MAVs, SVO runs at more than 50 frames per second. Engel et al. later proposed LSD-SLAM, a semi-dense direct method that runs in real time on a CPU.

Visual Odometry with Deep Learning

Feature-based methods use various feature detectors to detect salient features, for example FAST [21], SURF (Speeded Up Robust Features) [26], BRIEF [20] and Harris [27] corner detectors. A feature point tracker is then used to track these feature points in subsequent frames; the most commonly used tracker is the KLT tracker [28,29]. The output from the tracker is optical flow; from it, the five-point method proposed by Nister [30] can be used to estimate ego-motion. This general approach of detecting feature points and tracking them is followed by most works, as in [31,32]. More recent works in this area employ the PTAM [2] approach, as in [33][34][35]. The direct method of visual odometry relies directly on the pixel intensity values in the image and minimizes errors in sensor space, avoiding feature matching and tracking; however, these methods require planarity assumptions. Early direct monocular SLAM methods, such as Jin et al. [36], Molton et al. [37] and Silveira et al. [38], used filtering algorithms from Structure from Motion (SfM), while Pretto et al. [39] and Krizhevsky [40] used nonlinear least-squares estimation. Other methods, such as DTAM [23], require a lot of GPU parallelism. To reduce this heavy computational demand, many researchers have attempted to apply deep learning methods to the VO problem; their work can be divided into supervised methods and unsupervised methods.
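Before turning to the learned approaches, the classical detect-track-estimate pipeline described above can be sketched in a few lines of OpenCV. The intrinsic matrix K is assumed known, and generic corner detection stands in for a FAST-style detector.

import cv2
import numpy as np

def frame_to_frame_motion(img1, img2, K):
    # Detect corners in the first grayscale frame and track them with the KLT tracker.
    p1 = cv2.goodFeaturesToTrack(img1, maxCorners=2000,
                                 qualityLevel=0.01, minDistance=7)
    p2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
    ok = status.ravel() == 1
    p1, p2 = p1[ok], p2[ok]
    # Five-point essential-matrix estimation inside RANSAC, then pose recovery.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t  # rotation and unit-length translation (monocular scale is unknown)

The unrecoverable translation scale in the last line is the same monocular scale ambiguity that several of the learned methods below also confront.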
Supervised Methods

Vikram et al. [10] presented a DeepVO framework for analyzing monocular VO. The framework consists of two parallel AlexNets [40] concatenated after the final convolutional layer to generate fully connected layers. They tested DeepVO in known environments, with data segregated into training and testing sequences; the results demonstrated competitive performance. Costante et al. [41] presented CNNs that learn, in parallel, from dense optical flow [42] extracted from the global and local image. Wang et al. [14] presented an end-to-end framework for monocular VO using deep Recurrent Convolutional Neural Networks (RCNNs), composed of CNN-based feature extraction and RNN-based sequential modeling. First, an image is fed into the CNN to produce an effective feature for monocular VO; the learned feature is then passed through an RNN for sequential learning. Experiments on the KITTI [43] VO dataset show performance competitive with state-of-the-art methods. The authors then extended the work with uncertainty estimation and evaluation on mobile robots, flying robots and human motion [44]. Melekhov et al. [45] also presented a relative camera pose estimation system based on a CNN. Similar to the framework of Wang, Turan et al. [46] proposed a monocular visual odometry (VO) method for endoscopic capsule robot operations; the proposed deep learning network consists of three inception layers and two LSTM layers concatenated sequentially.

Unsupervised Methods

Zhou et al. [11] presented an unsupervised learning framework with view synthesis as the supervisory signal. The method uses single-view depth and multi-view pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. This system cannot recover absolute scale, since it learns from monocular images. Inspired by Zhou et al. [11], Vijayanarasimhan et al. [47] proposed SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion, and 3D object rotations and translations. The model can be trained with various degrees of supervision: self-supervised by the reprojection photometric error (completely unsupervised), supervised by ego-motion (camera motion), or supervised by depth (e.g., as provided by RGB-D sensors). Mahjourian et al. [48] also focus on learning depth and ego-motion from monocular video, combining a 3D-based loss with 2D losses based on the photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. They also incorporate validity masks to avoid penalizing areas in which no useful information exists. Nguyen et al. proposed a hybrid approach that combines deep learning and feature-based methods; they use features to compute homography estimates, with a network architecture based on VGGNet [49]. Zhan et al. [50] proposed a parallel-CNN framework; their experiments show that jointly training for single-view depth and VO improves depth prediction, owing to the additional constraint imposed on depth, and achieves competitive results for VO. Li et al. [51] proposed a monocular visual odometry system called UnDeepVO to estimate the 6-DoF pose of a monocular camera and the depth of its view using deep neural networks, which use stereo image pairs to recover scale. The trained system was tested using consecutive monocular images, and experiments on the KITTI [43] dataset show that UnDeepVO achieves good performance in terms of pose accuracy.
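Across the supervised methods above, a recurring architectural pattern is the CNN-plus-recurrence stack of Wang et al. [14]: a CNN extracts a feature from each stacked image pair, an RNN models the sequence, and a linear head regresses a 6-DoF relative pose per step. A minimal PyTorch sketch with illustrative layer sizes (not those of any published network):

import torch
import torch.nn as nn

class RCNNVO(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-pair feature extractor
            nn.Conv2d(6, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.rnn = nn.LSTM(256, 512, num_layers=2, batch_first=True)
        self.head = nn.Linear(512, 6)                   # 3 translation + 3 rotation

    def forward(self, pairs):                           # pairs: (B, T, 6, H, W)
        B, T = pairs.shape[:2]
        f = self.cnn(pairs.flatten(0, 1)).flatten(1)    # (B*T, 256) features
        out, _ = self.rnn(f.view(B, T, -1))             # sequential modeling
        return self.head(out)                           # (B, T, 6) relative poses

Each input "frame" is two consecutive images stacked along the channel axis (hence 6 input channels), so the CNN sees motion cues while the LSTM accumulates them over time.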
Loop Closure with Deep Learning

The goal of loop closure detection is to give the robot the ability to recognize the same scene; in other words, the robot can tell whether it has been to a place before [53][54][55]. This issue has long been one of the biggest obstacles to large-scale SLAM and to recovery from critical errors. Another problem that arises is perceptual aliasing [56,57], in which two different places are considered to be the same. This is a problem even when a camera is used as the sensor, due to the repetitive nature of environments such as corridors, similar architectural elements, or areas with many bushes. A good loop closure detection method should return no false positives while keeping false negatives to a minimum. Ho et al. [52] used a similarity matrix to encode the resemblance between all possible pairs of captured images; they demonstrated, by means of a singular value decomposition, that it is possible to detect loop closures despite the presence of repetitive and visually ambiguous images. Eade et al. [56] presented a unified method to recover from tracking failures and detect loop closures for real-time monocular visual SLAM. They also proposed a system called GraphSLAM [57], in which each node stores landmarks and maintains estimates of the transformations relating nodes. ORB-SLAM [17] employs BoW [19] to detect loops. Owing to the great development and success of convolutional neural networks and deep learning in computer vision [58,59], a recent trend in autonomous robots is to exploit learned features instead of hand-crafted traditional features to tackle visual problems, especially the loop closure detection problem for visual SLAM systems.

Supervised Method

Several research groups have used pretrained CNN models as feature generators to obtain whole-image representations and demonstrated this on a variety of datasets. They concluded that CNN features are more robust to viewpoint, illumination and scale variations of the environment [49,[60][61][62]. Naseer et al. built Caffe models based on GoogLeNet [63] and AlexNet [41], which they employed for detecting loop closures across seasons; the models are pre-trained on the Places [64] database. In addition, a directed data-association graph over the similarity matrix is built to leverage sequential information. In [65], Bai et al. also used CNNs to extract features and tried to improve the real-time performance of loop closure detection with a feature compression method. They further developed O-SeqCNNSLAM [66], based on SeqCNNSLAM, as the image descriptor, enabling real-time performance and online parameter adjustment. Instead of using the features directly, Zhang et al. [67] pre-processed the CNN features with a principal component analysis (PCA) and whitening step; the framework is shown in Fig. 2. By doing so, high-dimensional features are projected into a lower-dimensional space, which makes the detection process more efficient.
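A minimal sketch of this pretrained-CNN descriptor pipeline follows; a ResNet backbone stands in for the GoogLeNet/AlexNet models used in the cited works, and a recent torchvision API is assumed.

import numpy as np
import torch
from torchvision import models, transforms

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()          # expose the 512-d pooled feature
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def descriptor(pil_img):
    f = backbone(prep(pil_img).unsqueeze(0)).squeeze(0).numpy()
    return f / np.linalg.norm(f)           # L2-normalize for cosine similarity

def loop_candidates(descs, threshold=0.9):
    D = np.stack(descs)                    # one row per keyframe
    S = D @ D.T                            # cosine similarity matrix
    np.fill_diagonal(S, 0.0)
    return np.argwhere(S > threshold)      # candidate loop-closure pairs

# Optional PCA + whitening compression before matching, in the spirit of the
# Zhang et al. approach:
#   from sklearn.decomposition import PCA
#   codes = PCA(n_components=128, whiten=True).fit_transform(np.stack(descs))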
Unsupervised Method

Gao et al. [68] presented an unsupervised method in which a stacked auto-encoder is employed to learn features; they modify the objective function of traditional auto-encoders by adding denoising, sparsity and continuity cost terms, and evaluate the effect of corruption through experiments. Also based on auto-encoders, the method in [69] injects random noise into the input data and utilizes the geometric information and illumination invariance provided by the histogram of oriented gradients (HOG) [70], forcing the encoder to reconstruct a HOG descriptor. Based on the stacked denoising auto-encoder (SDA), Gao et al. [13] proposed a multi-layer neural network that autonomously learns a compressed representation from the raw input data in an unsupervised way. Building on the SDA, Wang et al. [71] proposed a graph-regularized stacked denoising auto-encoder (G-SDA) network with a manifold-learning graph-regularization structure. Compared with the bag-of-words (BoW) method, the OpenFABMAP algorithm, and the traditional SDA method, their method achieves superior performance.

Development Trends

The geometry-based SLAM method has achieved high precision and real-time performance, but these algorithms tend to fail under changing lighting conditions, the movement of people or objects, the emergence of feature-free regions, day-night transitions, or other unforeseen circumstances. With the power of deep learning demonstrated in a variety of visual tasks, attention has gradually turned to deep learning solutions. In addition, a visual SLAM system with learning or adaptive capabilities is worth further exploration. The success of deep learning is centered on long-term training on supercomputers and the use of dedicated GPU hardware to achieve one-off results; one of the challenges faced by SLAM researchers is how to provide enough computing power in embedded systems. A bigger and more important challenge is online learning and adaptation, which will be essential for any future long-term visual SLAM system. Most existing networks require a large amount of labeled data for training, but the existence of a suitable dataset cannot always be guaranteed. A semi-supervised network that trains on a small set of labeled data and a large amount of unlabeled data is an important direction for the future development of visual SLAM. A visual SLAM system usually runs in an open world, where new objects and scenes can be encountered, but so far deep networks are typically trained for closed-world scenarios: a deep network tends to perform well on the datasets it was trained on and to fail on unseen datasets. A major challenge is to enable deep learning networks for lifelong-learning visual SLAM systems.

Conclusion

The knowledge of geometry-based visual SLAM is valuable in designing the network architecture, the loss function, and the data representation of deep learning-based methods. The availability of large-scale datasets is key to broad applications of deep learning methods. The attempt to employ unsupervised learning is promising and should further consolidate the deep learning contribution to visual SLAM.

Conflict of interest statement. None declared.
2020-02-27T09:18:47.277Z
2019-12-12T00:00:00.000
{ "year": 2019, "sha1": "95820d9f257811db5f516b599e8b195f493df953", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1093/tse/tdz019", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "38557810527edfa588081b1ee7d5ac79e30a7a4a", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
224998808
pes2o/s2orc
v3-fos-license
Data‐Driven Reservoir Simulation in a Large‐Scale Hydrological and Water Resource Model

Large‐scale hydrological and water resource models (LHMs) are used increasingly to study the vulnerability of human systems to water scarcity. These models rely on generic reservoir release schemes that often fail to capture the nuances of operations at individual dams. Here we assess whether empirically derived release‐availability functions tailored to individual dams could improve the simulation performance of an LHM. Seasonally varying, linear piecewise relations that specify water release as a function of prevailing storage levels and forecasted future inflow are compared to a common generic scheme for 36 key reservoirs of the Columbia River Basin. When forced with observed inflows, the empirical approach captures observed release decisions better than the generic scheme, including under conditions of drought. The inclusion of seasonally varying inflow forecasts used by reservoir operators adds further improvement. When exposed to biases and errors inherent in the LHM, data‐driven policies fail to offer a robust improvement; inclusion of forecasts deteriorates LHM reservoir simulation performance in some cases. We perform sensitivity analysis to explain this result, finding that the bias inherent in LHM streamflow is amplified by a reservoir model that relies on forecasts. To harness the potential of interpretable, data‐driven reservoir operating schemes, research must address LHM flow biases arising from inaccuracies in climate input, runoff generation, flow routing, and water withdrawal and consumption data.

Introduction

Large-scale hydrological and water resource models (LHMs) require a water storage and release scheme to simulate the flow-regulating effects of dams (Dang et al., 2020; Nazemi & Wheater, 2015b). Designing such a scheme is challenging, because reservoir operations often depend on complex and undocumented decision processes. In state-of-the-art LHMs, such as PCR-GLOBWB (Wada et al., 2014), H08 (Hanasaki et al., 2008), HiGW-MAT (Pokhrel et al., 2015), WaterGAP (Döll et al., 2009), WBM (Wisser et al., 2010), and VIC/CLM-MOSART-WM (Voisin et al., 2013), a generic, inflow-and-demand-based scheme is deployed; the reservoir's primary purpose (e.g., irrigation, flood control, and water supply) and characteristics of its inflow, storage capacity, and demand, rather than its record of operation, are used to define seasonally varying water release, following the methods of Hanasaki et al. (2006), Haddeland et al. (2006), Döll et al. (2009), Biemans et al. (2011), and Voisin et al. (2013). Important constraints, such as the proportion of water to be allocated for environmental flow, are parameterized arbitrarily and uniformly across dams. This reservoir scheme (herein referred to as a "generic release scheme," following Masaki et al., 2017) has allowed scientists to simulate key flow-regulating behaviors of a large fleet of reservoirs within a consistent framework, relying on data sets with global coverage (e.g., the Global Reservoirs and Dams Database; Lehner et al., 2011) to describe reservoir characteristics and operating purposes where operational records are scarce. Yet simulated flow variability downstream of dams is sensitive to the reservoir algorithm adopted (Masaki et al., 2017), and substantial errors in simulated water releases are inevitable when locally important operational nuances are neglected (Yassin et al., 2019). Flow errors can compound over time through storage memory.
Each erroneous release decision provides an erroneous inflow to the next reservoir, propagating the error downstream. Although unimportant when assessing regionally aggregated water scarcity at an annual scale, such errors jeopardize an LHM's ability to simulate drought impacts, since the state of reservoirs at the onset of drought, and their operations throughout drought events, has a significant bearing on the likelihood and severity of shortfalls in water supply (Turner & Galelli, 2016).

In contrast to generic release schemes, a set of empirically derived release-availability functions (herein referred to as a "data-driven scheme") can be parameterized for individual reservoirs replete with sufficiently long observational records of releases, inflows, and storage levels. Data-driven schemes have been demonstrated to outperform generic release schemes for an isolated number of reservoirs and have thus been proposed as a viable alternative that could be implemented in LHMs (Coerver et al., 2018; Mateo et al., 2014; Yassin et al., 2019). To our knowledge, the improvements available from such an approach have yet to be evaluated for a large system with multiple reservoirs in cascade. We also find no published research exploring how such a scheme might affect simulated water supply reliability and vulnerability, metrics that underpin rigorous drought impact assessment (Hashimoto et al., 1982; McMahon et al., 2006).

To evaluate the potential benefits of adopting a data-driven reservoir release scheme in LHMs, we compare generic and data-driven reservoir simulations for a major river basin of the United States. Simulation performances are evaluated using standard goodness-of-fit metrics and errors in cumulative water volumes released during drought. The LHM adopted in this study tracks unmet water demands and is therefore also used here to explore the sensitivity of water supply reliability and vulnerability to the choice of reservoir scheme. Our study therefore indicates the importance of the reservoir release scheme to drought analyses, in addition to revealing the improvements in flow simulation made available by advancing from a generic to a data-driven approach. Additionally, we compare two possible settings for a data-driven release model: with and without representation of perfect inflow forecasts. In the latter model, forecast lead times vary seasonally according to the availability of predictive skill (e.g., longer lead times in early spring, when snowpack depths indicate incoming flows weeks ahead). All experiments are conducted using both observed (off-line) and LHM-simulated (online) inflows. Crucially, this allows us to isolate the influence of hydrological model inflow bias on simulation performance. LHMs are used increasingly in national water scarcity assessments as well as in multisector dynamics research examining the coevolution and interactions of water, energy, and land systems (e.g., Behrens et al., 2017; Hejazi et al., 2015; Schewe et al., 2014; Voisin et al., 2018; Wada et al., 2016). This study offers a first glimpse into the potential impact and benefits of implementing site-specific operations into reservoir schemes to support these applications.

Domain, Models, and Data

For this study we use the Columbia River Basin (CRB) of the U.S. Pacific Northwest as the spatial domain. The CRB covers 668,000 km2, encompassing territory of five U.S. states as well as British Columbia, Canada. The hydrology of the basin is diverse but overall snowmelt-controlled, with a large peak freshet in late spring.
The CRB is appropriate for this study because the river system is highly regulated, featuring 120 reservoirs of significant storage capacity (>10 million cubic meters, MCM) operated for various purposes, including hydropower generation, flood control, and water supply for irrigation. The CRB is also rich in lengthy records of reservoir inflow, storage levels, and release; of the 120 reservoirs of the CRB with >10 MCM capacity, we obtain sufficient observational records (>10 years of daily resolution storage, and at least one of inflow or release) to develop a data-driven release scheme for 36 reservoirs.

The LHM adopted in this study is an amalgamation of two state-of-the-art models: the Variable Infiltration Capacity (VIC) model (Liang et al., 1994) and the Model for Scale Adaptive River Transport and Water Management (MOSART-WM) (Voisin et al., 2013, 2017). The VIC hydrological simulations applied in this study are available through the Livneh et al. (2013) daily CONUS near-surface gridded meteorological and derived hydrometeorological data set, which provides spatially distributed runoff simulated with gridded climate observations for the period 1915-2011. We select the period 1980-2011 and then aggregate runoff to (1/8)°. The MOSART component of the model routes runoff accounting for heterogeneous land and river channel effects, while the water management component (WM) (Voisin et al., 2013) introduces reservoir storage and withdrawals of water for multiple human uses. Exogenous water demand is first met within each grid cell by allocating local surface water and groundwater; surface water supply is augmented where needed through withdrawals from reservoir releases. Water demands in this study are set to a 2010 profile (as in Voisin et al., 2018), a common approach due to uncertainty in the water demand trend through time (e.g., BPA, 2011). Since actual water demands would have varied substantially over a 30-year period, a fixed demand profile clearly introduces nonnegligible error into the simulated flows. These errors accompany other well-documented errors that afflict an LHM simulation, including climatic forcing, runoff generation (reflecting the combined error from a variety of physical process representations) (Fekete et al., 2012), flow routing errors, and, of course, all upstream reservoir operations.

The generic release scheme deployed in MOSART-WM follows Hanasaki et al. (2006) and Biemans et al. (2011), with the additional enhancement of dynamic storage targets (Voisin et al., 2013). For flood control reservoirs, a monthly release pattern is initialized to match long-term annual inflow adjusted for interannual variability. Monthly releases are further adjusted based on the long-term mean monthly water demands expected to be fulfilled with releases from the reservoir. For irrigation reservoirs, monthly release patterns are derived for each reservoir so as to store as much water as possible before the irrigation season and then release during the irrigation season, following monthly water demand patterns. Depending on the reservoir capacity and its ability to store the inflow and match the release patterns, daily releases are further adjusted for minimum environmental flow and for minimum and maximum reservoir capacity. Dynamic storage targets allow release patterns for irrigation and flood control purposes to be combined. These operations aim to reduce flood-season spill and then target high storage levels prior to the irrigation season. Releases are constrained at a daily time scale to account for minimum environmental flows and minimum and maximum storage constraints. The approach deployed in MOSART-WM is common, and our study is generalizable across the majority of LHMs that include reservoirs.
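A simplified sketch of this inflow-and-demand-based rule family (after Hanasaki et al., 2006) is shown below; the 0.5 thresholds, the demand-following form, and the variable names are illustrative approximations rather than the exact MOSART-WM implementation.

def generic_release(S, inflow, i_mean, d_m, d_mean, C, irrigation,
                    alpha=0.85, env_frac=0.1):
    # S: storage at the start of the operational year (m3); C: capacity (m3)
    # inflow, i_mean: current and long-term mean inflow (m3/s)
    # d_m, d_mean: this month's and the mean monthly demand (m3/s)
    if irrigation and d_mean > 0:
        provisional = i_mean * (0.5 + 0.5 * d_m / d_mean)  # demand-following pattern
    else:
        provisional = i_mean                               # flow-smoothing pattern
    k = S / (alpha * C)                 # begin-of-year storage adjustment
    c = C / (i_mean * 365 * 86400)      # capacity / mean annual inflow volume
    if c >= 0.5:                        # large reservoir: fully regulated release
        r = k * provisional
    else:                               # small reservoir: partly run-of-river
        w = (c / 0.5) ** 2
        r = w * k * provisional + (1 - w) * inflow
    # The full scheme also enforces minimum/maximum storage constraints.
    return max(r, env_frac * i_mean)    # minimum environmental flow floor

The point to notice is that nothing in this rule reflects the observed operating record of any specific dam, which is precisely the limitation the data-driven scheme addresses.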
To simulate reservoir releases with a data-driven scheme, we substitute generic operations with a constrained linear piecewise function parameterized for each of the 36 reservoirs in the CRB with sufficient observational records (see Turner et al., 2020). This includes eight very large reservoirs with storage capacity greater than 1,000 MCM, or 1 km3 (Figure 1). Records of reservoir inflows, releases, and storages are obtained from the US Bureau of Reclamation (2019a, 2019b) and US Army Corps of Engineers (2016). The data-driven scheme consists of 52 piecewise functions, which are trained for each dam using relatively recent data (post-1995) to avoid training to outdated decision schemes, such as past rules involving less stringent environmental flow requirements. The piecewise function defines water release as a function of current-period reservoir storage plus incoming water; a different function is trained for each week of the water year (hence 52 functions). The functions are interpretable and parsimonious, while being constrained to realistic reservoir operator behavior, a strategy we use to avoid overfitting in a data-scarce environment inhospitable to split-sample calibration-validation (see Turner et al., 2020) (a k-fold cross-validation has been performed to ensure the efficacy of this strategy; see the supporting information). The scheme can be simulated at daily resolution by determining the week-ahead release on each day, implementing a day's worth of that release, recalculating a new week-ahead release the following day, and so on. The use of available water (storage plus inflow) rather than storage only is in part designed to avoid over-frequent depletion and spill of reservoirs with low storage relative to annual inflow (see Shin et al., 2019), since in such systems the available water is made up predominantly of inflow, which drives a corresponding release.

A unique aspect of this data-driven scheme is that operations can be simulated with a "horizon curve" that is generated during the training procedure. The horizon curve specifies the inferred forecast lead time (in number of weeks) adopted by the operator. This lead time varies throughout the year in accordance with the availability of predictive information for making a release decision. For example, early spring is typically associated with longer forecast lead times, owing to the availability of snowpack information that provides skillful, long-range inflow forecasts. The horizon curve specifies this change in adopted forecast lead time as a function of the week of the water year. When a horizon curve is implemented in the operating policy, release is a function of storage plus inflow out to the lead time specified by the horizon curve. The release decision is simulated in practice by assuming that the observed future inflow (a perfect forecast) may act as a proxy for the forecast available to the operator at the time of making a release decision.
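A minimal sketch of one such weekly release-availability function is given below, with available water defined as storage plus inflow over the forecast horizon. The breakpoints and releases shown are illustrative; in the actual scheme they are fitted to observed operations, one function per week of the water year.

import numpy as np

def piecewise_release(available_water, breakpoints, releases):
    # available_water = storage + (forecasted) inflow over the horizon (MCM).
    # np.interp is linear between breakpoints and flat beyond the endpoints,
    # loosely mimicking the constrained behavior of the fitted policies.
    return np.interp(available_water, breakpoints, releases)

# Hypothetical fitted parameters for one week of the water year:
week14_policy = dict(breakpoints=np.array([200.0, 800.0, 1200.0]),   # MCM
                     releases=np.array([20.0, 60.0, 250.0]))         # MCM/week
weekly_release = piecewise_release(950.0, **week14_policy)
daily_release = weekly_release / 7.0   # implement one day, then re-evaluate tomorrow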
One problem is that future inflows are not known in advance in an LHM simulation; they are generated by the model itself during the simulation. An iterative approach is therefore required to determine the perfect inflow forecast for a given dam on each day (similar to the approach used in Haddeland et al., 2006, which employs a 12-month forecast). Running MOSART-WM in iterative mode involves extracting from each run the simulated inflows to each reservoir, determining from these inflows the inflow forecast for the specified horizon at each dam, and then feeding those inflow forecasts back into the next simulation. As the iterations progress, the forecasted flows begin to converge, starting with upstream dams and progressing downstream. A simulation that neglects the horizon curves is run first to initialize the inflow forecasts. Convergence is achieved in 5 to 10 iterations, reflecting the degree of cascading of the reservoirs updated in the simulation.

Experimental Setup

Three experiments are conducted to determine the performances of the release models in different settings of interest (Table 1). Experiments 1 and 2 are conducted off-line: each reservoir is simulated in isolation (i.e., not within the LHM) and is forced with observed daily inflow time series. In Experiment 1, the storage is reset to observed storage each day. This setting provides the clearest indication of model accuracy, because simulated releases are determined at each time step using information consistent with that used by the operator in reality. In Experiment 2, storage is fully simulated, so errors in release are allowed to accumulate and compound through time as simulated storage diverges from observed. This experiment provides a useful benchmark that allows us to determine the contribution of storage error accumulation to model performance deterioration. Experiment 3 is conducted online: the candidate data-driven models for the 36 dams are embedded in MOSART-WM and forced with simulated flows. The differences in model performances between Experiment 3 (online) and Experiment 2 (off-line) indicate the extent to which inflow biases inherent in the hydrological model affect the performance of the reservoirs under different operating configurations. This is important because the candidacy of data-driven release schemes has thus far been based on assessments of their performance in off-line mode (Coerver et al., 2018; Turner et al., 2020; Yassin et al., 2019) or against a benchmark of no reservoir representation (Yassin et al., 2019).

Each experiment tests three distinct reservoir release schemes: generic (Scheme A), as described in section 2.1; data-driven based on time of year, storage and 1-week (current period) inflow (Scheme B); and data-driven based on time of year, storage, and inflow forecasts of varying predefined lead times (dynamic horizon; Scheme C), using the horizon curves reported in Turner et al. (2020). Inflow forecasts have been shown to strongly influence seasonal reservoir operations, particularly for high-elevation dams in the Western United States fed by snowmelt (Turner et al., 2020). We adopt the forecast-based model in this study to understand whether the benefits of representing forecasts are maintained when reservoirs are simulated within the LHM. The data-driven release scheme (C) deploys forecasts based on perfect flow ahead. For the off-line simulations (Experiments 1 and 2), the regulated inflow forcing is known in advance from the observations, so the perfect forecast can be extracted directly and deployed to inform the release decision.
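In pseudo-Python, the iteration described in section 2.1 looks roughly as follows; the lhm object is a hypothetical stand-in for the MOSART-WM interface, and a single lead time per dam is used for brevity, whereas the real horizon curve varies by week of the water year.

import numpy as np

def rolling_forecast(q, lead_weeks):
    # Sum inflow over the next `lead` days for each day (perfect foresight).
    lead = int(lead_weeks * 7)
    c = np.concatenate([[0.0], np.cumsum(q)])
    idx = np.minimum(np.arange(len(q)) + lead, len(q))
    return c[idx] - c[:-1]

def run_online_with_forecasts(lhm, horizon_weeks, n_iter=10):
    forecasts = None                        # first pass neglects the horizon curve
    for _ in range(n_iter):
        inflows = lhm.simulate(forecasts)   # dict: dam -> simulated daily inflow
        forecasts = {dam: rolling_forecast(q, horizon_weeks[dam])
                     for dam, q in inflows.items()}
    return lhm.simulate(forecasts)          # forecasts converge upstream-first

Because each dam's forecast depends on upstream releases from the previous pass, convergence naturally propagates down the cascade, consistent with the 5 to 10 iterations reported above.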
For the online experiment, the LHM must be simulated using the iteration mode described in section 2.1. The parameters for both data-driven models (Schemes B and C) are optimized off-line using observations and held constant across all experiments. The generic scheme (Scheme A) is calibrated using long-term mean inflow, with the reservoir-specific demand parameter held consistent across experiments.

Model Evaluation

2.3.1. Daily and Seasonal Metrics for Releases at Individual Reservoirs

For each experiment, models are evaluated using goodness-of-fit metrics computed at daily resolution on simulated versus observed time series of releases. We use release rather than storage because release integrates storage error and release decision error (since release is a function of storage in the data-driven reservoir schemes). The computed metrics are the commonly used normalized root mean squared error (nRMSE), the transformed (normalized) RMSE (nTRMSE), and the slope of the flow duration curve error (SFDCE). There are numerous ways in which the RMSE can be normalized; here we divide by the standard deviation of observations. For the nTRMSE, the release time series are first transformed so that the result is weighted by performance during periods of low release (we use a Box-Cox transform with exponent 0.3, as adopted in Van Werkhoven et al., 2009). The SFDCE is the absolute error in the slope of the middle section of the flow duration curve, between the 30th and 70th percentiles of flow, capturing error in the variability of the releases (Van Werkhoven et al., 2009).

To evaluate model performance during drought, we evaluate the observed and simulated cumulative release volumes and then compare these volumes for all dams on a log-log plot (observed vs. simulated). The release volumes are computed for two separate time periods. The first is the summer (July through September) that follows the spring (April through June) with the lowest inflow across all years. This captures model performance for the season of largest water demand during the year with the weakest flow augmentation and storage recharge from snowmelt. The second time period analyzed is the full water year (October-September) with the lowest overall inflow, capturing the model's propagated performance across all seasons with competing objectives, such as wintertime storage drawdown for flood control and spring refill for water supply. These lengthy periods are deemed sensible due to the prolonged nature of drought and its effect on reservoir storages.
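The three goodness-of-fit metrics are straightforward to compute from paired daily series; a sketch follows, in which the log-slope form of the SFDCE is one common convention, adopted here for illustration.

import numpy as np

def nrmse(sim, obs):
    # RMSE normalized by the standard deviation of observations (%).
    return 100 * np.sqrt(np.mean((sim - obs) ** 2)) / np.std(obs)

def ntrmse(sim, obs, lam=0.3):
    # nRMSE after a Box-Cox transform that emphasizes low-release periods
    # (assumes strictly positive releases).
    bc = lambda q: (np.asarray(q) ** lam - 1.0) / lam
    return nrmse(bc(sim), bc(obs))

def sfdce(sim, obs):
    # Absolute error in the mid-slope of the flow duration curve, between the
    # 30% and 70% exceedance flows (the 70th and 30th value percentiles).
    def slope(q):
        q30, q70 = np.percentile(q, [70, 30])
        return (np.log(q30) - np.log(q70)) / (0.7 - 0.3)
    return abs(slope(sim) - slope(obs))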
2.3.2. Propagation to Water Supply Risk Metrics

Two analyses are conducted to understand the impacts of these reservoir policies on the likelihood and severity of water supply shortfall. The first risk analysis adopts the online LHM simulation (Experiment 3). Here we explore water supply risk from the record of demand shortfalls simulated in the model. Each grid cell of the LHM is associated with a time-varying demand for water, which is met from local river abstractions and from local reservoir releases. If water is unavailable to meet that demand, a shortfall is registered. We compute the reliability, resilience, and vulnerability of simulated water supply at each grid cell, using the definitions provided in Hashimoto et al. (1982) and updated in McMahon et al. (2006). Reliability is the proportion of total simulation days not experiencing shortfall (reliability = 1 implies no shortfall periods). Resilience is the inverse of the average duration of continuous supply shortfall sequences (resilience = 1 indicates all shortfall periods are of 1-day duration). Vulnerability is the average severity of shortfall (as a proportion of demand) measured across all continuous shortfall sequences (vulnerability = 1 indicates total shortfall incurred at some point in all shortfall sequences). There are no observational records of water demand shortfall that can be used to determine which reservoir scheme best reproduces these metrics, so this part of our analysis reveals little about model accuracy. Instead, it indicates the impact of alternative reservoir schemes on metrics that are important for determining the vulnerability of human systems to drought.

The second analysis is conducted off-line, focusing on the storage behaviors of the eight dams included in the study with storage capacity greater than 1,000 MCM. Each reservoir is simulated with each scheme (A, B, and C) using both the observed inflow data (as in Experiment 2) and 100 distinct, 30-year inflow replicate sequences at weekly resolution. These inflow replicates are generated using the nonparametric nearest-neighbor bootstrap (Lall & Sharma, 1996). We measure water supply risk using the number of simulated years for which reservoir levels are drawn below active storage, which would potentially lead to severe supply curtailment. The inflow replicate sequences provide uncertainty distributions on this metric, which are used to infer the robustness of differences across reservoir schemes. These results are therefore intended to examine the importance of reservoir representation for emerging applications of LHMs relating to drought impacts on regional sectors, like water deliveries to agriculture.
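A sketch of the reliability-resilience-vulnerability computation from a daily shortfall record, following the definitions above; taking the within-sequence maximum operationalizes "severity incurred at some point in the sequence".

import numpy as np

def rrv(shortfall, demand):
    # shortfall and demand: same-length daily numpy arrays (demand assumed positive).
    failing = shortfall > 0
    reliability = 1.0 - failing.mean()
    runs, cur = [], []                      # continuous shortfall sequences
    for t, f in enumerate(failing):
        if f:
            cur.append(t)
        elif cur:
            runs.append(cur); cur = []
    if cur:
        runs.append(cur)
    if not runs:                            # no failures: metrics undefined (NA)
        return reliability, np.nan, np.nan
    resilience = 1.0 / np.mean([len(r) for r in runs])
    vulnerability = np.mean([np.max(shortfall[r] / demand[r]) for r in runs])
    return reliability, resilience, vulnerability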
Model Performances

Results demonstrate a powerful influence of LHM inflow simulation error on the performances of the reservoir schemes deployed. For the off-line Experiments 1 and 2 (observed inflows), the data-driven release scheme without forecasting (Scheme B) significantly outperforms the generic rules (Scheme A), while the addition of dynamic forecast horizons in the data-driven model (Scheme C) provides further marginal improvement (Figure 2). The cumulative distribution of model performance metrics across all dams indicates a robust reduction (>90% of the CDF) in simulation errors for the off-line experiments. In other words, the additional effort of acquiring operational data and training bespoke models for these reservoirs appears to pay off handsomely. However, this finding does not hold in Experiment 3 (online, LHM-simulated flows). LHM simulations indicate no robust improvements, with cumulative distribution functions of goodness-of-fit scores closely aligned and overlapping.

Scores obtained for each individual dam are displayed in Figure 3 (dams are ordered according to storage capacity, with the largest to the left). For the off-line Experiments 1 and 2, both nRMSE (Figure 3a) and nTRMSE (Figure 3b) are reduced in 26-27 out of 29 cases, depending on experiment and metric (daily inflow records are unavailable for eight of these reservoirs, so results are reported for only 28 of 36 dams in the off-line experiments). The two cases for which data-driven rules score significantly higher nRMSE and nTRMSE (Island Park and Willow Creek) indicate inconsistent inflow, release, and storage level data employed in model training (a data-driven scheme is only viable if the available operational data are accurate). The addition of the horizon curve to the data-driven model (Scheme C) improves simulation performance relative to Experiments 1B and 2B in 75-95% of dams (depending on the metric evaluated). For some dams, like American Falls and Palisades of the Upper Snake, the performance improvements are substantial, with an approximate 40% reduction in nTRMSE scores. These results further demonstrate the importance of considering the contribution of inflow forecasts to release decision making (expanding on the evidence presented in Turner et al., 2020). Very high SFDCE scores for the large dams under the generic scheme (Experiment 1, Scheme A) are reduced markedly by the data-driven schemes (Schemes B and C) (e.g., more than 50% reduction in SFDCE for Dworshak, American Falls, and Palisades).

Both nRMSE and nTRMSE deteriorate significantly when using the LHM-simulated flows (Figures 3a and 3b). This is an expected result, because the inflows generated within the LHM simulation are subject to significant error. The perhaps unexpected result is that both nRMSE and nTRMSE are relatively uniform for each reservoir across the three models tested (reflecting the result reported for Experiment 3 in Figure 2). Also interesting is that, contrary to marginally improving simulation results, the addition of the forecast (exposed by the difference between Schemes B and C) appears to erode performance in a number of cases in Experiment 3. At Dworshak Dam, for example, the addition of the forecast reduces the nRMSE by 10% in Experiment 1 and by 30% in Experiment 2. Forecasts have the opposite effect when the models are simulated online in the LHM (Experiment 3), increasing nRMSE by about 30%. In section 4, we explore and illuminate the inflow conditions under which an accurate, forecast-based model will tend to underperform relative to a less accurate model that neglects forecasts. This result carries significant ramifications for the applicability of forecast-based schemes to inform water releases in LHMs.

The finding that the benefits of a data-driven model are partially lost in the LHM is supported further by examining the ability of the models to reproduce cumulative release volumes under conditions of drought (Figure 4). In off-line mode, the data-driven model improves vastly on the generic scheme, which has a tendency to underestimate reservoir outflows during drought (in Experiment 1 in particular). This is likely caused by underestimation of the minimum flow constraint to meet environmental provisions in the CRB; the results here demonstrate the benefit of the data-driven model in capturing this feature correctly. Nonetheless, the drought-based release volumes recorded for Experiment 3 (LHM) exhibit similar levels of error across both the generic (Scheme A) and data-driven schemes (Schemes B and C). For example, the total summer release following low-flow spring conditions (Figure 4a, top-right panel) is overestimated for Grand Coulee, Libby, Hungry Horse, and Albeni Falls but underestimated for Jackson Lake and Palisades. Although not identical in magnitude, the direction of error is similar with the data-driven models (Figure 4a, bottom-right panel), suggesting that upstream inflow biases of the LHM are more influential than the differences in reservoir model structure in driving simulated releases under conditions of drought.
Similar results emerge when we compare total water releases for the driest water year of record (Figure 4b), demonstrating that the findings are not a chance feature of the drought metric examined.

Impact of Release Scheme on Water Supply Reliability, Resilience, and Vulnerability

In MOSART-WM, each grid cell is assigned water demands that can be supplied through local stream abstraction or water delivery from a reservoir within a prespecified distance. Whenever this demand cannot be met by available water, a shortfall event is recorded. The record of shortfall during such events can be used to characterize the reliability, resilience, and vulnerability of water supply (see section 2.3.2). We find that for Experiment 3 (LHM simulation), resilience and vulnerability metrics are available only within the Upper Snake region, because no supply-demand shortfalls occur elsewhere during the study period of 1980-2011 (Figure 5). Shortfalls occurring in the Upper Snake arise during a single dry year, so the calculated scores for reliability, resilience, and vulnerability should be understood in this limited context. We find a marginal difference in reliability across reservoir schemes, while resilience and vulnerability also differ only marginally across schemes.

Figure 5. Implications of different reservoir release schemes on reliability, resilience, and vulnerability. Resilience and vulnerability are computed from the time series of shortfalls between supply and demand. These metrics are assigned NA values where reliability is equal to 1 (since the metrics cannot be computed without a shortfall period). Schemes are A (generic), B (data-driven, 1-week inflow), and C (data-driven, dynamic inflow forecast).

Despite the marginal differences in vulnerability reported above, stochastic storage simulations for the eight large dams (>1,000 MCM storage capacity) reveal marked differences in the likelihood of reservoirs being drawn below active storage capacity, an indicator of vulnerability, since loss of active storage would likely be associated with significantly impaired ability to meet water demands (Figure 6). The impact is most striking for Dworshak Dam, which is rarely drawn below active storage with the generic release scheme (A) and always drawn below active storage (30 out of 30 years in all 100 simulations) when simulated using the data-driven schemes (B and C). In each of the other dams except Libby and Hungry Horse, which remain within active storage limits across all simulations, the release scheme causes a clear difference in the distribution of the number of years resulting in a breach of active storage. Although the direction of impact varies from case to case, these results indicate strong sensitivity of reservoir vulnerability to the release scheme chosen. Interestingly, these differences often remain unexposed when comparing schemes using only the observed inflow sequence (red points). Simulating the observed inflows in each of the schemes as applied to American Falls, for example, leads to a similar number of years with drawdown below active storage. The robust difference between schemes emerges clearly, however, with simulation across the 100 replicate sequences.

Need and Prospects for Data-Driven Reservoir Schemes in LHMs

LHMs address a host of science problems at a range of spatial domains, from river basins to the entire globe.
These models have been used to evaluate global water availability (Bierkens, 2015; Döll et al., 2012; Wada et al., 2011), explore impacts of humans and climate on terrestrial water storage and groundwater depletion (Wada et al., 2014), support water forecasting models (Koster et al., 2004), and corroborate remotely sensed water fluxes (Scanlon et al., 2018). LHMs will be relied upon increasingly to provide water availability projections for the study of climate and extreme event impacts on expansive infrastructural systems and economic sectors (Kraucunas et al., 2015; Miara et al., 2017). Integrated power grids or crop-growing regions, for example, often span multiple river basins and political regions. If one wishes to quantify the impacts of a climate trend or extreme event on these water-dependent sectors, then water availability must be simulated coherently throughout the entire spatial domain to preserve realistic spatial and temporal correlation patterns (Voisin et al., 2017). Although LHMs provide this capability, their clout is limited by coarse spatial resolution, low fidelity, and a lack of observational data for calibration and verification, particularly for human decision making and when flood and drought impacts are relevant (Nazemi & Wheater, 2015a, 2015b). The question is whether these newly envisaged applications would benefit from LHMs furnished with more accurate, bespoke, data-driven water management schemes, where data permit.

The results reported above offer three insights relevant to the advancement of data-driven reservoir schemes in LHMs. First, data-driven release-availability functions, despite being more accurate as measured by standard error metrics, may not guarantee more realistic reservoir releases in an LHM simulation. They may only provide more accurate releases subject to the inflows, which are often significantly biased. This does not mean the community should avoid implementing data-driven rules in LHMs. Data-driven reservoir release rules would enrich LHMs, even if the benefits cannot yet be realized in full. However, it is essential for users of these models to be aware of the limitations; adopting data-driven rules may not yet guarantee more realistic streamflow simulation, nor realistic reservoir variations for water quality modeling purposes, for example. Continual advancement of other LHM features that control streamflow will be necessary to harness the benefits of realistic, data-driven reservoir release schemes.

Figure 6. Comparison of release schemes across observed flows (red point) and using 100, 30-year replicate sequences (black points) for the number of simulated years with a breach of active storage. For a given reservoir, each scheme is exposed to the exact same set of inflow sequences. Within each column (A, B, and C), points are positioned randomly on the horizontal axis to avoid overlap.

A second insight is that water supply vulnerability is highly sensitive to the choice of reservoir scheme, but the associated implications may not readily emerge from an LHM unless it is forced with many drought sequences to generate a sufficiently large and representative sample of shortfalls between supply and demand. Taking full advantage of data-driven reservoir schemes to study water supply vulnerability implies a need for expansive uncertainty analysis and stress testing. In water supply reservoir operations, there is a trade-off between reliability and vulnerability (Hashimoto et al., 1982).
The operator may deliberately cut back the water supply at the onset of drought (causing a minor shortfall) in order to hedge against the risk of a much larger shortfall if releases were maintained (Draper & Lund, 2004). The system is then less reliable (shortfalls are more frequent) but also less vulnerable (shortfalls are less severe). Capturing this type of behavior in an LHM reservoir scheme is essential for adequate characterization of water supply risk during drought, and, importantly, the generic scheme (some variants of which judge hydrological condition only at the beginning of the operational year) is likely at a significant disadvantage for representing such conditions. Yet the results of this study suggest that the import of a different reservoir scheme may only be exposed through a process of stress-testing involving a large sample of input scenarios, a nontrivial challenge at the scale of a regional or global LHM. These models ingest distributed climate input across thousands of grid cells; such data cannot easily be synthesized without the aid of extremely computationally intensive climate models. Off-line analytics, such as the approach proposed in this work involving stochastic simulation of reservoirs, could be performed in postprocessing of an LHM to support more in-depth analysis of water supply vulnerability. For example, one could extract the LHM inflow simulations for a set of independent, key reservoirs and then expose those reservoirs to spatially correlated synthetic inflow sequences derived from a multisite stochastic flow model.

While we do advocate for further development and testing of data-driven water management schemes for LHMs, we would caution against the use of models that include a forecast contribution to the release decision at this time. The third key insight from our analysis is that data-driven models that adopt a forecast horizon may actually erode the performance of an LHM, despite being marginally more accurate when simulated with observed inflows. This finding carries important implications for the prospects of adopting realistic reservoir schemes in LHMs. An explanation is offered in the following section.

Sensitivity of Reservoir Simulation Performance to Inflow Error

We now demonstrate, through a small numerical experiment, that a forecast-based model tends to be less robust to error in inflow than a similar data-driven model that neglects forecasts. This is an off-line experiment akin to Experiment 2 (observed inflow and simulated storage), with the difference that errors are added to the inflow to explore the sensitivity of performance with respect to the level of error. For each of the eight largest reservoirs included in our study, we perform the following sensitivity test. First, an inflow error time series is computed by subtracting the daily observed inflow from the daily inflow simulated by the LHM. We then develop a stochastic model of the resulting error time series. This is done by deseasonalizing the data and then fitting a first-order autoregressive model (AR1) to the daily time series to remove autocorrelation. Error time series with similar error can then be synthesized by bootstrapping from the model residuals and reapplying the autocorrelation and the seasonal signal. For each of 1,000 generated error series, we scale the data by a randomly sampled error factor (between 0 and 1.5, applied to all points of the flow time series) before adding the error time series back onto the observed inflows. A sample with an error factor of 1 has error commensurate with the LHM-simulated inflow, while an error factor of 0 introduces no error and is equal to the observed inflow (i.e., equivalent to Experiment 2). We simulate each reservoir 1,000 times with these varying inflow series, once for the data-driven release scheme with 1-week inflow (Scheme B) and again for the scheme with the dynamic inflow horizon (Scheme C).
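A sketch of the stochastic error generator follows; it assumes a multi-year daily series, and the deseasonalization and AR(1) fit are simplified relative to the full procedure.

import numpy as np

def synthesize_errors(err_obs, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    doy = np.arange(len(err_obs)) % 365
    seasonal = np.array([err_obs[doy == d].mean() for d in range(365)])
    z = err_obs - seasonal[doy]                    # deseasonalized error
    phi = np.corrcoef(z[:-1], z[1:])[0, 1]         # AR(1) coefficient
    resid = z[1:] - phi * z[:-1]                   # whitened residuals
    samples = []
    for _ in range(n_samples):
        e = np.empty_like(z)
        e[0] = z[0]
        boot = rng.choice(resid, size=len(z) - 1)  # bootstrap the residuals
        for t in range(1, len(z)):
            e[t] = phi * e[t - 1] + boot[t - 1]    # restore autocorrelation
        samples.append(e + seasonal[doy])          # restore seasonality
    return np.array(samples)

# A perturbed inflow with error factor f (f = 1 is commensurate with the LHM):
#   inflow_f = obs_inflow + f * synthesize_errors(sim_inflow - obs_inflow, 1)[0]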
A sample with error factor of 1 has error commensurate with the LHM simulated inflow, while an error factor of 0 has no error and is equal to the observed inflow (i.e., equivalent to Experiment 2). We simulate each reservoir 1,000 times with these varying inflow series. This is done for the data-driven release scheme with 1-week inflow (Scheme B) and again with the dynamic inflow horizon (Scheme C). Normalized RMSE scores are computed for simulated releases relative to observed (Figure 7). When the error is 0, results are unequivocal: The forecast model (Scheme C) better represents the observed release. But as inflow error is introduced (moving from left to right along each horizontal axis), something interesting happens. For most reservoirs, the nRMSE of scheme C release rises at a sharper rate than that of Scheme B. The reason for this behavior is quite intuitive: Scheme C relies more heavily on inflow to inform its decision, so errors in that input must impact that model to a greater extent than a model less reliant on those inputs. When the error reaches the equivalent error of the LHM (error factor = 1) the forecast-based scheme (C) is outperformed by the less accurate Scheme B in half of these reservoirs-namely, at American Falls, Dworshak Dam, Jackson Lake, and Libby. The extent of this behavior varies depending on the importance of forecasting and the coincidence of inflow bias with seasons when forecasts are deployed in Figure 7. Stochastic error simulation results for 1,000 simulations using Scheme B (data-driven, 1-week inflow) and C (data-driven, dynamic inflow forecast). The horizontal axis represents the error factor, where 0 is commensurate with observed inflow and 1 is commensurate with the inflow error in the LHM. Values greater than 1 on the horizontal axis denote inflow errors exceeding the LHM errors. nRMSE (%) is computed on simulated release relative to observations. Figure 8. Conceptual model describing the relationship between reservoir inflow error and model output error, highlighting the greater sensitivity of a forecast-dependent scheme to errors in inflow. An error level that guarantees superiority of the forecast-driven scheme is denoted as "ideal error." 10.1029/2020WR027902 Water Resources Research the release model. Albeni Falls, for example, is simulated with significantly shorter forecast lead times than the other reservoirs, leading to a very marginal difference between the two models in terms of sensitivity to inflow error with this particular reservoir. This sensitivity test replicates the observed results of our study, wherein the addition of the forecast actually deteriorates the performance of some reservoirs simulated in the LHM. One can generalize this result to a simple conceptual model to highlight how a level of improvement in LHM inflow could allow the modeler to harness the benefits of a forecast scheme (Figure 8). This concept is somewhat analogous to the findings of reservoir optimization research, wherein deterioration of forecast skill beyond a certain point compromises the optimality of release decisions (Turner et al., 2017;. Conclusions and Future Work Data-driven reservoir release schemes have emerged as a possible enhancement to LHMs-a natural progression given the increasing availability of observations from in situ devices and remote sensing. Today, data-driven reservoir schemes can be implemented in LHMs wherever sufficiently lengthy operational records are available to train the parameters. 
In future, the availability of reservoir storage levels and bathymetry from satellite remote sensing may unlock data-driven reservoir schemes at a global scale (Avisse et al., 2017; van Bemmelen et al., 2016; Zhao & Gao, 2019). But are LHMs sufficiently advanced to take full advantage of these new data? The results of this study suggest perhaps not. Our results demonstrate that an interpretable, data-driven scheme based on release-availability functions simulates reservoir releases more accurately than a generic release scheme, including during the driest years of record. The addition of seasonally varying forecast horizons in a data-driven model offers further improvement still. But these simulation improvements are available only under conditions of accurate model inputs. When exposed to the errors and bias inherent in LHM flows, the data-driven reservoir release schemes perform no better than the generic scheme. When inflow forecasting is included, data-driven schemes may even perform worse, owing to overreliance on inputs subject to significant bias. Further corroborating evidence generated using alternative LHMs, model inputs (e.g., water demands), spatial regions, and data-driven schemes is required to strengthen and confirm these conclusions. Nonetheless, the results indicate a need for further improvement of LHM flow accuracy, requiring attention to runoff generation, groundwater and soil moisture interaction, evaporation, snow cover, and other elements of human influence on the water cycle.

Our study shows that key drought metrics, such as reliability and vulnerability, are sensitive to the reservoir scheme deployed, but that these differences may only be exposed through extensive sensitivity testing uncommon in LHM studies. Drought impact studies that rely on improved reservoir models in LHMs may therefore require off-line sensitivity analysis using stochastic inflow sequences. The parameters of the reservoir operating rules, data-driven or otherwise, will also be subject to significant uncertainty. Uncertainty characterization and sensitivity analysis performed on the reservoir model parameters (as in Zajac et al., 2017) will be particularly important in regions lacking records of reservoir inflow and release. The appropriate tools and methods for executing such analysis in the context of large-scale water resource simulation could be defined in future work. Improvements to LHM flow simulation, accompanied by data-driven human-system models and appropriate uncertainty analysis, could unlock new capabilities, including rigorous drought impact assessment across large regions and better representation of water resources management accounting for localized institutional and environmental regulations.
2020-10-19T18:12:51.852Z
2020-09-28T00:00:00.000
{ "year": 2020, "sha1": "58be02825c253a0b119f91915dd7d30a233383ef", "oa_license": "CCBY", "oa_url": "https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2020WR027902", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "1afe6baccb3d006c1340c4f1598965720f82dcca", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
251131781
pes2o/s2orc
v3-fos-license
Incidence of Congenital Muscular Torticollis in Babies from Southern Portugal: Types, Age of Diagnosis and Risk Factors

Congenital muscular torticollis (CMT) is the most common type of torticollis and is defined as a unilateral contracture of the sternocleidomastoid muscle resulting in lateral head tilt associated with contralateral rotation; early detection and treatment offer a high probability of recovery of head posture symmetry. This study aimed to verify the incidence of torticollis in babies born in southern Portugal: the types, age of diagnosis and risk factors. The study comprised 6565 infants born in the south of Portugal at Algarve University Hospital Center, Portimão unit, during a 5-year period (January 2016 to December 2020). Cases diagnosed with torticollis and referred to the Pediatrics and Pediatric Physiatrist consultations at this hospital were included. 118 babies—77 (65.3%) male and 41 (34.7%) female—were diagnosed with torticollis. The incidence over the 5-year period was 1.5%. Spontaneous vaginal delivery was the most common mode (n = 56; 47.5%), with 106 (89.8%) deliveries in cephalic presentation. 53 (44.9%) cases of torticollis were classified as postural, 37 (31.4%) as muscular torticollis with joint limitation and 28 (23.7%) as congenital torticollis (with the presence of a nodule). Postural torticollis was diagnosed at an average age of 70.14 days, muscular torticollis with joint limitation at an average of 64.12 days and congenital torticollis at 33.25 days (p < 0.001). Plagiocephaly was present in 48 (40.7%) babies with torticollis (p = 0.005) and joint limitation in 53 (44.9%) babies (p < 0.001). The data obtained revealed a low incidence of CMT, the majority being classified as postural. The age of diagnosis varied between 33 and 70 days from birth. The baby's gender, mode of delivery and fetal presentation during delivery did not show a statistically significant association with the presence of torticollis. Despite the low incidence, it is important to highlight the role of health professionals in implementing prevention strategies.

The incidence of CMT varies between 0.3 and 2% [1,2,6], being more prevalent in males, with a ratio of 3:2 [1,6], and more common on the right side [1,6,7]. However, these data show discrepancies between studies, since the investigation by Stellwagen et al. [11] found that 16% of newborns had torticollis. Diagnosis of CMT is made via observation of alignment, cervical range of motion assessment and palpation [12]. There are 3 types of CMT: torticollis with a tumor or a palpable fibrotic nodule in the sternocleidomastoid muscle; muscular torticollis, with rigidity of the sternocleidomastoid muscle but without an associated tumor; and postural torticollis, which presents all the clinical features of torticollis but without rigidity or tumor in the sternocleidomastoid muscle [13]. Children with CMT present with a cervical muscle strength imbalance [14]. There are several theories for the etiology of CMT, but it is still not fully known; it is believed to be attributable to fetal head descent or abnormal intrauterine fetal positioning during the third trimester, resulting in trauma to the sternocleidomastoid muscle.
Other theories point to fibrosis of the sternocleidomastoid muscle as a cause [8,13], resulting from venous occlusion due to persistent intrauterine lateral flexion and rotation of the neck, or from trauma to the sternocleidomastoid muscle during delivery [4,13,15]. Other risk factors involved in the development of congenital torticollis include breech presentation, multiple pregnancy and dystocic delivery (vaginal delivery using suction cups or forceps) [4,6,16]. CMT may be associated with one or more comorbidities, including brachial plexus injury, hip dysplasia, limb deformity and early developmental delay, facial asymmetry, plagiocephaly, and temporomandibular joint disorder [2,4,8]. Several studies report an association between the presence of CMT and the development of hip dysplasia, at a rate of up to 20% [6,17]. Early detection and treatment of CMT (before 1 month of age) has a high probability of recovering head posture symmetry (within 1.5 months). If CMT is detected from 6 months of age, the range of motion in the cervical region will gradually decrease, and a longer intervention period (between 9 and 10 months) may be necessary [1,8]. There are no recent studies on the incidence of CMT. The aim of this study was to examine the incidence of CMT at our institution, the distribution of types, the infant's age at diagnosis and associated factors.

Design

This is a longitudinal retrospective study comprising 6565 infants born in the south of Portugal (Algarve region) between January 2016 and December 2020 at Algarve University Hospital Center (CHUA), Portimão unit, Portugal. The Algarve region occupies an area of 4996 km² and is home to 467,495 inhabitants (2021), concentrating 4.5% of the resident population of Portugal; it comprises a single subregion, consisting of 16 municipalities divided into 67 parishes. Under the public health system, there are 2 university hospitals (CHUA) with delivery and maternity rooms: one located in the city of Portimão and the other in Faro, Portugal. At the private level, there is only one hospital, in Faro.

Cases

Among all children born during the 5-year period, we selected the cases diagnosed with torticollis and referred to the Pediatrics and Pediatric Physiatrist consultations at the Algarve University Hospital Center, Portimão unit. The diagnosis was made by a physiatrist. This study was approved by the Ethics Committee for health and by the Board of Directors of Algarve University Hospital Center, Portimão unit (Reference: 01/21 of 1 June 2021).

Data Analysis

Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) software, version 28.0 (Armonk, NY, USA: IBM Corp). First, descriptive statistics of all study variables were obtained through measures of central tendency and dispersion. The chi-square test of independence, using contingency tables, was applied for statistical inference on the qualitative variables (presented in Table 3). In all inferential analyses, statistical significance was set at 0.05. The Incidence Proportion (IP) was computed by dividing the total number of infants with torticollis by the number of children born in each year and over the last 5 years [18].
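As a rough illustration of the two computations just described, the sketch below estimates an incidence proportion with a normal-approximation 95% confidence interval and runs a chi-square test of independence with scipy. All counts in it are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def incidence_proportion(cases, births, z=1.96):
    """Incidence proportion with a normal-approximation 95% CI."""
    p = cases / births
    half = z * np.sqrt(p * (1 - p) / births)
    return p, (p - half, p + half)

ip, (lo, hi) = incidence_proportion(cases=30, births=2000)   # placeholder counts
print(f"IP = {100*ip:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")

# Chi-square test of independence on a torticollis-type x plagiocephaly
# contingency table, as done for the study's Table 3 (counts invented).
table = np.array([[20, 30],    # postural:   plagiocephaly yes / no
                  [24, 13],    # muscular with joint limitation
                  [ 4, 20]])   # congenital with nodule
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```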
Results

Between January 2016 and December 2020, 6565 live births occurred at Algarve University Hospital Center, Portimão unit. Among them, 118 babies (n = 77; 65.3% male and n = 41; 34.7% female) were diagnosed with torticollis in the Pediatrics and Pediatric Physiatrist consultations (incidence = 1.5%, 95% confidence interval (CI) 1.3-1.8%). Table 1 shows the incidence proportion in each year analyzed and the cumulative incidence over the 5-year period. The average gestational age was 37.7 weeks (SD: 2.9), ranging from 28 to 41 weeks. Most deliveries of babies diagnosed with torticollis were spontaneous vaginal deliveries (56; 47.5%), with 43 (36.4%) cesarean sections, 15 (12.7%) dystocic deliveries with the use of a suction cup and 3 (2.5%) with the use of forceps; in 1 case the mode of delivery could not be identified from the records. The fetal presentation was cephalic in most deliveries (106; 89.8%), with only 9 (7.6%) breech presentations and 3 (2.5%) without reference in the records. Some deliveries were of twins (9; 7.6%), but the majority were singleton births (109; 92.4%). Regarding the type of torticollis, 53 (44.9%) were classified as postural (congenital torticollis with all the clinical features of torticollis, but without rigidity or tumor in the sternocleidomastoid muscle), 37 (31.4%) as muscular torticollis with joint limitation (torticollis with rigidity of the sternocleidomastoid muscle, but without an associated tumor) and 28 (23.7%) as congenital torticollis with the presence of a nodule (torticollis with a tumor or a palpable fibrotic nodule in the sternocleidomastoid muscle). The mean age at which babies were diagnosed with torticollis in consultation was 83.79 days (SD: 45.23), with a minimum age of 9 days and a maximum of 180 days. Postural torticollis was diagnosed at an average age of 70.14 days, muscular torticollis with joint limitation at an average of 64.12 days and congenital torticollis at 33.25 days (p < 0.001). As for the laterality of torticollis, 57 (48.3%) were diagnosed on the right and 61 (51.7%) on the left. The dominant gender in CMT was male, with 77 (65.3%) males and 41 (34.7%) females. Plagiocephaly was present in 48 (40.7%) of the babies with torticollis and absent in 63 (53.4%), with 7 (5.9%) lacking this reference in the records. As for the laterality of plagiocephaly, 27 (56.3%) were on the right, 19 (39.6%) on the left and 2 (4.1%) were symmetrical. Table 2 presents the association between plagiocephaly and the laterality of the torticollis. Regarding limited range of motion, it was present in 53 (44.9%) babies diagnosed with torticollis, absent in 60 (50.8%) and without reference in 5 (4.2%); of those affected, 28 (52.8%) were on the right side and 25 (47.2%) on the left side. Table 3 shows the association between the different types of torticollis and the qualitative variables analyzed in the study.

Discussion

The data presented revealed a low incidence of CMT in babies seen in consultation over the 5-year period (1.5%), with 2018 being the year with the highest incidence (2.7%) and 2017 the year with the lowest (1.3%). Similar data were found in the study by Cheng et al. [14], who verified 624 cases of torticollis in babies born in China over a period of 7 years, with an incidence of 1.3%. Considering only the babies who were diagnosed with torticollis, most were born by spontaneous vaginal delivery (48%), but the applicability conditions of the statistical test could not be fulfilled to assess statistical significance.
These data are contrary to those found in studies carried out in central Portugal (Coimbra) between 2008 and 2011 [19] and in northern Portugal (Porto) between 2012 and 2014 [4], where dystocic delivery was predominant, at 73% in both studies, with a higher incidence of deliveries instrumented by suction cup or forceps. Stretching of the sternocleidomastoid muscle during childbirth may be a direct cause of CMT; however, no studies were found demonstrating a relationship between mode of delivery and torticollis. Hardgrib et al. [20] and Lee et al. [21] compared vaginal deliveries with cesarean sections and found no difference in the clinical severity of CMT according to the mode of delivery, suggesting that prenatal factors can probably cause CMT, given the reduced risk of birth trauma in cesarean sections. Ho et al. [22] found higher rates of assisted breech deliveries, instrumental deliveries and cesarean sections, which led them to conclude that birth trauma could be the main etiological factor of CMT. Cephalic presentation was observed in most deliveries (90%) and only 8% of babies diagnosed with torticollis were born in breech presentation, lower than the figures reported in national studies, in which breech presentation was observed in 21% of babies with torticollis [4,19]. The low number of births with breech presentation may be due to a growing tendency among obstetricians to recommend cesarean delivery when the fetus is in breech presentation [19]. Most babies diagnosed with torticollis were male (65%), similar to the data found in the study by Bastos et al. [19] (67%) and in the study by Petronic et al. [1]. However, in the study by Amaral et al. [4], there was no large gender difference, with 51% male and 49% female. Despite the difference between the gender percentages observed in this study, there was no statistically significant association with the presence of torticollis. Regarding the classification of torticollis, postural torticollis was the most prevalent (45%), differing from the results obtained in other studies, which report greater numbers of babies classified with congenital torticollis with the presence of a nodule and with muscular torticollis [19,23]. Cheng et al. [23] found a higher percentage of congenital torticollis with tumor (55%), compared to muscular (34%) and postural (11%) torticollis, in 821 babies, and the data obtained in the study by Ho et al. [22] revealed a 36% prevalence of torticollis with a tumor in 91 babies. The reliability of the diagnosis of the type of torticollis may play a major role in the differences between studies. For example, the clinician's ability to observe the presence of neck, facial or cranial asymmetry and to palpate the SCM using passive cervical rotation can be crucial to a good diagnosis [10]. Postural torticollis seems to be associated with a preference for head posture at birth, due to the presence of deformational plagiocephaly, and is aggravated by placing the baby most often in the same position in the first months, when head control is still poor [24]. As for laterality, a difference of only 3.4% was observed between the right and left sides, the latter being the most prevalent (51.7%), data very close to those obtained in the study by Bastos et al.
[19] (52.8% of the cases were on the left side) and those of Ho et al. [22] (50.5%). The study by Amaral et al. [4] observed an equal distribution between the sides. The study by Petronic et al. [1] revealed a predominance of right laterality, though also with little difference in percentage terms. Plagiocephaly was present in less than half of the babies with torticollis (41%), with a greater predominance in muscular torticollis (64%). Data obtained in the central Portugal region [19] showed a higher percentage of plagiocephaly associated with torticollis (51%). Regarding the laterality of plagiocephaly obtained in this study, the right side was the most predominant (57%), consistent with the prevalence of torticollis on the left (47.5%). Peitsch et al. [25] evaluated a sample of 201 newborns and found that 13.1% of the babies had lateral or posterior cranial flattening, with 54% having flattening on the right side and a predominance of males, suggesting that boys have larger and less flexible heads than girls, which makes them more susceptible to head deformity anomalies at birth [26]. There is also the perception that torticollis develops secondary to deformational plagiocephaly, due to maintenance of the head in a "comfort position" [25]. The mean age at the time of diagnosis of babies with torticollis was less than 3 months. Data from the study by Ho et al. [22], which evaluated cases of torticollis occurring between 1994 and 1997, revealed a mean age at diagnosis of 2 months. Bastos et al. [19] reported that the first diagnostic consultation took place between 2 weeks and almost 4 years of age, most involving babies aged less than 6 months (77%); in the study by Amaral et al. [20], the mean was lower, at about 3 months (11.6 weeks). Most congenital cases of torticollis were diagnosed in babies aged up to 1 month (57%), most postural torticollis cases in babies aged over three months (45%), and muscular torticollis at ages above 2 months (65%). According to the Academy of Pediatric Physical Therapy's guide, infants with CMT should be referred to a physiotherapist as soon as they are identified [10]. Parental education to prevent asymmetries/CMT, together with the assessment and early identification of CMT (the identification of infants who have postural preference, reduced cervical range of motion, sternocleidomastoid masses, and/or craniofacial asymmetry), are prevention strategies. Prevention consists of anticipated action, based on epidemiological knowledge, to control and reduce the risk of disease. In this way, prevention and education projects are based on scientific information and normative recommendations; in addition, investments in prevention are always less expensive than those applied in the management and treatment of disease [27]. This study presents some limitations. For bureaucratic reasons, there may have been some delays in scheduling appointments, preventing an earlier diagnosis in some cases, which may compromise the data on age at diagnosis. Furthermore, some infants were not referred for consultation at the Algarve University Hospital Center, Portimão unit, and may have been diagnosed in another hospital or clinic, thus not being counted in the study. Another limitation is the fact that some babies may have been born in this hospital but not have attended the pediatric consultation there.
Examples: at the time of delivery this was the closest hospital, but the parents did not live in the area; some parents may have changed residence and gone elsewhere for the pediatric consultation; and some parents may have preferred to attend the consultation at a private clinic or hospital.

Conclusions

The data obtained in this study verified an incidence of congenital torticollis of 1.5% in 6565 babies born in a hospital in the south of Portugal between 2016 and 2020, the most prevalent type being postural CMT. The baby's gender, mode of delivery and fetal presentation during delivery did not show a statistically significant association with the presence of torticollis. Since most cases of postural torticollis were diagnosed in babies over the age of three months, there is a window of opportunity to intervene as early as possible to prevent this type of torticollis.
2022-07-29T06:17:42.865Z
2022-07-26T00:00:00.000
{ "year": 2022, "sha1": "aa591d107d2f455440f8d19039f1aefed3517fef", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/15/9133/pdf?version=1658844471", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2a9be172d1bf1f90558c80891c7783186c9fabc2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250692259
pes2o/s2orc
v3-fos-license
Online monitoring of alignment noises in TAMA300

We report on online monitoring of alignment noises in the TAMA300 detector in Japan. Continuous monitoring of noise contributions is necessary for veto analysis: to detect gravitational wave signals, fake events should be removed by veto analysis. We investigated a procedure for evaluating various noise contributions continuously. The procedure has been applied to several noise sources such as laser intensity and auxiliary length control. An investigation of alignment noises is the focus of this paper.

Introduction

Since 1999, the Japanese laser interferometric gravitational-wave (GW) detector TAMA300 has performed observations nine times. A total of 3086 hours of data has been accumulated in these observations. The detector sensitivity and its stability have also been improved gradually over the observation periods [2]. The evaluation of noise contributions has been performed since the beginning of TAMA construction in order to reduce the noise level; it is useful for hunting noise sources. To detect gravitational wave signals [3], one must remove noises which might be misidentified as GW signals [4,5]. Our study focuses on evaluating various noises quantitatively and on the noise mechanisms which contaminate the GW channel. Continuous monitoring of the noise contributions is necessary for veto analyses, because the detector condition changes occasionally during long observations. Veto analysis with continuous noise monitoring will remove burst-like noises, and reduce non-stationary noise contributions, in the inspiral search and other GW searches. To realize such an online monitoring system, the study was carried out with the following procedure: (i) possible noise paths were considered, and each transfer function (TF) was calculated; (ii) measurements of the transfer function were made by injecting an artificial signal at the noise source; (iii) the dominant noise mechanism was identified among the possibilities; (iv) the possible time variation of the noise coupling coefficient, and how to monitor it, were considered. The investigation of noise contamination mechanisms is described in Section 2. In Sections 3 and 4, the details of the noise coupling coefficients and the online monitor are described.

Alignment noise

The above noise investigation procedure has been applied to several noise sources such as laser intensity and auxiliary length controls. As an example, an investigation of alignment noises is reported in this paper. TAMA is a power-recycled Fabry-Perot Michelson interferometer which consists of many mirrors. The alignment noises originate from rotational motion of the mirrors. If the laser beam passes through the center of mirror gyration, there is in principle no influence on the GW channel. Otherwise, the rotational motions are converted into displacement noise along the beam axis. In this paper, such contamination of the GW channel is called alignment noise. The noise coupling coefficients are determined by the displacement between the beam axis and the mirror center; hereafter this displacement is called off-centering. In the frequency band of TAMA observations, such rotational motions are caused by alignment sensor noises. The rotational motions are well suppressed by wave-front sensing and servo systems, which have a unity-gain frequency of 10 Hz. Measurements of the transfer functions and noise spectra revealed that the rotational motions contaminate the GW channel via the path illustrated in Fig. 1.
In the case of TAMA, the GW signal is obtained from the L− control servo, which has a unity-gain frequency of 1 kHz; L− denotes the differential motion of the two Fabry-Perot cavities. On the other hand, the alignment servo has a unity-gain frequency of 10 Hz. The alignment noise L−(align) can therefore be represented as L−(align) = a α, where α is a rotational motion of the mirror and a is the coupling coefficient of the alignment noise. WF, A, H, D and F are the transfer functions of the whitening filter, coil actuator, interferometer, photodetector and servo filter, respectively. The coupling coefficient can be obtained by measuring the transfer function (TF) from Va to V0, injecting an artificial signal Vinj into the alignment control servo. Here G is the open-loop gain of the L− control servo. We measured the transfer functions A, Aa, WFfb and WFa beforehand; these are assumed to be stable. The open-loop gain G is monitored by the online calibration system [1]. By monitoring the ratio of V0 to Va, we can obtain the coupling coefficient continuously.

Coupling coefficient

The relationships between the coupling coefficient and off-centering were investigated. As described above, the coupling coefficient a corresponds exactly to the amount of off-centering; it has the dimension of length. To reduce the noise contributions, each mirror is controlled by the following methods. The gyration centers of the front mirrors are adjusted relative to the beam axis. Four coil actuators are installed on each mirror to control the cavity length and the two rotational motions. The coupling coefficients are minimized by adjusting the actuator gain balance. For the end mirrors, the beam orientations are controlled by steering mirrors. The relationships between the coupling coefficient and the actuator gain balance are shown in Fig. 2; the left and right panels show those of the pitch and yaw motions, respectively. After adjustment of the actuator gain balance, we can reduce the off-centering to 0.1 mm or less. Owing to beam jitter, the centering cannot be adjusted more finely than this. Similar centering accuracies are also realized for the end mirrors with beam orientation control.

Monitoring of noise spectra

The spectrum of the alignment noise is useful for monitoring various detector conditions, and for veto analysis. To obtain the total amount of alignment noise, eight degrees of freedom are taken into account. In our detector, the four mirrors which form the two Fabry-Perot cavities play the most important role in producing the GW signal, and each mirror has two important rotational degrees of freedom, pitch and yaw. Because these degrees of freedom are assumed to be independent, their noises are added in quadrature. Figure 3 shows the noise spectrum of L− and the total alignment noise as solid and dashed lines, respectively. To monitor the coupling coefficients, a calibration signal at a fixed frequency is injected into the alignment servo loop. Because there are eight degrees of freedom, eight calibration peaks would be needed to monitor every loop simultaneously. To minimize the number of calibration peaks, each coupling coefficient is instead evaluated every 210 s with a calibration signal. The calibration frequency is chosen to be 78.125 Hz, because the alignment noise is dominant in the frequency region below 100 Hz, as shown in Fig. 3. The Nyquist frequency of our analog-to-digital converter limits the observation band to 156.25 Hz. In the frequency region above 200 Hz, other noise sources contaminate the GW channel.
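A minimal sketch of how such a line-monitoring loop could be implemented digitally is given below. Only the calibration frequency (78.125 Hz) and block length (210 s) are taken from the text; the demodulation approach, sampling rate, channel models and coupling value are assumptions for illustration, not the TAMA pipeline.

```python
import numpy as np

def line_amplitude(x, fs, f_cal):
    """Complex amplitude of the calibration line in x, by demodulating
    at f_cal and averaging over the whole block (lock-in style)."""
    t = np.arange(x.size) / fs
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f_cal * t))

fs, f_cal, block = 1000.0, 78.125, 210      # sample rate [Hz], line [Hz], block [s]
n = int(fs * block)
t = np.arange(n) / fs
rng = np.random.default_rng(0)

v_a = np.sin(2 * np.pi * f_cal * t)              # injected line in the servo channel
v_0 = 5e-3 * v_a + rng.normal(0, 0.05, n)        # GW channel: scaled line + noise
                                                 # (5e-3 is an arbitrary coupling)

coupling = line_amplitude(v_0, fs, f_cal) / line_amplitude(v_a, fs, f_cal)
print(f"|V0/Va| at {f_cal} Hz over one {block} s block: {abs(coupling):.2e}")
```

Repeating this estimate block by block yields a continuous record of the coupling coefficient, which is the essence of the monitor described above.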
Summary

We investigated a procedure for evaluating various noise contributions continuously. The procedure has been applied to several noise sources such as laser intensity and auxiliary length control; an investigation of alignment noises is the focus of this paper. Online monitoring of the alignment noises is useful for veto analysis: to detect gravitational wave signals, fake events must be removed, and such fake events can be identified by this noise monitor. The following results were obtained in this study:

• Noise contamination mechanisms can be investigated by transfer function measurements. This is a useful tool for noise hunting.

• Monitoring of the coupling coefficient is also useful for checking detector conditions and for minimizing noise contributions.

Using this monitor, a more stable sensitivity to gravitational-wave signals will be available. Online veto analysis using a similar technique is also under investigation.
2022-06-28T03:48:18.304Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "8b91bc684cebcb4219dd4236c83d64d55f8a796c", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/32/1/015", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8b91bc684cebcb4219dd4236c83d64d55f8a796c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225611465
pes2o/s2orc
v3-fos-license
Numerical production of /s/-like sound as Helmholtz resonance in the turbulence-induced pressure field inside a two-dimensional oral front cavity

A numerical production of /s/-like sound is dealt with by means of computational fluid dynamics in a two-dimensional model of the oral front cavity. The basic hypothesis is that the target sound is the outcome of the lowest or first-mode resonance in the pressure field inside the front cavity, whose resonance frequency is in the range 5-8 kHz. The model domain is composed of a small semi-closed area representing the front cavity, the upstream half-space, the downstream half-space and two channels that connect the upstream/downstream half-spaces to the cavity. Computation was carried out with the method of direct numerical simulation on the two-dimensional Navier-Stokes equations. The pressure wave emitted into the downstream half-space has a continuous spectrum with a single spectral peak around the assumed resonance frequency. The spatial/temporal characteristics of the turbulence and pressure fields are studied in relation to the so-called quadrupole field.

INTRODUCTION

The primary cause of fricative consonants is attributed to air turbulence, where a constriction at some section in the vocal tract is thought to play an essential role in the generation of turbulence [1][2][3]. A number of studies, among others, have been done since the 1950s on the spectral natures of different fricatives and the associated shapes of the vocal tract, and the results have been used to build synthesis models in the framework of source-filter theory. Among these preceding studies, an extensive work by Toda [37] on /s/, /S/ and others revealed that the vital portion of the vocal tract for each specific category of fricative changes considerably across subjects of different genders and languages, in different utterance postures (seated vs. stretched), and for different sound sequences (sustained vs. words with different vowels); and the sounds so spoken show a large diversity in their spectral profiles. Here, the shapes of the vocal tract were identified with the midsagittal and coronal sections. Several results have been reported with the approach of fluid dynamics. Howe and McGowan [38] obtained a theoretical solution for /s/. Adachi [39], Grandchamp et al. [40] and van Hirtum et al. [41] searched for a numerical solution for /s/ with the use of LES (large-eddy simulation) with three-dimensional non-structured elements, and recently Yoshinaga et al. [42,43] reported a result on the production of the sibilant fricatives /s/ and /S/ experimentally and numerically using a simplified three-dimensional vocal tract model. It has generally been considered, as stated in Catford [2], that the agent that brings forth /s/ is the turbulence caused by impingement on the teeth, a kind of wake turbulence as contrasted to the channel turbulence caused directly by the air jet. The model domains in [38] and [43], as well as in other model experiments such as Shadle [16], were set up based on this assumption. However, it is worth noting an argument posed by Meyer-Eppler [5] that the upper and lower incisors are not essential to pronounce a perfect German /s/, since the lower incisors had been found not to contribute to pronouncing it effectively, and patients whose upper incisors had been extracted could restore the faculty to pronounce a normal /s/ after a short lisping stage.
On the basis of the preceding investigations, the present paper studies the basic mechanism of the turbulence-induced pressure field generated inside the front cavity and the resultant pressure wave emitted into the free field in the case of /s/, as a continuation of the author's preliminary report [44]. Here, the front cavity means the part of the vocal tract between the constriction formed by the tongue blade and alveolar ridge at the back and the upper-lower incisor gap at the front. Our concern is the sole role of the front cavity, with no acoustical coupling to other parts of the vocal tract. Hence the model domain is composed of three parts: the air supply, the front cavity and the air discharge. The problem is dealt with in a two-dimensional setting, taking the following two points into consideration; the governing equations are solved with the method of direct numerical simulation (DNS). First, since the power spectra of natural /s/ have been reported to have several spectral peaks at frequencies below 10 kHz, with the most dominant one lying in the range 5-8 kHz or thereabouts, we adopt the hypothesis that it arises from the lowest or first-mode resonance of sound inside the front cavity coupled with its connecting inflow and outflow channels. Hence, our model domains will be configured with the first natural frequency, f1, lying in the above-mentioned frequency band. Since f1 is associated with a domain composed of a small semi-closed area and its connecting narrow channels, it must depend essentially on the total extent of the domain rather than on a representative length; that is, the resonance is in the mode of a Helmholtz resonator. Second, taking account of Meyer-Eppler's argument, the channel at the outlet side of the mouth cavity is configured to be straight. This differs from the L-like channel assumed in [38] and [43], which forces the airflow to bend at right angles. Admittedly, the two-dimensional modeling differs from the three-dimensional one because of (i) the nature of the turbulent flow field itself, typically the longer lifespan of vortices in the former than in the latter, and (ii) the structure of the inflow constriction in the model geometry. Regarding the latter, the two-dimensional inflow constriction, when embedded in three-dimensional space, represents a narrow slit of uniform width extending in the coronal direction, whereas the coronal cross-section of the real constriction is a small, oval hole or the like; accordingly, the real midsagittal section of the inflow constriction is too wide to adopt for the two-dimensional counterpart. Also, the real midsagittal section of the front cavity is too large for the two-dimensional model, for a similar reason.

PROBLEM SETTING

Two-dimensional Model Domain

Figure 1 shows a schematic view of our model domain, where CAV is the front cavity, UHS and DHS are the upstream and downstream half-spaces, and UCHN and DCHN are narrow channels. UHS and UCHN together take the role of the air supply: UHS is an air tank of infinitely large capacity kept at a constant pressure rise relative to the standard atmosphere p_s, from which air is supplied to CAV through UCHN, which constricts suitably to regulate the airflow. Inside CAV the supplied air is expected to become turbulent. It is then discharged into DHS, whose pressure is kept at p_s, through DCHN, which plays the role of the incisor gap and nearby area.
UHS and DHS are defined relative to offsets U0 and D0, at which their origins are set so that U0 = (u0x, u0y) and D0 = (d0x, d0y), respectively, are the midpoints of the exits of UCHN and DCHN. Hereafter the set {UCHN, CAV, DCHN} will be referred to as the U-C-D construct, or U-C-D for short. Toda [37] reported several MRI images and many sketches of the vocal tract at the midsagittal section for /s/ (and others), and several graphs of area functions; the former show different shapes and sizes of the front cavity, from slender to fat, with or without a notch at the bottom side, etc., and the latter enable us to obtain their approximate dimensions. It is considered that the Helmholtz resonance is the main reason for allowing such a diversity in shape and size for /s/; see Sect. 2.3.2. By referring to this evidence, three model domains, made small to reduce the computational load, will be configured so as to work as Helmholtz resonators with f1 in the desired range, driven by turbulence. The first one, labeled D1, has a rectangular CAV such that f1 is about 10 kHz. It works as a pilot model for two tasks: (i) to determine a mesh discretization fine enough to numerically reproduce a pressure wave containing high-frequency components up to 10 kHz, and (ii) to gain a general understanding of the temporal/spatial behavior of the underlying turbulence field. UCHN takes a funnel-like shape for smooth airflow from UHS, and DCHN takes a similar shape for the same reason. The second and third domains, labeled D2 and D3, are those which are supposed to produce /s/-like sounds with two contrastive shapes of CAV; their UCHN and DCHN are similar to those of D1. In what follows, D stands for the generic name of D1, D2 and D3. Since there is no a priori knowledge about the spatial behavior of the turbulence field in CAV, discretizing CAV into uniform square elements is the most desirable for obtaining an accurate numerical solution. Hence, CAV is composed of rectangles whose side lengths satisfy a simple integral ratio. The boundaries of UCHN and DCHN are configured only with line segments and simple curves. Based on this assumption, a boundary-fitted mesh is applied to discretize D; it defines a set of 'structured' finite elements as a generalization of finite-difference meshes to fit curved boundaries.

The Initial-boundary Value Problem

Under the assumption of omitting the energy equation, the governing equations are the compressible Navier-Stokes equations for the unknowns ρ, p and u = (u, v) in D, where ρ is the density, u and v are the x and y components of the flow velocity, and p is the pressure, representing the deviation from p_s. The τxx, τxy, etc. are the viscous stress tensor components, with Lamé coefficients μ and λ. Under the assumption that p is small, the constitutive law p = c²(ρ − ρ_s) holds, which closes the system of equations. The physical constants of air assume the following values at p_s = 1 atm: density ρ_s = 1.29 kg/m³, sound speed c = 340 m/s, μ = 18.2 × 10⁻⁶ Pa·s and λ = −(2/3)μ. The unknowns are assumed to satisfy boundary and initial conditions in which ∂D is the boundary of D, ∂p/∂n is the normal derivative of p on ∂D, and p0 (p0 > 0) is a constant pressure rise from p_s. The quantity p0 stands for the so-called intra-oral pressure (IOP); it is the only parameter that determines the behavior of the unknowns for a given D.
We shall be concerned with both the temporal and spatial aspects of p, u and q. In order to discriminate the two aspects, the following convention is adopted: when the spatial distribution is meant, the suffix '-field' is attached, as in p-field, u-field and q-field; when the temporal behavior is meant, the suffix '-wav' is attached, as in p-wav, a synonym of the waveform of p, etc.

Resonance of Turbulence-induced Pressure in a Semi-closed Domain

The behavior of the turbulence-induced pressure field can best be understood in relation to Lighthill's quadrupole term [45]. By eliminating the first time-derivative of momentum in the three-dimensional Navier-Stokes equations, with the viscosity terms omitted, an inhomogeneous wave equation for ρ with a right-hand-side term q, as defined by Eq. (6) below, is derived. When expressed in p in place of ρ, the equation reads (1/c²) ∂²p/∂t² − Δp = q (Eq. (5)), with q = ∂²(ρ u_i u_j)/∂x_i ∂x_j (Eq. (6)), where Δ is the Laplacian operator and q is the quadrupole term. On the right-hand side of Eq. (6), ρ was replaced with ρ_s in [45]. For the two-dimensional problem, Eqs. (5) and (6) are read with the indices running over the two coordinates. Although p and u_i are mutually coupled, the reaction of p on u_i is considered smaller than the reverse one. Hence, neglecting the former, Eq. (5) is regarded as an inhomogeneous wave equation in p with an 'external source function' q. This view immediately leads to the following when applied to the resonance of p in CAV: (i) resonance would take place at the natural frequencies f_i (i = 1, 2, ...) of the homogeneous wave equation in p (Eq. (7)), and (ii) actual resonance at a particular f_i takes place only when the spectrum of q covers f_i with sufficient intensity over a certain sub-domain of CAV. Note that such a sub-domain changes dynamically in shape and location in the turbulent u-field.

U-C-D construct as a Helmholtz resonator

Let Ω be a finite domain in three-dimensional space. Obviously, the smallest or first-mode resonance frequency, f1, of the wave equation of p in Ω subject to ∂p/∂n = 0 on ∂Ω takes the trivial value 0 Hz, with which a constant-valued modal shape function is associated, though this mode is in fact not oscillatory. The so-called Helmholtz resonator can be viewed as dealing with the first-mode resonance resulting from a small opening on ∂Ω, by which the definition domain of the problem is extended to the free field so as to make f1 > 0, while the associated modal shape function remains approximately constant over Ω except near the opening. In this context of modification of Ω, there may be multiple openings of arbitrary shape; let Ω' denote the derived domain. Then Ω' and its first-mode resonance may also be called the Helmholtz resonator and the Helmholtz resonance, respectively. It must be noted that a similarity transformation of Ω' with a scaling factor κ changes f_i to f_i/κ. The U-C-D construct of Fig. 1 is regarded as such an Ω' when considered in two-dimensional space. If the two channels were rectangular, with lengths l_k and widths w_k (k = 1, 2), then the following formula (derived by the usual mass-spring analogy) gives an approximate value of f1: f_H = (c/2π) √[(1/S) Σ_k w_k/(l_k + δ_k)], where S is the area of CAV and δ_1 and δ_2 are the so-called end corrections. It tells that (i) f_H is proportional to 1/√S for fixed w_k and l_k, (ii) f_H increases as the w_k increase, and tends to zero as the w_k shrink, for fixed S and l_k, and (iii) f_H remains unchanged if S and the w_k are doubled but the l_k are unchanged, etc.
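The displayed formula was lost in extraction; the sketch below uses the reconstruction f_H = (c/2π) √[(1/S) Σ_k w_k/(l_k + δ_k)], which is the standard mass-spring result in two dimensions and reproduces properties (i)-(iii) above. The dimensions and end corrections are illustrative guesses of the right order of magnitude, not the exact geometries of D1-D3.

```python
import numpy as np

def helmholtz_2d(S, w, l, delta, c=340.0):
    """f_H = (c / 2*pi) * sqrt( sum_k w_k/(l_k + delta_k) / S ) for a 2-D
    cavity of area S [m^2] with channels of widths w and lengths l [m];
    delta are end corrections. All geometry values here are assumptions."""
    w, l, delta = (np.asarray(v, dtype=float) for v in (w, l, delta))
    return c / (2 * np.pi) * np.sqrt(np.sum(w / (l + delta)) / S)

S = 11.0e-6                      # cavity area of the order of D2 [m^2]
w = [0.8e-3, 0.55e-3]            # inlet / outlet widths [m] (illustrative)
l = [4.0e-3, 3.0e-3]             # channel lengths [m] (illustrative)
delta = [0.5e-3, 0.5e-3]         # assumed end corrections [m]
print(f"f_H = {helmholtz_2d(S, w, l, delta)/1e3:.1f} kHz")   # lands in the 5-10 kHz band
```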
It is expected that f_H approximates f1 also for the (non-rectangular) UCHN and DCHN when w_1 and w_2 in the formula are assigned the widths at the inlet and outlet of CAV; the validity is checked numerically in Sect. 3. Oscillation in Ω' is in fact damped, since it continuously loses part of its kinetic energy by radiation at the opening. A question then arises whether the quality factor, Q, decreases as the size of the opening increases. In the case of a cylindrical pipe of radius a opening to the free field, the answer is positive, since the reflection coefficient is a decreasing function of 2πa/λ (λ is the wavelength) [46]. (See also [47].) It is very likely that the same holds for Ω' with pipes of general shape, and also for those in two-dimensional space such as ours. The wall loss of a cylindrical pipe changes the characteristic impedance [48], but the effect is negligible for a ≈ 0.5 mm or larger in the frequency range above a few kHz.

Finite-element scheme

The governing equations are solved on the following boundary-fitted mesh, whose elements are each associated with a single set of nodal values {p, u, v} at the barycenter. The scheme is a finite-element version of MacCormack's explicit scheme [49], to which fourth-order artificial viscosity is further added to suppress the numerical instability arising from the nonlinear terms. CAV is discretized with uniform squares whose side length is denoted by h_c. As seen below, all other parts are discretized on the basis of h_c; hence it represents the fineness of the entire boundary-fitted mesh. DCHN is discretized with quadrilaterals whose side lengths are enlarged smoothly from h_c to connect the outlet of CAV with DHS. The same is applied to UCHN to connect the inlet of CAV with UHS. DHS is discretized with a finite number of rectangles whose side lengths are stretched toward infinity, starting from those lying on the boundary of DCHN. A function φ(s) that maps a finite interval to the half-line (R+) controls the rate of stretching; see the Appendix for the method. The same is applied to UHS.

Numerical values of q and A

The quadrupole and the area velocity are used as auxiliary quantities to understand the simulated results. The term q = ∂²(ρ u_i u_j)/∂x_i ∂x_j at an element e is evaluated with the nodal values of u at e and its eight surrounding elements. The area velocity, A, is the two-dimensional counterpart of the volume velocity; it is associated with a line segment Γ in D as the integral of u_n along Γ (Eq. (9)), where u_n is the normal component of u on Γ. Γ will be either the inlet or the outlet opening of CAV, to monitor the airflow. The line integral is evaluated with the trapezoidal rule.

Numerical stability

Under the condition that the square elements of side length h_c are the smallest ones in D, the scheme is considered stable when the time step Δt satisfies Δt ≤ h_c/(√2 (√(u² + v²) + c)) (Eq. (10)) for u = (u, v) in those elements. The actual Δt is taken, in view of Eq. (10), as Δt = h_c/(√2 × 1.4c) (Eq. (11)), by assuming √(u² + v²) ≤ 0.4c at all times. The amount of artificial viscosity is determined so as not to introduce excess damping.
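Eqs. (10) and (11) were also lost in extraction; the forms used above are reconstructions. The 0.4c bound is from the text, while the √2(|u| + c) CFL form is an assumption, adopted because it reproduces the 68 MHz internal rate quoted in the next section for h_c = 0.01 mm.

```python
import numpy as np

c = 340.0          # sound speed [m/s]
h_c = 0.01e-3      # smallest square-element side, the N = 100 case [m]

# Assumed bound: dt <= h_c / (sqrt(2) * (|u| + c)), with |u| <= 0.4c imposed.
dt = h_c / (np.sqrt(2) * (0.4 * c + c))
print(f"dt = {dt:.3e} s -> internal rate {1/dt/1e6:.0f} MHz")   # ~68 MHz
```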
Time-averaged quantities of p, A and q

Let T be the timespan over which the time-average is taken. The pressure variable p is expected to realize an /s/-like sound; in this context, p may be called the sound pressure. However, two points need to be made clear. First, the waveform of p, or p-wav, represents the so-called instantaneous sound pressure observed at a fixed location. Second, it contains a portion of the static p0 and certain infrasonic components. As far as the sound pressure level is concerned, these low-frequency components must be removed from p. Hence, the sound pressure level of p-wav, written as p̄, denotes the root-mean-square (rms) over T of the p-wav preprocessed with a high-pass filter of cutoff frequency 100 Hz; it is represented in dB (re. 0 dB = 20 μPa). The time-averaged A, written as Ā, is the mean of A over T; it is represented in cm²/s. The time-averaged q, written as q̄, is the rms of q-wav over T; it is represented in dB (re. 0 dB = 1 atm/m²).

Identification of f_i and Q

Approximate values of the first and/or second natural frequencies associated with Eq. (7) can be found by spectral analysis of p solved with a small pressure perturbation as the initial condition, in place of Eq. (4), where the perturbation is a small quantity, e.g., 1.0 × 10⁻⁵ atm. A timespan of 3 ms of simulation is sufficient to identify f1 and/or f2 of the present problem. The Q-factor associated with f1 can also be estimated from the decay rate of a band-pass filtered waveform of p with center frequency f1.

Preliminaries

Since the natural sound of /s/ spoken as a single sustained tone appears as stationary noise with a unique spectral profile when observed in the free field, the simulated sound p is expected to have the same property. In this sense, the spectral profile and the sound pressure level are the key criteria for evaluating the nature of p. In order to fix the argument, a point in DHS located 4 cm downstream of the exit of DCHN was chosen as the reference point. In what follows, P* and p*-wav denote the reference point and the waveform of the pressure at this point, respectively. Our interest is directed to numerical solutions satisfying p̄* ≳ 90 dB. Computation was carried out in double-precision arithmetic under an OpenMP environment on a Windows machine with an i7 processor (3.67 GHz). Each run of the simulation covered the first 100 ms of the phenomenon. The outputs {p, u, v, q} at the prespecified observation points were saved in multi-channel wave files in single-precision format after down-sampling to 192 kHz. (The sampling frequency of the original output data is 68 MHz when h_c = 0.01 mm.) Similarly, the outputs of A at the inlet and outlet of CAV, denoted A0 and A1, respectively, were saved in another wave file. Postprocessing on the wave files was done for the timespan T = 90 ms, skipping the first 10 ms (to remove the initial transient response), to obtain the sound pressure levels, power spectra, auto- and cross-correlations, etc. Unless otherwise stated, the short-term Fourier transform was obtained by DFT with a 1,024-point rectangular window running at 75% overlap to take the average; the symbol F is used to denote this transform, as in Fp for p-wav.

Model Domain D1

Figure 2 shows the U-C-D of D1. It occupies a total area of 22.4 mm², of which CAV is 6.0 mm²; {Γ0, Γ1} are the inlet and outlet of CAV (see Eq. (9)), and {U0, D0} are the points to which the origins of UHS and DHS, respectively, are attached (see Fig. 1). The first two natural frequencies {f1, f2}, and the Q associated with f1, were identified as f1 = 9.8 kHz (Q = 4.5) and f2 near 20 kHz. On the other hand, a practical upper bound on N would be 120, since the cases N > 120 (h_c < 1/120 mm), which were not tested, would solve the equations more accurately but require too heavy a computational load in time and memory. Taking the above facts into consideration, it was decided to choose N = 100 among the three cases N = 80, 100 and 120, i.e., h_c = 1/100 mm.
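The averaged short-term Fourier analysis described above (1,024-point rectangular window, 75% overlap, 192 kHz data) maps directly onto a Welch estimate; the test signal below is a placeholder with a line near the f1 of D1, not simulated output.

```python
import numpy as np
from scipy.signal import welch

fs = 192_000                                   # rate after down-sampling [Hz]
rng = np.random.default_rng(1)
t = np.arange(fs) / fs                         # 1 s placeholder "p-wav":
p_wav = rng.normal(0, 1, fs) + 0.5 * np.sin(2 * np.pi * 9800 * t)  # noise + line

f, pxx = welch(p_wav, fs=fs, window="boxcar",  # rectangular window,
               nperseg=1024, noverlap=768)     # 1,024 points, 75% overlap
print(f"spectral peak at {f[np.argmax(pxx)]/1e3:.2f} kHz")   # ~9.8 kHz (f1 of D1)
```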
At first glance, the five spectral curves run almost parallel to one another, with a prominent peak near f1. These peaks are certainly the first-mode resonance. There are two less prominent humps, near the origin (0 kHz) and near 20 kHz. The latter is the second-mode resonance. The former is not a resonance, however; it is caused by the high-intensity spectral components of q in the low-frequency region built up near the outlet of CAV (see Sect. 3). We shall take the case p0 = 0.8 cmH₂O. Figure 6 shows the power spectra of p-wav at nine points in CAV, four points in DHS and two points in UHS, together with their sound pressure levels. Here, the spectral curves are labeled in bold type with the names of the points where the source waveforms are acquired; for example, C1 is the spectral curve of p-wav at point C1. The full range of the vertical axis of Fig. 6 is twice as wide as that of Fig. 4, so all the spectral curves in Fig. 6 are drawn as if the resonances were less acute than those in Fig. 4. These curves can be classified into the following three groups according to the areas they belong to.

Group I (CAV). C1, ..., C9 all decrease as the frequency increases, keeping within 5 dB of level difference among them, except in the low-frequency region near the origin, where the difference between C6 and C4 amounts to 15 dB. Small humps appear around f1, indicating the existence of resonance.

Group II (DHS). Before examining the spectral profiles of p, let us see how p propagates in both DHS and UHS. Figure 7 is a snapshot of the high-pass filtered p-field with cutoff frequency 100 Hz, together with p*-wav for the timespan of the simulation (100 ms). D1, ..., D4 have prominent spectral peaks near f1, showing the first-mode resonance. Regarding p̄ at the eight points, including those whose spectral curves are not shown in Fig. 6, the following holds. First, for all these p̄ except at D4, which is the farthest among the Di (i = 1, ..., 8), the regression equation (14) holds with R² = 0.9971, where x is the ratio of the distances D0Di to D0D1. This conforms to the theoretical law of attenuation by distance (−3 dB per distance doubled) in the two-dimensional half or whole space. Second, p̄ at D4 is lower by 2.06 dB than the value estimated by Eq. (14). This must be due to the effect of the stretched meshes, which are too coarse to cope with the high-frequency components of p propagating into them. A large droop in the tail on the higher-frequency side of D4, as compared to D1 and D2, is the evidence; D3 shows a similar tendency, though to a lesser degree.

Group III (UHS). The tails on the higher/lower-frequency sides of U1 and U2 are raised considerably as compared to those of Group II. The former comes from the second-mode resonance, whose occurrence is due to the long and thin geometry of UCHN, which behaves almost as an open pipe; the latter is due to the pressure wave caused by random fluctuation of the air-jet orientation occurring just downstream of the inlet of CAV, whose spectral components concentrate below 1 kHz or thereabouts. Although this effect propagates both upstream and downstream, it is much larger on the upstream side because of the location where it originates.
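The attenuation-by-distance check in Group II amounts to regressing level on log₂ of the distance ratio. Since the regression equation itself was lost in extraction, the levels below are synthetic placeholders built to follow the −3 dB-per-doubling law, not the simulated p̄ values.

```python
import numpy as np

x = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])   # distance ratios D0Di / D0D1
rng = np.random.default_rng(2)
p_bar = 90.0 - 3.0 * np.log2(x) + rng.normal(0, 0.1, x.size)   # synthetic levels [dB]

slope, intercept = np.polyfit(np.log2(x), p_bar, 1)   # linear fit in log2(x)
fit = slope * np.log2(x) + intercept
r2 = 1.0 - np.var(p_bar - fit) / np.var(p_bar)
print(f"slope = {slope:.2f} dB per doubling (theory: -3), R^2 = {r2:.4f}")
```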
Quadrupole in CAV

Take again the case p0 = 0.8 cmH₂O. Figure 8 shows the q-field in gray scale (top left) together with the underlying u-field in oriented line segments (top right) in CAV at t = 57.0 ms. Here, the gray scale represents |q| up to 1.0 × 10⁴ atm/m², at which the scale becomes saturated. (It can be seen in red and blue for positive and negative q in the PDF version of the paper.) The areas of high-intensity |q|, in dark gray, will be referred to as hotspots. The oriented line segments, shown decimated to 1/16 in density, represent the magnitude and direction of u. The figure also shows the high-frequency components above 7 kHz (high-pass filtered at a 7 kHz cutoff) contained in the q-field (bottom left) and the u-field (bottom right). The hotspots appearing therein are the agents that cause the resonance of p at f1 (and higher). In what follows, we look into the spatial difference in the spectrum of q-wav inside CAV. Figure 9 shows (i) the power spectra of q-wav in two frequency-dB ranges, together with q̄, at the nine points C1, ..., C9, and (ii) the q-wavs at points C4, C6 and C8. Here, the spectral curves are labeled with the respective point names in bold type, such as C1 for the spectral curve of q-wav at point C1, as in the case of p-wav. In the top-left graph of (i), all nine curves decrease as the frequency increases toward 40 kHz, within 10 dB of level difference at maximum among them and without a hump around f1; note that in terms of rms values, the level difference is 3.9 dB. The top-right graph of (i) is a zoomed-in view of the above, limited to the audio-frequency range. Here, the three curves labeled Ĉ4, Ĉ6 and Ĉ8, in thick line, are smoothed versions obtained by quadratic curve-fitting on C4, C6 and C8. They highlight in the frequency domain the essential nature of the respective q-wavs, which differ from each other in the amplitude, acuteness and density of their randomly oscillating motion shown in the time domain. Ĉ4 slants down the least as the frequency increases, being almost flat below 6 kHz; it resembles white noise the most. Ĉ8 is lower than Ĉ4 by about 5 dB. Point C4 is at the middle left of CAV, directly downstream of the inlet of CAV, and C8 is at the bottom center. In contrast to these two, Ĉ6 slants down the most, taking the largest level at low frequencies among all nine curves, about 15 dB above that of Ĉ4 near 0 Hz. C6 is at the middle right of CAV, the nearest point to the outlet of CAV. This area is where the flow becomes stagnant, which is considered to boost the intensity of q at low frequencies owing to the slow-down of the transport velocity of hotspots, whereby the intensity of p near 0 Hz is raised. At this point, let ρ_{i,j} be the normalized correlation function between the q-wavs at points C_i and C_j (the auto-correlation if i = j, the cross-correlation if i ≠ j). Every point is separated from the others by 0.37 mm or more. Figure 10 presents the graphs of ρ_{4,j} and ρ_{6,j} plotted against the delay τ (τ ≥ 0) for j = 1, ..., 9, as the most contrasting cases. Case ρ_{4,j}: ρ_{4,4} decays rapidly from 1.0 as τ increases, dropping below 0.02 in magnitude when τ > 0.05 ms, and every ρ_{4,j} for j ≠ 4 stays below 0.02 in magnitude for all τ ≥ 0. This implies that the q-wav at C4 strongly has the nature of white noise and is almost independent of the q-wavs at other points. Case ρ_{6,j}: ρ_{6,6} decays slowly, in contrast to ρ_{4,4}, and the ρ_{6,j} (j ≠ 6) keep a noticeable magnitude even at large values of τ, among which ρ_{6,3} undulates the most (C3 is at the top right, near the outlet of CAV). The above result reveals that the quadrupole tends to lose its randomness in both time and space at locations far from the air jet.
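A sketch of the normalized correlation ρ_{i,j}(τ) used above is given below; the two placeholder signals mimic the contrast between the white-noise-like q-wav at C4 and the slowly decaying one at C6, and are not simulated data.

```python
import numpy as np

def norm_corr(a, b, max_lag):
    """Normalized correlation rho(tau) for tau = 0..max_lag samples; it is the
    auto-correlation when a is b and the cross-correlation otherwise."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = a.size
    return np.array([np.mean(a[:n - k] * b[k:]) for k in range(max_lag + 1)])

fs = 192_000
rng = np.random.default_rng(3)
q4 = rng.normal(size=fs // 10)                       # white-noise-like (cf. C4)
q6 = np.convolve(rng.normal(size=fs // 10),          # smoothed noise: slow decay (cf. C6)
                 np.ones(200) / 200, mode="same")

max_lag = int(0.0005 * fs)                           # tau up to 0.5 ms
for name, (a, b) in {"rho_4,4": (q4, q4), "rho_6,6": (q6, q6),
                     "rho_4,6": (q4, q6)}.items():
    rho = norm_corr(a, b, max_lag)
    print(name, f"rho(0) = {rho[0]:.2f}, rho(0.5 ms) = {rho[-1]:.2f}")
```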
(i) D2. CAV is 11.0 mm². The natural frequencies {f1, f2} were identified as {6.1, 9.2} kHz, with Q = 8.8 for f1 (f_H = 5.9 kHz when w_1 and w_2 assume the inlet and outlet widths). Figure 12 shows the power spectra of p*-wav, together with the associated p̄* and Ā0, at p0 = 0.6, 0.4 and 0.3 cmH₂O. The first-mode resonance appears around f1; the second-mode one can be seen around f2 for p0 = 0.4 and 0.3 cmH₂O. As a supplementary result, f1 increases from 5.5 kHz to 6.3 kHz and Q decreases from 11.6 to 7.8 when the width of DCHN is widened from 0.55 mm to 0.85 mm at the neck and from 4.4 mm to 6.8 mm at the exit.

(ii) D3 with different lengths of DCHN. In comparison with D2, D3 (Fig. 13) is configured so that CAV is slender, with DCHN having a variable neck length of d mm that takes on three values (Eq. (15)). Figure 14 shows the power spectra of p*-wav for the d of Eq. (15), together with the associated p̄* and Ā0, at p0 = 0.6 cmH₂O. As d increases, f1 decreases and the peak level at f1 rises with increasing sharpness of resonance. Increasing d also decreases f2, but to a lesser degree than f1.

DISCUSSION AND CONCLUSIONS

The proposed models have been shown to produce /s/-like sounds in terms of the power spectrum. The following discusses the spatial and temporal behavior of the q-field in the case of D1 in more detail. (The cases of D2 and D3 more or less parallel that of D1.) In Fig. 8, the air jet collapses at a very early stage of development. This is caused by a strong entrainment occurring in the small, semi-closed domain of CAV; accordingly, the resultant turbulent flow prevails throughout CAV. The q-field shown in the top-left image is characterized by a patchy pattern sprinkled with hotspots. An animated sequence of such snapshots (sampled every 0.01 ms) exhibits that (I) hotspots are born continuously around the collapsing area of the air jet, as well as almost continuously where a portion of the turbulent flow happens to collide violently with the boundary wall or within itself; (II) once born, they start to deform and decay while moving around; and (III) eventually they disappear, either by dying out inside CAV or by exiting from it. An animated sequence of the high-pass filtered q-field (cutoff at 7 kHz; the bottom-left image shows a snapshot) reveals a finer and more dynamic structure of the hotspots that essentially contribute to the resonance at f1. The three q-wavs at the bottom of Fig. 9 are temporal records of hotspots passing randomly one after another over the respective locations. It is apparent that the faster in speed or the smaller in size the hotspots, the shorter the period of their spike-like swings in the positive and negative directions, which results in higher-frequency components in the spectrum of q-wav, and hence in the spectrum of p-wav. Regarding the quality of tone, each p*-wav of D2 and D3 sounds like /s/ to some degree, but it is a little rough by the author's subjective judgment. The roughness may be attributed to an insufficient number of hotspots produced in the two-dimensional models.
We shall return to the argument posed by Meyer-Eppler [5] on the recovery of /s/ after extraction of the upper incisors. The phenomenon can be interpreted on the basis of Helmholtz resonance as follows, under the assumption that the q-field inside the mouth cavity is essential. At the moment the upper incisors are extracted, the condition of Helmholtz resonance for a proper /s/ is lost, since Q is lowered and f_1 is raised by an enlarged opening at the exit of the mouth (see Sect. 2.3.2). After a short lisping stage, the patient acquires, consciously or unconsciously, an articulatory configuration suitable for a proper resonance by narrowing the opening and/or enlarging the volume of the front cavity, to raise Q once lowered and to lower f_1 once raised. Our numerical result with the straight channel at the exit of the mouth is opposed to the common theory [2]. However, this does not mean that the teeth have no effect. In the presence of teeth, the hotspots from the resultant turbulence would contribute to the production of /s/ in cooperation with those explained in (I)-(III) above.

At last, we note the working range of p_0. Both D_2 and D_3 produced /s/-like sounds at p̄_⊥ ≈ 90 dB with p_0 = 0.6 cmH_2O. However, Hixon [11] reported that the intraoral pressure (IOP) for /s/ (averaged over nine speakers) took on 4.34, 7.20 and 10.45 cmH_2O at three speech effort levels. This suggests that the two-dimensional configuration brings about a lower working range of IOP than the three-dimensional one. The shape of the inflow constriction mentioned in Sect. 1 is most probably the reason for the difference.

In conclusion, we claim that the numerical evidence presented herein helps us better understand the underlying mechanism of the production of /s/ by highlighting the role of channel turbulence in terms of hotspots. The remaining issue is to locate the vital area of turbulence, in terms of the q-field, that produces high-frequency components covering f_1, with and without the teeth, in the three-dimensional configuration.

It is worth mentioning the production of an /ʃ/-like sound in relation to a similarity transformation on the domain. If D_2 or D_3 is dilated by a factor λ = 1.5–2, then f'_1 = f_1/λ comes into a typical frequency range for the prominent spectral peak of /ʃ/ to stand, so that the dilated domain would become a model domain for the production of an /ʃ/-like sound. In fact, the tone quality of the resultant sounds from such dilated domains resembles a kind of /ʃ/ by the author's subjective judgment. Such a dilation expands the area of the concerned domain by a factor of λ^2, or the volume by a factor of λ^3 in the case of three-dimensional space. Interestingly, the time-stretched waveform p̃(t) = p_⊥-wav(t/λ), derived from p_⊥-wav(t) of the original D_2 or D_3, gives a similar auditory impression.

Put ΔV_i = V_{i+1} − V_i. Then it holds, for small h_0, that [...]. The same property holds for ΔH_j = |H_{j+1} − H_j|. The following is a guideline for selecting concrete values of {h_0, m, a, b, l, B} to discretize DHS. Here, E denotes the line segment representing the exit opening of DCHN. The theoretical σ(l − 0) = ∞ shall be replaced with σ(l) = σ_1, where σ_1 is a large number, e.g. 10^20, for numerical computation. 1. Interface with DCHN. Put a = (4/3)Y (Y is the half-length of Γ), and h_0 = h_e (h_e is the length of the element sides lying on E). 2. Stretch control at s = b. Let λ be the wavelength of a plane pressure wave oscillating at 5 kHz. Put B = (λ/20)/h_0. Then, determine b so as to satisfy σ(b) = 2λ and σ'(b) = B. 3.
Size of interval I. Set m = ⌈(b + 2a)/h_0⌉ (⌈x⌉ is the integral part of x), and l = m h_0. Note 1. The discretized elements at the positive side of V_{m−1}, as well as at the positive side of H_{m−1} and at the negative side of H_{−m+1}, are all very large rectangles because of σ(l) = σ_1; they are called the farthest rectangles, with (two or four) sides that are very large. Note 2. The unknowns p and u on the farthest rectangles are subject to the boundary condition Eq. (3).
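As a worked illustration of the guideline above, the following sketch computes {a, h_0, B, m, l} from assumed geometry values. Since the stretch function σ is not reproduced in this excerpt, the value of b that would be obtained from σ(b) = 2λ and σ'(b) = B is simply assumed; all numbers are illustrative, not values from the paper.

```python
import math

# Assumed inputs (illustrative only)
Y   = 2.0e-3      # half-length of the interface segment Gamma [m]
h_e = 0.1e-3      # element side length on the exit opening E [m]
c   = 343.0       # speed of sound [m/s]
f   = 5.0e3       # reference frequency [Hz]

# Step 1: interface with DCHN
a  = (4.0 / 3.0) * Y
h0 = h_e

# Step 2: stretch control at s = b
lam = c / f                  # wavelength of a 5 kHz plane pressure wave (~68.6 mm)
B   = (lam / 20.0) / h0      # target slope sigma'(b)
b   = 5.0e-3                 # assumed; in practice solved from sigma(b) = 2*lam, sigma'(b) = B

# Step 3: size of interval I (the bracket in the guideline is taken as a ceiling here)
m = math.ceil((b + 2.0 * a) / h0)
l = m * h0

print(f"a = {a*1e3:.3f} mm, h0 = {h0*1e3:.3f} mm")
print(f"lambda = {lam*1e3:.1f} mm, B = {B:.1f}")
print(f"m = {m}, l = {l*1e3:.3f} mm")
```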
Using support vector machines to explore differences in functional connectivity between deficit and non-deficit schizophrenia based on gray matter volume

Objective: Schizophrenia can be divided into deficit schizophrenia (DS) and non-deficit schizophrenia (NDS) according to the presence of primary and persistent negative symptoms. So far, few studies have explored the differences in functional connectivity (FC) between the two subtypes based on regions of interest (ROIs) derived from gray matter volume (GMV), and the characteristics of the corresponding brain networks are still unknown. This study aimed to investigate the alterations of functional connectivity between DS and NDS based on ROIs obtained by machine learning algorithms from differential GMV. The relationships between these alterations and clinical symptoms were then analyzed. In addition, the thalamic functional connectivity imbalance in the two groups was further explored.

Methods: A total of 16 DS patients, 31 NDS patients, and 38 healthy controls (HC) underwent resting-state fMRI scans; the patient groups were further evaluated with clinical scales, including the Brief Psychiatric Rating Scale (BPRS), the Scale for the Assessment of Negative Symptoms (SANS), and the Scale for the Assessment of Positive Symptoms (SAPS). Based on GMV image data, a support vector machine (SVM) was used to classify DS and NDS. Brain regions with high weight in the classification were used as seed points in whole-brain FC analysis and thalamic FC imbalance analysis. Finally, partial correlation analysis explored the relationships between the altered FC and the clinical scales in the two subtypes.

Results: A relatively high classification accuracy was obtained with the SVM. Compared to HC, FC was increased in NDS between the right inferior parietal lobule (IPL.R) and the bilateral thalamus and lingual gyrus, and between the right inferior temporal gyrus (ITG.R) and the salience network (SN). The FC between the right thalamus (THA.R) and the visual network (VN), and between ITG.R and the right superior occipital gyrus, was higher in the DS group than in HC. Furthermore, compared with NDS, the FC between ITG.R and the left superior and middle frontal gyrus was decreased in the DS group. A thalamic FC imbalance, characterized by frontotemporal-THA.R hypoconnectivity and sensory motor network (SMN)-THA.R hyperconnectivity, was found in both subtypes. The FC value between THA.R and the SMN was negatively correlated with the SANS score in the DS group but positively correlated with the SAPS score in the NDS group.

Conclusion: Using an SVM classification method based on ROIs derived from GMV, we highlighted the differences in functional connectivity between DS and NDS from the local level to the brain-network level, which provides new information for exploring the neural physiopathology of the two subtypes of schizophrenia.

Introduction

Due to the absence of objective biological markers, the diagnosis and treatment of schizophrenia constitute one of the most complex clinical challenges of modern psychiatry, and the extreme heterogeneity among patients further hinders present research (Goldsmith et al., 2021). Therefore, researchers try to parse the symptomatology of schizophrenia into more homogeneous diagnostic categories. Deficit schizophrenia (DS), proposed by Carpenter et al. (1988), is a homogeneous subtype characterized by trait-like, primary and prominent negative symptoms.
However, DS patients have more severe negative symptoms, worse long-term prognosis, greater cognitive impairment, lower recovery rates, and a higher frequency of family history of schizophrenia (Kirkpatrick et al., 2001; Strauss et al., 2010; Potvin et al., 2021). Therefore, differentiating the two subtypes may have important implications for understanding the psychopathology and improving clinical interventions in these two subgroups of schizophrenia.

Magnetic resonance imaging (MRI) is widely used in research on a variety of mental illnesses and nervous system diseases, and provided early promise for the discovery of the neuroanatomical differences between the two subtypes of schizophrenia (Wang et al., 2015; Lei et al., 2019). Some studies found extensive DS-specific brain structural changes in gray matter volume (GMV), white matter volume, CSF volume, and cortical structure, including frontal, parietal, and temporal regions (Fischer et al., 2012; Podwalski et al., 2022). However, previous functional neuroimaging studies of the functional connectivity differences between these two subgroups of schizophrenia based on ROIs from GMV are scarce.

Several studies have reported different patterns of FC abnormalities in DS and NDS patients, although the results remain inconclusive. For example, Ke et al. (2010) suggested that schizophrenia patients exhibiting positive symptoms had significantly increased leftward asymmetry of functional connectivity, whereas the negative-symptom group exhibited increased rightward asymmetry of functional connectivity, and the strength of the asymmetry in these regions was correlated with symptom ratings. Zhou et al. (2019a) probed numerous abnormal FCs of nerve pathways between the two patient groups, mainly concentrated in the fronto-occipital, frontotemporal, and insula-visual cortex, as well as the temporo-occipital pathway. In addition, a recent study demonstrated abnormal patterns of FC in the nucleus accumbens network between DS and NDS (Zhou et al., 2021). Meanwhile, using the independent component analysis (ICA) method, further studies found modular-level alterations in DS compared with NDS and healthy controls, with the distinct and common disruptions mainly focused on the SN, sensory motor network (SMN), DMN, and VN (Yu et al., 2017; Zhou et al., 2019b; Fan et al., 2022). Nevertheless, a limitation of these earlier studies on the FC of DS/NDS is that such functional brain connectivity results can be biased by the selection of the seed region or spatial network template. In addition, there has been a paucity of studies addressing the thalamocortical imbalance in the two subtypes of schizophrenia, which is characterized by prefrontal-thalamic hypoconnectivity and sensorimotor-thalamic hyperconnectivity observed in resting-state fMRI studies and has been implicated in the pathophysiology of schizophrenia (Anticevic et al., 2015; Avram et al., 2018; Wu et al., 2022).

Support vector machine (SVM), one of the most commonly used machine learning (ML) methods in pattern recognition (Lei et al., 2020), has been widely utilized as a powerful computational approach to classify schizophrenic patients from healthy controls and to predict outcomes based on neuroimaging data (Wang et al., 2018; Chang et al., 2021). There are also very few studies that use SVM to classify and predict DS and NDS. Based on tryptophan catabolites and Consortium to Establish a Registry for Alzheimer's Disease features, Kanchanatawan et al.
(2018) used SVM to strongly segregate deficit from non-deficit schizophrenia and healthy controls. On the other hand, 144 patients were successfully classified as deficit patients using an SVM classifier based on the severity, persistence over time, and possible secondary sources (e.g., depression) of negative symptoms in the research of Fervaha et al. (2016). However, few studies have used magnetic resonance data to classify DS and NDS. Based on effective feature selection from these image data, SVM can find more objective seed regions for functional connectivity analysis of schizophrenia.

In the present study, we applied SVM to discriminate DS from NDS using their GMV data. Then, the high-weight classified brain regions were used as seed points in whole-brain FC analysis and thalamic FC imbalance analysis of DS and NDS. Finally, we investigated the relationship between the altered FC and the clinical scales. We hypothesized that (1) using the SVM classification method, this study can achieve the classification of the two subtypes of schizophrenia and obtain key brain regions from the analysis of GMV; (2) this study can find differences in functional connectivity between DS and NDS from the local level to the brain-network level, which provides new information for exploring the neural physiopathology of the two subtypes of schizophrenia; and (3) the observed altered FC between DS, NDS, and HC will also be correlated with the clinical scales.

Participants

A total of 86 naturally right-handed Han Chinese participants ranging in age from 20 to 65 years were recruited in this study: 48 schizophrenia patients and 38 matched healthy controls. All the patients were recruited from the psychiatric rehabilitation unit of Yangzhou Wutaishan Hospital, Jiangsu Province, China. The inclusion criteria for the patients were: (1) an explicit diagnosis of schizophrenia according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V); (2) stable psychiatric symptoms after antipsychotic medication for at least 12 months before participation. The exclusion criteria for the patients were: (1) severe neuropsychiatric comorbidities, such as head trauma or intellectual disability; (2) alcoholism or substance abuse; (3) physical therapy including electroconvulsive therapy; (4) contraindications for MRI; (5) noticeable head motion (>3 mm in translation or 3° in rotation). One patient was excluded because of large head motion, and the remaining 47 patients were enrolled in the final research. According to the Chinese version of the Schedule for the Deficit Syndrome (SDS) (Wang et al., 2008), the patients were divided into two groups, DS and NDS, comprising 16 DS patients and 31 NDS patients, respectively. Patients with at least two of the following symptoms present at a moderately severe level and persistent over 12 months were defined as having DS: restricted affect, diminished emotional range, poverty of speech, curbing of interests, diminished sense of purpose, and diminished social drive; all the symptoms required the absence of secondary sources (e.g., medication side effects, depression, etc.). In total, 38 gender-, age-, and handedness-matched HC volunteers were recruited from the community through local advertisements. Unstructured clinical interviews were conducted to exclude HCs who had a history of organic brain disorders, intellectual disability, or severe head trauma, as well as a history of personal or family psychiatric disorder.
All participants gave informed consent to participate in this study, which was approved by the Institutional Ethical Committee for Clinical Research of Zhongda Hospital Affiliated with Southeast University.

Assessments of clinical symptoms and antipsychotic treatment

The severity of the schizophrenic symptoms was evaluated with the Brief Psychiatric Rating Scale (BPRS), the Scale for the Assessment of Negative Symptoms (SANS), and the Scale for the Assessment of Positive Symptoms (SAPS). All patients had received antipsychotic medications according to the case clinician's preference for at least 12 months before participation. The details of treatment were assessed from the patients' or their guardians' reports and the hospital records. The dosage of antipsychotic medication of each patient was recorded and converted to chlorpromazine-equivalent mean daily dosages (MDD) (Woods, 2003). Table 1 illustrates the clinical and demographic data of all participants.

Multimodal MRI data acquisition

All participants were scanned with a 3T MR system (GE HDx, Chicago, IL, USA) with an eight-channel phased-array head coil in the Subei Hospital of Jiangsu Province, Yangzhou, China. T1-weighted images were acquired with a three-dimensional spoiled gradient echo sequence as follows: repetition time (TR) = 11.94 ms, echo time (TE) = 5.044 ms, flip angle = 15°, slice thickness = 1 mm without gap, number of slices = 172, field of view (FOV) = 240 × 240 mm, matrix size = 256 × 256. In addition, R-fMRI data were acquired with a gradient recalled echo echo-planar imaging (GRE-EPI) sequence: repetition time (TR) = 2,000 ms, echo time (TE) = 25 ms, flip angle = 90°, number of slices = 35, field of view (FOV) = 240 × 240 mm, slice thickness = 4 mm without gap, matrix size = 64 × 64, voxel size = 4 × 4 × 4 mm³, 240 volumes. All participants were asked to lie quietly awake in the scanner with their eyes closed, and their heads were comfortably positioned with cushions inside the coil during the MRI scan to minimize head motion.

Image preprocessing and VBM analysis

Statistical Parametric Mapping 8 (SPM8) and Data Processing & Analysis for Brain Imaging (DPABI) were applied to preprocess the fMRI data in MATLAB.
The data preprocessing included the following steps: (1) discarding the first 10 volumes to achieve equilibrium and a steady state; (2) slice-timing correction; (3) realignment: head motion parameters were computed by estimating the translation in each direction and the angular rotation on each axis for each volume; we required the translational or rotational motion parameters to be less than 3 mm or 3°; the frame-wise displacement (FD), which indexes the volume-to-volume changes in head position, was also calculated; (4) spatial normalization: individual structural images were first co-registered with the mean functional image; the transformed structural images were then segmented and normalized to the Montreal Neurological Institute (MNI) space using a high-level non-linear warping algorithm based on the exponentiated Lie algebra (DARTEL) technique (Ashburner, 2007) to acquire the diffeomorphic anatomical registration; finally, each functional volume was spatially normalized to MNI space using the deformation parameters estimated in the previous step and resampled to 3 mm cubic voxels; (5) nuisance covariate regression: six head motion parameters, cerebrospinal fluid signals, white matter signals, and the global mean signal were regressed from the data; (6) spatial smoothing with a Gaussian kernel of 8 × 8 × 8 mm³.

T1-weighted structural brain images were visually inspected for motion and artifacts before VBM analysis and for segmentation errors prior to inclusion in the group analyses. First, the VBM8 toolbox (http://dbm.neuro.uni-jena.de/vbm) was adopted to preprocess and segment the images that passed quality control into gray matter, white matter, and cerebrospinal fluid. Then, the images underwent non-linear normalization to MNI space with the DARTEL algorithm after bias correction and segmentation. The images were modulated by non-linear warping only. Finally, the normalized gray matter images were smoothed with a 6 mm kernel and used as characteristic parameters for the SVM method to determine the ROIs for functional connectivity.

Determination of ROIs with the SVM method and functional connectivity analysis

In order to classify the three groups of HC, DS, and NDS, and to find the brain regions with GMV differences between the two subtypes of schizophrenia from the imaging data, we adopted the Pattern Recognition for Neuroimaging Toolbox (PRoNTo; www.mlnl.cs.ucl.ac.uk/pronto) (Schrouff et al., 2013), run in MATLAB, which aims to facilitate the interaction between the machine learning and neuroimaging communities. Based on the PRoNTo software, we then applied a linear-kernel support vector machine (SVM), one of the most commonly used machine learning techniques in neuroimaging, to achieve the classification. In the SVM training phase, the GMV images of each subject from the two subtypes are used as features, and weights are assigned to these features for maximal separation between the groups using a hyperplane, which serves as the decision boundary. Classification labels are determined by the sign of the dot product between the feature weights and the test sample. We used the default soft-margin parameter of C = 1. Meanwhile, we employed a 10-fold cross-validation scheme to assess the performance of the models generated by these algorithms.
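As a sketch of the classification setup just described (the study itself used PRoNTo, not the code below), the same design — a linear-kernel SVM with soft margin C = 1, evaluated by 10-fold cross-validation, with voxel weights recovered for ranking — can be expressed with scikit-learn. The feature matrix X (one row of vectorized smoothed GMV per subject) and the labels y are assumed placeholder inputs:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Assumed inputs: placeholder random data standing in for vectorized GMV maps.
rng = np.random.default_rng(0)
X = rng.standard_normal((47, 10_000))   # 47 patients x 10,000 voxels
y = np.array([0] * 16 + [1] * 31)       # 0 = DS, 1 = NDS

clf = SVC(kernel="linear", C=1.0)       # linear kernel, default soft margin
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold CV accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")

# Fit on all data to inspect the voxel weights; ranking by |weight| and
# keeping the top 1% mirrors the paper's selection of high-weight regions.
clf.fit(X, y)
w = clf.coef_.ravel()                    # one weight per voxel (linear kernel only)
k = max(1, int(0.01 * w.size))
top_voxels = np.argsort(np.abs(w))[-k:]  # indices of the top 1% discriminative voxels
print(f"top 1% = {k} voxels; example indices: {top_voxels[:5]}")
```

For a linear kernel the weight vector lives in voxel space, which is what makes the top-1% ranking used in the next step possible; with non-linear kernels no such per-voxel weight exists.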
Subsequently, we identified the brain regions that accounted for the top 1% of classification weights, and the numerical values of these high-classification-weight GMV regions were extracted with the Resting-State fMRI Data Analysis Toolkit (REST), using these brain regions as masks. Thereafter, the GMV values were converted to Z values via Fisher Z transformation and used for the correlation analysis. The study then selected the brain regions whose GMV values were negatively correlated with the SANS score as seed points for the FC analysis.

In the present study, these high-weight classified brain regions between DS and NDS were used as regions of interest for whole-brain FC analysis. Pearson correlation analysis was carried out to obtain the correlation coefficient (r value) between the mean time series of the seed point and that of each whole-brain voxel in each participant. Fisher Z transformation was then performed to convert the r value to a Z value, which conforms to the normal distribution. The numerical values from the regions of altered FC were extracted with REST, using the altered regions as masks, and were later used for correlation analysis.

Statistical analyses

The descriptive statistical analyses of the demographic data and clinical scales were conducted using the SPSS 17.0 software package. These parameters were compared with the chi-squared test, two-sample t-tests, and analysis of variance (ANOVA), as appropriate. ANOVA was performed using DPABI to investigate differences in whole-brain FC among the DS, NDS, and HC groups, with age and education as covariates. False discovery rate (FDR) correction was performed for multiple comparisons at the voxel level. The statistical threshold was set at a corrected p < 0.05. The associations between the relevant GMV and FC values of brain regions and the clinical scale scores in both the DS and NDS groups were assessed with partial correlation analysis (age, education, and chlorpromazine equivalent as covariates).

There was no significant difference among the three groups in terms of age, gender, and head motion. Compared with the DS patients, patients with NDS presented higher scores on SAPS-T and lower scores on BPRS-T and SANS-T. Both groups of patients received antipsychotic medications including olanzapine, risperidone, quetiapine, clozapine, and other commonly used antipsychotics, and there was no statistical difference in the chlorpromazine-equivalent doses of the two groups (p > 0.05).

Classification performance and regions contributing to discrimination between DS and NDS

The classification performance for the GMV feature is summarized in Table 2. The total classification accuracy between DS and NDS was 78.6%, and an illustration of the classification is presented in Figure 1; the ROC curve for GMV was also obtained with the SVM classifier (Figure 1B, area under the curve (AUC) = 0.84). The regions that contributed to the GMV discrimination and accounted for the top 1% of classification weights included the right inferior parietal lobule, right upper parietal lobule, left upper parietal lobule, left precentral gyrus, right precentral gyrus, postcentral gyrus, paracentral lobule, precuneus, right inferior temporal gyrus, right thalamus, and cerebellum (Supplementary Figure 1). The GMV values of these 11 high-weight classified brain regions were then extracted with REST and later used for correlation analysis.
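To make the seed-based FC computation described in the Methods above concrete, here is a minimal NumPy sketch of the seed-to-voxel Pearson correlation and its Fisher Z transform (in the study this was done voxel-wise with REST/DPABI; the array names, shapes, and random data are illustrative assumptions):

```python
import numpy as np

def seed_fc_zmap(bold, seed_mask):
    """Seed-based functional connectivity.

    bold:      (T, V) array, one preprocessed BOLD time series per voxel
    seed_mask: (V,) boolean array marking the seed ROI (e.g. THA.R)
    returns:   (V,) Fisher-z map of seed-to-voxel Pearson correlations
    """
    seed_ts = bold[:, seed_mask].mean(axis=1)            # mean time series of the ROI
    b = bold - bold.mean(axis=0)                          # demean each voxel series
    s = seed_ts - seed_ts.mean()
    r = (b * s[:, None]).sum(axis=0) / (
        np.sqrt((b ** 2).sum(axis=0)) * np.sqrt((s ** 2).sum()) + 1e-12
    )                                                     # Pearson r per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))    # Fisher z = atanh(r)

# Illustrative use: 230 retained volumes (240 acquired minus 10 discarded)
# by 5,000 voxels of random data, with the first 50 voxels as a mock seed.
rng = np.random.default_rng(1)
bold = rng.standard_normal((230, 5_000))
mask = np.zeros(5_000, dtype=bool)
mask[:50] = True
zmap = seed_fc_zmap(bold, mask)
print(zmap.shape, float(zmap[mask].mean()))  # seed voxels correlate with their own mean
```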
The results of the correlation analysis identified six brain regions whose GMV values were negatively correlated with the SANS score: the right inferior parietal lobule, right upper parietal lobule, left precentral gyrus, precuneus, right inferior temporal gyrus, and right thalamus (shown in Figure 2). The GMV values were not significantly correlated with the other scales.

Functional connectivity

Functional connectivity analysis was performed based on three seed points, comparing the patients and HC. As shown in Figures 3 and 4, several abnormal FCs were found among the three groups. Compared to HC, the FC of IPL.R with the bilateral thalamus and lingual gyrus, and of ITG.R with the SN, was enhanced in NDS. Both the DS and NDS groups were found to have an imbalance in thalamic FC, which was enhanced between THA.R and the SMN but decreased between THA.R and the frontotemporal area. In addition, in the DS group the FC was enhanced between THA.R and the VN, and between ITG.R and the right superior occipital gyrus. The FC between ITG.R and the left superior and middle frontal gyrus was decreased in the DS group when compared with NDS.

Altered FC of THA.R between DS, NDS, and HC, and its relationship with clinical symptoms

The brain regions with altered FC of THA.R are listed in Table 3, and the relationships are shown in Figure 5. The results show that, compared with HC, the FC was enhanced between THA.R and the SMN but decreased between THA.R and the frontotemporal area in both the DS and NDS groups. In addition, the FC between THA.R and the VN was also enhanced in the DS group. The FC value between THA.R and the SMN in the DS group was negatively correlated with the SANS score (p < 0.05), whereas the FC value between THA.R and the SMN in the NDS group was positively correlated with the SAPS score (p < 0.05).

Discussion

The main findings of the current work are as follows: (1) the classification accuracy of the SVM was 78.60% between DS and NDS and 84.63% between DS and HC, and six important regions of interest related to clinical symptoms were obtained from the classification between DS and NDS; (2) compared to HC, the FC increased between IPL.R and the bilateral thalamus and lingual gyrus, and between ITG.R and the SN, in NDS, while in the DS group the FC was enhanced between THA.R and the VN and between ITG.R and the right superior occipital gyrus, and the FC between ITG.R and the left superior and middle frontal gyrus was decreased relative to NDS; (3) interestingly, the FC value between THA.R and the SMN in the DS group was negatively correlated with the SANS score, but the FC value between THA.R and the SMN in the NDS group was positively correlated with the SAPS score. To the best of our knowledge, this is the first study using machine learning methods to classify the two subtypes of schizophrenia and normal controls based on GMV and to perform whole-brain functional connectivity analysis based on ROIs, with less bias, obtained from the classification. This study further explores the neural physiopathology of the two subtypes of schizophrenia.

FIGURE 3 Group comparisons of functional connectivity (FC) for six regions of interest (ROIs) among deficit schizophrenia (DS), non-deficit schizophrenia (NDS), and healthy controls (HC). (A) Group analyses of the FC of the right inferior parietal lobule between the NDS and HC groups. The significance threshold was set at p < 0.05 after false discovery rate (FDR) correction (voxel p < 0.05, cluster size ≥ 146). (B) Group analyses of the FC of the right thalamus between the patient and HC groups. The significance threshold was set at p ≤ 0.05 after FDR correction (voxel p < 0.05, cluster size ≥ 200). (C) Group analyses of the FC of the right inferior temporal gyrus between the patient and HC groups.
The significance threshold was set at p < 0.05 after FDR correction (voxel p < 0.05, cluster size ≥ 162/100/100).

There are few previous studies, based on electrophysiological, neurocognitive, oxidative-stress toxicity, neuroimmune and other parameters, that use machine learning methods to classify DS, NDS, and normal controls; correspondingly, their classification accuracies are in the range of 70-85% (Kanchanatawan et al., 2018; Maes et al., 2020; Taylor et al., 2020). Unlike these past studies, the current study extracted features from GMV image maps and used the SVM method to classify DS and NDS; the six brain regions most important for distinguishing DS and NDS, whose GMV values were negatively correlated with the SANS score, included the right inferior parietal lobule, right upper parietal lobule, left precentral gyrus, precuneus, right inferior temporal gyrus, and right thalamus. The results of previous MRI studies of DS and NDS have shown that the frontal lobe, temporal lobe, and precuneus are the main areas of gray matter reduction and that the degree of reduction is negatively correlated with negative symptoms; the results of this research are consistent with these findings. Positive symptoms such as hallucinations, delusions, and thinking disorders are important differences between DS and NDS. Meanwhile, the temporal lobe is related to auditory and language processing and to thought, so it may be the material basis for the difference in positive symptoms between the two types of patients. The precuneus plays a major role in higher-order self-processes and the attribution of emotion to self and others. The precuneus gray matter volume is significantly different between the DS and NDS subtypes, which may explain why patients with deficit schizophrenia have difficulty in emotional expression.

Several functional connectivity and local network abnormalities, involving the thalamus, inferior parietal gyrus, salience network, sensory motor network, and visual network, were found in both the DS and NDS groups compared with the HCs. It has been reported previously that most of these regions show abnormal functional connectivity in patients with schizophrenia (Skudlarski et al., 2010; Zalesky et al., 2011). The thalamus is an important nucleus within the brain and is a relay pathway that conducts upward all sensations except smell. In addition, the thalamus is also involved in emotional activities; thalamic damage can lead to emotional disorders as well as decline of cognitive and speech functions (Avram et al., 2018; Wu et al., 2022). In the present study, the FC between THA.R and the VN was enhanced in the DS group when compared with HC, which represents an abnormal visual process that may serve to improve neurocognitive function in DS patients, and thus may indicate a unique neuropathological mechanism in this schizophrenia subgroup. The present findings also demonstrated hypo-connectivity between the right ITG and the left superior/middle frontal gyrus in the DS group relative to the NDS group.

FIGURE 4 The circular graph obtained by comparing the functional connections between the three groups based on the three regions of interest (ROIs) (IPL.R, THA.R, ITG.R). IPL.R, right inferior parietal lobule; THA.R, right thalamus; ITG.R, right inferior temporal gyrus.

Previously published studies have shown that the right ITG, which is important for language formulation and face perception (Schultz et al., 2000; Dien et al., 2013), exhibits volume reduction in DS patients.
Furthermore, abnormally increased activation of the right ITG was found to be related to deficits in facial recognition and interpersonal communication in autistic patients (Schultz et al., 2000), which are phenotypically similar to the negative symptoms of schizophrenia. Part of the frontoparietal circuitry comprises the mirror neuron system, which is activated during basic emotion understanding and emotion experience sharing. Therefore, taking these factors together, the hypo-connectivity between the right ITG and the frontal-parietal circuit in DS patients is likely to be a potential neural mechanism for the prominence of negative symptoms.

Published data (Anticevic et al., 2015; Li et al., 2017; Avram et al., 2018) support the hypothesis that thalamocortical imbalance may be an inherent feature of schizophrenia. The results of this study are consistent with this hypothesis, in that the FC was enhanced between THA.R and the SMN but decreased between THA.R and the frontotemporal area in both the DS and NDS groups when compared with HC. In addition, the FC value between THA.R and the SMN in the DS group was negatively correlated with the SANS score, whereas the FC value between THA.R and the SMN in the NDS group was positively correlated with the SAPS score. Previous studies have shown that the SMN, which mainly regulates sensory and motor functions, is a core network vulnerable to dysfunction in the emotional functions, emotion recognition, and cognitive functions of psychiatric disorders (Wood et al., 2016; Davis et al., 2017). Similar conclusions have been presented in previous studies; for instance, Cheng et al. (2015) reported an association between sensorimotor cortico-thalamic hyperconnectivity and negative symptoms, and Anticevic et al. (2014) reported an association with general psychopathology. Combining these studies, it can be speculated that the presence of persistently progressive negative symptoms in patients with DS further exacerbates sensorimotor cortico-thalamic hyperconnectivity.

In conclusion, the present study investigated the functional connectivity alterations between DS and NDS, from the local level to the whole brain, and their relationships with clinical symptoms, based on the regions of interest obtained by the SVM classifier, and a relatively high classification accuracy was obtained.

FIGURE 5 The relationships between the altered functional connectivity (FC) values of the right thalamus (THA.R) and clinical symptoms. (A) The relationship between the FC value of THA.R-SMN and the Scale for the Assessment of Negative Symptoms (SANS) score in the deficit schizophrenia (DS) group. (B) The relationship between the FC value of THA.R-SMN and the Scale for the Assessment of Positive Symptoms (SAPS) score in the non-deficit schizophrenia (NDS) group. The significance threshold was set at p < 0.05. SMN, sensory motor network; THA-R_SMN, FC of the right thalamus and the sensory motor network; DS_SANS, SANS score in the DS group; NDS_SAPS, SAPS score in the NDS group.

Based on the ROIs, including IPL.R, ITG.R, and THA.R, this study demonstrated that the FC between THA.R and the VN was enhanced in the DS group when compared with HC, and that the FC between the right ITG and the left superior/middle frontal gyrus was decreased in the DS group relative to the NDS group. The findings of this study corroborate the previous hypothesis that a thalamocortical imbalance is present in both subtypes of schizophrenia.
In addition, the FC value between THA.R and the SMN in the DS group was negatively correlated with the SANS score, whereas the FC value between THA.R and the SMN in the NDS group was positively correlated with the SAPS score, which deepens the understanding of the pathological mechanisms of the two subtypes of schizophrenia.

Limitations

Some limitations of this study should be addressed. First, our study enrolled a small sample; larger samples are needed in the future to confirm the current findings. Second, only one machine learning method, the support vector machine, was used for classification in this study, and the classification accuracy was not very high, so later research could improve the performance of the classifier. Third, this study included only the right thalamus in the analysis of thalamocortical imbalance, which leaves the characterization of thalamic FC imbalance in the two subtypes of schizophrenia incomplete.

Data availability statement

The original contributions presented in this study are included in the article/Supplementary material; further inquiries can be directed to the corresponding authors.

Ethics statement

The studies involving human participants were reviewed and approved by the Institutional Ethical Committee for Clinical Research of Zhongda Hospital Affiliated to Southeast University. The patients/participants provided their written informed consent to participate in this study.
Effect of Pyrasosulfuron Ethyl, Bensulfuron Methyl, Pretilachlor and Bispyribac Sodium on Soil Microbial Community and Soil Enzymes under Rice-Rice Cropping System

The effects of pyrasosulfuron ethyl (10% WP), bensulfuron methyl (0.6%), pretilachlor and POE bispyribac sodium (10% EC) on soil microflora and soil enzymes under a rice-rice ecosystem were studied in a long-term herbicide trial during Rabi 2015-16 and 2016-17 and Kharif 2015 and 2016. The soil samples were collected from the experiments conducted at the Wetland, TNAU, by the AICRP-Weed Management unit, Coimbatore. The pooled data of the seasons revealed that all four herbicides, viz., PE pyrasosulfuron ethyl (10% WP), bensulfuron methyl (0.6%), pretilachlor and POE bispyribac sodium (10% EC), reduced the soil microflora (except actinobacteria) and the soil enzymes up to 5 days after herbicide application. Subsequently, the microflora increased about 10-fold by 15 days after herbicide application when compared with hand weeding. The treatment bensulfuron methyl (0.6%) along with pretilachlor (6.6%) fb hand weeding significantly increased the microbial population and soil enzymes during the Rabi seasons. In Kharif, pyrasosulfuron ethyl fb bispyribac sodium recorded the highest microbial population, followed by pyrasosulfuron ethyl fb hand weeding. However, the activity of the soil enzymes, viz., alkaline phosphatase, dehydrogenase and urease, was significantly highest in the treatment pyrasosulfuron ethyl fb hand weeding, followed by PE pyrasosulfuron ethyl fb POE bispyribac sodium, compared with the unweeded check. The phylogenetic analysis of 16S rDNA from 20 bacterial colonies revealed that all 20 colonies were identified as Bacillus sp.

Introduction

Soil microorganisms are an imperative link in the soil-plant-herbicide-fauna-man relationship: they play a major role in herbicide degradation, serve as bioindicators of changes in soil biological activity, and can act as bioherbicides. At normal field-recommended rates, herbicides are considered to have no major or long-term effect on microbial populations. Soil microorganisms like bacteria, fungi, algae, protozoa, actinomycetes and some nematodes have a vital role in maintaining soil productivity, and the soil microbial biomass is considered an active nutrient pool for plants. However, indiscriminate use of herbicides and of new herbicide molecules affects soil health, especially the soil microorganisms and enzymes, and may lead to long-term accumulation. Herbicide toxicity to soil microorganisms can alter community structure, including potential increases in plant or animal pathogens.
Soil microflora were found to be affected by the use of herbicides [1]. Against this background, the present study aimed to assess the influence of pyrasosulfuron ethyl (10% WP), bensulfuron methyl (0.6%), pretilachlor and POE bispyribac sodium (10% EC) on the soil microbial community and soil enzymes under a rice-rice ecosystem in a long-term herbicide trial.

Materials and Methods

Soil samples were collected from all the treatments after application of the pre-emergence (PE) and post-emergence (POE) herbicides during Rabi 2015-16 and 2016-17 and Kharif 2015 and 2016, at a depth of 0-3 cm on days 1, 3, 5, 15 and 30, for the enumeration of microflora and enzyme activities. Nutrient agar (NA), Rose Bengal medium, Kenknight's agar medium, Burk's N-free medium and soil extract medium were used for the enumeration of total bacteria, fungi, actinobacteria, diazotrophs and phosphobacteria, respectively. Serial dilution and the pour plate technique were used for enumerating the microbial populations. The plates were incubated at 30°C, for 48 h for the bacterial count, 3 days for fungi, diazotrophs and phosphobacteria, and 7 days for the actinobacteria colony count. Dehydrogenase activity was measured by the reduction of 2,3,5-triphenyltetrazolium chloride (TTC) to red-coloured triphenyl formazan (TPF), determined spectrophotometrically as per the method described by Nannipieri et al. [2]. The activity of alkaline phosphatase was assayed according to Tabatabai and Bremner [3] using p-nitrophenyl phosphate solution as the substrate with modified universal buffer (MUB) at pH 11. The urease activity was determined based on the ammonium released by urease activity when soil is incubated with tris(hydroxymethyl)aminomethane (THAM) buffer [4]. The results of Rabi 2015-16 and 2016-17 and Kharif 2015 and 2016 were pooled and the data were statistically analysed. The 16S rRNA gene sequence is used for bacterial identification and the assignment of close relationships at the genus and species level. The 16S rDNA gene fragments were amplified from selected bacterial isolates by PCR using primers (27F and 1492R). The sequence data generated by automated sequencing were subjected to a homology search through the Basic Local Alignment Search Tool (BLAST).

Results and Discussion

The microbial activities were more sensitive to the herbicides, leading to a slight initial suppressing effect, while the degradation of the herbicides in the rice fields favoured high microbial activity. The pooled analysis of Rabi 2015-16 and 2016-17 and Kharif 2015 and 2016 revealed that the recommended rates of the herbicides PE pyrasosulfuron ethyl (10% WP), bensulfuron methyl (0.6%), pretilachlor (6.6%) and POE bispyribac sodium (10% EC) affected the total bacteria, fungi, diazotrophs and phosphobacteria, as well as the soil enzymes alkaline phosphatase, dehydrogenase and urease, up to 5 days after herbicide application compared with hand weeding and the unweeded check (Figs. 1 and 2; Tables 1 and 2). Subsequently, all the microbial populations increased about 10-fold compared with the unweeded check. The same result was reported by Bowels et al. [5] and Chauhan et al. [6], and the initial decrease followed by an increase in microbial population could also be due to microbial multiplication on the increased supply of nutrients made available by herbicide degradation. The 16S rRNA gene has been used to produce DNA banding patterns that represent the whole microbial community, allowing significant changes of the microbial community structure in nature to be detected.
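As an illustration of the BLAST-based identification step described above, a 16S sequence can be submitted to NCBI BLAST programmatically with Biopython. A minimal sketch — the FASTA filename is an assumption, and qblast performs a live network query against NCBI, so it is slow and rate-limited:

```python
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# Assumed input: one Sanger-sequenced 16S rDNA amplicon (27F/1492R) in FASTA format.
record = SeqIO.read("isolate_C1_16S.fasta", "fasta")

# Submit a nucleotide BLAST search against the NCBI nt database (network call).
result_handle = NCBIWWW.qblast("blastn", "nt", record.seq)
blast_record = NCBIXML.read(result_handle)

# Report the best hit and its percent identity, as used for genus/species assignment.
best = blast_record.alignments[0]
hsp = best.hsps[0]
identity = 100.0 * hsp.identities / hsp.align_length
print(f"best hit:  {best.title}")
print(f"identity:  {identity:.1f}% over {hsp.align_length} bp")
```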
In the present study, the community structure of the bacteria was also studied by phylogenetic analysis of the 16S rDNA of the bacterial isolates using PCR. Five colonies from each trial (Rabi 2015-16 and 2016-17 and Kharif 2015 and 2016), twenty bacterial colonies in total, were subjected to phylogenetic analysis of 16S rDNA by PCR. The automated sequencing results revealed that all twenty colonies belonged to different species of Bacillus: Bacillus subtilis, Bacillus cereus, Bacillus licheniformis and Bacillus methylotrophicus (Fig. 3). However, studies on the effect of herbicide application on microbial community structure in the rice ecosystem are limited. From the present study we conclude that the herbicides have toxic effects on microorganisms, reducing their abundance, their activity and, accordingly, the diversity of their communities. However, the toxic effects of herbicides are normally most severe immediately after application. Later on, microorganisms take part in a degradation process, and the degraded organic herbicides then provide carbon-rich substrates, which in turn maximize the microbial population in the rhizosphere.
Voltage-gated proton channels in polyneopteran insects

Voltage-gated proton channels (HV1) are expressed in eukaryotes, including basal hexapods and polyneopteran insects. However, currently little is known about HV1 channels in insects. A characteristic aspartate (Asp) that functions as the proton selectivity filter (SF) and the RxWRxxR voltage-sensor motif are conserved structural elements in HV1 channels. By analysing Transcriptome Shotgun Assembly (TSA) databases, we found 33 polyneopteran species meeting these structural requirements. Unexpectedly, an unusual natural variation, Asp to glutamate (Glu), at the SF was found in Phasmatodea and Mantophasmatodea. Additionally, we analysed the expression and function of HV1 in the phasmatodean stick insect Extatosoma tiaratum (Et). EtHV1 is strongly expressed in nervous tissue and shows pronounced inward proton conduction. This is the first study of a naturally occurring Glu within the SF of a functional HV1 and might be instrumental in uncovering the physiological function of HV1 in insects.

Voltage-gated proton channels are found in most eukaryote kingdoms, from coccolithophores [1] to dinoflagellates [2], Chordata, fungi, plants and mammals [3,4]. In dinoflagellates (Lingulodinium polyedrum), HV1 channels trigger light emission [5]. In mammals, HV1 channels play a pivotal role in many physiological processes, including pH homeostasis, the respiratory burst of phagocytes and the maturation of sperm [6-8]. In several breast and colorectal cancers, HV1 is significantly upregulated [9,10]. Much less is known about HV1 channels in Hexapoda, especially in insects. Recently, an HV1 channel of the Zygentoma Nicoletia phytophila was characterized, and other HV1 sequences have been found in phylogenetically more basal hexapods [11]. As a member of the voltage-gated superfamily of ion channels, HV1 possesses four transmembrane segments (S1-S4) with a typical voltage-sensor element in the fourth transmembrane segment (S4) [3,4]. Compared to other voltage-gated ion channels, such as potassium, sodium and calcium channels, HV1 lacks the last two transmembrane segments (S5-S6) of the typical six-transmembrane-helix architecture, which usually compose the pore region. Instead, a different ion conduction pathway is established for protons, involving all four transmembrane segments, with a typical negatively charged aspartate in S1 as the proton selectivity filter (SF) [17-19].
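The database screen described in the Methods below keeps only sequences carrying the S4 signature motif RxWRxxR. A minimal sketch of such a motif filter in Python — the regular expression encodes RxWRxxR, while the FASTA filename and the fungal-variant check (RxWRxxK, used later in the paper to flag likely contaminations) are illustrative assumptions:

```python
import re
from Bio import SeqIO

S4_MOTIF = re.compile(r"R.WR..R")        # RxWRxxR; '.' stands for any residue x
FUNGAL_VARIANT = re.compile(r"R.WR..K")  # third Arg often replaced by Lys in fungi

def classify(seq: str) -> str:
    """Flag putative HV1 candidates by their S4 voltage-sensor motif."""
    if S4_MOTIF.search(seq):
        return "candidate HV1 (RxWRxxR present)"
    if FUNGAL_VARIANT.search(seq):
        return "possible fungal contamination (RxWRxxK)"
    return "no HV1 signature"

# Illustrative use on translated TSA entries stored in a FASTA file.
for rec in SeqIO.parse("tsa_translations.fasta", "fasta"):
    print(rec.id, classify(str(rec.seq)))
```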
Additional site-directed mutagenesis of this aspartate residue (position 112 in human HV1) revealed new aspects of structure-function relationships within HV1 channels [2,11,17,20,21]. To date, HV1 channels are considered to be dimeric, with the intracellular C-terminal domains connecting both subunits [22-24].

The aim of this study was to characterize the HV1 channel of the stick insect Extatosoma tiaratum. Besides the HV1 channel of the Zygentoma Nicoletia, this is the second HV1 channel from Hexapoda and the first of the class Insecta. Our database analysis provided a detailed picture of the presence and absence of HV1 genes in different hexapodan and insect orders. With the analysis of the tissue-specific expression in Extatosoma and the electrophysiological characterization, we hope to blaze a trail for uncovering the physiological function of HV1 channels in insects. The presence of an unusual glutamate residue as the SF within the S1 domain of Extatosoma HV1 was analysed and compared to the other hexapodan HV1 channel, from Nicoletia.

Database analysis

The BLAST algorithm was used to analyse insect transcriptome shotgun assembly (TSA) and genomic databases at NCBI. The recently described sequence of the Zygentoma Nicoletia phytophila (KT780722 [11,25]) was used as the query sequence. Only sequences harbouring the typical RxWRxxR motif in the S4 segment were used for further analysis.

Heterologous expression

The Extatosoma tiaratum EtHV1 gene was synthesized commercially (Eurofins/Genomics, Ebersberg, Germany). The synthesized DNA, including 5′ BamHI and 3′ EcoRI restriction sites, was cloned into a pEX-A2 plasmid. The gene was later subcloned into pQBI25-fC3 or pcDNA3.1 using the 5′ BamHI and 3′ EcoRI restriction sites, with GFP fused to the N terminus as previously described [2,11,17,25]. tSA201 cells (a human kidney cell line) were grown to 85% confluency in 35 mm culture dishes. Cells were transfected with 1.0 μg plasmid DNA using polyethylenimine (Sigma, St. Louis, MO, USA). After 12 h at 37°C in 5% CO2, cells were trypsinized and replated onto glass coverslips at low density for patch-clamp recording the same day and the next day. Green cells were selected under fluorescence for recording. As in [11,25], whole-cell patch clamp showed no other voltage- or time-dependent conductance under our recording conditions. The level of expression of EtHV1 was sufficiently high that potential contamination by native HV1 currents was negligible.

Electrophysiology

Patch-clamp recordings were done as described in [11,25]. A patch-clamp amplifier EPC 10 (HEKA, Lambrecht, Germany) was used. Recordings were stored on hard discs and analysed with Origin (Origin 2017, Northampton, MA, USA). Patch pipettes were made from borosilicate capillaries GC 150TF-10 (Harvard Apparatus, Holliston, MA, USA) and pulled using a Flaming/Brown automatic pipette puller P-1000 (Sutter Instruments, Novato, CA, USA). Pipettes were heat-polished to a tip resistance typically ranging from 5 to 9 MΩ with the pipette solutions used. Electrical contact with the pipette solution was achieved by a chlorinated silver wire, and the bath was connected via an agar bridge made with Ringer's solution. Seals were formed with Ringer's solution (in mM: 160 NaCl, 4.5 KCl, 2 CaCl2, 1 MgCl2, 5 Hepes, pH 7.4) in the bath, and the potential was zeroed after the pipette was placed above the cell.
Whole-cell and inside-out solutions (pipette and bath) included 100 mM buffer close to its pKa, with tetramethylammonium (TMA+) and methanesulfonate (CH3SO3−) as the main ions, 1 mM EGTA, and 1-2 mM Mg2+, with an osmolarity of 300 mOsm·kg−1. The buffers were 2-(N-morpholino)ethanesulfonic acid (MES) at pH 5.5 and pH 6.0, bis-(2-hydroxyethyl)imino-tris-(hydroxymethyl)-methane (BIS-TRIS) at pH 6.5, and PIPES at pH 7.0. The resistance of the seals was usually > 3 GΩ. Currents are shown without correction for leak or liquid junction potentials. Data were collected between 19°C and 23°C.

Currents were fitted to a rising exponential to obtain the activation time constant (τ_act). The maximal proton conductance (g_H,max) was calculated from the steady-state current (the fitted current extrapolated to infinite time) using reversal potentials (V_rev) measured in each solution in each cell. In these fits, the initial delay was ignored, and the remaining current usually fitted a single exponential well. The threshold potential, V_thres, was determined from families of pulses as the potential where the first tail current was observed once the membrane was repolarized. The reversal potential was measured by two methods: when V_thres was negative to V_rev, it could be readily determined from the zero current; when V_thres was positive to V_rev, V_rev was determined with the tail-current method. The voltage dependence of activation was obtained by linear fitting of the activation-kinetics plots in the region of the curve where τ_act becomes faster with depolarization. Selectivity for protons was determined by comparison of the measured reversal potentials to the Nernst potential for protons (E_H) at the experimental ΔpH (pH_o − pH_i). The pH dependence of gating was evaluated in a pH range from 5.5 to 7.0 by linear regression of the data in V_thres-against-V_rev graphs, over a potential range from −70 mV to +70 mV.

Overexpression of the channels in small cells resulted in large proton currents, which removed enough protons from the cell to change pH_i considerably. Proton channel gating kinetics depend strongly on pH; therefore, proton depletion is a significant source of error. To minimize this problem, families of pulses with different lengths were applied: longer pulses were used near V_thres, where τ_act is slow, while shorter pulses were used at more positive voltages. Zinc inhibition was tested extracellularly at 0, 10 and 100 μM ZnCl2. EGTA was omitted from zinc-containing solutions. Families of pulses of different lengths were collected in each zinc condition, and exchanges of the external solutions were recorded during test-pulse protocols. The data are shown without corrections for buffer binding.

Structural model

A structural model of the transmembrane domain was constructed via homology modelling using Modeller [26,27] and the crystallographic structure of Ci-VSD in an open state as the template (PDB: 4G7V [28]). The amino acid sequences (residues G23 to S158 for Ci-VSD and residues G31 to V169 for Extatosoma) were aligned with MUSCLE [29]. A few inaccuracies in the alignment were corrected manually. The two sequences have 18% identity and 40% similarity. One hundred models were generated, and the best model according to the Modeller objective function was refined with 3DRefine [30]. Five solutions were generated and ranked according to the 3DRefine and RWPlus scores. The solution with the best ranking was conserved.
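As a sketch of the current-analysis steps above (the authors analysed their records in Origin, not in code like this), the rising-exponential fit for τ_act and the conversion of the steady-state current into a chord conductance can be written with SciPy; the data, delay, and voltages below are assumed example values:

```python
import numpy as np
from scipy.optimize import curve_fit

def rising_exp(t, i_max, tau):
    """Single-exponential activation: I(t) = I_max * (1 - exp(-t/tau))."""
    return i_max * (1.0 - np.exp(-t / tau))

# Illustrative record: a 4 s depolarizing step sampled at 1 kHz, with an
# initial delay before the exponential rise, plus noise.
t = np.linspace(0.0, 4.0, 4000)                  # time [s]
delay = 0.1                                      # s, delay ignored in the fit
rng = np.random.default_rng(2)
i_obs = rising_exp(np.clip(t - delay, 0.0, None), 120.0, 0.8) \
        + rng.normal(0.0, 2.0, t.size)           # current [pA]

mask = t > delay                                 # ignore the initial delay
(i_max, tau), _ = curve_fit(rising_exp, t[mask] - delay, i_obs[mask], p0=(100.0, 0.5))
print(f"tau_act = {tau:.2f} s, steady-state current = {i_max:.1f} pA")

# Chord conductance from the fitted steady-state (infinite-time) current:
# g_H = I_ss / (V - V_rev); V and V_rev are assumed example values here.
v, v_rev = 40.0, -29.0                           # mV
g_h = i_max / (v - v_rev)                        # pA/mV = nS
print(f"g_H = {g_h:.2f} nS at V = {v:.0f} mV")
```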
Polyneopteran TSA-database analysis

Using the typical proton channel signature motif of the voltage sensor in S4 (RxWRxxR), we identified 33 putative polyneopteran HV1 channels: nineteen in stick insects (Phasmatodea), eight in locusts/crickets (Orthoptera), three in webspinners (Embioptera), two in gladiators (Mantophasmatodea) and one in stoneflies (Plecoptera). A complete list of all identified channels with the respective GenBank accession numbers can be found in Table S1. All amino acid sequences compiled from TSA files are shown in Fig. S1. No HV1 sequence homolog was found in cockroach TSA databases or in the species-poor polyneopteran groups of ice crawlers and ground lice. The only sequences initially found in mantises (Metallyticus splendidus, GATB01324360, see Table S1) and earwigs (Forficula auricularia, GAYQ01077212) were subsequently removed from the 1KITE datasets, as they most likely represent a fungal contamination of the animal sample ([31], B. Misof, personal communication; Table S1). Further database analysis identified seven more partial sequences (six stick insects, one locust) with significant homology to HV1 channels, however not, or only partially, covering the signature motif RxWRxxR (Table S1B). Within the 18 full-length clones, sequence identity between species from different polyneopteran orders was 42%-55% within the core region of the channels (S1-S4; 63%-75% homology). Within a polyneopteran order, sequence identity was > 80% (Table S2). All sequences have the typical four-transmembrane structure comparable to other known HV1 channels: short loops between the transmembrane segments (8-16 amino acids) and rather short C- and N-terminal domains (~50 amino acids). Sequence length varied between 211 and 273 amino acids in total.

A striking sequence variation is found within the SF in the S1 segment. In most known HV1 channels, a negatively charged aspartate residue (D112 in human HV1) is important for proton selectivity. A similar aspartate is found in all orthopteran, mantodean and embiopteran sequences. In phasmatodean and mantophasmatodean sequences, however, this aspartate is replaced by an also negatively charged glutamate residue (Figs 1 and S2). A similar exchange (D to E) within the S1 selectivity filter has been generated artificially by mutagenesis [11,17,20,21,32], resulting in more negative activation and a speeding-up of activation kinetics while maintaining proton selectivity. Here, for the first time, we show that a glutamate residue occurs naturally at the SF position.

FIGURE 1 (A) The three voltage-sensor arginines in S4 are shown in blue, the glutamate SF (in S1) in green, and the conserved tryptophan as red sticks. Other arginine and lysine residues are depicted as blue lines, aspartate and glutamate residues as red lines. A hydrophobic gasket, depicted as a yellow surface, is formed by the side chains of V59, F101, V128 and I129 and separates the inner and outer aqueous vestibules. The network of stabilizing interactions, shown as gold dots (distances in Å), is similar to that in human HV1 (hHV1): R1, here R156, interacts with a glutamate residue, E69 (E119 in hHV1); R2 (R159) with the SF E62 (D112 in hHV1) and E136 (D185 in hHV1); and R3 with E104 (E153 in hHV1) and D125 (D174 in hHV1). (B and C) Sequence logo representation of the HV1 S1 (B) and S4 (C) domains, showing the amino acid frequencies at the respective positions in four different orders of polyneopteran insects. The first green residue shown (position 10 in B) belongs to the SF; the voltage-sensor arginine residues are highlighted in blue and the conserved S4 tryptophan residue in red. Number of sequences used for the analysis (S1, S4, respectively): Orthoptera (n = 3, 4), Embioptera (n = 3, 3), Phasmatodea (n = 15, 18) and Mantophasmatodea (n = 2, 2).

Do any hemipteran or holometabolan insects harbour an HV1 channel homolog? Despite being overrepresented in protein and nucleotide databases, the analysis of all TSA and genomic databases of hemipteran and holometabolan insects revealed only seven TSA sequences encoding the S4 signature motif. The genomic sequence data (mainly from Diptera) showed absolutely no evidence of the presence of an HV1 homolog in Hemiptera and Holometabola. A closer look at the respective TSA sequences showed that none of them has high homology to known hexapodan HV1 homologs; instead, they show strong homology to fungal HV1 sequences (in six cases) or to Chelicerata (one case). Therefore, all putative HV1 channels identified within these insect orders are likely due to parasitic contamination of the animal samples investigated (Table S1C). Indeed, fungal contaminations in particular are easily uncovered by sequence analysis of HV1 homologs, as the third
Indeed, fungal contaminations in particular are easily uncovered by sequence analysis of HV1 homologs, as the third arginine of the RxWRxxR motif is usually mutated to a lysine residue in fungi. We conclude that there is no evidence for the presence of HV1 channel homologs in Hemiptera or Holometabola. [Fig. 1 caption: (A) Structural model of the transmembrane domain. The three voltage-sensor arginines in S4 are shown in blue, the glutamate SF (in S1) in green, and the conserved tryptophan in red sticks. Other arginine and lysine residues are depicted in blue, aspartate and glutamate residues in red lines. A hydrophobic gasket, depicted as a yellow surface, is formed by the side chains of V59, F101, V128 and I129 and separates the inner and outer aqueous vestibules. The network of stabilizing interactions, shown as gold dots (distances in Å), is similar to that in human HV1 (hHV1). R1, here R156, interacts with a glutamate residue, E69 (E119 in hHV1); R2 (R159) with the SF E62 (D112 in hHV1) and E136 (D185 in hHV1); and R3 with E104 (E153 in hHV1) and D125 (D174 in hHV1). (B and C) Sequence logo representation of HV1 S1- (B) and S4-domains (C) showing the amino acid frequencies at the respective positions in four different orders of polyneopteran insects. The first green residues shown (position 10 in B) belong to the SF; the voltage-sensor arginine residues are highlighted in blue and the conserved S4 tryptophan residue in red. Number of sequences used for analysis in S1 and S4, respectively: Orthoptera (n = 3, 4), Embioptera (n = 3, 3), Phasmatodea (n = 15, 18) and Mantophasmatodea (n = 2, 2).] Structure of the Extatosoma tiaratum HV1 channel (EtHV1) For further characterization, we selected the HV1 channel of the stick insect Extatosoma tiaratum (EtHV1, GenBank Acc. No. GAWG01024136). EtHV1 is 236 amino acids (aa) in length and possesses the usual four transmembrane regions, a 52-aa N-terminal and a 65-aa C-terminal intracellular domain. This sequence harbours the phasmatodean-specific glutamate (E62) as SF in S1 and a typical S4 voltage sensor. Within the core segment S1-S4, EtHV1 is 33% identical and 63% homologous to human HV1. Tissue expression of EtHV1 An RT-PCR analysis of five different tissues isolated from two animals showed the strongest expression in the nervous system, sampled as a conglomerate of all Extatosoma ganglia. Moderate expression was found in the digestive system and weak expression was detected in eyes, whereas no clear expression could be detected in muscle and antenna. In Fig. 2, an agarose gel of the RT-PCR is shown; the 363-bp EtHV1 PCR product is indicated. As a positive control, Extatosoma histone H3 expression was detected in all five tissue samples. PCR products from ganglia and digestive system were verified by DNA sequencing. Electrophysiological characterization of EtHV1 Extatosoma EtHV1 was expressed as a GFP fusion protein in tSA cells and was distinctly localized in the cell membrane when detected under fluorescence. Transfected cells had a capacitance of 9.62 ± 2.03 pF (mean ± SD, n = 9 cells) and presented a mean conductance density of 1.07 ± 0.44 nS·pF−1 (mean ± SD, n = 9 cells), demonstrating reliable expression levels. Typical proton-selective currents were detected during patch-clamp experiments. Consistent with reports for other species [1-4,11,33,34], robust H+ currents presented a threshold potential, Vthres, and time-dependent behaviour in the order of seconds.
The time course of the currents has a sigmoidal shape, which has previously been attributed to the dimeric nature of HV1 [22,35]. After a short delay, currents rise exponentially during membrane depolarization, and large tail currents appear at repolarization steps. Figure 3A depicts an example of a whole-cell patch-clamp measurement of EtHV1 under two different pH conditions. The amplitude of the currents is time-dependent and increases with every depolarizing step, clearly indicating voltage-dependent activation. Large and relatively slow tail currents are also seen once the channel deactivates as a consequence of repolarization of the cell membrane (e.g. Vhold = −80 mV). In common with other HV1 channels, the gH of EtHV1 is also regulated by the pH gradient across the membrane, ΔpH (pHo − pHi). When ΔpH increases (ΔpH > 0) or decreases (ΔpH < 0), EtHV1 shifts its gH to more negative or to more positive potentials, respectively. Figure 3B shows a clear rightward shift of the conductance-voltage relationship, gH-V, of 25 mV once the external pH (pHo) was lowered from 7.0 to 6.5 (black arrow). The same behaviour was detected in inside-out patches where pHi was exchanged to generate the same ΔpH = −0.5 (Fig. S3). The effect of the gH-V change can also be seen on the EtHV1 activation kinetics, τact (Fig. 3C). The voltage dependence of τact of EtHV1 is described by a slope of −0.10 ± 0.02 e-fold·mV−1 (mean ± SD, n = 6), i.e. an e-fold change in τact per 10 mV. We further tested the classical proton channel inhibitor, zinc, on EtHV1. Figure 4 shows how Zn2+ affects EtHV1 H+ currents in the same cell. The divalent cation drastically reduces the amplitude and slows the activation kinetics of the proton currents. In our experiments, we increased [Zn2+] from 0 (control) to 10 and 100 μM. During test-pulse protocols, the H+ activation and tail currents are reduced in amplitude once zinc is added to the bath solution (Fig. 4A). Figure 4B depicts families of pulses under the three different zinc conditions in the whole-cell configuration. Two main effects are shown by the families of pulses: a reduction of the activation currents at the same depolarization and a slowing of activation kinetics. In agreement with other HV1 studies [5,25,34-36], gH-V curves shift rightwards along the voltage axis (Fig. 4C), and τact becomes slower as [Zn2+] increases (Fig. 4D); these are the two main effects of zinc inhibition on proton channels. The data demonstrate that EtHV1 is sensitive to the classical proton channel inhibitor, zinc, in the micromolar range. Proton selectivity and pH-dependent gating of EtHV1 The reversal potential of EtHV1 was analysed in a pHo range between 5.5 and 7.0, and at pHi 6.5-7.0. Because activation of EtHV1 was negative to Vrev in most cases, Vrev was determined directly as the zero-current potential in a family of depolarizing pulses. The recorded values accurately follow the predicted Nernst potential for proton conduction, EH, indicating that EtHV1 is highly proton selective (Fig. 5A). Deviations of Vrev values from EH are a consequence of incomplete pHi control even though high pH-buffer concentrations were used; for example, strong depolarization causes H+ depletion that increases pHi. An increase in the internal proton concentration, [H+]i, and the consequent drop of internal pH (pHi) cause divergences between the measured Vrev and the calculated EH.
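Since Vrev is compared throughout against the Nernst prediction, it is useful to see how EH follows directly from ΔpH. The short sketch below computes EH at the recording temperatures; the 21°C default is an assumption within the reported 19-23°C range.

```python
import math

def nernst_eh_mv(ph_o, ph_i, temp_c=21.0):
    """Nernst potential for protons, E_H = (RT/F) * ln([H+]_o/[H+]_i), in mV."""
    R, F = 8.314, 96485.0          # J/(mol*K), C/mol
    T = temp_c + 273.15
    # [H+]_o/[H+]_i = 10**(pH_i - pH_o), so E_H = (RT/F)*ln(10)*(pH_i - pH_o)
    return (R * T / F) * math.log(10) * (ph_i - ph_o) * 1000.0

# pH_i 6.5 with pH_o 5.5 (deltapH = -1) should give roughly +58 mV, as observed.
print(f"E_H = {nernst_eh_mv(5.5, 6.5):+.1f} mV")
```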
A rise in [H+]i is provoked by consistent inward H+ currents during the channel's activation, when Vthres < Vrev, or by large tail currents observed during membrane repolarization. In HV1, both voltage and pH modulate the channel's gating. To evaluate the pH dependence of gating of EtHV1, we applied the 'threshold versus reversal' strategy previously used in other studies [2,11,37]. The approach consists of determining the reversal and threshold potentials over a wide range of pH to obtain an equation of the form Vthres = slope·Vrev + offset. We measured Vrev and Vthres at pH values from 5.5 to 7.0, applying different ΔpH (Fig. 5B). In a total of 16 determinations, the data define the voltage dependence of EtHV1 as: Vthres = 0.77·Vrev − 23 mV. Interestingly, EtHV1 and NpHV1 present the same voltage dependence of gating, translated into a slope of 0.77 Vthres/Vrev. Nevertheless, a major difference in the offsets of the two channels can be seen: Extatosoma activates more negatively (−23 mV) than Nicoletia (−2.4 mV). The dotted line in Fig. 5B represents equality between Vthres and Vrev. Data located under the dotted line stand for inward H+ conduction, while data points above the dotted line represent outward H+ currents. By definition, if Vthres is positive to Vrev, HV1 conducts protons outwards, alkalinizing the cytosol. In opposition to this, threshold values negative to Vrev indicate inwardly directed proton currents. Thus, Fig. 5B enables a fast determination of the proton current direction. In the whole investigated pH range, EtHV1 activation is negative to Vrev; EtHV1 permits proton influx. In contrast, NpHV1 activation is 20 mV more positive and permits proton extrusion while mostly preventing inward H+ flux. Distribution among hexapodans Proton channels are unique members of the voltage-gated ion channel superfamily, as they are represented in most species by a single gene or by no gene at all. This indicates that HV1 offers an evolutionary advantage to some species, whereas other species may dispense with an HV1 homolog. The common ancestor of Hexapoda, Crustacea, Myriapoda and Chelicerata clearly possessed a single HV1 gene. Figure 6 shows this distribution of HV1 homologs among the hexapod orders. Considering Polyneoptera as a monophyletic group, it is obvious that the common ancestor of the sister groups Mantodea and Blattodea lost its HV1 homolog. The absence of HV1 within these orders is very likely, since sequence coverage of these orders is high; furthermore, it is extremely unlikely that in all species analysed the putative HV1 homolog has simply been missed by sequencing rather than having been lost during evolution. During this study, nine TSA sequences were identified with significant homology to known fungal sequences, with up to 99% identity. These clones undoubtedly represent sample contaminations. The 1KITE project and other related TSA studies provide us with a huge amount of transcriptomic sequence data [31]. Although the overall data are of very good quality and sequence coverage is also high (depending somewhat on the species analysed), a major drawback is sample contamination by insect parasites, mainly fungi. As whole insects were analysed in the TSA studies, such contaminations cannot be excluded from the first sequence drafts. Indeed, algorithms were used to eliminate such non-insect sequences from the dataset; however, some contaminations are still found.
Actually, of ten insect TSA database entries representing clear contaminations, only three were subsequently removed by the 1KITE staff (~30% of all contaminated TSA entries, ~50% of the contaminated 1KITE TSA data). Glutamate as SF of EtHV1 The unusual glutamate residue in the S1 selectivity filter (E62 in Extatosoma) is found only in the two closely related polyneopteran orders of Phasmatodea and Mantophasmatodea. Indeed, the homologous positions at the SF have been characterized in detail by site-directed mutagenesis in human [17], dinoflagellate [2] and also in the Zygentoma Nicoletia phytophila HV1 [11]. Asp to Glu substitutions at the SF were made considering that both residues are negatively charged under physiological conditions. [Fig. 5 caption: Comparison of the pH dependence of gating between EtHV1 and NpHV1. Threshold potentials, Vthres, are plotted versus the reversal potential for both EtHV1 (blue circles) and NpHV1 (red triangles), in a voltage range from −70 mV to +70 mV. The dotted line represents equality between Vrev and Vthres. Blue and red solid lines show the linear regression of the data from EtHV1 and NpHV1, respectively. For the same voltage range, NpHV1 presents a pH-dependent gating equal to Vthres = 0.77·Vrev − 2.4 mV (n = 41); meanwhile, EtHV1 shows a more negative activation defined by Vthres = 0.77·Vrev − 23 mV. n = 16 (9 cells), pHi was 6.5 or 7.0, and measurements were made in a pHo range from 5.5 to 7.0. (C) Upper recording: activation of proton currents (the conductance activated negative to Vrev) in the whole-cell patch-clamp configuration shows a Vrev between +50 and +60 mV when pHi = 6.5 and pHo = 5.5, in accordance with a predicted EH of +58 mV under the same pH conditions. Pulses were applied in 10-mV increments from the holding potential (−40 mV) to +60 mV. Lower recording: tail current records of a patch at symmetrical pHi//pHo 7.0, indicating a Vrev close to 0 mV. Test pulses were applied in 10-mV increments after a depolarizing pulse (+55 mV), from the holding potential (−40 mV) to +10 mV.] The investigations proved that HV1 is still proton selective once a glutamate is present at the SF position in S1 [2,11,17]. On the other hand, the extreme proton selectivity is lost once the Asp is mutated to a neutral amino acid, for example alanine (Ala), making the channel also permeable to anions [2,11,17]. A potential mechanism explaining the necessity of a negatively charged amino acid at this position has been reported by Dudev et al. [18], who analysed the selectivity mechanism of HV1 using a quantum-based model. In the open state of the channel, the SF is composed of a salt-bridge interaction between the Asp and the second or third Arg of the voltage-sensor motif RxWRxxR, in a constricted part of the channel. When placed in between the Asp-Arg SF, H3O+ enables protonation of the Asp, breaking the electrostatic Asp-Arg interaction in an energetically favourable process. Later, the gained H+ is transferred from the protonated aspartate (AspH) to a neighbouring nucleophile and the Asp-Arg interaction is restored, allowing other protons to initiate the process again. In this way, protons can travel through the SF of an HV1. Other competing ions such as Cl− and Na+ are either repelled by the residue bearing the same charge (e.g. Asp for Cl− and Arg for Na+) or trapped by the residue of opposite charge, and cannot cross through the Asp-Arg SF [18].
In voltage-gated proton channels (HV1), the existence of a negatively charged residue at the SF is mandatory for proton selectivity, and E62 in Extatosoma meets this requirement. The pKa of the glutamate side chain is ~4.25, which allows the residue to be deprotonated at pH > 5.0 and thus to remain negatively charged. This would permit EtHV1 to have an SF composed of a Glu-Arg interaction working in a manner similar to the Asp-Arg selectivity mechanism. In our experiments, protons were present at concentrations from 0.1 μM (pH 7.0) to 3.12 μM (pH 5.5), four to five orders of magnitude lower than the concentrations of the main ions TMA+ and CH3SO3− (90-125 mM). Despite this great disproportion, all measured Vrev followed the Nernstian behaviour of protons (Eqn 3). Small variations are mainly a consequence of imperfect control of pHi. Depletion and accumulation of H+, due to strong depolarization and robust inward H+ currents, respectively, are a common source of error while measuring HV1 in the whole-cell patch-clamp configuration. In this configuration, the accuracy of the control of cytosolic pH is limited by the diffusion rate of the buffer between the pipette and the cell; this diffusion exchange lasts from seconds to even minutes [38]. In our experiments, we tried to circumvent this problem by shortening pulses at very positive voltages and/or increasing resting times between pulses. Our data establish EtHV1 as a proton-selective channel. EtHV1 is inhibited by Zn2+ Inhibition of proton currents by external addition of zinc is considered one of the main characteristics of HV1. We tested the response of EtHV1 to external Zn2+ at micromolar concentrations. Our experiments recorded inhibition of H+ currents at 10 μM Zn2+, which was augmented once [Zn2+] was further increased (Fig. 4A,B). The inhibitory effect is best seen as a shift of the gH-V relationship and as a slowing of the kinetics of activation (Fig. 4C,D). The tendency is similar to other tested HV1 [5,25,34-36]. The mechanism of inhibition of proton channels by zinc is still under discussion; nevertheless, several studies have identified some of the amino acids involved. Mammalian proton channels possess two identified Zn2+ binding sites composed exclusively of external His residues [3,36]. The first is located at the top of the S2 alpha helix and the second is placed in the S3-S4 loop (H140 and H193 in the human HV1). Substitution of these two His residues by Ala renders the channel zinc-insensitive [3,35]. In contrast, proton channels of other species show more diversity. For example, the other characterized insect proton channel, NpHV1, conserves a His residue (H92) at the same relative position as H140 of the human channel but presents a variation to Asp (D145) at the second binding site, H193. A detailed zinc-inhibition analysis demonstrated that, in the case of NpHV1, the main inhibitory effect is caused by zinc binding to the first position, H92, with minimal participation of the second binding site, D145. The inhibition of NpHV1 by Zn2+ is smaller than in human and rat channels [25]. Nevertheless, the zinc sensitivity of Nicoletia is greatly increased by mutation of D145 to His and completely abolished once both amino acids are mutated to Ala, similar to mammalian HV1 channels [25]. It seems that histidine residues at these two precise locations of HV1 are important for high zinc sensitivity.
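Returning briefly to the selectivity-filter argument above: the claim that E62 remains deprotonated over the experimental pH range follows from the Henderson-Hasselbalch relation with pKa ≈ 4.25. A minimal numerical check:

```python
def fraction_deprotonated(ph, pka=4.25):
    """Henderson-Hasselbalch: fraction of side chains in the charged (A-) form."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Over the experimental pH range, the glutamate SF stays essentially deprotonated.
for ph in (5.5, 6.0, 6.5, 7.0):
    print(f"pH {ph}: {100 * fraction_deprotonated(ph):.1f}% negatively charged")
```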
Interestingly, the putative Zn2+ binding sites of EtHV1 consist of a lysine residue (Lys91) and an aspartate residue (Asp144) at the first and second positions, respectively (see alignment in Fig. S2). A Lys residue at the first zinc-binding site is a peculiarity among characterized HV1 channels but is common to all identified polyneopteran HV1. However, this noteworthy difference is absent in other orders such as Zygentoma, Diplura, Protura and Archaeognatha, whose HV1 carry the regular His residue also present in mammalian channels. On the other hand, the second putative binding position (Asp144) on the S3-S4 linker of Extatosoma is preceded by two consecutive histidine residues, His142 and His143, which potentially coordinate Zn2+. The same -His-His-Asp- pattern is shared with other phasmatodean HV1 homologs: SsHV1, RaHV1 and MeHV1 (Fig. S2). Yet other phasmatodean species (Aretaon asperrimus and Peruphasma schultei), the embiopteran Aposthonia japonica and the archaeognathan Pedetontus okajimae present, in contrast, only one His residue next to the Asp of the S3-S4 linker. Investigations in Nicoletia confirmed the dimeric nature of an insect proton channel and the possibility of zinc binding at the interface of the two monomers [25]. Coordination of zinc between HV1 subunits has also been suggested in inhibition studies of the human channel [35,39]; hence, the EtHV1 stoichiometry might be a factor to consider. Structural differences between the Zn2+ binding sites of different species also generate different zinc sensitivities. Therefore, the potency of Zn2+ on HV1 may be related to the surrounding [Zn2+] and the function of proton channels in the respective organisms. Thus, the zinc concentration in human and mouse serum is low, ranging from 13 to 20 μM [40], and the respective HV1 consequently show a higher zinc sensitivity than those of other species, for example Nicoletia phytophila (insect), Ciona intestinalis (sea squirt) and Helisoma trivolvis (snail) [34]. The sensitivity to Zn2+ revealed in the animal model Danio rerio (zebrafish) is even lower, which correlates with the considerably higher zinc concentration in the serum of this animal (~150 μM) [40]. Unfortunately, no data are available on the concentration of zinc in the haemolymph of Extatosoma. Further studies, including the pH dependence of Zn2+ inhibition, site-directed mutagenesis of the putative binding sites and analysis of channel oligomerization, are still necessary to address the nature of zinc inhibition of polyneopteran HV1 channels. EtHV1 has a conventional pH dependence of gating with strongly voltage-dependent kinetics of activation The pH dependence of gating in EtHV1 is described by a slope of 0.77 Vthres/Vrev, which translates into a gating shift with ΔpH similar to that of most HV1 channels characterized to date (Table 1), with the exception of Helisoma trivolvis, for which an anomalous pH dependence of gating has been reported [34]. Moreover, the two insects Extatosoma and Nicoletia have an identical pH dependence of gating over the same voltage range (Fig. 5B). The conformity of the pH-dependent gating of HV1 from different species indicates a common pH-sensing mechanism, which to date remains unknown. However, differences in the offsets among species are evident. Our analysis shows an offset of −23 mV for EtHV1. A negative offset of the Vthres-Vrev relationship reflects an early activation that permits protons to flow from the external solution into the cell. Over the whole pH range tested, EtHV1 consistently conducts H+ inwards.
These results contrast with the more positive activation of NpHV1 and mammalian channels, whose physiological roles relate to the elimination of excessive cytosolic acidification [37] and to the compensation of electrical charge during the respiratory burst of phagocytes [41]. The negative activation of EtHV1 is instead more similar to kHV1 from the dinoflagellate Karlodinium veneficum [2]. In dinoflagellates, inward H+ currents acidify the interior of specialized membrane compartments (scintillons), which triggers bioluminescence [5,34]. Hypothetically, EtHV1 in Extatosoma plays a role in an acidification process or in the generation of action potentials. To analyse the activation of EtHV1 in more detail, we measured the voltage dependence of the EtHV1 kinetics by applying linear regressions to τact-voltage plots at different pH. The results show that EtHV1 has a steep τact-V relationship of 10.0 mV/e-fold change, similar to the snail channel HtHV1 and a much stronger voltage dependence than mammalian channels, whose values vary between 40 and 72 mV/e-fold change [34,37] (Table 1). The other hexapod proton channel, NpHV1, also activates in the range of seconds [11]. Interestingly, for the human channel, data suggest that a glutamate at the SF position speeds up the channel activation kinetics: the hHV1-D112E mutant is ~5 times faster than the wild-type [32] and also shifts Vthres to more negative potentials [20,21], indicating a shift in the free energy required to open the channel. Possible physiological role For a functional analysis of insect proton channels, a detailed cellular expression pattern would be of great importance. So far, only tissue distributions of HV1 expression are available. Compared to the Zygentoma Nicoletia phytophila, EtHV1 showed a more restricted expression pattern across the tissues tested. In both Nicoletia and Extatosoma, HV1 is strongly expressed in the nervous system. Interestingly, no expression in leg muscle was found for Extatosoma, despite it being present in leg and body muscle in Nicoletia. Harrison [42] describes several patterns of acid-base regulation in insects. The passive transport of protons through HV1 could be related to the maintenance of pH homeostasis in some of these processes in polyneopteran species. There are pH differences across the digestive system of some insects. In crickets and grasshoppers (Orthoptera), the passive distribution of protons across the midgut epithelium is associated with a low pH in the lumen [42]. Coincidentally, we found mild expression of EtHV1 in the digestive system (Fig. 2). Discontinuous ventilation of insects, generating variations in the partial CO2 pressure (PCO2), should also be mentioned. The fluctuations of PCO2 during discontinuous ventilation change the pH of the haemolymph, for example in grasshoppers, where the haemolymph pH correlates with fluctuations of PCO2 and the non-bicarbonate buffer values [42]. Nevertheless, these pH variations due to discontinuous ventilation are considered small [42]. Other pH-homeostasis changes in insects are associated with periods of activity. In general, increased activity, for example during flight, is accompanied by the use of anaerobic metabolism, which generates acid. In locusts, for example, the tracheal and fluid PCO2 during flight increase two- to threefold in comparison to the resting state [42].
Accumulation of CO2 due to the insect's activity translates into a drop of haemolymph pH of ~0.2 units for grasshoppers and even 0.9 units for cockroaches (which do not express HV1) during flight [42]. Although EtHV1 was not found in leg muscle, the haemolymph circulates through the whole body of the animal; hence, we cannot rule out an involvement of the channel in pH regulation of the haemolymph during activity periods. The pH of the haemolymph of some invertebrates decreases linearly with temperature [43]. Similarly, in orthopteran insects (which do have HV1), the haemolymph pH also appears to be temperature-dependent, although the temperature-pH relationship loses its linearity. Thus, the orthopterans M. bivittatus and S. nitens are able to keep a constant haemolymph pH at temperatures of 10°C-25°C, but the value drops at a rate of 0.017 units·°C−1 at temperatures above 25°C [42]. Transmembrane acid-base transport controlled by the renal system has been suggested to explain this behaviour [44]. EtHV1 could also play an important role in such acid regulation. Another possible physiological role of EtHV1 could be related to the sensitivity of chemoreceptors to haemolymph pH. For example, in cockroaches (which lack HV1), abdominal pumping rates are regulated by the pH of solutions in contact with the nerve cord [45]. Grasshoppers, on the other hand, possess HV1, and their ventilation rates are unaltered once the haemolymph pH is changed [46]. The EtHV1 channel is highly expressed in the nervous system (Fig. 2). Remarkably, HV1 was first discovered in snail neurons by Thomas and Meech [47]. Subsequent studies in neurons of other snail species [48,49] confirmed the existence of HV1 presenting τact of a few milliseconds [50]. The activation of EtHV1 is negative to Vrev over the whole pH range tested. This implies that EtHV1 conducts H+ inwardly and could therefore depolarize the cell membrane. If the H+ conductance is dominant in the membrane of neurons under the ionic conditions of the animal at certain membrane potentials, small inward currents could effectively depolarize the neuron to the action potential threshold. Proton channels of mammals, activating in the order of seconds, restore the pHi of small cells after an acid load within tens of seconds because of their surface-to-volume ratio [50]. However, a role of EtHV1 in the generation of action potentials is presumably limited by its relatively slow activation: in neurons of Locusta migratoria (Orthoptera), for example, the times-to-peak range from ~2 to 10 ms [51]. Our data neither confirm nor rule out the participation of EtHV1 in the generation of action potentials in Extatosoma. Further in vivo electrophysiological studies in Extatosoma neurons are required to evaluate the conductances involved and the effects of pH variations on the triggering of action potentials. A striking difference we found between the hexapod proton channels NpHV1 and EtHV1 is the more negative opening of the latter. Consistently, EtHV1 activates approximately 20 mV more negatively than NpHV1. This means that, in comparison with NpHV1, the activation of EtHV1 presents a shift of free energy that favours the closed→open transition under the influence of the membrane potential; EtHV1 requires less membrane depolarization to activate. Hypothetically, the naturally occurring variation to Glu in the SF of EtHV1 might be responsible for this.
Site-directed mutations of Asp112 to Glu in the SF of the human HV1 have revealed negative shifts of the activation threshold [20,21]. Mutations of other amino acids in other parts of the channel also shift ΔVthres to more negative potentials. However, in accordance with a meta-analysis of mutation studies [32], of all Asp mutants in the SF, only the Asp to Glu mutation shifts Vthres negatively. The activation of EtHV1 is also negative to Vrev, which translates into an inward H+ current that acidifies the cytosol. Teleologically, the task of EtHV1 is therefore presumably related to an acidification process. Similarly, marine dinoflagellates, whose HV1 channels also activate negative to Vrev, use HV1 channels to acidify scintillons and trigger bioluminescence [2,5]. The chemistry of the physiological environment must always be considered. Thus, if [Zn2+] is elevated in Extatosoma, then Vthres is shifted to positive potentials. Assuming the voltage shift is sufficient to set Vthres positive to Vrev, EtHV1 would in this case function like most known HV1 and extrude protons out of the cell. Embioptera (Asp in the SF) and Phasmatodea (Glu in the SF) belong to sister branches with a common ancestor [12]. Interestingly, their cousin branch Mantophasmatodea also has a Glu in the SF [12]. Perhaps the answer to the function of Glu as SF, and its relationship with the physiology of the insect, lies in the physiological differences between those polyneopteran orders. Supporting information Additional supporting information may be found online in the Supporting Information section at the end of the article. Table S1. List of all identified HV1 homologs compiled from TSA files and the corresponding GenBank accession numbers. Table S2. Sequence identity percentages between different polyneopteran HV1 proteins. Fig. S1. Amino acid sequences of polyneopteran insect proteins possessing a typical S4 RxWRxxR motif. Fig. S2. Alignment of putative polyneopteran HV1 channels. Fig. S3. Inside-out patch-clamp measurement of EtHV1.
2022-01-07T06:17:37.434Z
2022-01-05T00:00:00.000
{ "year": 2022, "sha1": "71704ec66b7956f121e268c74af664a133122bfd", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/2211-5463.13361", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6e7a00fe75337359cda535300e7017cae268ab5e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
259239668
pes2o/s2orc
v3-fos-license
ECA-VFog: An efficient certificateless authentication scheme for 5G-assisted vehicular fog computing Fifth-generation (5G)-enabled vehicular fog computing technologies have always been at the forefront of innovation because they support smart transport functions such as the sharing of traffic data and cooperative processing in the urban fabric. Nevertheless, the most important factors limiting progress are concerns over message protection and safety. To cope with these challenges, several scholars have proposed certificateless authentication schemes with pseudonyms and traceability. These schemes avoid the complicated certificate management of public key infrastructure-based approaches and the key escrow of identity-based approaches, respectively. Nevertheless, problems such as high communication costs, security holes, and computational complexity still exist. Therefore, this paper proposes an efficient certificateless authentication scheme, called ECA-VFog, for fog computing with 5G-assisted vehicular systems. The proposed ECA-VFog scheme applies efficient operations based on elliptic curve cryptography, supported by a fog server through a 5G base station. This work conducts a safety analysis of the security designs to assess the viability and value of the proposed ECA-VFog scheme. In the performance evaluation section, the computation costs of the signing and verification processes are 2.3539 ms and 1.5752 ms, respectively, while the communication cost and energy consumption overhead of ECA-VFog are 124 bytes and 25.610432 mJ, respectively. Moreover, comparing the ECA-VFog scheme to other existing schemes, the performance estimation reveals that it is more cost-effective with regard to computation cost, communication cost, and energy consumption. Introduction Technologies related to automobiles have consistently ranked among the most promising areas of research and development. Improvements in human well-being have a knock-on effect on automotive engineering [1-3]. Nowadays, vehicle networks are in the spotlight for a variety of reasons, including urban traffic congestion and road accidents. Many crucial traffic issues are communicated to users via vehicle networks, including speed alerts, cornering warnings, road status information, road conditions, intersection warnings, and pedestrian crossing alerts [4-6]. Several countries' transportation systems have recently implemented widespread deployments of 5G technology, vehicle networks, and fog computing to enhance driver safety and better handle increasingly chaotic traffic patterns [7-9]. Intelligent transportation systems (ITS) collect, process, and disseminate traffic data in the context of networked cars through the use of wireless devices installed in vehicles (called onboard units, or OBUs) [10-12]. Because traffic-related messages travel through a wireless channel, they are vulnerable to eavesdropping, tampering, replaying, and deletion by hostile actors [13,14]. Vehicular ad hoc networks (VANETs) therefore need to address privacy and security concerns before they can be used in real-world applications. Several works concerning authentication schemes for vehicular communication have been presented over the past few years, ranging from public key infrastructure (PKI) approaches [15-19] to identity (ID) approaches [5,11,20-25].
In PKI-based approaches, the trusted authority issues a huge number of security keys and the relevant certificates to the vehicles to ensure the security of users' private data; however, the complexity of certificate management is a major drawback of these schemes. In identity (ID)-based approaches, the message is signed using the transmitter's secret key, and the receiver uses the transmitter's public key to verify the signature; here, key escrow is a major flaw. Thus, to resolve these issues in PKI-based and ID-based approaches, several scholars have suggested certificateless authentication approaches that avoid complicated certificate management and key escrow, respectively. However, challenges such as expensive communication, insecure systems, and complicated processing remain. This research, therefore, presents an effective certificateless authentication mechanism for vehicular fog computing over 5G networks, which we term ECA-VFog. The proposed ECA-VFog technique utilizes a fog server, supported by a 5G base station, for its efficient elliptic-curve-cryptography-based operations. The main contributions of this work are as follows. • This paper suggests an efficient certificateless authentication scheme, called ECA-VFog, for fog computing with a 5G-assisted vehicular system. The proposed ECA-VFog scheme applies efficient operations based on elliptic curve cryptography, supported by a fog server through a 5G base station. • A novel aspect of the proposal is that the fog server receives a partial pseudonym-ID and a partial private key from the key generation center (KGC) for the signature verification process. • The ECA-VFog scheme avoids the complicated certificate management of public key infrastructure-based studies and the key escrow of identity-based approaches, respectively. • The security evaluation shows that the proposed ECA-VFog scheme fulfils the security requirements (data authenticity and integrity, pseudonym identity, traceability, unlinkability, location privacy, non-repudiation) and resists security attacks (forgery, message modification, replay, and man-in-the-middle) for vehicular fog computing based on 5G technology. • The performance evaluation of the ECA-VFog scheme shows that it is more efficient than existing schemes with regard to computation cost, communication overhead, and energy consumption. The remainder of the work is organized as follows. Section 2 reviews relevant work. Section 3 presents the architecture model, the security design and cyber-attacks, and the operation-based mathematical tool underlying our ECA-VFog. In Section 4, our ECA-VFog scheme and its implementation phases are given. Informal and formal analyses are presented in Section 5. The performance evaluation is given in Section 6. This study is concluded and summarized in Section 7. Literature review This section reviews relevant work proposing authentication schemes for vehicular communication. We classify these schemes based on the approaches used to secure messages: public key infrastructure (PKI)-based, identity (ID)-based, and certificateless authentication approaches. PKI-based approaches Many PKI-based approaches [15-19] have been proposed to secure vehicular systems. These schemes are reviewed as follows. Sakhreliya et al. [15] presented the PKI-SC system, which combines the best of both worlds by integrating the MAC technique into the standard PKI certificate process.
The MAC and ECDSA algorithms are deployed on the nodes in order to make a fair comparison between the PKI and PKI-SC systems, and the packet size is used to evaluate the time it takes for each system's communications to complete a given task. Utilizing the ideas of the Bayesian Coalition Game (BCG) and Learning Automata (LA), Kumar et al. [16] developed an effective decentralized PKI. The learning automata were assumed to be the players of the game, cooperating to share information. In their solution, a dynamic coalition between the users is created, utilizing symmetric-key encryption and hash-based message authentication to protect the privacy and authenticity of the exchanged information. Jiang et al. [17] designed a PKI-based pseudonym authentication scheme that establishes secure session keys, provides a method for disclosing malicious vehicles, and constructs timestamp-based signatures. Zhang et al. [18] proposed a blockchain-based PKI identity-management and authentication model, which makes use of smart contracts to lessen the load on the TRA from handling the entire digital-certificate life cycle alone. Moussaoui et al. [19] suggested a decentralized system for pseudonym management in vehicular communication. To carry out the various anonymity-related tasks, Moussaoui et al. [19] employed blockchain technology requiring two separate blockchains: one for registering pseudonyms and another for revoking them. Nevertheless, the main disadvantage of these schemes is complicated certificate management. ID-based approaches Many ID-based approaches [5,11,20-25] have been suggested to address the limitations of PKI-based schemes. These schemes are reviewed as follows. In order to achieve the goals of confidentiality, anonymity, and security in a VANET, Alazzawi et al. [20] suggested a novel ID-based approach. In the event that the roadside unit (RSU) is compromised, the proposed approach uses a pseudonym during the joining procedure to conceal the true identity. For secure vehicle-to-vehicle (V2V) data exchange, Ali et al. [21] suggested applying elliptic curve cryptography (ECC) and general hash functions to create ID-based approaches; by using the batch signature verification mode, a huge volume of data can be authenticated simultaneously. In order to secure V2V communications over vehicular systems, Bansal et al. [22] offered an identity-based authentication method that makes use of both ID and ECC. The approach guarantees source verification, data integrity, non-repudiation, and vehicle anonymity through efficient V2V communications. Mohammed et al. [23] designed a fog-computing-based pseudonym authentication scheme, FC-PA, to reduce the performance overhead in 5G-assisted vehicular systems. The FC-PA scheme performs only one ECC scalar multiplication operation to check data. By using fog computing technology, Al-Mekhlafi et al. [5] introduced an authentication scheme for 5G-assisted vehicular systems. A fog server computes and stores a unique selection of public anonymity identities and signature keys for each legal component. For fog computing with 5G-assisted vehicular systems, Mohammed et al. [11] proposed a pseudonym authentication technique. In their work, a fog server generates a temporary secret key for each participating vehicle to use for validating digital signatures. To counteract potential side-channel attacks without slowing down the system, Alshudukhi et al. [24] constructed an authentication technique that supports privacy protection.
In addition, the TPD regularly updates its most important data in an effort to thwart side-channel attacks. Bayat et al. [25] suggested an innovative and effective authentication method for vehicular communication that enables vehicles to authenticate each other without the usual restrictions imposed by the need for a designated group of signers, an active network of roadside units (RSUs), a shared secret key, or other similar safeguards. However, the major disadvantage of these schemes is key escrow. Certificateless authentication approaches To cope with these issues, numerous certificateless authentication studies with pseudonyms and traceability have been suggested. These studies avoid the complicated certificate management of the PKI-assisted studies and the key escrow of the ID-assisted studies, respectively. These schemes are reviewed below. Wang et al. [26] constructed a privacy-preserving scheme by adopting a full aggregation approach to reduce resources in terms of bandwidth and computation. Xu et al. [27] constructed a certificateless fixed-checker proxy signature using unmanned aerial vehicles (UAVs) to address privacy and security concerns in smart city systems. Ming et al. [28] suggested an efficient certificateless authentication scheme, achieving a security-enhanced solution while addressing massive communication overhead, security vulnerability, and computational complexity. Tan et al. [29] proposed a certificateless UAV group-verification approach to achieve secure communication in the infrastructure-less internet of vehicles (IoV). Zhou et al. [30] introduced a secure ECC scheme utilizing key agreement and a three-party authentication scheme in the medical IoT. Rajasekaran et al. [31] proposed a secure ECC method that supports batch verification and mutual authentication for online learning in Industry 4.0. Zhou et al. [32] introduced a security-enhanced solution to combat forgery attacks and satisfy a trade-off between efficiency and safety in vehicular communication. Liang et al. [33] evaluated the safety of a certificateless aggregate signature for vehicular communication, focusing on the preservation of privacy under certain conditions; their investigation revealed that it suffers from forgery attacks. Thus, Liang et al. [33] proposed an improved strategy to cope with this security flaw. Background This section demonstrates the architecture model of the proposed ECA-VFog scheme in terms of the five components used. Then, we list the security design goals and the cyber-attacks that should be resisted in this paper. Finally, the operation-based mathematical tool used to sign and verify messages is also provided. Architecture model Our architecture model has five parts, as depicted in Fig 1: a tracing authority (TRA), a key generation center (KGC), a 5G base station (5G-BS), a fog server (FS), and onboard units (OBUs). Since the 5G-BS is not able to compute or store any security parameters, no trust assumption needs to be made about it in this proposal. In contrast, this paper assumes the fog server is not trusted; therefore, the KGC preloads the fog server with a partial pseudonym-ID and a partial private key for its communication with vehicles. Finally, the TRA and KGC are fully trusted in this system model and generate the security parameters. The following are the roles that these parts play. • Tracing Authority (TRA): Vehicle registration in the vehicular system falls under the purview of the TRA, a central authority in the field.
Its other duty is to provide the KGC with a means of partial anonymity. When a malicious event is detected, only the TRA can reveal the true identities of the vehicles and fog servers. Contacting the TRA on a regular basis allows vehicles and fog servers to keep their credentials up to date, so that they remain part of the vehicular communication. If the TRA has previously identified malicious behaviour from a user (vehicle/fog server), the TRA will refrain from performing identity updates for that user. • Key Generation Center (KGC): As a credible source, the KGC is an asset. It cooperates with the TRA and produces the partial private keys (PPK) of vehicles and fog servers. Thanks to partial key generation, related schemes are no longer hampered by key escrow. The KGC, present in related certificateless schemes, is absent from identity-based authentication schemes. It also helps the TRA establish the anonymous identities of vehicles and fog servers. Only the TRA knows the true identity; the KGC can only see the pseudonymous one. • 5G Base Station (5G-BS): The 5G-BSs are stationary base stations set up by the roadside. Their only use is as a bridge between vehicles, fog servers, and the TRA; they lack both computing and storage capabilities. This is because they can accommodate a wide variety of device-to-device (D2D) communication standards. Because 5G-BSs are hardware, they are assumed immune to attacks. • Fog Server: The fog server is the roadside infrastructure that enables vehicle-to-infrastructure (V2I) communication and can also realize infrastructure-to-infrastructure (I2I) communication. FSs can simultaneously relay multiple messages collected from vehicles. FSs are stationed in various areas behind the 5G-BS, and passing vehicles are made aware of their location. By sharing information, an FS can also boost circulation in the area covered by its 5G-BS. • Onboard Units (OBUs): OBUs are installed in cars. They employ vehicle-to-vehicle (V2V) communication to talk to one another and V2I communication to talk to the fog server. Traffic- and signature-related messages are generated by vehicles and sent to other vehicles or fog servers. Each vehicle contains a tamper-proof device (TPD); the data on this device are strictly private. Security design The safety of the vehicular network is compromised by cyber attacks, which can even result in human casualties. The following sections detail the minimum standards for security against cyber attacks and unauthorized access that the ECA-VFog scheme must meet in 5G-enabled vehicular fog computing. • Message authenticity and integrity: To establish the integrity of the received data, the receiving component must first confirm that the sending vehicle is registered in the vehicular communication. • Pseudonym and traceability: For security reasons, the true identities of the vehicles and fog servers must be concealed, so they instead adopt aliases. The pseudonyms of vehicles or fog servers do not make them immune to detection when they are engaged in criminal activity: here, the TRA reveals their true identities. This guarantees that the vehicles can be traced at any time. • Unlinkability: An adversary must be unable to connect signatures and messages generated by the same vehicle or fog server. A new identity must be used for each transmission even if the vehicle or fog server sends the message anonymously; if the sender's pseudonym information is not altered, the message and the sender's identity can be linked. • Non-repudiation: Vehicles and fog servers are responsible for the content of their transmissions.
They should be unable to deny it even if they send messages under fake names and signatures. • Location privacy: Protecting the confidentiality of vehicle locations is crucial for their safety, as attackers can track vehicles by a number of means. To protect their users' location privacy, vehicles use pseudonym identities rather than their real names, and these identities are randomly generated for each message sent. • Forgery attack: An adversary posing as a network user (vehicle/fog server) may try to send messages to other vehicles and fog servers in the vehicular communication. • Various cyber attacks: Cyber attacks such as replay attacks, man-in-the-middle attacks, message modification, etc., are extremely common in VANETs. Elliptic curve cryptography (ECC) ECC is a public-key encryption technique defined over a finite field Fp. The curve is given by the equation y2 = x3 + ax + b (mod p), with the non-singularity condition 4a3 + 27b2 ≠ 0 (mod p). The proposed ECA-VFog scheme The proposed ECA-VFog scheme includes six phases: Setup, Registration, GenPPID, GenPPK, GenCLSig and CLSigVerify, as shown in Fig 2. Unlike related works, in the Setup phase both the TRA and the KGC issue system parameters based on elliptic curves and broadcast them, registering vehicles and fog servers during the registration stage. The KGC is responsible for creating the partial pseudonym-ID PPIDV and the partial private key PPK for vehicles and fog servers, keeping the original identity hidden, during the GenPPID and GenPPK phases. In the GenCLSig phase, the transmitter signs the data by generating a signature and secret parameters, while the receiver checks the validity and originality of the data during the CLSigVerify stage. These phases are described in detail as follows. Table 1 lists the notations and their definitions. Setup phase In order to design a secure and effective ECA-VFog scheme, the most crucial part of setting up the system is choosing its parameters. The steps of this stage are outlined below. • Both the TRA and the KGC agree that, over the finite field Fp, an elliptic curve E(a, b) is used, where p is a sufficiently large prime and a, b are fixed integers less than p. The expression y2 = x3 + ax + b (mod p) defines E(a, b). • Both the TRA and the KGC pick values for a and b and verify that 4a3 + 27b2 ≠ 0 (mod p). If this condition is not met, a and b are chosen again. Then, a generator point P of order q is chosen. • The four general secure hash functions h1, h2, h3 and h4 are selected. • The TRA picks a secret key s ∈ Z*q and calculates the corresponding public key PubTRA = s·P. Registration phase In this phase, fog servers and vehicles are given unique identifiers that can be used to register with the TRA. This step is carried out as follows. • The TRA calculates Psv = VIDi ⊕ h1(PubTRA || s) using vehicle Vi's original identity VIDi and its own private key s. Psv is a secret value bound to the vehicle Vi, and its original identity VIDi can only be disclosed by the TRA. • Likewise, the TRA executes the same procedure for the fog server Fi: it calculates Psf = FIDi ⊕ h1(PubTRA || s) using fog server Fi's original identity FIDi and the private key s. Psf is a secret value bound to the fog server, and its original identity (FIDi) can only be disclosed by the TRA. • The TRA loads the values of Psv and Psf into the TPD of the vehicle Vi and the fog server Fi, respectively.
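The curve-validity check from the Setup phase above is simple to make concrete. The sketch below shows the non-singularity test in Python; the small prime and coefficients are illustration-only assumptions, since a real deployment would use a standardized curve with a large prime p and a generator P of order q.

```python
# Setup-phase check: accept curve parameters (a, b) over F_p only if the
# discriminant condition 4a^3 + 27b^2 != 0 (mod p) holds.
def curve_is_nonsingular(a: int, b: int, p: int) -> bool:
    return (4 * pow(a, 3, p) + 27 * pow(b, 2, p)) % p != 0

p = 97  # toy prime, for illustration only
print(curve_is_nonsingular(2, 3, p))   # True  -> usable curve
print(curve_is_nonsingular(0, 0, p))   # False -> singular, re-choose a and b
```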
GenPPID and GenPPK phases The privacy and security of vehicular communication depend on the vehicles' and fog servers' ability to keep their true identities concealed at all times. The KGC generates the partial pseudonym-ID PPID and the partial private key PPK in this phase. The vehicle's partial pseudonym-ID PPIDV and the fog server's partial pseudonym-ID PPIDF are known only to the TRA, satisfying the traceability requirement. The following procedures are carried out during this stage for vehicles and fog servers. Vehicles. • The vehicle Vi submits the value Psv to the KGC. • The KGC verifies Psv against the vehicle database sent by the TRA. Vehicles that have received fines or that are not properly registered are not included in this database. The KGC continues its calculations if the value Psv is present in the database; otherwise, it aborts. • The vehicle Vi's partial pseudonym-ID PPIDV = Psv ⊕ h2(x·PubTRA) is calculated using the KGC's secret key x. • The KGC chooses a secret key w ∈ Z*q. • The KGC transmits the parameters {PPIDV, PPKv} to the vehicle Vi via a secure channel. Fog servers. • The fog server Fi submits the value Psf to the KGC. • The KGC verifies Psf against the fog server database sent by the TRA. Fog servers that have received fines or that are not properly registered are not included in this database. The KGC continues its calculations if the value Psf is present in the database; otherwise, it aborts. • The fog server Fi's partial pseudonym-ID PPIDf = Psf ⊕ h2(x·PubTRA) is calculated using the KGC's secret key x. • The KGC chooses a secret key l ∈ Z*q. • The KGC transmits the parameters {PPIDf, PPKf} to the fog server Fi through a secure channel. GenCLSig phase Before sending a message, the vehicle signs it to ensure its safety; the vehicle that receives the message then checks its veracity. After receiving the PPK and PPID computed by the KGC, the vehicle is able to send messages to other vehicles and fog servers. As a result, the proposed ECA-VFog scheme eliminates the need for vehicles and fog servers to continuously communicate with the KGC and the TRA. The following steps are taken during this stage. • The vehicle vi randomly selects a secret key ri ∈ Z*q. • The vehicle vi computes its vehicle secret key Priv = PPKv + ri using the secret key ri and PPKv. • The vehicle vi computes Rp,i = Priv·P. • The vehicle vi computes its public key Rpub = Rp,i − PubKGC. • The vehicle vi computes its anonymous identity AIDv = PPIDv ⊕ h3(ri·PubTRA) using ri and the partial anonymous identity PPIDv for each message. • The vehicle vi computes δv,i = h4(mv,i || AIDv || TV,i), where mv,i is the exchanged message, TV,i is a freshness timestamp, and AIDv is the anonymous identity. • The vehicle vi issues the signature σv,i = Priv + ri·δv,i (mod q). • Finally, the vehicle vi broadcasts the message parameters {mv,i, AIDv, Rpub, Dv,i, TV,i, σv,i} to other vehicles and fog servers. CLSigVerify phase In this phase, the vehicle or fog server that received the message performs an authentication and integrity check to decide whether or not to accept the message. The verification process is carried out as follows. • The verifier checks the timestamp TV,i of the data received at time T. The message is declined if T − TV,i > ΔT; otherwise, it advances to the next step. This means that messages that have already expired are deleted without being read. • The verifier computes the verification equation, Eq (1); its correctness can be demonstrated, and a toy illustration is sketched below.
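The following is a minimal, self-contained Python sketch of the sign/verify logic over an intentionally tiny curve. Note the hedges: the paper's exact verification equation (Eq 1) and the definition of Dv,i are elided in this version of the text, so the check σ·P = Rp,i + δ·Dv,i, together with the assumptions Dv,i = ri·P and δ = h4(m || AID || T), are reconstructions implied by the signing equation, not the authors' stated formulas. The curve, key sizes, and SHA-256 as a stand-in for h4 are likewise illustrative; a real deployment would use a standardized ~256-bit curve, with the algebra unchanged.

```python
import hashlib

# Toy curve y^2 = x^3 + 2x + 3 over F_97 (insecure size, illustration only).
P_MOD, A = 97, 2
G = (3, 6)  # point on the curve; None stands for the point at infinity

def ec_add(p1, p2):
    """Affine elliptic-curve point addition over F_P_MOD."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication k*pt."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return acc

# Order q of G, found by brute force on this toy curve.
q, acc = 1, G
while acc is not None:
    acc, q = ec_add(acc, G), q + 1

def h4(*parts):
    """Stand-in for the scheme's hash h4, mapping arbitrary inputs into Z_q."""
    digest = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(digest, "big") % q

# --- GenCLSig (vehicle side); key material here is hypothetical and tiny ---
pri_v = 7 % q                          # Pri_v = PPK_v + r_i in the scheme
r_i = 5 % q
R_p = ec_mul(pri_v, G)                 # R_p,i = Pri_v * P
D_v = ec_mul(r_i, G)                   # assumption: D_v,i = r_i * P
msg, t_v, aid_v = "brake warning", 1700000000, "AID-42"
delta = h4(msg, aid_v, t_v)            # delta_v,i = h4(m || AID || T), as reconstructed
sigma = (pri_v + r_i * delta) % q      # sigma_v,i = Pri_v + r_i * delta (mod q)

# --- CLSigVerify (receiver side): assumed check sigma*P == R_p,i + delta*D_v,i ---
lhs = ec_mul(sigma, G)
rhs = ec_add(R_p, ec_mul(h4(msg, aid_v, t_v), D_v))
print("signature valid:", lhs == rhs)  # True
```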
The verification outcome is determined by employing the message and the previously published parameters; if any of these values is altered, the signature can no longer be validated. Meanwhile, the proposed scheme offers batch authentication to raise efficiency under heavy traffic: the verifier checks Eq (2) upon receiving multiple messages. Security evaluation This section evaluates security with regard to informal and formal analysis, as follows. Security analysis • Message authenticity and integrity: The vehicle and the fog server use the message parameters {mv,i, AIDv, Rpub, Dv,i, TV,i, σv,i}. The verification equation is computed by the receiver (vehicle/fog server) using the timestamp TV,i, the anonymous identity AIDv, the vehicle's public key Rpub, and the message mv,i. This ensures that the received message is both authentic and intact: if any one variable is changed, the equality can no longer hold. As a result, our ECA-VFog method proves the authenticity and integrity of communications. • Pseudonym identity: Pseudonyms are used by vehicles and fog servers because it is necessary for security reasons to conceal their true identities [34]. Vehicles and fog servers might hope to avoid detection when engaging in illegal activity by using false identities; however, their true identities can be exposed by the TRA. In the ECA-VFog scheme, the vehicle (or fog server) generates its own unique pseudonym identity with each signature. Before this happens, the TRA and KGC generate a partial pseudonym identifier for the vehicle (or fog server): to ensure privacy, the TRA and KGC issue vehicles with only partial anonymity, and the vehicles then use ri and the partial pseudonym identity PPIDV to determine the anonymous identity AIDv for each message. As a result, the proposed ECA-VFog scheme ensures the pseudonym identity of communications. • Traceability: The true identity, where s is the TRA's private key and AIDv is the pseudonym identity, can be derived from the transmitted values: with them, the TRA can recover the vehicle's partial anonymous identity from AIDv. In conclusion, the TRA is the only entity capable of revealing the identity, via AIDv = PPIDv ⊕ h3(s·Dv,i). Therefore, ECA-VFog allows for identity tracking while also protecting users' privacy (anonymity). • Unlinkability: No two signatures or messages from the same vehicle or fog server should be linkable by an attacker. If the pseudonym identity were not altered between message broadcasts, it would still be possible to link the sender to its messages. Each time a signature is calculated in the ECA-VFog scheme, the values AIDv, TV,i and Rpub used in the calculation are chosen afresh; the message parameters transmitted along with the message are dynamic and can change for each transmission. As a result, the ECA-VFog scheme guarantees unlinkability. • Location privacy: A vehicle's location and communications should be kept private through the use of a pseudonym identity. In the proposed ECA-VFog scheme, each message undergoes a pseudonym-identity calculation prior to transmission, which prevents the attacker from linking any of the messages together. The proposed ECA-VFog scheme is pseudonymous and untraceable to an individual, as shown above; location secrecy is thus ensured by ECA-VFog. • Non-repudiation: Vehicles and RSUs are responsible for the transmissions they make; their true identity will be exposed if they deny sending a message. Therefore, there is no way for the sender (vehicle/fog server) to claim that it did not send the message. AIDv = PPIDv ⊕ h2(s·PubKGC) ⊕ h1(s||PubTRA) is the TRA formula that reveals the identity behind the pseudonym. Thus, ECA-VFog ensures that claims cannot be contested. • Impersonation attack: An impersonation attack requires the attacker to craft a forged message mv,i together with valid message parameters. Since this is based on the ECDLP, however, it is quite challenging. So, our ECA-VFog method is secure against impersonation. • Modification attack: When the received message is checked against the verification equation, any tampering by the attacker is immediately revealed, since equality cannot be achieved if even one of the parameters, including the vehicle's public key Rpub and the message mv,i, is altered. That is why ECA-VFog is resistant to modification. • Replay attack: The adversary executes this attack by reusing previously validated messages. Verifying the freshness of the timestamp TV,i carried in the message parameters thwarts the attack. This means that ECA-VFog is resilient against replay attacks. • Man-in-the-middle attack: An attacker could deceive two vehicles or fog servers into thinking they are in direct contact with each other by relaying information between them: it takes in private information and traffic messages, alters them, and forwards the new versions to other vehicles or fog servers. Since all forms of mutual communication in ECA-VFog require authentication, an unauthenticated attacker cannot launch such a campaign. After registration with the TRA, the vehicle contacts the KGC, having been provided with a partial pseudonym identity and private key; the ECA-VFog scheme is therefore safe from a MITM attack. Formal security verification using the AVISPA tool In order to formally validate the cryptographic protocol's security, we employ the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool [35,36]. To describe the security protocol, AVISPA makes use of HLPSL [37,38], which also enables us to state the security properties of the protocol that need to be checked. The AVISPA architecture can be seen in Fig 3. SPAN is fed the protocol's CAS+ specification in Alice-Bob notation and outputs a script in HLPSL. The HLPSL script is run through the HLPSL-to-IF translator, and the resulting intermediate format is analysed by the AVISPA backends. To ensure that the goals specified in HLPSL's goal section are met, AVISPA employs four backends: OFMC, CL-AtSe, SATMC, and TA4SP. The backends execute the protocol for the specified number of sessions, or until an attack is discovered. HLPSL uses a state-machine model of the protocol: a variable is associated with each state, and as that variable's value changes, the corresponding state changes. HLPSL uses the Dolev-Yao threat model [39] to ensure that the cryptographic protocol is secure against man-in-the-middle and replay attacks; the model assumes that the intruder can intercept, modify, and forge any communication. Fig 4 shows the simulated outcomes of the proposed ECA-VFog scheme. The protocol analysis tool CL-AtSe found that, out of 30 states, 24 are reachable; the translation time was 0.001 seconds and the computation time was 0.001 seconds. With a search depth of 6 plies and a search time of 0.06 seconds, OFMC visits a total of 64 nodes.
Therefore, there is no way for the sender (vehicle/fog server) to claim that it did not send the message. The TRA can reveal the identity behind the pseudonym via AID_v = PPID_v ⊕ h_2(s ⋅ Pub_KGC) ⊕ h_1(s ‖ Pub_TRA). Thus, ECA-VFog ensures that transmissions cannot be repudiated.
• Impersonation attack: An impersonation attack requires the attacker to craft a forged message m_v,i together with valid message parameters. Since forging a valid signature amounts to solving the ECDLP, this is computationally infeasible. Hence, our ECA-VFog method is secure against impersonation.
• Modification attack: When a received message is checked against the verification equation, any tampering by the attacker is immediately revealed: equality cannot be achieved if even one of the parameters, such as the vehicle's public key R_pub or the message m_v,i, is altered. That is why ECA-VFog is resistant to modification attacks.
• Replay attack: The adversary executes this attack by reusing previously validated messages. Verifying the freshness of the timestamp T_V,i carried in the message parameters thwarts this attack. This means that ECA-VFog is resilient against replay attacks.
• Man-in-the-middle attack: An attacker can deceive two vehicles or fog servers into thinking they are in direct contact with each other by relaying information between them: it intercepts private information and traffic messages, alters them, and forwards the modified versions to other vehicles or fog servers. Since all forms of mutual communication in ECA-VFog require authentication, an unauthenticated attacker cannot mount such an attack. After registration with the TRA, which provides the vehicle with a partial pseudonym identity and partial private key, the vehicle contacts the KGC. The ECA-VFog scheme is therefore safe from MITM attacks.

Formal security verification using AVISPA tool

To formally validate the security of the cryptographic protocol, we employ the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool [35,36]. To specify the security protocol, AVISPA makes use of HLPSL [37,38], which also enables us to state the security properties of the protocol that need to be checked. The AVISPA architecture is shown in Fig 3. SPAN is fed the protocol's CAS+ specification in Alice-Bob notation and outputs a script in HLPSL. The HLPSL script is run through the HLPSL-to-IF translator and then passed to the AVISPA backends for analysis. To check that the goals specified in HLPSL's goal section are met, AVISPA employs four backends: OFMC, CL-AtSe, SATMC, and TA4SP. The backends execute the protocol for the specified number of sessions, or until an attack is discovered. HLPSL uses a state-machine model of the protocol: a variable is associated with each state, and as that variable's value changes, the corresponding state changes. HLPSL adopts the Dolev-Yao threat model [39] to check that the cryptographic protocol is secure against man-in-the-middle and replay attacks; the model assumes that the intruder can eavesdrop on, intercept, and forge any communication. Fig 4 shows the simulation outcomes of the proposed ECA-VFog scheme. The protocol analysis backend CL-AtSe found that 24 out of 30 states are reachable, with a translation time of 0.001 seconds and a computation time of 0.001 seconds. With a search depth of 6 and a search time of 0.06 seconds, OFMC visits a total of 64 nodes.
This result indicates that the proposed protocol withstands the analyzed attacks.

Performance evaluation

This section evaluates and compares the performance of the proposed ECA-VFog scheme with the relevant schemes of Bayat et al. [25], Wang et al. [26], Zhou et al. [32] and Liang et al. [33]. The evaluation criteria are the computation, communication, and energy consumption overheads.

Evaluation and comparison of computation overhead

Here, we evaluate and compare the ECA-VFog scheme and the existing works of Bayat et al. [25], Wang et al. [26], Zhou et al. [32] and Liang et al. [33] in terms of computation overhead. The following list gives the measured execution times (ms: milliseconds) of the cryptographic operations used in the signing, verification, and batch verification of messages.
• T_bp: bilinear pairing (bp) operation with Q, P ∈ G_1. The running time of T_bp is 6.101 ms.
• T_sm-bp: scalar multiplication s ⋅ P in the bp setting, P ∈ G_1, s ∈ Z*_q. The running time of T_sm-bp is 1.6765 ms.
• T_sm-ecc: scalar multiplication over the ECC group; from the totals reported below, its running time is 0.7829 ms.
• T_pa-ecc: point addition over the ECC group; running time 0.0042 ms.
• T_h: general secure hash function; running time 0.001 ms.
This work uses the MIRACL cryptographic library [40] to time the cryptographic operations. The machine runs Windows 10 on an Intel(R) Core(TM) i7-8550U processor at 1.80 GHz with 8 GB of RAM. We now measure the computation overhead of the proposed ECA-VFog scheme. Table 2 displays the message signing, single verification, and batch verification computation costs for the proposed ECA-VFog scheme as well as the related works. In ECA-VFog, the message signing process needs one point addition, three scalar multiplications, and one general secure hash function, so its total running time is 1T_pa-ecc + 3T_sm-ecc + 1T_h = 1 × 0.0042 + 3 × 0.7829 + 1 × 0.001 ≈ 2.3539 ms. Single verification needs two point additions, two scalar multiplications, and one general secure hash function, so the computation time of this phase is 2T_pa-ecc + 2T_sm-ecc + 1T_h = 2 × 0.0042 + 2 × 0.7829 + 1 × 0.001 ≈ 1.5752 ms. Lastly, batch verification needs (3n − 1) point additions, (n + 1) scalar multiplications, and n general secure hash functions, so its computation time is (3n − 1)T_pa-ecc + (n + 1)T_sm-ecc + nT_h ≈ 0.7965n + 0.7787 ms. The computation costs of message signing, single verification, and batch verification in the schemes of Bayat et al. [25], Wang et al. [26], Zhou et al. [32] and Liang et al. [33] are computed similarly.
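As a quick sanity check, the snippet below recomputes the ECA-VFog cost figures from the unit timings above and converts them to energy using the E = P × t relation; the 10.88 W CPU power is quoted from the energy consumption subsection below, and nothing beyond the reported numbers is assumed.

```python
# Recompute ECA-VFog cost figures from the reported unit timings (ms).
T_PA, T_SM, T_H = 0.0042, 0.7829, 0.001   # point add, scalar mult, hash

def sign_ms():
    return 1 * T_PA + 3 * T_SM + 1 * T_H               # message signing

def verify_ms():
    return 2 * T_PA + 2 * T_SM + 1 * T_H               # single verification

def batch_ms(n):
    # (3n - 1) point additions, (n + 1) scalar mults, n hashes
    return (3 * n - 1) * T_PA + (n + 1) * T_SM + n * T_H  # ~0.7965n + 0.7787

def energy_mJ(t_ms, p_watt=10.88):                     # E = P * t (W * ms = mJ)
    return p_watt * t_ms

print(round(sign_ms(), 4), round(verify_ms(), 4))      # 2.3539 1.5752
print(round(batch_ms(10), 4))                          # 8.7437
print(round(energy_mJ(sign_ms()), 6))                  # 25.610432 mJ
print(round(energy_mJ(batch_ms(10)), 6))               # 95.131456 mJ
```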
Evaluation and comparison of communication costs

Here, we evaluate and compare the ECA-VFog scheme and the existing works of Bayat et al. [25], Wang et al. [26], Zhou et al. [32] and Liang et al. [33] in terms of communication overhead. To compare the ECA-VFog's communication overhead with those of the other schemes, we assume the following sizes for the various elements.
• G_1: the bilinear-pairing cyclic group. The size of an element of G_1 is 128 bytes.
• G: the additive cyclic group. The size of an element of G is 40 bytes.
• Z*_q: the finite field. The size of an element of Z*_q is 20 bytes.
• T_i: the timestamp. Its size is 4 bytes.
During message signing in the proposed ECA-VFog scheme, the vehicle v_i broadcasts the message parameters {m_v,i, AID_v, R_pub, D_v,i, T_V,i, σ_v,i} to other vehicles and fog servers, where {R_pub, D_v,i} ∈ G, {AID_v, σ_v,i} ∈ Z*_q, and the timestamp T_V,i occupies 4 bytes. Therefore, the bandwidth overhead of ECA-VFog is computed as 2 × 20 + 2 × 40 + 4 = 124 bytes. Likewise, the bandwidth overheads of Bayat et al. [25], Wang et al. [26], Zhou et al. [32] and Liang et al. [33] are computed as 237, 388, 208, and 256 bytes, respectively. Table 3 summarizes the communication overheads.

Evaluation and comparison of energy consumption overhead

We use the timings from Table 2 to determine the energy requirements of the ECA-VFog proposal. Using the full power of the CPU (10.88 W) and the time needed to complete each task, the energy consumption is computed as E = P × t, where E is the energy consumed, P is the full power of the CPU, and t is the computation time. We now calculate the energy consumption overhead of our ECA-VFog method. The energy consumption overhead E of ECA-VFog in message signing is 10.88 × 2.3539 = 25.610432 mJ. The energy consumption overhead E of ECA-VFog in single verification is 10.88 × 1.5752 = 17.138176 mJ. The energy consumption overhead E of ECA-VFog for 10 messages (n = 10) in batch verification is 10.88 × 8.7437 = 95.131456 mJ. Likewise, the energy consumption overheads of message signing, single verification, and batch verification are computed for the schemes of Bayat et al. [25], Wang et al. [26], Zhou et al. [32] and Liang et al. [33]. Fig 7 compares our ECA-VFog method with the other existing schemes with regard to the energy consumption overheads of message signing and verification.

Conclusion

This paper has proposed an efficient certificateless authentication scheme with pseudonymity and traceability, called ECA-VFog, for fog computing in a 5G-assisted vehicular system. The ECA-VFog scheme avoids the complicated certificate management of public-key-infrastructure-based works and the key escrow of identity-based works, respectively. The proposed ECA-VFog scheme applies efficient elliptic-curve-cryptography operations and is supported by fog servers through 5G-BS. The security evaluation shows that the proposed ECA-VFog scheme satisfies the security requirements (message authenticity and integrity, pseudonym identity, traceability, unlinkability, location privacy, and non-repudiation) and resists security attacks (forgery, message modification, replay, and man-in-the-middle) for vehicular fog computing based on 5G technology. The performance evaluation shows that ECA-VFog is more efficient than the related studies with regard to communication overhead, computation overhead, and energy consumption. In future work, we will extend this scheme to network technologies beyond 5G for secure communications in vehicular fog computing.
Factorization of Functional Operators with Involutive Rotation on the Unit Circle

Following the classical definition of factorization of matrix functions, we introduce a definition of factorization for functional operators with involutive rotation on the unit circle. Partial indices are defined and their uniqueness is proven. In previous works, the main research method for the study of scalar singular integral operators and Riemann boundary value problems with Carleman shift were operator identities, which made it possible to eliminate the shift and to reduce scalar problems to matrix problems without shift. In this study, the operator identities are used for the opposite purpose: to transform operators of multiplication by matrix functions into scalar operators with Carleman linear-fractional shift.

Introduction

A large number of works have been dedicated to Riemann boundary value problems and to the related singular integral equations. We point out some monographs that have already become classics on this subject [1] [2] [3] [4]. A special place is occupied by problems with a shift in the boundary conditions and by equations with shift [5]. The listed monographs and their authors played a significant role in the development of this topic. The problem of factorization of matrix functions is closely connected with the solution of matrix Riemann boundary value problems, for which effective solution methods have not yet been found ([5], p. 24, Theorem 6). This explains the interest in and motivation for the study of this topic. In [6], we constructed operator identities with invertible operators which transform a singular integral operator A with involutive fractional linear shift into a vector singular integral operator D without shift. Applications have been identified in which the main method of investigation was operator identities [7] [8] [9] [10]. The simplicity of the shift under consideration permits us, when studying the operator A, to avoid associated operators and the appearance of compact operators, and to obtain an operator identity which directly connects the class of singular integral operators with shift and the class of matrix characteristic singular integral operators without shift. For an orientation-preserving shift, this corresponds to a similarity transform. In [7], based on the known results on factorization [11], invertibility conditions were obtained.

Factorization of the Operators with Carleman Rotation on the Unit Circle

Let T denote the unit circle. We first review the definitions that we are going to use [5] [12]. A factorization of a non-degenerate matrix function G(t), t ∈ T, is a representation G(t) = G₊(t) Λ(t) G₋(t), where Λ(t) = diag(t^κ₁, t^κ₂) with integers κ₁, κ₂, and the factors G₊(t) and G₋(t), together with their inverses, admit analytic continuation into the interior domain D₊ and the exterior domain D₋ of the unit circle, respectively. It is known [5] [11] that the partial indices are invariants of the factorization and do not depend on the particular representation, and that the numbers κ₁, κ₂ are uniquely defined. We use the standard notation for the projectors acting in the space L₂(T), P₊ = (I + S)/2 and P₋ = (I − S)/2, where I is the identity operator, S is the Cauchy singular integral operator, and W is the rotation operator (Wφ)(t) = φ(−t), so that W² = I. We introduce similar notation for the identity operator I in the space of vector functions; the scalar factors satisfy g₊(z) ≠ 0 for z ∈ D₊ and g₋(z) ≠ 0 for z ∈ D₋. We also give other forms of representation (3), of the operator A, and of (4) in terms of the projectors and the coefficients A(t) and B(t). We call the integers κ₁ and κ₂ the partial indices of A. In [6], the operators are considered in the space L₂(T). We also note how the operator of multiplication by a function transforms under the similarity transformation. To describe the structure of the similarity transformation, we need some definitions and operators. Let Γ and γ be contours, and let γ ⊂ Γ.
The extension of a function f(t), t ∈ γ, to Γ \ γ by the value zero will be denoted by (Zf)(t). The restriction of a function φ(t), t ∈ Γ, to γ will be denoted by (Πφ)(t). The similarity transformation is determined by the composition of the operators M, G, N, Π and Z. In our case, these operators take an explicit form in which T₊ and T₋ are the upper and the lower parts of the unit circle, respectively.
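For reference, the central objects used above can be restated in LaTeX. The factorization and the analytic projectors are the classical ones from the Riemann boundary value problem literature; the concrete form of the rotation operator W is written under the stated involutivity assumption W² = I and should be read as a sketch, not as a quotation from the original displayed formulas:

```latex
\[
  G(t) = G_{+}(t)\,\Lambda(t)\,G_{-}(t), \qquad
  \Lambda(t) = \operatorname{diag}\!\left(t^{\kappa_1},\, t^{\kappa_2}\right),
  \qquad t \in \mathbb{T},
\]
\[
  P_{\pm} = \tfrac{1}{2}\,(I \pm S), \qquad
  (S\varphi)(t) = \frac{1}{\pi i}\int_{\mathbb{T}}
     \frac{\varphi(\tau)}{\tau - t}\, d\tau, \qquad
  (W\varphi)(t) = \varphi(-t), \qquad W^{2} = I .
\]
```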
The implication of neutrophil extracellular traps in nonalcoholic fatty liver disease

Nonalcoholic fatty liver disease (NAFLD) is an expanding worldwide health concern, and the underlying mechanisms contributing to its progression still need further exploration. Neutrophil extracellular traps (NETs) are intricate formations comprised of nuclear constituents and diverse antimicrobial granules that are released into the extracellular milieu by activated neutrophils upon various triggers, and they play a pivotal part in the onset and advancement of NAFLD. NETs actively participate in the genesis of NAFLD by fostering oxidative stress and inflammation, ultimately resulting in hepatic fat accumulation and the escalation of liver injury. Recent insights into their interaction with other hepatic immune populations and mediators, such as macrophages and T regulatory cells, have revealed several important mechanisms that can trigger further liver injury. In conclusion, the formation of NETs has emerged as an important factor in the development of NAFLD, offering a promising target for innovative therapeutic approaches against this debilitating condition. This comprehensive review seeks to compile existing studies exploring the involvement of NETs in the genesis of NAFLD and their influence on the immune response throughout the progression of NAFLD.

Introduction

NAFLD is the prevailing chronic liver ailment, distinguished by the excessive buildup of lipids in the liver. The prevalence of NAFLD is on the rise, primarily due to increasing rates of obesity and metabolic syndrome (1). The NAFLD activity score (NAS) and noninvasive scoring systems (NSS) are designed for clinical use to identify and evaluate NAFLD progression (2)(3)(4). Progression stages have been broadly recognized, starting from simple fatty liver disease (steatosis) without specific hepatocellular inflammation (5). NAFLD can advance into severe forms such as nonalcoholic steatohepatitis (NASH), cirrhosis, and hepatocellular carcinoma (HCC) due to a diverse array of factors, encompassing lipotoxicity-induced endoplasmic reticulum (ER) stress and mitochondrial dysfunction (6,7), activated Kupffer cells (KCs) (8), immune cell-mediated inflammatory processes (9,10), and gut microbiota (11)(12)(13).

Recent studies have highlighted the significant role of gut microbiota in NAFLD. Whole-genome shotgun (WGS) sequencing performed by Loomba et al. revealed that levels of Escherichia coli and Bacteroides vulgatus (B. vulgatus) were increased in patients with advanced fibrosis, while Eubacterium rectale and B. vulgatus were increased in patients with mild/moderate NAFLD (14). Mouries et al. further found an early disruption of the intestinal epithelial barrier and gut vascular barrier (GVB) in NASH (15). In recent years, NETs stimulation by microorganisms such as adherent-invasive Escherichia coli (AIEC) (16) and Entamoeba histolytica (E. histolytica) (17) has been observed. One recent study found aberrant intestinal neutrophil migration, increased bacterial translocation into the circulation, and higher lipopolysaccharide (LPS) levels in the visceral adipose tissue (VAT) of interleukin-17 (IL-17) receptor-deficient (IL-17RA−/−) mice fed a high-fat diet (HFD) (18). These findings collectively suggest that gut microbiota may influence NETs in NAFLD. Nevertheless, additional investigation is needed to directly confirm the effect of gut microbiota on NETs in the development of NAFLD.
Over the past two decades, there has been an increasing emphasis on investigating the influence of immune cells on the progression of NAFLD towards NASH-fibrosis. In the early 2000s, studies began to highlight the importance of inflammatory processes in the progression of NAFLD. For example, in an article published in the journal Gastroenterology in 2002, Sanyal et al. showed that individuals diagnosed with NASH exhibited elevated liver inflammation levels compared to those with steatosis (19). Since then, numerous studies have investigated the involvement of diverse immune cells and inflammatory agents in the onset and advancement of NAFLD, and some investigations have demonstrated the implication of macrophages, T cells, and cytokines in the pathogenesis of hepatic inflammation and fibrosis during NAFLD (20-25).

Neutrophils, a subset of leukocytes, serve as part of the primary line of defense in the immune system, tasked with protecting the body from infections and illnesses by destroying pathogens such as viruses, bacteria, and fungi (26-28) through phagocytosis, degranulation, and NETosis (29,30). Brinkmann et al. first proposed the term "NETs" in 2004, marking the beginning of a new era (31). NETs are reticulated extracellular formations consisting of chromatin, granular proteins, and histones. During NAFLD, hepatic lipid accumulation can prompt an inflammatory reaction, inducing neutrophil activation and subsequent NETs release. NETs can initiate additional inflammation and attract other immune cells to the liver, such as macrophages and T regulatory cells (Treg), ultimately contributing to NASH-HCC development (32,33). In the context of these studies, our research team observed that NETs foster inflammation and facilitate the progression of hepatocellular carcinoma in NASH, providing a novel strategy that targets NETs for chronic liver disease therapy. Within this comprehensive review, we begin by consolidating the research regarding the involvement of NETs at various phases of NAFLD advancement, especially their interaction with the immune microenvironment during NAFLD progression. Finally, we conclude by discussing potential therapeutic approaches targeting NETs to fight NAFLD.

NETs in immune defense

NETs released by activated neutrophils were first reported in 2004 as a physical barrier that helps trap and degrade virulent pathogens and kill bacteria. Deoxyribonuclease 1 (DNase1) treatment abolished NETs formation, which is consistent with the observation that NETs are primarily composed of DNA (31). In the subsequent year, the same research team discovered that granular proteins, but not histones, facilitated the destruction of both yeast-form and hyphal cells of Candida albicans in the antimicrobial action of NETs (34). As foremost innate immune responders to inflammation and tissue injury, neutrophils are considered crucial in bolstering immune surveillance. The synergy between NETs and neutrophil elastase (NE), histones, or other constituents enhances their antimicrobial capabilities. This immune defense process is called "NETosis." Along with eliminating bacteria, NETs also contribute to fighting viruses and fungi (35)(36)(37)(38)(39)(40).

Beyond trapping and killing Staphylococcus aureus (41), NETs are also involved in anti-inflammatory functions. In 2014, Schauer et al. reported that aggregated NETs limit chronic inflammation by degrading cytokines and chemokines through binding with proteases (42). In 2019, Ribon et al.
proposed that NETs exert anti-inflammatory actions in rheumatoid arthritis via complement component 1q (C1q) and the human cationic antibacterial protein LL-37 (43).

The liver is the most critical organ responsible for maintaining normal host homeostasis. During sepsis-related organ injury, it relies on various cell types, including KCs, hepatocytes, B cells, and neutrophils, to carry out pivotal functions in combating bacterial infections (44,45). The bacteria are captured and removed by resident KCs, which are localized in the liver sinusoids (46,47). Subsequently, neutrophils migrate and accumulate in the infected area, where they interact with platelets and release neutrophil extracellular traps to capture and clear bacteria (48).

Excessively expressed NETs have been observed to contribute to inflammation within the liver (49-51). In this review, we will discuss how NETs regulate the inflammatory response during NAFLD progression (Figure 1). Prior research has revealed that NETs fulfill a dual function, participating in both pro- and anti-inflammatory processes. Hence, it becomes imperative to comprehend their formation and function under both typical and pathological circumstances to devise precise therapeutic approaches for NAFLD.

NETs in the pathophysiological progression of NAFLD

As of July 2023, there are over 5000 publications in the PubMed database that pertain to the exploration and comprehension of the various roles of NETs. Since 2018, the study of the potential significance of NETs in the progression of NAFLD has been steadily gaining prominence (Table 1). The collective discoveries across various stages of NAFLD suggest that NETs actively contribute to pro-inflammatory processes, hastening the advancement of the disease.

NETs in steatosis

As a central part of the innate immune response, neutrophil infiltration in the liver has a notable role in promoting NAFLD progression. Myeloid cells lacking the p38 mitogen-activated kinases p38γ and p38δ (p38γ/δ) demonstrate resistance to high-fat diet-induced steatosis in association with reduced neutrophil infiltration in the liver. Conversely, wild-type mice with excessive neutrophil infiltration experience enhanced steatosis development (59). The generation of NETs is one of the critical strategies of neutrophils during an inflammatory response; however, whether NETs participate in the development of steatosis is yet to be fully explored.
Peptidylarginine deiminase 4 (PAD4) is essential for the formation of NETs, as PAD4−/− neutrophils lose the ability to form NETs (60). Two constituents of the DNase1 family, DNase1 and DNase1-like 3 (DNase1L3), have been identified as contributors to NETs degradation both in vitro and in vivo (61). Aberrant lipid accumulation resulting in lipotoxicity is considered to be a crucial event in hepatic steatosis progression. Elevated production of free fatty acids (FFAs) represents a significant hallmark of NAFLD (62,63). Inhibition of fatty acid synthase (FASN) in human primary liver microtissues prevents the development of steatosis (64). Our research has demonstrated that FFAs such as linoleic acid (LA) and palmitic acid (PA), but not oleic acid (OA), induce the formation of NETs in vitro. However, inhibiting NETs through DNase1 or using PAD4 knockout mice did not prevent the increase in FFAs, which suggests that NETs formation is not a causative factor of steatosis but rather an outcome of lipid accumulation (32). Nevertheless, the mechanism under this circumstance still requires further exploration. By employing gas chromatography-mass spectrometry (GC-MS) to examine peripheral blood from individuals, researchers discovered that F6 (furanoid F-acid F6) instigates NETs formation through activating ERK (extracellular signal-regulated kinase), JNK (c-Jun N-terminal kinase), and AKT (protein kinase B) kinases. On the other hand, other common fatty acids such as PA, palmitoleic acid (PO), stearic acid, and OA induce NETs formation by activating ERK and JNK, but not AKT kinase (65). Additionally, in response to LPS stimulation, neutrophils release NETs via toll-like receptor 4 (TLR4)-JNK axis activation (66). Thus, it is possible that steatosis induces NETs formation through these distinct pathways.

NETs in NASH

NASH represents an advanced stage of NAFLD, displaying a robust correlation with both inflammation and metabolic disturbances. Neutrophil infiltration in NASH was identified decades ago (67). NETs, one of the key features of neutrophils, have emerged as modulators of chronic inflammation that subsequently promote the progression of cancers (68)(69)(70). In 2018, our research team first documented the occurrence of NETs formation in NASH, noting elevated levels of myeloperoxidase (MPO)-associated DNA (MPO-DNA), a hallmark of NETs, in the serum of preoperative NASH patients. To investigate further, we utilized STAM mice, a NASH mouse model created by neonatal streptozotocin (STZ) injection followed by a high-fat diet regimen (71). In this established NASH model, we found that NETs formation was accompanied by increased neutrophil infiltration and inflammatory cytokines. NETs regulate the inflammatory environment in NASH by recruiting monocyte-derived macrophages (MDMs). Our study provides fundamental evidence that the formation of NETs is a vital factor in driving the advancement of NASH, bridging the gap between steatosis and NASH.
Additionally, our research team discovered that NETs serve as a link between adaptive and innate immunity via the promotion of regulatory T cell differentiation and function (33). In this investigation, a murine model was subjected to a western diet to induce a NASH phenotype, unveiling a direct relationship between heightened Treg activity and the generation of NETs. Moreover, the inhibition of Treg prevented the development of the NASH liver. This fascinating discovery depends on the mitochondrial oxidative phosphorylation (OXPHOS) pathway in naïve CD4-positive T cells, with TLR4 mediating metabolic reprogramming. NETs also play a vital role in the induction of hypercoagulability in NASH patients. In this study, plasma samples obtained from both NASH patients and healthy donors were assessed, and NETs isolated from NASH patients showed higher levels of procoagulant activity and pro-inflammatory factors than those from healthy controls (56). A recent publication revealed that changes in linoleic acid and γ-linolenic acid (GLA) are responsible for initiating the formation of NETs in an early NASH mouse model, suggesting that fatty acids play a crucial role in regulating NETs formation in the context of NASH (58).

NETs in NASH-fibrosis

Hepatic fibrosis plays a critical role in determining the mortality rates of NASH patients and is also linked to the long-term prognoses of individuals diagnosed with NAFLD (72,73). A prominent factor during NASH-fibrosis development is hepatic stellate cell (HSC) activation (74,75). The interaction between neutrophils and HSCs in liver fibrosis has been studied for years. For example, increased infiltration of neutrophil-derived IL-17A promotes advanced liver fibrosis by enhancing HSC activation (76,77). Neutrophils also activate HSCs through reactive oxygen species (ROS) and MPO production (78,79). However, our understanding of the role of NETs during liver fibrosis is still limited, and the role of NETs in modulating the interaction between neutrophils and HSCs is a promising direction for future research.

MPO serves as a central element and inflammatory enzyme within neutrophils. MPO-deficient neutrophils are unable to form NETs, underscoring the essential role of MPO in NETs formation (80). Within a NASH-fibrosis experimental framework triggered by a high-fat diet lacking methionine and choline, MPO deficiency in knockout mice led to a notable decline in fibrosis. This finding implies a potential contribution of NETs to the advancement of NASH-fibrosis (81).

The G protein-coupled S1P receptor 2 (S1PR2) is a specific receptor for sphingosine 1-phosphate (S1P), another key responder in inflammation. A recent study shows that knockdown of S1PR2 can decrease liver inflammation and fibrosis by inhibiting NETs formation (53). The presence of S1PR1 has been identified as essential for the recruitment of neutrophils (82). Considerable research has been carried out on the involvement of S1P receptors in both adaptive and innate immunity and their influence on various immune cell types, such as T cells, B cells, NK cells, macrophages, dendritic cells, and neutrophils (83)(84)(85)(86)(87)(88). Although few studies directly address the immune response involving NETs and liver fibrosis, the impact of NETs on immune responses could be one of the drivers of liver fibrosis progression.
NETs in cirrhosis

Liver cirrhosis is the irreversible end stage of chronic fatty liver disease. The pathophysiologic picture of liver cirrhosis is characterized by hepatocellular dedifferentiation, fibrous scarring, HSC activation, and increased collagen deposition (89,90). Cirrhosis can be classified into two clinical stages: compensated and decompensated, also known as the asymptomatic and symptomatic stages. According to the U.S. Department of Veterans Affairs, patients with compensated cirrhosis with conditions such as HCC, or with advanced decompensated cirrhosis, are considered eligible for liver transplantation.

The increased prevalence of bacterial infection in individuals diagnosed with cirrhosis has been established for over three decades (91)(92)(93). Early studies have shown that liver cirrhosis patients have compromised neutrophil recruitment, resulting in an impaired immune response (94,95). Changes in neutrophils, such as impaired NADPH oxidase activity and reduced MPO release, could explain the increased vulnerability to bacterial infections in individuals with decompensated cirrhosis (96). Interleukin-22 (IL-22) facilitates the progression of cirrhosis to HCC through signal transducer and activator of transcription 3 (STAT3) activation (97). Neutrophils have been shown to be one of the sources of IL-22 production (98), and IL-22 secreted by T cells can recruit neutrophils to peripheral tissues (99). Moreover, IL-22 and IL-17 production by neutrophils promotes NETs formation (100,101). The association between NETs and IL-22 during NAFLD remains unclear. New investigations have been conducted to elucidate the involvement of NETs in liver cirrhosis. Markers of NETs formation, MPO-DNA and citrullinated histone H3 (H3Cit)-associated DNA (H3Cit-DNA), were significantly elevated in patients with cirrhotic livers compared to healthy controls (54). Additionally, in cases of liver cirrhosis with portal vein thrombosis (PVT), NETs have been shown to enhance procoagulant activity (55). Elevated NETs levels in cirrhotic livers with PVT may act as a link to malignancy in HCC (102).

NETs in NASH-HCC

The role of NETs in HCC, the most common form of primary liver cancer, is becoming increasingly recognized. Our previous research indicated that the invasion of neutrophils and the creation of NETs play a role in the progression of HCC within the context of NASH (32). We observed macrophage infiltration at eight weeks in the STAM NASH mouse model, whereas neutrophil infiltration was observed at five weeks. Notably, the number of KCs decreased at an early age, potentially being replaced by infiltrating macrophages. This observation aligns with recent studies reporting a decrease in KCs during NASH, subsequently replaced by infiltrating lipid-associated macrophages (LAMs) (9,103,104). Collectively, these reports suggest that NETs may interact with macrophages and contribute to the development of NASH-HCC. However, the association between NETs and other immune cells requires further investigation.
Cancer cells induce NETs formation, and NETs in turn stimulate cancer cell invasion and migration (105,106) through the activation of various pathways (107,108). NETs-DNA promotes cancer metastasis by interacting with the protein coiled-coil domain containing 25 (CCDC25) (109). In the NASH-HCC mouse model, there was an observed reduction in CD4+ and CD8+ T cells, accompanied by elevated levels of PD-L1 and indicators of T cell exhaustion (52). Furthermore, NETs interact with regulatory T cells (Treg) and promote their activity by modulating the metabolic reprogramming of naive CD4+ T cells. Depletion of Treg has been shown to prevent the development of HCC in NASH (33), further demonstrating a potential mechanism by which NETs in NAFLD interact with other immune cells. Additionally, the internalization of NETs by HCC cells increased COX2 expression through Toll-like receptor TLR4 and TLR9 activation (110).

Research has indicated a correlation between heightened oxidative stress and inflammation and the formation of NETs in individuals suffering from liver cirrhosis and HCC. The buildup of NETs in the liver can trigger liver injury and foster fibrosis, ultimately leading to cirrhosis and elevating the risk of HCC. A recent study shows that CXCR2-positive neutrophils infiltrate NASH-HCC models and that anti-PD-1/CXCR2 inhibitor combination therapy reprograms tumor-associated neutrophils (TANs). However, whether NETs participate in this process still needs to be examined (111).

Potential therapeutic targeting of NETs in NAFLD

Therapeutic targeting of NETs in NAFLD is an active area of research. NETs have been implicated in the transformation of NAFLD to its more severe form, NASH, and have been shown to be associated with liver fibrosis, cirrhosis, and HCC. Several potential therapeutic strategies are being explored for targeting NETs in NAFLD, including:

Anti-inflammatory agents

In a clinical investigation, asthma patients undergoing daily treatment with inhaled corticosteroids (ICS) displayed notably reduced mean plasma NETs levels compared to patients who either did not use ICS or only used them infrequently (112). Studies have revealed that the antioxidant drug resveratrol (RESV) can effectively reduce the generation of NETs by neutrophils in individuals afflicted with severe COVID-19 infection (113). This finding suggests a potential role for RESV in mitigating the development and accumulation of NETs in the liver. In a preclinical investigation, the combination of DNase1 with hydroxychloroquine (HCQ) or aspirin, a nonsteroidal anti-inflammatory drug (NSAID), exhibited a remarkable inhibition of HCC metastasis in a murine model (110). Moreover, in a rat model of hepatic fibrosis elicited by thioacetamide (TAA) administration, the groups treated with low-dose aspirin, high-dose aspirin, or enoxaparin exhibited a significantly reduced liver fibrosis score compared to the untreated group (114).

NETs-degrading enzymes

Enzymes that can break down NETs, such as DNase1, have been used to counteract NETs formation for decades. In a mouse model of necrotizing fasciitis, Group A Streptococcus (GAS) expressing the DNase Sda1 was identified as a contributor to bacterial virulence; Sda1 effectively breaks down NETs both in vitro and in vivo (115). Studies have demonstrated that DNase exhibits therapeutic potential in animal models of NASH-HCC (32).
Anticoagulants

Medications that inhibit blood clotting, including heparin and warfarin, have been found to decrease NETs formation and enhance liver function in mouse models of NAFLD (116,117). In a rat model of liver fibrosis induced by chronic carbon tetrachloride (CCl4) administration, low-molecular-weight (LMW) heparin and dalteparin sodium significantly ameliorated hepatic fibrogenesis (118). Additionally, in prothrombotic factor (F) V Leiden mutant mice, C57BL/6 wild-type mice, and warfarin-treated mice exposed to CCl4, experimental results showed that warfarin effectively reduced hydroxyproline content and fibrosis score (119). Moreover, in a cirrhotic Wistar rat model, notable reductions in liver fibrosis, HSC activation, and desmin expression were found in the enoxaparin-treated group (120). Consistently, rivaroxaban (RVXB), an oral anticoagulant, also dramatically reduced HSC activation and intrahepatic microthrombosis in a CCl4-induced cirrhosis rat model (121). Alongside these preclinical studies, the first clinical trial was established in 2012. In this trial, 70 patients with advanced cirrhosis were randomly assigned to two groups, with or without enoxaparin treatment. No patients in the enoxaparin group developed PVT (122). Although no direct evidence has shown that enoxaparin limits NETs accumulation, this provides a potential direction for further exploration of its role in NETs formation.
Nevertheless, these potential therapeutic strategies are currently in the preliminary stages of research, necessitating further investigations to determine their effectiveness and safety in human populations.Exploring immune system modulation to reduce the formation and accumulation of NETs and addressing the root causes of inflammation and oxidative stress may present novel therapeutic avenues for individuals with NAFLD.It is important to note that while targeting NETs in NAFLD is an important avenue of research, the management of NAFLD requires a comprehensive approach that addresses the underlying causes of oxidative stress and inflammation, such as obesity, insulin resistance, and poor dietary habits. Future perspectives Targeting NETs in NAFLD holds promise for improving its management and preventing progression to more severe forms like NASH and cirrhosis.Further laboratory studies and clinical trials are needed to develop new drugs that specifically target NETs formation in NAFLD.These drugs could be used alone or in combination with existing therapeutic strategies to improve liver health and reduce the risk of liver cirrhosis and HCC (139,140).As our understanding of the role of NETs in NAFLD improves, personalized medicine approaches may gain wider acceptance.These may involve genetic and biomarker-based approaches to identify patients at high risk of developing NAFLD and tailoring therapeutic strategies accordingly (141)(142)(143).Combining multiple therapeutic strategies, such as antiinflammatory agents (114), NET-degrading enzymes (115), anticoagulants (116), immune modulation, and vitamin C (125) may also provide synergistic benefits and enhance the efficacy of NETs-targeted therapies.To assess the effectiveness and safety of NETs-targeted therapies in human populations, extensive clinical trials on a large scale are necessary.These trials should be rigorously designed, randomized, controlled, and inclusive of participants at different stages of NAFLD (10).NETs-targeted therapies should be integrated with existing therapies for NAFLD, such as lifestyle modifications, weight loss, and medications to improve insulin sensitivity to maximize their therapeutic benefits.Overall, the future of NET-targeted therapies for NAFLD holds great promise, and further research in this area could transform the treatment of chronic liver diseases and improve the health outcomes of millions of individuals worldwide. Conclusion NETs play a complex role in the pathogenesis of NAFLD.While they are critical to the body's defense against infection and inflammation, their excessive formation and accumulation can contribute to liver damage and disease progression, potentially resulting in liver failure.Additional investigations are essential to fully elucidate the precise mechanisms through which NETs impact the development of NAFLD and to identify effective strategies for targeting NETs in NAFLD.This pursuit may involve exploring new drugs, personalized medicine approaches, and combination therapies, as well as their integration with existing treatments. 
To comprehensively understand the mechanism of NETs in NAFLD, it is essential also to examine their role in other common liver diseases.This broader perspective can contextualize our current knowledge of NETs in NAFLD amidst liver-related conditions.For example, a previous study demonstrated that acute alcohol consumption reduces LPS-induced NETs formation during alcohol hepatitis (AH) in mice (144).The effect may be dependent on the elevated levels of IL-6.In other sterile liver inflammation scenarios, such as ischemia/reperfusion (I/R) injury, NETs formation is induced by damage-associated molecular patterns (DAMPs), activating TLR4 and TLR9 signaling pathways (145). Furthermore, the imbalance of the gut-liver axis (GLA) during NAFLD has been known for years (146).Previous studies have shown that microorganisms can stimulate the formation of NETs (16,17).Therefore, gaining insights into the connection between gut microbiota and NETs in NAFLD represents a promising avenue for research.This exploration provides an opportunity to develop new microbiota-based therapies, such as PPS treatment.The potential benefits of probiotics in delaying NAFLD development through the modulation of the LPS/TLR4 signaling pathway have been examined (147).Moreover, investigations into prebiotics and synbiotics in the context of NAFLD (148,149) suggest that combining the benefits of gut microbiota treatments with NETs inhibition may offer a novel approach to mitigate NAFLD severity. Finally, the interplay between NETs and other immune cells in the development of NAFLD needs deeper exploration.Reports have highlighted interactions between NETs and Treg, along with elevated levels of dendritic cells (DCs) and B cells during NAFLD (33).Emerging evidence also suggests an increased infiltration of macrophages in NAFLD (32, 33, 58, 103).These ongoing investigations imply that NETs may influence a complex network within the immune environment during NAFLD.A more thorough understanding of these interactions may pave the way for innovative therapeutic strategies aimed at managing NAFLD and its complications through immune response modulation.Supervision, Writingreview & editing.HZ: Conceptualization, Funding acquisition, Supervision, Writingreview & editing. Funding The author(s) declare financial support was received for the research, authorship, and/or publication of this article.This work was supported by the National Institutes of Health R01-CA214865, R01-GM95566 to AT.State funding within the UVA Comprehensive Cancer Center "IDEA-Cancer pilot award", "Cancer Therapeutics (CRX) pilot award" to HZ. TABLE 1 The involvement of NETs in the development of NAFLD.
Spermidine reduces neuroinflammation and soluble amyloid beta in an Alzheimer's disease mouse model

Background

Deposition of amyloid beta (Aβ) and hyperphosphorylated tau, along with glial cell-mediated neuroinflammation, are prominent pathogenic hallmarks of Alzheimer's disease (AD). In recent years, impairment of autophagy has been identified as another important feature contributing to AD progression. Therefore, the potential of the autophagy activator spermidine, a small body-endogenous polyamine often used as a dietary supplement, was assessed on Aβ pathology and glial cell-mediated neuroinflammation.

Results

Oral treatment of the amyloid-prone AD-like APPPS1 mice with spermidine reduced neurotoxic soluble Aβ and decreased AD-associated neuroinflammation. Mechanistically, single nuclei sequencing revealed AD-associated microglia to be the main target of spermidine. This microglia population was characterized by increased AXL levels and expression of genes implicated in cell migration and phagocytosis. A subsequent proteome analysis of isolated microglia confirmed the anti-inflammatory and cytoskeletal effects of spermidine in APPPS1 mice. In primary microglia and astrocytes, spermidine-induced autophagy subsequently affected TLR3- and TLR4-mediated inflammatory processes, phagocytosis of Aβ and motility. Interestingly, spermidine regulated the neuroinflammatory response of microglia beyond transcriptional control by interfering with the assembly of the inflammasome.

Conclusions

Our data highlight that the autophagy activator spermidine holds the potential to enhance Aβ degradation and to counteract glia-mediated neuroinflammation in AD pathology.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12974-022-02534-7.

Growing evidence points to the role of microglia, the brain's intrinsic myeloid cells, in regulating and potentially driving AD pathogenesis. Microglia are essential for maintaining brain homeostasis and respond to AD pathology by transforming into disease-associated microglia (DAM) [1], an activated and transcriptionally distinct state, which is associated with alterations in proliferation, phagocytic behavior and increased cytokine production [2]. Similarly, astrocytes produce cytokines upon activation with Aβ [3,4]. The link between neuroinflammation and neurodegenerative diseases is strengthened by the profound effects of maternal immune activation (e.g., by poly I:C injections) on the development of neurodegenerative diseases [5][6][7], demonstrating a crucial role of inflammatory events in the initiation of a vicious cycle of neuropathological alterations. A growing set of data, including those derived from genome-wide association studies of various human diseases by the Wellcome Trust Case Control Consortium [8], indicates that autophagy, one of the main degradation and quality control pathways of the cell, is dysregulated in AD patients and AD mouse models [9,10]. Autophagy may interfere with AD pathology either by regulating Aβ degradation and/or by modulating neuroinflammatory processes. For both interference points, the mechanisms and target cells are still not fully understood. Mice deficient in the autophagic protein ATG16L1 exhibited a specific increase of interleukin (IL)-1β and IL-18 in macrophages and severe colitis, which was ameliorated by anti-IL-1β and IL-18 antibody administration [11].
Recently, we could show that a reduction of the key autophagic protein Beclin 1 (BECN1), which is also decreased in AD patients [12,13], resulted in an enhanced release of IL-1β and IL-18 by microglia [14]. The multimeric NLRP3 inflammasome complex, responsible for processing pro-IL-1β and pro-IL-18 into their mature forms by activated caspase-1 (CASP1) [15], was shown to be degraded by autophagy [14,16]. Lack of the NLRP3-inflammasome axis resulted in amelioration of neuroinflammation and disease pathology in several neurodegenerative mouse models [17][18][19], thus emphasizing that activation of autophagy presents an intriguing therapeutic target to counteract neuroinflammation.

The small endogenous polyamine and nutritional supplement spermidine is known to induce autophagy by inhibiting different acetyltransferases [20,21] and to extend the life span of flies, worms and yeast [21][22][23][24]. In addition, spermidine supplementation improved clinical scores and neuroinflammation in mice with experimental autoimmune encephalomyelitis (EAE) [24,25], protected dopaminergic neurons in a Parkinson's disease rat model [26], and exhibited neuroprotective effects and anti-inflammatory properties in a murine model of accelerated aging [27]. Consistent with these observations, spermidine decreased the inflammatory response of macrophages and the microglial cell line BV2 upon LPS stimulation in vitro [28][29][30]. Recent data showed that polyamines improved age-impaired cognitive function and tau-mediated memory impairment in mice [31,32] and impaired COVID-19 virus particle production [33]. These findings led us to investigate the yet unknown potential of spermidine to interfere with AD pathology and chronic neuroinflammation.

Here, we show that spermidine treatment of the AD-like APPPS1 mice reduced soluble Aβ species. Applying single nuclei sequencing and liquid chromatography tandem mass spectrometry, crucial underlying changes in microglia were identified, namely, the DAM marker AXL and pathways associated with cell migration, phagocytosis, autophagy and anti-inflammatory responses. At later stages of disease pathology, spermidine reduced CNS-wide AD-associated neuroinflammation in vivo, which correlates with its targeting of key inflammatory signaling pathways in vitro. We therefore provide evidence that spermidine enhances degradation of Aβ and subsequently counteracts microglia-mediated neuroinflammation.

Materials and methods

Mice and spermidine treatment

APPPS1+/− mice [34] were used as an Alzheimer's disease-like mouse model. Casp1−/− mice were a kind gift from F. Knauf and M. Reichel, Medizinische Klinik m.S. Nephrologie und Internistische Intensivmedizin, Charité Berlin. Beclin1flox/flox mice were a kind gift from Tony Wyss-Coray (Stanford University School of Medicine, USA). APPPS1+/− mice and littermate wild-type control (WT) mice were treated with 3 mM spermidine dissolved in their drinking water (changed twice a week) from an age of 30 days until an age of either 120 days or 290 days. Control mice received only water (H2O). Prior to each exchange of the drinking bottles, the weight of the bottles was determined and used to calculate the average volume consumed per animal per day. Animals were kept in individually ventilated cages with a 12 h light cycle and with food and water ad libitum. All animal experiments were conducted in accordance with animal welfare acts and were approved by the regional office for health and social service in Berlin (LaGeSo).
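The bottle-weighing bookkeeping described above reduces to simple arithmetic; the sketch below shows one way to compute it. The 3 mM spermidine concentration comes from the text, while the bottle weights, the number of mice per cage, and the approximation of water density as 1 g/mL are illustrative assumptions, not reported values.

```python
# Estimate per-mouse daily water and spermidine intake from bottle weights.
def daily_intake(w_start_g, w_end_g, days, n_mice, conc_mM=3.0):
    volume_ml = (w_start_g - w_end_g) / 1.0       # g -> mL, density ~1 g/mL
    ml_per_mouse_day = volume_ml / (days * n_mice)
    umol_per_mouse_day = conc_mM * ml_per_mouse_day  # mM * mL = micromol
    return ml_per_mouse_day, umol_per_mouse_day

# Hypothetical example: bottle refilled twice a week (3.5 days), 4 mice/cage.
ml, umol = daily_intake(w_start_g=250.0, w_end_g=180.0, days=3.5, n_mice=4)
print(f"{ml:.1f} mL/day, {umol:.1f} umol spermidine/day")  # 5.0 mL, 15.0 umol
```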
Tissue preparation

Mice were anesthetized with isoflurane, euthanized by CO2 exposure and transcardially perfused with PBS. Brains were removed from the skull and divided sagittally. The left hemisphere was fixed with 4% paraformaldehyde for 24 h at 4 °C and subsequently immersed in 30% sucrose until sectioning for immunohistochemistry was performed. The right hemisphere was snap-frozen in liquid nitrogen and stored at −80 °C for a 3-step protein extraction using buffers of increasing stringency, as described previously [35]. In brief, the hemisphere was homogenized in Tris-buffered saline (TBS) buffer (20 mM Tris, 137 mM NaCl, pH = 7.6) to extract soluble proteins, in Triton X buffer (TBS buffer containing 1% Triton X-100) for membrane-bound proteins, and in SDS buffer (2% SDS in ddH2O) for the SDS-soluble fraction of Aβ, which we here refer to as insoluble Aβ. The protein fractions were extracted by ultracentrifugation at 100,000 g for 45 min after initial homogenization with a tissue homogenizer and a 1 ml syringe with G26 cannulas. The respective supernatants were collected and frozen at −80 °C for downstream analysis. Protein concentration was determined using the QuantiPro BCA Protein Assay Kit (Pierce) according to the manufacturer's protocol with a Tecan Infinite® 200 Pro (Tecan Life Sciences).

Quantification of Aβ levels and pro-inflammatory cytokines

Aβ40 and Aβ42 levels of brain protein fractions were measured using the 96-well MultiSpot V-PLEX Aβ Peptide Panel 1 (6E10) Kit (MesoScale Discovery, K15200E-1). While the TBS and TX fractions were not diluted, the SDS fraction was diluted 1:500 with Diluent 35. Cytokine concentrations were measured in the undiluted TBS fraction or in the cell supernatant using the V-PLEX Pro-inflammatory Panel 1 (MesoScale Discovery, K15048D1). For all samples, duplicates were measured and concentrations in the TBS fraction were normalized to BCA values.

Histology

Paraformaldehyde-fixed and sucrose-treated hemispheres were frozen and cryosectioned coronally at 40 µm using a cryostat (Thermo Scientific HM 560). Details of the different staining procedures are described in Additional file 2.

Brain slice culture

The brains of C57BL/6J and APPPS1 mice were harvested, the cerebellum was removed and the hemispheres were mounted on a cutting disk using a thin layer of superglue. Hemispheres were cut using the vibratome platform submerged in chilled medium consisting of DMEM medium (Invitrogen, 41966-029) supplemented with 1% penicillin/streptomycin (Sigma, P0781-20ML). Coronal slicing was performed from anterior to posterior after discarding the first 1 mm of tissue, generating 10 × 300 µm sequential slices per brain with the vibrating frequency set to 10 and the speed to 3. Brain slices were cultured in pairs in 1 ml culture medium at 35 °C and 5% CO2 in 6-well plates. Pre-treatment with the indicated spermidine concentrations was started immediately for 2 h. Subsequently, LPS (10 µg/ml) was added to the medium for 3 h, followed by the addition of ATP (5 mM) for an additional 3 h. Afterwards, the culture medium was frozen for subsequent analyses. After isolating neonatal microglia, neonatal astrocytes were separated by MACS. Neonatal astrocytes were detached with 0.05% trypsin, pelleted by centrifugation and incubated with CD11b microbeads (Miltenyi Biotec, 130-093-634) for 15 min at 4 °C to negatively isolate astrocytes.
Afterwards, the cell suspension was passed through LS columns (Miltenyi Biotec, 130-042-401) placed on an OctoMACS™ manual separator and the flow-through containing the astrocytes was collected. Subsequently, astrocytes were cultured in complete medium for 2 days before being treated. For all experiments, 100,000 cells were seeded in 24-well plates if not stated otherwise.

Cell migration/chemotaxis assay

Cell migration was assessed using the Cell Migration/Chemotaxis Assay Kit (96-well, 8 µm) (ab235673). The manufacturer's instructions were followed, and a standard of dyed cells was prepared for each biological replicate. Cell numbers were proportional to the fluorescence at Ex/Em = 530/590 nm measured with an Infinite® 200 Pro (Tecan Life Sciences) plate reader. ATP (300 µM, Sigma Aldrich, A6419-5G) was used in the bottom chamber as the migration-inducing stimulus. Cells were seeded at a density of 50,000 cells per well in the top chamber and, if treated, supplemented with 10 µM spermidine trihydrochloride (Sigma, S2501-5G). In both chambers, DMEM medium (Invitrogen, 41966-029) supplemented with 1% penicillin/streptomycin (Sigma, P0781-20ML) was used. After an incubation of approximately 20 h at 37 °C with 5% CO2, the cells remaining on top of the membrane were removed with a cotton swab and the cells that adhered to the bottom were dissociated. The number of cells that had migrated through the semipermeable membrane of the Boyden chamber was calculated based on the measured fluorescence and the generated linear-regression standard curve with a range of 0-12,500 cells.

Wound healing/scratch assay

Cells were seeded at a density of 300,000 cells/well of a 24-well plate. After 8 h of adherence, cells were treated with 3 µM or 10 µM spermidine trihydrochloride (Sigma, S2501-5G) in complete medium. After 15 h of incubation, the cell layer was scratched with a 200 µl pipette tip. Images were taken with a Zeiss Axio Observer Z1 inverted phase-contrast fluorescence microscope using the Zen 2 blue software for 72 h at the indicated timepoints. Acquired images were analyzed using ImageJ by defining the gap area right after scratching (0 h) as the region of interest. The threshold was set to include every cell inside the region of interest; the area fraction in percent was measured and normalized to the respective value at 0 h.

Western blot

For ASC crosslinking, 1 mM DSS (Thermo, A39267) was added to freshly harvested microglia in PBS for 30 min. All cell pellets were lysed in protein sample buffer containing 0.12 M Tris-HCl (pH 6.8), 4% SDS, 20% glycerol and 5% β-mercaptoethanol. Proteins were separated by Tris-Tricine polyacrylamide gel electrophoresis (PAGE) and transferred by wet blotting onto a nitrocellulose membrane. After blocking with 1% skim milk in Tris-buffered saline with 0.5% Tween20 (TBST), primary antibodies were added (Additional file 2). For signal detection, the SuperSignal West Femto Maximum Sensitivity Substrate (ThermoFisher, 34096) was used. Western blots were analyzed by quantifying the respective intensities of each band using ImageJ. All samples were normalized to ACTIN levels or to the whole protein content of the supernatant.

Quantitative real-time PCR

For total RNA isolation, the RNeasy Mini kit (Qiagen, 74104) was used and cells were lysed directly in the provided RLT lysis buffer in the cell culture plate. Reverse transcription into cDNA was performed using the High-Capacity cDNA Reverse Transcription kit (ThermoFisher, 4368813). The manufacturer's instructions for both kits were followed.
Wound healing/scratch assay Cells were seeded at a density of 300,000 cells per well of a 24-well plate. After 8 h of adherence, cells were treated with 3 µM or 10 µM spermidine trihydrochloride (Sigma, S2501-5G) in complete medium. After 15 h of incubation, the cell layer was scratched with a 200 µl pipette tip. Images were taken with a Zeiss Axio Observer Z1 Inverted Phase Contrast Fluorescence Microscope using the Zen 2 blue software for 72 h at the indicated timepoints. Acquired images were analyzed using ImageJ by defining the gap area right after scratching (0 h) as the region of interest. The threshold was set to include every cell inside the region of interest; the area fraction in percent was measured and normalized to the respective value at 0 h. Western blot For ASC crosslinking, 1 mM DSS (Thermo, A39267) was added to freshly harvested microglia in PBS for 30 min. All cell pellets were lysed in protein sample buffer containing 0.12 M Tris-HCl (pH 6.8), 4% SDS, 20% glycerol and 5% β-mercaptoethanol. Proteins were separated by Tris-Tricine polyacrylamide gel electrophoresis (PAGE) and transferred by wet blotting onto a nitrocellulose membrane. After blocking with 1% skim milk in Tris-buffered saline with 0.5% Tween20 (TBST), primary antibodies were added (Additional file 2). For signal detection, the SuperSignal West Femto Maximum Sensitivity Substrate (ThermoFisher, 34096) was used. Western blots were analyzed by quantifying the respective intensities of each band using ImageJ. All samples were normalized to ACTIN levels or to the whole protein content in the supernatant. Quantitative real-time PCR For total RNA isolation, the RNeasy Mini kit (Qiagen, 74104) was used and cells were directly lysed in the provided RLT lysis buffer in the cell culture plate. Reverse transcription into cDNA was performed using the High-Capacity cDNA Reverse Transcription kit (ThermoFisher, 4368813). The manufacturer's instructions for both kits were followed. Quantitative PCR was conducted on a QuantStudio 6 Flex Real-Time PCR System (Applied Biosystems) using 12 ng cDNA per reaction. Gene expression was analyzed in 384-well plates using the TaqMan Fast Universal Master Mix (Applied Biosystems, 4364103) and TaqMan primers as described in Additional file 2. Using the double delta Ct method, values were normalized to the housekeeping gene Actin and to non-treated controls.
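The double delta Ct normalization above corresponds to the usual 2^(−ΔΔCt) fold change; a worked sketch in Python (the Ct values are invented):

```python
# Fold change by the 2^(-ddCt) method: normalize the gene of interest to
# Actin, then to the non-treated control. Ct values below are invented.

def fold_change(ct_gene, ct_actin, ct_gene_ctrl, ct_actin_ctrl):
    d_ct_sample = ct_gene - ct_actin              # delta Ct, treated sample
    d_ct_control = ct_gene_ctrl - ct_actin_ctrl   # delta Ct, non-treated control
    dd_ct = d_ct_sample - d_ct_control            # delta delta Ct
    return 2 ** (-dd_ct)

# One cycle earlier than control at equal Actin -> roughly 2-fold induction
print(fold_change(24.0, 18.0, 25.0, 18.0))  # 2.0
```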
ELISA Cytokine concentrations in the supernatant of cultured cells were measured using the IL-1β (eBioscience, 88701388), IL-18 (Thermo Fisher, 88-50618-22), TNF-α (eBioscience, 88723477) and IL-6 (eBioscience, 88706488) enzyme-linked immunosorbent assay (ELISA) kits according to the manufacturer's instructions. The absorbance was read at a wavelength of 450 nm with a reference wavelength of 570 nm on the Infinite® 200 Pro microplate reader (Tecan Life Sciences) and analyzed using the Magellan Tecan Software. Immunocytochemistry and confocal microscopy Cells were seeded at a density of 50,000 cells per well on 12 mm coverslips. After treatment, cells were fixed with 4% paraformaldehyde for 20 min, permeabilized with 0.1% Triton X-100 in PBS for another 20 min and blocked with 3% bovine serum in PBS for 1 h. The primary antibodies (anti-ASC, AdipoGen, AG-25B-0006, 1:500; anti-IBA1, Wako 019-19741, 1:1000) were added overnight at 4 °C. Subsequently, cells were incubated with the fluorescent secondary antibodies (Alexa Fluor 568-conjugated anti-rabbit IgG, 1:500, Invitrogen, A11011; Alexa Fluor 488-conjugated anti-rabbit IgG, Invitrogen, A21206) for 3 h at room temperature. Cell nuclei were counterstained with DAPI (Roche, 10236276001) and coverslips were embedded in fluorescent mounting medium (Dako, S3023). Images were acquired using a Leica TCS SP5 confocal laser scanning microscope controlled by the LAS AF scan software (Leica Microsystems, Wetzlar, Germany). Z-stacks were taken and images are presented as the maximum projection of the z-stack. The number and size of ASC specks were assessed using ImageJ software as described before [14]. Aβ preparation Labeling. Aβ 1-42 peptides (Cayman Chemicals) were resuspended in hexafluoroisopropanol to obtain a 1 mM solution, evaporated and stored as aliquots. For each preparation, 125 µg of amyloid-β was dissolved in 2 µl DMSO, sonicated for 10 min in a water bath and supplemented with a 3× molar excess of NHS-ester ATTO647N dye (Sigma) in 1× PBS (phosphate-buffered saline, Gibco), and the pH was adjusted to 9 with sodium bicarbonate. After 1 h of labeling reaction in the dark at room temperature, the labeled peptides were separated using spin columns (Mobicol, Mobitec) loaded with 0.7 ml of Sephadex G25 beads (Cytiva). Clean, chromatography-grade H2O (LiChrosolv LC-MS grade, Merck) was used for washing, equilibration and elution. Peptide concentrations were determined on 15% SDS-PAGE gels by comparing the band intensities of the input with the eluted fractions. Maturation. Aβ peptides were matured according to [36]. In short, to obtain oligomeric forms, Aβ was resuspended in 1× PBS (final concentration) and incubated at 4 °C overnight. Turbidity measurements and a ThT aggregation assay were performed on a Synergy H1 Hybrid Multi-Mode Microplate Reader (BioTek Instruments) to confirm the formation of Aβ oligomers and fibrils, as described in [37]. Phagocytosis assay Neonatal microglia (50,000 cells per 24-well) were seeded on coverslips and pre-treated for 18 h with spermidine. 0.5 µM ATTO647N-labelled Aβ was added and after 24 h cells were fixed and counterstained with anti-IBA1 (Wako 019-19741, 1:1000). Quantification of Z-stacks taken at the confocal microscope with constant settings was performed with ImageJ: a mask was created for each IBA1-stained cell body and the intensity of the Aβ signal in every cell was determined. The mean intensity per phagocytic cell was calculated, as was the number of Aβ-containing phagocytic cells. Single nuclei sequencing (snRNA-seq) Nuclei preparation, single nuclei sequencing and the single nuclei sequencing analysis are described in Additional file 2. The dataset has been deposited in the GEO database, GSE206202. Proteomics analysis Sample preparation, liquid chromatography mass spectrometry and data analysis are described in Additional file 2. The mass spectrometry proteomics data have been deposited with the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD034638. Data analysis All values are presented as mean ± SEM (standard error of the mean). All data sets were tested for normality using the Shapiro-Wilk test. For normally distributed data, parametric tests were used: Student's t test for pairwise comparisons or a one-way ANOVA with the indicated post hoc test for multiple comparisons. If the data distribution was not normal, the corresponding non-parametric tests, the Mann-Whitney U test or the Kruskal-Wallis test with Dunn's multiple comparison test, were applied. As a reference for Dunnett's post hoc test or Dunn's multiple comparison test, either the LPS/ATP, poly I:C or Aβ samples were used. Outlier testing was performed using the ROUT method (Q = 0.5%). Statistical significance was determined using the GraphPad Prism software and is indicated as follows: *p < 0.05, **p < 0.01 and ***p < 0.001.
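The test-selection rule above can be mirrored in a few lines; a sketch with SciPy for the pairwise case (this is an illustration, not the authors' Prism workflow; the data are invented):

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk on both groups, then Student's t test if both pass,
    otherwise the Mann-Whitney U test, mirroring the rule described above."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

test, p = compare_two_groups([1.2, 1.4, 1.1, 1.3, 1.5],
                             [0.8, 0.9, 0.7, 1.0, 0.9])
print(test, p)
```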
Spermidine reduced soluble Aβ in APPPS1 mice To assess the potential of spermidine against AD pathology, we investigated its effects in APPPS1 mice. This AD-like mouse model, which harbors transgenes for the human amyloid precursor protein (APP) bearing the Swedish mutation as well as a presenilin 1 mutation, develops a strong Aβ pathology including neuroinflammation. APPPS1 mice were treated with 3 mM spermidine via their drinking water [31], starting prior to disease onset (namely, substantial Aβ deposition), at the age of 30 days (Fig. 1a). Compared to control APPPS1 mice that received pure water (H2O), spermidine-supplemented animals showed no differences in fluid uptake per day (Additional file 1: Fig. S1a). Aβ deposition was analyzed at an intermediate disease state (120 days) and at 290 days, when pathology is known to have reached a plateau. After consecutive protein extractions, soluble and insoluble/SDS-soluble Aβ40 and Aβ42 were measured by electrochemiluminescence (MesoScale Discovery panel). Spermidine supplementation significantly reduced soluble Aβ40 in both 120- and 290-day-old APPPS1 mice by 40% and soluble Aβ42 in 290-day-old mice by 49% (Fig. 1a), while not affecting insoluble Aβ (Additional file 1: Fig. S1b). These findings were further substantiated by the fact that no differences in the Aβ plaque-covered area or plaque size were observed after staining tissue sections with the fluorescent dye pFTAA (Additional file 1: Fig. S1c). Mechanistically, spermidine treatment affected neither APP production and cleavage nor BACE1 abundance in whole hemisphere lysates or in proximity to 4G8-positive Aβ plaques (Additional file 1: Fig. S1d-f). As the Aβ-degrading enzyme IDE (insulin-degrading enzyme) was also not altered by spermidine treatment (Additional file 1: Fig. S1g), we concluded that spermidine might target soluble Aβ by altering its phagocytosis and/or degradation. Spermidine treatment of APPPS1 mice induced transcriptomic alterations in microglia To gain insights into the molecular mechanisms mediating the reduced soluble Aβ levels and the cell populations affected by spermidine, comparative single nuclei sequencing (snRNA-seq) was performed. Hemispheres of three male spermidine-treated APPPS1 mice, H2O-treated APPPS1 control as well as wild type (WT) mice were analyzed at the age of 180 days, representing a midpoint in the course of pathology in this AD-like mouse model (Fig. 1b). Using fluorescence-activated cell sorted single nuclei and the 10x Genomics platform (Additional file 1: Fig. S2a), between 6500 and 10,000 cells per mouse were detected at a median depth of 1400-1700 genes. Automated clustering revealed 34 clusters, which were grouped into 7 major cell types, namely neurons, oligodendrocytes, microglia, oligodendrocyte precursors (OPC), astrocytes, macrophages and fibroblasts/vascular cells, using label transfer from a previously published mouse brain reference data set [38] (Fig. 1c, d). Interestingly, the strongest transcriptional changes were found in microglia. Fewer genes were altered in oligodendrocytes, neurons and astrocytes, while OPC and macrophages remained almost unaffected (Fig. 1e, Additional file 1: Fig. S2f). In agreement with previous single cell transcriptomic analyses of APPPS1 mice [1], two microglia subpopulations were detected. The microglia 2 cluster appeared only in APPPS1 but not in WT mice, thus presenting an AD-associated activated microglia phenotype, which was largely equivalent to the classical DAM published by Keren-Shaul et al. [1] (Additional file 1: Fig. S2b-d). To discover the main characteristics of both microglia clusters, differential gene expression followed by gene set enrichment analysis between these populations was performed. Compared to cluster 1, the AD-associated cluster 2 revealed a downregulation of genes involved in phagocytosis, endocytosis, cell adhesion and cell polarity, while upregulating neuroinflammatory responses, cell-cycle transition and autophagy (Additional file 1: Fig. S2e). To validate these changes, neonatal microglia were isolated and activated either with LPS followed by ATP, inducing the TLR4 pathway, or with the viral dsRNA poly I:C, stimulating the TLR3 pathway (Fig. 4b, c). Indeed, spermidine treatment upregulated Plexin A2 (Plxna2) expression in activated microglia (Fig. 1g). On the functional level, spermidine treatment of neonatal microglia increased migration in a scratch wound healing assay (Fig. 1h, Additional file 1: Fig. S2h) and towards ATP in a transwell migration assay (Fig. 1i), correlating well with the snRNA-seq changes in cell migration genes. Furthermore, spermidine augmented the expression of the autophagy-associated gene Ets2 in vitro (Additional file 1: Fig. S2i). Also, distinct anti-inflammatory-associated genes, Pfn1 [39], Glp2r [40], Per1 [41] and Sirt3 [42,43], were upregulated by spermidine in the AD-associated microglia cluster 2. This upregulation of the anti-inflammatory NAD-dependent deacetylase Sirt3 was confirmed in activated neonatal microglia in vitro (Additional file 1: Fig. S2j). We, therefore, hypothesize that spermidine prolongs and expands the early activated state of microglia, characterized by increased phagocytosis, cell motility, migration and proliferation, thus maintaining the surveillance mode of microglia and thereby reducing soluble Aβ.
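The clustering and cell-type annotation above are detailed in Additional file 2; purely as a generic illustration of reference-based label transfer, here is a Scanpy sketch (the file names and this exact pipeline are assumptions, not the authors' code):

```python
import scanpy as sc

# Query snRNA-seq data and an annotated mouse brain reference; file names
# are placeholders, and the reference must carry 'cell_type' labels.
adata = sc.read_h5ad("appps1_snrna.h5ad")
ref = sc.read_h5ad("mouse_brain_reference.h5ad")

# Restrict both objects to shared genes and apply matching preprocessing.
shared = adata.var_names.intersection(ref.var_names)
adata, ref = adata[:, shared].copy(), ref[:, shared].copy()
for d in (adata, ref):
    sc.pp.normalize_total(d, target_sum=1e4)
    sc.pp.log1p(d)

# Unsupervised clustering of the query (e.g., Leiden).
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)

# Embed the reference, then map the query into it and transfer labels.
sc.pp.pca(ref)
sc.pp.neighbors(ref)
sc.tl.umap(ref)
sc.tl.ingest(adata, ref, obs="cell_type")
```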
Spermidine altered AD-associated microglia and their capacity to degrade Aβ Next, the abundance of cell types was compared between spermidine- and H2O-treated APPPS1 mice. Interestingly, spermidine significantly increased the abundance of microglia cluster 2 (Fig. 2a), which correlates well with the induction of proliferation-associated genes by spermidine and with the reduced levels of the anti-proliferative gene Hpgd after acute spermidine treatment in vitro (Additional file 1: Fig. S2k). To validate that spermidine indeed altered the DAM/microglia cluster 2, APPPS1 mice treated with spermidine were stained for the established cluster 2 marker and receptor tyrosine kinase AXL (Fig. 2b). In line with the snRNA-seq, the AXL intensity normalized to the IBA1 area was significantly increased in spermidine-treated mice (Fig. 2c). As the AXL-GAS6 signaling pathway was shown to promote phagocytosis and reduce Aβ load [44], the effect of spermidine on the phagocytosis of Aβ was assessed in vitro. Spermidine pre-treatment of neonatal microglia significantly decreased the mean Aβ signal per phagocytic cell after 24 h, indicating enhanced Aβ degradation, while the percentage of phagocytic cells was not altered (Fig. 2d). In line with this, spermidine treatment increased the expression of the phagocytosis-associated actin nucleation gene Arpc3 in APPPS1 mice, which could be validated on the mRNA and protein level in spermidine-treated neonatal microglia in vitro (Additional file 1: Fig. S2l). Accordingly, spermidine reduced the expression of the transcriptional regulator Celf2 (Fig. 1f), which negatively regulates the phagocytic receptor TREM2 [45], and preserved the levels of Trem2 in activated neonatal microglia in vitro (Additional file 1: Fig. S2m). Correlating well with the observed reduction in soluble Aβ in spermidine-treated APPPS1 mice, these results show that spermidine indeed alters the phagocytosis and degradation of Aβ. (Fig. 2 legend, in brief: significance threshold adj. p value < 0.01, Axl highlighted in red; c AXL (red) and IBA1 (green) staining of tissue sections from male 180-day-old mice, AXL intensity normalized to the IBA1 area by ImageJ analysis, n = 6-10, two-tailed t-test; d neonatal microglia pre-treated for 18 h with 10 µM spermidine, fluorescently labelled oligomeric Aβ (magenta) added for a further 24 h, IBA1 (green), percentage of phagocytic cells and Aβ mean intensity density per phagocytic cell assessed by confocal microscopy, representative images shown, n = 7, two-tailed t-test; *p < 0.05, **p < 0.01, ***p < 0.001.) Spermidine treatment reduced progressive neuroinflammation in APPPS1 mice Aβ pathology and its associated changes in microglia are a generally accepted and crucial driver of neuroinflammation [2,8]. To complement the observed transcriptomic changes, we performed an unbiased proteome screening using liquid chromatography tandem mass spectrometry of microglia isolated from 180-day-old APPPS1 mice treated with spermidine or H2O as well as WT controls (Fig. 3a).
To reveal the effect of spermidine on AD pathology, we applied linear modelling integrating the information on proteins changing upon spermidine treatment of APPPS1 mice with that on proteins changing between WT and APPPS1 mice. The 72 differentially expressed proteins (alpha = 0.04, fdr < 0.3) were inversely correlated (slope = −0.77, R² = 0.85) when comparing APPPS1 against spermidine (Fig. 3b). The opposite regulation highlights the beneficial effects of spermidine on AD-associated changes. No proteins were found to be regulated in the same direction, e.g., amplifying potentially disease-driving protein changes, indicating no adverse effects of spermidine on AD pathology (Fig. 3b, Additional file 1: Fig. S3a).
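The linear-modelling selection above is spelled out in the Fig. 3 legend (reproduced in brief below); one plausible reading, in our own notation (the direction conventions for the contrasts are our assumption):

```latex
\begin{aligned}
C_1 &= \text{disease effect: } \mathrm{APPPS1}_{\mathrm{H_2O}} - \mathrm{WT},\\
C_2 &= \text{spermidine effect: } \mathrm{APPPS1}_{\mathrm{spermidine}} - \mathrm{APPPS1}_{\mathrm{H_2O}},\\
C_5 &= \tfrac{1}{2}\,(C_2 - C_1),
\end{aligned}
```

with a protein called anti-APPPS1-regulated when p(C2) < 0.04 and p(C5) < 0.04 and the sign of C2 opposes that of C1.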
To assess whether those proteins can discriminate the spermidine-treated groups from the controls, a principal component analysis (PCA) and hierarchical clustering were performed, resulting in a clear separation based on spermidine treatment (Additional file 1: Fig. S3b). To also consider coordinated changes that did not pass the significance criteria on the single-protein level, we performed gene set enrichment analysis (GSEA), a functional analysis with a special focus on biological process (GO:BP) and pathway (REACTOME) terms related to inflammation and neurodegeneration. In line with the transcriptomic results, spermidine treatment increased pathways involved in "microtubule cytoskeleton organization", "regulation of actin filament binding" and "regulation of actin binding", thus supporting the observed microglial migration changes upon spermidine treatment (Fig. 3c). While the transcriptomics analysis only revealed a few anti-inflammatory genes to be regulated by spermidine, a clear downregulation of many inflammatory pathways was found, including "acute inflammatory response" as one of the top downregulated pathways, as well as IL-1 and IL-6 signaling and inflammasome pathways (Fig. 3c). Among the downregulated pathway clusters within a REACTOME enrichment map was a large cluster of interleukin signaling-related pathways, indicating anti-inflammatory effects of spermidine. Matching previous studies [20,46], spermidine also upregulated the pathway cluster "autophagy". Furthermore, spermidine affected ubiquitin-associated pathways and SUMOylation (Fig. 3d). Comparing the pathways altered in APPPS1 mice with those found to be affected by spermidine revealed a clear reversal of the AD-associated induction of inflammation and oxidative phosphorylation and of the downregulation of cytoskeletal pathways (Fig. 3e). Thus, we conclude that spermidine reverted AD-mediated effects in APPPS1 mice. To assess whether these microglia-specific anti-inflammatory effects interfered with neuroinflammation later in the pathology, ten cytokines were quantified by electrochemiluminescence in brain homogenates of 290-day-old male spermidine-treated APPPS1 mice. Spermidine supplementation significantly reduced the AD-relevant pro-inflammatory cytokines IL-6, TNF-α, IL-12, IL-4 and IL-5 in 290-day-old mice (Fig. 3f), while not altering IL-1β (combined measurement of Pro-IL-1β and IL-1β), IFN-γ, IL-2, IL-10 and KC/GRO (Additional file 1: Fig. S3c), revealing indeed anti-inflammatory effects of spermidine in the CNS at late stages of disease pathology. (Fig. 3 legend, in brief: b proteins significantly (alpha = 0.04) regulated by spermidine in APPPS1 mice (Contrast 2) that simultaneously show a significant (alpha = 0.04) effect in Contrast 5 = (Contrast 2 − Contrast 1)/2 in the direction opposite to the AD-like model are marked in red; c volcano plot of GSEA-enriched GO:BP terms (normalized enrichment score vs. −log10 false discovery rate); d GSEA enrichment map of the top 50 REACTOME terms; for c and d, only terms related to neurodegeneration and inflammation were labelled, i.e., terms containing the strings neuro, inflamm, Clathrin, interleukin, Caspase, TNF, ubiquitin, SUMO, Alzheimer, Parkinson, Huntington, lipoprotein, autophagy, cell migration, cell motility, microtubule, actin, glia or amyloid, with long term names truncated to 50 characters; e dot plot of selected functional terms related to neuroinflammation and degeneration; f pro-inflammatory cytokines in the TBS (soluble) fraction of brain homogenates of male APPPS1 mice treated with 3 mM spermidine via the drinking water from day 30 to day 290, measured by electrochemiluminescence (MesoScale Discovery panel) and normalized to water controls; 290d APPPS1 H2O n = 14, 290d APPPS1 spermidine n = 12; two-tailed t-test; *p < 0.05, **p < 0.01, ***p < 0.001.) Spermidine exhibits direct anti-inflammatory effects on microglia To assess whether spermidine influenced neuroinflammation in a direct manner or whether its effects on neuroinflammation were secondary, acute whole hemisphere slice cultures derived from 200-day-old WT or APPPS1 mice were treated with spermidine and subsequently stimulated with LPS and ATP (Fig. 4a). LPS/ATP treatment of APPPS1 slice cultures resulted in a massive release of IL-1β and IL-6 compared to slices from WT mice. Spermidine significantly reduced their levels in both genotypes (Fig. 4a), underlining that spermidine can directly influence neuroinflammation in APPPS1 mice. Correlating with the induction of autophagy shown in the proteomic analysis of spermidine-treated APPPS1 mice, the anti-inflammatory effects of spermidine in vitro were autophagy-mediated. Spermidine significantly induced the expression of autophagic proteins (Additional file 1: Fig. S5a-d), and no effects of spermidine treatment were measured upon autophagy activation with HBSS (Additional file 1: Fig. S5e-f). Impairment of autophagy by 3-MA or the use of primary microglia with BECN1 knockout (tamoxifen-treated BECN1^flox/flox·CX3CR1^CreERT2 cultures), on the other hand, abolished the spermidine-mediated effects (Additional file 1: Fig. S5g-i). Therefore, we concluded that spermidine exerts direct anti-inflammatory effects on microglia and astrocytes in an autophagy-dependent manner, correlating with the observed reduced neuroinflammation in 290-day-old APPPS1 mice.
Spermidine regulates neuroinflammation beyond transcription by interfering with inflammasome assembly As recent studies and our proteome analysis showed that spermidine mediates some of its effects solely on the protein level [20,22,31], the effects of spermidine on neuroinflammation beyond transcriptional control were studied by treating microglia with spermidine after activating/priming them with LPS (Fig. 5a). Interestingly, post-LPS spermidine treatment (1.45 h) reduced only the release of IL-1β and IL-18 into the cell supernatant (Fig. 5b, c), while not altering the release of all other measured cytokines (Additional file 1: Fig. S6a-c). Il-1β, Il-6 and Tnf-α gene expression revealed no alterations under this spermidine treatment scheme (Additional file 1: Fig. S6d). However, increased protein levels of Pro-IL-1β and uncleaved Pro-CASP1 were found (Fig. 5d, e), indicating reduced processing at the NLRP3 inflammasome. This correlated with a reduction of cleaved and activated CASP1 and Gasdermin D (GSDMD), another substrate of CASP1, in the supernatant (Fig. 5e, Additional file 1: Fig. S6e). While NLRP3 expression was not altered on the mRNA or protein level by spermidine (Additional file 1: Fig. S6f), staining and quantification of ASC specks/inflammasomes revealed that spermidine treatment significantly reduced the number of ASC specks (Fig. 5f, g). A similar reduction was also detected in Casp1−/− microglia (Fig. 5g), indicating that spermidine did not directly interfere with Pro-CASP1 cleavage but rather with inflammasome formation. To test this hypothesis, the ASC-oligomerization inhibitor MCC950 [47] was added prior to adding ATP. No additive effect of MCC950 on top of the spermidine-mediated effects could be detected, underlining that spermidine was indeed interfering with ASC oligomerization and inflammasome formation (Fig. 5f). Consistent with this hypothesis, western blot analysis for ASC after chemical crosslinking showed a reduced appearance of ASC oligomers in spermidine-treated cells (Additional file 1: Fig. S6g), while the amount of ASC monomers was not altered (Additional file 1: Fig. S6h). Thus, spermidine treatment of activated microglia reduced IL-1β processing by interfering with the oligomerization of ASC-positive inflammasomes. Taken together, we elucidated a novel regulatory mechanism of spermidine in addition to its targeting of NF-κB-mediated transcription of pro-inflammatory genes. Thus, spermidine targets multifarious pathways, such as the degradation of Aβ, proliferation and the active reduction of inflammatory signaling, which stabilizes a presumably protective microglia population. Discussion Delaying AD progression still presents an urgent unmet clinical need. Based on recent advances in our understanding of AD pathogenesis, which resulted in an appreciation of the impact of neuroinflammation and autophagy, we assessed the therapeutic effects of the autophagy activator spermidine on Aβ pathology in APPPS1 mice. Interestingly, spermidine treatment significantly increased Aβ degradation and reduced soluble Aβ levels in vivo, while Aβ plaque burden and size were not altered. Whereas the effects of spermidine on AD pathology had not been assessed so far, De Risi et al. [48] reported that spermidine decreased soluble Aβ and α-synuclein in a mouse model with mild cognitive impairment. Despite the fact that the toxicity and importance of soluble vs.
insoluble Aβ in AD pathology is still a matter of debate, there is clear evidence that soluble Aβ causes more synaptotoxicity than plaque-bound insoluble Aβ. It was shown to alter synaptic transmission and to mediate synaptic loss and neuronal death [49][50][51], thus suggesting that targeting soluble Aβ might suffice to ameliorate AD. In line with this, recent data suggest that microglia create core plaques as a protective measure to shield the brain from soluble Aβ. The DAM marker AXL is thought to contribute to this formation [52]. In addition, the AXL-GAS6 pathway has been shown not only to suppress the microglial inflammatory response but also to mediate Aβ phagocytosis [44]. Thus, the increased AXL levels in spermidine-treated APPPS1 mice as well as the in vitro phagocytosis experiments underline that spermidine affects microglial phagocytosis and degradation of Aβ. As the APPPS1 mouse model exhibits a fast disease progression with a strong genetically driven Aβ pathology appearing at 60 days, the substantial effect of spermidine on soluble Aβ highlights its potential to interfere with AD progression. Since neuroinflammation is a known driver of plaque formation [5,6], the additional anti-inflammatory effects of spermidine may have a beneficial effect on insoluble Aβ and plaques in mice older than those we analyzed within the frame of this work (namely, older than 290 days). Notably, previous studies assessing the effect of autophagy activation on AD pathology also revealed reduced Aβ pathology [53][54][55][56]; however, the mechanisms by which autophagy modulation targets AD pathology had not been elucidated so far. By applying snRNA-seq to 180-day-old spermidine-treated APPPS1 mice, we revealed microglia as the main glial cell type targeted by spermidine. The most profound effects of spermidine on the transcriptome level were seen in the AD-associated microglia cluster 2, which was characterized by increased migration, cell motility, phagocytosis and cell proliferation. While acute activation of microglia in early disease pathology induces microglial phagocytosis and migration towards plaques, later stages of AD pathology and chronic priming of microglia with Aβ have adverse effects [57,58]. In accordance, microglial motility in the presence of Aβ plaques was found to be decreased in APPPS1 mice compared to control mice when a focal laser lesion was induced [59]. A common denominator of the transcriptome and proteome analyses was that spermidine prevented AD-associated cytoskeletal changes and thus might increase microglial migration and cell motility, as demonstrated in vitro. Accordingly, spermidine was found to promote cell migration in neural cells and keratinocytes as well as wound healing processes ex vivo and in vivo [60]. In line with previous publications [61], the proteomics analysis revealed that spermidine also preserved the energy metabolism in microglia from APPPS1 mice by affecting oxidative phosphorylation, glycolysis and gluconeogenesis. By promoting genes and proteins involved in cell motility, migration and phagocytosis, spermidine seems to delay the onset of the late-stage AD-associated microglial population. Interestingly, some of those changes might be exerted through SUMOylation. SUMOylation is, similar to ubiquitination, a post-translational modification regulating transcription, cell proliferation, and protein stability and turnover.
To our knowledge, this is the first time that an effect of spermidine on SUMOylation has been described, and it may also contribute to autophagy induction and/or protein degradation. Correspondingly, recent publications on the post-translational modification called hypusination [31,32] indicate that spermidine can exert some of its functions at the post-transcriptional level. In addition, spermidine also increased the abundance of microglia cluster 2. Although it is still under discussion whether the proliferation of microglia in AD is beneficial or detrimental [62], spermidine mediated the enlargement of a microglial subpopulation showing increased phagocytosis and cell motility, including Axl expression, as described above. Several regulated genes, such as Arpc3 [63], Glp2r [64], Sirt3 [65] and Per1 [66,67], have been reported to exert protective effects in neurodegenerative diseases or to reverse memory deficits in various models, underlining the observed protective effects of spermidine. For instance, SIRT3 was shown to target pathways similar to those targeted by spermidine, such as inflammation, including the IL-1β processing pathway [42,43], and microglial migration [68]. Even though microglia were the main glial cells affected by spermidine on the transcriptional level at 180 days, our in vitro analyses revealed that spermidine also reduced cytokine production in astrocytes, indicating that astrocytes might also be altered at later stages of AD pathology. While only few anti-inflammatory effects of spermidine were found by snRNA-seq, the proteomics analysis of microglia isolated from spermidine-treated APPPS1 mice revealed a clear downregulation of inflammatory pathways. These changes might pave the way for the brain-wide reduction in cytokines mediated by spermidine at 290 days, when APPPS1 mice are known to exhibit profound neuroinflammation. Of note, while spermidine was found to target the IL-1β processing pathway at 180 days and in vitro, no changes in IL-1β cytokine levels were found in 290-day-old APPPS1 mice. This might be due to the fact that the MSD cytokine panel does not distinguish between Pro-IL-1β and cleaved IL-1β. Next to the in vitro effects of spermidine on the transcription of cytokines by modulating the NF-κB signaling pathway, previously described in BV2 cells and macrophages [29,30], we identified a novel spermidine-modulated post-translational mechanism. Spermidine interfered with the ASC assembly of the NLRP3 inflammasome and thereby reduced the production of IL-1β. This pathway was also found to be downregulated in the proteomics analysis of microglia from spermidine-treated APPPS1 mice, again indicating that spermidine acts beyond the modulation of transcription. Conclusions Activators of autophagy such as fasting or caloric restriction, exercise, rapamycin (an inhibitor of the mechanistic target of rapamycin, mTOR) and metformin have been shown to prolong the life span of several species and to reduce Aβ deposition in different mouse models [53][54][55][56], yet most of these interventions, in contrast to the orally applicable spermidine, were problematic in terms of tolerability and/or administration. Therefore, the body-endogenous substance spermidine appears to be an attractive therapeutic dietary supplement, as it attenuated AD-relevant neuroinflammation, reduced synaptotoxic soluble Aβ and reverted AD-associated proteomic changes with no adverse effects.
Since spermidine supplementation has already been tested in humans, extending spermidine supplementation from individuals with subjective cognitive decline [32,[69][70][71]] to clinical trials aimed at testing spermidine efficacy in AD patients appears to be a tempting approach.
Global Impact of Monoclonal Antibodies (mAbs) in Children: A Focus on Anti-GD2 Simple Summary Despite being in use for almost 50 years, monoclonal antibodies face limitations in their implementation in clinical practice, particularly in pediatrics and pediatric cancer. Although technological advancements and research into new therapeutic targets have led to the development of sophisticated and effective molecules, translational barriers still exist. Integrating monoclonal antibodies (mAbs) into current treatment protocols and ensuring accessibility for all children with cancer globally remains a challenge. This review examines the biological, clinical, economic, and social limitations hindering the global implementation of mAbs in pediatric cancer, with a particular focus on anti-GD2 mAbs. Abstract Monoclonal antibodies (mAbs), as the name implies, are clonal antibodies that bind to the same antigen. mAbs are broadly used as diagnostic or therapeutic tools for neoplasms, autoimmune diseases, allergic conditions, and infections. Although most mAbs are approved for treating adult cancers, few are applicable to childhood malignancies, limited mostly to hematological cancers. As for solid tumors, only anti-disialoganglioside (GD2) mAbs are approved, specifically for neuroblastoma. Inequities of drug access have continued, affecting most therapeutic mAbs globally. To understand these challenges, a deeper dive into the complex transition from basic research to the clinic, or between marketing and regulatory agencies, is timely. This review focuses on the mAbs currently approved or under investigation in pediatric cancer, with special attention to solid tumors and anti-GD2 mAbs, and on the hurdles that limit their broad global access. Beyond understanding the mechanisms of drug resistance, the continual discovery of next-generation drugs that are safer for children and easier to administer, together with predictive biomarkers to avoid futile treatment, should ease acceptance by patients, health care professionals and regulatory agencies and thereby expand clinical utility. With better integration into the multimodal treatment of each disease, protocols that align with regional clinical practice should also improve acceptance and cost-effectiveness. Communication and collaboration between academic institutions, pharmaceutical companies, and regulatory agencies should help to ensure accessible, affordable, and sustainable health care for all. Introduction Immune therapies have exploited antibodies, cytotherapy, viruses, and vaccines designed to promote an active or passive anti-tumoral immune response. The first observation of the immune system having an anti-tumor effect was in 1866, when Wilhelm Busch in Germany documented the regression of a sarcoma in a patient after an erysipelas infection [1]. In 1891, the orthopedic surgeon Coley demonstrated remission in some patients with inoperable sarcomas by injecting streptococci and their toxins directly into the bloodstream [2]. Since then, immune therapies have evolved, built on a much deeper understanding of the immune system: mAbs can be directed against excess cytokines (e.g., TNF-alpha in Crohn's disease or rheumatoid arthritis), excess Abs (e.g., IgE in allergic diseases, CD20 on B cells, BCMA on plasma cells), alloreactive T cells (e.g., CD3 on T cells), or designed to enhance immune functions (e.g., immune checkpoint inhibitors) [13].
Against infectious agents, palivizumab, directed against the respiratory syncytial virus in premature newborns and children with bronchopulmonary dysplasia [14], or the COVID-19 mAbs under emergency use authorization (EUA) are classic examples [15][16][17]. Although approved mAbs can be borrowed from adults, their integration into the current standard of care in pediatrics, and particularly in childhood cancer, has not been easy. Although target discovery for pediatric cancers continues to expand, mechanisms of action (MOA) are being explained and therapeutic potentials uncovered, most mAbs never advance to phase 1 trials or are abandoned after a first regulatory disapproval. Traditional regulatory barriers continue to hinder drug development for ultra-rare diseases such as childhood cancer. Novel methodologies to ascertain drug efficacy need to be developed and validated in order to eliminate these bottlenecks. Reliance on expensive, time-consuming, and laborious multinational collaborative clinical trials needs to be revisited. We need to balance profit and service through collaboration between academic institutions and pharmaceutical companies working on orphan diseases to ensure accessible, affordable, and sustainable health care for all. In this work, we review the different mAbs in use and under investigation in pediatric cancer, with a special focus on solid tumors and anti-GD2 mAbs for neuroblastoma, and the obstacles that have hindered their global accessibility thus far. Monoclonal Antibodies and Pediatric Cancer Pediatric cancer enjoys survival rates of ≥80% in high-income countries, reaching 95% in acute lymphoblastic leukemia (ALL) or Wilms tumor [18][19][20]. However, the chances of survival are highly variable within tumor entities and among geographic areas of the world. Metastatic or relapsed sarcomas, high-grade brain tumors, and some rare pediatric cancers have a dismal prognosis, with no relevant therapeutic advances in decades. Unintended deaths from childhood cancers in low- and middle-income countries, once diagnosed, result from the abandonment of treatment in the setting of complex and intensive treatment regimens, death from toxicity because of insufficient supportive care options, and relapse [19]. The high frequency of long-term sequelae among childhood cancer survivors (nearly 50% with moderate to high multi-organ late effects) [20] has also overshadowed the improvement in survival, demanding a reconsideration of treatment intensification, which is typically saddled with toxicities driven to their limits. mAbs are attractive therapeutic alternatives and have already demonstrated efficacy in a variety of childhood cancers [21]. Monoclonal Antibodies in Pediatric Hematological Malignancies The first mAb ever approved for clinical use was rituximab in 1997, a human-mouse chimeric Ab discovered in 1994. Rituximab targets CD20, an antigen expressed on mature B cell hematological malignancies. The phase 3 clinical trial Inter-B-NHL Ritux 2010 (NCT01516580) demonstrated that rituximab added to the standard chemotherapy backbone achieved a 3-year event-free survival (EFS) of 93.9%, compared to 82.3% for the chemotherapy-only group, in children with mature B-cell lymphoid malignancies [22]. In 2021, rituximab obtained approval for pediatric use and, combined with chemotherapy, became the first-line treatment of high-risk non-Hodgkin lymphoma [23]. Rituximab has a low toxicity profile, mainly infusion reactions and hypogammaglobulinemia, the latter manageable with immunoglobulin replacement.
Early studies with gemtuzumab ozogamicin (GO), a humanized anti-CD33 mAb linked to the DNA-binding cytotoxin calicheamicin, showed single-agent activity in refractory pediatric and adult patients with acute myeloid leukemia (AML) (28-30% overall response) [24]. Efficacy and safety in the pediatric population were further supported by data from AAML0531 (NCT00372593), a multicenter randomized study including 1063 patients with newly diagnosed AML. GO was added to standard chemotherapy in the study arm, achieving an estimated 48% of patients free of induction failure, relapse, or death at five years, compared to 40% (95% CI: 36%, 45%) in the chemotherapy-only arm [25]. GO was FDA-approved for the treatment of relapsed or refractory CD33-positive AML in adults and pediatric patients (older than 2 years) in 2017. Three years later, the FDA extended the indication of GO for newly diagnosed CD33-positive AML to include pediatric patients 1 month and older [26]. In 2011, brentuximab vedotin, an anti-CD30 mAb-drug conjugate (ADC) carrying monomethyl auristatin E, was approved by the FDA in adults for relapsed or refractory Hodgkin lymphoma (HL) and anaplastic large-cell lymphoma (ALCL). In pediatrics, the clinical trial NCT02166463 confirmed a survival advantage in pediatric high-risk HL (3-year EFS of 92.1% in the brentuximab vedotin group compared to 82.5% for the standard-care group) with a low toxicity profile [27], gaining FDA authorization in 2022. In 2014, the FDA granted accelerated approval of blinatumomab for the treatment of Philadelphia chromosome-negative relapsed or refractory precursor B-cell acute lymphoblastic leukemia (R/R B-ALL). Blinatumomab is a bispecific mAb that elicits a cytotoxic T-cell response against CD19-positive cells. On 29 March 2018, the FDA granted accelerated approval of blinatumomab for the treatment of adult and pediatric patients with B-cell precursor ALL in first or second complete remission with minimal residual disease (MRD) greater than or equal to 0.1%. Approval was based on the open-label, multicenter, single-arm BLAST trial (NCT01207388) [28]. In 2017, the FDA approved inotuzumab ozogamicin for the treatment of adults with relapsed or refractory B-cell precursor ALL. Pediatric authorization is not yet available, but the results of several ongoing and completed clinical trials are promising (NCT02981628, EUDRA-CT 2016-000227-71). Monoclonal Antibodies Specifically Developed for Pediatric Solid Tumors Metastatic or relapsed sarcomas, high-grade brain gliomas, and some rare entities (like rhabdoid tumors) are hard to cure, with survival rates below 20% [29]. Treatment of pediatric solid tumors often relies on complex and intense combinations of chemotherapy, surgery, and radiotherapy, encumbered by significant long-term toxicities. For instance, 93% of high-risk neuroblastoma survivors suffer long-term sequelae, 71% of which are severe, including second malignancies [30]. Today, a variety of mAbs is approved for the treatment of solid tumors in adults, directed against epidermal and vascular growth factors and immune checkpoints [31][32][33][34]. In addition, mAbs in adult oncology are being generated in different formats, including antibody-drug conjugates (ADCs) and bispecific T-cell engagers (BiTEs), targeting a variety of pro-tumorigenic compounds in the microenvironment or immune checkpoints.
In contrast, the use of mAbs for pediatric solid tumors in current clinical practice remains anecdotal, with one exception: anti-disialoganglioside mAbs, which are now part of the current standard of care for neuroblastoma. Anti-GD2 Monoclonal Antibodies Alteration of ganglioside expression in cancer was first reported in 1966 in brain tumors [35] and has since been demonstrated in a large number of human tumors. Disialoganglioside 2 (GD2) is expressed on the outer cell membrane of neural and mesenchymal stem cells during early development. Although GD2 is overexpressed in cancer, its postnatal expression in healthy tissues is restricted to peripheral neurons, the central nervous system, and skin melanocytes [36]. The density of GD2 in neuroblastoma is unusually high, by some estimates as high as millions of molecules per cell [37]. High and homogeneous GD2 expression can also be found in subsets of osteosarcoma [38][39][40][41], melanoma cells [8], and some brain tumors [42][43][44]. Other solid tumors, such as soft tissue sarcomas, Ewing sarcoma, or desmoplastic small round cell tumor (DSRCT) [45,46], display a lower prevalence and a more heterogeneous expression of GD2. Anti-GD2 mAbs bind GD2-expressing tumor cells, engage FcR-bearing myeloid effectors to perform Ab-dependent cell-mediated phagocytosis (ADCP), engage FcR-bearing natural killer (NK) cells to perform Ab-dependent cell-mediated cytotoxicity (ADCC), activate complement to perform complement-dependent cytotoxicity (CDC), and, in some instances, cause direct induction of apoptosis [47]. The first-in-man use of an anti-GD2 mAb, the murine 3F8 developed in 1985 [48,49], was published in 1987 [8]. After demonstrating its potential in combating marrow disease in patients with primary refractory disease [9], or in those in second and first remission [50], the mouse Ab was humanized [47], brought to the clinic in 2011, and became FDA-approved in 2020 [51]. Dinutuximab (ch14.18), an IgG1 human-mouse chimeric switch variant of the murine mAb 14G2a, was first authorized by the FDA in 2015 in combination with sargramostim, a granulocyte-macrophage colony-stimulating factor (GM-CSF), interleukin-2 (IL-2) and 13-cis-retinoic acid (RA), for the treatment of pediatric patients with high-risk neuroblastoma with at least a partial response after first-line multimodality therapy [52]. The approval was based on results from a phase III, open-label randomized trial conducted by the Children's Oncology Group (NCT00026312: ANBL0032), in which event-free and overall survival were significantly improved among patients with high-risk neuroblastoma who responded to induction therapy, autologous stem cell transplantation, and focal radiotherapy [53]. Dinutuximab beta is a mouse-human chimeric IgG1 mAb produced in a mammalian cell line (CHO) by recombinant DNA technology (ch14.18/CHO). In March 2017, the Committee for Medicinal Products for Human Use (CHMP) adopted a positive opinion, recommending the granting of a marketing authorization under exceptional circumstances for the medicinal product dinutuximab beta (designated as an orphan medicine in 2012), intended for the treatment of high-risk neuroblastoma in children and adults. The committee also concluded that the active substance contained in dinutuximab beta could not be considered a new active substance. The European Medicines Agency (EMA) approved it for pediatric use in 2017 for the post-consolidation treatment of patients with high-risk neuroblastoma in combination with isotretinoin and IL-2.
A randomized phase III study conducted by SIOPEN demonstrated no benefit from the addition of IL-2, which has since been omitted in standard clinical practice [54]. In the same decade, naxitamab, the humanized version of m3F8 (hu3F8), received FDA breakthrough designation in 2018 and final approval in 2020 in combination with GM-CSF for pediatric and adult patients with relapsed or refractory high-risk neuroblastoma in the bone or bone marrow showing a partial response, minor response, or stable disease to standard induction therapy [51]. Naxitamab has a 10-fold higher affinity than dinutuximab and is humanized rather than chimeric. Even though it is humanized, its immunogenicity after first-time exposure is at least 10%, with a lifetime immunogenicity estimate of 20% after repeated mAb challenges [47]. The FDA approval of naxitamab was based on the results of the pivotal phase II trial (study 201, NCT03363373) in patients with high-risk neuroblastoma in bone and/or bone marrow refractory to the initial standard of care or showing an insufficient response to therapy for progressive/relapsed disease. The overall response rate (ORR) was 50% (26/52; 95% CI 36-64%) and the complete remission (CR) rate was 38.5% (95% CI 25-53%) [55].
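The interval reported above is consistent with an exact (Clopper-Pearson) binomial CI on 26/52; a quick check in Python (SciPy ≥ 1.7):

```python
from scipy.stats import binomtest

# ORR 26/52 with an exact Clopper-Pearson 95% CI, matching the reported 36-64%.
ci = binomtest(k=26, n=52).proportion_ci(confidence_level=0.95)
print(26 / 52, round(ci.low, 3), round(ci.high, 3))  # 0.5 0.358 0.642
```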
Given the GD2 expression in other solid pediatric tumors [45,46,56,57], there may be potential for these Abs in refractory tumors such as osteosarcoma. However, a phase II study carried out by the Children's Oncology Group (AOST1421) failed to show a benefit among patients with recurrent osteosarcoma in complete surgical remission treated with dinutuximab plus cytokine therapy when compared to historical controls [58]. New clinical trials with naxitamab and dinutuximab beta in osteosarcoma are underway (phase II NCT02502786 and NCT05558280, respectively). The main toxicities of anti-GD2 mAbs are related to GD2 expression by peripheral sensory nerve fibers, causing pain in nearly all patients, allergic reactions, myelitis [59], and posterior reversible encephalopathy syndrome (PRES) [60]. No long-term permanent toxicities have been described to date [61]. Monoclonal Antibodies Repurposed for Pediatric Solid Tumors The first mAb approved for solid tumors was trastuzumab (anti-ERBB2) in 1998 for HER2-positive breast cancer. In children, up to 50% of osteosarcomas express HER2 [62]; however, trastuzumab did not significantly improve survival when combined with chemotherapy in metastatic osteosarcoma [63]. Trastuzumab deruxtecan, an ADC, is being evaluated in HER2-positive osteosarcoma in the PEPN1924 study (NCT04616560). Inhibitors of immune checkpoints (such as PD1 or CTLA4), i.e., ICIs, have, in a relatively short time, changed the outlook and treatment paradigms for a broad spectrum of adult cancers, complementing or even replacing standard chemotherapy as first-line treatment in selected diagnoses and producing durable remissions not imaginable in the past [33,34]. However, they have limited efficacy in pediatric tumors, except for those with mismatch repair deficiencies [64]. Clinical trials with ICIs have shown limited objective responses in pediatric patients, including nivolumab in ADVL1412 [65] and KEYNOTE-051 [66] for relapsed/refractory solid tumors and pembrolizumab in SARC028 for patients with bone sarcomas [67]. The only promising result so far was seen for atezolizumab, an anti-PD-L1 mAb, among patients with alveolar soft part sarcoma, with an ORR of 37% in a phase II study [68]. B7-H3, an immune checkpoint molecule, is overexpressed in multiple cancers including neuroblastoma, sarcomas, and brain tumors [69]. The mAbs 131I-8H9 and 124I-8H9 [70,71], developed against B7-H3, have shown potential in both the imaging and the treatment of leptomeningeal neuroblastoma [72,73] and diffuse midline gliomas [74]. Different delivery methods (i.e., intraperitoneal, intra-Ommaya, or intrapontine) have been employed to avoid hepatic sequestration of the Ab. For a CNS metastasis that is nearly 95% lethal, intra-Ommaya 124I-8H9 treatment yielded a 2-year OS of 57% and an EFS > 40%, compared to the median 5.5-month survival reported in the literature [75]. However, in a rare disease where randomized arms are not feasible, the lack of comparable historical controls together with safety concerns has prevented its FDA approval for this indication. When combined with abdominopelvic radiotherapy, intraperitoneal radiolabeled 8H9 also increased median overall survival (54 months vs. 34 months) compared to patients with DSRCT and peritoneal rhabdomyosarcoma who received radiation alone [76]. A number of anti-B7-H3 approaches have been undertaken by other groups, including naked Fc-enhanced IgG (MGA27) and ADCs [77]. Racotumomab, a murine gamma-type anti-idiotype mAb against the Neu-glycolyl GM3 ganglioside (NeuGcGM3), which is overexpressed in some solid pediatric tumors [88], has shown a favorable toxicity profile and immune responses [89]. Its activity in high-risk neuroblastoma is still being evaluated in the phase II clinical trial NCT02998983. Monoclonal Antibodies for Childhood Cancer: Current Limitations and Future Strategies Despite the promise of antigen-specific targeted therapy, the application of mAb therapy in childhood cancer is still limited. The following section will focus on anti-GD2 mAbs, since they have accumulated most of the clinical experience of mAbs in pediatric solid tumors. Biological Limitations The antitumor activity of IgG mAbs relies on known effector mechanisms (e.g., signaling pathways, ADCC, ADCP, CDC). Key tumor-intrinsic and -extrinsic features could narrow the utility of mAbs (refer to Figure 1 for a graphical summary of the different biological barriers and Table 1 for approaches to address them). Paucity of Clinically Relevant Targets Despite the hallmarks they share with adult cancers [90], pediatric tumors carry substantial differences. Most pediatric cancers arise from embryonal cells acquiring genetic/epigenetic aberrations in the form of transcriptional abnormalities, copy number variants, and chromosomal rearrangements, unlike adult cancers, where mutational drivers accumulate over time. A characteristically low mutational burden in pediatric tumors results in a relative paucity of neo-antigens, which limits not just the number of druggable targets (hence a low anti-tumor T cell or B cell frequency) but also the collective immunological amplification of the anti-tumor response. Low immunogenicity impairs the breadth and depth of the anti-tumor response, leading to insufficient or even absent tumor infiltration by activated T and NK cells [91,92].
Multiple publications have demonstrated the suboptimal frequency of tumor-infiltrating lymphocytes (TILs) in pediatric tumors, with significant variation between individuals [93][94][95]. Low MHC-I expression, whether intrinsic (e.g., in neuroblastoma) or acquired (under immune pressure), is common among pediatric tumors, thereby compounding the neo-antigen paucity problem. This lack of immunogenicity explains why ICIs relying on classic T-cell activation have not been effective in pediatric cancers [96]. In view of these limitations, attempts are being made to target translocation and gene fusion sequences, splice variants, and genomic retroviral (transposon)-driven aberrant proteomes with the help of advanced genetic engineering techniques such as CRISPR/Cas9, RNA interference, and small molecule inhibitors [97][98][99][100]. Alternatively, instead of going after classic targets for T cells, targets for B cells/antibodies have continued to yield promise. These include GD2, B7-H3, L1CAM, GPC3, polysialic acid, DLL3, and HER2 [101][102][103][104]. For B cell targets, the tissue distribution of the target is key. The density and the heterogeneity of the target will determine which tumors escape. Insufficient GD2 density, plus both intratumoral and intertumoral heterogeneity, can account for the failure of anti-GD2 mAbs in tumors other than neuroblastoma (see Figure 1) [45,46,[56][57][58]. Antigen Loss or Downregulation under Immune Pressure Antigen modulation arises from repeated exposure to sub-optimal doses of an antigen-specific targeting modality, resulting in acquired resistance. Antigen loss after therapy represents one of the most important mechanisms of resistance to mAb therapy. The mechanisms responsible for antigen loss after targeted immunotherapy are complex and not fully understood. For classic T cell targets (neo-antigen peptides on the MHC), loss or mutation of the peptide, loss of beta-2 microglobulin, and deficiency of the multiple proteins involved in antigen processing will derail antigen presentation to CD8(+) killer T cells [105]. For B cell targets (e.g., those targeted by CAR T cells or tumor-specific IgG), loss or downregulation are key escape mechanisms [106]. In addition, antigens can be lost by release, internalization, or trogocytosis [107] (Figure 1). Acquired genetic alterations, as seen in adults, are probably rare in pediatric tumors [108]. This antigen modulation phenomenon has been studied more extensively in hematological malignancies. Up to 30% of patients with B cell lymphoma will experience decreased CD20 expression after treatment with rituximab [109]. In neuroblastoma, although extremely rare, complete loss of GD2 expression can arise during treatment, especially in tumors with initial heterogeneity [110,111]. Because of lineage plasticity, mesenchymal neuroblastoma subtypes, especially refractory/relapsed variants following chemotherapy, can carry a lower expression of GD2 through downregulation of GD3 synthase, which can be pharmacologically reverted by inhibiting EZH2 [112][113][114] (Table 1). Transcriptional regulation of the enzymes responsible for GD2 synthesis, i.e., ST8SIA1 (GD3 synthase) and B4GALNT1 (GD2 synthase), as well as of downstream GD2-depleting enzymes, could modify the GD2 phenotype [115]. GD2 internalization, especially in the presence of Abs, could pose another mechanism of resistance to repeated doses of anti-GD2 therapies [116] (Figure 1). GD2 modulation, if present, does not seem to be permanent.
The ability to achieve responses with repeated doses of anti-GD2 mAbs, even after prior failure, suggests that antigen loss in the clinical setting is probably reversible [110]. The mechanisms, patterns, and dynamics of immunophenotypic changes following immune therapy in pediatrics remain a vastly unexplored field and are likely to be antigen-specific. Attempts to overcome resistance related to antigen escape include dual-targeting therapies (see Section 3.1.4) or the induction of antigen re-expression (Table 1). Poor Tumor Penetration Therapeutic Abs must penetrate physical and physiological barriers in order to distribute uniformly throughout the tumor. In solid tumors, leaky vessels and scarce lymphatics result in altered interstitial pressure, limiting the passage of Abs from the vascular lumen into the tumor [117] (Figure 1). Other factors influencing Ab distribution and retention in the tumor include Ab size, affinity, and specificity, and the biology of the tumor stroma. Engineered Ab fragments could penetrate better, but their small size below the renal threshold forces their rapid clearance into the urine, rendering them sub-therapeutic [118,119]. Additionally, the Ab can be internalized for endocytic destruction before it can exert its anti-tumor functions [116]. Higher Ab affinity and higher antigen expression could mitigate the poor retention of small Ab fragments, while increasing the cytotoxic payload could amplify the therapeutic effect. Payload optimization has been successful in at least three approaches: (a) drug conjugates, (b) radio-immuno-conjugates, and (c) drug delivery platforms (Table 1). Drug Conjugates Antibody-drug conjugates were conceived as an approach to enhance the tumor selectivity of drugs and toxins in order to widen the safety margins between efficacy and toxicity. Using Abs to deliver toxic agents to a precise "zip code" address should reduce unintended systemic toxicity. The concept of the therapeutic index (TI), i.e., the drug exposure of the tumor versus the drug exposure of each normal organ, expressed as a ratio of the areas under the curve (AUC), holds the key. For most IgGs, the serum half-life is measured in days to weeks; hence, toxicity to the marrow, the liver, and the kidney is common. Since the first ADC, Mylotarg® (gemtuzumab ozogamicin) [26], was approved, 14 ADCs have received market approval worldwide, and over 100 ADC candidates are currently being investigated at clinical stages [120]. The clinical implementation of ADCs has encountered significant challenges, mostly myelotoxicity, among others including both on-target off-tumor and off-target side effects, pointing again to a fundamental limitation of using whole IgG as a drug carrier. Although Ab design is still searching for a better alternative, linker chemistry has vastly improved to assure plasma stability and prevent the premature release of highly cytotoxic payloads into the systemic circulation. The development of anti-drug antibodies (ADA) is another hurdle that is expected when human IgGs are chemically modified. Although most ADCs have passed in vitro and in vivo efficacy assays, they are expected to encounter toxicities that will limit dose escalation in patients. Anti-GD2 Abs conjugated to the microtubule-depolymerizing agent monomethyl auristatin E (ch14.18-MMAE) or F (ch14.18-MMAF) demonstrated potent and highly selective cytotoxicity in vitro in a number of tumor cell lines of neuroblastoma, glioma, breast cancer, sarcoma, and melanoma [121]. Their clinical utility will depend on the toxicity profile in children.
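In symbols, the therapeutic index above is a ratio of time-integrated exposures (the notation is ours):

```latex
\mathrm{TI}_{\text{organ}} = \frac{\mathrm{AUC}_{\text{tumor}}}{\mathrm{AUC}_{\text{organ}}},
\qquad
\mathrm{AUC} = \int_{0}^{\infty} C(t)\,\mathrm{d}t ,
```

so a tumor-to-blood TI of 100:1 means the tumor sees a 100-fold greater cumulative drug exposure than the blood.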
The clinical utility of such conjugates will depend on their toxicity profile in children.

Radio-Immuno-Conjugates
Radio-immuno-conjugates use radioisotopes as payload. Given the wealth of knowledge in radio-physics and radiation biology, their application in human cancer has a strong rationale. Yet, the suboptimal TIs achieved when using IgGs to deliver radioisotopes to human cancer, together with inadequate radioisotope supply chains, have handicapped the development of the field for decades. Until these issues are addressed, their application in children will remain limited. Intravenous anti-GD2 131I-3F8 was tested in children with metastatic neuroblastoma, showing responses in both soft tissue and bone marrow disease; however, survival was not improved compared to patients treated with non-radiolabeled 3F8. Compartmental delivery using intra-Ommaya 131I-3F8 was an attempt to reduce systemic toxicity and has been modestly successful in patients with neuroblastoma relapsed to the CNS, or metastatic medulloblastoma, achieving long-term remissions in a subset of children (NCT00445965) [122,123].

Drug Delivery Platforms
Refinement of drug delivery platforms remains the key challenge if toxic payloads need further dose escalation to achieve cures. Multi-step targeting (MST) separates the Ab delivery step from the payload step, thereby avoiding the unintended bystander toxicity of slow-clearing IgG-carrying poisons. When applied to radio-immunotherapy (RIT), pretargeted strategies (PRIT) could offer TIs not possible in previous decades, e.g., a tumor-to-blood TI of >100:1 when contrasted with the conventional IgG-based TI of <5:1. PRIT is built on bispecific Abs (BsAbs) targeting tumor antigens while carrying a second specificity for payloads [124]. In the first step, BsAbs without any payload are allowed to accumulate in the tumor. Once the blood level of BsAbs is sufficiently low (either by waiting or by using a clearing agent in a 3-step PRIT), the payload is administered. Because of the small size of the payload, it rapidly engages with the BsAb in the tumor or is excreted in the urine. If 10,000 cGy is the desired curative dose for the tumor, 100 cGy to the blood (TI of 100:1) should not cause myelotoxicity. SADA (self-assembling and disassembling Abs) was invented to be both large (in order to persist for 24-48 h and penetrate the tumor) and small (when it monomerizes to below the renal threshold), without the need of a clearing agent to remove unbound Ab. As tetramers, SADAs bind to tumors with high avidity, and as monomers SADAs clear rapidly in the urine to minimize immunogenicity. SADA has been successfully applied to multiple cancer targets to deliver various radioisotopes that emit beta, positron, and alpha particles. Even at ultra-high doses of payloads, toxicity to key organs has so far been avoided in preclinical models [125]. The first human application of SADA in GD2-positive tumors is underway (NCT05130255).

Insufficient or Impaired Effector Functions
Therapeutic mAbs exhibit direct anti-tumor effects through induced apoptosis [126]. Indirectly, therapeutic mAbs engage Fc receptors (FcR) on immune cells via their Fc domain, leading to Ab-dependent cell cytotoxicity (ADCC) through neutrophils and natural killer (NK) cells, Ab-dependent cell phagocytosis (ADCP) via macrophages, and complement-dependent cytotoxicity (CDC) by activating the complement pathway [127,128]. ADCC is considered the main therapeutic mechanism in mAb-mediated cancer therapy.
In children with cancer, immune cells are either insufficient or impaired because of heavy prior treatments. Additionally, immune exhaustion and the inhibitory tumor microenvironment are emerging hurdles. Strategies to enhance immune responses in pediatric patients include co-administering pro-inflammatory compounds (i.e., cytokines) and modifying therapeutic Abs to engage FcR-negative effectors such as T cells (Table 1), or to recruit dendritic cells to create a vaccination effect [128][129][130][131]. Anti-GD2 mAbs activate lymphocytes, NK cells, and granulocytes through ADCC, and co-administration with cytokines and stimulating agents amplifies these responses. Co-administration of 3F8 mAb with recombinant human GM-CSF strongly enhanced ADCC and was implemented into the standard of care [50,53,132,133] (Table 1). IL-2 has been used [133] but may cause significant toxicity [134] and is no longer used for the treatment of HR-NB. IL-15 shows promise with fewer side effects [135]. Modifying the Fc region of mAbs can increase the affinity for Fc receptors on NK cells, macrophages, and myeloid cells, enhancing their cytotoxic potential [128]. The affinity of Fc for specific FcRs can be increased by changing amino acids or glycosylation [119]. Defucosylation or changing to high mannose can greatly enhance Fc-FcR affinity. MAbs can be manufactured in special CHO cell lines lacking the enzyme GnT1 (GnT1−/−), which yield fucose-deficient glycans [136]. A defucosylated high-mannose version of hu3F8, hu3F8IgG1n (produced in GnT1-deficient CHO cells), or a mutated version of hu3F8, hu3F8IgG1-DEL (S239D/I332E/A330L), were tested in vivo in humanized mice, showing IgG1n to be significantly more effective than the unmodified hu3F8 or hu3F8IgG1-DEL. The preferential affinity of IgG1n (versus the DEL mutant) for activating versus inhibitory FcRs offers another theoretical advantage [137]. Bispecific Abs (BsAbs) are engineered to dually target tumor-associated antigens (TAA) and immune cells (e.g., T cells through their surface CD3), inducing a synthetic immune response against tumors. The second specificity can also be directed at a payload, as in PRIT. By engaging polyclonal T cells in a major histocompatibility complex (MHC)-independent manner, BsAbs do not need additional co-stimulatory signals, thereby avoiding the over-activation or exhaustion typical of chimeric antigen receptor (CAR)-modified T cells. Without the need for MHC and the co-stimulation requirement, BsAbs could avoid some of the key resistance mechanisms used by tumors to evade classic T lymphocytes. BsAbs have been successfully implemented to cure hematologic malignancies and are under clinical investigation for solid tumors including neuroblastoma [138][139][140][141]. BsAbs using sequences of anti-CD3 (huOKT3) and anti-GD2 (hu3F8) or anti-HER2 (trastuzumab) successfully directed T cells into tumor tissues and exerted a significant anti-tumor effect in the preclinical setting [142][143][144]. Like ADCs, immunocytokines are intended to drive cytokines inside the tumor while reducing systemic exposure and toxicities. Yet, unlike drugs, cytokines have an affinity for immune cells whose cytokine receptors can compete for immunocytokines and prevent them from localizing to the tumor [145]. Despite these potential limitations, the recombinant fusion protein hu14.18-IL2, by activating NK cells through the IL-2 receptor, achieved a 22% marrow complete response rate in patients with MIBG-detectable neuroblastoma, with acceptable tolerance [146].
Other immunocytokines, such as hu14.18-IL15 and hu14.18-IL21, have demonstrated benefits in preclinical studies [135,147]. The clinical utility of these compounds remains to be formally proven. Induced host immunity is likely important for durable remission in patients after mAb treatment. Anti-idiotypic networks may operate in the anti-tumor response, where active immunity is induced by the administration of an Ab [148]. A human anti-mouse antibody (HAMA) response has correlated with long-term survival in patients with neuroblastoma [10]. Given the known tolerizing effects of high-dose cyclophosphamide and other alkylating agents on the immune response, concurrent use of mAbs and high-dose chemotherapy may negatively impact the immune response. Early administration of anti-GD2 hu14.18K322A with induction chemotherapy, followed by GM-CSF and IL-2, generated an objective response in 76.2% of the patients, significantly higher than in the chemotherapy-only arm [149] (Table 1). However, similar response rates have been seen in prior chemotherapy-only studies like ANBL02P1 [150]. The long-term benefit of early use versus post-induction use of anti-GD2 mAb therapy will need more patient follow-up or a randomized comparison. Overall, the consistent benefit of anti-GD2 mAbs has provided a strong rationale for developing GD2 conjugate vaccines [151,152] (Table 1), which are beyond the scope of this review.

Table 1. Resistance mechanisms to anti-GD2 therapy and potential alternatives.

Lack of Biomarkers to Predict Response and Survival
The limited success of therapeutic Abs in clinical trials may be partially attributed to inadequate selection criteria for patients in terms of risk stratification and target antigen expression. Confirmation of the target antigen is often not required prior to enrolment in a clinical trial with mAb therapy. For example, 50% of osteosarcomas lack GD2 expression, and anti-GD2 therapy will likely fail in them. Clinical trials investigating the efficacy of anti-GD2 mAbs in different solid tumors (NCT02502786, NCT05558280) do not require confirmation of target (GD2) expression, which could confound efficacy interpretation given the intra- and inter-tumor heterogeneity of GD2 expression in pediatric solid tumors. Beyond the mere presence or absence of GD2, the density of antigen on tumor cells could also affect the clinical efficacy of specific mAbs. Beyond target expression, a number of biomarkers have been associated with clinical outcomes, including polymorphisms of FcR [153] and KIR mismatch [154], both important effector mechanisms in ADCC. Minimal residual disease is another biomarker strongly associated with response to mAb [155]. Theranostics has emerged as an appealing drug platform in Ab therapy, referring to using the same mAb for in vivo diagnostic imaging as well as for in vivo therapy. Pretargeted radio-immuno-diagnosis is the companion diagnostic for PRIT. It utilizes 177Lu for SPECT and 86Y for PET with high precision; at the same time, 177Lu is used for beta therapy, and 225Ac for alpha therapy [156]. Using whole IgG as a carrier, 68Ga and 64Cu have been used to monitor neuroblastoma during treatment with anti-GD2 [157,158]. Additionally, liquid biopsy techniques (when enough circulating tumor cells or circulating tumor DNA are present), based on the genotype of each patient's own tumor, could be useful for detecting residual disease.
Circulating GD2 has also been detected in serum or plasma among patients with high tumor burden, which could help to define tumor load, tumor presence, or tumor recurrence [159,160]. Clinical validation of these biomarkers at predefined minimal-residual-disease time points during treatment is mostly missing.

Difficult Integration into the Standards of Care
mAbs show limited antitumor activity as monotherapy, but they can be more efficacious when combined with other agents (e.g., cytokines, chemotherapy, radiotherapy, kinase inhibitors). In the case of anti-GD2 mAbs, the clinical benefit of anti-GD2 monotherapy has been restricted to patients with minimal residual disease (MRD) or exclusive bone/BM involvement [9,[53][54][55]. Soft tissue bulky tumors generally do not respond. Combining anti-GD2 with chemotherapy has proven to be safe and effective. The phase 2, prospective, open-label, randomized clinical trial ANBL1221 (NCT01767194) first demonstrated the superiority of the combination of irinotecan, temozolomide, dinutuximab, and GM-CSF (I/T/DIN/GM-CSF) vs. I/T alone in a group of 35 patients with first-line refractory/relapsed neuroblastoma. The cohort was then expanded with non-randomly assigned patients receiving I/T/DIN/GM-CSF. Overall, the ORR was 41.5%, and progression-free survival at one year was 67.9% among the 53 patients studied [161]. The HITS study also used I/T with hu3F8, showing that naxitamab-based chemo-immunotherapy was safe, without unexpected immunogenicity. It was effective against chemoresistant neuroblastoma in all disease compartments, even in patients with multiple prior relapses and in patients who had previously received anti-GD2 mAbs and/or I/T [162]. The preliminary results of the BEACON study also showed the superiority of adding dinutuximab beta to the standard salvage chemotherapy regimen for relapsed neuroblastoma [163]. Anti-GD2 mAbs in combination with agents recruiting immune effector cells are reviewed in Section 3.1.4. Unlike chemotherapy, mAbs have unique toxicities that require staff training in drug administration and safety measures to alleviate side effects. In the case of anti-GD2 mAbs, the main toxicity is visceral pain, which starts in the abdomen and spreads to the axial skeleton, head, and chest, usually requiring intensive management with analgesics and sedation. With short infusion times (30 min to an hour) in the outpatient department (e.g., 3F8 and hu3F8), other acute side effects include apnea, hypotension or hypertension, and allergic reactions, from rash to anaphylaxis, which occasionally require intensive intervention including resuscitation [164,165]. The need for a facility with trained personnel can limit the acceptance of such treatments in a general pediatric clinic. Ab engineering to reduce pain side effects has been attempted, e.g., introducing the K322A mutation in the mAb hu14.18K322A; however, in the clinical trial pain side effects were still significant [166]. Increasing the infusion time, as for dinutuximab beta (continuous infusion over 10 days), could reduce the intensity of pain, but the duration of pain was prolonged. In addition, prolonged infusions that require inpatient admission may not be cost-effective in the absence of home hospitalization units. To reduce autonomic side effects, desensitization strategies using a step-up infusion protocol were shown to reduce hemodynamic side effects, whereas alternative pharmacologic interventions (e.g., ketamine) reduced pain complications [164,167].
Commercialization, Regulation, and Political Limitations
Although there is increasing interest among pharmaceutical companies in tackling rare diseases (including pediatric cancer), the underlying driver is a profit-driven model that prices drugs at exorbitant levels. The rationale is based on an industry projection of one billion dollars needed to take a drug from preclinical discovery to FDA approval; in order to recover that investment, the drug price has to be set high even though manufacturing is a small fraction of the cost. Indeed, mAbs are typically off-the-shelf and less cumbersome compared with the personalized manufacturing process of CAR-T, which is relatively expensive (e.g., the retail price of Kymriah®, the first anti-CD19 CAR-T on the market, was $475,000). CAR-T has shown promising results in patients with H3K27M-mutated diffuse midline gliomas [168] and relapsed or refractory neuroblastoma (63% overall response) with a good safety profile [169]. However, the durability of CAR-T and remission, as well as long-term safety when targeting solid tumors, remain to be proven. It may come as a surprise that mAbs, even when curative, are not easily affordable in developing countries where the majority of children live. If mAbs for rare diseases could be developed using a cost-driven instead of a profit-driven model, where governments assume the driver's seat (as for pandemic vaccines or childhood vaccines), novel biologicals may finally become a reality based on need and not on wealth. According to the World Health Organization Non-communicable Disease Country Capacity Survey published in 2020, only 29% of low-income countries report cancer medicines to be generally available to their populations, compared to 96% of high-income countries [170]. Anti-GD2 mAb distribution exemplifies the uneven availability of mAbs worldwide. More than half of the children in the world with neuroblastoma do not have access to any of the approved anti-GD2 mAbs (Figures 2 and 3). Governments establishing pediatric cancer as a national priority, as well as international cooperation, are needed to ensure an equitable distribution of resources. Incentives for pharmaceutical companies to invest in pediatric investigations, including financial support, regulatory incentives such as the EMA's Pediatric Investigation Plan or the FDA's Pediatric Research Equity Act, and streamlined approval processes, can be implemented. Joint initiatives involving philanthropic organizations, industry stakeholders, and government financing are also crucial, as they combine resources and expertise to accelerate research, avoid duplications, alleviate financial burdens, and enhance access to innovative therapies. Regulatory and approval delays pose significant challenges in the development and availability of pediatric drugs. In the EU, once a drug is approved by a regulatory agency, national Health Technology Assessment (HTA) agencies evaluate the effectiveness and safety of medicines to support decisions on their cost, reimbursement, and integration into national health systems. In the case of private healthcare, insurers have to incorporate the new drug into their portfolio of services. Currently, these steps incur delays that can last up to 10 years. For blinatumomab and dinutuximab beta, the median time to an HTA decision for pediatric use varied among countries, ranging from 353 to 515 days [171].
To expedite the availability of life-saving treatments for children, it is crucial to streamline regulatory decisions, reduce bureaucratic inertia, and accelerate approval procedures [171]. To ensure the suitability of cost-effectiveness calculations for the pediatric population, adaptations are necessary. For instance, the current list price of Unituxin® renders it not cost-effective when compared to standard chemotherapy alone, as it escalates treatment costs by $50,000 per quality-adjusted life-year (QALY) saved [172] (the underlying arithmetic is sketched below). However, cost-effectiveness considerations may not be fully addressed by QALY estimation alone in children. Estimating long-term consequences of pediatric conditions without sufficient long-term data necessitates the use of extrapolation techniques, introducing uncertainty. Additionally, QALYs in children should also incorporate the impact on family members and caregivers, including indirect costs and overall family quality of life. Limited data availability and quality specific to pediatric populations also require extrapolation from adult data and introduce potential inaccuracies. Moreover, the unique preferences, values, and priorities of children cannot be accurately described by methods derived from adults [173]. Another case in point is hematopoietic stem cell transplantation (SCT), currently the standard of care for NB at almost all pediatric cancer hospitals. Yet, once the full potential of mAbs is unveiled, including their combined use with chemotherapy upfront during induction, SCT may no longer be necessary, thereby substantially reducing the physical and financial cost of cure [174,175]. The development of biosimilars as affordable versions of therapeutic antibodies may also provide options to reduce prices [176].
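To make the QALY discussion above concrete, the fragment below sketches the incremental cost-effectiveness ratio (ICER) arithmetic that underlies such judgments. The figures are hypothetical placeholders for illustration only, not the actual Unituxin data from reference [172].

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost per QALY gained: (delta cost) / (delta QALY)."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical: a new regimen costs $250,000 more and adds 2.5 QALYs.
print(icer(400_000, 150_000, 10.0, 7.5))  # 100000.0 -> $100,000 per QALY gained
```

Pediatric adaptations would then adjust the QALY denominator for family impact and for the extrapolation uncertainties noted above.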
Although the market can accommodate multiple agents within a class, each with subtle variations in efficacy, toxicity, and resistance mechanisms, in the context of early-phase pediatric trials the limited number of available patients discourages the repetition of trials using multiple agents with the same mechanism of action [177]. If the cost and access issues are overcome, there are remaining hurdles in Ab discovery for pediatric cancer. Although at least 28 new oncology drugs with potential for pediatric malignancies have been approved since 2007, for 50% the pediatric requirement has been waived due to the absence of the adult condition in children [178]. For decades, the adoption of mAbs for neoplasia in children relied on shared surface proteomics with adult cancers, when aligned with the business model or regulatory requirements. It is unavoidable that this trickle-down approach has caused significant delays (Table 2) in their testing and approval for liquid tumors in children, and even more so for hard-to-treat metastatic solid tumors. To address this, basket trials should be pursued based on shared targets rather than conventional diagnostic groups, with expanded age eligibility in early phase II studies [178]. At the drug delivery end, efficient utilization of specialized centers of excellence should improve accessibility to new treatments and maximize patient outcomes per resource expended. In addition, the harmonization of procedures and tests, and the unification of care plans and treatment sites, will reduce unnecessary paperwork and chances for error. Such centers will also provide opportunities for training and education for healthcare professionals at all levels. With each center focused on its unique excellence, the overall standard of care will improve, with less time spent on competing for patients and more time on improving care. Figure 4 provides a visual representation of the various barriers at different levels of mAb access control.

Conclusions
Remarkable progress has been achieved with mAb-based anticancer therapies in the last decade. With continuous innovations in protein engineering and our understanding of the immunobiology of cancer and immunotherapeutics, the future for innovation is wide open. However, human cancers evolve under every treatment pressure, and mAbs are no exception. Hurdles for mAb-based therapeutics persist and new ones have emerged, including low TI leading to on-target, off-tumor side effects, target heterogeneity, insufficient tumor penetration of Ab or effector cells, inhibitory tumor microenvironments, and the paucity of accurate biomarkers. Luckily, novel strategies to overcome these limitations are available and some are being tested in the clinic. With each incremental step, response and survival among children with cancer will improve. The final challenge will be their safe and successful integration into standard-of-care regimens and universal accessibility, both geographically and economically. With a compound annual growth rate (CAGR) of 14.1%, the mAb market size is projected to exceed $400 billion by 2028, compared to small molecule inhibitors growing at 6.8% to $246 billion by 2030. Although pharmaceutical companies often have to make decisions based on profit expectations from shareholders, academic researchers have the tools and the responsibility to continue improving new therapeutics and translating them into the clinic. Regulatory agencies and administrators should be tasked with simplifying the bureaucracy related to science translation from the bench to the bedside.
Economists and social scientists should promote community health policies and international collaboration through active organizations locally, nationally, and internationally. mAbs have the potential to be life-changing therapeutics and provide a unique opportunity for a call to action.
2023-07-26T15:13:21.606Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "58cc25275ff5a2da4b79096e60e798255ddedbd8", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f1c9d70694510f583e75b4b1efb28d4c12c4edb7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221703347
pes2o/s2orc
v3-fos-license
A Self-Decoupled 32 Channel Receive Array for Human Brain Magnetic Resonance Imaging at 10.5T

Purpose: Receive array layout, noise mitigation, and B0 field strength are crucial contributors to signal-to-noise ratio (SNR) and parallel imaging performance. Here, we investigate SNR and parallel imaging gains at 10.5 Tesla (T) compared to 7T using 32-channel receive arrays at both fields.

Methods: A self-decoupled 32-channel receive array for human brain imaging at 10.5T (10.5T-32Rx), consisting of 31 loops and one cloverleaf element, was co-designed and built in tandem with a 16-channel dual-row loop transmitter. Novel receive array design and self-decoupling techniques were implemented. Parallel imaging performance, in terms of SNR and noise amplification (g-factor), of the 10.5T-32Rx was compared to the performance of an industry-standard 32-channel receiver at 7T (7T-32Rx) via experimental measurements using a human head shaped phantom.

Results: The 10.5T-32Rx provided 67% more central SNR and 87% more peripheral SNR compared to the 7T-32Rx. The minimum inverse g-factor value of the 10.5T-32Rx (min(1/g) = 0.56) was 51% higher than that of the 7T-32Rx (min(1/g) = 0.37) with R=4x4 2D acceleration, resulting in significantly enhanced parallel imaging performance at 10.5T compared to 7T. The g-factor values of the 10.5T-32Rx were on par with those of a 64-channel receiver at 7T, e.g. 1.8 versus 1.9, respectively, with R=4x4 axial acceleration.

Conclusion: Experimental measurements demonstrated effective self-decoupling of the receive array as well as substantial gains in SNR and parallel imaging performance at 10.5T compared to 7T.

The push for UHF MRI is predicated primarily on the premise of significant ultimate intrinsic SNR 15,16 gains and acceleration with less SNR penalty at higher B0 field strengths 14,[17][18][19]. Although the push to UHF MRI presents significant technical challenges 1,2,20-23, the abovementioned UHF advantages can potentially be materialized via parallel imaging 19 using high-density receive arrays [24][25][26]. Acceleration via parallel imaging comes with a penalty in SNR, quantified in terms of noise amplification or g-factor 27. Increasing the B0 field strength has been shown to mitigate the SNR penalty attributed to acceleration 19,28, hence affording better parallel imaging performance. However, parallel imaging performance also depends significantly on receiver array noise correlation 29,30. Increasing receive array density, on the other hand, exacerbates inter-element coupling (noise correlation) and the electronics noise dominance of smaller loops 24,31,32. Moving to a higher B0 field accentuates the noise correlation challenge by reducing the electromagnetic wavelength, and attenuates the electronics noise dominance challenge by increasing coil coupling into the sample 24,33,34. Hence, novel receive array decoupling methods had to be developed to capture SNR gains at ultra-high frequencies. In other words, for two receive arrays with equal channel count, one for 7T and another for 10.5T, it was imperative to implement novel decoupling techniques for the receiver operating at the higher frequency to fully exploit the superior SNR and acceleration potential of the higher magnetic field 25. Various radio frequency (RF) array decoupling strategies have been proposed and implemented in the past. Overlap and preamplifier decoupling proposed by Roemer et al. 35 have been improved and used extensively in designs of high-density receive arrays 32,33,[36][37][38][39].
Noise matching the preamplifiers 24,33 and inductive decoupling 36 were also heavily relied upon in previous works. Furthermore, self-decoupling techniques for low-density (up to 8-channel) transmit arrays using intentionally unbalanced capacitive distribution were proposed 40,41. Lakshmanan et al. 41 proposed the loopole antenna, where segmenting capacitors inside transceiver loops were distributed unevenly to cause an unbalanced current distribution in the loop, with electromagnetic field patterns resembling those of a dipole antenna. Yan et al. 40 built on the idea of unbalanced impedances and proposed a transceiver self-decoupling scheme for 7T / 298MHz where a relatively small RF impedance (e.g. 8.5pF capacitance) is placed opposite a relatively large RF impedance (e.g. 0.4pF) which approaches an open circuit at the RF frequency. This results in the current distribution being unbalanced so that the coil resembles a dipole antenna. However, in contrast to Roemer's work 35, Yan's analysis assumes electric coupling to be limited to coupling via free space and excludes resistive coupling via the conductive sample. While inter-element coupling via the sample can be negligible in the case of loop transmitters far (more than 4cm away) from the sample, 3D conformal receive arrays are generally form-fitting, very close to the sample, and designed to be dominated by body noise 24,25,31. In this work, we present a strategy for receiver self-decoupling at 447MHz based on our observation that higher frequencies allow for a more balanced capacitive segmentation of receive elements while maintaining acceptable decoupling. We compare the performance of self-decoupled receivers, in terms of SNR, with overlap-decoupled loops. Receiver self-decoupling provides inter-element isolation comparable to overlap decoupling while requiring no geometric overlap, inter-element transformers, or decoupling networks, and is therefore much more practical to implement in high-density receive arrays. Furthermore, we show that the receiver self-decoupling presented here provides higher SNR compared to overlap decoupling. Here, we implement the novel receiver self-decoupling technique and build a high-density, self-decoupled 32-channel receive array for 10.5T/447MHz for the first time, without using within-row (axial) overlap 35, explicit inter-element decoupling networks 36,42,43, or unbalanced current distributions 40,41, with preliminary results reported in an ISMRM abstract 44. The self-decoupled 32-channel receive array (10.5T-32Rx) provided substantial experimental cortical and central SNR gains compared to an industry-standard 32-channel receive array at 7T. Furthermore, parallel imaging performance at 10.5T was superior to that at 7T, with the 10.5T-32Rx providing acceleration performance comparable to a 64-channel receiver at 7T.

16-Channel Transmitter
The primary focus of this paper is on the contribution of receiver technology and B0 field strength to SNR and parallel imaging performance. Therefore, the transmitter design is covered here only briefly, with its meticulous characterization published separately 45. A 16-channel transmitter comprising two rows of 8 inductively decoupled rectangular loops was used 37,45. The 2-row design of the transmitter array has the potential of increasing the degrees of freedom in parallel transmit (pTx) RF pulse design, especially for specific absorption rate (SAR) control. In order to minimize transmit-receive interaction, the transmitter was actively tuned:
PIN diode circuitry was used to tune the transmitter during signal transmission, leaving it off-resonance during reception. The procedure to obtain the necessary approvals for in-vivo human brain imaging using this transmitter is currently ongoing.

Receiver Self-Decoupling
Rectangular loops similar in size to those used in the array were modeled in several scenarios. First, two 5x5 cm2 loops, and then two 2.5x5 cm2 loops, were positioned on a flat surface in proximity to a cubic phantom (permittivity ϵ = 50 and conductivity σ = 0.6 S/m to approximate human brain tissue properties 46; the same properties were used across all simulations). These loops were constructed using AWG-18 copper wire (circular cross section, diameter 1.02mm) and were divided into four segments using three capacitors and the feed point. Two fixed capacitors (C1, C2) and a trimmer capacitor (Ctr), with values in the same range as the fixed capacitors, were used inside each loop. The feed circuitry, presented in Figure 1c, consisted of a detune trap as well as tune (CT) and match (CM) adjustable capacitors. Similar principles were used to construct non-overlapped 10x10 cm2 loops using a balanced capacitive distribution on a cylindrical surface at a constant distance from a cylindrical sample (ϵ = 50, σ = 0.6 S/m), as shown in Figure 2b. Electromagnetic/circuit co-simulations were performed using CST Studio (Dassault Systèmes Simulia Corp., Johnston, RI). Coil elements were modeled in SolidWorks (SolidWorks Corp., Waltham, MA) and imported into CST. Simulations were performed over a frequency range of 2GHz using the finite-difference time-domain method to solve Maxwell's equations and were partially accelerated using a GPU. The loops were tuned to 447MHz (proton resonance at 10.5T) and matched to 50 Ohm. The simulation pipeline has been discussed in more detail in previous works 45. Several electromagnetic (EM)/circuit optimization problems were set up with the goal of finding C1, C2, and Ctr to minimize S21,f, conditioned on S11,f ≤ −12dB and S22,f ≤ −12dB, where f = 447MHz is the resonant frequency and the subscript f denotes evaluation at that frequency. Values of C1, C2, and Ctr were limited to practical ranges guided by bench experiments. Other 3D EM model parameters, including the distance between loop elements and the gap between the coils and the phantom, were kept constant and equal to practical receive array values during optimization. Note that the objective function and the cost (loss) function for these optimizations can be defined in various ways. In our experience, incorporating the objective (S21,f) into the loss function to form a weighted linear combination of individual L1 norms, in the linear (as opposed to dB) scale, of the real and imaginary parts of the S-parameters resulted in faster convergence. The trust region method and the Nelder-Mead simplex algorithm were used to solve for optimums. It should be noted that in practice (on the bench) such an optimization is straightforward and substantially less time consuming than numerical simulation, because on the bench the results of parameter modifications can be monitored immediately, in an analogue manner, using a vector network analyzer. Numerical results for S-parameters and per-port, complex-valued H-fields at 447MHz were exported to ASCII files. Post-processing and analysis of the magnetic field results were performed using customized Python scripts. Complex-valued receive magnetic fields were calculated for each coil element using B1− = μ0 (Hx* + iHy*)/2, where μ0 is the permeability of free space, * denotes complex conjugation, and x, y are the Cartesian coordinates orthogonal to the static magnetic field (B0).
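Since the post-processing was done in Python, the snippet below sketches how the per-channel receive fields could be assembled from the exported H-field data and collapsed into the comparison metrics defined in the next paragraph. It is a minimal sketch, not the authors' actual scripts; the array shapes and variable names are illustrative assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (H/m)

def b1_minus(hx, hy):
    """Receive field B1- = mu0 * (Hx* + i*Hy*) / 2 for one coil element.

    hx, hy: complex H-field components sampled on the phantom voxels."""
    return MU0 * (np.conj(hx) + 1j * np.conj(hy)) / 2.0

def field_metrics(b, psi=None):
    """Collapse per-channel receive fields into scalar comparison metrics.

    b:   (n_channels, n_voxels) complex array of B1- values.
    psi: (n_channels, n_channels) normalized noise correlation matrix;
         identity (uncorrelated noise) is assumed when omitted."""
    som = float(np.abs(b).sum())                               # sum of magnitudes
    rsos = float(np.sqrt((np.abs(b) ** 2).sum(axis=0)).sum())  # root sum of squares
    if psi is None:
        psi = np.eye(b.shape[0])
    # noise-correlation-weighted SNR: sum over voxels of sqrt(b^H psi^-1 b)
    quad = np.einsum('iv,ij,jv->v', b.conj(), np.linalg.inv(psi), b).real
    snr = float(np.sqrt(quad).sum())
    return som, rsos, snr
```

Evaluating these metrics for overlap- and self-decoupled layouts supports the kind of comparison summarized in Table 1.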
In order to compare the receive field between the overlap and self-decoupling methods, two metrics were calculated: the sum of magnitudes (SOM) of the complex receive fields, and the root sum of squares of the combined magnetic fields, RSOS = Σsample √(b^H b), where b is a vector composed of the complex magnetic receive fields of the individual channels and ^H denotes the Hermitian transpose. Furthermore, RSOS was corrected for noise correlation to calculate a noise-correlation-weighted SNR given by SNR = Σsample √(b^H ψ−1 b), where ψ is the normalized noise correlation matrix calculated using the simulated complex-valued scatter matrix 24,[47][48][49][50]. The summation over the sample is intended to collapse spatial maps into a single numeric metric for comparison purposes. These metrics are particularly appropriate here as we are considering receive-only coils, so we are interested in SNR, not transmit efficiency.

32-Channel Receiver
A close-fitting receive former (helmet) was designed while considering the physical constraints imposed by the transmitter, the dimensions of which are dictated by a head gradient coil that will be used in the future. The shape of the former was optimized based on previous helmet designs, a numerical model of a human head, and consideration of the average range of head sizes. A structure for mechanical support of the preamplifiers and cables was designed to be mounted as an additional part on top of the head former. The receive visual channel (opening) at eye level was designed to be compatible with the needs of future cognitive studies (e.g., involving visual stimulus presentation or eye tracking). The distribution of preamplifier substrates was driven by the cable routing design, which will be discussed subsequently, as well as by preamplifier interactions and directional sensitivity relative to the static magnetic field (B0) direction. The 32Rx receive array is composed of 31 loops, divided into four rows along the z-axis, and one cloverleaf element covering the top. The layout of the loops, presented in Figure 1a, allowed for partial overlap among rows (8-10mm) and gaps between neighboring loops in each row (5-8mm), as self-decoupled loops with gaps in the axial (x-y) plane are shown here to provide an SNR advantage compared to overlapped loops (see Table 1). This results in a high-density receive array designed to rely primarily on self-decoupling. Each row of loop elements was shifted (by half a loop) compared to the neighboring rows. This self-decoupled design reduced the construction complexity that would arise from overlapping adjacent loops within each row or using decoupling networks or transformers between loop elements, and contributed to the SNR improvements. Overlap along the z-axis (along the center of the MR scanner bore) was maintained to further improve SNR via increased channel density. A key consideration in designing the layout was a primary focus on the visual cortex, driven by a large array of vision neuroscience projects conducted at CMRR that can greatly benefit from higher field strength; this, in turn, motivated an increase in density in the posterior array for the rows facing the occipital lobe, at the expense of a reduced density at the top (six loops) and bottom (three loops) rows. As aligning a loop plane perpendicularly to the z-axis would compromise its sensitivity, a cloverleaf element, rather than loop elements, was placed at the top of the coil, resulting in a Poynting vector perpendicular to the z-axis. The cloverleaf element was composed of two perpendicular loops, each with crossed-over legs, whose outputs were combined into a single receive channel.
A prototype composed of eight channels, arranged in four rows to be representative of the final layout, was initially built and tested on the bench prior to measurements in a head-shaped gel phantom at 10.5T. American Wire Gauge (AWG) 16 silver-coated wire was used to construct the loops. Lumped capacitor values used in the loops were 3.3pF, 4.7pF, and 6.8pF. One trimmer capacitor with a value range of 8-20pF or 2-6pF (SGC3S300NM or SGC3S060NM, Sprague-Goodman, NY, USA), included inside each loop, was carefully adjusted to decouple the loops in each row based on their scatter matrix parameters. The values of the larger trimmers were measured to be in the range of 8.5-15pF after adjustment. The feed board, including the active detuning circuitry, and the preamplifier board were similar to those presented in a previous ISMRM abstract 51.

Receiver Noise Correlation Mitigation
On the bench, the scatter matrix (S) was measured between all coil pairs using the 8-channel prototype. Similar measurements for the completed 32-channel array were limited to adjacent coils in the same row or overlapping coils from different rows, considered to represent worst-case scenarios. Measurements were done after tuning, matching, and self-decoupling the coils, both prior to and after adding preamplifiers. Self-decoupling of adjacent loops within each row provided robust inter-element isolation. Low-noise preamplifiers (WMA447A, WanTcom Inc., Minneapolis, MN) with an input impedance of 1.5Ω and a noise figure of 0.45dB were used for preamplifier decoupling. Preamplifiers were mounted on a ground plane, which helped minimize interactions with transmitter/receiver elements and preamplifier oscillations. To reflect the actual use case, measurements were done with all receive elements in the tuned state. Preamplifier interactions and coaxial cable interactions were investigated to characterize the effects of stacking preamplifiers/coaxial cables close to each other, as well as of cable routing. Noise correlation matrices for various cabling and preamplifier configurations and cable trap locations were measured inside the 10.5T MR scanner with the 8-channel prototype and a head-shaped phantom. Coaxial cables were isolated using self-shielded input cable traps to suppress shield-current-induced noise 52,53. Traps were carefully tuned after being installed on the preamplifier input coaxial cables; however, several cable traps were intentionally tuned slightly off-resonance to mitigate their interaction with resonant loops of the receiver or transmitter 37. These input cable traps were constructed by resonating a trimmer capacitor (8-30pF, SGC3S300NM, Sprague-Goodman, NY, USA) with copper tape soldered to the outer conductor of the preamplifier input coaxials. One further S-matrix measurement was made with the output cable traps detuned, to investigate potential crosstalk between those resonant structures as well.

Transmit-Receive Interaction
The design of routing paths for the preamplifier input coaxial cables was driven by noise correlation and receive-transmit interaction considerations. Coaxial cables may present a significant conductor barrier to the transmitter. Coaxial cables longer than 1/10th of a wavelength (6.7cm at 10.5T) have considerable electromagnetic interaction, significantly distorting the transmit magnetic field (B1+) and adversely affecting the receive array noise correlation matrix.
At 7T, or 298MHz, 1/10th of the wavelength is 10cm; as such, input coaxial cables (8-9cm) would not be as electromagnetically problematic as they are at 10.5T. Based on previous experience with loop transmitter designs 36, knowledge of transmit field patterns 45, and the prototype experiments explained above, it was determined that collecting the coaxial cables in five paths along the z-axis, parallel to the centers of the transmit loops, would result in minimum Tx/Rx interaction. Measurements of the receive-transmit interaction were performed both on the bench and in the scanner. On the bench, the scatter matrix parameters of the transmitter were monitored before, during, and after insertion of the receive array, while the receiver was actively detuned using a DC supply to the PIN diodes 51 and loaded with a human head-shaped phantom. In the scanner, transmit field maps were acquired in two configurations: first, with the 16-channel transmitter and 32-channel receiver as an ensemble transmit-only receive-only coil; second, with the 16-channel transmitter used as a transceiver in the absence of the 32-channel receiver. Relative transmit B1+ maps were acquired using a small flip angle multi-slice gradient echo (GRE) sequence, with magnitude images from sequential single-channel transmissions normalized by their sum 54. Absolute transmit B1+ maps were then generated by normalizing the relative transmit maps by sin(α(r)), where α(r) is the actual flip angle as a function of the spatial coordinate r, obtained via 3D GRE actual flip angle (AFI) acquisitions 55,56. Absolute transmit field maps were acquired both with and without the receive array in place, with the transmitter used as a transceiver when the 32-channel receive array was not inserted.

SNR and g-Factor
Data acquisition
All data were acquired on a 10.5T MRI (Siemens Healthcare, Erlangen, Germany) system using human head-shaped gel phantoms (with permittivity ϵ ≃ 50 and conductivity σ ≃ 0.6S/m, as measured with a dielectric probe, to approximate human brain tissue properties 46). All SNR measurements were replicated two times, once with the 16Tx/32Rx coil described above at 10.5T, then with an industry-standard 1Tx/32Rx coil (Nova Medical Inc., Wilmington, MA) at 7T, using protocols, acquisition parameters, setups, and data analysis pipelines similar to those at 10.5T. Relative transmit field maps were obtained using a series of small flip angle multi-slice gradient echo (2D GRE) sequences, pulsing on one transmit channel at a time 57,58. Typical sequence parameters were seven 5mm thick axial slices (with the center slice at isocenter), TR=100ms, TE=3.5ms, FA=10°, pixel bandwidth = 300 Hz/pixel. Actual flip angle (AFI) 56 maps were acquired (3D, TR=75ms, TE=2.0ms, nominal FA=50°) with all channels transmitting together. Individual channel transmit field maps were derived from both acquisitions. SNR data were acquired in an approximately full longitudinal relaxation state, with a multi-slice GRE sequence in the same 7 axial slices used for B1 mapping, with TR=7000ms-10000ms, TE=3.5ms, FA=80°, pixel bandwidth = 300 Hz/pixel. This was followed by a noise scan, which was identical to the SNR sequence except for FA=0° (no RF pulse) and TR=70-100ms 59. For the 10.5T 16Tx, circularly polarized (CP-like) transmit field (B1+) phase shim settings were calculated with an efficiency-homogeneity trade-off as the objective, allowing for acceptable B1+ efficiency at the center 23,60. The same resulting 16-channel B1+ shim setting was used in all acquisitions.
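For context, the AFI method estimates the actual flip angle from the ratio of two steady-state signals acquired with interleaved repetition times. A minimal sketch of that step is shown below; it implements the standard AFI relation (cos α ≈ (rn − 1)/(n − r), with r = S2/S1 and n = TR2/TR1, valid for TR1, TR2 ≪ T1) rather than the vendor reconstruction, and the variable names and example TR values are illustrative.

```python
import numpy as np

def afi_flip_angle(s1, s2, tr1, tr2):
    """Actual flip angle map (radians) from a 3D AFI acquisition.

    s1, s2: steady-state signal magnitudes from the two interleaved TRs.
    Uses cos(alpha) = (r*n - 1) / (n - r), with r = s2/s1 and n = tr2/tr1."""
    r = s2 / s1
    n = tr2 / tr1
    cos_a = np.clip((r * n - 1.0) / (n - r), -1.0, 1.0)
    return np.arccos(cos_a)

# Example (illustrative TRs): alpha = afi_flip_angle(img1, img2, 25e-3, 125e-3)
# Relative per-channel maps can then be scaled using the sin(alpha)
# normalization described above to obtain absolute B1+ maps.
```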
SNR and g-factor calculations
The noise correlation matrix was calculated based on complex noise data (obtained in the absence of RF pulsing) and used to decorrelate the SNR data before they were combined using the root-sum-of-squares method 35,48. In steady-state gradient echo, the signal intensity is proportional to ρ(1 − E1)sin(θ)E2/(1 − E1cos(θ)), where ρ represents proton density, E1 = exp(−TR/T1), E2 = exp(−TE/T2*), and θ is the spatially varying, voltage-normalized actual flip angle map. With TR ≫ T1 it follows that E1 ≪ 1, which results in S ∝ ρ sin(θ)E2, where sin(θ) is the flip angle term reflecting transmit B1+ inhomogeneity. SNR maps were normalized by sin(θ), voxel size, number of acquisitions, number of samples along the read-out and phase-encoding directions, and bandwidth to make SNR calculations comparable across experiments 55. T2* decay and the noise figure of the receiver chain of the MR scanners (excluding the RF coil and preamplifiers) were not reflected in these calculations. Noise amplification in accelerated images was quantified in terms of the g-factor 27 and calculated as g = SNRfull/(SNRR × √R), where R is the acceleration factor and SNRR is the accelerated SNR calculated from fully sampled acquisitions retrospectively under-sampled. In order to compare the g-factor numbers of the 10.5T-32Rx presented here with previously published 13 g-factors of the 7T-32Rx and 7T-64Rx, the same acquisition and post-processing pipeline used for the 7T-32Rx and 7T-64Rx was maintained in our experiments with the 10.5T-32Rx.
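To make the calculation pipeline concrete, the fragment below sketches the noise decorrelation, root-sum-of-squares combination, and retrospective g-factor step described above. It is a schematic re-implementation under simplifying assumptions (image-domain inputs, a precomputed accelerated reconstruction), not the actual processing code used for either scanner.

```python
import numpy as np

def snr_map(img, noise, sin_theta):
    """Unaccelerated SNR map from multi-channel GRE data.

    img:       (n_ch, ny, nx) complex channel images.
    noise:     (n_ch, n_samples) complex samples from the FA = 0 noise scan.
    sin_theta: (ny, nx) flip-angle term used for B1+ normalization."""
    psi = np.cov(noise)                              # channel noise covariance
    w = np.linalg.cholesky(np.linalg.inv(psi))       # whitening operator
    img_w = np.tensordot(w.conj().T, img, axes=1)    # decorrelate channels
    combined = np.sqrt((np.abs(img_w) ** 2).sum(axis=0))  # root sum of squares
    return combined / sin_theta                      # remove sin(theta) weighting

def g_factor(snr_full, snr_acc, r):
    """Noise amplification g = SNR_full / (SNR_R * sqrt(R))."""
    return snr_full / (snr_acc * np.sqrt(r))
```

Applying snr_map to a retrospectively under-sampled reconstruction yields SNR_R, from which g follows voxel-wise via the second function.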
Receiver Noise Correlation and Interaction with Transmit Field The values of inter-element isolation measured prior to preamplifier decoupling in terms of S21 at the resonance frequency of 447MHz, using a 16-port vector network analyzer (VNA, ZNBT8, Rohde & Schwarz), corrected for 2dB cable loss, were in the range of 11-12dB for self-decoupled adjacent coils in the same row and 12-15dB for partially-overlapped coils from different rows (see Figure 2a), and in the range of 20-30dB negative for distant neighbors, demonstrating effective self-decoupling of the 32-channel receive array without the need for overlap in each row, explicit transformer decoupling, or unbalanced capacitive distribution. Preamplifier decoupling further improved crosstalk between receive array elements to 35-40dB negative. The noise correlation matrix measured inside the 10.5T MR scanner resulted in maximum noise correlation of 0.37 (Figure 4), which is a significant improvement compared to previous works 13 . We attribute this to the novel self-decoupling technique, experimentally optimized cable routing (relative to transmit elements to minimize shield-current-induced noise and transmit field distortion) and cable trap locations (to minimize trap interference with receive element resonances). Figure 5 shows power-normalized transmit field maps (in μ /√ ) measured in the 10.5T scanner using the 16-channel transmitter with and without insertion of the 32Rx receive array. These maps demonstrated less than 10% distortion of the transmit field upon inserting receive array; as such, characterizing the limited transmit field change following the insertion of the receive array. This characterization allows for streamlining safety validation studies by obviating the need for inclusion of the receive array in electromagnetic simulations of specific absorption rate (SAR). Figure 6 illustrates experimental unaccelerated SNR comparisons in seven axial slices obtained with identical protocol, experimental setups, and analysis pipeline using a 32-channel receiver at 7T (7T-32Rx) and a 32-channel receiver at 10.5T (10.5T-32Rx). This comparison resulted in 67% more central SNR and 84% more peripheral SNR obtained using the 10.5T-32Rx compared with the 7T-32Rx. This enhanced SNR is primarily due to higher static magnetic field 0 strength as well as noise mitigation and SNR advantages provided by the self-decoupling method (see Table 1). Figure 7 provides unaccelerated 3D (axial, coronal, sagittal) SNR comparisons between 10.5T-32Rx and 7T-32Rx in the central slices. The SNR ratio maps presented in Figure 7 show local hot spots where 10.5T-32Rx SNR is 2-2.5 times the 7T-32Rx SNR. These posterior hot spots follow the anatomically guided design of the 10.5T-32Rx intended to provide an SNR boost at the visual cortex. Figure 8 provides a line plot of the SNR along the y-axis of the axial slice of Figure 7 demonstrating significant SNR gains at 10.5T. SNR and g-Factor The performance of the 10.5T-32Rx in 2D accelerated acquisitions can be compared with the 7T-32Rx in terms of g-factors (presented in Table 2). The mean inverse g-factor of the 10.5T-32Rx (1/g = 0.69) is 18% more than mean inverse g-factor of the 7T-32Rx (1/g = 0.59) for 4x4 acceleration. Taking both the unaccelerated SNR advantage and the lower acceleration penalty of 10.5T-32Rx into account, the 4x4 accelerated SNR of the 10.5T-32Rx is expected to be 89% more than the 4x4 accelerated SNR of 7T-32Rx. 
DISCUSSION
Previous studies have shown the necessity of using high-density receive arrays at ultra-high field (UHF) to approach the ultimate intrinsic SNR 15,25 and capitalize on the acceleration potential of ultra-high field MRI. In accelerated imaging with k-space undersampling, SNR is penalized by the noise amplification of the receive array, parametrized by the g-factor 27. Therefore, receiver noise mitigation strategies play a critical role in accelerated imaging performance at UHF MRI. Here, we scaled up a novel self-decoupling approach 40 to noise mitigation to build a 10.5T 32-channel receive array. The self-decoupling method presented by Yan et al. 40 introduced a promising new method for the decoupling of high-density receive arrays. However, at 3T and 7T operating frequencies the method requires significantly unbalanced distributed capacitors or the introduction of inductive circuit elements to achieve the desirable unbalanced current distributions. Our data confirm Yan et al.'s suggestion that in high-density, 3D conformal receive arrays, self-decoupling can be paired with preamplifier decoupling to improve inter-element isolation. Importantly, we demonstrate that the combination of a higher frequency (447MHz), the smaller loop sizes appropriate for a 32-channel receiver, and the distributed inductance of such loops being in the range required for self-decoupling can be exploited to achieve acceptable inter-element isolation with a more uniform capacitive distribution at 447MHz. The proposed method improves the root-sum-of-squares of the magnitudes of the receive fields substantially, and results in better noise-correlation-weighted SNR compared to overlap decoupling. We implemented the self-decoupling method in a 3D conformal, high-density 32-channel receive-only array for human brain imaging at 447MHz for the first time. We anticipate that at extremely high frequencies (447MHz and higher) the self-decoupling strategy will significantly simplify future high-density receive array design and construction, as it obviates the practical complexities of common decoupling techniques. The experimental noise correlation matrix demonstrated effective inter-element receiver decoupling and suggests that this array configuration has promising potential for parallel imaging. Insertion of the receive array is shown to result in limited transmit field distortion. This will facilitate the electromagnetic simulation effort for specific absorption rate (SAR) and regulatory validation. Furthermore, the limited receiver-transmitter interaction can enable an interchangeable coil setup where a single transmitter can be used with 32-channel, 64-channel, and 128-channel receivers. The peripheral and central SNR gains presented here in comparison to 7T confirm the expected gains 61 for ultra-high field imaging [62][63][64][65] and are crucial to various high-resolution clinical and research applications of UHF MRI 2,8-11. The SNR and g-factor improvements can be attributed to several factors. On top of the fundamental contributions of increased static magnetic field strength, the self-decoupling method employed in this receive array is shown here to enhance SNR as well. Careful receiver noise correlation management was instrumental in the improved parallel imaging performance. The g-factor of 2D accelerated imaging using the 10.5T 32-channel receiver presented here was on par with that of a 64-channel receiver at 7T. This work is a significant milestone towards building 64-channel and 128-channel receive arrays for human brain imaging at 10.5T.
It is fully anticipated that scaling up to a 128-channel design will pose additional challenges in noise mitigation, preamplifier design and oscillations, and receiver-transmitter interactions. However, the expected SNR and parallel imaging gains provide a strong rationale and impetus to address such challenges. CONCLUSIONS There is significant clinical and research interest in capitalizing on the acceleration and signal-to-noise ratio (SNR) potential of ultra-high field MRI. Here, we present the first self-decoupled 32-channel receive array (32Rx) and demonstrate substantially superior SNR and parallel imaging performance using a 10.5T MRI system compared with a 7T MRI system. The 10.5T-32Rx provided significant peripheral and central SNR gains compared with an industry-standard 7T-32Rx, both in unaccelerated and in 2D accelerated acquisitions. This achievement delivers the much-anticipated SNR boost in highly accelerated ultra-high field imaging required for furthering the understanding of human brain function and connectivity. Table 2 - Comparison of g-factor values for 4x4 2D accelerated acquisitions using the 10.5T-32Rx with g-factor values for the 7T-64Rx and 7T-32Rx 13 .
Amiodarone in the aged SUMMARY Amiodarone is a highly effective antiarrhythmic drug, but can have serious adverse effects, particularly in older patients. If possible it should not be used purely for controlling the heart rate. If a prescription for amiodarone is contemplated, particularly for an older patient, consult a cardiologist. Avoid amiodarone in patients with significant conduction system disease, significant liver or pulmonary disease, or hyperthyroidism. Regular monitoring of the patient, clinically and biochemically, is required to identify complications at an early, treatable stage. Maintain a high level of suspicion if a patient taking amiodarone is experiencing adverse reactions and presents with new symptoms. Consider potential drug interactions when other drugs are prescribed with amiodarone. The effects and toxicities of amiodarone may persist weeks after it is stopped. The primary goals in management are to prevent disabling symptoms through rhythm or rate control and to reduce the risk of stroke with anticoagulation. 7 Several major trials have compared rate and rhythm control in patients with atrial fibrillation. They found no significant difference in all-cause mortality, cardiovascular death or composite end points including death, stroke, major bleeding, cardiac arrest and congestive cardiac failure. 7 In fact, the AFFIRM study of over 4000 patients showed a trend towards increased mortality with rhythm control, particularly in older patients. 8 The differences were partly explained by non-cardiac deaths with antiarrhythmic therapy, which was thought to be more toxic in those with serious medical conditions. There were no differences between the two groups in terms of cardiovascular mortality, deaths due to arrhythmia, vascular events or rates of ischaemic stroke. 9 The majority of the patients treated with rhythm control in AFFIRM were managed with amiodarone. Other, smaller studies have shown similar increases in non-cardiac mortality in patients taking amiodarone. 9,10 Introduction Amiodarone is widely considered to be the most effective antiarrhythmic drug available. 1 It is commonly used to treat atrial fibrillation and ventricular arrhythmias. Amiodarone, and its active metabolite desethylamiodarone, have multiple effects on cardiac depolarisation and repolarisation. Although it primarily blocks potassium channels, amiodarone potentiates its effect through all four of the classic Vaughan Williams mechanisms of antiarrhythmic action. Despite its efficacy, amiodarone is a challenging drug to use in clinical practice due to its prolonged half-life, multiple adverse effects and drug interactions. These adverse effects are particularly problematic for older people, who are more susceptible to drug toxicities and who have higher rates of polypharmacy. There is also a lack of information regarding the safety of amiodarone in older people. 2 A cardiologist's opinion is recommended before prescribing. Amiodarone can have adverse effects in multiple organ systems including the lungs, heart, liver, thyroid, gut, skin, nerves and eyes. 3 Its use is also implicated in a range of drug-drug interactions with commonly prescribed cardiovascular drugs. 4 Indications Amiodarone is one of the most frequently prescribed antiarrhythmic drugs for atrial fibrillation. 5 It is used by 8-11% of patients. 6 Atrial fibrillation is the commonest arrhythmia in older adults, with an estimated prevalence of 9% in people over the age of 80 years.
or a first-line drug in the setting of left ventricular dysfunction, left ventricular hypertrophy or coronary artery disease. While no comment is made specifically about older patients in the guidelines, beta blockers should still be considered first-line drugs in this population. 11 For patients on long-term treatment, the indication for continuing amiodarone should be reviewed. Intravenous amiodarone is indicated to terminate acute ventricular tachycardia in haemodynamically stable patients. It can also be used in the acute management of patients who become haemodynamically stable after maximal energy shock. Amiodarone can suppress events in patients with ischaemic heart disease and non-ischaemic cardiomyopathy who have recurrent ventricular arrhythmias. 12 Pharmacokinetics and dosing Amiodarone has incomplete and erratic absorption following oral administration. It is markedly lipophilic, resulting in a large volume of distribution (on average approximately 66 L/kg) and consequently a long half-life. 4 Estimates of half-life vary; however, a terminal half-life of up to 142 days has been reported as tissue stores deplete. 1 The principal active metabolite, desethylamiodarone, is reported to have a half-life of 60-90 days with chronic oral dosing. 13 Most of the drug is excreted via the liver and gastrointestinal tract by biliary excretion. The plasma half-life and drug concentrations of amiodarone are further increased in older people due to an increased volume of distribution resulting from proportional increases in body fat. While the plasma concentration of amiodarone can be estimated, this is of limited value as the measurement is inaccurate and does not correlate well with efficacy or adverse effects. 1 The typical maintenance dose of amiodarone is 200 mg per day. In older patients, decreasing the dose to 100 mg per day is advised, particularly if the indication is atrial fibrillation rather than a life-threatening arrhythmia. 4 The lowest effective loading and maintenance doses should be used in older patients, and dose increases should be undertaken with caution. Unlike in ventricular arrhythmias, loading doses are often unnecessary for treating atrial fibrillation. Given the long half-life, it may take weeks before dose increases yield clinically apparent effects, suggesting the need for cautious and slow up-titration. Similarly, clinicians should be aware that the effects and toxicities of amiodarone can still be present weeks to months after stopping the drug. Drug interactions Amiodarone inhibits cytochrome P450 (CYP) enzymes 3A and 2C and the drug transporter P-glycoprotein. 9 This leads to impaired metabolism and, potentially, increased sensitivity of patients to several drugs including warfarin, digoxin, non-steroidal anti-inflammatory drugs, statins and benzodiazepines. 14 If amiodarone is added to warfarin, the warfarin dose must be reduced and the INR should be closely monitored. 6 Interactions between amiodarone and the direct thrombin inhibitor dabigatran have been associated with a 50-200% increase in the area under the curve, resulting in a potentially increased risk of bleeding. 15 Similarly, non-randomised studies have reported a potential increase in the risk of bleeding with concurrent use of amiodarone and rivaroxaban, 16 although this interaction has not been described with apixaban and amiodarone. 17
Amiodarone can also lead to bradyarrhythmias, with an increased risk of complete heart block when used in combination with beta blockers or calcium channel blockers. Amiodarone commonly causes QT prolongation on the ECG and should be used with caution when combined with other drugs that also prolong the QT interval. However, induction of polymorphic ventricular tachycardia is uncommon. 1 Grapefruit juice inhibits CYP3A4, leading to significantly reduced conversion of amiodarone to its active metabolite desethylamiodarone. Grapefruit juice should therefore be avoided during amiodarone therapy. 18 Organ-specific complications Older people are at an increased risk of the organ-specific complications of amiodarone. This is because of changes to pharmacokinetics as well as higher rates of medical comorbidities, physiological deterioration in renal and hepatic function, and higher rates of cognitive, motor and sensory impairment. 14 Older people may also present with non-specific complaints secondary to amiodarone, including fatigue, nausea and anorexia. A high index of suspicion should be maintained if an older patient presents with new symptoms. Regular monitoring is recommended (see Table). The long half-life of amiodarone means that complications may emerge after the drug is ceased. In a review of 1020 cases of reported amiodarone-induced toxicity, the most commonly reported adverse reactions were thyroid disorders, followed by skin reactions such as photosensitivity. Pulmonary toxicity was the third most common adverse event, but is considered the most serious as it is associated with increased mortality. 19 Skin Photosensitivity is common following treatment with amiodarone. All patients should be cautioned to use sunscreen and cover exposed skin. Blue skin discolouration can occur, but typically resolves several months after stopping amiodarone. Lungs Pulmonary toxicity occurs in approximately 2-5% of patients taking amiodarone and is the adverse effect most associated with increased mortality. 23 The death rate ranges from 9% in patients who develop a chronic pneumonia to 50% in those with acute respiratory distress syndrome. 24 Pulmonary toxicity is more common in older patients and in patients with underlying lung pathology. 1,19 The risk increases threefold for every 10 years of age in patients over 60 years old compared with those under 60 years. 24 Toxicity can occur at any time during the course of treatment. Those at the greatest risk are patients who have taken a daily dose of 400 mg or more for more than two months, or a lower dose, commonly 200 mg daily, for more than two years. 25 Thyroid Amiodarone may lead to both hypo- and hyperthyroidism. Patients who already have thyroid abnormalities, such as nodular goitre or Hashimoto's disease, are likely to have a higher risk of complications. Amiodarone-induced hypothyroidism is more common in iodine-sufficient countries and typically occurs within the first two years of therapy. It is treated with thyroxine to normalise the concentrations of thyroid-stimulating hormone. Amiodarone-induced thyrotoxicosis can occur suddenly and at any time during treatment. The management includes stopping amiodarone and considering antithyroid therapy, prednisone or surgical thyroidectomy. 20,21 Thyroid dysfunction may be asymptomatic, particularly in older patients, 22 and therefore the diagnosis should be based on biochemical tests.
Clinical and laboratory assessments are needed at the start of treatment. Thyroid function should be monitored every six months. Clinical symptoms or changes in cardiac function should also prompt evaluation of thyroid function. Common presentations of pulmonary toxicity include acute or subacute cough and progressive dyspnoea. 20 Routine screening is of limited value as symptoms can develop rapidly. Patients who present with new respiratory symptoms should be promptly investigated. 26 Pulmonary function tests typically show restriction as well as a decreased diffusing capacity of the lungs for carbon monoxide (DLCO). High-resolution CT of the chest generally reveals diffuse ground glass and reticular abnormalities. The treatment of pulmonary toxicity involves stopping amiodarone and often giving corticosteroids. Prolonged courses may be needed because of the long half-life of amiodarone. Heart Sinus node dysfunction and conduction disease are common in older patients, so a careful assessment is needed before starting amiodarone. 27,28 Bradycardia and heart block occur in 1-3% of patients treated with amiodarone. Its use is therefore relatively contraindicated in patients with second- or third-degree heart block who do not have a pacemaker. Gut The gastrointestinal effects of amiodarone include nausea, anorexia and constipation. They can occur in up to 30% of patients and are more common in older people. The effects tend to improve with dose reduction. 3 Liver Hepatic toxicity occurs commonly in patients receiving long-term amiodarone. Liver enzymes should be checked every six months. 3 If concentrations reach three times the upper limit of normal, amiodarone should be discontinued, unless the patient has a life-threatening arrhythmia. Other adverse effects Neurological toxicity associated with amiodarone can include ataxia, paraesthesia and tremor. In a frail older patient these effects could increase the risk of falls. These neurological effects are often dose-related and improve when the dose is reduced. Corneal microdeposits are visible on slit lamp examination in nearly all patients treated with amiodarone for three months. These deposits rarely affect vision or necessitate discontinuation of amiodarone. 21 Optic neuropathy and optic neuritis have been described in a small number of patients; however, a causal relationship has not been well established. Conclusion In older adults, the use of toxic drugs for non-life-threatening indications should always be avoided. Amiodarone is a highly effective antiarrhythmic; however, its unpredictable pharmacodynamics and broad adverse-effect profile make it a challenging drug to use safely in clinical practice. Its use should be reviewed in older patients with multiple comorbidities. Safer alternative drugs should be used preferentially in older patients with atrial fibrillation or minor ventricular arrhythmias, such as ventricular ectopy and non-sustained ventricular tachycardia. When ongoing treatment with amiodarone is required for older patients, care should be taken to use the lowest effective dose. Patients often require dose reductions as they age, in consultation with their cardiologist. Regular monitoring of liver and thyroid function and pulmonary symptoms is required to identify complications at an early stage. Amiodarone toxicity often presents atypically and insidiously, particularly in older patients. New symptoms in a patient taking amiodarone should always be considered as potential adverse effects.
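The persistence of effects and toxicities for weeks to months after cessation, stressed throughout this article, follows directly from first-order elimination with the long terminal half-lives quoted earlier. A minimal sketch (the 30-day horizon is illustrative):

```python
def fraction_remaining(days, half_life_days):
    """First-order elimination: C(t)/C0 = 0.5 ** (t / t_half)."""
    return 0.5 ** (days / half_life_days)

# Terminal half-lives spanning the values quoted above (60-90 days for
# desethylamiodarone, up to 142 days for the parent drug).
for t_half in (60, 90, 142):
    print(t_half, round(fraction_remaining(30, t_half), 2))
# Roughly 71-86% of the accumulated burden persists one month after stopping,
# consistent with toxicity emerging weeks to months after cessation.
```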
Design and Engineering of Light-Induced Base Editors Facilitating Genome Editing with Enhanced Fidelity Abstract Base editors, which enable targeted nucleotide conversion at genomic loci without double-stranded breaks, have been engineered as powerful tools for biotechnological and clinical applications. However, the application of base editors is limited by their off-target effects. Continuously expressed deaminases used for gene editing may lead to unwanted base alterations at unpredictable genomic locations. In the present study, blue-light-activated base editors (BLBEs) are engineered based on the Magnets photoswitches, which switch from a monomeric to a dimerized state in response to blue light. By fusing the N- and C-termini of split DNA deaminases with the Magnets photoswitches, efficient A-to-G and C-to-T base editing is achieved in response to blue light in prokaryotic and eukaryotic cells. Furthermore, the results showed that BLBEs can realize precise blue-light-induced gene editing across broad genomic loci with low off-target activity at the DNA and RNA levels. Collectively, these findings suggest that the optogenetic utilization of base editing and optical base editors may provide powerful tools to promote the development of optogenetic genome engineering. Table of Contents Supporting Tables Table S1. Target sgRNA-protospacer sequences for E. coli DH10B and HEK293T in this study. Table S2. Sequences of protospacers and primers for sgRNA-independent and -dependent off-target sites for E. coli DH10B and HEK293T in this study. Supporting Sequences Sequence S1. Amino acid sequences used for E. coli DH10B in this study. Sequence S2. Amino acid sequences used for HEK293T in this study. Figure S1. The construction and functional verification of the blue light-activated adenine base editor. a) The number of clones with rifampicin resistance for all chimeric sfGFP-TadA-8e proteins. Counts of four independent replicates (n = 4) for each protein variant are displayed. b) Photos of E. coli DH10B clones with rifampicin resistance. Intact TadA-8e serves as the positive control. The photos are from three independent experiments. c) Schematic diagram of split sfGFP for analyzing the spontaneous dimerization of split TadA-8e variants. The dimerization of split TadA-8e (N-TadA-8e and C-TadA-8e) induces the assembly of sfGFP 1-10 and sfGFP 11 to recover fluorescence. A DNA sequence is introduced within the sfGFP coding gene to split sfGFP, comprising a stop codon (TAA), a ribosome binding site (RBS) and an initiation codon (ATG). d) DNA sequencing chromatograms of different BLABE systems with different split sites in the presence and absence of blue light. The arrow indicates that the red base is the potential editing site. The target site ABES4 is treated with different light intensities (2.5 mW cm-2; 5 mW cm-2; 10 mW cm-2) for 300 min. b) Bar plots showing the on-target DNA base editing efficiency of BLCBE under various blue light intensities. The target site CBES3 is treated with different light intensities (dark; 2.5 mW cm-2; 5 mW cm-2; 10 mW cm-2) for 300 min. The editing efficiencies of ABES4 and CBES3 are calculated from three independent replicates (n = 3). Figure S8. Total RNA mutation types in the transcriptome. a-c) The frequencies of total RNA mutation types in the transcriptome for E. coli DH10B (a, negative control), the adenine base editor (b), and the cytosine base editor (c).
The untreated E. coli DH10B serves as the negative control. Left, the donut chart shows the proportion of each RNA mutation type among the total transcriptomic single nucleotide polymorphism (SNP) mutations. Right, the mean values of RNA mutation frequencies across all mutation types of the transcriptome. The RNA mutation frequencies of three independent replicates are displayed (n = 3). Figure S1. The construction and functional verification of the blue light-activated adenine base editor. Figure S2. Split deaminase strategy for the construction of BLCBE. Figure S3. The optimization of linker length of BLABE and BLCBE. Figure S4. The performance of BLABE and BLCBE for base editing in E. coli DH10B. Figure S6. Allele frequencies in the entire amplicon of DNA on-target and sgRNA-dependent off-target editing at diverse loci in E. coli DH10B. Figure S7. Allele frequencies in the amplicon of DNA on-target and sgRNA-independent off-target editing at diverse genomic loci in E. coli DH10B. Figure S8. Total RNA mutation types in the transcriptome. Figure S9. The optimization of expression strategies and plasmid ratio for HEK293T cell transfection of BLCBE. Figure S10. Allele frequencies in the amplicon of sgRNA-dependent off-target editing at genomic loci for the ABE, BLABE, CBE, and BLCBE systems. Figure S11. Allele frequencies in the amplicon of sgRNA-independent off-target editing at diverse genomic loci for the ABE and BLABE systems. Figure S12. Allele frequencies in the amplicon of sgRNA-independent off-target editing at diverse genomic loci for the CBE and BLCBE systems. Figure S13. Off-target editing of base editor systems on the transcriptome in HEK293T cells. Figure S14. RNA off-target editing induced by ABE and BLABE at all chromosome locations. Figure S15. RNA off-target editing induced by CBE and BLCBE at all chromosome locations. Figure S2. Split deaminase strategy for the construction of BLCBE. a) Schematic of potential split sites on the APOBEC3A (A3A) amino acid sequence. Four candidate sites for splitting are located in the loop area, marked with red triangles, with each split positioned between two amino acids. The secondary structure of A3A is highlighted in different colors (α helix, dark blue; β sheet, light blue). b) Cartoon representation of the A3A protein. The potential sfGFP insertion sites for A3A are behind the red-labeled amino acids. c-f) Base editing efficiency of various BLCBE systems, including N42 (c), D85 (d), T118 (e), and G147 (f). Base editing efficiency is calculated by EditR (N.D., not detected; n = 3 independent replicates). g) DNA sequencing chromatograms for the selection of photoswitches for different BLCBE systems. The Magnets are classified into three levels, and the reverse sgRNA sequences are shown. All possible editing sites are marked with black arrows. Figure S5. Fluorescence images showing sfGFP variant expression. a, b) E. coli DH10B with plasmids expressing the sfGFP mutants and BLABE (a) or BLCBE (b) are cultured for 540 min and treated with blue light at 240 min. The fluorescence images are obtained at 60 and 540 min under light and dark conditions. Scale bar, 20 μm. Intact ABE and CBE serve as positive controls.
Figure S6. Allele frequencies in the entire amplicon of DNA on-target and sgRNA-dependent off-target editing at diverse loci. a) Allele nucleotide percentages of DNA on-target and off-target base editing of BLABE targeting two sites, ABES4 and ABES19. ABE serves as the positive control, and the target editing efficiency of BLABE is tested in the presence and absence of blue light. b) Allele frequencies of DNA on-target editing within target sites and off-target allele efficiency at diverse genomic loci in the presence and absence of blue light for BLCBE. The possible editing sites within the protospacer sequence are marked by red arrows and the unexpected edits are indicated by black arrows. The base substitutions and deletions are represented with bold letters and short dashes, respectively. The editing window is indicated by red double-ended dotted lines. PAM sequences for SpCas9 (NGG) and SaCas9 (NNGRRT) are marked by short blue lines. The values on the right of the graph represent the frequencies and reads of mutation alleles (n = 3). Figure S7. Allele frequencies in the amplicon of DNA on-target and sgRNA-independent off-target editing at diverse genomic loci. a, b) Allele frequencies of on-target editing and sgRNA-independent off-target editing for BLABE (a) and BLCBE (b) in the presence and absence of blue light. The allele frequencies of off-target editing within the R-loop region are calculated by amplicon sequencing, and the protospacer sequence is indicated in purple font. The possible editing sites within the protospacer sequence are marked by red arrows and the unexpected edits are indicated by black arrows. The base substitutions and deletions are represented with bold letters and short dashes, respectively. The editing window is indicated by red double-ended dotted lines. PAM sequences for SpCas9 (NGG) and SaCas9 (NNGRRT) are marked by short blue lines. The values on the right of the graph represent the frequencies and reads of mutation alleles (n = 3). Figure S9. The optimization of expression strategies and plasmid ratio for HEK293T cell transfection of BLCBE. a) Target cytosine base editing efficiency for various expression strategies for BLCBE. HEK293T cells were transfected using BLCBE with various protein expression strategies, where the BLCBE systems were expressed using P2A, IRES, and cleavage assays. The base editing efficiency at HEK2 is shown by the bar chart (n = 3; N.D., not detected). b) Target cytosine base editing efficiency for transfection of HEK293T cells using the dual-plasmid system. The numbers 0.3 ~ 3 in the legend are the ratios of transfected plasmids (pCMV-A3AN:pCMV-pMag-A3AC). The bar chart shows the HEK2 editing efficiency using different plasmid ratios for the expression of BLCBE (n = 3; N.D., not detected).
Figure S10. Allele frequencies in the amplicon of sgRNA-dependent off-target editing at genomic loci for the ABE, BLABE, CBE, and BLCBE systems. a) Allele frequencies of sgRNA-dependent off-target editing for ABE and BLABE with sgRNA targeting HEK2. The off-target site was selected by Cas-OFFinder and named HEK2-OT1. The BLABE system was treated under blue light and darkness, respectively. b) Allele frequencies of sgRNA-dependent off-target editing for CBE and BLCBE with sgRNA targeting HEK2. The off-target site was selected by Cas-OFFinder and named HEK2-OT1. The BLCBE system was treated under blue light and darkness, respectively. The grey rectangle indicates the protospacer sequence within the sgRNA. The allele frequencies of off-target editing within the R-loop region are calculated by amplicon sequencing. The possible editing sites within the protospacer sequence are marked by red arrows and the unexpected edits are indicated by black arrows. The base substitutions and deletions are represented with bold letters and short dashes, respectively. The editing window is indicated by red double-ended dotted lines. PAM sequences for SpCas9 (NGG) and SaCas9 (NNGRRT) are marked by short blue lines. The values on the right of the graph represent the frequencies and reads of mutation alleles (n = 3).
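Across these figures, per-site editing efficiencies are simply the fraction of amplicon reads carrying the converted base at a given protospacer position. A minimal sketch of that calculation (the read strings and target index are toy values; the study itself used EditR and amplicon sequencing pipelines):

```python
from collections import Counter

def editing_frequency(reads, position, edited_base="G"):
    """Fraction of aligned amplicon reads carrying the edited base at a
    0-indexed protospacer position."""
    counts = Counter(read[position] for read in reads if len(read) > position)
    total = sum(counts.values())
    return counts[edited_base] / total if total else 0.0

# Toy aligned reads covering a protospacer whose target A sits at index 4
reads = ["TTGCAGTCA", "TTGCGGTCA", "TTGCGGTCA", "TTGCAGTCA"]
print(editing_frequency(reads, 4))  # 0.5, i.e. 50% A-to-G editing
```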
Study on main controlling factors of high injection pressure in offshore heavy oil reservoirs High injection pressure, insufficient steam injection rate and low bottom-hole steam dryness seriously affect the performance of steam stimulation in offshore heavy oil reservoirs. Analysis of the reservoir-related factors behind high injection pressure during huff and puff shows that oil viscosity increases and permeability decreases. Grey correlation analysis shows that the contribution of a unit viscosity increase to the injection pressure rise is twice that of a unit permeability decrease. In view of this main controlling factor, the viscosity change of crude oil was studied in the laboratory. The results showed that: 1) as the heavy-component content of the crude oil increases, the viscosity of the crude oil increases linearly; 2) the inversion point of oil emulsification at reservoir temperature is at 50% water cut, and the viscosity of the water-in-oil emulsion increases with water cut up to that point; 3) the higher the asphaltene content, the stronger its viscosity-increasing effect. Given the actual conditions of offshore heavy oilfields, residual heavy components together with emulsification are the main factors hindering steam injection. Introduction Thermal recovery is widely used in heavy oil reservoirs. Among thermal methods, steam stimulation, owing to its simple field implementation and fast response, is suitable for many types of heavy oil reservoirs and is currently the main technology for their development [1,2]. Because of the particular environment of offshore oil fields, research on offshore thermal recovery of heavy oil started late. Since 2008, pilot studies on multi-thermal-fluid stimulation and steam stimulation have gradually been carried out in the Bohai Bay area. In the NB35-2 oil field, the pilot test of multi-thermal-fluid stimulation was carried out first and was successful [3]. To date, steam stimulation has only been pilot-tested in one offshore well, CBA32H. Owing to limited understanding of steam stimulation, problems such as serious corrosion and breakage of the pipe string, high steam injection pressure, and steam channeling occurred, although production still increased markedly. To accelerate the development of offshore steam stimulation technology, research on steam stimulation and field pilot tests in offshore oil fields were conducted [4], but problems such as high steam injection pressure, low bottom-hole steam dryness and steam channeling still occurred. High steam injection pressure has two main effects: on the one hand, it limits the steam injection rate during the initial injection process, thus reducing steam dryness [5]; on the other hand, it easily causes steam channeling after late-stage injection into the formation [6]. Overall, many operating wells suffer from high steam injection pressure, short production time, long idle periods, high operating cost, low oil/steam ratio, poor economics and low production rates, which seriously limit the thermal recovery of heavy oil.
Therefore, it is necessary to analyze the factors influencing the high injection pressure of steam stimulation in offshore heavy oil reservoirs, overcome the controllable factors as far as possible, increase the steam injection rate and ensure the steam dryness at the bottom of the well, so as to improve the development performance of steam stimulation in heavy oil reservoirs. Study on main control factors of high steam injection pressure in an offshore oil field Among the influencing factors of high steam injection pressure, excluding subjective factors such as injection operations, the objective factors affecting steam stimulation injection pressure are formation damage, crude oil viscosity, reservoir properties, formation clay mineral content and formation energy [7][8][9]. Induction of factors affecting injection pressure The objective factors that cause high steam injection pressure can be divided into two types. One is increased flow resistance of the reservoir fluid due to changes in composition, emulsification, etc. The other is a change of reservoir properties caused by blockage and fines migration, that is, a decrease of the permeability K. These two effects can be described through the "start-up pressure gradient" of heavy oil [10,11]. A production well produces by controlling the flowing pressure at the bottom of the well and overcoming the flow resistance of the crude oil through the driving pressure difference supplied by formation energy. Therefore, during steam injection, the injected steam must overcome both forces in the formation simultaneously for injection to succeed, as shown in Figure 1. It can be seen from Figure 1 that, for a heavy oil reservoir undergoing steam stimulation for the first time, the formation fluid is in dynamic equilibrium (Pf1 - Pf2 = Pfw), where Pf2 is the "start-up pressure". The resistance that the injected steam must overcome is Pf1 + Pf2 = 2Pf2 + Pfw; that is, injecting steam into the reservoir requires overcoming the combined value of twice the start-up pressure plus the bottom-hole flowing pressure. Therefore, for the objective factors affecting steam injection, the "start-up pressure gradient" formula can be used for comprehensive analysis and judgment, and it is also applicable to wells undergoing multiple cycles of steam stimulation. Based on the "start-up pressure gradient" formula, the changes in oil viscosity and reservoir permeability caused by steam injection can be fully considered. Weight analysis of factors affecting injection pressure From the literature [12], the "start-up pressure gradient" of a heavy oil reservoir in the Bohai Sea was obtained, as shown in Formula 1. The grey relational analysis method, implemented in MATLAB, was used to analyze the contributions of fluid viscosity and permeability decline to the flow resistance. Grey correlation is a multifactor statistical analysis method: it describes the strength, size and order of the relationships between factors based on sample data for all the factors, and reflects the trend agreement between two factors according to the sample data. The viscosity range considered is 1,000-25,000 mPa·s, and the permeability range is 250-3,850 mD. The results are shown in Figure 2. Figure 2. Contribution of viscosity and permeability to injection pressure As can be seen from Figure 2, the correlation coefficient of crude oil viscosity is 0.0134, higher than the 0.0068 of permeability.
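The paper runs the grey correlation analysis in MATLAB; an equivalent sketch in Python using Deng's formulation is shown below. Min-max normalisation and a distinguishing coefficient rho = 0.5 are common conventions assumed here, and the five-point series are hypothetical samples spanning the stated viscosity and permeability ranges, so the printed grades will not reproduce the paper's coefficients exactly.

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Deng's grey relational analysis: normalise every series, then average
    each factor's relational coefficients against the reference series."""
    series = np.vstack([np.asarray(reference, float)]
                       + [np.asarray(f, float) for f in factors])
    # min-max normalisation removes the units (mPa·s vs mD)
    norm = (series - series.min(axis=1, keepdims=True)) / np.ptp(series, axis=1, keepdims=True)
    ref, fac = norm[0], norm[1:]
    diff = np.abs(fac - ref)
    dmin, dmax = diff.min(), diff.max()
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)
    return coeff.mean(axis=1)

# Hypothetical samples: start-up pressure gradient vs viscosity and permeability
pressure     = np.array([0.02, 0.05, 0.11, 0.18, 0.30])     # MPa/m
viscosity    = np.array([1000, 5000, 10000, 18000, 25000])  # mPa·s
permeability = np.array([3850, 2600, 1500, 700, 250])       # mD
print(grey_relational_grades(pressure, [viscosity, permeability]))
```

A higher grade for the viscosity series than for the permeability series reproduces the qualitative conclusion that viscosity dominates the injection pressure rise.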
In addition, crude oil viscosity accounted for 66.34% of the contribution, i.e. the majority. In other words, each 1 mPa·s increase in fluid viscosity is equivalent, in its effect on the start-up pressure gradient, to a 2 mD decrease in permeability. Therefore, for the offshore heavy oil field, the viscosity of the crude oil in the reservoir is the main factor hindering fluid flow and raising the steam stimulation injection pressure. Analysis of factors affecting viscosity change of crude oil During steam stimulation, high-temperature steam not only emulsifies with the crude oil [13], forming different types of emulsion that change the oil viscosity, but also causes distillation [14]. Light components are stripped off, and the heavy components remaining in the distilled crude oil change its viscosity. The influence of compositional change and emulsification on the viscosity of the crude oil in this oil field was therefore studied in order to guide the application of subsequent pressure-reducing injection technology. Influence of asphalt content on viscosity of crude oil Deasphalted oil was obtained through separation according to industry standards, using n-pentane to precipitate the asphalt. The deasphalted oil was then mixed with asphalt in different proportions (5%, 10%, 15%, 20%, 25%) and stirred until completely dissolved, and the viscosity of the crude oil was measured. The viscosity of the crude oil at different asphalt contents is shown in Figure 3. Figure 3. Changes of viscosity of crude oil with different contents of asphalt As shown in Figure 3, the viscosity of the crude oil shows an upward trend with increasing asphalt content. Fitting the characteristic curve reveals a good linear increasing trend, with a fitting degree R as high as 0.9825. This is consistent with the viscosity model formula for heavy oil components derived in the literature [15], and on this basis a formula suitable for this reservoir was established, as shown in Formula 2. It can be seen from Figure 4 that as steam stimulation proceeds, light components in the formation are distilled off, and the increase of heavy components such as asphalt in the remaining oil increases the viscosity of the crude oil. For each 1% increase in heavy components, the viscosity of the crude oil increases by about 900 mPa·s. As steam stimulation continues, the reservoir temperature gradually decreases and the crude oil viscosity gradually increases. That is, the influence of crude oil composition on viscosity during steam stimulation should follow the red trend line in Figure 4, showing a logarithmic upward trend whose magnitude is governed by the change in heavy-component content. For the crude oil mixture, however, the different components are in relative equilibrium. After the light components are distilled off, this balance is broken and heavy components precipitate/deposit [16] until a new dynamic equilibrium is reached, so the heavy-component content does not change greatly during steam stimulation. Yang Sen of the China University of Geosciences [14] found that after distilling off the light components, the asphaltene content grows by only around 0.1%. Therefore, the heavy components in the crude oil system will not change much, and their influence on the viscosity of the crude oil will not be obvious.
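A minimal sketch of the linear fit behind Figure 3 (the data points are hypothetical, chosen only to mimic the reported slope of roughly 900 mPa·s per 1% heavy component; the actual measurements are not restated here):

```python
import numpy as np

# Hypothetical (asphalt content %, viscosity mPa·s) pairs mimicking Figure 3
asphalt_pct = np.array([5, 10, 15, 20, 25])
viscosity = np.array([9400, 14600, 18100, 23500, 27200])

slope, intercept = np.polyfit(asphalt_pct, viscosity, 1)
r = np.corrcoef(asphalt_pct, viscosity)[0, 1]
print(f"viscosity ~= {slope:.0f} * asphalt% + {intercept:.0f}, R = {r:.4f}")
# slope comes out near the reported ~900 mPa·s per 1% heavy component
```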
The viscosity of the emulsion at different water contents is shown in Figure 5. It can be seen from Figure 5 that the viscosity of the crude oil increases with increasing water content and, after reaching the inversion point (about 50%) [17], decreases rapidly as the water content increases further. The analysis shows that as the water content increases, the interphase surface expands as the number of droplets grows, frictional collisions between particles increase, particle-particle interactions strengthen, and the non-Newtonian character of the system increases, causing the viscosity to rise to a maximum at the inversion point. On further increasing the water content, the internal structure of the emulsion changes from a W/O emulsion to an O/W or W/O/W multiple emulsion, the continuous phase becomes water, and the viscosity decreases rapidly. At present, the water content of this reservoir is about 40%, which is below the inversion point. Therefore, the emulsion in the reservoir is mainly water-in-oil, and with increasing water content the viscosity of the crude oil will rise further, hindering steam injection. Influence of heavy component content on the viscosity-increasing effect of the emulsion Crude oils with different heavy-component contents from Section 2.2 were mixed with formation water at oil:water ratios of 8:2, 5:5 and 2:8, and the viscosity of the crude oil was measured. The viscosity of the emulsions with different heavy-component contents is shown in Figure 6. Figure 6. Viscosity of emulsions with different heavy-component contents It can be seen from Figure 6 that the viscosity of the emulsion differs between heavy-component contents; the overall trend is that the viscosity of the crude oil increases gradually with increasing heavy components, with the maximum viscosity increasing by a factor of 4.9. This is because of the aromatic and polar nature of the bituminous material: its polar functional groups orient towards the water phase and its non-polar functional groups towards the oil phase, so that at the oil-water interface it plays a role similar to a surfactant. The higher the content of the bituminous material, the stronger this effect [18]. Before the inversion point the crude oil forms a water-in-oil (W/O) emulsion, which has a strong viscosity-increasing capability. The oil-in-water (O/W) emulsion formed after the inversion point is mainly governed by the emulsification state, and the viscosity increase is not obvious. The analysis shows that the water-in-oil emulsion formed at higher bituminous content is more stable. With increasing water content, the viscosity of the crude oil is affected more by emulsification than by asphalt content alone. However, once the crude oil system becomes enveloped by the water phase, increasing the asphalt content has little effect on the viscosity of the crude oil, showing only a slight rise. The stability of the emulsion was further analyzed, as shown in Figure 7. Figure 7. Water separation of emulsions with different heavy-component contents As can be seen from Figure 7, the higher the heavy-component content of the crude oil, the more stable the emulsion and the lower the water yield.
According to the analysis, the asphaltene acts as a natural surfactant in the oil that stabilizes the emulsion: it adsorbs at the oil-water interface to form a viscoelastic interfacial film of appreciable strength. The formation of this mechanically strong film prevents the coalescence of dispersed water droplets. Asphalt is therefore considered the natural surfactant with the strongest capacity to stabilize emulsions, which are then the most difficult to break [19]. Asphalt has a stabilizing effect on water-in-oil emulsions, making the resulting emulsion difficult to demulsify and thus enhancing its flow resistance in the porous medium [20]. Therefore, the synergistic effect of crude oil composition and crude oil emulsification not only greatly increases the viscosity of the crude oil, but also enhances the stability of its emulsion, making the flow resistance it establishes in the porous medium more stable and hindering steam injection. This is the main mechanism by which reservoir flow resistance is established during heavy oil steam stimulation. Conclusions The analysis of the main controlling factors shows that the contribution of fluid viscosity to flow resistance is twice that of reservoir permeability; fluid viscosity is thus the main controlling factor of the injection pressure rise. Emulsification alone can increase the crude oil viscosity by up to 1.4 times, which has a significant impact on oil viscosity and will significantly increase the injection pressure. For every 1% increase in the heavy-component content of the crude oil, the viscosity increases by about 900 mPa·s. However, the extraction of light components disturbs the equilibrium of the crude oil system, and the heavy components precipitate from the crude oil until a new equilibrium is reached, so the change in heavy-component content will not exceed about 1%, and the compositional change alone will not greatly increase the viscosity. Nevertheless, the synergistic effect of heavy-component changes and emulsification can significantly increase the viscosity, with the crude oil viscosity increasing by up to 4.9 times, and the stability of the emulsion is also enhanced, which ultimately leads to high and stable flow resistance in the reservoir.
RIPK1 regulates the survival of human melanocytes upon endoplasmic reticulum stress Vitiligo is a common congenital or acquired disfiguring skin disorder. At present, endoplasmic reticulum (ER) stress has been identified to serve a critical role in the pathogenesis of vitiligo. Receptor-interacting serine/threonine-protein kinase 1 (RIPK1) is a protein serine/threonine kinase. The specific molecular mechanism of RIPK1 in human melanocytes upon ER stress remains to be determined. In the present study, RIPK1 was significantly downregulated in tunicamycin (TM)-induced ER-stressed human melanocytes. Subsequently, to explore the role of RIPK1 in ER stress-induced human melanocytes, human melanocytes were transfected with control or RIPK1 plasmids for 24 h and then treated with 3 µM TM for 48 h. Reverse transcription-quantitative PCR (RT-qPCR) and western blot analysis indicated that the expression levels of protein kinase R-like endoplasmic reticulum kinase (PERK), eukaryotic translation initiation factor 2 subunit 1 (eIF2α) and CCAAT-enhancer-binding protein homologous protein (CHOP) were significantly increased in the TM-treated group compared with the controls. In addition, the effect of high RIPK1 expression on ER stress-induced human melanocyte survival was studied. The present results indicated that TM inhibited cell viability and promoted apoptosis in human primary epidermal melanocytes. Western blot analysis demonstrated that the expression of Bax and caspase-3 was upregulated and the expression of Bcl-2 was downregulated in TM-treated human melanocytes. The effects of TM on human melanocytes were reversed by RIPK1 overexpression. Therefore, RIPK1 overexpression may act on the PI3K/AKT/mTOR signaling pathway in human melanocytes under ER stress. The results of the current study demonstrated that RIPK1 could protect human melanocytes from cell damage induced by ER stress by regulating the PI3K/AKT/mTOR and ER stress signaling pathways, thereby serving a protective role against the occurrence and development of vitiligo. Introduction Vitiligo is a common congenital or acquired disfiguring skin disorder related to melanocyte destruction. The incidence of vitiligo is 0.5-1.0% worldwide (1,2); the disease causes the skin to lose its natural pigmentation (3). The incidence of vitiligo is not related to age, sex, skin type or ethnicity (4). The ER is an important organelle that is mainly responsible for protein biosynthesis, folding and the maintenance of cell homeostasis (5). However, under certain physiological and pathological conditions, protein folding may be severely impaired, causing ER stress (6). As a result, a specific ER stress response pathway is activated, which can lead to apoptosis (7). Tyrosinase is a rate-limiting enzyme that catalyzes the production of melanin in melanocytes (8). Tyrosinase is critical for melanogenesis and plays a key role in a number of pigment-deficient diseases. Le Poole et al (9) indicated that vitiligo-related gene 1 expression was decreased in vitiligo patients compared with healthy controls, which may be due to the transfer of tyrosinase in the ER, but the specific mechanism behind this process remains to be elucidated. Receptor-interacting serine/threonine-protein kinase 1 (RIPK1) was first reported to serve a crucial role in necroptosis (10). Necroptosis is a form of programmed cell death involved in development, inflammation and tissue homeostasis (11).
Necroptosis is regulated through post-translational modifications of downstream molecules, including phosphorylation and ubiquitination (12). RIPK1 has a major impact on liver pathogenesis and liver disease prognosis (13,14). Previous research has indicated that RIPK1-mediated necroptosis can also occur in neuronal cells, leading to neurodegenerative disease (15). However, to the best of our knowledge, the role of RIPK1 in vitiligo remains undetermined. A previous study reported that the PI3K/AKT/mTOR pathway is associated with cell survival in response to oxidative stress (16). Growth factors may protect against oxidative stress-induced apoptosis through the activation of the AKT and mTOR pathways (17)(18)(19). Furthermore, another study suggested that α-melanocyte-stimulating hormone stimulates melanogenesis through activating the mitogen-activated protein kinase kinase/ERK or PI3K/AKT pathways (20). Regulation of the PI3K/AKT/mTOR signaling pathway has been reported to be a novel approach for the clinical treatment of vitiligo (21). Moreover, the association between RIPK1 and the PI3K/AKT/mTOR pathway in melanocytes under ER stress remains largely unclear. Therefore, the present study aimed to explore the mechanisms of action of RIPK1 in ER-stressed human melanocytes. Materials and methods Cell culture and treatment. Human primary epidermal melanocytes were acquired from the American Type Culture Collection. Cells were cultured in Medium 254 (Gibco; Thermo Fisher Scientific, Inc.) supplemented with human melanocyte growth supplement (Gibco; Thermo Fisher Scientific, Inc.) at 37˚C and 5% CO2. Flow cytometric analysis of apoptosis. Cell apoptosis was detected using the Annexin V/propidium iodide (PI) Apoptosis Detection kit [cat. no. 70-AP101-100; Hangzhou Multi Sciences (Lianke) Biotech Co., Ltd.]. Human melanocytes were plated in six-well plates at a density of 2-3x10^5 cells per well overnight. Cells were then transfected with control or RIPK1 plasmids for 24 h, followed by treatment with 3 µM TM for 48 h. Cells were then collected by centrifugation (1,000 x g; 5 min; 4˚C) and resuspended in 100 µl of FITC-binding buffer. Subsequently, 5 µl of ready-to-use Annexin V-FITC (BD Biosciences) and 5 µl of PI were added to the buffer. Cells were incubated in the dark for 30 min at room temperature. Annexin V-FITC and PI fluorescence were assessed using a BD FACSCalibur flow cytometer (BD Biosciences), and the data were analyzed using FlowJo software (version 7.6.1; FlowJo LLC). MTT assay. Human melanocyte viability was determined using an MTT assay. Human melanocytes were plated in 96-well plates at a density of 5x10^3 cells/well, transfected with control or RIPK1 plasmids for 24 h and then treated with 3 µM TM for 48 h. Subsequently, 20 µl MTT reagent (Sigma-Aldrich; Merck KGaA) was added to each well for another 4 h at 37˚C. Then, 150 µl DMSO (Sigma-Aldrich; Merck KGaA) was added to each well and the plate was shaken for 15 min. The optical density values were read at a wavelength of 490 nm using the FLUOstar® Omega Microplate Reader (BMG Labtech GmbH). Statistical analysis. Data are presented as the mean ± standard deviation of at least three independent experiments. One-way ANOVA followed by Tukey's post-hoc test was used for multiple comparisons. An unpaired Student's t-test was used to analyze the statistical significance between two groups. P<0.05 was considered to indicate a statistically significant difference.
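A minimal sketch of this statistical pipeline (the viability readings are hypothetical, and SciPy's tukey_hsd, available from SciPy 1.8, stands in for whatever statistics package the authors actually used):

```python
import numpy as np
from scipy import stats

# Hypothetical viability readings (% of control), three replicates per group
control  = np.array([100.0, 98.5, 101.2])
tm       = np.array([62.1, 58.7, 60.4])    # 3 uM tunicamycin, 48 h
tm_ripk1 = np.array([85.3, 82.9, 87.0])    # TM + RIPK1 plasmid

f_stat, p = stats.f_oneway(control, tm, tm_ripk1)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p:.2g}")

# Tukey's post-hoc test for the pairwise comparisons (SciPy >= 1.8)
print(stats.tukey_hsd(control, tm, tm_ripk1))
```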
Results Expression of RIPK1 in human melanocytes under ER stress. To explore the role of RIPK1 in ER stress-induced human melanocytes, cells were treated with 3 µM TM for 24, 48 and 72 h. Firstly, the expression of ER stress-related proteins in human melanocytes under ER stress was investigated. Western blot analysis indicated that the expression of the ER stress-related proteins PERK, eIF2α and CHOP was upregulated in a time-dependent manner (Fig. 1A), indicating that 3 µM TM activated ER stress in human melanocytes. The MTT assay indicated that TM significantly inhibited cell viability (Fig. 1B) and induced cell apoptosis (Fig. 1C and D) in a time-dependent manner in human melanocytes compared with the control. RT-qPCR and western blot analysis indicated that RIPK1 expression decreased with increasing TM treatment time (Fig. 1E and F). Thus, RIPK1 expression was decreased in human melanocytes under ER stress. Transfection efficiency of the RIPK1 plasmid in human melanocytes. Human melanocytes were transfected with control or RIPK1 plasmids for 24 h. RT-qPCR and western blot analysis were performed to assess transfection efficiency. RT-qPCR results demonstrated that, compared with the control group, the mRNA expression of RIPK1 was significantly increased in RIPK1 plasmid-transfected human melanocytes (Fig. 2A). Similar results were observed in the western blot analysis (Fig. 2B). Effect of RIPK1 upregulation on the expression of ER stress-related proteins in human melanocytes. To investigate the effect of high RIPK1 expression on the expression of ER stress-related proteins in human melanocytes, the expression of PERK, eIF2α and CHOP was examined using RT-qPCR and western blot analysis. The results revealed that RIPK1 protein expression decreased while PERK, eIF2α and CHOP protein expression increased in the TM-treated group compared with the control group (Fig. 3A). Additionally, RIPK1 expression increased (Fig. 3A) while the protein expression of PERK, eIF2α and CHOP decreased in the TM + RIPK1-plasmid group compared with the TM-treated group (Fig. 3A). Similar results were observed in the RT-qPCR assays (Fig. 3B-E). Effect of RIPK1 upregulation on the survival of ER stress-induced human melanocytes. The effect of high RIPK1 expression on the survival of ER stress-induced human melanocytes was investigated. MTT and flow cytometry assays revealed that, compared with the control group, the viability of human melanocytes was significantly reduced while apoptosis was significantly increased in the TM treatment groups. RIPK1 plasmid transfection significantly increased cell viability (Fig. 4A) and decreased cell apoptosis compared with the TM treatment groups (Fig. 4B and C). The expression of apoptosis-related proteins was also assessed. Western blot analysis indicated that, compared with the control group, the protein expression of Bax and caspase-3 increased while Bcl-2 expression decreased in the TM treatment group. RIPK1 plasmid transfection decreased Bax and caspase-3 protein expression and increased Bcl-2 protein expression compared with the TM treatment group (Fig. 4D). Therefore, overexpression of RIPK1 reversed the cell growth inhibition induced by TM treatment. Effect of RIPK1 upregulation on the PI3K/AKT/mTOR signaling pathway in human melanocytes. Western blot analysis demonstrated that, compared with the control group, the protein expression of p-PI3K (Fig. 5A and B), p-AKT (Fig. 5A and C) and p-mTOR (Fig. 5A and D) significantly decreased in the TM-treated group, but this effect was reversed by RIPK1 plasmid transfection (Fig. 5).
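The RT-qPCR results above are relative expression values; assuming the usual Livak 2^-ddCt quantification against a housekeeping gene (the method and reference gene are not detailed in this excerpt), a minimal sketch is:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative expression from cycle-threshold (Ct) values."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# Hypothetical Ct values: RIPK1 vs a housekeeping gene, TM-treated vs control
print(round(fold_change(26.5, 18.0, 24.9, 18.1), 2))  # ~0.31, i.e. RIPK1 down
```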
Taken together, the results indicated that the effect of RIPK1 overexpression on human melanocyte growth may be associated with the PI3K/AKT/mTOR signaling pathway. Discussion Vitiligo is a common congenital or acquired skin disease that is characterized by the loss of melanocytes, causing progressive skin depigmentation (24). Currently, vitiligo treatment mainly aims to prevent disease development and achieve repigmentation of non-pigmented areas (25,26). Phototherapy is currently the preferred method of vitiligo treatment, but corticosteroids, surgery or local immunomodulators are also used (27)(28)(29). The ER stress response is a cellular process that can be triggered by different conditions that cause homeostatic imbalance (5). ER stress has been reported to be related to the pathogenesis of a variety of diseases, including neurodegeneration, inflammation and cancer (30)(31)(32)(33). Emerging evidence has suggested that pharmacological targeting of ER stress can be an effective therapeutic strategy for treating tumors (34)(35)(36). Various natural compounds induce ER stress-mediated death in cancer cells (37). ER stress has also been identified to serve a critical role in the pathogenesis of vitiligo (38)(39)(40). However, to the best of our knowledge, the mechanism behind vitiligo pathogenesis caused by ER stress remains to be determined. In the present study, TM enhanced the protein expression of the ER stress-related proteins PERK, eIF2α and CHOP in a time-dependent manner, and inhibited cell viability and induced apoptosis in human melanocytes. RIPK1 is a crucial regulator of tumor necrosis factor receptor 1 signaling (41). RIPK1 regulates the balance between cell survival, apoptosis and necroptosis after stimulation with tumor necrosis factor-α (42). In addition, several studies have indicated that RIPK1 promotes or inhibits the effector functions of caspase-8 and RIPK3 (43)(44)(45). In the present study, RIPK1 expression was demonstrated to be downregulated in human melanocytes under ER stress. Previous studies have demonstrated that RIPK1 overexpression may lead to apoptosis in a number of cell types (16,46). Luan et al (22) demonstrated that RIPK1 is important for the survival of melanoma cells undergoing pharmacological ER stress. The results of the present study showed that TM inhibited the survival of human melanocytes, but this effect was reversed by RIPK1 plasmid transfection. The PI3K/AKT/mTOR pathway has been indicated to be associated with cell survival in response to oxidative stress (20) and with melanogenesis (17). Activation of the PI3K/AKT/mTOR pathway can reduce oxidative stress-induced apoptosis (18,19). The present study explored whether the role of RIPK1 in melanocyte damage induced by ER stress is associated with the PI3K/AKT/mTOR pathway. The ER stress-induced inhibition of the PI3K/AKT/mTOR signaling pathway in human melanocytes was significantly suppressed by RIPK1 overexpression. In conclusion, RIPK1 may protect human melanocytes from cell damage induced by ER stress by regulating the PI3K/AKT/mTOR and ER stress signaling pathways. The results of the current study indicated that RIPK1 might protect melanocytes from ER stress-induced damage. Therefore, RIPK1 might serve a protective role in the occurrence and development of vitiligo. The present research provides potential therapeutic targets and a theoretical basis for the treatment of vitiligo.
However, the present study is a preliminary exploration of the role of RIPK1 in vitiligo, and further in-depth research is required. For example, the effect of RIPK1 on melanocytes from patients with vitiligo should be investigated, the relationship between RIPK1 and the PI3K/AKT/mTOR signaling pathway in human melanocytes requires closer examination, and the effect of RIPK1 in vitiligo should be verified in vivo.
The Chinese Inventory of Psychosocial Balance Short-Form Questionnaire for the Older Adults: Validity and Reliability Study

Background: Drawing from Erikson's theory, Domino and Affonso constructed the Inventory of Psychosocial Balance (IPB), a scale with satisfactory reliability and validity. However, the lack of a credible Chinese version of the scale may hinder research on ego development in Taiwan. The aim of the present study was to construct a short-form Chinese IPB, using factor analysis to shorten the original 120-item scale and make it suitable for use with older adults. Methods: The study involved three steps. The first step was to establish the 120-item Chinese Inventory of Psychosocial Balance (C-IPB) through translation, back-translation, expert validity testing, and a reliability pilot study. The second step was to construct the short-form C-IPB (CIPB-SF) via item analysis and factor analysis. In the third step, the reliability and validity of the CIPB-SF were assessed via structural equation modeling. Results: Three hundred eight older adults without cognitive disorder completed the IPB. The 40-item CIPB-SF was derived through item analysis and factor analysis. The internal consistency of the CIPB-SF and its eight stages was good (Cronbach's α = 0.81-0.89). The CIPB-SF had acceptable validity, except in the intimacy and identity stages, in which validity was only fair. Compared with the IPB, the CIPB-SF had good reliability and acceptable validity; because of its conciseness, the 40-item CIPB-SF is better suited to the Chinese elderly population, as it avoids overburdening respondents. Conclusion: The CIPB-SF serves as a concise scale for assessing ego development and can also serve as a useful tool for convenient screening in the future.

INTRODUCTION

Advances in medicine have extended human longevity, but death remains unavoidable. Awareness of death can be a source of anxiety and stress, especially in the last stage of life. As their physiology declines, older adults may become aware that death is approaching and develop negative emotions such as depression (Busch et al., 2018). According to Erikson's theory, older people in the last stage of life who can accept their past and find meaning in their lives may reach ego integrity, which allows them to face death peacefully (Erikson, 1963). Psychosocial development in older adults is influenced by their past life experiences (Ardelt et al., 2018).

Erikson's life stage theory of psychosocial development is a rare psychosocial theory that encompasses the entire life span, comprising eight age-graded stages of ego development ranging from infancy to late adulthood. According to Erikson's theory, failure to successfully complete a stage can reduce the ability to complete subsequent stages, in turn resulting in an unhealthy personality and sense of self (Erikson, 1963, 1980, 1982). A specific type of crisis occurs in each stage; an individual in crisis cannot achieve the goal set for that stage (Erikson, 1963, 1982). The task-crisis pairs are as follows: trust vs. mistrust, autonomy vs. shame and doubt, initiative vs. guilt, industry vs. inferiority, identity vs. role confusion, intimacy vs.
isolation, generativity vs. stagnation, and integrity vs. despair. The final stage represents the crisis of "integrity vs. despair," which can be described as the process through which older adults try to find meaning in their lives. Erikson proposed that individuals who had been unsuccessful in resolving earlier crises would face difficulties in resolving later psychosocial crises as well (Erikson, 1963). An individual's personality subconsciously affects their psychosocial functioning and mental health (Dezutter et al., 2016). Early psychosocial crises may worsen an individual's depression, self-neglect tendencies, and morbidity, and may negatively influence the wellbeing of the older adults (Cuijpers et al., 2013; Saint Onge et al., 2014). Conversely, resolving these crises and achieving ego integrity supports successful aging and a better quality of life (QOL; Min, 2020).

Ego integrity in older people is important; however, a valid measurement tool is lacking in Taiwan, so the evaluation of ego integrity and related research in older adults is limited. The measurement tools developed from Erikson's theory and widely used to assess ego integrity in older adults include the Northwestern ego-integrity scale (NEIS; Westerhof et al., 2017), the ego-integrity/despair scale (Van Hiel and Vansteenkiste, 2009), and the inventory of psychosocial balance (IPB; Domino and Affonso, 1990). The NEIS includes two subscales (ego-integrity and ego-despair) comprising 15 items; higher item mean scores indicate more ego-integrity and more despair, respectively. The Cronbach's α of the NEIS is 0.74 for ego integrity and 0.75 for despair (Westerhof et al., 2017), and it has been applied in many studies (Kleijn et al., 2016; Westerhof et al., 2017). The ego-integrity/despair scale, developed by Van Hiel and Vansteenkiste (2009), is a 5-point Likert scale with scores ranging from 1 (completely not true) to 5 (completely true). It includes 3-item ego-integrity and ego-despair subscales, with higher mean scores indicating more ego-integrity and despair, and Cronbach's α values of 0.80 for ego integrity and 0.85 for despair (Van Hiel and Vansteenkiste, 2009); it has been used in several studies (Dezutter et al., 2016; Derdaele et al., 2019). Drawing from Erikson's theory, Domino and Affonso (1990) constructed the IPB, which has good reliability (Cronbach's α ranging from 0.64 to 0.79) and validity and has been applied in many studies (Hannah et al., 1996; Brennan and MacMillan, 2006; Beaumont and Pratt, 2011). The IPB covers the eight stages of Erikson's theory, with each stage containing 15 items (120 items in total) answered on a 5-point Likert scale; higher scores represent better self-development in a given stage (Domino and Affonso, 1990).

Although these three measurement tools all possess good validity and reliability, a credible Chinese version for measuring ego development is lacking in Taiwan; developing a Chinese version of a scale is therefore necessary. The NEIS and the ego-integrity/despair scale cover only the older adult stage, whereas the IPB covers all eight stages of self-development and can be applied to research more comprehensively. This research therefore selected the IPB to develop a Chinese tool.
However, the original version contains 120 items, which may make the questionnaire difficult for older adults to complete. Developing a short Chinese version of the IPB was therefore considered prudent and necessary. The aim of this study was to develop a short-form Chinese IPB.

STUDY MEASURES

Survey tools in this study included sociodemographic characteristics (age, gender, marital status, religion, self-perceived economic situation, and educational level), the IPB, and the MOS 36-item short-form health survey (SF-36).

Inventory of Psychosocial Balance

The IPB is a scale that draws from Erikson's ego development theory (Domino and Affonso, 1990). The scale initially had 346 items determined with reference to relevant studies and was developed to reflect both positive and negative aspects of the eight stages. Responses are recorded on a 5-point Likert scale, ranging from strongly agree to strongly disagree, and are scored between 1 and 5 points. Five psychologists familiar with Erikson's theory reviewed each item for clarity of meaning and identified each item's relevance to a specific stage, after which they wrote descriptions of each life stage summarized on the basis of Erikson's writings. A total of 208 of the original 346 items survived this first step (Domino and Affonso, 1990). A total of 528 subjects, including students, adults, older adults, and retired adults, aged 21 to 88 years, were then included, and factor analysis was performed: the factor loading had to be at least 0.3, and the items had to correlate significantly (p < 0.01) with the appropriate self-rating. Finally, the 15 best items per stage were selected (Domino and Affonso, 1990). The resulting IPB had good internal consistency, with Cronbach's α ranging from 0.64 to 0.79, and 4-5 week test-retest reliability ranging from 0.79 to 0.90 (Hannah et al., 1996). The IPB was validated against the California Psychological Inventory (CPI)-social maturity index: six of the eight IPB scales demonstrated significant positive correlations with the index, with only the autonomy and intimacy scales exhibiting non-significant coefficients (Hannah et al., 1996).

MOS 36-Item Short-Form Health Survey (SF-36)

The SF-36 is a 36-item scale assessing health-related QOL, with higher scores indicating better QOL. It comprises eight subscales: physical functioning (PF), role physical (RP), bodily pain (BP), general health (GH), vitality (VT), social functioning (SF), role emotional (RE), and mental health (MH). Through special scoring algorithms, these eight subscales contribute to a physical dimension, the physical component scale (PCS), and a mental dimension, the mental component scale (MCS) (Sherbourne and Ware, 1992). The Taiwan version of the SF-36 has been in common use for years, with internal reliability at an acceptable level for most scales (α > 0.70); the Cronbach's α of the eight subscales ranged from 0.65 to 0.92 (PF = 0.91, RP = 0.92, BP = 0.76, GH = 0.82, VT = 0.79, SF = 0.65, RE = 0.87, and MH = 0.78). In terms of validity, each item showed an item-scale correlation higher than 0.4, and, except for item 5 in MH, every item correlated more strongly with its own subscale than with the other subscales, indicating good discriminant validity.
The item-scale correlation coefficients of the SF-36 range from 0.40 to 0.83, and all scales passed the tests of item discriminant validity except for MH5 (Tseng et al., 2003).

METHODOLOGY

This study was approved by an institutional review board of China Medical University in Taiwan (Approval No. DMR95-IRB-144). There are various methods for cross-cultural adaptation of a questionnaire, but none is considered the gold standard (Epstein et al., 2015). This study integrated the research methods of Beaton et al. (2000) and Rived-Ocaña et al. (2020); the steps are shown in Figure 1. The first step was to develop the 120 items of the Chinese IPB (C-IPB) through translation and back-translation, expert validity testing, and a reliability pilot study. The second step was to construct the C-IPB short form (CIPB-SF) via item analysis and factor analysis: item analysis was used to delete items lacking discrimination or homogeneity, the remaining items were entered into factor analysis, and items with factor loadings above 0.55 were retained, forming the initial CIPB-SF. Finally, the reliability and validity of the CIPB-SF were examined: reliability was tested as internal consistency, and convergent validity, confirmatory factor analysis (CFA), and criterion-related validity were assessed.

The First Step: Establish the Chinese Inventory of Psychosocial Balance

After obtaining permission from the author of the original IPB questionnaire, the questionnaire was translated following the process recommended by Beaton et al. (2000) and Rived-Ocaña et al. (2020). The first step included translation and back-translation, expert validity, and a pilot reliability study.

Translation, Back-Translation

The first translator was a professional translator with a master's degree and experience in English-to-Chinese translation. After the English-to-Chinese translation, another professional translator back-translated the first Chinese version of the IPB into English. Both translators were Chinese and did not engage in any discussions with each other; thus, both produced their translations of the IPB independently.

Expert Validity

Expert validity was assessed next. Each expert was an academic scholar or clinical staff member with a master's or doctoral degree, specialized in psychological or clinical care. We invited five experts (holding, respectively, a doctoral degree in nursing, a master's degree in psychology, a doctoral degree in social work, a master's degree in nursing, and a master's degree in nursing with a supervisory role) who reviewed both versions and offered suggestions regarding the questionnaire content, semantics, structure, syntax, orthography, and appropriateness of the translation. They were anonymized to each other during the process. The content validity index (CVI) was used to assess content validity. The CVI is a widely used approach to content validity in instrument development and can be computed via the item-level CVI (I-CVI) and the scale-level CVI (S-CVI). The experts were asked to rate instrument items on a 4-point ordinal scale (4 = highly relevant, 3 = quite relevant, 2 = not relevant, 1 = extremely not relevant). The I-CVI is computed as the number of experts who give an item a rating of 3 or 4, divided by the total number of experts.
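Since the I-CVI is a simple proportion, it is easy to compute directly. The sketch below (in Python, with a hypothetical item-by-expert rating matrix rather than the actual ratings from this study) illustrates the calculation just described:

```python
import numpy as np

# Hypothetical ratings: rows are items, columns are the five experts;
# each cell is a relevance rating on the 4-point scale described above.
ratings = np.array([
    [4, 3, 4, 4, 2],
    [3, 3, 4, 2, 2],
    [4, 4, 4, 3, 4],
])

# I-CVI: proportion of experts rating the item 3 ("quite relevant")
# or 4 ("highly relevant"), exactly as defined above.
i_cvi = (ratings >= 3).mean(axis=1)
print(i_cvi)  # [0.8 0.6 1. ]
```

The scale-level variant defined in the next paragraph aggregates these item-level values across the whole instrument.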
The scale-level CVI (S-CVI) is defined as "the proportion of items on an instrument that achieved a rating of 3 or 4 by the content experts" (Beck and Gable, 2001). After integrating the opinions of the five experts, corrections were made and the revised IPB was returned to the experts for a second review. Once every expert agreed on the contents of the translated IPB, the 120-item C-IPB was complete.

Reliability of Pilot Study

Following the expert validity assessment, a pretest was conducted to identify problems that could be encountered while using the scale in the study and to retain the pretest responses for consultation purposes. During the pretest, a majority of the participants reported that filling out the 120-item questionnaire took them 30-45 min and that the long questionnaire tired them. To improve economy and facilitate widespread usage, we therefore developed a short form of the C-IPB, called the CIPB-SF.

The Second Step: Constructing Chinese Inventory of Psychosocial Balance Short-Form

In the second step, the CIPB-SF was developed via item analysis and factor analysis.

Item Analysis

Item discrimination and a test of homogeneity were used to perform the item analysis. Theoretically, the items in a scale should be able to distinguish good from poor standing on the construct, and items in the same stage should share the same characteristics. Therefore, according to the score on the C-IPB, we classified the top 27% of the sample as the high group and the bottom 27% as the low group, and a t-test was conducted to test for differences between the two groups; items that did not differ significantly lacked discrimination and were deleted. The homogeneity test was based on correlation coefficients: items whose correlations with the other items in the same stage were below 0.3 were deleted (Williams et al., 2010). The items that survived both tests entered the factor analysis.

Factor Analysis

Before factor analysis, we performed Bartlett's test of sphericity and the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO test). Principal component analysis with varimax rotation was used, components were selected by the criterion of eigenvalues > 1 (Munro, 2005), and items with factor loadings exceeding 0.55 were included (Williams et al., 2010; Demoulin, 2016). On this basis, the CIPB-SF was established.

The Third Step: Assessment of Reliability and Validity of the Chinese Inventory of Psychosocial Balance Short-Form

In the third step, we assessed the reliability and validity of the CIPB-SF. We calculated Cronbach's α to measure internal consistency and compared the Cronbach's α of the CIPB-SF with that of the C-IPB. Cronbach's α scores > 0.70 were considered to indicate high reliability, and scores between 0.35 and 0.70 acceptable reliability (Taber, 2018). Furthermore, we tested the construct validity, CFA, and criterion-related validity of the CIPB-SF. Convergent validity, one form of construct validity, holds that items measuring similar constructs should be highly correlated; one method is to calculate the correlation coefficients between the tools' subdomains that are considered to measure the same construct (Gregory, 2004).
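As an illustration of the second-step screening described above, the following Python sketch applies the extreme-group t-test and a homogeneity check to a hypothetical response table. The 0.05 significance level and the use of an item-rest correlation for the homogeneity check are assumptions made for the sketch; the paper itself states only "statistically significant" and the 0.3 correlation threshold.

```python
import pandas as pd
from scipy import stats

def screen_items(responses: pd.DataFrame, alpha: float = 0.05) -> list:
    """Item screening sketch: keep items that (a) separate the top and
    bottom 27% of total scorers (extreme-group t-test) and (b) are
    homogeneous with the rest of their scale. `responses` is a
    hypothetical respondents-by-items score table."""
    total = responses.sum(axis=1)
    high = responses[total >= total.quantile(0.73)]  # top 27%
    low = responses[total <= total.quantile(0.27)]   # bottom 27%
    kept = []
    for item in responses.columns:
        # (a) discrimination: extreme groups must differ significantly
        _, p = stats.ttest_ind(high[item], low[item])
        # (b) homogeneity: correlate the item with the rest-of-scale
        # score and require the 0.3 threshold quoted above
        r, _ = stats.pearsonr(responses[item], total - responses[item])
        if p < alpha and r >= 0.3:
            kept.append(item)
    return kept
```

Items surviving this screen would then be passed to the factor analysis, where only loadings above 0.55 are retained.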
We tested the CFA of the CIPB-SF via structural equation modeling (SEM). The literature suggests that a sample size over 500 is more adequate for the maximum likelihood (ML) method of estimation (Long, 1997). Because the sample size in this research was only 308 and the questionnaires were scored as continuous variables, the generalized least squares (GLS) method was used instead (Carroll and Ruppert, 1982). To evaluate the criterion-related validity of the CIPB-SF, we planned to compare the CIPB-SF and the SF-36 via Pearson's correlation. The original IPB was compared with the social maturity index to test its criterion-related validity (Domino and Affonso, 1990); since no Chinese-form social maturity index exists, we evaluated the CIPB-SF against a conceptually similar Chinese-form scale. Theoretically, participants with better ego development will have higher QOL scores (Kleijn et al., 2016). We therefore correlated the CIPB-SF with QOL via Pearson's correlation to analyze its criterion-related validity.

General Information of Participants

Since Erikson's theory covers all eight stages of ego development, only older adults qualified for the study. Residential older adults over 65 years of age were contacted by a student pursuing a master's degree in nursing. Since our study concerned general older adults, we wished to recruit healthy, independent Taiwanese older adults; data collection sites were therefore public places. Individuals with dementia or other geriatric cognitive disorders and those who lived in long-term care facilities were not eligible. Participants were approached by a researcher who explained the study purpose and content, and they signed informed consent before data collection.

The mini-mental state examination (MMSE), developed in 1975 by Folstein and McHugh, evaluates orientation to time and place, registration, attention and calculation, recall, language, repetition, and complex commands. The total score is 30, with higher values indicating better cognition; a score below 23 indicates mild cognitive impairment. The MMSE is the most widely used measure of cognitive function in clinical settings (Galea and Woodward, 2005; Finney et al., 2016). This study applied the MMSE as the screening tool, with scores below 23 as the cut-off for cognitive impairment; participants scoring below 23 were excluded from the data analysis (Galea and Woodward, 2005).

A total of 308 questionnaires were retrieved. Participants' mean age was 71 years (standard deviation = 6.4); 52.3% were women and 96.4% were married. Regarding education, 18.5% of participants were uneducated and 37.3% had received high school education or above. In terms of marriage, only 11 participants had never married; most were married, divorced, or widowed. A total of 52.9% of participants were religious, and 63% felt their economic status was ordinary. Participants' sociodemographic characteristics are presented in Table 1.
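Before turning to the results, a note on the criterion-related validity computation planned above: the Pearson correlation between a CIPB-SF stage and the SF-36 is a one-line call. A minimal sketch with hypothetical scores (the column names and values are illustrative, not data from this study):

```python
import pandas as pd
from scipy import stats

# Hypothetical scores: one row per participant, with a CIPB-SF stage
# score and the SF-36 mental component scale (MCS).
df = pd.DataFrame({
    "integrity": [18, 22, 20, 25, 17, 23],
    "mcs": [45, 55, 50, 60, 42, 57],
})

r, p = stats.pearsonr(df["integrity"], df["mcs"])
print(f"r = {r:.2f}, p = {p:.3f}")
```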
Establish the Chinese Inventory of Psychosocial Balance

After translation and back-translation, the translated Chinese version and the English version of the IPB were dispatched to the five experts for checking and review. The content, semantics, structure, syntax, orthography, and appropriateness of the translated IPB were carefully examined, with revision suggestions offered. After the first revision, the revised Chinese IPB was returned to the experts for further review. When consensus was reached, the CVI was calculated: the CVI for the C-IPB was 0.7, indicating acceptable expert validity (Polit and Beck, 2004). After a final review of content validity, all experts agreed with the scale's items, and the 120-item C-IPB was complete.

A pretest was then conducted to identify problems that could be encountered while using the scale and to retain the responses for consultation purposes. We included 32 subjects aged ≥ 65 years without cognitive impairment. The pretest revealed that (1) the reliability of the C-IPB was good (Cronbach's α = 0.82), and (2) the 120-item C-IPB was too long for the elderly participants, many of whom mentioned that completing it tired them. Thus, although the C-IPB possessed good reliability (Cronbach's α = 0.82) and acceptable face validity (CVI = 0.7), completing 120 items was a burden to participants, and simplifying the C-IPB was necessary.

Item Analysis of the Chinese Inventory of Psychosocial Balance

The extreme-groups comparison showed that items 10, 12, 17, 49, 74, 76, 83, 87, and 117 were not statistically significant, implying that these items lacked discrimination and had to be deleted; the remaining 109 items possessed discrimination. The homogeneity test, based on correlation coefficients, led to the deletion of 74 items with correlation coefficients below 0.3: eight items from the trust stage, 10 from the autonomy stage, nine from the initiative stage, nine from the industry stage, 10 from the identity stage, nine from the intimacy stage, 10 from the generativity stage, and nine from the integrity stage. Items 10, 12, 17, 49, 74, 76, 83, 87, and 117 lacked both discrimination and homogeneity. In total, 46 items survived the item analysis and were entered into the factor analysis.

Factor Analysis of the Chinese Inventory of Psychosocial Balance

Before factor analysis, we performed the KMO test; the KMO value for the C-IPB was 0.86, indicating adequate sampling (Kaiser and Rice, 1974). Through principal component analysis and varimax rotation, eight factors could be extracted for ego development, and items with factor loadings exceeding 0.55 were included (Williams et al., 2010; Demoulin, 2016). Table 2 presents the factor analysis results. Finally, we retained five items per stage to avoid confusing future users.

Assessment of Reliability and Validity of the Chinese Inventory of Psychosocial Balance Short-Form

Reliability Coefficients for the Chinese Inventory of Psychosocial Balance Short-Form

Table 3 presents the results. The reliability of the CIPB-SF was adequate, with an overall internal consistency of Cronbach's α = 0.85, and the Cronbach's α of each stage was similar between the CIPB-SF and the C-IPB.
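The Cronbach's α values reported here follow the standard formula α = k/(k−1) · (1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the item variances, and σₜ² the variance of the total score. A minimal Python sketch with hypothetical Likert responses (not data from this study):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Standard Cronbach's alpha for a respondents-by-items array:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical five-item stage answered by six respondents on the
# 5-point Likert scale used by the CIPB-SF.
demo = np.array([
    [4, 4, 5, 4, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 4, 3],
    [4, 5, 4, 4, 4],
    [2, 2, 3, 2, 2],
])
print(round(cronbach_alpha(demo), 2))
```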
Across the CIPB-SF, the Cronbach's α of each stage ranged from 0.81 to 0.89, compared with 0.70 to 0.77 for the corresponding C-IPB stages. Table 3 shows the total score and the subscores of the eight stages. All stages exhibited sufficient internal consistency, as indicated by good Cronbach's α coefficients; the highest coefficient was 0.89 in the identity stage, and the lowest was 0.81 in the trust and integrity stages.

Criterion-Related Validity of Chinese Inventory of Psychosocial Balance Short-Form

We evaluated the Pearson's correlation coefficients between the CIPB-SF and the SF-36 (Table 4). All CIPB-SF subscales had a significant positive relation (p < 0.001) to the MCS of the SF-36, indicating that the better the individuals' ego development status, the better the mental dimension of their QOL. This result conforms to the hypothesis (Kleijn et al., 2016) and means the CIPB-SF has good criterion-related validity.

Table 4 also presents the correlations among each item of the subscales and the other stages in the CIPB-SF. All subscales of the CIPB-SF had significant correlations with their own stage, and most correlation coefficients were > 0.4, except for item 67 (initiative stage), items 69 and 101 (identity stage), and items 46, 86, and 6 (intimacy stage). The initiative, intimacy, and identity stages therefore had only fair validity.

Validity of the Chinese Inventory of Psychosocial Balance Short-Form

The eight subscales of the CIPB-SF and their total scores exhibited significant moderate to high positive correlations with their own stage. All items had significant positive correlations with their own stage, with greater correlation coefficients than with the other stages (Table 4). Bold values in the table indicate the factor loadings within the same group, for better readability.

Confirmatory Factor Analysis of the Chinese Inventory of Psychosocial Balance Short-Form

The aim of the CFA was to verify the questionnaire models through structural equations (Olmedo Moreno et al., 2014). Based on the item and factor analyses of the C-IPB, the 40 retained items of the CIPB-SF were defined as Model 1 (Figure 2). Because the CIPB-SF showed good reliability in all eight stages but only fair validity in the fifth and sixth stages, a second model replaced the short-form fifth and sixth stage scales with the original fifth and sixth stage scales of the IPB; this was defined as Model 2 (Figure 3). Both models were evaluated using model fit indices. As shown in Table 5, the measurement model showed an acceptable data fit in terms of χ²/degrees of freedom, root mean square error of approximation (RMSEA), and parsimony goodness-of-fit index (PGFI). However, the root mean square residual (RMR) was above the critical limit of 0.05, and the GFI was slightly below the standard level; Sharma et al. (2005) indicated that the GFI is affected by sample size, so larger samples should be considered in future research. The Akaike information criterion (AIC) is used for model selection, with lower values indicating a better fit to the data; in this study, Model 1 fit the data better than Model 2 (Table 5).
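As a side note on the AIC comparison just reported: in its generic form, AIC = 2k − 2 ln L for k free parameters and maximised likelihood L, and the model with the lower value is preferred. A minimal illustration (the numbers below are hypothetical, not the fitted values behind Table 5):

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Generic Akaike information criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted models: maximised log-likelihood and free
# parameter count for each.
model_1 = aic(log_likelihood=-4210.5, n_params=85)
model_2 = aic(log_likelihood=-4198.2, n_params=100)
print("prefer Model 1" if model_1 < model_2 else "prefer Model 2")
```

Note how the parameter penalty can outweigh a small likelihood gain, which is what makes the criterion favour the more parsimonious model.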
DISCUSSION

In this study, we translated the IPB into Chinese and constructed the CIPB-SF through factor analysis. The results confirmed that the CIPB-SF is a valid and reliable comprehensive tool for evaluating ego development among the Chinese elderly population.

Previous studies have indicated that the length of a questionnaire is important because it directly affects response rates, survey costs, and data quality (Lavrakas, 2008). We shortened the 120-item IPB to a 40-item CIPB-SF with similar reliability and validity. In avoiding the unnecessary interference that a time-consuming questionnaire introduces, the 40-item CIPB-SF can be considered superior to the original scale.

In this study, the CIPB-SF was found to have good reliability, with a Cronbach's α between 0.81 and 0.89 for each of the eight subscales. Compared with the IPB, whose Cronbach's α coefficients were between 0.64 and 0.79 (Domino and Affonso, 1990), the CIPB-SF values were higher. In the IPB, the highest Cronbach's α was observed in the industry stage and the lowest in the intimacy stage, with the autonomy stage also showing a low value (Cronbach's α = 0.65); in the CIPB-SF, the highest value was observed in the identity stage, and although the trust and integrity stages had the lowest values, the coefficients across the eight stages were similar, ranging between 0.81 and 0.89. The 30-35-day test-retest reliability of the IPB subscale scores was < 0.50 (Domino and Affonso, 1990). Because the subjects in the CIPB-SF development process were recruited anonymously from the community, assessing test-retest reliability was not possible.

The validity of the IPB was originally assessed via its associations with the CPI-social maturity index (Hannah et al., 1996): six of the eight IPB scales had significant positive correlations with the index, with only the autonomy and intimacy stages exhibiting non-significant coefficients (Hannah et al., 1996). The CIPB-SF in this study had acceptable validity except for the intimacy and identity stages, so its validity testing results were similar to those of the IPB. The reliability and validity of the identity stage differed between the two questionnaires, and cultural differences between the Eastern and Western versions were expected. Older adults of this generation in Taiwan had to study hard during adolescence to succeed in examinations for entry into higher education, and during their youth they also witnessed Japanese colonial rule, facing racial discrimination and unequal treatment in society; the "monomania to study medicine" and strong parental authority also marked their era (Chen, 2010). Such cultural differences may have had negative consequences, especially in the identity stage.

The intimacy vs. isolation stage had the lowest quality in the CIPB-SF in terms of convergent and discriminant validity. The CIPB-SF was developed on the basis of the IPB, yet the social backgrounds of the two scales differ in the expression of love. According to a study that explored Taiwanese male older adults through narrative inquiry, children born in Taiwan in the 1940s tend to start life with an attitude of fatalism but are not willing to yield to fate.
This finding also highlights that these men attached considerable importance to personal achievement and career development (Chuang and Lin, 2017). In view of these cultural differences, we recommend that future researchers perform a further, more advanced study of the intimacy vs. isolation stage using qualitative research methods.

In the CFA, the GFI values were 0.76 in Model 1 and 0.67 in Model 2, indicating that the model fit was only fair. This may be due to the translation process, the cross-cultural context, and the reduction in the number of items; for instance, people in Eastern cultures are more reserved in showing intimacy and love than people in Western cultures. These factors may reduce the degree of model fit. The CIPB-SF could be tested and modified in future studies to develop an even more suitable measurement tool based on Erikson's theory.

To construct the IPB, 528 subjects aged between 21 and 88 years, including students, adults, elderly individuals, and retired adults, were studied (Domino and Affonso, 1990); the 308 subjects aged ≥ 65 years who participated in the development of the CIPB-SF thus differed from those recruited for the IPB. The IPB is a well-tested questionnaire, and the CIPB-SF was developed from it by first translating and then shortening it. On the basis of Erikson's theory and the IPB, scales pertaining to individual stages may also be used in isolation; testing individual stage scales in corresponding age groups should be considered in the future.

Regarding the weaknesses of the present study: first, the research used a cross-sectional design. It would undoubtedly be desirable to extend the present study with a longitudinal design; however, the IPB is already a well-tested questionnaire, and the present study focused on contributing a Chinese-form questionnaire. Second, the study excluded older adults living in long-term care institutions. There are discrepancies between older adults living in institutions and those living in the community: Üzar-Özçetin and Ercan-Şahin (2020) indicated that older adults who have to live in nursing homes may feel deprived and isolated from society. To avoid this sampling interference, we recruited healthy, independent Taiwanese older adults; further research is needed to determine validity among institutional residents and cross-cultural validity. The CIPB-SF can be used as a tool to investigate the status of ego development in the elderly community. Finally, future studies should implement and test the CIPB-SF among different populations to enhance its applicability and practicality.

CONCLUSION

Erikson's ego development theory is a rare psychosocial theory that encompasses the entire life span, from infancy to late adulthood. However, the lack of a credible Chinese version of the scale may impede research on ego development in Taiwan. This study constructed a 40-item CIPB-SF with good reliability (Cronbach's α = 0.81-0.89) and acceptable validity. Due to its conciseness, the 40-item CIPB-SF is well suited to the Chinese elderly population, as it avoids overburdening respondents. The scale can also serve as a useful tool for convenient screening in the future.

LIMITATION

There were some limitations in this research.
First, the sample size was below 500 subjects, although the internal consistency of the CIPB-SF and its eight stages was still good (Cronbach's α = 0.81-0.89); we will invite other research teams to participate in future CIPB-SF studies. Second, the comparative CFA fit indices (CFI, GFI, and TLI) were low, although an absolute fit index such as the RMSEA was good. While the comparative indices indicated that the model fit was only fair, Sharma et al. (2005) pointed out that the GFI is affected by sample size, and our sample of fewer than 500 is not ideal for SEM analysis. Further studies recruiting multiple research teams will be needed to establish cross-domain and cross-cultural validity.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Institutional Review Board (IRB) of China Medical University Hospital in Taiwan (reference number DMR95-IRB-144). Written informed consent to participate in this study was obtained from all participants.

AUTHOR CONTRIBUTIONS

P-YC wrote the main manuscript text, performed the research, and analyzed the data. CL interpreted the data. W-CH checked the statistical process. T-PY revised the sentences and checked the grammar. All authors reviewed the manuscript.
Physico-Chemistry of Elechi Creek in the Upper Bonny Estuary, Rivers State, Nigeria

The physico-chemistry of the tidal Elechi Creek in the upper Bonny Estuary was investigated to assess the toxicity of the water body, which could adversely affect the native organisms that form part of the ecosystem, and to evaluate the extent and magnitude of petroleum hydrocarbon and heavy metal concentrations in the water body. Standard field procedures were adopted in sample collection, and laboratory analysis also followed standard methods. Parameters measured include pH, temperature, dissolved oxygen (DO), biological oxygen demand (BOD), electrical conductivity, salinity, and total dissolved solids (TDS). The heavy metals copper (Cu), lead (Pb), and zinc (Zn), together with nitrate (NO3), sulphate (SO4), ammonia (NH4), and total hydrocarbon concentration (THC), were also determined in surface water samples collected from five sampling stations spread along the creek in both dry and wet seasons. The results showed that the surface water body is oligohaline with low acidity, with pH values ranging from 6.2 to 7.6 and temperatures varying from 27 °C to 30 °C. DO concentrations range from 2.9 to 7.5 mg/l, which is adequate for aquatic organisms, with BOD values varying from 0.45 to 7.0 mg/l. TDS values vary from 11,700 mg/l to 26,250 mg/l, with the highest mean value obtained at the study station located downstream. Mean nutrient concentrations showed that the study area is not eutrophic. Anthropogenic activity was very high at the sites throughout the study period and may have caused the stressed condition of the area; the pollution trend along the study stations showed that no station is unpolluted. The study recommends the introduction of enforceable and stringent waste management plans to discourage the direct discharge of untreated waste and storm water runoff into the aquatic environment.

Introduction

Water is an indispensable natural resource without which there would be no life on earth. Its importance to man cannot be overemphasized: man can survive longer without food than without water. Water is freely available through rainfall and other components of the hydrologic cycle, and until recently man has tended to take this abundant natural resource for granted [1]. Increased concern over rapid ecological changes in the natural environment has posed major challenges to the scientific community [2]. Emerging empirical evidence has shown the susceptibility and sensitivity of ecological assemblages that have long been ignored, especially open-water communities in the brackish and other waters of the Niger Delta. The first signs of pollution in an aquatic system include toxic algal blooms and the associated deoxygenation of surface water.
Studies have been undertaken on macrofauna in the Bonny and other Niger Delta environments [3] [4] [5] [6]. These studies have provided information on physicochemistry and benthos [7] [8] [9], on intertidal macrofauna [10], and on the fisheries of the Bonny estuary [11]. Good surface water quality has dissolved oxygen (DO) values exceeding 2 mg/l [12]. This fact underpins the necessity of a thorough evaluation of the physicochemistry of Elechi Creek, a busy and actively tidal creek in Port Harcourt.

The study covers the creeks stretching from Abonnema wharf to the Mgboushimini waterfront. It attempts to establish the possible effects of anthropogenic activities on the physicochemical parameters of the creek and to provide mitigating measures with a view to preserving the environment. It also generates an index, using a developed water quality index, from the observed temporal and spatial variation in some physicochemical parameters of the area. Water is assessed not only for its suitability for human consumption but also in relation to its agricultural, industrial, recreational, and commercial uses and, from an environmental point of view, its ability to sustain aquatic life. Water quality monitoring is therefore a fundamental tool in the management of freshwater resources. To achieve the aim of this study, the following evaluations were made: water quality; the extent and magnitude of petroleum hydrocarbon and heavy metal concentrations; and the relationships between the physicochemical characteristics, with the aim of identifying zones within the study area that could show acute toxicity to native organisms.

Description of the Study Area

The study was conducted in a brackish wetland of the Bonny estuary between Abonnema wharf and Mgboushimini. The creek lies southwest of Port Harcourt, between latitudes 04°46'743"N and 04°48'217"N and longitudes 007°00'557"E and 006°48'989"E (Figure 1). It is protected from the strong wave action prevalent in the main Bonny River channel, and the current flow velocity is minimal (about 3 m/s). Tidal amplitude is about 1.20 m. A detailed description of the hydrology of the system is given elsewhere [13].

The area enjoys a tropical equatorial climate with two dominant seasons: the rainy season (March to November) and the dry season (December to February). Annual rainfall in the area is about 2405.20 mm (Gobo, 1988). The annual mean air temperature is 29.7 °C, with the highest monthly mean temperature of 31.3 °C (in August) and the lowest of 27.5 °C (in January). Surface seawater temperature varies between 25.9 °C and 30.6 °C, and surface water salinity varies between 8‰ and 20‰. Tidal variations range between 0.43 m and 1.67 m, with a mean tidal variation of 0.90 m. The water current flows in two directions, flooding (inundation) during high tide and receding at low tide. The mud (sediment) has a dark appearance, with hydrogen sulphide as the major byproduct of sulphate-reducing bacteria. The soil type is mainly clay ("chikoko"). The intertidal flat consists of moderately sorted sand to silty clay with patches of hard "chikoko" sediment [10].
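Regarding the water quality index mentioned among the aims above: the formulation is not spelled out here, but one widely used scheme in surface water studies is the weighted arithmetic WQI. The Python sketch below shows that scheme for concentration-type parameters whose ideal value is zero; the parameter standards in the example are illustrative assumptions, not values adopted by this study.

```python
def weighted_arithmetic_wqi(measured, standard):
    """Weighted arithmetic WQI for parameters with an ideal value of
    zero: quality rating q_i = 100 * V_i / S_i, unit weight w_i
    proportional to 1/S_i, and WQI = sum(w_i * q_i) / sum(w_i).
    Inputs are dicts keyed by parameter name (concentrations in mg/l)."""
    inv = {p: 1.0 / standard[p] for p in measured}
    k = 1.0 / sum(inv.values())              # proportionality constant
    weights = {p: k * inv[p] for p in measured}
    q = {p: 100.0 * measured[p] / standard[p] for p in measured}
    return sum(weights[p] * q[p] for p in measured) / sum(weights.values())

# Hypothetical reading against the 45 mg/l nitrate limit cited in the
# discussion and a commonly cited 250 mg/l sulphate guideline; a WQI
# below 100 indicates the parameters sit within limits on a weighted
# average.
print(weighted_arithmetic_wqi(
    measured={"nitrate": 2.5, "sulphate": 580.0},
    standard={"nitrate": 45.0, "sulphate": 250.0},
))
```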
Vegetation in the area is characteristically mangrove, the dominant types being red mangrove (Rhizophora racemosa), white mangrove (Avicennia africana), and black mangrove (Laguncularia racemosa). The area is also inhabited by other plants (e.g., the fern Acrostichum aureum and the grass Paspalum vaginatum). Animals found in the area include the mudskipper Periophthalmus sp., fiddler crabs Uca tangeri, and periwinkles. The vegetation of the headwater and stream banks is intact and has been described as an un-degraded stream system [14] [15].

Materials and Method

Five study stations were established along the creek from Abonnema wharf to Mgboushimini (Figure 1): 1) Abonnema Wharf station (ABWS 1 & 2), where activities are mostly boat transportation and the sale of petroleum products; 2) Elechi Beach station (EBS 1 & 2), where activities include the lifting of petroleum products and waste disposal; 3) AGIP Jetty station (AJS 1 & 2), with ship transportation and runoff; 4) Mgboushimini station (MBS 1 & 2), influenced by effluent from a slaughterhouse, burnt tyres, a market, and solid waste disposal; and 5) a Control station (CS 1 & 2), characterized by relatively low human activity. Sample collection followed standard procedures. Surface water samples were collected at monthly intervals at ebbing tide during the study period, in both the dry season (November-December 2012 and January 2013) and the peak wet season (August and September 2013). Samples were taken at about 30 cm below the water surface with a horizontal water bottle made of polyvinyl chloride. One sample was collected at each location, with two replicate samples from each group of stations. At each sampling station, in situ measurements of unstable parameters such as temperature were made, and the elevation and sampling location were geo-referenced. The parameters measured were temperature (in situ), pH, conductivity, dissolved oxygen, biological oxygen demand (BOD), total dissolved solids (TDS), salinity, total hydrocarbon concentration (THC), and heavy metals. The samples were taken to the laboratory of the Institute of Pollution Studies, Rivers State University of Science and Technology, Port Harcourt, where analysis followed standard analytical procedures.

Surface Water pH

The mean surface water pH of the study area showed that the surface water is weakly acidic to weakly alkaline, with values ranging from 6.40 at AJS in the dry season to 7.60 at EBS in the wet season. This reflects the shifting balance between freshwater influence and the more alkaline seawater across the seasons. The spatial and temporal variation in surface water pH is presented in Figure 2.

Surface Water Temperature

Surface water temperatures ranged from 27.6 °C to 30 °C. The highest temperature was recorded in January (dry season) and the lowest in August (wet season) (Figure 3).

Surface Water Electrical Conductivity (EC)

The mean EC of surface water reached its maximum of 28,250 µS/cm in January 2013 (dry season) at the Abonnema Wharf station (ABWS) and its minimum of 12,775 µS/cm in September 2013 (wet season) at the Mgboushimini sample sites; the difference between the maximum and minimum values was 15,475 µS/cm. Considerable spatial and temporal changes in values were observed at the sampling sites throughout the study period (Figure 4). The temporal change showed a decline in EC values at the MBS sites, and the decline was even more pronounced at the ABWS site. Except for the month of December 2012 (dry season), the values obtained at the MBS sites declined continuously over the study period.
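Given monthly samples across five stations in two seasons, the ranges and means reported in these results reduce to a grouped summary. A minimal pandas sketch (the field log below is hypothetical, apart from the two extreme EC values quoted above):

```python
import pandas as pd

# Hypothetical field log: one row per station visit.
log = pd.DataFrame({
    "station": ["ABWS", "ABWS", "EBS", "EBS", "MBS", "MBS"],
    "season": ["dry", "wet", "dry", "wet", "dry", "wet"],
    "ec_us_cm": [28250, 14100, 21500, 13800, 19800, 12775],
})

# Per-station, per-season minimum, maximum, and mean, matching how the
# results above are reported.
summary = log.groupby(["station", "season"])["ec_us_cm"].agg(["min", "max", "mean"])
print(summary)
```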
Surface Water Salinity (‰)

Mean surface water salinity ranged between 7.00‰ and 17.20‰, a rather wide variation. The highest value, 17.20‰, was recorded in August 2013 (wet season) at the Abonnema Wharf study sites, while the lowest, 7.00‰, was recorded in September 2013 (wet season) at the MBS study site. In general, salinity fluctuated, but higher values were recorded at the Abonnema Wharf site than at the other sites. Figure 5 shows the salinity variations at the sampling sites.

Dissolved Oxygen (DO) Concentration of Surface Water Samples

DO values varied between 2.85 mg/l and 7.50 mg/l. The minimum was recorded at Elechi Beach in November 2012 (dry season) and the maximum at the MBS site in August 2013 (wet season) (Figure 6).

Nitrate (mg/l) in Surface Water

Nitrate concentrations of the surface water ranged between 0.15 and 2.51 mg/l. The minimum value was obtained at the control site in August 2013 (wet season), and the maximum at EBS in January 2013 (dry season). The spatial and temporal variations in nitrate concentrations are presented in Figure 9; nitrate levels did not show a common trend across the study stations.

Sulphate (mg/l) in Surface Water

Except for a peak of 581.50 mg/l at ABWS, sulphate concentrations varied throughout the study period (Figure 10).

Concentration of Ammonia (mg/l) in Surface Water of Sample Sites

Ammonia concentrations ranged from <0.05 to 0.15 mg/l, with no common trend in their distribution. The spatial and temporal variations of ammonia in the study areas are presented in Figure 11.

Copper (Cu) (mg/l)

The lowest value recorded was <0.01 mg/l; the spatial and temporal variations of copper concentrations are presented in Figure 12.

Discussion

The pH values showed that the spatial and temporal variations in surface water pH were significant. The range of pH values tended towards neutral, particularly at the study stations in the upper reaches, suggesting the presence of waste material attributable to human activity; watershed processes may also contribute. Anthropogenic activity is high in the settlements along the waterfronts of the study area. Most aquatic species are adapted to natural conditions, and different species have different tolerances for acidity; clams, shrimps, and snails generally cannot survive at a pH below 6 [16]. pH, an indicator of the acidic or alkaline status of water, was around 7.00. High or low pH values in a river have been reported to affect aquatic life and alter the toxicity of other pollutants in one form or another [17]. Although normal biological activity can occur within a pH range of 6.00-8.00 for natural water, the EEC (1980) guide limit for water is 6.50-8.50.
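A quick way to apply limits like these across all samples is a screening pass. The sketch below uses the three thresholds actually quoted in this paper (the EEC pH band of 6.50-8.50, DO above 2 mg/l for good surface water, and the 45 mg/l nitrate limit cited later in the discussion); the sample rows themselves are illustrative:

```python
import pandas as pd

# Guideline limits quoted in the text: the EEC pH band, the 2 mg/l DO
# threshold for good surface water, and the 45 mg/l nitrate limit.
LIMITS = {
    "pH": (6.5, 8.5),
    "do_mg_l": (2.0, float("inf")),
    "nitrate_mg_l": (0.0, 45.0),
}

def flag_out_of_range(samples: pd.DataFrame) -> pd.DataFrame:
    """Return only the rows in which any measured parameter falls
    outside its (low, high) guideline band."""
    bad = pd.Series(False, index=samples.index)
    for param, (lo, hi) in LIMITS.items():
        bad |= (samples[param] < lo) | (samples[param] > hi)
    return samples[bad]

# Hypothetical samples: station plus the three screened values.
samples = pd.DataFrame({
    "station": ["ABWS", "EBS", "MBS"],
    "pH": [6.4, 7.1, 7.6],
    "do_mg_l": [5.2, 2.85, 7.5],
    "nitrate_mg_l": [1.2, 2.51, 0.15],
})
print(flag_out_of_range(samples))  # flags ABWS (pH below 6.5)
```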
Against these limits, the values obtained in the study area fall within range; apart from the samples collected in August and September, all the other months at all stations recorded a pH below 7.00, mostly in the dry season. According to [18] and [19], the pH of brackish water falls within the ranges 6.50-7.40 and 7.48-8.89, respectively, while in [20] the pH values ranged from 6.30 to 7.70 for Elechi Creek. The pH profile at all the study stations falls within the desirable limit for the survival of fishes; a lower pH would not support aquatic life [21] [22] [23] [24].

The surface water temperature profile has a mean value of 29 °C in the dry season months of November and January and is lower during the rainy season months (27 °C-29 °C). Temperature variation in water bodies attributable to sunlight has been reported to occur particularly in estuaries because of their general shallowness, which exposes the water and mudflats to sunlight [20]. [25] reported that temperatures between 26 °C and 30 °C are attributable to the insulating effect of an increased nutrient load resulting from the input of industrial discharge. This suggests that the Elechi Creek study area is polluted and constitutes an environment under stress.

The nitrate values recorded at these sites can be attributed to the discharge of organic matter from the sewage effluent of the waterfront settlements, while the lower mean values at the other study sites could be due to the relative uptake of nitrate by the growing mats of algae found covering the exposed mudflats at low tide. The overall low nitrate concentrations indicate that all five areas are below the threshold limit of 45.00 mg/l [38]; it can therefore be concluded that these areas are not contaminated, and this water is not polluted, with nitrate.

Heavy metal concentrations were generally low throughout the stations at Elechi Creek. The low concentrations reported at the study stations may be attributed to the lack of industrial discharges into Elechi Creek; it is also possible that the abundant suspended matter and high pH effectively scavenged metals through adsorption and precipitation [20]. Overall, the values obtained in the present study suggest that the water quality status of Elechi Creek is considerably influenced by the various effluent discharges into the creek, in terms of heavy metal concentrations.

Conclusion

The changes in the physico-chemical characteristics of Elechi Creek can be attributed to the various anthropogenic activities taking place in the study area. These changes are increasing the hazard risk, and continued discharge of waste into the creek without proper management will pose environmental problems in the future. The seasonal variation was such that most physico-chemical parameters showed higher concentrations in the dry season.
Figure 1. Map of the study area.
Figure 2. Variation of surface water pH at the study stations.
Figure 3. Variations in surface water temperature at the study stations.
Figure 4. Variations of electrical conductivity at the study stations.
Figure 5. Variation of salinity in surface water at the study stations.
Figure 6. Variation of surface water DO (mg/l) across the study stations.
Figure 7. Variation of surface water BOD (mg/l) at the study stations.
Figure 8. Variations of surface water TDS (mg/l) at the study stations.
Figure 9. Spatial and temporal variation of surface water nitrate (mg/l) at the study stations.
Figure 10. Sulphate concentration (mg/l) in surface water at the study stations.
Figure 11. Variations of ammonia (mg/l) in surface water at the study stations.
Figure 12. Spatial and temporal variations of copper concentrations (mg/l) in surface water at the study stations.